Table of Contents
...
The purpose of this set of tests is to measure the performance of the Kiwi release and to find possible issues and bottlenecks.
...
- The Kiwi release was able to harvest 7,808,200 records in 19 hr 8 min (≈1M records per 2 hr 27 min).
- The average response time per request with a resumption token was 0.874 s (see the harvest-loop sketch after this list).
- No memory or CPU issues were found (after the first couple of JIRAs below had been fixed).
- KPIs:
- mod-oai-pmh CPU usage: 120% during data transfer, 100% during harvesting.
- RDS CPU usage: 80% during data transfer and about 15% during harvesting.
- Memory usage: 105-107% on mod-source-record-manager, 35% on mod-oai-pmh. No signs of memory leaks in related modules.
- A few issues were found:
- OutOfMemory exception: MODOAIPMH-374
- Thread block issue: MODOAIPMH-374
- When instances didn't have underlying MARC records, multiple repeated calls from mod-edge-oai-pmh to mod-oai-pmh occurred, resulting in the end client receiving a timeout; see MODOAIPMH-383
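For context on the request pattern behind these numbers: the harvest pages through ListRecords responses with a resumptionToken, 100 records per call, which is why records harvested equals calls × 100 in the per-test numbers below. A minimal client sketch, assuming a hypothetical endpoint URL; the tests actually went through edge-oai-pmh behind nginx, and real requests also need tenant/API-key parameters and URL-encoding of the token:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OaiPmhHarvester {
    // Hypothetical endpoint; not the URL used in these tests.
    private static final String BASE = "https://folio.example.org/oai";
    private static final Pattern TOKEN =
        Pattern.compile("<resumptionToken[^>]*>([^<]+)</resumptionToken>");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String url = BASE + "?verb=ListRecords&metadataPrefix=marc21";
        long calls = 0;
        while (url != null) {
            long start = System.nanoTime();
            HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            calls++;
            System.out.printf("call %d: %d ms%n", calls, elapsedMs);

            // Each response carries up to 100 records plus a resumptionToken;
            // an absent or empty token means the harvest is complete.
            Matcher m = TOKEN.matcher(resp.body());
            url = m.find() && !m.group(1).isBlank()
                ? BASE + "?verb=ListRecords&resumptionToken=" + m.group(1)
                : null;
        }
        System.out.printf("done: %d calls, ~%d records%n", calls, calls * 100);
    }
}
```

The per-call latency printed by a loop like this is what the 0.874 s average above refers to.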
...
1) OutOfMemory exception, fixed in scope of MODOAIPMH-374.
2) Thread block issue, fixed in scope of MODOAIPMH-374.
3) Client timeouts: MODOAIPMH-383.
...
- Total Underlying SRS records: 1,212,039
- Duration: 4 hr 57 min
- Records transferred: 4,770,043 (should be 8,415,303)
- Records harvested: 20,618 × 100 = 2,061,800
- Calls performed: 20,618
An unstable part of the test is visible here. The spikes on the chart show extremely increased response times, which lead to throughput gaps. At this point we were still not sure what was happening, so we checked the logs of:
- RDS response times (logs): PGLogs.log;
- mod-oai-pmh (logs);
- nginx-oai-pmh (logs);
- edge-oai-pmh (logs);
- okapi (logs);
At each point the response times were good, and we could not see any correlation between the logs and this chart.
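For reference, the cross-check was mechanical: extract per-request durations from each component's log and look for spikes in the same windows as the chart's throughput gaps. A minimal sketch, assuming a hypothetical one-line-per-request log format of `<timestamp> <duration-ms>`; each component's real log format differs:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LongSummaryStatistics;
import java.util.stream.Stream;

public class LogResponseTimes {
    public static void main(String[] args) throws IOException {
        // Hypothetical line format: "2021-11-02T10:15:03Z 874" (duration in ms).
        try (Stream<String> lines = Files.lines(Path.of(args[0]))) {
            LongSummaryStatistics stats = lines
                .map(line -> line.split("\\s+"))
                .filter(parts -> parts.length >= 2)
                .mapToLong(parts -> Long.parseLong(parts[1]))
                .summaryStatistics();
            // A max close to the average means no spikes in this component's
            // log that line up with the throughput gaps on the chart.
            System.out.printf("count=%d avg=%.0f ms max=%d ms%n",
                stats.getCount(), stats.getAverage(), stats.getMax());
        }
    }
}
```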
...
- While the data transfer process was running in the background, DB CPU usage reached 70-75%.
- The data transfer process failed after 10 minutes, having transferred only 4,770,043 of the 8,415,303 records.
- Harvesting itself consumed 15% DB CPU.
Test 2
- Total Underlying SRS records: 1,212,039
- Duration: 4 hr 25 min
- Records transferred: 3,815,867 (should be 8,415,303)
- Records harvested: 22,305 × 100 = 2,230,500
- Calls performed: 22,305
The results were the same as in Test 1, showing consistent failures due to a large number of missing underlying MARC records.
Test 3 (with Bugfest Dataset)
...
With the new data set there were no "unstable parts" in this test. These results are the best and most accurate representation of OAI-PMH performance in Kiwi.
CPU usage was stable and without big spikes. There is higher CPU usage at the beginning of the test: this is the data transfer process between mod-inventory-storage and mod-oai-pmh.
...
Notable observations:
- The OutOfMemory exception MODOAIPMH-374 and the thread block issue MODOAIPMH-374 were found and resolved early in the testing cycle.
- The remaining issue, the unstable parts of the first couple of tests, was caused by the data set (MODOAIPMH-383):
- Instances didn't have underlying MARC records, which caused multiple repeated calls from mod-edge-oai-pmh to mod-oai-pmh.
- This forces the end client to wait until mod-oai-pmh finds records with underlying MARC records, and the client often fails with a 504 gateway timeout (the load balancer timeout is 400 seconds); see the sketch after this list.
- The workaround was to test with a dataset whose instances have underlying MARC records (the Bugfest dataset). This is an edge case, as most systems would have more MARC records than instances, and it does not need to be resolved for the Kiwi release.
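To make the timeout mechanism concrete, here is a simplified illustration with hypothetical names, not the actual mod-edge-oai-pmh code: a single client request fans out into repeated internal batch calls until a batch with underlying MARC records turns up, so with a sparse data set the loop can outlive the 400-second load balancer timeout and the client sees a 504.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Optional;

public class EdgeRetryIllustration {
    // Hypothetical stand-in for a mod-oai-pmh batch call: returns a batch
    // only when it contains records with underlying MARC records.
    interface BatchSource {
        Optional<List<String>> nextBatchWithMarc(String cursor);
    }

    static final Duration LB_TIMEOUT = Duration.ofSeconds(400);

    // One client request keeps issuing internal calls until a non-empty
    // batch is found; with many MARC-less instances this loop can run
    // past the load balancer timeout, and the client gets a 504.
    static List<String> handleClientRequest(BatchSource source) {
        Instant start = Instant.now();
        String cursor = "start";
        while (Duration.between(start, Instant.now()).compareTo(LB_TIMEOUT) < 0) {
            Optional<List<String>> batch = source.nextBatchWithMarc(cursor);
            if (batch.isPresent()) {
                return batch.get();   // records found: respond to the client
            }
            cursor = "next";          // empty batch: try the next slice
        }
        throw new IllegalStateException("504 Gateway Timeout (simulated)");
    }
}
```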
...