Table of Contents
...
A JMeter script was used with a throughput limit of 40 requests per minute, which allowed the full harvesting to complete successfully. Note that the throughput limit does not simulate real user behaviour. The purpose of the current OAI-PMH tests is to measure the performance of the Orchid release with the recommended EBSCO Harvester tool and to identify possible issues and bottlenecks (PERF-510).
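For context, the shape of such a throughput-limited full harvest can be illustrated with a small client loop. The sketch below is only an approximation of the approach (it is not the JMeter script used in these tests); the endpoint path, Okapi headers, and metadata prefix are assumptions that would need to match the environment under test.

Code Block |
---|
import time
import requests
import xml.etree.ElementTree as ET

# Assumptions: the OAI-PMH endpoint path and the tenant/token headers are placeholders
# and must be adjusted to the FOLIO environment under test.
BASE_URL = "https://folio-edge.example.org/oai/records"
HEADERS = {"X-Okapi-Tenant": "<tenant>", "X-Okapi-Token": "<token>"}
REQUESTS_PER_MINUTE = 40
MIN_INTERVAL = 60.0 / REQUESTS_PER_MINUTE  # ~1.5 s between requests

def harvest(metadata_prefix: str = "marc21_withholdings") -> int:
    """Page through ListRecords using resumption tokens, throttled to 40 requests per minute."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    total = 0
    while True:
        started = time.monotonic()
        resp = requests.get(BASE_URL, params=params, headers=HEADERS, timeout=120)
        resp.raise_for_status()

        root = ET.fromstring(resp.content)
        ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
        total += len(root.findall(".//oai:record", ns))

        token = root.find(".//oai:resumptionToken", ns)
        if token is None or not (token.text or "").strip():
            return total  # harvest complete
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

        # Keep the request rate under the configured throughput limit.
        elapsed = time.monotonic() - started
        if elapsed < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - elapsed)

if __name__ == "__main__":
    print(f"Harvested {harvest()} records")

This is comparable to capping the request rate in JMeter (e.g. with a Constant Throughput Timer): it only limits throughput and, as noted above, does not simulate real user behaviour.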
...
- Tests were executed with the EBSCO Harvester on the AWS ptf-windows instance. The EBSCO Harvester does not enforce the 40 requests per minute throughput limit that the JMeter script applied, which is why a successful harvesting operation took less time with the EBSCO Harvester (~18 hours) than with the throttled JMeter script (~25 hours). Moreover, the EBSCO Harvester has retry functionality that lets harvesting continue even after requests fail with timeouts (see the retry sketch after this list).
- When the mod-oai-pmh.instance table accumulates instance UUIDs from previous harvests (for example, when the PTF environment's mod-oai-pmh.instance table reached 30M records), inserting new records into the table takes longer, so the overall duration of creating (downloading) new 'instanceUUIDs' records increases as well. (Not a critical issue.)
- '(504) Gateway timeout' responses were caused by 'java.lang.OutOfMemoryError: Java heap space' in mod-oai-pmh. These 'java.lang.OutOfMemoryError' exceptions, in turn, appeared when not all 'instanceUUIDs' records were created for the request. The reason why this happens needs additional investigation.
- The 'repository.fetchingChunkSize=8000' option increased the duration of the harvesting request; the default value (5000) gives optimal results.
- All test executions show similar service memory behaviour. Service memory usage is at an optimal level only after the service has been restarted. Once a harvesting operation starts, service memory usage grows to 99%, stabilizes at that level, and does not drop back down (even across subsequent harvesting runs).
It is currently not possible to determine how much memory is used per harvest. The reason why service memory usage does not decrease when there is no activity also needs further investigation: it may be that AWS displays the aggregated memory of the containers, or it may be a FOLIO issue.
- When all 'instanceUUIDs' records were created for the request as expected (ocp2 - 10'023'100 'instance' records), with either 'repository.fetchingChunkSize' value (5000, 8000), the harvesting operation took less than 24 hours and completed successfully. However, the instability of the harvesting operation in the Orchid release, caused by not all 'instance' records being created for the request, should be investigated and fixed under MODOAIPMH-509.
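The retry behaviour mentioned in the notes above (continuing the harvest after timed-out or 504 responses) can be approximated with a small wrapper around each request. This is a hedged sketch rather than the EBSCO Harvester's actual implementation; the retryable status codes, attempt count, and back-off values are assumptions.

Code Block |
---|
import time
import requests

# Assumed set of retryable statuses; 504 matches the gateway timeouts seen in testing.
RETRYABLE_STATUSES = {502, 503, 504}

def get_with_retries(url: str, params: dict, headers: dict,
                     attempts: int = 5, backoff_seconds: float = 30.0) -> requests.Response:
    """Retry a single OAI-PMH request after timeouts or gateway errors."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, params=params, headers=headers, timeout=120)
            if resp.status_code not in RETRYABLE_STATUSES:
                resp.raise_for_status()
                return resp
            last_error = RuntimeError(f"HTTP {resp.status_code} on attempt {attempt}")
        except requests.Timeout as exc:
            last_error = exc
        # Wait before retrying so the module has a chance to recover.
        time.sleep(backoff_seconds * attempt)
    raise last_error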
Recommendations & Jiras
...
Comparison with full harvesting testing by JMeter script: OAI-PMH data harvesting (Orchid)
Source | maxRecordsPerResponse | Duration - Orchid JMeter script | Duration - Orchid EBSCO Harvester |
---|---|---|---|
SRS | 100 | ~25 h (approx. all 'instanceUUIDs' were created) | ~17 h (all 'instanceUUIDs' were created) |
SRS+Inventory | 100 | ~9 h (not all 'instanceUUIDs' were created; no issues in the console) | ~18 h (all 'instanceUUIDs' were created) |
...
'mod-oai-pmh' uses up to ~97% of CPU during harvesting.
The spike in CPU utilization is probably due to activity toward the end of the test, when the service was running out of memory and the operating system was trying to clear whatever memory it could and/or swap out blocks of memory to keep the process alive, until it could no longer do so and everything toppled over.
RDS CPU Utilization
Successful test (#13)
...
Before running OAI-PMH for the first time, please run the following database commands to optimize the tables (from https://folio-org.atlassian.net/wiki/display/FOLIOtips/OAI-PMH+Best+Practices#OAIPMHBestPractices-SlowPerformance):
Code Block |
---|
REINDEX INDEX <tenant>_mod_inventory_storage.audit_item_pmh_createddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.audit_holdings_record_pmh_createddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.holdings_record_pmh_metadata_updateddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.item_pmh_metadata_updateddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.instance_pmh_metadata_updateddate_idx;
ANALYZE VERBOSE <tenant>_mod_inventory_storage.instance;
ANALYZE VERBOSE <tenant>_mod_inventory_storage.item;
ANALYZE VERBOSE <tenant>_mod_inventory_storage.holdings_record;
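If these maintenance statements need to be scripted rather than pasted into psql, a minimal Python sketch using psycopg2 could look like the following. The connection string and the example tenant schema are placeholders, not values taken from this test environment.

Code Block |
---|
import psycopg2

# Placeholder connection details; replace with the values for the target RDS instance.
CONN_INFO = "host=<db-host> dbname=<database> user=<user> password=<password>"
SCHEMA = "<tenant>_mod_inventory_storage"  # e.g. fs09000000_mod_inventory_storage

STATEMENTS = [
    f"REINDEX INDEX {SCHEMA}.audit_item_pmh_createddate_idx",
    f"REINDEX INDEX {SCHEMA}.audit_holdings_record_pmh_createddate_idx",
    f"REINDEX INDEX {SCHEMA}.holdings_record_pmh_metadata_updateddate_idx",
    f"REINDEX INDEX {SCHEMA}.item_pmh_metadata_updateddate_idx",
    f"REINDEX INDEX {SCHEMA}.instance_pmh_metadata_updateddate_idx",
    f"ANALYZE VERBOSE {SCHEMA}.instance",
    f"ANALYZE VERBOSE {SCHEMA}.item",
    f"ANALYZE VERBOSE {SCHEMA}.holdings_record",
]

def optimize_tables() -> None:
    """Run the recommended REINDEX/ANALYZE statements before the first harvest."""
    conn = psycopg2.connect(CONN_INFO)
    conn.autocommit = True  # run each statement in its own implicit transaction
    try:
        with conn.cursor() as cur:
            for statement in STATEMENTS:
                cur.execute(statement)
    finally:
        conn.close()

if __name__ == "__main__":
    optimize_tables()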
...