Overview

  • The purpose of the concurrent OAI-PMH, data import and CI/CO tests is to determine how these workflows affect each other. This report contains results for PERF-492.

Summary

  • OAI-PMH affects CI/CO response times, worsening results by up to 13%. DI worsens CI/CO results most with the Create job profile: up to 55% with 1,000 records and up to 71% with 100,000 records. DI with the Update profile worsens CI/CO results less than the Create profile: up to 25% with 1,000 records and 36% with 100,000 records. OAI-PMH (incremental) duration ranged from 04:20 to 05:20 min across all tests, both with DI of 1,000 records and without it. The OAI-PMH duration calculation is described in the Methodology/Approach section.
  • Memory usage of most major services did not exceed 60%. The highest levels were registered for mod-source-record-manager (107%) and mod-inventory-b (98%). After the Scenario 1 tests, memory usage reached a stable level and did not change.
  • Running OAI-PMH, DI, and CI/CO simultaneously showed that the environment can handle such a load.
  • CI/CO response-time degradation during DI and OAI-PMH depends on the job profile used; the tests ran a series of consecutive DI operations (Create and Update job profiles).
  • After 90 minutes of full harvest, CPU utilization of mod-oai-pmh-b grew to 188%. The increased CPU utilization lasted about 10 minutes, after which it returned to a steady state (5-7%).
  • At the beginning of DI, service CPU was used mostly by mod-di-converter-storage-b (253%), mod-inventory-b (172%), and mod-quick-marc-b (108%); the rest of the modules stayed under 70%. At peak, the same modules led: mod-di-converter-storage-b (453%), mod-inventory-b (190%), and mod-quick-marc-b (121%).
  • RDS CPU utilization during incremental harvesting did not exceed 60% for all DI job profiles (1,000 records); data export took 40%. For full harvesting with the DI Create job profile (100,000 records), it jumped to 96% almost immediately and stayed at that level for most of the process. DI Update used up to 90%.
  • All OAI-PMH tests were executed by the EBSCO Harvester on the AWS ptf-windows instance.
  • During full harvesting, a (504) Gateway Timeout occurred, but only after all DI Create and Update jobs were done, so it did not affect the results. It happened in both full-harvesting runs; the harvester returned 1,764,989 instances during the first 5-hour CI/CO run and 1,166,089 in the other, out of 10,433,728 total.

Recommendations & Jiras

  • Allocate more CPU resources to mod-di-converter-storage and mod-inventory-b.


Test Runs & Results


Data import duration and CI/CO response times with DI & OAI-PMH results

All tests ran with 10 CI/CO users. Where two OAI-PMH durations are shown, they are "OAI-PMH only / OAI-PMH + DI + CI/CO".

Scenario 1 — OAI-PMH incremental, 40 min, load level 1K (with pause ~5 min). All incremental harvests were stopped manually after ~8,000 instances.

| Job profile | Profile name | OAI-PMH duration (only / + DI + CI/CO) | DI duration | CI average | CO average |
|---|---|---|---|---|---|
| DI MARC Bib Create | PTF - Create 2 | 00:04:46 / 00:05:18 | 00:00:48 | 0.961 | 1.398 |
| DI MARC Bib Update | PTF - Updates Success - 1 | 00:05:14 | 00:00:56 | 0.706 | 1.125 |
| DI MARC Bib Create | PTF - Create 2 | 00:05:11 / 00:04:20 | 00:00:43 | 0.843 | 1.402 |
| DI MARC Bib Update | PTF - Updates Success - 1 | 00:04:24 | 00:00:44 | 0.848 | 1.335 |

Scenario 2 — OAI-PMH full mode, two 5-hour runs, load level 100K (with pause ~5 min). During Scenario 2, full harvests stopped due to "ERROR: Error saving an xml document: The remote server returned an error: (504) Gateway Timeout."

| Run | Job profile | Profile name | OAI-PMH duration | DI duration | CI average | CO average | Comments |
|---|---|---|---|---|---|---|---|
| First 5 hours | DI MARC Bib Create | PTF - Create 2 | 04:42:20 | 00:53:30 | 1.078 | 1.545 | |
| | DI MARC Bib Update | PTF - Updates Success - 1 | | 01:04:38 | 0.725 | 1.231 | |
| | DI MARC Bib Update | PTF - Updates Success - 1 | | 01:05:48 | 0.69 | 1.249 | |
| Second 5 hours | DI MARC Bib Update | PTF - Updates Success - 1 | 03:44:20 | 01:17:58 | 0.903 | 1.333 | |
| | DI MARC Bib Update | PTF - Updates Success - 1 | | 01:18:08 | 0.737 | 1.221 | |
| | DI MARC Bib Update | PTF - Updates Success - 1 | | 01:21:21 | 0.62 | 1.106 | Last 30 minutes without OAI-PMH |

Comparisons

Comparison table for CI/CO response times


All values are average response times in seconds; "+ DI …" columns are CI/CO + OAI-PMH + the given DI job, and percentages show the increase relative to the CI/CO-only baseline.

| Requests | CI/CO only | CI/CO + OAI-PMH | + DI Create 1K | + DI Update 1K | CI/CO after | + DI Create 100K | CI/CO between | + DI Update 100K | CI/CO after |
|---|---|---|---|---|---|---|---|---|---|
| Check-Out Controller | 0.904 | 1.024 (↑13.27%) | 1.398 (↑54.65%) | 1.125 (↑24.45%) | 0.900 | 1.545 (↑70.91%) | 0.914 | 1.231 (↑36.17%) | 0.926 |
| Check-In Controller | 0.629 | 0.666 (↑5.88%) | 0.961 (↑52.78%) | 0.706 (↑12.24%) | 0.625 | 1.078 (↑71.38%) | 0.569 | 0.725 (↑15.26%) | 0.515 |
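The percentage figures in the comparison table are simple relative deltas against the CI/CO-only baseline. A minimal sketch of the calculation (the helper name `degradation_pct` is illustrative, not part of the test tooling):

```python
# Relative degradation of CI/CO response times vs. the CI/CO-only baseline.
def degradation_pct(baseline: float, observed: float) -> float:
    """Percent increase of `observed` over `baseline`."""
    return (observed - baseline) / baseline * 100

# Check-Out Controller, values from the comparison table (seconds):
print(round(degradation_pct(0.904, 1.024), 2))  # OAI-PMH only     -> 13.27
print(round(degradation_pct(0.904, 1.398), 2))  # + DI Create 1K   -> 54.65
print(round(degradation_pct(0.904, 1.545), 2))  # + DI Create 100K -> 70.91
```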


Scenario 1

Response time

The table shows 40 minutes of CI/CO.


Service CPU Utilization


Service Memory Utilization


RDS CPU Utilization


Scenario 2

Response time

The first table shows the first 5 hours of CI/CO (one Create and two Update jobs with the 100,000-record file).

The second table shows the second 5 hours of CI/CO (three Update jobs with the 100,000-record file).

Service CPU Utilization


Service Memory Utilization


RDS CPU Utilization



Errors

Scenario 1 - no errors

Scenario 2

All errors are connected to the Check-Out Controller.

Request name / Number:
POST_circulation/check-out-by-barcode (Submit_barcode_checkout)_POST_4228
GET_inventory/items (Submit_barcode_checkout)_GET_2006
GET_groups_ID (Submit_patron_barcode)_GET_4001



Appendix

Methodology/Approach

OAI-PMH (incremental) harvests were stopped manually from the AWS instance machine after approximately 8,000 instances and holdings had been harvested. To determine the duration of a given harvest, take the difference between the timestamps of the second call and the last call in the corresponding log file in the log folder.
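This timestamp-difference calculation can be sketched as follows. This is a minimal sketch under the assumption that each log line starts with a "YYYY-MM-DD HH:MM:SS" timestamp; the actual EBSCO Harvester log format may differ, and the sample log lines are invented for illustration:

```python
# Hypothetical sketch: harvest duration = timestamp of the last call minus
# the timestamp of the second call in the harvester log file.
# Assumes lines start with "YYYY-MM-DD HH:MM:SS"; real log format may differ.
from datetime import datetime

def harvest_duration(log_lines: list[str]) -> str:
    stamps = [datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
              for line in log_lines if line[:4].isdigit()]
    return str(stamps[-1] - stamps[1])  # second call .. last call

log = [
    "2023-07-03 14:00:00 harvest started",
    "2023-07-03 14:00:05 request 1",
    "2023-07-03 14:04:25 request n (stopped manually)",
]
print(harvest_duration(log))  # -> 0:04:20
```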

Circulation rules should be modified in the Circulation rules editor before the CI/CO test so that POST_circulation/check-out-by-barcode (Submit_barcode_checkout) runs without issues.

The number of partitions should be set to 2 in all DI-related topics.

Before running OAI-PMH with full harvest, the following database commands should be executed to optimize the tables (from https://wiki.folio.org/display/FOLIOtips/OAI-PMH+Best+Practices#OAIPMHBestPractices-SlowPerformance):

REINDEX INDEX <tenant>_mod_inventory_storage.audit_item_pmh_createddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.audit_holdings_record_pmh_createddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.holdings_record_pmh_metadata_updateddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.item_pmh_metadata_updateddate_idx;
REINDEX INDEX <tenant>_mod_inventory_storage.instance_pmh_metadata_updateddate_idx;
ANALYZE VERBOSE <tenant>_mod_inventory_storage.instance;
ANALYZE VERBOSE <tenant>_mod_inventory_storage.item;
ANALYZE VERBOSE <tenant>_mod_inventory_storage.holdings_record;

  1. Execute the following query in the related database to remove existing instances created by a previous harvesting request, along with the request itself:

TRUNCATE TABLE fs09000000_mod_oai_pmh.request_metadata_lb cascade

Infrastructure

  • 8 m6i.2xlarge EC2 instances located in US East (N. Virginia)
  • 2 db.r6.xlarge database instances: one reader and one writer
  • MSK ptf-kakfa-3
    • 4 brokers
    • Apache Kafka version 2.8.0

    • EBS storage volume per broker 300 GiB

    • auto.create.topics.enable=true
    • log.retention.minutes=480
    • default.replication.factor=3
  • Front End:

    • Item Check-in (folio_checkin-8.0.100000491)
    • Item Check-out (folio_checkout-9.0.100000595)

Modules

ocp2-pvt, Mon Jul 03 14:54:13 UTC 2023. All module images are pulled from 579891902283.dkr.ecr.us-east-1.amazonaws.com/folio/.

| Module | Task Def. Revision | Module Version | Task Count | Mem Hard Limit | Mem Soft Limit | CPU Units | Xmx | MetaspaceSize | MaxMetaspaceSize | R/W split enabled |
|---|---|---|---|---|---|---|---|---|---|---|
| mod-inventory-storage | 4 | 26.0.0 | 2 | 2208 | 1952 | 1024 | 1440 | 384 | 512 | FALSE |
| mod-inventory | 3 | 20.0.0-SNAPSHOT.39 | 2 | 2880 | 2592 | 1024 | 1814 | 384 | 512 | FALSE |
| mod-source-record-storage | 5 | 5.6.5 | 2 | 5600 | 5000 | 2048 | 3600 | 384 | 512 | FALSE |
| mod-source-record-manager | 3 | 3.6.0-SNAPSHOT.197 | 2 | 4096 | 3688 | 1024 | 2048 | 384 | 512 | FALSE |
| mod-data-import | 3 | 2.7.0-SNAPSHOT.101 | 1 | 2048 | 1844 | 256 | 1292 | 384 | 512 | FALSE |
| mod-di-converter-storage | 1 | 2.1.0-SNAPSHOT.32 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE |
| mod-data-import-converter-storage | 3 | 1.16.0-SNAPSHOT.132 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE |
| mod-remote-storage | 3 | 2.0.0-SNAPSHOT.83 | 2 | 4920 | 4472 | 1024 | 3960 | 512 | 512 | FALSE |
| mod-users | 3 | 19.2.0-SNAPSHOT.584 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE |
| mod-configuration | 3 | 5.9.2-SNAPSHOT.291 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE |
| mod-circulation-storage | 3 | 16.1.0-SNAPSHOT.305 | 2 | 1536 | 1440 | 1024 | 896 | 384 | 512 | FALSE |
| mod-circulation | 3 | 23.5.0-SNAPSHOT.556 | 2 | 1024 | 896 | 1024 | 768 | 88 | 128 | FALSE |
| mod-authtoken | 3 | 2.14.0-SNAPSHOT.238 | 2 | 1440 | 1152 | 512 | 922 | 88 | 128 | FALSE |
| mod-pubsub | 3 | 2.10.0-SNAPSHOT.124 | 2 | 1536 | 1440 | 1024 | 922 | 384 | 512 | FALSE |
| pub-okapi | 2 | 2022.03.02 | 2 | 1024 | 896 | 128 | 768 | 0 | 0 | FALSE |
| okapi-b | 2 | 5.1.0-SNAPSHOT.1352 | 3 | 1684 | 1440 | 1024 | 922 | 384 | 512 | FALSE |

Partitions
