Data Import test report (Orchid) baseline for ocp3


Overview

This document contains the results of testing Data Import for MARC Bibliographic records in the Orchid release to establish the baseline for ocp3 (PERF-662).

Summary

  • DI duration correlates with the number of records imported (100k records: 32 min; 250k: 1 hr 33 min; 500k: 3 hrs 33 min).
  • Multitenant DI completed successfully with up to 9 jobs in parallel. Large jobs start one by one, in order, on each tenant, but are processed in parallel across the 3 tenants; small DI jobs (1 record) can finish earlier, out of order.
  • Check-In/Check-Out response times roughly double during DI (Check-In from 0.517 s to 1.138 s, Check-Out from 0.796 s to 1.552 s).
  • No memory leak is suspected for the DI modules; the observed increase in memory utilization was caused by the scheduled cluster shutdown.
  • Average CPU usage during the 500k-record Create test was about 462% for mod-di-converter-storage; all other modules did not exceed 150%. mod-data-import showed spikes of up to 400% at the beginning of the Data Import jobs.
  • DB CPU usage reached approximately 95%.
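The near-linear scaling claim in the first bullet can be sanity-checked with a quick per-record throughput calculation (a sketch only; the durations are taken from the Results table in this report):

```python
# Per-record throughput for the three single-tenant Create runs.
runs_minutes = {
    100_000: 32,    # 100k records: 32 min
    250_000: 93,    # 250k records: 1 hr 33 min
    500_000: 213,   # 500k records: 3 hrs 33 min
}

for records, minutes in runs_minutes.items():
    rate = records / (minutes * 60)  # records per second
    print(f"{records:>7,} records: {rate:5.1f} rec/s")

# Throughput drifts from ~52 rec/s down to ~39 rec/s as file size grows,
# i.e. scaling is close to linear but degrades slightly for larger files.
```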

Recommendations and Jiras

It is recommended to increase CPU units for mod-di-converter-storage to 512.

Results

| Test # | Profile | Job Profile | Duration (ocp3) | Results |
|---|---|---|---|---|
| 1 | 100K MARC Create | PTF - Create 2 | 32-33 minutes | Completed |
| 1 | 250K MARC Create | PTF - Create 2 | 1 hr 33 min - 1 hr 57 min | Completed |
| 1 | 500K MARC Create | PTF - Create 2 | 3 hrs 33 min | Completed |
| 2 | Multitenant MARC Create (100k, 50k, and 1 record) | PTF - Create 2 | 3 hrs 1 min | Completed |
| 3 | CI/CO + DI MARC Create (20 users CI/CO, 25k records DI on 3 tenants) | PTF - Create 2 | 24 min | Completed * |


 * One record on one tenant could be discarded with the error io.netty.channel.StacklessClosedChannelException.

Test #3 With CI/CO 20 users and DI 25k records on each of the 3 tenants

| Test #3 | CI/CO Response Time with DI | CI/CO Response Time without DI |
|---|---|---|
| Check-In | 1.138 s | 0.517 s |
| Check-Out | 1.552 s | 0.796 s |

| Test #3 | DI Duration with CI/CO | DI Duration without CI/CO * |
|---|---|---|
| Tenant _1 | 20 min | 14 min (18 min for run 2) |
| Tenant _2 | 19 min | 16 min (18 min for run 2) |
| Tenant _3 | 16 min | 16 min (15 min for run 2) |

 * Same staggered approach, testing DI only: 3 DI jobs total on 3 tenants, without CI/CO. The second job was started after the first reached 30% completion, and the third job was started on the third tenant after the first reached 60%. DI file size: 25k records.

Memory Utilization

No memory leak is suspected for the DI modules; the observed increase in memory utilization was caused by the scheduled cluster shutdown.

MARC BIB CREATE

Test#1 100k, 250k, 500k records DI

Test#2 Multitenant  DI (9 concurrent jobs)

Test#3 With CI/CO

Service CPU Utilization 

MARC BIB CREATE

Average CPU usage during the 500k-record Create test was about 462% for mod-di-converter-storage; all other modules did not exceed 150%. mod-data-import showed spikes in CPU usage of up to 400% at the beginning of the Data Import jobs.

Test#1  250k, 500k records DI

Test#2 Multitenant

Test#3 With CI/CO

Instance CPU Utilization

Test#1  250k, 500k records DI

Test#2 Multitenant DI (9 concurrent jobs)

RDS CPU Utilization 

MARC BIB CREATE

DB CPU usage reached approximately 95%.

Test#1  250k, 500k records DI

Test#2 Multitenant  DI (9 concurrent jobs)

Test#3 With CI/CO

RDS Database Connections

MARC BIB CREATE
For the DI Create job, the maximum connection count was 520.

Test#1  250k, 500k records DI

Test#2 Multitenant

Test#3 With CI/CO

Appendix

Infrastructure ocp3

Records count :

  • tenant0_mod_source_record_storage.marc_records_lb = 9674629
  • tenant2_mod_source_record_storage.marc_records_lb = 0
  • tenant3_mod_source_record_storage.marc_records_lb = 0
  • tenant0_mod_source_record_storage.raw_records_lb = 9604805
  • tenant2_mod_source_record_storage.raw_records_lb = 0
  • tenant3_mod_source_record_storage.raw_records_lb = 0
  • tenant0_mod_source_record_storage.records_lb = 9674677
  • tenant2_mod_source_record_storage.records_lb = 0
  • tenant3_mod_source_record_storage.records_lb = 0
  • tenant0_mod_source_record_storage.marc_indexers =  620042011
  • tenant2_mod_source_record_storage.marc_indexers =  0
  • tenant3_mod_source_record_storage.marc_indexers =  0
  • tenant0_mod_source_record_storage.marc_indexers with field_no 010 = 3285833
  • tenant2_mod_source_record_storage.marc_indexers with field_no 010 = 0
  • tenant3_mod_source_record_storage.marc_indexers with field_no 010 = 0
  • tenant0_mod_source_record_storage.marc_indexers with field_no 035 = 19241844
  • tenant2_mod_source_record_storage.marc_indexers with field_no 035 = 0
  • tenant3_mod_source_record_storage.marc_indexers with field_no 035 = 0
  • tenant0_mod_inventory_storage.authority = 4
  • tenant2_mod_inventory_storage.authority = 0
  • tenant3_mod_inventory_storage.authority = 0
  • tenant0_mod_inventory_storage.holdings_record = 9592559
  • tenant2_mod_inventory_storage.holdings_record = 16
  • tenant3_mod_inventory_storage.holdings_record = 16
  • tenant0_mod_inventory_storage.instance = 9976519
  • tenant2_mod_inventory_storage.instance = 32
  • tenant3_mod_inventory_storage.instance = 32 
  • tenant0_mod_inventory_storage.item = 10787893
  • tenant2_mod_inventory_storage.item = 19
  • tenant3_mod_inventory_storage.item = 19
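
For reproducibility, the counts above can be gathered with one COUNT(*) per tenant schema and table. A minimal sketch that only builds the SQL statements, following the `<tenant>_<module>.<table>` naming pattern seen in the list (executing them would require a connection to the PTF database, e.g. via psycopg):

```python
# Sketch: construct one COUNT(*) statement per tenant schema and table.
# Only a subset of the tables from the list above is shown here.
TENANTS = ["tenant0", "tenant2", "tenant3"]
TABLES = [
    ("mod_source_record_storage", "marc_records_lb"),
    ("mod_source_record_storage", "records_lb"),
    ("mod_inventory_storage", "instance"),
]

def count_statements(tenants=TENANTS, tables=TABLES):
    """Return a COUNT(*) statement for every tenant/table combination."""
    return [
        f"SELECT count(*) FROM {tenant}_{module}.{table};"
        for tenant in tenants
        for module, table in tables
    ]
```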

PTF -environment ocp3 

  • 10 m6i.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
  • 2 database instances: one reader and one writer

    | Name | API Name | Memory (GiB) | vCPUs | max_connections |
    |---|---|---|---|---|
    | R6G Extra Large | db.r6g.xlarge | 32 | 4 | 2731 |
  • MSK ptf-kakfa-3
    • 4 m5.2xlarge brokers in 2 zones
    • Apache Kafka version 2.8.0

    • EBS storage volume per broker 300 GiB

    • auto.create.topics.enable=true
    • log.retention.minutes=480
    • default.replication.factor=3
  • Kafka topic partitioning: 2 partitions for DI topics
Modules deployed on ocp3-pvt as of Mon Sep 11 09:33:28 UTC 2023. All module images are pulled from 579891902283.dkr.ecr.us-east-1.amazonaws.com/folio/.

| Module | Task Def. Revision | Module Version | Task Count | Mem Hard Limit | Mem Soft Limit | CPU units | Xmx | MetaspaceSize | MaxMetaspaceSize | R/W split enabled |
|---|---|---|---|---|---|---|---|---|---|---|
| mod-remote-storage | 13 | 2.0.3 | 2 | 4920 | 4472 | 1024 | 3960 | 512 | 512 | false |
| mod-agreements | 8 | 5.5.2 | 2 | 1592 | 1488 | 128 | 968 | 384 | 512 | false |
| mod-data-import | 7 | 2.7.1 | 1 | 2048 | 1844 | 256 | 1292 | 384 | 512 | false |
| mod-search | 30 | 2.0.1 | 2 | 2592 | 2480 | 2048 | 1440 | 512 | 1024 | false |
| mod-authtoken | 7 | 2.13.0 | 2 | 1440 | 1152 | 512 | 922 | 88 | 128 | false |
| mod-configuration | 7 | 5.9.1 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | false |
| mod-inventory-storage | 1 | 26.1.0-SNAPSHOT.665 | 0 | 2208 | 1952 | 1024 | 1440 | 384 | 512 | false |
| mod-circulation-storage | 15 | 16.0.1 | 2 | 2880 | 2592 | 1536 | 1814 | 384 | 512 | false |
| mod-source-record-storage | 11 | 5.6.7 | 2 | 5600 | 5000 | 2048 | 3500 | 384 | 512 | false |
| mod-calendar | 7 | 2.4.2 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | false |
| mod-inventory | 12 | 20.0.6 | 2 | 2880 | 2592 | 1024 | 1814 | 384 | 512 | false |
| mod-circulation | 9 | 23.5.6 | 2 | 2880 | 2592 | 1536 | 1814 | 384 | 512 | false |
| mod-di-converter-storage | 8 | 2.0.5 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | false |
| mod-pubsub | 8 | 2.9.1 | 2 | 1536 | 1440 | 1024 | 922 | 384 | 512 | false |
| mod-users | 8 | 19.1.1 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | false |
| mod-patron-blocks | 8 | 1.8.0 | 2 | 1024 | 896 | 1024 | 768 | 88 | 128 | false |
| mod-source-record-manager | 9 | 3.6.4 | 2 | 5600 | 5000 | 2048 | 3500 | 384 | 512 | false |
| nginx-edge | 7 | 2023.06.14 | 2 | 1024 | 896 | 128 | 0 | 0 | 0 | false |
| mod-quick-marc | 7 | 3.0.0 | 1 | 2288 | 2176 | 128 | 1664 | 384 | 512 | false |
| nginx-okapi | 7 | 2023.06.14 | 2 | 1024 | 896 | 128 | 0 | 0 | 0 | false |
| okapi-b | 8 | 5.0.1 | 3 | 1684 | 1440 | 1024 | 922 | 384 | 512 | false |
| mod-feesfines | 7 | 18.2.1 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | false |
| mod-patron | 7 | 5.5.2 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | false |
| mod-notes | 7 | 5.0.1 | 2 | 1024 | 896 | 128 | 952 | 384 | 512 | false |
| pub-okapi | 7 | 2023.06.14 | 2 | 1024 | 896 | 128 | 768 | 0 | 0 | false |

Methodology/Approach

Test 1: 100k, 250k, and 500k record files were imported manually, one after another, on a single tenant.

Test 2: 100k, 50k, and 1-record files were started simultaneously on each of the 3 tenants (9 jobs total).

Test 3: CI/CO was run on one tenant while DI jobs ran on 3 tenants, including the tenant running CI/CO. The second DI job was started after the first reached 30% completion, and the third job was started on the third tenant after the first reached 60%. CI/CO: 20 users for 2 hours; DI file size: 25k records. In addition to test #3, the same DI pattern was run on 3 tenants without CI/CO.
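
The staggered-start pattern used in Test 3 can be sketched as a small polling loop. This is an illustration only: `start_job` and `get_progress` are hypothetical stand-ins for whatever client (e.g. FOLIO REST calls) actually starts the Data Import jobs and reads their completion percentage.

```python
import time

def run_staggered(start_job, get_progress, thresholds=(0.30, 0.60), poll_interval=1.0):
    """Start the first DI job immediately; start the next job each time the
    first job's completion crosses the next threshold (30%, then 60%).
    Returns the ids of all started jobs, in start order."""
    jobs = [start_job(0)]              # job on the first tenant starts right away
    for tenant, threshold in enumerate(thresholds, start=1):
        while get_progress(jobs[0]) < threshold:
            time.sleep(poll_interval)  # poll the first job's progress
        jobs.append(start_job(tenant))
    return jobs
```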