Overview
The purpose of this document is to present the results of testing Data Import Create MARC holdings records and to detect performance trends in the Quesnelia release, in scope of ticket PERF-855.
Results are compared with the previous test report: Data Import Create MARC holdings records [Poppy]
Summary
- Data Import Create Holdings job durations increased significantly in the Quesnelia release: about 4 times longer with the 10k file. The 80k file import failed to complete: it was stopped after 4 hours with only 46 of 81 split jobs committed.
- Top CPU utilization: mod-inventory-b - 16%, nginx-okapi - 5%, mod-source-record-storage-b - 4%, mod-quick-marc-b - 7%. Such low module-side resource utilization is explained by the very high average latency of DB queries: the INSERT and UPDATE statements were contending for a lock on the same tuple (see the lock-wait query after this list).
- Top memory consumption: mod-inventory-storage-b - 85%, mod-data-import-b - 52%, mod-source-record-storage-b - 45%, mod-source-record-manager-b - 43%. A growing memory trend was observed in test set #1 for mod-inventory-storage-b (up to 85%).
- DI job duration for the same file size grew from test to test when the same instance HRID was used to create all holdings.
- DI performs faster with files that use 1 unique instance HRID per 1000 records. With this approach DI duration corresponds to file size, memory is utilized without a growing trend, and CPU and RDS utilization increase because there are fewer locks in the DB.
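As a way to verify the tuple-lock explanation above, blocked and blocking sessions can be listed directly on the PostgreSQL side. This is a minimal diagnostic sketch using the standard pg_stat_activity view and the pg_blocking_pids() function; it was not part of the original test run:

-- List sessions currently waiting on a lock together with the sessions blocking them
SELECT blocked.pid   AS blocked_pid,
       blocked.query AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(pid) ON true
JOIN pg_stat_activity AS blocking ON blocking.pid = b.pid
WHERE blocked.wait_event_type = 'Lock';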
Recommendations & Jiras
- Investigate the growing memory trend for mod-inventory-storage in test set #1 (using 1 instance HRID to create all Holdings)
- Define the highest number of Holdings associated with one instance HRID that is still realistic (see the sketch after this list)
- Consider calling the /inventory/items-by-holdings-id request with a limit; currently it is called with limit=0
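For the second recommendation, the current distribution of Holdings per instance can be measured directly in the DB. A minimal sketch, assuming the instanceid foreign-key column on holdings_record and the [tenant] schema-naming convention used elsewhere in this report:

-- Top 20 instances by number of associated Holdings records
select instanceid, count(*) as holdings_count
from [tenant]_mod_inventory_storage.holdings_record
group by instanceid
order by holdings_count desc
limit 20;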
Errors
- Error status for the 32nd split job during the 80k file import: SNAPSHOT_UPDATE_ERROR
Test Runs
Profile used for testing - Default - Create Holdings and SRS MARC Holdings
Set of tests № | Scenario | Test Conditions | Status |
---|---|---|---|
1 | DI Holdings Create (previous approach) 1 instance HRID for all created holdings | 1K, 5K, 10K, 80K sequentially | Completed |
2 | DI Holdings Create (new approach) 1 instance HRID for every 1000 created holdings | 1K, 5K, 10K, 80K sequentially | Completed |
Test Results
Set 1 - Files used to test DI create Holdings had 1 instance HRID for all created Holdings
Set 2 - Files used to test DI create Holdings had 1 unique instance HRID for every 1000 created Holdings (new approach)
Test | File | Duration: Orchid (previous results) | Duration: Poppy (previous results) | Duration: Quesnelia [ECS] Set #1 | Status and Errors Quesnelia [ECS] Set #1 | Duration: Quesnelia [ECS] Set #2 | Status and Errors Quesnelia [ECS] Set #2 |
---|---|---|---|---|---|---|---|
1 | 1k | 45s | 32s | 1 min 22 sec | Success | 1 min 3 sec | Success |
2 | 5k | 7m 47s | 2m 14s | 8 min | Success | 4 min 16 sec | Success |
3 | 10k | 19m 46s | 4m 35s | 22 min 40 sec | Success | 8 min 59 sec | Success |
4 | 80k | 20m (error*) | 36m 25s | 4 hours 13 min | Stopped by user after 46 of 81 jobs COMMITTED (56% finished); 1 job with status ERROR (error status SNAPSHOT_UPDATE_ERROR, job number 32, file_name = '1718290065265-80k_holdings_Create_32.mrc') | 52 min 5 sec | Success |
Comparison
The table below compares Quesnelia and Poppy durations.
Set #1 (Compared Poppy and Quesnelia with different data sets)
Test | File | Duration: Poppy | Duration: Quesnelia set #1 | Difference absolute | Difference percentage |
---|---|---|---|---|---|
1 | 1k | 00:00:32 | 00:01:22 | 00:00:50 | 156% |
2 | 5k | 00:02:14 | 00:08:00 | 00:05:46 | 258% |
3 | 10k | 00:04:35 | 00:22:40 | 00:18:05 | 395% |
4 | 80k | 00:36:25 | 04:13:00 | 03:36:35 | 595% |
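As a worked check of the last row: the percentage difference is the absolute difference divided by the Poppy duration. A one-line PostgreSQL illustration of the 80k calculation:

-- (04:13:00 - 00:36:25) / 00:36:25 * 100 ≈ 595%
select round((extract(epoch from interval '04:13:00') - extract(epoch from interval '00:36:25'))
             / extract(epoch from interval '00:36:25') * 100) as diff_pct;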
Service CPU Utilization
Set #1: mod-inventory-b - 16%, nginx-okapi - 5%, mod-source-record-storage-b - 4%, mod-quick-marc-b - 7%
Set #2: mod-inventory-b - 33%, nginx-okapi - 23%, mod-source-record-storage-b - 11%, mod-quick-marc-b - 7%
Memory Utilization
Set #1
Set #2
RDS CPU Utilization
Set #1
62% for the major part of the tests, which is 20% less than in Poppy. It rose to 73% with the 80k file after 50 minutes of the test run.
Set #2
99% during all tests
DB Connections
Set #1
DB connections number: 1430
Set #2
DB connections number: 1500
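The connection counts above can be cross-checked on the DB side. A minimal sketch against the standard pg_stat_activity view; grouping by application_name to attribute connections to modules is an assumption, not taken from the test run:

-- Current connections per application, highest first
select application_name, count(*) as connections
from pg_stat_activity
group by application_name
order by connections desc;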
DB Load
Set #1
Set #2
SQL queries
Set #1

UPDATE cs00000int_0001_mod_inventory_storage.holdings_record SET jsonb = $1::jsonb WHERE id = '[UUID]'

INSERT INTO cs00000int_0001_mod_inventory_storage.holdings_record (id, jsonb) VALUES ($1, $2) RETURNING jsonb

autovacuum: VACUUM cs00000int_mod_entities_links.authority

autovacuum: VACUUM cs00000int_mod_entities_links.authority_archive

autovacuum: VACUUM pg_toast.pg_toast_40004

INSERT INTO cs00000int_mod_search.consortium_instance (tenant_id, instance_id, json, created_date, updated_date) VALUES ($1, $2, $3::json, $4, $5) ON CONFLICT (tenant_id, instance_id) DO UPDATE SET json = EXCLUDED.json, updated_date = EXCLUDED.updated_date

SELECT jsonb,id FROM cs00000int_0001_mod_inventory_storage.instance_holdings_item_view WHERE id='db87a6b4-d1f5-4e3d-b34b-d4bf06426127' LIMIT 1 OFFSET 0

Set #2

INSERT INTO cs00000int_0001_mod_inventory_storage.holdings_record (id, jsonb) VALUES ($1, $2) RETURNING jsonb

UPDATE cs00000int_0001_mod_inventory_storage.holdings_record SET jsonb = $1::jsonb WHERE id = '47ee9b78-3d8f-4e8b-b09e-82e9396eb3b3'

with "cte" as (select count(*) from "records_lb" where ("records_lb"."snapshot_id" <> cast($1 as uuid) and "records_lb"."external_id" = cast($2 as uuid) and "records_lb"."record_type" = $3::"record_type")) select "records_lb"."id", "records_lb"."snapshot_id", "records_lb"."matched_id", "records_lb"."generation", "records_lb"."record_type", "records_lb"."external_id", "records_lb"."state", "records_lb"."leader_record_status", "records_lb"."order", "records_lb"."suppress_discovery", "records_lb"."created_by_user_id", "records_lb"."created_date", "records_lb"."updated_by_user_id", "records_lb"."updated_date", "records_lb"."external_hrid", "marc_records_lb"."content" as "parsed_record_content", "raw_records_lb"."content" as "raw_record_content", "error_records_lb"."content" as "error_record_content", "error_records_lb"."description", "count" from "records_lb" left outer join "marc_records_lb" on "records_lb"."id" = "marc_records_lb"."id" left outer join "raw_records_lb" on "records_lb"."id" = "raw_records_lb"."id" left outer join "error_records_lb" on "records_lb"."id" = "error_records_lb"."id" right outer join (select * from "cte") as "alias_80949780" on true where ("records_lb"."snapshot_id" <> cast($4 as uuid) and "records_lb"."external_id" = cast($5 as uuid) and "records_lb"."record_type" = $6::"record_type") offset $7 rows fetch next $8 rows only

INSERT INTO cs00000int_0001_mod_source_record_manager.events_processed (handler_id, event_id) VALUES ($1, $2)

INSERT INTO cs00000int_0001_mod_source_record_manager.journal_records (id, job_execution_id, source_id, source_record_order, entity_type, entity_id, entity_hrid, action_type, action_status, error, action_date, title, instance_id, holdings_id, order_id, permanent_location_id, tenant_id) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17)

autovacuum: VACUUM pg_toast.pg_toast_40004
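The statements above were captured as top SQL during the test runs. A comparable list can be pulled with the pg_stat_statements extension, assuming it is enabled on the RDS instance (mean_exec_time is available in PostgreSQL 13+, which matches the 16.1 engine listed below):

-- Top 10 statements by average execution time
select left(query, 120) as query_sample,
       calls,
       round(mean_exec_time::numeric, 1) as mean_ms,
       rows
from pg_stat_statements
order by mean_exec_time desc
limit 10;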
Infrastructure
PTF - environment qcon
- 10 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 1 database instance (writer)
Name | Memory GiB | vCPUs | Engine version | Architecture settings |
---|---|---|---|---|
db.r6g.xlarge | 32 GiB | 4 vCPUs | 16.1 | Non-multitenant architecture |
- MSK tenant
- 2 m5.2xlarge brokers in 2 zones
- Apache Kafka version 2.8.0
- EBS storage volume per broker 300 GiB
- auto.create.topics.enable=true
- log.retention.minutes=480
- default.replication.factor=2
Methodology/Approach
- Prepare Data Import files of 1k, 5k, 10k, and 80k records with a defined number of holdings records associated with an instance HRID (1 instance HRID for all records, or 1 per 1000 records)
- Replace the instance HRID field with an active one from the environment (example: =004 colin00001144043)
- Replace the location field (example: =852 01$bme3CC$hKFN5860.A6$iC732), where me3CC is the code of the tenant location. Go to /settings/tenant-settings/location-locations and take the code of a location with active status
- To replace field 004, extract instance HRIDs of active instances for this tenant using the SQL query below
Get total job durations
SQL to get job durations:
select file_name, total_records_in_file, started_date, completed_date, completed_date - started_date as duration, status, error_status
from [tenant]_mod_source_record_manager.job_execution
where subordination_type = 'COMPOSITE_PARENT'
-- where started_date > '2024-06-13 14:47:54' and completed_date < '2024-06-13 19:01:50.832'
order by started_date desc
limit 10
Get instance HRID ids
SQL to get instance HRIDs:
select jsonb->>'hrid' as instanceHRID
from [tenant]_mod_inventory_storage.instance
where jsonb->>'discoverySuppress' = 'false' and jsonb->>'source' = 'MARC'
limit 80
- Put the instance HRIDs into the stringsHRID.txt file without double quotes or headers. Every row should contain only one HRID
- Use the Python script to replace HRIDs in the .mrc file if needed. The script is located in the Git repository: perf-testing\workflows-scripts\data-import\Holdings\Data_preparation_steps
- Run Data Import jobs sequentially, one by one, from the UI with a 5 min delay (the delay time can vary; this interval was chosen as comfortable for collecting results).