Overview
- This document contains the results of testing Data Import for MARC Bibliographic records with an update job on the Quesnelia [ECS] release in the qcon environment.
PERF-846
Summary
- Data Import tests finished successfully on the qcon environment using the PTF - Updates Success - 2 profile and a file with 25K records.
- Results were compared with previous test results for the Poppy and Quesnelia releases.
- Data Import processed all jobs, including a test on 3 tenants concurrently, without errors on the Quesnelia release.
- Data Import durations stayed in the same range on average for the Quesnelia release, and the jobs ran stably and without errors.
- During testing, mod-permissions showed no CPU spikes and used 12% CPU on the Quesnelia release; on the Poppy release we observed errors.
Test Runs
Test № | Scenario | Test Conditions | Results
---|---|---|---
1 | DI MARC Bib Create | 5K, 10K, 25K, 50K, 100K records consecutively (with 5 min pause) | Completed
 | CICO | 8 users |
2 | DI MARC Bib Update | 5K, 10K, 25K, 50K, 100K records consecutively (with 5 min pause) | Completed
 | CICO | 8 users |
Test Results
This table contains durations for Data Import.
Comparison
This table contains a comparison of durations between the Poppy and Quesnelia releases.
Resource utilization for Test №1
Resource utilization table
Module | CPU | Module | RAM
---|---|---|---
mod-data-import-b | 56% | mod-inventory-b | 65%
nginx-okapi | 56% | mod-data-import-b | 53%
mod-di-converter-storage-b | 38% | mod-source-record-manager-b | 48%
okapi-b | 36% | mod-source-record-storage-b | 43%
mod-inventory-storage-b | 23% | okapi-b | 34%
mod-source-record-storage-b | 13% | mod-di-converter-storage-b | 33%
mod-source-record-manager-b | 11% | mod-feesfines-b | 33%
mod-feesfines-b | 10% | mod-patron-blocks-b | 31%
mod-quick-marc-b | 8% | mod-quick-marc-b | 31%
mod-pubsub-b | 8% | mod-pubsub-b | 30%
mod-authtoken-b | 7% | mod-configuration-b | 28%
mod-configuration-b | 6% | mod-users-bl-b | 26%
pub-okapi | 4% | mod-circulation-b | 25%
mod-remote-storage-b | 3% | mod-authtoken-b | 20%
mod-circulation-storage-b | 3% | mod-circulation-storage-b | 20%
mod-inventory-update-b | 2% | mod-inventory-storage-b | 18%
mod-circulation-b | 2% | mod-remote-storage-b | 17%
mod-patron-blocks-b | 1% | nginx-okapi | 4%
mod-users-bl-b | 1% | pub-okapi | 4%
Service CPU Utilization
Here we can see that mod-data-import peaked at 150% CPU during spikes.
Service Memory Utilization
Here we can see that all modules show a stable trend.
DB CPU Utilization
DB CPU utilization reached 92%.
DB Connections
Max number of DB connections was 1690.
DB load
Top SQL-queries
# | Top 5 SQL statements
---|---
1 | INSERT INTO cs00000int_0001_mod_source_record_manager.events_processed (handler_id, event_id) VALUES ($1, $2)
2 | insert into "marc_records_lb" ("id", "content") values (cast($1 as uuid), cast($2 as jsonb)) on conflict ("id") do update set "content" = cast($3 as jsonb)
3 | INSERT INTO cs00000int_0001_mod_source_record_manager.journal_records (id, job_execution_id, source_id, source_record_order, entity_type, entity_id, entity_hrid, action_type, action_status, error, action_date, title, instance_id, holdings_id, order_id, permanent_location_id, tenant_id) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17)
4 | INSERT INTO cs00000int_mod_search.consortium_instance (tenant_id, instance_id, json, created_date, updated_date) VALUES ($1, $2, $3::json, $4, $5) ON CONFLICT (tenant_id, instance_id) DO UPDATE SET json = EXCLUDED.json, updated_date = EXCLUDED.updated_date
5 | INSERT INTO cs00000int_0001_mod_inventory_storage.holdings_record (id, jsonb) VALUES ($1, $2) RETURNING jsonb
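Statements 2 and 4 in the list above are upserts: PostgreSQL's ON CONFLICT ... DO UPDATE clause lets Data Import insert a record or overwrite an existing one in a single round trip, which is why updates of already-imported records do not produce duplicate rows. A minimal sketch of that pattern, using SQLite in place of PostgreSQL (the table name mirrors marc_records_lb above; the ids and contents are invented, and the real statement uses uuid/jsonb types):

```python
import sqlite3

# In-memory stand-in for the mod-source-record-storage table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE marc_records_lb (id TEXT PRIMARY KEY, content TEXT)")

def upsert_record(record_id, content):
    # Insert a new record, or overwrite its content if the id already
    # exists -- the same "on conflict (id) do update" clause seen in
    # the top-query list (SQLite syntax, available since 3.24).
    conn.execute(
        "INSERT INTO marc_records_lb (id, content) VALUES (?, ?) "
        "ON CONFLICT (id) DO UPDATE SET content = excluded.content",
        (record_id, content),
    )

upsert_record("0001", '{"leader": "old"}')
upsert_record("0001", '{"leader": "new"}')  # update path: same id, new content
rows = conn.execute("SELECT id, content FROM marc_records_lb").fetchall()
```

The second call takes the update path because the id already exists, so the stored content is replaced rather than duplicated.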
Resource utilization for Test №2
Resource utilization table
Module | CPU | Module | RAM
---|---|---|---
nginx-okapi | 67% | mod-data-import-b | 72%
mod-data-import-b | 50% | mod-inventory-b | 66%
okapi-b | 42% | mod-source-record-manager-b | 52%
mod-di-converter-storage-b | 41% | mod-source-record-storage-b | 45%
mod-source-record-storage-b | 21% | mod-pubsub-b | 35%
mod-inventory-storage-b | 20% | okapi-b | 35%
mod-source-record-manager-b | 9% | mod-di-converter-storage-b | 34%
mod-quick-marc-b | 9% | mod-feesfines-b | 33%
mod-feesfines-b | 9% | mod-patron-blocks-b | 32%
mod-authtoken-b | 9% | mod-quick-marc-b | 31%
mod-pubsub-b | 7% | mod-circulation-storage-b | 30%
mod-configuration-b | 6% | mod-configuration-b | 29%
pub-okapi | 4% | mod-users-bl-b | 28%
mod-circulation-storage-b | 2% | mod-circulation-b | 28%
mod-remote-storage-b | 2% | mod-inventory-storage-b | 22%
mod-inventory-update-b | 2% | mod-authtoken-b | 21%
mod-circulation-b | 1% | mod-remote-storage-b | 18%
mod-patron-blocks-b | 1% | nginx-okapi | 4%
mod-users-bl-b | 0.8% | pub-okapi | 4%
Service CPU Utilization
Here we can see that mod-data-import peaked at 130% CPU during spikes.
Service Memory Utilization
Here we can see that all modules show a stable trend.
DB CPU Utilization
DB CPU utilization reached 92%.
DB Connections
Max number of DB connections was 1685.
DB load
Top SQL-queries
# | Top 5 SQL statements
---|---
1 | insert into "marc_records_lb" ("id", "content") values (cast($1 as uuid), cast($2 as jsonb)) on conflict ("id") do update set "content" = cast($3 as jsonb)
2 | INSERT INTO cs00000int_0001_mod_source_record_manager.events_processed (handler_id, event_id) VALUES ($1, $2)
3 | INSERT INTO cs00000int_0001_mod_source_record_manager.journal_records (id, job_execution_id, source_id, source_record_order, entity_type, entity_id, entity_hrid, action_type, action_status, error, action_date, title, instance_id, holdings_id, order_id, permanent_location_id, tenant_id) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17)
4 | INSERT INTO cs00000int_mod_search.consortium_instance (tenant_id, instance_id, json, created_date, updated_date) VALUES ($1, $2, $3::json, $4, $5) ON CONFLICT (tenant_id, instance_id) DO UPDATE SET json = EXCLUDED.json, updated_date = EXCLUDED.updated_date
5 | UPDATE cs00000int_0001_mod_inventory_storage.instance SET jsonb = $1::jsonb WHERE id=?
Appendix
Infrastructure
PTF - environment Quesnelia (qcon)
- 11 m6i.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 1 db.r6.xlarge database instance (writer)
- OpenSearch
- MSK - tenant
  - 4 kafka.m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
  - Kafka consolidated topics enabled
Quesnelia modules memory and CPU parameters
Methodology/Approach
DI test scenarios (DI MARC Bib Create and DI MARC Bib Update) were started from the UI on the Quesnelia (qcon) environment with the file-splitting feature enabled on an ECS environment.
Test runs:
- Test 1: Manually tested 5K, 10K, 25K, 50K, and 100K record files consecutively (with a 5 min pause); DI (DI MARC Bib Create) was started on the College tenant (cs00000int_0001) only, with CICO running with 8 users in the background.
- Test 2: Manually tested 5K, 10K, 25K, 50K, and 100K record files consecutively (with a 5 min pause); DI (DI MARC Bib Update) was started on the College tenant (cs00000int_0001) only, with CICO running with 8 users in the background.
At the time of the test run, Grafana was not available. As a result, response times for Check-In/Check-Out were parsed manually from the .jtl files, using the start and finish times of the data import tests, and visualized in JMeter with the Response Times Over Time listener.
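The manual parsing step above can be sketched as follows, assuming JMeter's default CSV .jtl columns (timeStamp, elapsed, label, success); the sample rows and the window boundaries are invented for illustration:

```python
import csv
import io

# Invented .jtl fragment in JMeter's default CSV layout: epoch-ms
# timestamp, elapsed time in ms, sampler label, success flag.
JTL_SAMPLE = """timeStamp,elapsed,label,success
1000,210,Check-In,true
2000,190,Check-Out,true
9000,250,Check-In,true
"""

def response_times_in_window(jtl_text, start_ms, end_ms):
    """Return (label, elapsed_ms) pairs for samples whose timestamp
    falls inside the data-import run window [start_ms, end_ms]."""
    reader = csv.DictReader(io.StringIO(jtl_text))
    return [
        (row["label"], int(row["elapsed"]))
        for row in reader
        if start_ms <= int(row["timeStamp"]) <= end_ms
    ]

# Keep only CICO samples recorded while the data import job was running.
samples = response_times_in_window(JTL_SAMPLE, 500, 5000)
```

The third sample is dropped because it falls outside the data-import window; the remaining pairs are what would be charted in the Response Times Over Time listener.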