Overview
- This document contains the results of testing Check-in/Check-out and Data Import for MARC Bibliographic records on the Quesnelia [ECS] release on the qcon environment.
- PERF-846
Summary
- Data Import with Check In/Check Out tests finished successfully on the qcon environment using the PTF - Create 2 and PTF - Updates Success - 2 profiles with files of 5K, 10K, 25K, 50K, and 100K records.
- Comparing the Quesnelia results with the previous Poppy release:
  - Data Import durations for create jobs show a performance degradation of up to 10% on average in Quesnelia.
  - Data Import durations for update jobs show a performance degradation of up to 40% on average in Quesnelia.
  - Check In/Check Out response times show a slight improvement of up to 10% on average in Quesnelia.
- During testing, we noticed that the mod-data-import module used a maximum of 57% CPU on Quesnelia. On Poppy, mod-data-import used 130% CPU, with spikes up to 320%.
- No memory leaks were observed.
Test Runs
| Test № | Scenario | Test Conditions | Results |
|---|---|---|---|
| 1 | DI MARC Bib Create | 5K, 10K, 25K, 50K, 100K consecutively (with 5 min pause) | Completed |
|  | CICO | 8 users |  |
| 2 | DI MARC Bib Update | 5K, 10K, 25K, 50K, 100K consecutively (with 5 min pause) | Completed |
|  | CICO | 8 users |  |
Test Results
This table contains Data Import durations and Check In/Check Out response times for the Quesnelia runs.

| Profile | MARC File | DI Duration Quesnelia (hh:mm:ss) | CI Average, sec (Quesnelia, 8 users) | CO Average, sec (Quesnelia, 8 users) |
|---|---|---|---|---|
| DI MARC Bib Create (PTF - Create 2) | 5K.mrc | 0:03:21 | 0.831 | 1.357 |
|  | 10K.mrc | 0:06:51 | 0.845 | 1.410 |
|  | 25K.mrc | 0:12:41 | 0.719 | 1.333 |
|  | 50K.mrc | 0:23:19 | 0.691 | 1.327 |
|  | 100K.mrc | 0:51:24 | 0.664 | 1.335 |
| DI MARC Bib Update (PTF - Updates Success - 2) | 5K.mrc | 0:04:12 | 0.764 | 1.458 |
|  | 10K.mrc | 0:08:15 | 0.779 | 1.377 |
|  | 25K.mrc | 0:20:38 | 0.755 | 1.401 |
|  | 50K.mrc | 0:43:06 | 0.750 | 1.444 |
|  | 100K.mrc | 1:29:09 | 0.730 | 1.458 |
Check-in/Check-out without DI
| Scenario | Load level | Request | Response time, sec (Quesnelia), 95th perc | Response time, sec (Quesnelia), average |
|---|---|---|---|---|
| Circulation Check-in/Check-out (without Data Import) | 8 users | Check-in | 0.635 | 0.493 |
|  |  | Check-out | 1.243 | 1.078 |
Comparison
This table compares DI durations (with CICO running in the background) and CICO response times between the Poppy and Quesnelia releases; a short sketch of the delta calculation follows the table.

| Profile | MARC File | DI Duration with CI/CO, Poppy | DI Duration with CI/CO, Quesnelia | DI Delta Poppy/Quesnelia (hh:mm:ss / %) | CI Average, sec, Poppy | CO Average, sec, Poppy | CI Average, sec, Quesnelia | CO Average, sec, Quesnelia | CI Delta, % | CO Delta, % |
|---|---|---|---|---|---|---|---|---|---|---|
| DI MARC Bib Create (PTF - Create 2) | 5K.mrc | 00:02:53 | 0:03:21 | +0:00:28 / +16.18% | 0.901 | 1.375 | 0.831 | 1.357 | -7.77% | -1.31% |
|  | 10K.mrc | 00:04:32 | 0:06:51 | +0:02:19 / +51.10% | 0.902 | 1.47 | 0.845 | 1.410 | -6.32% | -4.08% |
|  | 25K.mrc | 00:11:14 | 0:12:41 | +0:01:27 / +12.91% | 1 | 1.571 | 0.719 | 1.333 | -28.1% | -15.15% |
|  | 50K.mrc | 00:21:55 | 0:23:19 | +0:01:24 / +6.39% | 0.981 | 1.46 | 0.691 | 1.327 | -29.57% | -9.11% |
|  | 100K.mrc | 00:47:02 | 0:51:24 | +0:04:22 / +9.28% | 1.018 | 1.491 | 0.664 | 1.335 | -34.78% | -10.47% |
| DI MARC Bib Update (PTF - Updates Success - 2) | 5K.mrc | 00:03:19 | 0:04:12 | +0:00:53 / +26.63% | 0.755 | 1.169 | 0.764 | 1.458 | +1.19% | +24.73% |
|  | 10K.mrc | 00:06:20 | 0:08:15 | +0:01:55 / +30.26% | 0.75 | 1.307 | 0.779 | 1.377 | +3.87% | +5.36% |
|  | 25K.mrc | 00:14:04 | 0:20:38 | +0:06:34 / +46.68% | 0.822 | 1.403 | 0.755 | 1.401 | -8.15% | -0.14% |
|  | 50K.mrc | 00:29:59 | 0:43:06 | +0:13:07 / +43.74% | 0.893 | 1.424 | 0.750 | 1.444 | -16.01% | +1.40% |
|  | 100K.mrc | 01:03:03 | 1:29:09 | +0:26:06 / +41.40% | 0.908 | 1.51 | 0.730 | 1.458 | -19.60% | -3.44% |
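The DI delta column is simply the difference between the Quesnelia and Poppy durations, plus that difference expressed as a percentage of the Poppy duration. A minimal Python sketch of the calculation, for reference only (the table values above come from the test reporting, not from this code):

```python
from datetime import timedelta

def parse_hms(value: str) -> timedelta:
    """Parse a duration string such as '0:23:19' or '00:47:02'."""
    hours, minutes, seconds = (int(part) for part in value.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

def di_delta(poppy: str, quesnelia: str) -> str:
    """Return the delta as '+h:mm:ss / +x.xx%' relative to the Poppy duration."""
    p, q = parse_hms(poppy), parse_hms(quesnelia)
    diff_seconds = int((q - p).total_seconds())
    pct = (q - p) / p * 100          # timedelta / timedelta -> float
    sign = "+" if diff_seconds >= 0 else "-"
    h, rem = divmod(abs(diff_seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{sign}{h}:{m:02d}:{s:02d} / {pct:+.2f}%"

# Example: the 100K create row
print(di_delta("00:47:02", "0:51:24"))  # +0:04:22 / +9.28%
```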
Detailed CICO response time comparison without DI
| Scenario | Load level | Request | Poppy 95th perc, sec | Poppy average, sec | Quesnelia 95th perc, sec | Quesnelia average, sec |
|---|---|---|---|---|---|---|
| Circulation Check-in/Check-out (without Data Import) | 8 users | Check-in | 0.489 | 0.431 | 0.635 | 0.493 |
|  |  | Check-out | 0.969 | 0.828 | 1.243 | 1.078 |
Resource utilization for Test №1
Service CPU Utilization
Here we can see that mod-data-import used 150% CPU in spikes.
Service Memory Utilization
Here we can see that all modules show a stable trend.
DB CPU Utilization
DB CPU was 92%.
DB Connections
Max number of DB connections was 1690.
DB load
Top SQL-queries
| # | TOP 5 SQL statements |
|---|---|
| 1 |  |
| 2 |  |
| 3 |  |
| 4 |  |
| 5 |  |
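The report lists the top SQL statements observed during the run. As an illustrative sketch only (an assumption, since the report does not state how the list was collected), a comparable "top 5" list can be pulled from PostgreSQL's pg_stat_statements extension; connection details below are placeholders:

```python
# Illustrative only: query pg_stat_statements for the most expensive statements.
# Host, database, and credentials are placeholders, not the qcon environment's.
import psycopg2

TOP_SQL = """
    SELECT query, calls, round(total_exec_time::numeric, 2) AS total_ms
    FROM pg_stat_statements          -- on PostgreSQL < 13 use total_time instead
    ORDER BY total_exec_time DESC
    LIMIT 5;
"""

with psycopg2.connect(host="db-writer.example.org", dbname="folio",
                      user="folio_admin", password="***") as conn:
    with conn.cursor() as cur:
        cur.execute(TOP_SQL)
        for rank, (query, calls, total_ms) in enumerate(cur.fetchall(), start=1):
            print(f"{rank}. calls={calls}, total_ms={total_ms}")
            print(f"   {query[:120]}")
```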
Resource utilization for Test №2
Service CPU Utilization
Here we can see that mod-data-import used 130% CPU in spikes.
Service Memory Utilization
Here we can see that all modules show a stable trend.
DB CPU Utilization
DB CPU was 92%.
DB Connections
Max number of DB connections was 1685.
DB load
Top SQL-queries
| # | TOP 5 SQL statements |
|---|---|
| 1 |  |
| 2 |  |
| 3 |  |
| 4 |  |
| 5 |  |
Appendix
Infrastructure
PTF - environment Quesnelia (qcon)
- 11 m6i.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 1 db.r6.xlarge database instance (writer)
- OpenSearch
  - domain: fse
  - Number of nodes: 6
  - Version: OpenSearch_2_7_R20240502
- MSK - tenant
  - 4 kafka.m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
  - Kafka consolidated topics enabled
Methodology/Approach
DI test scenarios (DI MARC Bib Create and DI MARC Bib Update) were started from the UI on the Quesnelia (qcon) environment, with the file-splitting feature enabled, on an ECS environment.
Test runs:
- Test 1: 5K, 10K, 25K, 50K, and 100K record files were imported manually and consecutively (with a 5 min pause between files); DI (DI MARC Bib Create) was started on the College tenant (cs00000int_0001) only, with CICO running in the background with 8 users.
- Test 2: 5K, 10K, 25K, 50K, and 100K record files were imported manually and consecutively (with a 5 min pause between files); DI (DI MARC Bib Update) was started on the College tenant (cs00000int_0001) only, with CICO running in the background with 8 users.
At the time of the test runs, Grafana was not available. As a result, Check-In/Check-Out response times were parsed manually from the .jtl files, using the start and finish times of the Data Import jobs, and were visualized in JMeter with the Response Times Over Time listener. A minimal sketch of this analysis is shown below.
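A minimal sketch of that manual analysis, assuming the default JMeter CSV .jtl format (timeStamp and elapsed in milliseconds); the file path and the DI start/finish timestamps are placeholders:

```python
# Sketch: filter a JMeter .jtl (CSV) to the DI time window and report
# average and 95th-percentile response times per sampler label.
import pandas as pd

JTL_FILE = "cico_results.jtl"        # placeholder path to the JMeter results file
DI_START_MS = 1_700_000_000_000      # placeholder: DI job start, epoch milliseconds
DI_FINISH_MS = 1_700_003_000_000     # placeholder: DI job finish, epoch milliseconds

df = pd.read_csv(JTL_FILE)
window = df[(df["timeStamp"] >= DI_START_MS) & (df["timeStamp"] <= DI_FINISH_MS)]

for label, group in window.groupby("label"):
    elapsed_sec = group["elapsed"] / 1000.0
    print(f"{label}: average={elapsed_sec.mean():.3f} s, "
          f"95th perc={elapsed_sec.quantile(0.95):.3f} s")
```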