Overview
This document contains the results of testing Data Import for MARC Bibliographic records in the Ramsons release (ECS environment).
Ticket:
Summary
All data import jobs finished successfully, without errors.
Durations of data imports for creates and updates are mostly the same as in the Quesnelia release.
The PTF - Updates Success - 2 profile (based on the rcp1 profile PTF - Updates Success - 6) was created for the rcon Ramsons release on tenant cs00000int_0001.
DI duration grows in proportion to the number of records imported.
No memory leaks are suspected for the DI modules.
Service CPU utilization, service memory utilization, and DB CPU utilization show the same trends and values as in the Quesnelia release.
Recommendations & Jiras
mod-search deadlocks ticket: MSEARCH-932
Results
Test # | Data Import test | Profile | Duration: Ramsons (rcon) | Duration: Quesnelia (qcon) | Duration: Quesnelia (qcp1) | Results |
---|---|---|---|---|---|---|
1 | 10K MARC BIB Create | PTF - Create 2 | 5 min 10 sec | 4 min 14 sec | 6 min | Completed |
2 | 25K MARC BIB Create | PTF - Create 2 | 10 min 30 sec | 9 min 41 sec | 13 min 41 sec | Completed |
3 | 50K MARC BIB Create | PTF - Create 2 | 15 min 43 sec | 18 min 18 sec | 21 min 59 sec | Completed |
4 | 100K MARC BIB Create | PTF - Create 2 | 31 min 51 sec | 38 min 36 sec | 40 min 16 sec | Completed |
5 | 500K MARC BIB Create | PTF - Create 2 | 2 hr 37 min | 3 hr 30 min | 3 hr 27 min | Completed |
6 | 10K MARC BIB Update | PTF - Updates Success - 6 | 7 min 10 sec | 5 min 59 sec | 10 min 27 sec | Completed |
7 | 25K MARC BIB Update | PTF - Updates Success - 6 | 19 min 3 sec | 19 min 52 sec | 23 min 16 sec | Completed |
8 | 50K MARC BIB Update | PTF - Updates Success - 6 | 38 min 53 sec | 37 min 53 sec | 40 min 52 sec | Completed |
9 | 100K MARC BIB Update | PTF - Updates Success - 6 | 1 hr 23 min | 1 hr 14 min | 1 hr 2 min | Completed |
10 | 500K MARC BIB Update | PTF - Updates Success - 6 | 6 hr 39 min | 5 hr 31 min | | Completed |
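The per-record throughput implied by the table supports the observation that DI duration grows with the number of records imported. The sketch below recomputes it for the Ramsons (rcon) create runs; it is only an illustrative calculation over the durations shown above, not part of the PTF test tooling.

```python
# Rough throughput check for the Ramsons (rcon) create runs in the table above.
# Durations are copied from the table; this is an illustrative calculation only.

create_runs = {
    10_000: (0, 5, 10),    # (hours, minutes, seconds) for 10K MARC BIB Create
    25_000: (0, 10, 30),
    50_000: (0, 15, 43),
    100_000: (0, 31, 51),
    500_000: (2, 37, 0),
}

for records, (h, m, s) in create_runs.items():
    seconds = h * 3600 + m * 60 + s
    print(f"{records:>7} records: {seconds:>6} s, ~{records / seconds:.1f} records/sec")
```

For the larger runs (50K and up) the rate stays around 52-53 records per second, so duration scales roughly linearly with file size, while the smaller runs show proportionally more fixed overhead.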
Memory Utilization
Memory usage for both sets of tests (creates and updates) showed a stable trend. Memory of all modules returned to a normal state after the tests finished. No memory leaks are suspected.
...
Memory usage for the set of MARC BIB updates
...
CPU Utilization
...
CPU usage is stable for all modules involved in creates and updates. The most heavily used module is mod-inventory (10%).
RDS CPU Utilization
...
RDS Metrics
Database CPU was stable for the 10K, 25K, 50K, 100K, and 500K record tests.
...
500K records: as expected, DB CPU reached 100% and stayed at that level for the duration of the test.
DB metrics for create tests
...
Note: during the 10K and 25K create tests, mod-orders was triggered. It was disabled for the 50K, 100K, and 500K tests. Results were not affected.
...
Note: mod-search queries are not among the top 10 queries here, which indicates that the mod-search runtime indexing issue has been fixed.
...
DB metrics for update tests
...
Open Search service
OpenSearch CPU utilization did not exceed 30% on either the data or the master nodes, showing a stable trend.
...
CPU usage data nodes
...
CPU usage master node
...
MSK service
The MSK service showed a stable trend. Maximum CPU usage during the tests was about 60% on one of the brokers.
Disk usage on all brokers did not exceed 10% (300 GiB of EBS storage is allocated per broker).
...
Additional information from module and database logs
Deadlocks were observed on the DB side during the create and update data imports. These deadlocks do not affect the functionality of DI itself or of runtime indexing, as mod-search handles deadlocks on the back-end side. (The deadlocks occur during runtime indexing, when mod-search works with the DB.) Ticket created: MSEARCH-932.
...
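As a side note, deadlock occurrences like those described above can also be observed directly on the PostgreSQL side through the pg_stat_database statistics view, independently of mod-search's own handling. The snippet below is a minimal sketch of such a check; the connection parameters are placeholders, and it is not part of the PTF tooling.

```python
# Minimal sketch: read the cumulative deadlock counter per database from
# PostgreSQL's pg_stat_database view. Connection parameters are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="rcon-db.example.com",  # hypothetical RDS endpoint
    dbname="folio",
    user="folio_admin",
    password="********",
)

with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT datname, deadlocks, xact_commit "
        "FROM pg_stat_database "
        "ORDER BY deadlocks DESC;"
    )
    for datname, deadlocks, commits in cur.fetchall():
        print(f"{datname}: {deadlocks} deadlocks, {commits} commits")

conn.close()
```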
Errors
No critical errors were observed during data import (creates and updates).
The only issue was observed during the 500K create import: 8 records failed to be created due to corrupted data in the file.
Appendix
Infrastructure
PTF environment: rcon
11 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
db.r6.xlarge database instances (writer)
MSK fse-test
4 kafka.m7g.xlarge brokers in 2 zones
Apache Kafka version 3.7.x (KRaft mode)
EBS storage volume per broker 300 GiB
auto.create.topics.enable=true
log.retention.minutes=480
default.replication.factor=3
Cluster Resources - rcon-pvt
...
Record type | Number of records |
---|---|
Instances | 1 163 924 |
Holdings | 1 348 036 |
Items | 2 091 901 |
Methodology/Approach
Pre-generated files were used for the DI Create job profile:
10K, 25K, 50K, 100K, and 500K record files.
Run DI Create on a single tenant (cs00000int_0001), one file at a time with a delay between files, using the PTF - Create 2 profile (see the sketch at the end of this section).
Prepare files for DI Update with the Data Export app, using the previously imported records.
Run DI Update on a single tenant (cs00000int_0001), one file at a time with a delay between the prepared files, using the PTF - Updates Success - 2 profile.
...
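The sequential run-with-delay flow described in the methodology can be summarized with the sketch below. The file names, delay value, and the start_di_job/wait_for_completion helpers are hypothetical placeholders (the report does not include the actual tooling); only the one-file-at-a-time sequencing with a delay between jobs is taken from the steps above.

```python
# Sketch of the "one file at a time, with a delay" run order described above.
# start_di_job() and wait_for_completion() are placeholder stubs that only log;
# in the real tooling they would call the FOLIO data import APIs.
import time

FILES = ["10K.mrc", "25K.mrc", "50K.mrc", "100K.mrc", "500K.mrc"]  # hypothetical file names
TENANT = "cs00000int_0001"
PROFILE = "PTF - Create 2"
DELAY_BETWEEN_JOBS_SEC = 600  # assumed delay; the report does not state the value


def start_di_job(tenant: str, profile: str, file_name: str) -> str:
    """Placeholder: upload the file and start a DI job, returning a job id."""
    print(f"Starting DI job on {tenant} with profile '{profile}' for {file_name}")
    return f"job-{file_name}"


def wait_for_completion(tenant: str, job_id: str) -> None:
    """Placeholder: poll the job execution status until the job finishes."""
    print(f"Waiting for {job_id} on {tenant} to complete")


for file_name in FILES:
    job_id = start_di_job(TENANT, PROFILE, file_name)
    wait_for_completion(TENANT, job_id)  # jobs run strictly one by one
    time.sleep(DELAY_BETWEEN_JOBS_SEC)   # delay before the next file
```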