Test status: PASSED
Overview
Regression testing of the export of deleted MARC authority records via the API. The test measures the performance of an export operation with 2K (paged) records for all 100K/300K deleted records; a minimal timing sketch is included at the end of this section.
ECS environment with PTF data set
Classic PTF configuration with no additional improvements.
The purpose of this testing is to compare the results of the Ramsons release with the previous Quesnelia release and to check for improvements and possible issues/degradation.
Expected export duration: under a minute.
Jiras/links:
Quesnelia release ticket: PERF-897. Report
Related improvement task: MDEXP-769
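The reported durations are the elapsed times of export operations triggered via the API. A minimal sketch of how such a measurement could be scripted is shown below; the gateway URL, tenant, token, and endpoint path are placeholder assumptions, not the exact calls used in this test.

```python
# Minimal sketch of timing a deleted MARC authority export triggered via the API.
# All values below (gateway URL, tenant, token, endpoint path) are placeholders;
# the actual test may have used different scripts/endpoints.
import time
import requests

BASE_URL = "https://okapi.example.org"      # hypothetical gateway URL
HEADERS = {
    "x-okapi-tenant": "tenant",             # hypothetical tenant id
    "x-okapi-token": "<token>",             # hypothetical auth token
    "Content-Type": "application/json",
}

start = time.monotonic()
# Hypothetical endpoint that starts the export of deleted authority records.
response = requests.post(f"{BASE_URL}/data-export/export-deleted", headers=HEADERS, json={})
response.raise_for_status()
# ...poll the export job execution status here until it reports completion...
duration = time.monotonic() - start
print(f"Export duration: {duration:.3f} s")
```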
Summary
[ A bulleted list of the most important and relevant observations from the test results. What are the most important things readers need to know about this testing effort? Some suggestions:
Comparison to previous test or release of response times or API durations
Any notable changes
Particular response time or durations
Service memory and/or CPU utilization
RDS memory and/or CPU utilization
Other interesting observations
The summary points should address the goals stated in the Overview: did the test achieve the goals laid out? Which goals were not met, and why? Were SLAs met or not?
]
Recommendations & Jiras (Optional)
[ If there are recommendations for the developers or operations team, or anything worth calling out, list them here.
Configuration options
Memory/CPU settings
Environment variable settings
Also include any Jiras created for follow-up work]
Test Runs/Results
[Table of tests with short descriptions. If additional tests were run for any reason, include a note column to explain why]
Test # | Test Conditions | Duration | Load generator size (recommended) | Load generator Memory (GiB) (recommended) |
---|---|---|---|---|
1 | 100K | 8s 652 ms | t3.medium | 3 |
2 | 100K (rerun) | 8s 440 ms | t3.medium | 3 |
3 | 100K (10 times) | 7s 860 ms (avg) | | |
4 | 100K (10 times) | 7s 591 ms (avg) | | |
5 | 300K | 29s 567 ms | | |
6 | 300K | 29s 989 ms | | |
7 | 300K (10 times) | 29s 579 ms (avg) | | |
8 | 300K (10 times) | 28s 393 ms (avg) | | |
Comparisons
[Part to compare test data to previous tests or releases. It's important to know if performance improves or degrades]
[Attach a link to the report from which the comparison data was extracted]
Test | Ramsons Duration | Ramsons GET_authority-storage/authorities response time (ms) | Quesnelia Duration | Quesnelia GET_authority-storage/authorities response time (ms) |
---|---|---|---|---|
100K | 8s 652 ms | 178 ms | 13s 317 ms | 262 ms |
300K | 29s 989 ms | 216 ms | 29s 109 ms | 288 ms |
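For quick reference, the relative change between releases can be computed directly from the values in the table above; positive percentages mean Ramsons is faster (a simple sketch, values copied from the table):

```python
# Relative change between releases, computed from the comparison table above.
# Positive values mean Ramsons is faster than Quesnelia.

def change_pct(quesnelia: float, ramsons: float) -> float:
    """Percent improvement relative to the Quesnelia value."""
    return (quesnelia - ramsons) / quesnelia * 100

print(f"100K duration:          {change_pct(13.317, 8.652):+.1f}%")   # ~ +35.0%
print(f"300K duration:          {change_pct(29.109, 29.989):+.1f}%")  # ~ -3.0%
print(f"100K GET response time: {change_pct(262, 178):+.1f}%")        # ~ +32.1%
print(f"300K GET response time: {change_pct(288, 216):+.1f}%")        # ~ +25.0%
```

In other words, the 100K export is roughly a third faster in Ramsons, the 300K duration is essentially unchanged (about 3% slower), and the GET_authority-storage/authorities response times improved in both cases.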
Memory Utilization
[Description of notable observations of memory utilization with screenshots (of all involved modules) and tables]
CPU Utilization
[Description of notable observations of module CPU utilization with screenshots (of all involved modules) and tables. Annotate graphs to show when a specific test started or ended, and show only the modules that are relevant to the test on the graphs]
RDS CPU Utilization
[Description of notable observations of reader and writer instance CPU utilization with screenshots and tables, RDS database connections, and other database metrics]
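One possible way to pull the reader/writer CPU numbers for the test window is to query CloudWatch directly; the sketch below is only illustrative, and the region, DB instance identifier, and time window are placeholders rather than the actual values used.

```python
# Sketch of fetching RDS CPUUtilization for a test window from CloudWatch.
# The region, DB instance identifier, and time window below are placeholders.
from datetime import datetime, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "db-writer"}],  # hypothetical identifier
    StartTime=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),  # test start (placeholder)
    EndTime=datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc),    # test end (placeholder)
    Period=60,
    Statistics=["Average", "Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"avg={point['Average']:.1f}%", f"max={point['Maximum']:.1f}%")
```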
Additional information from module and database logs (Optional)
[ Although looking at the logs is optional, it is always recommended to check them for errors, exceptions, or warnings. If there were any, create Jiras for the modules that generated the warnings/errors/exceptions]
Discussion (Optional)
[ This section gives more space to elaborate on any observations and results. See Perform Lookups By Concatenating UUIDs (Goldenrod)#Discussions for an example. Anything that was discussed at length in the DSUs is worth including here]
Errors
This section should detail any errors encountered during the testing process, their impact on testing outcomes, and the steps taken to address these issues.
Appendix
Infrastructure
[ List the environment's hardware and software settings. For modules that involve Kafka/MSK, list the Kafka settings as well. For modules that involve OpenSearch, list those settings, too]
PTF environment: ncp3 [environment name]
9 m6i.2xlarge EC2 instances located in US East (N. Virginia), us-east-1 [number of ECS instances, instance type, region]
2 db.r6.xlarge database instances: one reader and one writer [database instances, type, size, main parameters]
MSK ptf-kakfa-3 [Kafka configuration]
4 m5.2xlarge brokers in 2 zones
Apache Kafka version 2.8.0
EBS storage volume per broker 300 GiB
auto.create.topics.enable=true
log.retention.minutes=480
default.replication.factor=3
Module memory and CPU parameters [table of service properties; will be generated with a script soon]
Use the fse-get-ecs-cluster-services-info Jenkins job to generate the table of service configurations.
Modules | Version | Task Definition | Running Tasks | CPU | Memory | MemoryReservation | MaxMetaspaceSize | Xmx |
---|---|---|---|---|---|---|---|---|
mod-inventory | 19.0.1 | 1 | 2 | 1024 | 2880 | 2592 | 512m | 1814m |
okapi | 4.14.7 | 1-2 | 3 | 1024 | 1684 (1512 in MG) | 1440 (1360 in MG) | 512m | 922m |
MG - Morning Glory release
Front End: [ front end app versions (optional)]
Item Check-in (folio_checkin-7.2.0)
Item Check-out (folio_checkout-8.2.0)
[Dataset size is important for testing. What was the size of the dataset? Include the sizes of one or more related tables]
Methodology/Approach
[ To make the test reproducible, list the high-level methodology that was used to carry out the tests. This is important for complex tests that involve multiple workflows.
Preparation Steps: Provide a comprehensive overview of the preparation process preceding the test. This includes setting up the test scripts, configuring relevant parameters, and ensuring all necessary tools and resources are in place.
Data preparation scripts. In the context of performance testing, data preparation is a critical step to ensure that the testing environment accurately reflects real-world usage patterns and can handle the intended load efficiently. To facilitate this process, specific scripts are used to populate the test database with the necessary data, simulate user transactions, or configure the environment appropriately. Add links to the needed scripts on GitHub and write a short description of how to use/run them.
Test Configuration: Specify the exact configurations utilized during the test execution. Duration, number of virtual users, ramp-up period etc.
It's important to inform readers of how the tests were performed so that they can comment on any flaw in the test approach or that they can try to reproduce the test results themselves. For example:
Start CICO test first
Run a Data Import job after waiting for 10 minutes
Run an eHoldings job after another 10 minutes
On another tenant run another DI job after 30 minutes in
The steps don't need to be very specific because the details are usually contained in the participating workflow's README files (on GitHub). However, anything worth calling out that was not mentioned elsewhere should be mentioned here.
Metric Collection Approach: Describe the methodology adopted to collect and interpret metrics during testing. Highlight the tools employed for data collection, the SQL queries used to get data, or other approaches (e.g., getting metrics from JMeter JTL reports) that were used for specific PTF tests; see the sketch after this section.
Also include the approach taken to obtain the final results. For example, document whether the results were obtained by zooming into a portion of the graphs in Grafana (which portion, and why?) and how the numbers were calculated, if not obvious.]
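As an example of the JTL-based approach mentioned above, the response times for a single request label can be summarized from a JMeter results file with a short script; the file name is a placeholder, and the label is assumed to match the one reported in the Comparisons table.

```python
# Sketch of summarizing response times from a JMeter JTL (CSV) results file.
# The file name and request label below are assumptions for illustration.
import csv
from statistics import mean

LABEL = "GET_authority-storage/authorities"   # label as it appears in this report
elapsed_ms = []
with open("results.jtl", newline="") as jtl:  # hypothetical results file
    for row in csv.DictReader(jtl):
        if row["label"] == LABEL and row["success"] == "true":
            elapsed_ms.append(int(row["elapsed"]))

if elapsed_ms:
    elapsed_ms.sort()
    p95 = elapsed_ms[max(0, int(len(elapsed_ms) * 0.95) - 1)]
    print(f"{LABEL}: avg={mean(elapsed_ms):.0f} ms, p95={p95} ms, max={max(elapsed_ms)} ms")
```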
Additional Screenshots of graphs or charts
[ Include additional screenshots of graphs from the CloudWatch and Grafana dashboards for completeness' sake. Include any raw data that includes the timestamps of tests and any screenshots/charts/graphs. These data may be separate files, or one Miro board or one Sheet/Doc that has everything in it. Raw data is important to consult for additional insights if the report omits them initially.]
Test Artifacts
Attach the test artifacts, excluding any sensitive data. These artifacts are deviations from the main files that were checked into GitHub but are relevant for this test.