PTF - Performance testing of CPU=0 for Services (QCP1)
Overview
- In this report, PTF tries to reproduce a bug where one of the modules fails with a 500 response code, possibly due to resource starvation. The theory was that a few high-usage modules were placed on the same instance with the CPU=0 parameter, leading to resource contention and failure in one of the modules. Setting CPU=0 does not set any limit or declare any required CPU resources at the services' startup, so theoretically a service may use as much CPU as it needs. But also theoretically, ECS should do a fair job of limiting the CPU usage of a running service when other services are running at the same time. The experiments performed in this ticket are designed to see whether ECS does its job, that is, fairly regulates CPU utilization of all modules, and if not, whether that leads to resource starvation and 500 errors. (A sketch of what CPU=0 looks like at the container-definition level follows the ticket reference below.)
PERF-1010 - Retest DI + CICO with CPU=0 (Closed)
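For reference, the snippet below is a minimal, illustrative sketch of what CPU=0 means when registering an ECS task definition with boto3 for the EC2 launch type. The family name, image, and memory values are placeholders, not the actual QCP1 task definitions.

```python
"""Illustrative sketch: an ECS container definition with cpu=0 (EC2 launch type)."""
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.register_task_definition(
    family="mod-inventory",                  # hypothetical family name
    requiresCompatibilities=["EC2"],
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "mod-inventory",
            "image": "example.registry/mod-inventory:latest",  # placeholder image
            # cpu=0 reserves no CPU units for the container, so it may use any
            # spare CPU on the instance; relative CPU shares only come into play
            # when several containers compete for CPU at the same time.
            "cpu": 0,
            "memoryReservation": 1024,       # soft memory limit in MiB (illustrative)
            "essential": True,
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```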
Summary
- In Test №4, the following services were grouped per instance:
  - Instance 1: mod-circulation, mod-source-record-storage (2 tasks), okapi tasks
  - Instance 2: mod-circulation-storage (2 tasks), mod-circulation tasks
  - Instance 3: mod-source-record-manager, nginx-okapi, mod-inventory-storage, okapi, mod-inventory, mod-authorization tasks
  - Instance 4: mod-authorization, okapi, mod-inventory, mod-source-record-manager, pub-okapi, nginx-okapi, mod-inventory-storage tasks
- During the tests, we observed that when Data Import began without CICO (Check-In/Check-Out), some modules, such as mod-inventory, consumed a significant amount of CPU. However, when CICO started concurrently, mod-inventory's CPU usage decreased. Once CICO was completed, mod-inventory resumed its higher CPU usage. This means that ECS regulates the CPU utilization of the services and does not allow any single service to use up all CPU resources on the EC2 instance. No 500 errors were observed in any test, so the cause of the HTTP 500 bug is not CPU starvation, and certainly not the CPU=0 setting. (A sketch for cross-checking task placement and per-service CPU utilization is shown below.)
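The placement and CPU observations above come from the environment's monitoring dashboards; the sketch below shows one way they could be cross-checked directly from the AWS APIs. The cluster and service names are placeholders, and the standard AWS/ECS CloudWatch metric is assumed (the report's dashboards may use Container Insights instead).

```python
"""Sketch: group running ECS tasks by EC2 instance and pull per-service CPU utilization."""
from datetime import datetime, timedelta, timezone

import boto3

CLUSTER = "qcp1"  # placeholder cluster name
SERVICES = ["mod-inventory", "mod-circulation", "mod-source-record-manager"]

ecs = boto3.client("ecs", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

# 1. Group running tasks by EC2 instance (the "Tasks placement" view).
task_arns = ecs.list_tasks(cluster=CLUSTER)["taskArns"]
tasks = ecs.describe_tasks(cluster=CLUSTER, tasks=task_arns)["tasks"] if task_arns else []
placement = {}
for task in tasks:
    ci = ecs.describe_container_instances(
        cluster=CLUSTER, containerInstances=[task["containerInstanceArn"]]
    )["containerInstances"][0]
    placement.setdefault(ci["ec2InstanceId"], []).append(task["group"])
for instance_id, groups in placement.items():
    print(instance_id, sorted(groups))

# 2. Pull average service CPU utilization around the DI/CICO window.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=2)
for service in SERVICES:
    stats = cw.get_metric_statistics(
        Namespace="AWS/ECS",
        MetricName="CPUUtilization",
        Dimensions=[
            {"Name": "ClusterName", "Value": CLUSTER},
            {"Name": "ServiceName", "Value": service},
        ],
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
    print(service, [round(p["Average"], 1) for p in points])
```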
Test Runs
Test # | Description | Status |
---|---|---|
Test 1 | The QCP1 Environment configuration was applied. Check-In/Check-Out with 30 users for one tenant for 30 minutes and Data Import - Create with a 50k-record file were run. | Completed |
Test 2 | The QCP1 Environment configuration was applied. Check-In/Check-Out with 30 users for one tenant for 30 minutes and Data Import - Create with a 50k-record file were run. | Completed |
Test 3 | The QCP1 Environment configuration was applied. Check-In/Check-Out with 30 users for one tenant for 30 minutes and Data Import - Create with a 50k-record file were run. | Completed |
Test 4 | The QCP1 Environment configuration was applied. Check-In/Check-Out with 30 users for one tenant for 30 minutes and Data Import - Create with a 50k-record file were run. | Completed |
Test Results
This table contains response times (in milliseconds) for the Check-In and Check-Out Controller requests and durations for the Data Import tests.
Test | Check-In Error % | Check-In Average, ms | Check-In 95th pct, ms | Check-Out Error % | Check-Out Average, ms | Check-Out 95th pct, ms | Data Import Error % | Data Import Duration |
---|---|---|---|---|---|---|---|---|
Test №1 | 1.01% | 1868 | 2388 | 1.46% | 3250 | 4320 | 0% | 0:50:25 |
Test №2 | 0.87% | 1936 | 2308 | 1.24% | 3347 | 3978 | 1 record failed | 0:49:54 |
Test №3 | 0% | 1614 | 2044 | 0.03% | 2819 | 3551 | 0% | 0:46:04 |
Test №4 | 0% | 2524 | 3413 | 0.06% | 4548 | 6088 | 0% | 0:45:46 |
Test №1
The Baseline QCP1 Environment configuration was applied, CPU=0 was set for all modules, and the ECS infrastructure was configured according to the requested steps.
Results: There were errors for the authorization service.
Service CPU Utilization
Here we can see that the mod-inventory module used 36% of the instance CPU power before CICO started, then 27% during CICO, and 24% after CICO.
Service Memory Utilization
Here we can see that mod-permissions memory utilization was 80%.
Kafka metrics
DB CPU Utilization
DB CPU averaged 99%, with ERW: Exporting Receiving Information.
DB Connections
Max number of DB connections was 900.
DB load
Top SQL-queries
Test №2
The Baseline QCP1 Environment configuration was applied, CPU=0 was set for all modules, and the ECS infrastructure was configured according to the requested steps.
Results: There were errors for the authorization service; the problem was in the ramp-up period.
Service CPU Utilization
Here we can see that the mod-inventory module used 35% of the instance CPU power before CICO started, then 24% during CICO, and 27% after CICO.
Service Memory Utilization
Here we do not see any sign of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 89%.
DB Connections
Max number of DB connections was 975.
DB load
Top SQL-queries
Test №3
The Baseline QCP1 Environment configuration was applied, CPU=0 was set for all modules, the ECS infrastructure was configured according to the requested steps, and the ramp-up period was set to 120 seconds to avoid errors with authorization.
Results: There were no errors for the authorization service, only one data-preparation error: "Item is already checked out".
Service CPU Utilization
Here we can see that the mod-inventory module used 34% of the instance CPU power before CICO started, then 26% during CICO, and 25% after CICO.
Service Memory Utilization
Here we do not see any sign of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU peaked at 90%.
DB Connections
Max number of DB connections was 2112.
DB load
Top SQL-queries
Test №4
The Baseline QCP1 Environment configuration was applied, CPU=0 was set for all modules, the ECS infrastructure was configured according to the requested steps, and the ramp-up period was set to 120 seconds to avoid errors with authorization.
Results: There were no errors for the authorization service, only two data-preparation errors: "Item is already checked out".
Instance CPU Utilization
Tasks placement
Service CPU Utilization
Here we can see that the mod-inventory module used 36% of the instance CPU power before CICO started, then 26% during CICO, and 44% after CICO.
Service Memory Utilization
Here we do not see any sign of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 81%.
DB Connections
Max number of DB connections was 925.
DB load
Top SQL-queries
Appendix
Infrastructure
PTF - Baseline QCP1 environment configuration (was changed during testing)
- 10 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 1 database instance, writer: db.r6g.xlarge (32 GB memory, 4 vCPUs)
- OpenSearch ptf-test
  - Data nodes
    - Instance type - r6g.2xlarge.search
    - Number of nodes - 4
    - Version: OpenSearch_2_7_R20240502
  - Dedicated master nodes
    - Instance type - r6g.large.search
    - Number of nodes - 3
- MSK fse-tenant (see the configuration read-back sketch after this list)
  - 2 kafka.m7g.xlarge brokers in 2 zones
  - Apache Kafka version 3.7.x
  - EBS storage volume per broker - 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
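The broker properties above can be read back from the MSK API to confirm the cluster configuration. The sketch below is a minimal, assumed approach using boto3 and does not reference any environment-specific configuration name.

```python
"""Sketch: print MSK configuration server properties to confirm broker settings."""
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

# List custom MSK configurations and print each one's server properties
# (expected to contain auto.create.topics.enable, log.retention.minutes,
#  default.replication.factor).
for config in kafka.list_configurations()["Configurations"]:
    revision = kafka.describe_configuration_revision(
        Arn=config["Arn"],
        Revision=config["LatestRevision"]["Revision"],
    )
    print(config["Name"])
    print(revision["ServerProperties"].decode("utf-8"))
```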
Methodology/Approach
CICO tests: scenarios were started for 30 users by a JMeter script from the load generator, and a Data Import of 50k records was run manually from the UI concurrently.
Baseline QCP1 Environment configuration and steps to configure ECS infrastructure:
Having one ASG (auto scaling group), change the instance type to r7g.large to accelerate resource starvation for modules (the baseline has r7g.2xlarge).
Manually unpause the environment (not using the FSE-Unpause folio job).
Unpause okapi, okapi-nginx, pub-okapi, mod-authtoken
Unpause mod-SRS, mod-SRM, mod-inventory, mod-inventory-storage, mod-circulation, mod-circulation-storage. Taking the order of unpausing into account, ECS will place them all on the instances that are available at that moment.
Unpause everything else with the FSE-Unpause folio job (placement does not matter here); see the ordered-unpausing sketch after these steps.
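The sketch below illustrates the ordered unpausing described above, assuming that "unpause" means scaling each ECS service's desired count from 0 back up (the actual FSE jobs may do this differently); the cluster name, service list, and desired counts are illustrative.

```python
"""Sketch: scale ECS services back up one by one so placement follows the unpause order."""
import boto3

CLUSTER = "qcp1"  # placeholder cluster name

# Order matters: earlier services land on whatever instances are free at that moment.
UNPAUSE_ORDER = [
    ("okapi", 2),
    ("nginx-okapi", 1),
    ("pub-okapi", 1),
    ("mod-authtoken", 1),
    ("mod-source-record-storage", 2),
    ("mod-source-record-manager", 2),
    ("mod-inventory", 2),
    ("mod-inventory-storage", 2),
    ("mod-circulation", 2),
    ("mod-circulation-storage", 2),
]

ecs = boto3.client("ecs", region_name="us-east-1")
waiter = ecs.get_waiter("services_stable")

for service, desired in UNPAUSE_ORDER:
    ecs.update_service(cluster=CLUSTER, service=service, desiredCount=desired)
    # Wait until the service reaches a steady state before starting the next one,
    # so that task placement reflects the unpause order.
    waiter.wait(cluster=CLUSTER, services=[service])
    print(f"{service} scaled to {desired}")
```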
- Test 1: The Baseline QCP1 Environment configuration was applied, CPU=0 was set for all modules, and the ECS infrastructure was configured according to the steps above. Check-In/Check-Out with 30 users for one tenant for 30 minutes and Data Import - Create with a 50k-record file were run.
- Test 2: Repeat Test 1.
- Test 3: Changed the ramp-up period from 30 to 120 seconds in the JMeter script and ran the test with the same configuration as Test 1 (see the ramp-up pacing sketch below).
- Test 4: Restarted the environment, configured the same ECS infrastructure again as for Test 1, and ran the tests.
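As a side note on the ramp-up change in Tests 3 and 4: JMeter starts threads evenly across the ramp-up period, so lengthening it from 30 to 120 seconds spreads out the burst of requests (including authorization calls) at test start. The small illustration below shows the resulting user start spacing; it is a generic illustration of JMeter's thread scheduling, not part of the PTF scripts.

```python
"""Illustration: how the ramp-up period spaces out virtual-user start times."""

def start_offsets(users: int, ramp_up_seconds: int) -> list[float]:
    """Return the approximate start time (in seconds) of each virtual user."""
    interval = ramp_up_seconds / users
    return [round(i * interval, 1) for i in range(users)]

# Tests 1-2: 30 users over 30 s -> a new user starts every 1 s.
print(start_offsets(30, 30)[:5])    # [0.0, 1.0, 2.0, 3.0, 4.0]
# Tests 3-4: 30 users over 120 s -> a new user starts every 4 s, easing the
# initial load on the authorization path.
print(start_offsets(30, 120)[:5])   # [0.0, 4.0, 8.0, 12.0, 16.0]
```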