Overview
- In this report, PTF investigates the impact of setting CPU allocations to 0 units across all tasks within an AWS ECS cluster. The purpose of this study is to determine whether removing CPU constraints reveals the actual CPU usage of the tasks and to assess how this adjustment affects overall performance. By comparing key workflows across different environments, we aim to identify any potential changes in efficiency, throughput, or resource utilization that may result from setting CPU = 0. The findings from these tests will help inform best practices for resource allocation and performance optimization within ECS clusters.
- PERF-959
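For context, a CPU allocation in ECS is set per container in the task definition; 0 units means no CPU reservation, so the task competes for whatever CPU is free on the instance and its utilization metrics reflect actual usage rather than a fixed share. Below is a minimal sketch of how such a revision could be registered with boto3; the module name, image, and memory values are placeholders, not the actual PTF configuration.

```python
import boto3

# Sketch: register a task definition revision with the container-level CPU
# reservation set to 0 units. Names and values below are placeholders.
ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.register_task_definition(
    family="mod-example",                       # placeholder module name
    networkMode="bridge",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "mod-example",
            "image": "example/mod-example:latest",  # placeholder image
            "cpu": 0,        # 0 CPU units = no reservation, CPU is shared
            "memory": 2048,  # hard memory limit in MiB (placeholder)
            "essential": True,
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```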
Summary
- Significant performance degradation was observed when using a lower instance count for the environment or a smaller instance type (x2gd.large) with fewer resources.
- During the tests, the parameter CPU=0 improved performance for several workflows, so this will be investigated further in this ticket.
- No memory leaks were observed; memory consumption was stable during all of the tests.
- Tests had a 100% error rate for the AIE_TC: Create Invoices, AIE_TC: Invoices Approve, AIE_TC: Paying Invoices, TC: Receiving-an-Order-Line, Serials-Receiving-Workflow, and Unreceiving-a-Piece workflows because data was not regenerated.
Test Runs and Results
This table contains durations for all Workflows.
Workflow | Test 1 CPU=0 Average (ms) | Test 1 Errors | Test 2 CPU=0 Average (ms) | Test 2 Errors | Test 3 CPU=0 Average (ms) | Test 3 Errors | Test 4 CPU=1 Average (ms) | Test 4 Errors
---|---|---|---|---|---|---|---|---
AIE_TC: Create Invoices | 8350 | 100% | 13458 | 100% | 7421 | 100% | 8251 | 100% |
AIE_TC: Invoices Approve | 3180 | 100% | 5251 | 100% | 2960 | 100% | 3088 | 100% |
AIE_TC: Paying Invoices | 3289 | 100% | 4919 | 100% | 3060 | 100% | 3285 | 100% |
CICO_TC_Check-In Controller | 2323 | 0% | 3322 | 7% | 2228 | 0% | 2313 | 0% |
CICO_TC_Check-Out Controller | 4154 | 0% | 5700 | 10% | 3896 | 0% | 3985 | 0% |
CSI_TC:Share local instance | 13008 | 19% | 14781 | 14% | 12969 | 20% | 13022 | 19% |
DE_Exporting MARC Bib records custom workflow | 52983 | 0% | 379912 | 95% | 78759 | 0% | 100700 | 0% |
DE_Exporting MARC Bib records workflow | 42785 | 0% | 402026 | 98% | 73957 | 0% | 109794 | 0% |
EVA_TC: View Account | 803 | 3% | 962 | 3% | 700 | 1% | 832 | 3% |
ILR_TC: Create ILR | 1527 | 0% | 2215 | 4% | 1451 | 0% | 1518 | 0% |
MSF_TC: mod search by auth query | 672 | 0% | 1830 | 7% | 3096 | 4% | 1023 | 0% |
MSF_TC: mod search by boolean query | 165 | 0% | 485 | 2% | 737 | 1% | 212 | 0% |
MSF_TC: mod search by contributors | 398 | 0% | 1063 | 3% | 1664 | 2% | 604 | 0% |
MSF_TC: mod search by filter query | 286 | 0% | 713 | 2% | 1070 | 2% | 417 | 0% |
MSF_TC: mod search by keyword query | 284 | 0% | 658 | 1% | 1006 | 1% | 422 | 0% |
MSF_TC: mod search by subject query | 407 | 0% | 1112 | 1% | 1530 | 2% | 623 | 0% |
MSF_TC: mod search by title query | 1031 | 0% | 2449 | 1% | 2907 | 1% | 1758 | 0% |
OPIH_/oai/records | 6042 | 0% | 4649 | 100% | 3587 | 0% | 5448 | 0% |
POO_TC: Add Order Lines | 55334 | 0% | 99076 | 19% | 54440 | 0% | 55224 | 0% |
POO_TC: Approve Order | 42567 | 0% | 79022 | 12% | 42191 | 0% | 42907 | 0% |
POO_TC: Create Order | 31933 | 0% | 56468 | 13% | 31406 | 0% | 32166 | 0% |
RTAC_TC: edge-rtac | 4150 | 0% | 4726 | 0% | 4084 | 0% | 4099 | 0% |
SDIC_Single Record Import (Create) | 13777 | 19% | 20475 | 13% | 13773 | 20% | 13873 | 19% |
SDIU_Single Record Import (Update) | 19549 | 0% | 36639 | 16% | 19582 | 1% | 19568 | 0% |
TC: Receiving-an-Order-Line | 45895 | 100% | 83735 | 100% | 45888 | 100% | 46890 | 100% |
Serials-Receiving-Workflow | 48584 | 100% | 86692 | 100% | 47338 | 100% | 48019 | 100% |
Unreceiving-a-Piece | 8080 | 100% | 13729 | 100% | 8013 | 100% | 8297 | 100% |
ULR_TC: Users loan Renewal Transaction | 3189 | 0% | 4818 | 6% | 2970 | 0% | 3125 | 0% |
Comparison
The following graphs show the durations of all workflows compared across tests.
Average Case workflows: Part 1.
Average Case workflows: Part 2.
High Load workflows: Part 1.
High Load workflows: Part 2.
High Load workflows: Part 3.
Tests №1-3
The Baseline MCPT Environment configuration was applied, and CPU=0 was set for all modules.
Instance CPU Utilization
Service CPU Utilization
Here we can see that the nginx-okapi module used 20% CPU during Test 2 (high load case).
Service Memory Utilization
Here we can see that memory utilization of the mod-data-export-b module exceeded 113% of its reserved (unit) memory during Test 2 (high load case).
Kafka metrics
DB CPU Utilization
DB CPU averaged 99% during ERW: Exporting Receiving Information.
DB Connections
The maximum number of DB connections was 6000 for Test 1, 7500 for Test 2, and 6000 for Test 3.
DB load
Top SQL-queries
Test №4
The Baseline MCPT Environment configuration was applied, and CPU=1 was set for all modules.
Instance CPU Utilization
Service CPU Utilization
Here we can see that okapi used up to 100,000% of its reserved CPU, relative to the CPU=1 parameter set for the module.
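Readings far above 100% are expected here because ECS reports service CPU utilization relative to the CPU units reserved in the task definition, and the reservation is a single unit. A sketch of the arithmetic, assuming that reservation-relative formula:

```python
# Assumed formula: service CPU utilization is usage divided by the CPU units
# reserved in the task definition (1 vCPU = 1024 units), as a percentage.
def service_cpu_utilization(used_units: float, reserved_units: float) -> float:
    return used_units / reserved_units * 100.0

# With CPU=1 unit reserved, using ~1000 units (roughly one vCPU) reads as:
print(service_cpu_utilization(used_units=1000, reserved_units=1))  # 100000.0
```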
Service Memory Utilization
Here we see no sign of memory leaks in any module; memory shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU averaged 99% during ERW: Exporting Receiving Information.
DB Connections
Max number of DB connections was 5600.
DB load
Top SQL-queries
Resource utilization for Test №3
The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 6, and CPU=0 was set for all services.
Service CPU Utilization
Here we can see that mod-permissions used 20% of the absolute CPU power of the container instance.
Service Memory Utilization
Here we see no sign of memory leaks in any module; memory shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU peaked at 64%.
DB Connections
Max number of DB connections was 2040.
Resource utilization for Test №4
The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 8, and CPU=0 was set for all services.
Service CPU Utilization
Here we can see that okapi used 20% of the absolute CPU power of the container instance.
Service Memory Utilization
Here we see no sign of memory leaks in any module; memory shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 91%.
DB Connections
Max number of DB connections was 4840.
DB load
Top SQL-queries
Resource utilization for Test №5
The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 10, and CPU=0 was set for all services.
Service CPU Utilization
Here we can see that okapi used 36% of the absolute CPU power of the container instance.
Service Memory Utilization
Here we see no sign of memory leaks in any module; memory shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 42%.
DB Connections
Max number of DB connections was 3650.
DB load
Top SQL-queries
Resource utilization for Test №6
The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14 (only 12 were used), and CPU=2 was set for all services.
Service CPU Utilization
Here we can see that okapi used up to 46,000% of its reserved (unit) CPU.
Service Memory Utilization
Here we see no sign of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 98%.
DB Connections
Max number of DB connections was 5150.
DB load
Top SQL-queries
Resource utilization for Test №7
The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.
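Assuming the "one task per host" strategy maps to the ECS distinctInstance placement constraint, a minimal sketch of applying it to a service (cluster and service names are placeholders):

```python
import boto3

# Sketch: spread a service one-task-per-host via the distinctInstance
# placement constraint. Cluster and service names are placeholders.
ecs = boto3.client("ecs", region_name="us-east-1")

ecs.update_service(
    cluster="mcpt-cluster",                            # placeholder
    service="okapi",                                   # placeholder
    placementConstraints=[{"type": "distinctInstance"}],
)
```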
Service CPU Utilization
Here we can see that okapi used up to 44,000% of its reserved (unit) CPU.
Service Memory Utilization
Here we see no sign of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 98%.
DB Connections
Max number of DB connections was 5150.
DB load
Top SQL-queries
Resource utilization for Test №8
The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.
Service CPU Utilization
Here we can see that okapi used 38% of its reserved (unit) CPU.
Service Memory Utilization
Here we see no sign of memory leaks in any module; memory shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU peaked at 53%.
DB Connections
Max number of DB connections was 3842.
DB load
Top SQL-queries
Appendix
Infrastructure
PTF - Baseline MCPT environment configuration
- 14 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 1 database instance (writer): db.r6g.4xlarge, 128 GiB memory, 16 vCPUs
- OpenSearch ptf-test
  - Data nodes
    - Instance type - r6g.2xlarge.search
    - Number of nodes - 4
    - Version: OpenSearch_2_7_R20240502
  - Dedicated master nodes
    - Instance type - r6g.large.search
    - Number of nodes - 3
- MSK tenant
  - 4 m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
Methodology/Approach
MOBIUS Tests: scenarios were started by a JMeter script from the load generator. We had a 100% error rate for the AIE_TC: Create Invoices, AIE_TC: Invoices Approve, AIE_TC: Paying Invoices, TC: Receiving-an-Order-Line, Serials-Receiving-Workflow, and Unreceiving-a-Piece workflows because data was not regenerated.
Baseline MCPT Environment configuration, according to the tuned environment from the previous report:
- Task count 4 for the services mod-permissions, mod-search, mod-patron, mod-inventory, mod-inventory-storage, mod-circulation, mod-circulation-storage, mod-order, mod-order-storage, mod-invoice, and mod-invoice-storage; task count 6 for mod-users and mod-authtoken.
- Parameter srs.marcIndexers.delete.interval.seconds=86400 for mod-source-record-storage.
- Instance type: m6g.2xlarge. Instance count: 14.
- Database: r6g.4xlarge. Amazon OpenSearch Service ptf-test: r6g.2xlarge.search (4 nodes).
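As a reference for reproducing the baseline, a minimal sketch of applying the task counts above with the ECS UpdateService API; the cluster name is a placeholder:

```python
import boto3

# Sketch: apply the baseline task counts from the configuration above.
# The cluster name is a placeholder, not the real PTF cluster.
ecs = boto3.client("ecs", region_name="us-east-1")

BASELINE_TASK_COUNTS = {
    "mod-permissions": 4, "mod-search": 4, "mod-patron": 4,
    "mod-inventory": 4, "mod-inventory-storage": 4,
    "mod-circulation": 4, "mod-circulation-storage": 4,
    "mod-order": 4, "mod-order-storage": 4,
    "mod-invoice": 4, "mod-invoice-storage": 4,
    "mod-users": 6, "mod-authtoken": 6,
}

for service, count in BASELINE_TASK_COUNTS.items():
    ecs.update_service(cluster="mcpt-cluster", service=service, desiredCount=count)
```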
- Test 1: The Baseline MCPT Environment configuration was applied, and CPU=0 was set for all modules, Fixed Load (average case) MOBIUS test was run.
- Test 2: The Baseline MCPT Environment configuration was applied, and CPU=0 was set for all modules, Fixed Load (high load case) MOBIUS test was run.
- Test 3: The Baseline MCPT Environment configuration was applied, and CPU=0 was set for all modules, Fixed Load (average case) MOBIUS test was run (rerun Test 1).
- Test 4: The Baseline MCPT Environment configuration was applied, and CPU=1 was set for all modules, Fixed Load (average case) MOBIUS test was run.
- Test 5: The Baseline MCPT Environment configuration was applied, and CPU=2 was set for all modules, Data Import - Create with 25k and 100k records files tests were run.
- Test 6: The Baseline MCPT Environment configuration was applied, and CPU=2 was set for all modules; a Check In/Check Out test with 20 users for one tenant was run for 30 minutes.
- Test 7: The Baseline QCP1 Environment configuration was applied, and CPU=2 was set for all modules, Data Import - Create with 25k and 100k records files tests were run.
- Test 8: The Baseline QCP1 Environment configuration was applied, and CPU=2 was set for all modules; a Check In/Check Out test with 20 users for one tenant was run for 30 minutes.