Overview
- Currently, FSE's hosting of FOLIO consists of running ECS tasks (module Docker containers) on 10-14 EC2 m6g.2xlarge instances. We wanted to experiment with a different EC2 instance type, x2gd, and with different numbers of EC2 instances to see whether performance remains comparable to the baseline. We also wanted to understand the true CPU usage of the modules by setting CPU (units) to 0 in the modules' task definitions. To do this, we ran the MOBIUS (multi-workflow, multi-tenant) tests on the mcpt environment with different environment configurations and compared the test results against the baseline report. This report contains the results of these experiments.
- PERF-942
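For reference, setting CPU units to 0 in an ECS task definition means the task reserves no CPU and competes for whatever CPU is available on the container instance. A minimal, hypothetical container definition fragment (the module name and image below are placeholders, not the actual mcpt configuration) might look like:

```json
{
  "family": "mod-example",
  "containerDefinitions": [
    {
      "name": "mod-example",
      "image": "folioorg/mod-example:latest",
      "cpu": 0,
      "memory": 2048,
      "essential": true
    }
  ]
}
```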
Summary
- Significant performance degradation was observed when the environment ran with fewer instances or with the less powerful x2gd.large instance type.
- Setting CPU=0 improved performance for several workflows; this will be investigated further in a follow-up ticket.
- No memory leaks were observed; memory consumption was stable during all of the tests.
- The AIE_TC: Create Invoices, AIE_TC: Invoices Approve, AIE_TC: Paying Invoices, TC: Receiving-an-Order-Line, Serials-Receiving-Workflow, and Unreceiving-a-Piece workflows had a 100% error count because test data was not regenerated.
Test Runs and Results
This table contains durations for all Workflows.
Workflow | Test 1 (Baseline) Avg, ms | Errors | Test 2 (CPU=0) Avg, ms | Errors | Test 3 (x2gd.xlarge, CPU=0, 6 inst.) Avg, ms | Errors | Test 4 (x2gd.xlarge, CPU=0, 8 inst.) Avg, ms | Errors | Test 5 (x2gd.large, CPU=0, 10 inst.) Avg, ms | Errors | Test 6 (r6g.xlarge, CPU=2, 12(14) inst.) Avg, ms | Errors | Test 7 (r6g.xlarge, CPU=2, 14 inst., one task per host) Avg, ms | Errors | Test 8 (x2gd.large, CPU=2, 14 inst., one task per host) Avg, ms | Errors
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
AIE_TC: Create Invoices | 7682 | 100% | 8106 | 100% | 50468 | 100% | 15554 | 100% | 92805 | 100% | 10191 | 100% | 9716 | 100% | 91611 | 100% |
AIE_TC: Invoices Approve | 2818 | 100% | 3016 | 100% | 24781 | 100% | 7720 | 100% | 51554 | 100% | 4512 | 100% | 4210 | 100% | 51626 | 100% |
AIE_TC: Paying Invoices | 2882 | 100% | 3164 | 100% | 32994 | 100% | 9788 | 100% | 64642 | 100% | 5430 | 100% | 4902 | 100% | 65029 | 100% |
CICO_TC_Check-In Controller | 2000 | 0% | 2214 | 0% | 19075 | 0% | 6038 | 0% | 32909 | 0% | 3659 | 0% | 3194 | 0% | 30831 | 0% |
CICO_TC_Check-Out Controller | 3548 | 0% | 3799 | 0% | 29224 | 1% | 10560 | 0% | 57347 | 0% | 6439 | 0% | 5595 | 0% | 55503 | 0% |
CSI_TC:Share local instance | 13050 | 19% | 13073 | 19% | 14265 | 16% | 16192 | 0% | 17361 | 2% | 13782 | 14% | 13339 | 17% | 17460 | 1% |
DE_Exporting MARC Bib records custom workflow | 84707 | 0% | 59099 | 0% | 382491 | 56% | 116869 | 0% | 567389 | 83% | 127885 | 0% | 88541 | 0% | 500759 | 71% |
DE_Exporting MARC Bib records workflow | 74126 | 0% | 43093 | 0% | 417982 | 57% | 97656 | 4% | 623610 | 91% | 82126 | 0% | 83046 | 0% | 457212 | 58% |
EVA_TC: View Account | 519 | 0% | 565 | 0% | 16034 | 19% | 1736 | 0% | 40619 | 2% | 1138 | 1% | 972 | 1% | 37669 | 1% |
ILR_TC: Create ILR | 1422 | 0% | 1499 | 0% | 11607 | 1% | 3882 | 0% | 23676 | 0% | 2414 | 0% | 1997 | 0% | 22335 | 0% |
MSF_TC: mod search by auth query | 755 | 2% | 668 | 0% | 1888 | 0% | 1500 | 0% | 6180 | 0% | 1080 | 0% | 910 | 0% | 5894 | 0% |
MSF_TC: mod search by boolean query | 205 | 1% | 159 | 0% | 594 | 0% | 427 | 0% | 2022 | 0% | 258 | 0% | 238 | 0% | 1871 | 0% |
MSF_TC: mod search by contributors | 440 | 1% | 394 | 0% | 856 | 0% | 839 | 0% | 3454 | 0% | 616 | 0% | 530 | 0% | 3300 | 0% |
MSF_TC: mod search by filter query | 302 | 0% | 284 | 0% | 512 | 0% | 539 | 0% | 2018 | 0% | 416 | 0% | 362 | 0% | 1924 | 0% |
MSF_TC: mod search by keyword query | 310 | 0% | 280 | 0% | 521 | 0% | 537 | 0% | 2013 | 0% | 416 | 0% | 361 | 0% | 1912 | 0% |
MSF_TC: mod search by subject query | 448 | 0% | 406 | 0% | 797 | 0% | 778 | 0% | 3076 | 0% | 620 | 0% | 519 | 0% | 2917 | 0% |
MSF_TC: mod search by title query | 1090 | 1% | 1025 | 0% | 1435 | 0% | 1387 | 0% | 3687 | 0% | 1432 | 0% | 1149 | 0% | 3543 | 0% |
OPIH_/oai/records | 5330 | 0% | 5404 | 0% | 9330 | 0% | 7677 | 0% | 6881 | 0% | 3327 | 0% | 6990 | 0% | 8335 | 0% |
POO_TC: Add Order Lines | 52142 | 0% | 54193 | 0% | 282192 | 0% | 79206 | 0% | 412004 | 0% | 57735 | 0% | 57749 | 0% | 399917 | 0% |
POO_TC: Approve Order | 40656 | 0% | 42523 | 0% | 211747 | 0% | 56446 | 0% | 265167 | 0% | 43930 | 0% | 43834 | 0% | 275935 | 0% |
POO_TC: Create Order | 30734 | 0% | 31652 | 0% | 107318 | 0% | 42940 | 0% | 49234 | 0% | 32121 | 0% | 43834 | 0% | 175643 | 0% |
RTAC_TC: edge-rtac | 3735 | 0% | 3828 | 0% | 16295 | 0% | 1387 | 0% | 57195 | 0% | 4205 | 0% | 3950 | 0% | 55595 | 0% |
SDIC_Single Record Import (Create) | 13279 | 19% | 13894 | 19% | 45024 | 16% | 17305 | 0% | 79053 | 2% | 14650 | 14% | 14531 | 17% | 74201 | 1% |
SDIU_Single Record Import (Update) | 18466 | 0% | 19432 | 0% | 218270 | 100% | 28207 | 0% | 118399 | 0% | 21736 | 0% | 20965 | 0% | 115777 | 0% |
TC: Receiving-an-Order-Line | 43765 | 100% | 46104 | 100% | 218270 | 100% | 65242 | 100% | 325230 | 100% | 49024 | 100% | 48538 | 100% | 322267 | 100% |
Serials-Receiving-Workflow | 45694 | 100% | 47336 | 100% | 198116 | 100% | 68545 | 100% | 302203 | 100% | 49873 | 100% | 50028 | 100% | 295508 | 100% |
Unreceiving-a-Piece | 7823 | 100% | 7757 | 100% | 40059 | 100% | 13335 | 100% | 64155 | 100% | 9018 | 100% | 8717 | 100% | 60295 | 100% |
ULR_TC: Users loan Renewal Transaction | 2810 | 0% | 3078 | 0% | 22602 | 1% | 7829 | 0% | 38383 | 0% | 4673 | 0% | 4030 | 0% | 36143 | 0% |
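To make the table easier to interpret, the slowdown factor of each configuration relative to the baseline can be computed directly from the averages. A quick sketch, using a few values copied from the table above for the CICO_TC_Check-In Controller workflow:

```python
# Average durations (ms) for CICO_TC_Check-In Controller, copied from the
# results table above.
baseline = 2000  # Test 1: m6g.2xlarge, 14 instances

tests = {
    "Test 2 (CPU=0)": 2214,
    "Test 3 (x2gd.xlarge, 6 inst.)": 19075,
    "Test 5 (x2gd.large, 10 inst.)": 32909,
    "Test 7 (r6g.xlarge, one task per host)": 3194,
}

# Slowdown factor relative to the baseline (1.0 means equal performance).
for name, avg in tests.items():
    print(f"{name}: {avg / baseline:.1f}x baseline")
```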
Comparison
This graph shows the total durations of all workflows compared between tests.
Test №1
Introduction: Baseline MCPT Environment configuration.
Objective: The goal of this test is to establish a baseline performance benchmark for the MCPT environment under standard conditions. This test will provide a reference to measure the impact of any subsequent changes, such as modifications in CPU allocation or other resource management strategies. The baseline will be used to assess the performance, responsiveness, and stability of the system with default configurations.
Service CPU Utilization
Here we can see that CPU utilization did not exceed 95% of the reserved CPU units for any of the modules.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU averaged 99% during the ERW: Exporting Receiving Information workflow.
DB Connections
The maximum number of DB connections was 5568.
Test №2
Introduction: The Baseline MCPT Environment configuration was applied, and CPU=0 was set for all modules.
Objective: The goal of this test is to evaluate the performance and behavior of the MCPT environment when CPU reservations are set to 0 for all modules. By not reserving any specific CPU resources, we aim to observe how the system dynamically allocates CPU resources based on availability and demand, and to assess any impacts on module performance, responsiveness, and overall system stability.
Results: Some workflows exhibited significant performance improvements, and the overall average duration for all workflows was better than in Baseline Test №1.
Service CPU Utilization
Here we can see that okapi used 13% of the absolute CPU power of the container instance.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU averaged 99% during the ERW: Exporting Receiving Information workflow.
DB Connections
Max number of DB connections was 5788.
DB load
Top SQL-queries
Test №3
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 6, and CPU=0 was set for all services.
Objective: The objective of this test is to assess the impact of using fewer, more memory-optimized instances (moving from m6g.2xlarge to x2gd.xlarge) on the performance of the MCPT environment. By reducing the number of instances while selecting an instance type with a higher memory-to-vCPU ratio, this test aims to observe how the system handles workloads under these conditions and whether overall efficiency and performance improve.
Results: This configuration led to a significant performance degradation, with performance being four times worse compared to the baseline test. The reduced number of instances in this setup was clearly insufficient to maintain the required performance levels.
Service CPU Utilization
Here we can see that mod-permissions used 20% of the absolute CPU power of the container instance.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 64% maximum.
DB Connections
Max number of DB connections was 2040.
Test №4
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 8, and CPU=0 was set for all services.
Results: Performance degraded compared to the baseline, though far less severely than with 6 instances in Test №3.
Service CPU Utilization
Here we can see that okapi used 20% of the absolute CPU power of the container instance.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 91%.
DB Connections
Max number of DB connections was 4840.
DB load
Top SQL-queries
Test №5
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 10, and CPU=0 was set for all services.
Results: This configuration showed the most severe performance degradation of all tests, indicating that the x2gd.large instances did not provide enough CPU resources for the workload.
Service CPU Utilization
Here we can see that okapi used 36% of the absolute CPU power of the container instance.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 42%.
DB Connections
Max number of DB connections was 3650.
DB load
Top SQL-queries
Test №6
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14 (12 were actually used), and CPU=2 was set for all services.
Results: Performance was close to the baseline, with most workflows running only moderately slower than in Test №1.
Service CPU Utilization
Here we can see that okapi used up to 46000% of its reserved CPU units. Since only 2 CPU units were reserved (CPU=2), utilization percentages far above 100% are expected.
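To put that percentage in perspective: ECS reports service CPU utilization relative to the task's reserved CPU units, and 1024 units correspond to one vCPU. A quick sanity check of what 46000% of 2 units means in absolute terms:

```python
# ECS reports service CPU utilization relative to the task's reserved CPU
# units, so with a tiny reservation very large percentages are expected.
reserved_units = 2          # CPU=2 in the task definition
utilization_pct = 46000     # peak utilization reported for okapi

used_units = reserved_units * utilization_pct / 100
vcpus = used_units / 1024   # 1024 CPU units == 1 vCPU on ECS
print(f"okapi used {used_units:.0f} CPU units, i.e. ~{vcpus:.2f} vCPU")
```

So the alarming-looking 46000% corresponds to under one vCPU of actual usage.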
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 98%.
DB Connections
Max number of DB connections was 5150.
DB load
Top SQL-queries
Test №7
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14, the placement strategy was updated to "one task per host", and CPU=2 was set for all services.
Results: This configuration came closest to the baseline among the alternative setups.
Service CPU Utilization
Here we can see that okapi used up to 44000% of its reserved CPU units (again relative to the 2 reserved units).
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 98%.
DB Connections
Max number of DB connections was 5150.
DB load
Top SQL-queries
Test №8
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 14, the placement strategy was updated to "one task per host", and CPU=2 was set for all services.
Results: Performance degraded severely, comparable to Test №5, confirming that x2gd.large instances are underpowered for this workload even with more instances and one-task-per-host placement.
Service CPU Utilization
Here we can see that okapi used 38% of the unit CPU power.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory consumption shows a stable trend.
Kafka metrics
DB CPU Utilization
DB CPU was 53% at maximum.
DB Connections
Max number of DB connections was 3842.
DB load
Top SQL-queries
Appendix
Infrastructure
PTF - Baseline MCPT environment configuration
- 14 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 1 database instance (writer): db.r6g.4xlarge, 128 GiB memory, 16 vCPUs
- OpenSearch ptf-test
- Data nodes
  - Instance type: r6g.2xlarge.search
  - Number of nodes: 4
- Version: OpenSearch_2_7_R20240502
- Dedicated master nodes
  - Instance type: r6g.large.search
  - Number of nodes: 3
- MSK tenant
  - 4 m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
Methodology/Approach
MOBIUS Tests: scenarios were started by a JMeter script from the load generator. We had a 100% error count for the AIE_TC: Create Invoices, AIE_TC: Invoices Approve, AIE_TC: Paying Invoices, TC: Receiving-an-Order-Line, Serials-Receiving-Workflow, and Unreceiving-a-Piece workflows because data was not regenerated.
- Test 1: Baseline MCPT Environment configuration according to the tuned environment from the previous report: task count 4 for the services mod-permissions, mod-search, mod-patron, mod-inventory, mod-inventory-storage, mod-circulation, mod-circulation-storage, mod-order, mod-order-storage, mod-invoice, and mod-invoice-storage; task count 6 for mod-users and mod-authtoken. Parameter srs.marcIndexers.delete.interval.seconds=86400 for mod-source-record-storage. Instance type: m6g.2xlarge. Instance count: 14. Database: db.r6g.4xlarge. Amazon OpenSearch Service ptf-test: r6g.2xlarge.search (4 nodes).
- Test 2: The Baseline MCPT Environment configuration was applied, and CPU=0 was set for all modules.
- Test 3: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 6, and CPU=0 was set for all services.
- Test 4: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 8, and CPU=0 was set for all services.
- Test 5: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 10, and CPU=0 was set for all services.
- Test 6: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14 but 12 were used, and CPU=2 was set for all services.
- Test 7: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.
- Test 8: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.
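The "one task per host" placement used in Tests 7 and 8 corresponds to the ECS distinctInstance placement constraint, which ensures that each task of a service runs on a separate container instance. A hypothetical service definition fragment (the service name and desired count below are placeholders):

```json
{
  "serviceName": "mod-example",
  "desiredCount": 4,
  "placementConstraints": [
    { "type": "distinctInstance" }
  ]
}
```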