...
Workflow | Test 1 Baseline: Average (ms) | Test 1: Errors | Test 2 CPU=0: Average (ms) | Test 2: Errors | Test 3 x2gd.xlarge, CPU=0, 6 instances: Average (ms) | Test 3: Errors | Test 4 x2gd.xlarge, CPU=0, 8 instances: Average (ms) | Test 4: Errors | Test 5 x2gd.large, CPU=0, 10 instances: Average (ms) | Test 5: Errors | Test 6 r6g.xlarge, CPU=2, 12 of 14 instances: Average (ms) | Test 6: Errors | Test 7 r6g.xlarge, CPU=2, 14 instances, one task per host: Average (ms) | Test 7: Errors | Test 8 x2gd.large, CPU=2, 14 instances, one task per host: Average (ms) | Test 8: Errors
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
AIE_TC: Create Invoices | 7682 | 100% | 8106 | 100% | 50468 | 100% | 15554 | 100% | 92805 | 100% | 10191 | 100% | 9716 | 100% | 91611 | 100% |
AIE_TC: Invoices Approve | 2818 | 100% | 3016 | 100% | 24781 | 100% | 7720 | 100% | 51554 | 100% | 4512 | 100% | 4210 | 100% | 51626 | 100% |
AIE_TC: Paying Invoices | 2882 | 100% | 3164 | 100% | 32994 | 100% | 9788 | 100% | 64642 | 100% | 5430 | 100% | 4902 | 100% | 65029 | 100% |
CICO_TC_Check-In Controller | 2000 | 0% | 2214 | 0% | 19075 | 0% | 6038 | 0% | 32909 | 0% | 3659 | 0% | 3194 | 0% | 30831 | 0% |
CICO_TC_Check-Out Controller | 3548 | 0% | 3799 | 0% | 29224 | 1% | 10560 | 0% | 57347 | 0% | 6439 | 0% | 5595 | 0% | 55503 | 0% |
CSI_TC:Share local instance | 13050 | 19% | 13073 | 19% | 14265 | 16% | 16192 | 0% | 17361 | 2% | 13782 | 14% | 13339 | 17% | 17460 | 1% |
DE_Exporting MARC Bib records custom workflow | 84707 | 0% | 59099 | 0% | 382491 | 56% | 116869 | 0% | 567389 | 83% | 127885 | 0% | 88541 | 0% | 500759 | 71% |
DE_Exporting MARC Bib records workflow | 74126 | 0% | 43093 | 0% | 417982 | 57% | 97656 | 4% | 623610 | 91% | 82126 | 0% | 83046 | 0% | 457212 | 58% |
EVA_TC: View Account | 519 | 0% | 565 | 0% | 16034 | 19% | 1736 | 0% | 40619 | 2% | 1138 | 1% | 972 | 1% | 37669 | 1% |
ILR_TC: Create ILR | 1422 | 0% | 1499 | 0% | 11607 | 1% | 3882 | 0% | 23676 | 0% | 2414 | 0% | 1997 | 0% | 22335 | 0% |
MSF_TC: mod search by auth query | 755 | 2% | 668 | 0% | 1888 | 0% | 1500 | 0% | 6180 | 0% | 1080 | 0% | 910 | 0% | 5894 | 0% |
MSF_TC: mod search by boolean query | 205 | 1% | 159 | 0% | 594 | 0% | 427 | 0% | 2022 | 0% | 258 | 0% | 238 | 0% | 1871 | 0% |
MSF_TC: mod search by contributors | 440 | 1% | 394 | 0% | 856 | 0% | 839 | 0% | 3454 | 0% | 616 | 0% | 530 | 0% | 3300 | 0% |
MSF_TC: mod search by filter query | 302 | 0% | 284 | 0% | 512 | 0% | 539 | 0% | 2018 | 0% | 416 | 0% | 362 | 0% | 1924 | 0% |
MSF_TC: mod search by keyword query | 310 | 0% | 280 | 0% | 521 | 0% | 537 | 0% | 2013 | 0% | 416 | 0% | 361 | 0% | 1912 | 0% |
MSF_TC: mod search by subject query | 448 | 0% | 406 | 0% | 797 | 0% | 778 | 0% | 3076 | 0% | 620 | 0% | 519 | 0% | 2917 | 0% |
MSF_TC: mod search by title query | 1090 | 1% | 1025 | 0% | 1435 | 0% | 1387 | 0% | 3687 | 0% | 1432 | 0% | 1149 | 0% | 3543 | 0% |
OPIH_/oai/records | 5330 | 0% | 5404 | 0% | 9330 | 0% | 7677 | 0% | 6881 | 0% | 3327 | 0% | 6990 | 0% | 8335 | 0% |
POO_TC: Add Order Lines | 52142 | 0% | 54193 | 0% | 282192 | 0% | 79206 | 0% | 412004 | 0% | 57735 | 0% | 57749 | 0% | 399917 | 0% |
POO_TC: Approve Order | 40656 | 0% | 42523 | 0% | 211747 | 0% | 56446 | 0% | 265167 | 0% | 43930 | 0% | 43834 | 0% | 275935 | 0% |
POO_TC: Create Order | 30734 | 0% | 31652 | 0% | 107318 | 0% | 42940 | 0% | 49234 | 0% | 32121 | 0% | 43834 | 0% | 175643 | 0%
RTAC_TC: edge-rtac | 3735 | 0% | 3828 | 0% | 16295 | 0% | 1387 | 0% | 57195 | 0% | 4205 | 0% | 3950 | 0% | 55595 | 0% |
SDIC_Single Record Import (Create) | 13279 | 19% | 13894 | 19% | 45024 | 16% | 17305 | 0% | 79053 | 2% | 14650 | 14% | 14531 | 17% | 74201 | 1% |
SDIU_Single Record Import (Update) | 18466 | 0% | 19432 | 0% | 218270 | 100% | 28207 | 0% | 118399 | 0% | 21736 | 0% | 20965 | 0% | 115777 | 0% |
TC: Receiving-an-Order-Line | 43765 | 100% | 46104 | 100% | 218270 | 100% | 65242 | 100% | 325230 | 100% | 49024 | 100% | 48538 | 100% | 322267 | 100% |
Serials-Receiving-Workflow | 45694 | 100% | 47336 | 100% | 198116 | 100% | 68545 | 100% | 302203 | 100% | 49873 | 100% | 50028 | 100% | 295508 | 100% |
Unreceiving-a-Piece | 7823 | 100% | 7757 | 100% | 40059 | 100% | 13335 | 100% | 64155 | 100% | 9018 | 100% | 8717 | 100% | 60295 | 100% |
ULR_TC: Users loan Renewal Transaction | 2810 | 0% | 3078 | 0% | 22602 | 1% | 7829 | 0% | 38383 | 0% | 4673 | 0% | 4030 | 0% | 36143 | 0% |
...
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU averaged 99% during the ERW: Exporting Receiving Information workflow.
...
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU averaged 99% during the ERW: Exporting Receiving Information workflow.
...
Test №3
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 6, and CPU=0 was set for all services.
Objective: The objective of this test is to assess the impact of using fewer, more memory-optimized instances (switching from m6g.2xlarge to x2gd.xlarge) on the performance of the MCPT environment. By reducing the number of instances while selecting a different instance type with a higher memory-to-vCPU ratio, this test aims to observe how the system handles workloads under these conditions and whether the overall efficiency and performance improve.
Results: This configuration led to significant performance degradation: overall performance was four times worse than in Baseline Test №1. The reduced number of instances in this setup was clearly insufficient to maintain the required performance levels.
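For context, the CPU=0 setting refers to the CPU reservation in each module's ECS task definition. The snippet below is a minimal sketch of how such a reservation could be registered with boto3; the module name, image, and memory values are illustrative placeholders rather than the actual MCPT task definitions.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Hypothetical task definition for a single FOLIO module.
# cpu=0 means no CPU units are reserved for the container, so CPU on the
# instance is shared among containers rather than being reserved up front.
ecs.register_task_definition(
    family="mod-example",                        # placeholder module name
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "mod-example",
            "image": "example/mod-example:latest",  # placeholder image
            "cpu": 0,          # the CPU=0 setting under test
            "memory": 2048,    # illustrative memory limit (MiB)
            "essential": True,
        }
    ],
)
```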
...
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU peaked at 64%.
...
Test №4
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 8, and CPU=0 was set for all services.
Objective: The goal of this test is to evaluate whether increasing the number of instances from 6 to 8, while maintaining the instance type as x2gd.xlarge and setting CPU=0 for all services, can mitigate the performance degradation observed in Test №3. This test aims to determine the optimal instance count needed to maintain stable performance in the MCPT environment under these conditions.
Results: As a result, there was a 3x improvement in overall average duration for all workflows compared to Test №3. However, performance still lagged by more than 30% when compared to the Baseline Test №1.
...
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 91%.
...
Test №5
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 10, and CPU=0 was set for all services.
Objective: The goal of this test is to evaluate the impact of switching to a smaller instance type, x2gd.large, with a slightly increased instance count of 10, while keeping CPU reservations at 0 for all services. This test aims to find a balance between instance size and count that could potentially optimize performance while reducing costs. We seek to observe how these changes affect the overall system performance compared to previous tests.
Results: The test resulted in a significant performance degradation, with a 7x reduction in performance compared to Baseline Test №1. The change to the x2gd.large instance type and the increased number of instances did not mitigate the performance issues, leading to much worse overall performance than the configuration used in Test №4.
...
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 42%.
...
Test №6
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14 (of which 12 were used), and CPU=2 was set for all services.
Objective: To assess the performance and behavior of the MCPT environment with a different instance type (r6g.xlarge) and increased CPU allocation (CPU=2 for all services), while using a similar number of instances (12 out of 14). The goal is to determine the impact of these changes on overall system performance, responsiveness, and stability compared to the Baseline configuration.
Results: The test revealed a minor performance degradation of approximately 15-20% in the total duration of all workflows compared to Baseline Test №1. We also observed uneven CPU utilization across instances: some instances ran at CPU loads of up to 80%, while others sat at around 5%. This imbalance appears to be driven by two factors: some modules had only a few tasks, concentrated on single instances, and task distribution was uneven, with some instances running 15 or more tasks while others ran only 0-2.
...
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
...
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 98%.
...
Test №7
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.
Objective: The objective of this test is to evaluate the performance and behavior of the MCPT environment with a consistent instance type (r6g.xlarge), a fixed number of instances (14), and a new placement strategy ("one task per host"). By setting CPU=2 for all services, we aim to assess the impact of these changes on overall system performance, responsiveness, and stability, and to address the uneven CPU utilization observed in Test №6.
Results: The test revealed a minor performance improvement of approximately 3%-5% compared to Test №6. However, the total duration of all workflows showed a performance degradation of around 10%-15% compared to Baseline Test №1. The updated placement strategy of "one task per host" had a positive effect on addressing the previously observed imbalance but did not fully achieve the expected results. We still observed instances with high CPU loads (up to 80%) and others with significantly lower utilization (around 5%). Despite the more balanced distribution of tasks, some instances continued to have a high number of tasks (15+), while others had very few (2), leading to persistent CPU load imbalance.
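In ECS terms, a "one task per host" placement strategy corresponds to the distinctInstance placement constraint, which prevents the scheduler from placing two tasks of the same service on one container instance. The sketch below shows how such a service could be created with boto3; the cluster, service, and task definition names are placeholders, and the desired count is illustrative (the baseline used 4 tasks for most modules).

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Hypothetical service with "one task per host" behaviour: the
# distinctInstance constraint places at most one task of this service
# on any given container instance.
ecs.create_service(
    cluster="mcpt-cluster",            # placeholder cluster name
    serviceName="mod-example",         # placeholder service name
    taskDefinition="mod-example",      # placeholder task definition
    desiredCount=4,                    # illustrative task count
    launchType="EC2",
    placementConstraints=[{"type": "distinctInstance"}],
)
```

Note that the constraint is applied per service, so with 14 hosts and many services each host can still run tasks from several different modules.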
Service CPU Utilization
Here we can see that okapi used 44000% of its allocated CPU units.
Service Memory Utilization
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
...
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU was 98%.
DB Connections
Max number of DB connections was 5150.
DB load
Top SQL-queries
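The DB Connections and Top SQL-queries views above come from the monitoring dashboards; a rough equivalent can be pulled directly from the database, assuming the RDS instance is PostgreSQL with the pg_stat_statements extension enabled. Connection parameters in the sketch are placeholders.

```python
import psycopg2

# Placeholder connection parameters for the MCPT database.
conn = psycopg2.connect(
    host="mcpt-db.example.com",
    dbname="folio",
    user="monitor",
    password="secret",
)

with conn, conn.cursor() as cur:
    # Current number of client connections (the test peaked at 5150).
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    print("connections:", cur.fetchone()[0])

    # Top SQL statements by cumulative execution time.
    # The column is total_exec_time on PostgreSQL 13+ (total_time on older versions).
    cur.execute(
        """
        SELECT left(query, 80) AS query, calls, total_exec_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10;
        """
    )
    for query, calls, total_ms in cur.fetchall():
        print(f"{total_ms:12.1f} ms  {calls:8d}  {query}")
```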
Test №8
Introduction: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.
Objective: The objective of this test is to evaluate the performance and behavior of the MCPT environment using the x2gd.large instance type with a consistent number of instances (14) and the "one task per host" placement strategy. By setting CPU=2 for all services, we aim to assess the impact of these configurations on overall system performance, responsiveness, and stability, and to compare the results with previous tests to determine the effectiveness of the new setup.
Results: The test revealed a significant performance degradation, with the total average duration for all workflows being approximately 5 times worse compared to the Baseline Test №1. Additionally, there was an imbalance in CPU utilization across instances, indicating that some instances were underutilized while others were overloaded.
Service CPU Utilization
Here we can see that okapi used 38% of its allocated CPU units.
Service Memory Utilization
Here we don't see any signs of memory leaks in any module; memory shows a stable trend.
Instance CPU Utilization
Kafka metrics
DB CPU Utilization
DB CPU peaked at 53%.
DB Connections
Max number of DB connections was 3842.
DB load
Top SQL-queries
...
PTF - Baseline MCPT environment configuration
- 14 m6g.2xlarge EC2 instances located in US East (N. Virginia) us-east-1
- 1 database instance (writer): db.r6g.4xlarge, 128 GiB memory, 16 vCPUs
- OpenSearch ptf-test
  - Data nodes
    - Instance type: r6g.2xlarge.search
    - Number of nodes: 4
    - Version: OpenSearch_2_7_R20240502
  - Dedicated master nodes
    - Instance type: r6g.large.search
    - Number of nodes: 3
- MSK tenant
  - 4 m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
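The broker-level settings listed above would typically be applied through an MSK configuration. The sketch below shows one way such a configuration could be registered with boto3; the configuration name is a placeholder, while the property values match the ones listed for this environment.

```python
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

# Broker properties from the baseline MCPT MSK cluster. The resulting
# configuration still has to be attached to the cluster (at creation
# time or via an update) to take effect.
server_properties = (
    b"auto.create.topics.enable=true\n"
    b"log.retention.minutes=480\n"
    b"default.replication.factor=3\n"
)

kafka.create_configuration(
    Name="mcpt-baseline-config",   # placeholder configuration name
    KafkaVersions=["2.8.0"],
    ServerProperties=server_properties,
)
```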
...
- Test 1: Baseline MCPT Environment configuration according to the tuned environment from the previous report: task count 4 for mod-permissions, mod-search, mod-patron, mod-inventory, mod-inventory-storage, mod-circulation, mod-circulation-storage, mod-order, mod-order-storage, mod-invoice, and mod-invoice-storage; task count 6 for mod-users and mod-authtoken. Parameter srs.marcIndexers.delete.interval.seconds=86400 for mod-source-record-storage. Instance type: m6g.2xlarge. Instances count: 14. Database: db.r6g.4xlarge. Amazon OpenSearch Service ptf-test: r6g.2xlarge.search (4 nodes).
- Test 2: The Baseline MCPT Environment configuration was applied, and CPU=0 was set for all modules.
- Test 3: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 6, and CPU=0 was set for all services.
- Test 4: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.xlarge, the number of instances was changed to 8, and CPU=0 was set for all services.
- Test 5: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 10, and CPU=0 was set for all services.
- Test 6: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14 (of which 12 were used), and CPU=2 was set for all services.
- Test 7: The Baseline MCPT Environment configuration was applied, the instance type was changed to r6g.xlarge, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.
- Test 8: The Baseline MCPT Environment configuration was applied, the instance type was changed to x2gd.large, the number of instances was changed to 14, placement strategy was updated to "one task per host", and CPU=2 was set for all services.