Overview
- The primary objective of testing was to evaluate the performance of the Baseline MCPT Environment configuration while attempting to optimize costs by adjusting instance types and reducing the number of instances. The tests were designed to compare the performance outcomes across different configurations, including variations in instance types and counts within multiple Auto Scaling Groups (ASGs). By systematically modifying these variables, the goal was to maintain or improve the performance observed in the baseline configuration while achieving cost efficiency.
- PERF-962
Summary
- Through a series of experiments involving different placement strategies, instance types, and total instance counts, we found that performance remained consistent with either of these configurations:
  - three c7g.large instances dedicated to the okapi service alongside five r7g.2xlarge instances for all other services, with the CPU parameter set to 2 for all services;
  - five r7g.2xlarge instances for all services, with the CPU parameter set to 2 for all services.
- The optimized environment configurations offer roughly a 23-45% cost reduction compared to the existing setup (see the cost comparison below), making them a more economical option without compromising performance.
- The configuration with three c7g.large instances for the okapi service and five r7g.2xlarge instances for all other services shows the best performance across all experiments.
- In fact, some workflows perform better with this new setup than on the current infrastructure.
- CPU utilization at the EC2 level is now better, at around 30-60%, whereas previously it was under 20%.
AWS Configuration Costs
Cluster | Instance Type | Cost per Month (USD) | Number of Instances | Total Cost per Cluster (USD) |
---|---|---|---|---|
QCP1 | m6g.2xlarge | $221.76 | 10 | $2,217.60 |
MCPT | m6g.2xlarge | $221.76 | 14 | $3,104.64 |
Optimized Infrastructure (Two Auto Scaling Groups) | c7g.large | $52.20 | 3 | $1,698.84 |
 | r7g.2xlarge | $308.45 | 5 | |
Optimized Infrastructure (One Auto Scaling Group) | r7g.2xlarge | $308.45 | 5 | $1,542.25 |
Cost Comparison (Before vs After)
Cluster | Previous Total Cost | New Total Cost | Percentage Saving |
---|---|---|---|
QCP1 | $2,217.60 | $1,698.84 | 23.39% |
MCPT | $3,104.64 | $1,698.84 | 45.28% |
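The savings above follow directly from the per-instance prices in the cost table. A minimal sketch of the arithmetic, using the prices and counts listed there and the two-ASG variant as the optimized total:

```python
# Sketch of the cost arithmetic behind the tables above; per-instance monthly
# prices are taken from the "AWS Configuration Costs" table.
PRICES_USD_PER_MONTH = {
    "m6g.2xlarge": 221.76,
    "c7g.large": 52.20,
    "r7g.2xlarge": 308.45,
}

def monthly_cost(fleet: dict) -> float:
    """Total monthly cost for a mapping of instance type -> instance count."""
    return sum(PRICES_USD_PER_MONTH[itype] * count for itype, count in fleet.items())

qcp1_baseline = monthly_cost({"m6g.2xlarge": 10})                     # $2,217.60
mcpt_baseline = monthly_cost({"m6g.2xlarge": 14})                     # $3,104.64
optimized_two_asg = monthly_cost({"c7g.large": 3, "r7g.2xlarge": 5})  # ~$1,698.85
optimized_one_asg = monthly_cost({"r7g.2xlarge": 5})                  # $1,542.25

def saving_pct(before: float, after: float) -> float:
    """Percentage saved when moving from `before` to `after`."""
    return (before - after) / before * 100

print(f"QCP1 -> two ASGs: {saving_pct(qcp1_baseline, optimized_two_asg):.2f}% saved")  # ~23.39%
print(f"MCPT -> two ASGs: {saving_pct(mcpt_baseline, optimized_two_asg):.2f}% saved")  # ~45.28%
```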
Test Runs
Test # | Description | Status |
---|---|---|
Test 1 | Instance type: m6g.2xlarge. Instance count: 10. | Completed |
Test 2 | Instance type: m6g.2xlarge. Instance count: 10 (Repeat Test 1). | Completed |
Test 3 | Two Auto Scaling Groups: the 1st with 3 c7g.large instances for the okapi service, the 2nd with 5 m6g.2xlarge instances for the other services. | Completed |
Test 4 | Two Auto Scaling Groups: the 1st with 3 c7g.large instances for the okapi service, the 2nd with 5 m6g.2xlarge instances for the other services (Repeat Test 3). | Completed |
Test 5 | CPU=2 set for all modules; two Auto Scaling Groups: the 1st with 3 c7g.large instances for the okapi service, the 2nd with 5 r7g.xlarge instances for the other modules. | Completed |
Test 6 | CPU=2 set for all modules; two Auto Scaling Groups: the 1st with 3 c7g.large instances for the okapi service, the 2nd with 5 r7g.xlarge instances for the other modules (Repeat Test 5). | Completed |
Test 7 | CPU=2 set for all modules except CPU=2048 for mod-search; two Auto Scaling Groups: the 1st with 3 c7g.large instances for the okapi service, the 2nd with 5 r7g.xlarge instances for the other modules. | Completed |
Test 8 | CPU=2 set for all modules; ONE Auto Scaling Group with 5 c7g.large instances for all services. | |
Test 9 | CPU=2 set for all modules; ONE Auto Scaling Group with 5 c7g.large instances for all services (Repeat Test 8). | |
Test 10 | CPU=2 set for all modules except CPU=2048 for mod-search; ONE Auto Scaling Group with 5 c7g.large instances for all services. | |
Test 11 | CPU=2 set for all modules except CPU=2048 for mod-search; ONE Auto Scaling Group with 5 c7g.large instances for all services (Repeat Test 10). | |
Test 12 | CPU=2 set for all modules except CPU=2048 for mod-search; ONE Auto Scaling Group with 5 c7g.large instances for all services (Repeat Test 11). | |
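The two-ASG layout used in Tests 3-7 (and, in its optimized form from the Summary, 3 c7g.large instances for okapi plus 5 r7g.2xlarge for the rest) can be expressed roughly as follows. This is a minimal sketch assuming a launch-template-per-group setup; the AMI ID, subnet IDs, and group names are placeholders, not values from this report.

```python
# Sketch only: one Auto Scaling Group dedicated to okapi and one for all other
# services. All identifiers (AMI, subnets, names) are placeholders.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

def create_fixed_group(name: str, instance_type: str, count: int) -> None:
    """Create a launch template and a fixed-size Auto Scaling Group using it."""
    ec2.create_launch_template(
        LaunchTemplateName=f"{name}-lt",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",  # placeholder (e.g. an ECS-optimized AMI)
            "InstanceType": instance_type,
        },
    )
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName=name,
        LaunchTemplate={"LaunchTemplateName": f"{name}-lt", "Version": "$Latest"},
        MinSize=count,
        MaxSize=count,
        DesiredCapacity=count,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnets
    )

create_fixed_group("okapi-asg", "c7g.large", 3)       # dedicated to okapi
create_fixed_group("services-asg", "r7g.2xlarge", 5)  # all other services
```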
Test Results
This table contains the results for all workflows: for each test, the average response time in milliseconds (the total duration, hh:mm:ss, for DATA IMPORT and DATA EXPORT) and the error rate.
Workflows | Test 1 | Errors | Test 2 | Errors | Test 3 | Errors | Test 4 (Repeat 3) | Errors | Test 5 | Errors | Test 6 (Repeat 5) | Errors | Test 7 | Errors | Test 8 | Errors | Test 9 (Repeat 8) | Errors | Test 10 | Errors | Test 11 (Repeat 10) | Errors | Test 12 (Repeat 11) | Errors |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DATA IMPORT | 0:52:03 | | 0:44:55 | | 0:46:07 | | 0:47:09 | | 0:51:41 | | 0:58:35 | | 1:00:03 | | 0:43:53 | | 0:47:06 | | 0:55:30 | | 0:45:46 | | 0:44:09 | |
DATA EXPORT | 0:58:11 | | 0:44:43 | | 0:47:41 | | 0:50:32 | | 0:38:59 | | 0:45:53 | | 0:48:26 | not finished for main | 0:45:41 | | 0:48:49 | | 0:56:49 | | 0:42:16 | | 0:44:26 | |
CICO_TC_Check-In Controller | 1163 | 0% | 948 | 0% | 932 | 0% | 958 | 0% | 849 | 0% | 895 | 0% | 940 | 0% | 912 | 0% | 993 | 0% | 1176 | 0% | 892 | 0% | 967 | 0% |
CICO_TC_Check-Out Controller | 1697 | 0% | 1481 | 0% | 1408 | 0% | 1428 | 0% | 1318 | 0% | 1318 | 0% | 1367 | 0% | 1345 | 0% | 1445 | 0% | 1675 | 0% | 1371 | 0% | 1467 | 0% |
DE_Exporting MARC Bib records workflow | 2528 | 0% | 3818 | 0% | 3675 | 0% | 2830 | 0% | 1918 | 0% | 2223 | 0% | 1865 | 0% | 3363 | 0% | 2420 | 0% | 1872 | 0% | 3398 | 0% | 5033 | 0% |
ILR_TC: Create ILR | 1023 | 0% | 874 | 0% | 730 | 0% | 804 | 0% | 624 | 0% | 662 | 0% | 802 | 0% | 820 | 0% | 767 | 0% | 1116 | 0% | 877 | 0% | 780 | 0% |
ILR_TC: Get ItemId | 132 | 0% | 131 | 0% | 107 | 0% | 114 | 0% | 112 | 0% | 109 | 0% | 109 | 0% | 116 | 0% | 127 | 0% | 165 | 0% | 136 | 0% | 130 | 0% |
MSF_TC: mod search by auth query | 4830 | 0% | 4835 | 6% | 5480 | 6% | 7658 | 10% | 7570 | 19% | 2175 | 0% | 2474 | 0% | 6094 | 10% | 4119 | 12% | 2938 | 0% | 5546 | 11% | 7075 | 18% |
MSF_TC: mod search by boolean query | 469 | 0% | 621 | 3% | 1456 | 2% | 1027 | 5% | 1876 | 5% | 256 | 1% | 605 | 0% | 1343 | 5% | 527 | 4% | 333 | 0% | 1197 | 7% | 1263 | 6% |
MSF_TC: mod search by contributors | 475 | 0% | 1655 | 3% | 1805 | 6% | 1714 | 3% | 3467 | 8% | 496 | 0% | 503 | 0% | 1966 | 4% | 755 | 6% | 465 | 0% | 2090 | 5% | 2036 | 8% |
MSF_TC: mod search by filter query | 229 | 0% | 844 | 2% | 703 | 2% | 1137 | 0% | 1573 | 2% | 217 | 0% | 262 | 0% | 1100 | 2% | 429 | 3% | 238 | 0% | 944 | 4% | 1068 | 6% |
MSF_TC: mod search by keyword query | 228 | 0% | 519 | 3% | 1161 | 4% | 1245 | 8% | 1312 | 4% | 198 | 0% | 256 | 0% | 1144 | 3% | 434 | 2% | 250 | 0% | 979 | 4% | 1018 | 8% |
MSF_TC: mod search by subject query | 577 | 0% | 1846 | 3% | 1825 | 3% | 1634 | 7% | 2328 | 2% | 445 | 0% | 539 | 0% | 2163 | 3% | 840 | 7% | 473 | 0% | 1476 | 6% | 1919 | 8% |
MSF_TC: mod search by title query | 2141 | 0% | 3251 | 3% | 3770 | 1% | 4598 | 2% | 4887 | 8% | 2038 | 0% | 2294 | 0% | 4285 | 5% | 3214 | 6% | 2054 | 0% | 4145 | 7% | 4437 | 9% |
DI_TC: Importing MARC records workflow Transaction &{tenant} | 1059451 | 0% | 20423 | 0% | 935626 | 0% | 950403 | 0% | 1047836 | 0% | 17946 | 0% | 1210429 | 33% | 17258 | 0% | 955349 | 0% | 2238208 | 0% | 18974 | 0% | 18762 | 0% |
PRV_TC: View Patron record Group | 347 | 0% | 310 | 0% | 248 | 0% | 238 | 0% | 213 | 0% | 284 | 0% | 313 | 0% | 265 | 0% | 254 | 0% | 355 | 0% | 242 | 0% | 273 | 0% |
ULR_TC: Users loan Renewal Transaction | 1059451 | 0% | 1852 | 0% | 1528 | 0% | 1717 | 0% | 1551 | 0% | 1595 | 0% | 1749 | 0% | 1526 | 0% | 1761 | 0% | 4117 | 0% | 1496 | 0% | 1617 | 0% |
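The per-workflow averages and error rates above appear to be standard JMeter aggregates. A minimal sketch of how such values could be derived from a raw .jtl results file; the file path is a placeholder and the default CSV column names (`label`, `elapsed`, `success`) are assumed:

```python
# Sketch: computing average response time (ms) and error rate per workflow
# label from a JMeter .jtl results file saved in the default CSV format.
import pandas as pd

samples = pd.read_csv("results/qcp1_minimaster.jtl")  # placeholder path

summary = samples.groupby("label").agg(
    avg_response_ms=("elapsed", "mean"),
    # 'success' may be parsed as bool or as the strings "true"/"false".
    error_rate=("success", lambda s: 1 - s.astype(str).str.lower().eq("true").mean()),
)

print(summary.round({"avg_response_ms": 0, "error_rate": 4}))
```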
Comparison
These graphs show the durations of all workflows compared across the best test results.
Part 1.
Part 2.
Test №1
Introduction: Test 1: The Baseline QCP1 Environment configuration was applied, and the qcp1_MiniMaster.jmx script was run.
Objective: The objective of this test was to evaluate the performance of the MCPT environment by applying the baseline configuration.
Instance CPU Utilization
Service CPU Utilization
Here we can see that the mod-inventory-b module used 73% CPU.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
OpenSearch metrics
DB CPU Utilization
DB CPU utilization was 97% on average.
DB Connections
The maximum number of DB connections was 1250.
DB load
Top SQL-queries
Test №2
Introduction: The Baseline QCP1 Environment configuration was applied, and the qcp1_MiniMaster.jmx script was run (Repeat Test 1).
Objective: The objective of this test was to validate the consistency of performance observed in Test 1 by repeating the same configuration.
Results: Results were almost the same for all workflows.
Instance CPU Utilization
Service CPU Utilization
Here we can see that the mod-inventory-b module used up to 102% CPU.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
OpenSearch metrics
DB CPU Utilization
DB CPU utilization was 95% on average.
DB Connections
The maximum number of DB connections was 1250.
DB load
Top SQL-queries
Test №3-4
Introduction: Test 3: The Baseline QCP1 Environment configuration was applied using two Auto Scaling Groups, the 1st with 3 c7g.large instances for the okapi service and the 2nd with 5 c7g.large instances for the other services; the qcp1_MiniMaster.jmx script was run. Test 4: Repeat Test 3.
Objective: The objective of this test was to evaluate the performance of the QCP1 environment using two distinct Auto Scaling Groups with different instance types: c7g.large for the okapi service and r7g.xlarge for all other services. The goal was to determine whether this modified configuration could achieve similar or improved performance compared to the baseline while potentially optimizing resource allocation and cost.
Results: We observed nearly identical performance results for almost all workflows compared to the Baseline configuration, and Test №4 reproduced them.
Instance CPU Utilization
Service CPU Utilization
Here we can see that mod-inventory-b used 78k% of the CPU.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
OpenSearch metrics
DB CPU Utilization
DB CPU utilization peaked at 97%.
DB Connections
The maximum number of DB connections was 2200.
DB load
Top SQL-queries
Test №5-6-7
Introduction: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules, using two Auto Scaling Groups, the 1st with 3 c7g.large instances for the okapi service and the 2nd with 5 c7g.large instances for the other services; the qcp1_MiniMaster.jmx script was run.
Objective: Repeat the tests to confirm consistent performance.
Results: Performance was confirmed to be the same as the baseline.
Instance CPU Utilization
Service CPU Utilization
Here we can see that mod-data-export-b used 82k% of the CPU allocated by the CPU=2 parameter.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
OpenSearch metrics
DB CPU Utilization
DB CPU was 97%.
DB Connections
The maximum number of DB connections was 1250.
DB load
Top SQL-queries
Test №8-9
Introduction: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules, using ONE Auto Scaling Group with 5 c7g.large instances for all services; the qcp1_MiniMaster.jmx script was run.
Objective:
Results:
Instance CPU Utilization
Service CPU Utilization
Here we can see that mod-data-export-b used, on average, 36k% of the CPU allocated by the CPU=2 parameter.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
OpenSearch metrics
DB CPU Utilization
DB CPU was 94%.
DB Connections
The maximum number of DB connections was 1265.
DB load
Top SQL-queries
Test №10
Introduction: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules except CPU=2048 for mod-search, using ONE Auto Scaling Group with 5 c7g.large instances for all services; the qcp1_MiniMaster.jmx script was run.
Objective:
Results:
Instance CPU Utilization
Service CPU Utilization
Here we can see that mod-data-export-b used, on average, 36k% of the CPU allocated by the CPU=2 parameter.
Service Memory Utilization
Here we see no signs of memory leaks in any module; memory usage shows a stable trend.
Kafka metrics
OpenSearch metrics
DB CPU Utilization
DB CPU was 94%.
DB Connections
The maximum number of DB connections was 1265.
DB load
Top SQL-queries
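The DB CPU and connection figures quoted in the test sections match what the standard Amazon RDS CloudWatch metrics (CPUUtilization and DatabaseConnections) report. A minimal sketch of pulling them for a test window; the DB instance identifier and time range are placeholders, and the report does not state how the values were actually collected:

```python
# Sketch: retrieving RDS CPU utilization and connection counts for a test run
# window from CloudWatch. Instance identifier and times are placeholders.
from datetime import datetime, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

def rds_metric(metric_name: str, start: datetime, end: datetime) -> list:
    """Return 5-minute datapoints for an RDS metric, sorted by timestamp."""
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric_name,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "qcp1-writer"}],  # placeholder
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average", "Maximum"],
    )
    return sorted(response["Datapoints"], key=lambda p: p["Timestamp"])

start = datetime(2024, 1, 1, 10, 0, tzinfo=timezone.utc)  # placeholder test window
end = datetime(2024, 1, 1, 14, 0, tzinfo=timezone.utc)

cpu = rds_metric("CPUUtilization", start, end)                # percent
connections = rds_metric("DatabaseConnections", start, end)   # count
print(max(point["Maximum"] for point in connections))
```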
Appendix
Infrastructure
PTF - Baseline QCP1 environment configuration (was changed during testing)
- 10 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 1 database instance (writer): db.r6g.xlarge, 32 GiB memory, 4 vCPUs
- OpenSearch ptf-test
  - Data nodes
    - Instance type - r6g.2xlarge.search
    - Number of nodes - 4
    - Version: OpenSearch_2_7_R20240502
  - Dedicated master nodes
    - Instance type - r6g.large.search
    - Number of nodes - 3
- MSK fse-tenant
  - 2 brokers, kafka.m7g.xlarge, in 2 zones
  - Apache Kafka version 3.7.x
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
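The broker properties listed for the MSK cluster could be captured as an MSK configuration. A minimal sketch assuming boto3 is used; the configuration name and description are placeholders, and the report does not state how the properties were actually applied:

```python
# Sketch: registering the listed broker properties as an MSK configuration.
# The configuration name and description are placeholders.
import boto3

kafka = boto3.client("kafka")

server_properties = b"""auto.create.topics.enable=true
log.retention.minutes=480
default.replication.factor=3
"""

kafka.create_configuration(
    Name="fse-tenant-broker-config",
    KafkaVersions=["3.7.x"],
    ServerProperties=server_properties,
    Description="Broker properties for the fse-tenant MSK cluster",
)
```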
Methodology/Approach
Test scenarios were started by a JMeter script from the load generator.
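A minimal sketch of such a launch in JMeter non-GUI mode; the output locations are illustrative, and the exact command line used on the load generator is not given in this report:

```python
# Sketch: launching the qcp1_MiniMaster.jmx scenario in JMeter non-GUI mode.
# The results paths are placeholders.
import subprocess

subprocess.run(
    [
        "jmeter",
        "-n",                                  # non-GUI mode
        "-t", "qcp1_MiniMaster.jmx",           # test plan used in all test runs
        "-l", "results/qcp1_minimaster.jtl",   # raw sample log
        "-e", "-o", "results/report",          # generate the HTML report at the end
    ],
    check=True,
)
```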
Baseline QCP1 Environment configuration: Parameter srs.marcIndexers.delete.interval.seconds=86400 for mod-source-record-storage; the number of tasks to launch for the mod-marc-migrations-b service was set to zero. Instance type: m6g.2xlarge. Instance count: 10. Database: db.r6g.xlarge. Amazon OpenSearch Service ptf-test: r6g.2xlarge.search (4 nodes).
- Test 1: The Baseline QCP1 Environment configuration was applied; the qcp1_MiniMaster.jmx script was run.
- Test 2: The Baseline QCP1 Environment configuration was applied; the qcp1_MiniMaster.jmx script was run (Repeat Test 1).
- Test 3: The Baseline QCP1 Environment configuration was applied using two Auto Scaling Groups, the 1st with 3 c7g.large instances for the okapi service and the 2nd with 5 c7g.large instances for the other services; the qcp1_MiniMaster.jmx script was run.
- Test 4: The Baseline QCP1 Environment configuration was applied using two Auto Scaling Groups, the 1st with 3 c7g.large instances for the okapi service and the 2nd with 5 c7g.large instances for the other services; the qcp1_MiniMaster.jmx script was run (Repeat Test 3).
- Test 5: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules, using two Auto Scaling Groups, the 1st with 3 c7g.large instances for the okapi service and the 2nd with 5 c7g.large instances for the other services; the qcp1_MiniMaster.jmx script was run.
- Test 6: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules, using two Auto Scaling Groups, the 1st with 3 c7g.large instances for the okapi service and the 2nd with 5 c7g.large instances for the other services; the qcp1_MiniMaster.jmx script was run (Repeat Test 5).
- Test 7: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules except CPU=2048 for mod-search, using two Auto Scaling Groups, the 1st with 3 c7g.large instances for the okapi service and the 2nd with 5 c7g.large instances for the other services; the qcp1_MiniMaster.jmx script was run.
- Test 8: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules, using ONE Auto Scaling Group with 5 c7g.large instances for all services; the qcp1_MiniMaster.jmx script was run.
- Test 9: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules, using ONE Auto Scaling Group with 5 c7g.large instances for all services; the qcp1_MiniMaster.jmx script was run (Repeat Test 8).
- Test 10: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules except CPU=2048 for mod-search, using ONE Auto Scaling Group with 5 c7g.large instances for all services; the qcp1_MiniMaster.jmx script was run.
- Test 11: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules except CPU=2048 for mod-search, using ONE Auto Scaling Group with 5 c7g.large instances for all services; the qcp1_MiniMaster.jmx script was run (Repeat Test 10).
- Test 12: The Baseline QCP1 Environment configuration was applied with CPU=2 set for all modules except CPU=2048 for mod-search, using ONE Auto Scaling Group with 5 c7g.large instances for all services; the qcp1_MiniMaster.jmx script was run (Repeat Test 11).
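Tests 5-12 vary the per-module CPU parameter (CPU=2 for most modules, CPU=2048 for mod-search in Tests 7 and 10-12). Assuming this parameter maps to the container-level cpu setting of an ECS task definition, which is an assumption since the report does not spell out the orchestration layer, a minimal sketch of applying the two values; family names, image URIs, and memory limits are placeholders:

```python
# Sketch: registering task definitions with CPU=2 for a regular module and
# CPU=2048 (2 vCPU) for mod-search. All names, images, and memory values are
# placeholders; the mapping to ECS CPU units is an assumption.
import boto3

ecs = boto3.client("ecs")

def register_module(family: str, image: str, cpu_units: int) -> None:
    """Register a task definition with the given container-level CPU reservation."""
    ecs.register_task_definition(
        family=family,
        requiresCompatibilities=["EC2"],
        containerDefinitions=[
            {
                "name": family,
                "image": image,
                "cpu": cpu_units,   # 2 CPU units vs 2048 units (= 2 vCPU)
                "memory": 2048,     # placeholder hard memory limit, MiB
                "essential": True,
            }
        ],
    )

register_module("mod-inventory-b", "registry.example.com/mod-inventory:latest", 2)
register_module("mod-search-b", "registry.example.com/mod-search:latest", 2048)
```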