
Table of Contents

Overview

  • The primary objective of testing was to evaluate the performance of the Baseline MCPT Environment configuration while attempting to optimize costs by adjusting instance types and reducing the number of instances. The tests were designed to compare the performance outcomes across different configurations, including variations in instance types and counts within multiple Auto Scaling Groups (ASGs). By systematically modifying these variables, the goal was to maintain or improve the performance observed in the baseline configuration while achieving cost efficiency.

Jira: PERF-962 (System Jira)

Summary

  • Through a series of experiments involving different placement strategies, instance types, and total instance counts, we found that performance remained consistent when using these configurations: 
    • three c7g.large instances dedicated to the okapi service alongside five r7g.2xlarge instances for all other services, with the CPU parameter set to 2 for all services;
    • five r7g.2xlarge instances for all services, with the CPU parameter set to 2 for all services.
  • The optimized environment configurations offer a 20-40% cost reduction compared to the existing setup, making them a more economical option without compromising performance.
  • The configuration with three c7g.large instances for the okapi service and five r7g.2xlarge instances for all other services shows the best performance across all experiments.
  • In fact, some workflows show better performance with this new setup than with the current infrastructure.
  • CPU utilization at the EC2 level is also better now: around 30-60%, whereas previously it was under 20%.

AWS Configuration Costs

Cluster | Instance Type | Cost per Month (USD) | Number of Instances | Total Cost per Cluster (USD)
QCP1 | m6g.2xlarge | $221.76 | 10 | $2,217.60
MCPT | m6g.2xlarge | $221.76 | 14 | $3,104.64
Optimized Infrastructure (Two Auto Scaling Groups) | c7g.large | $52.20 | 3 | $1,698.84
 | r7g.2xlarge | $308.45 | 5 | 
Optimized Infrastructure (One Auto Scaling Group) | r7g.2xlarge | $308.45 | 5 | $1,542.25


Cost Comparison (Before vs After)

Cluster | Previous Total Cost (USD) | New Total Cost (USD) | Percentage Saving (%)
QCP1 | $2,217.60 | $1,698.84 | 23.39%
MCPT | $3,104.64 | $1,698.84 | 45.28%
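The percentage savings above follow directly from the before/after totals. A minimal sanity-check sketch, using the dollar figures exactly as given in the cost tables:

```python
# Sanity-check the "Percentage Saving" column from the before/after monthly totals.
previous = {"QCP1": 2217.60, "MCPT": 3104.64}
new_total = 1698.84  # two-ASG optimized infrastructure, used for both clusters

def saving_pct(before: float, after: float) -> float:
    """Percentage saved when moving from `before` to `after` monthly cost."""
    return (before - after) / before * 100

for cluster, before in previous.items():
    print(f"{cluster}: {saving_pct(before, new_total):.2f}%")
# QCP1: 23.39%
# MCPT: 45.28%
```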

Test Runs

Test # | Description | Status
Test 1 | Instance type: m6g.2xlarge; instance count: 10 | Completed
Test 2 | Instance type: m6g.2xlarge; instance count: 10 (Repeat Test 1) | Completed
Test 3 | Two autoscaling groups: 3 c7g.large instances for the okapi service and 5 m6g.2xlarge instances for other services | Completed
Test 4 | Two autoscaling groups: 3 c7g.large instances for the okapi service and 5 m6g.2xlarge instances for other services (Repeat Test 3) | Completed
Test 5 | CPU=2 set for all modules; two autoscaling groups: 3 c7g.large instances for the okapi service and 5 r7g.xlarge instances for other modules | Completed
Test 6 | CPU=2 set for all modules; two autoscaling groups: 3 c7g.large instances for the okapi service and 5 r7g.xlarge instances for other modules (Repeat Test 5) | Completed
Test 7 | CPU=2 set for all modules except CPU=2048 for mod-search; two autoscaling groups: 3 c7g.large instances for the okapi service and 5 r7g.xlarge instances for other modules | Completed
Test 8 | CPU=2 set for all modules; ONE autoscaling group with 5 c7g.large instances for all services | 
Test 9 | CPU=2 set for all modules; ONE autoscaling group with 5 c7g.large instances for all services (Repeat Test 8) | 
Test 10 | CPU=2 set for all modules except CPU=2048 for mod-search; ONE autoscaling group with 5 c7g.large instances for all services | 
Test 11 | CPU=2 set for all modules except CPU=2048 for mod-search; ONE autoscaling group with 5 c7g.large instances for all services (Repeat Test 10) | 
Test 12 | CPU=2 set for all modules except CPU=2048 for mod-search; ONE autoscaling group with 5 c7g.large instances for all services (Repeat Test 11) | 

Test Results

This table contains durations for all workflows. Each cell shows the average response time (in milliseconds, or h:mm:ss for the DATA IMPORT / DATA EXPORT jobs) with the error rate in parentheses.

Workflows | Test 1 | Test 2 (Repeat 1) | Test 3 | Test 4 (Repeat 3) | Test 5 | Test 6 (Repeat 5) | Test 7 | Test 8 | Test 9 (Repeat 8) | Test 10 | Test 11 (Repeat 10) | Test 12 (Repeat 11)
DATA IMPORT | 0:52:03 | 0:44:55 | 0:46:07 | 0:47:09 | 0:51:41 | 0:58:35 | 1:00:03 | 0:43:53 | 0:47:06 | 0:55:30 | 0:45:46 | 0:44:09
DATA EXPORT | 0:58:11 | 0:44:43 | 0:47:41 | 0:50:32 | 0:38:59 | 0:45:53 | 0:48:26 (not finished for main) | 0:45:41 | 0:48:49 | 0:56:49 | 0:42:16 | 0:44:26
CICO_TC_Check-In Controller | 1163 (0%) | 948 (0%) | 932 (0%) | 958 (0%) | 849 (0%) | 895 (0%) | 940 (0%) | 912 (0%) | 993 (0%) | 1176 (0%) | 892 (0%) | 967 (0%)
CICO_TC_Check-Out Controller | 1697 (0%) | 1481 (0%) | 1408 (0%) | 1428 (0%) | 1318 (0%) | 1318 (0%) | 1367 (0%) | 1345 (0%) | 1445 (0%) | 1675 (0%) | 1371 (0%) | 1467 (0%)
DE_Exporting MARC Bib records workflow | 2528 (0%) | 3818 (0%) | 3675 (0%) | 2830 (0%) | 1918 (0%) | 2223 (0%) | 1865 (0%) | 3363 (0%) | 2420 (0%) | 1872 (0%) | 3398 (0%) | 5033 (0%)
ILR_TC: Create ILR | 1023 (0%) | 874 (0%) | 730 (0%) | 804 (0%) | 624 (0%) | 662 (0%) | 802 (0%) | 820 (0%) | 767 (0%) | 1116 (0%) | 877 (0%) | 780 (0%)
ILR_TC: Get ItemId | 132 (0%) | 131 (0%) | 107 (0%) | 114 (0%) | 112 (0%) | 109 (0%) | 109 (0%) | 116 (0%) | 127 (0%) | 165 (0%) | 136 (0%) | 130 (0%)
MSF_TC: mod search by auth query | 4830 (0%) | 4835 (6%) | 5480 (6%) | 7658 (10%) | 7570 (19%) | 2175 (0%) | 2474 (0%) | 6094 (10%) | 4119 (12%) | 2938 (0%) | 5546 (11%) | 7075 (18%)
MSF_TC: mod search by boolean query | 469 (0%) | 621 (3%) | 1456 (2%) | 1027 (5%) | 1876 (5%) | 256 (1%) | 605 (0%) | 1343 (5%) | 527 (4%) | 333 (0%) | 1197 (7%) | 1263 (6%)
MSF_TC: mod search by contributors | 475 (0%) | 1655 (3%) | 1805 (6%) | 1714 (3%) | 3467 (8%) | 496 (0%) | 503 (0%) | 1966 (4%) | 755 (6%) | 465 (0%) | 2090 (5%) | 2036 (8%)
MSF_TC: mod search by filter query | 229 (0%) | 844 (2%) | 703 (2%) | 1137 (0%) | 1573 (2%) | 217 (0%) | 262 (0%) | 1100 (2%) | 429 (3%) | 238 (0%) | 944 (4%) | 1068 (6%)
MSF_TC: mod search by keyword query | 228 (0%) | 519 (3%) | 1161 (4%) | 1245 (8%) | 1312 (4%) | 198 (0%) | 256 (0%) | 1144 (3%) | 434 (2%) | 250 (0%) | 979 (4%) | 1018 (8%)
MSF_TC: mod search by subject query | 577 (0%) | 1846 (3%) | 1825 (3%) | 1634 (7%) | 2328 (2%) | 445 (0%) | 539 (0%) | 2163 (3%) | 840 (7%) | 473 (0%) | 1476 (6%) | 1919 (8%)
MSF_TC: mod search by title query | 2141 (0%) | 3251 (3%) | 3770 (1%) | 4598 (2%) | 4887 (8%) | 2038 (0%) | 2294 (0%) | 4285 (5%) | 3214 (6%) | 2054 (0%) | 4145 (7%) | 4437 (9%)
DI_TC: Importing MARC records workflow Transaction &{tenant} | 1059451 (0%) | 20423 (0%) | 935626 (0%) | 950403 (0%) | 1047836 (0%) | 17946 (0%) | 1210429 (33%) | 17258 (0%) | 955349 (0%) | 2238208 (0%) | 18974 (0%) | 18762 (0%)
PRV_TC: View Patron record Group | 347 (0%) | 310 (0%) | 248 (0%) | 238 (0%) | 213 (0%) | 284 (0%) | 313 (0%) | 265 (0%) | 254 (0%) | 355 (0%) | 242 (0%) | 273 (0%)
ULR_TC: Users loan Renewal Transaction | 1059451 (0%) | 1852 (0%) | 1528 (0%) | 1717 (0%) | 1551 (0%) | 1595 (0%) | 1749 (0%) | 1526 (0%) | 1761 (0%) | 4117 (0%) | 1496 (0%) | 1617 (0%)
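Duration-style cells such as DATA IMPORT can be compared numerically once converted to seconds. A small sketch, using values taken from the table above (baseline Test 1 vs the single-ASG Test 8):

```python
# Compare DATA IMPORT durations between the baseline run (Test 1) and the
# single-ASG run (Test 8), using h:mm:ss values from the results table.
def to_seconds(hms: str) -> int:
    """Convert an 'h:mm:ss' duration string into total seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

baseline = to_seconds("0:52:03")   # Test 1
optimized = to_seconds("0:43:53")  # Test 8
change_pct = (baseline - optimized) / baseline * 100
print(f"DATA IMPORT: {baseline}s -> {optimized}s ({change_pct:.1f}% faster)")
# DATA IMPORT: 3123s -> 2633s (15.7% faster)
```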



Comparison

These graphs show the durations of all workflows compared across the best test results.

Part 1

 Image Added

Part 2

Image Added


Part 3

Image Added



Test №1

Introduction: The Baseline QCP1 Environment configuration was applied, and the qcp1_MiniMaster.jmx script was run.

Objective: The objective of this test was to evaluate the performance of the MCPT environment by applying the baseline configuration.

Instance CPU Utilization

Image Added

Service CPU Utilization

Here we can see that the mod-inventory-b module used 73% CPU.

Image Added
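Per-service CPU figures like the one above can be pulled from CloudWatch. A hypothetical sketch that only builds the request parameters, assuming the services run on ECS and report the standard `CPUUtilization` metric (cluster and service names below are placeholders, not the team's actual resource names):

```python
# Build the parameter dict for a CloudWatch GetMetricStatistics call that
# fetches per-service ECS CPU utilization. Assumption: services run on ECS
# and emit the standard AWS/ECS CPUUtilization metric.
from datetime import datetime, timedelta, timezone

def cpu_metric_query(cluster: str, service: str, hours: int = 1) -> dict:
    """Parameters for boto3's cloudwatch.get_metric_statistics(**params)."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/ECS",
        "MetricName": "CPUUtilization",
        "Dimensions": [
            {"Name": "ClusterName", "Value": cluster},
            {"Name": "ServiceName", "Value": service},
        ],
        "StartTime": end - timedelta(hours=hours),
        "EndTime": end,
        "Period": 300,  # 5-minute buckets
        "Statistics": ["Average", "Maximum"],
    }

# With boto3 configured, one would run (placeholder names):
# boto3.client("cloudwatch").get_metric_statistics(**cpu_metric_query("qcp1", "mod-inventory-b"))
```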

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added


Kafka metrics

Image Added


Image Added

OpenSearch metrics

Image Added

Image Added


DB CPU Utilization

DB CPU was 97% on average.

Image Added

DB Connections

The maximum number of DB connections was 1250.

Image Added


DB load

 Image Added                                                                                                                   

Top SQL-queries

Image Added


Test №2

Introduction: The Baseline QCP1 Environment configuration was applied, and the qcp1_MiniMaster.jmx script was run (Repeat Test 1).

Objective: The objective of this test was to validate the consistency of performance observed in Test 1 by repeating the same configuration.

Results: Results were almost the same as Test 1 for all workflows.

Instance CPU Utilization

Image Added

Service CPU Utilization

Here we can see that the mod-inventory-b module used 102% CPU at maximum.

Image Added

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added


Kafka metrics

Image Added


Image Added

OpenSearch metrics

Image Added

Image Added

DB CPU Utilization

DB CPU was 95% on average.

Image Added

DB Connections

Max number of DB connections was 1250.

Image Added

DB load

 Image Added                                                                                                                   

Top SQL-queries

Image Added


Test №3-4

Introduction: Test 3: The Baseline QCP1 Environment configuration was applied with two autoscaling groups, the first with 3 c7g.large instances for the okapi service and 5 c7g.large instances for other services; the qcp1_MiniMaster.jmx script was run. Test 4: Repeat Test 3.

Objective: The objective of this test was to evaluate the performance of the QCP1 environment by applying two distinct autoscaling groups with different instance types: c7g.large for the Okapi service and r7g.xlarge for all other services. The goal was to determine whether this modified configuration could achieve similar or improved performance compared to the baseline, while potentially optimizing resource allocation and cost.

Results: We observed nearly identical performance for almost all workflows compared to the Baseline configuration, and confirmed it in Test №4.

Instance CPU Utilization

...

Image Added

Service CPU Utilization

Here we can see that the mod-inventory-b module used 78k% of the CPU.

Image Added

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added


Kafka metrics

...

Image Added


Image Added

OpenSearch metrics

Image Added

...

Image Added

DB CPU Utilization

DB CPU was 97% maximum.

Image Added

DB Connections

Max number of DB connections was 2200.

Image Added

DB load

Image Added

Top SQL-queries

Image Added



Test №5-6-7

Introduction: The Baseline QCP1 Environment configuration was applied, and CPU=2 was set for all modules; two autoscaling groups were used, the first with 3 c7g.large instances for the okapi service and 5 c7g.large instances for other services; the qcp1_MiniMaster.jmx script was run.

Objective: Repeat tests to confirm performance.

Results: Performance was consistent with the baseline.

Instance CPU Utilization

...

Image Added


Service CPU Utilization

Here we can see that the mod-data-export-b module used 82k% of the CPU power of parameter CPU=2.

Image Added

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added

Kafka metrics

...

Image Added

...


Image Added


OpenSearch metrics

Image Added

Image Added

DB CPU Utilization

DB CPU was 97%.

Image Added

DB Connections

Max number of DB connections was 1250.

Image Added

DB load

Image Added

Top SQL-queries

...

Image Added


Test №8-9

Introduction: The Baseline QCP1 Environment configuration was applied, and CPU=2 was set for all modules; ONE autoscaling group with 5 c7g.large instances was used for all services; the qcp1_MiniMaster.jmx script was run. Test 9: Repeat Test 8.

Objective: The goal of the test is to put the okapi service in one tier with all other services and compare performance with the previous result and the baseline result.

Results: Performance was consistent with the baseline.

Instance CPU Utilization

Image Added

Service CPU Utilization

Here we can see that the mod-data-export-b module used 36k% AVG of the CPU power of parameter CPU=2.

Image Added

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added

Kafka metrics

Image Added


Image Added


OpenSearch metrics

Image Added


Image Added

DB CPU Utilization

DB CPU was 94%.

Image Added

DB Connections

Max number of DB connections was 1265.

Image Added

DB load

Image Added

Top SQL-queries

...

Image Added


Test №10

Introduction: The Baseline QCP1 Environment configuration was applied, and CPU=2 was set for all modules except CPU=2048 for mod-search; ONE autoscaling group with 5 c7g.large instances was used for all services; the qcp1_MiniMaster.jmx script was run.

Objective: The goal of the test is to confirm the test result with the previous configuration.

Results: Performance was consistent with the previous run.

Instance CPU Utilization

...

Image Added

Service CPU Utilization

Here we can see that the mod-data-export-b module used 116k% MAX of the CPU power of parameter CPU=2.

Image Added

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added

Kafka metrics

Image Added


Image Added


OpenSearch metrics

Image Added


Image Added

DB CPU Utilization

DB CPU was 94%.

Image Added

DB Connections

Max number of DB connections was 1220.

Image Added

DB load

Image Added

Top SQL-queries

Image Added


Test №11

Introduction: The Baseline QCP1 Environment configuration was applied, and CPU=2 was set for all modules except CPU=2048 for mod-search; ONE autoscaling group with 5 c7g.large instances was used for all services; the qcp1_MiniMaster.jmx script was run (Repeat Test 10).

Objective: The goal of the test is to confirm the test result with the previous configuration.

Results: Performance was consistent with the previous run.

Instance CPU Utilization

Image Added

Service CPU Utilization

Here we can see that the mod-data-export-b module used 36k% AVG of the CPU power of parameter CPU=2.

Image Added

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added

Kafka metrics

Image Added


Image Added


OpenSearch metrics

Image Added


Image Added

DB CPU Utilization

DB CPU was 94%.

Image Added

DB Connections

Max number of DB connections was 1200.

Image Added

DB load

Image Added

Top SQL-queries

Image Added


Test №12

Introduction: The Baseline QCP1 Environment configuration was applied, and CPU=2 was set for all modules except CPU=2048 for mod-search; ONE autoscaling group with 5 c7g.large instances was used for all services; the qcp1_MiniMaster.jmx script was run (Repeat Test 11).

Objective: The goal of the test is to confirm the test result with the previous configuration.

Results: Performance was consistent with the previous run.

Instance CPU Utilization

Image Added


Service CPU Utilization

Here we can see that the mod-data-export-b module used 36k% AVG of the CPU power of parameter CPU=2.

Image Added

Service Memory Utilization

We don't see any sign of memory leaks in any module; memory shows a stable trend.

Image Added

Kafka metrics

Image Added



Image Added

OpenSearch metrics

Image Added


Image Added

DB CPU Utilization

DB CPU was 94%.

Image Added

DB Connections

Max number of DB connections was 1265.

Image Added

DB load

    Image Added                                                                                                                   

Top SQL-queries

Image Added

Appendix

Infrastructure

PTF - Baseline QCP1 environment configuration (was changed during testing)

  • 10 m6g.2xlarge EC2 instances located in us-east-1
  • 1 database instance (writer): db.r6g.xlarge — 32 GiB memory, 4 vCPUs


  • Open Search ptf-test 
    • Data nodes
      • Instance type - r6g.2xlarge.search
      • Number of nodes - 4
      • Version: OpenSearch_2_7_R20240502
    • Dedicated master nodes
      • Instance type - r6g.large.search
      • Number of nodes - 3
  • MSK fse-tenant
    • kafka.m7g.xlarge brokers in 2 zones
    • Apache Kafka version 3.7.x 

    • EBS storage volume per broker 300 GiB

    • auto.create.topics.enable=true
    • log.retention.minutes=480
    • default.replication.factor=3

...

Baseline QCP1 Environment configuration:

  • Parameter srs.marcIndexers.delete.interval.seconds=86400 for mod-source-record-storage; the number of tasks to launch for service mod-marc-migrations-b was set to zero.
  • Instance type: m6g.2xlarge
  • Instances count: 10
  • Database: db.r6g.xlarge
  • Amazon OpenSearch Service: ptf-test, r6g.2xlarge.search (4 nodes)

...