FameFlower Test Results
Table of Contents
- Overview
- Test Runs
- Results
  1. High level FameFlower results data
  2. CPU Utilization
  3. Memory Usage
  4. Disk IO
  5. Database CPU Utilization
  6. Database Slow queries
  7. Database Missing indexes
  8. JVM Profiling result
- Recommended Improvements
- Appendix
Overview
Using the Carrier-io framework for capturing and analyzing performance test results, the following tests for the export instance UUIDs workflow were executed. The slowness depends on the number of records retrieved from the database, so we modeled the JMeter test scripts to make queries with one batch of keywords that return anywhere from 10K to 50K instances, and another batch of keywords that return between 50K and 100K hits. Since these are resource-intensive operations, we did not test these scenarios with more than 5 concurrent users.
The tests were run with 1 and 5 users, with queries that return 10K~50K instances and 50K~100K instances.
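Whether a keyword falls into the 10K~50K or the 50K~100K batch depends on how many instances it matches. A minimal sketch of how a candidate keyword can be sorted into a batch, run directly against the tenant schema that appears in the slow-query table below ('english' is a keyword the report's EXPLAIN ANALYZE query actually uses; any other candidate works the same way):

```sql
-- Sketch: count how many instances match a candidate keyword, mirroring the
-- full-text WHERE clause that mod-inventory-storage generates (see the
-- Database Slow queries section). Swap 'english' for any candidate keyword.
SELECT count(*)
FROM fs09000000_mod_inventory_storage.instance
WHERE to_tsvector('simple', f_unaccent(concat_space_sql(
        instance.jsonb->>'title',
        concat_array_object_values(instance.jsonb->'contributors', 'name'),
        concat_array_object_values(instance.jsonb->'identifiers', 'value'))))
      @@ to_tsquery('simple', f_unaccent('english'));
```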
Backend:
- mod-inventory-storage-19.1.2
- mod-inventory-14.1.3
- mod-authtoken-2.4.0
- mod-permissions-5.9.0
- okapi-2.38.0
Frontend:
- folio_inventory-2.0.2
The FOLIO build was deployed with 50+ ECS services distributed randomly across four m5.large EC2 instances in the fcp1-pvt cluster, and the database was created on a db.r5.xlarge AWS RDS instance. The logging level was set to the default, INFO.
High Level Summary
- Most failed requests were related to GET_/inventory/instances and GET_/instance-bulk/ids, which rely on mod-inventory-storage, even in a 1-user 30-min test run. This is due to a combination of slow-running queries and high memory usage when exporting more than 100K records.
- mod-inventory-storage used 99% of container memory on a machine with 8GB RAM to return 50K to 100K records.
- The workflow that retrieves more than 100K records became non-responsive even with 1 user.
- The workflow with more than 5 users became non-responsive, indicative of memory exhaustion. Consider adding more memory to the mod-inventory-storage container.
- The com.fasterxml.jackson.databind.ObjectMapper.readValue method in mod-inventory-storage consumed high CPU resources because of heavy JSON decoding; this implementation could be reviewed for optimization opportunities.
- Some of mod-inventory-storage's SQL queries took more than 500 ms to run; see the Database Slow queries section and Recommended Improvements for the JIRAs created to address them.
- mod-inventory-storage generated missing-index warnings; see Recommended Improvements for the JIRAs created by this testing effort.
Test Runs
30-min Runs for export instance UUIDs workflow:
Test | Virtual Users | Duration | OKAPI log level | Profiled | Ramp up (total time in seconds) | Size of response (how many instances were returned) |
---|---|---|---|---|---|---|
1. FameFlower | 1 | 30 min | INFO | No | 5 | 10K~50K instances |
2. FameFlower | 1 | 30 min | INFO | No | 1 | 50K~100K instances |
3. FameFlower | 1 | 30 min | INFO | Yes | 10 | 10K~50K instances |
4. FameFlower | 1 | 30 min | INFO | Yes | 10 | 50K~100K instances |
5. FameFlower | 5 | 30 min | INFO | No | 50 | 10K~50K instances |
6. FameFlower | 5 | 30 min | INFO | No | 10 | 50K~100K instances |
7. FameFlower | 5 | 30 min | INFO | Yes | 50 | 10K~50K instances |
8. FameFlower | 5 | 30 min | INFO | Yes | 50 | 50K~100K instances |
Results
*All numbers are in milliseconds except those in the Delta % column, which indicates the percentage difference going from 1 to 5 users (10K~50K instances retrieved).
1. High level FameFlower results data
1 and 5 user test runs, 10K~50K instances
Fig 1.1: The chart shows the overall high-level API stats obtained by JMeter calling various APIs in the export instance UUIDs workflow. It breaks down average response times per API call for the 1- and 5-user tests.
Fig 1.2: The charts below offer a clearer side-by-side comparison of the 1- and 5-user test runs, 50K~100K instances.
A few things to note:
- GET_/inventory/instances and GET_/instance-bulk/ids have the slowest response times and returned failed responses even for 1 user.
- The workflow with more than 100K records became non-responsive even with 1 user.
- The workflow with more than 5 users became unresponsive at times during the test runs; more on that below.
A side-by-side comparison of the 1- and 5-user test runs, 10K~50K instances
Fig 1.3: A side-by-side comparison of the 1- and 5-user test runs, 10K~50K instances.
A side-by-side comparison of the 1- and 5-user test runs, 50K~100K instances
Fig 1.4: A side-by-side comparison of the 1- and 5-user test runs, 50K~100K instances.
2. CPU Utilization
Fig 2.1: CPU utilization percentage chart for the 1- and 5-user test runs retrieving 10K~50K instances.
The services shown for the selected modules were chosen for their activity in the workflow and their prominent values compared to other modules.
The test run with 5 users and 10K~50K instances
Fig 2.2: CPU utilization of various modules running on an EC2 instance, including mod-inventory-storage.
Based on this CPU usage chart (Fig 2.2), we can see that the service that consumed the most CPU resources was mod-inventory-storage. According to the capacity test results, the saturation point was caused by high CPU utilization on one of the four nodes in the fcp1-pvt cluster.
3. Memory Usage
The following data shows the memory consumption of the ECS services, most notably mod-inventory-storage, during the 5-user test runs.
Fig 3.1: The mod-inventory-storage service consumed 99% of its allocated RAM in the 5-user, 10K~50K instances test run.
Fig 3.2: The mod-inventory-storage service consumed 99% of its allocated RAM in the 5-user, 50K~100K instances test run.
Note that when the profiler was turned on for mod-inventory-storage, performance took a nosedive and the module crashed a few times during a 30-minute test run. In the 5-user, 50K~100K instances test run, mod-inventory-storage crashed due to OOM: four instances of the service were active over the course of the run, meaning it crashed three times and new instances were spun up. Profiling obviously put extra stress on the module (and no one would run a profiler in production), but this is consistent with observations in the bug fest environment, where returning more records caused mod-inventory-storage to hit an OOM exception and crash. What is seen here is mod-inventory-storage operating at the edge of an OOM.
4. Disk IO
5. Database CPU Utilization
For 1 user - 30 min run
For 5 users - 30 min run
Database CPU usage was not high, but the queries took a long time to return; see below.
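One way to see this while a test is running (a sketch, assuming direct psql access to the RDS instance) is to list the statements currently executing and how long each has been running:

```sql
-- Sketch: list currently executing statements and their elapsed time,
-- longest-running first. pg_stat_activity is a standard PostgreSQL view.
SELECT now() - query_start AS elapsed,
       state,
       left(query, 120) AS query   -- truncate long generated SQL for readability
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY elapsed DESC NULLS LAST;
```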
6. Database Slow queries
The slowest queries, which accounted for most of the execution time, were initiated by the mod-inventory-storage service and are presented in the following table:
Percent of total time | Average Time, ms | Calls | Query |
---|---|---|---|
32% | 10,796 | 15 | SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8))) ORDER BY left(lower(f_unaccent(instance.jsonb->>$9)),$10), lower(f_unaccent(instance.jsonb->>$11)) |
23% | 22,250 | 5 | SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8)))) AND (to_tsvector($9, f_unaccent(instance.jsonb->>$10)) @@ replace((to_tsquery($11, f_unaccent($12)))::text, $13, $14)::tsquery) ORDER BY left(lower(f_unaccent(instance.jsonb->>$15)),$16), lower(f_unaccent(instance.jsonb->>$17)) |
13% | 1,709 | 37 | SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8)))) ORDER BY left(lower(f_unaccent(instance.jsonb->>$9)),$10), lower(f_unaccent(instance.jsonb->>$11)) LIMIT $12) x |
12% | 1,818 | 34 | WITH headrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>$1)) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($2, f_unaccent(concat_space_sql(instance.jsonb->>$3 , concat_array_object_values(instance.jsonb->$4,$5) , concat_array_object_values(instance.jsonb->$6,$7)))) @@ (to_tsquery($8, f_unaccent($9)))) AND left(lower(f_unaccent(jsonb->>$10)),$11) < ( SELECT left(lower(f_unaccent(jsonb->>$12)),$13) FROM fs09000000_mod_inventory_storage.instance ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) OFFSET $14 LIMIT $15 ) ORDER BY left(lower(f_unaccent(jsonb->>$16)),$17) LIMIT $18 OFFSET $19 ), allrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>$20)) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($21, f_unaccent(concat_space_sql(instance.jsonb->>$22 , concat_array_object_values(instance.jsonb->$23,$24) , concat_array_object_values(instance.jsonb->$25,$26)))) @@ (to_tsquery($27, f_unaccent($28)))) AND (SELECT COUNT(*) FROM headrecords) < $29 ) SELECT jsonb, title, $30 AS count FROM headrecords WHERE (SELECT COUNT(*) FROM headrecords) >= $31 UNION (SELECT jsonb, title, (SELECT COUNT(*) FROM allrecords) AS count FROM allrecords ORDER BY title LIMIT $32 OFFSET $33 ) ORDER BY title |
4% | 2,804 | 7 | SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8)))) AND (to_tsvector($9, f_unaccent(instance.jsonb->>$10)) @@ replace((to_tsquery($11, f_unaccent($12)))::text, $13, $14)::tsquery) ORDER BY left(lower(f_unaccent(instance.jsonb->>$15)),$16), lower(f_unaccent(instance.jsonb->>$17)) LIMIT $18) x |
3% | 1,865 | 9 | EXPLAIN ANALYZE WITH headrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>'title')) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector('simple', f_unaccent(concat_space_sql(instance.jsonb->>'title' , concat_array_object_values(instance.jsonb->'contributors','name') , concat_array_object_values(instance.jsonb->'identifiers','value')))) @@ (to_tsquery('simple', f_unaccent('''english''')))) AND left(lower(f_unaccent(jsonb->>'title')),600) < ( SELECT left(lower(f_unaccent(jsonb->>'title')),600) FROM fs09000000_mod_inventory_storage.instance ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) OFFSET 10000 LIMIT 1 ) ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) LIMIT 100 OFFSET 0 ), allrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>'title')) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector('simple', f_unaccent(concat_space_sql(instance.jsonb->>'title' , concat_array_object_values(instance.jsonb->'contributors','name') , concat_array_object_values(instance.jsonb->'identifiers','value')))) @@ (to_tsquery('simple', f_unaccent('''english''')))) AND (SELECT COUNT(*) FROM headrecords) < 100 ) SELECT jsonb, title, 0 AS count FROM headrecords WHERE (SELECT COUNT(*) FROM headrecords) >= 100 UNION (SELECT jsonb, title, (SELECT COUNT(*) FROM allrecords) AS count FROM allrecords ORDER BY title LIMIT 100 OFFSET 0 ) ORDER BY title |
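For reference, a breakdown like the table above can be pulled from the pg_stat_statements extension, assuming it is enabled on the RDS instance (a sketch; on PostgreSQL 13+ the columns are named total_exec_time and mean_exec_time instead of total_time and mean_time):

```sql
-- Sketch: top statements by share of total execution time, matching the
-- columns of the table above (percent of total time, average time, calls).
SELECT round((100 * total_time / sum(total_time) OVER ())::numeric, 0) AS pct_of_total,
       round(mean_time::numeric) AS avg_ms,
       calls,
       query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```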
7. Database Missing indexes
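As noted in the High Level Summary, mod-inventory-storage generated missing-index warnings during the test runs. The warnings flag the full-text WHERE clauses shown in the slow-query table: without an expression index, PostgreSQL sequentially scans the instance table and builds to_tsvector output for every row. The JIRAs under Recommended Improvements track the actual fix; purely as an illustrative sketch (FOLIO/RMB declares such indexes in the module's schema.json, and the indexed expression must match the generated SQL exactly), a backing GIN index would look like this:

```sql
-- Illustrative sketch only: a GIN expression index matching the keyword
-- search's to_tsvector(...) clause. The index name is hypothetical. Assumes
-- f_unaccent, concat_space_sql and concat_array_object_values are the
-- IMMUTABLE helper functions that RMB installs in the tenant schema.
CREATE INDEX IF NOT EXISTS instance_keyword_ft_idx
  ON fs09000000_mod_inventory_storage.instance
  USING gin (to_tsvector('simple', f_unaccent(concat_space_sql(
    jsonb->>'title',
    concat_array_object_values(jsonb->'contributors', 'name'),
    concat_array_object_values(jsonb->'identifiers', 'value')))));
```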
8. JVM Profiling result
JVM profiling of mod-inventory-storage, the most resource-consuming service, showed 6 methods with high CPU usage and a noticeable impact on overall service performance.
The com.fasterxml.jackson.databind.ObjectMapper.readValue method used the most CPU capacity, which led to performance degradation.
Recommended Improvements
Consider adding more memory to the mod-inventory-storage container if there is a need to export over 100K UUIDs.
JIRAs have been created to address mod-inventory-storage's missing indexes and slow queries.
Appendix
See the attached FameFlower Performance Test Runs.xlsx for details.