...
FameFlower Test Results
Overview
Using the Carrier-io framework for capturing and analyzing performance test results, the following tests for the export instance UUIDs workflow were executed.
The testing was performed with the following modules:
Backend:
- mod-inventory-storage-19.1.2
- mod-inventory-14.1.2
- mod-authtoken-2.4.0
- mod-permissions-5.9.0
- okapi-2.35.2
Frontend:
- folio_inventory-2.0.2
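The module versions enabled for the tenant under test can be cross-checked against Okapi. Below is a minimal sketch, not part of the original test harness; the Okapi URL, tenant id, and token are assumptions supplied via environment variables (fs09000000 matches the tenant schema that appears in the slow-query section later in this report).
```python
# Sketch: list the module versions enabled for a tenant by querying Okapi's
# /_/proxy/tenants/{tenant}/modules endpoint. URL, tenant and token are assumptions.
import os
import requests

OKAPI_URL = os.environ.get("OKAPI_URL", "http://localhost:9130")
TENANT = os.environ.get("OKAPI_TENANT", "fs09000000")
TOKEN = os.environ.get("OKAPI_TOKEN", "")  # required when Okapi security is enabled

def enabled_modules():
    resp = requests.get(
        f"{OKAPI_URL}/_/proxy/tenants/{TENANT}/modules",
        headers={"X-Okapi-Tenant": TENANT, "X-Okapi-Token": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    # Okapi returns a JSON array of {"id": "<module>-<version>"} entries.
    return sorted(entry["id"] for entry in resp.json())

for module_id in enabled_modules():
    print(module_id)
```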
Test Runs
- 30-min runs for the export instance UUIDs workflow:
Test | Virtual Users | Duration | OKAPI log level | Profiled | Ramp up | Size of records |
1. FameFlower-Baseline | 1 | 30 min | INFO | No | 5 | 10K~50K instances |
2. FameFlower-Baseline | 1 | 30 min | INFO | No | 1 | 50K~100K instances |
3. FameFlower-Baseline | 1 | 30 min | INFO | Yes | 10 | 10K~50K instances |
4. FameFlower-Baseline | 1 | 30 min | INFO | Yes | 10 | 50K~100K instances |
5. FameFlower-Baseline | 5 | 30 min | INFO | No | 50 | 10K~50K instances |
6. FameFlower-Baseline | 5 | 30 min | INFO | No | 10 | 50K~100K instances |
7. FameFlower-Baseline | 5 | 30 min | INFO | Yes | 50 | 10K~50K instances |
8. FameFlower-Baseline | 5 | 30 min | INFO | Yes | 50 | 50K~100K instances |
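As a rough illustration only, the sketch below shows how the Virtual Users, Ramp up, and Duration columns translate into a load profile. The actual runs were driven through the Carrier-io framework; run_test and run_iteration are hypothetical names, and the assumption that ramp-up is measured in seconds is mine.
```python
# Illustration only: how Virtual Users, Ramp up and Duration shape the load.
# The real runs were driven through Carrier-io; the ramp-up unit is assumed to be seconds.
import threading
import time

def run_iteration():
    """Placeholder for one pass of the export instance UUIDs workflow
    (a timed version is sketched under '1. High level FameFlower results data' below)."""

def run_test(virtual_users, ramp_up_s, duration_s):
    stop_at = time.monotonic() + duration_s

    def user_loop():
        while time.monotonic() < stop_at:
            run_iteration()

    interval = ramp_up_s / max(virtual_users - 1, 1)
    threads = []
    for i in range(virtual_users):
        if i:
            time.sleep(interval)  # spread user start-up across the ramp-up period
        t = threading.Thread(target=user_loop, daemon=True)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

# e.g. test 5 above: 5 virtual users ramped over 50 s, running for 30 minutes
# run_test(virtual_users=5, ramp_up_s=50, duration_s=30 * 60)
```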
Results
*All numbers are in milliseconds except for those in the Delta % column, which indicates the percentage difference going from 1 to 5 users.
1. High level FameFlower results data
- The chart shows the overall high-level API stats obtained by JMeter calling the various APIs in the save instance UUIDs workflow. It breaks down average response times per API call for the 1-user and 5-user tests. A few things to note:
...
- The workflow with more than 100K records becomes unresponsive even with 1 user.
The charts below offer a clearer side-by-side comparison of the 1-user and 5-user test runs.
...
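For context on which calls the JMeter scripts exercise, here is a minimal sketch of one workflow iteration with per-call timing. The endpoint paths come from the Issues section below; the CQL query, page limit, tenant, and token are illustrative assumptions. It also fills in the run_iteration placeholder used in the load-profile sketch above.
```python
# One pass of the export instance UUIDs workflow, timed per API call.
# Query, limit, tenant and token are assumptions; the real JMeter scripts
# drive the same endpoints with recorded search terms.
import os
import time
import requests

OKAPI_URL = os.environ.get("OKAPI_URL", "http://localhost:9130")
HEADERS = {
    "X-Okapi-Tenant": os.environ.get("OKAPI_TENANT", "fs09000000"),
    "X-Okapi-Token": os.environ.get("OKAPI_TOKEN", ""),
}

def timed_get(path, params):
    """Return the response time of a single GET call in milliseconds."""
    start = time.perf_counter()
    resp = requests.get(f"{OKAPI_URL}{path}", headers=HEADERS, params=params, timeout=300)
    resp.raise_for_status()
    return (time.perf_counter() - start) * 1000

def run_iteration(query='title="english*"'):
    # 1. Search instances, as the Inventory app does for the result list.
    search_ms = timed_get("/inventory/instances", {"query": query, "limit": 100})
    # 2. Export the UUIDs of every instance matching the same query.
    ids_ms = timed_get("/instance-bulk/ids", {"query": query})
    print(f"GET /inventory/instances: {search_ms:.0f} ms | GET /instance-bulk/ids: {ids_ms:.0f} ms")
```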
2. CPU utilization
...
The services shown for the selected modules were chosen for their activity in the workflow and for their prominent values compared to other modules.
Nothing conclusive is seen from the CPU utilization perspective: there is no clear increasing or decreasing trend in CPU utilization.
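Per-service CPU (and memory, referenced in the next section) can be pulled from CloudWatch for the ECS cluster named below. This is a sketch under stated assumptions: the modules run as ECS services named after themselves, the service list is illustrative, and the test window start/end times are known.
```python
# Sketch: average per-service CPU and memory utilization over one 30-min window,
# from CloudWatch metrics for the fcp1-pvt ECS cluster. Service names and the
# time window are assumptions for illustration.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
CLUSTER = "fcp1-pvt"
SERVICES = ["okapi", "mod-inventory", "mod-inventory-storage"]

def average_utilization(metric, service, start, end):
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ECS",
        MetricName=metric,  # "CPUUtilization" or "MemoryUtilization"
        Dimensions=[
            {"Name": "ClusterName", "Value": CLUSTER},
            {"Name": "ServiceName", "Value": service},
        ],
        StartTime=start,
        EndTime=end,
        Period=60,
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else None

end = datetime.now(timezone.utc)
start = end - timedelta(minutes=30)  # one 30-min test run
for service in SERVICES:
    cpu = average_utilization("CPUUtilization", service, start, end)
    mem = average_utilization("MemoryUtilization", service, start, end)
    print(f"{service}: cpu={cpu}, memory={mem}")
```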
3. Memory trends
The FOLIO build was deployed with 30 services installed randomly across 4 m5.large instances in the fcp1-pvt cluster, and the database was created on a db.r5.xlarge AWS RDS instance. The logging level was set to the default INFO.
According to the capacity performance test results, the saturation point was caused by high CPU utilization on one of the four nodes in the fcp1-pvt cluster.
Based on CPU usage per service, the most resource-consuming service was mod-inventory-storage.
The service uses 99% of its allocated RAM.
4. Database CPU trends
For 1 user - 30 min run
For 5 users - 30 min run
5. Slow queries
The slowest queries, which accounted for most of the execution time, were initiated by the mod-inventory-storage service and are presented in the following table:
Total time, % | Average time, ms | Calls | Query |
32% | 10,796 | 15 | SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8))) ORDER BY left(lower(f_unaccent(instance.jsonb->>$9)),$10), lower(f_unaccent(instance.jsonb->>$11)) |
23% | 22,25 m | 5 | SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8)))) AND (to_tsvector($9, f_unaccent(instance.jsonb->>$10)) @@ replace((to_tsquery($11, f_unaccent($12)))::text, $13, $14)::tsquery) ORDER BY left(lower(f_unaccent(instance.jsonb->>$15)),$16), lower(f_unaccent(instance.jsonb->>$17)) |
13% | 1,709 | 37 | SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8))) ORDER BY left(lower(f_unaccent(instance.jsonb->>$9)),$10), lower(f_unaccent(instance.jsonb->>$11)) LIMIT $12) x |
12% | 1,818 | 34 | WITH headrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>$1)) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($2, f_unaccent(concat_space_sql(instance.jsonb->>$3 , concat_array_object_values(instance.jsonb->$4,$5) , concat_array_object_values(instance.jsonb->$6,$7)))) @@ (to_tsquery($8, f_unaccent($9)))) AND left(lower(f_unaccent(jsonb->>$10)),$11) < ( SELECT left(lower(f_unaccent(jsonb->>$12)),$13) FROM fs09000000_mod_inventory_storage.instance ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) OFFSET $14 LIMIT $15 ) ORDER BY left(lower(f_unaccent(jsonb->>$16)),$17) LIMIT $18 OFFSET $19 ), allrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>$20)) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($21, f_unaccent(concat_space_sql(instance.jsonb->>$22 , concat_array_object_values(instance.jsonb->$23,$24) , concat_array_object_values(instance.jsonb->$25,$26)))) @@ (to_tsquery($27, f_unaccent($28)))) AND (SELECT COUNT(*) FROM headrecords) < $29 ) SELECT jsonb, title, $30 AS count FROM headrecords WHERE (SELECT COUNT(*) FROM headrecords) >= $31 UNION (SELECT jsonb, title, (SELECT COUNT(*) FROM allrecords) AS count FROM allrecords ORDER BY title LIMIT $32 OFFSET $33 ) ORDER BY title |
4% | 2,804 | 7 | SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8)))) AND (to_tsvector($9, f_unaccent(instance.jsonb->>$10)) @@ replace((to_tsquery($11, f_unaccent($12)))::text, $13, $14)::tsquery) ORDER BY left(lower(f_unaccent(instance.jsonb->>$15)),$16), lower(f_unaccent(instance.jsonb->>$17)) LIMIT $18) x |
3% | 1,865 | 9 | EXPLAIN ANALYZE WITH headrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>'title')) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector('simple', f_unaccent(concat_space_sql(instance.jsonb->>'title' , concat_array_object_values(instance.jsonb->'contributors','name') , concat_array_object_values(instance.jsonb->'identifiers','value')))) @@ (to_tsquery('simple', f_unaccent('''english''')))) AND left(lower(f_unaccent(jsonb->>'title')),600) < ( SELECT left(lower(f_unaccent(jsonb->>'title')),600) FROM fs09000000_mod_inventory_storage.instance ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) OFFSET 10000 LIMIT 1 ) ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) LIMIT 100 OFFSET 0 ), allrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>'title')) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector('simple', f_unaccent(concat_space_sql(instance.jsonb->>'title' , concat_array_object_values(instance.jsonb->'contributors','name') , concat_array_object_values(instance.jsonb->'identifiers','value')))) @@ (to_tsquery('simple', f_unaccent('''english''')))) AND (SELECT COUNT(*) FROM headrecords) < 100 ) SELECT jsonb, title, 0 AS count FROM headrecords WHERE (SELECT COUNT(*) FROM headrecords) >= 100 UNION (SELECT jsonb, title, (SELECT COUNT(*) FROM allrecords) AS count FROM allrecords ORDER BY title LIMIT 100 OFFSET 0 ) ORDER BY title |
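The table above has the shape of pg_stat_statements output. The sketch below shows how such a table can be regenerated; it assumes the pg_stat_statements extension is enabled on the RDS instance, that connection details come from the standard PG* environment variables, and that the server is PostgreSQL 10-12 (on 13+ the columns are total_exec_time / mean_exec_time).
```python
# Sketch: regenerate the slow-query table from pg_stat_statements.
import os
import psycopg2

SLOW_QUERY_SQL = """
    SELECT round(100 * total_time / sum(total_time) OVER ()) AS total_pct,
           round(mean_time::numeric, 1)                      AS avg_ms,
           calls,
           query
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;
"""

conn = psycopg2.connect(
    host=os.environ.get("PGHOST", "localhost"),
    dbname=os.environ.get("PGDATABASE", "folio"),
    user=os.environ.get("PGUSER", "postgres"),
    password=os.environ.get("PGPASSWORD", ""),
)
with conn, conn.cursor() as cur:
    cur.execute(SLOW_QUERY_SQL)
    for total_pct, avg_ms, calls, query in cur.fetchall():
        # Same layout as the table above: Total time, % | Average time, ms | Calls | Query
        print(f"{total_pct:.0f}% | {avg_ms} | {calls} | {query[:120]} |")
conn.close()
```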
Summary
See the attached FameFlower Performance Test Runs.xlsx for details.
...
- FOLIO performs better when it is not being profiled while the tests are running.
Issues
- Most of the failed requests were related to GET_/inventory/instances and GET_/instance-bulk/ids, which use the mod-inventory-storage service.
- GET_/inventory/instances and GET_/instance-bulk/ids average response times are still over 300 ms in the 8-user tests.
- Memory Issues:
- mod-inventory-storage has noticeable to significant gains in memory used over time. (Not new)
- Most modules, if not all, trend up in memory usage over time by about 1%-2%.
- org.slf4j.impl.Log4jLoggerAdapter.isDebugEnabled is still among the slowest methods, after Jackson deserialization.
Improvements
...
- The com.fasterxml.jackson.databind.ObjectMapper.readValue method in the mod-inventory-storage service overuses CPU resources because of the large amount of JSON decoding; this implementation could be reviewed and improved to reduce the number of JSON operations.