
Export Instance UUIDs Test Report


Related issue: PERF-28

FameFlower Test Results

Overview

The following tests of the export instance UUIDs workflow were executed, using the Carrier-io framework to capture and analyze the performance test results.

Testing was performed with the following modules:

Backend:
- mod-inventory-storage-19.1.2
- mod-inventory-14.1.3
- mod-authtoken-2.4.0
- mod-permissions-5.9.0
- okapi-2.38.0
Frontend:
- folio_inventory-2.0.2

Test Runs

  • 30-min Runs for export instance UUIDs workflow:

| Test | Virtual Users | Duration | OKAPI log level | Profiled | Ramp up (total time in seconds) | Size of response (how many instances were returned) |
| 1. FameFlower | 1 | 30 min | INFO | No | 5 | 10K~50K instances |
| 2. FameFlower | 1 | 30 min | INFO | No | 1 | 50K~100K instances |
| 3. FameFlower | 1 | 30 min | INFO | Yes | 10 | 10K~50K instances |
| 4. FameFlower | 1 | 30 min | INFO | Yes | 10 | 50K~100K instances |
| 5. FameFlower | 5 | 30 min | INFO | No | 50 | 10K~50K instances |
| 6. FameFlower | 5 | 30 min | INFO | No | 10 | 50K~100K instances |
| 7. FameFlower | 5 | 30 min | INFO | Yes | 50 | 10K~50K instances |
| 8. FameFlower | 5 | 30 min | INFO | Yes | 50 | 50K~100K instances |

Results

*All numbers are in milliseconds, except those in the Delta % column, which indicate the percentage difference when going from 1 to 5 users (10K~50K instances).

1. High level FameFlower results data

1- and 5-user test runs, 10K~50K instances


1- and 5-user test runs, 50K~100K instances


  • The chart shows the overall high-level API stats obtained by JMeter calling the various APIs in the export instance UUIDs workflow. It breaks down average response times per API call for the 1- and 5-user tests.

A few things to note:

  • GET_/inventory/instances and GET_/instance-bulk/ids have the slowest response times and return failed responses even with 1 user (a request sketch for these calls follows this list)
  • The workflow with more than 100K records becomes unresponsive even with 1 user
  • The workflow with more than 5 users becomes unresponsive
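
For illustration only, and not taken from the actual JMeter test plan, the sketch below shows roughly how a client drives these two endpoints through Okapi: a search against GET /inventory/instances followed by GET /instance-bulk/ids to pull the matching instance UUIDs in bulk. The Okapi URL, tenant, token, and CQL query string are placeholder assumptions, not values from these test runs.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ExportInstanceUuidsSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder values; a real run would use the environment's Okapi URL, tenant, and token.
        String okapiUrl = "http://okapi.example.org:9130";
        String tenant = "fs09000000";
        String token = "<okapi-token>";
        String cql = URLEncoder.encode("title all \"english\"", StandardCharsets.UTF_8);

        HttpClient client = HttpClient.newHttpClient();

        // Step 1: search instances (this call showed the slowest response times in the tests).
        HttpRequest search = HttpRequest.newBuilder()
            .uri(URI.create(okapiUrl + "/inventory/instances?query=" + cql + "&limit=100"))
            .header("X-Okapi-Tenant", tenant)
            .header("X-Okapi-Token", token)
            .header("Accept", "application/json")
            .GET()
            .build();
        HttpResponse<String> searchResponse = client.send(search, HttpResponse.BodyHandlers.ofString());
        System.out.println("inventory/instances: HTTP " + searchResponse.statusCode());

        // Step 2: fetch the UUIDs of all matching instances in one bulk call.
        HttpRequest bulkIds = HttpRequest.newBuilder()
            .uri(URI.create(okapiUrl + "/instance-bulk/ids?query=" + cql))
            .header("X-Okapi-Tenant", tenant)
            .header("X-Okapi-Token", token)
            .header("Accept", "application/json")
            .GET()
            .build();
        HttpResponse<String> idsResponse = client.send(bulkIds, HttpResponse.BodyHandlers.ofString());
        System.out.println("instance-bulk/ids: HTTP " + idsResponse.statusCode());
    }
}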

The charts below offer a clearer side-by-side comparison for the 1- and 5-user test runs, 50K~100K instances.



A side-by-side comparison for the 1- and 5-user test runs, 10K~50K instances


A side-by-side comparison for the 1- and 5-user test runs, 50K~100K instances


2.  CPU Utilization comparisons

The services for the selected modules were chosen because of their activity in the workflow and their prominent values compared to other modules.
Data were obtained from the 30-min test runs with 1 and 5 users, 10K~50K instances.

3.  Memory trends

The FOLIO build was deployed with 50+ ECS services distributed randomly across 4 m5.large instances in the fcp1-pvt cluster, and the database was created on a db.r5.xlarge AWS RDS instance. The logging level was set to the default INFO.

According to the capacity performance test results, the saturation point was caused by high CPU utilization on one of the four nodes in the fcp1-pvt cluster.



Based on CPU usage per service, the most CPU-consuming service was mod-inventory-storage.

The test run with 5 users and 10K~50K instances



The service uses 99% of its allocated RAM.



During testing of the workflow with 5 users and 50K~100K instances, mod-inventory-storage crashed a few times due to OOM. Four instances of mod-inventory-storage appear as active in this test run, which means the service crashed 3 times and a new mod-inventory-storage instance was spun up each time.
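
A minimal sketch, assuming a plain standalone JVM rather than the actual mod-inventory-storage code: heap pressure of this kind can be watched from inside the process with the standard MemoryMXBean. The class name and 30-second interval below are arbitrary choices for illustration.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapUsageLogger {
    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Log heap usage every 30 seconds; used sitting near max is the pattern described above.
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();
            long usedMb = heap.getUsed() / (1024 * 1024);
            long maxMb = heap.getMax() / (1024 * 1024); // -1 if the maximum is undefined
            System.out.printf("heap used: %d MB of %d MB%n", usedMb, maxMb);
        }, 0, 30, TimeUnit.SECONDS);
    }
}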

4. Database CPU trends  

For 1 user - 30 min run

For 5 users - 30 min run



5.  Slow queries

The slowest queries, which took most of the execution time, were initiated by the mod-inventory-storage service and are presented in the following table:

| Percent of total time | Average time, ms | Calls | Query |
| 32% | 10,796 | 15 | SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8))) ORDER BY left(lower(f_unaccent(instance.jsonb->>$9)),$10), lower(f_unaccent(instance.jsonb->>$11)) |
| 23% | 22,250 | 5 | SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8)))) AND (to_tsvector($9, f_unaccent(instance.jsonb->>$10)) @@ replace((to_tsquery($11, f_unaccent($12)))::text, $13, $14)::tsquery) ORDER BY left(lower(f_unaccent(instance.jsonb->>$15)),$16), lower(f_unaccent(instance.jsonb->>$17)) |
| 13% | 1,709 | 37 | SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8))) ORDER BY left(lower(f_unaccent(instance.jsonb->>$9)),$10), lower(f_unaccent(instance.jsonb->>$11)) LIMIT $12) x |
| 12% | 1,818 | 34 | WITH headrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>$1)) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($2, f_unaccent(concat_space_sql(instance.jsonb->>$3 , concat_array_object_values(instance.jsonb->$4,$5) , concat_array_object_values(instance.jsonb->$6,$7)))) @@ (to_tsquery($8, f_unaccent($9)))) AND left(lower(f_unaccent(jsonb->>$10)),$11) < ( SELECT left(lower(f_unaccent(jsonb->>$12)),$13) FROM fs09000000_mod_inventory_storage.instance ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) OFFSET $14 LIMIT $15 ) ORDER BY left(lower(f_unaccent(jsonb->>$16)),$17) LIMIT $18 OFFSET $19 ), allrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>$20)) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($21, f_unaccent(concat_space_sql(instance.jsonb->>$22 , concat_array_object_values(instance.jsonb->$23,$24) , concat_array_object_values(instance.jsonb->$25,$26)))) @@ (to_tsquery($27, f_unaccent($28)))) AND (SELECT COUNT(*) FROM headrecords) < $29 ) SELECT jsonb, title, $30 AS count FROM headrecords WHERE (SELECT COUNT(*) FROM headrecords) >= $31 UNION (SELECT jsonb, title, (SELECT COUNT(*) FROM allrecords) AS count FROM allrecords ORDER BY title LIMIT $32 OFFSET $33 ) ORDER BY title |
| 4% | 2,804 | 7 | SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector($1, f_unaccent(concat_space_sql(instance.jsonb->>$2 , concat_array_object_values(instance.jsonb->$3,$4) , concat_array_object_values(instance.jsonb->$5,$6)))) @@ (to_tsquery($7, f_unaccent($8)))) AND (to_tsvector($9, f_unaccent(instance.jsonb->>$10)) @@ replace((to_tsquery($11, f_unaccent($12)))::text, $13, $14)::tsquery) ORDER BY left(lower(f_unaccent(instance.jsonb->>$15)),$16), lower(f_unaccent(instance.jsonb->>$17)) LIMIT $18) x |
| 3% | 1,865 | 9 | EXPLAIN ANALYZE WITH headrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>'title')) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector('simple', f_unaccent(concat_space_sql(instance.jsonb->>'title' , concat_array_object_values(instance.jsonb->'contributors','name') , concat_array_object_values(instance.jsonb->'identifiers','value')))) @@ (to_tsquery('simple', f_unaccent('''english''')))) AND left(lower(f_unaccent(jsonb->>'title')),600) < ( SELECT left(lower(f_unaccent(jsonb->>'title')),600) FROM fs09000000_mod_inventory_storage.instance ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) OFFSET 10000 LIMIT 1 ) ORDER BY left(lower(f_unaccent(jsonb->>'title')),600) LIMIT 100 OFFSET 0 ), allrecords AS ( SELECT jsonb, lower(f_unaccent(jsonb->>'title')) AS title FROM fs09000000_mod_inventory_storage.instance WHERE (to_tsvector('simple', f_unaccent(concat_space_sql(instance.jsonb->>'title' , concat_array_object_values(instance.jsonb->'contributors','name') , concat_array_object_values(instance.jsonb->'identifiers','value')))) @@ (to_tsquery('simple', f_unaccent('''english''')))) AND (SELECT COUNT(*) FROM headrecords) < 100 ) SELECT jsonb, title, 0 AS count FROM headrecords WHERE (SELECT COUNT(*) FROM headrecords) >= 100 UNION (SELECT jsonb, title, (SELECT COUNT(*) FROM allrecords) AS count FROM allrecords ORDER BY title LIMIT 100 OFFSET 0 ) ORDER BY title |

6.  Missing indexes

7. CPU Profiling result


CPU profiling of the most resource-consuming service, mod-inventory-storage, showed 6 methods with high CPU usage and a noticeable impact on overall service performance.



The fasterxml.jackson.databind.ObjectMapper.readValue method uses most of the CPU capacity, which leads to performance degradation.
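
A hedged sketch of the kind of change this points at, not the module's actual implementation: reusing one configured ObjectMapper and a pre-built ObjectReader avoids repeated mapper setup and type resolution around readValue, and decoding only the needed fields keeps each call cheap. The Instance class and sample JSON below are simplified stand-ins for the real record type.

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;

public class InstanceJsonDecoding {

    // Simplified stand-in for the real instance record; only the fields needed for the export.
    @JsonIgnoreProperties(ignoreUnknown = true)
    public static class Instance {
        public String id;
        public String title;
    }

    // ObjectMapper is thread-safe after configuration; creating it once per process
    // (rather than per request or per row) avoids repeated setup cost.
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // A pre-built ObjectReader for a fixed target type is cheaper to reuse than
    // calling MAPPER.readValue(json, Instance.class) with type resolution each time.
    private static final ObjectReader INSTANCE_READER = MAPPER.readerFor(Instance.class);

    public static Instance decode(String jsonb) throws java.io.IOException {
        return INSTANCE_READER.readValue(jsonb);
    }

    public static void main(String[] args) throws Exception {
        String sample = "{\"id\":\"0001\",\"title\":\"example\",\"contributors\":[]}";
        Instance instance = decode(sample);
        System.out.println(instance.id + " / " + instance.title);
    }
}

Whether the bigger win would come from reader reuse or from decoding fewer fields of the instance JSONB would need to be confirmed by re-profiling.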


Summary

See the attached FameFlower Performance Test Runs.xlsx for details.

  • FOLIO performs better when the tests are run without profiling

Issues

  • Most failed requests were related to GET_/inventory/instances and GET_/instance-bulk/ids, which use the mod-inventory-storage service
  • GET_/inventory/instances and GET_/instance-bulk/ids return failed responses even with 1 user (30-min test run)
  • mod-inventory-storage crashed a few times due to OutOfMemoryError during the test runs
  • The workflow with more than 100K records becomes unresponsive even with 1 user
  • The workflow with more than 5 users becomes unresponsive
  • Memory issues:
    • mod-inventory-storage shows noticeable to significant growth in memory used.
  • The fasterxml.jackson.databind.ObjectMapper.readValue method of the mod-inventory-storage service overuses CPU resources because of heavy JSON decoding; this implementation could be reviewed and improved to reduce JSON operations
