Check in items workflow Test Report (Fameflower)

PERF-37

Overview

This report covers performance testing of the check-in items workflow in the Fameflower release (PERF-37). The workflow was tested with 1, 5, 8, and 20 virtual users, each run lasting 30 minutes.


Backend:

  • mod-inventory-storage-19.1.2
  • mod-inventory-14.1.3
  • mod-circulation-18.0.9
  • mod-circulation-storage-11.0.8
  • okapi-2.38.0
  • mod-authtoken-2.4.0
  • mod-permissions-5.9.0
  • mod-data-export-1.1.1
  • mod-source-record-storage-3.1.4
  • mod-source-record-manager-2.1.4

Frontend:

  • folio_inventory-2.0.2
  • folio_circulation-2.0.0
  • folio_checkin-2.0.1 (Item Check-in)

Environment:

  • 55 back-end modules deployed in 110 ECS services
  • 3 okapi ECS services
  • 8 m5.large EC2 instances
  • db.r5.xlarge AWS RDS instance
  • WARNING logging level

High Level Summary

  1. Overall check-in time averages 781 ms for the typical use case of 8 users, and 1127.57 ms for 20 users

  2. Slow APIs taking more than 100 ms to run:
    1. POST checkin-by-barcode
    2. GET inventory/items
  3. mod-circulation-storage logs warnings about missing indexes. Adding those indexes could improve performance and stop the warnings. See Recommended Improvements for the JIRAs created to address them.
  4. JVM profiling shows JSON de/serialization to be among the slowest operations, totaling more CPU time than other calls. Since FOLIO modules retrieve and store JSON objects, making serialization and deserialization efficient is essential; see Recommended Improvements.

Test Runs

Test  Virtual Users  Duration  OKAPI log level  Profiled  Ramp up (total time in seconds)
1      1             30 min    WARNING          No          1
2      5             30 min    WARNING          No         50
3      8             30 min    WARNING          No         80
4     20             30 min    WARNING          No        200
5      8             30 min    WARNING          Yes        80

Results

  1. Overall check-in items workflow times

    For 30 minutes  Profiler  Average     75th %tile
    1 user          OFF       379.88 ms   379.88 ms
    5 users         OFF       715.4 ms    789.13 ms
    8 users         OFF       781 ms      822.02 ms
    20 users        OFF       1127.57 ms  1203.27 ms
    8 users         ON        1352.67 ms  1415.58 ms
  2. Slow APIs taking more than 100 ms to return (profiling OFF; 75th %tile)

    API                      1 user     5 users    8 users    20 users
    POST checkin-by-barcode  379.88 ms  470.19 ms  508.61 ms  750.88 ms
    GET inventory/items      168.20 ms  183.79 ms  208.23 ms  289.92 ms

JVM Profiling

  • Overall slow methods across the modules profiled (mod-circulation, mod-circulation-storage, mod-inventory, mod-inventory-storage): JSON de/serialization operations consumed the most CPU time (see High Level Summary)


Database

Database CPU usage increases with the number of users: about 3-5% for 1 user, rising to 60% for the high case of 20 users.

The following WARNING statements about missing indexes were generated during a test run and logged by mod-circulation-storage:

WARNING: Doing SQL query without index for scheduled_notice.jsonb->>'nextRunTime', CQL >>> SQL: nextRunTime < 2020-07-01T06:06:52.364Z >>> scheduled_notice.jsonb->>'nextRunTime' <'2020-07-01T06:06:52.364Z'
WARNING: Doing LIKE search without index for scheduled_notice.jsonb->'noticeConfig'->>'sendInRealTime', CQL >>> SQL: noticeConfig.sendInRealTime == true >>> lower(f_unaccent(scheduled_notice.jsonb->'noticeConfig'->>'sendInRealTime')) LIKE lower(f_unaccent('true'))
WARNING: Doing LIKE search without index for scheduled_notice.jsonb->>'triggeringEvent', CQL >>> SQL: triggeringEvent == "Hold expiration" >>> lower(f_unaccent(scheduled_notice.jsonb->>'triggeringEvent')) LIKE lower(f_unaccent('Hold expiration'))
WARNING: Doing LIKE search without index for scheduled_notice.jsonb->>'triggeringEvent', CQL >>> SQL: triggeringEvent == "Request expiration" >>> lower(f_unaccent(scheduled_notice.jsonb->>'triggeringEvent')) LIKE lower(f_unaccent('Request expiration'))
WARNING: Doing LIKE search without index for jsonb->>'requestId', CQL >>> SQL: requestId == 5a1739e3-e9e2-4fe9-82a2-1c8285252fc9 >>> lower(f_unaccent(jsonb->>'requestId')) LIKE lower(f_unaccent('5a1739e3-e9e2-4fe9-82a2-1c8285252fc9'))
2020-07-01T06:08:38.776Z WARNING: Doing LIKE search without index for patron_action_session.jsonb->>'patronId', CQL >>> SQL: patronId == 2c5e835a-c12d-45b4-a949-262c0dc2f65d >>> lower(f_unaccent(patron_action_session.jsonb->>'patronId')) LIKE lower(f_unaccent('2c5e835a-c12d-45b4-a949-262c0dc2f65d'))
WARNING: Doing LIKE search without index for patron_action_session.jsonb->>'actionType', CQL >>> SQL: actionType == Check-in >>> lower(f_unaccent(patron_action_session.jsonb->>'actionType')) LIKE lower(f_unaccent('Check-in'))
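In RMB-based modules these indexes are normally declared in the module's schema.json rather than created by hand, but equivalent DDL gives an idea of what the fix looks like. This is a sketch only: the schema, table, and field names are taken from the warnings above, f_unaccent is RMB's immutable wrapper around unaccent, and text_pattern_ops is assumed so a b-tree index can serve the generated LIKE comparisons.

```sql
-- Sketch only: RMB modules declare these in schema.json, not raw DDL.

-- Range comparison on nextRunTime: plain b-tree expression index
CREATE INDEX IF NOT EXISTS scheduled_notice_next_run_time_idx
  ON fs09000000_mod_circulation_storage.scheduled_notice
  ((jsonb ->> 'nextRunTime'));

-- LIKE searches: index the exact lower(f_unaccent(...)) expression the
-- generated SQL uses, so the planner can match it
CREATE INDEX IF NOT EXISTS scheduled_notice_triggering_event_idx
  ON fs09000000_mod_circulation_storage.scheduled_notice
  (lower(f_unaccent(jsonb ->> 'triggeringEvent')) text_pattern_ops);

CREATE INDEX IF NOT EXISTS patron_action_session_patron_id_idx
  ON fs09000000_mod_circulation_storage.patron_action_session
  (lower(f_unaccent(jsonb ->> 'patronId')) text_pattern_ops);
```

An index only helps if its expression matches the generated predicate exactly, which is why each index repeats the lower(f_unaccent(...)) wrapper from the warnings.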

Database Slow queries 

The slowest queries, which accounted for most of the execution time, were initiated by the mod-inventory-storage and mod-circulation-storage services and are presented in the following table:


1. 49% of total time, 39 ms average, 39,233 calls (mod_inventory_storage):

   SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.item WHERE lower(f_unaccent(item.jsonb->>$1)) LIKE lower(f_unaccent($2)) LIMIT $3) x

   Example:

   SELECT COUNT(*) FROM (SELECT jsonb,id FROM fs09000000_mod_inventory_storage.item WHERE lower(f_unaccent(item.jsonb->>'barcode')) LIKE lower(f_unaccent('090761167')) LIMIT 1) x

2. 48% of total time, 39 ms average, 39,232 calls (mod_inventory_storage):

   SELECT jsonb,id FROM fs09000000_mod_inventory_storage.item WHERE lower(f_unaccent(item.jsonb->>$1)) LIKE lower(f_unaccent($2)) LIMIT $3 OFFSET $4

   Example:

   SELECT jsonb,id FROM fs09000000_mod_inventory_storage.item WHERE lower(f_unaccent(item.jsonb->>'barcode')) LIKE lower(f_unaccent('090761167')) LIMIT 1 OFFSET 0

3. 0.3% of total time, 56 ms average, 185 calls (mod_circulation_storage):

   SELECT jsonb,id FROM fs09000000_mod_circulation_storage.loan WHERE (lower(f_unaccent(loan.jsonb->$1->>$2)) LIKE lower(f_unaccent($3))) AND (loan.jsonb->>$4 ~ $5) LIMIT $6 OFFSET $7

   Example:

   SELECT jsonb,id FROM fs09000000_mod_circulation_storage.loan WHERE (lower(f_unaccent(loan.jsonb->'status'->>'name')) LIKE lower(f_unaccent('Open'))) AND (loan.jsonb->>'itemId' ~ '<itemId>') LIMIT 1 OFFSET 0
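The two mod_inventory_storage queries above filter items through the identical lower(f_unaccent(item.jsonb->>'barcode')) LIKE expression, so a single expression index could serve both the COUNT and the SELECT. A sketch only, if such an index is not already defined (schema name taken from the queries above; in practice RMB modules declare this in schema.json):

```sql
-- Matches the generated predicate:
--   lower(f_unaccent(item.jsonb->>'barcode')) LIKE lower(f_unaccent($2))
CREATE INDEX IF NOT EXISTS item_barcode_idx
  ON fs09000000_mod_inventory_storage.item
  (lower(f_unaccent(jsonb ->> 'barcode')) text_pattern_ops);
```

At roughly 39,000 calls per 30-minute run, even a few milliseconds saved per barcode lookup would be significant.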


CPU Utilization


Module                 1 user: avg (range) %  5 users: avg (range) %  8 users: avg (range) %  20 users: avg (range) %
Okapi                  1.96 (0.30-5.85)       5.95 (1.26-16.65)       7.92 (1.41-16.07)       17.52 (2.36-29.85)
mod-inventory          0.42 (0.18-3.41)       0.78 (0.35-8.46)        0.62 (0.8-2.17)         1.01 (0.25-3.78)
mod-inventory-storage  0.95 (0.428-4.29)      2.98 (0.36-9.11)        3.56 (1.84-19.72)       7.98 (1.88-15.69)

Memory

Memory usage was stable throughout the runs; apart from an occasional spike, consumption stayed consistent over each 30-minute run.

Module                   1 user (avg)  5 users (avg)  8 users (avg)  20 users (avg)
Okapi                    61%           61%            58%            58%
mod-circulation          61%           62%            70%            71%
mod-circulation-storage  31%           34%            37%            40%
mod-inventory            36%           36%            36%            36%
mod-inventory-storage    39%           41%            41%            41%

Recommended Improvements 

    • In mod-circulation and Okapi, consider using a more efficient JSON package, or invoking the existing Jackson serialization calls differently, since JVM profiling shows JSON de/serialization to be among the slowest operations.
    • The missing-index warnings are covered by the following stories:
      • CIRCSTORE-215
      • CIRCSTORE-216
    • Follow-up stories will study the performance of the two APIs that still take over 100 ms to return, to see where they can be improved:
      • PERF-79
      • PERF-90

Appendix

See the attached circulation_checkIn_FF.xlsx for details.