

Overview

This is a report for a series of Check-in-check-out test runs against the Honeysuckle release.

PERF-135

...

  • 61 back-end modules deployed in 110 ECS services
  • 3 okapi ECS services
  • 8 m5.large EC2 instances
  • 2 db.r5.xlarge AWS RDS instances
  • INFO logging level

High Level Summary

  • Check-out: Honeysuckle is 9%-28% slower than Goldenrod
  • Check-in: Honeysuckle is 4%-22% slower than Goldenrod
  • APIs that became slower in Honeysuckle: GET /automated-patron-blocks/{id} (150% slower) and GET /circulation/loans (60% slower). These are covered by MODPATBLK-70 and CIRC-1014, respectively
  • Okapi v4.3.3 seems to use 2x-3x as many CPU cycles as v1.3.2 (Goldenrod). A potential issue was found with the logging methods: OKAPI-964
  • mod-pubsub has a memory leak that would drag down performance under high load (see the section on the longevity test): MODPUBSUB-136
  • Caching Okapi tokens in Okapi reduced mod-authtoken's CPU usage by over 90%
  • Database memory usage improved dramatically compared to Goldenrod's; little memory consumption was observed.

Test Runs

Test | Virtual Users | Duration | OKAPI log level
1    | 1             | 30 mins  | INFO
2    | 5             | 30 mins  | INFO
3    | 8             | 30 mins  | INFO
4    | 20            | 30 mins  | INFO
5    | 20            | 24 hours | INFO

Results

Response Times


         | Average (seconds)    | 50th %tile (seconds) | 75th %tile (seconds) | 95th %tile (seconds)
         | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out
1 user   | 0.967    | 1.989     | 0.889    | 1.832     | 0.984    | 2.201     | 1.254    | 2.815
5 users  | 1.053    | 2.171     | 0.981    | 1.969     | 1.114    | 2.253     | 1.528    | 3.370
8 users  | 1.193    | 2.244     | 1.076    | 2.022     | 1.339    | 2.372     | 1.895    | 3.544
20 users | 2.391    | 3.901     | 1.639    | 3.073     | 2.263    | 4.12      | 4.811    | 8.784

...

  • Subsequent investigations (PERF-140 and CIRC-1014) into GET /circulation/loans do not show degradation caused by the API itself. We hypothesize that other API calls executed during the test run may have dragged down the response time, particularly when they were reading and writing the same rows in the database at the same time (see the query sketch below).
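This contention hypothesis can be checked directly on the database while a test is running. The query below is a minimal sketch, assuming PostgreSQL 9.6+ (as on the RDS instances used here) where pg_blocking_pids() is available; it lists sessions that are blocked on locks together with the sessions holding those locks, and is offered only as a diagnostic aid, not as something the test runs actually executed.

Code Block
-- Minimal lock-contention check: show blocked sessions and who is blocking them.
SELECT blocked.pid                  AS blocked_pid,
       blocked.query                AS blocked_query,
       now() - blocked.query_start  AS blocked_for,
       blocking.pid                 AS blocking_pid,
       blocking.query               AS blocking_query
FROM pg_stat_activity AS blocked
JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(blocking_pid) ON true
JOIN pg_stat_activity AS blocking ON blocking.pid = b.blocking_pid
ORDER BY blocked_for DESC;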

...

  • Service modules' memory utilization
    • No modules exhibited memory leaks except for mod-pubsub
  • Although there were two instances of mod-pubsub running on two different EC2 instances, mod-pubsub's traffic appeared to be pinned to one instance. Here are graphs showing mod-pubsub on one instance using up memory and CPU resources, and on the other instance not showing much activity:
    • mod-pubsub and Okapi on another node - Okapi's CPU utilization dwindles while mod-pubsub does not seem to be busy at all

CPUs and Memories

Okapi was profiled because of the apparent 3x CPU utilization compared to the Goldenrod runs.

...

  1. mod-authtoken uses much less CPU in Honeysuckle, over 90% reduction across all tests! This is because of the token caching functionality that was added to Okapi 4.x
  2. mod-circulation's CPU utilization in Honeysuckle averages over 20% lower than in Goldenrod.
  3. mod-circulation-storage's CPU utilization in Honeysuckle is about 10-30% higher than in Goldenrod
  4. mod-inventory's CPU utilization in Honeysuckle averages 30% more than in Goldenrod
  5. mod-inventory-storage's CPU utilization in Honeysuckle averages 20% more than in Goldenrod 
  6. mod-pubsub's CPU utilization in Honeysuckle is about 15% less than in Goldenrod
  7. mod-patron-blocks CPU utilization in Honeysuckle is at least 30% less than in Goldenrod

JVM Profiling

Because Okapi's CPU utilization in Honeysuckle averaged 2x to 3x higher than in Goldenrod, it was profiled to get more insight into what was happening inside it.

...

Note that the total CPU time of the AbstractLogger.Info method in Okapi 4.3.3 is about 3x higher than in Goldenrod.  This is confirmed by Okapi 4.3.3's metrics, which show the response times of the ProxyContext.logRequest and ProxyContext.logResponse methods degrading over time. These two methods need to be investigated.


Database

The database CPU utilization is about the same between Honeysuckle and Goldenrod.

...

Goldenrod's memory profile shows quick claims of memory over the 30-minute test runs.


Missing Indexes

Honeysuckle tests revealed the following missing indexes:

mod-circulation-storage missing indexes

Code Block
WARNING: Doing LIKE search without index for jsonb->>'requestId', CQL >>> SQL: requestId == 920e1d64-c221-48a0-a44d-ff50f3ad6cd6 >>> lower(f_unaccent(jsonb->>'requestId')) LIKE lower(f_unaccent('920e1d64-c221-48a0-a44d-ff50f3ad6cd6'))
WARNING: Doing FT search without index for request.jsonb->>'requesterId', CQL >>> SQL: requesterId = ae4c1cf3-0738-4465-8112-e75089e5b5c6 >>> get_tsvector(f_unaccent(request.jsonb->>'requesterId')) @@ tsquery_phrase(f_unaccent('ae4c1cf3-0738-4465-8112-e75089e5b5c6'))
WARNING: Doing FT search without index for request.jsonb->>'pickupServicePointId', CQL >>> SQL: pickupServicePointId = 7068e104-aa14-4f30-a8bf-71f71cc15e07 >>> get_tsvector(f_unaccent(request.jsonb->>'pickupServicePointId')) @@ tsquery_phrase(f_unaccent('7068e104-aa14-4f30-a8bf-71f71cc15e07'))
WARNING: Doing LIKE search without index for patron_action_session.jsonb->>'patronId', CQL >>> SQL: patronId == d7cabcb2-7431-43ea-a2cc-0dfe5bee17c6 >>> lower(f_unaccent(patron_action_session.jsonb->>'patronId')) LIKE lower(f_unaccent('d7cabcb2-7431-43ea-a2cc-0dfe5bee17c6'))
WARNING: Doing LIKE search without index for patron_action_session.jsonb->>'actionType', CQL >>> SQL: actionType == Check-out >>> lower(f_unaccent(patron_action_session.jsonb->>'actionType')) LIKE lower(f_unaccent('Check-out'))
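The warnings spell out the exact expressions being searched, so matching expression indexes can be sketched directly from them. The statements below are illustrative only: the index names are hypothetical, and in FOLIO these indexes would normally be declared in the module's schema.json (so that RMB creates them) rather than created by hand.

Code Block
-- LIKE searches need a btree index on the same lower(f_unaccent(...)) expression:
CREATE INDEX patron_action_session_patronid_idx
    ON patron_action_session (lower(f_unaccent(jsonb ->> 'patronId')));
CREATE INDEX patron_action_session_actiontype_idx
    ON patron_action_session (lower(f_unaccent(jsonb ->> 'actionType')));

-- Full-text (FT) searches go through get_tsvector(), so they need a GIN index on that expression:
CREATE INDEX request_requesterid_idx_ft
    ON request USING gin (get_tsvector(f_unaccent(jsonb ->> 'requesterId')));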

...

Code Block
WARNING: Doing LIKE search without index for accounts.jsonb->>'userId', CQL >>> SQL: userId == e96618a9-04ee-4fea-aa60-306a8f4dd89b >>> lower(f_unaccent(accounts.jsonb->>'userId')) LIKE lower(f_unaccent('e96618a9-04ee-4fea-aa60-306a8f4dd89b'))
WARNING: Doing LIKE search without index for accounts.jsonb->'status'>>'name', CQL >>> SQL: status.name <> Closed >>> lower(f_unaccent(accounts.jsonb>'status'->>'name')) NOT LIKE lower(f_unaccent('Closed'))
WARNING: Doing LIKE search without index for manualblocks.jsonb->>'userId', CQL >>> SQL: userId == a79b533d-8f29-4be1-9415-5f5cd936623b >>> lower(f_unaccent(manualblocks.jsonb->>'userId')) LIKE lower(f_unaccent('a79b533d-8f29-4be1-9415-5f5cd936623b'))

Results for okapi-4.5.2

Results for okapi-4.5.2 for 1, 5, 8, and 20 users over 30-minute runs. From the response times below, the average Check-out time for 20 users is slower, on average 60% slower than with okapi-4.3.3.

'+' means performance improvement from okapi-4.3.3

'-' means performance degradation from okapi-4.3.3

For 20 users, 4 requests out of 113,642 failed.

Response Times


         | Average (seconds)    | 50th %tile (seconds) | 75th %tile (seconds) | 95th %tile (seconds)
         | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out
1 user   | 0.971    | 2.072     | 0.92     | 1.906     | 1.013    | 2.093     | 1.326    | 2.905
5 users  | 1.092 +  | 2.584 -   | 0.978 +  | 2.323 +   | 1.16 +   | 2.746 +   | 1.622 +  | 4.021 -
8 users  | 1.429 -  | 3.057 -   | 1.285 -  | 2.747 -   | 1.62 -   | 3.354 -   | 2.415 -  | 5.079 -
20 users | 3.073 +  | 7.877 -   | 2.595 +  | 6.307 +   | 3.411 +  | 8.287 +   | 6.409 +  | 14.703 +


CPUs and Memories

Service CPU Utilization:

CPU utilization gradually increases as the number of users increases to 20, but this behavior is similar to okapi-4.3.3.

...

Memory utilization is a little high for mod-circulation (105%), but for all other modules it is relatively stable across all test runs for all user levels.


RDS CPU Utilization

RDS CPU utilization is around 50% higher compared to okapi-4.3.3.

...

Comparison okapi-4.5.2 vs okapi-4.6.1

okapi-4.6.1 is slower than okapi-4.5.2: Check-in is 3.66% slower and Check-out is 9.48% slower. See the comparison below for the 8-user, 30-minute test run.


Results for okapi-4.6.1

From the response times below, with okapi-4.6.1 check-in-check-out for 1 user is a little slower, but for 5, 8, and 20 users check-in-check-out is much faster compared to okapi-4.3.3.

Response Times Okapi-4.3.3


         | Average (seconds)    | 50th %tile (seconds) | 75th %tile (seconds) | 95th %tile (seconds)
         | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out
1 user   | 0.94     | 2.158     | 0.885    | 2.017     | 0.969    | 2.177     | 1.198    | 2.906
5 users  | 1.126    | 2.574     | 1.025    | 2.339     | 1.211    | 2.79      | 1.77     | 4.007
8 users  | 1.313    | 2.948     | 1.177    | 2.61      | 1.487    | 3.274     | 2.195    | 5.045
20 users | 3.252    | 7.492     | 2.681    | 6.355     | 3.605    | 8.313     | 7.061    | 15.747

Response Times Okapi-4.6.1

'+' means a performance improvement from okapi-4.3.3

'-' means a performance degradation from okapi-4.3.3


         | Average (seconds)                                      | 50th %tile (seconds) | 75th %tile (seconds)                                   | 95th %tile (seconds)
         | Check-in | vs okapi-4.3.3 | Check-out | vs okapi-4.3.3 | Check-in | Check-out | Check-in | vs okapi-4.3.3 | Check-out | vs okapi-4.3.3 | Check-in | Check-out
1 user   | 1.041    | -9.7%          | 2.332     | -7%            | 0.957    | 2.139     | 1.06     | -8.5%          | 2.369     | -8.10%         | 1.378    | 3.394
5 users  | 1.057    | +6.5%          | 2.374     | +8.4%          | 0.978    | 2.176     | 1.133    | +6.88%         | 2.532     | +10.18%        | 1.524    | 3.624
8 users  | 1.277    | +2.8%          | 2.814     | +4.7%          | 1.144    | 2.512     | 1.44     | +3.2%          | 3.074     | +6.50%         | 2.112    | 4.718
20 users | 2.374    | +36.9%         | 5.927     | +26.4%         | 2.137    | 5.246     | 2.716    | +32.7%         | 6.552     | +26.87%        | 4.188    | 9.426
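The percentage columns appear to be computed relative to the okapi-4.6.1 value rather than the okapi-4.3.3 baseline; for example, for 1 user the average Check-in changed from 0.94 s (okapi-4.3.3) to 1.041 s, and (1.041 - 0.94) / 1.041 ≈ 9.7%, matching the -9.7% shown above.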


CPUs and Memories

Service CPU Utilization:

Compared to okapi-4.3.3, CPU utilization for okapi-4.6.1 is around the same.

Service Memory Utilization:

Compared to okapi-4.3.3, service memory utilization for okapi-4.6.1 is around the same.


RDS CPU Utilization

RDS CPU utilization is normal for 1, 5, and 8 users. For 20 users, the CPU is higher, at almost 95%, but considering the large load this is normal as well.


8-Hour Longevity Test Run for 20 Users

Service CPU Utilization:

Okapi-4.6.1's CPU consumption gradually decreases from the 1st hour into the 8th hour. At the same time, however, mod-pubsub consumes more and more CPU, gradually increasing from 50% to almost 160%.


Service Memory Utilization:

Memory usage for okapi-4.6.1 and the other modules is relatively constant and stable throughout the run. mod-circulation's memory consumption increases to 125% and then stabilizes.


Comparison okapi-4.3.3 vs okapi-4.6.1 (okapi metrics enabled)

The results below are for 8 users running the Check-in-Check-out workflow for 30 minutes.

Grafana Performance Dashboard

Okapi-4.6.1 is around 71% faster than Okapi-4.3.3. Okapi-4.6.1 can process more requests and still perform better. In a 30-minute test run, okapi-4.6.1 was able to process 25% more requests, with an average of 40 requests per second (RPS).

...

Okapi-4.6.1 Grafana performance dashboard:


Checkin-Checkout API level comparison

For Okapi-4.6.1, Check-in is 71% faster and Checkout is around 65% faster.


Log request/response comparison

With Okapi-4.6.1, the log request time improved from 3.16 seconds to 0.243 seconds, roughly 1200% faster. The log response time improved from 3.25 seconds to 0.266 seconds, which is roughly 1100% faster.
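(For reference, "X% faster" here is the improvement expressed relative to the new time: (3.16 - 0.243) / 0.243 ≈ 12.0, i.e. about 1200%, and (3.25 - 0.266) / 0.266 ≈ 11.2, i.e. about 1100%.)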

...

Okapi-4.6.1 log request/response comparison:


Service CPU Utilization

Okapi-4.6.1 consumes less CPU and is hence more efficient: average CPU utilization for okapi-4.6.1 is around 380%, versus around 600% for okapi-4.3.3.

...

Okapi-4.6.1 Service CPU Utilization:

Appendix

CIRC-1014

https://folio-org.atlassian.net/browse/MODPATBLK-70

https://folio-org.atlassian.net/browse/OKAPI-964

https://folio-org.atlassian.net/browse/OKAPI-965

checkout-checkin-4.5.2-test-runs