Overview
This is a report on a series of Check-in/Check-out test runs against the Honeysuckle release (PERF-135).
Backend:
- mod-circulation-19.2.7
- mod-circulation-storage-12.1.4
- mod-inventory-16.1.3
- mod-inventory-storage-19.4.4
- mod-authtoken-2.6.0
- mod-pubsub-1.3.3
- okapi-4.3.3 (also with 4.2.2)
Frontend:
- folio_circulation-4.0.1
- Item Check-in (folio_checkin-4.0.1)
- Item Check-out (folio_checkout-5.0.1)
Environment:
- 61 back-end modules deployed in 110 ECS services
- 3 okapi ECS services
- 8 m5.large EC2 instances
- 2 db.r5.xlarge AWS RDS instances
- INFO logging level
High Level Summary
- Check-out: Honeysuckle is 9%-28% slower than Goldenrod
- Check-in: Honeysuckle is 4%-22% slower than Goldenrod
- APIs that regressed in Honeysuckle: GET /automated-patron-blocks/{id} (150% slower) and GET /circulation/loans (60% slower). These are covered by MODPATBLK-70 and CIRC-1014, respectively
- Okapi v4.3.3 seems to use 2x-3x the CPU cycles of v1.3.2 (Goldenrod). A potential issue was found with the logging methods (OKAPI-964)
- mod-pubsub has a memory leak that drags down performance under high load (see the longevity test section): MODPUBSUB-136
- Caching Okapi tokens in Okapi reduced mod-authtoken's CPU usage by over 90%
- Database memory usage improved dramatically compared to Goldenrod's; very little memory consumption was observed
Test Runs
Test | Virtual Users | Duration | OKAPI log level |
---|---|---|---|
1 | 1 | 30 mins | INFO |
2 | 5 | 30 mins | INFO |
3 | 8 | 30 mins | INFO |
4 | 20 | 30 mins | INFO |
5 | 20 | 24 hours | INFO |
Results
Response Times
 | Average (seconds) | | 50th %tile (seconds) | | 75th %tile (seconds) | | 95th %tile (seconds) | |
 | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out |
---|---|---|---|---|---|---|---|---|
1 user | 0.967 | 1.989 | 0.889 | 1.832 | 0.984 | 2.201 | 1.254 | 2.815 |
5 users | 1.053 | 2.171 | 0.981 | 1.969 | 1.114 | 2.253 | 1.528 | 3.370 |
8 users | 1.193 | 2.244 | 1.076 | 2.022 | 1.339 | 2.372 | 1.895 | 3.544 |
20 users | 2.391 | 3.901 | 1.639 | 3.073 | 2.263 | 4.12 | 4.811 | 8.784 |
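As a reference for how the percentile columns are derived: they are standard interpolated percentiles over the raw per-transaction response times collected during the load test. Below is a minimal sketch, assuming (purely for illustration) that the raw samples were loaded into a hypothetical samples(label, elapsed_ms) table; the table and column names are not part of any FOLIO schema.

```sql
-- Hypothetical sketch: 'samples' and its columns are illustrative only.
-- label is 'Check-in' or 'Check-out'; elapsed_ms is a raw transaction response time.
SELECT label,
       avg(elapsed_ms) / 1000.0                                          AS avg_seconds,
       percentile_cont(0.50) WITHIN GROUP (ORDER BY elapsed_ms) / 1000.0 AS p50_seconds,
       percentile_cont(0.75) WITHIN GROUP (ORDER BY elapsed_ms) / 1000.0 AS p75_seconds,
       percentile_cont(0.95) WITHIN GROUP (ORDER BY elapsed_ms) / 1000.0 AS p95_seconds
FROM samples
GROUP BY label;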
The following table shows the slow APIs (75th-percentile response times above 100 ms) and compares them against Goldenrod's. Aside from the 1-user test, all listed APIs are slower from the 5-user test onward, with GET automated-patron-blocks leading the way at 150% slower, while GET circulation/loans regressed by up to 60%.
Note: GR = Goldenrod build, HS = Honeysuckle build
API | 1 user GR (75th %tile) | 1 user HS (75th %tile) | 5 users GR (75th %tile) | 5 users HS (75th %tile) | 8 users GR (75th %tile) | 8 users HS (75th %tile) | 20 users GR (75th %tile) | 20 users HS (75th %tile) |
---|---|---|---|---|---|---|---|---|
GET circulation/loans | 0.345 | 0.349 | 0.365 | 0.406 | 0.075 | 0.122 | 0.654 | 0.784 |
GET inventory/items | 0.208 | 0.186 | 0.208 | 0.222 | 0.225 | 0.244 | 0.312 | 0.375 |
POST checkin-by-barcode | 0.682 | 0.593 | 0.631 | 0.664 | 0.815 | 0.874 | 1.296 | 1.467 |
POST checkout-by-barcode | 0.750 | 0.717 | 0.688 | 0.784 | 0.733 | 0.877 | 1.205 | 1.469 |
GET automated-patron-blocks | 0.069 | 0.163 | 0.085 | 0.180 | 0.079 | 0.197 | 0.118 | 0.296 |
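For example, at 20 users the 75th percentile of GET automated-patron-blocks went from 0.118 s in Goldenrod to 0.296 s in Honeysuckle, i.e. (0.296 - 0.118) / 0.118 ≈ 1.5, the ~150% regression cited above; at 8 users GET circulation/loans went from 0.075 s to 0.122 s, i.e. (0.122 - 0.075) / 0.075 ≈ 0.63, in line with the ~60% regression.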
 | Average (seconds) | | | | 50th Percentile (seconds) | | | | 75th Percentile (seconds) | | | | 95th Percentile (seconds) | | | |
 | Check-in GR | Check-in HS | Check-out GR | Check-out HS | Check-in GR | Check-in HS | Check-out GR | Check-out HS | Check-in GR | Check-in HS | Check-out GR | Check-out HS | Check-in GR | Check-in HS | Check-out GR | Check-out HS |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 user | 1.06 | 0.967 | 1.994 | 1.989 | 0.977 | 0.889 | 1.82 | 1.832 | 1.111 | 0.984 | 2.004 | 2.021 | 1.323 | 1.254 | 2.811 | 2.815 |
5 users | 1.011 | 1.053 | 1.924 | 2.171 | 0.975 | 0.981 | 1.814 | 1.969 | 1.067 | 1.114 | 1.977 | 2.253 | 1.251 | 1.528 | 2.625 | 3.37 |
8 users | 1.142 | 1.193 | 2.044 | 2.244 | 1.061 | 1.076 | 1.899 | 2.022 | 1.274 | 1.339 | 2.093 | 2.372 | 1.569 | 1.895 | 3.107 | 3.544 |
20 users | 1.702 | 2.391 | 3.02 | 3.901 | 1.49 | 1.639 | 2.652 | 3.073 | 1.936 | 2.263 | 3.273 | 4.12 | 2.953 | 4.811 | 5.352 | 8.784 |
This table shows that the average check-out transaction response time in Honeysuckle is 9% to 28% slower than in Goldenrod, and check-in is likewise about 4%-22% slower (aggregated from the 5- to 20-user test runs).
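For example, at 8 users the average check-out time rose from 2.044 s (GR) to 2.244 s (HS), i.e. (2.244 - 2.044) / 2.044 ≈ 10%, and at 5 users from 1.924 s to 2.171 s, about 13%.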
Update 1/14/2021
- Subsequent investigations (PERF-140 and CIRC-1014) on GET /circulation/loans do not show degradation by the API itself. We hypothesize that other API calls executed during the test run may have dragged down its response time, particularly when reading from and writing to the same database rows at the same time.
Longevity test
A 24-hour longevity test was performed with 20 concurrent users and running with Okapi 4.2.2. Key observations:
- Performance degraded after 2 hours.
- Average response time in the first two hours:
- Check-in: 2.28 sec
- Check-out: 4.76 sec
- Average response time in the first 12 hours:
- Check-in: 2.709 sec
- Check-out: 7.898 sec
- Average response time in the last 12 hours:
- Check-in: 4.605 sec
- Check-out: 17.297 sec
- Average response time for the entire test run:
- Check-in: 3.414 sec
- Check-out: 11.419 sec
- Throughput degrades over time, unsurprisingly
- Module services' CPU utilization (the black lines indicate the start and finish times of the test run)
- With a load of 20 users, Okapi started out working the hardest, but its CPU utilization dropped over time. This is because mod-pubsub was leaking HTTP clients and consuming more and more resources, therefore slowing down and leaving Okapi with fewer messages to route. Correspondingly, mod-pubsub's CPU utilization went up. The other modules follow Okapi's pattern.
- Module services' memory utilization
- No modules exhibited memory leaks except for mod-pubsub
- Although there were two instances of mod-pubsub running on two different EC2 instances, mod-pubsub's traffic seemed to have been pinned to one instance. Here are graphs showing mod-pubsub on one instance using up memory and CPU resources, and on the other instance showing little activity:
- mod-pubsub and Okapi on another node - Okapi's CPU utilization dwindles while mod-pubsub does not seem to be busy at all
CPUs and Memories
Okapi was profiled because of the apparent 3x CPU utilization compared to the Goldenrod runs.
Goldenrod test runs:
Honeysuckle test runs:
Clearly Okapi uses more CPU cycles in Honeysuckle than in Goldenrod, even with just 1 user.
Other relevant modules' CPU utilizations in Goldenrod
Same modules in Honeysuckle:
A few things to note:
- mod-authtoken uses much less CPU in Honeysuckle, over 90% reduction across all tests! This is because of the token caching functionality that was added to Okapi 4.x
- mod-circulation's CPU utilization in Honeysuckle averages over 20% lower than in Goldenrod.
- mod-circulation-storage's CPU utilization in Honeysuckle is about 10-30% higher than in Goldenrod
- mod-inventory's CPU utilization in Honeysuckle averages 30% more than in Goldenrod
- mod-inventory-storage's CPU utilization in Honeysuckle averages 20% more than in Goldenrod
- mod-pubsub's CPU utilization in Honeysuckle is about 15% less than in Goldenrod
- mod-patron-blocks CPU utilization in Honeysuckle is at least 30% less than in Goldenrod
JVM Profiling
Because Okapi's CPU utilization in Honeysuckle seemed to average 2x to 3x higher than in Goldenrod, it was profiled to get more insight into what was happening inside it.
The slowest methods in Honeysuckle are once again the Logger and Jackson serialization methods
Compared to Goldenrod:
Note that the total CPU time of the AbstractLogger.info method in Okapi 4.3.3 is about 3x higher than in Goldenrod. This is corroborated by Okapi 4.3.3's metrics, which show the ProxyContext.logRequest and ProxyContext.logResponse methods' response times degrading over time. These two methods need to be investigated.
Database
Database CPU utilization is about the same between Honeysuckle and Goldenrod.
Honeysuckle's
Goldenrod's
Honeysuckle's database memory utilization is much better than Goldenrod's. For the most part, Honeysuckle's consecutive test runs did not show the signs of aggressive memory leaks that were seen in Goldenrod.
Goldenrod's memory profile shows memory being claimed rapidly over the 30-minute test runs.
Missing Indexes
Honeysuckle tests revealed the following missing indexes:
mod-circulation-storage missing indexes
WARNING: Doing LIKE search without index for jsonb->>'requestId', CQL >>> SQL: requestId == 920e1d64-c221-48a0-a44d-ff50f3ad6cd6 >>> lower(f_unaccent(jsonb->>'requestId')) LIKE lower(f_unaccent('920e1d64-c221-48a0-a44d-ff50f3ad6cd6'))
WARNING: Doing FT search without index for request.jsonb->>'requesterId', CQL >>> SQL: requesterId = ae4c1cf3-0738-4465-8112-e75089e5b5c6 >>> get_tsvector(f_unaccent(request.jsonb->>'requesterId')) @@ tsquery_phrase(f_unaccent('ae4c1cf3-0738-4465-8112-e75089e5b5c6'))
WARNING: Doing FT search without index for request.jsonb->>'pickupServicePointId', CQL >>> SQL: pickupServicePointId = 7068e104-aa14-4f30-a8bf-71f71cc15e07 >>> get_tsvector(f_unaccent(request.jsonb->>'pickupServicePointId')) @@ tsquery_phrase(f_unaccent('7068e104-aa14-4f30-a8bf-71f71cc15e07'))
WARNING: Doing LIKE search without index for patron_action_session.jsonb->>'patronId', CQL >>> SQL: patronId == d7cabcb2-7431-43ea-a2cc-0dfe5bee17c6 >>> lower(f_unaccent(patron_action_session.jsonb->>'patronId')) LIKE lower(f_unaccent('d7cabcb2-7431-43ea-a2cc-0dfe5bee17c6'))
WARNING: Doing LIKE search without index for patron_action_session.jsonb->>'actionType', CQL >>> SQL: actionType == Check-out >>> lower(f_unaccent(patron_action_session.jsonb->>'actionType')) LIKE lower(f_unaccent('Check-out'))
The following warnings are captured when the background tasks are running
WARNING: Doing LIKE search without index for jsonb->'noticeConfig'->>'timing', CQL >>> SQL: noticeConfig.timing == After >>> lower(f_unaccent(jsonb->'noticeConfig'->>'timing')) LIKE lower(f_unaccent('After'))
WARNING: Doing LIKE search without index for jsonb->>'loanId', CQL >>> SQL: loanId == 671233fd-5c15-4f9e-8fab-f86330c389bd >>> lower(f_unaccent(jsonb->>'loanId')) LIKE lower(f_unaccent('671233fd-5c15-4f9e-8fab-f86330c389bd'))
WARNING: Doing LIKE search without index for jsonb->>'triggeringEvent', CQL >>> SQL: triggeringEvent == "Due date" >>> lower(f_unaccent(jsonb->>'triggeringEvent')) LIKE lower(f_unaccent('Due date'))
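To illustrate what these warnings are asking for, here is a minimal sketch of the kind of expression indexes that would let the generated predicates use an index instead of a sequential scan. In RMB-based modules such indexes are normally declared in the module's schema.json rather than created by hand; the schema name (diku_mod_circulation_storage, using the "diku" tenant as a placeholder) and the index names are assumptions, and f_unaccent/get_tsvector are taken to be the RMB helper functions already present in the tenant schema.

```sql
-- Hypothetical DDL sketch only; real deployments declare these in schema.json.
-- A btree expression index matching the LIKE predicate on request.requestId:
CREATE INDEX IF NOT EXISTS request_requestid_idx
  ON diku_mod_circulation_storage.request
  ((lower(f_unaccent(jsonb->>'requestId'))) text_pattern_ops);

-- A GIN index over the same tsvector expression used by the full-text (FT) search
-- on request.requesterId:
CREATE INDEX IF NOT EXISTS request_requesterid_idx_ft
  ON diku_mod_circulation_storage.request
  USING gin ((get_tsvector(f_unaccent(jsonb->>'requesterId'))));
```

The analogous indexes would apply to patron_action_session.patronId/actionType and to the scheduled-notice fields flagged by the background-task warnings above.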
mod-feesfines missing indexes:
WARNING: Doing LIKE search without index for accounts.jsonb->>'userId', CQL >>> SQL: userId == e96618a9-04ee-4fea-aa60-306a8f4dd89b >>> lower(f_unaccent(accounts.jsonb->>'userId')) LIKE lower(f_unaccent('e96618a9-04ee-4fea-aa60-306a8f4dd89b'))
WARNING: Doing LIKE search without index for accounts.jsonb->'status'->>'name', CQL >>> SQL: status.name <> Closed >>> lower(f_unaccent(accounts.jsonb->'status'->>'name')) NOT LIKE lower(f_unaccent('Closed'))
WARNING: Doing LIKE search without index for manualblocks.jsonb->>'userId', CQL >>> SQL: userId == a79b533d-8f29-4be1-9415-5f5cd936623b >>> lower(f_unaccent(manualblocks.jsonb->>'userId')) LIKE lower(f_unaccent('a79b533d-8f29-4be1-9415-5f5cd936623b'))
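The same pattern applies to mod-feesfines; a hedged sketch for the accounts.userId lookup (schema and index names are again placeholders, and the index would normally be added via schema.json):

```sql
-- Hypothetical DDL sketch only: lets the LIKE predicate on accounts.userId use an
-- index instead of scanning the accounts table.
CREATE INDEX IF NOT EXISTS accounts_userid_idx
  ON diku_mod_feesfines.accounts
  ((lower(f_unaccent(jsonb->>'userId'))) text_pattern_ops);
```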
Results for okapi-4.5.2
Results for okapi-4.5.2 with 1, 5, 8, and 20 users in 30-minute runs. The response times below show that the average check-out time for 20 users is slower, on average about 60% slower than with okapi-4.3.3.
Response Times
 | Average (seconds) | | 50th %tile (seconds) | | 75th %tile (seconds) | | 95th %tile (seconds) | |
 | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out |
---|---|---|---|---|---|---|---|---|
1 user | 0.971 | 2.072 | 0.92 | 1.906 | 1.013 | 2.093 | 1.326 | 2.905 |
5 users | 1.003 | 2.114 | 0.925 | 1.947 | 1.055 | 2.235 | 1.458 | 3.149 |
8 users | 1.217 | 2.467 | 1.099 | 2.207 | 1.357 | 2.648 | 1.931 | 4.095 |
20 users | 2.409 | 5.213 | 2.141 | 4.478 | 2.763 | 5.682 | 4.233 | 8.484 |
CPUs and Memories
Service CPU Utilization:
CPU utilization gradually increases as the number of users increases to 20, but this behavior is similar to okapi-4.3.3's.
Service Memory Utilization:
Memory utilization is a little high for mod-circulation (105%), but for all other modules it is relatively stable across all test runs and user levels.
RDS CPU Utilization
RDS CPU utilization is around 50% higher than with okapi-4.3.3.
Comparison okapi-4.5.2 vs okapi-4.6.1
okapi-4.6.1 is slower than okapi-4.5.2: check-in is 3.66% slower and check-out is 9.48% slower. See the comparison below for the 8-user, 30-minute test run.
Results for okapi-4.6.1
The response times below show that with okapi-4.6.1, check-in/check-out for 1 user is a little slower, but for 5, 8, and 20 users it is much faster compared to okapi-4.3.3.
Response Times Okapi-4.3.3
 | Average (seconds) | | 50th %tile (seconds) | | 75th %tile (seconds) | | 95th %tile (seconds) | |
 | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out | Check-in | Check-out |
---|---|---|---|---|---|---|---|---|
1 user | 0.94 | 2.158 | 0.885 | 2.017 | 0.969 | 2.177 | 1.198 | 2.906 |
5 users | 1.126 | 2.574 | 1.025 | 2.339 | 1.211 | 2.79 | 1.77 | 4.007 |
8 users | 1.313 | 2.948 | 1.177 | 2.61 | 1.487 | 3.274 | 2.195 | 5.045 |
20 users | 3.252 | 7.492 | 2.681 | 6.355 | 3.605 | 8.313 | 7.061 | 15.747 |
Response Times Okapi-4.6.1
'+' means a performance improvement
'-' means a performance degradation
 | Average (seconds) | | | | 50th %tile (seconds) | | 75th %tile (seconds) | | | | 95th %tile (seconds) | |
 | Check-in | Check-in vs okapi-4.3.3 | Check-out | Check-out vs okapi-4.3.3 | Check-in | Check-out | Check-in | Check-in vs okapi-4.3.3 | Check-out | Check-out vs okapi-4.3.3 | Check-in | Check-out |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 user | 1.041 | -9.7% | 2.332 | -7% | 0.957 | 2.139 | 1.06 | -8.5% | 2.369 | -8.10% | 1.378 | 3.394 |
5 users | 1.057 | +6.5% | 2.374 | +8.4% | 0.978 | 2.176 | 1.133 | +6.88% | 2.532 | +10.18% | 1.524 | 3.624 |
8 users | 1.277 | +2.8% | 2.814 | +4.7% | 1.144 | 2.512 | 1.44 | +3.2% | 3.074 | +6.50% | 2.112 | 4.718 |
20 users | 2.374 | +36.9% | 5.927 | +26.4% | 2.137 | 5.246 | 2.716 | +32.7% | 6.552 | +26.87% | 4.188 | 9.426 |
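For example, the +36.9% average check-in improvement at 20 users follows from the okapi-4.3.3 and okapi-4.6.1 averages above: (3.252 - 2.374) / 2.374 ≈ +37%. Likewise, the -9.7% entry for 1 user is (0.94 - 1.041) / 1.041 ≈ -9.7%.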
CPUs and Memories
Service CPU Utilization:
Compared to okapi-4.3.3, CPU utilization for okapi-4.6.1 is around the same.
Service Memory Utilization:
Compared to okapi-4.3.3, service memory utilization for okapi-4.6.1 is around the same.
RDS CPU Utilization
RDS CPU utilization is normal for 1, 5, and 8 users. For 20 users, CPU utilization is higher, at almost 95%, but considering the large load this is expected as well.
8-hour longevity test run with 20 users
Service CPU Utilization:
Service Memory Utilization:
Comparison okapi-4.3.3 vs okapi-4.6.1 (okapi metrics enabled)
The results below are for 8 users over 30 minutes against the check-in/check-out workflow.
Grafana Performance Dashboard
Okapi-4.6.1 is around 71% faster than Okapi-4.3.3 and can process more requests while still performing better. In the 30-minute test run, okapi-4.6.1 processed 25% more requests, averaging 40 requests per second (RPS).
Okapi-4.3.3 Grafana performance dashboard:
Okapi-4.6.1 Grafana performance dashboard:
Checkin-Checkout API level comparison
With Okapi-4.6.1, check-in is 71% faster and check-out is around 65% faster.
Log request/response comparison
With Okapi-4.6.1, logRequest improved from 3.16 seconds to 0.243 seconds, roughly 1200% faster. logResponse improved from 3.25 seconds to 0.266 seconds, roughly 1100% faster.
Okapi-4.6.1 also processes more logRequest/logResponse calls while remaining fast: 211k logRequest and 220k logResponse calls, vs. 170k and 177k, respectively, for Okapi-4.3.3.
Okapi-4.3.3 log request/response comparison:
Okapi-4.6.1 log request/response comparison:
Service CPU Utilization
Okapi-4.6.1 consumes less CPU and is hence more efficient: average CPU utilization for okapi-4.6.1 is around 380%, vs. 600% for okapi-4.3.3.
Okapi-4.3.3 Service CPU Utilization:
Okapi-4.6.1 Service CPU Utilization:
Appendix
https://issues.folio.org/browse/MODPATBLK-70
https://issues.folio.org/browse/OKAPI-964