GET circulation/loans Report (Fameflower and Goldenrod)
- PERF-73
Overview
GET circulation/loans response time is nearly 300 ms at the 75th percentile, and in fact exceeds 400 ms with 8 users. This investigation looks at why the API call is slow.
API call: /circulation/loans?query=%28userId=${UserID}+and+status.name%3C%3EClosed%29&limit=1
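For reference, the sketch below reproduces this request as a standalone Java client. It is not the actual load-test script: the OKAPI host and port, tenant, and token values are placeholders, and userId stands in for the ${UserID} variable the test substitutes per virtual user.
```java
// Minimal sketch of the GET /circulation/loans call under test.
// Host, port, tenant, and token are placeholders, not real values.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class LoansQuerySketch {

  public static void main(String[] args) throws Exception {
    String userId = "11111111-2222-3333-4444-555555555555"; // ${UserID} in the test
    String cql = "(userId=" + userId + " and status.name<>Closed)";
    // URLEncoder produces the same "+", "%28", "%29", "%3C", "%3E" escapes seen
    // in the test URL (it also escapes "=" as "%3D", which decodes identically).
    String query = URLEncoder.encode(cql, StandardCharsets.UTF_8);

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://okapi:9130/circulation/loans?query=" + query + "&limit=1"))
        .header("X-Okapi-Tenant", "fs09000000")   // placeholder tenant
        .header("X-Okapi-Token", "<token>")       // placeholder token
        .GET()
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    System.out.println(response.statusCode() + " " + response.body());
  }
}
```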
Summary
- GET /circulation/loans averages 400ms for 8 users
- Performance degrades over time as the memory leak in mod-feesfines accumulates.
- MODFEE-93 was created to address the memory leak.
- Calls to mod-feesfines account for about half of the response time.
Test Runs
test# | VUsers | Duration (min) | OKAPI log level | API call
---|---|---|---|---
4 | 8 | 60 | info | /circulation/loans?query=%28userId=${UserID}+and+status.name%3C%3EClosed%29&limit=1
3 | 8 | 60 | info | /circulation/loans?query=%28userId=${UserID}+and+status.name%3C%3EClosed%29&limit=1
2 | 8 | 60 | warning | /circulation/loans?query=%28userId=${UserID}+and+status.name%3C%3EClosed%29&limit=1
1 | 8 | 60 | warning | /circulation/loans?query=%28userId=${UserID}+and+status.name%3C%3EClosed%29&limit=1
0 | 8 | 60 | warning | /circulation/loans?query=%28userId=${UserID}+and+status.name%3C%3EClosed%29&limit=1
Test results
All rows below are for GET_circulation/loans; response-time columns (Min through Latency) are in seconds.

test# | Total | OK | KO | % KO | Req/s | Min | 50th pct | 75th pct | 95th pct | 99th pct | Max | Average | Latency
---|---|---|---|---|---|---|---|---|---|---|---|---|---
4 | 6003 | 6003 | 0 | 0 | 1.663 | 0.256 | 4.174 | 5.92 | 8.563 | 11.237 | 13.181 | 4.756 | 8.563
3 | 8998 | 8998 | 0 | 0 | 2.496 | 0.263 | 3.05 | 3.991 | 5.894 | 6.977 | 9.326 | 3.172 | 5.894
2 | 11853 | 11853 | 0 | 0 | 3.289 | 0.243 | 2.069 | 3.15 | 4.239 | 5.9 | 7.166 | 2.407 | 4.239
1 | 40663 | 40663 | 0 | 0 | 11.291 | 0.24 | 0.401 | 1.09 | 1.352 | 2.143 | 4.445 | 0.701 | 1.352
0 | 59209 | 58132 | 1077 | 1.819 | 16.442 | 0.141 | 0.392 | 0.531 | 0.916 | 1.068 | 2.561 | 0.481 | 0.916
As the results show, response times degrade from test 0 through test 4; the degradation is especially visible in the 75th-percentile values.
The working assumption is that mod-feesfines has a memory leak that causes high CPU usage and leads to container restarts once the memory limit is reached.
Resource utilization (test 4):
CPU Utilization
RAM Usage
Heap Dump:
The heap dump shows that roughly 22% of the heap is occupied by HttpClient instances. The recommendation is to close the HttpClient explicitly to avoid this unnecessary heap growth.
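To illustrate that recommendation, here is a minimal sketch of the pattern, assuming a Vert.x 3.x HttpClient as used by FOLIO backend modules; it is not the actual mod-feesfines code nor the MODFEE-93 fix.
```java
// Sketch only: contrast a per-request client that is never closed with one
// that is closed after the response is handled. Host, port, and path are
// placeholders.
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientOptions;

public class FeeFineClientSketch {

  private final Vertx vertx;

  public FeeFineClientSketch(Vertx vertx) {
    this.vertx = vertx;
  }

  // Leaky variant: a new client (and its connection pool) is created per call
  // and never closed, so instances pile up on the heap over time.
  public void lookupLeaky(String path) {
    HttpClient client = vertx.createHttpClient(new HttpClientOptions());
    client.getNow(9130, "okapi", path, response ->
        response.bodyHandler(body -> {
          // ... process the response; the client stays open ...
        }));
  }

  // Fixed variant: close the client once the response has been consumed so
  // nothing is left behind on the heap.
  public void lookupAndClose(String path) {
    HttpClient client = vertx.createHttpClient(new HttpClientOptions());
    client.getNow(9130, "okapi", path, response ->
        response.bodyHandler(body -> {
          // ... process the response ...
          client.close();
        }));
  }
}
```
Reusing a single shared client per verticle (and closing it on undeploy) would also work, avoiding both the leak and the cost of recreating connection pools for every call.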
In another test run:
Giraffe Diagram
The chart above shows that many calls to mod-feesfines happen internally during each request. At 100 to 300 milliseconds each, they are the most time-expensive part of the request found so far.
Database CPU utilization stayed in the normal range, never exceeding 5%.
No slow queries or missing indexes were found.