...
- 79 back-end modules deployed in 153 ECS tasks
- 3 okapi ECS tasks
- 10 m6i.2xlarge EC2 instances
- 2 db.r6g.xlarge AWS RDS instances
- INFO logging level
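The footprint above can be cross-checked against the cluster itself. A minimal sketch with boto3, assuming default AWS credentials and a hypothetical cluster name:

```python
# Sketch: count ECS services and running tasks to verify the deployment
# footprint listed above. The cluster name is a hypothetical placeholder.
import boto3

ecs = boto3.client("ecs")
CLUSTER = "folio-perf-cluster"  # assumption: substitute the real cluster name

# Collect every service ARN in the cluster (list_services is paginated).
service_arns = [
    arn
    for page in ecs.get_paginator("list_services").paginate(cluster=CLUSTER)
    for arn in page["serviceArns"]
]

# describe_services accepts at most 10 services per call.
running_tasks = 0
for i in range(0, len(service_arns), 10):
    resp = ecs.describe_services(cluster=CLUSTER, services=service_arns[i:i + 10])
    running_tasks += sum(svc["runningCount"] for svc in resp["services"])

print(f"{len(service_arns)} services, {running_tasks} running tasks")
```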
...
Test | Virtual Users | Duration | Load generator size (recommended) | Load generator memory, GiB (recommended) |
1. | 1 user | 30 mins | t3.medium | 3 |
2. | 5 users | 30 mins | t3.medium | 3 |
3. | 8 users | 30 mins | t3.medium | 3 |
4. | 20 users | 30 mins | t3.medium | 4 |
5. | 50 users | 30 mins | t3.large | 6 |
9. | 20 users longevity | 16 hours | t3.xlarge | 14 |
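The load scripts themselves are not shown in this report; as one hedged illustration, a profile from the matrix above could be expressed with Locust (the endpoint, tenant header, and think time below are assumptions, not the actual test plan):

```python
# Minimal Locust sketch of one load profile from the table above.
# The endpoint and x-okapi-tenant value are hypothetical placeholders.
from locust import HttpUser, task, between


class FolioUser(HttpUser):
    wait_time = between(1, 3)  # assumed think time between requests

    @task
    def list_loans(self):
        # Hypothetical request standing in for the real workflow steps.
        self.client.get(
            "/circulation/loans?limit=10",
            headers={"x-okapi-tenant": "diku"},
        )
```

A 20-user, 30-minute run would then be started headless with something like `locust -f loadtest.py --host https://okapi.example.org --headless -u 20 -r 5 -t 30m`, matching test 4 in the table.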
Results
Response Times (Average of all tests listed above, in seconds)
...
The timeline highlighted below encompasses all 5 test runs (1, 5, 8, 20, and 50 users).
Module CPU and Memory Utilization
Overall, the relevant services consumed CPU at nominal levels. Only mod-authtoken showed spikes, but its processes did not crash.
20-users test CPU | Avg | Max |
mod-users | 4% | |
mod-circulation-storage | 17% | 17% |
mod-authtoken | 31% | 78% |

Services' memory usage was stable during the test runs.

20-users test memory | Avg | Max |
mod-users | 80% | |
mod-circulation-storage | 39% | 39% |
mod-authtoken | 30% | 30% |
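Per-service Avg/Max figures like those in the tables above can be pulled from CloudWatch. A sketch, assuming the standard AWS/ECS service metrics and hypothetical cluster/service names:

```python
# Sketch: fetch Avg/Max CPU utilization for one ECS service over a single
# 30-minute test window. Cluster and service names are assumptions.
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(minutes=30)  # one 30-minute test run

resp = cw.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",  # "MemoryUtilization" for the memory table
    Dimensions=[
        {"Name": "ClusterName", "Value": "folio-perf-cluster"},  # hypothetical
        {"Name": "ServiceName", "Value": "mod-authtoken"},
    ],
    StartTime=start,
    EndTime=end,
    Period=60,
    Statistics=["Average", "Maximum"],
)

points = resp["Datapoints"]
if points:
    avg = sum(p["Average"] for p in points) / len(points)
    peak = max(p["Maximum"] for p in points)
    print(f"mod-authtoken CPU avg={avg:.0f}% max={peak:.0f}%")
```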
Database and network
The freeable memory metric shows how much memory remains available during a test run. In the graph below, memory usage is stable across the test runs: freeable memory trends slightly downward during each run and bounces back almost to its starting point after each test. This does not indicate a memory leak, as the 16-hour longevity test did not reveal any leaks (see above).
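The same CloudWatch API exposes the RDS metric discussed above. A sketch that pulls the FreeableMemory trend across a 16-hour longevity window (the DB instance identifier is a hypothetical placeholder):

```python
# Sketch: pull the RDS FreeableMemory trend over the longevity run.
# The DB instance identifier is an assumption.
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

resp = cw.get_metric_data(
    MetricDataQueries=[{
        "Id": "freeable",
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/RDS",
                "MetricName": "FreeableMemory",
                "Dimensions": [{
                    "Name": "DBInstanceIdentifier",
                    "Value": "folio-perf-db-1",  # hypothetical
                }],
            },
            "Period": 300,  # 5-minute resolution
            "Stat": "Average",
        },
    }],
    StartTime=end - timedelta(hours=16),  # span the longevity run
    EndTime=end,
    ScanBy="TimestampAscending",
)

result = resp["MetricDataResults"][0]
for ts, value in zip(result["Timestamps"], result["Values"]):
    print(ts.isoformat(), f"{value / 2**30:.1f} GiB free")  # metric is in bytes
```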
...