Table of Contents
...
Bulk Edit - Establish a performance baseline for user status bulk updates.
- How long does it take to export 100, 1000, 5000, 10k, and 100K records?
- Run the tests with up to 5 concurrent virtual users (VUs).
- Watch for memory trends and CPU usage.
- Pay attention to any RTR token expiration messages and observe how/whether Bulk Edit (BE) is affected by expiring tokens. If needed, set the access token expiration time to 300 s or less to trigger quick access-token expiration (see the load-scenario sketch below).
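A minimal sketch of this load scenario, assuming hypothetical endpoint paths, credentials, and a plain-threads driver (this is not how the PTF tests themselves are executed; it only illustrates concurrent VUs, growing record counts, and re-login when a short-lived access token expires):

```python
# Hypothetical load-scenario sketch: MAX_VUS concurrent virtual users, each running
# bulk edits of growing size and re-logging in when the access token expires.
import threading
import time

import requests

OKAPI_URL = "https://okapi.example.org"   # assumption: placeholder Okapi URL
TENANT = "fs09000000"                     # assumption: placeholder tenant id
RECORD_COUNTS = [100, 1_000, 5_000, 10_000, 100_000]
MAX_VUS = 5


def login(session: requests.Session) -> None:
    """Obtain an Okapi access token; with short-lived RTR tokens this is re-run on 401."""
    resp = session.post(
        f"{OKAPI_URL}/authn/login",
        json={"username": "perf_user", "password": "perf_password"},  # placeholders
        headers={"x-okapi-tenant": TENANT},
    )
    resp.raise_for_status()
    session.headers.update(
        {"x-okapi-tenant": TENANT, "x-okapi-token": resp.headers["x-okapi-token"]}
    )


def run_bulk_edit(session: requests.Session, records: int) -> float:
    """One bulk-edit job of `records` user records; returns its duration in seconds.

    The real flow (upload identifiers, start the job, poll status, download results)
    goes through the bulk-edit backend; it is stubbed here with a hypothetical path.
    """
    started = time.monotonic()
    resp = session.post(f"{OKAPI_URL}/bulk-edit/start", json={"size": records})
    if resp.status_code == 401:           # access token expired (e.g. 300 s setting)
        login(session)
        resp = session.post(f"{OKAPI_URL}/bulk-edit/start", json={"size": records})
    resp.raise_for_status()
    return time.monotonic() - started


def virtual_user(vu_id: int) -> None:
    session = requests.Session()
    login(session)
    for records in RECORD_COUNTS:
        duration = run_bulk_edit(session, records)
        print(f"VU{vu_id}: {records} records took {duration:.1f}s")


threads = [threading.Thread(target=virtual_user, args=(i + 1,)) for i in range(MAX_VUS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```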
...
- All tests were successful, and the 100K-record result files were downloaded after bulk edit. The system did not reach maximum capacity, so the number of VUs can be increased to 7.
- No "Invalid token" or "Access token has expired" errors occurred during either of the 2 tests.
- Bulk-edit duration increases with the number of virtual users (all results are in the first table).
- Comparing the test results on the Poppy and Orchid releases, processing time increased by up to 10% (however, the previous report lacked data on bulk-edit duration).
- The system was stable during the tests. Maximum resource utilization occurred during the 100K-record bulk edit with 5 concurrent VUs.
- Max CPU utilization was on the nginx-okapi (60%) and okapi (50%) services.
- Service memory usage showed no memory leaks, except for the mod-search service during Test 2.
- Average DB CPU usage was about 63%
- Number of DB connections ~ 200
- Comparing the resource utilization graphs, we can say that the system behavior on the Poppy release is the same as on Orchid.
- From the Orchid JIRAs: "The high CPU usage of mod-users (up to 125%) needs to be investigated." During both tests, max CPU consumption of the mod-users service was about 40%.
Recommendations & Jiras
- The high memory usage of the mod-search service during Test 2 needs to be investigated.
...
Comparison table of bulk-edit process duration on Poppy and Orchid releases
Number of VUs / Records | 1VU Poppy Average | 1VU Orchid Average | diff, % | 2VU Poppy Average | 2VU Orchid Average | diff, % | 3VU Poppy Average | 3VU Orchid Average | diff, % | 4VU Poppy Average | 4VU Orchid Average | diff, % | 5VU Poppy Average | 5VU Orchid Average | diff, % |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
100 | 00:01:14 | 00:01:09 | 9% | 00:01:14 | not tested | - | 00:01:14 | not tested | - | 00:01:11 | not tested | - | 00:01:13 | not tested | - |
1000 | 00:02:53 | 00:02:36 | 9% | 00:02:58 | not tested | - | 00:02:53 | not tested | - | 00:03:03 | not tested | - | 00:03:08 | not tested | - |
10k | 00:19:38 | 00:17:50 | 9% | 00:20:28 | 00:17:50 | 8.7% | 00:20:15 | 00:18:50 | 9% | 00:20:55 | 00:19:10 | 9% | 00:22:18 | 00:20:20 | 9% |
50K | not tested | 01:58:20 | - | not tested | not tested | - | not tested | not tested | - | not tested | not tested | - | not tested | not tested | - |
100K | 03:14:59 | FAILED | - | 03:24:41 | FAILED | - | 03:24:26 | FAILED | - | 03:24:18 | FAILED | - | 04:07:12 | FAILED | - |
Link to the report with Orchid Items app testing results: Bulk Edit Items App report [Orchid] 08/03/2023
Resource utilization
Instance CPU Utilization
Test 1. Bulk-edit: 5 consecutive runs of 100K records, starting from 1 VU up to 5 VUs. From the Instance CPU utilization graph, you can see that the bulk-edit process consists of 2 parts: file uploading (90-120 minutes) and records processing.
*During the bulk edit with 4 VUs, one process was uploading the 100K-record file for more than 4 hours (jobs finished processing at about 21:30, while the grey graph kept running until about 2 a.m.), but there were no problems with this file in any of the other bulk-edit jobs.
Test 2. Bulk-edit of 100-1000-5000-10k records successively, from 1 VU up to 5 VUs. Blurred areas contain errors from the load generator.
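The CPU graphs referenced above come from the monitoring dashboards. Purely as an illustration, a sketch along these lines could pull comparable per-service CPU series from CloudWatch for an ECS-based deployment; the region, cluster name, service list, and time window below are assumptions, not the PTF values:

```python
# Sketch: fetch average/maximum CPU utilization for selected ECS services
# over a test window. All identifiers below are placeholders.
from datetime import datetime, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
CLUSTER = "pcp1-cluster"                                    # assumption: ECS cluster name
SERVICES = ["nginx-okapi", "okapi", "mod-users", "mod-search"]

start = datetime(2023, 10, 10, 12, 0, tzinfo=timezone.utc)  # placeholder test window
end = datetime(2023, 10, 10, 18, 0, tzinfo=timezone.utc)

for service in SERVICES:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ECS",
        MetricName="CPUUtilization",
        Dimensions=[
            {"Name": "ClusterName", "Value": CLUSTER},
            {"Name": "ServiceName", "Value": service},
        ],
        StartTime=start,
        EndTime=end,
        Period=300,                       # 5-minute resolution
        Statistics=["Average", "Maximum"],
    )
    points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
    peak = max((p["Maximum"] for p in points), default=0.0)
    print(f"{service}: {len(points)} datapoints, peak CPU {peak:.0f}%")
```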
...
Part of Test 1. Bulk-edit of 100K records with 5 VUs.
TOP SQL Queries
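Top-SQL data is typically taken from RDS monitoring (e.g., Performance Insights). As an alternative illustration, assuming the pg_stat_statements extension is enabled on the PostgreSQL instance (column names below are for PostgreSQL 13+), the heaviest statements could be listed with a query like this; the connection details are placeholders:

```python
# Sketch: list the most expensive SQL statements by total execution time.
# Requires the pg_stat_statements extension; host/credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="pcp1-db.example.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="folio",
    user="folio_admin",
    password="***",
)
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT left(query, 120)                   AS query,
               calls,
               round(total_exec_time::numeric, 1) AS total_ms,
               round(mean_exec_time::numeric, 2)  AS mean_ms
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10;
        """
    )
    for query, calls, total_ms, mean_ms in cur.fetchall():
        print(f"{total_ms:>12} ms total | {calls:>8} calls | {mean_ms:>8} ms avg | {query}")
conn.close()
```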
Appendix
Infrastructure
PTF environment: pcp1
- 11 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 2 db.r6.xlarge database instances: one reader and one writer
- MSK tenant (broker settings can be checked with the admin-client sketch below)
  - 4 m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
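A minimal sketch for verifying these broker settings against the cluster, assuming the confluent-kafka Python client; the bootstrap address and broker id are placeholders:

```python
# Sketch: read selected broker configs to confirm the values listed above.
# Bootstrap address and broker id are placeholders, not the PTF values.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "b-1.example.kafka.us-east-1.amazonaws.com:9092"})
WANTED = ("auto.create.topics.enable", "log.retention.minutes", "default.replication.factor")

# describe_configs returns {ConfigResource: future}; each future resolves to
# a dict of {config name: ConfigEntry}.
futures = admin.describe_configs([ConfigResource(ConfigResource.Type.BROKER, "1")])
for resource, future in futures.items():
    configs = future.result()
    for name in WANTED:
        entry = configs.get(name)
        print(f"{name} = {entry.value if entry else 'not set'}")
```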
...