Table of Contents
...
- How long does it take to export 100, 1000, 5000, 10k, and 100K records?
- Run the scenario with up to 5 concurrent users.
- Look for memory trends and monitor CPU usage.
- Pay attention to any RTR token expiration messages and observe how/if bulk-edit (BE) is affected by expiring tokens. If needed, set the Access token's expiration time to 300 s or less to trigger quick Access token expiration (a hedged harness sketch follows this list).
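To make the checklist concrete, here is a minimal timing-harness sketch. All endpoint paths (`/authn/login`, `/bulk-edit/jobs`), the host, tenant, credentials, and payload fields are placeholder assumptions for illustration, not the project's actual load scripts; the VU levels and record counts mirror the list above.

```python
# Hedged sketch of the test scenario: time bulk-edit runs for each
# record count at 1..5 concurrent virtual users, and flag token errors.
# Endpoints, host, tenant, and credentials below are ASSUMPTIONS.

import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://folio-perf.example.org"  # hypothetical host
TENANT = "fs09000000"                        # hypothetical tenant id

def login(session: requests.Session) -> None:
    """Obtain an access token; a short expiry (e.g. 300 s) makes RTR
    refresh behaviour observable during long 100K-record runs."""
    resp = session.post(
        f"{BASE_URL}/authn/login",           # assumed auth endpoint
        json={"username": "perf_user", "password": "perf_pass"},
        headers={"x-okapi-tenant": TENANT},
    )
    resp.raise_for_status()

def run_bulk_edit(vu_id: int, record_count: int) -> float:
    """Upload a file of `record_count` records, commit the edit,
    and return the wall-clock duration in seconds."""
    session = requests.Session()
    login(session)
    started = time.monotonic()
    resp = session.post(
        f"{BASE_URL}/bulk-edit/jobs",        # assumed job endpoint
        json={"recordCount": record_count},
    )
    # Token-expiration check from the objectives: flag any
    # "Invalid token" / "Access token has expired" style failures.
    if resp.status_code == 401:
        print(f"VU{vu_id}: token expired mid-run -> {resp.text[:80]}")
    return time.monotonic() - started

for vus in range(1, 6):                      # 1VU .. 5VU
    for records in (100, 1000, 5000, 10_000, 100_000):
        with ThreadPoolExecutor(max_workers=vus) as pool:
            durations = list(pool.map(
                lambda i: run_bulk_edit(i, records), range(vus)))
        print(f"{vus}VU x {records} records: "
              + ", ".join(f"{d:.0f}s" for d in durations))
```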
Summary
- All tests were successful; the 100K-record files were downloaded after bulk-edit, completing the test. The system did not reach maximum capacity, so the number of VUs can be increased to 7.
- No errors such as "Invalid token" or "Access token has expired" appeared during either test.
- As the number of virtual users increases, the bulk-edit processing time increases (all results are in the first table).
- Comparing the test results on the Poppy and Orchid releases, processing time increased by up to 10% (e.g., 10k records at 1VU: 00:19:38 on Poppy vs. 00:17:50 on Orchid, about +10%). However, the previous report lacked duration data for many bulk-edit runs.
- The system was stable during the tests. Maximum resource utilization occurred during the 100K-record bulk-edit with 5 concurrent VUs.
- Max CPU utilization was on the nginx-okapi (60%) and okapi (50%) services.
- Service memory usage showed no memory leaks, except for the mod-search service during Test 2.
- Average DB CPU usage was about 63%.
- Number of DB connections: ~200.
- Comparing the resource utilization graphs, system behavior on the Poppy release is the same as on Orchid.
- From the Orchid Jiras: "The high CPU usage of mod-users (up to 125%) needs to be investigated." During both tests, max CPU consumption for the mod-users service was about 40%.
Recommendations & Jiras
- The high memory usage of the mod-search service during Test 2 needs to be investigated.
Results
Total processing time of upload and edit (commit changes). Units = hours:minutes:seconds.
Records / Number of virtual users | 1VU | 2VU | 3VU | 4VU | 5VU |
---|---|---|---|---|---|
100 records | 00:01:14 | 00:01:13 00:01:14 | 00:01:15 00:01:13 00:01:13 | 00:01:11 00:01:12 00:01:11 00:01:11 | 00:01:12 00:01:13 00:01:14 00:01:12 00:01:14 |
1000 records | 00:02:53 | 00:03:01 00:02:54 | 00:02:51 00:02:56 00:02:53 | 00:03:04 00:03:03 00:03:02 00:03:06 | 00:03:10 00:03:04 00:03:07 00:03:06 00:03:13 |
5000 records | 00:10:20 | 00:11:13 00:10:33 | 00:11:13 00:10:33 00:10:28 | 00:10:56 00:10:56 00:11:01 00:11:35 | 00:12:34 00:13:19 00:12:34 00:12:30 00:12:33 |
10000 records | 00:19:38 | 00:20:47 00:20:13 | 00:20:50 00:20:40 00:20:04 | 00:21:00 00:20:49 00:21:09 00:20:54 | 00:22:21 00:22:16 00:22:09 00:22:17 00:22:13 |
100K records | 03:14:59 | 03:33:24 03:15:41 | 03:27:06 03:21:31 03:25:10 | 06:10:23* 03:33:23 03:20:39 03:21:24 | 04:04:24 04:02:45 04:03:11 04:06:32 04:12:04 |
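The per-VU averages in the comparison table below are aggregates of the raw runs above. A minimal averaging sketch follows; it is illustrative only, since the report's exact rounding, and whether outliers such as the asterisked 06:10:23 run are included, may differ.

```python
# Average a list of hh:mm:ss durations, e.g. the raw per-run times
# from the table above. Rounding/outlier handling here is illustrative.

def to_seconds(hms: str) -> int:
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def average(runs: list[str]) -> str:
    secs = round(sum(map(to_seconds, runs)) / len(runs))
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

# e.g. the two 2VU runs of 10000 records:
print(average(["00:20:47", "00:20:13"]))  # -> 00:20:30
```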
Comparison table of bulk-edit process duration on Poppy and Orchid releases
Records / Number of VUs | 1VU Poppy Average | 1VU Orchid Average | 2VU Poppy Average | 2VU Orchid Average | 3VU Poppy Average | 3VU Orchid Average | 4VU Poppy Average | 4VU Orchid Average | 5VU Poppy Average | 5VU Orchid Average |
---|---|---|---|---|---|---|---|---|---|---|
100 | 00:01:14 | 00:01:09 | 00:01:14 | not tested | 00:01:14 | not tested | 00:01:11 | not tested | 00:01:13 | not tested |
1000 | 00:02:53 | 00:02:36 | 00:02:58 | not tested | 00:02:53 | not tested | 00:03:03 | not tested | 00:03:08 | not tested |
10k | 00:19:38 | 00:17:50 | 00:20:28 | 00:17:50 | 00:20:15 | 00:18:50 | 00:20:55 | 00:19:10 | 00:22:18 | 00:20:20 |
50K | not tested | 01:58:20 | not tested | not tested | not tested | not tested | not tested | not tested | not tested | not tested |
100K | 03:14:59 | FAILED | 03:24:41 | FAILED | 03:24:26 | FAILED | 03:24:18 | FAILED | 04:07:12 | FAILED |
Resource utilization
Instance CPU Utilization
Test 1. Bulk-edit: 5 consecutive runs of 100K records, starting from 1VU up to 5VU. From the Instance CPU utilization graph, you can see that the bulk-edit process consists of 2 parts: file uploading (90-120 minutes) and records processing.
*During the bulk-edit for 4 VU, one process was uploading the 100K-record file for more than 4 hours (jobs finished processing at about 21:30, and the grey graph was still running up to 2 a.m.), but on all other bulk-edit jobs there were no problems with this file.
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. Blurred areas are errors from the load generator.
Memory usage
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU.
...
Part of test 1. Bulk-edit 100K records 5VU.
TOP SQL Queries
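As a hedged illustration only (not necessarily how these statements were collected), a top-queries list can be pulled with the PostgreSQL pg_stat_statements extension, assuming it is enabled on the database instance. Connection details below are placeholders, and total_exec_time / mean_exec_time are the PostgreSQL 13+ column names (older versions use total_time / mean_time).

```python
# Hedged sketch: list the most expensive SQL statements via the
# pg_stat_statements extension. The DSN values are placeholders.

import psycopg2

conn = psycopg2.connect(
    host="db-writer.example.org", dbname="folio",  # hypothetical DSN
    user="perf_reader", password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT calls,
               round(total_exec_time::numeric, 1) AS total_ms,
               round(mean_exec_time::numeric, 2)  AS mean_ms,
               left(query, 120)                   AS query
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10
    """)
    for calls, total_ms, mean_ms, query in cur.fetchall():
        print(f"{calls:>8}  {total_ms:>12}  {mean_ms:>8}  {query}")
conn.close()
```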
Appendix
Infrastructure
PTF environment: pcp1
- 11 m6g.2xlarge EC2 instances located in US East (N. Virginia) us-east-1
- 2 db.r6.xlarge database instances, one reader and one writer
- MSK tenant
  - 4 m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
...