...
Bulk Edit - Establish a performance baseline for user status bulk updates.
- How long does it take to export 100, 1000, 5000, 10k, and 100K records?
- Run the bulk edit with up to 5 concurrent users.
- Look for memory trends and CPU usage.
- Pay attention to any RTR token expiration messages and observe how (or whether) bulk edit is affected by expiring tokens. If needed, set the access token's expiration time to 300 s or less to trigger quick access-token expiration.
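As a rough worked example of why the lowered token TTL matters here: a long bulk-edit job must survive many refresh cycles. A minimal sketch of the arithmetic (plain calculation, not a FOLIO API call):

```python
import math

def refreshes_needed(job_seconds: int, ttl_seconds: int = 300) -> int:
    """Minimum number of access-token refreshes a job of the given
    duration must survive when each token lives ttl_seconds."""
    return max(0, math.ceil(job_seconds / ttl_seconds) - 1)

# A 100K-record bulk edit taking ~3h15m (11700 s) with a 300 s TTL:
print(refreshes_needed(11700))  # 38
```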
...
Summary
- All tests were successful, including bulk edits of 100K records.
...
- Can bulk edit be used with up to 5 concurrent users?
- Run four jobs editing 10k item records consecutively.
- Run four jobs editing 10k item records simultaneously.
- Look for memory trends and CPU usage.
...
- Files after bulk edit were downloaded (tests completed successfully). The system did not reach maximum capacity; therefore, the number of VUs can be increased to 7.
- No errors with messages like "Invalid token" or "Access token has expired" were observed during either test.
- As the number of virtual users increases, the bulk-edit processing time increases (all results are in the first table).
- Comparing the test results on the Poppy and Orchid releases, processing time increased by up to 10% (however, the previous report lacked data on bulk-edit duration).
- The system was stable during the tests. Resource utilization peaked during the 100K-record bulk edit with 5 concurrent VUs.
- Max CPU utilization was on the nginx-okapi (60%) and okapi (50%) services.
- Service memory usage showed no memory leaks, except for the mod-search service during Test 2.
- Average DB CPU usage was about 63%.
- Number of DB connections: ~200.
- Comparing the resource utilization graphs, system behavior on the Poppy release is the same as on Orchid.
- From the Orchid Jiras: "The high CPU usage of mod-users (up to 125%) needs to be investigated." During both tests, max CPU consumption for the mod-users service was about 40%.
Recommendations & Jiras
- The high memory usage of the mod-search service during Test 2 needs to be investigated.
Results
Total processing time of upload, edit, and commit changes. Units: hours:minutes:seconds (hh:mm:ss)
...
Number of virtual user/ Records | 1VU | 2VU | 3VU | 4VU | 5VU |
---|---|---|---|---|---|
100 records | 00:01:14 | 00:01:13 00:01:14 | 00:01:15 00:01:13 00:01:13 | 00:01:11 00:01:12 00:01:11 00:01:11 | 00:01:12 00:01:13 00:01:14 00:01:12 00:01:14 |
1000 records | 00:02:53 | 00:03:01 00:02:54 | 00:02:51 00:02:56 00:02:53 | 00:03:04 00:03:03 00:03:02 00:03:06 | 00:03:10 00:03:04 00:03:07 00:03:06 00:03:13 |
5000 records | 00:10:20 | 00:11:13 00:10:33 | 00:11:13 00:10:33 00:10:28 | 00:10:56 00:10:56 00:11:01 00:11:35 | 00:12:34 00:13:19 00:12:34 00:12:30 00:12:33 |
10000 records | 00:19:38 | 00:20:47 00:20:13 | 00:20:50 00:20:40 00:20:04 | 00:21:00 00:20:49 00:21:09 00:20:54 | 00:22:21 00:22:16 00:22:09 00:22:17 00:22:13 |
100K records | 03:14:59 | 03:33:24 03:15:41 | 03:27:06 03:21:31 03:25:10 | 06:10:23* 03:33:23 03:20:39 03:21:24 | 04:04:24 04:02:45 04:03:11 04:06:32 04:12:04 |
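The per-cell run times above can be aggregated with a small helper. A sketch for converting hh:mm:ss values and averaging one cell (rounding to the nearest second is an assumption):

```python
def to_seconds(hms: str) -> int:
    """Parse an hh:mm:ss duration into seconds."""
    h, m, s = map(int, hms.split(":"))
    return h * 3600 + m * 60 + s

def avg_hms(values: list[str]) -> str:
    """Average several hh:mm:ss durations, rounded to the nearest second."""
    mean = round(sum(to_seconds(v) for v in values) / len(values))
    return f"{mean // 3600:02d}:{mean % 3600 // 60:02d}:{mean % 60:02d}"

# e.g. the two 2VU runs of the 10000-record row:
print(avg_hms(["00:20:47", "00:20:13"]))  # 00:20:30
```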
Comparison table of bulk-edit process duration on Poppy and Orchid releases
Number of VU/ Records | 1VU Poppy Average | 1VU Orchid Average | diff,% | 2VU Poppy Average | 2VU Orchid Average | diff,% | 3VU Poppy Average | 3VU Orchid Average | diff,% | 4VU Poppy Average | 4VU Orchid Average | diff,% | 5VU Poppy Average | 5VU Orchid Average | diff,% |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
100 | 00:01:14 | 00:01:09 | 9% | 00:01:14 | not tested | - | 00:01:14 | not tested | - | 00:01:11 | not tested | - | 00:01:13 | not tested | - |
1000 | 00:02:53 | 00:02:36 | 9% | 00:02:58 | not tested | - | 00:02:53 | not tested | - | 00:03:03 | not tested | - | 00:03:08 | not tested | - |
10k | 00:19:38 | 00:17:50 | 9% | 00:20:28 | 00:17:50 | 8.7% | 00:20:15 | 00:18:50 | 9% | 00:20:55 | 00:19:10 | 9% | 00:22:18 | 00:20:20 | 9% |
50K | not tested | 01:58:20 | - | not tested | not tested | - | not tested | not tested | - | not tested | not tested | - | not tested | not tested | - |
100K | 03:14:59 | FAILED | - | 03:24:41 | FAILED | - | 03:24:26 | FAILED | - | 03:24:18 | FAILED | - | 04:07:12 | FAILED | - |
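The diff,% column can be reproduced from the two averages. A sketch assuming the Orchid average is the baseline denominator (the report does not state the exact formula, so the choice of denominator and the rounding are assumptions, and figures computed from the rounded averages above may differ slightly from the table):

```python
def to_seconds(hms: str) -> int:
    """Parse an hh:mm:ss duration into seconds."""
    h, m, s = map(int, hms.split(":"))
    return h * 3600 + m * 60 + s

def diff_pct(poppy: str, orchid: str) -> float:
    """Percentage change of the Poppy average relative to the Orchid baseline."""
    p, o = to_seconds(poppy), to_seconds(orchid)
    return round((p - o) / o * 100, 1)

# e.g. the 10k-record, 1VU pair from the table above:
print(diff_pct("00:19:38", "00:17:50"))
```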
Link to the report with Orchid Items app testing results: Bulk Edit Items App report [Orchid] 08/03/2023
Resource utilization
Instance CPU Utilization
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. From the Instance CPU utilization graph, you can see that the bulk-edit process consists of 2 parts: file uploading (90-120 minutes) and records processing.
*During the bulk edit with 4 VU, one process was uploading the 100K-record file for more than 4 hours (the jobs finished processing at about 21:30, while the grey graph continued up to 2 a.m.); all other bulk-edit jobs had no problems with this file.
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. Blurred areas contain errors from the load generator.
Memory usage
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU.
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU
Service CPU usage
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU.
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU
RDS CPU utilization
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. Average CPU for 1VU ~ 32%; 2VU ~ 43%; 3VU ~ 50%; 4VU ~ 52%; 5VU ~ 63%;
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. Average CPU for 1VU ~ 40%; 2VU ~ 51%; 3VU ~ 53%; 4VU ~ 62%; 5VU ~ 80%;
Database connections
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. The number of connections to the database did not exceed 200
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. The number of connections to the database did not exceed 200
Database load
Part of test 1. Bulk-edit 100K records 5VU.
TOP SQL Queries
Appendix
Infrastructure
PTF environment: pcp1
- 11 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 2 db.r6.xlarge database instances: one reader and one writer
- MSK tenant
- 4 m5.2xlarge brokers in 2 zones
- Apache Kafka version 2.8.0
- EBS storage volume per broker: 300 GiB
- auto.create.topics.enable=true
- log.retention.minutes=480
- default.replication.factor=3
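For reference, the broker settings listed above correspond to a server.properties fragment like the following (a sketch restating this appendix, not pulled from the actual cluster configuration):

```properties
# MSK broker settings used for the test cluster (per this appendix)
auto.create.topics.enable=true   # topics are created on first use
log.retention.minutes=480        # retain messages for 8 hours
default.replication.factor=3     # 3 replicas across the 4 brokers
```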
...
- Item records update:
- Upload file with item barcodes
- Click the Start bulk edit option in the Action menu and make the following changes:
- Set Temporary location to Clear field
- Set Permanent location to < to the value available on test environment>
- Set Status to Unknown
- Set Temporary loan type to Clear field
- Set Permanent loan type to < to the value available on test environment>
- Add Administrative note by adding text: "This is a new administrative note"
- Add Action note by adding text: "This is a new action note"
- Suppress from discovery (set the value to true)
- Confirm the changes
- Commit the changes
- Verify the changes are correct
- Download the file with updated records
- Download the file with errors (if applicable)
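The edit steps above can be expressed as a machine-readable rule list, which is what a scripted run (e.g. the JMeter POST in Test 4) would send. The option/action names below are illustrative placeholders, not the exact FOLIO bulk-operations schema, and the two location/loan-type values stand in for "the value available on the test environment":

```python
def build_item_edit_rules(permanent_location: str, permanent_loan_type: str) -> list[dict]:
    """Hypothetical rule list mirroring the UI steps above; field and
    action names are illustrative, not the real FOLIO schema."""
    return [
        {"option": "TEMPORARY_LOCATION", "action": "CLEAR_FIELD"},
        {"option": "PERMANENT_LOCATION", "action": "REPLACE_WITH", "value": permanent_location},
        {"option": "STATUS", "action": "REPLACE_WITH", "value": "Unknown"},
        {"option": "TEMPORARY_LOAN_TYPE", "action": "CLEAR_FIELD"},
        {"option": "PERMANENT_LOAN_TYPE", "action": "REPLACE_WITH", "value": permanent_loan_type},
        {"option": "ADMINISTRATIVE_NOTE", "action": "ADD", "value": "This is a new administrative note"},
        {"option": "ACTION_NOTE", "action": "ADD", "value": "This is a new action note"},
        {"option": "SUPPRESS_FROM_DISCOVERY", "action": "SET_TO_TRUE"},
    ]

rules = build_item_edit_rules("Main Library", "Can circulate")  # placeholder values
print(len(rules))  # 8 edit rules, one per UI step
```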
Test 1: Run manually from the UI: run a bulk-edit job with the configuration above. Run up to 5 concurrent processes in new browser tabs. Check that the files download successfully after the update.
Test 2: Manually tested; 100k-, 50k-, and 1-record files, with DI started simultaneously on each of 3 tenants (9 jobs total).
Test 3: Run CICO on one tenant and DI jobs on 3 tenants, including the one that runs CICO. Start the second job after the first one reaches 30% completion, and start another job on a third tenant after the first job reaches 60% completion. CICO: 20 users; DI file size: 25k.
Test 4: To define the optimal value for RECORDS_PER_SPLIT_FILE (500, 1K, 2K, 5K), data-import jobs with the PTF-Create-2 profile were run for 25K records on 1 tenant, then simultaneously on 2 tenants and on 3 tenants. Run from a JMeter script: the configuration above was added to the POST method to run bulk edit with the proper configuration. Tests were run for 100-1000-5000-10k records successively from 1VU up to 5VU.
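The VU ramp used in the tests above (run the same workload with 1, then 2, up to 5 concurrent users) can be sketched with a generic driver; `job` here is a stand-in for one bulk-edit run, not a real FOLIO client:

```python
from concurrent.futures import ThreadPoolExecutor

def run_round(job, n_vu: int) -> list:
    """Run n_vu copies of `job` simultaneously, one per virtual user."""
    with ThreadPoolExecutor(max_workers=n_vu) as pool:
        futures = [pool.submit(job, vu) for vu in range(1, n_vu + 1)]
        return [f.result() for f in futures]

def ramp(job, max_vu: int = 5) -> dict:
    """Run successive rounds from 1VU up to max_vu, as in Tests 1 and 2."""
    return {n: run_round(job, n) for n in range(1, max_vu + 1)}

# Stub job: pretend each VU edits a batch and returns its VU number.
print(ramp(lambda vu: vu, 3))  # {1: [1], 2: [1, 2], 3: [1, 2, 3]}
```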