Table of Contents
Overview
...
- Can it be used with up to 5 concurrent users?
- Run four consecutive jobs editing 10k item records
- Run four simultaneous jobs editing 10k item records
- Look for memory and CPU usage trends
Summary
Results
Total processing time of upload and edit, including committing changes; each cell lists one duration per concurrent job. Units = hours:minutes:seconds
...
Number of virtual users / Records | 1 VU | 2 VU | 3 VU | 4 VU | 5 VU |
---|---|---|---|---|---|
100 records | 00:01:14 | 00:01:13, 00:01:14 | 00:01:15, 00:01:13, 00:01:13 | 00:01:11, 00:01:12, 00:01:11, 00:01:11 | 00:01:12, 00:01:13, 00:01:14, 00:01:12, 00:01:14 |
1000 records | 00:02:53 | 00:03:01, 00:02:54 | 00:02:51, 00:02:56, 00:02:53 | 00:03:04, 00:03:03, 00:03:02, 00:03:06 | 00:03:10, 00:03:04, 00:03:07, 00:03:06, 00:03:13 |
5000 records | 00:10:20 | 00:11:13, 00:10:33 | 00:11:13, 00:10:33, 00:10:28 | 00:10:56, 00:10:56, 00:11:01, 00:11:35 | 00:12:34, 00:13:19, 00:12:34, 00:12:30, 00:12:33 |
10000 records | 00:19:38 | 00:20:47, 00:20:13 | 00:20:50, 00:20:40, 00:20:04 | 00:21:00, 00:20:49, 00:21:09, 00:20:54 | 00:22:21, 00:22:16, 00:22:09, 00:22:17, 00:22:13 |
100K records | 03:14:59 | 03:33:24, 03:15:41 | 03:27:06, 03:21:31, 03:25:10 | 06:10:23, 03:33:23, 03:20:39, 03:21:24 | 04:04:24, 04:02:45, 04:03:11, 04:06:32, 04:12:04 |
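One way to read the table is as throughput. A minimal sketch (values taken from the 1VU column) that derives records per second from the durations above:

```python
# Derive records/second throughput from an hh:mm:ss duration.
def throughput(records: int, duration: str) -> float:
    h, m, s = (int(x) for x in duration.split(":"))
    return records / (h * 3600 + m * 60 + s)

# Durations from the 1VU column of the table above.
for records, duration in [(100, "00:01:14"), (1000, "00:02:53"),
                          (5000, "00:10:20"), (10000, "00:19:38"),
                          (100_000, "03:14:59")]:
    print(f"{records:>7} records: {throughput(records, duration):.1f} rec/s")
```

Per-job overhead dominates the small runs (~1.4 rec/s at 100 records); from about 5000 records up, the rate levels off near 8.5 records per second.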
Resource utilization
Instance CPU Utilization
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. From the Instance CPU utilization graph, you can see that the bulk-edit process consists of 2 parts: file uploading (90-120 minutes) and records processing.
*During the bulk-edit with 4 VU, one process was uploading the 100K-record file for more than 4 hours (the jobs finished processing at about 21:30, but the grey graph kept running until about 2 a.m.); on all other bulk-edit jobs there were no problems with this file.
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU
Memory usage
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU.
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU
Service CPU usage
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU.
...
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU
RDS CPU utilization
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. Average CPU: 1VU ~32%; 2VU ~43%; 3VU ~50%; 4VU ~52%; 5VU ~63%.
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. Average CPU: 1VU ~40%; 2VU ~51%; 3VU ~53%; 4VU ~62%; 5VU ~80%.
Database connections
Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. The number of connections to the database did not exceed 200
Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. The number of connections to the database did not exceed 200
Database load
Part of Test 1: bulk-edit of 100K records with 5VU.
TOP SQL Queries
Infrastructure
PTF environment: pcp1
- 11 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 2 db.r6.xlarge database instances: one reader and one writer
- MSK tenant
  - 4 m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
...
- Item records update (see the scripted sketch after this list):
  - Upload a file with item barcodes
  - Click the Start bulk edit option in the Action menu and make the following changes:
    - Set Temporary location to Clear field
    - Set Permanent location to <a value available on the test environment>
    - Set Status to Unknown
    - Set Temporary loan type to Clear field
    - Set Permanent loan type to <a value available on the test environment>
    - Add Administrative note with the text: "This is a new administrative note"
    - Add Action note with the text: "This is a new action note"
    - Suppress from discovery (set the value to true)
  - Confirm the changes
  - Commit the changes
  - Verify the changes are correct
  - Download the file with updated records
  - Download the file with errors (if applicable)
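The same scenario can also be driven through the back-end HTTP API rather than the UI. A minimal sketch, assuming FOLIO-style Okapi headers; the endpoint paths, tenant id, and rule payload shape here are hypothetical placeholders, not the documented bulk-edit API:

```python
import requests

BASE = "https://pcp1-okapi.example.org"  # hypothetical Okapi URL
HEADERS = {
    "x-okapi-tenant": "fs09000000",      # hypothetical tenant id
    "x-okapi-token": "<auth token>",
}

# 1. Upload the file with item barcodes (path is illustrative).
with open("item_barcodes.csv", "rb") as f:
    upload = requests.post(f"{BASE}/bulk-edit/upload",
                           headers=HEADERS, files={"file": f})
job_id = upload.json()["jobId"]

# 2. Start the edit with the rule set from the scenario above.
rules = [
    {"field": "temporaryLocation",     "action": "CLEAR"},
    {"field": "permanentLocation",     "action": "REPLACE", "value": "<test location>"},
    {"field": "status",                "action": "REPLACE", "value": "Unknown"},
    {"field": "temporaryLoanType",     "action": "CLEAR"},
    {"field": "permanentLoanType",     "action": "REPLACE", "value": "<test loan type>"},
    {"field": "administrativeNote",    "action": "ADD", "value": "This is a new administrative note"},
    {"field": "actionNote",            "action": "ADD", "value": "This is a new action note"},
    {"field": "suppressFromDiscovery", "action": "REPLACE", "value": True},
]
requests.post(f"{BASE}/bulk-edit/{job_id}/start", headers=HEADERS, json=rules)

# 3. Commit the changes and download the updated-records file.
requests.post(f"{BASE}/bulk-edit/{job_id}/commit", headers=HEADERS)
result = requests.get(f"{BASE}/bulk-edit/{job_id}/download", headers=HEADERS)
with open("updated_records.csv", "wb") as out:
    out.write(result.content)
```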
Test 1: The bulk-edit job with the configuration above was run manually from the UI, with up to 5 concurrent processes started in new browser tabs. After the update, check that the files download successfully.
Test 2: Manually tested 100k-, 50k-, and 1-record files; DI was started simultaneously on each of the 3 tenants (9 jobs total).
Test 3: Run CICO on one tenant and DI jobs on 3 tenants, including the one that runs CICO. Start the second job after the first one reaches 30% completion, and start another job on a third tenant after the first job reaches 60% completion. CICO: 20 users; DI file size: 25k.
Test 4: To determine the optimal value for RECORDS_PER_SPLIT_FILE (500, 1K, 2K, 5K), data-import jobs with the PTF-Create-2 profile were run for 25K records on 1 tenant, on 2 tenants, and on 3 tenants simultaneously. The test was run from a JMeter script; the configuration above was added to the POST method to run bulk-edit with the proper configuration. Tests were run for 100-1000-5000-10k records successively, from 1VU up to 5VU.
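As a rough, script-level equivalent of the ramp in that JMeter plan (successive record sizes, concurrency stepping from 1VU up to 5VU), where run_bulk_edit is a placeholder standing in for the upload/start/commit sequence sketched earlier:

```python
import threading

RECORD_SIZES = [100, 1_000, 5_000, 10_000]

def run_bulk_edit(vu_id: int, size: int) -> None:
    # Placeholder: in the real test this would perform the
    # upload/start/commit sequence for a file of `size` records.
    print(f"VU {vu_id}: bulk-edit of {size} records")

def virtual_user(vu_id: int) -> None:
    # Each VU works through the record sizes successively.
    for size in RECORD_SIZES:
        run_bulk_edit(vu_id, size)

# Ramp concurrency from 1VU up to 5VU, one level per pass.
for vu_count in range(1, 6):
    threads = [threading.Thread(target=virtual_user, args=(i + 1,))
               for i in range(vu_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```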