Overview
Bulk Edits - Establish a performance baseline for combined bulk updates (PERF-480) in the Orchid release, which includes the architectural changes implemented in UXPROD-3842. The goal is to verify that different bulk edits can be performed simultaneously.
- How long does it take to export 100, 1000, 2500, and 5000 records?
- Run the test with up to 5 concurrent users.
- Look for memory trends and CPU usage.
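The test design above (N concurrent users, each processing a fixed record count, with per-job elapsed time recorded) can be sketched with a simple concurrency harness. This is a minimal sketch, assuming a hypothetical `run_bulk_edit` placeholder; it is not the actual FOLIO Bulk Edits API.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def run_bulk_edit(user_id: int, record_count: int) -> float:
    """Hypothetical stand-in for one user's upload + commit-changes job."""
    start = time.monotonic()
    # ... perform the bulk edit of `record_count` records here ...
    return time.monotonic() - start


def measure(users: int, record_count: int) -> list[float]:
    """Run `users` jobs in parallel and return each job's elapsed seconds."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(run_bulk_edit, u, record_count) for u in range(users)]
        return [f.result() for f in futures]


for vu in range(1, 6):                 # 1 to 5 concurrent virtual users
    for n in (100, 1000, 2500, 5000):  # record counts under test
        durations = measure(vu, n)
        print(f"{vu}VU x {n} records: max {max(durations):.1f}s")
```

The worst (longest) duration per run is what the Test Runs table below reports per cell.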
Summary
Test report for the Bulk Edits users-app functionality, 2022-10-26.
- 5k records per user with 5 users working simultaneously (25k records total) can be uploaded in about 6 minutes 13 seconds.
- Input files of identifiers must be strictly deduplicated; duplicate barcodes cause records to be skipped (see Test Runs).
- Memory of all modules was stable during the tests with 5000 records and 5 parallel users; the memory gap in Figure 1 was caused by a data import process.
- Instance CPU usage
- maximal value for test 4VU (0.1-1-2.5k-5k) was 26%
- maximal value for test 5VU (0.1-1-2.5k-5k) was 27%
- Service CPU usage
- With 2500 records per user and 5 parallel jobs, CPU of mod-users reached 199%; no other module exceeded 15%. With 5000 or 10k records per user and 5 parallel jobs, CPU of mod-users reached 174%, and no other module exceeded 23%.
- RDS CPU utilization did not exceed 36% for 5 jobs with 2500 records and 49% for 5 jobs with 5k or 10k records.
Recommendations & Jiras
For further testing, Users' bulk editing can be performed with 10k records.
Update data records
Infrastructure
PTF environment
- 9 m6i.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 2 db.r6.xlarge database instances: one reader and one writer
- MSK ptf-kafka-1
- 4 m5.2xlarge brokers in 2 zones
- Apache Kafka version 2.8.0
- EBS storage volume per broker 300 GiB
- auto.create.topics.enable=true
- log.retention.minutes=480
- default.replication.factor=3
Test Runs
Total processing time of upload, edit, and commit changes. Units = hh:mm:ss
Number of virtual user/ Records | 1VU | 2VU | 3VU | 4VU | 5VU |
---|---|---|---|---|---|
100 records | 00:01:05 | 00:01:03 00:01:03 | 00:01:03 00:01:03 00:01:05 | 00:01:03 00:01:03 00:01:03 00:01:04 | 00:01:04 00:01:04 00:01:04 00:01:04 00:01:04 |
1000 records | 00:01:37 | 00:01:59 00:01:35 | 00:02:02 00:02:05 00:01:36 | 00:02:02 00:02:02 00:01:59 00:02:00 | 00:02:11 00:02:13 00:02:13 00:02:05 00:02:06 |
2500 records | 00:03:28 | 00:03:30 00:03:31 | 00:03:42 00:03:42 00:01:10(*1) | 00:00:23(*2) 00:03:47 00:03:46 00:03:46 | 00:03:48 00:04:03 00:03:44 00:04:04 00:04:04 |
5000 records | 00:05:13 | 00:06:36 (*3) | 00:06:48 00:06:50 00:06:50 (*3) | 00:06:32 00:06:32 00:02:29 00:02:29 (*3) | 00:06:13 00:06:13 00:06:11 00:06:08 00:06:13 (*3) |
(*1) Duplicate barcodes in CSV input data, 1200 out of 2500 records were processed
(*2) Index 546 out of bounds for length 13 (ArrayIndexOutOfBoundsException) - MODEXPW-306
(*3) Was running as a separate test to avoid duplication of input data
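Per-user throughput can be derived from the hh:mm:ss cells above. A small sketch, using the worst (longest) time from the 5VU column of each row:

```python
def to_seconds(hms: str) -> int:
    """Convert an hh:mm:ss cell from the table to seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s


# Worst 5VU time per record count, taken from the table above
worst_5vu = {100: "00:01:04", 1000: "00:02:13", 2500: "00:04:04", 5000: "00:06:13"}

for records, hms in worst_5vu.items():
    secs = to_seconds(hms)
    print(f"{records} records: {secs}s, ~{records / secs:.1f} records/s per user")
```

Throughput per user rises with batch size (e.g. 5000 records in 373 s is about 13 records/s), which suggests a roughly fixed per-job overhead dominating the small batches.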
Comparison with previous results
5VU, Records | Nolana | Orchid |
---|---|---|
2500 records | 2 min 9 sec | 4 min 4 sec |
5000 records | 3 min 47 sec | 6 min 13 sec |
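The regression relative to Nolana can be quantified directly from the table. A minimal sketch, with the durations converted to seconds by hand:

```python
# Durations from the comparison table, converted to seconds
nolana = {2500: 2 * 60 + 9, 5000: 3 * 60 + 47}   # 129 s, 227 s
orchid = {2500: 4 * 60 + 4, 5000: 6 * 60 + 13}   # 244 s, 373 s

for records in (2500, 5000):
    ratio = orchid[records] / nolana[records]
    print(f"{records} records: Orchid is {ratio:.2f}x slower than Nolana")
```

That is roughly a 1.9x slowdown at 2500 records and 1.6x at 5000 records, consistent with the architectural changes noted in the Overview.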
Memory usage
Figure 1 shows memory usage during testing with 1-5 concurrent users and 100-1000-2500-5000 records.
Figure 2 shows memory usage during testing with 2-5 concurrent users and 5000 records.
Figure 3 shows memory usage for the same time range as Figure 2, but with only the involved services selected.
During the time range marked with #, several modules were restarted and data import jobs were running on the environment.
Figure 1
Figure 2
Figure 3
Instance CPU utilization
Figure 4. Instance CPU utilization for 1 up to 5 VU with 100-1000-2500-5000 records
The test 5VU (0.1-1-2.5k-5k) was restarted because of errors in the input data.
Figure 5. Instance CPU utilization for 2 up to 5 VU with 5000 records
Service CPU utilization
Figure 6. Service CPU utilization for 1 up to 5 VU with 100-1000-2500-5000 records
Figure 7. Service CPU utilization for 1 up to 5 VU with 100-1000-2500-5000 records
RDS CPU utilization
RDS CPU utilization exceeded 40% only for tests with 5k records per user; for tests with 2500 records per user it did not exceed 27%.
Database connections
Errors in logs during testing
2023-03-22T19:47:19.986Z ERROR HTTP response code=404 msg=No suitable module found for path /holdings-sources/ for tenant fs09000000 (log stream ncp5/okapi-b/8dcac0276f1c46cba21d6e5814ec6cd0)
2023-03-22T19:47:19.985Z ERROR HoldingsDataProcessor Holdings source was not found by id=null (log stream ncp5/mod-bulk-operations/bfcbe6d984e1443bb3e2e49dbd14601e)
Appendix
Infrastructure
PTF environment ncp5 [environment name]
- 8 m6i.2xlarge EC2 instances located in US East (N. Virginia), us-east-1 [number of ECS instances, instance type, region]
- 2 db.r6.xlarge database instances: writer and reader [database configuration]
- MSK ptf-kafka-3 [Kafka configuration]
- 4 kafka.m5.2xlarge brokers in 2 zones
- Apache Kafka version 2.8.0
- EBS storage volume per broker 300 GiB
- auto.create.topics.enable=true
- log.retention.minutes=480
- default.replication.factor=3
Modules memory and CPU parameters:
Module | SoftLimit | XMX | Revision | Version | desiredCount | CPUUnits | RWSplitEnabled | HardLimit | Metaspace | MaxMetaspaceSize |
---|---|---|---|---|---|---|---|---|---|---|
mod-inventory-storage-b | 1952 | 1440 | 3 | mod-inventory-storage:26.1.0-SNAPSHOT.644 | 2 | 1024 | False | 2208 | 384 | 512 |
mod-inventory-b | 2592 | 1814 | 7 | mod-inventory:20.0.0-SNAPSHOT.392 | 2 | 1024 | False | 2880 | 384 | 512 |
okapi-b | 1440 | 922 | 1 | okapi:5.1.0-SNAPSHOT.1352 | 3 | 1024 | False | 1684 | 384 | 512 |
mod-users-b | 896 | 768 | 4 | mod-users:19.2.0-SNAPSHOT.584 | 2 | 128 | False | 1024 | 88 | 128 |
mod-data-export-worker | 2600 | 2048 | 3 | mod-data-export-worker:3.0.0-SNAPSHOT.104 | 2 | 1024 | False | 3072 | 384 | 512 |
mod-data-export-spring | 1844 | 1292 | 3 | mod-data-export-spring:2.0.0-SNAPSHOT.67 | 1 | 256 | False | 2048 | 200 | 256 |
mod-bulk-operations | 3864 | 0 | 10 | mod-bulk-operations:1.0.2 | 2 | 400 | False | 4096 | 384 | 512 |
mod-notes | 896 | 322 | 3 | mod-notes:5.1.0-SNAPSHOT.245 | 2 | 128 | False | 1024 | 128 | 128 |
mod-agreements | 2580 | 2048 | 3 | mod-agreements:5.6.0-SNAPSHOT.117 | 2 | 128 | False | 3096 | 384 | 512 |
nginx-okapi | 896 | 0 | 3 | nginx-okapi:2022.03.02 | 2 | 128 | False | 1024 | 0 | 0 |