Bulk Edit Users App report [Orchid] September 2023

Overview

Bulk Edits - Establish a performance baseline for combined bulk updates (PERF-480) in the Orchid release, which includes the architectural changes implemented in UXPROD-3842. The goal is to make sure that bulk edits can be performed simultaneously.

  • How long does it take to export 100, 1000, 2500, and 5000 records?
  • Run with up to 5 concurrent users.
  • Look for memory trends and CPU usage.
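The concurrency scenario above can be scripted as a simple harness; a minimal sketch (the job body is a stand-in — a real run would call the Bulk Edit APIs instead of sleeping):

```python
import concurrent.futures
import time

def bulk_edit_job(vu_id: int, record_count: int) -> dict:
    """One virtual user's upload + commit cycle (simulated stand-in)."""
    started = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real upload/commit work
    return {"vu": vu_id, "records": record_count,
            "seconds": time.perf_counter() - started}

def run_concurrent(vus: int, record_count: int) -> list:
    """Run `vus` jobs simultaneously and collect per-job timings."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=vus) as pool:
        futures = [pool.submit(bulk_edit_job, i, record_count)
                   for i in range(1, vus + 1)]
        return [f.result() for f in futures]

results = run_concurrent(5, 5000)  # 5 users x 5k records, as in the tests
print(len(results))
```

Each per-job duration in `results` corresponds to one cell value in the Test Runs table below.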

Summary

Test report for the Bulk Edits users-app functionality, 2023-09-15.

  • 5k records per user, with 5 users working simultaneously (25k records in total), can be uploaded in about 6 minutes 13 seconds.
    The possible job duration degradation compared to the Nolana release is caused by the addition of the new mod-bulk-operation module and the complete redesign of the bulk-edit architecture.
  • The files with identifiers should be strictly validated: duplicate identifiers cause records to be skipped.
  • The memory of all modules was stable during the tests for 5000 records with 5 parallel users; the memory gap in Figure 1 was caused by restarting several modules before a DI job run.
  • Instance CPU usage
    • maximal value for test 4VU (0.1-1-2.5k-5k) was 26%
    • maximal value for test 5VU (0.1-1-2.5k-5k) was 27%
  • Service CPU usage for test 5VU (0.1-1-2.5k-5k)
    • CPU of mod-bulk-operations reached 126%; for all other modules it did not exceed 22%.
  • RDS CPU utilization did not exceed 43% for 5 jobs with 5000 records and 34% for 4 jobs with 5k records.
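As a rough cross-check of the headline figure (25k records in about 6 min 13 s), the aggregate throughput works out as:

```python
total_records = 5 * 5000      # 5 users x 5k records each
duration_s = 6 * 60 + 13      # 6 min 13 s = 373 s
throughput = total_records / duration_s
print(round(throughput))      # → 67 records/second overall
```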


Recommendations & Jiras

For further testing, Users' bulk editing can be performed with 10k records.


Test Runs

Total processing time of upload and edit (commit changes). Units = minutes:seconds.

Number of virtual users / Records: one job duration per concurrent user.

100 records
  • 1VU: 00:01:05
  • 2VU: 00:01:03, 00:01:03
  • 3VU: 00:01:03, 00:01:03, 00:01:05
  • 4VU: 00:01:03, 00:01:03, 00:01:03, 00:01:04
  • 5VU: 00:01:04, 00:01:04, 00:01:04, 00:01:04, 00:01:04

1000 records
  • 1VU: 00:01:37
  • 2VU: 00:01:59, 00:01:35
  • 3VU: 00:02:02, 00:02:05, 00:01:36
  • 4VU: 00:02:02, 00:02:02, 00:01:59, 00:02:00
  • 5VU: 00:02:11, 00:02:13, 00:02:13, 00:02:05, 00:02:06

2500 records
  • 1VU: 00:03:28
  • 2VU: 00:03:30, 00:03:31
  • 3VU: 00:03:42, 00:03:42, 00:01:10 (*1)
  • 4VU: 00:00:23 (*2), 00:03:47, 00:03:46, 00:03:46
  • 5VU: 00:03:48, 00:04:03, 00:03:44, 00:04:04, 00:04:04

5000 records
  • 1VU: 00:05:13
  • 2VU: 00:06:36, 00:06:38 (*3)
  • 3VU: 00:06:48, 00:06:50, 00:06:50 (*3)
  • 4VU: 00:06:44, 00:06:36, 00:06:39, 00:06:46 (*3)
  • 5VU: 00:06:13, 00:06:13, 00:06:11, 00:06:08, 00:06:13 (*3)

(*1) Duplicate barcodes in CSV input data, 1200 out of 2500 records were processed

(*2) Index 546 out of bounds for length 13 (ArrayIndexOutOfBoundsException), MODEXPW-306

(*3) Was running as a separate test to avoid duplication of input data
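Since (*1) shows that duplicate barcodes truncate a job (1200 of 2500 records processed), input files can be pre-checked before upload. A minimal sketch, assuming a CSV with a `barcode` column (the column name is an assumption, not the actual Bulk Edit input format):

```python
import csv
import io

def find_duplicate_barcodes(csv_text: str) -> set:
    """Return the set of barcode values that appear more than once."""
    seen, dupes = set(), set()
    for row in csv.DictReader(io.StringIO(csv_text)):
        barcode = row["barcode"]
        if barcode in seen:
            dupes.add(barcode)
        seen.add(barcode)
    return dupes

sample = "barcode\n100001\n100002\n100001\n"
print(find_duplicate_barcodes(sample))  # → {'100001'}
```

Running such a check before submission would have caught the input that truncated the 2500-record job.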

Comparison with previous results

5VU, Records    Nolana          Orchid
2500            2 min 9 sec     4 min 4 sec
5000            3 min 47 sec    6 min 13 sec

The possible job duration degradation is caused by the addition of the new mod-bulk-operation module and the complete redesign of the bulk-edit architecture.
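The slowdown can be quantified from the reported durations; simple arithmetic gives roughly +89% for 2500 records and +64% for 5000 records:

```python
def to_seconds(minutes: int, seconds: int) -> int:
    return minutes * 60 + seconds

# (Nolana, Orchid) durations in seconds, from the comparison above
rows = {
    2500: (to_seconds(2, 9), to_seconds(4, 4)),
    5000: (to_seconds(3, 47), to_seconds(6, 13)),
}
for records, (nolana, orchid) in rows.items():
    pct = (orchid - nolana) / nolana * 100
    print(f"{records} records: +{pct:.0f}% in Orchid")
```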

Memory usage

The time range marked with #: several modules were restarted, and then data import jobs were running on the environment (DI jobs were not part of the Bulk Edits testing).

Memory usage during the testing with 1-5 concurrent users and 100-1000-2500-5000 records

Memory usage during the testing with 2-5 concurrent users and 5000 records

Memory usage for the same time range as the figure below, but with only the involved services selected.

Instance CPU utilization

CPU instance utilization for 1 up to 5 VU with 100-1000-2500-5000 records

The test 5VU (0.1-1-2.5k-5k) was restarted because there were duplicate barcodes in the input data.

CPU instance utilization for 2 up to 5 VU with 5000 records

Service CPU utilization

Service CPU utilization for 1 up to 5 VU with 100-1000-2500-5000 records

Service CPU utilization for 2 up to 5 VU with 5000 records

RDS CPU utilization

RDS CPU utilization exceeded 40% for the tests with 5k user records; for the tests with 2500 user records it did not exceed 27%.

Database connections

Errors in logs during testing

(*1) Duplicate barcodes in CSV input data, 1200 out of 2500 records were processed

(*2) Index 546 out of bounds for length 13 (ArrayIndexOutOfBoundsException)