Bulk Edit Items App report [Poppy] 06/12/2023

Overview

Bulk Edit - Establish a performance baseline for user status bulk updates (PERF-750).

  • How long does it take to export 100, 1000, 5000, 10k, and 100K records?
  • Run tests with up to 5 concurrent users.
  • Look for memory trends and CPU usage.
  • Pay attention to any RTR token expiration messages and observe how/if Bulk Edit is affected by expiring tokens. If needed, set the access token's expiration time to 300 s or less to trigger quick expiration.

Summary 

  • All tests completed successfully, and the 100K-record files were downloaded after bulk edit. The system did not reach maximum capacity; therefore, the number of virtual users (VU) can be increased to 7.
  • No "Invalid token" or "Access token has expired" error messages occurred during either test.
  • As the number of virtual users increases, bulk-edit processing time increases (all results are in the first table).
  • Comparing test results on the Poppy and Orchid releases, processing time increased by up to 10% (however, the previous report lacked data on bulk-edit duration).
  • The system was stable during the tests. Resource utilization peaked during the 100K-record bulk edit with 5 concurrent VUs.
    • Max CPU utilization was on the nginx-okapi (60%) and okapi (50%) services.
    • Service memory usage showed no memory leaks, except for the mod-search service during Test 2.
    • Average DB CPU usage was about 63%.
    • Number of DB connections: ~200.
  • Comparing the resource utilization graphs, system behavior on the Poppy release is the same as on Orchid.
  • From the Orchid JIRAs: "The high CPU usage of mod-users (up to 125%) needs to be investigated." During both tests, max CPU consumption for the mod-users service was about 40%.

Recommendations & Jiras

  • The high memory usage of the mod-search service during Test 2 needs to be investigated.


 

Results

Total processing time of upload, edit, and commit changes. Units = hours:minutes:seconds.

| Records \ VU | 1VU | 2VU | 3VU | 4VU | 5VU |
|---|---|---|---|---|---|
| 100 | 00:01:14 | 00:01:13, 00:01:14 | 00:01:15, 00:01:13, 00:01:13 | 00:01:11, 00:01:12, 00:01:11, 00:01:11 | 00:01:12, 00:01:13, 00:01:14, 00:01:12, 00:01:14 |
| 1000 | 00:02:53 | 00:03:01, 00:02:54 | 00:02:51, 00:02:56, 00:02:53 | 00:03:04, 00:03:03, 00:03:02, 00:03:06 | 00:03:10, 00:03:04, 00:03:07, 00:03:06, 00:03:13 |
| 5000 | 00:10:20 | 00:11:13, 00:10:33 | 00:11:13, 00:10:33, 00:10:28 | 00:10:56, 00:10:56, 00:11:01, 00:11:35 | 00:12:34, 00:13:19, 00:12:34, 00:12:30, 00:12:33 |
| 10000 | 00:19:38 | 00:20:47, 00:20:13 | 00:20:50, 00:20:40, 00:20:04 | 00:21:00, 00:20:49, 00:21:09, 00:20:54 | 00:22:21, 00:22:16, 00:22:09, 00:22:17, 00:22:13 |
| 100K | 03:14:59 | 03:33:24, 03:15:41 | 03:27:06, 03:21:31, 03:25:10 | 06:10:23*, 03:33:23, 03:20:39, 03:21:24 | 04:04:24, 04:02:45, 04:03:11, 04:06:32, 04:12:04 |

Each cell lists the duration of each concurrent job at that VU level. *See the note under Instance CPU Utilization.
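The per-VU averages used in the comparison table are the means of these concurrent-job durations. A minimal sketch of that calculation in Python (helper names are illustrative; the sample values are the five 5VU runs for 100 records):

```python
from datetime import timedelta

def parse_hms(s: str) -> int:
    """Convert an hh:mm:ss string to total seconds."""
    h, m, sec = map(int, s.split(":"))
    return h * 3600 + m * 60 + sec

def average_hms(durations: list[str]) -> str:
    """Average hh:mm:ss durations, rounded to whole seconds."""
    total = sum(parse_hms(d) for d in durations)
    return str(timedelta(seconds=round(total / len(durations))))

# The five concurrent 5VU runs for 100 records
print(average_hms(["00:01:12", "00:01:13", "00:01:14", "00:01:12", "00:01:14"]))
# → 0:01:13
```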


Comparison table of bulk-edit process duration on Poppy and Orchid releases

| Records \ VU | 1VU Poppy (avg) | 1VU Orchid (avg) | diff, % | 2VU Poppy (avg) | 2VU Orchid (avg) | diff, % | 3VU Poppy (avg) | 3VU Orchid (avg) | diff, % | 4VU Poppy (avg) | 4VU Orchid (avg) | diff, % | 5VU Poppy (avg) | 5VU Orchid (avg) | diff, % |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 00:01:14 | 00:01:09 | 9% | 00:01:14 | not tested | - | 00:01:14 | not tested | - | 00:01:11 | not tested | - | 00:01:13 | not tested | - |
| 1000 | 00:02:53 | 00:02:36 | 9% | 00:02:58 | not tested | - | 00:02:53 | not tested | - | 00:03:03 | not tested | - | 00:03:08 | not tested | - |
| 10k | 00:19:38 | 00:17:50 | 9% | 00:20:28 | 00:17:50 | 8.7% | 00:20:15 | 00:18:50 | 9% | 00:20:55 | 00:19:10 | 9% | 00:22:18 | 00:20:20 | 9% |
| 50K | not tested | 01:58:20 | - | not tested | not tested | - | not tested | not tested | - | not tested | not tested | - | not tested | not tested | - |
| 100K | 03:14:59 | FAILED | - | 03:24:41 | FAILED | - | 03:24:26 | FAILED | - | 03:24:18 | FAILED | - | 04:07:12 | FAILED | - |
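The diff,% column appears to be a relative difference between the Poppy and Orchid averages, though the exact formula and rounding used in the table are unclear. A minimal sketch of one such calculation (the formula choice is an assumption and does not exactly reproduce the table's rounded values):

```python
def parse_hms(s: str) -> int:
    """Convert an hh:mm:ss string to total seconds."""
    h, m, sec = map(int, s.split(":"))
    return h * 3600 + m * 60 + sec

def diff_percent(poppy: str, orchid: str) -> float:
    """Relative increase of the Poppy duration over Orchid, in percent.
    (Assumed formula: (poppy - orchid) / orchid * 100.)"""
    p, o = parse_hms(poppy), parse_hms(orchid)
    return round((p - o) / o * 100, 1)

# 10k records, 1VU: Poppy 00:19:38 vs Orchid 00:17:50
print(diff_percent("00:19:38", "00:17:50"))
# → 10.1
```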

Link to the report with Orchid Items App testing results: Bulk Edit Items App report [Orchid] 08/03/2023



Resource utilization

Instance CPU Utilization

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. From the Instance CPU utilization graph, you can see that the bulk-edit process consists of two parts: file uploading (90-120 minutes) and record processing.
*During the 4VU bulk edit, one job spent more than 4 hours uploading the 100K-record file (jobs finished processing at about 21:30, and the grey graph continued up to 2 a.m.); all other bulk-edit jobs had no problems with this file.

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. Blurred areas contain errors from the load generator.

Memory usage

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. 

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU.

Service CPU usage

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. 

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU.

RDS CPU utilization

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. Average DB CPU: 1VU ~32%; 2VU ~43%; 3VU ~50%; 4VU ~52%; 5VU ~63%.