Bulk Edit Items App report [Poppy] 06/12/2023

Overview

Bulk Edit - Establish a performance baseline for item record bulk updates. PERF-750: [Poppy] [Bulk edit] Item records (Closed)

  • How long does it take to export 100, 1000, 5000, 10K, and 100K records?

  • Run with up to 5 concurrent users.

  • Look for memory trends and CPU usage.

  • Pay attention to any RTR token expiration messages and observe how/whether the back end is affected by expiring tokens. If needed, set the access token's expiration time to 300s or less to trigger quick expiration of access tokens.
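
As a sanity check before running the test, the configured access-token lifetime can be read directly from a token itself, since RTR access tokens are standard JWTs. A minimal sketch, assuming only the standard `exp`/`iat` JWT claims (nothing FOLIO-specific):

```python
import base64
import json

def access_token_ttl(jwt: str) -> int:
    """Return the configured lifetime (exp - iat, in seconds) of a JWT."""
    payload_b64 = jwt.split(".")[1]
    # base64url payloads are usually stored without padding; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] - claims["iat"]
```

A returned TTL of 300 or less confirms the quick-expiration setup described above.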

Summary 

  • All tests were successful, and the 100K-record files were downloaded after bulk edit (tests completed successfully). The system did not reach maximum capacity; therefore, the number of VUs can be increased to 7.

  • No errors such as "Invalid token" or "Access token has expired" occurred during either test.

  • As the number of virtual users increases, bulk-edit processing time increases (all results are in the first table).

  • Comparing the test results on the Poppy and Orchid releases, processing time increased by up to 10% (however, the previous report lacked data on bulk-edit duration).

  • The system was stable during the tests. Resource utilization peaked during the 100K-record bulk edit with 5 concurrent VUs.

    • Max CPU utilization was on the nginx-okapi (60%) and okapi (50%) services.

    • Service memory usage showed no memory leaks, except for the mod-search service during Test 2.

    • Average DB CPU usage was about 63%.

    • Number of DB connections: ~200.

  • Comparing the resource utilization graphs, system behavior on the Poppy release is the same as on Orchid.

  • From the Orchid JIRAs: "The high CPU usage of mod-users (up to 125%) needs to be investigated." During both tests, max CPU consumption for the mod-users service was about 40%.

Recommendations & Jiras

  • The high memory usage of the mod-search service during Test 2 needs to be investigated.


Results


Total processing time of upload and edit - commit changes. Units = hours:minutes:seconds. Each cell lists the duration reported by each of the concurrent jobs in that run.

| Records / VU | 1VU | 2VU | 3VU | 4VU | 5VU |
| --- | --- | --- | --- | --- | --- |
| 100 | 00:01:14 | 00:01:13, 00:01:14 | 00:01:15, 00:01:13, 00:01:13 | 00:01:11, 00:01:12, 00:01:11, 00:01:11 | 00:01:12, 00:01:13, 00:01:14, 00:01:12, 00:01:14 |
| 1000 | 00:02:53 | 00:03:01, 00:02:54 | 00:02:51, 00:02:56, 00:02:53 | 00:03:04, 00:03:03, 00:03:02, 00:03:06 | 00:03:10, 00:03:04, 00:03:07, 00:03:06, 00:03:13 |
| 5000 | 00:10:20 | 00:11:13, 00:10:33 | 00:11:13, 00:10:33, 00:10:28 | 00:10:56, 00:10:56, 00:11:01, 00:11:35 | 00:12:34, 00:13:19, 00:12:34, 00:12:30, 00:12:33 |
| 10000 | 00:19:38 | 00:20:47, 00:20:13 | 00:20:50, 00:20:40, 00:20:04 | 00:21:00, 00:20:49, 00:21:09, 00:20:54 | 00:22:21, 00:22:16, 00:22:09, 00:22:17, 00:22:13 |
| 100K | 03:14:59 | 03:33:24, 03:15:41 | 03:27:06, 03:21:31, 03:25:10 | 06:10:23*, 03:33:23, 03:20:39, 03:21:24 | 04:04:24, 04:02:45, 04:03:11, 04:06:32, 04:12:04 |
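
The per-VU averages used in the comparison table that follows can be reproduced from the individual job durations above. A minimal sketch of the conversion (helper names are illustrative):

```python
def to_seconds(hms: str) -> int:
    """Parse an 'HH:MM:SS' duration into seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def to_hms(seconds: float) -> str:
    """Format seconds back as 'HH:MM:SS', rounded to the nearest second."""
    total = round(seconds)
    return f"{total // 3600:02d}:{(total % 3600) // 60:02d}:{total % 60:02d}"

def average_duration(durations: list[str]) -> str:
    """Average the durations of all concurrent jobs in one run."""
    secs = [to_seconds(d) for d in durations]
    return to_hms(sum(secs) / len(secs))

# 2VU / 100 records: two concurrent jobs
print(average_duration(["00:01:13", "00:01:14"]))  # 00:01:14
```

Note that averages computed this way can differ by a second or two from the figures in the report, which may have been derived from unrounded timings.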

 

Comparison table of bulk-edit process duration on Poppy and Orchid releases

| Records | VU | Poppy Average | Orchid Average | diff, % |
| --- | --- | --- | --- | --- |
| 100 | 1 | 00:01:14 | 00:01:09 | 9% |
| 100 | 2 | 00:01:14 | not tested | - |
| 100 | 3 | 00:01:14 | not tested | - |
| 100 | 4 | 00:01:11 | not tested | - |
| 100 | 5 | 00:01:13 | not tested | - |
| 1000 | 1 | 00:02:53 | 00:02:36 | 9% |
| 1000 | 2 | 00:02:58 | not tested | - |
| 1000 | 3 | 00:02:53 | not tested | - |
| 1000 | 4 | 00:03:03 | not tested | - |
| 1000 | 5 | 00:03:08 | not tested | - |
| 10k | 1 | 00:19:38 | 00:17:50 | 9% |
| 10k | 2 | 00:20:28 | 00:17:50 | 8.7% |
| 10k | 3 | 00:20:15 | 00:18:50 | 9% |
| 10k | 4 | 00:20:55 | 00:19:10 | 9% |
| 10k | 5 | 00:22:18 | 00:20:20 | 9% |
| 50K | 1 | not tested | 01:58:20 | - |
| 50K | 2-5 | not tested | not tested | - |
| 100K | 1 | 03:14:59 | FAILED | - |
| 100K | 2 | 03:24:41 | FAILED | - |
| 100K | 3 | 03:24:26 | FAILED | - |
| 100K | 4 | 03:24:18 | FAILED | - |
| 100K | 5 | 04:07:12 | FAILED | - |

Link to the report with Orchid Items app testing results: Bulk Edit Items App report [Orchid] 08/03/2023
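
The diff,% column above can be read as the relative slowdown of the Poppy average against the Orchid baseline. A sketch, assuming that definition of the metric (the report's own rounding means exact figures may differ slightly):

```python
def to_seconds(hms: str) -> int:
    """Parse an 'HH:MM:SS' duration into seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def diff_percent(poppy: str, orchid: str) -> float:
    """Relative slowdown of Poppy vs. the Orchid baseline, in percent."""
    p, o = to_seconds(poppy), to_seconds(orchid)
    return round((p - o) / o * 100, 1)

# 10k records, 1VU: Poppy 00:19:38 vs Orchid 00:17:50
print(diff_percent("00:19:38", "00:17:50"))  # 10.1
```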



Resource utilization

Instance CPU Utilization

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. The instance CPU utilization graph shows that the bulk-edit process consists of two parts: file upload (90-120 minutes) and records processing.
*During the 4VU bulk edit, one job spent more than 4 hours uploading the 100K-record file (jobs finished processing at about 21:30, while the grey graph continued until about 2 a.m.); all other bulk-edit jobs had no problems with this file.

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. Blurred areas contain errors from the load generator.

Memory usage

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. 

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU.

Service CPU usage

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. 

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU.

RDS CPU utilization

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. Average DB CPU: 1VU ~ 32%; 2VU ~ 43%; 3VU ~ 50%; 4VU ~ 52%; 5VU ~ 63%.

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. Average DB CPU: 1VU ~ 40%; 2VU ~ 51%; 3VU ~ 53%; 4VU ~ 62%; 5VU ~ 80%.

Database connections

Test 1. Bulk-edit 5 consecutive runs of 100K records, starting from 1VU up to 5VU. The number of connections to the database did not exceed 200.

Test 2. Bulk-edit 100-1000-5000-10k records successively from 1VU up to 5VU. The number of connections to the database did not exceed 200.

Database load 

Part of Test 1. Bulk-edit of 100K records, 5VU.

TOP SQL Queries





Appendix

Infrastructure

PTF environment: pcp1

  • 11 m6g.2xlarge EC2 instances located in US East (N. Virginia), us-east-1

  • 2 db.r6.xlarge database instances: one reader and one writer

  • MSK tenant

    • 4 m5.2xlarge brokers in 2 zones

    • Apache Kafka version 2.8.0

    • EBS storage volume per broker 300 GiB

    • auto.create.topics.enable=true

    • log.retention.minutes=480

    • default.replication.factor=3

Modules memory and CPU parameters (pcp1-pvt, Dec 06 13:08:42 UTC 2023)

| Module | Task Def. Revision | Module Version | Task Count | Mem Hard Limit | Mem Soft Limit | CPU units | Xmx | MetaspaceSize | MaxMetaspaceSize | R/W split enabled |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mod-authtoken | 13 | /mod-authtoken:2.14.1 | 2 | 1440 | 1152 | 512 | 922 | 88 | 128 | FALSE |
| mod-inventory-update | 9 | /mod-inventory-update:3.2.1 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE |
| mod-bulk-operations | 8 | mod-bulk-operations:1.1.0 | 2 | 3072 | 2600 | 1024 | 1536 | 384 | 512 | FALSE |
| mod-users-bl | 9 | /mod-users-bl:7.6.0 | 2 | 1440 | 1152 | 512 | 922 | 88 | 128 | FALSE |
| mod-inventory-storage | 12 | mod-inventory-storage:27.0.3 | 2 | 4096 | 3690 | 2048 | 3076 | 384 | 512 | FALSE |
| mod-data-export-worker | 9 | mod-data-export-worker:3.1.0 | 2 | 3072 | 2800 | 1024 | | | | |