Bulk Edit report for Users App [Morning Glory]
IN PROGRESS
Overview
Per PERF-267, test Bulk Edit (PERF-271) with 10K records to understand the workflow behavior before and during a mod-data-export-worker task crash, if one occurs at all.
- How long does it take to export 10K records?
- What happens to the running job? Will it be able to resume and complete successfully when the new task is spun up?
- Look for a memory trend and use it to decide on the number of concurrent jobs needed to reach the tipping point.
Infrastructure
PTF environment
- 10 m6i.2xlarge EC2 instances (changed from the m5.xlarge used in Lotus)
- 2 db.r6.xlarge database instances, one reader and one writer
- MSK
- 4 m5.2xlarge brokers in 2 zones
- auto.create.topics.enable = true
- log.retention.minutes=120
- 2 partitions per DI topic (see the sketch after this list)
- okapi (running tasks: 3)
- 1024 CPU units, 1360 MB mem
- mod-users (running tasks: 2)
- 128 CPU units, 896 MB mem
- mod-data-export (running tasks: 1)
- 1024 CPU units, 896 MB mem
- mod-data-export-spring (running tasks: 1)
- 256 CPU units, 1844 MB mem
- mod-data-export-worker (running tasks: 1)
- 256 CPU units, 1844 MB mem
- mod-notes (running tasks: 2)
- 128 CPU units, 896 MB mem
- mod-agreements (running tasks: 2)
- 128 CPU units, 1382 MB mem
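For reference, the DI topic settings listed above can be reproduced with a short admin script. The snippet below is a minimal sketch using kafka-python; the broker addresses and the topic name (folio.di.example) are placeholders for illustration, not the actual PTF/MSK configuration.

```python
from kafka.admin import KafkaAdminClient, NewTopic

# Minimal sketch: create a DI-style topic with 2 partitions and a
# 120-minute retention, mirroring the MSK settings listed above.
# Broker addresses and the topic name are placeholders.
admin = KafkaAdminClient(bootstrap_servers="broker-1:9092,broker-2:9092")

topic = NewTopic(
    name="folio.di.example",          # hypothetical DI topic name
    num_partitions=2,                 # 2 partitions per DI topic
    replication_factor=2,             # brokers span 2 zones
    topic_configs={"retention.ms": str(120 * 60 * 1000)},  # 120 min retention
)

admin.create_topics([topic])
admin.close()
```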
Software Versions
- mod-data-export-worker v1.4.1
- mod-data-export-spring v1.4.1
- mod-agreements v5.2.0
- mod-notes v3.1.0
- mod-users v18.3.0
Test Runs
Scenario
Users App:
1. Navigate to the Bulk edit app
2. Select the Users App
3. Select the Users identifier from the "Records identifier" dropdown
4. Upload the .csv file with user identifiers by dragging it onto the Drag & drop area
5. Click the "Actions" menu => "Download matched records (CSV)"
6. Open the file downloaded to the local machine
7. Modify the user status or patron group in the file => Save changes
8. Click the "Actions" menu => Select "Start bulk edit (CSV)"
9. Upload the modified file to the Drag & drop zone => Hit "Next" => Hit "Commit changes"
10. Click "Actions" => Select "Download changed records (CSV)"
Record identifier files location - bulk_edit_test_data.zip
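As a reference for reproducing the identifier files, the sketch below generates a file of synthetic user barcodes. It assumes the Bulk Edit identifier file is a plain CSV with one identifier per line; the barcode format here is made up for illustration and does not match the actual contents of bulk_edit_test_data.zip.

```python
import csv

def write_barcode_file(path: str, count: int) -> None:
    """Write `count` synthetic user barcodes, one per line.

    The identifier file for Bulk Edit is assumed to be a plain CSV
    with a single identifier per row; the barcode values below are
    placeholders, not the ones shipped in bulk_edit_test_data.zip.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(count):
            writer.writerow([f"ptf-{i:08d}"])  # hypothetical barcode format

# e.g. a 10K-record identifier file for the largest test case
write_barcode_file("10.000_User_barcodes_ptf.csv", 10_000)
```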
Test | Records number | Duration | Results for finding records | Comment | Record identifier file name | Time to process (file upload + edited file upload + commit) |
1. | 100 | checked multiple times | PASS | Always passes | 100_User_barcodes_ptf.csv | about 5 + 5 + 2 sec |
2. | 1000 | checked multiple times | PASS | Always passes | 1.000_User_barcodes_ptf.csv | about 15 + 15 + 5 sec |
3. | 2000 | checked multiple times | PASS | Always passes | 2.000_User_barcodes_ptf.csv | 20 + 20 + 15 sec |
4. | 2500 | checked multiple times | PASS | Always passes | 2.500_User_barcodes_ptf.csv | about 30 + 30 sec |
5. | 2560 | checked multiple times | PASS | Always passes (the maximum record number) | 2.560_User_barcodes_ptf.csv | about 30 + 30 sec |
6. | 2590 | checked multiple times | PASS/FAIL | Sometimes passes, sometimes fails | 2.590_User_barcodes_ptf.csv | about 30 sec |
7. | 2600 | checked multiple times | FAIL | The identifier file can be uploaded, but the edited file upload is not available | 2.600_User_barcodes_ptf.csv | about 30 sec |
8. | 3000 | checked multiple times | FAIL | The identifier file can be uploaded, but the edited file upload is not available | 3.000_User_barcodes_ptf.csv | about 30 sec |
9. | 5000 | checked multiple times | FAIL | The identifier file can be uploaded, but the edited file upload is not available | 5.000_User_barcodes_ptf.csv | about 30 sec |
10. | 10000 | checked multiple times | FAIL | The identifier file can be uploaded, but the edited file upload is not available | 10.000_User_barcodes_ptf.csv | about 30 sec |
Results
Summary
- This is the initial test report for the Bulk Edit Users App functionality.
- 10K records cannot be exported. The limit is about 2560 records with no failures (tried 3 times); up to 2590 records succeeds from time to time, and from 2600 records the run always fails.
- Record identifier file upload takes about 30 sec for both successful and failed runs.
- Upload of the file with edited data also takes about 30 sec.
- The system is unstable and fails every time during the commit changes step for more than 2000 users (the FOLIO account gets blocked).
- Files with the full set of matched records can be downloaded for making changes. On Windows, opening the file in Excel automatically converts barcode values to Excel's suggested format, so barcodes can end up changed (see the sketch after this list).
- The start of processing is not indicated (there is no notification that the process has started).
- Memory trend: memory usage is stable.
- CPU utilization for mod-users was very high, up to 135%, for the 3000, 5000, and 10000 record bulk edits; the record identifier file upload failed for these runs.
- Parallel jobs can run simultaneously only if started with a ramp-up of at least 10 sec (for both the upload and editing processes). Jobs started less than 10 sec apart stay in IN_PROGRESS status forever.
- A failover test was performed while uploading the file with 2000 records (the mod-data-export-worker task was stopped). The result was "Fail to upload file"; the job status becomes "In progress" and does not change.
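To avoid the Excel barcode mangling noted above, the downloaded matched-records CSV can be edited programmatically with every column kept as text. The snippet below is a minimal sketch using pandas; the file names are placeholders and the report does not prescribe this workflow.

```python
import pandas as pd

# Minimal sketch: read the downloaded matched-records CSV with all
# columns forced to strings so barcode values are not coerced into
# a numeric format the way Excel does on Windows.
df = pd.read_csv("matched_records.csv", dtype=str)

# ... edit user status / patron group values here ...

# Write the edited file back without pandas' default index column.
df.to_csv("matched_records_edited.csv", index=False)
```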
Failover test
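The failover was simulated by stopping the mod-data-export-worker ECS task mid-upload. The exact mechanism is not recorded in this report; the sketch below shows one possible way to do it with boto3, with the cluster and service names as placeholders.

```python
import boto3

# Minimal sketch: stop the running mod-data-export-worker task so ECS
# spins up a replacement, simulating a crash during file upload.
# The cluster and service names are placeholders, not the PTF values.
ecs = boto3.client("ecs")

tasks = ecs.list_tasks(
    cluster="ptf-cluster",                  # hypothetical cluster name
    serviceName="mod-data-export-worker",   # hypothetical service name
)["taskArns"]

for arn in tasks:
    ecs.stop_task(cluster="ptf-cluster", task=arn, reason="PTF failover test")
```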
Memory usage
RDS CPU utilization
RDS CPU utilization did not exceed 10.5%
CPU utilization
CPU utilization for mod-users was very high, up to 135%, and the 3000, 5000, and 10000 record bulk edits failed.
Notable observations
- There is no way to track exporting progress.
- How many records have been updated so far?
- Has file upload started yet?
- Have the changes been committed yet? ("Commit changes" can be clicked multiple times.)