Overview
Per PERF-267, test EDIFACT exports (PERF-270) of 10K records to understand the workflow's behavior before and during a mod-data-export-worker task crash, if one occurs at all.
- How long does it take to export 10K records?
- What happens to the running job? Will it resume and complete successfully when a new task is spun up?
- Look for a memory trend and use it to decide how many concurrent jobs are needed to reach the tipping point.
Infrastructure
- 10 m6i.2xlarge EC2 instances (changed from the m5.xlarge instances used in Lotus)
- 2 db.r6.xlarge database instances, one reader and one writer
- MSK
- 4 m5.2xlarge brokers in 2 zones
- auto.create.topics.enable = true
- log.retention.minutes=120
- 2 partitions per DI topic (see the sketch below)
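For reference, a minimal sketch, assuming the standard Kafka AdminClient, of how a topic with 2 partitions could be created; the bootstrap address, topic name, and replication factor are illustrative placeholders, not values taken from this environment.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateDiTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; the real MSK endpoint differs.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "msk-broker:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 2 partitions per DI topic, replicated across both broker zones.
            NewTopic topic = new NewTopic("di.example-topic", 2, (short) 2);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```

Note that with auto.create.topics.enable = true the brokers also create topics on first use, so explicit creation like this is only needed to control the partition count.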
Software Versions
- mod-data-export-worker:1.4.2
- mod-data-export-spring:1.4.2
- mod-agreements:5.2.0
- mod-notes:3.1.0
- mod-orders-storage:13.3.0
- mod-orders:12.4.1
Results
Summary
Job Number | Number of records | Duration
---|---|---
000257 | 1k | 1 min 52 s
000258 | 2k | 3 min 33 s
000259 | 5k | 8 min 50 s
000260 | 10k | 17 min 4 s
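The durations above scale close to linearly, at roughly 9 to 10 records per second. A small sketch reproducing that arithmetic from the table (job numbers and timings are taken directly from the results):

```java
public class ExportThroughput {
    public static void main(String[] args) {
        int[] records = {1_000, 2_000, 5_000, 10_000};
        int[] durationsSec = {
            1 * 60 + 52,   // job 000257: 1 min 52 s
            3 * 60 + 33,   // job 000258: 3 min 33 s
            8 * 60 + 50,   // job 000259: 8 min 50 s
            17 * 60 + 4    // job 000260: 17 min 4 s
        };
        for (int i = 0; i < records.length; i++) {
            double rate = (double) records[i] / durationsSec[i];
            System.out.printf("%,6d records: %.1f records/s%n", records[i], rate);
        }
        // Prints roughly 8.9, 9.4, 9.4, and 9.8 records/s: close to linear scaling.
    }
}
```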
- All resources show predictable behaviour, without gaps or sudden spikes.
- Memory usage grows on mod-orders (48% to 56%) and mod-orders-storage (45% to 51%). However, this does not look like a memory leak, because memory did not grow continuously; it only rises at the beginning of subsequent exports.
Known issue:
EDIFACT export scheduling does not support running in a cluster or resuming after a Docker container restart.
Resource usage
Endurance test
The charts below show that the system works stably.
For the endurance testing, the scheduler was set to run the export hourly overnight.
As the charts show, memory usage does not grow over time.
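A hedged sketch of what such an hourly schedule could look like in a Spring-based module; the class, method, and cron expression are illustrative assumptions, not the actual mod-data-export-spring configuration.

```java
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class HourlyExportTrigger {

    // Spring cron (6 fields): fires at the top of every hour, matching
    // the hourly overnight cadence used for the endurance test.
    @Scheduled(cron = "0 0 * * * *")
    public void triggerEdifactExport() {
        // Placeholder body: in the real setup the scheduler starts an
        // EDIFACT export job handled by mod-data-export-worker.
    }
}
```

This assumes scheduling is enabled in the application (for example via @EnableScheduling on a configuration class).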