Test status: PASSED | PASSED WITH RESTRICTION | FAILED
Table of Contents
General recommendations for report formatting: [Do not include this section in reports]
Make the page full-width. Set your Confluence page to full-width to use the entire available space. This makes the report more readable and gives more room for arranging content.
Image width: 1600px. This width offers a good balance, keeping images clear and detailed without causing excessive loading times or appearing too large on the page.
For graphs from AWS CloudWatch, use 1-minute metric aggregation (see the sketch below).
Further changes are welcome.
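As an illustration of the 1-minute aggregation recommendation above, here is a minimal sketch of pulling a metric from CloudWatch with boto3 at a 60-second period. The namespace, cluster, and service names are placeholders, not values taken from any actual test environment.

```python
# Sketch: fetch a CPU metric from CloudWatch at 1-minute granularity (Period=60).
# Namespace, metric name, and dimensions below are placeholders -- replace them
# with the ones used by your environment.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(minutes=30)          # e.g. a 30-minute test run

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ECS",                      # assumption: modules run on ECS
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-test-cluster"},   # placeholder
        {"Name": "ServiceName", "Value": "mod-data-import"},   # placeholder
    ],
    StartTime=start,
    EndTime=end,
    Period=60,                                # 1-minute aggregation
    Statistics=["Average", "Maximum"],
)

# Print the datapoints in time order for pasting into a table or graph tool.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```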
Overview
[A brief introduction to the content of the page, why the workflow is being tested, and a reference to the relevant Jira(s)]
...
[Table of tests with short descriptions. If there are reasons to run additional tests, include a Notes column to explain them]
Test # | Test Conditions | Duration | Load generator size (recommended) | Load generator Memory (GiB) (recommended) | Notes (Optional)
---|---|---|---|---|---
1 | 8 users CI/CO + DI 50k MARC BIB Create + 10k Items editing | 30 mins | t3.medium | 3 |
2 | 8 users CI/CO + DI 50k MARC BIB Create + 10k Holdings editing | 30 mins | t3.medium | 3 |
...
Response Times (Average of all tests listed above, in seconds)
...
 | Check-in (seconds) | Check-out (seconds) | Bulk edit: Items (10k records) | Bulk edit: Holdings (10k records) | Data Import: MARC BIB 50k Create | Data Import: MARC BIB 50k Update
---|---|---|---|---|---|---
Test 1 | 0.715 | 1.332 | 40 min | - | 20 min 19 sec | -
Test 2 | 0.756 | 1.383 | - | 20 min 30 sec | 21 min 07 sec | -
Comparisons
[Section comparing test data to previous tests or releases. It is important to know whether performance improved or degraded]
...
For the baseline test the mod-source-record-manager version was 3.5.0; for the test with CI/CO it was 3.5.4. This may be the reason why the Data Import time with CI/CO is even better than without CI/CO.
Profile | Job profile | Duration KIWI (Lotus) without CICO | Duration with CICO, 8 users, KIWI (Lotus) | Duration Nolana without CICO | Duration with CICO, 8 users, Nolana | Check-in average (seconds) | Check-out average (seconds) | Deviation from the baseline CICO response times
---|---|---|---|---|---|---|---|---
5K MARC BIB Create | PTF - Create 2 | 5 min, 8 min (05:32.264) (08:48.556) | 5 min (05:48.671) | 2 min 51 s | 00:01:56.847 | 0.851 / 0.817 | 1.388 / 1.417 | CI: 44% CO: 51%
5K MARC BIB Update | PTF - Updates Success - 1 | 11 min, 13 min (10:07.723) | 7 min (06:27.143) | 2 min 27 s | 00:02:51.525 | 1.102 / 0.747 | 1.867 / 1.094 | CI: 39% CO: 36%
Attach a link to the report from which the comparison data was extracted.
Memory Utilization
[Description of notable observations of memory utilization, with screenshots and tables of all involved modules]
...
 | Nolana Avg | Nolana Min | Nolana Max
---|---|---|---
mod-circulation-storage | 24% | 23% | 25%
mod-patron-blocks | 34% | 33% | 34%
...
CPU Utilization
[Description of notable observations of module and instance CPU utilization, with screenshots and tables of all involved modules]
...
RDS CPU Utilization
...
Discussion
[This section gives more space to elaborate on any observations and results. See Perform Lookups By Concatenating UUIDs (Goldenrod)#Discussions for an example.]
Errors
This section should detail any errors encountered during the testing process, their impact on testing outcomes, and the steps taken to address these issues.
Appendix
Infrastructure
[List the environment's hardware and software settings. For modules that involve Kafka/MSK, list the Kafka settings as well. For modules that involve OpenSearch, list those settings, too]
...
Methodology/Approach
[List the high-level methodology that was used to carry out the tests. This is important for complex tests that involve multiple workflows.
1. Preparation Steps: Provide a comprehensive overview of the preparation process preceding the test. This includes setting up the test scripts, configuring relevant parameters, and ensuring all necessary tools and resources are in place.
2. Data Preparation Scripts: In performance testing, data preparation is a critical step to ensure that the testing environment accurately reflects real-world usage patterns and can handle the intended load. Specific scripts are used to populate the test database with the necessary data, simulate user transactions, or configure the environment appropriately. Add links to the needed scripts on GitHub and write a short description of how to use/run them (see the first sketch after this list).
3. Test Configuration: Specify the exact configuration used during test execution: duration, number of virtual users, ramp-up period, etc.
It is important to inform readers how the tests were performed so that they can comment on any flaws in the test approach or try to reproduce the test results themselves. For example:
...
The steps don't need to be very specific because the details are usually contained in the participating workflow's README files (on GitHub). However, anything worth calling out that was not mentioned elsewhere should be mentioned here.
4. Metric Collection Approach: Describe the methodology adopted to collect and interpret metrics during testing. Highlight the tools employed for data collection, the SQL queries used to get data, or other approaches (e.g., getting metrics from JMeter .jtl reports) that were used for specific PTF tests (see the second sketch after this list).
Also include the approach taken to obtain the final results. For example, document whether the results were obtained by zooming into a portion of the graphs in Grafana (which portion? why?) and how the numbers were calculated if not obvious.]
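To illustrate item 2 (data preparation scripts), here is a minimal, hypothetical sketch that generates a CSV of item barcodes for a bulk-edit run. The file name, record count, and barcode pattern are assumptions; the real scripts should be linked from GitHub as described above.

```python
# Hypothetical data-preparation sketch: generate a CSV of 10,000 item barcodes
# that a bulk-edit test could consume. All values below are illustrative only.
import csv

RECORD_COUNT = 10_000
OUTPUT_FILE = "item_barcodes.csv"          # placeholder path

with open(OUTPUT_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    for i in range(RECORD_COUNT):
        # Placeholder barcode pattern; adjust to match the data set in the
        # target environment.
        writer.writerow([f"PTF-ITEM-{i:07d}"])

print(f"Wrote {RECORD_COUNT} barcodes to {OUTPUT_FILE}")
```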
...
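To illustrate item 4 (metric collection), here is a minimal sketch of computing per-label average response times from a JMeter results file saved in the default CSV .jtl format (which contains label, elapsed, and success columns). The file path is a placeholder.

```python
# Sketch of one way to get average response times from a JMeter .jtl file
# (default CSV format). Only successful samples are counted.
import csv
from collections import defaultdict

JTL_FILE = "results.jtl"                   # placeholder path

totals = defaultdict(lambda: [0, 0])       # label -> [sum of elapsed ms, count]

with open(JTL_FILE, newline="") as f:
    for row in csv.DictReader(f):
        if row.get("success") == "true":
            totals[row["label"]][0] += int(row["elapsed"])
            totals[row["label"]][1] += 1

for label, (total_ms, count) in sorted(totals.items()):
    print(f"{label}: {total_ms / count / 1000:.3f} s over {count} samples")
```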