Test status: PASSED | PASSED WITH RESTRICTION | FAILED

Table of Contents

...

General recommendations for report formatting: [Do not include this section in reports]

  1. Make the page full-width. Set your Confluence page to full-width to use the entire available space. This makes your report more readable and gives more room for arranging content aesthetically.

  2. Use images 1600px wide. This width offers a good balance, ensuring that images are clear and detailed without causing excessive loading times or appearing too large on the page.

  3. For graphs from AWS CloudWatch, use 1-minute metrics aggregation (see the sketch below).
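For instance, here is a minimal Python (boto3) sketch of pulling 1-minute-aggregated CPU metrics from CloudWatch. The region, cluster name, service name, and time window are placeholders, not values from any actual test:

  # Hypothetical example: fetch 1-minute-aggregated ECS CPU metrics
  # from CloudWatch for one module. All names/dates are placeholders.
  import boto3
  from datetime import datetime, timezone

  cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

  response = cloudwatch.get_metric_statistics(
      Namespace="AWS/ECS",
      MetricName="CPUUtilization",
      Dimensions=[
          {"Name": "ClusterName", "Value": "my-test-cluster"},      # placeholder
          {"Name": "ServiceName", "Value": "mod-source-record-manager"},
      ],
      StartTime=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),   # test start
      EndTime=datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc),     # test end
      Period=60,            # 1-minute aggregation, as recommended above
      Statistics=["Average", "Maximum"],
  )

  for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
      print(point["Timestamp"], point["Average"], point["Maximum"])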

...

[A brief introduction to the content of the page, why we are testing the workflow, and references:

  • What are we testing? Provide the context of the test. Is it for a new service? Is it an experiment? Is it a regression test?

    • Include major things like environment settings (ECS, non-ECS, Eureka, non-Eureka, w/RW split, etc.)

  • What are the goals of the testing? Ex: to see the effect of using a different EC2 instance type. If regression: to see how vB compares to vA.

  • Reference the Jira(s)

]

Summary

[A bulleted list of the most important and relevant observations from the test results. What are the most important things readers need to know about this testing effort? Some suggestions:

  • Comparison of response times or API durations to a previous test or release

  • Any notable changes

  • Particular response times or durations

  • Service memory and/or CPU utilization

  • RDS memory and/or CPU utilization 

  • Other interesting observations

The summary points should answer the goals stated in the Overview: did the test achieve the goals laid out? Which goals were not met, and why?

]

Recommendations & Jiras (Optional)

[ If there are recommendations for the developers or operations team, or anything worth calling out, list them here.

  • Configuration options

  • Memory/CPU settings

  • Environment variable settings

Also include any Jiras created for follow-up work.]

...

For the baseline test, the mod-source-record-manager version was 3.5.0; for the test with CI/CO, it was 3.5.4. Maybe that is the reason why the Data Import time with CI/CO is even better than without CI/CO.

...

[Description of notable observations of modules' and instances' CPU utilization, with screenshots (of all involved modules) and tables. Annotate graphs to show when a specific test started or ended, and show only the modules relevant to the test on the graphs.]
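Where an annotated graph is needed, a small matplotlib sketch such as the following can overlay test start/end markers on a CPU series. The data, module name, and marker positions below are synthetic placeholders; substitute real CloudWatch datapoints:

  # Minimal sketch (synthetic data): plot a module's CPU series and mark
  # when the test started and ended, so readers can orient themselves.
  import matplotlib.pyplot as plt
  from datetime import datetime, timedelta

  start = datetime(2024, 1, 1, 12, 0)
  times = [start + timedelta(minutes=i) for i in range(60)]
  cpu = [20 + (i % 10) for i in range(60)]           # placeholder values

  fig, ax = plt.subplots(figsize=(12, 4))
  ax.plot(times, cpu, label="mod-inventory CPU %")    # placeholder module
  ax.axvline(times[5], color="green", linestyle="--", label="test start")
  ax.axvline(times[50], color="red", linestyle="--", label="test end")
  ax.set_ylabel("CPU utilization (%)")
  ax.legend()
  fig.savefig("service_cpu_annotated.png")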

...

RDS CPU Utilization 

[Description of notable observations of reader and writer instances' CPU utilization, with screenshots and tables, RDS database connections, and other database metrics]
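One possible way to collect writer vs. reader metrics over the test window is a boto3 sketch like the one below; the RDS instance identifiers, region, and window are placeholders for the environment's actual values:

  # Hypothetical sketch: compare writer vs. reader CPU and connection
  # counts over the test window. Identifiers are placeholders.
  import boto3
  from datetime import datetime, timezone

  cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
  window = dict(
      StartTime=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
      EndTime=datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc),
      Period=60,
      Statistics=["Average", "Maximum"],
  )

  for instance in ["folio-db-writer", "folio-db-reader"]:      # placeholders
      for metric in ["CPUUtilization", "DatabaseConnections"]:
          stats = cloudwatch.get_metric_statistics(
              Namespace="AWS/RDS",
              MetricName=metric,
              Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance}],
              **window,
          )
          peak = max((p["Maximum"] for p in stats["Datapoints"]), default=None)
          print(instance, metric, "peak:", peak)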

...

[Although looking at the logs is optional, it is always recommended in order to see if there were any errors, exceptions, or warnings. If there were any, create Jiras for the module that generated the warnings/errors/exceptions.]
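Assuming the modules log to CloudWatch Logs, a scan for errors and warnings during the test window could look like this sketch; the log group name and epoch-millisecond timestamps are placeholders:

  # Hypothetical sketch: scan a module's CloudWatch log group for
  # ERROR/WARN/Exception entries during the test window.
  import boto3

  logs = boto3.client("logs", region_name="us-east-1")

  paginator = logs.get_paginator("filter_log_events")
  pages = paginator.paginate(
      logGroupName="/ecs/mod-source-record-manager",   # placeholder
      startTime=1704110400000,    # test start, epoch milliseconds
      endTime=1704114000000,      # test end, epoch milliseconds
      filterPattern="?ERROR ?WARN ?Exception",
  )

  for page in pages:
      for event in page["events"]:
          print(event["timestamp"], event["message"].rstrip())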

...

[This section gives more space to elaborate on any observations and results. See Perform Lookups By Concatenating UUIDs (Goldenrod)#Discussions for an example. Anything that was discussed at length at the DSUs is worth including here.]

Errors

This section should detail any errors encountered during the testing process, their impact on testing outcomes, and the steps taken to address these issues.

Appendix

Infrastructure

[List the environment's hardware and software settings. For modules that involve Kafka/MSK, list the Kafka settings as well. For modules that involve OpenSearch, list these settings, too.]
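If the modules run on ECS, their CPU/memory settings can be captured straight from the task definitions so the Infrastructure section matches the environment actually under test. A hedged boto3 sketch, with cluster and service names as placeholders:

  # Hypothetical sketch: dump per-container CPU/memory/image settings
  # for the Infrastructure section. Cluster/service names are placeholders.
  import boto3

  ecs = boto3.client("ecs", region_name="us-east-1")

  services = ecs.describe_services(
      cluster="folio-testing",                        # placeholder
      services=["mod-source-record-manager"],         # placeholder
  )["services"]

  for svc in services:
      task_def = ecs.describe_task_definition(
          taskDefinition=svc["taskDefinition"]
      )["taskDefinition"]
      for container in task_def["containerDefinitions"]:
          print(container["name"],
                "cpu:", container.get("cpu"),
                "memory:", container.get("memory"),
                "image:", container["image"])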

...

  • Item Check-in (folio_checkin-7.2.0)

  • Item Check-out (folio_checkout-8.2.0)

Dataset size is important for testing. What was the size of the dataset? Include the sizes of one or more related tables.
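For environments backed by PostgreSQL/RDS, table sizes can be recorded with a short query; a psycopg2 sketch, with connection settings as placeholders:

  # Hypothetical sketch: report row counts and on-disk sizes of the
  # largest tables. Connection settings are placeholders.
  import psycopg2

  conn = psycopg2.connect(host="folio-db-writer", dbname="folio",
                          user="folio", password="change-me")  # placeholders
  with conn, conn.cursor() as cur:
      cur.execute("""
          SELECT relname,
                 n_live_tup,
                 pg_size_pretty(pg_total_relation_size(relid)) AS total_size
          FROM pg_stat_user_tables
          ORDER BY pg_total_relation_size(relid) DESC
          LIMIT 10
      """)
      for name, rows, size in cur.fetchall():
          print(name, rows, size)
  conn.close()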

Methodology/Approach

[In order to reproduce the test, list the high-level methodology that was used to carry out the tests. This is important for complex tests that involve multiple workflows.

1. Preparation Steps: Provide a comprehensive overview of the preparation process preceding the test. This includes setting up the test scripts, configuring relevant parameters, and ensuring all necessary tools and resources are in place.
2. Data preparation scripts: In the context of performance testing, data preparation is a critical step to ensure that the testing environment accurately reflects real-world usage patterns and can handle the intended load efficiently. To facilitate this process, specific scripts are used to populate the test database with the necessary data, simulate user transactions, or configure the environment appropriately. Add links to the needed scripts on GitHub and write a short description of how to use/run them.
3. Test Configuration: Specify the exact configurations utilized during the test execution: duration, number of virtual users, ramp-up period, etc.
It's important to inform readers of how the tests were performed so that they can comment on any flaws in the test approach or try to reproduce the test results themselves. For example:

...

Also, it is necessary to include the approach taken to obtain the final results. For example, document whether the results were obtained by zooming into a portion of the graphs in Grafana (which portion? why?) and how the numbers were calculated, if not obvious.]
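For example, a small pandas sketch that derives the reported numbers from an explicit time window of raw data, so the calculation is reproducible rather than read off a zoomed-in panel. The file name, column names, and window below are assumptions:

  # Hypothetical sketch: compute mean and p95 response times over the
  # steady-state window of the test. File/columns/window are placeholders;
  # state the actual window used in the report.
  import pandas as pd

  df = pd.read_csv("raw_results.csv", parse_dates=["timestamp"])

  window = df[(df["timestamp"] >= "2024-01-01 12:05") &
              (df["timestamp"] <= "2024-01-01 12:55")]

  print("mean (ms):", round(window["response_time_ms"].mean(), 1))
  print("p95  (ms):", round(window["response_time_ms"].quantile(0.95), 1))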

Additional Screenshots of graphs or charts

...

[Optionally include additional screenshots of graphs from the CloudWatch and Grafana dashboards for completeness. These don't have to be included solely here but can be added to any section if they complement other graphs and fit the narrative. Include any raw data with the timestamps of tests, plus any screenshots/charts/graphs. These data may be separate files, or one Miro board or one Sheet/Doc that has everything in it. Raw data is important to consult for additional insights if the report omits them initially.]