PTF - Report Template


Overview

[ A brief introduction to the content of the page and why we are testing the workflow; reference the relevant Jira(s)]

Summary

[ A bulleted list of the most important and relevant observations from the test results. What are the most important things readers need to know about this testing effort? Some suggestions:

  • Comparison to previous test or release of response times or API durations
  • Any notable changes
  • Particular response time or durations
  • Service memory and/or CPU utilization
  • RDS memory and/or CPU utilization 
  • Other interesting observations

]

Recommendations & Jiras (Optional)

[ If there are recommendations for the developers or operations team, or anything worth calling out, list them here. Also include any Jiras created for follow-up work]

Test Runs 

[Table of tests with short descriptions. If there is a reason to run additional tests, include a Notes column to explain the motivation]

Test # | Test Conditions | Duration | Load generator size (recommended) | Load generator Memory (GiB) (recommended) | Notes (Optional)
1. | 8 users CI/CO + DI 50k MARC BIB Create + 10k Items editing | 30 mins | t3.medium | 3 |
2. | 8 users CI/CO + DI 50k MARC BIB Create + 10k holdings editing | 30 mins | t3.medium | 3 |

Results

[ Tables of detailed test results with comments]

Response Times (Average of all tests listed above, in seconds)


Test | Check-in avg (seconds) | Check-out avg (seconds) | Bulk edit Items (10k records) | Bulk edit Holdings (10k records) | Data Import MARC BIB (50k Create) | Data Import MARC BIB (50k Update)
Test 1 | 0.715 | 1.332 | 40 min | - | 20 min 19 sec | -
Test 2 | 0.756 | 1.383 | - | 20 min 30 sec | 21 min 07 sec | -


Comparisons

[Section for comparing test data to previous tests or releases. It's important to know whether performance improves or degrades]

The following table compares additional test results to previous release numbers and to the Nolana CICO baselines (Check-In average time 0.456 s and Check-Out average time 0.698 s). Note that Lotus numbers are in red, Nolana numbers are in black, and Kiwi numbers are in blue.

In the Nolana version, there is a significant improvement in the performance of Data Import and Check-In/Check-Out.

For the baseline test, the mod-source-record-manager version was 3.5.0; for the test with CI/CO it was 3.5.4. This may be the reason why the Data Import time with CI/CO is even better than without CI/CO.


Profile | Duration: KIWI (Lotus) without CICO | Duration: with CICO 8 users KIWI (Lotus) | Duration: Nolana without CICO | Duration: with CICO 8 users Nolana | CheckIn average (seconds) | CheckOut average (seconds) | Deviation from the baseline CICO response times
5K MARC BIB Create (PTF - Create 2) | 5 min, 8 min (05:32.264) (08:48.556) | 5 min (05:48.671) | 2 min 51 s | 00:01:56.847 | 0.851 / 0.817 | 1.388 / 1.417 | CI: 44%, CO: 51%
5K MARC BIB Update (PTF - Updates Success - 1) | 11 min, 13 min (10:07.723) | 7 min (06:27.143) | 2 min 27 s | 00:02:51.525 | 1.102 / 0.747 | 1.867 / 1.094 | CI: 39%, CO: 36%
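
For reference, the deviation percentages in the last column appear to be derived as (with-CICO average − Nolana baseline) / with-CICO average, using the second value of each paired Check-In/Check-Out average (assumed here to be the Nolana number). A minimal Python sketch under that assumption:

# Sketch of the assumed deviation formula: (observed - baseline) / observed.
BASELINE_CI = 0.456  # Nolana Check-In baseline, seconds
BASELINE_CO = 0.698  # Nolana Check-Out baseline, seconds

def deviation(observed, baseline):
    return f"{(observed - baseline) / observed:.0%}"

# 5K MARC BIB Create, with CICO (assumed Nolana averages from the table above)
print("CI:", deviation(0.817, BASELINE_CI))  # CI: 44%
print("CO:", deviation(1.417, BASELINE_CO))  # CO: 51%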

Memory Utilization

[Description of notable observations of memory utilization with screenshots (of all modules and involved modules) and tables]



Module | Nolana Avg | Nolana Min | Nolana Max
mod-circulation-storage | 24% | 23% | 25%
mod-patron-blocks | 34% | 33% | 34%
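
The Avg/Min/Max percentages above can also be pulled from CloudWatch ECS service metrics rather than read off screenshots. A minimal sketch; the cluster name, service name, and time window are placeholders:

# Sketch: Average/Minimum/Maximum memory utilization for one ECS service over
# the test window. Cluster name, service name, and times are placeholders;
# use MetricName="CPUUtilization" for the CPU table below.
from datetime import datetime, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="MemoryUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "ncp3"},                 # placeholder
        {"Name": "ServiceName", "Value": "mod-patron-blocks"},    # placeholder
    ],
    StartTime=datetime(2022, 11, 1, 12, 0, tzinfo=timezone.utc),  # test start
    EndTime=datetime(2022, 11, 1, 12, 30, tzinfo=timezone.utc),   # test end
    Period=60,
    Statistics=["Average", "Minimum", "Maximum"],
)
points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
print("avg", sum(p["Average"] for p in points) / len(points))
print("min", min(p["Minimum"] for p in points))
print("max", max(p["Maximum"] for p in points))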

CPU Utilization 

[Description of notable observations of modules and instances CPU utilization with screenshots (of all modules and involved modules) and tables]



RDS CPU Utilization 

[Description of notable observations of reader and writer instances CPU utilization with screenshots and tables, RDS Database connections, and other Database metrics]
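
The same CloudWatch approach works for the RDS metrics in this section (the AWS/RDS namespace keyed by DBInstanceIdentifier). A minimal sketch with a placeholder instance identifier and time window:

# Sketch: RDS CPU utilization and database connections for one instance.
# The instance identifier and time window are placeholders.
from datetime import datetime, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
for metric in ("CPUUtilization", "DatabaseConnections"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "ncp3-writer"}],  # placeholder
        StartTime=datetime(2022, 11, 1, 12, 0, tzinfo=timezone.utc),
        EndTime=datetime(2022, 11, 1, 12, 30, tzinfo=timezone.utc),
        Period=60,
        Statistics=["Average", "Maximum"],
    )
    peaks = [p["Maximum"] for p in stats["Datapoints"]]
    print(metric, "peak:", max(peaks) if peaks else "no data")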


Additional information from module and database logs (Optional)

[ Although it is optional, it is always recommended to look at the logs to see if there were any errors, exceptions, or warnings. If there were any, create Jiras for the module that generated the warnings/errors/exceptions]


Discussion (Optional)

[ This section gives more space to elaborate on any observations and results. See Perform Lookups By Concatenating UUIDs (Goldenrod)#Discussions for an example.]

Appendix

Infrastructure

[ List out the environment's hardware and software settings. For modules that involve Kafka/MSK, list the Kafka settings as well. For modules that involve OpenSearch, list these settings, too]

PTF-environment ncp3 [ environment name]

  • m6i.2xlarge EC2 instances located in US East (N. Virginia) us-east-1 [number of ECS instances, instance type, region]
  • 2 db.r6.xlarge database instances, one reader and one writer [database instances, type, size, main parameters]
  • MSK ptf-kakfa-3 [ Kafka configuration]
    • 4 m5.2xlarge brokers in 2 zones
    • Apache Kafka version 2.8.0
    • EBS storage volume per broker 300 GiB
    • auto.create.topics.enable=true
    • log.retention.minutes=480
    • default.replication.factor=3


Modules memory and CPU parameters [table of service properties; will be generated with a script soon]

Modules | Version | Task Definition | Running Tasks | CPU | Memory | MemoryReservation | MaxMetaspaceSize | Xmx
mod-inventory | 19.0.1 | 1 | 2 | 1024 | 2880 | 2592 | 512m | 1814m
okapi | 4.14.7 | 1-2 | 3 | 1024 | 1684 (1512 in MG) | 1440 (1360 in MG) | 512m | 922m

MG - Morning Glory release
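
Until the generating script mentioned above exists, these values can be collected from the ECS task definitions. A minimal boto3 sketch; the cluster name is a placeholder, and it assumes -Xmx and -XX:MaxMetaspaceSize are passed through a JAVA_OPTS environment variable:

# Sketch: collect per-module CPU/memory parameters from ECS task definitions.
# Cluster name is a placeholder; -Xmx and -XX:MaxMetaspaceSize are assumed to
# be set through the JAVA_OPTS container environment variable.
import re
import boto3

ecs = boto3.client("ecs")
cluster = "ncp3"  # placeholder

service_arns = []
for page in ecs.get_paginator("list_services").paginate(cluster=cluster):
    service_arns += page["serviceArns"]

for i in range(0, len(service_arns), 10):  # describe_services takes <= 10 ARNs per call
    batch = ecs.describe_services(cluster=cluster, services=service_arns[i:i + 10])
    for service in batch["services"]:
        task_def = ecs.describe_task_definition(
            taskDefinition=service["taskDefinition"]
        )["taskDefinition"]
        for container in task_def["containerDefinitions"]:
            env = {e["name"]: e["value"] for e in container.get("environment", [])}
            java_opts = env.get("JAVA_OPTS", "")
            xmx = re.search(r"-Xmx(\S+)", java_opts)
            metaspace = re.search(r"-XX:MaxMetaspaceSize=(\S+)", java_opts)
            print(
                container["name"],                   # module
                container["image"].split(":")[-1],   # version (image tag)
                service["runningCount"],             # running tasks
                container.get("cpu"),                # CPU units
                container.get("memory"),             # hard memory limit (MiB)
                container.get("memoryReservation"),  # soft memory limit (MiB)
                metaspace.group(1) if metaspace else "-",
                xmx.group(1) if xmx else "-",
            )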

Front End: [ front end app versions (optional)]

  • Item Check-in (folio_checkin-7.2.0)
  • Item Check-out (folio_checkout-8.2.0)

Methodology/Approach

[ List the high-level methodology that was used to carry out the tests.  This is important for complex tests that involve multiple workflows. It's important to inform readers of how the tests were performed so that they can comment on any flaw in the test approach or that they can try to reproduce the test results themselves.  For example:

  1. Start CICO test first
  2. Run a Data Import job after waiting for 10 minutes
  3. Run an eHoldings job after another 10 minutes
  4. On another tenant run another DI job after 30 minutes in

The steps don't need to be very specific because the details are usually contained in the participating workflow's README files (on GitHub).  However, anything worth calling out that was not mentioned elsewhere should be mentioned here.

Also, it is necessary to include the approach taken to obtain the final results. For example, please document whether the results were obtained by zooming into a portion of the graphs in Grafana (which portion? why?), and how the numbers were calculated if not obvious. ]
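
The staggered schedule from the example steps above can also be scripted so the offsets are reproducible. A minimal sketch in which the run_* helpers are hypothetical placeholders for however each workflow is actually launched:

# Sketch of the staggered-start schedule from the example steps above.
# The run_* functions are hypothetical placeholders for launching each workflow.
import time

def run_cico():
    print("start CICO test, 8 users")            # placeholder

def run_data_import(tenant):
    print(f"start Data Import job on {tenant}")  # placeholder

def run_eholdings():
    print("start eHoldings job")                 # placeholder

run_cico()           # step 1: start CICO first
time.sleep(10 * 60)  # step 2: wait 10 minutes, then run a DI job
run_data_import("tenant-1")
time.sleep(10 * 60)  # step 3: after another 10 minutes, run eHoldings
run_eholdings()
time.sleep(10 * 60)  # step 4: 30 minutes in, run another DI job on another tenant
run_data_import("tenant-2")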

Additional Screenshots of graphs or charts (Optional)

[ Optionally include additional screenshots of graphs from the CloudWatch and Grafana dashboards for completeness' sake. These don't have to be confined to this section; they can be added to any section if they complement other graphs and fit the narrative. ]