Introduction

Performance Testing is a software testing process that evaluates the speed, response time, stability, reliability, scalability, and resource usage of a software application under a particular workload. The main purpose of performance testing is to identify and eliminate performance bottlenecks in the software application.

The features and functionality supported by a software system are not the only concern; the application's performance, such as its response time, reliability, resource usage, and scalability, also matters. The goal of Performance Testing is not to find bugs but to eliminate performance bottlenecks.

...

https://issues.folio.org/secure/RapidBoard.jspa?rapidView=264&view=planning.nodetail&issueLimit=100

System modelling

Know your physical test environment, the production environment, and the testing tools that are available. Understand the details of the hardware, software, and network configurations used during testing before you begin the testing process. This helps testers create more efficient tests and identify possible challenges they may encounter during the performance testing procedures.

  • PROD Config

  • Test ENV #1 - ncp3 on AWS ECS

  • Test ENV #2 - ncp4 on AWS ECS

  • Database: PostgreSQL on AWS RDS

  • Queue Manager: Kafka on AWS MSK

Environment

  • Use the default UChicago dataset - 27M records

  • Other datasets and their sizes: check with the POs, depending on the workflow under test.

  • Run two environments: one with a profiler and one without a profiler.

Test development

Determine how usage is likely to vary amongst end users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end users, plan performance test data and outline what metrics will be gathered. 

In the implementation phase, performance test cases are ordered into performance test procedures. These performance test procedures should reflect the steps normally taken by the user and other functional activities that are to be covered during performance testing. One test implementation activity is establishing and/or resetting the test environment before each test execution. Since performance testing is typically data-driven, a process is needed to establish test data that is representative of actual production data in volume and type, so that production use can be simulated.
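
To make the test data step concrete, here is a minimal Python sketch of generating representative records in bulk before a run. The record fields, volume, and output file name are assumptions for illustration; adapt them to the workflow under test and to the production data profile.

    import json
    import random
    import uuid

    # Hypothetical record shape: adjust the fields and the volume to match
    # the workflow under test and the production data profile.
    STATUSES = ["Available", "Checked out", "In transit"]

    def make_record(i: int) -> dict:
        return {
            "id": str(uuid.uuid4()),
            "title": f"Synthetic title {i}",
            "status": random.choice(STATUSES),
        }

    def write_dataset(path: str, count: int) -> None:
        # Stream records to a JSON Lines file so large volumes do not have
        # to be held in memory before loading them into the test database.
        with open(path, "w", encoding="utf-8") as out:
            for i in range(count):
                out.write(json.dumps(make_record(i)) + "\n")

    if __name__ == "__main__":
        write_dataset("synthetic-records.jsonl", 100_000)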

...

The regression pack runs as an automated process.

TBD

System tuning

Consolidate, analyze, and share test results. Then fine-tune and test again to see whether performance has improved or degraded. Since improvements generally grow smaller with each retest, stop when the bottleneck is the CPU; at that point you may consider the option of increasing CPU power.
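
As a rough illustration of this tuning loop, the Python sketch below compares consecutive run summaries and flags when CPU utilization suggests the environment has become CPU-bound. The run figures and the 90% threshold are made-up values for illustration, not PTF standards.

    # Each entry summarizes one tuning iteration; the numbers are illustrative.
    runs = [
        {"name": "baseline", "avg_response_ms": 820, "cpu_util_pct": 55},
        {"name": "tuning-1", "avg_response_ms": 610, "cpu_util_pct": 68},
        {"name": "tuning-2", "avg_response_ms": 585, "cpu_util_pct": 93},
    ]

    CPU_BOUND_THRESHOLD = 90  # assumed cut-off for "bottlenecked on CPU"

    for prev, curr in zip(runs, runs[1:]):
        gain = (prev["avg_response_ms"] - curr["avg_response_ms"]) / prev["avg_response_ms"]
        name, cpu = curr["name"], curr["cpu_util_pct"]
        print(f"{name}: {gain:.1%} faster than {prev['name']}")
        if cpu >= CPU_BOUND_THRESHOLD:
            print(f"{name}: CPU-bound at {cpu}% - stop tuning or consider adding CPU capacity")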

Test result reporting

The following metrics should be used when analyzing the results of Server-side Performance Testing:

...

For reporting, the following PTF - Report Template can be used.

WIKI Space: [Reporting] Performance Testing Reports

...

PreTest

  • Establish test scenarios, conditions, and SLAs with the POs, especially for the scenarios that we come up with ourselves.
  • Maintain a test log - record the execution time and conditions of each test (see the attachment for sample logs and the test log sketch after this list)
    • parameters:
      • dataset name or the number of records in the database
      • log level of all modules and/or a specific module
      • FOLIO version and/or specific modules versions
      • With or Without profiler
      • Number of users
      • Duration
      • Other configurations or settings (TBD)
  • Is it feasible to restart the cluster so that all the ECS services have a fresh starting point in terms of CPU and memory?
    • For short-duration tests, there is no need to restart the environment every time.
      • Keep an eye on the environment's metrics, such as CPU and memory utilization; it may be necessary to proactively restart the module or the whole environment if the metrics reach a critical level.
    • For long-duration tests, restart the environment to have a clean starting point.
  • Baseline tests/results:
    • Only when absolutely required? E.g., for a whole new set of workflows
    • Each time a new version of a module is added
    • If parameters haven't changed, then the baseline doesn't need to be rerun.
  • pgHero is a tool that captures slow queries. Clear out pgHero if it has not been cleared already.
  • Run a smoke test to verify that there are no functional errors and that the environment has been set up successfully
  • Longevity tests
    • Take a heap dump
  • Triple-check the Jenkins job's parameters
  • If the environment has been restarted, make sure that all ECS services are stable for at least 15 minutes
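
A minimal Python sketch of a structured test log entry covering the parameters listed above. The field names, example values, and the JSON Lines file are assumptions for illustration, not an established PTF format.

    import json
    from dataclasses import asdict, dataclass
    from datetime import datetime, timezone

    @dataclass
    class TestLogEntry:
        executed_at: str
        dataset: str
        record_count: int
        log_level: str
        folio_version: str
        module_versions: dict
        profiler_enabled: bool
        virtual_users: int
        duration_minutes: int
        notes: str = ""

    entry = TestLogEntry(
        executed_at=datetime.now(timezone.utc).isoformat(),
        dataset="UChicago",
        record_count=27_000_000,
        log_level="INFO",
        folio_version="example-release",            # hypothetical value
        module_versions={"mod-example": "1.2.3"},   # hypothetical module/version
        profiler_enabled=False,
        virtual_users=8,
        duration_minutes=30,
    )

    # Append one JSON line per test run so the log stays easy to diff and grep.
    with open("test-log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(entry)) + "\n")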

During test:

  • Capture any observations. 
  • Capture heap dumps, especially for longevity tests - a manual process at the moment that may be automated in the future. At a minimum, capture dumps at the beginning, middle, and end of the test run (see the sketch below).
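
One possible way to script the heap dump capture, assuming the module's JVM PID is known and jmap (shipped with the JDK) is available on the host. The PID, duration, and file names are placeholders; today this is a manual step, so treat the sketch only as a starting point for automating it.

    import subprocess
    import time
    from datetime import datetime

    MODULE_PID = 12345          # PID of the JVM under test (placeholder)
    TEST_DURATION_S = 3600      # total test run length in seconds (placeholder)

    def take_heap_dump(label: str) -> None:
        # jmap ships with the JDK; the "live" option forces a GC before dumping.
        out_file = f"heap-{label}-{datetime.now():%Y%m%d-%H%M%S}.hprof"
        subprocess.run(
            ["jmap", f"-dump:live,format=b,file={out_file}", str(MODULE_PID)],
            check=True,
        )

    # Beginning, middle, and end of the test run.
    take_heap_dump("start")
    time.sleep(TEST_DURATION_S / 2)
    take_heap_dump("middle")
    time.sleep(TEST_DURATION_S / 2)
    take_heap_dump("end")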

Post Test:

...

  • Average response time (obtained from Grafana; a cross-check sketch follows this list)
  • Errors - thresholds for failing an API call (obtained from Grafana)
  • Module logs, to check for any error entries
  • TPS - transactions per second
  • CPU utilization for a particular module or for any abnormal behaviour observed (from any module)
  • Memory usage for a particular module or for any abnormal behaviour observed (from any module)
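
The metrics above normally come from Grafana; as a cross-check, the Python sketch below derives average response time, error percentage, and TPS from a JMeter-style JTL results file, assuming CSV output with the default timeStamp, elapsed, and success columns and at least one sample.

    import csv

    def summarize(jtl_path: str) -> dict:
        samples = errors = total_elapsed = 0
        first_ts = last_ts = None
        with open(jtl_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                samples += 1
                total_elapsed += int(row["elapsed"])
                if row["success"].lower() != "true":
                    errors += 1
                ts = int(row["timeStamp"])
                first_ts = ts if first_ts is None else min(first_ts, ts)
                last_ts = ts if last_ts is None else max(last_ts, ts)
        duration_s = max((last_ts - first_ts) / 1000, 1)  # avoid division by zero
        return {
            "avg_response_ms": total_elapsed / samples,
            "error_pct": 100 * errors / samples,
            "tps": samples / duration_s,
        }

    print(summarize("results.jtl"))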

...

  • Note any anomalies based on the observations above
  • Record the timestamps or a Grafana URL so that we can go back and look at the graphs later (see the sketch below)
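
One way to capture a reusable Grafana link is to build the dashboard URL with an absolute time range; Grafana encodes the range as from/to epoch milliseconds in the query string. The dashboard base URL and timestamps below are placeholders.

    from datetime import datetime, timezone

    # Placeholder dashboard URL; replace with the real PTF dashboard.
    GRAFANA_DASHBOARD = "https://grafana.example.org/d/abc123/ptf-dashboard"

    def grafana_range_url(start: datetime, end: datetime) -> str:
        # Grafana reads absolute ranges from the from/to query parameters (epoch ms).
        params = f"from={int(start.timestamp() * 1000)}&to={int(end.timestamp() * 1000)}"
        return f"{GRAFANA_DASHBOARD}?{params}"

    start = datetime(2021, 6, 1, 12, 0, tzinfo=timezone.utc)
    end = datetime(2021, 6, 1, 13, 30, tzinfo=timezone.utc)
    print(grafana_range_url(start, end))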

...

  • Save-As from browser

...