Environment for testing JMeter Scripts

Overview

An open question for the PTF team is which environment teams can use to query for the runtime records that their JMeter scripts use and to test those JMeter scripts.
Additionally, this environment should satisfy the following 3 criteria:

  1. The environment(s) must have one set of data to write scripts against and to test against, preferably the same data used by the PTF.
  2. When modules and DB schemas in this environment need to be updated, and the performance dataset needs to be migrated, who will do it, and will it be done manually or automatically?
  3. How do teams get data for their scripts, and where do they test their JMeter scripts (existing or new) when there are schema changes?

Options

Create a new FOLIO environment that has a duplicate of PTF's database.

Pros:   Has all the records that teams can use.
Cons:  Maintenance nightmare: who will do it? This environment needs to be kept up to date at all times, because teams will need to write JMeter scripts for, and test them against, the latest FOLIO. This means that when a team changes a DB schema, this environment must be updated right away.
          With a DB that has 27M records, running a migration script takes 6+ hours. Not something that we want teams to do every day.
          Monetary cost of hosting the environment on AWS.

Create a Vagrant/Docker container that contains a database with a subset of the PTF performance database's records.

Pros:  Packaged, easily distributable
           Has enough records for teams to develop their JMeter tests
Cons: Teams can't test their scripts' API calls against this environment, since it contains only a database and no running FOLIO modules.

Create a Vagrant/Docker container that has the current FOLIO snapshot fully deployed, along with a subset of the PTF performance dataset.

Pros: Packaged, easily distributable
Cons: The Vagrant box, with its many modules, requires a powerful machine to run on.
          Possible complexities in running database migration scripts on the sample dataset.

Use the existing FOLIO-SNAPSHOT-LOAD environment.

1. Teams may use FOLIO-SNAPSHOT-LOAD to test their scripts. This environment is rebuilt nightly, so any schema changes will be picked up. The data consists of reference and sample data, which should also have been updated if there were any schema changes.
2. Specifically, we are proposing the following workflow for teams to develop and test their scripts:

  • Teams could either grab existing data from FOLIO-SNAPSHOT-LOAD or create a script to generate their own data and apply it to the FOLIO-SNAPSHOT-LOAD database (see the sketch after this list). If using existing data in this environment, create a script to load it or inform PTF so that they can grab it.
         (New data will be ephemeral and short lived, automatically wiped out by the nightly rebuild. Teams can always have a clean environment to work with the next day.)
  • These data-generating scripts should be checked into GitHub and referenced in the JIRA issue once the script is tested.
  • PTF will use these scripts to generate and add this data on top of its big database.
  • For any reference data in this environment that the JMeter script touches when exercised, PTF will grab that data from its sources and add it on top of PTF's database.
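
To make the proposal concrete, below is a minimal sketch of what a team's data-generating script could look like. It is a hedged example, not a confirmed implementation: it assumes the target environment is reachable through Okapi and that records are loaded through mod-inventory-storage's instance-storage API; the URL, tenant, credentials, RECORD_COUNT, and instanceTypeId values are placeholders that teams would replace, and the exact endpoints and required fields depend on the deployed module versions.

#!/usr/bin/env python3
"""Hypothetical data-generating script (placeholder names throughout).

Assumes a FOLIO environment reachable through Okapi and loads instance
records through mod-inventory-storage; adjust endpoints and fields to
the actual modules the JMeter script exercises.
"""
import uuid

import requests

OKAPI_URL = "https://okapi.example.org"  # placeholder URL
TENANT = "diku"            # placeholder tenant
USERNAME = "diku_admin"    # placeholder credentials
PASSWORD = "admin"
RECORD_COUNT = 500         # enough records to exercise the JMeter script


def login() -> str:
    """Authenticate against Okapi and return an X-Okapi-Token."""
    resp = requests.post(
        f"{OKAPI_URL}/authn/login",
        json={"username": USERNAME, "password": PASSWORD},
        headers={"X-Okapi-Tenant": TENANT},
    )
    resp.raise_for_status()
    return resp.headers["x-okapi-token"]


def create_instances(token: str) -> None:
    """POST generated instance records, tagged so a cleanup script can find them."""
    headers = {"X-Okapi-Tenant": TENANT, "X-Okapi-Token": token}
    for i in range(RECORD_COUNT):
        record = {
            "id": str(uuid.uuid4()),
            "title": f"PTF-JMETER-TEST instance {i}",  # recognizable prefix
            "source": "FOLIO",
            "instanceTypeId": "<instance-type-uuid>",  # placeholder reference-data id
        }
        resp = requests.post(
            f"{OKAPI_URL}/instance-storage/instances", json=record, headers=headers
        )
        resp.raise_for_status()


if __name__ == "__main__":
    create_instances(login())

Tagging generated records with a recognizable prefix (here "PTF-JMETER-TEST") makes it easy for the restore-database-state script mentioned below to find and remove them later.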

Pros: Reuses an existing environment that teams already know about.
         Because teams can generate additional data for testing, these steps can theoretically be applied to any environment, not necessarily FOLIO-SNAPSHOT-LOAD.
         Teams don't need to get data from one environment and test the script in another.
         Adequately addresses the 3 criteria above.


Cons: Teams need to write one more script (3 total: the JMeter script, a restore-database-state script, and a data-generating script) to send to PTF. A sketch of such a restore script follows below.
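
For illustration, the restore-database-state script could be as simple as removing whatever the data-generating script created. The sketch below makes the same assumptions and uses the same placeholders as the data-generating example above (Okapi URL, tenant, token, and the instance-storage API); a real script may also need to clean up related records such as holdings and items.

#!/usr/bin/env python3
"""Hypothetical restore-database-state script (same placeholders as above).

Removes the instance records whose titles carry the PTF-JMETER-TEST prefix
so the environment (or PTF's database) returns to its baseline state.
"""
import requests

OKAPI_URL = "https://okapi.example.org"  # placeholder URL
TENANT = "diku"                          # placeholder tenant
HEADERS = {
    "X-Okapi-Tenant": TENANT,
    "X-Okapi-Token": "<token from /authn/login>",  # placeholder token
}


def delete_generated_instances() -> None:
    """Find and delete every instance created by the data-generating script."""
    query = 'title="PTF-JMETER-TEST*"'  # CQL query on the tagged title prefix
    resp = requests.get(
        f"{OKAPI_URL}/instance-storage/instances",
        params={"query": query, "limit": 1000},
        headers=HEADERS,
    )
    resp.raise_for_status()
    for instance in resp.json().get("instances", []):
        requests.delete(
            f"{OKAPI_URL}/instance-storage/instances/{instance['id']}",
            headers=HEADERS,
        ).raise_for_status()


if __name__ == "__main__":
    delete_generated_instances()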