Master JMeter Script
Overview
There are existing workflow scripts that test the workflows individually. Sometimes these workflows are tested in combination with other workflows by invoking a second workflow manually, such as an automated Check In Check Out test alongside a manual Data Import. To simulate production usage of FOLIO, a "master" JMeter script is needed that executes all of these workflows so that they can run concurrently in an automated fashion. This page outlines the design considerations for such a script and the decisions that will lead to creating it.
Script Requirements
- Script that runs the workflows described in PERF-457.
- The workflows must be configurable before running the tests, and different workflows require different configurations: the Check In Check Out workflow needs the number of users and ramp-up time set, while Data Import scripts may need the job profile and the MARC files being loaded set at runtime.
- Script should run from Carrier-io. Currently Carrier-io kicks off a test via a Jenkins job that is configured through the job's form (text boxes and other fields) and a Jenkinsfile. The master script may want to follow this model.
- Script should be able to distribute the virtual users to mimic live usage. In production there will be a percentage of librarians checking in books for their patrons, a handful of librarians ordering books, a few of them exporting the items' bibliographic data, etc. The script should be configured to match this real-life distribution of usage on the system (see the distribution sketch after this list).
- There should be a process, whether built into the master script, added to the Jenkins job as a step, or performed manually, to restore the database by cleaning up the records that were created or altered so that the test can be rerun from the same starting point.
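To make the distribution requirement concrete, below is a minimal sketch of how the virtual-user split could be declared. The workflow names and percentages are illustrative assumptions, not measured production ratios:

```json
{
  "distributionPercent": {
    "check-in-check-out": 60,
    "orders": 10,
    "data-export": 5,
    "data-import": 5,
    "other-workflows": 20
  }
}
```

The values sum to 100, so each workflow's share of virtual users (or iterations) follows directly from its entry.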
Considerations
- Variables
- Distribute variables into classes (a sketch of one workflow's variables, grouped by class, follows this sub-list)
- class: Main: Okapi Host, Username, Tenant
- class: Main (workflow-specific): edge host, API keys
- class: Load: users, duration, ramp-up <-- workflow specific
- Long vs. short durations
- Big jobs vs. time-bound jobs.
- class: Flow-based: profile, file <-- workflow specific
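As a hypothetical illustration of the classes above, one workflow's variables grouped by class; every name and value here is an assumption:

```json
{
  "main": {
    "okapiHost": "okapi.example.org",
    "username": "perf_admin",
    "tenant": "fs09000000"
  },
  "mainWorkflowSpecific": {
    "edgeHost": "edge.example.org",
    "apiKey": "<api-key>"
  },
  "load": {
    "users": 20,
    "rampUpSec": 300,
    "durationSec": 3600
  },
  "flowBased": {
    "jobProfile": "<job-profile-name>",
    "marcFile": "<marc-file-name>"
  }
}
```

Grouping this way keeps the shared Main values in one place while the Load and Flow-based values stay per-workflow.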
- Passing in variables from the Jenkins job.
- Group similar workflows' variables into separate groups and assign probabilities within each group
- JSON configuration file that holds the configuration for each workflow (see the combined sketch at the end of this list of options)
- Pros: Avoids creating a long list of parameters in the Jenkins job.
- Cons: A long file that is hard to work with.
- Store the workflow configuration file on GitHub (pull it down when building the job)
- Pros: Changes to the config are documented on GitHub as commit messages each time the file is checked in.
- Cons: The configuration still needs to be in some form (JSON, XML, plain text/CSV, etc.).
- Store the workflow config file in Artifact package
- Pros:
- Cons:
- If a slight change is made to one of the parameters, a new artifact package has to be built.
- Need to keep a record of which package versions contain which changes.
- Expose the parameters directly in Jenkins as text boxes and other controls (drop down lists, check boxes, etc..)
- Pros:
- Can be configured with a JSON file supplied in a single field
- Jenkins could store default values for parameters.
- Cons:
- A very long Jenkins job that may have well over 200 parameters (approximately 4-5 parameters for each workflow + up to 15 general parameters).
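Tying the options together, a sketch of what a per-workflow JSON configuration might look like, whether it is stored on GitHub, packaged as an artifact, or pasted into a single Jenkins field. All keys and values are assumptions for illustration:

```json
{
  "general": {
    "okapiHost": "okapi.example.org",
    "tenant": "fs09000000"
  },
  "workflows": {
    "check-in-check-out": {
      "enabled": true,
      "probabilityPercent": 60,
      "load": { "users": 20, "rampUpSec": 300 }
    },
    "data-import": {
      "enabled": true,
      "probabilityPercent": 5,
      "flow": { "jobProfile": "<job-profile-name>", "marcFile": "<marc-file-name>" }
    }
  }
}
```

A single file like this keeps the Jenkins job down to a handful of parameters while still letting each workflow carry its own load or flow-based settings and its probability.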
- Probability of calls (TBD, to be validated with small POCs)
- Flow-based?
- Tenant/cluster-based, to control the distribution of calls to the workflows.
- Implementation:
  - One thread group per workflow
    - Pros:
      - Smaller thread groups are easier to manage and debug.
    - Cons:
  - One thread group for all workflows
    - Pros:
    - Cons:
      - Hard to manage everything in one thread.
      - Too many lines with transaction controllers in one thread.
      - Load is configured differently across workflows: for some by the number of users, for others by the size of the processed file, which makes it extremely hard to combine them into one thread.
- How to retrieve secured variables such as usernames and passwords?
- Automation: Any modifications to the current Jenkins job needed?
- Script size: How big is too big? Can it reasonably accommodate 30-40 workflows?
- Test results monitoring:
- Do we need to create new Grafana dashboard?
- Will we have automated reporting?
- How will we obtain workflow process durations, such as Data Import/Export durations, from the database?