Folijet - Automated Large-Scale Import Testing (Proposal)


Overview

Data Import is a feature that operates on large datasets. To date there are automated tests covering various Data Import APIs and modules, including Karate tests and Cypress tests, but these tests import only a handful of records, so large-scale imports (even of 1K+ records) have not been verified, even functionally. Moreover, the Data Import profiles used in these tests are made up at best and may not reflect real-life scenarios. There is therefore a need for an automated test system that runs continuously to verify large imports using real-life profiles. This document proposes the requirements for such a system so that it can be built.

Automated Large-Scale Data Import Testing System Features

It is best for Folijet to own this system rather than share it with other teams. It may be hosted on Rancher, on the community’s AWS account, or on FSE’s AWS account. Owning the system gives Folijet full control and the ability to troubleshoot issues at any time. It also ensures that the system is dedicated to a single purpose, so any traffic to it is easily accounted for.

FOLIO Instance

  1. The FOLIO instance does not need to be hosted on beefy machines or a beefy database. It needs to be able to store thousands of SRS and inventory records, whether created by import jobs or used for matching (finding records in the database that meet particular criteria) during update imports. The database may therefore store up to 500K or 1M records, but no more than that.

  2. This FOLIO instance should run code from the latest commits of the Data Import modules, pulled nightly, built, and deployed to the instance; a sketch of such a nightly job follows this list. (Ideally every commit to any DI module would trigger a rebuild and redeployment of that module, but this may have unforeseen consequences when builds or tests fail and could demand unnecessary attention throughout the day.)

  3. Each nightly FOLIO build should be tagged with the date, and test failure reports should reference this tag so that failures are easily traced to the commits made on any given date.

  4. The FOLIO instance needs to maintain Data Import profiles found in production or contributed by the community. These profiles may be created/copied manually from production, or auto-generated via the “jpwrangler” tool.

  5. The system should stay up at least for the duration of the test run, if not 24/7. Staying up 24/7 would allow ad hoc testing, manual or triggered by a team member, to try something out.

  6. Folijet team members should be able to review the system’s basic metrics, such as CPU and memory utilization for the containers and the database; a polling sketch follows this list.

  7. Folijet team members should be able to access the modules' logs in a convenient way.

  8. Folijet team members should be able to configure a module’s environment variables easily.

  9. The system should allow a debugger to be attached so that team members can troubleshoot issues conveniently; see the environment-variable sketch following this list.
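
As a concrete illustration of items 2 and 3, below is a minimal sketch of the nightly build job, assuming a Maven build and a generic deployment script. The module list, working-directory layout, and "deploy.sh" entry point are assumptions; the real job would use whatever build and deployment tooling Kitfox settles on.

    import datetime
    import subprocess

    # Assumed set of Data Import modules to rebuild nightly; the real
    # list would be maintained by Kitfox.
    DI_MODULES = [
        "mod-data-import",
        "mod-source-record-manager",
        "mod-source-record-storage",
    ]

    def nightly_build():
        # Tag the nightly build with the date (item 3) so test failure
        # reports can be traced to the commits pulled on that day.
        tag = "di-nightly-" + datetime.date.today().isoformat()
        for module in DI_MODULES:
            subprocess.run(["git", "-C", module, "pull"], check=True)
            subprocess.run(["mvn", "-f", module, "clean", "install"], check=True)
            # "deploy.sh" stands in for the actual deployment mechanism
            # (Rancher, AWS, etc.).
            subprocess.run(["./deploy.sh", module, tag], check=True)

    if __name__ == "__main__":
        nightly_build()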
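
For item 6, if the modules run as Docker containers, a lightweight way to collect the basic metrics is to poll docker stats. This is only a sketch; a Rancher- or AWS-hosted deployment would more likely rely on its built-in dashboards.

    import subprocess

    def container_metrics():
        # One-shot snapshot of per-container CPU and memory usage.
        result = subprocess.run(
            ["docker", "stats", "--no-stream",
             "--format", "{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout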
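
For items 8 and 9, since the Data Import modules are JVM-based, attaching a debugger typically means enabling JDWP through the module’s JVM options. Below is a minimal sketch of a per-module environment override, assuming the deployment exposes a JAVA_OPTIONS-style variable (the exact name depends on the base image):

    # Hypothetical per-module environment overrides (item 8); the JDWP
    # option enables remote debugging on port 5005 without suspending
    # startup (item 9).
    MODULE_ENV = {
        "mod-source-record-manager": {
            "JAVA_OPTIONS": (
                "-agentlib:jdwp=transport=dt_socket,"
                "server=y,suspend=n,address=*:5005"
            ),
        },
    }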

Test Infrastructure

  1. May leverage the existing automated Karate test infrastructure/framework to execute the tests.

  2. May leverage the existing Karate integration test suite for Data Import.

  3. May leverage the existing JMeter script that the PTF created.

  4. The test system should run the tests at least once nightly and produce a report after each run.

  5. Tests should be designed to run with different profiles and input MARC files; a parameterization sketch follows this list.

  6. The system should expose test result logs for troubleshooting.

  7. New tests should be easy to add to the system, and existing tests easy to modify.

  8. The team should be able to access the database, e.g., by pointing a PgAdmin instance at it; a scripted-access sketch follows this list.
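
To illustrate item 5, one way to parameterize the nightly run is a simple matrix of profile/MARC-file pairs fed to whatever runner (Karate, JMeter) executes the imports. The file names and the "run-import-test.sh" entry point are assumptions:

    import subprocess

    # Hypothetical matrix of real-life profiles and MARC input files;
    # each pair becomes one large-scale import test in the nightly run.
    TEST_MATRIX = [
        ("default-create-instances.json", "100k-records.mrc"),
        ("update-by-oclc-match.json", "50k-updates.mrc"),
    ]

    def run_nightly_tests():
        for profile, marc_file in TEST_MATRIX:
            # Stand-in for the actual Karate/JMeter invocation.
            subprocess.run(
                ["./run-import-test.sh", "--profile", profile, "--marc", marc_file],
                check=True,
            )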
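
For item 8, besides PgAdmin, the database can also be queried from a script, e.g., for post-run sanity checks. A minimal sketch using psycopg2; the host, credentials, tenant schema, and table name are placeholders:

    import psycopg2

    conn = psycopg2.connect(
        host="di-test-db.example.org",  # placeholder host
        dbname="okapi_modules",
        user="folio_admin",
        password="...",
    )
    with conn, conn.cursor() as cur:
        # Example sanity check: count SRS records after a nightly run
        # (the "diku" tenant schema is illustrative).
        cur.execute("SELECT count(*) FROM diku_mod_source_record_storage.records_lb;")
        print(cur.fetchone()[0])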

Responsibilities

Kitfox

  • Creates the FOLIO instance and the automated test system

  • Periodically maintains the system, e.g., truncating old records

  • Troubleshoots any build or deployment issues

Folijet

  • Team members and/or the QA team contribute tests along with the corresponding MARC files and profiles

  • Team members troubleshoot test failures on demand, on the day the tests fail.
