Performance Testing Methodology


Environment

  • UChicago dataset: 27M records
  • Other datasets: check with the POs.
  • Run two environments: one with the profiler enabled and one without.

PreTest

  • Establish test scenarios, conditions, and SLAs with the POs, especially for the scenarios we come up with ourselves.
  • Maintain a test log: write down the execution time of each test and the conditions.
    • Parameters to record: the data in the database
  • Is it feasible to restart the cluster so that all the ECS services have a fresh starting point in terms of CPU and memory?
    • Short-duration tests: no need to restart the environment every time.
      • Keep an eye on environment metrics; we may need to proactively restart a module, or the whole environment, if metrics reach a critical level.
    • Long-duration tests: restart the environment to get a clean starting point.
  • Baseline tests/results:
    • Only when absolutely required? E.g., a whole new set of workflows.
    • Each time a new version of a module is added.
    • If parameters haven't changed, there is no need to rerun the baseline.
  • Clear out pgHero (reset its query stats) so the next run starts from a clean slate.
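
The test-log bullet above could be kept as a small append-only CSV. A minimal sketch; the file path, field names, and CSV format below are assumptions for illustration, not an agreed format:

```python
import csv
import datetime
import os

# Hypothetical columns; adjust to whatever conditions the team agrees to track.
LOG_FIELDS = ["timestamp", "scenario", "record_count", "profiler_enabled", "notes"]

def append_log_entry(path, scenario, record_count, profiler_enabled, notes=""):
    """Append one timestamped entry to the test log, creating the file if needed."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "scenario": scenario,
            "record_count": record_count,
            "profiler_enabled": profiler_enabled,
            "notes": notes,
        })
```

Logging the record count and profiler flag per run is what makes the "if parameters haven't changed, skip the baseline" decision auditable later.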
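
For the cluster-restart question above, one option on ECS is a forced new deployment, which replaces each service's tasks so they start with fresh CPU and memory. A sketch, assuming the modules run as ECS services, boto3 is available, and AWS credentials are configured; the cluster and service names would come from the actual environment:

```python
def restart_plan(cluster, services):
    """Build the list of update_service calls needed for a clean starting point."""
    return [
        {"cluster": cluster, "service": s, "forceNewDeployment": True}
        for s in services
    ]

def apply_restart(plan):
    """Execute the plan; requires boto3 and AWS credentials (sketch only)."""
    import boto3
    ecs = boto3.client("ecs")
    for call in plan:
        # forceNewDeployment starts fresh tasks even though the task
        # definition is unchanged.
        ecs.update_service(**call)
```

Because a forced deployment cycles tasks service by service (subject to each service's deployment configuration), it may be preferable to restarting the whole cluster before short-duration tests.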