- Honeycomb approach: weight the suite toward integration tests, with relatively few unit and e2e tests
- define a quality gate for unit/integration test coverage (70%? 80%?)
- run tests per commit (i.e., per PR); this is already in place
- e2e tests: involve POs in compiling the scenarios to cover
- run these relatively frequently: after merge to master? once daily?
- do not couple e2e test output to build environments
- how to better report (or communicate) the output of these tests, e.g. reportportal.io
- with local dev envs, teams will be able to run integration and e2e tests before PR merge, then verify on reference envs as well
- a Jenkins job internal to the Rancher env (i.e., separate from the community Jenkins jobs) can be used for this
- FYI: specs exist for many e2e tests that are currently run manually as part of the quarterly BugFest releases
- "code coverage" is a somewhat false metric: it proves only that the code was executed during the tests, not that it behaves correctly
- e2e tests have very high value for overall functionality, but should produce a widely accessible report rather than a blocked build
- per-team Rancher envs will give POs the ability to preview work before PR merge
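The coverage gate proposed above can be enforced mechanically rather than by review. As one sketch, assuming the UI modules are tested with Jest, a `jest.config.js` fragment that fails the test run when coverage drops below an agreed threshold (the 80%/70% figures here are placeholders, not a decision):

```javascript
// jest.config.js — sketch of a coverage quality gate.
// The threshold numbers are placeholders until the team agrees on values.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 80, // minimum % of statements executed by tests
      branches: 70,   // branch coverage is typically the hardest to hit
      functions: 80,
      lines: 80,
    },
  },
};
```

Jest exits non-zero when any threshold is missed, so the per-PR job already in place would block the merge without extra tooling.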
10 min | Introduction of tool selection criteria | | Take emotions out of the process by agreeing on selection criteria that will be applied to every proposed solution (tool group). People have strong feelings and strong opinions, but this needs to be done impartially. Each proposed tool should go through a spike and be presented to the UI Testing team for review, with a rating against each selection criterion: - Speed: must run fast
- Reliability: a failing test must not make issues further down in the suite opaque
- Relevance:
- Mocking facility (sharing mocks for core modules)
  - at present, every module has to build its own facility for this → lots of redundancy in mocks of core modules. Would be very valuable, but may be hard to achieve.
- Integration vs Unit vs