**10 min | UI Testing Team Objectives for 2020 | @Anton Emelianov (Deactivated)**

- UI testing approach for FOLIO for the next two years.
  - The team hasn't met since October 2018; the decisions made then need to be revisited, since the ecosystem has evolved since then.
  - Agree on, document, and communicate our decisions to the dev teams to set their expectations about our guidelines.
- Acceptance criteria for new UI code changes.
- Acceptance criteria for including a UI module in a FOLIO release.
- UI testing tools: selection (BigTest, Jest, RTL, etc.).
  - Set a guideline that all modules are obligated to respect; a team can't just choose its own framework and expect it to be part of the project.
  - A spike is acceptable, of course, but as a spike, not as a final decision.
  - All tools in use should be validated by this team. This is not a firm "no" to other options, but a deliberate approach to adding new tools to the project.
- FOLIO-specific documentation.
- Adoption and training.
**30 min | UI testing approach at a high level | @Aliaksei Chumakou**

- Discuss the current state of UI tests:
  - We want e2e tests that provide additional quality gates, not requirements for builds.
  - The current approach is haphazard; we don't really have a test pyramid, because we don't formally maintain the Nightmare tests.
  - We create many tests only to turn PRs green, not for any other value.
  - Scattershot mix of NightmareJS, BigTest, RTL, and Cypress.
  - We don't have manual testers, and they would be expensive even if we could get them.
- Proposed changes:
  - Honeycomb approach.
  - Define a quality gate for unit/integration test coverage (70%? 80%?).
  - Run tests per commit (per PR); this is already in place.
  - e2e tests: involve POs in compiling the scenarios to cover; run them relatively frequently (after merge to master? once daily?); do not couple e2e test output to build environments.
  - How to better report (or communicate) the output of these tests, e.g. reportportal.io.
- Discussion:
  - With local dev environments, we will be able to run integration and e2e tests prior to PR merges, and then verify on reference environments as well.
  - FYI, specs exist for many e2e tests that are run manually as part of quarterly BugFest releases.
  - "Code coverage" is a somewhat false metric: it doesn't prove that things behave correctly, only that the code was executed during the tests.
  - e2e tests have very high value in terms of overall functionality, but should result in a widely accessible report, not a blocked build.
  - Per-team Rancher environments will give POs the ability to preview work before a PR is merged.
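As a concrete illustration of the coverage quality gate floated above, here is a minimal sketch assuming Jest ends up as the unit-test runner; the threshold numbers are placeholders echoing the "70%? 80%?" question, not a team decision:

```javascript
// jest.config.js -- hypothetical coverage quality gate.
// The percentages below are illustrative; the team has not agreed on numbers.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 80, // fail the run if statement coverage drops below 80%
      branches: 70,
      functions: 80,
      lines: 80,
    },
  },
};
```

With a config like this, `jest --coverage` fails the PR check when coverage falls below the thresholds, which is one way to enforce the gate per commit without any extra CI scripting.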
**10 min | Introduction of tool selection criteria | @Anton Emelianov (Deactivated)**

- Take emotion out of the process by agreeing on selection criteria that will be applied to every proposed solution (tool group). Folks have strong feelings and strong opinions, but this needs to be done impartially.
- Each proposed tool should go through a spike and be presented to the UI Testing team for review, with ratings for each selection criterion:
  - Speed: must run fast.
  - Reliability: must not make issues further down in the suite opaque.
  - Relevance.
  - Mocking facility (sharing mocks for core modules).
  - Integration vs. unit vs. e2e tests: can the same tool be used for all of them? E.g., can the same tool run against both a real backend and mocks? If yes, this is a huge win: it reduces the amount of tooling folks need to know.
  - Cost to migrate/rebuild existing tests.
  - Multi-browser support: not necessary now, but likely required in the future. Implicit in this is that some amount of real-browser testing is necessary for some tests (NB: Nightmare and Jest both do not use a real browser). Perhaps unit tests can/should run headless, but functional and e2e tests need a real browser.
- What belongs where? What kinds of testing do we want to do?
- Anything else?
**10 min | Discuss spikes and assignments for next meeting | @Anton Emelianov (Deactivated)**