
Page Properties

Submitted Date: yyyy-mm-dd
Approved Date: yyyy-mm-dd
Status: ACCEPTED
Impact: HIGH/MEDIUM/LOW


This decision was migrated from the Tech Leads Decision Log as part of a consolidation process. The original decision record can be found here.

RFC 

N/A

Stakeholders

Anton Emelianov

Contributors

Aliaksei Chumakou, Mikita Siadykh

Approvers

This decision was made by the Tech Leads group prior to the adoption of the current decision-making processes within the FOLIO project.

Background/Context


Improving the testing process for FOLIO UI.

Current state
  • No formal testing approach (it may look like a test pyramid, but we don't actively write E2E tests, we only maintain the existing ones)
  • Tests are written to satisfy coverage targets
  • Some tests cover a lot of code but verify nothing
  • Some modules don't meet acceptance criteria
  • No defined toolset
  • No manual testers (only the BugFest phase, which is sometimes insufficient for testing due to its limited time)
  • Regression testing happens only during quarterly releases

Assumptions

N/A

Constraints

N/A

Rationale

To increase code quality and detect regression bugs at an early stage.

We already have a Definition of Done for test coverage, so we can formalize it for unit and integration tests, and decide in upcoming meetings whether we need to change testing tools to get more realistic coverage numbers.
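
For illustration, a minimal sketch of how such a coverage requirement could be formalized with Jest, assuming Jest is the single tool chosen; the thresholds mirror the decision below and the paths are assumptions:

// jest.config.js: hedged sketch, not the agreed configuration
module.exports = {
  testEnvironment: 'jsdom',
  // Count unit and integration tests together in a single coverage report
  collectCoverageFrom: ['src/**/*.js'],
  coverageThreshold: {
    global: {
      statements: 80,
      branches: 70,
      functions: 80,
      lines: 80,
    },
  },
};

Running jest --coverage would then fail the build whenever coverage drops below the agreed thresholds, which is what makes the DoD enforceable per commit.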

For E2E tests, however, we currently don't write any. If we formalize them as a requirement (with PO participation in deciding which scenarios should be covered), we can proceed with debating tools in upcoming meetings as well. That alone would be a huge step forward in decreasing regressions, since a breaking change in stripes-components would be noticed as soon as ui-orders scenarios started to fail.
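
To make the idea concrete, here is a hedged sketch of one such scenario using Cypress as an example tool (the decision below leaves the tool choice open); the route, labels, and field names are hypothetical:

// cypress/e2e/orders.cy.js: hypothetical ui-orders smoke scenario.
// A breaking change in stripes-components would surface here even though
// the test only drives ui-orders screens.
describe('ui-orders: create order', () => {
  it('opens the Orders app and starts a new order', () => {
    cy.visit('/orders');                  // assumed route
    cy.contains('button', 'New').click(); // assumed button label
    cy.get('input[name="poNumber"]').should('be.visible'); // assumed field
  });
});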

Decision

  • Follow the testing honeycomb approach (the UI part is marked with bold lines in the picture below)
  • 70-80% unit/integration test coverage; one testing tool can be used for both, so counting combined coverage for the two test types shouldn't be a problem
  • Unit tests are executed on every commit
  • POs/QAs define the set of E2E scenarios (for existing features they can be gathered from TestRail; for new features the PO can define scenarios in the ticket itself, so E2E tests can be counted in the ticket's estimation)
  • E2E tests are executed frequently (after each merge to master, once per day, etc.; the exact cadence is still to be decided)
  • E2E tests don't block environments
  • E2E test results are reported to a dedicated tool (e.g. reportportal.io; see the sketch after the picture below)

[Image: testing honeycomb diagram, with the UI portion outlined in bold]
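
As a sketch of the reporting item above, wiring E2E results into reportportal.io could look roughly like the following, assuming Cypress and the @reportportal/agent-js-cypress reporter; the option names and URLs are assumptions to verify against the agent's documentation:

// cypress.config.js: hedged sketch of e2e reporting via reportportal.io
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  reporter: '@reportportal/agent-js-cypress',
  reporterOptions: {
    endpoint: 'https://reportportal.example.org/api/v1', // assumed instance URL
    apiKey: process.env.RP_API_KEY, // kept out of source control
    project: 'folio-ui',            // assumed project name
    launch: 'ui-e2e-nightly',       // assumed launch name
  },
  e2e: {
    baseUrl: 'https://folio-snapshot.example.org', // assumed test environment
  },
});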

Action items

Implications

  • Pros
    • N/A
  • Cons
    • N/A


Other Related Resources