
Summary

Our goal was to test the Q4 release in a similar fashion to how a customer would conduct UAT, to see whether our current process can deliver the desired level of quality. After triage we had 67 open defects. This does not count the 15 defects that were closed with "duplicate" or "won't do" resolutions. Needless to say, if this had been a true UAT event, we would have failed, and the customer would not have accepted this build for deployment in production. We do not have enough quality gates built into our process to protect releases from a large number of defects. Hopefully, the quality improvement work we are planning for Q1 and beyond will yield better results in the future.


This was the first time we attempted to execute such an extensive manual product test run. That being said, it went relatively smoothly for a first try:

  • There were relatively few questions about the process in the bug-fest Slack channel at the beginning of the week.
  • Participants didn't require a lot of support in following the directions for Bug Fest.
  • We encountered one performance issue on Tuesday that blocked all testing activities. It was quickly resolved by Hongwei.
  • We were not able to execute all test tasks in the first 2 days, and the event had to be extended for another 3 days.
  • Some areas of FOLIO included in the test plan did not get tested. We have to do a better job of planning task assignments next time.

Here are the highlights of the results:

  • 82 defects were logged over the course of 5 days.
  • 15 defects were closed as "duplicate" or "won't do", leaving 67 active defects.
  • We had 14 participants from across FOLIO (admins, POs, regular testers, and library staff).
  • Testers logged the most defects against the ui-users and ui-inventory modules (see chart below).
  • Most defects against ui-requests were dismissed because they were logged against incomplete features.