Thunderjet - Quality Metrics
The question is: how will we know that the overall quality of the software in the Acquisition & Finance domain is gradually improving as we implement our ideas? This requires metrics that characterize the state of A&F, and it is particularly important to track how these metrics change over several release cycles. So, which metrics can be used?
The Thunderjet - Bugs metrics Jira dashboard already provides a lot of useful statistics on bugs reported in the Jira tracking system.
So, the following metrics are proposed:
Metric | Sense / Expectations | Source | Frequency of collection / analysis |
---|---|---|---|
Number of open bugs, broken down by priority | the expectation is zero P1 bugs | Jira dashboard | once per sprint |
Number of detected bugs, by priority and release | the expected trend is a decrease in the total number of bugs from release to release, with identified bugs shifting toward lower priorities | Jira dashboard | once per release |
Number of bugs escaped into Production | the expected trend is to minimize and even avoid P1/P2 bugs in Production (P3 is acceptable); Production bugs are those which have RRT or Support label | Jira dashboard | once per release |
Number of occurrences (similar bugs reported by different customers) | | FSE / Rally log, as part of RCA | once per release |
Average time of bug fixing | there does not seem to be any SLA in this area yet, so the expected trend is a decrease in this time as a result of improved tools for troubleshooting and analysis | manually, via a dedicated script | once per release |
No performance degradation from release to release | there are no baseline metrics in this area; the only measurements available are for exporting orders in Edifact format (Morning Glory -> Nolana, where performance improved). Action item: contact PTF, write performance tests, and run load tests against a set of scenarios in Orchid to establish baseline values | performance testing | once per release |
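As an illustration of the "average time of bug fixing" metric, the dedicated script could be as simple as the following sketch. It assumes bugs are exported from Jira with `created` and `resolved` ISO-8601 timestamps (hypothetical field names; the real export format may differ), and computes the mean resolution time in days, skipping unresolved bugs:

```python
from datetime import datetime

def average_fix_time_days(bugs):
    """Average time from creation to resolution, in days.

    `bugs` is a list of dicts with ISO-8601 'created' and 'resolved'
    timestamps (assumed field names; adjust to the real Jira export).
    Bugs that are not yet resolved are skipped.
    """
    durations = []
    for bug in bugs:
        if not bug.get("resolved"):
            continue  # unresolved bugs do not contribute to the average
        created = datetime.fromisoformat(bug["created"])
        resolved = datetime.fromisoformat(bug["resolved"])
        durations.append((resolved - created).total_seconds() / 86400)
    return sum(durations) / len(durations) if durations else 0.0

# Toy data to show the shape of the input
sample = [
    {"created": "2022-10-01T09:00:00", "resolved": "2022-10-05T09:00:00"},  # 4 days
    {"created": "2022-10-02T09:00:00", "resolved": "2022-10-04T09:00:00"},  # 2 days
    {"created": "2022-10-03T09:00:00", "resolved": None},                   # skipped
]
print(average_fix_time_days(sample))  # 3.0
```

Running this once per release against the exported bug list gives the trend line the table asks for.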
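Similarly, the per-release bug counts and the "escaped into Production" metric can be tallied from the same export. This is a minimal sketch, assuming each bug record carries `release`, `priority`, and `labels` fields (assumed names), and using the document's own rule that Production bugs are those with an RRT or Support label:

```python
from collections import Counter

def bug_stats(bugs):
    """Tally bugs by (release, priority), and separately count bugs that
    escaped into Production, identified by an 'RRT' or 'Support' label.

    Field names ('release', 'priority', 'labels') are assumptions about
    the Jira export format.
    """
    by_release_priority = Counter()
    escaped_by_priority = Counter()
    for bug in bugs:
        by_release_priority[(bug["release"], bug["priority"])] += 1
        # The document defines Production bugs via the RRT / Support labels
        if {"RRT", "Support"} & set(bug.get("labels", [])):
            escaped_by_priority[bug["priority"]] += 1
    return by_release_priority, escaped_by_priority

# Toy data to show the shape of the input
sample = [
    {"release": "Nolana", "priority": "P1", "labels": ["RRT"]},
    {"release": "Nolana", "priority": "P3", "labels": []},
    {"release": "Orchid", "priority": "P2", "labels": ["Support"]},
]
totals, escaped = bug_stats(sample)
print(totals[("Nolana", "P1")])  # 1
print(escaped["P2"])             # 1
```

Collected once per release, `totals` feeds the "by priority and release" row and `escaped` the "escaped into Production" row of the table above.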