Reviewed the tool demo sites and filled in some of the evaluation criteria sheet.
Summary of the prioritization process steps that the tools are trying to support (we didn't discuss this per se, but I'm including it here to aid my own understanding – Michelle Futornick)
SIG rating (or is it ranking?) of relevant epics
discuss within SIG
determine SIG rating (up to the SIG how this is done, might involve "voting" by SIG members, or consensus discussion with smaller SIGs)
Institutional rating on all epics
discuss within institution
determine institution's rating (similar to SIG: up to the institution how they want to determine this, by formal voting or consensus discussion)
collect ratings from all institutions and calculate the average
Product Owner developing and refining their own ranking, taking into account the SIG and institutional ratings
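The averaging step above (collecting one rating per institution and deriving a single institutional rating per epic) could be sketched as follows. This is a minimal illustration, not a description of any of the tools under evaluation; the epic keys, institution names, and rating scale are assumptions.

```python
from statistics import mean

# Hypothetical collected ratings: epic key -> {institution -> rating}.
# Epic keys and the rating scale here are illustrative only.
ratings = {
    "UXPROD-101": {"Institution A": 5, "Institution B": 3, "Institution C": 4},
    "UXPROD-102": {"Institution A": 2, "Institution C": 5},
}

def institutional_rating(per_institution: dict) -> float:
    """Single institutional rating for one epic: the mean over all
    institutions that submitted a rating for it."""
    return round(mean(per_institution.values()), 2)

for epic, per_inst in ratings.items():
    print(epic, institutional_rating(per_inst))
```

Note that institutions that did not rate an epic simply don't contribute to that epic's average; whether missing ratings should instead count as a low score is a policy question for the community, not the tool.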
Some overall observations/challenges about the prioritization process and tools to support that process:
if SIGs and institutions rate only epics, critical individual issues within epics are left out. How do we surface those issues so they can be ranked alongside epics? Should they become epics themselves? How would the PO know what those issues are?
how to handle epics that cross multiple SIGs, assuming there is a single "SIG Rank" field on an epic (or issue). One strategy: add a comment in JIRA stating which SIGs rated the epic and what each SIG's rating was, and put the average into the SIG Rank field.
how a SIG or an institution determines its ratings of epics is up to that SIG or institution; we can help by suggesting tools
the part of the process that needs a community-wide tool is collecting ratings from all institutions and determining a single institutional rating per epic
no matter which tool is chosen, it will require effort to set up, especially the first time around
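The multi-SIG strategy described above (record each SIG's rating in a JIRA comment, put the average in the single SIG Rank field) could be sketched like this. The SIG names, rating scale, and comment wording are illustrative assumptions, not an agreed format.

```python
def sig_rank_and_comment(sig_ratings: dict) -> tuple:
    """Given one rating per SIG for an epic, return the averaged value
    for the single "SIG Rank" field plus a JIRA comment body that
    preserves the individual SIG ratings."""
    average = round(sum(sig_ratings.values()) / len(sig_ratings), 2)
    lines = [f"{sig}: {rating}" for sig, rating in sorted(sig_ratings.items())]
    comment = "SIG ratings for this epic:\n" + "\n".join(lines)
    return average, comment

# Hypothetical example with two SIGs rating the same epic:
rank, comment = sig_rank_and_comment({"RM SIG": 4, "MM SIG": 2})
# rank -> 3.0 (goes into the SIG Rank field); comment lists each SIG's rating
```

The point of keeping the per-SIG breakdown in a comment is that the average alone loses information: a 3 that is "everyone says 3" differs from a 3 that is "one SIG says 5, another says 1".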
Next steps
Next meetings:
4th May: cancelled due to vacations
11th May: meeting to get our presentation for the PC ready
19th May: presentation of the gathered feedback, updated proposal, and tools to the PC