II | Requirements analysis

Brainstorming

Looking at the current ranking and the past pointing process - what went well, what could be improved?

Feedback | Challenge | Possible solution | Comments / Discussion

Ranking in JIRA combines all information in the same spot /MS

There are not enough JIRA accounts for every institution to rank

  • Ranking per service provider

 

The GBV currently ranks as a service provider for its libraries; this includes pre-ranking to reach agreement among the libraries

Either everyone ranks via service providers or no one does (except for libraries hosting themselves). We need to overcome the current mixture; otherwise it is not fair and balanced: if we go this way, networks and providers could get more points than libraries ranking for themselves

Ranking in JIRA enables institutions to track features via dashboards /MS 

 

  • Keeping JIRA as ranking system

  • libraries should not need to run a shadow system to keep track of the highly ranked features

  • on the other hand: maybe libraries already have shadow systems for giving feedback to the community

Involve SIGs more closely in the ranking process /MS

Find a fair ranking mechanism within each SIG; everyone needs to be able to give feedback

  • Add a SIG ranking to JIRA

  • add label to JIRA tickets

  • voting system within Slack channels of SIGs - pro: people could take part without attending SIG meetings

  • use Confluence ranking possibilities (was used in ACQ SIG several times)

  • individual features (where possible) should be grouped into epics, and the epics are voted on

  • Context is very important

Many UXPRODs are hard to understand /MWi

 

Group related features together and describe them in straightforward language /MWi

  • Features need enough information so that everyone understands and features can be differentiated

  • Maybe we should make sure that there are no JIRA issues that are not connected to an epic.

  • SIGs could help describe features, inside or outside of JIRA; a SIG could describe features outside of JIRA for people to vote on, then bring the results back to JIRA

Only highly ranked UXPRODs were included in Kiwi pointing /MWi

Some features were not visible /MWi

 

 

Many UXPRODs to rank in Kiwi pointing /MWi

 

Group related features together  /MWi

 

Some UXPRODs partly overlapped in Kiwi pointing; it was hard to distribute points /MWi

 

Group related features together  /MWi

 

Non-fancy technical issues do not attract votes. (In some cases it was tricky to determine whether a UXPROD was a feature or a technical, non-functional requirement in the Kiwi pointing.) /MWi

The foundations of the system are not prioritized this way. /MWi

Should technical issues be handled outside of the prioritization process? /MWi

+1 Always devote a fixed share of dev time (e.g. 20%) to NFRs and rank them separately /MS

 

Each community institution was forced to prioritize during Kiwi pointing and could not claim that every desired feature was of the highest importance. Good. /MWi

 

If using JIRA, could a limited number of R1 rankings be allowed? /MWi

 

Each community institution was treated equally during Kiwi pointing. /MWi

Small institutions and/or those far from implementation have the same weight as large and/or already implemented institutions. Good or bad? /MWi

 

  • newer institutions should be able to rank equally to the ones that can rank in JIRA

  • how are legal requirements weighted - higher weight needed?

  • more weight for libraries putting in dev resources

The pointing process was independent of the number of features a dev team can deliver /MS

Independent of rankings or pointings: every dev team can only develop a specific number of features

Having multiple rankings per app, one per development team /MS, TT