2022-08-10 Meeting notes

Date

2022-08-10

Attendees

Thomas Trutt, Jacquie Samples, Jana Freytag, Martina Schildt, Martina Tumulla

Regrets: Michelle Futornick

Zoom link: https://openlibraryfoundation.zoom.us/j/88593295877?pwd=Zm53aG1Ga2g5SVV1OFAvK0lMVVVQdz09

Calendar invite: https://openlibraryfoundation.zoom.us/meeting/tZwofuqqpz4iHdMaS6vffyjDAlO5x1_KkNTf/ics?icsToken=98tyKuGgqzIpGN2QuB6ARpw-GYr4b-rxmCVHgqdwnSyzFSZVewnSF-5tZ6ouL_Pb


Discussion items

Item | Notes

To Dos | Follow-ups

  • Next steps:
    • meet with POs - not scheduled - no feedback from Khalilah so far
      • question to POs: do features fit into epics?
      • entering new requests in cases where multiple POs are concerned → choose main focus and "cc" all other relevant POs as part of description → are there other ways?
      • is the voting option in JIRA used? Does that have any impact?
    • meet with SIGs - share task - after WOLFcon
    • meet with CC? - at or after WOLFcon

Meeting on Aug 24th is cancelled

Tentative meeting Aug 17th

WOLFcon planning
How can prioritization feed into roadmap?
  • SIG can discuss the results and propose themes to the PC/Roadmap team
  • what feeds into the roadmap - the SIG ranking, the institutional ranking, or both?
  • the survey can and needs to feed into the roadmap as well
  • bring results from different rankings to SIGs - discuss themes and subthemes
Decisions: please see table below
Future steps
  • JIRA Questions - invite Peter to one of our meetings?
    • is there a limit on JIRA accounts?
    • what is the limit?
    • at the enterprise level, the limit appears to be 35,000 members
    • how close are we to the limit?

Decisions

DECIDED: group made a decision to propose to PC

TBD: needs decision making

QUESTION: there is an open question

TBD PC: PC decision needed

REJECTED: idea is rejected


Topic | Decision | Argumentation | Notes | Group status
New Requests
  • Create forms for each SIG and one cross-app form in Confluence
  • New requests are submitted via these forms
  • Requester enters descriptive title and description with use cases
  • default information is pre-entered, to capture all needed details
    • e.g. Assignee (=PO)
    • SIG label
    • "new_request" label
  • New requests should be assigned to a PO (=assignee)
    • in cases where multiple POs are concerned → choose main focus and "cc" all other relevant POs as part of description
    • this option should be added as a separate field
  • there will be an automated import to JIRA via API (see the sketch after this list)
    • once a week
  • These new requests will be added as features with a new, separate status and label "new_request"
  • new requests can be monitored via dashboards
  • PO or convener does duplicate check
  • PO or convener brings request into SIG for further refinement and discussion
  • PO or convener changes status of request
  • if necessary: PO can directly change status without SIG discussion (for urgent cases) → this should not be the standard process
  • Confluence is open to everyone
  • no account limit
  • no expert knowledge needed
  • Requests cannot get lost and are easier to track for the providing institution
  • new requests are trackable via dashboards
    • for requester, SIG and PO
  • possibility to comment
    • e.g. if others would like to push the request
    • for PO to mark as duplicate
  • linkable to duplicates
  • history can be kept
  • alternative: Add "New request" as ticket type (like feature, story, …)
  • New requests should be added with a separate status instead of as a new type → advantage:
    • type would need to change from "New request" to feature after "approval"
    • status change is easier, and nevertheless trackable
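The weekly automated import mentioned in the list above has not been specified; the following is a minimal sketch, assuming form submissions are available as simple records and using JIRA's standard REST endpoint for creating issues. The project key, issue type name, and field names used here are assumptions and would need to be confirmed with the JIRA administrators.

```python
"""Sketch of the weekly Confluence-form -> JIRA import (an assumption, not a decided
implementation). Assumes form submissions arrive as dicts and that issues are created
via JIRA's standard REST endpoint POST /rest/api/2/issue."""
import requests

JIRA_URL = "https://issues.folio.org"   # FOLIO JIRA instance
PROJECT_KEY = "UXPROD"                  # assumption: project that holds features
ISSUE_TYPE = "New Feature"              # assumption: confirm the exact issue type name


def create_new_request(session: requests.Session, submission: dict) -> str:
    """Create one feature from a form submission and return the new issue key."""
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "issuetype": {"name": ISSUE_TYPE},
            "summary": submission["title"],            # descriptive title from the form
            "description": submission["description"],  # use cases plus "cc"ed POs
            "labels": ["new_request", submission["sig_label"]],
            "assignee": {"name": submission["po_username"]},  # PO = assignee
        }
    }
    resp = session.post(f"{JIRA_URL}/rest/api/2/issue", json=payload)
    resp.raise_for_status()
    return resp.json()["key"]


# Intended to run once a week (e.g. via cron) over all submissions
# collected from the Confluence forms since the previous run.
```

The script only sets the label and assignee so the new requests show up on the dashboards; the "new_request" status itself would be handled by the JIRA workflow.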

How to provide new requests is separate from the prioritization process, but has been discussed in the context of it.


Recommendation: new requests, as well as all other tickets in JIRA, should have a clear title and description → so that it is possible to understand what exactly the ticket is about → a precondition for being able to rank

DECIDED

What should SIGs rank on?
  • All features with status
    • open
    • draft
    • in refinement
    • in progress
  • not on NFRs (non-functional requirements)
  • not on Industry standards
    • Definition: A set of criteria within an industry relating to the standard functioning and carrying out of operations in their respective fields of production. In other words, it is the generally accepted requirements followed by the members of an industry.
  • The requests go through the SIGs; they decide whether something is an industry standard or something the SIG should rank on.
  • same as we ranked before; the task was to create a new process around existing tickets
  • new institutions should be able to rank on existing features retroactively, not only on new requests
  • prevent duplicates
    • prevent that institutions create duplicate new requests because they are not allowed to rank on existing open features

SIGs or the PO decide what industry standards are


DECIDED


SIG prioritization
  • Use ranking tool
    • TBD: which one? How?
  • frequency: regularly, up to PO or SIG convener
  • duration: 3 weeks to rank
  • after ranking process: calculate one total rank per feature (see the sketch after this list)
  • cross-app features are ranked by all related SIGs
  • Add one field per SIG
    • for the calculated total rank of each SIG's ranking
    • as part of each ticket
  • Import total rank into JIRA
  • in addition: make variance of ranking available to the community (through the ranking tool)
  • extensive communication
  • asynchronous prioritization via tool enables all community members to take part
    • no matter the time zone
    • no matter whether there are/can be representatives in the SIG meetings
    • no matter the language barrier
  • one tool across the project:
    • for consistency reasons
    • to ease voting if there are cross-app features
    • to make it easier for new members to get to know FOLIO tools
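How the single total rank per feature per SIG would be calculated is still open. A minimal sketch, assuming the R1-R5 levels are mapped to the numbers 1-5 and the total is a plain average, with the variance published alongside it as noted above:

```python
from statistics import mean, pvariance


def summarize_sig_ranking(rank_values: list[int]) -> dict:
    """One total rank per feature per SIG, plus the variance to share with the community.

    rank_values: numeric ranks (R1=1 ... R5=5) collected from one SIG's ranking tool.
    The mapping and the use of an average are assumptions, not decisions."""
    return {
        "total_rank": round(mean(rank_values), 2),
        "variance": round(pvariance(rank_values), 2),
        "responses": len(rank_values),
    }


# Example: five SIG members rank a feature R1, R2, R2, R3, R1
print(summarize_sig_ranking([1, 2, 2, 3, 1]))
# -> {'total_rank': 1.8, 'variance': 0.56, 'responses': 5}
```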

DECIDED

QUESTION

Institutional ranking
  • Use ranking tool - TBD: which one?
    • fields to prioritize should include
      • Priority,
      • Impact on work,
      • Urgency
  • frequency: twice a year
  • duration: 8 weeks to rank
  • after ranking process: calculate one total rank per feature
  • Use a combination of fields for the calculated total rank (see the sketch after this list):
    • Priority,
    • Impact on work,
    • Urgency
  • Keep one field for the calculated total institutional rank as part of each ticket
  • Import total rank into JIRA
  • in addition: make variance of ranking available to the community (through the ranking tool)
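As with the SIG ranking, the exact calculation is not decided. A minimal sketch, assuming the three fields (Priority, Impact on work, Urgency) are collected on the same numeric scale and combined with equal weight into the single total institutional rank that gets imported into JIRA:

```python
def institutional_score(priority: int, impact_on_work: int, urgency: int) -> float:
    """Combine one institution's three fields into one score (equal weights are an assumption)."""
    return (priority + impact_on_work + urgency) / 3


def total_institutional_rank(submissions: list[tuple[int, int, int]]) -> float:
    """The single calculated total rank per feature that is kept in one JIRA field."""
    scores = [institutional_score(p, i, u) for p, i, u in submissions]
    return round(sum(scores) / len(scores), 2)


# Example: three institutions rate a feature on a 1-5 scale per field
print(total_institutional_rank([(1, 2, 1), (3, 3, 2), (2, 1, 1)]))  # -> 1.78
```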

can be a different tool than the one for the SIGs as it fulfills different needs

announce ranking period ahead of time

Check with POs which tool is easiest for them and how the process can be as simple as possible for them


DECIDED

QUESTION

Weighting for ranking institutions

  • Implement differentiated weighting for ranking institutions
    • for implementation status
    • for consortia vs. single institution
  • TBD: How? What will the weighting look like?
  • Ideally this would be calculated automatically
  • The weighting needs to be adjusted over time
    • list needs to be maintained and updated over time
  • Do we need that weighting?

ranking only gives direction; it does not determine the outcome

we can review after a year whether we need a weighting or not - but for the moment we leave it out


REJECTED



Ranking values
  • will be using 5 ranking levels
  • similar to R1 to R5, but clearer in wording for comprehensibility
  • there is a limit to how many R1s an institution can use
    • one R1 for every app
  • 2 types of ranking:
    • do we need it ā†’ yes/no
    • how long can we wait = use R1 to R4
  • R1 - R5
    • R5 should not mean "0" or "not needed"
  • institutions should not rank features that they do not need
  • calculating total or average

check with POs

DECIDED

QUESTION

Surveys
  • once a year
  • free text




Chat


Action items
