2022-05-11 Meeting notes

Date

11 May 2022

Attendees

Jana Freytag, Martina Tumulla, Marie Widigson, Martina Schildt, Michelle Futornick (joined later)

Zoom link: https://openlibraryfoundation.zoom.us/j/88593295877?pwd=Zm53aG1Ga2g5SVV1OFAvK0lMVVVQdz09

Calendar invite: https://openlibraryfoundation.zoom.us/meeting/tZwofuqqpz4iHdMaS6vffyjDAlO5x1_KkNTf/ics?icsToken=98tyKuGgqzIpGN2QuB6ARpw-GYr4b-rxmCVHgqdwnSyzFSZVewnSF-5tZ6ouL_Pb


Discussion items

To Dos

Follow-ups

  • Next meetings:
    • 20th April: office hour demo of the tools ✓
    • 4th May: cancelled due to vacations ✓
    • 11th May: meeting to get our presentation for the PC ready
    • 19th May: presentation of the gathered feedback, updated proposal, and tools to the PC
Next Steps and Feedback

PC Meeting Notes: 2022-03-31 Product Council Meeting notes

Next steps as set in the PC meeting:
  • The group is seeking additional feedback (including comments on the slides). The task group will prepare an additional presentation on tools and a refinement of the process. Feedback is due in three weeks, with the group coming back to Product Council a week after.

Feedback

From Kristin Martin in Slack

Hi Martina, I've put some feedback on the prioritization into the slideshow, but I have a few other comments I'd like to share with the group:
Requests from the community coming through Jira: I like that community members can submit requests directly, but I also worry about cluttering up Jira with ill-formed or unsuitable types of requests (e.g., too broad, duplicates, etc.). It might be worth recommending that people take ideas to the SIG first (or to the appropriate convener) to discuss even before a Jira is created. Otherwise, we may need to designate someone to review new requests that come in. When I worked on the OLE project, where libraries could add new Jira tickets at will (enhancements or bugs), there was a full-time quality-control person who triaged them.

I also like using a tool for institutional voting (I can't comment on the suitability of Airtable/Miro/etc., since I don't have experience with them). It would be interesting to note not only a total score/ranking for institutional votes, but also the variance. Some issues may really split libraries, while others may be middling all around. Both could end up with the same ranking, but the criticality to select institutions could be quite different. We saw this when an institution rated a feature as R1 in Jira. Maybe knowing a count of institutions that say "yes, absolutely, this is a must-have" would help.

For weighting votes: I think institutions that are already live should get a boost. They are already using the system, and probably have a better sense of what they need than an institution not already live on the system. For consortia, maybe they should also get some additional weight over an individual institution. I also think an institution that is under 1 year out from implementation should get more weight. I'm not as worried about member vs. non-member: I think active institutions will have a chance to speak and debate in the SIGs, so their voices will get amplified.
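
To make the variance and weighting points concrete, here is a minimal sketch in Python (hypothetical issue keys, a 1-5 vote scale, and illustrative multipliers; none of these values are decided):

    from statistics import mean, pvariance

    # Hypothetical institutional votes on a 1-5 scale for two features.
    votes = {
        "UXPROD-100": [3, 3, 3, 3, 3],  # middling all around
        "UXPROD-200": [5, 5, 1, 1, 3],  # splits libraries sharply
    }

    for issue, scores in votes.items():
        must_have = sum(1 for s in scores if s == 5)  # "yes, absolutely" count
        # Both features have mean 3.0; variance is 0.0 vs. 3.2.
        print(issue, mean(scores), round(pvariance(scores), 2), must_have)

    # Illustrative vote weighting (multipliers are placeholders, not decisions):
    def weighted_vote(score, live=False, consortium=False, near_go_live=False):
        weight = 1.0
        if live:            # institution already live on the system
            weight *= 2.0
        if consortium:      # consortium voting as one unit
            weight *= 1.5
        if near_go_live:    # under 1 year out from implementation
            weight *= 1.5
        return score * weight

A report along these lines would surface both the total and the split, so a mid-ranked feature that is a must-have for a subset of institutions stays visible.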

Open questions to clarify before PC meeting on May 19th
  • need rankable chunks (epics vs. features) - how?
  • cross-app features: all related SIGs vote - decision?
  • for NFRs: need only institutional rankings? - decision?
  • who enters institutional rankings from the separate tool into JIRA?
    • one responsible person (TBD) → because JIRA is the one place where POs can find all rankings and relevant information
    • would need several rank fields: rankings may change over time, or a new institution may add a ranking in the separate tool; this needs to be reflected in JIRA as well
  • possible solutions:
    • only SIGs have spontaneous rankings that can be changed regularly
    • institutions rank once a year
    • keep institutional ranking out of JIRA and have a link from JIRA to the relevant ranking tool
    • how did this work with the Excel sheet for pointing? Maybe a specific file can be imported into JIRA on a regular basis (see the sketch after this list)?
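
As a sketch of the regular-import idea, assuming the ranking tool can export a CSV (the file names, column names, and the "Institutional Rank" custom field are assumptions, not confirmed JIRA configuration): JIRA's CSV importer can update existing issues when a column is mapped to the issue key, so a small script could reshape the export into an importable file.

    import csv

    # Hypothetical file and column names; the "Institutional Rank" custom
    # field is an assumption about the JIRA configuration.
    with open("rankings_export.csv", newline="") as src, \
            open("jira_import.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        # Mapping the "Issue Key" column lets JIRA's CSV importer update
        # existing issues instead of creating new ones.
        writer.writerow(["Issue Key", "Institutional Rank"])
        for row in reader:
            writer.writerow([row["issue_key"], row["rank"]])

Run on a schedule, this would keep the JIRA rank fields in sync with the separate ranking tool without manual re-entry.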
Tools
  • UserVoice may be useful for institutional rankings
  • Airtable and SeaTable are better suited for SIG ranking
  • EBSCO uses SurveyMonkey - maybe ask about reusing it
  • maybe decide after the PC decision on the new process - if we are not going with the new process, we do not need to decide on tools

Chat


Action items

  • add feedback and update the revised slides to present to the PC on 19th May