Date

...

Time | Item | Who | Notes

5 min | Announcements

FOLIO Open Community Meeting Friday, October 15, 10 AM ET — Let Kristin Martin know if you have input or ideas for the 10-minute Product Council report during the community update.

BugFest: more testers needed! So far only 44% of cases have been assigned.

10 min | PC vacancy: Philip Schreur to step down; proposal to appoint Karen Newbery to open seat | Kristin Martin

The charter leaves this up to individual groups. When the TC had this situation, they approached the next-highest vote recipient. In the case of the PC, this is Karen Newbery. Karen accepted the nomination, and the voting PC members unanimously approved the appointment.

20 min | What is the acceptable migration time frame for a release (continued from 2021-09-30)?

Desired Outcomes: discuss questions and plan a framework:

  • What is our Definition of Done and how can migration time frame fit into this?
  • What happens if a release fails our definition? Will there be exceptions?

In rolling out Juniper, sites found that it took a long time to run the migration scripts. What parameters do we want to set for adding a maximum time for a migration script to the developers' Definition of Done? What happens if a module's release fails this definition?

What is the acceptable amount of downtime for a library when migrating? For Chicago, it depends on the size of the transition; they would schedule extensive downtime during breaks.

Part of the information we take away from BugFest should be the amount of time it takes to migrate. We shouldn't be surprised by how long a production migration takes. When there are new or changed fields, what should be part of the migration, and what could be done as batch processes after migration?

During the upgrade, there are a lot of moving parts and it takes significant coordination of people (e.g. for testing).

The TC is going through a process of defining how it makes decisions, and is concerned about the number of technical decisions bubbling up to the TC. There is no single project-wide Definition of Done.

Recommendation: a migration should take no more than 3 hours for a library with 10 million instances. Inventory and SRS are the areas where churn in the data model at each upgrade has driven the length of migration times.
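Mike Gorrell's chat suggestion of a standard record set and hardware profile can also serve as relative info for planning. A minimal sketch, under the (strongly simplifying) assumption that migration time scales linearly with instance count against the 3-hour / 10-million-instance target; the function name and the linear model are illustrative, not an agreed standard:

```python
def estimate_migration_hours(instances: int,
                             baseline_instances: int = 10_000_000,
                             baseline_hours: float = 3.0) -> float:
    """Scale the baseline migration target linearly by collection size.

    Linear scaling is a deliberate simplification: real migration time also
    depends on the hardware profile and on which record types changed.
    """
    return baseline_hours * instances / baseline_instances


# A collection half the size of the standard would be expected to take
# roughly half the time.
half_size = estimate_migration_hours(5_000_000)   # 1.5 hours
full_size = estimate_migration_hours(10_000_000)  # 3.0 hours
```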

Contemporary internet systems use background data migration and feature flags to handle functionality upgrades: after a software upgrade, a feature flag turns off behavior that relies on new or modified data until a background migration script runs. When the data is ready, the feature flag turns on the new functionality. The development team must remember to remove the old code and data in a future release. (As a consequence, this asks additional work of development teams.) This technique is already being used as part of the optimistic locking implementation. PC and TC could jointly decide to ask Product Owners to adopt this approach.
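The pattern described above can be sketched in a few lines. This is a minimal illustration only; the flag name, record fields, and helpers are hypothetical, not actual FOLIO schema or APIs:

```python
class FeatureFlags:
    """Minimal in-memory flag store; a real system would persist flags per tenant."""

    def __init__(self):
        self._enabled = set()

    def enable(self, name: str) -> None:
        self._enabled.add(name)

    def is_enabled(self, name: str) -> bool:
        return name in self._enabled


def background_migration(records: list, flags: FeatureFlags) -> None:
    """Backfill the (hypothetical) new field after the software upgrade,
    then flip the flag so the new code path takes over."""
    for rec in records:
        rec.setdefault("effective_location", rec.get("permanent_location"))
    flags.enable("use-effective-location")


def get_location(record: dict, flags: FeatureFlags):
    # Old code path keeps serving requests until the flag flips; the team
    # must remember to delete it (and the flag) in a future release.
    if flags.is_enabled("use-effective-location"):
        return record["effective_location"]
    return record.get("permanent_location")
```

The key point is that the software upgrade itself deploys both code paths quickly, while the slow data work happens in the background without downtime.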

  • Peter Murray to write a proposal for feature flags for discussion at PC and TC, including trying to quantify an acceptable amount of downtime for a migration.

We did not cover the need for a FOLIO-wide Definition of Done, nor did we identify which stages cause the most migration time.

40 min | Identifying priorities for development for FOLIO | Kristin Martin

Background:

Desired Outcomes:

  • Raising awareness and having a discussion; we do not anticipate a firm decision
  • Determining a distinct charge and problem statement for a group to investigate, then composing and charging a group to work on this

The pointing exercise by the Capacity Planning team worked well; does it scale to a larger number of libraries?  In January, Cap-Planning is going to export the existing rankings into a spreadsheet for historical needs and delete those fields from Jira.

The pointing exercise was based on everything that had an R1 or R2 by libraries; without the rankings, what would be chosen for the pointing exercise?  Every library would be able to rank every feature in the future (rather than a few libraries in Jira now).  Perhaps the scope of features for the pointing exercise would be decided by the SIGs.  The Acquisitions SIG has tried this.

The pointing exercise left out a lot of what is invisible to users, such as things that are important to SysOps. The technical underpinnings need to be tended to as well. There is also a decreasing amount of development that is apportioned by the community. Maybe this is part of the roadmap process to set the broad scope for particular releases.

There are products and systems that are designed to help manage customer/user feature requests, including voting and setting product roadmaps.

The spreadsheet had two difficulties: needing to click over to Jira to read each description, and multiple people working in the spreadsheet at the same time. Is it the role of Jira to hold these features to be developed? Is this where libraries should be looking to provide input and to track whether a feature has been delivered?

How can we make use of the SIGs to help the POs with ranking and other activities of selecting feature development?  In the past, we used to prioritize Epics—a level higher than the features.  The product owner has owned the backlog and priorities of feature development.  In discussion with the SIG, the PO can set those priorities.

10 min | Agenda idea generation

Kristin Martin has been updating the Product Council wiki space with the changes that we have made to the product council processes, including identifying old/outdated pages.

Anya to report back on the Kiwi build.

Peter Murray to work on the feature flag proposal.

...

09:32:11 From Brooks Travis to Everyone:
 I’m planning to join with video, but my computer is not cooperating. I’m on my phone for now.
09:38:04 From Peter Murray to Everyone:
 Yes.
09:38:04 From Kristin Martin to Everyone:
 YES
09:38:06 From Martina Schildt to Everyone:
 Yes
09:38:07 From Martina Tumulla to Everyone:
 yes
09:38:08 From Anya to Everyone:
 yes
09:38:08 From Travis, Brooks L to Everyone:
 Yes
09:38:10 From Gang Zhou to Everyone:
 Yes
09:38:10 From Owen Stephens to Everyone:
 yes
09:42:57 From Kirstin Kemner-Heek to Everyone:
 No, sorry
09:48:39 From Harry Kaplanian to Everyone:
 I think a good outcome of today would be for the PC to let the TC know what an acceptable upgrade/migration time is.  The TC can then update the definition of done
09:52:19 From Harry Kaplanian to Everyone:
 +1 Owen
09:56:07 From Kirstin Kemner-Heek to Everyone:
 From a service provider's view (for more than 200 libraries): we heavily rely on the release management based on test machines. Each tenant has a test machine, which is a 1:1 copy of the production machine (please replace “machine” now as “tenant environment on the platform”). There, a release upgrade is tested and implemented, until it is “production ready”. All changes of a release upgrade are afterwards copied / implemented on the production machine which are not critical for the service. That leads to release related downtimes from “nothing” to max. 2 hours at weekends / after office hours. So, that is our goal, we try to meet with FOLIO as well. I assume that no more than 2-3 hours of downtime during is acceptable.
09:56:46 From Kirstin Kemner-Heek to Everyone:
 during office hours :-) sorry.
09:56:58 From Gang Zhou to Everyone:
 1)Migration times may vary depending on the size of the library.
 
  2)Academic libraries have summer and winter breaks, while public libraries may have no holidays.
09:57:00 From Owen Stephens to Everyone:
 Kirstin are you saying we should aim for not more than 3 hours of downtime in office hours?
09:57:15 From Kirstin Kemner-Heek to Everyone:
 Yes
09:57:35 From Owen Stephens to Everyone:
 For what size though Harry?
09:58:14 From Tod Olson to Everyone:
 Exactly, scale is a factor.
09:58:30 From Mike Gorrell to Everyone:
 IMO we need to set a goal and have that goal be based on an agreed standard. So the goal might be to migrate in under 2 hours for a collection size of 8M Instances/Items

It’s the Dev Teams’ job to engineer things to meet that goal.

If the goal cannot be met in a given circumstance then that will be a reason for discussion prior to release.
09:58:38 From Owen Stephens to Everyone:
 10 million instances?
09:58:41 From Owen Stephens to Everyone:
 Or items?
10:00:06 From Owen Stephens to Everyone:
 Do we have a list of other scales for 10 million instances? i.e. how many items, users, e-resources, … etc.
10:00:15 From Owen Stephens to Everyone:
 Because otherwise we’re just talking about inventory migration times!
10:00:27 From Harry Kaplanian to Everyone:
 +1 Owen
10:00:57 From Owen Stephens to Everyone:
 Or perhaps that’s the only real issue!
10:00:58 From Owen Stephens to Everyone:
 :)
10:01:06 From Charlotte Whitt to Everyone:
 Shanghai Public Library has 50 million items
10:01:08 From Charlotte Whitt to Everyone:
 https://wiki.folio.org/display/COHORT2019/Implementer+Statistics
10:01:17 From Gang Zhou to Everyone:
 Is it possible to divide this into small, medium, and large scales to discuss separately?
 
  10M instances is large scale.
10:02:00 From Gang Zhou to Everyone:
 Around 40M items in SHL
10:02:27 From Mike Gorrell to Everyone:
 Our “Standard” size for migration should include all record types - so # loans, # POL, # users, etc, etc. would all be defined.
10:03:18 From Mike Gorrell to Everyone:
 ALSO it needs to be documented what the hardware profile is being used for the migration. 10M records on a 286 PC will take a lot longer than a super computer.
10:03:55 From Harry Kaplanian to Everyone:
 +1 Mike
10:05:27 From Peter Murray to Everyone:
 +1 to the previous discussion of delaying any data migration to after a system's software has been updated (and, as a consequence, the use of "feature flags" to turn on features in the code _after_ data has been migrated).
10:05:47 From Karen Newbery to Everyone:
 "overnight" upgrades are difficult when we support a library that is in China.
10:06:34 From James Fuller (he/him) to Everyone:
 Do the changes have to be so large? Could the size of change be reduced?
10:09:23 From Harry Kaplanian to Everyone:
 +1 Peter
10:11:10 From Owen Stephens to Everyone:
 Do we have information on how long the kiwi upgrade took for the bugfest environment
10:12:11 From Axel Dörrer to Everyone:
 On the migration hardware profile Mike mentioned. I assume a migration hardware profile should not differ too much from the production hardware profile, or should it?
10:12:52 From Karen Newbery to Everyone:
 I agree, Axel.
10:14:50 From Charlotte Whitt to Everyone:
 It would be great to have the Bugfest migration statistics to split up each migration process, and specific data for load on inventory (instance, holdings, item) and SRS (MARC bibs, MARC holdings) - and not just in one big bucket
10:15:17 From Anya to Everyone:
 +1 Charlotte
10:15:22 From Mike Gorrell to Everyone:
 The point of having a standard record set and hardware profile will allow us to set targets and to assess deviations from them. Also helpful as relative info (i.e. my collection is 1/2 the size of the standard so we might expect 50% of the migration time).
10:15:57 From Gang Zhou to Everyone:
 +1 Mike
10:19:18 From Owen Stephens to Everyone:
 I’m a bit unclear what we are suggesting be added to the definition of done. Would that be part of this proposal?
10:21:11 From Owen Stephens to Everyone:
 Can we get a breakdown of what takes time in a migration from anywhere?
10:21:32 From Tod Olson to Everyone:
 Also, there is a tension with the project between enforcing a particular way to do something within FOLIO and the relative autonomy of the dev teams, as well as overhead in onboarding new devs. The more requirements the more overhead. This tension is part of the tech decision conversation we had in TC yesterday.
10:23:02 From Kirstin Kemner-Heek to Everyone:
 Can we have that chat recorded in the minutes somewhere? It's really good.
10:23:22 From Peter Murray to Everyone:
 I can add it to the bottom of the minutes.
10:23:34 From Kirstin Kemner-Heek to Everyone:
 +1 Peter. TY
10:24:08 From Tod Olson to Everyone:
 Those DoDs, I believe, are largely consistent, but some details may vary between teams. That may have to do with how the teams work, or details of the modules they tend to work on.
10:25:08 From Owen Stephens to Everyone:
 1000%
10:25:15 From Owen Stephens to Everyone:
 Or 100% !
10:26:20 From Tod Olson to Everyone:
 @Anya: Zero My Hero: from Schoolhouse Rock?
10:26:42 From Anya to Everyone:
 Yes !
10:27:55 From Owen Stephens to Everyone:
 Never heard of it! But YouTube is there for me https://youtu.be/ADY0D-GmO7U
10:31:30 From Anya to Everyone:
 What there - with requirements ?
10:32:22 From Owen Stephens to Everyone:
 There are products that are meant to help manage customer/user feature requests, voting and product roadmaps
10:33:02 From Peter Murray to Everyone:
 Owen—yes, I was wondering about the same thing.
10:40:26 From Martina Schildt to Everyone:
 +1 to dashboard
10:40:42 From Travis, Brooks L to Everyone:
 Labels?
10:40:44 From Owen Stephens to Everyone:
 labels
10:40:57 From Owen Stephens to Everyone:
 Worse than the current list of libraries!
10:41:31 From Owen Stephens to Everyone:
 SIG based ranking is appealing but some things don’t work at the SIG level
10:41:41 From Ian Walls to Everyone:
 I really think we need to have a separate tool for proposing, ranking and assigning features. Something that can help turn expressed 'ranking' into funding for a dev team to do the work
10:41:52 From Peter Murray to Everyone:
 A label for each UXPROD issue that assigns it to a SIG would be useful for creating reports that the SIG could react to.
10:41:58 From Travis, Brooks L to Everyone:
 +1 Ian
10:43:18 From Harry Kaplanian to Everyone:
 +1 Ian
10:43:37 From Tod Olson to Everyone:
 We might be able to scale better at _this_ point in the project by looking at pain points. Perhaps prioritizing pain points is a more tractable discussion, and the particular features/bugs/NFRs falls out of that context.
10:44:28 From Travis, Brooks L to Everyone:
 I think that the Themes work that the Roadmap team is doing might help in that direction, Tod.
10:44:44 From Peter Murray to Everyone:
 Libraries can put bounties on features?  Hmmm...
10:44:56 From Harry Kaplanian to Everyone:
 +1 Peter
10:45:37 From Ian Walls to Everyone:
 I'm positive there is a tool out there that would let individual libraries agree to contribute $X towards a feature, and for dev teams to bid to do it for $Y
10:46:13 From Owen Stephens to Everyone:
 I think those questions of money are really interesting - I both like, and worry about, the idea of bounties on features
10:47:05 From Peter Murray to Everyone:
 Right—would we need a percentage overhead on the bounties to support non-functional development?
10:47:09 From Travis, Brooks L to Everyone:
 That’s the direction I was going. Thanks, Kirstin!
10:47:40 From Jana Freytag to Everyone:
 As a convener, that sounds very good to me!
10:48:16 From Travis, Brooks L to Everyone:
 We have the PO rank field in Jira. Perhaps we could use that or add a SIG rank field?
10:49:08 From Ian Walls to Everyone:
 NFRs could be separate features to bid upon; they'd likely only be funded by self-hosted libraries or vendors. But individual libraries could throw in a small contribution as funds are available
10:49:26 From Martina Schildt to Everyone:
 +1 to Kirstin
10:54:50 From Tod Olson to Everyone:
 I may be able to pitch in.
10:55:00 From Jana Freytag to Everyone:
 @Kristin, please write me down too
10:56:03 From Peter Murray to Everyone:
 I added you in the minutes Jana.
10:56:52 From Peter Murray to Everyone:
 You, too, Tod.