2025-06-25 Evaluating Existing Modules

Date

Jun 25, 2025

Attendees 

  • @Craig McNally

  • @Kevin Day

  • @Tod Olson

  • @Marc Johnson

  • @Julian Ladisch

  • @Ingolf Kuss

  • @Owen Stephens

  • @Jason Root

  • @Maccabee Levine

  • @Matt Weaver

Discussion items

Time

Item

Who

Notes

1 min

Scribe

All

@Jakub Skoczen is next, followed by @Joshua Greben

Reminder:  Please copy/paste the Zoom chat into the notes.  If you miss it, this is saved along with the meeting recording, but having it here has benefits.

*

Evaluating existing modules

All

Not just modules but existing “code”… shared libraries, plugins, etc. too

Background:

This document provides the criteria against which a module will be assessed for inclusion into a FOLIO release, following the New Module Technical Evaluation Process.

Existing FOLIO modules ideally conform to the same Values and Criteria. It is understood that not all existing modules currently do so, especially modules created before the Values & Criteria were initially defined. The Technical Council will work with development teams to align the current reality of the code with the Values and Criteria over time, as practicable. Such processes will need to be developed and documented.

  • TC approved that round of process improvements on 2024-01-31. From the notes:

  • general scope expanded from "just new modules" to also "existing modules"

    • some criteria might not extend to existing modules

Background/Context:  The TCR / new module technical review process was created to evaluate new modules.  It has long been discussed that we should also consider reviewing modules already part of the official FOLIO releases (both those which predate these processes, and possibly those which were evaluated but not reviewed recently).

Goal:  Review the existing criteria and see which, if any, need to be removed, adjusted, or added for evaluation of existing modules.  The idea is that this will feed into a larger RFC which covers not just the criteria, but also the process, timing, who's responsible for what, grace periods, etc.

Notes:

  • aim to avoid simply joining the two processes with an "and", to prevent any confusion with the new module evaluation process.

  • focused on the evaluation criteria and discussed potential changes that could be made

  • it was suggested that this work might feed into a larger RFC, but that it should be kept separate from the new module evaluation process.

  • proceeded to go through the list of criteria one by one, noting potential adjustments or additions for each one when evaluating existing modules. The rationale behind this decision was to avoid any overlaps or confusion between the two processes, ensuring they can be understood and applied independently.

    • include removing certain criteria or adjusting them to account for the specifics of existing modules

  • touched upon the potential consequences of the existing-module assessment and how it will feed into an RFC covering the overall process

  • specific language/frameworks may not apply for existing modules but could still be relevant for new ones.

  • Suggestions for grandfathering certain modules: creating exemptions based on a module's date of creation and allowing it to bypass certain criteria.

  • Agreed that the 'module must use approved technologies' criterion should be kept, but recognized this could be seen as a moving target.

  • plans to discuss further adjustments to the criteria at the next meeting. Agreed to go through the list of criteria individually, determining potential changes for each as it applies to existing modules

  • @Craig McNally created a branch (during the meeting above) with our notes on changes to the criteria. I just now (2025-06-25) created a PR so we can view the diff easily if needed.

 

Notes

 

Maccabee gives a brief overview…

Discussion:

  • Craig suggests we just give it a try and then define the next steps.

  • Tod asks if we should bring in POs initially; many agree.

  • Marc asks what counts as an existing module? Suggests we start with the newest un-evaluated modules, Craig agrees as do others.

  • Craig points out the TC has a bandwidth issue; self-evaluation should probably be off the table to start with, though later on it will be a win for distributing the work.

  • Kevin suggests adding additional language about changes over time vs the initial eval, "practice of standard".

  • Marc suggests we don't have a sense of the process yet; evals are point-in-time by nature.

  • Craig's interpretation was that some sort of change log would make the eval easier - Kevin agrees these are both aspects of the process.

  • Craig suggests we do need a good way to track this, and to provide some positive feedback with these evals (“carrot and stick” aspects).

  • Marc thinks we need to be careful, the criteria were originally chosen because they were "easy" - not necessarily because they improved quality.

  • Kevin wants some feedback about the value you're getting out of this process - Craig's opinion is that we want to close the loop on some of these evals - Identify tech debt.

  • Marc suggests caution is needed, might ruffle some feathers because of large gaps.

  • Maccabee says we nearly piloted this with Serials and it went well with the dev team.

  • Craig asks about next steps, phasing this, and picking a module that has a risk of failing.

  • Marc suggests to just start picking, using SonarCloud as a way to filter down.

Criteria summary: two modules (one front end, one back end), from different teams and organizations; newer modules but predating the eval process; ones likely to fail; and something we can all walk away from with a positive result.
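Marc's suggestion to use SonarCloud to filter down the candidate list could be sketched as a small script. The endpoint names mentioned in the comments follow the public SonarQube/SonarCloud Web API, but the metric thresholds, module keys, and numbers below are purely illustrative assumptions, not TC policy:

```python
# Hedged sketch: shortlist modules for evaluation based on SonarCloud
# metrics. In practice the metric snapshots would be fetched from the
# SonarCloud Web API, e.g.:
#   GET https://sonarcloud.io/api/measures/component
#         ?component=<key>&metricKeys=coverage,code_smells
# The thresholds and sample data here are made up for illustration.

def pick_candidates(measures, max_coverage=50.0, min_code_smells=100):
    """Return module keys whose metrics suggest likely tech debt."""
    return sorted(
        key for key, m in measures.items()
        if m.get("coverage", 0.0) < max_coverage
        or m.get("code_smells", 0) >= min_code_smells
    )

# Illustrative (hypothetical) metric snapshots per module key.
sample = {
    "folio-org:mod-alpha": {"coverage": 82.0, "code_smells": 12},
    "folio-org:mod-beta":  {"coverage": 34.5, "code_smells": 210},
    "folio-org:ui-gamma":  {"coverage": 47.0, "code_smells": 35},
}

print(pick_candidates(sample))
# ['folio-org:mod-beta', 'folio-org:ui-gamma']
```

The thresholds would need tuning against real project data; the point is only that a mechanical first pass can narrow the list before the TC spends review time.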


Zoom Chat



Florian Gleixner (Jun 25, 2025, 10:15 AM): mod-configuration is one example ....

Maccabee Levine (Jun 25, 2025, 10:20 AM): I like the idea of doing the Jira work ourselves, unless the team tells us that they prefer to. We should probably just label the issues somehow to reference a TC eval.

You (Jun 25, 2025, 10:45 AM): @Maccabee Levine you are breaking up.

Maccabee Levine (Jun 25, 2025, 10:45 AM): Sorry about audio. Yup that was the point

Day, Kevin (Jun 25, 2025, 10:48 AM): At a later phase, we would need to test out non-module projects, such as a library, to see how the process goes in those kinds of situations.

Owen Stephens (Jun 25, 2025, 10:48 AM): As PO for Serials Management, I’d say that there is a benefit to being “kept honest” for a module that has gone through the approval process. The challenge with older modules (I think acknowledged by the previous discussion about `grandfathering`) is that the challenge could be well beyond the available resource

Marc Johnson (Jun 25, 2025, 10:51 AM): It’s also worth keeping in mind that the team currently responsible for a module may not be the team that did much of the work

Owen Stephens (Jun 25, 2025, 10:54 AM): I would suggest picking multiple modules so a single team doesn’t feel singled out 🙂

Owen Stephens (Jun 25, 2025, 10:56 AM): Comms wise - talking to the PO meeting (this timeslot every other Wed) would be worth doing a presentation at explaining what you are doing and why POs keep an eye on this kind of thing (which is why I’m at this meeting!) so I’d say better to get ahead of it. But just a suggestion

Julian Ladisch (Jun 25, 2025, 11:02 AM): sonar's overview: https://sonarcloud.io/organizations/folio-org/projects semgrep's license policy check: https://semgrep.dev/orgs/semgrep_folio_org/supply-chain/dependencies

Owen Stephens (Jun 25, 2025, 11:02 AM): Apologies I have to drop off. Thanks all