TCR Process Improvements - Fall 2023

  • Slack: #folio-tc-tcr-process
  • Meeting Time: Every week on Monday, 4pm - 5pm ET
  • Zoom Meeting ID: 938 2112 2373

Agenda and Minutes

2024-02-26

Attendees: Maccabee Levine, Jenn Colt

  • Maccabee Levine to ask Alexi about getting into the dev schedule template.
    • Ask TC to approve the message, then post in those channels.
    • And add it to a TC calendar.  How do we do calendars now, post-cloud?
    • Maccabee Levine to ask Alexi about being in the timeline.  DM with Alexi, Khalilah, Jenn, me, Tod, Craig.
    • Then Slack:
  • Update to scope section of New Module Technical Evaluations.
    • (After the PR is merged.)
    • Jenn Colt will do.  No need to check in
  • Update list of statuses and flow-chart on Jira Workflow page.
    • Also have to add the statuses to Jira itself and update workflow.
    • Can generate the flow-chart once we have the statuses added and workflow transitions added.  (Jenn confirmed.)
    • Maccabee Levine will ask who manages this (not this week because cloud work is nutso)
    • Jenn: owner of the Jira project can edit those statuses.  Jenn Colt will ask Peter to either do it or give her access to do it.

2024-02-19

Attendees: Maccabee Levine, Jenn Colt, Tod Olson

  • Merge the two cleanup PRs.
  • Communication plan for encouraging RFCs, before the TCR gets going.
    • Add it to the top of the module review template.  Maccabee Levine to look at.
    • Recurring reminder around the start of each release cycle.  Get buy-in from Alexi, Khalilah.  Motivation for RFC process, what we are trying to avoid ("Because architectural changes can be very disruptive... Unintended consequences.  If you're thinking about something that affects other modules...").
    • RFC is a kind of advanced planning for the following release.
    • Tod drafted: https://docs.google.com/document/d/1qXvm5-NpHVAjZvPYFYhf6CPMtJ7Gipf5zMrJPUIAwCM/edit#heading=h.33rqf726a4dg
    • We'll incorporate a few more details on "architectural changes", maybe bullet points, during the week.  Jenn Colt taking next cut.
    • Jenn Colt for next week (2/5).
      • Will have it for the 19th.
    • We reviewed Jenn's edits.  All good.
    • Maccabee Levine to ask Alexi about getting into the dev schedule template.
      • Ask TC to approve the message, then post in those channels.
      • And add it to a TC calendar.  How do we do calendars now, post-cloud?
  • Are there things we can adopt from our ongoing RFC process improvements that would help the TCR process as well?
    • For RFCs we listed all the complicated stuff on GitHub but moved it to the wiki.  Here everything is in GitHub.  Maybe align the docs.
    • Jenn Colt will consider this, but if bandwidth keeps us from getting to it, it should not block closing the group
    • Consensus that we should wait for formalization processes before trying this.
  • Share these to-dos with TC, to deal with after the subgroup's work is done.

2024-02-05

Attendees: Maccabee Levine, Tod Olson

  • Update to scope section of New Module Technical Evaluations.
  • Rename old TCR files to include the module name.
  • Re-order Acceptance Criteria document to put Frontend & Backend below Shared/common and Administrative.
    • After the initial PR is merged.
    • Consensus on this order: Administrative, Shared/Common, Frontend, Backend.
    • Maccabee Levine to do a separate PR with that change.
  • Update list of statuses and flow-chart on Jira Workflow page.
    • Also have to add the statuses to Jira itself and update workflow.
    • Can generate the flow-chart once we have the statuses added and workflow transitions added.  (Jenn confirmed.)
    • Maccabee Levine will ask who manages this (not this week because cloud work is nutso)

2024-01-29

Attendees: Maccabee Levine, Jenn Colt

  • Communication plan for encouraging RFCs, before the TCR gets going.
    • Add it to the top of the module review template.  Maccabee Levine to look at.
    • Recurring reminder around the start of each release cycle.  Get buy-in from Alexi, Khalilah.  Motivation for RFC process, what we are trying to avoid ("Because architectural changes can be very disruptive... Unintended consequences.  If you're thinking about something that affects other modules...").
    • RFC is a kind of advanced planning for the following release.
    • Tod drafted: https://docs.google.com/document/d/1qXvm5-NpHVAjZvPYFYhf6CPMtJ7Gipf5zMrJPUIAwCM/edit#heading=h.33rqf726a4dg
    • We'll incorporate a few more details on "architectural changes", maybe bullet points, during the week.  Jenn Colt taking next cut.
    • Jenn Colt for next week (2/5).
  • Plan for Wednesday?
    • Maccabee Levine to remind people again to read the PR.
    • Read-through / overview first of each non-trivial change.
  • How do we get explicit "yes" / "no" / "don't care" from PC, so we don't keep having conversations at TC trying to interpret what PC has done so far, whether they have approved something, whether they even think they need to approve it, etc.
    • We get caught between slowing down our own process (against our deadlines), or moving ahead of PC's own approval which we don't want.
    • Could be you have to get two approvals (TC and PC), doesn't matter what order it comes in.  So parallel?
    • Originally PC submitted the TCR, so approval was implied.
    • Some modules have no significant UI / functionality, like mod-batch-print.
    • Maccabee Levine reach out to PC chairs to figure out a parallel process.  Slack async.
    • Slack discussion with PC chairs.  Maccabee drafted an addition to our criteria, though it doesn't have a good fit with the values section.
    • Jenn Colt agreed on new criteria.  That way TC evaluation can continue without waiting for PC, but we will not approve before PC does.
  • TCR naming convention:

2024-01-22

Attendees: Maccabee Levine, Jenn Colt, Tod Olson

  • Communication plan for encouraging RFCs, before the TCR gets going.
    • Add it to the top of the module review template.  Maccabee Levine to look at.
    • Recurring reminder around the start of each release cycle.  Get buy-in from Alexi, Khalilah.  Motivation for RFC process, what we are trying to avoid ("Because architectural changes can be very disruptive... Unintended consequences.  If you're thinking about something that affects other modules...").
    • RFC is a kind of advanced planning for the following release.
    • Tod Olson will draft something (after holidays).
    • To discuss on 1/22.
    • https://docs.google.com/document/d/1qXvm5-NpHVAjZvPYFYhf6CPMtJ7Gipf5zMrJPUIAwCM/edit#heading=h.33rqf726a4dg
    • We'll incorporate a few more details on "architectural changes", maybe bullet points, during the week.  Jenn Colt taking next cut.
  • Assigning out TCR evaluations: right now Craig asks for volunteers, and sometimes no one volunteers, or the same people volunteer each time.  Are there ways to improve that, and if so should we?
    • Notes below from 1/8
    • Would have hit a wall last week if not for Jeremy volunteering for a whole lot of them.
    • Maccabee Levine to ask Craig & Jeremy for a Wednesday discussion.  When would it make sense to take it on?
      • Schedule it out a little bit, maybe publicize and get non-TC input.  Others who may be interested in becoming a reviewer.  Or maybe that is later.
  • How do we get explicit "yes" / "no" / "don't care" from PC, so we don't keep having conversations at TC trying to interpret what PC has done so far, whether they have approved something, whether they even think they need to approve it, etc.
    • We get caught between slowing down our own process (against our deadlines), or moving ahead of PC's own approval which we don't want.
    • Could be you have to get two approvals (TC and PC), doesn't matter what order it comes in.  So parallel?
    • Originally PC submitted the TCR, so approval was implied.
    • Some modules have no significant UI / functionality, like mod-batch-print.
    • Maccabee Levine reach out to PC chairs to figure out a parallel process.  Slack async.
  • Consider open threads on the PR.
    • We reviewed the open threads, and they appear to be ready for TC review on 1/31.

2024-01-08

Attendees: Maccabee Levine, Tod Olson, Jenn Colt

Agenda:

  • Review existing comments on the PR.
    • Maccabee Levine to do cleanup, copying from the template to the eval and vice versa.
    • Then ask for async feedback to start.  Much of what we did was non-controversial.  And ask for a Wednesday meeting at the end of the month.
  • Assigning out TCR evaluations: right now Craig asks for volunteers, and sometimes no one volunteers, or the same people volunteer each time.  Are there ways to improve that, and if so should we?
    • Training may be helpful.  Give folks more confidence that they would do it right.  We all have other jobs; not all have the expertise of the full-time devs.
    • Shadowing before taking the lead, and asking others how to approach particular aspects.
    • Zak does a lot of reviews, but he is not on TC.  There is little motivation for non-TC members to do reviews.  But some sort of body/structure to give form to the reviewers, so they get acknowledgement/credit for doing the reviews, would help; they would not have to get involved in the rest of TC's work.
    • Round-robin to suggest a person to lead the next review?  Helps TC members to have hands-on experience doing one.
    • Add to list of things TC members should be doing – time expectation.

2023-12-11

Attendees: Maccabee Levine, Tod Olson, Jenn Colt

Agenda:

  • Matt Weaver: It's not clear if/when an already accepted module needs to go through the TCR process. E.g., if we decide to change the architecture of FQM to use APIs exclusively, would that require a TCR? It seems like substantial changes like that probably should require review, but I don't know if that's documented anywhere. This also seems relevant for stuff like RTR, where the question was raised in the TC of whether it was even something that needed to be voted on. I actually wanted to ask about this one at WOLFcon in the new module session, but we ran out of time. It seems like a bit of a hole in the module evaluation framework.
    • In TC Slack, discussed that perhaps our subgroup should just articulate the goals for an existing-module review at this point, and let the formalization process fill in the details.  https://folio-project.slack.com/archives/CAQ7L02PP/p1701726842015989
    • Do values and criteria apply equally to existing modules?  (Whether existing modules created before or after these criteria)
      • Yes, but we need an evaluation process, hard to define values/criteria without that.
      • But it would be uncontroversial to apply the criteria retroactively. 
      • But when is there opportunity to do those evaluations?
        • Some combination of the length of time since the last review, or since the module was created?  Or if some substantial amount of work is going on?  As we handle technical debt, if we had a budget for that.
        • What would be the result if there were deficits after a review?  Not going to remove a module.  If this is low-stakes, maybe we can encourage it safely.  Similar to how test case coverage has increased recently on legacy modules.
    • Should existing module evals start at the PC level?  If a big feature change?  PC approves a process very early, never gets to weigh in again.
    • Now that we have RFC, OST, etc., it's ok to say where we want to be (values/criteria) before we outline how we get there.  Which would have to be flexible anyhow.
    • Addressed in PR.
  • Communication plan for encouraging RFCs, before the TCR gets going.
    • Add it to the top of the module review template.  Maccabee Levine to look at.
    • Recurring reminder around the start of each release cycle.  Get buy-in from Alexi, Khalilah.  Motivation for RFC process, what we are trying to avoid ("Because architectural changes can be very disruptive... Unintended consequences.  If you're thinking about something that affects other modules...").
    • RFC is a kind of advanced planning for the following release.
    • Tod Olson will draft something (after holidays).
  • For Poppy, ui-plugins are out of scope for evaluation.  Starting with Quesnelia, they will be in scope.
    • Addressed already in this PR.  "Note: Frontend criteria apply to both modules and shared libraries."
  • For Poppy, shared libraries are out of scope for evaluation.  This decision will be revisited after the Poppy deadline has passed.
  • For Poppy, edge modules are in scope for evaluation.  The existing acceptance criteria will be applied, but will likely be adjusted after the Poppy deadline has passed.
    • So, just a note that existing criteria are applied?  Which sections of criteria?  Or do we need to adjust now?  Vijay originally offered to draft this but had to back out.
    • No sense of what would have been special about these requirements.  Are some criteria not meaningful for edge modules?
    • *** Just note in the criteria & template that edge modules already count the same as other backend modules (same criteria).  Separately, we now have a process (in this PR) if there is an actual problem: "If the TC determines that some failed criteria would be resolved by non-controversial changes to the criteria themselves (or referenced requirements like the Officially Supported Technologies), TC may decide to accept the module and make the agreed-upon changes."
    • Added to PR.
  • Next meeting on 1/8.

2023-12-04

Attendees: Maccabee Levine, Tod Olson, Jenn Colt

Agenda:

  • Related to exceptions: clarification around how strict the evaluation criteria are and who is empowered to make an exception. E.g., if a criterion is failed, is the evaluator obligated to say that the evaluation failed or can they still recommend TCR acceptance? In effect, it seems like evaluators are empowered to recommend acceptance despite failed criteria, but I'm not sure that this is actually documented anywhere.
    • Related: I noticed a recurring theme in our 4 TCR evaluations, where technologies outside the supported tech list were used and the evaluators were okay with that. It happened enough that it might be worthwhile to update the criteria to address this.
    • There has been back-and-forth about having fully objective criteria vs. not delaying / easy enough to repair.
    • The TC can decide as it wants to decide, including exceptions.  The evaluator's opinion can be given verbally in the TC meeting but doesn't need to be part of the evaluation document.
      • or opinion can be async in slack
    • Addressed in PR.
  • Process around updates that happen mid-evaluation
    • The rough process we used was basically this, which seemed to work well (note: this wasn't ever really formalized or followed exactly, but it's pretty much what ended up happening in every case):
      • The evaluators point something out either in a comment or some other channel like slack
      • We fix it and let the evaluator know
      • If the issue is major enough to meaningfully impact the evaluation:
        • If the evaluator is okay with changing the commit being evaluated, we update the TCR ticket with the updated hash in the ticket description. Also, document this change with a comment on the ticket, explaining what is different between the old commit and the new one. The end result is that the evaluation formally includes the changes, as if they were there from the beginning.
          • Example: fixing dependencies in ui-lists to be compatible with poppy
          • Example: downgrading away from a SNAPSHOT dependency in edge-fqm due to a breaking change
        • If the differences between the two commits are too much to change mid-eval, don't change the description, but document the issue and fix in a comment. The evaluator should still take this into account, but the context is fundamentally different, so it shouldn't have as much weight as it would if it was in the commit under review. It's worth mentioning in the evaluation, but whether it turns a failed criterion into a passed one is up to the evaluator.
          • Example: adding a test to edge-fqm that increased the test coverage by ~25% after the evaluation was basically done. It would have been unfair to the evaluator to move the goalpost at that time.
      • If the issue isn't major enough to meaningfully impact the evaluation, the evaluator is free to handle it however they want. In these cases, it's also reasonable to create a ticket for the issue and treat it like any other bug (i.e., prioritize it and fix it later).
    • We have since taken care of the second situation (a change to happen after the evaluation) through the provisional acceptance suggestion as part of this PR.
    • The first scenario (changing the hash) is adequately described in the existing process.
  • Clarification around communication - I added some comments to some of the TCR evaluation PRs, addressing what I believe to be inaccuracies or to add relevant information. I have no idea if this is an appropriate place for that or not. I think I saw something somewhere saying to use jira for public communication with the evaluator like that, but the PR feels like a more appropriate place.
    • Related (this might just be because of the tight timeline in our TCRs): it would have been nice to have a little more time to go over the evaluations (maybe even while they are in-progress) to have a chance to address any results directly prior to the TC discussion/vote. In some cases, failed criteria may be a result of simple misunderstandings; in those cases, addressing the issues during the TC discussion would largely just be a waste of time (or worse, it could be confusing, since the evaluator could end up disagreeing with their own written evaluation during the meeting). I don't think not really having a defined opportunity to respond to the eval was ever actually a big problem in the FQM/Lists TCRs, but it seems like it could be beneficial to the process.
    • The subgroup thinks this process for answering questions during the review works ok as-is.  Desire not to require more process if not needed.
    • The submitter is welcome to address issues in the PR comments if needed, but we don't need to add anything to the process for that.
  • Building feedback into the process for TCR process improvements is a great idea, but I think it's in the wrong place right now (from the submitter standpoint). We did not include any TCR process feedback in our self evaluations because we hadn't gone through most of the process yet; feedback from TCR submitters can really only happen at the end of the process, not the beginning. Hence this wall of text now.
    • Added a section to the process for post-eval feedback from the submitter(s).

2023-11-20

Attendees: Maccabee Levine, Jenn Colt, Tod Olson, Craig McNally

Agenda:

  • Apache 2 License compatibility; Lars Iking's offer.
    • Jenn Colt offered to continue coordinating with Lars, with whatever direction this group prefers.
    • Lars is working on the whitelist/blacklist and will send it to Jenn.  Good for now; at least we would be able to give reviewers that list.
    • Could be problems with unknown licenses, but consensus to start with a known pair of lists and see how it goes.
  • "Exceptions process" approach(es), continued from last week 
    • Clarification around exceptions and any process for that
      • The idea of exceptions has come up a lot of times in the FQM/Lists TCRs, so a formal stance on exceptions should probably be more clearly defined (the only thing we saw prior to submitting the TCRs is that the TC is free to make a decision that doesn't align with the evaluator's results)
        • Possible that we should leave exceptions generally possible, to be flexible.
      • Big exceptions (fundamental architectural stuff) require a lot of discussion and a TCR is not an ideal place for them. But they can happen. I feel like it's important to make room for these exceptions, but it felt like there was **strong** resistance to any big exceptions at all, which is a bit concerning (I'm of the opinion that we shouldn't need to change the process or criteria to make an exception, but that's me). Further discussion and documentation around this would probably be very useful for everyone and help clarify things quite a bit
        • The place for big exceptions is an RFC.  And we have no lever for requiring an RFC in the future.  People are busy, things change.
      • Small ones (sonarqube violations, etc) seem a lot easier. Personally, I feel that the evaluators should be free to make these exceptions, as long as they document their reasoning in their evaluation. The TC is free to accept or reject that reasoning. The existing documentation seems to support this, but it'd be useful if it was more explicit.
        • Note: this came up in the mod-lists TCR, where it seemed like the TC rejected the idea that evaluators have any real say, which was a little worrying. I feel like an evaluator should be empowered to say that a module passed its evaluation despite failed criteria, as long as they are transparent about it.
      • Related to small exceptions: The module descriptor criterion may be overly strict in some cases. If the MD is invalid, then it can't be released anyway. It's also a trivial issue to fix. So even if the MD is completely missing, there's not really much downside to accepting a module despite not having a valid MD. Maybe be less strict in this criterion or remove it entirely?
      • Options: 1) Double down on being strict.  2) Keep flexibility and change nothing; TC still has final say, and it is simple & flexible, but not transparent.  3) Formalize the exceptions process.  Work is needed regardless of the option chosen, some options stickier than others.  Can also stick with option 2 but increase the communication aspect, esp. about the related RFC process.
      • ** Group will brainstorm over next week, come back and try to come up with an approach to #2 or #3 for discussion next week.
    • Addressed the above in the PR.

2023-11-13

Attendees: Maccabee Levine, Tod Olson, Craig McNally, Jenn Colt

Agenda:

  • Incorporate OST process changes into criteria. 
    • Craig McNally summarizing a discussion at Wed 11/8 TC meeting: "Relax restrictions about OST list as it pertains to first-party frameworks / technologies, because the deadline for accepting new modules into the release happens before we even have feature freeze on those versions."
      • *** Tag each PR mention for whether we have to adjust it for this.
  • Continue working through Matt Weaver's feedback.
    • There was some confusion around PC approval and whether or not the PC's approval of the Lists app includes FQM. It'd be nice if this was clarified a bit, if possible. I'm not sure what that would look like...
      • If we had application formalization, we'd have a way to link the two.  But we are not there yet.  And unclear if new functionality would be part of a new application; and how PC would want to review it (or not).
      • PC approval of Lists app would never have considered backend issues like FQM.
      • There is a gap from what PC is looking at and what TC looks at.  Exact transition is "mushy".  Hopefully application stuff will help that.
      • Consensus to let this one go until/unless application formalization makes the question simpler to understand.
    • Related: The submitter criteria (member of the PC or a PC-appointed delegate) seems overly strict and unnecessary. If the goal is to ensure PC approval happens first, then we already have that by just saying that the PC has to approve it first.
      • History: at the time, PC did not have a process yet, so we asked for PC to be an input into our process.  It has never worked out this way in a dozen-plus TCRs.  Should change the language to be clearer on the process / interaction with PC.  PC has just been leaving a comment in the TCR saying they approved the relevant functionality.
      • Addressed in PR.
  • Clarification around exceptions and any process for that
    • The idea of exceptions has come up a lot of times in the FQM/Lists TCRs, so a formal stance on exceptions should probably be more clearly defined (the only thing we saw prior to submitting the TCRs is that the TC is free to make a decision that doesn't align with the evaluator's results)
      • Possible that we should leave exceptions generally possible, to be flexible.
    • Big exceptions (fundamental architectural stuff) require a lot of discussion and a TCR is not an ideal place for them. But they can happen. I feel like it's important to make room for these exceptions, but it felt like there was **strong** resistance to any big exceptions at all, which is a bit concerning (I'm of the opinion that we shouldn't need to change the process or criteria to make an exception, but that's me). Further discussion and documentation around this would probably be very useful for everyone and help clarify things quite a bit
      • The place for big exceptions is an RFC.  And we have no lever for requiring an RFC in the future.  People are busy, things change.
    • Small ones (sonarqube violations, etc) seem a lot easier. Personally, I feel that the evaluators should be free to make these exceptions, as long as they document their reasoning in their evaluation. The TC is free to accept or reject that reasoning. The existing documentation seems to support this, but it'd be useful if it was more explicit.
      • Note: this came up in the mod-lists TCR, where it seemed like the TC rejected the idea that evaluators have any real say, which was a little worrying. I feel like an evaluator should be empowered to say that a module passed its evaluation despite failed criteria, as long as they are transparent about it.
    • Related to small exceptions: The module descriptor criterion may be overly strict in some cases. If the MD is invalid, then it can't be released anyway. It's also a trivial issue to fix. So even if the MD is completely missing, there's not really much downside to accepting a module despite not having a valid MD. Maybe be less strict in this criterion or remove it entirely?
    • Options: 1) Double down on being strict.  2) Keep flexibility and change nothing; TC still has final say, and it is simple & flexible, but not transparent.  3) Formalize the exceptions process.  Work is needed regardless of the option chosen, some options stickier than others.  Can also stick with option 2 but increase the communication aspect, esp. about the related RFC process.
    • ** Group will brainstorm over next week, come back and try to come up with an approach to #2 or #3 for discussion next week.
  • The discussions between me and the evaluators have been super useful! There were a lot of small things that the evaluators identified that we were able to fix, and we really appreciate that we were allowed to make a few small changes after the evaluation began. I feel like this should be actively encouraged, so some documentation around this process would be great!
    • How do we iterate, and how much process for that?  Initially, after the eval you get approval or rejection, and you resubmit if there was a failure of any kind.  Things haven't worked out that way; there is resubmission during the eval process, with a new commit hash submitted.  There also might have been a negative connotation with 'failure', people upset; an informal conversation with the submitter has a better impact.
    • Addressed in the PR.

2023-11-06

Attendees: Maccabee Levine, Jenn Colt, Craig McNally

Agenda:

  • Incorporate TC's feedback over the last week on the suggestions listed in recent TCRs under "TCR Process Improvements" into the PR.  I posted various threads in #tech-council, I believe Jenn reached out on some other things.  There was also discussion on things already changed in the PR.
    • mod-fqm-manager:

      • The criterion: "Module descriptor MUST include interface requirements for all consumed APIs" could be improved to address implicit module-to-module dependencies such as found in this module.
      • Added this to the PR.
    • mod-lists:

    • edge-courses:

      • The module evaluation criteria should be modified to address edge modules explicitly.
        • Probably looking for something simpler for the edge modules.  
      • Matt Weaver: Edge modules are a little different (they tend to be very simple, have no storage, deal with permissions differently, different requirements for module descriptors, etc), so it might be worth handling them a little differently from other backend modules
      • Flag whoever submitted edge-courses (Radhakrishnan Gopalakrishnan) and edge-fqm (Matt Weaver).  What should we be checking for on those modules?
      • Backend shared libraries?  If something is just not applicable, can we say that?  So evaluators are consistent with which criteria they ignore for shared libraries.  Zero experience, none submitted.  Post in TC.
      • FE shared libraries?  Post in TC.
    • ui-lists:

      • We should have recommended tools for evaluating license compliance and not ask evaluators to assess license compliance
        • Jenn: As far as licensing, this is one of those places where I do think we should ask CC to be active.  Licensing issues are a business risk/threat, not a technical one, imo, and are most relevant for attracting new contributors and hosting providers.  Jenn Colt will review the tools and go from there.  Maybe a recommendation to give.  May run it by developers and CC.
        • Maybe use a "software bill of materials"?  GitHub generates one automatically for BE modules, and NPM has something built-in as well.  (A rough tooling sketch follows at the end of these notes.)
        • Still need an opinion from CC.  Whether or not the tools we use give an opinion or not.
        • Related issue is when dependencies & licenses change after a module exists.
        • Jenn Colt will formulate a question for CC.
      • This "Use the latest release of Stripes at the time of evaluation" criterion is problematic; we want to evaluate modules against the collection of versions they are _going_ to be a part of if they are accepted rather than the versions that were part of a previous release. On other hand, just as we ask for a specific commit from submitters in order to avoid the moving target of the main branch, it would be unfair to expect submitters to reference our moving targets. The officially approved technologies page may provide some guidance here, but then we also have to make sure it is accurate and up to date.
  • Start working through Matt Weaver's feedback.
    • Clarification around deadlines
      • We misunderstood the 3 week window for TCRs as starting at the submission date, rather than with the assignment of an evaluator. As a result, we rushed to get the TCRs submitted before the wrong deadline and accidentally put extra burden on the TC. I'm not sure if we really could have submitted much earlier, but we may have prioritized work differently to try and submit earlier if we hadn't misunderstood the deadlines.
      • https://github.com/folio-org/tech-council/blob/master/NEW_MODULE_TECH_EVAL.MD - "A maximum duration (3 weeks) from the submission date for the initial review."
        • This should be reworded to remove any ambiguity
    • Added a straw man to the PR.
    • SNAPSHOTs are tricky, as they inherently create a "moving target" that we try to avoid by specifying a specific commit to review, so they should obviously be discouraged, but sometimes they are necessary. Since this came up as a very real problem in the edge-fqm TCR, it may be worth documenting a process or stance or something.
    • Added to PR.
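  • For reference only (not something the group decided): a rough sketch of the "software bill of materials" idea noted under ui-lists above.  The npm command assumes a recent npm (10 or later) and an npm-generated lockfile, and the GitHub call uses folio-org/ui-lists purely as an example repository; both would need verifying before recommending them to reviewers or CC.

        # SBOM from npm's built-in generator (assumes npm 10+ and an npm lockfile; run in the module root after installing dependencies)
        npm sbom --sbom-format spdx > sbom-spdx.json

        # SBOM from GitHub's dependency graph API (example repository; requires the gh CLI and read access)
        gh api repos/folio-org/ui-lists/dependency-graph/sbom > github-sbom.json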

2023-10-30

Attendees: Maccabee Levine, Jenn Colt, Tod Olson

Subgroup scope / goals

  • Consider the items listed in recent TCRs under "TCR Process Improvements".
  • Consider other feedback provided by recent TCR submitters.
  • Consider process issues raised at TC meetings (hopefully in the minutes) during our recent TCR discussions.  This would include "meta-process" issues such as communication around the TCR process, timing issues, interaction with the RFC process, etc.

mod-fqm-manager:

  • The criterion: "Module descriptor MUST include interface requirements for all consumed APIs" could be improved to address implicit module-to-module dependencies such as found in this module.
    • Ask Jeremy Huff / Matt Weaver for clarification.  Why did he fail this criterion in mod-fqm-manager?  What change might be made to this criterion?  Can this be separated from the 'shared database' issue, which is a separate criterion?  Is there something else we're trying to capture as a dependency that is not so obvious?  What is a reasonable way to document that?  It might not be the module descriptor.
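  • For context only (not part of any decision): one way a reviewer might list what a module descriptor currently declares, to compare against the APIs the code actually consumes.  The descriptor path below is the common FOLIO convention and may differ per module; this is a hedged sketch, not an agreed check.

        # List declared interface dependencies (id and version) from the module descriptor template (path is the usual convention; adjust as needed)
        jq -r '.requires[]? | "\(.id) \(.version)"' descriptors/ModuleDescriptor-template.json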

mod-lists:

  • Consider adding Lombok to the Officially Supported Technologies list since it is used extensively throughout the project, despite NOT having an Apache 2.0 license.  This could help evaluators in the future.
    • Already addressed!
  • Consider adjusting the sonarqube rule about the number of levels of inheritance allowed (currently 5).
  • Consider adding MinIO to the Officially Supported Technologies list, and approving the decision here: https://folio-org.atlassian.net/wiki/pages/viewpage.action?pageId=5055749
    • Already addressed!
  • It would be helpful to have the module name, possibly other metadata listed at the top of this form
    • Added to PR.
  • Consider adding criteria about the naming of interfaces, referencing http://ssbp-devdoc-prod.s3-website.us-east-1.amazonaws.com/guidelines/naming-conventions/#interfaces.  The guidance linked does read more like a suggestion than a hard guideline.  Should we also consider rewording so it's more of a requirement than a suggestion?
    • Tod Olson / Craig McNally: Looks like you did this TCR evaluation.  What specific criteria would you add?  Zak Burke and Maccabee Levine have talked about the naming of modules re: future-proofing and understanding what they mean, i.e. mod-entities-links and such, but the specific guidelines at that wiki link are more about the syntax of interface names, not the substance of them.

edge-courses:

  • The module evaluation criteria should be modified to address edge modules explicitly.
    • Probably looking for something simpler for the edge modules.  Maccabee Levine to look through TC notes for specific opinions on this.

ui-lists:

  • TypeScript is a superset of JavaScript. TC should make an explicit statement about whether it is permitted in FOLIO modules.
    • Already addressed!
  • We should have recommended tools for evaluating license compliance and not ask evaluators to assess license compliance 
    • also mentioned in ui-service-interactions:
      • I am increasingly strongly uncomfortable evaluating license compatibility. I suggest we change the line "Third party dependencies use an Apache 2.0 compatible license" to "Includes a report of the licenses used by third-party dependencies", and we can delegate evaluation of that list to a person/body with appropriate credentials for this kind of thing. IOW, IANAL and I really, really don't want to be responsible for making definitive statements about license compatibility. Example tools in NPM-land (a usage sketch follows after this list):
        • npx apache2-license-checker
        • license-checker
    • Jenn: As far as licensing, this is one of those places where I do think we should ask CC to be active.  Licensing issues are a business risk/threat, not a technical one, imo, and are most relevant for attracting new contributors and hosting providers.  Jenn Colt will review the tools and go from there.  Maybe a recommendation to give.  May run it by developers and CC.
  • This "Use the latest release of Stripes at the time of evaluation" criterion is problematic; we want to evaluate modules against the collection of versions they are _going_ to be a part of if they are accepted rather than the versions that were part of a previous release. On other hand, just as we ask for a specific commit from submitters in order to avoid the moving target of the main branch, it would be unfair to expect submitters to reference our moving targets. The officially approved technologies page may provide some guidance here, but then we also have to make sure it is accurate and up to date.
    • Added to PR
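  • A minimal usage sketch of the NPM-land tools named in the license-compliance item above (illustrative only; the allow-list shown is an example, not a statement of which licenses FOLIO accepts):

        # Run from the module root after dependencies are installed
        npx apache2-license-checker

        # license-checker: summary report, then fail on anything outside an example allow-list
        npx license-checker --production --summary
        npx license-checker --production --onlyAllow "Apache-2.0;MIT;BSD-2-Clause;BSD-3-Clause;ISC"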

ui-service-interactions:

  • Do we/should we have a Source Of Record for application documentation? Is it OK to link to the wiki or should apps point to https://docs.folio.org/?
    • Process for updating docs.folio.org would slow down changes.  And future app store modules might be documented elsewhere.  It seems odd to dictate where documentation has to live.  Downside of linking to random pages is you might have broken links if the destination changes – but you can always resolve that on the destination, with edits or redirects.
    • Added to PR.