Existing Module Technical Evaluations

Overview

The Existing Module Evaluation is an initiative by the FOLIO Technical Council (TC) to extend their technical review process beyond just new modules to include modules that are already part of official FOLIO releases. This addresses a gap where many modules were created before technical standards were formally defined, or need to be reviewed against current standards.

Background and Motivation

The FOLIO Technical Council has had a well-established Technical Council Review (TCR) process for evaluating new modules since its adoption. However, many existing modules in FOLIO:

  • Predate the creation of evaluation criteria and standards

  • Were created before values and best practices were formally defined

  • Have never undergone formal technical review

  • May have accumulated technical debt over time

  • May not conform to current technical standards

In fall 2023, during process improvements, the TC formally acknowledged that "existing FOLIO modules ideally conform to the same Values and Criteria" as new modules, and committed to "work with development teams to align the current reality of the code with the Values and Criteria over time, as practicable."

Key Differences from New Module Evaluation

The existing module evaluation process differs from new module evaluation in several important ways:

Aspect          New Module Evaluation          Existing Module Evaluation
Initiation      Teams contact TC               TC contacts teams
Participation   Required for inclusion         Best effort
Consequences    Acceptance/rejection           No threat of removal from releases, at this time
Focus           Gate-keeping                   Improvement and support
Reporting       Standardized process           Flexible (Jira, wiki, team preference)
Goal            Ensure standards compliance    Identify tech debt and work collaboratively

Process Approach

Key Principles

  1. Courteous outreach: Use "soft language" when approaching teams

  2. Avoid tech-debt terminology: Focus on standards and improvement

  3. Make it optional: Emphasize that participation is voluntary

  4. Professional courtesy: Expect teams to address issues discovered, but without enforcement mechanisms

  5. Collect feedback: Include an explicit step to gather team feedback after the evaluation

  6. Living process: The process will evolve based on pilot experiences

Process Steps

The evaluation process follows these general steps:

  1. Module Selection: TC identifies candidate modules using selection criteria

  2. Reviewer Assignment: TC co-chairs assign reviewers for each module

  3. Team Outreach: Reviewer contacts the responsible team to explain the process and request participation

  4. Evaluation: Reviewer conducts technical evaluation using the criteria

  5. Results Discussion: Reviewer and team discuss findings

  6. Issue Documentation: Issues are documented in the team's preferred format (Jira, wiki, etc.)

  7. Feedback Collection: Team provides feedback on the evaluation process

  8. Follow-up: TC tracks progress on addressing identified issues (with professional courtesy, no enforcement)

Tools and Resources

  • Evaluation criteria document: Available here.

  • Module evaluator tool: Developed by Olamide Kolawole - tc-module-eval

    • Only evaluates licensing for JavaScript and Java projects at the time of writing.

  • Tracking spreadsheet: Created by Maccabee Levine with module ages, teams, and release information. Spreadsheet
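To make the scope of the current tool concrete, here is a minimal sketch of the kind of licensing check it performs. This is an illustration only, not the actual tc-module-eval code: the function names and the `APPROVED_LICENSES` set are assumptions, and like the real tool it only looks at JavaScript (`package.json`) and Java (`pom.xml`) projects.

```python
import json
from pathlib import Path

# Hypothetical approved-license list; FOLIO modules are Apache-2.0 licensed.
APPROVED_LICENSES = {"Apache-2.0"}

def check_js_license(project_dir: str) -> bool:
    """Return True if the project's package.json declares an approved license."""
    manifest = Path(project_dir) / "package.json"
    if not manifest.is_file():
        return False
    data = json.loads(manifest.read_text())
    return data.get("license") in APPROVED_LICENSES

def check_java_license(project_dir: str) -> bool:
    """Return True if the project's pom.xml mentions an approved license."""
    pom = Path(project_dir) / "pom.xml"
    if not pom.is_file():
        return False
    text = pom.read_text()
    return any(lic in text for lic in APPROVED_LICENSES) or "Apache License" in text
```

A real check would parse the POM properly and inspect transitive dependency licenses; the sketch only shows why the language-specific manifests limit the tool's scope.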

Evaluation Criteria

The existing module evaluation uses similar (but potentially adjusted) criteria as the new module evaluation, including:

  • Use of officially approved technologies

  • Code quality and maintainability

  • Security practices

  • Testing coverage

  • Documentation

  • License compliance

  • Environment variable management (added November 2025)

Some criteria may be adjusted or grandfathered for existing modules depending on their age and circumstances. The full criteria are documented in the tech-council GitHub repository.
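One way to picture how per-criterion results might be recorded during a review is the small data model below. The criterion names come from the list above, but the structure itself is purely illustrative; the TC's actual reporting format is flexible (Jira, wiki, team preference).

```python
from dataclasses import dataclass, field

@dataclass
class CriterionResult:
    # One evaluation criterion and whether the module currently meets it.
    name: str
    passed: bool
    notes: str = ""

@dataclass
class ModuleEvaluation:
    # Collected results for a single module under review.
    module: str
    results: list = field(default_factory=list)

    def failures(self) -> list:
        """Names of criteria the module does not yet meet."""
        return [r.name for r in self.results if not r.passed]
```

Because existing modules may have some criteria grandfathered, a real record would likely also need a per-criterion "waived" state.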

Goals and Expected Outcomes

Primary Goals

  1. Establish a repeatable process for evaluating existing modules

  2. Identify technical debt in current FOLIO modules

  3. Work collaboratively with teams to improve code quality

  4. Ensure all modules meet current technical standards over time

  5. Provide positive feedback and support to development teams

Success Metrics

  • Teams feel supported rather than criticized

  • Issues identified are actionable and get addressed

  • Process is sustainable and not overly burdensome

  • Community accepts and values the evaluation process

  • Code quality across FOLIO improves over time

The Pilot Program

Timeline

  • Initial discussions: February 2024

  • Process development: June-July 2025

  • Pilot launch: July-August 2025

  • Status: Ongoing as of January 2026

Selected Pilot Modules

Two modules were selected for the initial pilot:

  1. mod-ebsconet (backend, Spitfire team)

    • Backend module for integration with EBSCOnet external systems

    • TCR-60

  2. ui-export-manager (frontend, Firebird team)

    • Frontend module for mod-data-export

    • TCR-63

These modules were selected because they:

  • Are newer but predate the formal evaluation process

  • Represent both frontend and backend code

  • Are likely to have some issues but also achievable improvements

Selection Criteria

When choosing modules for evaluation, the TC considers:

  • Age: Modules from roughly rows 220-270 of the release spreadsheet (neither too old nor too new)

  • Type balance: Both frontend and backend modules

  • Quality metrics: Using tools like SonarCloud and Semgrep to identify modules with room for improvement

  • Likelihood of success: Preferably modules where discovered problems can realistically be resolved
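The age-band part of the selection above could be sketched as a simple filter over a CSV export of the tracking spreadsheet. The column names (`module`, `type`) and the row-range heuristic are assumptions for illustration, not the actual spreadsheet schema.

```python
import csv
from io import StringIO

def candidate_modules(csv_text: str, start_row: int = 220, end_row: int = 270):
    """Yield (module name, module type) for rows in the target age band.

    Spreadsheet row numbers are used directly: row 1 is the header,
    so the first data row is row 2.
    """
    reader = csv.DictReader(StringIO(csv_text))
    for row_num, row in enumerate(reader, start=2):
        if start_row <= row_num <= end_row:
            yield row["module"], row["type"]
```

In practice the TC also balances frontend against backend modules and consults quality metrics (SonarCloud, Semgrep), which a filter like this would only pre-screen for.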

Future

The pilot is designed to:

  • Test and refine the evaluation process

  • Identify what works and what needs adjustment

  • Eventually expand to more existing modules

  • Consider evaluating shared libraries, plugins, and other code beyond modules

  • Potentially include self-evaluation options once the process matures

  • Run evaluations once per flower release cycle

Feedback on the pilot can be added to the Existing Module Evaluation Pilot Feedback page.

Related Pages and Resources

Key Meeting Notes

Related Processes

External Resources