NFR Scorecard practice

Overview

A non-functional requirement (NFR) is a requirement that specifies criteria used to judge the operation of a system, rather than specific behaviors. NFRs and their importance for the stability and high quality of the FOLIO platform are discussed quite regularly. At the same time, there is currently no practical mechanism for applying these requirements, tracking them, or verifying their implementation for functional features. The NFR Scorecard approach described here aims to close this gap.

The idea is to maintain a list of NFRs so that, when analyzing and accepting each feature, the team (PO + SM + TL + SA, i.e. Product Owner, Scrum Master, Tech Lead, and Solution Architect) checks the feature against the list and explicitly marks each requirement as compliant, non-compliant, or not applicable. The list should therefore be relatively small and manageable: the main idea is to simplify the review and focus only on key requirements, not to add complexity.

Important note: this scorecard is intended for development teams and covers only those requirements and quality attributes that are the responsibility of the development teams (not DevOps or hosting providers).

Scope: Components being released within FOLIO.

Owner(s) (owners approve additions, modifications, or removals of NFRs): Raman Auramau

Using the NFR Scorecard

The following steps are needed to start receiving benefits from the NFR Scorecard practice:

(NFR Scorecard workflow diagram)

As the workflow shows, the total expected duration of the activities associated with the NFR Scorecard process is 90-150 minutes per feature.

Expected Benefits

Although the described approach may seem like unnecessary overhead, it provides an excellent opportunity to check compliance with key non-functional requirements at an early stage. The team can detect problems during development, before the code reaches Bugfest and beyond, which means a faster feedback cycle and cheaper fixes.

Review and Modifications of the NFR Scorecard

Once per release, the NFRs should be reviewed by the Owner(s) for Applicability, Effectiveness, ROI and TCO, and Improvements. Changes and additions may be made as a result of this process.

FOLIO NFR Scorecard v0.4

Note: The NFR ID is a unique identifier for a specific non-functional requirement, used, for example, when detailing tasks for NFR Scorecard compliance or during discussions. Note that the uniqueness of an NFR ID is not automatically controlled or enforced; maintaining it is a recommended part of the process. Identifiers from the baseline NFR scorecard use the Baseline keyword, e.g., NFR.Baseline.Performance.1. For additional, feature-specific non-functional requirements, this keyword is replaced with a feature number or another value, e.g., NFR.3944.Performance.1 (for UXPROD-3944) or NFR.DI.Performance.1 (for Data Import).

LEGEND: Possible statuses

  • COMPLIANT: compliance checked and confirmed
  • NOT VERIFIED: compliance not checked
  • NON COMPLIANT: compliance checked, and non-compliance found
  • NOT APPLICABLE: compliance not required; requirement not applicable


The scorecard below lists, for each quality attribute, the NFR ID, the non-functional requirement, notes and comments, and the verification status (where one has been recorded).

Availability

NFR.Baseline.Availability.1: Modules are designed and implemented following the Stateless principle.
Notes: https://folio-org.atlassian.net/wiki/display/DD/Stateless%2C+Horizontal+scaling%2C+and+High+Availability
Status: COMPLIANT
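To make the Stateless principle concrete, below is a minimal Java sketch (all class and method names are hypothetical, not FOLIO APIs): per-request state arrives with the request and is persisted to shared storage, and the module keeps no mutable instance fields, so any instance behind a load balancer can serve any request.

    // Hypothetical sketch of a stateless module service.
    interface OrderRepository {
        String save(String tenantId, String orderJson); // persists to the shared DB, returns the new id
    }

    public class OrderService {
        private final OrderRepository repository; // immutable collaborator, no session state

        public OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        // Depends only on its inputs and on shared storage, so the request
        // can be handled by any running instance of the module.
        public String createOrder(String tenantId, String orderJson) {
            return repository.save(tenantId, orderJson);
        }

        // Anti-pattern (breaks statelessness): caching "the current order"
        // in an instance field would pin the client to this instance.
        // private String currentOrderJson;
    }

This is exactly the property that NFR.Baseline.Availability.2 then verifies empirically by load-testing at least 2 deployed instances.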

NFR.Baseline.Availability.2: Load/performance testing must be conducted for at least 2 deployed instances of a module.

Manageability

NFR.Baseline.Manageability.1: Application logs are collected in a unified form and location.
Notes: https://folio-org.atlassian.net/wiki/display/DD/Folio+logging+solution
https://folio-org.atlassian.net/wiki/display/DD/Logging

NFR.Baseline.Manageability.2: All custom configuration values are placed in the settings, not in the program code.
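As an illustration of NFR.Baseline.Manageability.2, here is a minimal sketch in the Spring style used by folio-spring-base (the property names and defaults are hypothetical): values are resolved from external settings rather than hard-coded literals.

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Component;

    // Hypothetical settings holder: every tunable value comes from
    // external configuration, never from a literal in business code.
    @Component
    public class ExportSettings {

        // Resolved from e.g. export.batch-size in application.yml or an
        // EXPORT_BATCH_SIZE environment variable; 100 is only a fallback default.
        @Value("${export.batch-size:100}")
        private int batchSize;

        @Value("${export.timeout-seconds:30}")
        private int timeoutSeconds;

        public int getBatchSize() { return batchSize; }
        public int getTimeoutSeconds() { return timeoutSeconds; }
    }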



Performance

NFR.Baseline.Performance.1: Components are performance tested and compared to the prior release baseline; performance may degrade by no more than 5%, and only in exceptional cases (a minimal sketch of this check follows the notes below).
Notes:
  • New functionality requires performance testing to establish a baseline.
  • Use perf-rancher for performance testing; save the jmx scripts to folio-perf-testing on GitHub.
  • PTF can be invoked to test aggregated flows, while performance testing of particular parts can be done by the dev team.
  • (question) Where to store perf testing reports?
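Below is a minimal sketch of the 5% baseline comparison (the numbers and the source of the baseline are hypothetical; in practice the baseline would come from the saved performance results of the prior release):

    // Hypothetical check: fails a run whose latency exceeds the prior
    // release baseline by more than the allowed 5% degradation.
    public class BaselineComparison {
        private static final double MAX_DEGRADATION = 0.05; // 5%

        public static boolean withinBaseline(double baselineMillis, double measuredMillis) {
            return measuredMillis <= baselineMillis * (1 + MAX_DEGRADATION);
        }

        public static void main(String[] args) {
            double baseline = 200.0; // ms, e.g. p95 latency of the prior release
            double measured = 214.0; // ms, p95 latency of the current build
            // 214 > 200 * 1.05 = 210, so this example run fails the check.
            System.out.println("within baseline: " + withinBaseline(baseline, measured));
        }
    }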


Security

NFR.Baseline.Security.1: Tenant data must be isolated from other tenants.
Notes: Kitfox: there is an option to create 2 tenants on the same Rancher environment, so QA needs to create appropriate test cases and run them.
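One widespread way to satisfy this requirement in FOLIO-style back ends is a separate database schema per tenant, resolved from the tenant id on every request. The sketch below (hypothetical helper with simplified naming rules) only illustrates the idea; the actual mechanism is provided by the platform libraries.

    // Hypothetical resolver: each tenant's data lives in its own schema,
    // so SQL issued for one tenant can never touch another tenant's tables.
    public class TenantSchemaResolver {

        // e.g. tenant "diku" + module "mod-orders" -> schema "diku_mod_orders"
        public static String schemaFor(String tenantId, String moduleName) {
            if (tenantId == null || !tenantId.matches("[a-z][a-z0-9]*")) {
                throw new IllegalArgumentException("invalid tenant id: " + tenantId);
            }
            return tenantId + "_" + moduleName.replace('-', '_');
        }

        public static void main(String[] args) {
            System.out.println(schemaFor("diku", "mod-orders")); // diku_mod_orders
        }
    }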


NFR.Baseline.Security.2: Secrets (such as usernames, passwords, API keys, and/or their combinations) are not stored in source repositories (i.e., GitHub).
Notes: To get started, a simple search of the GitHub repository is enough.


NFR.Baseline.Security.3: No sensitive information in logs (logins, passwords, API keys).
Notes: https://github.com/folio-org/personal-data-disclosure/blob/master/PERSONAL_DATA_DISCLOSURE.md#overview can be used as a source of information about PII/sensitive data.
See also https://folio-org.atlassian.net/wiki/display/DD/Logging#Logging-Sensitiveinformation
(https://github.com/folio-org/folio-spring-base/blob/master/src/main/java/org/folio/spring/utils/LoggingUtils.java is a placeholder with no implementation.)
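Since the LoggingUtils class linked above is still an empty placeholder, here is a minimal sketch of the kind of masking helper it could contain (hypothetical implementation and field list):

    import java.util.regex.Pattern;

    // Hypothetical helper: masks values of known sensitive JSON fields
    // before a message is written to the logs.
    public final class SensitiveDataMasker {

        private static final Pattern SENSITIVE = Pattern.compile(
            "(\"(?:login|username|password|apiKey|token)\"\\s*:\\s*\")[^\"]*(\")");

        private SensitiveDataMasker() { }

        public static String mask(String message) {
            return SENSITIVE.matcher(message).replaceAll("$1***$2");
        }

        public static void main(String[] args) {
            String payload = "{\"username\":\"jdoe\",\"password\":\"s3cret\"}";
            // Prints: {"username":"***","password":"***"}
            System.out.println(mask(payload));
        }
    }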


Testability

NFR.Baseline.Testability.1: Unit-test coverage for new code created or changed during the implementation of the feature is >= 80%.

NFR.Baseline.Testability.2: E2E-test coverage: the ratio of automated test cases in TestRail to all test cases for a particular feature.

NFR.Baseline.Testability.3: Karate-test coverage: the ratio of Karate tests to new endpoints created (or existing endpoints changed) within the feature scope.



Glossary

  • Availability defines the proportion of time that the system is functional and working. It is typically measured as the percentage of time the system is available over a predefined period. Availability is affected by system errors, infrastructure problems, malicious attacks, and system load.

  • Manageability defines how easy it is for system administrators to manage the application, usually through sufficient and useful instrumentation exposed for use in monitoring systems and for debugging and performance tuning.

  • Performance is an indication of the responsiveness of a system to execute any action within a given time interval. It can be measured in terms of latency or throughput. Latency is the time required to respond to any event. Throughput is the number of events that take place within a given amount of time.

  • Security is the capability of a system to prevent malicious or accidental actions outside of the designed usage and to prevent disclosure or loss of information. A secure system aims to protect assets and prevent unauthorized modification of information.

  • Testability is a measure of how easy it is to create test criteria for the system and its components and to execute these tests to determine if the criteria are met. Good testability makes it more likely that faults in a system can be isolated in a timely and effective manner.



Below are some NFRs that were discussed but not included in the initial version of the NFR Scorecard:





NFR.Performance.2: Data schema migration must be tested on a sufficient amount of production-like data and must fall within the allowed Release Window.
Notes: (question) https://folio-org.atlassian.net/browse/RANCHER-191 is not ready yet.

NFR.Reliability.1: SonarCloud quality gate conditions for new code are met.
Notes: SonarCloud Quality Gates.

NFR.Usability.1: FOLIO UI/UX guidelines are followed in UI modules.
Notes: FOLIO UX docs (UX prototypes, guidelines & assets; though it seems to be pretty outdated at this point); Stripes-Components Storybook.
(question) Does this include accessibility & internationalization? Do we want explicit rows for these?

NFR.Scalability.1: The ability of an application to scale horizontally (minimum 2 nodes) is tested and confirmed.

Pilot run in Poppy R2 2023

Feature: UXPROD-3944 NFR Scorecard
  Dev team: Thunderjet
  PO: Dennis Bridges
  SA: Raman Auramau
  SM: Mikita Siadych
  TL: Serhii Nosko

Feature: UXPROD-4006 NFR Scorecard
  Dev team: Firebird
  PO: Magda Zacharska
  SA: Taras Spashchenko
  SM: Mikita Siadych
  TL: Viachaslau Khandramai

Next steps

  • (tick) Review the FOLIO NFR Scorecard with stakeholders and other interested parties; agree on the wording and metrics,

  • (tick) Choose 2 features in the Poppy release to try the practice on (different teams, SAs, POs),

    • (tick) Poppy R2 2023 - NFR Compliance Review - create an NFR epic in the ARCH project - https://folio-org.atlassian.net/browse/ARCH-45

    • (tick) Collect feedback and adjust the NFR Scorecard if needed, agree on particular features for piloting

    • (tick) Pilot NFR Scorecard

    • (tick) Collect feedback and adjust the NFR Scorecard if needed

  • Introduce the NFR Scorecard as a recommended practice in the Quesnelia release

    • (question) Develop a tool for monitoring and analysis

  • Collect feedback after Quesnelia and adjust the NFR Scorecard if needed