Bienenvolk Jest Test Documentation

Work in Progress - This documentation is intended to be continuously updated and improved. If a section is not helpful, or something is missing, please suggest changes or reach out to the Bienenvolk team leader.

Overview


This documentation provides an overview of the Jest test structure used in the FOLIO ERM Frontend modules. It is based on the existing tests and aims to provide guidelines for improving Jest tests in the modules, ensuring they are robust, efficient, and effective.


The general aim for our Jest tests (we will use Jest and jest interchangeably below) is to be in line with the concepts outlined in the jest documentation. However, jest is very flexible, and to keep our tests consistent we have internal standards we’d like to maintain in our test suite.

Test Structure

  • Location: Tests are stored in files with a .test.js extension, typically located in the same directory as the component/module.

  • Content: Each test file contains organized describe blocks grouping related tests.

  • Tests: Within each describe block, there are one or more test blocks, which define individual tests.

Sample Test Structure

describe('ComponentName', () => {
  beforeEach(() => {
    // setup code
  });

  describe('Interaction scenarios', () => {
    test('should perform action A correctly', () => {
      // test code
    });
  });
});

Test Resources

Inconsistency in Test Resource Usage

Across jest tests there’s currently no unified approach to managing test resources. This leads to confusion, duplicated logic across test files, and tests that are difficult to update when backend structures change.

Testing resource types

Centralized Resource Files (Preferred)

The ideal pattern involves storing reusable resources in a centralized test/resources directory.

When the module’s API is altered, these changes can be implemented quickly by updating the test resources in a single location, allowing us to identify which tests fail as a result of the change.

Approach:

  • Move all test data into a central test/resources/ directory.

  • Use reusable, composable modules to construct complex test resources.

  • Separate internal and external resources.

  • Allow high-level resources (e.g., a Serial) to reference lower-level ones (e.g., refdata, status).
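
As a rough sketch of this approach (the file layout and field names here are hypothetical, not taken from the actual modules), a high-level resource might compose a lower-level one like this:

// test/resources/serial.js -- hypothetical layout and shapes
import { serialStatus } from './refdata'; // low-level refdata resource, defined once centrally

// The high-level Serial resource references the lower-level status resource,
// so a change to the backend shape only needs to be made in one place.
const serial = {
  id: 'serial-1',
  name: 'Test serial',
  serialStatus,
};

export default serial;

Tests then import serial from test/resources rather than redefining it locally, so a single change ripples out to every test that uses it.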

Component-Specific Resource Files (Aim to refactor to centralized resources)

Some tests have dedicated testresources files per component.

This leads to:

  • False positives (tests passing when they shouldn't)

  • Manual updates each time data shapes change

  • Tests that help code "pass" rather than catch bugs

Example test, testresources file

Inline Resources (Always avoid)

Other tests embed test data directly within test files. This has many of the same drawbacks as component-specific resource files, with the added drawback that the test file itself becomes harder to parse, read and adapt to changes.

Example

Recommended Approach

  • The long-term goal is to use a centralized structure for all test resources, allowing shared use across multiple tests and simplifying updates.

  • Resource Composition via Nesting: High-level resources can reference lower-level resources. For instance, a Serial test resource may reference a Serial Status resource. This allows modular and composable test data definitions.

Testing best practices

Naming

  • A lot of our tests are named very generically

    • Test names should be descriptive and indicate what is being tested. For example, `clicking the remove button invokes the callback with expected value`.

      • The expectation for this would be a describe('clicking the remove button') block with a beforeEach performing that action, and then test('invokes the callback with expected value', …); a rough sketch of this structure follows this list.

    • When a test suite is complete, the run-through of the test steps in the console output should read like a user’s test checklist, so that we can tell at a glance exactly what has been tested without reading the test itself.

    • This is especially important for .each blocks, because those can spin up cases that aren’t immediately obvious when reading the test file itself.
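
As a minimal sketch of this naming pattern (the component, prop and button label are invented for illustration, and @testing-library/react is assumed to be available):

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';

import RemovableItem from './RemovableItem'; // hypothetical component

const onRemove = jest.fn();

describe('RemovableItem', () => {
  beforeEach(() => {
    render(<RemovableItem id="item-1" onRemove={onRemove} />);
  });

  describe('clicking the remove button', () => {
    beforeEach(() => {
      fireEvent.click(screen.getByRole('button', { name: /remove/i }));
    });

    test('invokes the callback with expected value', () => {
      expect(onRemove).toHaveBeenCalledWith('item-1');
    });
  });
});

The console output then reads “RemovableItem › clicking the remove button › invokes the callback with expected value”, which is exactly the user-style checklist described above.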

Testing Outcomes

Test more than just renders

A lot of our tests technically meet the 80% coverage threshold without actually testing anything of value to us.

The rough idea is that anything that manifests as an outcome should be tested.

 

What is an outcome?

  • Does a prop change a label? Check that prop works as expected

  • Is there a fallback? Check that not applying the prop also works as expected, etc.

  • Does a certain action call a prop function? Check that the function is called with the expected shape when the action occurs (such as a button press etc).

    • If there’s complex behaviour, set up test cases and check that the outcomes are all as expected.

    • One such example may be that a child component is responsible for the prop function actually being called. In this setup we can either:

      • (preferred) Mock the child component and map the function call to a Button onClick to test that the expected behaviour in the parent occurs. Example (a rough sketch of this pattern also follows this list)

      • Not mock the child component, and instead directly interact with it in the parent’s test. This means, though, that if the child component changes, this test can break.
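
A rough sketch of the preferred option, assuming a hypothetical Parent component that passes an onSelect handler down to a ChildPicker child and calls its own onChange prop in response (none of these names come from the real modules):

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';

import Parent from './Parent'; // hypothetical component under test

// Mock the child and surface its onSelect prop through a plain button, so the
// parent's behaviour can be triggered without depending on the child's internals.
// React is required inside the factory because jest.mock factories cannot
// reference out-of-scope variables.
jest.mock('./ChildPicker', () => (props) => {
  const MockReact = require('react');
  return MockReact.createElement(
    'button',
    { type: 'button', onClick: () => props.onSelect({ id: 'picked-1' }) },
    'ChildPicker'
  );
});

const handleChange = jest.fn();

describe('Parent', () => {
  beforeEach(() => {
    render(<Parent onChange={handleChange} />);
  });

  describe('selecting a value in the child', () => {
    beforeEach(() => {
      fireEvent.click(screen.getByText('ChildPicker'));
    });

    test('calls onChange with the expected shape', () => {
      expect(handleChange).toHaveBeenCalledWith(expect.objectContaining({ id: 'picked-1' }));
    });
  });
});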

Deeper testing

Testing everything that the user sees on-screen is a good step one. Step two is then running through the code line by line and checking that all the behind-the-scenes interactions work as expected.

Things like a call to an HTTP endpoint are worth considering an “outcome”: it’s something that happens because of an interaction in the component, even if it doesn’t immediately result in a render on screen.

We can mock useQuery (or similar) to ensure that the call would have been made, rather than needing to actually make it in the test.
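
For instance, a rough sketch with react-query’s useQuery mocked out (the component, query key shape and hook usage are assumptions for illustration):

import React from 'react';
import { render } from '@testing-library/react';
import { useQuery } from 'react-query';

import SerialView from './SerialView'; // hypothetical component that fetches a serial

jest.mock('react-query', () => ({
  ...jest.requireActual('react-query'),
  useQuery: jest.fn(),
}));

describe('SerialView', () => {
  beforeEach(() => {
    // Return canned data so no HTTP call is actually made
    useQuery.mockReturnValue({ data: { id: 'serial-1' }, isLoading: false });
    render(<SerialView id="serial-1" />);
  });

  test('sets up the fetch for the expected serial', () => {
    // The "outcome" here is that the query would have been made with the right key
    expect(useQuery).toHaveBeenCalledWith(
      expect.arrayContaining(['serial-1']), // assumed query key shape
      expect.any(Function)
    );
  });
});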

 

Another thing to bear in mind is testing all the conditional cases.

  • A thorough analysis of the coverage reports can find all the areas where a particular condition was not met during the test run

  • What it does NOT do is tell us whether we actually tested the outcome, only whether the condition was met. This means that coverage can be “duped” by changing a prop but never actually checking in a test that the outcome changed. The aim is to carefully consider conditionals while testing the component.
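
A small sketch of exercising both branches of a conditional and asserting the outcome each time (the Heading component and its fallback text are invented; toBeInTheDocument assumes @testing-library/jest-dom is set up):

import React from 'react';
import { render, screen } from '@testing-library/react';

import Heading from './Heading'; // hypothetical: renders the label prop, falling back to 'Untitled'

describe('Heading', () => {
  describe('with a label prop', () => {
    beforeEach(() => {
      render(<Heading label="Agreements" />);
    });

    test('renders the provided label', () => {
      expect(screen.getByText('Agreements')).toBeInTheDocument();
    });
  });

  describe('without a label prop', () => {
    beforeEach(() => {
      render(<Heading />);
    });

    test('renders the fallback label', () => {
      expect(screen.getByText('Untitled')).toBeInTheDocument();
    });
  });
});

Both branches of the conditional are exercised and, crucially, the visible outcome of each branch is asserted.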

Using describe and beforeEach

Jest tests are conventionally split into blocks: describe, before*, after*, test and it. These are used to structure the test output and to build semantic blocks of tests that fit together.

In general we use a describe/beforeEach to perform an action, and a test to check an outcome. (test and it are identical in function; they differ only in how the test reads to the developer.)

 

  • We have had a number of occasions where a single test was both performing setup-type actions AND checking multiple outcomes

  • Test cases should be as granular as possible

  • Nest actions, with describes within describes, to keep tests small and repeated lines to a minimum

Example, nested describe 4 levels deep
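
A rough sketch of the shape this takes, with each describe/beforeEach pair performing one action and the innermost test checking the outcome (the component and button names are invented):

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';

import AgreementForm from './AgreementForm'; // hypothetical component

const onSubmit = jest.fn();

describe('AgreementForm', () => {
  beforeEach(() => {
    render(<AgreementForm onSubmit={onSubmit} />);
  });

  describe('opening the organizations accordion', () => {
    beforeEach(() => {
      fireEvent.click(screen.getByRole('button', { name: /organizations/i }));
    });

    describe('adding an organization', () => {
      beforeEach(() => {
        fireEvent.click(screen.getByRole('button', { name: /add organization/i }));
      });

      describe('saving the form', () => {
        beforeEach(() => {
          fireEvent.click(screen.getByRole('button', { name: /save/i }));
        });

        test('calls onSubmit', () => {
          expect(onSubmit).toHaveBeenCalled();
        });
      });
    });
  });
});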

Effective Use of .each

  • describe.each and test.each are powerful tools when performing very similar tests over and over to avoid having to write out each case.

  • In Jest, describe.each is a method that allows us to run the same test suite multiple times with different input data. It is useful for parameterized testing where we want to run the same test logic with different input values.

    • Here is a basic example of how describe.each is used in Jest:

      describe.each([
        [1, 1, 2],
        [1, 2, 3],
        [2, 2, 4],
      ])('add(%i, %i)', (a, b, expected) => {
        test(`returns ${expected}`, () => {
          expect(a + b).toBe(expected);
        });
      });

       

      In this example:

      • describe.each is used to define a test suite that will run three times with different input values.

      • The array passed to describe.each contains arrays of input values for each test run.

      • The %i placeholders in the describe title are replaced with the actual input values, and the values themselves are passed as arguments to the suite function.

      • The test logic is executed for each set of input values.

      This allows us to write concise and readable tests for scenarios where the test logic is the same but the input data varies.

Example

Mocking Components and Functions

Mocking

This is a powerful feature that is used to simulate the behavior of certain functions and modules. This allows us to test the component in isolation without relying on the actual implementations of these dependencies.

Creating Mocks:

  • jest.fn() creates a mock function. We can specify a mock implementation using mockImplementation() or mockReturnValue().

  • jest.mock() is used to automatically mock modules in our tests. When we mock a module, Jest replaces the actual implementation with mock functions.

    // Mocking a custom hook
    jest.mock('../../hooks', () => ({
      ...jest.requireActual('../../hooks'), // Keep original implementations for other exports
      useBasket: jest.fn() // Mock only the useBasket hook
    }));

  • In the example above, the entire ../../hooks module is mocked and jest.requireActual() is used to preserve the original implementations of other exports; only the useBasket hook is replaced with a mock function.

  • jest.unmock() restores the original implementation of a previously mocked module.

    • This can be useful when:

      • We need the real behavior of a specific module which is mocked by manual mocks

      • We're testing integration with a particular library

    • For example: jest.unmock('react-router'); This ensures that the actual implementation of react-router is used rather than the usual mocked version.

  • In general, if a mocked component ends up needing to be really complicated to test everything, it’s possibly an indicator that the boundaries for the components are in the wrong place.

Partial Mocking

We can create partial mocks by combining jest.mock() with jest.requireActual():

jest.mock('react-router-dom', () => {
  const { mockReactRouterDom } = jest.requireActual('@folio/stripes-erm-testing');

  return ({
    ...jest.requireActual('react-router-dom'), // Keep original implementations
    ...mockReactRouterDom, // Add mock implementations from testing library
    useHistory: () => ({ push: mockHistoryPush }) // Override specific functions
  });
});

 

  • This approach gives us fine-grained control over which parts of a module are mocked and which use their real implementations.

    By strategically combining these mocking techniques, we can create focused tests that isolate the specific behavior we want to verify while controlling all external dependencies.

  • Using Mocks in Tests:

    • Mocks can be set up in:

      • The top of the file, for test-global mocking.

      • beforeEach or beforeAll hooks to ensure a clean state for each test

        • This usually requires a blank jest.mock(…) at the top of the file, mocking a normal import with a jest.fn(), and then calling mockImplementation on it in the beforeEach etc. Example (a rough sketch of this pattern also follows this list)

      • A mock can also be set up by importing a module that is globally mocked in __mocks__ and then calling mockImplementation on it directly. Example

    • Assertions: We can use Jest’s assertion methods to verify mock behavior, such as toHaveBeenCalled() or toHaveBeenCalledWith().

  • Mock Implementation:

    • We can define custom behavior for mocks using mockImplementation() or mockReturnValue(). This allows us to simulate different scenarios and test how we could handle them.

  • Resetting Mocks:

    • Use mockClear() or jest.clearAllMocks() to clear mock call history, or mockReset() / jest.resetAllMocks() to also reset mock implementations, between tests. Example Example (Once this is in a tagged release, swap this example to a tag instead of a commit)
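
Pulling these pieces together, a rough sketch of a blank module mock controlled from beforeEach, with call history cleared between tests (the module path, hook shape and component are all hypothetical):

import React from 'react';
import { render } from '@testing-library/react';

import { useBasket } from '../../hooks'; // resolves to the jest.fn() below
import BasketButton from './BasketButton'; // hypothetical component

// Blank mock at the top of the file; the real implementation is never loaded.
jest.mock('../../hooks', () => ({
  useBasket: jest.fn(),
}));

describe('BasketButton', () => {
  beforeEach(() => {
    jest.clearAllMocks(); // clear call history from previous tests

    // Per-test implementation, set up in beforeEach for a clean state
    useBasket.mockImplementation(() => ({ basket: [], addToBasket: jest.fn() }));

    render(<BasketButton />);
  });

  test('reads the basket via useBasket', () => {
    expect(useBasket).toHaveBeenCalled();
  });
});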

Test Maintenance & Continuous Improvement

Outline

This is as much about upskilling and getting a handle on any gaps in knowledge around testing, and keeping those skills sharp, as it is about actually making sure the tests are perfect.

  • Complete any TODOs, or leave a comment explaining why they can’t or shouldn’t be done in this case or in general

  • Scan through the PR raised as part of the release process for any test changes there

  • Check the remaining jest tests for anything that strikes us as potentially problematic, based on the TODOs and that PR

Handling Warnings & Deprecated Features

Wherever possible, jest tests should not throw up red warnings when run.

  • This is a bit complicated at the moment because of the findDOMNode deprecation warnings, but if we can fix anything else then we’re in good shape

Running the tests

Basic

To run tests in our workspace, first navigate to the app folder (e.g., ui-agreements). Then:

  • To run all the tests within the app: yarn test

    • Under the hood this will call yarn test:jest, which in turn runs jest --ci --coverage --maxWorkers=50%

    • Set up in the package.json (see the sketch after this list)

  • To run one specific test, we can specify the name of the test we would like to run like: yarn test BasketSelector
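
The package.json wiring behind this typically looks something like the following (the exact scripts may vary slightly between modules):

{
  "scripts": {
    "test": "yarn run test:jest",
    "test:jest": "jest --ci --coverage --maxWorkers=50%"
  }
}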

Running Coverage report

  • Once we run the tests locally, an artifacts folder should be generated in our app folder.

    To view the coverage report:

    • From our app folder, cd artifacts/coverage-jest/lcov-report/ and then run npx serve or npx serve-static-cli

    • Alternatively, you can open the HTML file at artifacts/coverage-jest/lcov-report/index.html in a browser

 

Helpful Test components/utilities

TestForm

  • Use TestForm only if the tested component relies on being inside a form context.

  • If the component renders its own form, it does not need to be tested inside a TestForm. The same logic applies to *FieldArray-type components etc.: if the component is usually rendered in a Field, render it in a Field in the test; if it’s normally rendered just inside a Form, then just plonk it in a TestForm.

  • In general there’s no need to test that the submit button appears when TestForm is rendered, or that filling out a TextField actually fills it out on the screen. Checking that the TextField is there is normally enough.

    • What we do need to test is any interaction in that component: filling out one field validating another, for example.

MemoryRouter

  • The MemoryRouter is a component from the react-router library, often used in testing environments like Jest to simulate routing without needing a full browser environment. It allows us to define a memory-based history stack, which is useful for testing components that rely on routing without affecting the actual browser URL.

  • For components that require routing, we can wrap the component in a MemoryRouter from react-router-dom.
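
A minimal sketch of wrapping a component in MemoryRouter (the component and route are made up; initialEntries just seeds the in-memory history):

import React from 'react';
import { render } from '@testing-library/react';
import { MemoryRouter } from 'react-router-dom';

import AgreementCard from './AgreementCard'; // hypothetical component that renders <Link>s

test('renders routing-dependent markup without a browser history', () => {
  render(
    <MemoryRouter initialEntries={['/erm/agreements/123']}>
      <AgreementCard />
    </MemoryRouter>
  );
});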

Screen Debug

  • In Jest tests, screen.debug() is a utility function provided by @testing-library/react that allows us to debug our tests by printing out the current state of the screen (i.e., the rendered component tree or the DOM nodes and their attributes) to the console.

  • This can be incredibly helpful when trying to debug issues with our tests, as it allows us to see exactly what's being rendered and what's going on with our components.

  • screen.debug has a default limit on how much of the DOM it will print. To increase this, we can run our tests like:

    • DEBUG_PRINT_LIMIT=10000 yarn test:jest. That should suffice; if it doesn’t, just increase the limit.
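
For reference, a tiny sketch of using screen.debug inside a test (the component is hypothetical):

import React from 'react';
import { render, screen } from '@testing-library/react';

import MyComponent from './MyComponent'; // hypothetical component

test('debugging what rendered', () => {
  render(<MyComponent />);

  screen.debug();                           // print the full rendered DOM to the console
  screen.debug(screen.getByRole('button')); // or limit output to a specific element
});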