...

I've picked out a few that could be relevant to how we got to the current design:

  • Business logic must use the most current state for decisions
  • Business logic and storage are split in two separate modules (in order to support independent substitution)
  • All integration between modules is done via HTTP APIs (proxied via Okapi)
  • All data is stored within PostgreSQL
  • A record-oriented design with a single source-of-truth record for each record type (business logic / storage separation notwithstanding)

Some of these have changed since this early development, e.g. the use of Kafka for integration.

Some may need to change for the options below to be tolerable and coherent within FOLIO.

Expectations

A checkout must complete within 1 second (from the documented /wiki/spaces/DQA/pages/2658550). This is stated to include the time for a staff member to scan the barcode (presumably of the item).

...

The check out API must respond within 1 second under load from 8 concurrent requests.

Solution Constraints

Beyond the general constraints on architectural decisions, the following apply:

  • No changes to the circulation API (the interface must remain the same)
  • Only existing infrastructure can be used (I'm including Kafka in this, even though it isn't official yet)

Analysis

Limitations of Analysis

...

These ideas will be the framing for the proposal part of this document.

Proposal

Constraints

...

Options

Improve the performance of individual downstream requests

Analyse and try to improve the performance of each downstream request

Characteristics

  • Improvements can be undone by changes to downstream modules
  • Limited by the constraints of the downstream modules (e.g. the data is currently stored as JSONB)
  • Retains the same number of downstream requests
  • Retains the same overhead from Okapi proxying
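Before optimising anything, each downstream request needs to be measured in isolation. A minimal sketch of such a timing harness is below; the endpoint paths are illustrative stand-ins, and `fetch` is a placeholder for a real HTTP call proxied via Okapi, not actual FOLIO client code.

```python
import time
from contextlib import contextmanager

# Records elapsed time per downstream request so the slowest can be targeted.
timings: dict[str, float] = {}

@contextmanager
def timed(name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

def fetch(endpoint: str) -> dict:
    # Stand-in for an HTTP GET proxied via Okapi; a real implementation
    # would use an HTTP client against the module's base URL.
    time.sleep(0.01)  # simulated network + proxy overhead
    return {"endpoint": endpoint}

for endpoint in ["/item-storage/items", "/loan-policy-storage/loan-policies"]:
    with timed(endpoint):
        fetch(endpoint)

slowest = max(timings, key=timings.get)
```

Profiling like this identifies which downstream module to focus on, but (per the characteristics above) any gains can still be undone by later changes to that module.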

Make downstream requests concurrently

Make some of the downstream requests in mod-circulation concurrently, rather than one after another.

Characteristics

  • Retains the same number of downstream requests
  • Retains the same overhead from Okapi proxying
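The idea is that total latency approaches that of the slowest single request rather than the sum of all of them. A sketch, with simulated latency and illustrative endpoint names in place of real FOLIO paths:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(endpoint: str) -> dict:
    # Stand-in for an HTTP GET via Okapi; 50 ms simulated latency.
    time.sleep(0.05)
    return {"endpoint": endpoint}

endpoints = ["/item-storage/items", "/users", "/loan-policy-storage/loan-policies"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
    results = list(pool.map(fetch, endpoints))
elapsed = time.perf_counter() - start
# Three 50 ms requests in flight together finish in roughly 50 ms,
# not the ~150 ms a sequential loop would take.
```

Note this only helps for requests that don't depend on each other's responses; requests that need data from an earlier response (e.g. a loan policy chosen based on the item) still have to be sequenced.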

Combine downstream requests for related records into a single request

Introduces context-specific APIs designed for a particular use, rather than general-purpose record access.

It may not make sense to combine all of the record types from a single module. For example, 

Characteristics

  • Reduces the number of individual requests (and hence the Okapi overhead)
  • Requires at least one downstream request per destination module
  • Requires at least one database query per downstream module
  • Might reduce the load on downstream modules
  • Reduction in downstream requests is limited by the number of record types within a single module
  • Increases the number of APIs to maintain
  • Increases the coupling between modules (by introducing the client's context into the other module)
  • Increases the coupling between the record types involved (e.g. it's harder to move record types, changes to them ripple across APIs)
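A combined endpoint might look something like the sketch below: a single handler in a storage module gathers the related records and returns them in one response. The endpoint name, record shapes, and in-memory stores are all hypothetical, used only to show the shape of the idea.

```python
# Stand-ins for the module's database tables; record shapes are illustrative.
ITEMS = {"item-1": {"id": "item-1", "barcode": "645398607547",
                    "holdingsRecordId": "h-1"}}
HOLDINGS = {"h-1": {"id": "h-1", "callNumber": "QA 76"}}

def get_check_out_context(item_barcode: str) -> dict:
    """Hypothetical context-specific handler: one response carrying the
    related records, so the client makes one request (one Okapi round
    trip) instead of one per record type."""
    item = next(i for i in ITEMS.values() if i["barcode"] == item_barcode)
    holdings = HOLDINGS[item["holdingsRecordId"]]
    return {"item": item, "holdingsRecord": holdings}
```

This is where the coupling cost listed above shows up: the storage module now knows that "check out" is a thing clients do, and changes to either record type ripple through this combined API.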

Copy data

...

into circulation

Consume messages produced (via Kafka) by other modules to build views of the data needed to perform a check out.

The biggest challenge with this option is the community's tolerance for using potentially stale data to make decisions.

Characteristics

  • Increases the potential for stale data to be used for decisions
  • Introduces a dependency on a database from mod-circulation
  • Introduces a dependency on messages produced by other modules
  • Requires no downstream requests for fetching data
  • State changes still require a downstream request (and the requisite overhead)
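The view-building side of this option can be sketched as an event handler that applies domain events, as they would arrive from Kafka, to a local copy of the records. The event shape and field names below are assumptions, not FOLIO's actual message format, and a real consumer would of course use a Kafka client library rather than direct function calls.

```python
# Local materialized view of item records, keyed by id. Between an event
# being produced and this handler applying it, the view is stale -- which
# is exactly the tolerance question raised above.
item_view: dict[str, dict] = {}

def apply_event(event: dict) -> None:
    """Apply one assumed-shape domain event ({'type', 'old', 'new'})
    to the local view."""
    record = event.get("new") or event.get("old")
    if event["type"] in ("CREATED", "UPDATED"):
        item_view[record["id"]] = event["new"]
    elif event["type"] == "DELETED":
        item_view.pop(record["id"], None)

apply_event({"type": "CREATED",
             "new": {"id": "item-1", "status": "Available"}})
apply_event({"type": "UPDATED",
             "old": {"id": "item-1", "status": "Available"},
             "new": {"id": "item-1", "status": "Checked out"}})
```

A real implementation would also need to handle out-of-order and redelivered messages (e.g. by comparing record versions or timestamps), which this sketch omits.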

Variations

Store the copied data in mod-circulation-storage

...