Inventory Audit log

Summary

This technical design addresses the need of catalogers to track the history of changes to different entities in FOLIO. The solution follows the common approach to auditable events in FOLIO, similar to the auditing of events in the circulation and acquisition domains.

Requirements

Functional Requirements

NFR

Assumptions

  • For ECS environments: Shared entities' version history should be tracked only in the central tenant.

  • All changes in the system related to inventory entities (instances, items, holdings, bibs) generate Domain events.

Baseline Architecture

In the existing architecture, mod-inventory-storage is responsible for persisting entities such as instances, holdings, and items, while mod-entities-links is responsible for authorities. Both modules produce domain events on create/update/delete actions from different sources.

 

Target Architecture

The existing architecture allows reusing the domain-events approach to persist audit log events.

Audit Consumers Sequence Diagram

Audit Consumers with Outbox Sequence Diagram

The implementation can follow the transactional outbox pattern. This approach provides a stronger guarantee that the audit event is persisted, but the trade-off is a negative impact on the performance of the flows that produce domain events.
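As a minimal in-memory illustration of the outbox pattern, the sketch below writes the entity and its event together, then lets a separate relay step publish queued events. All class and method names (`OutboxSketch`, `drainOutbox`, etc.) are hypothetical and not actual FOLIO APIs; in a real module both writes would share one database transaction and the relay would send to Kafka.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.UUID;

public class OutboxSketch {
    record DomainEvent(UUID eventId, String action, String payload) {}

    static final List<String> entityStore = new ArrayList<>();
    static final Queue<DomainEvent> outbox = new ArrayDeque<>();
    static final List<DomainEvent> published = new ArrayList<>();

    // In a real module the entity write and the outbox insert would share
    // one database transaction, so a committed change cannot lose its event.
    static void updateEntity(String snapshot, String action) {
        entityStore.add(snapshot);
        outbox.add(new DomainEvent(UUID.randomUUID(), action, snapshot));
    }

    // Relay step: drains the outbox and publishes each event
    // (a stand-in for sending to a Kafka topic).
    static void drainOutbox() {
        DomainEvent e;
        while ((e = outbox.poll()) != null) {
            published.add(e);
        }
    }

    public static void main(String[] args) {
        updateEntity("{\"title\":\"A\"}", "UPDATE");
        drainOutbox();
        System.out.println(published.size()); // 1
    }
}
```

The performance cost mentioned above comes from the extra write (and the relay's polling) added to every entity change.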

Solution Summary

The process is split into two main parts:

  1. Persistence: The audit database should persist a snapshot of the entity. Queries are made mostly by the entity's unique identifier, so partitioning by UUID can be applied to the audit tables.

  2. Version history display: This should be done on demand by comparing each consecutive snapshot of the entity to the previous one.
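The idea behind partitioning by UUID can be sketched as follows: hashing the entity identifier deterministically routes every snapshot of the same entity to one partition, analogous to what PostgreSQL's `PARTITION BY HASH` computes internally. The class name, partition count, and hashing details are illustrative assumptions, not the actual schema.

```java
import java.util.UUID;

public class PartitionSketch {
    // Number of hash partitions per audit table; 16 is an assumed value.
    static final int PARTITIONS = 16;

    // Deterministically maps an entity UUID to a partition index,
    // analogous to what PostgreSQL hash partitioning does internally.
    static int partitionFor(UUID entityId) {
        return Math.floorMod(entityId.hashCode(), PARTITIONS);
    }

    public static void main(String[] args) {
        UUID entityId = UUID.randomUUID();
        // All snapshots of one entity map to the same partition, so a
        // "history of entity X" query scans a single partition.
        int p = partitionFor(entityId);
        System.out.println(p == partitionFor(entityId)); // true
    }
}
```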

Key implementation aspects:

  1. Kafka's default delivery semantics are "at least once". Ensure that domain events carry unique identifiers so that message consumption can be handled idempotently.

  2. Add new consumers for inventory domain events in mod-audit.

  3. Persist audit events in event storage: a single database table per entity type, partitioned by UUID.

  4. Create a REST API:
    • to provide a list of changes related to a particular entity
    • to provide detailed information on a particular change - this API should use the object-diff library (JaVers) to return a verbose description of the difference between the current and previous snapshots of the entity
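Idempotent handling of at-least-once delivery can be sketched as below, with a processed-ID set standing in for a unique constraint on EventID in the audit table. Class and method names are hypothetical, not FOLIO APIs.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class IdempotentConsumerSketch {
    // Stand-in for a unique constraint on EventID in the audit table.
    static final Set<UUID> processedEventIds = new HashSet<>();
    static final List<String> auditLog = new ArrayList<>();

    // Set.add returns false for an already-seen eventId, so a redelivered
    // message (at-least-once semantics) does not create a second audit row.
    static void consume(UUID eventId, String snapshot) {
        if (processedEventIds.add(eventId)) {
            auditLog.add(snapshot);
        }
    }

    public static void main(String[] args) {
        UUID eventId = UUID.randomUUID();
        consume(eventId, "{\"title\":\"A\"}");
        consume(eventId, "{\"title\":\"A\"}"); // duplicate delivery: ignored
        System.out.println(auditLog.size()); // 1
    }
}
```

In the real consumer the same effect would come from inserting with the EventID as a unique key and ignoring duplicate-key violations.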

ERD

Given the data size implications, a separate table per entity type is required. The default table structure is listed below:

| # | Column     | Type      | Required | Unique | Description |
|---|------------|-----------|----------|--------|-------------|
| 1 | EventID    | UUID      | y        | y      | unique event identifier |
| 2 | EventDate  | timestamp | y        | n      | date when the event appeared in the event log |
| 3 | Origin     | varchar   | y        | n      | origin of the event: data-import, batch-update, user, etc. |
| 4 | Action     | varchar   | y        | n      | what action was performed |
| 5 | ActionDate | timestamp | y        | n      | when the action was performed |
| 6 | EntityID   | UUID      | y        | n      | entity identifier |
| 7 | UserId     | UUID      | y        | n      | user who performed the action; a fixed UUID for an anonymized user |
| 8 | Snapshot   | jsonb     | y        | n      | body of the entity |
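The on-demand diff between consecutive Snapshot values can be sketched as below by flattening two snapshots to field maps and emitting one change message per differing field. The real implementation would delegate this to the JaVers object-diff library; the class and method names here are illustrative.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class SnapshotDiffSketch {
    // Compares two consecutive snapshots (flattened to field maps) and
    // returns one "before -> after" message per field that differs.
    static Map<String, String> diff(Map<String, String> prev, Map<String, String> curr) {
        Map<String, String> changes = new LinkedHashMap<>();
        Map<String, String> allFields = new HashMap<>(prev);
        allFields.putAll(curr);
        for (String field : allFields.keySet()) {
            String before = prev.get(field);
            String after = curr.get(field);
            if (before == null ? after != null : !before.equals(after)) {
                changes.put(field, before + " -> " + after);
            }
        }
        return changes;
    }

    public static void main(String[] args) {
        Map<String, String> v1 = Map.of("title", "Moby Dick", "status", "Available");
        Map<String, String> v2 = Map.of("title", "Moby-Dick", "status", "Available");
        System.out.println(diff(v1, v2)); // {title=Moby Dick -> Moby-Dick}
    }
}
```

Because snapshots are compared lazily at read time, the write path stores only full snapshots and pays no diffing cost.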

WBS

| #  | Story             | Task                                   | Entity           | Description | Module |
|----|-------------------|----------------------------------------|------------------|-------------|--------|
| 1  | Persisting events |                                        |                  |             |        |
| 2  | Persisting events | Extend domain event with source FOLIO  | Instance (FOLIO) | [TBC]       | mod-inventory-storage |
| 3  | Persisting events | Extend domain event with source FOLIO  | Item             | [TBC]       | mod-inventory-storage |
| 4  | Persisting events | Extend domain event with source FOLIO  | Holding          | [TBC]       | mod-inventory-storage |
| 5  | Persisting events | Extend domain event with source MARC   | Instance (MARC)  | Add origin header to align with the corresponding DI profile | mod-source-record-storage |
| 6  | Persisting events | Extend domain event for Authority      | Authority        | Add origin header to align with the corresponding DI profile | mod-source-record-storage |
| 7  | Persisting events | Consume domain event                   | Instance (FOLIO) | Create table with partitioning by UUID; create Kafka consumer for the domain event; persist entity snapshot | mod-audit |
| 8  | Persisting events | Consume domain event                   | Instance (MARC)  | Create table with partitioning by UUID; create Kafka consumer for the domain event; persist entity snapshot | mod-audit |
| 9  | Persisting events | Consume domain event                   | Holding          | Create table with partitioning by UUID; create Kafka consumer for the domain event; persist entity snapshot | mod-audit |
| 10 | Persisting events | Consume domain event                   | Item             | Create table with partitioning by UUID; create Kafka consumer for the domain event; persist entity snapshot | mod-audit |
| 11 | Persisting events | Consume domain event                   | Authority        | Create table with partitioning by UUID; create Kafka consumer for the domain event; persist entity snapshot | mod-audit |
| 12 | Persisting events | Configuration                          | All              | Provide a configuration parameter to enable/disable the audit log at the tenant level | mod-audit |
| 13 | Persisting events | Configuration                          | All              | Anonymize events | mod-audit |
| 14 | Display History   | REST endpoint for history              | Instance (FOLIO) | Query the list of snapshots from the database; calculate diff messages; return the list of diff records | mod-audit |
| 15 | Display History   | REST endpoint for history              | Instance (MARC)  | Query the list of snapshots from the database; calculate diff messages; return the list of diff records | mod-audit |
| 16 | Display History   | REST endpoint for history              | Holding          | Query the list of snapshots from the database; calculate diff messages; return the list of diff records | mod-audit |
| 17 | Display History   | REST endpoint for history              | Item             | Query the list of snapshots from the database; calculate diff messages; return the list of diff records | mod-audit |
| 18 | Display History   | REST endpoint for history              | Authority        | Query the list of snapshots from the database; calculate diff messages; return the list of diff records | mod-audit |
| 19 | Display History   | Show history pane in Inventory         |                  |             | ui-inventory |

Risks and concerns

| # | Risk | Description | Probability | Impact | Mitigation |
|---|------|-------------|-------------|--------|------------|
| 1 | Long retention period for audit records | The number of records could overwhelm the Postgres database, both computationally and in cost | High | High | Introduce separate storage for audit events |
| 2 | Cascade updates will create redundant copies in the audit log | An update to a holdings record causes updates to all related items; some holdings may contain ~15000 records | High | Medium | Collapse or filter out events that only change the parent entity |
| 3 | Some flows could update inventory entities without using the domain-events mechanism | Across the system's capabilities (UI, data import, bulk edit, etc.), some flows might skip sending domain events and/or edit entities directly | Low | Medium | List those cases and add domain events to the flows that lack this capability |
| 4 | Linked data | The flow and integration with Inventory are not clear for the BIBFRAME format | Low | Low | Adjust the BIBFRAME flow to follow the solution proposed for other inventory entities |

Product Questions

| # | Question | Answer | Comment |
|---|----------|--------|---------|
| 1 | Should a failure in sending an audit message block the create/update/delete operation? | Hey @Kalibek Turgumbayev - what happens today when an update is made and the create/update date and time stamp is not updated? | The question relates to the transactional outbox pattern implementation |
| 2 | What would be the retention period for audit records? | @Dennis Bridges has this requirement come up for you with respect to Acq's change tracker? | The storage options depend on this question: Postgres for ~1-3 years; Postgres with partitioning by UUID for 5-15 years; NoSQL options for 20+ years |
| 3 | In what form should we show changes to non-MARC fields (e.g. staffSuppress, administrative notes, etc.) in MARC instances? | @Kalibek Turgumbayev - I am unsure I understand this question. Can you review this mockup of how to display updates made to a FOLIO instance record? | An Instance with source FOLIO or MARC in Inventory is a separate object from the SRS record and should be tracked separately |
| 4 | If only the order of fields in a MARC record is changed, should it be logged? | @Kalibek Turgumbayev - Good question - I need to ask users, but unless it is significant to implement, the answer is Yes. @Dennis Bridges has this requirement come up for you with respect to Acq's change tracker? | |
| 5 | Do we have scenarios where the audit log is exported in batches for some date range? | It is possible that a library may want to do so, but I do not think it is a requirement for Sunflower. @Dennis Bridges has this requirement come up for you with respect to Acq's change tracker? | If the solution uses Postgres with partitioning by entity key, such exports would cause significant performance issues |

Links

  1. Acquisition event log - data retention period is 20 years

  2. Transactional outbox pattern

  3. Orders Event Log

  4. Javers - Java object diff library