D2IR Record Contribution flow
D2IR Contribution flow is the set of activities and operations required to manage instance / item changes in FOLIO acting as a Local Server and to contribute those changes to the INN-Reach Central Server.
Note that two cases should be considered:
- Initial contribution is expected to happen once per library, at the very beginning of integration with INN-Reach; its purpose is to let INN-Reach know about all shareable items owned by the library,
- Regular contribution is part of daily activities; it is about tracking changes (at least creation, deletion, and status updates of instances / items).
Note that in all API calls FOLIO acts as a Client while INN-Reach acts as a Server.
Regular record contribution
Solution diagram
Below is a diagram showing flow implementation on component level.
Sequencing
- mod-inventory-storage tracks changes to instances, holdings, and items on its side and publishes so-called domain events to particular Kafka topics (note: directly to Kafka, not via mod-pubsub)
- Kafka topics used for domain event publishing - inventory.instance, inventory.item, inventory.holdings-record
- On the FOLIO side there are the types instance, holding, and item. On the INN-Reach side there are the types bib and item. Based on their semantics, two pairs are formed - instance↔bib and item↔item (to be confirmed with Brooks Travis)
- Refer to SPIKE: [MODINREACH-22] Verify Inventory Domain Events Published for Remote Storage and Elasticsearch Can be Used to Trigger Updates to Bibs and Items for more details, including the content of domain events
- mod-inn-reach connects to the mentioned Kafka topics directly with a dedicated consumer group and consumes those domain events
- note that a separate consumer group allows consuming events from a Kafka topic without interfering with other consumers
- A domain event contains only part of the details about an instance / item, in particular the tenant, event type, and instance / item ID. So mod-inn-reach has to request all other required details from mod-inventory via mod-okapi and the FOLIO Platform API (points 3.1-3.3)
- Then mod-inn-reach retrieves required configurations and / or mappings from its own configuration data
- Optionally, mod-inn-reach authenticates against the API if needed (refer to the D2IR API Authentication flow for more details)
- mod-inn-reach invokes an appropriate D2IR API endpoint (refer to API specification and D2IR endpoints - implementation status)
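The sequencing above can be sketched as follows. This is an illustrative Python sketch only (the actual module is Java): the event envelope, helper names, and mapping are assumptions, not the real mod-inn-reach implementation.

```python
# Illustrative sketch of the regular-contribution flow; the event envelope
# and helper names (fetch_instance, contribute_bib) are assumptions.

def handle_domain_event(event, fetch_instance, contribute_bib):
    """Process one inventory.instance domain event.

    The event carries only a few fields (tenant, event type, record ID),
    so the full record must be fetched from mod-inventory via mod-okapi;
    fetch_instance and contribute_bib stand in for those calls.
    """
    instance_id = event["data"]["id"]
    if event["type"] == "DELETE":
        # de-contribution path: no enrichment needed
        return {"action": "de-contribute", "id": instance_id}
    # 1. Enrich: request the complete instance from mod-inventory
    instance = fetch_instance(instance_id)
    # 2. Map FOLIO instance -> INN-Reach bib (mapping rules would come from
    #    mod-inn-reach configuration data)
    bib = {"bibId": instance_id, "title": instance.get("title")}
    # 3. Invoke the appropriate D2IR endpoint
    contribute_bib(bib)
    return {"action": "contribute", "id": instance_id}
```

The stubs make the enrichment step explicit: the consumer never contributes straight from the event payload, it always fetches the full record first.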
Initial record contribution
Initial record contribution is the case that occurs at the very beginning of integration with INN-Reach. In practice, the process may look like this:
- an administrator does all the required configuration,
- an administrator starts Initial record contribution to synchronize library inventory with INN-Reach and waits till its completion,
- regular record contribution and circulation can be used then.
The Initial contribution flow looks pretty similar to regular contribution, though some new challenges need to be addressed. Since the D2IR API doesn't provide a way to contribute instances in batches, contributing records one by one via the regular API endpoints is the only option. But given the potentially large number of instances / items to be contributed, this process can take a long time. That's why it should be 1) scalable, 2) robust enough not to lose data, and 3) transparent to an administrator.
Details of Initial record contribution
The proposal is
- A UI form is to be added to visualize the Initial record contribution process (somewhere together with the INN-Reach configuration),
- the UI form should have a kind of Start initial record contribution button to start this process,
- the UI should provide information about the process progress (e.g. progress bar with total and contributed parameters),
- the UI should display a message once contribution process is completed, probably with some statistics (e.g. how many instances and items have been contributed, how much time it took, start & finish dates),
- after completion, the Start initial record contribution button should be disabled.
Monitoring reindex process
There is no end-to-end monitoring implemented yet; however, it is possible to monitor the process partially. The following mod-inventory-storage API provides the number of domain events initially sent to the Kafka topic by a reindex job:
GET [OKAPI_URL]/instance-storage/reindex/[reindex job id], where reindex job id is the unique identifier of a particular reindex job
On the mod-inn-reach side, one should count all processed events, so that comparing the number of initially sent events with the number of processed events gives a picture of the contribution status at a given moment of time.
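The comparison described above boils down to a simple progress calculation; a minimal illustrative sketch (not actual FOLIO code):

```python
def contribution_progress(published_total: int, processed_count: int) -> float:
    """Approximate contribution progress in percent.

    published_total comes from the mod-inventory-storage reindex job API
    (number of domain events sent to Kafka); processed_count is the number
    of events mod-inn-reach has processed so far. The ratio approximates
    completion; it is capped because retries may over-count processing.
    """
    if published_total <= 0:
        return 0.0
    return min(processed_count / published_total, 1.0) * 100
```

A UI progress bar for the Initial contribution form could be driven by exactly this kind of ratio.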
Enumeration of all instances and items in inventory
Enumeration of all instances and items existing in inventory can be done either via a file with the IDs of all instances (it could be created externally; it is not clear whether this is a viable option) or via the existing REINDEX functionality of mod-inventory-storage.
Re-index via mod-inventory-storage
Some mod-inventory-storage consumers need to pull all instances from an existing database. There is an instance-reindex API for that: when a reindex job is submitted, mod-inventory-storage initiates streaming of all instance IDs and publishes domain events for them. The domain event has the REINDEX type; the topic used is inventory.instance.
Refer to MSEARCH-42 for more details: mod-search contains the implementation for triggering the re-index job and handling it.
So, the proposal is to re-use the same approach for INN-Reach and slightly update the code of mod-inventory-storage: add an instance-reindex-innreach API endpoint, re-use the database re-indexing functionality, and either send messages with a new domain event type to the same topic or use a dedicated topic (TBD later).
A spike task is recommended to look into mod-inventory-storage and mod-search and to define the required work and effort.
Contribution to INN-Reach Central server
The challenge comes from the D2IR API specification, which does not provide any way to contribute instances in bulk (though it supports bulk contribution of items related to one and the same instance). This means that, from a functional standpoint, the same contribution flow is required for both Initial and Regular contribution. With that, it's recommended to keep a single implementation in code and to enable scaling at the Java thread level to speed up the Initial contribution process.
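Given that one-by-one contribution is the only option, thread-level scaling could look like the following sketch. It is illustrative Python (the real module is Java); `contribute_one` is a hypothetical placeholder for the single-record D2IR call.

```python
from concurrent.futures import ThreadPoolExecutor

def contribute_all(record_ids, contribute_one, workers=4):
    """Contribute records one by one (the D2IR API has no instance-level
    bulk endpoint), parallelised at thread level to speed up the Initial
    contribution. Returns a per-record status map."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # submit every record as an independent single-record call
        futures = {pool.submit(contribute_one, rid): rid for rid in record_ids}
        for future, rid in futures.items():
            try:
                future.result()
                results[rid] = "contributed"
            except Exception:
                results[rid] = "failed"  # retry / log per the reliability notes
    return results
```

The worker count would be a tuning parameter, to be chosen based on the load testing mentioned later in this document.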
Reliability aspect
Reliability is to be provided by several involved components.
Reindex job on mod-inventory-storage
mod-inventory-storage is responsible for iterating through the inventory and pushing domain events for every instance / item to Kafka. Since this process can take a certain amount of time, mod-inventory-storage should be able to track the state of a reindex job and to resume the job in case of issues. Keeping in mind that mod-inventory-storage and its reindex mechanism are outside the responsibility of mod-inn-reach, it's recommended to review mod-inventory-storage to get a clearer view of its implemented behavior.
Kafka
Kafka acts as a transport channel with built-in persistent event storage. The only thing to check here is the so-called retention time, i.e. the time-to-live period for non-consumed events. The default retention time is 168 hours, i.e. 7 days. The retention time configured on FOLIO environments needs to be confirmed.
mod-inn-reach
All the contribution logic is located in mod-inn-reach where new events are to be consumed from Kafka.
- Kafka acknowledgement. Being a consumer, mod-inn-reach receives a message from Kafka and processes it. Once the message is processed, mod-inn-reach sends an acknowledgement to the Kafka broker, which then advances the consumer group's offset.
- What is the expected behavior if a particular instance or item cannot be contributed to INN-Reach?
- in case of network or protocol issues (e.g., a connection-refused error, a 500 Internal Server Error, etc.), further contribution attempts should be made,
- in case of an API error returned in accordance with the API specification, there should be no further attempts; this should, however, be logged and made available to an administrator
- mod-audit is a good candidate for keeping such logs, but it currently works only via mod-pubsub, so it would need to be slightly refactored to add direct Kafka support,
- another option is to keep logs internally in mod-inn-reach.
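A minimal sketch of the per-event error-handling rules above (illustrative Python; the real consumer is Java and all names here are assumptions):

```python
class ApiError(Exception):
    """Error returned in accordance with the D2IR API specification."""

class TransientError(Exception):
    """Network / protocol issue (connection refused, 500 ISE, ...)."""

def process_event(event, contribute, log_error, max_attempts=3):
    """Apply the rules above: transient failures are retried (bounded),
    spec-level API errors are logged once and not retried.
    Returns True when the event may be acknowledged to Kafka."""
    for attempt in range(1, max_attempts + 1):
        try:
            contribute(event)
            return True
        except ApiError as err:
            log_error(event, err)      # no retries for spec-level errors
            return True                # ack anyway so the offset advances
        except TransientError:
            if attempt == max_attempts:
                log_error(event, "attempts exceeded")
                return False           # leave for DLQ / redelivery
    return False
```

The key distinction is that a spec-level error still acknowledges the message (retrying would fail identically), while a transient failure withholds the acknowledgement so the event is not lost.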
Outstanding questions
- Discuss proposed UI form dedicated to Initial record contribution
- agreed
- Are there any particular performance expectations?
- no particular expectations are identified; proposed to make a performance testing to provide baseline performance metrics
- Do we need to have an option to cancel running Initial record contribution process?
- yes
Repeated record contribution
This is a specific case of full contribution in which a full sync between inventory and the INN-Reach Central Server is required. Use case:
- contribution criteria is configured,
- Initial record contribution process is conducted using contribution criteria,
- contribution criteria is updated after some time,
- a Partial record contribution process should then be conducted to sync instances / items according to the updated contribution criteria:
- the whole inventory is to be re-iterated (the same approach as for Initial record contribution)
- every domain event is to be verified against the current contribution criteria to identify whether the instance / item should be contributed
- the INN-Reach Central Server lookup endpoints are queried for every instance / item to identify whether it has been previously contributed
- comparing the outcomes of the previous two actions, mod-inn-reach should choose one of the possible actions: contribute a new instance / item, delete a previously contributed instance / item, or skip the instance / item
So, with the described scenario, the Partial record contribution process is expected to look like the Initial record contribution process, with the difference being in the mod-inn-reach processing logic.
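The decision logic above can be sketched as a simple function combining the two checks (illustrative Python; names are assumptions):

```python
def sync_action(matches_criteria: bool, previously_contributed: bool) -> str:
    """Choose the action for one instance / item during repeated
    contribution, per the list above: criteria check plus the
    central-server lookup result."""
    if matches_criteria and not previously_contributed:
        return "contribute"       # newly eligible record
    if not matches_criteria and previously_contributed:
        return "de-contribute"    # delete previously contributed record
    return "skip"                 # already in sync either way
```

Only records whose eligibility has actually changed trigger a D2IR call; everything else is skipped, which keeps the repeated pass cheap on the Central Server side.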
Special and edge cases, additional notes
- Look Up Bib by bibId and Look Up Bib by bibId and itemId endpoints can be used for verification of successful contribution
- The Contribute Bib endpoint needs MARC21 bib data in ISO2709; the current vision is that only instances that have associated MARC data in SRS can be contributed
- The assumption is that sending an instance / item with the same ID more than once won't lead to duplicates on the INN-Reach Central Server side
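The MARC preparation rules described for the Contribute Bib endpoint (require 008 and 245, omit 9XX, re-encode, Base64) can be sketched as follows. This is an illustrative Python sketch operating on a simplified tag-to-value mapping; real code would decode and re-encode actual ISO2709 records with a MARC library.

```python
import base64

def prepare_marc_for_d2ir(fields: dict) -> str:
    """Apply the D2IR MARC rules on a simplified tag -> value mapping:
    require the 008 and 245 fields, drop local-use 9XX fields, then
    Base64-encode the result (a stand-in for encoding to ISO2709 first).
    """
    if "008" not in fields or "245" not in fields:
        raise ValueError("MARC record must contain 008 and 245 fields")
    # omit 9XX (local-use) fields
    kept = {tag: val for tag, val in fields.items() if not tag.startswith("9")}
    # a real implementation would serialize to ISO2709 here
    raw = "".join(f"{tag}{val}" for tag, val in sorted(kept.items())).encode()
    return base64.b64encode(raw).decode()
```

Records failing the 008/245 check map to the "only instances with associated MARC data in SRS can be contributed" rule: they are rejected before any D2IR call is made.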
Proposed Work breakdown structure
- Spike #1 (MODINREACH-78): Analyze domain event pattern implementation in mod-inventory-storage and mod-search
- Basically, mod-inventory-storage is a publisher, and mod-search is a consumer (pretty similar to mod-inn-reach); so the goal is to see how they are implemented in code
- Expected outcome - code is analyzed and is clear for development team
- Spike #2 (MODINREACH-79): Analyze data required for D2IR contribution
- One needs to review what data is required for D2IR contribution and for verification of contribution criteria, and map it to data from FOLIO inventory and SRS
- Expected outcome - all fields are mapped
- Spike #3 (MODINREACH-80): Analyze Re-Index job implementation / usage in mod-inventory-storage and mod-search
- Open question: should the same topic with a new event type be used, or does a dedicated topic fit better? (RA: I'm not a fan of creating a separate topic for every case, but here it seems safer in order not to impact other consumers; though for Kafka itself and its clients it should not be a problem to filter out events with an unknown event type.) One more note: having two topics allows managing them independently, e.g. processing full contribution first and only then continuing with regular contribution
- Open question: does it make sense to implement a mod-inventory-storage endpoint accepting condition(s) for filtering out events?
- Expected outcome - development team has understanding what should be implemented to start or cancel re-iteration process in mod-inventory-storage
- Create UX mock-ups for Initial / Partial Contribution settings (UX-443)
- Expected outcome - clear and detailed mock-ups are available
- (ui-inn-reach) Implement a new UI form based on new UX mockups
- at least two parts are expected - Initial contribution status and Contribution history
- Expected outcome - a new UI form based on the mentioned mock-ups is implemented
- (mod-inn-reach) Design and implement a data model for Contribution process configuration and history
- CRUD for Contribution configuration
- CRUD for Full Contribution history (needs to provide all the data required for the UI form, including a list of past contribution jobs with the columns mentioned in UX-443)
- Expected outcome - mod-inn-reach can store and provide Contribution configuration and history and provides an API for that
- (mod-inn-reach) Implement Kafka client in mod-inn-reach to consume inventory domain events
- required configuration - Kafka endpoint, topic, SSL certificate to access Kafka (Enabling SSL and ACL for Kafka)
- it makes sense to consume events from Kafka in batches (the batch size can be discussed; it generally impacts performance; a batch size of 50 or 100 can be considered a baseline value)
- Kafka acknowledgement should be sent after successful sending to INN-Reach
- all failed records from the batch are to be sent to DLQ (another Kafka topic)
- More details are to come from Spike #1
- Expected outcome - skeleton with Kafka client and placeholders for communication with inventory, SRS, database and INN-Reach Central Server
- (mod-inn-reach) Enrich domain events with additional information from inventory
- More details are to come from Spike #2
- Expected outcome - mod-inn-reach has all the information required for D2IR API and available in FOLIO inventory
- (mod-inn-reach) Enrich domain events with additional information from SRS
- More details are to come from Spike #2
- Expected outcome - mod-inn-reach has all the information required for D2IR API and available in SRS
- Implement MARC analysis (MODINREACH-48)
- The Contribute Bib endpoint needs MARC21 bib data in ISO2709; the current vision is that only instances that have associated MARC data in SRS can be contributed
- decode the MARC record, update it to align with the D2IR specification requirements (check that the 008 and 245 fields are present, omit 9XX fields), encode it to MARC again, and then encode it in Base64
- Expected outcome - a MARC record fully compliant with the D2IR specification
- (mod-inn-reach) Retrieve and apply contribution criteria and configuration
- Contribution criteria can be applied to a record to identify what action should be performed (contribute to INN-Reach, update, de-contribute, skip)
- Expected outcome - an appropriate action is chosen depending on record data and contribution criteria
- (mod-inn-reach) Implement D2IR endpoints (9 endpoints in total)
- 2 for contribution,
- 2 for lookup,
- 2 for updates,
- 2 for deletion,
- 1 for Base64 Encoding Table
- Expected outcome - all 9 endpoints are implemented
- (mod-inventory-storage) Update the module to support the INN-Reach re-iteration job
- Scope depends on Spike #3 outcome
- Expected outcome - there is a way to start a full re-iteration job for INN-Reach integration and to cancel it if needed
- (mod-inn-reach) Implement Full (Initial and Repeated) Record Contribution
- Scope depends on Spike #3 outcome
- Expected outcome - there is a way to start a Full Record Contribution job and to cancel it if needed
- (mod-inn-reach) Support pausing and resuming of Full (Initial and Repeated) Record Contribution
- once a Full Record Contribution job is paused, no more actions are to be performed until either cancellation or resuming; there is no need to clear data from the Kafka topic
- Expected outcome - the re-iteration job can be paused and resumed
- (mod-inn-reach) Implement cancellation of Full (Initial and Repeated) Record Contribution
- once a Full Record Contribution job is cancelled by a user, no more actions are to be performed; all events in the Kafka topic with the specified event type and re-index job ID are to be consumed and committed without any additional activity
- Expected outcome - re-iteration job can be cancelled
- (mod-inn-reach) Implement status monitoring of Full (Initial and Repeated) Record Contribution
- Refer to Monitoring reindex process in the text above
- Expected outcome - it's possible to track progress of full record contribution job
- (mod-inn-reach) Error processing
- 1) error on initial casting / transformation of a Kafka event to a POJO or JSON - skip it without additional casting attempts
- 2) inability to work with the FOLIO API or D2IR API - add pauses (use the circuit breaker implementation) before the next attempts, or skip if the number of attempts is exceeded
- options - DLQ or audit logs
- (mod-inn-reach) Error processing - circuit breaker implementation
- the idea is to introduce some pauses in processing flow (in mod-inn-reach) when experiencing issues with FOLIO API or D2IR API
- this can be configured, e.g. as 0.5, 1, 5, 10, 30, 60 seconds of pauses. When mod-inn-reach encounters an issue with an external API, it starts a counter and adds a pause before each next attempt: 0.5 sec before attempt #2, 1 sec before attempt #3, ... 60 sec before attempt #7, etc. If an attempt is successful, the counter should be cleared
- Expected outcome - circuit breaker pattern is implemented
- (mod-inn-reach) Error processing - DLQ implementation
- The Dead Letter Queue (DLQ) is aimed at logging failed events
- Expected outcome - DLQ behavior is additionally discussed, confirmed with PO and implemented
- (mod-inn-reach) Conduct Loading testing for Initial Record Contribution
- need to know performance metrics: how many instances / items can be contributed per minute / hour; how long it takes to contribute 1M records
- Expected outcome - baseline performance metrics are available
- (mod-inn-reach) Support update only contribution
- AC: Jobs defined as "update only" only update the itemCount and titleHoldCount for Bibs already contributed
- (mod-inn-reach) Logging
- AC: All FOLIO instance ids / items ids that are not successfully processed or contributed as Bibs due to errors are included in an array in the generated JSON log file
- AC: Errors returned by the central server when attempting to contribute or update an item are logged in the job log
- Job log should be retrievable via API in JSON format
- Job log should include an array of FOLIO inventory IDs that experienced errors during processing or contribution
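The backoff schedule proposed in the circuit-breaker item above can be sketched as follows (illustrative Python; the schedule values come from the text, the function name is an assumption):

```python
# Pause schedule in seconds, as proposed in the circuit-breaker item:
# 0.5 s before attempt #2, 1 s before attempt #3, ..., 60 s before attempt #7.
BACKOFF_SCHEDULE = [0.5, 1, 5, 10, 30, 60]

def next_pause(failure_count: int) -> float:
    """Pause to apply before the next attempt, given how many consecutive
    failures have occurred. The counter is cleared after a success, so a
    count of 0 means no pause; once the schedule is exhausted the last
    (longest) pause keeps being used."""
    if failure_count <= 0:
        return 0.0
    index = min(failure_count, len(BACKOFF_SCHEDULE)) - 1
    return BACKOFF_SCHEDULE[index]
```

Capping at the last schedule entry keeps a persistently failing external API from growing the pause without bound while still throttling retries.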