Ingesting Instances Via Async Channels

This document illustrates how instances can be created or updated via asynchronous channels (Kafka). There are emerging use cases where instances need to be synced with other objects in bulk, with stronger delivery guarantees than the HTTP endpoints provide and with looser coupling between the systems involved.

Event Model

| Property | Type | Required |
| --- | --- | --- |
| id | String | Yes |
| eventType | String | Yes |
| eventMetadata | EventMetadata (can be derived from RAML or manually defined in mod-inventory) | Yes |
| eventPayload | JSON Object | No |
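The event model above can be sketched as a Java record. This is a minimal illustration, not the module's actual class: field names follow the table, but the Java types (in particular the stand-in for EventMetadata) are assumptions.

```java
import java.util.Map;

// Sketch of the event envelope from the table above. Field names follow the
// event model; the Map-based types are assumptions for illustration only.
public class EnvelopeSketch {

    // eventPayload is optional per the table, so it may be null.
    record InstanceIngressEvent(String id,
                                String eventType,
                                Map<String, Object> eventMetadata, // stand-in for EventMetadata
                                Map<String, Object> eventPayload) {
    }

    public static void main(String[] args) {
        var event = new InstanceIngressEvent(
                "3f6a2b1c-0000-0000-0000-000000000000", // hypothetical event id
                "CREATE_INSTANCE",
                Map.of("tenantId", "diku"),             // assumed metadata shape
                Map.of("sourceType", "LINKED_DATA"));
        System.out.println(event.eventType());
    }
}
```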

The eventPayload will contain the following properties:

| Property | Type | Valid Values | Required |
| --- | --- | --- | --- |
| sourceRecordIdentifier | String | NA | No |
| sourceRecordObject | JSON Object | NA | Yes |
| sourceType | String | LINKED_DATA, MARC, FOLIO | Yes |
| additionalProperties | JSON Object | NA | No |
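The required/valid-values rules in the payload table can be sketched as a simple validation check. This is an illustrative helper, not the module's actual validation code; the method name is hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Sketch of payload validation per the table above: sourceRecordObject and
// sourceType are required, and sourceType must be one of the listed values.
public class PayloadValidation {

    static final Set<String> VALID_SOURCE_TYPES = Set.of("LINKED_DATA", "MARC", "FOLIO");

    static boolean isValidPayload(Map<String, Object> payload) {
        return payload.get("sourceRecordObject") != null
                && payload.get("sourceType") instanceof String s
                && VALID_SOURCE_TYPES.contains(s);
    }

    public static void main(String[] args) {
        System.out.println(isValidPayload(Map.of(
                "sourceRecordObject", Map.of(),
                "sourceType", "LINKED_DATA")));
    }
}
```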

Event Types

There will be two major event types:

  • CREATE_INSTANCE: Represents the creation of an instance.

    • If sourceRecordIdentifier is populated, the instance will be created with that identifier.

  • UPDATE_INSTANCE: Represents the update of an instance.

    • sourceRecordIdentifier is required and denotes which instance to update. Its value should map to a valid Inventory instance identifier.

    • It should not be possible to switch the source of the instance being updated (with the one exception described under "source=MARC to source=LINKED_DATA" below).
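The identifier rule for the two event types can be sketched as a small check. The method name is hypothetical; it only encodes the rules stated above (identifier optional for create, required for update).

```java
// Sketch of the event-type rules above: CREATE_INSTANCE may carry a
// sourceRecordIdentifier, while UPDATE_INSTANCE must carry one.
public class EventTypeRules {

    static boolean identifierRuleSatisfied(String eventType, String sourceRecordIdentifier) {
        return switch (eventType) {
            case "CREATE_INSTANCE" -> true;                           // identifier optional
            case "UPDATE_INSTANCE" -> sourceRecordIdentifier != null; // identifier required
            default -> false;                                         // unknown event type
        };
    }

    public static void main(String[] args) {
        System.out.println(identifierRuleSatisfied("UPDATE_INSTANCE", null)); // false
    }
}
```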

For the source record payload, there are some requirements:

sourceType=LINKED_DATA

It is expected that sourceRecordObject will be a JSON representation of a MARC file. The MARC file will be saved in mod-source-record-storage, and an Instance object with source=LINKED_DATA will be saved in mod-inventory-storage.
source=LINKED_DATA implies that the source of truth is the Linked Data service. In the additionalProperties object, a key-value pair will carry the identifier of the linked data resource: the key will be linkedDataId and the value will be a Long (Java) / bigint (PostgreSQL).

The source record in SRS should have the following identifiers:

  • MARC

    • 999ff$i will have the Inventory instance Identifier

    • 999ff$s will have the SRS source record identifier

    • 999ff$l will have the linked data identifier

    • 035$a will have the linked data identifier prepended with (ld)

  • External ID Holder will include the linked data identifier
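The identifier layout above can be sketched as simple helpers. These are illustrative only (the names are hypothetical, and a real implementation would write actual MARC fields rather than a map), but they encode the subfield assignments and the "(ld)" prefix convention.

```java
import java.util.Map;

// Sketch of the SRS identifier conventions listed above.
public class SrsIdentifiers {

    // 999 ff subfield layout: $i = Inventory instance id, $s = SRS record id,
    // $l = linked data id. Parameter names are hypothetical.
    static Map<String, String> marc999ff(String instanceId, String srsRecordId, long linkedDataId) {
        return Map.of(
                "i", instanceId,
                "s", srsRecordId,
                "l", Long.toString(linkedDataId));
    }

    // 035 $a carries the linked data identifier prepended with "(ld)".
    static String to035a(long linkedDataId) {
        return "(ld)" + linkedDataId;
    }

    public static void main(String[] args) {
        System.out.println(to035a(12345L)); // (ld)12345
    }
}
```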

source=MARC to source=LINKED_DATA

It will be possible for an instance with source=MARC to be converted to source=LINKED_DATA, but not the other way around. This conversion will only take place when a Kafka update message with sourceType=LINKED_DATA attempts to update a source=MARC instance in Inventory.
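The one-way conversion rule can be sketched as a small predicate. The method name is hypothetical; it only encodes the rule above that MARC → LINKED_DATA is the single permitted source change.

```java
// Sketch of the one-way conversion rule above: MARC -> LINKED_DATA is the
// only permitted source change on update.
public class SourceConversion {

    static boolean sourceChangeAllowed(String currentSource, String incomingSourceType) {
        if (currentSource.equals(incomingSourceType)) {
            return true; // no change of source
        }
        // Only MARC instances may be converted, and only to LINKED_DATA.
        return currentSource.equals("MARC") && incomingSourceType.equals("LINKED_DATA");
    }

    public static void main(String[] args) {
        System.out.println(sourceChangeAllowed("MARC", "LINKED_DATA"));  // true
        System.out.println(sourceChangeAllowed("LINKED_DATA", "MARC"));  // false
    }
}
```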

sourceType=MARC

Not implemented for Ramsons flower release

It is expected that sourceRecordObject will be a JSON representation of a MARC file. The MARC file will be saved in mod-source-record-storage, and an Instance object with source=MARC will be saved in mod-inventory-storage.

sourceType=FOLIO

Not implemented for Ramsons flower release

It is expected that sourceRecordObject will be a JSON representation of an instance, the same as or similar to the request body used when creating instances via HTTP in mod-inventory. Modifications to the HTTP schema should be reflected here as well. An Instance object with source=FOLIO will be saved in mod-inventory-storage.
The same code path that mutates instances via HTTP should also be used for Kafka.

Event Topic

All domain events will be pushed to a Kafka topic named inventory.instance_ingress. The topic name will be prefixed with the environment and the tenantId (or topic consolidation id). A full topic name could be folio.diku.inventory.instance_ingress.
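The naming scheme above can be sketched as a small helper. The method name is hypothetical; it simply joins the environment, the tenantId (or topic consolidation id), and the base topic name to reproduce the example.

```java
// Sketch of the topic-naming scheme above: environment, then tenantId (or
// topic consolidation id), then the base topic name.
public class TopicName {

    static String instanceIngressTopic(String env, String tenantId) {
        return String.join(".", env, tenantId, "inventory.instance_ingress");
    }

    public static void main(String[] args) {
        System.out.println(instanceIngressTopic("folio", "diku")); // folio.diku.inventory.instance_ingress
    }
}
```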