Naming Convention
Kafka Topic
The ENV setting and the tenant id should be used in the topic naming convention. This separates the data of different customers into different Kafka topics, and it also allows a Kafka instance to be shared by multiple environments that have the same tenant id.
The topic name should be concatenated from the following parts (in exactly this order):
- Environment name (from ENV environment variable)
- If the ENV variable is not defined for the environment, the module should fall back to excluding it from the topic name
- Tenant id (it should come second, because that makes it convenient to use wildcards in ACLs for Kafka users)
- Producer name: the name of the module producing the event; the -storage postfix should be omitted
- Domain entity name in singular form (if it is not a domain event, the name of the process or just the event name should be used)
For example, in the bugfest environment (ENV == bugfest), the Kafka topic for inventory instances for tenant fs09000000 should have the following name:
bugfest.fs09000000.inventory.instance
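For illustration, below is a minimal sketch of assembling such a topic name in Java. The helper is hypothetical (it is not an existing FOLIO API), and stripping the mod- prefix together with the -storage postfix is an assumption based on the example above.

  import java.util.StringJoiner;

  // Hypothetical helper: builds "<env>.<tenantId>.<producer>.<entity>",
  // omitting the environment part when the ENV variable is not set.
  static String topicName(String tenantId, String producerModule, String entity) {
    // assumption: "mod-" prefix and "-storage" postfix are stripped,
    // so "mod-inventory-storage" becomes "inventory"
    String producer = producerModule.replaceFirst("^mod-", "").replaceFirst("-storage$", "");
    String env = System.getenv("ENV");
    StringJoiner joiner = new StringJoiner(".");
    if (env != null && !env.isEmpty()) {
      joiner.add(env);
    }
    return joiner.add(tenantId).add(producer).add(entity).toString();
  }

  // topicName("fs09000000", "mod-inventory-storage", "instance")
  // returns "bugfest.fs09000000.inventory.instance" when ENV=bugfest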
Example of topic naming for the Data Import process:
ibf3.Default.fs09000000.DI_RAW_RECORDS_CHUNK_PARSED
We should also consider cases with many producers and a single consumer; the best example is mod-audit: many modules can push their events for audit, while only mod-audit consumes them. We can follow exactly the same convention as stated above, e.g. bugfest.fs09000000.mod-inventory-storage.audit, or we can use the consumer module name instead of the producer module name, e.g. bugfest.fs09000000.mod-audit (though this might be more confusing).
Consumer group
While the proposed topic naming convention is based on the producer module name, consumer group naming is based on the consumer module name. This allows several modules to work with one topic without interfering with each other. Example:
cg.fs09000000.search
cg.fs09000000.innreach
cg.fs09000000.remotestorage
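For illustration, a minimal sketch of a consumer configured this way (the bootstrap server address and topic name are placeholders; the group id follows the cg.<tenant>.<consumer module> layout shown above):

  import java.util.List;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  Properties props = new Properties();
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
  // the group id is derived from the consumer module name (mod-search), not the producer
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "cg.fs09000000.search");
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

  KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
  consumer.subscribe(List.of("bugfest.fs09000000.inventory.instance"));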
Topic partitioning
To avoid consistency problems that can occur due to race conditions in case of concurrent writes to the database, Kafka topics should use an appropriate partition key. It could be the id of the record or some other value that allows events to be segregated between consumer instances.
When multiple instances of a consumer module are deployed, the same consumer group should be set for all of them.
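For illustration, a minimal sketch of publishing with the record id as the partition key, so that all events for the same record land in the same partition and are processed in order by a single consumer instance (the topic name and variables are placeholders):

  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  // producer is a KafkaProducer<String, String> configured elsewhere;
  // instanceId is the id of the changed record: using it as the key guarantees
  // that all events for this record go to the same partition
  ProducerRecord<String, String> record =
      new ProducerRecord<>("bugfest.fs09000000.inventory.instance", instanceId, eventPayloadJson);
  producer.send(record);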
Domain events JSON schema
The module sends notifications when there is any change of domain entities (e.g. instance, holding, item, loan). The pattern means that every time an entity is created/updated/removed, a message is posted to the corresponding Kafka topic.
The event payload should have the following structure:
{ "old": {...}, // the entity state before update or delete "new": {...}, // the entity state update or create "type": "UPDATE|DELETE|CREATE|DELETE_ALL", // type of the event "tenant": "diku", // tenant id "ts": "1625832841003" // timestamp of the event }
The X-Okapi-Url and X-Okapi-Tenant headers could be set on the Kafka message from the incoming request.
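For illustration, a minimal sketch of producing a domain event and propagating these headers onto the Kafka record (the payload is built by hand here, the variables are placeholders, and the header calls are the standard Kafka client API):

  import java.nio.charset.StandardCharsets;
  import org.apache.kafka.clients.producer.ProducerRecord;

  String payload = """
      {
        "old": null,
        "new": %s,
        "type": "CREATE",
        "tenant": "%s",
        "ts": "%d"
      }""".formatted(newInstanceJson, tenantId, System.currentTimeMillis());

  ProducerRecord<String, String> record =
      new ProducerRecord<>("bugfest.fs09000000.inventory.instance", instanceId, payload);
  // copy the Okapi context from the incoming HTTP request into the message headers
  record.headers().add("X-Okapi-Url", okapiUrl.getBytes(StandardCharsets.UTF_8));
  record.headers().add("X-Okapi-Tenant", tenantId.getBytes(StandardCharsets.UTF_8));
  producer.send(record);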
Domain events for delete all APIs
In order to clean up data for some tenant, there could be delete-all APIs for records. For such APIs a special domain event is issued:
- Partition key:
00000000-0000-0000-0000-000000000000
- Event payload:
{ "type": "DELETE_ALL", "tenant": "<the tenant name>" }
Other items and aspects
Event schema versioning
Q: Does it make sense to include an explicit version field in domain event schemas?
A: Potentially, this would make sense. However, the basic schema is currently so straightforward that there is very little chance of changes. Also, adding new fields later should not be a breaking change.
Retention policy
Q: Retention policy recommendations - should a particular cross-platform standard be recommended?
A: A retention policy is definitely required. At the same time, taking into account the diversity of potential needs across FOLIO modules, it would be more flexible to allow retention policy configuration at the topic level. The Kafka default value for retention (retention.ms, see https://docs.confluent.io/platform/current/installation/configuration/topic-configs.html#topicconfigs_retention.ms), which is 7 days, seems to be more than enough. One can even think about decreasing the recommendation to 2 days (though there are no strong arguments behind this).
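For illustration, a minimal sketch of setting retention at the topic level when creating a topic via the Kafka AdminClient (partition and replication counts are placeholders; retention.ms is the standard topic configuration property):

  import java.util.List;
  import java.util.Map;
  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.NewTopic;

  static void createTopicWithRetention() throws Exception {
    try (AdminClient admin = AdminClient.create(Map.of("bootstrap.servers", "kafka:9092"))) {
      NewTopic topic = new NewTopic("bugfest.fs09000000.inventory.instance", 50, (short) 3)
          // 2 days in milliseconds; the Kafka default of 7 days is also acceptable
          .configs(Map.of("retention.ms", String.valueOf(2L * 24 * 60 * 60 * 1000)));
      admin.createTopics(List.of(topic)).all().get();
    }
  }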
Obsolete topic deletion
Q: Should there be a mechanism for automated deletion of obsolete topics? How should obsolescence be determined?
A: There is no out-of-the-box tool at the moment. Two options for further discussion:
- development teams keep a list of obsolete topics, and cleanup is performed on some schedule,
- an attempt is made to delete topics on every re-deploy, and new up-to-date topics are created once FOLIO is deployed (currently, autocreate=true); non-empty topics cannot be deleted, so they should be logged and become a subject for further analysis.
Events type cataloging
Q: As per our separate discussion with Jakub Skoczen - "it would be great ... to specify how the Producers (and possibly Consumers) will catalog what kind of events they publish. I am assuming that the number of events will grow quite quickly once this proposal becomes a feature in FOLIO. I would propose two alternatives:
- extend the ModuleDescriptor with a section on "events" or "messages", ensure that this section is kept up to date with the type of events published in a specific version of a given module
- introduce a new type of descriptor, e.g. MessageDescriptor or EventDescriptor, that will capture this information"
A:
Error handling
Being an async messaging system, Kafka does not provide immediate feedback on processing results. What should be done if a consumer encounters an event it cannot process?
- Stop-the-world processing - stop any further processing until analysis and clarification,
- Dead Letter Queue (optionally with configurable retries, compaction on Kafka, etc.); see the sketch below.
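A minimal sketch of the Dead Letter Queue option, assuming a hypothetical per-consumer DLQ topic (bugfest.fs09000000.search.dlq) and a module-specific process() method; consumer and producer are configured elsewhere:

  import java.time.Duration;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.producer.ProducerRecord;

  for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
    try {
      process(rec.value());  // module-specific event processing
    } catch (Exception e) {
      // forward the unprocessable event to the DLQ topic instead of stopping the world
      producer.send(new ProducerRecord<>("bugfest.fs09000000.search.dlq", rec.key(), rec.value()));
    }
  }
  consumer.commitSync();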