Background
Authority records remapping was implemented for mod-marc-migrations according to the "Authority remapping" design.
Currently, vertical scaling is supported by increasing the chunk size, the chunk-processing parallelism, and the resources allocated to the module. In the current implementation, chunk data is prepared and read sequentially; only remapping, saving to file, and the related DB queries are done in parallel. All files are uploaded to external storage when the job ends.
For now, only one migration job can run at a time; this is limited on purpose until the MODMARCMIG-12 issue is solved.
Theoretically, if two app instances exist, they could process two jobs simultaneously, but only if the load balancer routes the second request to the second app instance.
Purpose
Support distribution of chunk processing between app instances.
Solution Options
Option 1. Spring Batch Remote Partitioning
Manager + worker on the same instance does not work properly, and running the manager and workers on different instances is not possible with the current deployment approach.
Overview
Remote partitioning using Spring Batch Integration https://docs.spring.io/spring-batch/reference/spring-batch-integration/sub-elements.html#remote-partitioning with Kafka as the transport.
With this approach we have a batch job "manager" that constructs chunks when a job is submitted and sends the chunk metadata to Kafka. Consumers (batch job "workers") read and process the chunks, write/upload the file, and return processing-result metadata to Kafka, which the "manager" later consumes to complete the job.
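As an illustration of what the manager side could publish, the sketch below splits a record ID range into chunk messages. The `ChunkMessage` record and `splitIntoChunks` helper are hypothetical names for this document, not the module's actual API:

```java
// Hypothetical sketch: the metadata a manager could send to Kafka so that a
// worker can process one chunk independently. Names are illustrative.
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class ChunkPartitioningSketch {

    /** Everything a worker needs to process one chunk on its own. */
    record ChunkMessage(UUID jobId, long fromRecordId, long toRecordId) {}

    /** Manager side: split the full record-id range into fixed-size chunks. */
    static List<ChunkMessage> splitIntoChunks(UUID jobId, long totalRecords, long chunkSize) {
        List<ChunkMessage> chunks = new ArrayList<>();
        for (long start = 0; start < totalRecords; start += chunkSize) {
            long end = Math.min(start + chunkSize, totalRecords) - 1;
            chunks.add(new ChunkMessage(jobId, start, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 2500 records with chunk size 1000 -> 3 chunks: [0..999], [1000..1999], [2000..2499]
        List<ChunkMessage> chunks = splitIntoChunks(UUID.randomUUID(), 2_500, 1_000);
        System.out.println(chunks.size());
        System.out.println(chunks.get(2).toRecordId());
    }
}
```

Each message is self-contained, so any worker instance that consumes it can process the chunk without coordinating with the manager until the result is reported back.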
Provides horizontal scaling within the scope of a single job.
Notes/pitfalls
There is currently a GitHub issue https://github.com/spring-projects/spring-batch/issues/4133 related to running multiple jobs simultaneously. The issue reproduces (confirmed with a POC). It may be avoided by limiting job execution to one at a time.
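A minimal sketch of such a limit, assuming the application owns the job-launch entry point (the `SingleJobGuard` class and its method names are illustrative, not existing code):

```java
// Sketch: allow at most one running job by guarding the launch entry point
// with a single-permit semaphore. Illustrative only.
import java.util.concurrent.Semaphore;

public class SingleJobGuard {
    private final Semaphore permit = new Semaphore(1);

    /** Returns true if the job was accepted, false if another job is already running. */
    boolean tryStart(Runnable job) {
        if (!permit.tryAcquire()) {
            return false; // reject: one job at a time until the Spring Batch issue is resolved
        }
        try {
            job.run();
        } finally {
            permit.release();
        }
        return true;
    }

    public static void main(String[] args) {
        SingleJobGuard guard = new SingleJobGuard();
        boolean accepted = guard.tryStart(() ->
            // a second launch attempted while a job is running is rejected
            System.out.println(guard.tryStart(() -> {})));
        System.out.println(accepted);
    }
}
```

A rejected request could be translated into an HTTP 409 (or similar) so the caller knows another migration is in progress.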
If chunks are submitted to Kafka, parallel chunk processing within one instance would require concurrent consumption.
The "manager" and "worker" are supposed to be separate app instances, e.g. configured via profiles. It would probably be acceptable to have one manager plus one worker per app instance; however, this turns out NOT to be OK: the Spring Batch job execution keeps running on the manager that started it and never completes if some worker responses are consumed by a different manager.
With a single app instance, remote partitioning will most likely be slower than the current solution, so a profile or environment variable should probably be added to enable remote partitioning only when multiple app instances exist; otherwise the currently implemented approach is used. This may become a maintenance problem, though it is probably just a configuration question.
There is a problem with proper handling of FOLIO execution contexts: it is easy to put the context data into Kafka headers, but no way has been found so far to execute a step within the FOLIO context on the worker side. As a workaround, start the context early in the worker and end it only after the response is sent; this should work fine as long as only one tenant's job is launched at a time.
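The workaround can be sketched as follows. The `FolioContext` class below is a stub standing in for folio-spring's execution-context machinery, not the real API; the point is only the ordering of begin/process/respond/end:

```java
// Illustrative sketch of the workaround: open the tenant context before the
// worker step runs and close it only after the response is sent.
public class WorkerContextSketch {

    /** Stub for a thread-bound tenant context (not the real folio-spring API). */
    static class FolioContext {
        private static final ThreadLocal<String> TENANT = new ThreadLocal<>();
        static void begin(String tenantId) { TENANT.set(tenantId); }
        static String currentTenant() { return TENANT.get(); }
        static void end() { TENANT.remove(); }
    }

    /** Worker side: wrap chunk processing and response publishing in one context. */
    static void handleChunk(String tenantId, Runnable processChunk, Runnable sendResponse) {
        FolioContext.begin(tenantId); // start the context early, before the batch step runs
        try {
            processChunk.run();
            sendResponse.run(); // the response is sent while the context is still open
        } finally {
            FolioContext.end(); // end only after the response was sent
        }
    }

    public static void main(String[] args) {
        handleChunk("diku",
            () -> System.out.println("processing as " + FolioContext.currentTenant()),
            () -> System.out.println("responding as " + FolioContext.currentTenant()));
        System.out.println("after: " + FolioContext.currentTenant());
    }
}
```

The thread-local nature of the stub also illustrates why this only works cleanly with one tenant job at a time: concurrent jobs for different tenants on the same worker would need per-message context switching instead.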
Spring Batch also brings additional database operations that duplicate logic already present in the feature design (migration/job statuses, chunks, etc.), which brings us to the other solution ideas.
Required effort
We will probably need two separate app deployments: one deployment with a single instance for the manager and one deployment with multiple instances for the workers.
On the development side: mostly Spring Batch configuration, some changes to existing code, and the addition of Kafka.
Option 2. Async Spring Batch job start
Overview
After a migration operation is created in the DB, do either of the following:
send a Kafka message about the operation creation so that some other app instance can start/perform the Spring Batch job. Multiple jobs may end up in the same Kafka partition, which would cause jobs to be stuck in a queue while other app instances are idle
have a scheduled job that checks for created operations, changes their status to "in progress", and runs a Spring Batch job. Requires synchronization on the operation and a scheduler that checks the DB for each tenant.
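The scheduler variant could look roughly like the sketch below, where in-memory maps stand in for per-tenant DB schemas and an atomic status change models the required synchronization on the operation (all names are illustrative):

```java
// Sketch of the per-tenant scheduler: walk all tenant schemas, atomically
// claim a CREATED operation, and "run" the job for it. Illustrative only.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OperationSchedulerSketch {

    enum Status { CREATED, IN_PROGRESS }

    // tenant -> (operationId -> status); stands in for per-tenant DB schemas
    static final Map<String, Map<String, Status>> DB = new ConcurrentHashMap<>();

    /** One scheduler tick: claim and start any CREATED operation per tenant. */
    static void tick(List<String> tenants) {
        for (String tenant : tenants) {
            Map<String, Status> ops = DB.getOrDefault(tenant, Map.of());
            for (var entry : ops.entrySet()) {
                // atomic claim, modelling "UPDATE ... SET status='IN_PROGRESS' WHERE status='CREATED'"
                if (ops.replace(entry.getKey(), Status.CREATED, Status.IN_PROGRESS)) {
                    System.out.println(tenant + ": started " + entry.getKey());
                }
            }
        }
    }

    public static void main(String[] args) {
        DB.put("tenant_a", new ConcurrentHashMap<>(Map.of("op-1", Status.CREATED)));
        DB.put("tenant_b", new ConcurrentHashMap<>(Map.of("op-2", Status.IN_PROGRESS)));
        tick(List.of("tenant_a", "tenant_b"));
        System.out.println(DB.get("tenant_a").get("op-1"));
    }
}
```

The compare-and-set style claim is what prevents two app instances from picking up the same operation when their schedulers tick at the same time; in a real DB this would be a conditional UPDATE or row lock.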
Provides horizontal scaling in the scope of multiple jobs.
Required effort
In the Kafka case: add producing/consuming; minimal changes to the existing codebase.
In the scheduler case: add a client for tenant retrieval and a scheduler that walks through all tenant schemas; minimal changes to existing code.
Option 3. Async processing without Spring Batch
Overview
After chunk objects are constructed, send them (or a lightweight version containing only the required info) to Kafka. Each chunk can then be processed by a different app instance.
Requires some mechanism to finish the migration: either check the DB for the total/processed number of records after processing each chunk, or have some service create a scheduled job that cancels itself once the migration is finished, by checking the database periodically, e.g. as demonstrated in the "Scheduled self-cancelling task" example.
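The self-cancelling check could be sketched like this; the atomic counter simulates the DB progress query, and all names are illustrative rather than actual module code:

```java
// Sketch: a periodic task that polls (simulated) migration progress and
// cancels itself once all records are processed. Illustrative only.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

public class SelfCancellingCheck {

    public static void main(String[] args) throws Exception {
        int totalRecords = 3;
        AtomicInteger processed = new AtomicInteger();      // stands in for a DB count query
        CountDownLatch finished = new CountDownLatch(1);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicReference<Future<?>> self = new AtomicReference<>();

        self.set(scheduler.scheduleAtFixedRate(() -> {
            int done = processed.incrementAndGet();         // simulate chunk progress
            if (done >= totalRecords) {
                System.out.println("migration finished");
                Future<?> me = self.get();
                if (me != null) {
                    me.cancel(false);                       // the task cancels itself
                }
                finished.countDown();
            }
        }, 20, 20, TimeUnit.MILLISECONDS));

        finished.await(5, TimeUnit.SECONDS);
        scheduler.shutdown();
        System.out.println("cancelled: " + self.get().isCancelled());
    }
}
```

In the real module the task body would run a `SELECT` comparing processed vs. total record counts instead of incrementing a counter, but the cancellation pattern is the same.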
Provides horizontal scaling in the scope of one job and could also provide horizontal scaling in the scope of multiple jobs. Clearer/more isolated FOLIO context interactions. No additional DB calls related to Spring Batch.
Required effort
Add Kafka setup
Implement DB checks for chunk-processing completion
Remove Spring Batch logic
Rearrange the processing logic that was tied together by Spring Batch; it now needs to run in some service
Option 4. Separate Spring Batch job for each chunk
It makes no sense to keep Spring Batch logic just for individual chunk processing.
Overview
Similar approach to Option 3, but preserving most of the Spring Batch logic.
After chunk preparation, send the chunk (or its ID) to Kafka; consumers start a Spring Batch job for each chunk, preserving most of the Spring Batch read/process/write logic.
Requires the same migration-completion mechanism as Option 3.
Provides horizontal scaling in the scope of one job and could also provide horizontal scaling in the scope of multiple jobs. Clearer/more isolated FOLIO context interactions. Still has Spring Batch DB interactions.
Required effort
Same as the previous option, but instead of removing Spring Batch logic and rearranging processing, change the Spring Batch configuration to have fewer elements and a simpler structure.
Summary
Solution option | Required effort | Benefits | Drawbacks | Accepted
---|---|---|---|---
2 | 3-5 SP | Horizontal scaling for multiple jobs | No horizontal scaling within the scope of one job: if, e.g., 2 jobs are started on two instances and one of them ends earlier, then one instance will be idle while the other still processes the long job | -
3 | 5-8 SP | Horizontal scaling for multiple jobs and within the scope of one job; no Spring Batch DB calls/overhead | Amount of effort required | +