...
The existing renewal process is too slow. However, having a batch perform one operation at a time has also proven problematic. The suggested design is to have the batch queue up successful renewals and, once it has accrued a designated number of them, submit them to the storage module as a sub-batch.
The business logic can perform a pre-process step that checks whether any of the renewals would fail for business logic reasons. Hold all renewals that pass the business logic checks and, when the sub-batch size is reached, make a request to the storage module using the sub-batch. This is potentially problematic in that if something fails in the sub-batch, the entire sub-batch might be considered a failure. If this is undesirable, a sub-batch size of 1 must be used. This approach would require a bulk update endpoint to be available from the storage module. A sketch of this flow follows below.
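The following is a minimal sketch of the pre-process and sub-batch flow, assuming a passes_business_rules check, a storage_module object exposing the proposed bulk update endpoint as bulk_update, and a configurable SUB_BATCH_SIZE; these names are illustrative assumptions, not an existing API.

    SUB_BATCH_SIZE = 500  # set to 1 if per-renewal failure isolation is required

    def passes_business_rules(renewal):
        # Placeholder for the business logic pre-process check.
        return renewal.get("status") == "eligible"

    def process_renewals(renewals, storage_module):
        sub_batch = []
        for renewal in renewals:
            # Pre-process: drop renewals that would fail for business logic reasons.
            if not passes_business_rules(renewal):
                continue
            sub_batch.append(renewal)
            # When the sub-batch size is reached, submit it to the storage
            # module as a single bulk update request.
            if len(sub_batch) >= SUB_BATCH_SIZE:
                storage_module.bulk_update(sub_batch)
                sub_batch = []
        # Submit any remaining renewals that did not fill a complete sub-batch.
        if sub_batch:
            storage_module.bulk_update(sub_batch)

With SUB_BATCH_SIZE set to 1, each renewal is submitted individually, trading throughput for the per-renewal failure isolation described above.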
Here, a sub-batch is a subset of renewals that have passed the business logic processing. The term refers not to the overall batch but to a generic collection of renewals awaiting submission.
To restrict memory consumption there should be a maximum batch size of 10,000. When more than the maximum batch size is sent to the API endpoints, an HTTP 413 "Payload Too Large" should be returned immediately, without processing anything.
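A minimal sketch of that size guard, assuming the endpoint receives the batch as a list of renewals; the function name and return shape are illustrative only.

    from http import HTTPStatus

    MAX_BATCH_SIZE = 10_000  # upper bound on renewals accepted per request

    def validate_batch_size(renewals):
        # Reject oversized batches immediately, before any processing happens.
        if len(renewals) > MAX_BATCH_SIZE:
            return (HTTPStatus.REQUEST_ENTITY_TOO_LARGE,  # 413 "Payload Too Large"
                    f"Batch of {len(renewals)} exceeds the maximum of {MAX_BATCH_SIZE}")
        return HTTPStatus.OK, "accepted"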
...