Summary

Since the Data Import application cannot process very large files, one possible solution is to split a large data import source file into smaller chunk files and run a separate Data Import Job for each chunk file.
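
For illustration, the sketch below splits a line-delimited source file into chunk files of at most maxRecords records each. The class and method names are hypothetical, and the one-record-per-line assumption is an illustrative simplification: a binary MARC file would need record-boundary-aware splitting rather than line-based splitting.

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SourceFileSplitter {

  // Splits a line-delimited source file into chunk files of at most maxRecords lines each.
  // Assumes one record per line; class and method names are illustrative only.
  public static List<Path> split(Path sourceFile, Path targetDir, int maxRecords) throws IOException {
    List<Path> chunks = new ArrayList<>();
    try (BufferedReader reader = Files.newBufferedReader(sourceFile)) {
      BufferedWriter writer = null;
      String line;
      int recordsInChunk = 0;
      int chunkIndex = 0;
      while ((line = reader.readLine()) != null) {
        // Start a new chunk file when none is open or the current one is full
        if (writer == null || recordsInChunk == maxRecords) {
          if (writer != null) {
            writer.close();
          }
          Path chunk = targetDir.resolve(sourceFile.getFileName() + ".chunk" + (++chunkIndex));
          writer = Files.newBufferedWriter(chunk);
          chunks.add(chunk);
          recordsInChunk = 0;
        }
        writer.write(line);
        writer.newLine();
        recordsInChunk++;
      }
      if (writer != null) {
        writer.close();
      }
    }
    return chunks;
  }
}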

...

Uploading to S3-like storage directly from a FOLIO UI application can be implemented by following this guide: https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/. The initial call to acquire the upload URL must be made by the back-end mod-data-import module.
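
As a sketch of the back-end part of this flow, the snippet below uses the AWS SDK for Java v2 S3Presigner to generate a time-limited presigned PUT URL that mod-data-import could return to the UI. The class name, the bucket and object-key parameters, and the 15-minute expiration are illustrative assumptions, not the module's actual API.

import java.net.URL;
import java.time.Duration;

import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

public class UploadUrlProvider {

  // Generates a presigned PUT URL that the UI can use to upload a chunk file
  // directly to S3-like storage. Bucket, key, and expiration are illustrative.
  public static URL createUploadUrl(String bucket, String objectKey) {
    try (S3Presigner presigner = S3Presigner.create()) {
      PutObjectRequest putRequest = PutObjectRequest.builder()
          .bucket(bucket)
          .key(objectKey)
          .build();

      PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
          .signatureDuration(Duration.ofMinutes(15)) // how long the URL stays valid
          .putObjectRequest(putRequest)
          .build();

      PresignedPutObjectRequest presigned = presigner.presignPutObject(presignRequest);
      return presigned.url();
    }
  }
}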

The diagram below shows the direct upload flow in detail.


An alternative solution is to use an SFTP-enabled server from the AWS Transfer Family service, but it has the following drawbacks:

  • It is an extra service that must be set up and configured
  • It requires additional security configuration (a separate identity provider to manage user access to the SFTP server)
  • The price for US-East-1 is $0.30 per hour per endpoint (~$216 per month) + $0.04 per gigabyte (GB) transferred

Based on the above, direct upload is preferable to using the managed SFTP server.

Simultaneous launch of a large number of Data Import Jobs

To smooth the spike in resource consumption by the mod-data-import module when a large number of Data Import Jobs are started, it is necessary to organize a queue for jobs that cannot be started immediately with the available resources, thereby preventing resource exhaustion.

TODO: Master / Detail Data Import Jobs - collect results based on the master DI Job

The queue should be implemented as a DB table that stores job details. The table must be created in a dedicated schema (not a tenant-specific one) so that job data for all tenants is stored in a single table, which keeps picking up jobs for every tenant simple.
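
A minimal sketch of how such a queue table could be polled is shown below, assuming PostgreSQL. The schema name data_import_global, the job_queue table, its columns, and the status values are all hypothetical; the UPDATE ... RETURNING combined with FOR UPDATE SKIP LOCKED is one possible way to let several module instances claim jobs concurrently without processing the same job twice.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;
import java.util.UUID;
import javax.sql.DataSource;

public class DataImportJobQueueDao {

  private final DataSource dataSource;

  public DataImportJobQueueDao(DataSource dataSource) {
    this.dataSource = dataSource;
  }

  // Simplified view of a queued chunk job waiting for free resources (hypothetical columns).
  public record QueuedJob(UUID id, String tenantId, UUID parentJobId, String chunkFileKey) {}

  // Claims the oldest waiting job across all tenants from the shared (non tenant-specific) schema.
  // FOR UPDATE SKIP LOCKED lets several mod-data-import instances poll the queue concurrently
  // without claiming the same job twice (PostgreSQL-specific).
  public Optional<QueuedJob> claimNextJob() throws SQLException {
    String sql = """
        UPDATE data_import_global.job_queue
        SET status = 'IN_PROGRESS'
        WHERE id = (
          SELECT id FROM data_import_global.job_queue
          WHERE status = 'WAITING'
          ORDER BY created_date
          LIMIT 1
          FOR UPDATE SKIP LOCKED
        )
        RETURNING id, tenant_id, parent_job_id, chunk_file_key
        """;
    try (Connection connection = dataSource.getConnection();
         PreparedStatement statement = connection.prepareStatement(sql);
         ResultSet resultSet = statement.executeQuery()) {
      if (!resultSet.next()) {
        return Optional.empty();
      }
      return Optional.of(new QueuedJob(
          UUID.fromString(resultSet.getString("id")),
          resultSet.getString("tenant_id"),
          UUID.fromString(resultSet.getString("parent_job_id")),
          resultSet.getString("chunk_file_key")));
    }
  }
}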

Result aggregation

To streamline the aggregation of results from the Data Import Jobs that process chunk files, they must be linked to a primary Data Import Job defined at the outset of the operation. That is, before initiating the Data Import Jobs for chunk files, the primary Data Import Job must be created, and the chunk-file jobs linked to it. This makes it possible to retrieve all the logs that pertain to the original source file. All the necessary data structures and relationships are already in place in the Data Import app (mod-source-record-manager).
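
As a sketch of the aggregation step, assuming each chunk job reports its own counters and is linked to the primary (parent) job, the parent job's totals could be computed as shown below. All type and method names here are illustrative, not the existing mod-source-record-manager API.

import java.util.Collection;
import java.util.UUID;

public class ParentJobProgressAggregator {

  // Simplified, illustrative view of a chunk (child) job execution's outcome.
  public record ChunkJobProgress(UUID jobExecutionId, int totalRecords, int failedRecords, boolean completed) {}

  // Aggregated progress reported for the primary (parent) Data Import job.
  public record ParentJobProgress(UUID parentJobId, int totalRecords, int failedRecords, boolean allChunksCompleted) {}

  public ParentJobProgress aggregate(UUID parentJobId, Collection<ChunkJobProgress> chunkProgress) {
    int total = chunkProgress.stream().mapToInt(ChunkJobProgress::totalRecords).sum();
    int failed = chunkProgress.stream().mapToInt(ChunkJobProgress::failedRecords).sum();
    boolean completed = !chunkProgress.isEmpty()
        && chunkProgress.stream().allMatch(ChunkJobProgress::completed);
    return new ParentJobProgress(parentJobId, total, failed, completed);
  }
}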