Summary
Because our Data Import application cannot process very large files, one possible solution is to slice a large data import source file into smaller chunks (files) and run a separate Data Import Job for every chunk file.
- Related issue: MODSOURCE-630
To solve the problem of processing very large files, the proposal is to implement two minor features in the Data Import module rather than create a separate utility tool.
Splitting files with a separate tool would not bring the expected reliability to the Data Import process because the file upload step would still be part of it.
The first improvement is to let the Data Import app download the source file from the S3-like storage instead of acting as an upload server for it. The initial stage of Data Import will then look as follows:
- The user uploads a source file to the S3-like storage that is available to the Data Import application.
- The user can list already uploaded files and select which one should be used for processing.
- The user starts the Data Import job and provides the source file's location in the S3-like storage.
- The Data Import application downloads the file from the S3-like storage to the local file system (a download sketch follows this list).
- The Data Import application continues the usual source file processing once the file is downloaded to the local file system of the mod-data-import module.
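As an illustration of the download step, the sketch below uses the AWS SDK for Java v2 against the S3-compatible endpoint; the class, bucket, and key names are assumptions for illustration, not the module's actual API.

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

import java.nio.file.Path;
import java.nio.file.Paths;

public class SourceFileDownloader {

  // Endpoint and credentials are taken from the environment in this sketch.
  private final S3Client s3 = S3Client.create();

  /** Downloads an uploaded source file from the S3-like storage to the local file system. */
  public Path download(String bucket, String key, Path localDir) {
    Path target = localDir.resolve(Paths.get(key).getFileName());
    GetObjectRequest request = GetObjectRequest.builder()
        .bucket(bucket)
        .key(key)
        .build();
    s3.getObject(request, target); // streams the object directly to the target path
    return target;
  }
}
```

Streaming the object straight to a local path avoids buffering the whole file in memory, which matters for the large files this proposal targets.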
This change will make the initial stage of the Data Import application more reliable and prevent a potential denial-of-service (DoS) attack in which a threat actor fills up the disk space. It also eliminates the risk of uncontrolled resource consumption when multiple Data Import file uploads run simultaneously. The existing approach, in which the user uploads a source file directly to the Data Import app, will be preserved for backward compatibility, but the maximum size of files that can be processed this way will be significantly reduced.
The second improvement is to implement the slicing logic for large data import files in the Data Import application as well, rather than in a separate tool.
When the user starts a Data Import Job and provides a source file location, and the file size is greater than the maximum allowed, the Data Import application splits the original file into a number of chunks using a predefined naming scheme and starts a separate Data Import Job for every chunk file created. The chunk files are kept in the S3-like storage as well. The logic for calculating the number of chunks should be configurable, so that every deployment can set values that are reasonable for its environment.
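The snippet below is a minimal slicing sketch. It assumes, for illustration only, that source records are newline-delimited and that a maxRecordsPerChunk value is supplied by configuration; handling of the actual record format, error handling, and uploading the chunk files back to the S3-like storage are left out.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SourceFileSplitter {

  /** Splits sourceFile into chunk files of at most maxRecordsPerChunk records each. */
  public List<Path> split(Path sourceFile, int maxRecordsPerChunk) throws IOException {
    List<Path> chunks = new ArrayList<>();
    try (BufferedReader reader = Files.newBufferedReader(sourceFile)) {
      String record;
      int recordsInChunk = 0;
      int chunkNumber = 0;
      BufferedWriter writer = null;
      while ((record = reader.readLine()) != null) {
        if (writer == null || recordsInChunk == maxRecordsPerChunk) {
          if (writer != null) {
            writer.close();
          }
          chunkNumber++;
          // Illustrative naming scheme: <original name>_<sequential chunk number>
          Path chunk = sourceFile.resolveSibling(sourceFile.getFileName() + "_" + chunkNumber);
          writer = Files.newBufferedWriter(chunk);
          chunks.add(chunk);
          recordsInChunk = 0;
        }
        writer.write(record);
        writer.newLine();
        recordsInChunk++;
      }
      if (writer != null) {
        writer.close();
      }
    }
    return chunks;
  }
}
```

Each returned chunk path would then be uploaded to the S3-like storage and used to start its own Data Import Job.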
These changes separate the source file upload from its processing. The user can perform the potentially long-running file upload beforehand, at any time that suits them, and start the actual Data Import Job at a convenient moment, for example at the end of the working day.
Requirements
Functional requirements
- The max chunk file size or the max number of source records in a chunk file must be configurable.
- Records must be chunked and named based on their sequential order in the original file, e.g. records 1-1000 in chunk file_1, records 1001-2000 in chunk file_2, etc. (a configuration and naming sketch follows this list).
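As an illustration of these requirements, the sketch below shows a hypothetical configuration lookup and the sequential chunk naming scheme; the property name and default value are assumptions, not an agreed interface.

```java
public class ChunkingConfig {

  /** Maximum number of source records per chunk file (hypothetical property and default). */
  public static int maxRecordsPerChunk() {
    return Integer.parseInt(System.getProperty("data.import.chunk.max-records", "1000"));
  }

  /** Builds a chunk file name from the original file name and the 1-based chunk index. */
  public static String chunkFileName(String originalFileName, int chunkIndex) {
    return originalFileName + "_" + chunkIndex; // e.g. records.mrc_2 holds records 1001-2000
  }
}
```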
Implementation
The solution will be implemented as part of the mod-data-import module.
High-level operation overview