The data-import process currently consists of a few stages: the uploaded file is split into chunks, records from each chunk are parsed, saved to storage as Source Records, mapped to Instances, and saved to Inventory; the corresponding instanceIds are then set on the Source Records. Chunk size and the number of chunks processed simultaneously are configurable (the defaults are 50 and 10, respectively).
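To make the stages concrete, below is a minimal sketch of the chunk-and-process loop. It is not the module's actual code: the class, record representation, and `processChunk` body are illustrative, and only the two defaults (50 records per chunk, 10 concurrent chunks) come from the text above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ChunkedImportSketch {

  static final int CHUNK_SIZE = 50;             // records per chunk (default from the docs)
  static final int MAX_CONCURRENT_CHUNKS = 10;  // chunks processed simultaneously (default from the docs)

  public static void main(String[] args) throws InterruptedException {
    // Stand-in for the parsed contents of an uploaded file
    List<String> rawRecords = new ArrayList<>();
    for (int i = 0; i < 230; i++) {
      rawRecords.add("record-" + i);
    }

    // Bounded pool caps how many chunks are in flight at once
    ExecutorService pool = Executors.newFixedThreadPool(MAX_CONCURRENT_CHUNKS);
    for (int start = 0; start < rawRecords.size(); start += CHUNK_SIZE) {
      List<String> chunk =
          rawRecords.subList(start, Math.min(start + CHUNK_SIZE, rawRecords.size()));
      pool.submit(() -> processChunk(chunk));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }

  static void processChunk(List<String> chunk) {
    // Per the stages described above, each chunk would be:
    // 1. parsed, 2. saved as Source Records, 3. mapped to Instances,
    // 4. saved to Inventory, 5. have instanceIds written back to the Source Records.
    System.out.println("processed chunk of " + chunk.size() + " records");
  }
}
```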
...
The actual data import starts once a file has been uploaded and processing is triggered. Right now, that can only be done by pressing the so-called "secret" button, which starts a default job to import MARC bibliographic records into SRS and create associated Inventory instances. Under the hood, this calls the POST endpoint /data-import/uploadDefinitions/{uploadDefinitionId}/processFiles.
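For reference, here is a sketch of triggering that endpoint directly. The URL path is taken from the text; everything else (the Okapi gateway address, tenant, token handling, and especially the request-body shape) is an assumption — consult the module's RAML/JSON schemas for the exact payload. The `X-Okapi-Tenant` and `X-Okapi-Token` headers are the standard Okapi headers.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProcessFilesCall {
  public static void main(String[] args) throws Exception {
    String okapiUrl = "http://localhost:9130";  // assumed Okapi gateway address
    String uploadDefinitionId = "11111111-1111-1111-1111-111111111111"; // placeholder id
    String token = System.getenv().getOrDefault("OKAPI_TOKEN", "<token>");

    // Illustrative payload only; the real schema is defined in the module's RAML
    String body = "{ \"uploadDefinition\": { \"id\": \"" + uploadDefinitionId + "\" } }";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(okapiUrl + "/data-import/uploadDefinitions/"
            + uploadDefinitionId + "/processFiles"))
        .header("Content-Type", "application/json")
        .header("X-Okapi-Tenant", "diku")   // example tenant
        .header("X-Okapi-Token", token)     // auth token obtained at login
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println("status: " + response.statusCode());
  }
}
```

A successful call kicks off the default job asynchronously, so the HTTP response only acknowledges that processing has started; progress is tracked through the job execution records rather than this response.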
...