The data-import process currently consists of a few stages: the uploaded file is split into chunks, records from each chunk are parsed and saved to storage as Source Records, mapped to Instances, saved to Inventory, and the corresponding instanceIds are set on the Source Records. The chunk size and the number of chunks processed simultaneously can be changed (by default they are 50 and 10, respectively).
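To make the chunking model concrete, here is a minimal Java sketch of the same idea: records are partitioned into chunks of 50 and up to 10 chunks are processed at a time. This illustrates the processing model only and is not the actual module code; the class name and the processChunk body are placeholders.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ChunkedImportSketch {

    static final int CHUNK_SIZE = 50;        // records per chunk (default)
    static final int CONCURRENT_CHUNKS = 10; // chunks processed simultaneously (default)

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the parsed upload: one string per raw record.
        List<String> records = new ArrayList<>();
        for (int i = 0; i < 30_000; i++) {
            records.add("record-" + i);
        }

        // A fixed pool bounds how many chunks are in flight at once.
        ExecutorService pool = Executors.newFixedThreadPool(CONCURRENT_CHUNKS);
        for (int start = 0; start < records.size(); start += CHUNK_SIZE) {
            List<String> chunk =
                    records.subList(start, Math.min(start + CHUNK_SIZE, records.size()));
            pool.submit(() -> processChunk(chunk));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // Placeholder for the per-chunk pipeline: parse, save Source Records,
    // map to Instances, save to Inventory, set instanceIds back.
    static void processChunk(List<String> chunk) {
        System.out.println("processed chunk of " + chunk.size() + " records");
    }
}
```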
The actual data import starts at the point when the file is uploaded. Right now, it can only be triggered by the so-called "secret" button, which starts the default job to import MARC bibliographic records into SRS and create the associated Inventory instances. The button calls the POST endpoint /data-import/uploadDefinitions/{uploadDefinitionId}/processFiles.
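For reference, the endpoint can also be called directly. The sketch below uses Java's built-in HTTP client; the Okapi URL, tenant, token, upload-definition id, and the exact shape of the request body are placeholders/assumptions and should be checked against the mod-data-import API definition.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProcessFilesCall {

    public static void main(String[] args) throws Exception {
        String okapiUrl = "https://okapi.example.org";                       // placeholder Okapi URL
        String uploadDefinitionId = "11111111-1111-1111-1111-111111111111";  // placeholder id
        String tenant = "diku";                                              // placeholder tenant
        String token = "okapi-token";                                        // placeholder token

        // Request body sketch: the payload is assumed to carry the upload
        // definition and the job profile to run; see the module's API schema
        // for the exact shape.
        String body = "{ \"uploadDefinition\": { }, \"jobProfileInfo\": { } }";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(okapiUrl + "/data-import/uploadDefinitions/"
                        + uploadDefinitionId + "/processFiles"))
                .header("Content-Type", "application/json")
                .header("X-Okapi-Tenant", tenant)
                .header("X-Okapi-Token", token)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("status: " + response.statusCode());
    }
}
```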
An import is considered finished when all the chunks have been processed successfully or marked as ERROR, the appropriate JobExecution status has been set, and the file is visible in the logs section of the UI.
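One way to detect that terminal state programmatically is to poll the job execution status. The endpoint path and the status values in this sketch are assumptions based on mod-source-record-manager's change-manager API and may differ between releases.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JobExecutionPoller {

    public static void main(String[] args) throws Exception {
        String okapiUrl = "https://okapi.example.org";                      // placeholder
        String jobExecutionId = "22222222-2222-2222-2222-222222222222";     // placeholder
        HttpClient client = HttpClient.newHttpClient();

        while (true) {
            HttpRequest request = HttpRequest.newBuilder()
                    // Endpoint path is an assumption; verify against the module's API.
                    .uri(URI.create(okapiUrl + "/change-manager/jobExecutions/" + jobExecutionId))
                    .header("X-Okapi-Tenant", "diku")       // placeholder tenant
                    .header("X-Okapi-Token", "okapi-token") // placeholder token
                    .GET()
                    .build();
            String json = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Crude status check; a real client would parse the JSON properly.
            // COMMITTED / ERROR as terminal statuses is an assumption.
            if (json.contains("\"COMMITTED\"") || json.contains("\"ERROR\"")) {
                System.out.println("import finished: " + json);
                break;
            }
            Thread.sleep(5_000); // poll every 5 seconds
        }
    }
}
```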
Files used for testing:
- msplit30000.mrc: 30,000 raw MARC bibliographic records
- RecordsForSRS_20190322.json: 28,306 MARC records in JSON format
Performance was measured on https://folio-snapshot-load.aws.indexdata.com with the default chunk size and queue size parameters (50 and 10, respectively). It consistently takes about 8 minutes to load each of the files, which works out to roughly 16-17 seconds per 1,000 records (480 s for 28,306-30,000 records).
Data-import performance was also tested locally on the folio-testing-backend Vagrant box, version 5.0.0-20190619.2334 (16 GB of RAM), with mod-source-record-manager and mod-data-import deployed additionally. Each module, running in a Docker container, was allocated 256 MB of JVM heap memory.
Results are shown below; default values are highlighted. On average, it takes about 25 seconds per 1,000 raw MARC records and about 22 seconds per 1,000 JSON records.