
Jira Legacy
serverSystem Jira
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-681

Table of Contents

Overview

The Data Import Task Force (DITF) implemented a feature that splits large input MARC files into smaller ones, resulting in smaller jobs, so that big files can be imported reliably and consistently. This document contains the configurations and results of performance tests of the feature, along with an analysis of the feature's performance with respect to the baseline tests. The following Jiras were implemented.

Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-644
Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-645
Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-647
Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-646
Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-671
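
To make the splitting approach concrete, here is a minimal illustrative sketch (not the actual mod-data-import implementation) that chunks a binary MARC file into smaller files of at most RECORDS_PER_SPLIT_FILE records, splitting on the ISO 2709 record terminator byte 0x1D:

```python
# Illustrative sketch only (not the actual mod-data-import implementation):
# chunk a binary MARC file into smaller files of at most
# RECORDS_PER_SPLIT_FILE records each. ISO 2709 MARC records end with the
# record terminator byte 0x1D, so splitting on it keeps records intact.
RECORD_TERMINATOR = b"\x1d"
RECORDS_PER_SPLIT_FILE = 1000  # recommended minimum (see Recommendations below)

def split_marc_file(path: str) -> list[str]:
    """Write chunk files of up to RECORDS_PER_SPLIT_FILE records; return their paths."""
    with open(path, "rb") as f:
        data = f.read()
    # Re-attach the terminator to each record; drop the empty tail after the last 0x1D.
    records = [r + RECORD_TERMINATOR for r in data.split(RECORD_TERMINATOR) if r]
    chunk_paths = []
    for i in range(0, len(records), RECORDS_PER_SPLIT_FILE):
        chunk_path = f"{path}.part{i // RECORDS_PER_SPLIT_FILE + 1}"
        with open(chunk_path, "wb") as out:
            out.write(b"".join(records[i:i + RECORDS_PER_SPLIT_FILE]))
        chunk_paths.append(chunk_path)
    return chunk_paths
```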

Summary

  • The file-splitting feature is stable and makes Data Import jobs more robust, even with the current infrastructure configuration. When failures occur, it is now easier to find the exact failed records and act on them.
    • No stuck jobs in any of the tests performed.
    • There were errors (see below) in some partial jobs, but those jobs still completed, so the entire job status is "Completed with error".
    • Both kinds of imports, MARC BIB create and update, worked well with the file-splitting feature enabled and with it disabled.
  • There is no performance degradation (jobs do not get slower) on single-tenant imports. On multi-tenant imports, performance is slightly better.
  • DI duration correlates with the number of records imported: 100k records - 38 min, 250k - 1 hour 32 min, 500k - 3 hours 29 min (see the quick check after this list).
  • Multitenant DI was performed successfully with up to 9 jobs in parallel. Big jobs start one by one, in order, on each tenant, but are processed in parallel across the 3 tenants. Small DI jobs (1 record) can finish faster, out of order.
  • No memory leak is suspected for any of the modules.
  • Average CPU usage was 144% for mod-inventory and about 107% for mod-di-converter-storage; all other modules did not exceed 100%. Spikes in mod-data-import CPU usage of up to 260% can be observed at the beginning of Data Import jobs. This is a big improvement over the previous version (without file-splitting) for 500K imports, where mod-di-converter-storage's CPU utilization was 462% and other modules were above 100% and up to 150%.
  • DB CPU usage is up to approximately 95%.
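
The quick check referenced above: a simple arithmetic sanity check of the near-linear correlation, using only the durations from the Results table (the throughput figures are derived, not newly measured):

```python
# Create-job durations from the Results table: (records, minutes).
runs = [(100_000, 38), (250_000, 92), (500_000, 209)]
for records, minutes in runs:
    print(f"{records:>7,} records: {records / minutes:,.0f} records/min")
# ~2,632, ~2,717, and ~2,392 records/min: throughput is roughly constant,
# so duration grows approximately linearly with file size.
```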

Recommendations and Jiras

  1. One record on one tenant could be discarded with the error: io.netty.channel.StacklessClosedChannelException.
    Jira Legacy
    serverSystem JIRA
    serverId01505d01-b853-3c2e-90f1-ee9b165564fc
    keyMODDATAIMP-748
    Reproduces both with and without the splitting feature enabled, in at least 30% of test runs with 500k record files and multitenant testing.
  2. During testing of the new Data Import splitting feature, items for update were discarded with the error: io.vertx.core.impl.NoStackTraceThrowable: Cannot get actual Item by id: org.folio.inventory.exceptions.InternalServerErrorException: Access for user 'data-import-system-user' (f3486d35-f7f7-4a69-bcd0-d8e5a35cb292) requires permission: inventory-storage.items.item.get. Less than 1% of records could be discarded due to the missing permission for 'data-import-system-user'. The permission was not added automatically during service deployment; after it was added manually to the database, the error no longer occurs.
    Jira Legacy
    serverSystem JIRA
    serverId01505d01-b853-3c2e-90f1-ee9b165564fc
    keyMODDATAIMP-930
  3. UI issue: when a job is canceled or completed with errors, its progress bar cannot be deleted from the screen.
    Jira Legacy
    serverSystem JIRA
    serverId01505d01-b853-3c2e-90f1-ee9b165564fc
    keyMODDATAIMP-929
  4. Usage:
    • Do not use values below 1000 for RECORDS_PER_SPLIT_FILE (RPSF). The system is stable enough to ingest 1000 records consistently, and smaller values incur more overhead, resulting in longer job durations. CPU utilization for mod-di-converter-storage was 160% at RPSF = 500, 180% at RPSF = 1000, 380% at RPSF = 5K, and 433% at RPSF = 10K, so if a 5K or 10K configuration is selected, we recommend adding more CPU to the mod-di-converter-storage service.
    • When toggling the file-splitting feature, the mod-source-record-storage and mod-source-record-manager tasks need to be restarted.
    • Keep the Kafka brokers' disk size in mind, since bigger jobs (up to 500K) can now be run: consecutive jobs may use up the disk quickly because the message retention time is currently set to 8 hours. For example, with a 300 GB disk, consecutive jobs of 250K, 500K, and 500K records will exhaust the disk (see the disk-estimate sketch after this list).
  5. More CPU could be allocated to mod-inventory and mod-di-converter-storage.
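
The disk-estimate sketch referenced above. The per-record figure is an assumption back-solved from the 300 GB example, not a measured value; measure it on your own cluster before relying on it:

```python
# Back-of-the-envelope Kafka disk estimate for consecutive DI jobs that all
# land inside the 8-hour retention window (log.retention.minutes=480).
# BYTES_PER_RECORD is an *assumption* back-solved from the 300 GB example
# above (all DI topics plus replicas, per imported record).
BYTES_PER_RECORD = 240_000
DISK_GB = 300  # EBS storage volume per broker (see Appendix)

def disk_used_gb(job_sizes):
    """Disk consumed by jobs whose messages are all still within retention."""
    return sum(job_sizes) * BYTES_PER_RECORD / 1e9

print(f"{disk_used_gb([250_000, 500_000, 500_000]):.0f} GB of {DISK_GB} GB")
# -> 300 GB of 300 GB: the example job sequence exhausts the disk.
```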

Results

Test # | Test | Profile | Splitting Feature Enabled: Results | Splitting Feature Disabled: Results | Before Splitting Feature Deployed: Results
1 | 100K MARC BIB Create | PTF - Create 2 | 37-39 min, Completed | 40 min, Completed | 32-33 minutes, Completed
1 | 250K MARC BIB Create | PTF - Create 2 | 1 hour 32 min, Completed | 1 hour 41 min, Completed | 1 hour 33 min - 1 hour 57 min, Completed
1 | 500K MARC BIB Create | PTF - Create 2 | 3 hours 29 min, Completed* | 3 hours 55 min, Completed | 3 hours 33 min, Completed
2 | Multitenant MARC Create (100k, 50k, and 1 record) | PTF - Create 2 | 2 hours 40 min, Completed* | 2 hours 43 min, Completed* | 3 hours 1 min, Completed
3 | CI/CO + DI MARC BIB Create (20 users CI/CO, 25k records DI on 3 tenants) | PTF - Create 2 | 24 min 18 sec, Completed | 31 min 31 sec, Completed | 24 min, Completed*
4 | 100K MARC BIB Update (Create new file) | PTF - Updates Success - 1 | 58 min 25 sec / 57 min 19 sec, Completed | 1 hour 3 min, Completed | --
4 | 250K MARC BIB Update | PTF - Updates Success - 1 | 2 hours 2 min, Completed with errors** / 2 hours 12 min, Completed | 1 hour 53 min, Completed | --
4 | 500K MARC BIB Update | PTF - Updates Success - 1 | 4 hours 43 min, Completed / 4 hours 38 min, Completed | 5 hours 59 min, Completed | --

 * - One record on one tenant could be discarded with the error: io.netty.channel.StacklessClosedChannelException.

Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyMODDATAIMP-748
 Reproduces both with and without the splitting feature enabled, in at least 30% of test runs with 500k record files and multitenant testing.

...

 ** - Up to 10 items were discarded with the error: io.vertx.core.impl.NoStackTraceThrowable: Cannot get actual Item by id: org.folio.inventory.exceptions.InternalServerErrorException: Access for user 'data-import-system-user' (f3486d35-f7f7-4a69-bcd0-d8e5a35cb292) requires permission: inventory-storage.items.item.get. Less than 1% of records could be discarded due to the missing permission for 'data-import-system-user'. The permission was not added automatically during service deployment; after it was added manually to the database, the error no longer occurs.

Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyMODDATAIMP-930


Test 1,2. 100k, 250K, 500k and Multitenant MARC BIB Create

Memory Utilization

Memory utilization increased due to earlier module restarts (the everyday cluster shutdown process); no memory leak is suspected for the DI modules.

...

Test#2 Multitenant  DI (9 concurrent jobs)

Service CPU Utilization 

MARC BIB CREATE

Average CPU usage was 144% for mod-inventory and about 107% for mod-di-converter-storage; all other modules did not exceed 100%. Spikes in mod-data-import CPU usage of up to 260% can be observed at the beginning of the Data Import jobs.

Test#1 500k records DI


Test#2 Multitenant


Instance CPU Utilization

Test#1 500k records DI

Test#2 Multitenant DI (9 concurrent jobs)


RDS CPU Utilization 

MARC BIB CREATE

DB CPU usage is up to approximately 95%.

...

Maximal DB CPU usage is about 95%


RDS Database Connections

MARC BIB CREATE
 For the DI Create job, the maximum connection count was 535.

Test#1  500k records DI

Test#2 Multitenant


Test 3. With CI/CO 20 users and DI 25k records on each of the 3 tenants: Splitting Feature Enabled & Splitting Feature Disabled



Operation | Response time without DI (Before Splitting Feature Deployed) | Response time with DI (Before Splitting Feature Deployed) | Response time without DI (Splitting Feature Disabled) | Response time with DI (Splitting Feature Disabled) | Response time without DI, Average (Splitting Feature Enabled) | Response time with DI, Average (Splitting Feature Enabled)
Check-In | 0.517s | 1.138s | 0.542s | 1.1s | 0.505s | 1.067s
Check-Out | 0.796s | 1.552s | 0.841s | 1.6s | 0.804s | 1.48s

...

 * - The same approach was used for testing DI: 3 DI jobs total on 3 tenants, without CI/CO. Start the second job after the first one reaches 30% completion, and start another job on a third tenant after the first job reaches 60% completion (see the sketch below). DI file size: 25k.
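
A minimal sketch of that staggering procedure. start_di_job and get_job_progress are hypothetical caller-supplied callbacks, not actual FOLIO client calls; in practice they would wrap the relevant Data Import APIs:

```python
import time

def run_staggered(tenants, marc_file, start_di_job, get_job_progress,
                  thresholds=(30, 60), poll_seconds=10):
    """Start a DI job on the first tenant, then start the next tenant's job
    each time the first job crosses the next progress threshold (30%, 60%).

    start_di_job(tenant, file) -> job_id and get_job_progress(tenant, job_id)
    -> percent complete are caller-supplied callbacks; they are hypothetical
    here and would wrap the relevant FOLIO Data Import APIs in practice."""
    first_job = start_di_job(tenants[0], marc_file)
    started = 1
    while started < len(tenants):
        if get_job_progress(tenants[0], first_job) >= thresholds[started - 1]:
            start_di_job(tenants[started], marc_file)
            started += 1
        time.sleep(poll_seconds)
```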

Response time graph

With CI/CO 20 users and DI 25k records on each of the 3 tenants Splitting Feature Disabled


Data Import Robustness Enhancement

...

*   T1 - "00:33:35.1" (error), T2 - "01:23:36.144", T3 - "01:16:26.391". On the first tenant, processing stopped with the error "io.vertx.core.impl.NoStackTraceThrowable: Connection is not active now, current status: CLOSED".

This caused a spike of CPU utilization on Kafka (tenant cluster) of up to 94%.

Instance CPU Utilization 

Test 1. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 500, 2 runs for each test. The maximal CPU Utilization value is 38%. 

...

Test 2. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 10K, 2 runs for each test. The maximal CPU Utilization value is 37%. 

Memory Utilization

Test 1. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 500, 2 runs for each test.

...

Memory utilization reached a maximum of 88% for mod-source-record-storage-b and 85% for mod-source-record-manager-b.

Test 2. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 10K, 2 runs for each test.

Service CPU Utilization 

Test 1. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 500, 2 runs for each test.

...

Test 2. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 10K, 2 runs for each test.

CPU utilization of  mod-di-converter-storage-b

 

RDS CPU Utilization 

Test 1. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 500, 2 runs for each test. Maximal  CPU Utilization = 95%

...

Test 2. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 10K, 2 runs for each test. Maximal  CPU Utilization = 94%


RDS Database Connections

Test 1. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 500, 2 runs for each test.

...

Test 2. Test with 1, 2, and 3 tenants' concurrent jobs with configuration RECORDS_PER_SPLIT_FILE = 10K, 2 runs for each test. Maximal  CPU Utilization = 94%

Retesting DI file-splitting feature on Poppy release

Retest the DI file-splitting feature to be sure that the new changes have not affected performance negatively, covering the following scenarios:

Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-681

Brief comparison summary


The duration of data import has increased, in particular (diff = Poppy processing time - Orchid processing time):

  • 250K MARC BIB Create (PTF - Create 2) ---> 44 minutes
  • 250K MARC BIB Update (PTF - Updates Success - 1) ---> 45 minutes
  • Multitenant MARC Create (100k, 50k, and 1 record; PTF - Create 2) ---> 1 hour 35 minutes
  • CI/CO response time diffs (Test 1.4):
    • Check-Out without DI ~ 200ms
    • Check-In without DI ~ 65ms
    • Check-Out with DI ~ 770ms
    • Check-In with DI ~ 330ms

...

  • Service CPU utilization on Poppy is about the same as on Orchid;
  • Memory utilization on Poppy is about the same as on Orchid;
  • RDS CPU utilization during all tests and on both releases was about 96%;
  • The number of DB connections on both releases was about the same, from 550 (Test 1.1) to 1200 (Test 1.4).


Test 1. Single tenant (primary fs09000000): create and update 250K file

Test # | Test parameters | Profile | Duration (Poppy), Splitting Feature Enabled | Status | Previous results (Orchid): Duration | diff = Poppy time - Orchid time | Duration (Poppy), Splitting Feature Disabled
1.1 | 250K MARC BIB Create | PTF - Create 2 | 2 hours 16 min | Completed | 1 hour 32 min | 44 minutes | failed
1.2 | 250K MARC BIB Update | PTF - Updates Success - 1 | 3 hours 1 min | Completed | 2 hours 16 min | 45 minutes | failed
1.3 | Multitenant MARC Create (100k, 50k, and 1 record) | PTF - Create 2 | 4 hours 14 min | Completed | 2 hours 40 min | 1 hour 35 minutes | failed

On Poppy with the splitting feature disabled, large files stopped processing. A ticket was created for this problem:

Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-744

Test 1.4 With CI/CO 20 users and DI 25k records on each of the 3 tenants, Splitting Feature enabled

Operation | Orchid: Response time without DI (Average) | Orchid: Response time with DI (Average) | Poppy: Response time without DI (Average) | Poppy: Response time with DI (Average) | diff (Poppy - Orchid) without DI | diff (Poppy - Orchid) with DI
Check-Out | 0.804s | 1.48s | 1.03s | 2.26s | 200ms | 770ms
Check-In | 0.505s | 1.067s | 0.570s | 1.4s | 65ms | 330ms



Tenant | DI Duration with CI/CO (Release: Orchid) | DI Duration with CI/CO (Release: Poppy)
Tenant_1 | 16 min 53 sec | 34 min 55 sec
Tenant_2 | 20 min 39 sec | 27 min 39 sec
Tenant_3 | 17 min 54 sec | 25 min 17 sec


Resource utilization during testing

Test 1.1. Data-import of 250K records file with "PTF - Create 2" job profile

Service CPU Utilization 

There was a sharp spike of CPU at the beginning of test 1; we see similar behavior in all of the DI tests. CPU consumption was uniform during the rest of the test.

Memory Utilization

Memory consumption was not affected overall. The mod-source-record-manager service increased its memory usage from 45% to 60% during the test, but after the test, memory began returning to the pre-test value.


RDS CPU Utilization  

Consumption of the database CPU was 97% throughout the test

RDS Database Connections

The average number of DB connections during the test was about 550.


Test 1.2. Data-import of 250K records file with "PTF - Update" job profile

Service CPU Utilization 

CPU consumption was stable during the test, except for the mod-inventory service: at the beginning of the test its CPU usage was about 140%, and by the end of the test it was about 200%.

Memory Utilization

The memory was stable and without memory leaks.

RDS CPU Utilization 

Consumption of the database CPU was 97% throughout the test

RDS Database Connections

The average number of DB connections during the test was about 550.

Test 1.3. Multitenant MARC Create (100k, 50k, and 1 record)

Service CPU Utilization 

CPU consumption was stable during the test. However, in the last hour of the test, the mod-inventory and mod-quick-marc services increased their CPU utilization by 75%.

Memory Utilization

The memory was stable and without memory leaks.

RDS CPU Utilization 

Consumption of the database CPU was 96% throughout the test

RDS Database Connections

The average number of DB connections during the test was about 800.


Test 1.4. With CI/CO 20 users and DI of 25K records on each of the 3 tenants

Service CPU Utilization 

Memory Utilization 

The memory was stable and without memory leaks.

RDS CPU Utilization 

Consumption of the database CPU was 96% throughout the test

RDS Database Connections

The average number of DB connections during the test changed from 400 to 1200.

CICO response time graph

Retesting DI file-splitting feature on Poppy release with Refresh Token Rotation (RTR)

The goal of the tests was to investigate how the file-splitting feature affects Data Import on the Poppy release, and the impact of Refresh Token Rotation (RTR). The tests were performed on the ocp3 (Poppy), pcp1 (Poppy), and ncp5 (Orchid) environments.

Jira Legacy
serverSystem JIRA
serverId01505d01-b853-3c2e-90f1-ee9b165564fc
keyPERF-723
Refresh Token Rotation (RTR)

Brief comparison summary

  • The Refresh Token Rotation configuration does not affect the data import process in any way, whether with a create or an update profile.
  • In the Poppy release, data import of 250,000 records with the PTF - Create 2 job profile failed, and data import of 50,000 records with the PTF - Updates Success - 1 job profile also failed in all of the tests, except for the configuration with FSF=true;
  • Data import works more slowly on Poppy compared to Orchid.
  • As the number of records in the file for data import increases, the processing time also increases. Up to 25,000 records, the duration of the data import is approximately the same.
  • In the Poppy release, data import with the file-splitting feature enabled works more slowly than with the feature disabled.
  • Data import is performed approximately 5% faster when the file-splitting feature parameters are absent from the task definition configuration.

Test results

DI tests / Configuration | ncp5 Orchid | ocp3 FSF true, without RTR token* | ocp3 FSF false, without RTR token* | ocp3 FSF deleted, without token | ocp3 FSF false, AT=RT=300 | ocp3 FSF false, AT=RT=1000000000 | pcp1 FSF false, AT=RT=10000000 | pcp1 FSF false, without token, retest*
250k_bib_Create_1.mrc | not tested | not tested | failed | failed | failed | failed | failed | failed
100k_bib_Create.mrc | 00:41:41 | 00:54:32 | 00:54:36 | 00:53:59 | 00:48:56 | 00:54:42.05 | 00:47:17 | 01:01:39
50k_bib_Create.mrc | 00:19:43 | 00:30:40 | 00:25:39 | 00:22:17 | 00:27:05 | 00:30:09 | 00:21:45 | 00:20:46
25k_bib_Create.mrc | 00:10:11 | 00:13:53 | 00:12:46 | 00:10:33 | 00:12:42 | 00:13:25 | 00:11:54 | 00:10:53
10k_bib_Create.mrc | 00:04:19 | 00:07:22 | 00:05:35 | 00:04:38 | not tested | 00:05:33 | 00:04:42 | 00:04:36
5k_bib_Create.mrc | 00:02:35 | 00:04:31 | 00:02:43 | 00:02:55 | not tested | 00:03:07 | 00:02:55 | 00:02:30
1k_bib_Create.mrc | not tested | not tested | not tested | not tested | not tested | not tested | 00:00:54 | not tested
DI-25K-Update.mrc | not tested | not tested | finished successfully | failed | failed | finished successfully | failed | finished successfully

The "pcp1 FSF false without token" column contains test results for a configuration similar to "ocp3 FSF false without RTR token".

Resource utilization during testing

Service CPU utilization during the Data-import process

The following data import jobs were carried out:
1) 5k_bib_Create 2) 10k_bib_Create 3) 25k_bib_Create 4) 50k_bib_Create 5) 50k_bib_Create 6) 100k_bib_Create 7) 50k_bib_Create 8) 25k_bib_Create 9) 25k_bib_Update 10) 50k_bib_Update (stopped)
CPU utilization was stable during all jobs, but there were spikes from the data-import jobs at the beginning of each test.
 


Memory Utilization

Most of the modules were stable during the test, and no memory leak is suspected for DI modules, except mod-inventory-b which consumed about 92% of memory during all DI processes. 

RDS CPU Utilization 


Maximal  CPU Utilization = 95%


RDS Database Connections

The maximal number of DB connections during the tests was about 580.


Database load


Top SQL queries

Appendix

Infrastructure ocp3  with the "Bugfest" Dataset

Records count:

  • tenant0_mod_source_record_storage.marc_records_lb = 9674629
  • tenant2_mod_source_record_storage.marc_records_lb = 0
  • tenant3_mod_source_record_storage.marc_records_lb = 0
  • tenant0_mod_source_record_storage.raw_records_lb = 9604805
  • tenant2_mod_source_record_storage.raw_records_lb = 0
  • tenant3_mod_source_record_storage.raw_records_lb = 0
  • tenant0_mod_source_record_storage.records_lb = 9674677
  • tenant2_mod_source_record_storage.records_lb = 0
  • tenant3_mod_source_record_storage.records_lb = 0
  • tenant0_mod_source_record_storage.marc_indexers =  620042011
  • tenant2_mod_source_record_storage.marc_indexers =  0
  • tenant3_mod_source_record_storage.marc_indexers =  0
  • tenant0_mod_source_record_storage.marc_indexers with field_no 010 = 3285833
  • tenant2_mod_source_record_storage.marc_indexers with field_no 010 = 0
  • tenant3_mod_source_record_storage.marc_indexers with field_no 010 = 0
  • tenant0_mod_source_record_storage.marc_indexers with field_no 035 = 19241844
  • tenant2_mod_source_record_storage.marc_indexers with field_no 035 = 0
  • tenant3_mod_source_record_storage.marc_indexers with field_no 035 = 0
  • tenant0_mod_inventory_storage.authority = 4
  • tenant2_mod_inventory_storage.authority = 0
  • tenant3_mod_inventory_storage.authority = 0
  • tenant0_mod_inventory_storage.holdings_record = 9592559
  • tenant2_mod_inventory_storage.holdings_record = 16
  • tenant3_mod_inventory_storage.holdings_record = 16
  • tenant0_mod_inventory_storage.instance = 9976519
  • tenant2_mod_inventory_storage.instance = 32
  • tenant3_mod_inventory_storage.instance = 32 
  • tenant0_mod_inventory_storage.item = 10787893
  • tenant2_mod_inventory_storage.item = 19
  • tenant3_mod_inventory_storage.item = 19

PTF -environment ocp3 

  • 10 m6i.2xlarge EC2 instances located in US East (N. Virginia, us-east-1)
  • 2 database instances, one reader and one writer:

    Name | API Name | Memory GiB | vCPUs | max_connections
    R6G Extra Large | db.r6g.xlarge | 32 GiB | 4 vCPUs | 2731


  • MSK ptf-kakfa-3
    • 4 m5.2xlarge brokers in 2 zones
    • Apache Kafka version 2.8.0

    • EBS storage volume per broker 300 GiB

    • auto.create.topics.enable=true
    • log.retention.minutes=480
    • default.replication.factor=3
  • Kafka topics partitioning: 2 partitions for DI topics

...

Module | Task Def. Revision | Module Version | Task Count | Mem Hard Limit | Mem Soft Limit | CPU units | Xmx | MetaspaceSize | MaxMetaspaceSize | R/W split enabled
mod-circulation-storage | 16 | mod-circulation-storage:17.1.0 | 2 | 2880 | 2592 | 1536 | 1814 | 384 | 512 | FALSE
mod-source-record-storage | 13 | mod-source-record-storage:5.7.0 | 2 | 5600 | 5000 | 2048 | 3500 | 384 | 512 | FALSE
mod-calendar | 8 | mod-calendar:2.5.0 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE
mod-inventory | 13 | mod-inventory:20.1.0 | 2 | 2880 | 2592 | 1024 | 1814 | 384 | 512 | FALSE
mod-circulation | 10 | mod-circulation:24.0.0 | 2 | 2880 | 2592 | 1536 | 1814 | 384 | 512 | FALSE
mod-di-converter-storage | 9 | mod-di-converter-storage:2.1.0 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE
mod-pubsub | 10 | mod-pubsub:2.11.0 | 2 | 1536 | 1440 | 1024 | 922 | 384 | 512 | FALSE
mod-users | 10 | mod-users:19.2.0 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE
mod-patron-blocks | 10 | mod-patron-blocks:1.9.0 | 2 | 1024 | 896 | 1024 | 768 | 88 | 128 | FALSE
mod-source-record-manager | 15 | mod-source-record-manager:3.7.0 | 2 | 5600 | 5000 | 2048 | 3500 | 384 | 512 | FALSE
mod-quick-marc | 8 | mod-quick-marc:5.0.0 | 1 | 2288 | 2176 | 128 | 1664 | 384 | 512 | FALSE
nginx-okapi | 8 | nginx-okapi:2023.06.14 | 2 | 1024 | 896 | 128 | 0 | 0 | 0 | FALSE
okapi-b | 9 | okapi:5.1.1 | 3 | 1684 | 1440 | 1024 | 922 | 384 | 512 | FALSE
mod-feesfines | 9 | mod-feesfines:19.0.0 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE
mod-notes | 8 | mod-notes:5.1.0 | 2 | 1024 | 896 | 128 | 952 | 384 | 512 | FALSE
pub-okapi | 8 | pub-okapi:2023.06.14 | 2 | 1024 | 896 | 128 | 768 | 0 | 0 | FALSE
mod-data-import | 36 | 579891902283.dkr.ecr.us-east-1.amazonaws.com/folio/mod-data-import:3.0.3 | 1 | 2048 | 1844 | 256 | 1292 | 384 | 512 | FALSE
mod-search | 31 | 579891902283.dkr.ecr.us-east-1.amazonaws.com/folio/mod-search:3.0.0 | 2 | 2592 | 2480 | 2048 | 1440 | 512 | 1024 | FALSE
mod-configuration | 9 | mod-configuration:5.9.2 | 2 | 1024 | 896 | 128 | 768 | 88 | 128 | FALSE
mod-bulk-operations | 8 | mod-bulk-operations:1.1.0 | 2 | 3072 | 2600 | 1024 | 1536 | 384 | 512 | FALSE
edge-ncip | 8 | edge-ncip:1.9.0 | 2 | 1024 | | | | | | FALSE
mod-inventory-storage | 8 | mod-inventory-storage:27.0.0 | 2 | 8961 | | | | | | FALSE


Methodology/Approach

To set up the splitting feature, see: Detailed Release Notes for Data Import Splitting Feature. A sketch of the relevant settings follows.
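
For convenience, the settings exercised in these tests can be summarized as below. RECORDS_PER_SPLIT_FILE is used throughout this document; SPLIT_FILES_ENABLED is an assumed name for the feature toggle, so confirm it against the release notes linked above:

```python
# Environment variables for the mod-data-import task definition, as exercised
# in these tests. SPLIT_FILES_ENABLED is an *assumed* name for the feature
# toggle ("FSF true/false" in the tables above) - confirm it against the
# release notes linked above before use.
DATA_IMPORT_ENV = {
    "SPLIT_FILES_ENABLED": "true",     # assumed flag name
    "RECORDS_PER_SPLIT_FILE": "1000",  # do not go below 1000 (see Recommendations)
}
# After toggling the feature, restart the mod-source-record-storage and
# mod-source-record-manager tasks (see Recommendations).
```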

...