[Ramsons] [ECS] [Data import] Create MARC authority Records

Overview

This document presents performance testing results for Data Import of MARC Authority records using a Create job profile in the Ramsons release on Okapi-based ECS environments (RCON). The tests were conducted with Kafka consolidated topics and file-splitting features enabled.

The performance evaluation was carried out across a range of record counts for a single tenant: 1K, 5K, 10K, 25K, and 50K records. Additionally, we ran a Data Import test with parallel Check-In/Check-Out simulating 5 virtual users to assess system behavior under concurrent operations, and a parallel Data Import on 3 tenants.
Current ticket: https://folio-org.atlassian.net/browse/PERF-979
Previous report: https://folio-org.atlassian.net/wiki/spaces/FOLIJET/pages/260309070

Summary

  • All Data Import tests (Test 1 - Test 3) finished successfully

  • The Data Import of MARC authority records using a Create job profile in the Ramsons release (Test 1) demonstrates a slight but noteworthy performance improvement compared to the Quesnelia release (Table 1).

  • The Data Import and parallel Check-In/Check-Out testing, simulating five virtual users, revealed that the Ramsons release demonstrated better performance compared to Quesnelia.

    • The test results indicate that five virtual users (5 VU) performing Check-In/Check-Out (CICO) operations do not affect the performance of the data import process; on the contrary, DI duration slightly decreased

    • Response times of CI and CO transactions increased proportionally with the number of imported records (Table 2).

  • The parallel 50K data import on 3 tenants was successful, but its duration increased by 1.5-3 times compared to a single DI on one tenant (Table 3).

  • Mod-source-record-manager has a new approach for inserting data into the records journal, using a function on the DB side. Compared to previous results, this version produces about 50 more AAS, but according to the testing results this did not degrade the DI process.
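A minimal sketch of the single-call pattern behind `SELECT insert_journal_records($1::jsonb[])` (the record shape and driver call here are assumptions for illustration; the real payload is defined by mod-source-record-manager):

```python
import json

# Hypothetical journal entries -- the actual payload shape is defined by
# mod-source-record-manager; this only illustrates the batching idea.
journal_records = [
    {"id": f"rec-{i}", "action": "CREATE", "entity_type": "MARC_AUTHORITY"}
    for i in range(3)
]

# One jsonb[] parameter instead of one INSERT statement per record.
params = [json.dumps(r) for r in journal_records]

# With a live connection this would be a single round trip, e.g. (psycopg2):
#   cur.execute("SELECT insert_journal_records(%s::jsonb[])", (params,))
print(len(params))                        # one array element per journal record
print(json.loads(params[0])["action"])
```

The trade-off visible in the AAS tables below: fewer calls per second for the journal insert, but a much higher average latency per call, since each call now carries a whole batch.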

Recommendations & Jiras

 

Test Runs 

| Test | Test conditions and short description | Status |
| --- | --- | --- |
| Test 1 | Tenant: cs00000int. Job profile "KG - Create SRS MARC Authority on nonmatches to 010 $a DUPLICATE for Q"; 1k - 5k - 10k - 25k - 50k with 5-minute pauses between each DI | Completed |
| Test 2 | Tenant: cs00000int_0001. Job profile "KG - Create SRS MARC Authority on nonmatches to 010 $a DUPLICATE for Q"; 1k - 5k - 10k - 25k - 50k with 5-minute pauses between each DI, plus CheckIn-CheckOut with 5 virtual users | Completed |
| Test 3 | Parallel, multi-tenant Data Import. Job profile "KG - Create SRS MARC Authority on nonmatches to 010 $a DUPLICATE for Q", run in parallel on tenants cs00000int (50k), cs00000int_0001 (50k), and cs00000int_0002 (50k) | Completed |

Test Results and Comparison

Test №1

Table 1. - Test with 1k, 5k, 10k, 25k, and 50k record files, DI started on one tenant (cs00000int), with comparative results between Quesnelia and Ramsons.

| Number of records | % creates | DI duration, M. Glory | DI duration, Nolana | DI duration, Orchid | DI duration, Poppy | DI duration, Quesnelia [ECS], QCON | DI duration, Ramsons [ECS], RCON | Time diff and % improvement, R vs Q |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1,000 | 100 | 24 sec | 27 sec | 41 sec | 29 sec | 25 sec | 27 sec | 2 sec, 8% |
| 5,000 | 100 | 1 min 21 sec | 1 min 15 sec | 1 min 21 sec | 1 min 38 sec | 1 min 23 sec | 1 min 24 sec | 1 sec, 1.2% |
| 10,000 | 100 | 2 min 32 sec | 2 min 31 sec | 2 min 53 sec | 2 min 53 sec | 2 min 43 sec | 2 min 38 sec | 5 sec, 3.1% |
| 25,000 | 100 | 11 min 14 sec | 7 min 7 sec | 5 min 42 sec | 6 min 24 sec | 6 min 27 sec | 5 min 24 sec | 1 min 03 sec, 16.3% |
| 50,000 | 100 | 22 min | 11 min 24 sec | 11 min 11 sec | 13 min 48 sec | 11 min 45 sec | 9 min 42 sec | 2 min 03 sec, 17.4% |
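The last column of Table 1 is the absolute and relative difference between the Quesnelia and Ramsons durations; a quick sketch of that arithmetic for the 50k row:

```python
def improvement(q_sec: int, r_sec: int) -> tuple[int, float]:
    """Absolute time saved and percent improvement of Ramsons vs Quesnelia."""
    diff = q_sec - r_sec
    return diff, round(100 * diff / q_sec, 1)

# 50k records: Quesnelia 11 min 45 sec, Ramsons 9 min 42 sec
q = 11 * 60 + 45   # 705 sec
r = 9 * 60 + 42    # 582 sec
print(improvement(q, r))  # (123, 17.4) -> 2 min 03 sec, 17.4%
```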

Test 2. DI Central tenant 1k-5k-10k-25k-50k + CI/CO 5VU.
Table 2. - Comparative Data Import and Check-In/Check-Out results between Quesnelia and Ramsons, including baseline CI/CO times measured without Data Import.

| Number of records | DI duration with CICO, Poppy | DI duration with CICO, Quesnelia ECS | DI duration with CICO, Ramsons ECS | CI avg time (Quesnelia) | CI avg time (Ramsons) | CI avg time without DI, Ramsons ECS | CO avg time (Quesnelia) | CO avg time (Ramsons) | CO avg time without DI, Ramsons ECS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1,000 | 35 sec | 21 sec | 17 sec | 0.870 sec | 0.642 sec | 0.616 sec | 1.361 sec | 1.231 sec | 1.187 sec |
| 5,000 | 1 min 41 sec | 1 min 09 sec | 57 sec | 0.878 sec | 0.655 sec | | 1.772 sec | 1.243 sec | |
| 10,000 | 3 min 4 sec | 2 min 17 sec | 1 min 47 sec | 0.955 sec | 0.671 sec | | 1.905 sec | 1.261 sec | |
| 25,000 | 6 min 32 sec | 6 min 20 sec | 4 min 01 sec | 0.970 sec | 0.691 sec | | 1.920 sec | 1.339 sec | |
| 50,000 | 13 min 48 sec | 13 min 49 sec | 9 min 13 sec | 1.040 sec | 0.796 sec | | 1.907 sec | 1.585 sec | |

The "without DI" baselines were measured once and apply to all record counts.
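The trend noted in the summary (CI response time growing with import size) can be checked against Table 2 with a short sketch comparing each Ramsons CI average to the no-DI baseline:

```python
# CI average response times (sec) from Table 2, Ramsons, by records imported
ci = {1_000: 0.642, 5_000: 0.655, 10_000: 0.671, 25_000: 0.691, 50_000: 0.796}
baseline = 0.616  # CI average without any DI running

for n, t in ci.items():
    # overhead of running CI concurrently with an n-record import
    print(n, f"+{100 * (t - baseline) / baseline:.1f}%")
```

The overhead grows from a few percent at 1k records to roughly 29% at 50k, which matches the proportional-increase observation.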

Test №3
Table 3. - Duration of parallel multi-tenant data import on tenants cs00000int, cs00000int_0001, and cs00000int_0002

| Tenant | 50K DI duration |
| --- | --- |
| Central - cs00000int | 27 min 03 sec |
| College - cs00000int_0001 | 27 min 18 sec |
| Professional - cs00000int_0002 | 15 min 02 sec |


Cluster resource utilization for Test 1

Service CPU Utilization

The image shows CPU consumption during Test 1.

image-20250221-142320.png

Service memory utilization

Service memory utilization remains consistent across all modules.

image-20250221-143739.png

DB CPU Utilization

Here are the conclusions drawn from the database CPU usage graph:

  • For 1k records, the maximum CPU usage was approximately 35%.

  • For 5k records, the maximum CPU usage reached around 72%.

  • For 10k records, the maximum CPU usage climbed to about 92%.

  • For both 25k and 50k records, the maximum CPU usage was around 93%.

image-20250221-151834.png

DB Connections

image-20250224-100847.png

Database load

Sliced by SQL

image-20250224-101306.png

Top SQL queries during test 1

image-20250224-101326.png

| Load by sqls (AAS) | SQL statements | Calls/sec | Rows/sec | Avg latency (ms)/call |
| --- | --- | --- | --- | --- |
| 3.64 | COMMIT | 0.00 | 0.00 | - |
| 0.59 | insert into "marc_records_lb" ("id", "content") values (cast($1 as uuid), cast($... | 30.00 | 30.00 | 19.20 |
| 0.38 | WITH input_rows(record_id, authority_id) AS ( VALUES ($1::uuid,$2::uuid) ) , ... | 30.00 | 30.00 | 0.35 |
| 0.32 | INSERT INTO cs00000int_mod_source_record_manager.events_processed (handler_id, e... | 30.00 | 30.00 | 0.96 |
| 0.19 | SELECT insert_journal_records($1::jsonb[]) | 0.91 | 0.91 | 175.38 |
| 0.16 | select a1_0.id,a1_0.source_file_id,a1_0.created_by_user_id,a1_0.created_date,a1_... | 0.02 | 0.00 | 0.00 |
| 0.06 | insert into authority (source_file_id,created_by_user_id,created_date,deleted,he... | 30.00 | 30.00 | 2.55 |
| 0.06 | with "cte" as (select count(*) from "records_lb" where ("records_lb"."snapshot_i... | - | - | - |
| 0.05 | insert into authority (source_file_id,created_by_user_id,created_date,deleted,he... | 28.09 | 28.09 | 1.15 |
| 0.05 | insert into authority (source_file_id,created_by_user_id,created_date,deleted,he... | 30.00 | 30.00 | 1.13 |

image-20250224-101502.png

 

 

Cluster resource utilization for Test 2

The Check-In/Check-Out test started at about 15:30 and finished at about 16:25.

CICO Response time graph

Response time and throughput were stable during the 1-hour CICO test with 5 VU. The error rate was ~0.02%.

image-20250221-143014.png

Service CPU Utilization

The image shows CPU consumption during Test 2

image-20250221-142916.png

Service memory utilization

Service memory utilization remains consistent across all modules.

image-20250221-144043.png

DB CPU Utilization

Here are the conclusions drawn from the database CPU usage graph:

  • For 1k records, the maximum CPU usage was approximately 28%.

  • For 5k records, the maximum CPU usage reached around 76%.

  • For 10k records, the maximum CPU usage climbed to about 86%.

  • For both 25k and 50k records, the maximum CPU usage was around 86%.

image-20250221-152018.png

DB Connections

In the idle state the number of connections was ~1100; during CICO 5VU + 50K DI it was ~1520.

image-20250224-101011.png

Database load

Sliced by SQL

image-20250224-103915.png

Top SQL queries during test 2

image-20250224-104049.png

| Load by sqls (AAS) | SQL statements | Calls/sec | Rows/sec | Avg latency (ms)/call |
| --- | --- | --- | --- | --- |
| 0.71 | SELECT insert_journal_records($1::jsonb[]) | 0.73 | 0.73 | 938.90 |
| 0.49 | COMMIT | 0.00 | 0.00 | - |
| 0.28 | WITH input_rows(record_id, authority_id) AS ( VALUES ($1::uuid,$2::uuid) ) , ... | 24.19 | 24.19 | 0.18 |
| 0.28 | insert into "marc_records_lb" ("id", "content") values (cast($1 as uuid), cast($... | 24.19 | 24.19 | 10.23 |
| 0.22 | INSERT INTO cs00000int_0001_mod_source_record_manager.events_processed (handler_... | 24.19 | 24.19 | 1.23 |
| 0.05 | | | | |