
Data Import Test Report (Kiwi) +DI topics with partitions > 1


Overview

This document contains the results of testing Data Import in the Kiwi release with optimistic locking (OL) enabled and an increased number of partitions in the Kafka topics.


Infrastructure 

  • 6 m5.xlarge EC2 instances 
  • 2 db.r6.xlarge database instances, one reader and one writer
  • MSK
    • 4 m5.2xlarge brokers in 2 zones
    • auto.create.topics.enable = true
    • log.retention.minutes=120
  • mod-inventory
    • 256 CPU units, 1814MB mem
    • inventory.kafka.DataImportConsumerVerticle.instancesNumber=10
    • inventory.kafka.MarcBibInstanceHridSetConsumerVerticle.instancesNumber=10
    • kafka.consumer.max.poll.records=10
  • mod-inventory-storage
    • 128 CPU units, 544MB mem
  • mod-source-record-storage
    • 128 CPU units, 908MB mem
  • mod-source-record-manager
    • 128 CPU units, 1292MB mem
  • mod-data-import
    • 128 CPU units, 1024MB mem
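Note that mod-inventory runs 10 consumer verticle instances per topic, while the tests below use topics with only 1, 2, or 4 partitions. Since Kafka assigns each partition to at most one consumer in a group, consumer concurrency is capped by the partition count. The sketch below (illustrative only, not FOLIO code) shows this effect with a simple round-robin assignment:

```python
# Illustrative sketch: with partition-to-consumer assignment, at most
# min(partitions, consumers) consumer instances actually receive work,
# so consumer concurrency is capped by the topic's partition count.

def assign_partitions(num_partitions: int, num_consumers: int) -> dict:
    """Assign each partition to a consumer, round-robin (simplified model)."""
    assignment = {c: [] for c in range(num_consumers)}
    for p in range(num_partitions):
        assignment[p % num_consumers].append(p)
    return assignment

def active_consumers(num_partitions: int, num_consumers: int) -> int:
    """Count consumers that were assigned at least one partition."""
    assignment = assign_partitions(num_partitions, num_consumers)
    return sum(1 for parts in assignment.values() if parts)

# With instancesNumber=10 consumer verticles:
print(active_consumers(1, 10))   # 1 active consumer
print(active_consumers(2, 10))   # 2 active consumers
print(active_consumers(4, 10))   # 4 active consumers
```

So even with 4 partitions, at most 4 of the 10 configured consumer instances can be busy at once.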


Software versions

  • mod-data-import:2.2.0
  • mod-inventory:18.0.4
  • mod-inventory-storage:22.0.2-optimistic-locking.559
  • mod-source-record-storage:5.2.5
  • mod-source-record-manager:3.2.6


Results

Tests performed:


| Test | Profile | KIWI | KIWI (with OL) | KIWI with 2 partitions | KIWI with 4 partitions |
| --- | --- | --- | --- | --- | --- |
| 5K MARC Create | PTF - Create 2 | 5 min, 8 min | 8 min | 5 min | 5,7 min |
| 5K MARC Update | PTF - Updates Success - 1 | 11 min, 13 min | 6 min | 7,6 min | 6 min |
| 10K MARC Create | PTF - Create 2 | 11 min, 14 min | 12 min | 10,12 min | 16 min |
| 10K MARC Update | PTF - Updates Success - 1 | 22 min, 24 min | 15 min | 11 min | failed |
| 25K MARC Create | PTF - Create 2 | 23 mins, 25 mins, 26 mins | 24 min | 23,26 min | 25 min |
| 25K MARC Update | PTF - Updates Success - 1 | 1 hour 20 mins (completed with errors) *, 56 mins | 40 min | failed | failed |
| 50K MARC Create | PTF - Create 2 | Completed with errors, 1 hr 40 mins | 43 min | failed | failed |
| 50K MARC Update | PTF - Updates Success - 1 | 2 hr 32 mins (job stuck at 76% completion) | 1 hr 4 min | failed | failed |

Increasing the number of partitions produced no noticeable change in service performance; however, negative trends were observed: the number of errors increased, and data import jobs failed more often.

Number of partitions: 2

This table shows the results of a group of sequential data import tests with 2 partitions. Where errors occurred, the number of missing entities in the database was determined.

| Test | Start time | End time | # instances from DB |
| --- | --- | --- | --- |
| CREATE 5,000 records | 8:20 AM | 1/28/2022, 8:26 AM | |
| CREATE 10,000 records | 8:48 AM | 1/28/2022, 9:00 AM | |
| UPDATE 10,000 records | 9:17 AM | FAILED | |
| CREATE 25,000 records | 9:37 AM | Completed with errors, 1/28/2022, 10:04 AM | fs09000000_mod_inventory_storage.item: 24996; fs09000000_mod_inventory_storage.holdings_record: 24996; fs09000000_mod_inventory_storage.instance: 25000; fs09000000_mod_source_record_storage.records_lb: 25000 |

(restart mods and clean Kafka MQ)

| Test | Start time | End time | # instances from DB |
| --- | --- | --- | --- |
| CREATE 25,000 records | 10:31 AM | 1/28/2022, 10:57 AM | |

(restart mods and clean Kafka MQ)

| Test | Start time | End time | # instances from DB |
| --- | --- | --- | --- |
| CREATE 50,000 records | 11:18 AM | Completed with errors, 1/28/2022, 12:43 PM | fs09000000_mod_inventory_storage.item: 49284; fs09000000_mod_inventory_storage.holdings_record: 48684; fs09000000_mod_source_record_storage.records_lb: 50000; fs09000000_mod_inventory_storage.instance: 50000 |
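In practice the counts above came from row-count queries against the database tables; the sketch below (illustrative only) shows just the comparison step used to find missing entities, using the observed numbers from the failed 50K create run:

```python
# Sketch of the post-run check: compare the expected record count against
# the observed row count per table to find how many entities are missing.
# The observed values are taken from the failed 50K create run above.

EXPECTED = 50_000

observed = {
    "fs09000000_mod_inventory_storage.item": 49_284,
    "fs09000000_mod_inventory_storage.holdings_record": 48_684,
    "fs09000000_mod_source_record_storage.records_lb": 50_000,
    "fs09000000_mod_inventory_storage.instance": 50_000,
}

def missing_entities(observed: dict, expected: int) -> dict:
    """Return the tables that fall short of the expected count, and by how much."""
    return {table: expected - n for table, n in observed.items() if n < expected}

print(missing_entities(observed, EXPECTED))
# item is short 716 rows; holdings_record is short 1316 rows
```

Here instances and source records were fully persisted, while items and holdings records were the entities lost during the failed run.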


In terms of dynamic characteristics (CPU load, memory) there were no changes compared to 1 partition. The failures are caused by peculiarities, and possibly bugs, in how the data import modules (mod-inventory, mod-source-record-storage) process Kafka messages.


Memory usage


Example of errors from the logs for the "CREATE 25,000 records" run that began at 9:37 AM, filtered by mod-source-record-manager:

| Field | Value |
| --- | --- |
| @ingestionTime | 1643361849833 |
| @log | 054267740449:kcp1-folio-eis |
| @logStream | kcp1/mod-source-record-manager/d7b8533172d04fbeace7953e139e48eb |
| @message | 09:24:06.835 [vert.x-worker-thread-18] ERROR PostgresClient [4721195eqId] Timeout |
| @timestamp | 1643361846844 |

09:24:06.835 [vert.x-worker-thread-18] ERROR PostgresClient [4721195eqId] Timeout

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.844 [vert.x-worker-thread-18] ERROR utionProgressDaoImpl [4721204eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id 'a3244651-2474-47bb-8d69-54520570bedd

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.845 [vert.x-worker-thread-18] ERROR tHandlingServiceImpl [4721205eqId] Failed to handle DI_COMPLETED event

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.846 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721206eqId] Error saving JournalRecord entity

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.969 [vert.x-worker-thread-11] ERROR PostgresClient [4721329eqId] Timeout

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.969 [vert.x-worker-thread-11] ERROR utionProgressDaoImpl [4721329eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id 'a3244651-2474-47bb-8d69-54520570bedd

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.970 [vert.x-worker-thread-11] ERROR tHandlingServiceImpl [4721330eqId] Failed to handle DI_COMPLETED event

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.970 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721330eqId] Error saving JournalRecord entity

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.991 [vert.x-worker-thread-9] DEBUG KafkaConsumerWrapper [4721351eqId] Consumer - id: 20 subscriptionPattern: SubscriptionDefinition(eventType=DI_COMPLETED, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_COMPLETED) a Record has been received. key: 29 currentLoad: 1 globalLoad: 549

09:24:06.991 [vert.x-worker-thread-8] DEBUG KafkaConsumerWrapper [4721351eqId] Threshold is exceeded, preparing to pause, globalLoad: 550, currentLoad: 264, requestNo: -8728

09:24:06.991 [vert.x-worker-thread-8] DEBUG KafkaConsumerWrapper [4721351eqId] Consumer - id: 65 subscriptionPattern: SubscriptionDefinition(eventType=DI_COMPLETED, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_COMPLETED) a Record has been received. key: 29 currentLoad: 264 globalLoad: 550

09:24:06.993 [vert.x-worker-thread-8] INFO taImportKafkaHandler [4721353eqId] Event was received with recordId: 30004133-ba4d-468c-8dab-77c9b0357b07 event type: DI_COMPLETED

09:24:06.994 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721354eqId] Error saving JournalRecord entity
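The KafkaConsumerWrapper lines above show load-based backpressure: each received record raises a shared global load counter, and a consumer pauses itself once that counter crosses a threshold. The sketch below is an illustration of the logged semantics only, not the actual FOLIO implementation; the threshold value of 550 is an assumption inferred from the "Threshold is exceeded ... globalLoad: 550" message.

```python
# Illustrative sketch (not the actual KafkaConsumerWrapper code) of the
# backpressure seen in the logs: every received record increments a shared
# global load counter; a consumer pauses once the global load reaches the
# threshold, and resumes after enough records have been processed.

GLOBAL_LOAD_THRESHOLD = 550  # assumption, inferred from the log excerpt above

class ConsumerWrapper:
    global_load = 0  # shared across all consumer instances

    def __init__(self, consumer_id: int):
        self.consumer_id = consumer_id
        self.current_load = 0
        self.paused = False

    def on_record_received(self) -> None:
        self.current_load += 1
        ConsumerWrapper.global_load += 1
        if ConsumerWrapper.global_load >= GLOBAL_LOAD_THRESHOLD:
            self.paused = True  # "Threshold is exceeded, preparing to pause"

    def on_record_processed(self) -> None:
        self.current_load -= 1
        ConsumerWrapper.global_load -= 1
        if ConsumerWrapper.global_load < GLOBAL_LOAD_THRESHOLD:
            self.paused = False

consumer = ConsumerWrapper(consumer_id=65)
for _ in range(550):
    consumer.on_record_received()
print(consumer.paused)  # True: global load has reached the threshold
```

Under sustained database timeouts, DI_COMPLETED events back up, the global load stays above the threshold, and consumers remain paused, which is consistent with the stuck and failed jobs reported above.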

