Overview
This document presents the results of testing Data Import in Kiwi with optimistic locking (OL) enabled and with an increased number of partitions in the Kafka topics.
Infrastructure
- 6 m5.xlarge EC2 instances
- 2 db.r6.xlarge database instances (one reader, one writer)
- MSK
  - 4 m5.2xlarge brokers in 2 availability zones
  - auto.create.topics.enable=true
  - log.retention.minutes=120
- mod-inventory
  - 256 CPU units, 1814 MB memory
  - inventory.kafka.DataImportConsumerVerticle.instancesNumber=10
  - inventory.kafka.MarcBibInstanceHridSetConsumerVerticle.instancesNumber=10
  - kafka.consumer.max.poll.records=10
- mod-inventory-storage
  - 128 CPU units, 544 MB memory
- mod-source-record-storage
  - 128 CPU units, 908 MB memory
- mod-source-record-manager
  - 128 CPU units, 1292 MB memory
- mod-data-import
  - 128 CPU units, 1024 MB memory
Software versions
- mod-data-import:2.2.0
- mod-inventory:18.0.4
- mod-inventory-storage:22.0.2-optimistic-locking.559
- mod-source-record-storage:5.2.5
- mod-source-record-manager:3.2.6
Results
Tests performed:
Test | Profile | KIWI | KIWI (with OL) | KIWI with 2 partitions | KIWI with 4 partitions |
---|---|---|---|---|---|
5K MARC Create | PTF - Create 2 | 5 min, 8 min | 8 min | 5 min | 5, 7 min |
5K MARC Update | PTF - Updates Success - 1 | 11 min, 13 min | 6 min | 7, 6 min | 6 min |
10K MARC Create | PTF - Create 2 | 11 min, 14 min | 12 min | 10, 12 min | 16 min |
10K MARC Update | PTF - Updates Success - 1 | 22 min, 24 min | 15 min | 11 min | failed |
25K MARC Create | PTF - Create 2 | 23 min, 25 min, 26 min | 24 min | 23, 26 min | 25 min |
25K MARC Update | PTF - Updates Success - 1 | 1 hr 20 min (completed with errors) *, 56 min | 40 min | failed | failed |
50K MARC Create | PTF - Create 2 | Completed with errors, 1 hr 40 min | 43 min | failed | failed |
50K MARC Update | PTF - Updates Success - 1 | 2 hr 32 min (job stuck at 76% completion) | 1 hr 4 min | failed | failed |
Increasing the number of partitions produced no noticeable change in service performance; however, negative trends were observed: the number of errors grew, and data import jobs failed more often.
Number of partitions: 2
This table shows the results of a group of sequential data import tests with 2 partitions. Where errors occurred, the number of entities missing from the database was determined.
Test (start time) | End time | Record counts from DB |
---|---|---|
CREATE 5,000 records; began 8:20 AM | 1/28/2022, 8:26 AM | |
CREATE 10,000 records; began 8:48 AM | 1/28/2022, 9:00 AM | |
UPDATE 10,000 records; began 9:17 AM | FAILED | |
CREATE 25,000 records; began 9:37 AM; completed with errors | 1/28/2022, 10:04 AM | fs09000000_mod_inventory_storage.item: 24996; fs09000000_mod_inventory_storage.holdings_record: 24996; fs09000000_mod_inventory_storage.instance: 25000; fs09000000_mod_source_record_storage.records_lb: 25000 |
Restart modules and clean Kafka topics | | |
CREATE 25,000 records; began 10:31 AM | 1/28/2022, 10:57 AM | |
Restart modules and clean Kafka topics | | |
CREATE 50,000 records; began 11:18 AM; completed with errors | 1/28/2022, 12:43 PM | fs09000000_mod_inventory_storage.item: 49284; fs09000000_mod_inventory_storage.holdings_record: 48684; fs09000000_mod_source_record_storage.records_lb: 50000; fs09000000_mod_inventory_storage.instance: 50000 |
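The per-table counts above can be turned into a "missing entities" summary by comparing the expected job size with the observed row counts. A minimal sketch, using the CREATE 25,000 numbers from the table (the function name and count source are illustrative; in practice the observed values come from `count(*)` queries against the listed tables):

```python
# Compare the expected record count of a data import job with the row
# counts observed in the database afterwards, reporting per-table shortfalls.

def missing_entities(expected: int, observed: dict) -> dict:
    """Return per-table shortfall (expected - observed), omitting complete tables."""
    return {table: expected - count
            for table, count in observed.items()
            if count < expected}

# Observed counts after the "CREATE 25,000; completed with errors" run:
observed_counts = {
    "fs09000000_mod_inventory_storage.item": 24996,
    "fs09000000_mod_inventory_storage.holdings_record": 24996,
    "fs09000000_mod_inventory_storage.instance": 25000,
    "fs09000000_mod_source_record_storage.records_lb": 25000,
}

print(missing_entities(25000, observed_counts))
# {'fs09000000_mod_inventory_storage.item': 4,
#  'fs09000000_mod_inventory_storage.holdings_record': 4}
```

In this run the instances and SRS records were all stored, while 4 items and 4 holdings records were lost downstream.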
In terms of dynamic characteristics (CPU load, memory), there were no changes compared to 1 partition. The failures are caused by peculiarities, and possibly bugs, in how the data import modules (mod-inventory, mod-source-record-storage) process Kafka messages.
Memory usage
Example of errors from the logs for the CREATE 25,000 job
Filtered by mod-source-record-manager
Field | Value |
---|---|
@ingestionTime | 1643361849833 |
@log | 054267740449:kcp1-folio-eis |
@logStream | kcp1/mod-source-record-manager/d7b8533172d04fbeace7953e139e48eb |
@message | 09:24:06.835 [vert.x-worker-thread-18] ERROR PostgresClient [4721195eqId] Timeout |
@timestamp | 1643361846844 |
09:24:06.835 [vert.x-worker-thread-18] ERROR PostgresClient [4721195eqId] Timeout
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.844 [vert.x-worker-thread-18] ERROR utionProgressDaoImpl [4721204eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id 'a3244651-2474-47bb-8d69-54520570bedd
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.845 [vert.x-worker-thread-18] ERROR tHandlingServiceImpl [4721205eqId] Failed to handle DI_COMPLETED event
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.846 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721206eqId] Error saving JournalRecord entity
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.969 [vert.x-worker-thread-11] ERROR PostgresClient [4721329eqId] Timeout
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.969 [vert.x-worker-thread-11] ERROR utionProgressDaoImpl [4721329eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id 'a3244651-2474-47bb-8d69-54520570bedd
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.970 [vert.x-worker-thread-11] ERROR tHandlingServiceImpl [4721330eqId] Failed to handle DI_COMPLETED event
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.970 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721330eqId] Error saving JournalRecord entity
io.vertx.core.impl.NoStackTraceThrowable: Timeout
09:24:06.991 [vert.x-worker-thread-9] DEBUG KafkaConsumerWrapper [4721351eqId] Consumer - id: 20 subscriptionPattern: SubscriptionDefinition(eventType=DI_COMPLETED, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_COMPLETED) a Record has been received. key: 29 currentLoad: 1 globalLoad: 549
09:24:06.991 [vert.x-worker-thread-8] DEBUG KafkaConsumerWrapper [4721351eqId] Threshold is exceeded, preparing to pause, globalLoad: 550, currentLoad: 264, requestNo: -8728
09:24:06.991 [vert.x-worker-thread-8] DEBUG KafkaConsumerWrapper [4721351eqId] Consumer - id: 65 subscriptionPattern: SubscriptionDefinition(eventType=DI_COMPLETED, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_COMPLETED) a Record has been received. key: 29 currentLoad: 264 globalLoad: 550
09:24:06.993 [vert.x-worker-thread-8] INFO taImportKafkaHandler [4721353eqId] Event was received with recordId: 30004133-ba4d-468c-8dab-77c9b0357b07 event type: DI_COMPLETED
09:24:06.994 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721354eqId] Error saving JournalRecord entity
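The "Threshold is exceeded, preparing to pause, globalLoad: 550" messages above reflect a load-based backpressure scheme: the consumer wrapper tracks the number of in-flight records and pauses consumption once a global threshold is crossed, resuming when the load drains. A minimal sketch of the idea (the class shape and threshold values are illustrative, not the module's actual implementation):

```python
class BackpressureConsumer:
    """Pause consumption when in-flight load exceeds a threshold, resume when it drains."""

    def __init__(self, pause_threshold: int = 550, resume_threshold: int = 275):
        self.pause_threshold = pause_threshold
        self.resume_threshold = resume_threshold
        self.global_load = 0      # records received but not yet fully processed
        self.paused = False

    def on_record_received(self) -> None:
        self.global_load += 1
        if not self.paused and self.global_load >= self.pause_threshold:
            self.paused = True    # a real wrapper would call consumer.pause(partitions)

    def on_record_processed(self) -> None:
        self.global_load -= 1
        if self.paused and self.global_load <= self.resume_threshold:
            self.paused = False   # a real wrapper would call consumer.resume(partitions)
```

If processing (here, the database writes) is slower than consumption, the load climbs toward the pause threshold, which matches the globalLoad values of 549-550 seen in the log alongside PostgresClient timeouts.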
Errors filtered by mod-inventory
09:31:48 [] [] [] [] ERROR Conn { "message": "Cannot update record f47bb037-d48d-4710-99bc-ae0a89161a92 because it has been changed (optimistic locking): Stored _version is 2, _version of request is 1", "severity": "ERROR", "code": "23F09", "where": "PL/pgSQL function holdings_record_set_ol_version() line 8 at RAISE", "file": "pl_exec.c", "line": "3876", "routine": "exec_stmt_raise", "schema": "fs09000000_mod_inventory_storage", "table": "holdings_record" }
io.vertx.pgclient.PgException: { "message": "Cannot update record f47bb037-d48d-4710-99bc-ae0a89161a92 because it has been changed (optimistic locking): Stored _version is 2, _version of request is 1", "severity": "ERROR", "code": "23F09", "where": "PL/pgSQL function holdings_record_set_ol_version() line 8 at RAISE", "file": "pl_exec.c", "line": "3876", "routine": "exec_stmt_raise", "schema": "fs09000000_mod_inventory_storage", "table": "holdings_record" }
at io.vertx.pgclient.impl.codec.ErrorResponse.toException(ErrorResponse.java:31) ~[ms.jar:?]
at io.vertx.pgclient.impl.codec.QueryCommandBaseCodec.handleErrorResponse(QueryCommandBaseCodec.java:57) ~[ms.jar:?]
at io.vertx.pgclient.impl.codec.ExtendedQueryCommandCodec.handleErrorResponse(ExtendedQueryCommandCodec.java:90) ~[ms.jar:?]
at io.vertx.pgclient.impl.codec.PgDecoder.decodeError(PgDecoder.java:246) ~[ms.jar:?]
at io.vertx.pgclient.impl.codec.PgDecoder.decodeMessage(PgDecoder.java:132) [ms.jar:?]
at io.vertx.pgclient.impl.codec.PgDecoder.channelRead(PgDecoder.java:112) [ms.jar:?]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) [ms.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [ms.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [ms.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [ms.jar:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [ms.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [ms.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [ms.jar:?]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [ms.jar:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [ms.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [ms.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) [ms.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) [ms.jar:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [ms.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [ms.jar:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [ms.jar:?]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [ms.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
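The optimistic-locking failure above ("Stored _version is 2, _version of request is 1") occurs when two events try to update the same holdings record concurrently: the second writer still holds the stale `_version`. A common remedy is to re-read the record and retry the update with the fresh version. A minimal sketch of that pattern against an in-memory store (this illustrates the retry idea only; it is not mod-inventory's actual code):

```python
class OptimisticLockError(Exception):
    pass

class VersionedStore:
    """In-memory store that rejects updates carrying a stale _version."""

    def __init__(self):
        self.rows = {}  # record id -> (version, data)

    def read(self, rec_id):
        return self.rows[rec_id]

    def update(self, rec_id, data, version):
        current_version, _ = self.rows[rec_id]
        if version != current_version:
            raise OptimisticLockError(
                f"Stored _version is {current_version}, _version of request is {version}")
        self.rows[rec_id] = (current_version + 1, data)

def update_with_retry(store, rec_id, mutate, max_attempts=3):
    """Re-read the record and retry when an optimistic-locking conflict occurs."""
    for _ in range(max_attempts):
        version, data = store.read(rec_id)
        try:
            store.update(rec_id, mutate(data), version)
            return
        except OptimisticLockError:
            continue  # another writer won the race; reload and try again
    raise OptimisticLockError(f"gave up after {max_attempts} attempts")
```

The trade-off is that a blind retry can overwrite the competing change, so the `mutate` step must re-derive its result from the freshly read data rather than reuse a stale payload.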
Example of errors from the logs for the CREATE 50,000 job
Filtered by mod-source-record-manager
12:41:35.095 [vert.x-worker-thread-2] ERROR PostgresClient [5063482eqId] Timeout
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.104 [vert.x-worker-thread-2] ERROR utionProgressDaoImpl [5063491eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id '229ad3be-6211-4f39-8b47-7dcb150ee763
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.105 [vert.x-worker-thread-2] ERROR tHandlingServiceImpl [5063492eqId] Failed to handle DI_COMPLETED event
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.105 [vert.x-worker-thread-2] ERROR PostgresClient [5063492eqId] Timeout
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.106 [vert.x-worker-thread-2] ERROR utionProgressDaoImpl [5063493eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id '229ad3be-6211-4f39-8b47-7dcb150ee763
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.105 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [5063492eqId] Error saving JournalRecord entity
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.106 [vert.x-worker-thread-2] ERROR tHandlingServiceImpl [5063493eqId] Failed to handle DI_COMPLETED event
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.108 [vert.x-worker-thread-19] ERROR PostgresClient [5063495eqId] Timeout
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.108 [vert.x-worker-thread-19] ERROR utionProgressDaoImpl [5063495eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id '229ad3be-6211-4f39-8b47-7dcb150ee763
io.vertx.core.impl.NoStackTraceThrowable: Timeout
12:41:35.108 [vert.x-worker-thread-19] ERROR tHandlingServiceImpl [5063495eqId] Failed to handle DI_COMPLETED event
io.vertx.core.impl.NoStackTraceThrowable: Timeout
Filtered by mod-inventory
11:53:59 [] [] [] [] ERROR KafkaConsumerWrapper Consumer - id: 0 subscriptionPattern: SubscriptionDefinition(eventType=DI_SRS_MARC_BIB_INSTANCE_HRID_SET, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_SRS_MARC_BIB_INSTANCE_HRID_SET) Error while commit offset: 49446
org.apache.kafka.common.errors.RebalanceInProgressException: Offset commit cannot be completed since the consumer is undergoing a rebalance for auto partition assignment. You can try completing the rebalance by calling poll() and then retry the operation.
11:53:59 [] [] [] [] INFO eHridSetKafkaHandler Event payload has been received with event type: DI_SRS_MARC_BIB_INSTANCE_HRID_SET and recordId: a36d571a-a64a-4aa7-ab5e-ac78ade3a1ab
11:54:05 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-29, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Attempt to heartbeat failed since group is rebalancing
11:54:05 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-50, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Attempt to heartbeat failed since group is rebalancing
11:54:05 [] [] [] [] INFO ConsumerCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-50, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Revoke previously assigned partitions
11:54:05 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-6, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Attempt to heartbeat failed since group is rebalancing
11:54:05 [] [] [] [] INFO AbstractCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-50, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] (Re-)joining group
11:54:05 [] [] [] [] INFO ConsumerCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-6, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Revoke previously assigned partitions
11:54:05 [] [] [] [] INFO AbstractCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-6, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] (Re-)joining group
11:54:05 [] [] [] [] INFO ConsumerCoordinator [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-9, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Finished assignment for group at generation 577: {consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-32-69b7ae2e-a7ac-4d76-84cd-19f7c9f6f240=Assignment(partitions=[kcp1.Default.fs09000000.DI_INVENTORY_INSTANCE_CREATED-0]), consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-53-8578c203-815e-4d18-8015-2425ae55381e=Assignment(partitions=[kcp1.Default.fs09000000.DI_INVENTORY_INSTANCE_CREATED-1]), consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-9-d9d97905-e982-4384-a175-5a0fe8d5d66f=Assignment(partitions=[])}
11:54:05 [] [] [] [] INFO bstractCoordinator$2 [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-9, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Successfully joined group with generation 577
11:54:05 [] [] [] [] INFO bstractCoordinator$2 [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-53, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Successfully joined group with generation 577
11:54:05 [] [] [] [] INFO bstractCoordinator$2 [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-32, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Successfully joined group with generation 577
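The `RebalanceInProgressException` above shows offset commits racing with consumer-group rebalances triggered by the partition change. As the Kafka error text itself suggests, the commit can be retried after the next `poll()` lets the consumer finish rejoining the group. A minimal sketch of that retry loop (the consumer object here is a stand-in, not the module's actual Kafka wrapper):

```python
class RebalanceInProgressError(Exception):
    """Stand-in for kafka's RebalanceInProgressException."""

def commit_with_retry(consumer, offsets, max_attempts=3):
    """Retry an offset commit that fails mid-rebalance, polling in between
    so the consumer can complete the rebalance before the next attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            consumer.commit(offsets)
            return attempt
        except RebalanceInProgressError:
            consumer.poll(timeout_ms=500)  # lets the rebalance complete
    raise RebalanceInProgressError(f"commit still failing after {max_attempts} attempts")

class RebalancingOnceConsumer:
    """Fake consumer whose first commit lands during a rebalance."""
    def __init__(self):
        self.rebalancing = True
        self.polls = 0
        self.committed = None

    def commit(self, offsets):
        if self.rebalancing:
            raise RebalanceInProgressError("rebalance for auto partition assignment")
        self.committed = offsets

    def poll(self, timeout_ms):
        self.polls += 1
        self.rebalancing = False  # rebalance finishes during this poll
```

Without such a retry, a commit lost to a rebalance means the new partition owner re-reads from the last committed offset, so events are redelivered; combined with the optimistic-locking conflicts above, that is a plausible source of the duplicate-update errors seen in the larger jobs.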