
Overview

This document contains the results of testing Data Import in Kiwi with optimistic locking (OL) enabled and an increased number of partitions in the Kafka MQ topics.

...

filtered by mod-source-record-manager + error


Field           Value

@ingestionTime  1643361849833

@log            054267740449:kcp1-folio-eis

@logStream      kcp1/mod-source-record-manager/d7b8533172d04fbeace7953e139e48eb

@message        09:24:06.835 [vert.x-worker-thread-18] ERROR PostgresClient [4721195eqId] Timeout

@timestamp      1643361846844

...

09:24:06.835 [vert.x-worker-thread-18] ERROR PostgresClient [4721195eqId] Timeout

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.844 [vert.x-worker-thread-18] ERROR utionProgressDaoImpl [4721204eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id 'a3244651-2474-47bb-8d69-54520570bedd

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.845 [vert.x-worker-thread-18] ERROR tHandlingServiceImpl [4721205eqId] Failed to handle DI_COMPLETED event

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.846 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721206eqId] Error saving JournalRecord entity

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.969 [vert.x-worker-thread-11] ERROR PostgresClient [4721329eqId] Timeout

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.969 [vert.x-worker-thread-11] ERROR utionProgressDaoImpl [4721329eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id 'a3244651-2474-47bb-8d69-54520570bedd

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.970 [vert.x-worker-thread-11] ERROR tHandlingServiceImpl [4721330eqId] Failed to handle DI_COMPLETED event

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.970 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721330eqId] Error saving JournalRecord entity

io.vertx.core.impl.NoStackTraceThrowable: Timeout

09:24:06.991 [vert.x-worker-thread-9] DEBUG KafkaConsumerWrapper [4721351eqId] Consumer - id: 20 subscriptionPattern: SubscriptionDefinition(eventType=DI_COMPLETED, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_COMPLETED) a Record has been received. key: 29 currentLoad: 1 globalLoad: 549

09:24:06.991 [vert.x-worker-thread-8] DEBUG KafkaConsumerWrapper [4721351eqId] Threshold is exceeded, preparing to pause, globalLoad: 550, currentLoad: 264, requestNo: -8728

09:24:06.991 [vert.x-worker-thread-8] DEBUG KafkaConsumerWrapper [4721351eqId] Consumer - id: 65 subscriptionPattern: SubscriptionDefinition(eventType=DI_COMPLETED, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_COMPLETED) a Record has been received. key: 29 currentLoad: 264 globalLoad: 550

09:24:06.993 [vert.x-worker-thread-8] INFO taImportKafkaHandler [4721353eqId] Event was received with recordId: 30004133-ba4d-468c-8dab-77c9b0357b07 event type: DI_COMPLETED

09:24:06.994 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [4721354eqId] Error saving JournalRecord entity
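The DEBUG lines above show the load-based backpressure in KafkaConsumerWrapper: each received record increments a per-consumer currentLoad and a shared globalLoad, and once the global load exceeds a threshold the consumer prepares to pause until enough records have been processed. A minimal sketch of that accounting, with hypothetical names and a made-up resume rule (the real FOLIO thresholds and resume logic may differ):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of load-threshold backpressure: every received record bumps a
// per-consumer and a (possibly shared) global counter; once the global
// load exceeds the threshold the consumer pauses, and it resumes after
// the load has drained to half the threshold (simple hysteresis).
class LoadSensor {
    private final AtomicInteger globalLoad; // shared across all consumers of the module
    private final AtomicInteger currentLoad = new AtomicInteger();
    private final int pauseThreshold;
    private boolean paused;

    LoadSensor(AtomicInteger globalLoad, int pauseThreshold) {
        this.globalLoad = globalLoad;
        this.pauseThreshold = pauseThreshold;
    }

    LoadSensor(int pauseThreshold) {
        this(new AtomicInteger(), pauseThreshold);
    }

    /** A record arrived; returns true if the consumer should now be paused. */
    boolean onRecordReceived() {
        currentLoad.incrementAndGet();
        if (globalLoad.incrementAndGet() > pauseThreshold) {
            paused = true; // "Threshold is exceeded, preparing to pause"
        }
        return paused;
    }

    /** A record finished processing; returns true if the consumer may keep running. */
    boolean onRecordProcessed() {
        currentLoad.decrementAndGet();
        int global = globalLoad.decrementAndGet();
        if (paused && global <= pauseThreshold / 2) {
            paused = false;
        }
        return !paused;
    }

    int currentLoad() { return currentLoad.get(); }
}
```

With many consumers sharing one counter, a single slow module drives globalLoad up (549, 550 above) and every consumer in the process pauses, which is why the pause messages appear across worker threads.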

...

filtered by mod

...


at io.vertx.pgclient.impl.codec.ErrorResponse.toException(ErrorResponse.java:31) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.QueryCommandBaseCodec.handleErrorResponse(QueryCommandBaseCodec.java:57) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.ExtendedQueryCommandCodec.handleErrorResponse(ExtendedQueryCommandCodec.java:90) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.PgDecoder.decodeError(PgDecoder.java:246) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.PgDecoder.decodeMessage(PgDecoder.java:132) [ms.jar:?]

at io.vertx.pgclient.impl.codec.PgDecoder.channelRead(PgDecoder.java:112) [ms.jar:?]

at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [ms.jar:?]

at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [ms.jar:?]

at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [ms.jar:?]

at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [ms.jar:?]

at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [ms.jar:?]

at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) [ms.jar:?]

at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) [ms.jar:?]

...

filtered by mod-source-record-manager + warning

15:27:02.687 [vertx-blocked-thread-checker] WARN ? [15001392eqId] Thread Thread[vert.x-worker-thread-2,5,main] has been blocked for 588042 ms, time limit is 60000 ms

io.vertx.core.VertxException: Thread blocked

at jdk.internal.misc.Unsafe.park(Native Method) ~[?:?]

at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) ~[?:?]

at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1079) ~[?:?]

at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369) ~[?:?]

at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:278) ~[?:?]

at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:86) ~[ms.jar:?]

at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:76) ~[ms.jar:?]

at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30) ~[ms.jar:?]

at io.kcache.KafkaCache.put(KafkaCache.java:511) ~[ms.jar:?]

at io.kcache.KafkaCache.remove(KafkaCache.java:548) ~[ms.jar:?]

at org.folio.kafka.cache.KafkaInternalCache.lambda$cleanupEvents$1(KafkaInternalCache.java:151) ~[ms.jar:?]

at org.folio.kafka.cache.KafkaInternalCache$$Lambda$917/0x00000008406d3840.accept(Unknown Source) ~[?:?]

at java.util.ArrayList.forEach(ArrayList.java:1541) ~[?:?]

at org.folio.kafka.cache.KafkaInternalCache.cleanupEvents(KafkaInternalCache.java:151) ~[ms.jar:?]

at org.folio.kafka.cache.util.CacheUtil.lambda$initCacheCleanupPeriodicTask$0(CacheUtil.java:24) ~[ms.jar:?]

at org.folio.kafka.cache.util.CacheUtil$$Lambda$915/0x0000000840839040.handle(Unknown Source) ~[?:?]

at io.vertx.core.impl.ContextImpl.lambda$null$0(ContextImpl.java:160) ~[ms.jar:?]

at io.vertx.core.impl.ContextImpl$$Lambda$898/0x0000000840822c40.handle(Unknown Source) ~[?:?]

at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:96) ~[ms.jar:?]

at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:158) ~[ms.jar:?]

at io.vertx.core.impl.ContextImpl$$Lambda$897/0x0000000840822840.run(Unknown Source) ~[?:?]

at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76) ~[ms.jar:?]

at io.vertx.core.impl.TaskQueue$$Lambda$145/0x0000000840202840.run(Unknown Source) ~[?:?]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]

at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[ms.jar:?]

at java.lang.Thread.run(Thread.java:829) ~[?:?]
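The WARN above comes from Vert.x's blocked-thread checker: a watchdog thread samples when each worker started its current task and warns once the elapsed time exceeds the configured limit (60000 ms here). The core check is a timestamp comparison; a sketch with injected clock values (hypothetical names, not the Vert.x source):

```java
// Sketch of a blocked-thread check: a task records when it started
// executing, and a periodic checker compares the elapsed time against
// a configured limit, emitting a warning when it is exceeded.
class BlockedThreadChecker {
    private final long maxExecTimeMs;

    BlockedThreadChecker(long maxExecTimeMs) {
        this.maxExecTimeMs = maxExecTimeMs;
    }

    /** Returns a Vert.x-style warning, or null if the thread is within its limit. */
    String check(String threadName, long startedAtMs, long nowMs) {
        long blockedMs = nowMs - startedAtMs;
        if (blockedMs <= maxExecTimeMs) {
            return null;
        }
        return "Thread " + threadName + " has been blocked for " + blockedMs
            + " ms, time limit is " + maxExecTimeMs + " ms";
    }
}
```

In this trace the worker was not deadlocked but stuck in a synchronous Kafka `get()` inside the cache cleanup task, so the checker kept re-warning with a growing blocked time (588042 ms).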

15:27:24.879 [vert.x-eventloop-thread-1] INFO RestRouting [15023584eqId] invoking getAdminHealth

15:27:24.880 [vert.x-eventloop-thread-1] INFO LogUtil [15023585eqId] 127.0.0.1:46174 GET /admin/health null HTTP_1_1 200 4 0 tid=null OK

15:27:26.344 [vert.x-eventloop-thread-1] INFO RestRouting [15025049eqId] invoking getAdminHealth

15:27:26.345 [vert.x-eventloop-thread-1] INFO LogUtil [15025050eqId] 127.0.0.1:46188 GET /admin/health null HTTP_1_1 200 4 0 tid=null OK


filtered by mod-inventory + errors


09:31:48 [] [] [] [] ERROR Conn
{ "message": "Cannot update record f47bb037-d48d-4710-99bc-ae0a89161a92 because it has been changed (optimistic locking): Stored _version is 2, _version of request is 1", "severity": "ERROR", "code": "23F09", "where": "PL/pgSQL function holdings_record_set_ol_version() line 8 at RAISE", "file": "pl_exec.c", "line": "3876", "routine": "exec_stmt_raise", "schema": "fs09000000_mod_inventory_storage", "table": "holdings_record" }
io.vertx.pgclient.PgException:
{ "message": "Cannot update record f47bb037-d48d-4710-99bc-ae0a89161a92 because it has been changed (optimistic locking): Stored _version is 2, _version of request is 1", "severity": "ERROR", "code": "23F09", "where": "PL/pgSQL function holdings_record_set_ol_version() line 8 at RAISE", "file": "pl_exec.c", "line": "3876", "routine": "exec_stmt_raise", "schema": "fs09000000_mod_inventory_storage", "table": "holdings_record" }
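The PgException above is optimistic locking at work: each holdings record carries a `_version`, and an update whose `_version` does not match the stored one is rejected by the `holdings_record_set_ol_version()` trigger. The rule reduces to a compare-and-bump on the version column; a minimal in-memory sketch of the same check (hypothetical class, not the mod-inventory-storage code):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory sketch of the optimistic-locking rule: an update must present
// the currently stored _version, otherwise it is rejected; on success the
// stored _version is incremented so any concurrent stale writer fails.
class OptimisticStore {
    private final Map<String, Integer> versions = new HashMap<>();
    private final Map<String, String> records = new HashMap<>();

    void create(String id, String body) {
        records.put(id, body);
        versions.put(id, 1);
    }

    /** @throws IllegalStateException when requestVersion is stale */
    void update(String id, String newBody, int requestVersion) {
        int stored = versions.get(id);
        if (stored != requestVersion) {
            throw new IllegalStateException("Cannot update record " + id
                + " because it has been changed (optimistic locking): Stored _version is "
                + stored + ", _version of request is " + requestVersion);
        }
        records.put(id, newBody);
        versions.put(id, stored + 1);
    }

    int version(String id) { return versions.get(id); }
}
```

Two data-import events updating the same holdings record concurrently both read `_version = 1`; the first update bumps it to 2, and the second fails exactly as in the log above.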

at io.vertx.pgclient.impl.codec.ErrorResponse.toException(ErrorResponse.java:31) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.QueryCommandBaseCodec.handleErrorResponse(QueryCommandBaseCodec.java:57) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.ExtendedQueryCommandCodec.handleErrorResponse(ExtendedQueryCommandCodec.java:90) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.PgDecoder.decodeError(PgDecoder.java:246) ~[ms.jar:?]

at io.vertx.pgclient.impl.codec.PgDecoder.decodeMessage(PgDecoder.java:132) [ms.jar:?]

at io.vertx.pgclient.impl.codec.PgDecoder.channelRead(PgDecoder.java:112) [ms.jar:?]

at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [ms.jar:?]

at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [ms.jar:?]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [ms.jar:?]

at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [ms.jar:?]

at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [ms.jar:?]

at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [ms.jar:?]

at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) [ms.jar:?]

at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) [ms.jar:?]

at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [ms.jar:?]

at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [ms.jar:?]

at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [ms.jar:?]

at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [ms.jar:?]

at java.lang.Thread.run(Thread.java:829) [?:?]

Example of errors from the logs for the "CREATE 50,000" test

filtered by mod-source-record-manager

12:41:35.095 [vert.x-worker-thread-2] ERROR PostgresClient [5063482eqId] Timeout

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.104 [vert.x-worker-thread-2] ERROR utionProgressDaoImpl [5063491eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id '229ad3be-6211-4f39-8b47-7dcb150ee763

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.105 [vert.x-worker-thread-2] ERROR tHandlingServiceImpl [5063492eqId] Failed to handle DI_COMPLETED event

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.105 [vert.x-worker-thread-2] ERROR PostgresClient [5063492eqId] Timeout

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.106 [vert.x-worker-thread-2] ERROR utionProgressDaoImpl [5063493eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id '229ad3be-6211-4f39-8b47-7dcb150ee763

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.105 [vert.x-eventloop-thread-1] ERROR JournalRecordDaoImpl [5063492eqId] Error saving JournalRecord entity

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.106 [vert.x-worker-thread-2] ERROR tHandlingServiceImpl [5063493eqId] Failed to handle DI_COMPLETED event

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.108 [vert.x-worker-thread-19] ERROR PostgresClient [5063495eqId] Timeout

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.108 [vert.x-worker-thread-19] ERROR utionProgressDaoImpl [5063495eqId] Rollback transaction. Failed to update jobExecutionProgress for jobExecution with id '229ad3be-6211-4f39-8b47-7dcb150ee763

io.vertx.core.impl.NoStackTraceThrowable: Timeout

12:41:35.108 [vert.x-worker-thread-19] ERROR tHandlingServiceImpl [5063495eqId] Failed to handle DI_COMPLETED event

io.vertx.core.impl.NoStackTraceThrowable: Timeout


filtered by mod-inventory + errors

11:53:59 [] [] [] [] ERROR KafkaConsumerWrapper Consumer - id: 0 subscriptionPattern: SubscriptionDefinition(eventType=DI_SRS_MARC_BIB_INSTANCE_HRID_SET, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_SRS_MARC_BIB_INSTANCE_HRID_SET) Error while commit offset: 49446

org.apache.kafka.common.errors.RebalanceInProgressException: Offset commit cannot be completed since the consumer is undergoing a rebalance for auto partition assignment. You can try completing the rebalance by calling poll() and then retry the operation.
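As the RebalanceInProgressException message says, an offset commit that races with a rebalance is retryable: poll again so the consumer can complete the rebalance, then re-issue the commit. A minimal bounded-retry sketch over an abstract commit step (the `CommitStep` interface and exception class are hypothetical stand-ins for illustration; a real consumer would call `poll()` and `commitSync()`):

```java
// Retryable-commit sketch: a rebalance-in-progress failure means "finish
// the rebalance and try the commit again", so we loop a bounded number of
// times instead of treating the first failure as fatal.
class RebalanceInProgress extends RuntimeException {}

interface CommitStep {
    void commit() throws RebalanceInProgress;
}

class CommitRetrier {
    /** Returns the number of attempts used; rethrows after maxAttempts failures. */
    static int commitWithRetry(CommitStep step, int maxAttempts) {
        RebalanceInProgress last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                step.commit();
                return attempt;
            } catch (RebalanceInProgress e) {
                last = e; // a real consumer would poll() here to let the rebalance complete
            }
        }
        throw last;
    }
}
```

Without such a retry, every rebalance triggered by adding partitions turns an otherwise transient condition into the "Error while commit offset" seen above.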

11:53:59 [] [] [] [] INFO eHridSetKafkaHandler Event payload has been received with event type: DI_SRS_MARC_BIB_INSTANCE_HRID_SET and recordId: a36d571a-a64a-4aa7-ab5e-ac78ade3a1ab

11:54:05 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-29, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Attempt to heartbeat failed since group is rebalancing

11:54:05 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-50, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Attempt to heartbeat failed since group is rebalancing

11:54:05 [] [] [] [] INFO ConsumerCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-50, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Revoke previously assigned partitions

11:54:05 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-6, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Attempt to heartbeat failed since group is rebalancing

11:54:05 [] [] [] [] INFO AbstractCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-50, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] (Re-)joining group

11:54:05 [] [] [] [] INFO ConsumerCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-6, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] Revoke previously assigned partitions

11:54:05 [] [] [] [] INFO AbstractCoordinator [Consumer clientId=consumer-DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4-6, groupId=DI_SRS_MARC_BIB_RECORD_MATCHED.mod-inventory-18.0.4] (Re-)joining group

11:54:05 [] [] [] [] INFO ConsumerCoordinator [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-9, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Finished assignment for group at generation 577: {consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-32-69b7ae2e-a7ac-4d76-84cd-19f7c9f6f240=Assignment(partitions=[kcp1.Default.fs09000000.DI_INVENTORY_INSTANCE_CREATED-0]), consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-53-8578c203-815e-4d18-8015-2425ae55381e=Assignment(partitions=[kcp1.Default.fs09000000.DI_INVENTORY_INSTANCE_CREATED-1]), consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-9-d9d97905-e982-4384-a175-5a0fe8d5d66f=Assignment(partitions=[])}

11:54:05 [] [] [] [] INFO bstractCoordinator$2 [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-9, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Successfully joined group with generation 577

11:54:05 [] [] [] [] INFO bstractCoordinator$2 [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-53, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Successfully joined group with generation 577

11:54:05 [] [] [] [] INFO bstractCoordinator$2 [Consumer clientId=consumer-DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4-32, groupId=DI_INVENTORY_INSTANCE_CREATED.mod-inventory-18.0.4] Successfully joined group with generation 577

filtered by mod-inventory + warnings

11:54:30 [] [] [] [] WARN ? Thread Thread[vert.x-worker-thread-9,5,main] has been blocked for 65139 ms, time limit is 60000 ms

io.vertx.core.VertxException: Thread blocked

at java.lang.StringUTF16.compress(StringUTF16.java:160) ~[?:?]

at java.lang.String.<init>(String.java:3214) ~[?:?]

at java.lang.String.<init>(String.java:276) ~[?:?]

at com.fasterxml.jackson.core.util.TextBuffer.contentsAsString(TextBuffer.java:421) ~[ms.jar:?]

at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.getText(ReaderBasedJsonParser.java:296) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:35) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.std.StringDeserializer.deserialize(StringDeserializer.java:10) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:293) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:156) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:290) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:249) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:26) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:293) ~[ms.jar:?]

at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:156) ~[ms.jar:?]

at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4497) ~[ms.jar:?]

at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2730) ~[ms.jar:?]

at io.vertx.core.json.jackson.DatabindCodec.fromParser(DatabindCodec.java:137) ~[ms.jar:?]

at io.vertx.core.json.jackson.DatabindCodec.fromString(DatabindCodec.java:101) ~[ms.jar:?]

at io.vertx.core.json.Json.decodeValue(Json.java:87) ~[ms.jar:?]

at org.folio.inventory.dataimport.handlers.actions.CreateHoldingEventHandler.lambda$handle$2(CreateHoldingEventHandler.java:82) ~[ms.jar:?]

at org.folio.inventory.dataimport.handlers.actions.CreateHoldingEventHandler$$Lambda$661/0x0000000840511c40.apply(Unknown Source) ~[?:?]

at io.vertx.core.impl.future.Mapping.onSuccess(Mapping.java:35) ~[ms.jar:?]

at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:62) ~[ms.jar:?]

at io.vertx.core.impl.future.FutureImpl.addListener(FutureImpl.java:164) ~[ms.jar:?]

at io.vertx.core.impl.future.FutureBase.map(FutureBase.java:129) ~[ms.jar:?]

at org.folio.inventory.dataimport.handlers.actions.CreateHoldingEventHandler.handle(CreateHoldingEventHandler.java:80) ~[ms.jar:?]

at org.folio.processing.events.services.processor.EventProcessorImpl.process(EventProcessorImpl.java:36) ~[ms.jar:?]

at org.folio.processing.events.EventManager.handleEvent(EventManager.java:64) ~[ms.jar:?]

at org.folio.inventory.dataimport.consumers.DataImportKafkaHandler.lambda$handle$0(DataImportKafkaHandler.java:97) ~[ms.jar:?]

at org.folio.inventory.dataimport.consumers.DataImportKafkaHandler$$Lambda$557/0x000000084043b040.apply(Unknown Source) ~[?:?]

at java.util.Optional.map(Optional.java:265) ~[?:?]

at org.folio.inventory.dataimport.consumers.DataImportKafkaHandler.lambda$handle$1(DataImportKafkaHandler.java:97) ~[ms.jar:?]

at org.folio.inventory.dataimport.consumers.DataImportKafkaHandler$$Lambda$517/0x0000000840409c40.apply(Unknown Source) ~[?:?]

at java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106) ~[?:?]

at java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235) ~[?:?]

at java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:143) ~[?:?]

at org.folio.inventory.dataimport.consumers.DataImportKafkaHandler.handle(DataImportKafkaHandler.java:96) ~[ms.jar:?]

at org.folio.kafka.KafkaConsumerWrapper.handle(KafkaConsumerWrapper.java:171) ~[ms.jar:?]

at org.folio.kafka.KafkaConsumerWrapper.handle(KafkaConsumerWrapper.java:24) ~[ms.jar:?]

at io.vertx.kafka.client.consumer.impl.KafkaConsumerImpl.lambda$handler$1(KafkaConsumerImpl.java:81) ~[ms.jar:?]

at io.vertx.kafka.client.consumer.impl.KafkaConsumerImpl$$Lambda$447/0x0000000840226c40.handle(Unknown Source) ~[?:?]

at io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.run(KafkaReadStreamImpl.java:237) ~[ms.jar:?]

at io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl.lambda$schedule$8(KafkaReadStreamImpl.java:192) ~[ms.jar:?]

at io.vertx.kafka.client.consumer.impl.KafkaReadStreamImpl$$Lambda$456/0x0000000840303840.handle(Unknown Source) ~[?:?]

at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:96) ~[ms.jar:?]

at io.vertx.core.impl.WorkerContext.lambda$run$1(WorkerContext.java:102) ~[ms.jar:?]

at io.vertx.core.impl.WorkerContext$$Lambda$118/0x00000008401a2840.run(Unknown Source) ~[?:?]

at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76) ~[ms.jar:?]

at io.vertx.core.impl.TaskQueue$$Lambda$107/0x0000000840175840.run(Unknown Source) ~[?:?]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]

at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[ms.jar:?]

at java.lang.Thread.run(Thread.java:829) ~[?:?]

11:54:30 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_INVENTORY_ITEM_CREATED.mod-inventory-18.0.4-32, groupId=DI_INVENTORY_ITEM_CREATED.mod-inventory-18.0.4] Attempt to heartbeat failed since group is rebalancing

11:54:30 [] [] [] [] INFO tbeatResponseHandler [Consumer clientId=consumer-DI_INVENTORY_HOLDING_MATCHED.mod-inventory-18.0.4-52, groupId=DI_INVENTORY_HOLDING_MATCHED.mod-inventory-18.0.4] Attempt to heartbeat failed for since member id consumer-DI_INVENTORY_HOLDING_MATCHED.mod-inventory-18.0.4-52-473d8b15-134b-43e7-9e7c-832d12b3aaba is not valid.

11:54:30 [] [] [] [] INFO AbstractCoordinator [Consumer clientId=consumer-DI_INVENTORY_HOLDING_UPDATED.mod-inventory-18.0.4-51, groupId=DI_INVENTORY_HOLDING_UPDATED.mod-inventory-18.0.4] (Re-)joining group