Error from logs:
WARN Fetcher [Consumer clientId=consumer-DI_INVENTORY_HOLDING_CREATED.mod-inventory-18.0.4-33, groupId=DI_INVENTORY_HOLDING_CREATED.mod-inventory-18.0.4] Received unknown topic or partition error in fetch for partition kcp1.Default.fs09000000.DI_INVENTORY_HOLDING_CREATED-0
@timestamp 1642416090190
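This warning typically means the fetcher requested a partition the broker does not (or does not yet) know about. One way to confirm the topic and its partition count from the broker's side is Kafka's AdminClient; a minimal sketch, assuming the broker address from the ProducerConfig dump further down and the topic name from the warning:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class TopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker address taken from the ProducerConfig dump below.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka.kcp1.folio-eis.us-east-1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            String topic = "kcp1.Default.fs09000000.DI_INVENTORY_HOLDING_CREATED";
            // If the topic is missing, get() throws an ExecutionException
            // wrapping UnknownTopicOrPartitionException.
            TopicDescription desc =
                    admin.describeTopics(List.of(topic)).all().get().get(topic);
            System.out.printf("%s has %d partition(s)%n",
                    topic, desc.partitions().size());
        }
    }
}
```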
Test | Job profile | KIWI | KIWI (with OL) | KIWI with 2 partitions | KIWI with 4 partitions |
---|---|---|---|---|---|
5K MARC Create | PTF - Create 2 | 5 min, 8 min | 8 min | 5 min | 7 min |
5K MARC Update | PTF - Updates Success - 1 | 11 min, 13 min | 6 min | 7 min | |
10K MARC Create | PTF - Create 2 | 11 min, 14 min | 12 min | 10 min | 16 min |
10K MARC Update | PTF - Updates Success - 1 | 22 min, 24 min | 15 min | 11 min | |
25K MARC Create | PTF - Create 2 | 23 min, 25 min, 26 min | 24 min | 23 min | 25 min |
25K MARC Update | PTF - Updates Success - 1 | 1 hr 20 min (completed with errors)*, 56 min | 40 min | failed | |
50K MARC Create | PTF - Create 2 | Completed with errors, 1 hr 40 min | 43 min | - | |
50K MARC Update | PTF - Updates Success - 1 | 2 hr 32 min (job stuck at 76% completion) | 1 hr 4 min | - | |
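The two right-hand columns vary only the partition count of the data-import topics. Kafka only allows the partition count to grow; one way to raise it is AdminClient.createPartitions. A minimal sketch, assuming the broker address from the config dump below and a DI topic following the kcp1.Default.&lt;tenant&gt;.&lt;eventType&gt; naming seen in the logs:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka.kcp1.folio-eis.us-east-1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Partition counts can only be increased; 4 matches the
            // last column of the table above. Topic name is illustrative.
            admin.createPartitions(Map.of(
                    "kcp1.Default.fs09000000.DI_SRS_MARC_BIB_RECORD_MODIFIED",
                    NewPartitions.increaseTo(4)
            )).all().get();
        }
    }
}
```

Note that adding partitions changes topic metadata and triggers a consumer-group rebalance, which may be related to the rebalance errors recorded below.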
Number of partitions: 2
Memory usage
org.apache.kafka.common.errors.RebalanceInProgressException: Offset commit cannot be completed since the consumer is undergoing a rebalance for auto partition assignment. You can try completing the rebalance by calling poll() and then retry the operation.
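As the message says, the commit fails because a rebalance is in flight; calling poll() completes the rebalance, after which the commit can be retried. A minimal consumer-loop sketch of that retry, using the plain Kafka client rather than FOLIO's KafkaConsumerWrapper; record processing is elided:

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.RebalanceInProgressException;

public class CommitRetryLoop {
    // `consumer` is assumed to be already subscribed; loops forever.
    static void pollAndCommit(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMillis(500));
            // ... process `records` ...
            try {
                consumer.commitSync();
            } catch (RebalanceInProgressException e) {
                // The next poll() completes the rebalance; the commit is
                // retried on the following pass through the loop.
            }
        }
    }
}
```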
@ingestionTime 1642509297159
@log 054267740449:kcp1-folio-eis
@logStream kcp1/mod-inventory/cb97f30af8b74ce88e5faf0895bb9650
@message 12:34:54 [] [] [] [] ERROR KafkaConsumerWrapper Consumer - id: 51 subscriptionPattern: SubscriptionDefinition(eventType=DI_SRS_MARC_BIB_RECORD_MATCHED_READY_FOR_POST_PROCESSING, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_SRS_MARC_BIB_RECORD_MATCHED_READY_FOR_POST_PROCESSING) Error while commit offset: 21596
@timestamp 1642509294406
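The subscriptionPattern in this record is a regex over kcp1.Default.&lt;tenant&gt;.&lt;eventType&gt; topic names. With the plain Kafka client the equivalent is pattern subscription; a minimal sketch, where the group id (modeled on the one in the fetch warning above) and the serializers are assumptions:

```java
import java.time.Duration;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PatternSubscription {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka.kcp1.folio-eis.us-east-1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG,
                "DI_SRS_MARC_BIB_RECORD_MATCHED_READY_FOR_POST_PROCESSING.mod-inventory-18.0.4");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Same pattern as in the log record above.
            consumer.subscribe(Pattern.compile(
                    "kcp1\\.Default\\.\\w{1,}\\.DI_SRS_MARC_BIB_RECORD_MATCHED_READY_FOR_POST_PROCESSING"));
            // Pattern subscriptions are matched against broker metadata on poll().
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}
```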
Number of partitions: 3
@ingestionTime 1642594255832
@log 054267740449:kcp1-folio-eis
@logStream kcp1/mod-inventory/cdd928abb52f47cc99a179771c415e2d
@message 12:10:55 [] [] [] [] ERROR KafkaConsumerWrapper Error while processing a record - id: 13 subscriptionPattern: SubscriptionDefinition(eventType=DI_SRS_MARC_BIB_RECORD_MODIFIED, subscriptionPattern=kcp1\.Default\.\w{1,}\.DI_SRS_MARC_BIB_RECORD_MODIFIED)
io.vertx.core.impl.NoStackTraceThrowable: Failed to process data import event payload
@timestamp 1642594255751
12:10:55 [] [] [] [] INFO AbstractConfig ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [kafka.kcp1.folio-eis.us-east-1:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id = producer-12684
compression.type = gzip
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
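Almost everything in the dump above is a Kafka default; the values that stand out are enable.idempotence = true (which requires acks = -1, i.e. acks=all) and compression.type = gzip. A minimal sketch of a producer with just those overrides; the topic name is illustrative, following the naming pattern seen in the logs:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DiProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka.kcp1.folio-eis.us-east-1:9092");
        // Non-default values from the dump: idempotence on (forces acks=all)
        // and gzip compression.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Illustrative topic and payload.
            producer.send(new ProducerRecord<>(
                    "kcp1.Default.fs09000000.DI_SRS_MARC_BIB_RECORD_MODIFIED",
                    "key", "payload"));
        }
    }
}
```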
Number of partitions: 4
1/25/2022
Test run | Completed |
---|---|
5,000 records, began 7:33 AM today | 1/2 |
5,000 records, began 7:59 AM today | 1/25/2022, 8:06 AM |
10,000 records, began 8:20 AM today | 1/25/2022, 8:36 AM |
25,000 records, began 8:45 AM today | |
25,000 records, began 12:55 PM today | 1/25/2022, 1:20 PM |
50,000 records, began 2:35 PM | |
5,000 records, began 12:11 PM today | |