Steps:
Create a namespace with Bugfest dataset
...
Pic. 1 Example "Kafka UI topics & partitions"
Scale-up OpenSearch
Because indexing is a heavy process with high CPU and memory consumption, it is required (and strongly recommended) to scale up the shared OpenSearch AWS service before starting.
...
Adjust Kafka message retention (OPTIONAL)
Before starting indexing, find the log.retention.minutes property (used when log.retention.ms is null) and set it to 24 hours (1440 minutes). This can be done at the broker level.
If you decide to do this at the topic level only, change retention.ms instead: it takes precedence over the ..minutes property and is usually already set to some value.
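As a sketch, the topic-level change could be applied with the standard kafka-configs.sh tool; the broker address and topic name below are placeholders, not values from this environment:

```shell
# 24 hours expressed in milliseconds, for the topic-level retention.ms property
RETENTION_MS=$((24 * 60 * 60 * 1000))

# Topic-level override (hypothetical topic name). Note that the topic-level
# property is retention.ms, while log.retention.ms / log.retention.minutes
# are the broker-level equivalents.
kafka-configs.sh --bootstrap-server kafka:9092 --alter \
  --entity-type topics --entity-name <topic-name> \
  --add-config "retention.ms=${RETENTION_MS}"
```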
Tune mod-search config (REQUIRED)
KAFKA_EVENTS_CONCURRENCY (default: 2) — a higher value can speed up instance reindexing.
KAFKA_CONTRIBUTORS_CONCURRENCY (default: 2) — a higher value can speed up contributor reindexing.
KAFKA_SUBJECTS_CONCURRENCY (default: 2) — a higher value can speed up subject reindexing.
There is no point setting these higher than the topic's partition count, because at most one consumer is created per partition.
So if we have 50 partitions and 4 mod-search instances, we may set KAFKA_SUBJECTS_CONCURRENCY to 13 (4 × 13 = 52), and 12-13 consumers will be created for each app instance.
Since there are always more subjects/contributors than instances, only the subjects/contributors settings normally need tuning. If you observe that subjects/contributors are read from the topic faster than they are published, you may want to tune the instances topic as well.
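The sizing rule above can be sketched as a quick shell calculation, using the example's partition and instance counts:

```shell
# At most one consumer is created per partition, so per-instance concurrency
# should be roughly ceil(partitions / instances).
PARTITIONS=50
INSTANCES=4
CONCURRENCY=$(( (PARTITIONS + INSTANCES - 1) / INSTANCES ))  # ceiling division
echo "$CONCURRENCY"  # 13 -> e.g. KAFKA_SUBJECTS_CONCURRENCY=13
```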
Scale-up backend modules (REQUIRED)
For better performance, scale up the backend modules.
...
Pic. 2 Example "Backend module scale up"
For ECS Consortia tenants
In pgAdmin, run this query to check the current value, then change the value to false as in the screenshot:
SELECT feature_id, enabled
FROM cs00000int_mod_search.feature_config;
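A hedged sketch of the corresponding change with psql; the connection string and the decision to disable all rows are assumptions, so confirm the intended rows against the screenshot before running:

```shell
# Set enabled=false for mod-search feature flags (assumed: all rows should be disabled)
psql "$DATABASE_URL" -c \
  "UPDATE cs00000int_mod_search.feature_config SET enabled = false;"
```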
Start index
After completing all prerequisite steps, trigger the index with a POST request (e.g. from Postman).
URI: /search/index/inventory/reindex
Headers: X-Okapi-Tenant & X-Okapi-Token
Allowed values for resourceName: instance, authority, location
Body:
```json
{
  "recreateIndex": true,
  "resourceName": "instance",
  "indexSettings": {
    "numberOfShards": 1,
    "numberOfReplicas": 1
  }
}
```
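The same request as a curl sketch; the Okapi host is a placeholder and the tenant/token values are assumed to come from your environment:

```shell
curl -X POST "https://<okapi-host>/search/index/inventory/reindex" \
  -H "Content-Type: application/json" \
  -H "X-Okapi-Tenant: fs09000000" \
  -H "X-Okapi-Token: $OKAPI_TOKEN" \
  -d '{
        "recreateIndex": true,
        "resourceName": "instance",
        "indexSettings": { "numberOfShards": 1, "numberOfReplicas": 1 }
      }'
```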
Info: more information about indexes and reindex requests can be found here:
...
```json
// Request
PUT /folio-testing-sprint_instance_fs09000000/_settings
{
  "index": {
    "number_of_replicas": "1",
    "refresh_interval": "1s"
  }
}

// Response
{ "acknowledged": true }
```
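As a curl sketch of the same settings update; the OpenSearch endpoint is a placeholder:

```shell
curl -X PUT "https://<opensearch-host>/folio-testing-sprint_instance_fs09000000/_settings" \
  -H "Content-Type: application/json" \
  -d '{ "index": { "number_of_replicas": "1", "refresh_interval": "1s" } }'
```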
Scale-down backend modules (REQUIRED)
After the indexing process is finished, do not forget to scale down the backend modules in Rancher:
...
- mod-search (4 → 1) (or 4 → 2 for namespaces with HA mode)
- mod-inventory-storage (2 → 1) (or not scale down for namespaces with HA mode)
Scale-down OpenSearch
After the indexing process is finished, do not forget to scale down the shared OpenSearch AWS service.
...
Adjust Kafka message retention back (OPTIONAL, if previously modified)
Restore the previous value of the log.retention property (usually 8 hours).
Tune mod-search config back (REQUIRED)
Return the modified environment variables to their default values.
An additional approach in case the reindex doesn't work properly (failing, stuck, etc.)
1. Recreate the Kafka topics from Kafka UI.
2. Remove the existing indexes from OpenSearch.
3. Send the PUT and POST requests below to OpenSearch to clone the indexes. Do this for all the necessary tenants: select all the rows and send each request.
In this example, for tenant fs09000000:
PUT /general_instance_subject_fs09000000/_block/write
PUT /general_instance_fs09000000/_block/write
PUT /general_contributor_fs09000000/_block/write
PUT /general_authority_fs09000000/_block/write
POST /general_instance_fs09000000/_clone/folio-testing-sprint_instance_fs09000000
POST /general_instance_subject_fs09000000/_clone/folio-testing-sprint_instance_subject_fs09000000
POST /general_contributor_fs09000000/_clone/folio-testing-sprint_contributor_fs09000000
POST /general_authority_fs09000000/_clone/folio-testing-sprint_authority_fs09000000
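The block-and-clone sequence above can be scripted; the OpenSearch endpoint and the assumption that every index follows the same general_&lt;name&gt;_&lt;tenant&gt; naming pattern are placeholders inferred from the example:

```shell
OS="https://<opensearch-host>"  # placeholder endpoint
TENANT="fs09000000"
for IDX in instance instance_subject contributor authority; do
  # Make the source index read-only, then clone it to the target name
  curl -X PUT  "$OS/general_${IDX}_${TENANT}/_block/write"
  curl -X POST "$OS/general_${IDX}_${TENANT}/_clone/folio-testing-sprint_${IDX}_${TENANT}"
done
```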