Ongoing Indexing with Merged Documents in Postgres


Problem statement

Issue: MSEARCH-899

Description: With the introduction of the reindex improvements, the procedure was split into two phases. The merge phase preprocesses all documents from the inventory and stores them in the Postgres database. The second, “upload” phase loads the preprocessed search documents into the OpenSearch database. This approach allows running only the second phase when there are no major changes to the inventory.

However, ongoing indexing creates the issue described in the ticket: the “merge” and “upload” phases must happen sequentially for each document, and the aggregate query must run on every insert/delete operation. This puts a heavy load on the Postgres database because of the large number of simultaneous write/read operations against the same tables.
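
For illustration, here is a minimal sketch of the kind of aggregate query that runs on every change. The contributors and instance_contributor tables and their columns are hypothetical stand-ins for the example, not the actual mod-search schema:

  -- Hypothetical merge-phase tables: contributors(id, name) and
  -- instance_contributor(contributor_id, instance_id, tenant_id).
  -- On every insert/delete the aggregate is re-evaluated for the
  -- affected contributor while other operations keep writing to
  -- the same tables.
  select c.id,
         c.name,
         jsonb_agg(jsonb_build_object(
             'instanceId', ic.instance_id,
             'tenantId',   ic.tenant_id)) as instances
  from contributors c
  join instance_contributor ic on ic.contributor_id = c.id
  where c.id = ?   -- contributor touched by the current change
  group by c.id, c.name;

Running a read like this for every write is what produces the contention described above.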

Solution options

Option 1: Query optimization

Description: Creation of additional indexes and/or reorganization of the query.

Pros:

  • Small effort

Decision: DECLINED due to low to no impact on performance

Option 2: Extracting upload into a separate thread

Description: Extracting the upload into a separate thread reduces the number of identical or similar aggregate queries executed on each insert/delete operation.

Pros:

  • Medium effort

Cons:

  • Eventual consistency for sub-resources (subjects, contributors, classifications)

Decision: IN PROGRESS

Extracting upload into a separate thread

The approach is as follows, using contributors as an example (a consolidated SQL sketch follows the list):

  1. Add a last_updated_date column to the contributors table

  2. Create the sub_resources_locks table (id, entity_type, locked_flag, last_updated_date) and populate it with a row for each sub-resource type

  3. Create a job that runs every minute, gets the list of tenants (select t.tenantjson -> 'descriptor' ->> 'id' from public.tenants t where t.tenantjson -> 'descriptor' ->> 'id' <> 'supertenant';), and for each tenant:

    1. Tries to acquire the lock: update sub_resources_locks set locked_flag = true where entity_type = 'contributor' and locked_flag = false returning id, last_updated_date

    2. If an id is returned, runs the aggregate query for contributors with a where last_updated_date >= ? order by last_updated_date limit 100 clause. The limit of 100 is the number of records that can be processed within a minute; even if processing takes longer, the lock prevents the next run from starting

    3. Loads the aggregates into OpenSearch

    4. Releases the lock: update sub_resources_locks set locked_flag = false, last_updated_date = current_timestamp where entity_type = 'contributor' and locked_flag = true
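
Below is a consolidated SQL sketch of the steps above. The tenant query, the pagination clause, and the lock statements are taken from the text; column types, defaults, and the batch query body are assumptions for illustration:

  -- Step 1: track per-row modification time (column name from the text;
  -- type and default are assumptions).
  alter table contributors
      add column last_updated_date timestamp not null default current_timestamp;

  -- Step 2: one lock row per sub-resource type.
  create table sub_resources_locks (
      id                uuid        primary key default gen_random_uuid(),
      entity_type       varchar(64) not null unique,
      locked_flag       boolean     not null default false,
      last_updated_date timestamp   not null default current_timestamp
  );
  insert into sub_resources_locks (entity_type)
  values ('subject'), ('contributor'), ('classification');

  -- Step 3: tenant list for the job (query from the text).
  select t.tenantjson -> 'descriptor' ->> 'id'
  from public.tenants t
  where t.tenantjson -> 'descriptor' ->> 'id' <> 'supertenant';

  -- Step 3.1: try to acquire the lock; an empty result means the
  -- previous run is still in progress, so this iteration is skipped.
  update sub_resources_locks
  set locked_flag = true
  where entity_type = 'contributor' and locked_flag = false
  returning id, last_updated_date;

  -- Step 3.2: fetch the next batch; the selected columns stand in for
  -- the real contributor aggregate, and ? is the last_updated_date
  -- returned by step 3.1.
  select c.id, c.name, c.last_updated_date
  from contributors c
  where c.last_updated_date >= ?
  order by c.last_updated_date
  limit 100;

  -- Step 3.4: release the lock; last_updated_date on the lock row is
  -- the watermark that the next run reads in step 3.1.
  update sub_resources_locks
  set locked_flag = false, last_updated_date = current_timestamp
  where entity_type = 'contributor' and locked_flag = true;

Step 3.3 (loading the aggregates into OpenSearch) happens in the application between the fetch and the unlock.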

NB! In ECS mode the job should run only for the central tenant
