Import fails with "'idx_records_matched_id_gen', duplicate key value violates unique constraint" SRS logs JUNIPER HF
Description
Environment
Definitely affecting Cornell, MTSU, Skidmore
Potential Workaround
Attachments
Activity

Ann-Marie Breaux, November 10, 2021 at 2:39 PM
Tested on Juniper BF, and no longer reproducing

Aliaksandr Fedasiuk, November 9, 2021 at 2:48 PM (Edited)
, the barcode search is used to check whether a barcode number is unique or not.
Now I see that the import process on Juniper BF is stable, but it is much slower than the import on Kiwi BF (I attached images: ).
Now I will investigate it.
I think we need to include ( and ) in Juniper if we have the capability to do so, because without the barcode index, import will be really slow.

Ann-Marie Breaux, November 9, 2021 at 2:02 PM (Edited)
Thanks. Why is it trying to search for barcodes? There's nothing in the import that involves barcodes.
please review Martin's comments above. Is there any other hotfix we need to apply to Juniper? Both of the fixes that he mentions ( and ) are in Kiwi, but not Juniper.
I'll try to import the 50K file again. Job started at 2:05 pm Juniper Bugfest time.

Martin Tran, November 8, 2021 at 11:49 PM
I applied the unique barcode index, things seem faster now. Please give it a try, .

Martin Tran, November 8, 2021 at 11:35 PM
There were two main issues during 's 50K import:
1. Searching by empty barcode.
These queries, as noted in https://folio-org.atlassian.net/browse/MODINVSTOR-792 (and fixed by https://folio-org.atlassian.net/browse/MODINV-508), caused intense slowness in the DB. To remedy this, we need to apply the unique barcode index, or revert mod-inventory to a few versions prior.
2. Seeing the following errors in mod-inventory, which could be symptoms of issue 1, but could be independent.
Also, the following errors need to be looked at and could impact performance:
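The "unique barcode index" remedy above can be sketched illustratively in SQLite (FOLIO runs PostgreSQL, and the table and column names here are assumed, not the real mod-inventory-storage schema): a partial unique index enforces barcode uniqueness while keeping empty barcodes out of the index entirely, so empty-barcode lookups stop hammering the table.

```python
import sqlite3

# Hypothetical stand-in for the inventory items table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id TEXT PRIMARY KEY, barcode TEXT)")

# Partial unique index: uniqueness is enforced only for non-empty barcodes.
conn.execute(
    "CREATE UNIQUE INDEX item_barcode_unique ON item (barcode) "
    "WHERE barcode IS NOT NULL AND barcode <> ''"
)

conn.execute("INSERT INTO item VALUES ('i1', 'b-001')")
conn.execute("INSERT INTO item VALUES ('i2', '')")  # empty barcodes allowed...
conn.execute("INSERT INTO item VALUES ('i3', '')")  # ...and may repeat

rejected = False
try:
    conn.execute("INSERT INTO item VALUES ('i4', 'b-001')")  # true duplicate
except sqlite3.IntegrityError:
    rejected = True
print("duplicate barcode rejected:", rejected)
```

The same `WHERE` clause works for a PostgreSQL partial index; the point is that rows with no barcode neither block inserts nor bloat the index.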
Details
Assignee: Aliaksandr Fedasiuk
Reporter: Carole Godfrey
Priority: P1
Story Points: 3
Development Team: Folijet Support
Fix versions:
Release: R2 2021 Hot Fix #4
Affected Institution: !!!ALL!!!
Documents Case 3 in the comments on
This issue is observed in a Honeysuckle HF3 environment and in Iris, but may have been fixed by Hotfix 1
Additional info from one of 's comments below:
1. Data Import Run A: Create 10,300 SRS MARC Bib and Instances using the new Data Import default job profile for Iris ( )
2. Retrieve Instance UUIDs via SRS MARC Query API:
{ "fieldsSearchExpression": "948.d ^= 'cu-batch'" }
3. Export the full MARC for the 10,300 records using the Data Export default job profile
4. Create & associate DI profiles:
job profile –
match profile (001 -> instance hrid) –
action profile (UPDATE Instance on match) –
mapping (overlay) –
5. Process the exported MARC (clean up OCLC identifiers in 035$a). Note: When we originally ran this and encountered the error, we DID NOT strip the 999 ff fields
6. Data Import Run B: Update the SRS MARC Bib records using the job profile from #4 (1 file: 10,300 records)
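The cleanup in step 5 can be sketched roughly as below. This is a hypothetical stand-in for the external script: records are modeled as plain (tag, indicators, subfields) tuples rather than real MARC, and the exact OCLC-prefix normalization is an assumption inferred from the queries in this ticket.

```python
def clean_record(fields, strip_999ff=True):
    """Illustrative cleanup: normalize assumed OCLC prefixes in 035$a and
    optionally strip the SRS-generated 999 ff field before re-import."""
    out = []
    for tag, ind, subs in fields:
        if strip_999ff and tag == "999" and ind == "ff":
            continue  # drop the SRS-generated 999 ff field
        if tag == "035":
            # Assumed normalization of '(OCoLC)oc...' / '(OCoLC)0...' prefixes.
            subs = {code: val.replace("(OCoLC)oc", "(OCoLC)")
                              .replace("(OCoLC)0", "(OCoLC)")
                    for code, val in subs.items()}
        out.append((tag, ind, subs))
    return out

record = [
    ("001", "  ", {"": "in00000000017"}),
    ("035", "  ", {"a": "(OCoLC)oc12345678"}),
    ("948", "  ", {"d": "cu-batch"}),
    ("999", "ff", {"i": "11111111-2222-3333-4444-555555555555"}),
]
cleaned = clean_record(record)
print(cleaned)
```

Note that the original (error-producing) run corresponds to `strip_999ff=False`, per the note in step 5.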
I decided to try running through the test on the Iris reference environment and was unsuccessful in making it through all steps. Here are the results:
DI Run A: Initial create of 10,300 SRS MARC Bib and Instances (hrid: 17): 12 m
Retrieve Instance UUIDs via SRS MARC Query API: 629 ms
{ "fieldsSearchExpression": "948.d ^= 'cu-batch'" }
Export the full MARC for the 10,300 records (hrid: 8): 3 m
DI profiles ported via API and manually linked/related
Process the exported MARC (clean up OCLC identifiers in 035$a AND strip 999 ff) with external script: 4 s
DI Run B: Update the 10,300 SRS MARC Bib records: Stuck at 37% after ~10 m
A follow-up SRS MARC Query API call shows 6,471 of the records remain in their original state, so we can conclude that 3,829 were updated (37%)
{ "fieldsSearchExpression": "(035.a ^= '(OCoLC)oc' or 035.a ^= '(OCoLC)0') and 948.d ^= 'cu-batch'" }
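As a sanity check on the figures above, a small sketch that rebuilds the follow-up query body (the expression syntax is copied from this ticket; no API call is made) and verifies the arithmetic:

```python
import json

# Reconstruct the follow-up SRS MARC Query API request body.
body = json.dumps({
    "fieldsSearchExpression":
        "(035.a ^= '(OCoLC)oc' or 035.a ^= '(OCoLC)0') and 948.d ^= 'cu-batch'"
})
print(body)

# 6,471 of 10,300 records still match the "original state" query,
# so the remainder were updated.
total, still_original = 10_300, 6_471
updated = total - still_original
print(updated, f"{100 * updated / total:.0f}%")
```

This reproduces the reported 3,829 updated records, i.e. the run stalled at roughly 37% complete.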
======================
When attempting to import a file, the import fails and the following message is observed in the mod-source-record-storage logs:
"idx_records_matched_id_gen", duplicate key value violates unique constraint
The new imports that fail are attempts to update records from a previous Data Import of a batch of approx 14K records that also had issues (specifically, the earlier batch failed with a "Completed with errors" status).
Analysis is needed to understand the state of the related DB table entries for the Data Import attempts that are failing with this constraint error.
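One hedged way to model the failure for that analysis: judging by its name alone, idx_records_matched_id_gen presumably enforces uniqueness over a (matched_id, generation) pair in the SRS records table, so a retried import that reuses a matched_id without advancing the generation would collide. A minimal SQLite stand-in (all table and column names assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records (id TEXT PRIMARY KEY, matched_id TEXT, generation INTEGER)"
)
# Assumed shape of the index, inferred from its name only.
conn.execute(
    "CREATE UNIQUE INDEX idx_records_matched_id_gen ON records (matched_id, generation)"
)

# Row left behind by the earlier, partially failed import.
conn.execute("INSERT INTO records VALUES ('rec-1', 'match-A', 0)")

collided = False
try:
    # A re-run that reuses the matched_id without bumping the generation
    # hits the same (matched_id, generation) pair, mirroring the log message.
    conn.execute("INSERT INTO records VALUES ('rec-2', 'match-A', 0)")
except sqlite3.IntegrityError as exc:
    collided = True
    print("duplicate key:", exc)

# Advancing the generation succeeds.
conn.execute("INSERT INTO records VALUES ('rec-2', 'match-A', 1)")
```

If this model is right, the analysis would look for leftover rows from the "Completed with errors" batch whose generation counters were never advanced.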