About this document: Several teams are working on improving data import reliability. From institutions, we need:
- Job profiles (including match/action/field mapping profiles)
- An environment in which a hosting provider can run tests (if applicable)
- Files to use for testing
- Expected time to complete the job
- Who submitted this information, and from which institution
Goals
- Measure reliability and performance during low-activity and peak-activity periods
- Measure against competitors' benchmarks
- Measure progress against the goal of handling a large file of 100,000 records
- Define clear and concise FOLIO benchmarks
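As a minimal illustration of the throughput measurement these goals imply, the sketch below converts a job's record count and elapsed time into a records-per-second rate. The function name and the example numbers are placeholders for illustration, not FOLIO measurements or FOLIO API calls.

```python
def import_throughput(record_count: int, elapsed_seconds: float) -> float:
    """Records processed per second for a completed import job."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return record_count / elapsed_seconds

# Hypothetical example: a 100,000-record file that completed in 50 minutes.
rate = import_throughput(100_000, 50 * 60)
print(f"{rate:.1f} records/second")  # 33.3 records/second
```

Recording the same rate for low-activity and peak-activity runs gives a simple, comparable benchmark number per job profile.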
| Job profile name | Expected outcome of import | Environment | Precondition(s) | File to use for testing | Can this same file be processed over and over? (Y/N) | Reporting institution/reporter |
|---|---|---|---|---|---|---|
| | A MARC bib and connected FOLIO Instance is created; available in real time for search via the Inventory app | Orchid Bugfest | N/A | 100,000 records_2 (.mrc) | Yes | |
| | A MARC bib and connected FOLIO Instance is created; available in real time for search via the Inventory app | Orchid Bugfest | N/A | ~400 records (marcxml) | Yes | |
| | | Orchid Bugfest | N/A | | Yes | |
| Update a MARC authority record | A MARC authority and connected FOLIO Instance is updated; update reflected in real time for search | | | | | |
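Since reporters are asked for test files and their record counts, a quick way to verify the size of a binary .mrc file is to count record terminators: in MARC 21, each record ends with the terminator byte 0x1D. The helper below is a hypothetical sketch for that check, not part of FOLIO's tooling, and the sample bytes are placeholder runs rather than valid MARC records.

```python
def count_marc_records(data: bytes) -> int:
    """Count MARC 21 records in a binary .mrc blob.

    Each MARC 21 record ends with the record terminator byte 0x1D,
    so the record count equals the number of terminators.
    """
    return data.count(b"\x1d")

# Placeholder sample: two terminator-delimited byte runs, for illustration only.
sample = b"record-one\x1drecord-two\x1d"
print(count_marc_records(sample))  # 2
```

Running this against a submitted test file confirms, for example, that a "100,000 records" file actually contains 100,000 records before timing an import against it.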