
Attendees

Darsi Rueda, Jeff Fleming, jpnelson, Tod Olson


Meeting Link

...

Migration status/checkin



Optimistic locking workarounds

Could disable optimistic locking entirely through the testing phase and always set _version = 1, or at least test this approach. Did anyone (especially Stanford) try this?

Stanford created an Airflow DAG that drops the OL triggers for the inventory tables, monitors running migration jobs, and restores the OL triggers once 5 minutes have elapsed since the last active migration job. Testing starts today.
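A minimal sketch of what a DAG like that could look like (not Stanford's actual code): the folio_db connection id, the trigger and table names, and the migration_jobs bookkeeping table used for the "quiet for 5 minutes" check are all assumptions for illustration.

```python
"""Sketch only: drop OL triggers, wait for migration jobs to go quiet, restore triggers."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook

# Assumed inventory tables and FOLIO schema name; adjust to your tenant.
INVENTORY_TABLES = ["instance", "holdings_record", "item"]
SCHEMA = "tenant_mod_inventory_storage"


def set_ol_triggers(enabled: bool) -> None:
    """Disable or re-enable the optimistic-locking triggers (trigger names are assumed)."""
    hook = PostgresHook(postgres_conn_id="folio_db")  # hypothetical Airflow connection id
    action = "ENABLE" if enabled else "DISABLE"
    for table in INVENTORY_TABLES:
        hook.run(
            f"ALTER TABLE {SCHEMA}.{table} {action} TRIGGER set_{table}_ol_version_trigger"
        )


def wait_for_migration_jobs() -> None:
    """Poll a hypothetical job-tracking table until no migration job has been
    active for 5 minutes, then return so the restore task can run."""
    import time

    hook = PostgresHook(postgres_conn_id="folio_db")
    while True:
        rows = hook.get_records(
            "SELECT count(*) FROM migration_jobs "
            "WHERE finished_at IS NULL OR finished_at > now() - interval '5 minutes'"
        )
        if rows[0][0] == 0:
            return
        time.sleep(60)


with DAG(
    dag_id="ol_trigger_maintenance",
    start_date=datetime(2022, 8, 1),
    schedule_interval=None,  # triggered manually alongside migration loads
    catchup=False,
) as dag:
    drop = PythonOperator(task_id="drop_ol_triggers",
                          python_callable=lambda: set_ol_triggers(False))
    wait = PythonOperator(task_id="wait_for_quiet_period",
                          python_callable=wait_for_migration_jobs)
    restore = PythonOperator(task_id="restore_ol_triggers",
                             python_callable=lambda: set_ol_triggers(True))

    drop >> wait >> restore
```

The point of the shape is just that dropping and restoring the triggers bracket the migration run automatically, rather than relying on someone remembering to re-enable them.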


Jeff (Duke): Adding automatic updates and corrections for optimistic locking errors. He runs daily incremental updates. The process double-checks the optimistic locking by looking up the current version number and sending it along with the update. But sometimes the same record gets updated twice in one run (bound-withs!!!), which produces errors; in that case he looks up the version number again and resends. He goes through the batch APIs first; for error correction he handles each record individually, looking up the version number before sending.
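A rough sketch of that look-up-the-version-and-resend pattern (not Duke's actual script). The /instance-storage/instances path and the _version property come from mod-inventory-storage; treating HTTP 409 as the optimistic-locking conflict response is an assumption to verify against your FOLIO release.

```python
import requests

OKAPI_URL = "https://okapi.example.edu"                        # hypothetical
HEADERS = {"x-okapi-tenant": "diku", "x-okapi-token": "..."}   # placeholder auth


def update_with_current_version(record: dict) -> None:
    base = f"{OKAPI_URL}/instance-storage/instances/{record['id']}"

    # Look up the stored record to learn its current optimistic-locking version.
    current = requests.get(base, headers=HEADERS)
    current.raise_for_status()
    record["_version"] = current.json()["_version"]

    resp = requests.put(base, json=record, headers=HEADERS)
    if resp.status_code == 409:  # assumed conflict status: the record changed again
        # e.g. a bound-with touched the same record twice in one batch; re-read and retry once
        record["_version"] = requests.get(base, headers=HEADERS).json()["_version"]
        resp = requests.put(base, json=record, headers=HEADERS)
    resp.raise_for_status()
```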

Relevant Jiras
MODINVSTOR-910 - original report of the problem; in the comments the devs say it's working as designed

MODINVSTOR-924 - new Jira asking for a new API that ignores/handles optimistic locking

What error response do we want?
  • Jeff: it would be nice if errors came back in a consistent format, not some in JSON and some as plain strings
  • Jeff: would like all the OK records to load and just have the errors reported, instead of the whole batch failing


Migration status/checkin
  • Duke has been upgrading to Lotus. IndexData is hosting the server, but Jeff’s scripts load the incremental updates. Jeff sends them files for the full bib migration, and he loads orders, loans, etc. ERM has been in prod for quite a while, as have course reserves. Full go-live is next summer.

  • If ERM moves to prod, we need to migrate the prod server in place once we go live with ERM. Users need to keep the same UUIDs on the prod system, so the user migration can’t wipe out their UUIDs.

How to keep UUIDs the same (Duke):
For settings, keep the settings (with their UUIDs) in GitLab; Jeff’s script loads the settings from there.
For any of the automated data, use type 5 UUID generation so the values come out the same every time (see the sketch below).
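For illustration, type 5 UUIDs are name-based (SHA-1 over a namespace plus a name), so regenerating a record always yields the same identifier. The namespace UUID and key format below are placeholders, not Duke's actual conventions.

```python
import uuid

# Any fixed namespace UUID works; this one is just a placeholder for the example.
DUKE_NAMESPACE = uuid.UUID("11111111-2222-3333-4444-555555555555")


def deterministic_id(kind: str, natural_key: str) -> uuid.UUID:
    """Type 5 (name-based) UUID: the same inputs always yield the same UUID,
    so regenerated records keep their identifiers across migration runs."""
    return uuid.uuid5(DUKE_NAMESPACE, f"{kind}:{natural_key}")


# Running this today or next year produces the identical UUID for the same user barcode.
print(deterministic_id("user", "0012345678"))
```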

Orders: Jeff is migrating open orders and only a limited set of closed orders, because performance issues appear if the number of orders to migrate gets too big. ~15K open orders.
Tod: mod-orders needed more memory (4G RAM) when doing FY rollover (does it have a memory leak?). Closed orders are also rolling over with zero encumbrances (they’re going to fix this). Dry runs were really important; we were able to fix a small number of errors manually.


WOLFcon sessions
  • Inventory data migration crash course part 1 (Theodor) - Wed Aug 31, 4pm CEST
  • FOLIO Data Migration Workshop (SIG) - Thurs Sept 1, 10:15am CEST
  • FOLIO Data Migration: Lessons Learned (panel) - Friday Sept 2, 1:30pm CEST
  • Inventory data migration crash course part 2 - Performing a data migration (Theodor) - Friday Sept 2, 3:30pm CEST

We discussed that we should cancel the Data Migration Workshop, for 2 reasons

  1. Already 3 other sessions, and in-person attendance at WOLFcon not as high as expected/hoped
  2. The 10:15am CEST slot is 4:15am Eastern, so the U.S.-based developers who would have answered questions will be unavailable, which makes this session hard! Rather than reschedule, we think we’ll cancel and answer general questions (as needed) in Theodor’s Inventory part 2 session (checking with Theo to make sure this is ok with him)

Action items

  •