
🗣 Discussion topics

Time | Item | Presenter | Notes

  1. Recap & background

  2. Where to source data from?

  3. Revisiting action items from 7/15/2024

    • Spreadsheet of reference data

    • Google Form for SIG responses to collect both sample record data and sample reference data types

    • Check about effort needed to get a fresh export from Chicago and documentation on the process (TO)

    • Regular meeting times

  4. Identify types of data used in each of the apps, as well as data at the tenant level - move the table to a spreadsheet

  5. Reference data by app

    1. Loan types, fund codes, call number types, etc.

  6. Record types by app

    1. Orders, order lines, users, instance/holdings/items, etc.

  7. App settings

    1. Data Import job profiles, Inventory export targets, etc.

  8. Tenant-wide data & settings

    1. Libraries, location codes, service points

    2. Consortial partners & relationships

    3. Permission sets

  9. Insert links to GitHub repositories (CW) - is this still relevant?

  10. Also solicit data samples from respective libraries - e.g.

    1. Order data as mentioned by Maccabee Levine (Lehigh)

    2. Bound-with (one item linking to multiple holdings) (GBV)

Documents:

Recap of where we are:

  • Plan is to set up a blank environment

    • Set up with a generic reference data set, hopefully using a copy of a university’s production environment

  • Ask SMEs and users to upload sample data that they need for testing

  • Take a snapshot and use it as a golden copy

  • Will need to ensure ongoing maintenance of this environment as features and apps are built out and require new sample data

Where to source sample data

  • Chicago’s data set uses a customized MARC mapping rather than the default; Chicago is also not using MARC authority data

  • We need a library using ERM, MARC authorities, and default MARC mapping

  • Robust anonymization will be required. Lee’s plan (a rough code sketch follows this list):

    • Replace PII with randomly generated data

    • Scramble loan history

    • Scramble orders, invoice amounts, fund codes

    • Replace vendor names with randomized names

    • Strip out staff notes containing initials, etc.

  • One set of data for the general environment, and perhaps a second sample set for the ECS environment

    • Get this from a consortium!
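
Lee’s plan above could eventually be captured as a small post-export script. The following is a minimal sketch only, assuming JSON records with hypothetical field names (personal, barcode, total, vendorName, staffNotes); the real FOLIO field names and the exact scope of scrambling would need to be confirmed with Lee:

```python
import json
import random
import string
import uuid


def random_name(length=8):
    """Return a random placeholder string to stand in for a real name."""
    return ''.join(random.choices(string.ascii_lowercase, k=length)).capitalize()


def anonymize_user(user):
    """Replace PII in a user record with randomly generated values (hypothetical fields)."""
    user = dict(user)
    personal = dict(user.get('personal', {}))
    personal['firstName'] = random_name()
    personal['lastName'] = random_name()
    personal['email'] = f"{uuid.uuid4().hex[:10]}@example.org"
    user['personal'] = personal
    user['barcode'] = ''.join(random.choices(string.digits, k=12))
    return user


def scramble_invoice(invoice, jitter=0.2):
    """Randomize invoice amounts within +/- jitter and replace the vendor name."""
    invoice = dict(invoice)
    if 'total' in invoice:
        invoice['total'] = round(invoice['total'] * random.uniform(1 - jitter, 1 + jitter), 2)
    invoice['vendorName'] = f"Vendor {random_name(6)}"
    return invoice


def strip_staff_notes(record):
    """Drop free-text staff notes that may contain initials or other identifying details."""
    record = dict(record)
    record.pop('staffNotes', None)
    return record


if __name__ == '__main__':
    sample_user = {
        'personal': {'firstName': 'Jane', 'lastName': 'Doe', 'email': 'jdoe@example.edu'},
        'barcode': '21239000123456',
    }
    print(json.dumps(strip_staff_notes(anonymize_user(sample_user)), indent=2))
```

Loan history and order records could be scrambled with the same pattern; the point is that the transformation runs on the exported data before it is loaded into the shared environment.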

Action items

  1. Spreadsheet has been populated with all modules and their related SIGs

    1. This is what we will use to compile and deliver the final dataset to devs

  2. Form has been drafted to solicit input from SIGs, SMEs, POs, etc.

    1. Yogesh and Lee will review and let Autumn know about corrections

  3. Use and anonymization of data sets

    1. Autumn and Tod will bring a write-up of Lee’s proposal to their administrations and check on feasibility and willingness to use the Chicago and MSU data sets

5 min | Future meeting times |  | Every Tuesday at 6pm CET
