Topic | Who | Meeting Notes | Related Jira | Decisions and Actions |
---|---|---|---|---|
Announcements: |
| No announcements |
|
|
Add support for tokens in field mapping profiles | All | From a discussion in Slack, we would like to have tokens added to mapping profiles. Outgrowth of previous discussion (see 3/13/2024 notes) to use tokens, like ###today###, in field mapping profiles. | Jira Legacy |
---|
server | System Jira |
---|
columnIds | issuekey,summary,issuetype,created,updated,duedate,assignee,reporter,priority,status,resolution |
---|
columns | key,summary,type,created,updated,due,assignee,reporter,priority,status,resolution |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | UXPROD-3000 |
---|
|
|
|
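For the token request above, a minimal sketch of how a date token such as ###today### might be expanded when a field mapping is applied. Only the ###today### token comes from these notes; the resolver, names, and token table below are illustrative assumptions, not the FOLIO mapping-profile engine.

```python
from datetime import date

# Hypothetical token table: only ###today### appears in the notes above; the
# resolver below is an illustration, not the FOLIO implementation.
TOKENS = {
    "###today###": lambda: date.today().isoformat(),
}

def resolve_tokens(mapping_value: str) -> str:
    """Expand any known tokens in a field mapping value at import time."""
    for token, resolver in TOKENS.items():
        if token in mapping_value:
            mapping_value = mapping_value.replace(token, resolver())
    return mapping_value

# e.g. a mapping profile default value of "Received ###today###"
print(resolve_tokens("Received ###today###"))  # -> "Received 2024-..."
```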
Enhancements to Job profile deletion | All | Story created for the Folijet team to look into enhancement options for deleting Job profiles without making linked sub-profiles un-deletable. Deletion of a Job profile is currently 'soft', i.e. the profile is not fully deleted from the back end. Proposal: change 'soft' deletion to 'hard' and remove the profile entirely, which would keep unused data from being stored and prevent empty connections from remaining. | Jira Legacy |
---|
server | System Jira |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODDATAIMP-1026 |
---|
|
| - Ryan will convert to a small feature
|
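For the deletion enhancement above, a minimal sketch (assumed data shapes, not the mod-data-import schema) contrasting the current 'soft' deletion of a Job profile with the proposed 'hard' deletion:

```python
# Illustrative only: 'profiles' is a dict of profile records keyed by id,
# 'associations' is a list of job-to-sub-profile links.

def soft_delete_job_profile(profiles: dict, profile_id: str) -> None:
    # Current behavior: the profile is only flagged, so its data stays in
    # storage and links to sub-profiles remain as empty connections.
    profiles[profile_id]["deleted"] = True

def hard_delete_job_profile(profiles: dict, associations: list, profile_id: str) -> None:
    # Proposed behavior: remove the profile and its associations entirely,
    # so no unused data is stored and no empty connections remain.
    profiles.pop(profile_id, None)
    associations[:] = [a for a in associations if a["jobProfileId"] != profile_id]
```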
Review issue reported by Christie: Quantity Electronic seems to always be zero (Currently planned for Poppy with CSP 3) | Ryan/all | In discussing this issue with the Folijet team, I've learned that the described behavior is a result of requirements to help avoid Item duplicates in support of the Multiples enhancements found in Poppy. Would the following logic/scenario make sense as a possible path forward? (A rough sketch of this branching follows the discussion notes below.) - If the Job profile contains 'Create' Action profiles for Orders, Instance, and/or Holdings, then the POL Quantity value should be controlled by the mapping.
- If the Job profile contains 'Create' Action profiles for Orders, Items, and/or Instance, and/or Holdings, then the POL Quantity value should be controlled by the number of Items created.
Discussion: - Based on previous conversation, the assumption is that the first bullet is the ideal logic.
- Questions: what scenario requires this complication? Why wouldn't the quantity in the order always match the quantity in the ingest file?
- Reasoning for the current situation is unclear.
- Cost and quantity in an ingest file are related. Using the number of items instead of the values in the incoming file breaks the logic and expectations of a user.
- Standing orders are a good example: one set (quantity = 1) for $X instead of $X per item
- Orders can be for a set or a part; practice varies by library & vendor
- Controlling the quantity by the mapping leaves these values up to the library; maximum flexibility
- A confounding variable could be that the quantity ordered must match the quantity by location.
- Recommendation to talk to Dennis (PO of Acquisitions) for clarity and further information.
- Question from Ryan: should the number of holdings records created have an impact on quantity?
- The quantity in the POL should always come from the mapping provided by a library.
- There is a situation where two items could be ordered, destined for different locations corresponding to multiple holdings records, but both locations aren't known at the time of order.
- Locations are often assigned as part of the cataloging process, not the acquisitions process.
- The order record is the source of truth.
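A rough sketch of the two scenarios proposed above; the function and parameter names are assumptions for illustration only, and the group's stated preference (quantity always from the mapping) is noted in a comment:

```python
def pol_quantity(creates_items: bool, mapped_quantity: int, items_created: int) -> int:
    """Sketch of the proposed branching for the POL Quantity value."""
    if creates_items:
        # Second scenario: the Job profile also creates Items, so the
        # quantity would follow the number of Items created.
        return items_created
    # First scenario (and the group's preference in discussion): the quantity
    # is controlled by the field mapping supplied by the library.
    return mapped_quantity
```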
Previous notes from 3/13: Expand |
---|
- Was this behaving differently in Orchid?
- Behavior seen in this bug is that if Create profiles for Holdings or Items are included in the Job profile, then the Quantity value is controlled by the number of Items created. So if a Holdings profile is included, but not Items, then the Quantity ends up as 0.
- Does it make sense that Quantity should be controlled by the Items created as part of the job, or should it always be controlled by the Order mapping?
|
Previous notes from 2/28: Expand |
---|
When electronic orders are created through data import, the electronic resource quantity is not being mapped from MARC or from a default value in the mapping profile, and the funds are not being encumbered. Tested in Poppy CSP1 in a local UChicago environment and in Poppy bugfest. See Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODDATAIMP-1010 |
---|
|
For Stanford, Acquisition method is not mapping for Purchase. They are also not getting quantity or encumbrances. Also, order type is not mapping for Purchase when it is provided as a default in the import profile for orders. Discussion notes: No one has used the order imports in Orchid. Feedback that the quantity should come from the profile/order and not the number of items. There are common scenarios where items are not going to be created. When processing orders, the quantity is always controlled by the order. |
| Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODDATAIMP-1010 |
---|
|
Earlier related work:
Jira Legacy |
---|
server | System Jira |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | UXPROD-2741 |
---|
|
Jira Legacy |
---|
server | System Jira |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODORDERS-876 |
---|
|
Jira Legacy |
---|
server | System Jira |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODORDERS-881 |
---|
|
| - Ryan to take feedback to dev team.
|
Review/Discuss: UXPROD-4303: Set instance/bib record for deletion | Ryan | Review and discuss feedback from initial testing of new 'Set record for deletion' action for Instances. See notes from DI Lab sessions here: Lref gdrive file |
---|
url | https://docs.google.com/spreadsheets/d/1xm26GuSPJZQXkt53DS4wtpUpTLIYMjL1Kz8UQ9yqEnA/edit?usp=sharing |
---|
| Discussion: - Ryan has reviewed the spreadsheet of feedback and has already shared some of it with the dev team.
- MODSOURCE-756: newly submitted ticket
- Folijet is reviewing the ticket and options for addressing it
- Hoping to address with a simple BE update
- In some instances, neither 'suppress' nor 'delete' covers all needs.
- Advocacy for a separate section under the 'Actions' button for the deletion actions.
- Lots of open questions concerning dependencies (outlined on the spreadsheet). Important for future phases of this project (not part of Phase 1).
- Note: ensure that item status is accounted for when reviewing dependencies; e.g. items on loan
- Should the default behavior be that the user is expected to delete holdings and items before the instance? i.e., dependencies aren't as important
- Use case could be that holdings and items are marked for deletion separately; the last step is to mark the instance for deletion and clean all of them out at that point
- Discussion on various internal procedures for withdrawing and deleting records
- Practices in place in FOLIO to address the current inability to delete records lead some libraries to prefer the idea of deleting instances, holdings, and items in one step.
- Practices can vary between libraries based on type of material: physical vs. electronic
- No consensus on when to use a 'hard' vs. 'soft' delete.
- Discussion on the ability and need to delete the SRS & Instance separately.
Notes from 3/13 meeting:
Expand |
---|
Set bib record for deletion is in place in snapshot. New 'Set record for deletion' action. A specific permission is needed to access this (separate from the Inventory "All" permissions). Clicking the action suppresses the record from discovery, staff suppresses the record, and sets LDR/05 to 'd'. A new SRS property "deleted" is set to true. Staff suppress will now default to No, so the record will not show up in search.
- Q: What happens to attached orders? A: Nothing right now, but Ryan will check on the impact on associated records.
- Q: Can you do this in batch with a delete file in Data Import? A: No. It is just an individual, manual action. This is step 1 of longer-term plans for full deletion options.
- Q: Are other steps going to be completed before this becomes part of a release? A: It will be included as is in Quesnelia. Holdings and items should not be affected as part of this release.
- Concern: having holdings and items available while the instance is set for deletion. Cannot imagine a situation in which you want to delete an instance and still have the holdings and item information available in a search. Also, if an item is checked out, on hold, or in course reserves, that should block deleting the bib. There is no interactive link with Inventory or circulation control.
- Follow-up Q: Would there be a script available to apply this to Instance records a library has already marked for deletion?
- Q: What is the push to release this in its partially developed state? A: Someone would use it to delete duplicate instances in the system; these usually do not have holdings or items.
- Q: What would happen if a deleted record was matched and updated as part of a data import? A: Believe these records are not discoverable via Data Import matching and updates, but will check on that.
- Q: How do holdings and items show up in search if the instance is not available in the search? A: The notetaker is unclear on the answer to this question.
- Q: Can you reverse this process or undelete things? A: Manually edit the instance to undo the suppression, but the MARC cannot be edited once it has been set for deletion (triggered by the deletion status). Question about whether we need to revisit this.
- This functionality should be robustly described in the documentation so users fully understand the implications of deleting something.
- Suggestion: the warning toast should say "delete" rather than just "suppressed from discovery and staff suppressed," because there are implications such as not being able to edit the SRS MARC.
- Suggestion to postpone until Ramsons. |
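A minimal sketch of the state changes described in these notes; the field names are illustrative assumptions, not the exact FOLIO Inventory/SRS schema:

```python
def set_instance_for_deletion(instance: dict, srs_record: dict) -> None:
    """Apply the 'Set record for deletion' changes described above."""
    instance["discoverySuppress"] = True    # suppress the record from discovery
    instance["staffSuppress"] = True        # staff suppress the record
    leader = srs_record["leader"]
    srs_record["leader"] = leader[:5] + "d" + leader[6:]  # LDR/05 = 'd' (deleted)
    srs_record["deleted"] = True            # new SRS property set to true
```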
| Jira Legacy |
---|
server | System Jira |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | UXPROD-4303 |
---|
|
Jira Legacy |
---|
server | System Jira |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODSOURCE-756 |
---|
|
|
|
MARC Modification Testing | Jennifer Eustis | Link to spreadsheet: Lref gdrive file |
---|
url | https://docs.google.com/spreadsheets/d/1OxiqsLjO7a19K1TQDCZXmbmWM-lWXgGkoz_3jL_C80g/edit#gid=361751186 |
---|
|
- Making the DI UI consistent with quickMARC and Bulk Edit. For example, blanks are denoted by '\' rather than a space.
- Significant challenges with using MARC modifications with updates. One idea shared in the lab was to start with a baseline approach of how and where we need MARC modifications, develop that, and then build on that foundation.
|
|
|
|
|
|
|
|
Notes from previous meetings... |
|
|
|
|
Feature/Bug Review: UXPROD-4704: Stop processing the job after it was canceled by user (FKA MODSOURMAN-970)
| Ryan/All | Previous notes from 2/21: Expand |
---|
MODSOURMAN-970 was transitioned to UXPROD-4704 after a review by development. The fixes needed to address the issue involve multiple modules, which is why it is now a new feature. The target release is Ramsons, and it will be included on the priority review spreadsheet Ryan hopes to send out by the end of this week. Question: how does this impact, or how is it impacted by, the new data slicing functionality? - Answer: unknown; Ryan to investigate
|
| Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | UXPROD-4704 |
---|
|
| - Ryan Taylor: investigate any impacts on or by data slicing
Answer 3/6: it will not have any effect on data slicing; slicing happens prior to processing, in mod-data-import. This feature will focus on stopping the processing of records that begins in mod-source-record-manager and continues downstream.
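A hedged sketch of the requested behavior (names are illustrative, not the mod-source-record-manager API): once the job status reports cancellation, later chunks are simply not processed.

```python
def process_job(chunks, get_job_status, process_chunk) -> None:
    # Slicing has already happened in mod-data-import; this loop stands in
    # for the record processing that begins in mod-source-record-manager.
    for chunk in chunks:
        if get_job_status() == "CANCELLED":
            break                 # stop instead of importing further records
        process_chunk(chunk)
```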
|
Bug Review: MODDATAIMP-897 - Adding MARC modifications to single record overlay doesn't respect field protections - Discuss expectations of using Modify actions
| Ryan/Jennifer/All | A number of issues were seen when doing updates. It seems there might be regressions along with functionality that doesn't work: On Create: - Adding a string to the beginning or end doesn't work
- Adding a field with multiple subfields works EXCEPT for the indicators, which aren't mapped
On Update: - Adding a string to the beginning or end doesn't work
- Adding a field with multiple subfields works EXCEPT for the indicators, which aren't mapped
- A modification at the end of the file didn't remove the field from the incoming record, and the field mapped for removal was still in the existing SRS record at the completion of the job
- Modifications at the end of a job don't seem to work
Logs: - On Update, encountered a number of errors such as
- 'incoming file may contain duplicates' when the file only had 1 record
- 2 rows in the log when there was only 1 record in the file, with the summary showing 1 update and 1 'no action' error
If all this work is being done, should we create Jiras for all the ways in which MARC modifications don't work? Should we create Jiras for modifications that worked in Orchid and no longer work in Poppy? It makes sense to create Jiras for things that were working that aren't working in Poppy. Let's hold off on deciding which MARC modifications should work until we get the developers' findings.
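To make the actions being tested concrete, a simplified sketch of 'remove a field' and 'add a string to the end of a subfield'. The plain dict-based record model and function names are assumptions for illustration, not the data-import-processing-core implementation.

```python
def remove_fields(fields: list, tags: set) -> list:
    # e.g. drop unwanted vendor fields such as 029 or 983
    return [f for f in fields if f["tag"] not in tags]

def append_to_subfield(field: dict, code: str, text: str) -> None:
    # 'Add a string to the end' of an existing subfield value
    field["subfields"][code] = field["subfields"].get(code, "") + text

record = [{"tag": "983", "subfields": {"a": "vendor note"}},
          {"tag": "856", "subfields": {"u": "https://example.org/resource"}}]
record = remove_fields(record, {"029", "983"})       # only the 856 remains
append_to_subfield(record[0], "z", " [electronic resource]")
```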
Previous notes from 2/22 and 2/14:
Expand |
---|
Discussion notes: Last week, the DI lab started a spreadsheet to track this functionality. The group is still working on this and expects to continue at the 2/22 meeting. RT: How are the protections working and how are they expected to work? Example from Jennifer Eustis: Use case: Export, Transform, Load. The import profile includes a MARC modification to delete fields and matches on 999 ff $i. Ideal to have a MARC modification to remove unwanted MARC fields: 029, 983, etc. Then a match on instance and an update of the instance. A MARC modification at the end of the profile results in the MARC modification being applied. See screenshot of profile.
Comments that MARC modifications were implemented with an expectation that MARC modifications should be at the beginning of the job and should act on the incoming record. That is true, but past conversations in the data import subgroup drew out two use cases: 1) to modify an incoming record before any actions are taken, and 2) to modify the final SRS record after all of the actions are taken (delete 9xx data after it is used to update the holdings and item, for example). General experience right now is that MARC modifications are working as expected with creates, but are not working, or are working with corruption (such as the deletion of protected fields), on updates. RT: Is part of the problem how we are approaching updates vs modifications? Updates are designed to work with FOLIO records and modifications are designed to work on incoming records. Should updates have the same potential actions as MARC modifications, applying the logic to the updated record? Right now the dependencies between SRS and instance and the explicit nature of the updates on instance vs MARC are problematic. It is difficult to understand what is happening with updates. The process is to put them anywhere to see where they work. Whether we are updating SRS, instance, or both, we should be able to do the same thing. RT: You will see different behavior from MARC modifications depending on their placement in the profile. Need to do a deeper dive into how the behavior changes depending on placement. This would be a good candidate for the functionality / documentation audit. If development dives into this and the DI lab group dives into this, we could then come together to identify the best way forward |
| Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODDATAIMP-897 |
---|
|
Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | UXPROD-4709 |
---|
|
| This would be a good candidate for the functionality / documentation audit. If development dives into this and the DI lab group dives into this, we could then come together to identify the best way forward - Development Review
- DI Lab Group Review
|
Missing Action Profiles in Job Profile after Poppy migration: As called out in the Poppy Release Notes, there is a known issue in which some links to reusable Action Profiles might be missing from Job Profiles after Poppy migration.
Release notes recommend the following: - After migration, review existing Job Profiles to verify they migrated correctly. Pay attention to reusable Action Profiles. If issues are found, the Job Profile can be updated manually. For additional information on the links created for that Job Profile, execute script #15 (or follow the link) and notify support.
The recommended script will provide a list of Action profiles to help users manually recreate any affected Job profiles.
| All | MODDICONV-365 is part of CSP #2. Previous notes from 2/7: Expand |
---|
There are 2 issues: the experience of unlinking post-migration and the experience of unlinking during migration. MODDICONV-361 is a P1 with the hope that it will be released in CSP #1. MODDICONV-365 is being investigated. It looks like FOLIO system job profiles are being affected in terms of actions being unlinked. 5C saw that the default ISRI overlay wasn't working correctly. When we checked the default system job profile, there were no action profiles. Ryan confirmed this issue only affects Action profiles. It is difficult to know how common this is. For 361, the behavior seems consistent. But for 365, this seems to be less common, and different tenants have the issue occur on different jobs. This is the 3rd or 4th time that the issue in MODDICONV-361 has appeared during a flower release. The unlinking/linking issues date back several releases. To gather more information, it is worth keeping the corrupted jobs and creating replacements. A job with no action profiles, or an empty job, can be run. There are no error messages when such a job is run. This is something we shouldn't be able to do. Perhaps a warning or an error message is needed. |
Previous notes from 1/31:
Expand |
---|
Overview: Action profiles connected to multiple job profiles are 'unlinked' from job profiles after migration to Poppy. - It isn't happening for every library after migration or for every re-used profile.
- However, it is happening often enough that libraries should be aware and check.
- General confusion on how or if the migration to Poppy is causing this issue. The root cause is not migration itself, but the migration process does cause the profiles to unlink.
- Script #15 (noted in the topic column) provides a list of profiles that need to be fixed. It does not fix the links. That must be done manually by the library.
- The action profiles actually disappear from the job profile, not just 'unlink'. They must be re-added.
Comments: - Lots of work for libraries to recreate job profiles manually.
- Should be a CSP candidate. High priority for correction.
- Could be a blocker for some libraries to migrate to Poppy.
- The "unlinking from one unlinks them all" issue has popped up multiple times.
Sidebar discussion in chat on how job profiles are deleted spurred #42 in the Data Import Issue Tracker. Until MODDICONV-361 is fixed, any time a re-used action profile is unlinked in a job profile, it will be unlinked in all other job profiles. Fixing it after migration doesn't stop it from happening again should a re-used action profile be unlinked. The development team will be adding new test cases to their workflow to test this type of scenario (re-used profiles) going forward. |
| Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODDICONV-361 |
---|
| Issue specific to unlinking of Action profiles when used by multiple Job profiles after Poppy migration. Ticket now closed and included within Poppy CSP #1.
Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODDICONV-365 |
---|
|
Issue specific to unlinking of Action profiles during migration as reported by Cornell. Plans to address it have been identified, and the ticket is currently In Code Review.
|
|
Partial Matching: | Subject raised by Yael Hod | Not discussed at the 2/21 meeting. Previous notes from 1/31: Expand |
---|
Partial matching (e.g. 'begins with', 'ends with') is required, but it does not function as it should regardless of how it is configured. - The system behaves as though it only looks for exact matches.
- Examples of use include prefixes/suffixes to an 035 added by a vendor or library to designate the source of the record.
- University of Chicago has had the same issue. Corrie submitted MODDICORE-386 on their behalf.
- Question as to whether this is a bug or how the system is intended to function. Documentation is needed.
- #12 on the Data Import Issue Tracker.
|
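For reference, a minimal sketch of the comparison modes users expect from match profiles versus the exact-match behavior reported above. The mode names and function are assumptions for illustration, not mod-di-core code.

```python
def value_matches(existing: str, criterion: str, mode: str) -> bool:
    """Compare an existing field value against a match criterion."""
    if mode == "EXACTLY_MATCHES":
        return existing == criterion
    if mode == "BEGINS_WITH":
        return existing.startswith(criterion)
    if mode == "ENDS_WITH":
        return existing.endswith(criterion)
    if mode == "CONTAINS":
        return criterion in existing
    raise ValueError(f"unknown mode: {mode}")

# e.g. matching an 035 to which a vendor added a source prefix
value_matches("(OCoLC)123456789", "(OCoLC)", "BEGINS_WITH")      # True
value_matches("(OCoLC)123456789", "(OCoLC)", "EXACTLY_MATCHES")  # False (the behavior users report)
```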
| Jira Legacy |
---|
server | System JIRA |
---|
serverId | 01505d01-b853-3c2e-90f1-ee9b165564fc |
---|
key | MODDICORE-386 |
---|
|
| Ryan will: - Review the Jira with Folijet leads to understand the current design and identify requirement gaps.
|
Documentation: The group has identified a need for new, enhanced, or reorganized documentation around Data Import. - In a previous session, we agreed that completing a functionality audit spreadsheet would be a good first step
| All | Not discussed at the 2/21 meeting. Previous notes from 1/24 meeting: Expand |
---|
In the lab session on 1/18/2024, we created a wiki page, Data Import Implementers Topic Tracker, with guidelines on how to contribute and a spreadsheet to track issues. This is based on the work done in the Acquisitions SIG. An archive area was also created where we could archive outdated pages such as the Archived Data Import Implementers and Feature Discussion Topics. The idea was to record issues whether or not they were linked to a Jira issue. Some of the important information we wanted to track was whether there was a linked Jira and, in particular, when the issue was discussed in the working group and the decision(s) made regarding that issue. The spreadsheet is still being developed. Before we add more issues, the group in lab wanted to know: - Do we adopt this page and spreadsheet? If yes, do we have volunteers to populate it?
- To make sure this page is maintained, the group suggested that the working group look at it once a month to see what is outstanding or new. Is this a practice we want to adopt?
Discussion: A link to the new Data Import topic tracker is at the top of the page. The format was worked on at last week's data import session. Q: Is this only to track Jira tickets, or will there be other topics added to the agenda? R: In Acq/RM, individuals add stories to the topic tracker, and the Jira may only be added to the spreadsheet later. (Many think this is a good idea.) Can reference the Acq/Resource Management implementers topic tracker. Perhaps add widgets that bring in Jiras automatically based on the tag. Q: How to add "Click here and expand" text? R: Put the cursor where you want the text block to begin and use the Insert Macro function. Type "Expand" to locate the Expand macro. Agreed to use the de-duplication discussion to work on building a useful functionality framework. |
| N/A | - Get volunteers to create a spreadsheet and start brainstorming - DONE
|
De-duplication: Continue conversation from previous session to clarify what we expect from de-duplication of field values when a record is loaded into FOLIO via Data Import. | All | Ryan has discussed this with the team. He will get this in writing and share it when done. Christie did some work as well in Poppy bugfest. Not discussed at the 2/21 meeting. Previous notes from 1/24 meeting: Expand |
---|
Jennifer Eustis and Aaron Neslin found comments in the data-import-processing-core code that provide details about expected behavior for de-duplication. These comments align with the behavior we are seeing, except for when there is duplicate data in the incoming record. Data is being removed from the incoming record on update as well. Consensus seems to be that FOLIO should not be de-duplicating within the incoming record unless it is explicitly defined in an import profile. Q: Is de-duplication something that should be able to be deactivated on a field-by-field basis? R: Sounds like a reasonable approach. There is also some concern that this would complicate an already complicated situation. Possible solution: deduplicate in another tool rather than within Data Import. Suggestion to start with the functionality audit. RT can connect with the developers as a part of this audit. Q: Are we starting with how we as users expect functionality to work, or with how the developers expect it to work? R: Really should have both for each feature. Start from the perceived / desired functionality of the users and add to it with the designed functionality. Suggestion to provide examples to the developers so that it is clear what we are expecting. Pilot the functionality audit with de-duplication, start with our understanding, and then get input from the developers. |
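A minimal sketch of the de-duplication behavior described in MODDATAIMP-879, where identical repeated fields (e.g. several 856s) in the incoming record are collapsed to one. The simplified field model and function name are assumptions, not the actual SRS representation; per the consensus above, this should not happen within the incoming record unless a profile explicitly asks for it.

```python
def dedupe_fields(fields: list) -> list:
    """Drop exact repeats of a field (same tag, indicators, and subfields)."""
    seen, kept = set(), []
    for f in fields:
        key = (f["tag"], f.get("ind1", " "), f.get("ind2", " "),
               tuple(sorted(f["subfields"].items())))
        if key not in seen:          # identical repeats are dropped
            seen.add(key)
            kept.append(f)
    return kept

# e.g. two identical 856s collapse to one, which is the reported behavior
fields = [{"tag": "856", "ind1": "4", "ind2": "0", "subfields": {"u": "https://example.org/x"}},
          {"tag": "856", "ind1": "4", "ind2": "0", "subfields": {"u": "https://example.org/x"}}]
print(len(dedupe_fields(fields)))    # 1
```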
| MODDATAIMP-879: Data Import removes duplicate 856s in SRS | - RYAN: Clarify current behavior of field value de-duplication.
- Define desired behavior of field value de-duplication (if different).
- Christie Thomas will create some dummy data to illustrate deduping 856s.
|