...

Notes (Currently in the process of editing the transcription file. --Bob)
Housekeeping
  • Magda: 7:57: As always, I would like to ask those present in the meeting to add their names to the attendees list. I see there are approximately 17 people in the meeting, and we don't have 17 names under the attendees, so please do if you have a chance. Bob, I hope you will continue to be our brave note-taker. Is this so?
  • Bob: Yes.
  • Magda: Okay, thank you.
  • Magda: 8:50: There will be no meeting on July 26th, which is our next meeting. I will be traveling and I may not have access to a decent internet connection on that day, so I thought it would be wiser to cancel the meeting. If we need to discuss anything we will probably use Slack for this. Is there any other housekeeping information from anyone else? Erin has indicated no, so let's move to the next item on our agenda, the development status.

Development updates

  • Magda: 9:42: This is the current sprint. We are still finishing addressing bugs found in Bulk Edit, some of which you found in the UAT tests. So hopefully we will be able to address most of them before BugFest starts. If not, I will keep you posted about new deployments. The one thing that I would like to show you is that we have three stories in our backlog that we'll be working on in this sprint that will run performance tests on larger data sets. I bring it up because this is the feedback I got in the user acceptance testing: there is concern about the performance when using larger data sets, and I am concerned as well. So for the item statuses, for example, we will be testing changing the status from Available to Missing and Withdrawn. This is just a subset for testing purposes that will be run on 100 records, and then the number will ramp up to 1,000, 10,000, and 100,000 records. The bulk edit will be triggered by submitting files with item barcodes, item UUIDs, holdings UUIDs, item HRIDs, item former IDs, and accession numbers (see the sketch after this list for one way such test files might be generated). There were comments about problems with some files in UAT, so hopefully we will be able to catch those issues as well.
  • Magda: 12:12: Any comments or questions about that?

  • Erin: 12:22: So this is specifically for item records?
  • Magda: Yes. For item records. We already completed Bulk Edit User (MODBULKED-21).
  • Erin: Yeah. That makes sense to me.
  • Magda: Development status scrum board, I think we covered that.
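As a side note on the performance stories above, here is a minimal sketch of how identifier files of increasing size could be generated for such runs, assuming a plain one-identifier-per-line file. The file names, sizes, and barcode shapes are illustrative only and not taken from the actual test plan.

    # Hypothetical helper (not part of the Bulk Edit code base): generate
    # identifier files of increasing size for performance runs.
    import csv
    import random

    def write_barcode_file(path: str, count: int) -> None:
        """Write `count` pseudo-random item barcodes, one per line."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            for _ in range(count):
                writer.writerow([str(random.randint(10**11, 10**12 - 1))])

    # Ramp the data set size up as described: 100, 1,000, 10,000, 100,000 records.
    for size in (100, 1_000, 10_000, 100_000):
        write_barcode_file(f"item-barcodes-{size}.csv", size)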

Nolana scope update

Features planned for Nolana:

  • UXPROD-3705 - Bulk Edit - User data - in app approach
  • UXPROD-3712 - Bulk Edit - in app approach - loan types
  • UXPROD-3704 - Bulk Edit - in app approach - holdings locations
  • Possibly also:  UXPROD-3713 - Bulk Edit - in app approach - item notes

Other features planned but delayed:

Bulk edit will need to address existing dependencies and that will require additional work.

The feature that was deprioritized by MM SIG:

  • UXPROD-3707 - Bulk edit - inventory items - csv approach
  • Magda: 13:29: Nolana scope updates: when we met last time I provided a slightly different list. So, what is left from what I was showing is Bulk Edit user data, in-app approach. We will be trying to add an in-app approach for users, so they are in the same pot as item records.
  • Magda: 14:07: For Bulk Edit items, we will add support for loan types. We will also support holdings locations, similarly to the item location support we implemented in Morning Glory. The other features that are planned but delayed are bulk delete of inventory item records and bulk delete of user records.
  • Magda: 14:40: After meeting with developers, they are extremely uncomfortable with the fact that not all the dependencies are being handled. First, in inventory, all of them are only handled on the UI, so we need to recreate them. We also are not confident that all the dependencies are actually identified. You can live with this if you are only deleting one or two records manually, but once we start deleting hundreds of records, this issue will be more visible and prominent. So, the development team proposes a separate approach: not soft delete, where the record is marked as deleted but still retained in the database, and not hard delete, where the record is removed. The option proposed, which is now on the table, is called hybrid delete: the record is removed from the main table (users or items) but stays in a temporary table for a well-defined period (a rough sketch of the idea follows this list). Adding this temporary table and maintaining it adds to the scope of deletion. We will have a better understanding of how much more work it will entail probably by the end of this week, once I meet with the development team and architects.
  • Erin: Is there a wiki page?
  • Magda: 16:58: It's not a wiki page just yet; there is a Google Doc. I will put it into the chat when I find it. I probably will not find it right now, but I will find it and post it on our Slack channel.
  • Erin: 17:13: I guess I'm not sure why creating a brand new table structure is better than doing a soft delete?
  • Erin: 17:35: I understand the reluctance about these features in general, like that makes total sense to me.
  • Magda: 17:43: I think inventory is especially difficult. Because all the dependencies right now for manually deleting one item are in the UI. So first of all, we have to implement those on the backend. And then not all of them are implemented. So that is something that we would need to investigate as well.
  • Erin: 18:16: Why is the assumption that Firebird would need to do that work versus Prokopovych? Is it just that Prokopovych doesn't have the capacity, so Firebird needs to do it?
  • Magda: I think no one has the capacity.
  • Erin: 18:31: Well sure. But I just ...
  • Magda: 18:37: I see Jenn's comment. It is the client's responsibility. When you say client, Jenn, do you mean the software?
  • Jenn: Yeah, the client. Yeah.
  • Magda: 18:50: So that is the system that is subscribing to the data. I disagree with this approach, to be honest. I fully disagree, or partially disagree, actually. But we can talk about this maybe during our next meeting; I would like to move on.
  • Magda: 19:12: The last sentence about deleting item records: there is a parallel initiative related to marking inventory instances for deletion, and depending on how it is implemented, we will need to adjust accordingly on our side as well. So more on deletion later; I will put the document in the Slack chat after the meeting. This is still a work in progress. More about this probably during our next meeting.
  • Magda: 19:54: The other thing that was on our list when we talked last time was the CSV approach for items. However, MM SIG deprioritized it. They felt that they would rather invest time in the in-app approach because this seems to be the more desired behavior for MM SIG.
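A rough sketch of the hybrid delete idea described above, using an in-memory SQLite database. The table names and schema are hypothetical, not the actual FOLIO storage design; the point is only that the record leaves the main table but stays recoverable in a retention table until it is purged.

    # Illustrative only -- "hybrid delete": remove the row from the main table
    # but keep a copy in a retention table, all in one transaction.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE items (id TEXT PRIMARY KEY, barcode TEXT, status TEXT);
        CREATE TABLE items_deleted (
            id TEXT PRIMARY KEY, barcode TEXT, status TEXT,
            deleted_at TEXT DEFAULT CURRENT_TIMESTAMP);
        INSERT INTO items VALUES ('item-1', '000111222', 'Available');
    """)

    def hybrid_delete(item_id: str) -> None:
        # Readers of `items` no longer see the record, but it remains in
        # `items_deleted` until a retention job purges it.
        with conn:
            conn.execute(
                "INSERT INTO items_deleted (id, barcode, status) "
                "SELECT id, barcode, status FROM items WHERE id = ?", (item_id,))
            conn.execute("DELETE FROM items WHERE id = ?", (item_id,))

    hybrid_delete("item-1")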

  • Magda: 20:53: So I added a comment to the feature if anyone is interested; I know some of you were interested in this, I think Sara. We are not going to do it in Nolana. We will probably do it later, in a few releases. But at this point it is not planned.
  • Magda: 21:30: There is one other feature that I hope we'll be able to squeeze into Nolana as well, which is item notes. It was mentioned at one of our meetings that when the location and the loan type change, you would like to have the option of changing item notes too. I will get a little bit more feedback about the requirements at the MM SIG meeting on Thursday, and we'll see what we can squeeze in.

  • Magda: Bob, go ahead, what was your question?

  • Bob: 22:13: I just was wondering what Jenn meant by it's the client's responsibility?

  • Jenn: 22:21: Sorry, I just meant...I'll put in the issue that we filed. We accidentally deleted some items via the API that had loans. When we complained that the API hadn't returned any errors, we were told that it was because the API is meant to let you do whatever you want, and it's your responsibility, when you write your script or your bulk edit program, to check the dependencies.
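A hedged sketch of the kind of client-side check Jenn describes, assuming typical Okapi conventions. The endpoint paths, query syntax, and headers are assumptions and should be verified against your FOLIO release; this is not taken from the issue Jenn filed.

    # Hypothetical client script: refuse to delete an item that still has open loans.
    import requests

    OKAPI_URL = "https://folio-okapi.example.org"  # placeholder
    HEADERS = {"X-Okapi-Tenant": "diku", "X-Okapi-Token": "<token>"}

    def delete_item_if_no_open_loans(item_id: str) -> bool:
        loans = requests.get(
            f"{OKAPI_URL}/circulation/loans",
            params={"query": f'itemId=="{item_id}" and status.name=="Open"'},
            headers=HEADERS,
        ).json()
        if loans.get("totalRecords", 0) > 0:
            print(f"Skipping {item_id}: it still has open loans")
            return False
        requests.delete(f"{OKAPI_URL}/item-storage/items/{item_id}", headers=HEADERS)
        return True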

  • Magda: 22:44: I definitely disagree with this approach. The business logic should not be in the UI, the business logic should not be in the client's scripts, and the business logic should be handled consistently through FOLIO. So it should not matter if you're using the UI or you're using API, the behavior should be the same.
  • Bob: 23:12: Thanks, that clarifies that.
  • Magda: 23:15: So what I just said is my opinion. And this is how we will try to implement it in Bulk Edit. I'm not saying that everybody in FOLIO agrees with this approach. Sara, go ahead.
  • Sara: 23:34: So this is more back to the deprioritization by MM SIG. I just want to be sure I've understood correctly: it's not that it's off the table, it has just been pushed off without any kind of definite release in mind. So it is kind of an indefinite deferral of bulk edit of inventory items via the CSV approach, is that correct?
  • Magda: Yes. It is in the backlog. It's just not prioritized at this point. So definitely not for Nolana. Depending on the interest we will get from this group...and you're pushing for this functionality?
  • Sara: 24:37: Yeah, I think it could be especially helpful if you have to do a large number of complex changes, like multiple changes at once, for example loan type and temporary status (see the sketch after this list). To me, it just seems like a much better way to handle this via the CSV file. So I do still advocate for this option in the longer term. But I totally understand that the in-app approach should come first, so I'm fine with that. I just don't want this to be one of those things that stays in the backlog for years.
  • Magda: 26:13: So I agree with you, Sara, on this, because I saw similar things happening in Data Export. We had stories that were pushed later, and we never had a chance to get back to them. Hopefully, we will have a chance to get to them, but they aren't prioritized. I also agree with you that CSV is a powerful tool and can help in special cases to simplify the updates for larger data sets or some specific data set. But this is not a tool for everyone: you have to be comfortable with the data set and be aware of the impact of the changes you are making. So more people will definitely be more comfortable with the in-app approach, so let's do the in-app approach. We will not be able to support all fields in the in-app approach; we will do the most commonly requested fields, and then the CSV approach will be the way to handle the cases when you would like to work with other fields. But this is something on our roadmap, not for Nolana, and most likely not Orchid. Okay, what I would like us to do, which is on my to-do list: once we do some work on inventory, we will start working on some records from circulation and from acquisitions, so we can then start building cross-app queries. But this is something that will start being discussed in the Orchid release or later.
  • Magda: 28:18: Any question about the Nolana plans?
  • Magda: 28:27: Then I will move to the feedback that we have received from UAT.
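To make Sara's point about complex, multi-field changes concrete, here is a purely illustrative example of building such a CSV change set. The column names and values are hypothetical and do not reflect the actual Bulk Edit CSV format.

    # Hypothetical multi-field change set: each row updates the loan type and
    # status of one item at the same time. Column names are illustrative only.
    import csv

    rows = [
        {"barcode": "31924000111222", "loan type": "Course reserve", "status": "Available"},
        {"barcode": "31924000111223", "loan type": "Course reserve", "status": "Available"},
    ]

    with open("item-changes.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["barcode", "loan type", "status"])
        writer.writeheader()
        writer.writerows(rows)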
UAT feedback review

Feedback details: https://docs.google.com/document/d/1bH0ZMG2RZte_5Anl8t2-ApcDEUX1AzxzYXn0uUVBq60/edit?usp=sharing


...