Bulk Edit Rollback

Proposed solutions

Solution 1

One possible approach is to roll back the job with the BULK_EDIT_IDENTIFIERS export type. In other words, when the user uploads a file and starts bulk edit, the rollback service (if invoked) returns the file to its initial state, regardless of how many edits the user has performed after the initial upload. The main idea of this approach is to store the ID of the job launched after uploading the initial file (the job with the BULK_EDIT_IDENTIFIERS export type). Since the rollback endpoint has already been implemented ( /bulk-edit/{jobId}/roll-back ), the job ID alone is enough to revert.
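As a minimal illustration of the idea, the sketch below builds the path of the existing rollback endpoint from a stored job ID. The endpoint path is taken from this document; the class and method names are hypothetical.

```java
import java.util.UUID;

// Sketch: resolving the rollback URL from a stored BULK_EDIT_IDENTIFIERS job ID.
// The path template comes from the already-implemented endpoint described above.
public class RollbackUrlResolver {

    // Builds the request path for rolling back the given job.
    public static String rollbackPath(UUID jobId) {
        return "/bulk-edit/" + jobId + "/roll-back";
    }
}
```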

Standard flow for this approach:

Possible risks and mitigations

A situation is possible where one user uploads a file, then another user uploads a different file, and then the first user clicks Roll back. As a result, the last job with the BULK_EDIT_IDENTIFIERS export type will not be the job that needs to be rolled back, because it belongs to the second user. To handle this case, a map is needed that stores each user ID together with that user's last job ID with the BULK_EDIT_IDENTIFIERS export type. The job ID to roll back can then be retrieved from the map by user ID, which can be obtained from the FolioExecutionContext. Implementing this approach includes filling the map after a job has been created.
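The per-user mapping described above can be sketched as follows. Class and method names are illustrative assumptions; in the module, the user ID would come from the FolioExecutionContext and the map would be filled right after job creation.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the proposed in-memory mapping:
// user ID -> last BULK_EDIT_IDENTIFIERS job ID for that user.
public class IdentifiersJobRegistry {

    private final Map<UUID, UUID> lastIdentifiersJobByUser = new ConcurrentHashMap<>();

    // Called after a BULK_EDIT_IDENTIFIERS job is created; overwrites any previous entry.
    public void register(UUID userId, UUID jobId) {
        lastIdentifiersJobByUser.put(userId, jobId);
    }

    // Resolves the job to roll back for the given user, or null if none was registered.
    public UUID jobToRollBack(UUID userId) {
        return lastIdentifiersJobByUser.get(userId);
    }
}
```

With this registry, two users uploading files concurrently no longer interfere: each lookup is keyed by the caller's own user ID.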

The mitigation above has a drawback: if the module is restarted for some reason, the map is cleared and the user will not be able to revert changes. To avoid that, the ID of the job with the BULK_EDIT_IDENTIFIERS export type (the job created right after uploading the initial file) can be kept on the front-end side. This way there is no need to track the ID of the currently logged-in user, since on the front end there is always exactly one user. Whenever the user uploads an initial file, the job ID can be stored locally on the front end and retrieved when Roll back is clicked. If the user uploads two or more files, the current job ID is always the last one (i.e. the job ID is updated every time). If the user uploads a file, performs bulk edit, then uploads another file, performs bulk edit, and clicks Roll back, the data to roll back to will be in the second uploaded file.

Since the job ID to roll back is located on the client side, there is a risk of losing it if the user, for example, refreshes the page or closes the browser, or if the session is interrupted in some other way (depending on how this logic is implemented on the front end). One possible mitigation is to check whether a valid job ID with the BULK_EDIT_IDENTIFIERS export type exists and, if not, disable the Roll back option.
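The availability check could look like the sketch below. The existence predicate stands in for a real job-storage lookup; names are assumptions for illustration.

```java
import java.util.Optional;
import java.util.UUID;
import java.util.function.Predicate;

// Sketch of the mitigation: enable the Roll back option only if a
// BULK_EDIT_IDENTIFIERS job ID is present and still resolves to an existing job.
public class RollbackAvailability {

    // jobExists is a placeholder for a lookup against the module's job storage.
    public static boolean rollbackEnabled(Optional<UUID> storedJobId, Predicate<UUID> jobExists) {
        return storedJobId.isPresent() && jobExists.test(storedJobId.get());
    }
}
```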

To avoid losing the connection between job ID and user ID completely (when saved only into an in-memory map, it is lost if the module is restarted), the job ID and user ID can be saved into MinIO. If the user uploads a new file, the old job ID with the BULK_EDIT_IDENTIFIERS type is replaced by the new one.
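A sketch of this persistence, assuming a one-object-per-user layout. ObjectStore is a deliberately minimal stand-in for a MinIO-backed client; the object-key scheme and all names here are assumptions, not the module's actual storage schema.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a MinIO-backed object store; a real implementation would
// delegate these two calls to the MinIO client.
interface ObjectStore {
    void put(String key, String value);   // replaces the object if it already exists
    String get(String key);               // null when the object is absent
}

// In-memory implementation used here only to make the sketch self-contained.
class InMemoryStore implements ObjectStore {
    private final Map<String, String> objects = new HashMap<>();
    public void put(String key, String value) { objects.put(key, value); }
    public String get(String key) { return objects.get(key); }
}

// Persists user ID -> BULK_EDIT_IDENTIFIERS job ID so it survives module restarts.
public class PersistentJobRegistry {
    private final ObjectStore store;

    public PersistentJobRegistry(ObjectStore store) {
        this.store = store;
    }

    // A new upload overwrites the previously saved identifiers job ID.
    public void saveIdentifiersJob(String userId, String jobId) {
        store.put("rollback/" + userId + "/identifiers-job", jobId);
    }

    public String identifiersJob(String userId) {
        return store.get("rollback/" + userId + "/identifiers-job");
    }
}
```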

One more risk arises when a user uploads and edits a large file and clicks Roll back while the edit is still in progress. In this case, the recently created job with the BULK_EDIT_UPDATE export type needs to be stopped. However, no job ID with the BULK_EDIT_UPDATE type is saved; only the job ID with BULK_EDIT_IDENTIFIERS is. To address this issue, another map can be created where the user ID is mapped to the currently executing job ID with the BULK_EDIT_UPDATE type. The job ID to stop is then retrieved from this map by user ID. Besides a map on the back-end side, this job ID can also be saved on the front end or in MinIO.
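The second map can be sketched in the same way as the first. Names are illustrative; a real implementation would also trigger the actual stop of the running batch job once its ID is resolved.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the second proposed map: user ID -> currently running
// BULK_EDIT_UPDATE job, so an in-progress edit can be stopped on Roll back.
public class RunningUpdateJobs {

    private final Map<UUID, UUID> runningByUser = new ConcurrentHashMap<>();

    // Recorded when a BULK_EDIT_UPDATE job starts for the user.
    public void started(UUID userId, UUID updateJobId) {
        runningByUser.put(userId, updateJobId);
    }

    // Cleared when the update job completes normally.
    public void finished(UUID userId) {
        runningByUser.remove(userId);
    }

    // Returns the job that must be stopped before rolling back, or null if none runs.
    public UUID jobToStop(UUID userId) {
        return runningByUser.get(userId);
    }
}
```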

Solution 2

This is an extension of Solution 1 and assumes that the rollback function returns the previous state, which is not necessarily the initial state if the user performed at least two edits consecutively. For example, the user uploads a file and performs a bulk edit. If the user clicks Roll back after that, the data is reverted to the initial state, and there is no difference from Solution 1 in this case. However, if the user does not click Roll back but performs a second edit and only then clicks Roll back, the data is rolled back to the state after the first edit (the state of the job with the BULK_EDIT_UPDATE export type), not to the initial state (in Solution 1, the data would be reverted to the initial state, i.e. to the job with the BULK_EDIT_IDENTIFIERS export type).

Standard flow for this approach:

Possible risks and mitigations

This solution is somewhat more complicated because references to updated files need to be stored in the jobs with the BULK_EDIT_UPDATE export type (the job with the BULK_EDIT_IDENTIFIERS export type stores a reference only to the initial file). Following this logic, the map described in Solution 1 should be updated with a new job ID every time the user performs an edit (the first job ID will have the export type BULK_EDIT_IDENTIFIERS, and all subsequent jobs the type BULK_EDIT_UPDATE).
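Because each user now accumulates a sequence of jobs rather than a single one, the map naturally becomes a per-user history, with Roll back targeting the job that produced the previous state. A sketch under these assumptions (all names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of Solution 2 bookkeeping: the initial BULK_EDIT_IDENTIFIERS job and
// every subsequent BULK_EDIT_UPDATE job are appended to a per-user history.
public class EditHistory {

    private final Map<String, Deque<String>> historyByUser = new HashMap<>();

    // Called whenever a job completes (identifiers job first, then each update job).
    public void jobCompleted(String userId, String jobId) {
        historyByUser.computeIfAbsent(userId, k -> new ArrayDeque<>()).addLast(jobId);
    }

    // Removes the latest job and returns the job holding the previous state,
    // or null when only the initial job (or nothing) remains.
    public String rollBack(String userId) {
        Deque<String> history = historyByUser.get(userId);
        if (history == null || history.size() < 2) {
            return null; // nothing to roll back to
        }
        history.removeLast();
        return history.peekLast();
    }
}
```

After one edit, rolling back targets the identifiers job (same result as Solution 1); after two edits, it targets the first update job, matching the behavior described above.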

With multiple edits, every updated file is stored in S3, which wastes resources, especially if the edited files are large. One possible mitigation is to clean up the storage and remove large files that will not be used for rolling back. For instance, if the user uploads an initial file and performs three edits, the file from the first job with the BULK_EDIT_UPDATE export type can be safely removed.
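The clean-up rule from the example above can be expressed as: among the BULK_EDIT_UPDATE jobs, keep the last two (the current state and the rollback target) and remove the files of all older ones. The "last two" cut-off mirrors the document's three-edit example and is an assumption, as is every name below.

```java
import java.util.List;

// Sketch of the storage clean-up rule for Solution 2.
public class StorageCleanup {

    // Given the user's BULK_EDIT_UPDATE job IDs ordered oldest-first,
    // returns the IDs whose stored files are safe to remove: everything
    // except the two most recent jobs.
    public static List<String> removableFiles(List<String> updateJobsOldestFirst) {
        if (updateJobsOldestFirst.size() <= 2) {
            return List.of();
        }
        return updateJobsOldestFirst.subList(0, updateJobsOldestFirst.size() - 2);
    }
}
```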

All the other risks and mitigations described for Solution 1 apply to Solution 2 as well.