Table of Contents
...
Functional Area | Change or Addition | Considerations | Action required | Action timing | Contact person | Comments | Related JIRA issues
---|---|---|---|---|---|---|---
Affected app or module | What has been changed or added that should be noted for this release | What challenges may arise related to this change or addition | If applicable, detail what action(s) must be taken here | When the action can be taken (before, during or after upgrade) | Username of a person who can provide additional detail | Name of user leaving the comment: comment on what you encountered or ask a question, @mention the Contact person | Link to the bug fix, story or feature issue that applies
mod-inventory-storage | The module now depends on the Kafka message broker. Kafka must be up and running before the module is installed. | Inventory-storage APIs (create, update and delete actions on instances, holdings records and items) will fail with a 500 status code if Kafka is unreachable. | Make sure the KAFKA_HOST and KAFKA_PORT environment variables are set and propagated to the mod-inventory-storage container before module installation. Example: for Rancher dev environments we set KAFKA_HOST: kafka, KAFKA_PORT: 9092 | Before upgrade | | The same approach as for mod-pubsub may be used here; we tried to follow the same property naming. |
mod-inventory-storage | Statistical code names must be unique | The upgrade will fail if two or more statistical codes share the same name prior to the upgrade | Rename any existing statistical codes that duplicate the name of another statistical code | Before upgrade | | |
mod-inventory-storage | The id property of the circulationNotes array in the Item object is now mandatory | The id property was added to the item schema in Honeysuckle; with Iris it is mandatory, and the UI and the API will hang if it is not present. | Ensure every circulation note on existing items has an id populated before upgrading. | Before upgrade | | This is an unexpected impact of the Honeysuckle schema change; the issue was likely present in previous distributions of FOLIO. |
mod-source-record-manager | Changes to the default MARC Bib-to-Instance map | If the library has customized the default MARC Bib-to-Instance map, review these changes and decide whether they should be added to the library's custom map. If changes are made, you may need to trigger a refresh of the existing Instances that are based on the SRS MARC records, so that the changes are reflected in the Instances | | | | |
mod-source-record-storage | An endpoint has been added to allow searching MARC records. | The endpoint will only return results from MARC records added after the upgrade. A script to retroactively process pre-existing records has been developed; see MODSOURCE-276: Add existing records to the SRS Query API table | To add records that existed prior to the upgrade to the query search, run the script from MODSOURCE-276: https://github.com/folio-org/mod-source-record-storage/blob/master/mod-source-record-storage-server/src/main/resources/migration_scripts/fill_marc_indexers.sql | After upgrade | | |
mod-circulation | The age-to-lost processes (ageing borrowed items to lost, and issuing fees/fines for those items) are no longer automatically scheduled to run periodically | No items will be aged to lost and no fees/fines will be issued unless these processes are scheduled by the hosting provider | Organisations that need these processes to run must schedule them from outside the system; this configuration is the hosting provider's responsibility. The two processes will likely need to be scheduled separately, meaning two separate tasks and schedules (separate users may be used for each process). No request body needs to be provided when calling the new external endpoints. Further documentation is available at How to Schedule Age to Lost Processes | During or after the upgrade | | |
mod-kb-ebsco-java | The new "Usage Consolidation" feature is now supported | Usage Consolidation settings must be configured | To configure "Usage Consolidation", in addition to the settings in the UI, the middle-layer service needs to be configured by running a query directly against the DB, for example: ``` INSERT INTO <tenant_id>_mod_kb_ebsco_java.usage_consolidation_credentials(client_id, client_secret) VALUES (?, ?); ``` User stories have been created to make this configurable from the UI. | After upgrade to Iris | | |
mod-ncip | New permission required to call NCIP services: | | | During or after upgrade | | |
mod-oai-pmh | This is not a required step, but we noticed during tests that when mod-oai-pmh performance is poor, re-indexing and vacuuming the indexes in mod-inventory-storage resolved the issue. | | | During or after upgrade | | |
mod-data-import, mod-source-record-manager, mod-source-record-storage, mod-inventory, mod-invoice | All modules involved in the data import process now communicate through Kafka directly. Kafka should be configured and running before the modules are installed, and additional parameters should be set for the modules. | New setup and configuration required | Follow the instructions to set all the necessary parameters for Kafka and the modules | Before and during the upgrade | | |
mod-search | Since version 1.3.0, the OKAPI_URL environment variable is required by the module. | Without it, data ingestion will not work. | Set/pass OKAPI_URL to the application container. | On module startup | | |
mod-pubsub | The environment variables SYSTEM_USER_NAME and SYSTEM_USER_PASSWORD can be used to set credentials for the PubSub system user. Otherwise, default values (username pub-sub, password pubsub) are used. | Not providing these variables results in the default credentials being used, which is a security issue. | Set the SYSTEM_USER_NAME and SYSTEM_USER_PASSWORD environment variables. | On module startup | | |
mod-agreements | When upgrading from Goldenrod, consider running the Supplementary Document cleanup job when initialising the module for the tenant. | The Supplementary Document cleanup job fixes an issue that could lead to a single supplementary document being linked to multiple agreements. The job detects this situation and duplicates the document so that each agreement is linked to its own copy. | When installing/starting the module for the tenant, include cleanSupplementaryDocs%3Dtrue in the tenantParameters, e.g. /_/proxy/tenants/diku/install?tenantParameters=cleanSupplementaryDocs%3Dtrue | On module startup | | |
mod-patron-blocks | Events which took place before the first mod-patron-blocks deployment are NOT taken into account when calculating automated patron blocks. | This is an inherent limitation of mod-patron-blocks, which is an event-based system. | If the tenant plans to use the Automated Patron Blocks feature AND full event synchronization has not been performed before, follow the steps outlined in the Q3 2020 (Honeysuckle) Release Notes. | After upgrade | | |
mod-source-record-storage | Cleanup of invalid snapshot statuses | Data Import updates of records associated with a snapshot in an invalid status will fail | The snapshots_lb table should only contain status values of 'ERROR' or 'COMMITTED' for jobs which are no longer executing; a manual script can be run to adjust the values. This script should NOT be run while DI jobs are actively in progress | Before or after upgrade (while DI jobs are not running) | | |
mod-source-record-storage | Populate missing instance_hrid | | Run the following manual script to populate missing instance_hrid values in the records_lb table: | Before or after upgrade | | |

```
DO $$
BEGIN
  RAISE notice 'Script for populating missing Instance HRIDs in SRS started';
  UPDATE <tenant>_mod_source_record_storage.records_lb rec
  SET instance_hrid = arr.item_object->>'001'
  FROM <tenant>_mod_source_record_storage.marc_records_lb ind,
       jsonb_array_elements(content->'fields') WITH ORDINALITY arr(item_object, position)
  WHERE rec.id = ind.id
    AND rec.instance_id IS NOT NULL
    AND rec.instance_hrid IS NULL
    AND arr.item_object ? '001';
  RAISE notice 'Script for populating missing Instance HRIDs in SRS finished';
END;
$$;
```
mod-circulation | All Service Points must be associated with a Fee/Fine Owner at Settings > Users > Fee/fine: Owners for overdue fines and lost item fees to work properly. If an overdue fine or lost item fee is calculated for an item whose Location's Primary Service Point is not associated with a Fee/Fine Owner, the fee/fine will NOT be charged to the patron. | In the future there will be a Default Fee/Fine Owner to charge (see UXPROD-2278 for details). | | | | |
mod-circulation | Upon deployment, make sure that the automatic fee/fine types were added to the mod-feesfines database; otherwise overdue fine creation will not work. | To check, make a call to /feefines?query=automatic==true. The response should contain 4 entries: "Overdue fine", "Lost item fee", "Lost item processing fee" and "Replacement processing fee". | | | | |
mod-circulation | Overdue fine policies must be set up, and circulation rules must refer to policies that exist. | If you do not charge overdue fines, create one overdue fine policy with an overdue fine of 0 and use that policy in every circulation rule. | | | | |
mod-circulation | Lost item fee policies must be set up, and circulation rules must refer to policies that exist. | If you do not charge lost item fees, create one lost item fee policy with a blank lost item fee and use that policy in every circulation rule. |
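The Kafka variables described in the mod-inventory-storage and data-import rows above can be sketched as follows. The values are the Rancher dev examples from the table and must be adapted per environment; the commented `docker run` line is only an illustration of propagating them into the container.

```shell
# Sample Rancher dev values from the table above; adjust for your deployment.
export KAFKA_HOST=kafka
export KAFKA_PORT=9092

# The variables must reach the module container itself, e.g. (illustrative only):
# docker run -e KAFKA_HOST -e KAFKA_PORT folioorg/mod-inventory-storage:<version>

echo "Kafka target: ${KAFKA_HOST}:${KAFKA_PORT}"
```

Kafka must be reachable at this address before the modules are installed, or the APIs listed above will fail with 500 errors.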
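The age-to-lost row above leaves scheduling to the hosting provider. A minimal sketch using cron and curl follows; the endpoint paths shown are our assumption of the two external endpoints the row refers to, and the host, tenant, and token are placeholders — verify the definitive names against the How to Schedule Age to Lost Processes documentation.

```shell
# Hypothetical crontab entries run by the hosting provider. Endpoint paths,
# host, tenant and token are illustrative placeholders; confirm them against
# the "How to Schedule Age to Lost Processes" documentation.

# Age items to lost every 30 minutes:
# */30 * * * *  curl -s -X POST "https://okapi.example.org/circulation/scheduled-age-to-lost" \
#                 -H "X-Okapi-Tenant: diku" -H "X-Okapi-Token: $TOKEN"

# Charge fees/fines for aged-to-lost items, as a separate task (a separate
# user may be used for each process):
# */30 * * * *  curl -s -X POST "https://okapi.example.org/circulation/scheduled-age-to-lost-fee-charging" \
#                 -H "X-Okapi-Tenant: diku" -H "X-Okapi-Token: $TOKEN"
```

As noted in the row above, no request body is needed for either call.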
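The mod-agreements row above gives the install path and the cleanSupplementaryDocs%3Dtrue parameter; a concrete sketch of the tenant install call could look like this. The Okapi host and the module payload are placeholders for your environment — only the path, tenant `diku`, and tenantParameters value come from the row itself.

```shell
# Hypothetical invocation of the Okapi install endpoint from the row above.
# OKAPI and the JSON module list are placeholders for your environment.
OKAPI="https://okapi.example.org"
curl -s -X POST \
  "$OKAPI/_/proxy/tenants/diku/install?tenantParameters=cleanSupplementaryDocs%3Dtrue" \
  -H "Content-Type: application/json" \
  -d '[{ "id": "mod-agreements", "action": "enable" }]'
```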
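The automatic fee/fine check in the mod-circulation row above can be scripted. This sketch assumes jq is available and that $TOKEN holds a valid Okapi token; the host and tenant are placeholders, and the `feefines`/`feeFineType` response field names are our assumption about the mod-feesfines response shape.

```shell
# Verify the four automatic fee/fine types exist after deployment.
# Host, tenant and token are placeholders; the query comes from the row above.
curl -s "https://okapi.example.org/feefines?query=automatic==true" \
  -H "X-Okapi-Tenant: diku" -H "X-Okapi-Token: $TOKEN" \
| jq -r '.feefines[].feeFineType'
```

The output should list four entries: Overdue fine, Lost item fee, Lost item processing fee, and Replacement processing fee.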
...
This feature will allow a user to export filtered data in the Circulation log to a CSV file. Accessed through the Circulation Log app > Run a search > Open the “Actions” drop-down list > Select “.CSV export”
Cornell Library's go-live requirements to transfer fees/fines to the Cornell bursar system
Automated transfer of fees/fines to the bursar or another account. Accessed through Settings > Tenant > General: Bursar (to run a Bursar export manually or to set the scheduling parameters for the export)
Permissions Updates
App | New Permissions | Deprecated Permissions | Product Owner
---|---|---|---
Data import | Renamed "UI: ui-data-import module is enabled" to "Data import: all permissions" to make it clearer. The scope of the permissions did not change. | |
Data import | Renamed "Settings (data-import): display list of settings pages" to "Settings (Data import): Can view, create, edit, and remove" to make it clearer. The scope of the permission did not change. | | Ann-Marie Breaux
Inventory | Because tags can now be added to Inventory record types (instance, holdings, item), anyone with Inventory permissions should have their permissions edited to add either the "Tags on records: View only" or the "Tags: All" permission. Important: if user permissions are not updated as described, the Inventory app will not display in the top menu bar for users with Inventory app permissions. | |
Inventory | We have implemented the ability to move holdings and items in Inventory, but such transfers are not yet reflected in dependent apps, e.g. Orders, Requests, and Courses. To prevent corrupt data, it is highly recommended that all libraries limit the ability to move holdings and items by limiting the number of staff with the relevant user permission. | | Charlotte Whitt
Remote Storage | Remote storage integration has one level of permissions: "Remote storage: Create, read, update, delete". A user with this permission can create and configure remote locations. Any user with "Inventory: View, create, edit instances" can change the location of holdings and items to remote storage, and from remote storage to main locations. | |
Check out | New permissions: ui-checkout.viewRequests (Check out: View requests), ui-checkout.viewLoans (Check out: View loans), and ui-checkout.viewFeeFines (Check out: View fees/fines). These UI-only permissions control whether the Loans, Requests, and Fee/Fine information in the user pane of Check out are active hyperlinks to their respective information. Important: these permissions are not included in any other permission set by default, including ui-checkout.all (Check out: All permissions), and they are required to enable the indicated hyperlinks in Check out. | | Brooks Travis (formerly Emma Boettcher)
quickMARC | "quickMARC: Derive new MARC bibliographic record": this permission allows a user to derive a new MARC bib record from an existing MARC record | | Khalilah Gambrell
eholdings | | | Khalilah Gambrell
Agreements | | |
Licenses | | |
Local KB Admin | | |
...
Deployment Considerations:
- If you want to benefit from permission migration, you need Okapi v4.6.0 or greater (v4.7.2 or greater is highly recommended) and mod-permissions v5.13.0.
- Contrary to earlier communications, it is NOT required to upgrade mod-permissions first or last. It is also NOT required that you upgrade to the latest Honeysuckle Hot Fix release prior to upgrading to Iris.
Please contact Craig McNally, Adam Dickmeiss, or Jakub Skoczen with questions.
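As a quick pre-check against the version thresholds above, the running Okapi version can be queried; the host below is a placeholder, and /_/version is Okapi's standard version endpoint.

```shell
# Check the running Okapi version against the thresholds listed above
# (host is a placeholder; /_/version returns a plain version string).
curl -s "https://okapi.example.org/_/version"
# Compare the result against 4.6.0 (minimum) / 4.7.2 (recommended).
```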
Data Import
- Recommended Maximum File Sizes and Configuration
- Supported in Iris:
- Create and update Inventory Instances, Holdings, and Items
- MARC record modifications to add, remove, and change constant data within the incoming file
- MARC-MARC matching on 001 and 999 ff fields, but not any other MARC fields
- EDIFACT invoice loading
- For importing EDIFACT invoices, the user needs the following permissions:
- Data import: all permissions
- Settings (Data import): Can view, create, edit, and remove
- Invoice: Assign acquisitions units to new record
- Invoice: Can view, edit and create new Invoices and Invoice lines
- Invoice: Can view, edit and delete Invoices and Invoice lines
...
Jira Charts
The module mod-aes was removed from the Iris release platform at the request of the dev team (Hongwei Ji and Matt Reno); it had been added by mistake.
Hot fix release #2 -
Status
...