Related Jira issue: MODKBEKBJ-260
Introduction
This page describes the ability to export a library's holdings details from the eHoldings application. Both package export and title export will be available. In addition to exporting an entire package/title, it will be possible to export only selected fields.
A user starts an export from a holdings detail record (package or title) by pressing the 'Actions' button and then 'Export package/title (CSV)'. The result of the export is a generated CSV file that is available for librarians to download. See the UI mockups for a deeper understanding.
Package detail export
When a user clicks 'Export package (CSV)', a modal window is shown. The user can select all or some of the Package fields, Title fields, and additional fields to export. When the user presses the 'Export' button, the modal panel disappears and the export process starts. A green toast message is displayed showing the name of the file being generated and the approximate duration of the export (30 minutes). Is the user automatically directed to Export Manager?
The name of the generated file depends on what is exported from the package (a naming sketch follows the list below):
- Package details only export (when a user chooses fields in the Package dropdown only, staying on the Package export page): <<YYYY_MM_DD>>_<<Package name>>_packagedetails.csv, for example 2022_04_11_WileyOnlineLibrary_packagedetails.csv
- Title-package details export (when a user chooses fields in the Titles dropdown while staying on the Package export page): <<YYYY_MM_DD>>_<<Package name>>_packagetitles.csv, for example 2022_04_11_WileyOnlineLibrary_packagetitles.csv
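As a minimal illustration of the naming convention above, the file name could be assembled as follows. This is a sketch only; the class and method names are assumptions, not an existing API in any of the modules.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class ExportFileNameBuilder {

  // Date prefix in the YYYY_MM_DD format used in the examples above
  private static final DateTimeFormatter DATE_FORMAT = DateTimeFormatter.ofPattern("yyyy_MM_dd");

  // 'suffix' distinguishes the export kind: "packagedetails", "packagetitles" or "titledetails"
  public static String buildName(LocalDate date, String recordName, String suffix) {
    // Strip spaces from the record name, as in "WileyOnlineLibrary"
    String normalizedName = recordName.replaceAll("\\s+", "");
    return DATE_FORMAT.format(date) + "_" + normalizedName + "_" + suffix + ".csv";
  }

  public static void main(String[] args) {
    // Produces: 2022_04_11_WileyOnlineLibrary_packagedetails.csv
    System.out.println(buildName(LocalDate.of(2022, 4, 11), "Wiley Online Library", "packagedetails"));
  }
}
```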
Title Package export
Works similarly to Package export. When a user presses the 'Export' button, the modal panel disappears and the export process starts. A green toast message is displayed showing the name of the file being generated and the approximate duration of the export (30 minutes). Is the user automatically directed to Export Manager?
The name of the generated file is always <<YYYY_MM_DD>>_<<Title name>>_titledetails.csv, for example 2022_04_11_WileyOnlineLibrary_titledetails.csv
Additional details:
- Which titles to include in the export? If the user is on the package detail record and wants to export title information, include the titles returned by the Titles accordion search. For example, if the user is on a package record and conducts a title search that returns 100 of the 1000 titles in the package, then the export should include only the 100 titles returned by the search.
- Requirements for generated files (see the formatting sketch after this list):
  - Multiple records separator: pipe
  - File extension: csv
  - Delimiter for an array of values: comma
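My reading of these rules is that several records ending up in one cell are separated with a pipe, while an array of values inside a record is comma-delimited. The sketch below illustrates that reading only; the class and method names are hypothetical.

```java
import java.util.List;

public class EHoldingsCsvFormatting {

  // Multiple records that end up in one cell are separated with a pipe
  public static String joinRecords(List<String> records) {
    return String.join("|", records);
  }

  // An array of values inside a record is delimited with a comma
  public static String joinArrayValues(List<String> values) {
    return String.join(",", values);
  }

  public static void main(String[] args) {
    // e.g. two coverage ranges as separate records, each holding an array of values
    String firstRange = joinArrayValues(List.of("2001-01-01", "2010-12-31"));
    String secondRange = joinArrayValues(List.of("2015-01-01", "2020-12-31"));
    // Produces: 2001-01-01,2010-12-31|2015-01-01,2020-12-31
    System.out.println(joinRecords(List.of(firstRange, secondRange)));
  }
}
```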
Solution
The Export Manager application can satisfy the given requirements. It provides sources we can reuse to retrieve various objects, generate CSV files, upload files to vendor-specific storage, and share access to the stored files. The application can manage 'immediate' export jobs as well as 'scheduled' export jobs that have to be configured before running. Export Manager consists of the backend modules mod-data-export-spring and mod-data-export-worker, and the UI module ui-export-manager.
ui-export-manager shows a list of jobs with their status, type, and other information. Here users can see the result of job execution and download files. This module uses the REST API of mod-data-export-spring to retrieve jobs.
This module should be able to display and filter the new eHoldings jobs that we will use to export packages & titles.
What should be done in this module:
- add a new job type - 'eHoldings';
mod-data-export-spring is designed to manage, configure, and run jobs. This module is the entry point for starting a data export; it calls mod-data-export-worker to execute jobs by sending events to the mod-data-export-worker Kafka topic.
What should be done in this module:
- add new export type - 'eHoldings';
- add new request parameters (to ExportTypeSpecificParameters.json), needed to pass export fields, search parameters for the titles search, and other parameters;
- add a new JobCommandBuilder, needed to take the request parameters and pass them in the Kafka event (a sketch follows this list);
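To make the new request parameters and the JobCommandBuilder step more concrete, here is a minimal sketch of how the eHoldings-specific parameters could be flattened into the key/value parameters of the Kafka job command. All type, field, and method names below are illustrative assumptions, not the module's actual schema or API.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the eHoldings-specific parameters added to
// ExportTypeSpecificParameters and flattened into the Kafka job command.
public class EHoldingsExportParametersSketch {

  // Parameters the UI would send: which record to export, which fields to include,
  // and the titles search filters applied in the Titles accordion
  record EHoldingsExportConfig(
      String recordId,
      String recordType,            // e.g. "PACKAGE" or "RESOURCE"
      List<String> packageFields,
      List<String> titleFields,
      String titleSearchFilters) {
  }

  // A stand-in for the JobCommandBuilder step: turn the config into the
  // key/value parameters carried by the Kafka event
  static Map<String, String> toJobParameters(EHoldingsExportConfig config) {
    return Map.of(
        "recordId", config.recordId(),
        "recordType", config.recordType(),
        "packageFields", String.join(",", config.packageFields()),
        "titleFields", String.join(",", config.titleFields()),
        "titleSearchFilters", config.titleSearchFilters());
  }

  public static void main(String[] args) {
    EHoldingsExportConfig config = new EHoldingsExportConfig(
        "58-473",
        "PACKAGE",
        List.of("packageName", "providerName"),
        List.of("titleName", "titleType"),
        "filter[name]=science");
    System.out.println(toJobParameters(config));
  }
}
```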
mod-data-export-worker is intended to receive events from mod-data-export-spring and execute the corresponding jobs. The module is built on the Spring Batch framework, and jobs are configured as a set of steps. In most cases a job executes in 3 steps: retrieve data, process data, and write data to the file. Uploading files to vendor-specific storage is already preconfigured by a listener and happens once all the data has been written to the file.
What should be done in this module:
- create a reader extending the base functionality (CsvItemReader.java); the reader should retrieve packages/titles using REST clients, taking search parameters from the incoming Kafka event;
- create a processor that converts the retrieved records into CSV rows;
- create a writer that appends the rows to the generated file (a conceptual sketch of this flow follows the list).
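To make the reader/processor/writer flow concrete, here is a minimal, framework-free sketch of the three steps. It deliberately avoids the actual Spring Batch and CsvItemReader APIs; all class and method names are illustrative assumptions.

```java
import java.util.List;

// Conceptual stand-in for the reader -> processor -> writer chain described above.
// Real implementations would extend CsvItemReader and the Spring Batch interfaces;
// the types here are simplified assumptions for illustration only.
public class EHoldingsExportFlowSketch {

  // A simplified title record as it might come back from the eHoldings REST client
  record EHoldingsTitle(String name, String type, List<String> subjects) {}

  // Step 1 (reader): retrieve titles page by page using the search parameters
  // taken from the Kafka event; here the data is simply hard-coded
  static List<EHoldingsTitle> readPage(int pageNumber) {
    if (pageNumber > 1) {
      return List.of(); // no more pages
    }
    return List.of(new EHoldingsTitle("Journal of Chemistry", "Journal", List.of("Chemistry", "Science")));
  }

  // Step 2 (processor): map a retrieved title to a CSV row, joining repeated
  // values with the pipe separator from the requirements above
  static String processToCsvRow(EHoldingsTitle title) {
    return String.join(",", title.name(), title.type(), String.join("|", title.subjects()));
  }

  // Step 3 (writer): append rows to the generated file; here they are just printed
  static void write(List<String> rows) {
    rows.forEach(System.out::println);
  }

  public static void main(String[] args) {
    int page = 1;
    List<EHoldingsTitle> titles;
    while (!(titles = readPage(page++)).isEmpty()) {
      write(titles.stream().map(EHoldingsExportFlowSketch::processToCsvRow).toList());
    }
  }
}
```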
mod-kb-ebsco-java ToDo
ui-eholdings ToDo
Questions/Answers