Details
Reporter
Magda Zacharska
Potential Workaround
*Start the export:*
1. Post a file definition to /data-export/fileDefinition (this is the equivalent of uploading UUIDs or a CQL query)
2. Start the export via /data-export/export
*Retrieve the files generated by the export:*
1. Get the completed jobExecutionId and fileId from /data-export/jobExecution by querying on the completed timestamp
2. Download the file via /data-export/jobExecutions/{jobExecutionId}/download/{fileId} (see the sketch below)
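A minimal sketch of this workaround as an external Python script, using the endpoint paths listed above; the Okapi URL, tenant, token, and the request/response field names (fileName, fileDefinitionId, jobExecutions, exportedFiles) are assumptions for illustration and may differ between module versions.
{code:python}
import requests

# Placeholders, not values from this ticket.
OKAPI_URL = "https://folio-okapi.example.org"
HEADERS = {
    "x-okapi-tenant": "diku",      # hypothetical tenant
    "x-okapi-token": "<token>",    # obtained separately, e.g. via /authn/login
    "Content-Type": "application/json",
}

# 1. Post a file definition (the equivalent of uploading UUIDs or a CQL query).
#    The payload fields are assumptions and may differ by module version.
file_def = requests.post(
    f"{OKAPI_URL}/data-export/fileDefinition",
    headers=HEADERS,
    json={"fileName": "instances.csv"},
).json()

# 2. Start the export, referencing the file definition created above.
requests.post(
    f"{OKAPI_URL}/data-export/export",
    headers=HEADERS,
    json={"fileDefinitionId": file_def["id"]},
).raise_for_status()

# 3. Find the completed job execution by querying on the completed timestamp
#    (the CQL below is illustrative).
jobs = requests.get(
    f"{OKAPI_URL}/data-export/jobExecution",
    headers=HEADERS,
    params={"query": 'status=="COMPLETED" sortBy completedDate/sort.descending'},
).json()
job = jobs["jobExecutions"][0]
file_id = job["exportedFiles"][0]["fileId"]

# 4. Download the generated file.
resp = requests.get(
    f"{OKAPI_URL}/data-export/jobExecutions/{job['id']}/download/{file_id}",
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.text)
{code}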
PO Rank
90
PO Ranking Note
Needed for automated exports
Front End Estimate
XXL < 30 days
Front End Estimator
Magda Zacharska
Front-End Confidence factor
Low
Back End Estimate
XXXL: 30-45 days
Back End Estimator
Magda Zacharska
Back-End Confidence factor
20%
Release
Umbrellaleaf (R3 2025)
Rank: 5Colleges (Full Jul 2021)
R1
Rank: Cornell (Full Sum 2021)
R4
Rank: GBV (MVP Sum 2020)
R1
Rank: hbz (TBD)
R1
Rank: Grand Valley (Full Sum 2021)
R2
Rank: TAMU (MVP Jan 2021)
R1
Rank: Chicago (MVP Sum 2020)
R1
Rank: MO State (MVP June 2020)
R1
Rank: U of AL (MVP Oct 2020)
R3
Rank: Lehigh (MVP Summer 2020)
R1
Created March 18, 2020 at 2:27 AM
Updated March 25, 2025 at 7:17 PM
In the existing implementation the data export can only be triggered manually. For exports that recur on a regular basis (such as an incremental export of all records that were added or modified since the last export), the application will need to provide an API so that the export can be triggered by an external custom export script.
This feature covers the backend work to support a scenario in which the library has an export job that needs to run on a regular basis against data identified in a consistent way. Such jobs are mostly run when the exported data is needed for integration with external services, and the file generated by the export might need to be FTP-ed to a specific location.
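For the FTP step mentioned above, a minimal sketch of pushing an exported file to an external drop location with Python's standard ftplib; the host, credentials, and paths are placeholders, not values from this feature.
{code:python}
from ftplib import FTP

def push_export(local_path: str, remote_name: str) -> None:
    """Upload an exported file to a partner's FTP drop folder.

    Host, credentials, and directory are placeholders.
    """
    with FTP("ftp.example.org") as ftp:
        ftp.login(user="export-user", passwd="secret")
        ftp.cwd("/incoming/marc")
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)

# Example: push the file downloaded by the export script.
push_export("instances.mrc", "instances-2025-03-25.mrc")
{code}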
The user should be able to:
schedule when the job needs to run (quarterly, monthly, weekly, daily, at a specified time)
determine if this is a recurring export job
have the files generated by the export stored in the standard location
associate the job with a mapping profile that determines the required data manipulation.
identify the data to be exported either by a CQL query that can take system parameters (for example, the date of the last execution) or by providing a list of UUIDs if static data needs to be exported (see the sketch after this list).
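To illustrate the CQL-with-system-parameters idea, a hedged sketch of how a recurring job might build its incremental query from the date of the last execution; the metadata.updatedDate field path is an assumption, not a confirmed part of the export contract.
{code:python}
from datetime import datetime, timezone

def build_incremental_query(last_run: datetime) -> str:
    """Build a CQL query selecting records added or modified since the last export.

    The metadata.updatedDate field path is an assumption for illustration only.
    """
    since = last_run.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return f'(metadata.updatedDate>="{since}")'

# A daily job would substitute the previous run's timestamp and post the
# resulting query as a CQL file definition before starting the export.
print(build_incremental_query(datetime(2025, 3, 24, 2, 0, tzinfo=timezone.utc)))
# -> (metadata.updatedDate>="2025-03-24T02:00:00.000Z")
{code}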
Additional information:
An updated workaround has been attached.