Done
Details
Assignee
Volodymyr Rohach
Reporter
Brooks Travis
Priority
P3
Story Points
1
Sprint
None
Development Team
Folijet
Fix versions
Release
Sunflower (R1 2025)
RCA Group
Requirements change
TestRail: Cases
TestRail: Runs
Created February 13, 2025 at 5:29 PM
Updated March 13, 2025 at 5:35 PM
Resolved February 21, 2025 at 3:31 PM
When posting record chunks to /change-manager/jobExecutions/{job_id}/records, if the same payload is posted twice, the server returns a 500 error rather than an appropriate (and informative) 400/422 error. This becomes important when dealing with a read timeout on a previous post attempt (retry logic): we need to be able to determine, based on the error response, whether the batch already exists in the Kafka queue or some other error has occurred.
Steps to reproduce:
Create a new jobExecution with the appropriate job profile
Create a records payload
Post the records payload
Post the same records payload again
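A minimal sketch of the steps above, assuming a local Okapi gateway, the diku tenant, and placeholder values for the token, job execution id, and records payload (all of these are assumptions; substitute the values created in the first two steps):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DuplicateChunkRepro {
    public static void main(String[] args) throws Exception {
        // Placeholder environment values; adjust to the local FOLIO installation.
        String okapiUrl = "http://localhost:9130";
        String tenant   = "diku";
        String token    = "<okapi-token>";
        String jobId    = "<job-execution-uuid>"; // from the jobExecution created above
        String payload  = "{ ... }";              // the records payload (chunk) created above

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(okapiUrl + "/change-manager/jobExecutions/" + jobId + "/records"))
                .header("Content-Type", "application/json")
                .header("X-Okapi-Tenant", tenant)
                .header("X-Okapi-Token", token)
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // First post: the chunk is accepted and queued for processing.
        HttpResponse<String> first = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("first post:  " + first.statusCode());

        // Second post of the identical payload: currently comes back as a generic 500.
        HttpResponse<String> second = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("second post: " + second.statusCode() + " " + second.body());
    }
}
```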
Expected results:
An error message indicating the reason for the error
Actual results:
A generic 500 error
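To illustrate why the distinction matters for retry logic, a hypothetical client-side sketch (not existing code): it retries a chunk post after a read timeout and would treat an informative 400/422 "chunk already queued" response as successful delivery, which is not possible while both the duplicate and genuine failures surface as a generic 500.

```java
import java.io.IOException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;

public class ChunkRetry {
    /**
     * Post one record chunk, retrying after a read timeout. The 400/422 branch
     * assumes mod-srm is changed to report a duplicate chunk explicitly.
     */
    static boolean postChunkWithRetry(HttpClient client, HttpRequest request)
            throws IOException, InterruptedException {
        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                HttpResponse<String> resp = client.send(request, HttpResponse.BodyHandlers.ofString());
                int status = resp.statusCode();
                if (status / 100 == 2) {
                    return true;   // chunk accepted on this attempt
                }
                if (status == 400 || status == 422) {
                    return true;   // desired behavior: chunk already in the queue, treat as delivered
                }
                return false;      // any other error: stop retrying and surface it
            } catch (HttpTimeoutException e) {
                // Read timeout: the chunk may or may not have reached the Kafka
                // queue, so retry and let the next response disambiguate.
            }
        }
        return false;
    }
}
```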
Error log from mod-srm: