Platform, DevOps and Release Management
(UXPROD-1814)
| Status: | Open |
| Project: | UX Product |
| Components: | None |
| Affects versions: | None |
| Fix versions: | None |
| Parent: | Platform, DevOps and Release Management |
| Type: | New Feature |
| Priority: | P2 |
| Reporter: | Mike Gorrell |
| Assignee: | Jakub Skoczen |
| Resolution: | Unresolved |
| Votes: | 0 |
| Labels: | cap-mvp, performance |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original estimate: | Not Specified |
| Issue links: | |
| Epic Link: | Platform, DevOps and Release Management |
| Back End Estimate: | XXL < 30 days |
| Back End Estimator: | Jakub Skoczen |
| Estimation Notes and Assumptions: | XXL is the largest available. This item is likely larger. |
| Development Team: | None |
| Kiwi Planning Points (DO NOT CHANGE): | 20 |
| Rank: Chicago (MVP Sum 2020): | R1 |
| Rank: Cornell (Full Sum 2021): | R1 |
| Rank: Duke (Full Sum 2021): | R1 |
| Rank: 5Colleges (Full Jul 2021): | R1 |
| Rank: GBV (MVP Sum 2020): | R1 |
| Rank: hbz (TBD): | R1 |
| Rank: Lehigh (MVP Summer 2020): | R2 |
| Rank: MO State (MVP June 2020): | R4 |
| Rank: TAMU (MVP Jan 2021): | R1 |
| Rank: U of AL (MVP Oct 2020): | R2 |
| Description |
We need an environment, a set of tests, and an approach to load and performance testing that exercises the system based on realistic usage and behavior. Waits, concurrency, sequencing, parameters, and similar factors all contribute to realistic, valuable test results; if they are not grounded in real-world, expected usage, they produce garbage results that force us to chase our tail. There is no sense in optimizing an API to stand up to a pounding it will never receive in real life.

We need to test front-end modules as well as the back end. We have all seen cases where the front end uses the back end incorrectly and causes performance problems as a result, while back-end components' contribution to performance is obvious. This epic/issue will be used to track the various activities that make up the overall load and performance testing approach for FOLIO.
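To make the "waits, concurrency, sequence" point concrete, here is a minimal sketch (not FOLIO code) of a virtual-user load harness: each simulated user issues a series of requests with randomized think-time between them, and per-request latencies are collected for analysis. The `simulateRequest` body is a placeholder where a real HTTP call against a FOLIO API would go.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoadSketch {

    // Placeholder for one real request against the system under test.
    // Returns the observed latency in nanoseconds.
    static long simulateRequest() {
        long start = System.nanoTime();
        try {
            Thread.sleep(5); // stand-in for actual request work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.nanoTime() - start;
    }

    // Run `users` concurrent virtual users, each making `requestsPerUser`
    // requests separated by randomized think-time, and collect latencies.
    public static List<Long> run(int users, int requestsPerUser) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<List<Long>>> futures = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            futures.add(pool.submit(() -> {
                Random rnd = new Random();
                List<Long> latencies = new ArrayList<>();
                for (int i = 0; i < requestsPerUser; i++) {
                    latencies.add(simulateRequest());
                    Thread.sleep(rnd.nextInt(10)); // think-time between actions
                }
                return latencies;
            }));
        }
        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> f : futures) {
            all.addAll(f.get());
        }
        pool.shutdown();
        return all;
    }

    public static void main(String[] args) throws Exception {
        List<Long> latencies = run(4, 5);
        System.out.println("samples=" + latencies.size()); // 4 users * 5 requests
    }
}
```

In practice a tool such as JMeter or Gatling would play this role; the sketch only shows why think-time and concurrency parameters must reflect real library workflows to produce meaningful numbers.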
| Comments |
| Comment by Jakub Skoczen [ 05/Jun/19 ] |
I think a next step would be to start breaking down this epic into user stories. Initially they can be large and high-level; that's okay, we can break them down further and/or elevate some to the epic level. A couple of high-level user stories come to mind:
| Comment by Mike Gorrell [ 05/Jun/19 ] |
Added several stories.
| Comment by Martin Tran [ 17/Jun/19 ] |
For the initial set of test scenarios we could rely on the libraries' existing data, since they would have information on rates of check-in/out, requests, renewals, and other workflows. Using this data we can assess which workflows are used more than others. Going forward we can get this information by adding custom metrics to our modules. Metrics such as rates, counts, and timers, strategically placed in various parts of a module, can help identify which APIs or code paths are being called and how frequently. For the first release we can't rely on these metrics because they won't have been exercised by real customers yet, but once a real customer starts using the system, these metrics will light up and be able to tell us which scenarios and workflows customers actually use. Metrics can also tell us how the system performs in real time and aid in analyzing performance issues.
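As a rough illustration of the rate/count/timer idea above (a sketch, not the actual FOLIO instrumentation; in practice a library such as Micrometer or Dropwizard Metrics would be used), a module could record a count and total duration per API path and derive call frequency and average latency from them. The endpoint names below are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ApiMetrics {
    // Thread-safe per-path counters: call count and total time in nanoseconds.
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> totalNanos = new ConcurrentHashMap<>();

    // Record one call to the named API path, with its observed duration.
    public void record(String path, long nanos) {
        counts.computeIfAbsent(path, k -> new LongAdder()).increment();
        totalNanos.computeIfAbsent(path, k -> new LongAdder()).add(nanos);
    }

    // How many times this path was called (its relative "popularity").
    public long count(String path) {
        LongAdder a = counts.get(path);
        return a == null ? 0 : a.sum();
    }

    // Average latency per call in milliseconds.
    public double avgMillis(String path) {
        long c = count(path);
        return c == 0 ? 0.0 : totalNanos.get(path).sum() / 1e6 / c;
    }

    public static void main(String[] args) {
        ApiMetrics m = new ApiMetrics();
        // Hypothetical endpoints; real names would come from the modules.
        m.record("/circulation/check-out", 12_000_000);
        m.record("/circulation/check-out", 18_000_000);
        m.record("/circulation/requests", 5_000_000);
        System.out.println("check-out count=" + m.count("/circulation/check-out"));
        System.out.println("check-out avg ms=" + m.avgMillis("/circulation/check-out"));
    }
}
```

Once real traffic flows through such counters, the relative counts per path give exactly the usage distribution needed to weight load-test scenarios realistically.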
| Comment by Cate Boerema (Inactive) [ 29/Jul/19 ] |
Mike Gorrell and Jakub Skoczen, can we assign this to someone and get a PO rank on it? Thanks!