Test status: PASSED
...
- Response times in the test with 30 vUsers: CI - 519 ms, CO - 1130 ms. There is expected degradation during the 24-hour test compared with the 30 vUsers 45-minute test: CI - 37%, CO - 14%.
- No memory leaks during the longevity test. Two tests were performed to get the results, and both began erroring after 19 hours of running. The root cause is under investigation.
- Comparison with Quesnelia results:
- CI/CO response times degradation (45 minute tests):
| vUsers | Check-Out Controller (CO) | Check-In Controller (CI) |
|---|---|---|
| 8 | 10% | 15% |
| 20 | 14% | 20% |
| 30 | 7% | 7% |
| 75 | 6% | 4% |
- CI/CO response times degradation (longevity test):
- 30 vUsers - 6% in CO and 14% in CI flow.
- CI/CO response times degradation (45 minute tests):
...
- RDS CPU utilization average
- 8 vUsers - 13%, 20 vUsers - 22%, 30 vUsers - 30%, 75 vUsers - 63%. During the longevity test, CPU grew from 30% to 45%. This growing trend during the longevity test can be explained by the absent dcb-system-user in the mod-dcb module. CPU utilization is the same as it was in Quesnelia.
- CPU (User) usage by broker
- Common CPU utilization by broker during all tests was 15%, with equal distribution between brokers. As the MSK cluster is linked to all PTF clusters, the time range that reflects only CI/CO is from midnight till 7 a.m. The max consumption rate for the 30 vUsers test was 10%. We may also observe the impact of other CI/CO tests: the max consumption rate was 40% for all clusters.
Common notes
Recommendations
...
- Error messages: POST_circulation/check-out-by-barcode (Submit_barcode_checkout)_POST_422. 422/Unprocessable Entity. These happen expectedly if the instance was already checked out. The error rate of 0.002% is acceptable.
Response time
The table contains the results of the Check-In/Check-Out tests in the Ramsons release.
...
45 minute tests
Longevity test
RDS Database Connections
For the 45-minute and longevity tests, RDS used a maximum of 885-920 connections. Without a test running, it was 860 connections.
45 minute tests
Longevity test
CPU (User) usage by broker
As the MSK cluster is linked to all PTF clusters, the time range that reflects only CI/CO is from midnight till 7 a.m. The max consumption rate for the 30 vUsers test was 10%. We may also observe the impact of other CI/CO tests: the max consumption rate was 40% for all clusters.
45 minute tests
Longevity test
Database load
45 minutes tests:
- UPDATE fs09000000_mod_inventory_storage.item SET jsonb=$1 WHERE id=$2 RETURNING jsonb::text
- INSERT INTO fs09000000_mod_pubsub.audit_message (id, event_id, event_type, tenant_id, audit_date, state, published_by, correlation_id, created_by, error_message) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10);
- WITH deleted_rows AS ( delete from marc_indexers mi where exists( select ? from marc_records_tracking mrt where mrt.is_dirty = ? and mrt.marc_id = mi.marc_id and mrt.version > mi.version ) returning mi.marc_id), deleted_rows2 AS ( delete from marc_indexers mi where exists( select ? from records_lb where records_lb.id = mi.marc_id and records_lb.state = ? ) returning mi.marc_id) INSERT IN
- SELECT fs09000000_mod_inventory_storage.count_estimate('SELECT * FROM fs09000000_mod_inventory_storage.material_type WHERE id=''025ba2c5-5e96-4667-a677-8186463aee69''')
- UPDATE fs09000000_mod_login.auth_attempts SET jsonb = $1::jsonb WHERE id='9883ca16-ef27-41f7-81d7-6693b79cddad'
- INSERT INTO fs09000000_mod_authtoken.refresh_tokens (id, user_id, is_revoked, expires_at) VALUES ($1, $2, $3, $4)
- SELECT upsert('circulation_logs', $1::uuid, $2::jsonb)

Longevity test:
- INSERT INTO fs09000000_mod_pubsub.audit_message (id, event_id, event_type, tenant_id, audit_date, state, published_by, correlation_id, created_by, error_message) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10);
- SELECT fs09000000_mod_patron_blocks.count_estimate('SELECT jsonb FROM fs09000000_mod_patron_blocks.patron_block_limits WHERE (jsonb->>''patronGroupId'') = ''5fc96cbd-a860-42a7-8d2b-72af30206712''')
- UPDATE fs09000000_mod_inventory_storage.item SET jsonb=$1 WHERE id=$2 RETURNING jsonb::text
- SELECT jsonb FROM fs09000000_mod_patron_blocks.user_summary WHERE (jsonb->>'userId') = '4cd01954-62da-46c5-8558-ebd222bc48eb'
- SELECT fs09000000_mod_inventory_storage.count_estimate('SELECT jsonb,id FROM fs09000000_mod_inventory_storage.service_point WHERE id=''7068e104-aa14-4f30-a8bf-71f71cc15e07''')
- UPDATE fs09000000_mod_login.auth_attempts SET jsonb = $1::jsonb WHERE id='9883ca16-ef27-41f7-81d7-6693b79cddad'
- INSERT INTO fs09000000_mod_authtoken.refresh_tokens (id, user_id, is_revoked, expires_at) VALUES ($1, $2, $3, $4)
- SELECT upsert('circulation_logs', $1::uuid, $2::jsonb)
- SELECT COUNT(*) FROM fs09000000_mod_users.users
- SELECT fs09000000_mod_circulation_storage.count_estimate('SELECT jsonb,id FROM fs09000000_mod_circulation_storage.loan_policy WHERE id=''2be97fb5-eb89-46b3-a8b4-776cea57a99e''')
During the 45-minute tests, the longest request is UPDATE fs09000000_mod_inventory_storage.item SET at 38 ms/request.
During the longevity test, the longest requests are INSERT INTO fs09000000_mod_pubsub.audit_message at 41 ms and SELECT fs09000000_mod_inventory_storage.count_estimate at 107 ms.
Another observation is that we see a lot of UPDATE fs09000000_mod_login.auth_attempts and INSERT INTO fs09000000_mod_authtoken.refresh_tokens statements, which is new. It may be connected to the token refresh every 10 minutes.
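For reference, a minimal sketch of how top statements by mean execution time can be pulled from PostgreSQL is shown below. It assumes the pg_stat_statements extension is enabled on the RDS instance and uses the column names from PostgreSQL 13+; this is not necessarily the exact query used to build the lists above.

```sql
-- Hypothetical helper query (assumes pg_stat_statements is enabled):
-- list the slowest statements by mean execution time, similar to the lists above.
SELECT
    query,
    calls,
    round(mean_exec_time::numeric, 1)  AS mean_ms,
    round(total_exec_time::numeric, 1) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```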
45 minute tests
Longevity test
Appendix
Infrastructure
PTF - environment rcp1
...
Update the revision in the source-record-storage module to exclude the SQL statements that run every 30 minutes and delete rows from marc_indexers (the WITH deleted_rows ... delete from marc_indexers mi queries):
{ "name": "srs.marcIndexers.delete.interval.seconds", "value": "86400" },
Update the mod-serials module. Set the number of tasks to 0 to exclude significant database connection growth.
The usual PTF CI/CO data preparation script won't work in Ramsons. To solve that, disable the updatecompleteupdateddate_item_insert_update trigger before data preparation for the tenant and enable it again before the test start (see the sketch below).
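A minimal sketch of the disable/enable step, assuming the trigger is defined on the item table in the tenant's mod_inventory_storage schema (the table and schema names here are assumptions, not confirmed by the report):

```sql
-- Before data preparation: disable the trigger (assumed to live on the item table
-- of the tenant's mod_inventory_storage schema).
ALTER TABLE fs09000000_mod_inventory_storage.item
    DISABLE TRIGGER updatecompleteupdateddate_item_insert_update;

-- ... run the CI/CO data preparation for the tenant ...

-- Before the test start: enable the trigger again.
ALTER TABLE fs09000000_mod_inventory_storage.item
    ENABLE TRIGGER updatecompleteupdateddate_item_insert_update;
```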
...
- If the command is executed from a local machine, you may encounter a "too long query" error message. To solve it, use pgAdmin to run the 2 long queries UPDATE ${TENANT}_mod_inventory_storage.item SET jsonb = jsonb_set(jsonb, '{status, name}', '\"Checked out\"') where id IN.
- Another possible issue is incorrect encoding (on a Windows machine). To solve it, add ENCODING 'UTF8'.
- Use pattern: copy ${TENANT}_mod_circulation_storage.loan(id, jsonb) FROM '${LOANS}' DELIMITER E'\t' ENCODING 'UTF8'
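Putting the copy pattern and the status update together, a hedged sketch of the load step in psql might look like this; the ${TENANT} and ${LOANS} placeholders come from the PTF script, and the id list in the UPDATE is illustrative only:

```sql
-- Load prepared loans with explicit UTF-8 encoding (psql \copy meta-command).
\copy ${TENANT}_mod_circulation_storage.loan(id, jsonb) FROM '${LOANS}' DELIMITER E'\t' ENCODING 'UTF8'

-- Mark the corresponding items as checked out (run from pgAdmin if the id list is long).
UPDATE ${TENANT}_mod_inventory_storage.item
SET jsonb = jsonb_set(jsonb, '{status, name}', '"Checked out"')
WHERE id IN (/* list of item ids from the prepared data */);
```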
In Ramsons, token expiration is set to 10 minutes by default, so to run any tests use the new login implementation from the script. Pay attention to the Backend Listener: replace the value of "application" to make the results visible in the Grafana dashboard.
...