Overview
This is the initial report for EBSCONET workflow testing. The purpose of this document is to highlight the KPIs of the EBSCONET workflow, identify possible bottlenecks and issues, and define a baseline.
Summary
During testing we found several issues:
- NullPointerException
- Internal server error
No memory leaks were found.
Modules used:
- mod-organizations
- mod-organizations-storage
- mod-finance
- mod-finance-storage
- nginx-edge
- mod-ebsconet
- nginx-okapi
- mod-orders
- mod-orders-storage
- okapi
- mod-notes
- mod-configuration
- edge-orders
Recommendations & Jiras
Create a ticket to handle the internal server error.
Test Runs
Test # | VUsers | Data set | Load generator size | Load generator Memory (GiB) |
---|---|---|---|---|
1. normal conditions | 2 users | | t3.medium | 3 |
2. normal conditions | 2 users | | t3.medium | 3 |
3. extreme conditions | 10 users | | t3.medium | 3 |
4. extreme conditions | 10 users | | t3.medium | 3 |
Results
Test # | VUsers | Duration | Error rate |
---|---|---|---|
1 | 2 | 7 min 10 s | 2.6% (52 calls) |
2 | 2 | 7 min | 3.3% (67 calls) |
3 | 10 | 4 min 30 s | 1.14% (57 calls) |
4 | 10 | 4 min 10 s | 1.56% (78 calls) |
Memory Utilization
Module | Nolana Avg% |
---|---|
mod-organizations | 25% |
mod-organizations-storage | 24% |
mod-finance | 29% |
mod-finance-storage | 28% |
nginx-edge | 2% |
mod-ebsconet | 37% |
nginx-okapi | 3% |
mod-orders | 44% |
mod-orders-storage | 34% |
okapi | 36% |
mod-notes | 43% |
mod-configuration | 25% |
edge-orders | 20% |
CPU Utilization
The most heavily used module was mod-orders-storage. During the tests with 10 users it reached 320% CPU usage.
RDS CPU Utilization
Appendix
Infrastructure
PTF environment: ncp3-pvt
- 9 m6i.2xlarge EC2 instances located in US East (N. Virginia), us-east-1
- 2 db.r6.xlarge database instances, one reader and one writer
- MSK ptf-kakfa-3
  - 4 m5.2xlarge brokers in 2 zones
  - Apache Kafka version 2.8.0
  - EBS storage volume per broker: 300 GiB
  - auto.create.topics.enable=true
  - log.retention.minutes=480
  - default.replication.factor=3
Modules memory and CPU parameters
Modules | Version | Task Definition | Running Tasks | CPU | Memory | MemoryReservation | MaxMetaspaceSize | Xmx |
---|---|---|---|---|---|---|---|---|
mod-organizations | 1.6.0 | 2 | 2 | 128 | 1024 | 896 | 128 | 700 |
mod-organizations-storage | 4.4.0 | 2 | 2 | 128 | 1024 | 896 | 128 | 700 |
mod-finance | 4.6.2 | 2 | 2 | 128 | 1024 | 896 | 128 | 700 |
mod-finance-storage | 8.3.1 | 2 | 2 | 128 | 1024 | 896 | 128 | 700 |
nginx-edge | nginx-edge:2022.03.02 | 1 | 2 | 128 | 1024 | 896 | N/A | N/A |
mod-ebsconet | 1.4.0 | 2 | 2 | 128 | 1024 | 896 | 256 | 700 |
nginx-okapi | nginx-okapi:2022.03.02 | 1 | 2 | 128 | 1024 | 896 | N/A | N/A |
mod-orders | 12.5.4 | 2 | 2 | 1024 | 2048 | 1440 | 256 | 896 |
mod-orders-storage | 13.4.0 | 2 | 2 | 128 | 1024 | 896 | 128 | 700 |
okapi | 4.14.7 | 1 | 3 | 1024 | 1684 | 1440 | 922 | 922 |
mod-notes | 4.0.0 | 2 | 2 | 128 | 1024 | 896 | 128 | 322 |
mod-configuration | 5.9.0 | 3 | 2 | 128 | 1024 | 896 | 128 | 768 |
edge-orders | 2.7.0 | 2 | 2 | 128 | 1024 | 896 | 128 | 700 |
Methodology/Approach
Based on the EBSCONET order renewal integration, we designed a test containing two calls: [GET] /orders/order-lines/${polineNumber}?type=EBSCONET&apiKey=${API_KEY}
and [PUT] /orders/order-lines/${polineNumber}?type=EBSCONET&apiKey=${API_KEY} with the payload:
{
"currency": "USD",
"fundCode": "NEW2023",
"poLineNumber": "${polineNumber}",
"quantity": 1,
"unitPrice": 1.0,
"vendor": "ZHONEWAX$%",
"vendorAccountNumber": "libraryorders@library.tam",
"vendorReferenceNumbers": [],
"workflowStatus": "Open"
}
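For illustration, below is a minimal Python sketch of these two calls. It is not the actual load script; the host name, API key, and PO line number are placeholder assumptions, and the real test drove these calls with 2 or 10 virtual users.

import requests

# Placeholders (assumptions): edge-orders host, edge API key, and a PO line created by the data generator
BASE_URL = "https://edge-orders.example.org"
API_KEY = "<EDGE_API_KEY>"
PO_LINE_NUMBER = "10000-1"

url = f"{BASE_URL}/orders/order-lines/{PO_LINE_NUMBER}"
params = {"type": "EBSCONET", "apiKey": API_KEY}

# [GET] fetch the order line in EBSCONET renewal format
resp = requests.get(url, params=params)
resp.raise_for_status()

# [PUT] send the renewal payload back (same payload as shown above)
payload = {
    "currency": "USD",
    "fundCode": "NEW2023",
    "poLineNumber": PO_LINE_NUMBER,
    "quantity": 1,
    "unitPrice": 1.0,
    "vendor": "ZHONEWAX$%",
    "vendorAccountNumber": "libraryorders@library.tam",
    "vendorReferenceNumbers": [],
    "workflowStatus": "Open",
}
resp = requests.put(url, params=params, json=payload)
resp.raise_for_status()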
Test data creation
To create the test data (orders with PO lines) we used an SQL script. The script creates orders for a particular organization (the vendor ID is hard-coded in the script below).
CREATE OR REPLACE FUNCTION public.generate_data_for_edifact_export(organizations_amount integer, orders_per_vendor integer, polines_per_order integer) RETURNS VOID as $$
DECLARE
  -- !!! SET DEFAULT TENANT NAME !!!
  orgName text DEFAULT 'perf_test_vendor';
  orgCode TEXT default 'PERF_TEST_ORG';
  vendor_id TEXT;
BEGIN
  for org_counter in 1..organizations_amount loop
    /* INSERT INTO fs09000000_mod_organizations_storage.organizations (id, jsonb)
       VALUES (public.uuid_generate_v4(), jsonb_build_object(
         'code', concat(orgCode, org_counter),
         'erpCode', '12345',
         'isVendor', true,
         'name', concat(orgName, org_counter),
         'status', 'Active',
         'metadata', jsonb_build_object(
           'createdDate', '2023-02-08T00:00:00.000+0000',
           'createdByUserId', '28d1057c-d137-11e8-a8d5-f2801f1b9fd1',
           'updatedDate', '2023-02-08T00:00:00.000+0000',
           'updatedByUserId', '28d1057c-d137-11e8-a8d5-f2801f1b9fd1')))
       RETURNING id INTO vendor_id; */
    PERFORM public.generate_orders(orders_per_vendor, polines_per_order, '2e6d8468-0620-475b-a092-045e659a0aaa');
  end loop;
END $$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION public.generate_orders(orders_per_vendor integer, polines_per_order integer, vendor_id text) RETURNS VOID as $$
DECLARE
  order_id text;
  newPoNumber integer;
BEGIN
  for order_counter in 1..orders_per_vendor loop
    SELECT nextval('fs09000000_mod_orders_storage.po_number') INTO newPoNumber;
    INSERT INTO fs09000000_mod_orders_storage.purchase_order (id, jsonb)
    VALUES (public.uuid_generate_v4(), jsonb_build_object(
      'id', public.uuid_generate_v4(),
      'reEncumber', true,
      'workflowStatus', 'Pending',
      'poNumber', newPoNumber,
      'vendor', vendor_id,
      'orderType', 'One-Time',
      'metadata', jsonb_build_object(
        'createdDate', '2023-02-08T00:00:00.000+0000',
        'createdByUserId', '9eb67301-6f6e-468f-9b1a-6134dc39a684',
        'updatedDate', '2023-02-08T00:00:00.000+0000',
        'updatedByUserId', '9eb67301-6f6e-468f-9b1a-6134dc39a684')))
    RETURNING id INTO order_id;
    PERFORM public.generate_polines(order_id, polines_per_order, newPoNumber);
  end loop;
END $$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION public.generate_polines(order_id text, polines_per_order integer, ponumber integer) RETURNS VOID as $$
DECLARE
  polineNumber text;
BEGIN
  for line_counter in 1..polines_per_order loop
    INSERT INTO fs09000000_mod_orders_storage.po_line (id, jsonb)
    VALUES (public.uuid_generate_v4(),
      -- add other fields to increase processing complexity
      jsonb_build_object(
        'id', public.uuid_generate_v4(),
        'acquisitionMethod', 'df26d81b-9d63-4ff8-bf41-49bf75cfa70e',
        'rush', false,
        'cost', json_build_object('currency', 'USD', 'discountType', 'percentage', 'listUnitPrice', 1, 'quantityPhysical', 1, 'poLineEstimatedPrice', 1),
        'fundDistribution', json_build_array(jsonb_build_object('code', 'NEW2023', 'fundId', '9dde6d9d-a567-43f6-9024-eb00ac1fc076', 'distributionType', 'percentage', 'value', 100)),
        'locations', json_build_array(jsonb_build_object('locationId', 'f4619e23-d081-4447-a589-e278037e7f5e', 'quantity', 2, 'quantityElectronic', 0, 'quantityPhysical', 2)),
        'alerts', json_build_array(),
        'source', 'User',
        'physical', jsonb_build_object('createInventory', 'None'),
        'details', jsonb_build_object(),
        'isPackage', false,
        'orderFormat', 'Physical Resource',
        'vendorDetail', jsonb_build_object('vendorAccount', 'libraryorders@library.tam'),
        'titleOrPackage', 'ABA Journal',
        'automaticExport', true,
        'publicationDate', '1915-1983',
        'purchaseOrderId', order_id,
        'poLineNumber', concat(ponumber, '-', line_counter),
        'claims', json_build_array(),
        'metadata', jsonb_build_object(
          'createdDate', '2023-02-08T00:00:00.000+0000',
          'createdByUserId', '9eb67301-6f6e-468f-9b1a-6134dc39a684',
          'updatedDate', '2023-02-08T00:00:00.000+0000',
          'updatedByUserId', '9eb67301-6f6e-468f-9b1a-6134dc39a684')));
  end loop;
END $$ LANGUAGE plpgsql;

-- CREATE sample data
-- 1 - amount of organizations to be created
-- 2 - amount of orders per organization
-- 3 - amount of polines per order
select public.generate_data_for_edifact_export(1, 1, 1);
-- CLEANUP
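For reference, a minimal sketch of invoking the generator for a larger data set (assumption: direct database access via psycopg2; connection parameters and volumes below are illustrative, not the values used in the test):

import psycopg2

# Placeholder connection parameters for the PTF database
conn = psycopg2.connect(host="ptf-db.example.org", dbname="folio", user="postgres", password="<PASSWORD>")
with conn, conn.cursor() as cur:
    # 1 organization, 100 orders per organization, 1 PO line per order (illustrative volumes)
    cur.execute("SELECT public.generate_data_for_edifact_export(1, 100, 1);")
conn.close()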