
Data Export Test Report - Logging Level (Juniper)


Overview

  1. These tests were run to investigate the performance difference of mod-data-export with logLevel=info vs logLevel=warn for the Juniper release. This testing was part of MDEXP-394, where we observed that mod-data-export continuously writes a large volume of data to its log. For a 100K Data Export job, mod-data-export writes 42 million log records. This could also result in a crash if not enough CPU and memory are allocated to mod-data-export.
  2. In mod-data-export, the log4j2.properties file was modified to set rootLogger.level = warn, logger.netty.level = warn, and status = warn (see the sketch after this list).
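
A minimal sketch of the relevant log4j2.properties entries is shown below. The logger.netty.name value (io.netty) is an assumption; the module's actual file also defines appenders and appender references, which are omitted here:

    # internal log4j2 status logger level
    status = warn
    # root logger level for the module
    rootLogger.level = warn
    # assumed target for the "netty" logger; the actual name property may differ
    logger.netty.name = io.netty
    logger.netty.level = warn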


Backend:

  • mod-data-export-4.1.1 (snapshot version)
  • mod-source-record-storage-5.1.4
  • mod-source-record-manager-3.1.3
  • okapi-4.8.2
  • mod-authtoken-2.8.0

Frontend:

  • folio_data-export-4.1.0

Environment:

  • 8 million inventory records
  • 74 FOLIO back-end modules deployed in 144 ECS services
  • 3 okapi ECS services
  • 12 m5.large EC2 instances
  • 1 writer and 1 reader db.r6g.xlarge AWS RDS instances
  • INFO logging level / WARN logging level
  • mod-data-export: soft memory limit - 360 MB, hard memory limit - 512 MB

High-Level Summary

  1. With WARN level logging, 9% improvement in memory utilization
  2. With WARN level logging, 70% improvement in CPU utilization
  3. With WARN level logging, no bumps in memory were observed. Memory utilization stays stable across multiple Data Export job runs.
  4. With WARN level logging, approximately 42 million fewer log records were written for a 100K export (42.5 million with INFO vs ~120K with WARN).

Test Runs

1 user - INFO level logging vs WARN level logging

Test | Total instances | mod-data-export log level | Duration (total time to export all instances, holdings, and items) | Total records logged in CloudWatch
1 | 100,000 | INFO | 1 hour 6 minutes | 42.5 million
2 | 100,000 | WARN | 57 minutes | ~120K

Total records logged for 100K, INFO vs WARN

INFO - 42.5 Million records

WARN - ~120K records


Service Memory Utilization

For mod-data-export, 9% improvement in memory for WARN level logging

Memory Utilization INFO vs WARN

CPU Utilization

For mod-data-export, 70% improvement in CPU utilization for WARN level logging

INFO level logging

WARN level logging



Check how many jobs can run in parallel

Multiple jobs can run in parallel, but data export fails when trying to export 3 million instance records with the configuration below.

Current memory allocation to mod-data-export service in ECS task definition container:
Soft memory limit - 360 MB
Hard memory limit - 512 MB
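
For reference, a minimal sketch of how these limits map onto the container definition in the ECS task definition is shown below (memoryReservation is the container-level soft limit and memory is the hard limit, both in MiB; all other container and task fields are omitted):

    "containerDefinitions": [
      {
        "name": "mod-data-export",
        "memoryReservation": 360,
        "memory": 512
      }
    ]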


Memory utilization gradually increases from 101% to 141% as the number of instance records increases, until the service eventually crashes.

Number of inventory instance records (millions) | Average memory utilization (%)
1 | 101.66
2 | 102.77
2.5 | 124.4
2.75 | 136.11
3 | 141 (service fails with OOM)


When trying to export 3M records, POST /data-export/file-definitions/d63d8a83-e339-44b2-8a2f-41caaf080221/upload fails with HTTP 503.
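
A hedged sketch of how this upload call is typically invoked against Okapi is shown below; the host name, tenant, token, and file name are placeholders, and the exact headers required may vary by deployment:

    curl -X POST "https://okapi.example.org/data-export/file-definitions/d63d8a83-e339-44b2-8a2f-41caaf080221/upload" \
      -H "Content-Type: application/octet-stream" \
      -H "x-okapi-tenant: <tenant>" \
      -H "x-okapi-token: <token>" \
      --data-binary @instance-uuids.csv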

Appendix

For more raw data from the test run, please see the attached test-report-honeysuckle.xlsx for Honeysuckle.



