Kubernetes Example Deployment

Overview

Before starting, we highly recommend becoming familiar with the FOLIO Eureka Platform Overview document

to understand the main concepts behind the new platform.

Setting Up the Environment

Prerequisites:

  • Kubernetes Cluster (system for automating deployment, scaling, and management of containerized applications)

  • PostgreSQL (RDBMS used by Keycloak, Kong Gateway, Eureka modules)

  • Apache Kafka (distributed event streaming platform)

  • HashiCorp Vault (identity-based secret and encryption management system)

  • Keycloak (Identity and Access Management)

  • Kong Gateway (API Gateway)

  • MinIO (enterprise object store built for production environments; OPTIONAL)

  • Elasticsearch or OpenSearch (enterprise-grade search and observability suite)

 

MinIO is an object storage implementation compatible with the AWS S3 API.

This also works the other way around: instead of MinIO you are free to use the AWS S3 service itself without any problem.

 

To set up the Eureka Platform you should already have a Kubernetes Cluster installed. Then create a new Namespace within the cluster so you can assign and manage the resources for your Eureka deployment at a finer granularity.
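As an illustration, a dedicated namespace can be declared with a manifest like the following (the name folio-eureka is a hypothetical example; pick any name that fits your conventions):

```yaml
# Hypothetical namespace for the Eureka deployment.
apiVersion: v1
kind: Namespace
metadata:
  name: folio-eureka
```

Apply it with `kubectl apply -f namespace.yaml`, or equivalently run `kubectl create namespace folio-eureka`.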

You can run your cluster nodes on premises in a local data center or adopt any cloud provider (e.g. AWS, Azure, GCP) that best meets your planned or unplanned resource demand.

The Eureka Platform depends on a number of 3rd-party services (listed above) for its expected operation. Some of these services (PostgreSQL, Apache Kafka, OpenSearch, HashiCorp Vault) can be deployed as standalone services outside of the cluster namespace, but the others are almost never deployed outside it.

For an initial Eureka deployment you will need about 30 GB of RAM. Such a setup incorporates all of the mentioned 3rd-party services in one Kubernetes namespace.

Extra resources (RAM, CPU, disk space, disk IOPS) may need to be assigned to the destination Kubernetes Cluster if the prerequisite services are deployed into the same cluster namespace.

A Consortia deployment also requires extra resources to be assigned.

If you decide to keep everything in one place, pay particular attention to the disk IOPS required by the PostgreSQL, OpenSearch, and Apache Kafka services.
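On AWS EKS, for instance, one way to guarantee IOPS for those services is a gp3 StorageClass with explicit iops and throughput parameters. The sketch below assumes the AWS EBS CSI driver is installed; the class name and numbers are illustrative, not recommendations:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-high-iops            # hypothetical name
provisioner: ebs.csi.aws.com     # requires the AWS EBS CSI driver
parameters:
  type: gp3
  iops: "6000"                   # illustrative; size to your PostgreSQL/Kafka workload
  throughput: "250"              # MiB/s, illustrative
volumeBindingMode: WaitForFirstConsumer
```

Reference such a class from the `storageClass` field of the Helm values for the stateful services.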

 

The PostgreSQL RDBMS should be installed into the cluster namespace first, since it is a prerequisite for Kong Gateway and the Keycloak Identity Manager.

The Apache Kafka service is used by Eureka for internal communication between modules, so it is very important to keep it in good shape.

HashiCorp Vault stores all secrets used within the platform. AWS SSM Parameter Store is now also supported as secret storage.
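As a sketch, writing and reading a module secret with Vault's KV v2 engine could look like this; the mount point and secret path here are hypothetical, and Eureka's actual secret layout may differ:

```shell
# Enable a KV v2 secrets engine (once), then write and read a secret.
vault secrets enable -path=secret kv-v2
vault kv put secret/folio/db password=secretDBpassword
vault kv get -field=password secret/folio/db
```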

The Keycloak service provides authentication and authorization (granting access) for all kinds of identities (users, roles, endpoints).

Kong Gateway, as the API Gateway, routes requests to modules and provides access to the Eureka REST APIs.

MinIO object storage keeps data that some modules use during platform operation.

The Elasticsearch instance contains a huge amount of information and indexes it for fast search, so it is very important to maintain an appropriate level of performance for this service. It can also be installed outside of the Kubernetes Cluster.

 

Expected Prerequisites deployment order:

  1. Hashicorp Vault

  2. PostgreSQL

  3. Apache Kafka

  4. ElasticSearch

  5. MinIO (Optional)

  6. Kong Gateway

  7. Keycloak Identity Manager
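Following the order above, a minimal installation script could look like this. The release names, namespace, and values files are placeholders, and the chart repositories (Bitnami, HashiCorp, Kong, OpenSearch) are assumptions — substitute whichever charts you actually adopt:

```shell
# Install prerequisites into one namespace, in dependency order.
NS=folio-eureka
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo add kong https://charts.konghq.com
helm repo add opensearch https://opensearch-project.github.io/helm-charts
helm repo update

helm install vault hashicorp/vault -n "$NS" --create-namespace
helm install postgresql bitnami/postgresql -n "$NS" -f postgresql-values.yaml
helm install kafka bitnami/kafka -n "$NS" -f kafka-values.yaml
helm install opensearch opensearch/opensearch -n "$NS"   # or use an external instance
helm install minio bitnami/minio -n "$NS"                # optional
helm install kong kong/kong -n "$NS" -f kong-values.yaml
helm install keycloak bitnami/keycloak -n "$NS" -f keycloak-values.yaml
```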

Cluster setup

Let's assume you are going to set up a Eureka Platform development environment on a Kubernetes Cluster. For easy resource scaling during workload spikes, it is worth using managed cloud services like EKS (AWS), AKS (Azure), or GKE (GCP).

At the same time, to limit cloud vendor lock-in and cut down expenses, we are going to deploy all prerequisite services into one cluster namespace, except for the OpenSearch instance :)

To deploy the prerequisite services, we recommend adopting the following container (Docker) images and Helm charts:

PostgreSQL container image: hub.docker.com/bitnami/postgresql, Helm chart: github.com/bitnami/charts/postgresql

```yaml
architecture: standalone
readReplicas:
  replicaCount: 1
  resources:
    requests:
      memory: 8192Mi
    limits:
      memory: 10240Mi
  podAffinityPreset: soft
  persistence:
    enabled: true
    size: '20Gi'
    storageClass: gp2
  extendedConfiguration: |-
    shared_buffers = '2560MB'
    max_connections = '500'
    listen_addresses = '0.0.0.0'
    effective_cache_size = '7680MB'
    maintenance_work_mem = '640MB'
    checkpoint_completion_target = '0.9'
    wal_buffers = '16MB'
    default_statistics_target = '100'
    random_page_cost = '1.1'
    effective_io_concurrency = '200'
    work_mem = '1310kB'
    min_wal_size = '1GB'
    max_wal_size = '4GB'
image:
  tag: 13.13.0
auth:
  database: folio
  postgresPassword: secretDBpassword
  replicationPassword: secretDBpassword
  replicationUsername: postgres
  usePasswordFiles: false
primary:
  initdb:
    scripts:
      init.sql: |
        CREATE DATABASE kong;
        CREATE USER kong PASSWORD 'secretDBpassword';
        ALTER DATABASE kong OWNER TO kong;
        ALTER DATABASE kong SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO kong;
        GRANT USAGE ON SCHEMA public TO kong;
        CREATE DATABASE keycloak;
        CREATE USER keycloak PASSWORD 'secretDBpassword';
        ALTER DATABASE keycloak OWNER TO keycloak;
        ALTER DATABASE keycloak SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO keycloak;
        GRANT USAGE ON SCHEMA public TO keycloak;
        CREATE DATABASE ldp;
        CREATE USER ldpadmin PASSWORD 'someLdpPassword';
        CREATE USER ldpconfig PASSWORD 'someLdpPassword';
        CREATE USER ldp PASSWORD 'someLdpPassword';
        ALTER DATABASE ldp OWNER TO ldpadmin;
        ALTER DATABASE ldp SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO ldpadmin;
        GRANT USAGE ON SCHEMA public TO ldpconfig;
        GRANT USAGE ON SCHEMA public TO ldp;
  persistence:
    enabled: true
    size: '20Gi'
    storageClass: gp2
  resources:
    requests:
      memory: 8192Mi
    limits:
      memory: 10240Mi
  podSecurityContext:
    fsGroup: 1001
  containerSecurityContext:
    runAsUser: 1001
  podAffinityPreset: soft
  extendedConfiguration: |-
    shared_buffers = '2560MB'
    max_connections = '5000'
    listen_addresses = '0.0.0.0'
    effective_cache_size = '7680MB'
    maintenance_work_mem = '640MB'
    checkpoint_completion_target = '0.9'
    wal_buffers = '16MB'
    default_statistics_target = '100'
    random_page_cost = '1.1'
    effective_io_concurrency = '200'
    work_mem = '1310kB'
    min_wal_size = '1GB'
    max_wal_size = '4GB'
volumePermissions:
  enabled: true
metrics:
  enabled: false
  resources:
    requests:
      memory: 1024Mi
    limits:
      memory: 3072Mi
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 30s
```
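With the values above saved to a file, the chart can be installed along these lines (the file name `postgresql-values.yaml` and the namespace `folio-eureka` are placeholders):

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgresql bitnami/postgresql \
  -n folio-eureka \
  -f postgresql-values.yaml
```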

Apache Kafka container image: hub.docker.com/bitnami/kafka, Helm chart: github.com/bitnami/charts/kafka