Kubernetes Example Deployment

Overview

At the very beginning of this long road, we highly recommend becoming familiar with the Folio Eureka Platform Overview document

to learn the main concepts of the new platform.

Setting Up the Environment

Prerequisites:

  • Kubernetes Cluster (system for automating deployment, scaling, and management of containerized applications)

  • PostgreSQL (RDBMS used by Keycloak, Kong Gateway, Eureka modules)

  • Apache Kafka (distributed event streaming platform)

  • HashiCorp Vault (identity-based secret and encryption management system)

  • Keycloak (Identity and Access Management)

  • Kong Gateway (API Gateway)

  • MinIO (enterprise object store built for production environments; OPTIONAL)

  • Elasticsearch or OpenSearch (enterprise-grade search and observability suite)

 

MinIO is an implementation of object storage compatible with the AWS S3 API.

The reverse also holds: instead of MinIO, you are free to use the AWS S3 service without any problem.

 

To set up the Eureka Platform you should already have a Kubernetes Cluster installed. Then create a new Namespace within the K8s Cluster to assign and manage resources at a finer granularity for your Eureka deployment.
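
For example, a minimal sketch, assuming the namespace is called eureka (the name used in the commands throughout this guide):

kubectl create namespace eureka
# optional: make eureka the default namespace for subsequent kubectl commands
kubectl config set-context --current --namespace=eureka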

You can host your cluster nodes on premises in a local data center or adopt whichever cloud provider (e.g. AWS, Azure, GCP) suits you best to meet planned or unplanned resource demand.

The Eureka Platform depends on a number of 3rd-party services (listed above) for its expected operation. Some of these services (PostgreSQL, Apache Kafka, OpenSearch, HashiCorp Vault) can be deployed as standalone services outside of the cluster namespace, but the others are almost never deployed outside.

For an initial Eureka deployment you will need about 30 GB of RAM. Such a setup incorporates all of the 3rd-party services mentioned above in one Kubernetes namespace.

Extra resources (RAM, CPU, disk space, disk IOPS) may need to be assigned to the destination Kubernetes Cluster if the prerequisite services are deployed into the same cluster namespace.

A Consortia deployment likewise needs extra resources to be assigned.

If you decide to keep everything in one place, pay attention to the disk IOPS required by the PostgreSQL, OpenSearch, and Apache Kafka services.

 

The PostgreSQL RDBMS should be installed into the cluster namespace before Kong Gateway and the Keycloak Identity Manager, since it is a prerequisite for both.

The Apache Kafka service is used by Eureka for internal communication between modules, so it is very important to keep it in good shape.

HashiCorp Vault stores all secrets used within the platform. AWS SSM Parameter Store is now also supported as secret storage.

The Keycloak service provides authentication and authorization (granting access) for all kinds of identities (users, roles, endpoints).

Kong Gateway, acting as the API Gateway, routes requests to modules and provides access to the Eureka REST APIs.

MinIO object storage keeps data used by some modules during platform operation.

The Elasticsearch instance holds a huge amount of information and indexes it for fast search. It is very important to maintain an appropriate level of performance for this service. It can also be installed outside of the Kubernetes Cluster.

 

Expected Prerequisites deployment order:

  1. Hashicorp Vault

  2. PostgreSQL

  3. Apache Kafka

  4. ElasticSearch

  5. MinIO (Optional)

  6. Kong Gateway

  7. Keycloak Identity Manager

Cluster setup

Let's assume you are going to set up a Eureka Platform development environment on a Kubernetes Cluster. To scale resources easily during workload spikes, it is worth using managed cloud services like EKS (AWS), AKS (Azure), or GKE (GCP).

At the same time, to limit cloud vendor lock-in and cut down expenses, we are going to deploy all prerequisite services into the one cluster namespace, except for the OpenSearch instance :)

To deploy the prerequisite services we would recommend adopting the following Container (Docker) Images and Helm Charts:

PostgreSQL container Image: hub.docker.com/bitnami/postgresql, Helm Chart: github.com/bitnami/charts/postgresql

architecture: standalone
readReplicas:
  replicaCount: 1
  resources:
    requests:
      memory: 8192Mi
    limits:
      memory: 10240Mi
  podAffinityPreset: soft
  persistence:
    enabled: true
    size: '20Gi'
    storageClass: gp2
  extendedConfiguration: |-
    shared_buffers = '2560MB'
    max_connections = '500'
    listen_addresses = '0.0.0.0'
    effective_cache_size = '7680MB'
    maintenance_work_mem = '640MB'
    checkpoint_completion_target = '0.9'
    wal_buffers = '16MB'
    default_statistics_target = '100'
    random_page_cost = '1.1'
    effective_io_concurrency = '200'
    work_mem = '1310kB'
    min_wal_size = '1GB'
    max_wal_size = '4GB'
image:
  tag: 13.13.0
auth:
  database: folio
  postgresPassword: secretDBpassword
  replicationPassword: secretDBpassword
  replicationUsername: postgres
  usePasswordFiles: false
primary:
  initdb:
    scripts:
      init.sql: |
        CREATE DATABASE kong;
        CREATE USER kong PASSWORD 'secretDBpassword';
        ALTER DATABASE kong OWNER TO kong;
        ALTER DATABASE kong SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO kong;
        GRANT USAGE ON SCHEMA public TO kong;
        CREATE DATABASE keycloak;
        CREATE USER keycloak PASSWORD 'secretDBpassword';
        ALTER DATABASE keycloak OWNER TO keycloak;
        ALTER DATABASE keycloak SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO keycloak;
        GRANT USAGE ON SCHEMA public TO keycloak;
        CREATE DATABASE ldp;
        CREATE USER ldpadmin PASSWORD 'someLdpPassword';
        CREATE USER ldpconfig PASSWORD 'someLdpPassword';
        CREATE USER ldp PASSWORD 'someLdpPassword';
        ALTER DATABASE ldp OWNER TO ldpadmin;
        ALTER DATABASE ldp SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO ldpadmin;
        GRANT USAGE ON SCHEMA public TO ldpconfig;
        GRANT USAGE ON SCHEMA public TO ldp;
  persistence:
    enabled: true
    size: '20Gi'
    storageClass: gp2
  resources:
    requests:
      memory: 8192Mi
    limits:
      memory: 10240Mi
  podSecurityContext:
    fsGroup: 1001
  containerSecurityContext:
    runAsUser: 1001
  podAffinityPreset: soft
  extendedConfiguration: |-
    shared_buffers = '2560MB'
    max_connections = '5000'
    listen_addresses = '0.0.0.0'
    effective_cache_size = '7680MB'
    maintenance_work_mem = '640MB'
    checkpoint_completion_target = '0.9'
    wal_buffers = '16MB'
    default_statistics_target = '100'
    random_page_cost = '1.1'
    effective_io_concurrency = '200'
    work_mem = '1310kB'
    min_wal_size = '1GB'
    max_wal_size = '4GB'
volumePermissions:
  enabled: true
metrics:
  enabled: false
  resources:
    requests:
      memory: 1024Mi
    limits:
      memory: 3072Mi
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 30s

Apache Kafka container Image: hub.docker.com/bitnami/kafka, Helm Chart: github.com/bitnami/charts/kafka

image:
  tag: 3.5
metrics:
  kafka:
    enabled: true
    resources:
      limits:
        memory: 1280Mi
      requests:
        memory: 256Mi
  jmx:
    enabled: true
    resources:
      limits:
        memory: 2048Mi
      requests:
        memory: 1024Mi
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 30s
persistence:
  enabled: true
  size: 10Gi
  storageClass: gp2
resources:
  requests:
    memory: 2Gi
  limits:
    memory: 8192Mi
zookeeper:
  image:
    tag: 3.7
  enabled: true
  persistence:
    size: 5Gi
  resources:
    requests:
      memory: 512Mi
    limits:
      memory: 768Mi
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
replicaCount: 1
heapOpts: "-XX:MaxRAMPercentage=75.0"
extraEnvVars:
  - name: KAFKA_DELETE_TOPIC_ENABLE
    value: "true"

HashiCorp Vault container Image: hub.docker.com/bitnami/vault, Helm Chart: github.com/bitnami/charts/vault

global:
  enabled: true
server:
  ingress:
    enabled: false
  dev:
    enabled: true
  ha:
    enabled: false
  service:
    type: ClusterIP
    port: 8200
  dataStorage:
    enabled: true
  tls:
    enabled: false
  auto:
    enabled: false
  extraEnvironmentVars:
    VAULT_DEV_ROOT_TOKEN_ID: "root"
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "1Gi"
      cpu: "1024m"
backup:
  enabled: false
logLevel: "debug"
dataStorage:
  enabled: false
auditLog:
  enabled: false
agentInjector:
  enabled: false
metrics:
  enabled: false
unsealConfig:
  enabled: false
ui:
  enabled: true

Keycloak container Image: hub.docker.com/folioci/folio-keycloak, Helm Chart: github.com/bitnami/charts/keycloak, values for values.yaml: github.com/folio-org/pipelines-shared-library/…/keycloak.tf, Git Repository github.com/folio-org/folio-keycloak

image:
  registry: folioci
  repository: folio-keycloak
  tag: latest
  pullPolicy: Always
debug: false
auth:
  adminUser: "admin"
  existingSecret: keycloak-credentials
  passwordSecretKey: KEYCLOAK_ADMIN_PASSWORD
extraEnvVars:
  - name: KC_HOSTNAME_BACKCHANNEL_DYNAMIC
    value: "true"
  - name: KC_HOSTNAME
    value: "https://keycloak.example.org"
  - name: KC_HOSTNAME_BACKCHANNEL
    value: "https://keycloak.example.org"
  - name: KC_HOSTNAME_STRICT
    value: "false"
  - name: KC_HOSTNAME_STRICT_HTTPS
    value: "false"
  - name: KC_PROXY
    value: "edge"
  - name: FIPS
    value: "false"
  - name: EUREKA_RESOLVE_SIDECAR_IP
    value: "false"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KC_FOLIO_BE_ADMIN_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_FOLIO_BE_ADMIN_CLIENT_SECRET
  - name: KC_HTTPS_KEY_STORE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_HTTPS_KEY_STORE_PASSWORD
  - name: KC_LOG_LEVEL
    value: "DEBUG"
  - name: KC_HOSTNAME_DEBUG
    value: "true"
  - name: KC_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_PASSWORD
  - name: KC_DB_URL_DATABASE
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_URL_DATABASE
  - name: KC_DB_URL_HOST
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_URL_HOST
  - name: KC_DB_URL_PORT
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_URL_PORT
  - name: KC_DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_USERNAME
  - name: KC_HTTP_ENABLED
    value: "true"
  - name: KC_HTTP_PORT
    value: "8080"
  - name: KC_HEALTH_ENABLED
    value: "true"
resources:
  requests:
    cpu: 512m
    memory: 2Gi
  limits:
    cpu: 2048m
    memory: 3Gi
postgresql:
  enabled: false
externalDatabase:
  existingSecret: keycloak-credentials
  existingSecretHostKey: KC_DB_URL_HOST
  existingSecretPortKey: KC_DB_URL_PORT
  existingSecretUserKey: KC_DB_USERNAME
  existingSecretDatabaseKey: KC_DB_URL_DATABASE
  existingSecretPasswordKey: KC_DB_PASSWORD
logging:
  output: default
  level: DEBUG
enableDefaultInitContainers: false
containerSecurityContext:
  enabled: false
service:
  type: NodePort
  http:
    enabled: true
  ports:
    http: 8080
networkPolicy:
  enabled: false
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
startupProbe:
  enabled: false
ingress:
  enabled: true
  hostname: keycloak.example.org
  ingressClassName: ""
  pathType: ImplementationSpecific
  path: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: "Example_Project_Name"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/healthcheck-path: /

Kong Gateway container Image: hub.docker.com/folioci/folio-kong, Helm Chart: charts/bitnami/kong, Git Repository github.com/folio-org/folio-kong

image:
  registry: folioci
  repository: folio-kong
  tag: latest
  pullPolicy: Always
useDaemonset: false
replicaCount: 1
containerSecurityContext:
  enabled: true
  seLinuxOptions: {}
  runAsUser: 1001
  runAsGroup: 1001
  runAsNonRoot: true
  privileged: false
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: "RuntimeDefault"
database: postgresql
postgresql:
  enabled: false
  external:
    host: pgsql.example.org
    port: 5432
    user: kong
    password: ""
    database: kong
    existingSecret: "kong-credentials"
    existingSecretPasswordKey: "KONG_PG_PASSWORD"
networkPolicy:
  enabled: false
service:
  type: NodePort
  exposeAdmin: true
  disableHttpPort: false
  ports:
    proxyHttp: 8000
    proxyHttps: 443
    adminHttp: 8001
    adminHttps: 8444
  nodePorts:
    proxyHttp: ""
    proxyHttps: ""
    adminHttp: ""
    adminHttps: ""
ingress:
  ingressClassName: ""
  pathType: ImplementationSpecific
  path: /*
  hostname: kong.example.org
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/group.name: "project_group_name"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: "200-399"
    alb.ingress.kubernetes.io/healthcheck-path: "/version"
kong:
  livenessProbe:
    enabled: false
  readinessProbe:
    enabled: false
  startupProbe:
    enabled: false
  extraEnvVars:
    - name: KONG_PASSWORD
      valueFrom:
        secretKeyRef:
          name: kong-credentials
          key: KONG_PASSWORD
    - name: KONG_UPSTREAM_TIMEOUT
      value: "600000"
    - name: KONG_UPSTREAM_SEND_TIMEOUT
      value: "600000"
    - name: KONG_UPSTREAM_READ_TIMEOUT
      value: "600000"
    - name: KONG_NGINX_PROXY_PROXY_NEXT_UPSTREAM
      value: "error timeout http_500 http_502 http_503 http_504"
    - name: "KONG_PROXY_SEND_TIMEOUT"
      value: "600000"
    - name: "KONG_UPSTREAM_CONNECT_TIMEOUT"
      value: "600000"
    - name: "KONG_PROXY_READ_TIMEOUT"
      value: "600000"
    - name: "KONG_NGINX_HTTP_KEEPALIVE_TIMEOUT"
      value: "600000"
    - name: "KONG_NGINX_UPSTREAM_KEEPALIVE"
      value: "600000"
    - name: "KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT"
      value: "600000"
    - name: "KONG_UPSTREAM_KEEPALIVE_POOL_SIZE"
      value: "1024"
    - name: "KONG_UPSTREAM_KEEPALIVE_MAX_REQUESTS"
      value: "20000"
    - name: "KONG_NGINX_HTTP_KEEPALIVE_REQUESTS"
      value: "20000"
    - name: KONG_PG_DATABASE
      value: "kong"
    - name: KONG_NGINX_PROXY_PROXY_BUFFERS
      value: "64 160k"
    - name: KONG_NGINX_PROXY_CLIENT_HEADER_BUFFER_SIZE
      value: "16k"
    - name: KONG_NGINX_HTTP_CLIENT_HEADER_BUFFER_SIZE
      value: "16k"
    - name: KONG_ADMIN_LISTEN
      value: "0.0.0.0:8001"
    - name: KONG_NGINX_PROXY_PROXY_BUFFER_SIZE
      value: "160k"
    - name: KONG_NGINX_PROXY_LARGE_CLIENT_HEADER_BUFFERS
      value: "4 16k"
    - name: KONG_PLUGINS
      value: "bundled"
    - name: KONG_MEM_CACHE_SIZE
      value: "2048m"
    - name: KONG_NGINX_HTTP_LARGE_CLIENT_HEADER_BUFFERS
      value: "4 16k"
    - name: KONG_LOG_LEVEL
      value: "info"
    - name: KONG_ADMIN_GUI_API_URL
      value: "kong.example.org"
    - name: KONG_NGINX_HTTPS_LARGE_CLIENT_HEADER_BUFFERS
      value: "4 16k"
    - name: KONG_PROXY_LISTEN
      value: "0.0.0.0:8000"
    - name: KONG_NGINX_WORKER_PROCESSES
      value: "auto"
    - name: EUREKA_RESOLVE_SIDECAR_IP
      value: "false"
  resources:
    requests:
      cpu: 512m
      ephemeral-storage: 50Mi
      memory: 2Gi
    limits:
      cpu: 2048m
      ephemeral-storage: 1Gi
      memory: 3Gi
ingressController:
  enabled: false
migration:
  command: ["/bin/sh", "-c"]
  args: ["echo 'Hello kong!'"]

MinIO container Image: hub.docker.com/bitnami/minio, Helm Chart: github.com/bitnami/charts/minio

defaultBuckets: "mod-data-export,mod-data-export-worker,mod-data-import,mod-lists,mod-bulk-operations,mod-oai-pmh,mod-marc-migrations,local-files"
auth:
  rootUser: root_user_name
  rootPassword: root_password
resources:
  limits:
    memory: 1536Mi
persistence:
  size: 10Gi
extraEnvVars:
  - name: MINIO_SERVER_URL
    value: https://minio.example.org
  - name: MINIO_BROWSER_REDIRECT_URL
    value: https://minio-console.example.org
service:
  type: NodePort
ingress:
  enabled: true
  hostname: minio-console.example.org
  path: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: project_group_name
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=4000
apiIngress:
  enabled: true
  hostname: minio.example.org
  path: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: project_group_name
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: '200-399'
    alb.ingress.kubernetes.io/healthcheck-path: /minio/health/live
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=4000
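
Putting it together, here is a minimal sketch of installing the prerequisites in the deployment order listed earlier, assuming the Bitnami chart repository and the example values above saved under ./values/ (release names and file paths are illustrative, not requirements; the folioci image overrides for Kong and Keycloak come from the values files):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# order: Vault, PostgreSQL, Kafka, MinIO, Kong, Keycloak
# (OpenSearch is deployed outside the cluster namespace in this example)
helm install vault      bitnami/vault      --namespace=eureka -f ./values/vault.yaml
helm install postgresql bitnami/postgresql --namespace=eureka -f ./values/postgresql.yaml
helm install kafka      bitnami/kafka      --namespace=eureka -f ./values/kafka.yaml
helm install minio      bitnami/minio      --namespace=eureka -f ./values/minio.yaml
helm install kong       bitnami/kong       --namespace=eureka -f ./values/kong.yaml
helm install keycloak   bitnami/keycloak   --namespace=eureka -f ./values/keycloak.yaml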

 

We also need to have a Module Descriptors Registry in place.

The Module Descriptors Registry (MDR) service is an HTTP server configured in a Kubernetes Pod.

NOTE: A formal MDR is not a strict requirement. Any HTTP server will suffice, as long as the module descriptors are accessible from the mgr-applications service. For example, you could choose to host module descriptors in an S3 bucket configured as a static website using Amazon S3, point directly to files in a GitHub repository, set up an Apache HTTP server, or even develop something custom.

This HTTP server holds and distributes module descriptors for Eureka instance installs and upgrades.

A module descriptor (see the Module Descriptor Template) is generated during the Continuous Integration flow and is pushed to the Module Descriptors Registry on completion.

These module descriptors are used by the Eureka install and update flows.
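
Before moving on, it is worth verifying that the descriptors are reachable from inside the cluster. A minimal sketch, assuming a hypothetical in-cluster MDR service URL and descriptor file name:

# run a throwaway curl pod inside the namespace and fetch one descriptor
kubectl run mdr-check --rm -it --restart=Never --image=curlimages/curl --namespace=eureka -- \
  curl -s http://mdr.eureka.svc.cluster.local/mod-users-19.2.0.json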

 

Deploying EUREKA on Kubernetes

Once all prerequisites are met, we can proceed with deploying the mgr-* Eureka modules to the cluster namespace:

  • mgr-applications module:

    • Github Repository folio-org/mgr-applications

    • Container Image folioci/mgr-applications

    • Helm Chart charts/mgr-applications

    • Helm Chart variable values (./values/mgr-applications.yaml file below):

      mgr-applications:
        extraEnvVars:
          - name: MODULE_URL
            value: "http://mgr-applications"
          - name: FOLIO_CLIENT_CONNECT_TIMEOUT
            value: "600s"
          - name: FOLIO_CLIENT_READ_TIMEOUT
            value: "600s"
          - name: KONG_CONNECT_TIMEOUT
            value: "941241418"
          - name: KONG_READ_TIMEOUT
            value: "941241418"
          - name: KONG_WRITE_TIMEOUT
            value: "941241418"
        extraJavaOpts:
          - "-Dlogging.level.root=DEBUG -Dsecure_store=AwsSsm -Dsecure_store_props=/usr/ms/aws_ss.properties"
        integrations:
          db:
            enabled: true
            existingSecret: db-credentials
          kafka:
            enabled: true
            existingSecret: kafka-credentials
        replicaCount: 1
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi
  • mgr-tenant-entitlements module:

    • Github Repository folio-org/mgr-tenant-entitlements

    • Container Image folioci/mgr-tenant-entitlements

    • Helm Chart charts/mgr-tenant-entitlements

    • Helm Chart variable values (./values/mgr-tenant-entitlements.yaml file below):

      mgr-tenant-entitlements:
        extraJavaOpts:
          - "-Dlogging.level.root=DEBUG -Dsecure_store=AwsSsm -Dsecure_store_props=/usr/ms/aws_ss.properties"
        extraEnvVars:
          - name: MODULE_URL
            value: "http://mgr-tenant-entitlements"
          - name: FOLIO_CLIENT_CONNECT_TIMEOUT
            value: "600s"
          - name: FOLIO_CLIENT_READ_TIMEOUT
            value: "600s"
          - name: KONG_CONNECT_TIMEOUT
            value: "941241418"
          - name: KONG_READ_TIMEOUT
            value: "941241418"
          - name: KONG_WRITE_TIMEOUT
            value: "941241418"
        integrations:
          db:
            enabled: true
            existingSecret: db-credentials
          kafka:
            enabled: true
            existingSecret: kafka-credentials
        replicaCount: 1
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi
  • mgr-tenants module:

    • Github Repository folio-org/mgr-tenants

    • Container Image folioci/mgr-tenants

    • Helm Chart charts/mgr-tenants

    • Helm Chart variable values (./values/mgr-tenants.yaml file below):

      mgr-tenants:
        extraEnvVars:
          - name: MODULE_URL
            value: "http://mgr-tenants"
          - name: FOLIO_CLIENT_CONNECT_TIMEOUT
            value: "600s"
          - name: FOLIO_CLIENT_READ_TIMEOUT
            value: "600s"
          - name: KONG_CONNECT_TIMEOUT
            value: "941241418"
          - name: KONG_READ_TIMEOUT
            value: "941241418"
          - name: KONG_WRITE_TIMEOUT
            value: "941241418"
        extraJavaOpts:
          - "-Dlogging.level.root=DEBUG -Dsecure_store=AwsSsm -Dsecure_store_props=/usr/ms/aws_ss.properties"
        integrations:
          db:
            enabled: true
            existingSecret: db-credentials
        replicaCount: 1
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi

       

JFYI:
We have the FOLIO Helm Charts v2 repository on GitHub containing ready-to-use Helm Charts for every Eureka platform module.

Please become familiar with its README.md to get more valuable information on how this repository is organized, descriptions of the common values, and some clues about how it all works together.

Deploy mgr-* applications to Kubernetes Cluster:

helm repo add folio-helm-v2 https://repository.folio.org/repository/folio-helm-v2/
helm repo update folio-helm-v2
helm install mgr-applications --namespace=eureka -f ./values/mgr-applications.yaml folio-helm-v2/mgr-applications
helm install mgr-tenant-entitlements --namespace=eureka -f ./values/mgr-tenant-entitlements.yaml folio-helm-v2/mgr-tenant-entitlements
helm install mgr-tenants --namespace=eureka -f ./values/mgr-tenants.yaml folio-helm-v2/mgr-tenants
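
Before proceeding, verify that the mgr-* pods start up and become ready; the label selector below assumes the charts apply the common app.kubernetes.io/name label, so check your actual pod labels:

kubectl get pods --namespace=eureka
kubectl wait --for=condition=ready pod \
  --selector=app.kubernetes.io/name=mgr-applications \
  --namespace=eureka --timeout=300s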

 

Eureka deployment flow:

  1. Register Applications

  2. Register Modules

  3. Deploy backend modules to cluster namespace

  4. Create Tenant

  5. Set Entitlement

  6. Add User

  7. Set User Password

  8. Create Role

  9. Assign Capabilities to Role

  10. Add Roles to User

Get Master Auth token from Keycloak.

To run administrative REST API requests against the Eureka instance, we first need to get a Master Access Token from Keycloak.

We need to know the request parameters first (consider adapting the following example):

  • Keycloak FQDN: keycloak.example.org

  • Token Service Endpoint: /realms/master/protocol/openid-connect/token

  • Client ID: folio-backend-admin-client (this is the expected value and should not be changed)

  • Client Secret: SecretPhrase

  • Grant Type: client_credentials (Constant)

curl --location 'https://keycloak.example.org/realms/master/protocol/openid-connect/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data 'client_id=folio-backend-admin-client&grant_type=client_credentials&client_secret=SecretPhrase'

We need to save the returned Master Access Token for use in every administrative REST API call that follows.
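
As a convenience, the token can be captured into a shell variable with jq (access_token is the standard OAuth 2.0 response field); note the token expires after the period reported in expires_in:

TOKEN=$(curl -s --location 'https://keycloak.example.org/realms/master/protocol/openid-connect/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data 'client_id=folio-backend-admin-client&grant_type=client_credentials&client_secret=SecretPhrase' \
  | jq -r '.access_token')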

 

Register Applications Descriptors:

REST API Docs for "POST /applications" endpoint.

The general idea of the Eureka Platform is to have a range of related applications.

Their application descriptors can be found by searching for the app-* pattern inside the folio-org GitHub organization

(e.g. folio-org/app-platform-full: an application comprising all FOLIO modules).

We need to register the application descriptor in the Eureka instance. The application descriptor is created from the github.com/folio-org/app-platform-full/sprint-quesnelia/app-platform-full.template.json file taken from the release branch.

Docs for registerApplication Rest API call - register a new application.

The descriptor is registered with a curl command and related parameters:

  • Kong Gateway FQDN (http header): kong.example.org

  • Auth token (http header): 'Authorization: Bearer...'

  • Application Descriptor (http request body): JSON data file

curl --location 'https://kong.example.org/applications' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --data '{
    "description": "Application comprised of all Folio modules",
    "modules": [
      { "id": "mod-ebsconet-2.3.0", "name": "mod-ebsconet", "version": "2.3.0" },
      { "id": "edge-sip2-3.3.0", "name": "edge-sip2", "version": "3.3.0" },
      .... long list of other modules ....
    ],
    "uiModules": [
      { "id": "folio_authorization-policies-1.3.109000000131", "name": "folio_authorization-policies", "version": "1.3.109000000131" },
      { "id": "folio_authorization-roles-1.6.109000000580", "name": "folio_authorization-roles", "version": "1.6.109000000580" },
      .... long list of other modules ....
    ],
    "platform": "base",
    "dependencies": [],
    "id": "app-platform-full-1.0.0",
    "name": "app-platform-full",
    "version": "1.0.0"
  }'
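
To confirm the registration you can query the same endpoint, assuming mgr-applications also exposes a matching GET /applications search endpoint (see its REST API docs):

curl --location 'https://kong.example.org/applications?query=name==app-platform-full&limit=10' \
  --header "Authorization: Bearer $TOKEN"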

 

Register Modules

REST API Docs for “GET /modules/discovery“ endpoint.

Once the required application descriptors are registered in the instance, we proceed with the Module Discovery flow to register the modules in the system.

Docs for the searchModuleDiscovery REST API call: retrieve module discovery information by CQL query and pagination parameters.

Module discovery is started with a curl command and related parameters:

  • Kong Gateway FQDN (HTTP header): kong.example.org

  • Auth token (http header): 'Authorization: Bearer...'

  • Module Discovery Info (http request body): JSON data file

curl --location 'https://kong.example.org/modules/discovery' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --data '{
    "discovery": [
      { "id": "mod-users-keycloak-1.5.3", "name": "mod-users-keycloak", "version": "1.5.3", "location": "https://mod-users-keycloak:8082" },
      { "id": "mod-login-keycloak-1.5.0", "name": "mod-login-keycloak", "version": "1.5.0", "location": "https://mod-login-keycloak:8082" },
      { "id": "mod-scheduler-1.3.0", "name": "mod-scheduler", "version": "1.3.0", "location": "https://mod-scheduler:8082" },
      { "id": "mod-configuration-5.11.0", "name": "mod-configuration", "version": "5.11.0", "location": "https://mod-configuration:8082" }
      .... long list of other modules ....
    ]
  }'
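
The registered discovery information can then be read back through the GET /modules/discovery endpoint mentioned above, for example:

curl --location 'https://kong.example.org/modules/discovery?limit=500' \
  --header "Authorization: Bearer $TOKEN"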

 

Deploy Backend Modules

Now we are ready to deploy the backend modules to the Kubernetes namespace holding the Eureka instance.

Helm Charts for the modules are taken from the GitHub repository folio-org/folio-helm-v2.

Variable values for the Helm Charts are stored in the dedicated repository folder folio-org/pipelines-shared-library/resources/helm.

For example:

helm repo add folio-helm-v2 https://repository.folio.org/repository/folio-helm-v2/
helm repo update folio-helm-v2
helm install mod-inventory-storage --namespace=eureka -f ./values/mod-inventory-storage.yaml folio-helm-v2/mod-inventory-storage
helm install mod-reading-room --namespace=eureka -f ./values/mod-reading-room.yaml folio-helm-v2/mod-reading-room
helm install mod-agreements --namespace=eureka -f ./values/mod-agreements.yaml folio-helm-v2/mod-agreements
... long list of other modules ....

Just for your information:

Each Eureka module is deployed as a Kubernetes Pod to provide its service.

Every module Pod contains two containers:

one container for the service itself and another for a lightweight sidecar.

The sidecar container is responsible for proxying HTTP requests for authentication, authorization, and routing.

Please see Folio Eureka Platform Overview#Sidecars link for more info.
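
To see both containers of a module Pod and read the sidecar logs, something like the following can be used (the label selector and the container name sidecar are assumptions; check your actual Pod spec):

# list container names in the first mod-users pod
kubectl get pod --namespace=eureka --selector=app.kubernetes.io/name=mod-users \
  -o jsonpath='{.items[0].spec.containers[*].name}'
# tail the sidecar logs
kubectl logs deployment/mod-users --container=sidecar --namespace=eureka --tail=100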

Create tenant

REST API Docs for “POST /tenants“ endpoint.

At this point we are ready to create an application tenant in the Eureka instance.

First we need to take a look at the docs for the createTenant REST API call to create a new tenant.

Once we are sure about the required parameters, we issue a POST HTTP request to create the tenant.

In our example we create a tenant named “diku” with the description “Knowledge magic happens here”:

curl --location 'https://kong.example.org/tenants' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --data '{ "name": "diku", "description": "Knowledge magic happens here" }'
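
The tenant UUID is needed in the entitlement step below; assuming the response body echoes the created tenant with its generated id, it can be captured like this:

TENANT_ID=$(curl -s --location 'https://kong.example.org/tenants' \
  --header "Authorization: Bearer $TOKEN" \
  --header 'Content-Type: application/json' \
  --data '{ "name": "diku", "description": "Knowledge magic happens here" }' \
  | jq -r '.id')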

 

Set entitlement

REST API Docs for “POST /entitlements“ endpoint.

With the application tenant created, we can entitle registered applications to our tenant.

In other words, we enable the application(s) for the tenant.

As usual, we take a look at the docs for the create REST API call to install/enable an application for a tenant.

From these docs we can get information about the parameters passed and the value returned.

The following example shows how to enable an application for our tenant:

curl --location 'https://kong.example.org/entitlements' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --data '{ "tenantId": "[Tenant-UUID]", "applications": [ "app-platform-complete-1.0.0" ] }'

 

Add User

REST API Docs for “POST /users-keycloak/users“ endpoint.

At this stage we are ready to add the first user to the Eureka instance; it will receive administrative privileges later.

Check the parameters in the docs for the createUser REST API call to create a new user.

Then use a curl command to run a POST HTTP request against the Eureka instance:

curl --location 'https://kong.example.org/users-keycloak/users' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data-raw '{
    "username": "admin",
    "active": true,
    "patronGroup": "3684a786-6671-4268-8ed0-9db82ebca60b",
    "type": "staff",
    "personal": {
      "firstName": "John",
      "lastName": "Doe",
      "email": "noreply@ci.folio.org",
      "preferredContactTypeId": "002"
    }
  }'
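
The user UUID is required in the next step; assuming the response echoes the created user with its generated id, capture it with jq:

USER_ID=$(curl -s --location 'https://kong.example.org/users-keycloak/users' \
  --header "Authorization: Bearer $TOKEN" \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data-raw '{ "username": "admin", "active": true, "patronGroup": "3684a786-6671-4268-8ed0-9db82ebca60b", "type": "staff", "personal": { "firstName": "John", "lastName": "Doe", "email": "noreply@ci.folio.org", "preferredContactTypeId": "002" } }' \
  | jq -r '.id')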

Set User Password

REST API Docs for “POST /authn/credentials“ endpoint.

Having created our user, we can now assign them a secret password to use at login.

Carefully look through the docs for the createCredentials REST API call to add a new login to the system.

curl --location 'https://kong.example.org/authn/credentials' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{ "username": "admin", "userId": "[Admin-User-UUID]", "password": "SecretPhrase" }'

 

Create Role

REST API Docs for “POST /roles“ endpoint.

We need to create a Role to bundle Eureka administrative capabilities with our Admin User.

According to the docs for the createRole REST API call to create a new role, we need to run the following POST HTTP request:

curl --location 'https://kong.example.org/roles' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{ "name": "adminRole", "description": "Admin role" }'

 

Assign Capabilities to Role

REST API Docs for “POST /roles/capabilities“ endpoint.

Then we attach the required Eureka application capabilities to our Admin Role.

Using the docs for the createRoleCapabilities REST API call, we create a new record associating one or more capabilities with the already created role:

curl --location 'https://kong.example.org/roles/capabilities' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{
    "roleId": "[Role-UUID]",
    "capabilityIds": [
      "[Eureka-Capability-01-UUID]",
      "[Eureka-Capability-02-UUID]",
      "[Eureka-Capability-03-UUID]"
    ]
  }'

To get a list of existing capabilities, we are going to use the findCapabilities REST API call:

curl -X GET --location 'https://kong.example.org/roles/capabilities?query=<field_name>=="<value>"&limit=300' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku'
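
To turn that listing into the plain UUIDs needed for the previous request, the output can be piped through jq; the capabilities wrapper field name below is an assumption, so check the actual response shape first:

curl -s -X GET --location 'https://kong.example.org/roles/capabilities?query=<field_name>=="<value>"&limit=300' \
  --header "Authorization: Bearer $TOKEN" \
  --header 'x-okapi-tenant: diku' \
  | jq -r '.capabilities[].id'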

 

Add Roles to User

REST API Docs for “POST /roles/users“ endpoint.

The last step in the row is assigning the Admin Role to the Admin User, granting them superpowers to rule the Eureka world.

According to the docs for the assignRolesToUser REST API call to create a record associating a role with a user, we run a curl command like the following:

curl --location 'https://kong.example.org/roles/users' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{ "userId": "[User-UUID]", "roleIds": [ "[Role-UUID]" ] }'

 

Deploy Edge modules

  • Render Ephemeral Properties

At this step we populate the Ephemeral Properties template file for every edge-* module found in the github.com/folio-org/platform-complete/snapshot/install.json file.

As an example of the rendering, the properties file binds the module to a tenant and to its admin credentials with the respective capabilities.

  • Create config map for every edge-* module

The completed Ephemeral Properties files have to be stored in the cluster namespace as ConfigMaps:

kubectl create configmap edge-inn-reach-ephemeral-properties --namespace=eureka --from-file=./edge-inn-reach-ephemeral-properties --save-config
kubectl create configmap edge-courses-ephemeral-properties --namespace=eureka --from-file=./edge-courses-ephemeral-properties --save-config
kubectl create configmap edge-oai-pmh-ephemeral-properties --namespace=eureka --from-file=./edge-oai-pmh-ephemeral-properties --save-config
...long list of other edge modules...
  • Deploy edge-* modules to cluster namespace

At this point we deploy the set of edge-* modules (see the install.json file) to the cluster namespace:

helm repo add folio-helm-v2 https://repository.folio.org/repository/folio-helm-v2/
helm repo update folio-helm-v2
helm install edge-inn-reach --namespace=eureka -f ./values/edge-inn-reach.yaml folio-helm-v2/edge-inn-reach
helm install edge-courses --namespace=eureka -f ./values/edge-courses.yaml folio-helm-v2/edge-courses
helm install edge-oai-pmh --namespace=eureka -f ./values/edge-oai-pmh.yaml folio-helm-v2/edge-oai-pmh
...long list of other edge modules...

Perform Consortia Deployment (if required)

REST API Docs for “POST /consortia“ endpoint.

If you decide to deploy Consortia as a separate application via its own application descriptor, app-consortia.template.json,

you should use the related folio-org/app-consortia repository on GitHub to achieve that.

  • Set up a Consortia Deployment with the given tenants

    • Create the consortia deployment instance according to the docs for the consortia REST API call to save the consortium configuration.

      curl --location 'https://kong.example.org/consortia' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: consortium' \
        --data '{ "name": "consortium", "id": "[Consortium-UUID]" }'
    • Add Consortia Central Tenant

      curl --location 'https://kong.example.org/consortia/[Consortium-UUID]/tenants' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: consortium' \
        --data '{ "id": "consortium", "name": "Central office", "code": "MCO", "isCentral": true }'
    • Add Consortia Institutional Tenant

      curl --location 'https://kong.example.org/consortia/[Consortium-UUID]/tenants?adminUserId=[Admin-User-UUID]' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: consortium' \
        --data '{ "id": "college", "name": "college", "code": "COL", "isCentral": false }'

Perform indexing on Eureka resources

There is a comprehensive documentation piece on Search Indexing that we highly recommend walking through to learn this magic more closely.

  • Re-create search index for authority resource

    • Have a look at the docs for the Resource reindex REST API call to initiate a reindex of the authority records (the /search/index/inventory/reindex endpoint):

      curl --location 'https://kong.example.org/search/index/inventory/reindex' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: diku' \
        --data '{ "recreateIndex": true, "resourceName": "authority" }'
    • Monitoring reindex process

      • It is possible to monitor the indexing process with the getReindexJob REST API call. To check how many records have been published to the Kafka topic, we may use the following command:

        curl --location 'https://kong.example.org/authority-storage/reindex/[reindex_job_id]' \
          --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
          --header 'x-okapi-tenant: diku'

Here reindex_job_id is the ID returned by the /search/index/inventory/reindex endpoint in the previous step.

  • Indexing of instance resources

    • First, check the related docs for the Full Inventory Records reindex REST API call to initiate the full reindex of the inventory instance records (the /search/index/instance-records/reindex/full endpoint):

      curl --location 'https://kong.example.org/search/index/instance-records/reindex/full' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: diku' \
        --data '{}'

Configure Edge modules

  • Create Eureka Users for Eureka UI

    • UI modules expect respective users to be created in the Eureka instance. Enough system capabilities have to be assigned to the UI users to provide the desired level of access.

    • To get a sense of how UI modules are mapped to Eureka accounts with the required capabilities, please take a look at the folio-org/pipelines-shared-library/resources/edge/config_eureka.yaml file.

    • So we need to create extra Eureka accounts to be used by the UI modules. For example:

      • Create User Account:

        curl --location 'https://kong.example.org/users-keycloak/users' \
          --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
          --header 'Content-Type: application/json' \
          --header 'x-okapi-tenant: diku' \
          --data-raw '{
            "username": "admin",
            "active": true,
            "patronGroup": "3684a786-6671-4268-8ed0-9db82ebca60b",
            "type": "staff",
            "personal": {
              "firstName": "John",
              "lastName": "Doe",
              "email": "noreply@ci.folio.org",
              "preferredContactTypeId": "002"
            }
          }'
      • Set Password for User:

        curl --location 'https://kong.example.org/authn/credentials' \
          --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
          --header 'Content-Type: application/json' \
          --header 'x-okapi-tenant: diku' \
          --data '{ "username": "admin", "userId": "[Admin-User-UUID]", "password": "SecretPhrase" }'
      • Assign Capabilities to User Account:

        curl --location 'https://kong.example.org/users/capabilities' \
          --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
          --header 'Content-Type: application/json' \
          --header 'x-okapi-tenant: diku' \
          --data '{
            "userId": "[User-UUID]",
            "capabilityIds": [
              "[Eureka-Capability-01-UUID]",
              "[Eureka-Capability-02-UUID]",
              "[Eureka-Capability-03-UUID]",
              "[Eureka-Capability-04-UUID]",
              "[Eureka-Capability-05-UUID]"
            ]
          }'
      • Assign Capability Sets to User Account

        curl --location 'https://kong.example.org/users/capability-sets?limit=5000' \
          --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
          --header 'Content-Type: application/json' \
          --header 'x-okapi-tenant: diku' \
          --data '{
            "userId": "[User-UUID]",
            "capabilityIds": [
              "[Eureka-Capability-Set-01-UUID]",
              "[Eureka-Capability-Set-02-UUID]",
              "[Eureka-Capability-Set-03-UUID]"
            ]
          }'

Build FOLIO Eureka UI images

  • Please obtain the source code for the Eureka UI from the GitHub repository folio-org/platform-complete on the snapshot branch.

    $ git clone -b snapshot https://github.com/folio-org/platform-complete.git
  • Kindly modify the configuration template file located at platform-complete/eureka-tpl/stripes.config.js in accordance with the existing values.

  • Add the following modules to the dependencies section of platform-complete/package.json:

    "@folio/authorization-policies": ">=1.0.0", "@folio/authorization-roles": ">=1.0.0", "@folio/plugin-select-application": ">=1.0.0"
  • To build the Eureka UI (Stripes Platform), execute the following commands using Yarn:

    $ yarn config set @folio:registry https://repository.folio.org/repository/npm-folioci/
    $ yarn install
  • For building the Eureka UI (Stripes Platform) within a Container Image, it suffices to utilize the following Dockerfile located at github.com/folio-org/platform-complete/docker/Dockerfile

    FROM node:18-alpine as stripes_build
    ARG OKAPI_URL=http://localhost:9130
    ARG TENANT_ID=diku
    ARG CXXFLAGS="-std=c++17"
    RUN mkdir -p /etc/folio/stripes
    WORKDIR /etc/folio/stripes
    COPY . /etc/folio/stripes/
    RUN apk upgrade \
     && apk add \
        alpine-sdk \
        python3 \
     && rm -rf /var/cache/apk/*
    RUN yarn config set python /usr/bin/python3
    RUN yarn config set @folio:registry https://repository.folio.org/repository/npm-folioci/
    RUN yarn install
    RUN yarn build-module-descriptors
    RUN yarn build output --okapi $OKAPI_URL --tenant $TENANT_ID

    # nginx stage
    FROM nginx:stable-alpine

    # Install latest patch versions of packages: https://pythonspeed.com/articles/security-updates-in-docker/
    RUN apk upgrade --no-cache

    EXPOSE 80
    COPY --from=stripes_build /etc/folio/stripes/output /usr/share/nginx/html
    COPY --from=stripes_build /etc/folio/stripes/yarn.lock /usr/share/nginx/html/yarn.lock
    COPY docker/nginx.conf /etc/nginx/conf.d/default.conf
    COPY docker/entrypoint.sh /usr/bin/entrypoint.sh
    ENTRYPOINT ["/usr/bin/entrypoint.sh"]

    After creating a Docker Image with the Eureka UI, it should be stored in a Container Image Repository to be accessible for deployment to the Kubernetes Cluster.

A unique image should be created for each standalone tenant and the consortia master tenant, given the different Kong and Keycloak URL values in https://github.com/folio-org/platform-complete/blob/snapshot/eureka-tpl/stripes.config.js. Additionally, it’s advisable to configure distinct Kong URLs for each tenant and the consortia master tenant in the DNS.
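
A minimal sketch of building and publishing one such per-tenant image with the Dockerfile above (the registry host and tag are examples only):

docker build \
  --build-arg OKAPI_URL=https://kong.example.org \
  --build-arg TENANT_ID=diku \
  --file docker/Dockerfile \
  --tag registry.example.org/eureka-ui:diku-1.0.0 .
docker push registry.example.org/eureka-ui:diku-1.0.0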

Deploy Eureka UI

  • Deploy 'ui-bundle' module

  • Configure Eureka UI parameters

    • Get Tenant Realm Name from Keycloak

      curl -X GET --location 'https://keycloak.example.org/admin/realms/[YOUR_TENANT_NAME]/clients?clientId=[YOUR_TENANT_NAME]-application' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json'
    • Put extra Tenant Configuration for Eureka UI (Stripes Platform)

      curl -X PUT --location 'https://keycloak.example.org/admin/realms/[YOUR_TENANT_NAME]/clients/[YOUR_TENANT_REALM_NAME]' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --data '{
          "rootUrl": "[YOUR_TENANT_URL]",
          "baseUrl": "[YOUR_TENANT_URL]",
          "adminUrl": "[YOUR_TENANT_URL]",
          "redirectUris": ["[YOUR_TENANT_URL]/*", "http://localhost:3000/*"],
          "webOrigins": ["/*"],
          "authorizationServicesEnabled": true,
          "serviceAccountsEnabled": true,
          "attributes": {
            "post.logout.redirect.uris": "/*##[YOUR_TENANT_URL]/*",
            "login_theme": "custom-theme"
          }
        }'


Kong fine-tuning

You can customize Kong's default behavior using environment variables. When the application starts, it uses environment variables to configure its embedded Nginx web server and Kong itself. To set Nginx parameters, use environment variables with the prefix KONG_NGINX_. For Kong-specific configuration, define variables with the prefix KONG_.
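
For a deployment that is already running, one way to apply such settings is kubectl set env, which patches the Deployment and triggers a rolling restart (the deployment name kong is an assumption; use your actual release name):

kubectl set env deployment/kong --namespace=eureka \
  KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT=600000 \
  KONG_NGINX_HTTP_KEEPALIVE_TIMEOUT=600000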

Post-Deployment Tasks

Monitoring and logging

Scaling and updates

Troubleshooting and Common Issues

  • InternalServerErrorException error 500. Connection refused - lack of resources.

Preamble: Working with complex operations such as application tenant entitlement may pose challenges due to the need for all modules to be available, direct requests between platform modules, the loosely coupled nature of K8S, and the resulting temporary unavailability of some modules.
Issue: Various errors like the following may occur during these complex operations due to incomplete execution within the required timeframe or unavailability of modules:
Enabling application for tenant failed: [errors:[[message:Flow 'd62cbd2c-9261-47df-bffd-e6a13871c59f' finished with status: FAILED, type:FlowExecutionException, code:service_error, parameters:[[key:mod-<some_module_name>-folioModuleInstaller, value:FAILED: [IntegrationException] Failed to perform doPostTenant call, parameters: [{key: cause, value: 500: {"errors":[{"type":"InternalServerErrorException","code":"service_error","message":"Failed to proxy request","parameters":[{"key":"cause","value":"Connection refused: localhost/127.0.0.1:8081"}]}],"total_records":1}}]]
Cause: The issue could be caused by resource throttling or module unavailability. If the allocated CPU or RAM limit is reached, the time needed to perform these operations significantly increases and exceeds the expected time limit. In other cases, in a self-rebalancing K8S cluster, pods for some modules may be evicted and moved to other nodes, leading to the inaccessibility of core modules or modules that play significant roles in these complex operations processes. If a core module like Kong, Keycloak, mgr-* modules, or very important modules like mod-roles-keycloak and mod-users-keycloak are affected, it could lead to breaking the invocation chain.
Resolution: To address this issue, you can use one of the following approaches, or combine them:
- Size the nodes to fit the total CPU and RAM requests of the modules.
- Set resource requests and limits for each module so that pods are not rescheduled by the cluster during heavyweight operations.
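
A quick way to compare node capacity against what is actually allocated and used (kubectl top requires the metrics-server add-on to be installed):

kubectl top nodes
kubectl describe nodes | grep -A 7 'Allocated resources:'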

  • InternalServerErrorException error 500. Connection refused - deployment timing.

Issue: During the entitlement process the following error message could appear
Enabling application for tenant failed: [errors:[[message:Flow 'd62cbd2c-9261-47df-bffd-e6a13871c59f' finished with status: FAILED, type:FlowExecutionException, code:service_error, parameters:[[key:mod-<some_module_name>-folioModuleInstaller, value:FAILED: [IntegrationException] Failed to perform doPostTenant call, parameters: [{key: cause, value: 500: {"errors":[{"type":"InternalServerErrorException","code":"service_error","message":"Failed to proxy request","parameters":[{"key":"cause","value":"Connection refused: localhost/127.0.0.1:8081"}]}],"total_records":1}}]]
This error may appear even if enough resources are available in the environment, as described in the “InternalServerErrorException error 500. Connection refused - lack of resources” topic above, and module availability was ensured.
Cause: In some cases, this error can occur due to inappropriate deployment timing, especially when an automated deployment process is used. Even if enough resources have been provided, modules need time to become available after they start. Some heavyweight modules, such as mod-oa or mod-agreements, may need up to 5 minutes to start. Therefore, it is important to check module availability after deployment before starting any operations, such as instance entitlement on modules. Additionally, ensure the correct order of deployment: Kong and Keycloak first, then the mgr-* components, then the modules.

  • The application is not entitled on tenant - sidecar vs Kafka
    Issue: The error "The module is not entitled on tenant ..." may occur during certain operations, especially during the entitlement process. You can find the full log of this issue in the related module's sidecar.
    Cause: This error happens due to communication issues between mgr-tenant-entitlements and the corresponding module, which is notified about the end of the entitlement process via Kafka. In some cases the sidecar's consumer connection can be marked as dead. The main reasons are Kafka heartbeat and sidecar poll request periods that are not aligned with each other, or various networking issues that cause poll requests to be absent for a specific period. The "module is not entitled on tenant ..." errors then appear when the next portion of modules in the entitlement process sends requests to previously entitled modules whose sidecars never received the Kafka message about the completion of their entitlement.
    Resolution: If a sidecar loses its connection to Kafka during the entitlement process but the process itself has not failed, simply restart the affected module; its sidecar will get the entitlement information from the mgr-tenant-entitlements module. If the entitlement process fails, repeat it. In any case, ensure a stable connection between Kafka and the sidecars, and align the Kafka heartbeat and sidecar poll request periods.
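
    A module restart under Kubernetes is a rolling restart of its Deployment; mod-orders below is just an example module name:

    kubectl rollout restart deployment/mod-orders --namespace=eureka
    kubectl rollout status deployment/mod-orders --namespace=eureka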

  • The application is not entitled on tenant - sidecar vs mod-tenant-entitlement
    Issue: The error The module is not entitled on tenant ... may occur during certain operations.
    Cause: This error can happen when the mgr-tenant-entitlements pod and some module pods are redeployed simultaneously and a module's sidecar becomes ready before mgr-tenant-entitlements. The sidecar is then unable to obtain information about the application entitlement from the MTE module and may return an error response to other modules on request.
    Resolution: To fix this issue, ensure the correct module redeployment order. If the issue occurs unexpectedly, simply restart the affected module.

  • The upstream server is timing out - Kong fine-tuning
    Issue: Some API requests may result in a 504 error code with the error message "upstream server is timing out". This problem is primarily caused by Kong and typically occurs during long operations, such as assigning a capability to a role or user.
    Cause: When a request reaches a specific module, it always goes through Kong, which has two potential points of failure: Kong's Nginx and Kong itself.
    Resolution: To address this issue, you should adjust the upstream timeout of Kong's Nginx using the KONG_NGINX_HTTP_KEEPALIVE_TIMEOUT, KONG_NGINX_UPSTREAM_KEEPALIVE, and KONG_NGINX_HTTP_KEEPALIVE_REQUESTS environment variables. Additionally, consider modifying the following Kong variables: KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT, KONG_UPSTREAM_KEEPALIVE_POOL_SIZE, KONG_UPSTREAM_KEEPALIVE_MAX_REQUESTS, KONG_UPSTREAM_CONNECT_TIMEOUT, and KONG_RETRIES. See the Kong fine-tuning section above for how to apply these variables.

  • Some capabilities/capability sets are absent - Kafka messages processing period
    Issue: If you try to assign capabilities or capability sets to a role or user immediately after the entitlement process, you may find that some of them are absent.
    Cause: The predefined capabilities (also known as permissions) and capability sets are created just after the application entitlement process. This process takes time to complete. Here's how it works:

    • The mgr-tenant-entitlements module sends messages with lists of roles to mod-roles-keycloak via Kafka. This continues throughout the entitlement process as each module is enabled on a tenant.

    • mod-roles-keycloak starts processing the messages right after it has itself been entitled, which may be near the end of the entitlement process.

    • mod-roles-keycloak proceeds through the message queue in Kafka until it reaches the end.

Resolution: Before starting to assign capabilities or capability sets, check the Kafka consumer group offsets for the module to ensure that processing has completed. Alternatively, determine an appropriate amount of time to wait for the process to finish based on the performance of your environment.
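
One way to check the consumer group offsets is the kafka-consumer-groups.sh tool shipped with Kafka; the pod name kafka-0 and the in-cluster bootstrap address below are assumptions matching a typical Bitnami Kafka StatefulSet:

kubectl exec -it kafka-0 --namespace=eureka -- \
  kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --all-groups
# processing is complete when the LAG column reaches 0 for the mod-roles-keycloak consumer group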
