Kubernetes Example Deployment

If the secret structure above is used for the deployment, then each component MUST be configured accordingly; i.e., the "db-credentials" secret should reflect the PostgreSQL configuration, and so on.
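As an illustration, such a secret could be created with kubectl. The key names and values here are assumptions modeled on the mgr-* values files later in this document; adjust them to whatever your charts actually expect:

```shell
# Hypothetical example: create a "db-credentials" secret whose keys mirror
# the PostgreSQL settings used elsewhere in this document.
kubectl create secret generic db-credentials \
  --namespace eureka \
  --from-literal=DB_HOST=postgresql \
  --from-literal=DB_PORT=5432 \
  --from-literal=DB_DATABASE=folio \
  --from-literal=DB_USERNAME=postgres \
  --from-literal=DB_PASSWORD=secretDBpassword
```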

Overview

Before going any further, we would highly recommend becoming familiar with the Folio Eureka Platform Overview document to understand the main concepts of the new platform.

Setting Up the Environment

Prerequisites:

  • Kubernetes Cluster (system for automating deployment, scaling, and management of containerized applications)

  • PostgreSQL (RDBMS used by Keycloak, Kong Gateway, Eureka modules)

  • Apache Kafka (distributed event streaming platform)

  • HashiCorp Vault (identity-based secret and encryption management system)

  • Keycloak (Identity and Access Management)

  • Kong Gateway (API Gateway)

  • MinIO (enterprise object store built for production environments; OPTIONAL)

  • Elasticsearch or OpenSearch (enterprise-grade search and observability suite)

 

MinIO is an implementation of object storage compatible with the AWS S3 API.

The reverse also holds: instead of MinIO, you are free to use the AWS S3 service without any problem.

 

To set up the Eureka Platform you should already have a Kubernetes cluster installed. Then create a new namespace within the cluster to assign and manage resources for your Eureka deployment at a granular level.
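For example (the namespace name "eureka" is just an example used throughout this document):

```shell
# Create a dedicated namespace for the Eureka deployment
kubectl create namespace eureka

# Optionally make it the default for subsequent kubectl/helm commands
kubectl config set-context --current --namespace=eureka
```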

You can run your cluster nodes on premises in a local data center, or adopt whichever cloud provider (e.g., AWS, Azure, GCP) best meets your planned or unplanned resource demand.

The Eureka Platform depends on a number of third-party services (listed above) for its expected operation. Some of these services (PostgreSQL, Apache Kafka, OpenSearch, HashiCorp Vault) can be deployed as standalone services outside of the cluster namespace, but the others are almost never deployed outside of it.

An initial Eureka deployment needs about 30 GB of RAM. Such a setup incorporates all of the mentioned third-party services in one Kubernetes namespace.

Extra resources (RAM, CPU, disk space, disk IOPS) may need to be assigned to the destination Kubernetes cluster if the prerequisite services are deployed into the same cluster namespace.

A Consortia deployment also requires extra resources to be assigned.

If you decide to keep everything in one place, pay particular attention to the disk IOPS required by the PostgreSQL, OpenSearch, and Apache Kafka services.

 

The PostgreSQL RDBMS should be installed into the cluster namespace first, since it is a prerequisite for Kong Gateway and the Keycloak Identity Manager.

The Apache Kafka service is used by Eureka for internal communication between modules, so it is very important to keep it in good shape.

HashiCorp Vault stores all secrets used within the platform. AWS SSM Parameter Store is now also supported as secret storage.

The Keycloak service provides authentication and authorization (granting access) for all kinds of identities (users, roles, endpoints).

Kong Gateway, as the API gateway, routes requests to modules and provides access to the Eureka REST APIs.

MinIO object storage keeps data used by some modules during platform operation.

The Elasticsearch/OpenSearch instance holds a huge amount of information and indexes it for fast search, so it is very important to maintain an appropriate level of performance for this service. It can also be installed outside of the Kubernetes cluster.

 

Expected Prerequisites deployment order:

  1. Hashicorp Vault

  2. PostgreSQL

  3. Apache Kafka

  4. ElasticSearch

  5. MinIO (Optional)

  6. Kong Gateway

  7. Keycloak Identity Manager
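The ordering above can be sketched as a Helm install sequence. This is a sketch only: it assumes the Bitnami charts and versions suggested later in this document, an example namespace "eureka", and example values-file names; the OpenSearch chart comes from the opensearch-project Helm repository:

```shell
# Add the chart repositories (assumed sources; adjust to your mirrors)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update

# Install prerequisites in the suggested order
helm install vault bitnami/vault -n eureka -f ./values/vault-values.yaml
helm install postgresql bitnami/postgresql -n eureka --version 13.2.19 -f ./values/postgresql-values.yaml
helm install kafka bitnami/kafka -n eureka --version 21.4.6 -f ./values/kafka-values.yaml
helm install opensearch opensearch/opensearch -n eureka -f ./values/opensearch-values.yaml
helm install minio bitnami/minio -n eureka --version 11.8.1 -f ./values/minio-values.yaml   # optional
helm install kong bitnami/kong -n eureka --version 12.0.11 -f ./values/kong-values.yaml
helm install keycloak bitnami/keycloak -n eureka --version 21.0.4 -f ./values/keycloak-values.yaml
```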

Cluster setup

Let's assume you are going to set up a Eureka Platform development environment on a Kubernetes cluster. For ease of resource scaling during workload spikes, it is worth using managed cloud services such as EKS (AWS), AKS (Azure), or GKE (GCP).

At the same time, to limit cloud vendor lock-in and cut down expenses, we are going to deploy all prerequisite services into one cluster namespace, except for the OpenSearch instance.

NOTE: It’s HIGHLY recommended to place all of the deployed resources in a single Kubernetes namespace, so all services can be accessed without specifying an FQDN.

To deploy the prerequisite services, we recommend adopting the following container (Docker) images and Helm charts:

PostgreSQL container Image: hub.docker.com/bitnami/postgresql, Helm Chart: github.com/bitnami/charts/postgresql

Suggested PostgreSQL chart version: 13.2.19

P.S. Please pay attention to this SQL initialization script (in the “values.yaml” file): if your deployment approach differs, these two databases MUST be pre-created and configured accordingly beforehand.

CREATE DATABASE kong;
CREATE USER kong PASSWORD 'secretDBpassword';
ALTER DATABASE kong OWNER TO kong;
ALTER DATABASE kong SET search_path TO public;
REVOKE CREATE ON SCHEMA public FROM public;
GRANT ALL ON SCHEMA public TO kong;
GRANT USAGE ON SCHEMA public TO kong;
CREATE DATABASE keycloak;
CREATE USER keycloak PASSWORD 'secretDBpassword';
ALTER DATABASE keycloak OWNER TO keycloak;
ALTER DATABASE keycloak SET search_path TO public;
REVOKE CREATE ON SCHEMA public FROM public;
GRANT ALL ON SCHEMA public TO keycloak;
GRANT USAGE ON SCHEMA public TO keycloak;

EXAMPLE ONLY: ./values.yaml file; the values below are subject to adjustment according to your setup

architecture: standalone
readReplicas:
  replicaCount: 1
  resources:
    requests:
      memory: 8192Mi
    limits:
      memory: 10240Mi
  podAffinityPreset: soft
  persistence:
    enabled: true
    size: '20Gi'
    storageClass: gp2
  extendedConfiguration: |-
    shared_buffers = '2560MB'
    max_connections = '500'
    listen_addresses = '0.0.0.0'
    effective_cache_size = '7680MB'
    maintenance_work_mem = '640MB'
    checkpoint_completion_target = '0.9'
    wal_buffers = '16MB'
    default_statistics_target = '100'
    random_page_cost = '1.1'
    effective_io_concurrency = '200'
    work_mem = '1310kB'
    min_wal_size = '1GB'
    max_wal_size = '4GB'
image:
  tag: 16.1.0  # <-- PostgreSQL server version MUST BE 16.1.0 or higher
auth:
  database: folio
  postgresPassword: secretDBpassword
  replicationPassword: secretDBpassword
  replicationUsername: postgres
  usePasswordFiles: false
primary:
  initdb:
    scripts:
      init.sql: |
        CREATE DATABASE kong;
        CREATE USER kong PASSWORD 'secretDBpassword';
        ALTER DATABASE kong OWNER TO kong;
        ALTER DATABASE kong SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO kong;
        GRANT USAGE ON SCHEMA public TO kong;
        CREATE DATABASE keycloak;
        CREATE USER keycloak PASSWORD 'secretDBpassword';
        ALTER DATABASE keycloak OWNER TO keycloak;
        ALTER DATABASE keycloak SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO keycloak;
        GRANT USAGE ON SCHEMA public TO keycloak;
        CREATE DATABASE ldp;
        CREATE USER ldpadmin PASSWORD 'someLdpPassword';
        CREATE USER ldpconfig PASSWORD 'someLdpPassword';
        CREATE USER ldp PASSWORD 'someLdpPassword';
        ALTER DATABASE ldp OWNER TO ldpadmin;
        ALTER DATABASE ldp SET search_path TO public;
        REVOKE CREATE ON SCHEMA public FROM public;
        GRANT ALL ON SCHEMA public TO ldpadmin;
        GRANT USAGE ON SCHEMA public TO ldpconfig;
        GRANT USAGE ON SCHEMA public TO ldp;
  persistence:
    enabled: true
    size: '20Gi'
    storageClass: gp2
  resources:
    requests:
      memory: 8192Mi
    limits:
      memory: 10240Mi
  podSecurityContext:
    fsGroup: 1001
  containerSecurityContext:
    runAsUser: 1001
  podAffinityPreset: soft
  extendedConfiguration: |-
    shared_buffers = '2560MB'
    max_connections = '5000'
    listen_addresses = '0.0.0.0'
    effective_cache_size = '7680MB'
    maintenance_work_mem = '640MB'
    checkpoint_completion_target = '0.9'
    wal_buffers = '16MB'
    default_statistics_target = '100'
    random_page_cost = '1.1'
    effective_io_concurrency = '200'
    work_mem = '1310kB'
    min_wal_size = '1GB'
    max_wal_size = '4GB'
volumePermissions:
  enabled: true
metrics:
  enabled: false
  resources:
    requests:
      memory: 1024Mi
    limits:
      memory: 3072Mi
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 30s

Apache Kafka container Image: hub.docker.com/bitnami/kafka, Helm Chart: github.com/bitnami/charts/kafka

Since Kafka is a crucial part of the Eureka platform, it’s highly recommended to make sure that Kafka is up and running, and that you are able to create topics and push messages.

All of the above checks can be done via the Kafka scripts, usually located in the following directory:

/opt/bitnami/kafka/bin

It is also worth checking the ZooKeeper deployment status, to make sure that both Kafka and ZooKeeper are up and running.
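A minimal smoke test could look like the following; the pod name (kafka-0) and namespace are assumptions for a Bitnami Kafka deployment, so adjust them to your release:

```shell
# Create a throwaway topic from inside the broker pod...
kubectl exec -n eureka -it kafka-0 -- \
  /opt/bitnami/kafka/bin/kafka-topics.sh --create --topic smoke-test \
  --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# ...and confirm it is listed
kubectl exec -n eureka -it kafka-0 -- \
  /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```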

Suggested Kafka chart version: 21.4.6

EXAMPLE ONLY: ./values/kafka-values.yaml file; the values below are subject to adjustment according to your setup

image:
  tag: 3.5
metrics:
  kafka:
    enabled: true
    resources:
      limits:
        memory: 1280Mi
      requests:
        memory: 256Mi
  jmx:
    enabled: true
    resources:
      limits:
        memory: 2048Mi
      requests:
        memory: 1024Mi
  serviceMonitor:
    enabled: true
    namespace: monitoring  # <-- if your env does not have monitoring (Prometheus) deployed in the monitoring namespace, set enabled: false
    interval: 30s
    scrapeTimeout: 30s
persistence:
  enabled: true
  size: 10Gi
  storageClass: gp2
resources:
  requests:
    memory: 2Gi
  limits:
    memory: 8192Mi
zookeeper:
  image:
    tag: 3.7
  enabled: true
  persistence:
    size: 5Gi
  resources:
    requests:
      memory: 512Mi
    limits:
      memory: 768Mi
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
replicaCount: 1
heapOpts: "-XX:MaxRAMPercentage=75.0"
extraEnvVars:
  - name: KAFKA_DELETE_TOPIC_ENABLE
    value: "true"

Hashicorp Vault container Image: hub.docker.com/bitnami/vault, Helm Chart: github.com/bitnami/charts/vault

Suggested Vault helm chart version: 0.28.0

P.S. In the case of the AWS SSM Parameter Store, the following parameters MUST be pre-created, either manually or via some automation approach:
"folio-backend-admin-client", "master_mgr-applications", "master_mgr-tenant-entitlements", "master_mgr-tenants"

In addition, the EKS worker nodes should have the appropriate AWS SSM permissions via EC2 instance profile(s), or you may use AWS CLI access keys as described in the README.md files of the mgr-* components.
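A hypothetical way to pre-create these parameters with the AWS CLI is shown below. The parameter path prefix and placeholder values are assumptions; follow the mgr-* README.md files for the exact naming convention your setup expects:

```shell
# Pre-create the required SSM parameters (path prefix "/folio/" is an assumption)
for p in folio-backend-admin-client master_mgr-applications master_mgr-tenant-entitlements master_mgr-tenants; do
  aws ssm put-parameter --name "/folio/${p}" --type SecureString --value 'changeMe'
done
```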


*** On FOLIO Rancher Eureka environments, an EC2 instance profile is used to grant AWS SSM permissions.

*** The same is true for the Vault setup: all of the above secrets MUST be pre-created under the following path, where {{folio}} is the value of the ENV environment variable (it may differ, so please adjust according to your values):

{{folio}}/master
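A sketch of pre-creating such a secret with the Vault CLI, assuming ENV=folio and a KV v2 mount named "folio" (both the mount point and the key name are assumptions; adjust to your setup):

```shell
export VAULT_ADDR=http://vault:8200
export VAULT_TOKEN=root   # dev-mode root token from the example values

# Enable a KV v2 engine at path "folio" (skip if already mounted)
vault secrets enable -path=folio -version=2 kv

# Write one of the required secrets under {{folio}}/master
vault kv put folio/master/folio-backend-admin-client value='changeMe'
```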

Source code for AWS SSM Params store:

list of params: pipelines-shared-library/terraform/rancher/project/locals.tf at master · folio-org/pipelines-shared-library

creation: pipelines-shared-library/terraform/rancher/project/secrets.tf at master · folio-org/pipelines-shared-library

global:
  enabled: true
server:
  ingress:
    enabled: false
  dev:
    enabled: true
  ha:
    enabled: false
  service:
    type: ClusterIP
    port: 8200
  dataStorage:
    enabled: true
  tls:
    enabled: false
  auto:
    enabled: false
  extraEnvironmentVars:
    VAULT_DEV_ROOT_TOKEN_ID: "root"
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "1Gi"
      cpu: "1024m"
backup:
  enabled: false
logLevel: "debug"
dataStorage:
  enabled: false
auditLog:
  enabled: false
agentInjector:
  enabled: false
metrics:
  enabled: false
unsealConfig:
  enabled: false
ui:
  enabled: true

 

Keycloak container Image: folioci/folio-keycloak(snapshot versions), folioorg/folio-keycloak(release versions)

Helm Chart: github.com/bitnami/charts/keycloak, values for values.yaml: github.com/folio-org/pipelines-shared-library/…/keycloak.tf, Git Repository github.com/folio-org/folio-keycloak

Suggested Keycloak chart version: 21.0.4

P.S. For the Keycloak deployment, USE ONLY the folio-keycloak Docker image; the default Keycloak image is missing additional configuration scripts.

EXAMPLE ONLY: ./values.yaml file; the values below are subject to adjustment according to your setup

image:
  registry: folioci
  repository: folio-keycloak
  tag: latest
  pullPolicy: Always
debug: false
auth:
  adminUser: "admin"
  existingSecret: keycloak-credentials
  passwordSecretKey: KEYCLOAK_ADMIN_PASSWORD
extraEnvVars:
  - name: KC_HOSTNAME_BACKCHANNEL_DYNAMIC
    value: "true"
  - name: KC_HOSTNAME
    value: "https://keycloak.example.org"
  - name: KC_HOSTNAME_BACKCHANNEL
    value: "https://keycloak.example.org"
  - name: KC_HOSTNAME_STRICT
    value: "false"
  - name: KC_HOSTNAME_STRICT_HTTPS
    value: "false"
  - name: KC_PROXY
    value: "edge"
  - name: FIPS
    value: "false"
  - name: EUREKA_RESOLVE_SIDECAR_IP
    value: "false"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KC_FOLIO_BE_ADMIN_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_FOLIO_BE_ADMIN_CLIENT_SECRET
  - name: KC_HTTPS_KEY_STORE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_HTTPS_KEY_STORE_PASSWORD
  - name: KC_LOG_LEVEL
    value: "DEBUG"
  - name: KC_HOSTNAME_DEBUG
    value: "true"
  - name: KC_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_PASSWORD
  - name: KC_DB_URL_DATABASE
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_URL_DATABASE
  - name: KC_DB_URL_HOST
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_URL_HOST
  - name: KC_DB_URL_PORT
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_URL_PORT
  - name: KC_DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: keycloak-credentials
        key: KC_DB_USERNAME
  - name: KC_HTTP_ENABLED
    value: "true"
  - name: KC_HTTP_PORT
    value: "8080"
  - name: KC_HEALTH_ENABLED
    value: "true"
  - name: BASE_LOGO_FILES_URL
    value: "http://origin.hosting.your.logo/optional/path"
resources:
  requests:
    cpu: 512m
    memory: 2Gi
  limits:
    cpu: 2048m
    memory: 3Gi
postgresql:
  enabled: false
externalDatabase:
  existingSecret: keycloak-credentials
  existingSecretHostKey: KC_DB_URL_HOST
  existingSecretPortKey: KC_DB_URL_PORT
  existingSecretUserKey: KC_DB_USERNAME
  existingSecretDatabaseKey: KC_DB_URL_DATABASE
  existingSecretPasswordKey: KC_DB_PASSWORD
logging:
  output: default
  level: DEBUG
enableDefaultInitContainers: false
containerSecurityContext:
  enabled: false
service:
  type: NodePort
  http:
    enabled: true
  ports:
    http: 8080
networkPolicy:
  enabled: false
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false
startupProbe:
  enabled: false
ingress:
  enabled: true
  hostname: keycloak.example.org
  ingressClassName: ""
  pathType: ImplementationSpecific
  path: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: "Example_Project_Name"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/healthcheck-path: /

Kong Gateway container Image: folioci/folio-kong(snapshot versions), folioorg/folio-kong(release versions)

Helm Chart: charts/bitnami/kong, Git Repository github.com/folio-org/folio-kong

Suggested Kong helm chart version: 12.0.11

image:
  registry: folioci
  repository: folio-kong
  tag: latest
  pullPolicy: Always
useDaemonset: false
replicaCount: 1
containerSecurityContext:
  enabled: true
  seLinuxOptions: {}
  runAsUser: 1001
  runAsGroup: 1001
  runAsNonRoot: true
  privileged: false
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: "RuntimeDefault"
database: postgresql
postgresql:
  enabled: false
  external:
    host: pgsql.example.org
    port: 5432
    user: kong
    password: ""
    database: kong
  existingSecret: "kong-credentials"
  existingSecretPasswordKey: "KONG_PG_PASSWORD"
networkPolicy:
  enabled: false
service:
  type: NodePort
  exposeAdmin: true
  disableHttpPort: false
  ports:
    proxyHttp: 8000
    proxyHttps: 443
    adminHttp: 8001
    adminHttps: 8444
  nodePorts:
    proxyHttp: ""
    proxyHttps: ""
    adminHttp: ""
    adminHttps: ""
ingress:
  ingressClassName: ""
  pathType: ImplementationSpecific
  path: /*
  hostname: kong.example.org
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/group.name: "project_group_name"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: "200-399"
    alb.ingress.kubernetes.io/healthcheck-path: "/version"
kong:
  livenessProbe:
    enabled: false
  readinessProbe:
    enabled: false
  startupProbe:
    enabled: false
  extraEnvVars:
    - name: KONG_PASSWORD
      valueFrom:
        secretKeyRef:
          name: kong-credentials
          key: KONG_PASSWORD
    - name: KONG_UPSTREAM_TIMEOUT
      value: "600000"
    - name: KONG_UPSTREAM_SEND_TIMEOUT
      value: "600000"
    - name: KONG_UPSTREAM_READ_TIMEOUT
      value: "600000"
    - name: KONG_NGINX_PROXY_PROXY_NEXT_UPSTREAM
      value: "error timeout http_500 http_502 http_503 http_504"
    - name: KONG_PROXY_SEND_TIMEOUT
      value: "600000"
    - name: KONG_UPSTREAM_CONNECT_TIMEOUT
      value: "600000"
    - name: KONG_PROXY_READ_TIMEOUT
      value: "600000"
    - name: KONG_NGINX_HTTP_KEEPALIVE_TIMEOUT
      value: "600000"
    - name: KONG_NGINX_UPSTREAM_KEEPALIVE
      value: "600000"
    - name: KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT
      value: "600000"
    - name: KONG_UPSTREAM_KEEPALIVE_POOL_SIZE
      value: "1024"
    - name: KONG_UPSTREAM_KEEPALIVE_MAX_REQUESTS
      value: "20000"
    - name: KONG_NGINX_HTTP_KEEPALIVE_REQUESTS
      value: "20000"
    - name: KONG_PG_DATABASE
      value: "kong"
    - name: KONG_NGINX_PROXY_PROXY_BUFFERS
      value: "64 160k"
    - name: KONG_NGINX_PROXY_CLIENT_HEADER_BUFFER_SIZE
      value: "16k"
    - name: KONG_NGINX_HTTP_CLIENT_HEADER_BUFFER_SIZE
      value: "16k"
    - name: KONG_ADMIN_LISTEN
      value: "0.0.0.0:8001"
    - name: KONG_NGINX_PROXY_PROXY_BUFFER_SIZE
      value: "160k"
    - name: KONG_NGINX_PROXY_LARGE_CLIENT_HEADER_BUFFERS
      value: "4 16k"
    - name: KONG_PLUGINS
      value: "bundled"
    - name: KONG_MEM_CACHE_SIZE
      value: "2048m"
    - name: KONG_NGINX_HTTP_LARGE_CLIENT_HEADER_BUFFERS
      value: "4 16k"
    - name: KONG_LOG_LEVEL
      value: "info"
    - name: KONG_ADMIN_GUI_API_URL
      value: "kong.example.org"
    - name: KONG_NGINX_HTTPS_LARGE_CLIENT_HEADER_BUFFERS
      value: "4 16k"
    - name: KONG_PROXY_LISTEN
      value: "0.0.0.0:8000"
    - name: KONG_NGINX_WORKER_PROCESSES
      value: "auto"
    - name: EUREKA_RESOLVE_SIDECAR_IP
      value: "false"
  resources:
    requests:
      cpu: 512m
      ephemeral-storage: 50Mi
      memory: 2Gi
    limits:
      cpu: 2048m
      ephemeral-storage: 1Gi
      memory: 3Gi
ingressController:
  enabled: false
migration:
  command: ["/bin/sh", "-c"]
  args: ["echo 'Hello kong!'"]

MinIO container Image: hub.docker.com/bitnami/minio, Helm Chart: github.com/bitnami/charts/minio

Suggested MinIO helm chart version: 11.8.1

defaultBuckets: "mod-data-export,mod-data-export-worker,mod-data-import,mod-lists,mod-bulk-operations,mod-oai-pmh,mod-marc-migrations,local-files"
auth:
  rootUser: root_user_name
  rootPassword: root_password
resources:
  limits:
    memory: 1536Mi
persistence:
  size: 10Gi
extraEnvVars:
  - name: MINIO_SERVER_URL
    value: https://minio.example.org
  - name: MINIO_BROWSER_REDIRECT_URL
    value: https://minio-console.example.org
service:
  type: NodePort
ingress:
  enabled: true
  hostname: minio-console.example.org
  path: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: project_group_name
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=4000
apiIngress:
  enabled: true
  hostname: minio.example.org
  path: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: project_group_name
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/success-codes: '200-399'
    alb.ingress.kubernetes.io/healthcheck-path: /minio/health/live
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=4000

OpenSearch container Image: opensearchproject/opensearch (latest versions); necessary for e.g. mod-search

Create the following secret, “opensearch-credentials”, before deploying, and adjust it as necessary for your setup:

BROWSE_CN_INTERMEDIATE_REMOVE_DUPLICATES       true
BROWSE_CN_INTERMEDIATE_VALUES_ENABLED          true
ELASTICSEARCH_COMPRESSION_ENABLED              true
ELASTICSEARCH_URL                              http://opensearch-cluster-master-headless:9200
ENV                                            folio
INDEXING_DATA_FORMAT                           smile
INITIAL_LANGUAGES                              ger,eng
INSTANCE_CONTRIBUTORS_INDEXING_RETRY_ATTEMPTS  3
INSTANCE_SUBJECTS_INDEXING_RETRY_ATTEMPTS      3
KAFKA_AUTHORITIES_CONCURRENCY                  1
KAFKA_CONTRIBUTORS_CONCURRENCY                 2
KAFKA_CONTRIBUTORS_TOPIC_PARTITIONS            50
KAFKA_SECURITY_PROTOCOL                        PLAINTEXT
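One way to create this secret is with kubectl from literal key/value pairs (the values shown are the examples above; the namespace is an assumption):

```shell
# Create the "opensearch-credentials" secret from the key/value pairs above
kubectl create secret generic opensearch-credentials -n eureka \
  --from-literal=ELASTICSEARCH_URL=http://opensearch-cluster-master-headless:9200 \
  --from-literal=ENV=folio \
  --from-literal=INDEXING_DATA_FORMAT=smile \
  --from-literal=INITIAL_LANGUAGES=ger,eng \
  --from-literal=KAFKA_SECURITY_PROTOCOL=PLAINTEXT
# ...add the remaining keys from the table above the same way
```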

Use the following values for the deployment. Adjust as necessary for your setup. Note the addition of the plugins section:

plugins:
  enabled: true
  installList:
    - analysis-icu
    - analysis-kuromoji
    - analysis-nori
    - analysis-phonetic
    - analysis-smartcn

---
clusterName: "opensearch-cluster"
nodeGroup: "master"
singleNode: true
masterService: "opensearch-cluster-master"
roles:
  - master
  - ingest
  - data
  - remote_cluster_client
replicas: 3
majorVersion: ""
global:
  dockerRegistry: ""
opensearchHome: /usr/share/opensearch
config:
  opensearch.yml: |
    cluster.name: opensearch-cluster
    network.host: 0.0.0.0
extraEnvs:
  - name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
    value: "SecretPassword"
  - name: plugins.security.disabled
    value: "true"
envFrom:
  - secretRef:
      name: "opensearch-credentials"
secretMounts: []
hostAliases: []
image:
  repository: "opensearchproject/opensearch"
  tag: ""
  pullPolicy: "IfNotPresent"
podAnnotations: {}
openSearchAnnotations: {}
labels: {}
opensearchJavaOpts: "-Xmx512M -Xms512M"
resources:
  requests:
    cpu: "1000m"
    memory: "100Mi"
initResources: {}
sidecarResources: {}
networkHost: "0.0.0.0"
rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
  automountServiceAccountToken: false
podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir
persistence:
  enabled: true
  enableInitChown: true
  storageClass: csi-rbd-sc
  accessModes:
    - ReadWriteOnce
  size: 40Gi
  annotations: {}
extraVolumes: []
extraVolumeMounts: []
extraContainers: []
extraInitContainers: []
priorityClassName: ""
antiAffinityTopologyKey: "kubernetes.io/hostname"
antiAffinity: "soft"
customAntiAffinity: {}
nodeAffinity: {}
podAffinity: {}
topologySpreadConstraints: []
podManagementPolicy: "Parallel"
enableServiceLinks: true
protocol: https
httpPort: 9200
transportPort: 9300
metricsPort: 9600
httpHostPort: ""
transportHostPort: ""
service:
  labels: {}
  labelsHeadless: {}
  headless:
    annotations: {}
  type: ClusterIP
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  metricsPortName: metrics
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""
updateStrategy: RollingUpdate
maxUnavailable: 1
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
securityContext:
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 1000
securityConfig:
  enabled: true
  path: "/usr/share/opensearch/config/opensearch-security"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  config:
    securityConfigSecret: ""
    dataComplete: true
    data: {}
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
startupProbe:
  tcpSocket:
    port: 9200
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 30
livenessProbe: {}
readinessProbe:
  tcpSocket:
    port: 9200
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
ingress:
  enabled: false
  annotations: {}
  ingressLabels: {}
  path: /
  hosts:
    - chart-example.local
  tls: []
nameOverride: ""
fullnameOverride: ""
masterTerminationFix: false
opensearchLifecycle: {}
lifecycle: {}
keystore: []
networkPolicy:
  create: false
  http:
    enabled: false
fsGroup: ""
sysctl:
  enabled: false
sysctlInit:
  enabled: false
plugins:
  enabled: true
  installList:
    - analysis-icu
    - analysis-kuromoji
    - analysis-nori
    - analysis-phonetic
    - analysis-smartcn
  removeList: []
extraObjects: []
serviceMonitor:
  enabled: false
  path: /_prometheus/metrics
  scheme: http
  interval: 10s
  labels: {}
  tlsConfig: {}
  basicAuth:
    enabled: false

 

We also need to have a Module Descriptors Registry in place.

The Module Descriptors Registry (MDR) service is an HTTP server configured in a Kubernetes pod.

NOTE: A formal MDR is not a strict requirement. Any HTTP server will suffice, as long as the module descriptors are accessible from the mgr-applications service. For example, you could choose to host module descriptors in an S3 bucket configured as a static website using Amazon S3, point directly to files in a GitHub repository, set up an Apache HTTP server, or even develop something custom.

This HTTP server holds and distributes module descriptors for Eureka instance installs and upgrades.

A module descriptor (see Module Descriptor Template) is generated during the continuous integration flow and is put into the Module Descriptors Registry on completion.

These module descriptors are used by the Eureka install and update flows.

If you use the FOLIO community snapshot/release Docker images (Docker Hub), this MDR can be used for pulling module descriptors: folio-registry.dev.folio.org/_/proxy/modules
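A quick way to confirm that module descriptors are reachable from your network is a plain curl against the registry; the "filter" query parameter shown here is an Okapi proxy-API convention and may need adjusting:

```shell
# List descriptors matching a module name (truncate output for readability)
curl -s 'https://folio-registry.dev.folio.org/_/proxy/modules?filter=mod-users' | head
```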

 

Deploying EUREKA on Kubernetes

Snapshot versions of the modules introduced by the Eureka platform can be found via platform-complete/eureka-platform.json at snapshot · folio-org/platform-complete

*****

CAUTION: as of 05/12/2025, ALL POST REST API calls to mgr-* endpoints should carry an Authorization token (strongly suggested); GET API calls work without authorization.

*****

To enable the authorization feature, follow the instructions in [RANCHER-2180] Security-related env variables misconfigured for mgr-* in Eureka envs - FOLIO Jira

Once all prerequisites are met, we can proceed with deploying the mgr-* Eureka modules to the cluster namespace:

  • mgr-applications module:

    • Github Repository folio-org/mgr-applications

    • Container Image folioci/mgr-applications

    • Helm Chart charts/mgr-applications

    • Helm Chart variable values (EXAMPLE ONLY: ./values/mgr-applications.yaml file; the values below are subject to adjustment according to your setup):

      mgr-applications:
        extraEnvVars:
          - name: MODULE_URL
            value: "http://mgr-applications"
          - name: FOLIO_CLIENT_CONNECT_TIMEOUT
            value: "600s"
          - name: FOLIO_CLIENT_READ_TIMEOUT
            value: "600s"
          - name: KONG_CONNECT_TIMEOUT
            value: "941241418"
          - name: KONG_READ_TIMEOUT
            value: "941241418"
          - name: KONG_WRITE_TIMEOUT
            value: "941241418"
        extraJavaOpts:
          - "-Dlogging.level.root=DEBUG -Dsecure_store=AwsSsm -Dsecure_store_props=/usr/ms/aws_ss.properties"
        integrations:
          db:
            enabled: true
            existingSecret: db-credentials
          kafka:
            enabled: true
            existingSecret: kafka-credentials
        replicaCount: 1
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi
  • mgr-tenant-entitlements module:

    • Github Repository folio-org/mgr-tenant-entitlements

    • Container Image folioci/mgr-tenant-entitlements

    • Helm Chart charts/mgr-tenant-entitlements

    • Helm Chart variable values (EXAMPLE ONLY: ./values/mgr-tenant-entitlements.yaml file; the values below are subject to adjustment according to your setup):

      mgr-tenant-entitlements:
        extraJavaOpts:
          - "-Dlogging.level.root=DEBUG -Dsecure_store=AwsSsm -Dsecure_store_props=/usr/ms/aws_ss.properties"
        extraEnvVars:
          - name: MODULE_URL
            value: "http://mgr-tenant-entitlements"
          - name: FOLIO_CLIENT_CONNECT_TIMEOUT
            value: "600s"
          - name: FOLIO_CLIENT_READ_TIMEOUT
            value: "600s"
          - name: KONG_CONNECT_TIMEOUT
            value: "941241418"
          - name: KONG_READ_TIMEOUT
            value: "941241418"
          - name: KONG_WRITE_TIMEOUT
            value: "941241418"
        integrations:
          db:
            enabled: true
            existingSecret: db-credentials
          kafka:
            enabled: true
            existingSecret: kafka-credentials
        replicaCount: 1
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi
  • mgr-tenants module:

    • Github Repository folio-org/mgr-tenants

    • Container Image folioci/mgr-tenants

    • Helm Chart charts/mgr-tenants

    • Helm Chart variable values (EXAMPLE ONLY: ./values/mgr-tenants.yaml file; the values below are subject to adjustment according to your setup):

      mgr-tenants:
        extraEnvVars:
          - name: MODULE_URL
            value: "http://mgr-tenants"
          - name: FOLIO_CLIENT_CONNECT_TIMEOUT
            value: "600s"
          - name: FOLIO_CLIENT_READ_TIMEOUT
            value: "600s"
          - name: KONG_CONNECT_TIMEOUT
            value: "941241418"
          - name: KONG_READ_TIMEOUT
            value: "941241418"
          - name: KONG_WRITE_TIMEOUT
            value: "941241418"
        extraJavaOpts:
          - "-Dlogging.level.root=DEBUG -Dsecure_store=AwsSsm -Dsecure_store_props=/usr/ms/aws_ss.properties"
        integrations:
          db:
            enabled: true
            existingSecret: db-credentials
        replicaCount: 1
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi

       

JFYI:
We have the FOLIO Helm Charts v2 repository on GitHub, containing ready-to-use Helm charts for every Eureka platform module.

Please become familiar with its README.md to learn how the repository is organized, read the descriptions of common values, and get a sense of how it all works together.

Example of deploying the mgr-* applications to a Kubernetes cluster (folio-helm-v2 is a private Helm repository; if you would like to gain access, please reach out via the Slack channel https://open-libr-foundation.slack.com/archives/C017RFAGBK2; alternatively, you may package the charts on your side and use them for the installation):

helm repo add folio-helm-v2 https://repository.folio.org/repository/folio-helm-v2/
helm repo update folio-helm-v2
helm install mgr-applications --namespace=eureka -f ./values/mgr-applications.yaml folio-helm-v2/mgr-applications
helm install mgr-tenant-entitlements --namespace=eureka -f ./values/mgr-tenant-entitlements.yaml folio-helm-v2/mgr-tenant-entitlements
helm install mgr-tenants --namespace=eureka -f ./values/mgr-tenants.yaml folio-helm-v2/mgr-tenants

 

Eureka deployment flow:

  1. Register Applications

  2. Register Modules

  3. Deploy backend modules to cluster namespace

  4. Create Tenant

  5. Set Entitlement

  6. Add User

  7. Set User Password

  8. Create Role

  9. Assign Capabilities to Role

  10. Add Roles to User
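As an illustration of steps 4 and 5 of the flow above, tenants and entitlements are created through the mgr-tenants and mgr-tenant-entitlements REST APIs. This is a sketch only: the payload fields and hostname are illustrative, and the placeholder tenant UUID is not a real value — consult the mgr-* REST API docs for the full schemas:

```shell
TOKEN='...'   # master access token from Keycloak

# 4. Create a tenant (mgr-tenants)
curl -X POST 'https://kong.example.org/tenants' \
  --header "Authorization: Bearer ${TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{ "name": "diku", "description": "Example tenant" }'

# 5. Entitle an application for the tenant (mgr-tenant-entitlements);
#    use the tenant id returned by the call above
curl -X POST 'https://kong.example.org/entitlements' \
  --header "Authorization: Bearer ${TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{ "tenantId": "<tenant-uuid>", "applications": ["app-platform-full-1.0.0"] }'
```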

Get Master Auth token from Keycloak.

To run administrative REST API requests against a Eureka instance, we first need to get a master access token from Keycloak.

We need to know the request parameters first (consider adapting the following example):

  • Keycloak FQDN: keycloak.example.org

  • Token Service Endpoint: /realms/master/protocol/openid-connect/token

  • Client ID: folio-backend-admin-client (this is expected value and should not be changed)

  • Client Secret: SecretPhrase (generated by the deployment; example K8s secret reference: keycloak-credentials. E.g., the value usually used in reference envs is SecretPassword)

  • Grant Type: client_credentials (Constant)

curl -X POST --location 'https://keycloak.example.org/realms/master/protocol/openid-connect/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data 'client_id=folio-backend-admin-client&grant_type=client_credentials&client_secret=SecretPhrase'

Save the returned master access token; it is needed for every administrative REST API call later.
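For convenience, the token can be captured straight into a shell variable; this sketch assumes jq is installed and that the response carries the token in the standard OpenID Connect "access_token" field:

```shell
# Capture the master access token for later use
export TOKEN=$(curl -s -X POST 'https://keycloak.example.org/realms/master/protocol/openid-connect/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data 'client_id=folio-backend-admin-client&grant_type=client_credentials&client_secret=SecretPhrase' \
  | jq -r '.access_token')
```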

 

Register Applications Descriptors:

REST API Docs for "POST /applications" endpoint.

The general idea of the Eureka Platform is to have a range of related applications.

Their application descriptors can be found by searching for the app-* pattern inside the folio-org GitHub organization

(e.g., folio-org/app-platform-full: Application with all FOLIO modules included).

We need to register an application descriptor in the Eureka instance. The application descriptor is created from the github.com/folio-org/app-platform-full/sprint-quesnelia/app-platform-full.template.json file taken from the release branch.

Depending on the release being delivered, the application composition may differ; please ALWAYS refer to this page for the actual structure and list of applications: FOLIO Eureka Applications - Releases - FOLIO Wiki

If targeting the latest development version, use the snapshot branch of the application’s repository; for any release, a pre-generated application descriptor resides inside the release tag (example):


Each and every application descriptor MUST be generated using the folio-application-generator (for snapshot versions): folio-org/folio-application-generator: A Maven plugin to generate an application descriptor from a template

Docs for the registerApplication REST API call - register a new application.

The descriptor is registered with a curl command and related parameters:

  • Kong Gateway FQDN (http header): kong.example.org

  • Auth token (http header): 'Authorization: Bearer...'

  • Application Descriptor (http request body): JSON data file

curl -X POST --location 'https://kong.example.org/applications' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --data '{
    "description": "Application comprised of all Folio modules",
    "modules": [
      { "id": "mod-ebsconet-2.3.0", "name": "mod-ebsconet", "version": "2.3.0" },
      { "id": "edge-sip2-3.3.0", "name": "edge-sip2", "version": "3.3.0" },
      .... long list of other modules ....
    ],
    "uiModules": [
      { "id": "folio_authorization-policies-1.3.109000000131", "name": "folio_authorization-policies", "version": "1.3.109000000131" },
      { "id": "folio_authorization-roles-1.6.109000000580", "name": "folio_authorization-roles", "version": "1.6.109000000580" },
      .... long list of other modules ....
    ],
    "platform": "base",
    "dependencies": [],
    "id": "app-platform-full-1.0.0",
    "name": "app-platform-full",
    "version": "1.0.0"
  }'

 

Register Modules

REST API Docs for “POST /modules/discovery“ endpoint.

Once the required application descriptors are registered in the instance, we proceed with the Module Discovery flow to register modules in the system.

Docs for the searchModuleDiscovery REST API call - retrieve module discovery information by CQL query and pagination parameters.

Module discovery is started with a curl command and related parameters:

  • Kong Gateway FQDN (HTTP header): kong.example.org

  • Auth token (http header): 'Authorization: Bearer...'

  • Module Discovery Info (http request body): JSON data file

curl -X POST --location 'https://kong.example.org/modules/discovery' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --data '{
    "discovery": [
      { "id": "mod-users-keycloak-1.5.3", "name": "mod-users-keycloak", "version": "1.5.3", "location": "https://mod-users-keycloak:8082" },
      { "id": "mod-login-keycloak-1.5.0", "name": "mod-login-keycloak", "version": "1.5.0", "location": "https://mod-login-keycloak:8082" },
      { "id": "mod-scheduler-1.3.0", "name": "mod-scheduler", "version": "1.3.0", "location": "https://mod-scheduler:8082" },
      { "id": "mod-configuration-5.11.0", "name": "mod-configuration", "version": "5.11.0", "location": "https://mod-configuration:8082" }
      .... long list of other modules ....
    ]
  }'

 

Deploy Backend Modules

Now we are ready to deploy backend modules to the Kubernetes namespace with the Eureka instance.

Helm charts for the modules are taken from the GitHub repository folio-org/folio-helm-v2

Variable values for the Helm charts are stored in the dedicated repository folder folio-org/pipelines-shared-library/resources/helm

For example:

helm repo add folio-helm-v2 https://repository.folio.org/repository/folio-helm-v2/
helm repo update folio-helm-v2
helm install mod-inventory-storage --namespace=eureka -f ./values/mod-inventory-storage.yaml folio-helm-v2/mod-inventory-storage
helm install mod-reading-room --namespace=eureka -f ./values/mod-reading-room.yaml folio-helm-v2/mod-reading-room
helm install mod-agreements --namespace=eureka -f ./values/mod-agreements.yaml folio-helm-v2/mod-agreements
... long list of other modules ....
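Since the per-module installs all follow one pattern, they can be generated in a loop. A minimal sketch (the module names and the ./values layout are the example values used above); it only prints the helm commands instead of running them, so the list can be reviewed first:

```shell
# Print (rather than run) one helm install command per backend module.
# Module list and values-file layout follow the example above.
for MODULE in mod-inventory-storage mod-reading-room mod-agreements; do
  echo helm install "$MODULE" --namespace=eureka \
    -f "./values/${MODULE}.yaml" "folio-helm-v2/${MODULE}"
done
```

Drop the echo to actually run the installs.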

Just for your information:

Eureka modules are deployed as Kubernetes Pods to provide their services.

Every module Pod contains two containers: one for the service itself and one lightweight sidecar.

The sidecar container is responsible for proxying authentication, authorization, and routing of HTTP requests.

Please see Folio Eureka Platform Overview#Sidecars link for more info.

CAUTION: Become familiar with the folio-org/folio-helm-v2 repository (Helm charts for modules, using the Helm common library) and carefully review the Helm helper: folio-helm-v2/charts/folio-common/templates/_sidecar.tpl at master · folio-org/folio-helm-v2

Each BE module, i.e. mod-*, MUST have a sidecar container in the same deployment; in other words, a multi-container deployment approach is used for Eureka-based environments.

*** The mod-login, mod-login-saml, mod-authtoken and okapi modules MUST BE EXCLUDED from the deployment.

Example of removal in Groovy (installJson is the parsed install.json):

installJson.removeAll { module -> module.id =~ /(mod-login|mod-authtoken|mod-login-saml)-\d+\..*/ }
installJson.removeAll { module -> module.id == 'okapi' }

Create tenant

REST API Docs for “POST /tenants“ endpoint.

At this point we are ready to create an application tenant in the Eureka instance.

First we need to look at the docs for the createTenant REST API call to create a new tenant.

Once we are sure about the required parameters, we issue a POST HTTP request to create the new tenant.

In our example we create a tenant with the name “diku” and the description “Knowledge magic happens here”:

curl -X POST --location 'https://kong.example.org/tenants' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --data '{ "name": "diku", "description": "Knowledge magic happens here" }'

 

Set entitlement

REST API Docs for “POST /entitlements“ endpoint.

Before initiating the entitlement process, both the application and the tenant must be created or registered. This entitlement process can be understood as enabling an application for a specific tenant.
It is advisable to review the documentation related to entitlement or enabling applications for tenants. This documentation offers detailed information about the parameters required and the expected returned values.
The following example demonstrates how an application can be enabled for a tenant effectively:

curl -X POST --location 'https://kong.example.org/entitlements' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-token: <the same token as in the Authorization header without Bearer>' \
  --data '{ "tenantId": "[Tenant-UUID]", "applications": [ "app-platform-complete-1.0.0" ] }'

 

Add User

REST API Docs for “POST /users-keycloak/users“ endpoint.

At this stage we are ready to add the first user to the Eureka instance, to be granted administrative privileges later.

So we check the parameters in the docs for the createUser REST API call to create a new user.

Then we use a curl command to run a POST HTTP request against the Eureka instance:

curl -X POST --location 'https://kong.example.org/users-keycloak/users' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data-raw '{
    "username": "admin",
    "active": true,
    "patronGroup": "3684a786-6671-4268-8ed0-9db82ebca60b",
    "type": "staff",
    "personal": {
      "firstName": "John",
      "lastName": "Doe",
      "email": "noreply@ci.folio.org",
      "preferredContactTypeId": "002"
    }
  }'

Set User Password

REST API Docs for “POST /authn/credentials“ endpoint.

Having our user created, we can assign them a secret password to use on login.

Carefully look through the docs for the createCredentials REST API call to add a new login to the system.

curl -X POST --location 'https://kong.example.org/authn/credentials' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{ "username": "admin", "userId": "[Admin-User-UUID]", "password": "SecretPhrase" }'

 

Create Role

REST API Docs for “POST /roles“ endpoint.

We need to create a role to bundle Eureka administrative capabilities with our Admin User.

So, according to the docs for the createRole REST API call to create a new role, we need to run the following POST HTTP request:

curl -X POST --location 'https://kong.example.org/roles' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{ "name": "adminRole", "description": "Admin role" }'

 

Assign Capabilities to Role

REST API Docs for “POST /roles/capabilities“ endpoint.

Then we attach the required Eureka application capabilities to our Admin Role.

Using the docs for the createRoleCapabilities REST API call, we create a new record associating one or more capabilities with the already created role:

curl -X POST --location 'https://kong.example.org/roles/capabilities' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{
    "roleId": "[Role-UUID]",
    "capabilityIds": [
      "[Eureka-Capability-01-UUID]",
      "[Eureka-Capability-02-UUID]",
      "[Eureka-Capability-03-UUID]"
    ]
  }'

To get a list of existing capabilities, we use the findCapabilities REST API call:

curl -X GET --location 'https://kong.example.org/roles/capabilities?query=<field_name>=="<value>"&limit=300' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku'
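Once the capability UUIDs are collected from that response, they have to be joined into the capabilityIds JSON array of the POST request body. A minimal sketch with awk; the UUIDs below are placeholders, not real capability IDs:

```shell
# Placeholder capability UUIDs, one per line (in practice, extracted from the GET response)
CAP_IDS='11111111-1111-1111-1111-111111111111
22222222-2222-2222-2222-222222222222'

# Join the lines into a quoted, comma-separated JSON array
JSON_IDS=$(printf '%s\n' "$CAP_IDS" | awk 'NF {printf "%s\"%s\"", (c++ ? "," : ""), $0}')

# Assemble the request body for POST /roles/capabilities
printf '{ "roleId": "[Role-UUID]", "capabilityIds": [ %s ] }\n' "$JSON_IDS"
```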

 

Add Roles to User

REST API Docs for “POST /roles/users“ endpoint.

The last step in the row is assigning the Admin Role to the Admin User, granting them the power to administer the Eureka world.

So, according to the existing docs for the assignRolesToUser REST API call to create a record associating a role with a user, we run a curl command like the following:

curl -X POST --location 'https://kong.example.org/roles/users' \
  --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
  --header 'Content-Type: application/json' \
  --header 'x-okapi-tenant: diku' \
  --data '{ "userId": "[User-UUID]", "roleIds": [ "[Role-UUID]" ] }'

 

Deploy Edge modules

  • Render Ephemeral Properties

At this step we populate the Ephemeral Properties template file for every edge-* module found in the github.com/folio-org/platform-complete/snapshot/install.json file.

As an example for rendering, we have a properties file that bundles the module's tenant and its admin credentials with the respective capabilities.

  • Create config map for every edge-* module

Completed Ephemeral Properties files have to be stored in the cluster namespace as ConfigMaps:

kubectl create configmap edge-inn-reach-ephemeral-properties --namespace=eureka --from-file=./edge-inn-reach-ephemeral-properties --save-config
kubectl create configmap edge-courses-ephemeral-properties --namespace=eureka --from-file=./edge-courses-ephemeral-properties --save-config
kubectl create configmap edge-oai-pmh-ephemeral-properties --namespace=eureka --from-file=./edge-oai-pmh-ephemeral-properties --save-config
...long list of other edge modules...
  • Deploy edge-* modules to cluster namespace

At this point we deploy the set of edge-* modules (see the install.json file) to the cluster namespace:

helm repo add folio-helm-v2 https://repository.folio.org/repository/folio-helm-v2/
helm repo update folio-helm-v2
helm install edge-inn-reach --namespace=eureka -f ./values/edge-inn-reach.yaml folio-helm-v2/edge-inn-reach
helm install edge-courses --namespace=eureka -f ./values/edge-courses.yaml folio-helm-v2/edge-courses
helm install edge-oai-pmh --namespace=eureka -f ./values/edge-oai-pmh.yaml folio-helm-v2/edge-oai-pmh
...long list of other edge modules...

P.S. On Eureka-based environments, edge-* modules should point to the Kong endpoint; the Terraform config snippet below is used across all edge-* modules as a source of additional env vars.

resource "rancher2_secret" "eureka-edge" {
  name         = "eureka-edge"
  count        = var.eureka ? 1 : 0
  project_id   = rancher2_project.this.id
  namespace_id = rancher2_namespace.this.id
  data = {
    OKAPI_HOST = base64encode("kong-${rancher2_namespace.this.id}")
    OKAPI_PORT = base64encode("8000")
  }
}

Perform Consortia Deployment (if required)

REST API Docs for “POST /consortia“ endpoint.

In case you decided to deploy Consortia as a separate application via its own application descriptor app-consortia.template.json,

you should consider using the related folio-org/app-consortia repository on GitHub to achieve your goals.

  • Set up a Consortia Deployment with the given tenants

    • Create a consortium deployment instance according to the docs for the consortia REST API call to save the consortium configuration.

      curl -X POST --location 'https://kong.example.org/consortia' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: consortium' \
        --data '{ "name": "consortium", "id": "[Consortium-UUID]" }'
    • Add Consortia Central Tenant

      curl -X POST --location 'https://kong.example.org/consortia/[Consortium-UUID]/tenants' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: consortium' \
        --data '{ "id": "consortium", "name": "Central office", "code": "MCO", "isCentral": true }'
    • Add Consortia Institutional Tenant

      curl -X POST --location 'https://kong.example.org/consortia/[Consortium-UUID]/tenants?adminUserId=[Admin-User-UUID]' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: consortium' \
        --data '{ "id": "college", "name": "college", "code": "COL", "isCentral": false }'

Perform indexing on Eureka resources

There is a comprehensive documentation piece on Search Indexing that we would highly recommend walking through to learn this magic more closely.

  • Re-create search index for authority resource

    • Have a look at the existing docs for the Resource reindex REST API call to initiate a reindex of the authority records (the /search/index/inventory/reindex endpoint):

      curl -X POST --location 'https://kong.example.org/search/index/inventory/reindex' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: diku' \
        --data '{ "recreateIndex": true, "resourceName": "authority" }'
    • Monitoring reindex process

      • It is possible to monitor the indexing process with the getReindexJob REST API call. To check how many records are published to the Kafka topic, we may use the following command:

        curl -X GET --location 'https://kong.example.org/authority-storage/reindex/[reindex_job_id]' \
          --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
          --header 'x-okapi-tenant: diku'

Here reindex_job_id is the ID returned by the /search/index/inventory/reindex endpoint in the previous step.
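The check above can be repeated until the job finishes. The sketch below shows the shape of such a polling loop; get_job_status is a stub standing in for the getReindexJob curl call, and the status strings are illustrative assumptions, not confirmed values of the API:

```shell
# Stub standing in for the getReindexJob call above, i.e. fetching
# https://kong.example.org/authority-storage/reindex/[reindex_job_id]
# and extracting the job status field from its JSON response.
get_job_status() {
  echo "Ids published"
}

# Poll every 10 seconds until a terminal status is reported (illustrative values)
while true; do
  STATUS=$(get_job_status)
  echo "reindex job status: $STATUS"
  case "$STATUS" in
    "Ids published"|"Id publishing failed") break ;;
  esac
  sleep 10
done
```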

  • Indexing of instance resources

    • First, check the related docs for the Full Inventory Records reindex REST API call to initiate the full reindex for inventory instance records (the /search/index/instance-records/reindex/full endpoint):

      curl -X POST --location 'https://kong.example.org/search/index/instance-records/reindex/full' \
        --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA...' \
        --header 'Content-Type: application/json' \
        --header 'x-okapi-tenant: diku' \
        --data '{}'

Configure Edge modules

  • Create Eureka Users for Eureka UI