Eureka environments deployment known issues - AI-generated KB for RAG purposes.

## Eureka Environments Deployment: Known Issues & Resolutions

### 1. Deployment Errors: Missing Dependencies & Module Enablement Failures

* **Issue:** Errors such as `RequestValidationException: Missing dependencies found...` during tenant enablement; failures when enabling modules out of dependency order or using missing images.
* **Resolution:**
  + Validate dependencies using the [FOLIO API Dependencies Tool](https://dev.folio.org/folio-api-dependencies/).
  + Deploy modules in this order: Kong → Keycloak → mgr-* modules → tenant modules.
  + Wait for all pods/modules to be fully ready before beginning tenant enablement and entitlement.
  + If issues persist, inspect logs and confirm module versions in the platform-complete repository.

*(Source: Kubernetes Example Deployment, RANCHER-1817, RANCHER-1860, Updated: 2025-07-30)*

### 2. Resource Limitations & Infrastructure Problems

* **Issue:** `InternalServerErrorException error 500: Connection refused` or doPostTenant/entitlement failures related to CPU/RAM exhaustion, resulting in throttling or pod eviction.
* **Resolution:**
  + Allocate sufficient CPU/RAM quotas for each node.
  + Monitor pod resources in Rancher/Grafana, and assign module-level resource limits to avoid rebalancing.
  + For persistent problems, increase node size or cluster resources.

*(Source: Kubernetes Example Deployment, Build/Deploy UI for Eureka environment, Updated: 2025-07-30)*

### 3. Entitlement and Kafka Sidecar Errors

* **Issue:** Errors such as `The module is not entitled on tenant ...`, often caused by sidecar-Kafka misalignment or lost connections.
* **Resolution:**
  + Monitor `kafka-ui` for Kafka and sidecar health.
  + Restart modules if sidecars lose the Kafka connection; retry entitlement once everything is fully healthy.
  + Ensure the correct deployment and redeployment sequence for modules and mgr-tenant-entitlement.

*(Source: Kubernetes Example Deployment, Updated: 2025-07-30)*

### 4. Kong Upstream Timeout/504 Errors

* **Issue:** Long-running requests via Kong result in `504: Upstream server is timing out`.
* **Resolution:**
  + Increase Kong/Nginx timeout values using environment variables such as `KONG_NGINX_HTTP_KEEPALIVE_TIMEOUT`, `KONG_NGINX_UPSTREAM_KEEPALIVE`, `KONG_NGINX_HTTP_KEEPALIVE_REQUESTS`.
  + For more details, see the [Kong documentation](https://docs.konghq.com/gateway/latest/production/kong-conf/).

*(Source: Kubernetes Example Deployment, Updated: 2025-07-30)*

### 5. Issues with Module Enablement/Updating

* **Issue:** New module versions (e.g., consortia, mod-users-bl) fail to enable due to blocked PRs or incomplete descriptor changes.
* **Resolution:**
  + Confirm descriptor PRs are merged and available.
  + Wait for upstream snapshot builds; only enable modules after all dependencies are deployed.

*(Source: RANCHER-1868, RANCHER-1880, Updated: 2024-11-05)*

### 6. UI and Build Pipeline Issues

* **Issue:** UI build failures, inconsistent UI flow after changes, or snapshot deployment errors.
* **Resolution:**
  + Use the [buildAndDeployUIEureka](https://jenkins.ci.folio.org/job/folioDevTools/job/uiManagement/job/buildAndDeployUIEureka/) pipeline with the correct parameters.
  + Investigate Jenkins pipeline failures, fix parameters, and manually rebuild/reset if necessary.

*(Source: Build/Deploy UI for Eureka environment, Updated: 2025-03-03; RANCHER-2181, Updated: 2025-03-12)*

### 7. Environment Inaccessibility or Partial Failure

* **Issue:** Environment/database/UI inaccessible after a Jenkins job or pod restart.
* **Resolution:**
  + After a failed create/start env job, delete the old namespace fully using [deleteNamespace](https://jenkins.ci.folio.org/job/folioRancher/job/manageNamespace/job/deleteNamespace/) before re-creation.
  + For persistent access issues, use [recreateTeamNamespace](https://jenkins.ci.folio.org/job/folioRancher/job/manageNamespace/job/recreateTeamNamespace/) and monitor pods in the Rancher UI.
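The pod monitoring described above can also be done from the command line; a minimal sketch, assuming `kubectl` access to the cluster (the function name and the example namespace are illustrative):

```bash
# List pods that are not yet Running/Succeeded in a namespace -- anything
# returned here should be investigated before retrying enablement or access.
not_ready_pods() {
  kubectl get pods -n "$1" \
    --field-selector=status.phase!=Running,status.phase!=Succeeded -o name
}

# Example: not_ready_pods volaris
```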
*(Source: How to recreate Eureka Rancher environment, RANCHER-1990, Updated: 2025-03-06; Updated: 2025-01-09)*

### 8. Login Issues

* **Issue:** Users cannot log in, even when the environment and modules appear enabled.
* **Resolution:**
  + Run the [loginIssueFix](https://jenkins.ci.folio.org/job/folioDevTools/job/userManagement/job/loginIssueFix/) pipeline. Wait 5 minutes after it completes before retrying login. See detailed instructions here.

*(Source: Fix Eureka login issue, Updated: 2025-02-17)*

### 9. Capabilities/Capability Sets Missing After Entitlement

* **Issue:** Unable to assign capabilities immediately after entitlement due to Kafka processing lag.
* **Resolution:**
  + Wait for mod-roles-keycloak to consume the relevant Kafka queue, then proceed with assignments.

*(Source: Kubernetes Example Deployment, Updated: 2025-07-30)*

### 10. Sprint Testing/Module Update Issues

* **Issue:** Environments do not reflect the correct release version or do not include the expected fixes/features for testing.
* **Resolution:**
  + Use the [deployModulesFromJson](https://jenkins-aws.indexdata.com/job/folioRancher/job/folioDevTools/job/moduleDeployment/job/deployModulesFromJson/) pipeline before/after environment provisioning to update module versions as needed.

*(Source: Sprint testing folio-testing-sprint, Updated: 2024-12-13)*

### 11. 🚨 CRITICAL: Existing Namespace Conflict During Environment Creation

* **Issue:**
  - The `createNamespaceFromBranch` pipeline fails with existing-environment conflicts.
  - The error occurs even if previous deployment attempts failed.
  - The namespace remains in an inconsistent state, preventing new deployments.
  - The build console shows resource conflicts or allocation issues.
* **Symptoms:**
  - The Jenkins job fails during the initial namespace setup phase.
  - Console logs indicate existing resources blocking creation.
  - The environment appears partially deployed or in an error state.
  - Subsequent creation attempts continue to fail.
* **⚠️ MANDATORY Resolution Workflow:** **ALWAYS follow this sequence before any environment creation attempt:**
  1. **Check current namespace status:**
     - Navigate to [Projects(Namespaces)](https://folio-org.atlassian.net/wiki/spaces/FOLIJET/pages/1396467/Projects+Namespaces).
     - Verify whether the target namespace exists or is in an error state.
  2. **Delete the existing environment (REQUIRED):**
     - Use [deleteNamespace](https://jenkins.ci.folio.org/job/folioRancher/job/manageNamespace/job/deleteNamespace/).
     - Parameters: correct CLUSTER and NAMESPACE values.
     - **⏱️ Wait for complete deletion** before proceeding.
  3. **Verify clean state:**
     - Confirm the namespace no longer appears in the Rancher UI.
     - Check that all associated resources are removed.
  4. **Retry environment creation:**
     - Run [createNamespaceFromBranch](https://jenkins.ci.folio.org/job/folioRancher/job/manageNamespace/job/createNamespaceFromBranch/) with the original parameters.
* **⚡ Quick Alternative:** Use [recreateTeamNamespace](https://jenkins.ci.folio.org/job/folioRancher/job/manageNamespace/job/recreateTeamNamespace/), which automatically handles the delete+create sequence.
* **🔄 Best Practice:**
  - **NEVER** attempt environment creation without first checking for existing namespaces.
  - **ALWAYS** delete before recreating, even if the previous creation "failed".
  - This applies to ALL environment types (team, feature branch, etc.).
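The delete-verify-retry sequence above can be sketched from the command line as well; this is an illustrative outline assuming direct `kubectl` access (the Jenkins jobs themselves must still be run from the Jenkins UI, and the function name is an assumption):

```bash
# Delete a leftover namespace and block until it is fully gone before
# re-running createNamespaceFromBranch. The namespace name is a placeholder.
purge_namespace() {
  local ns="$1"
  if kubectl get namespace "$ns" >/dev/null 2>&1; then
    kubectl delete namespace "$ns"
    # Wait until the namespace object is actually removed
    # (finalizers can delay this well past the delete call).
    kubectl wait --for=delete "namespace/$ns" --timeout=600s
  fi
}

# Example: purge_namespace volaris-2nd, then retry environment creation.
```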
* **Example Case Reference:**
  - Build #folio-edev-volaris-2nd.6280 failed due to an existing namespace conflict.
  - Resolution: Delete the volaris-2nd namespace → Wait → Retry creation → SUCCESS.

*(Source: Jenkins Console Analysis build #6280, Slack thread C08FXR6L6G5/1760449472.893459, Updated: 2025-10-07)*

### 12. Jenkins Module Deployment Stuck at Helm Deploy State

* **Symptoms:**
  - Jenkins deployModuleFromFeatureBranchEureka jobs hang at the "helm deploy" step for 30+ minutes (a normal deployment takes ~4 minutes).
  - The job shows "running" status but makes no actual progress through the deployment steps.
  - Module pods may show unfinished deployments or problematic states in Rancher.
* **Root Causes:**
  - Resource pressure or pod evictions in the namespace preventing the module/sidecar from reaching the Ready state.
  - Jenkins agent/session issues during long waits that leave pipelines "running" without progress.
  - Heavy startup/migration or OOM on modules prolonging rollout beyond expected timeouts.
  - Telepresence traffic-agents interfering with module deployment and tenant entitlement.
  - Problematic modules such as mod-service-interaction causing cascading deployment issues.
* **Immediate Diagnostic Steps (5-10 minutes):**
  1. **Check Rancher workload status:**
     - Go to Rancher UI → folio-edev → namespace "your-namespace" → Workloads.
     - Find the deployed module pods (including `folio-module-sidecar`).
     - Verify pods are Running/Ready and check Events for:
       - Image pull errors
       - Readiness probe failures
       - OOM kills or eviction messages
       - Restart counts
  2. **Verify Jenkins parameters:**
     - Ensure AGENT = `jenkins-agent-java17` (NOT `rancher`).
     - CONFIG_TYPE should be appropriate for a dev environment.
     - CLUSTER and NAMESPACE correctly selected.
  3. **Clean up evicted pods:**
     ```bash
     kubectl delete pod --field-selector="status.phase==Failed" -n your-namespace
     ```
* **Resolution Steps:**
  - **Option 1: Re-run the job (most common fix)**
    - If there is no pod activity/events during the hang → simply re-run the Jenkins job.
    - This often clears Jenkins agent session issues.
  - **Option 2: Address resource issues**
    - If there are Evicted/CrashLoopBackOff pods → clean up the bad pods and re-run.
    - If OOM/resource pressure is evident → use an appropriate CONFIG_TYPE or increase limits.
  - **Option 3: Remove Telepresence agents**
    - If using Telepresence:
      ```bash
      telepresence list -a
      telepresence uninstall module-name  # if present
      ```
  - **Option 4: Environment recreation (last resort)**
    - If the namespace appears broadly unhealthy → use the recreateTeamNamespace pipeline.
    - Then retry the deploy.
* **Prevention Notes:**
  - The deploy job has seen iterative hardening with readiness checks and cleanup around the Helm stages.
  - Repeated waits of more than 10-15 minutes usually trace back to namespace health or resource constraints.
  - Start the environment before deployment to ensure a healthy baseline.
* **Validated Resolution Example:**
  - Case: firebird namespace deployment stuck after dropping mod_service_interaction__system.
  - Solution: Started the environment + re-ran the failed job → SUCCESS.
  - No special actions required beyond the basic troubleshooting steps.

*(Source: Slack thread C017RFAGBK2/1759250204.744059, RANCHER-1970, RANCHER-1996, RANCHER-1805, Updated: 2025-10-01)*

### 13. mod-search Feature Branch Deployment Failures with SIMPLIFIED Workaround 🔍

* **Issue:**
  - `deployModuleFromFeatureBranchEureka` fails for mod-search feature branches.
  - The deployment completes the build stages successfully but fails during the Helm deployment phase.
  - The module appears to have runtime/startup issues causing deployment timeouts.
  - The issue persists even after environment recreation.
* **Symptoms:**
  - ✅ Module build completes successfully (e.g., mod-search-6.0.0-SNAPSHOT.ef07499).
  - ✅ Docker image pushed to the ECR repository successfully.
  - ✅ Application descriptor updated (e.g., app-platform-complete-3.0.1-SNAPSHOT.6299).
  - ❌ Deployment fails during the Helm deployment phase.
  - Module pod logs show runtime errors or startup failures.
  - Traditional troubleshooting (environment restart, re-running the job) may not resolve it.
* **Root Cause:**
  - mod-search is part of app-platform-complete, making updates complex.
  - Feature branch changes may conflict with the application descriptor flow.
  - Runtime issues in the specific module version being deployed.
  - Database migration or capability changes requiring special handling.
* **✅ PROVEN Resolution - SIMPLIFIED Deployment:**
  1. **Re-run the failed Jenkins job** with the **`SIMPLIFIED=true`** parameter.
  2. **Parameters to use:**
     - `SIMPLIFIED=true` (bypasses the complex application descriptor flows).
     - Keep all other parameters the same (MODULE_NAME, MODULE_BRANCH, etc.).
  3. **Expected outcome:** The deployment completes successfully within the normal timeframe.
* **Why SIMPLIFIED Works:**
  - Bypasses the complex application descriptor registration and dependency resolution.
  - Directly deploys the module without full platform-complete integration.
  - Suitable for feature branches with DB updates or capability changes.
  - Avoids conflicts with existing application descriptor versions.
* **When to Use SIMPLIFIED:**
  - ✅ Feature branch deployments that fail with the standard approach.
  - ✅ DB schema changes or migrations in the module.
  - ✅ Capability/interface changes that don't affect dependencies.
  - ❌ Major capability changes that affect other modules (use the standard approach).
* **Validated Success Case:**
  - **Environment:** Vega Rancher (folio-edev)
  - **Module:** mod-search feature branch FAT-21606
  - **Initial failure:** Jenkins build #3877 with the standard deployment
  - **Resolution:** SIMPLIFIED=true deployment → SUCCESS
  - **Confirmation:** "deploy with SIMPLIFIED option was successful" ✅
* **Prevention & Best Practices:**
  - For mod-search and other app-platform-complete modules, consider SIMPLIFIED first for feature branches.
  - Monitor module startup logs to identify runtime issues early.
  - Start the environment before deployment to ensure a healthy baseline.
  - Use standard deployment for release/snapshot versions, SIMPLIFIED for development branches.

*(Source: Slack thread C017RFAGBK2/1760536397.064279, Jenkins build #3877, Updated: 2025-12-16)*

### 14. 🔐 SSO Keycloak Configuration: Alternative User Matching Methods

* **Issue:**
  - SSO configuration fails when the IDP doesn't provide the `external_system_id` attribute.
  - The "Detect existing FOLIO broker user" execution requires an `externalId` attribute that may not be available.
  - Alternative methods are needed to match existing Keycloak users with IDP identities.
* **Symptoms:**
  - Keycloak SSO authentication fails with "existing user not found" errors.
  - The IDP provides different attributes (uid, email, etc.) but not `external_system_id`.
  - Users cannot authenticate even though they exist in Keycloak.
* **✅ PROVEN Solutions - Alternative Matching Methods:**

  **Option 1: Username-Based Matching (Stanford IDP Success Case)**
  1. **Use the "Detect existing broker user" execution** instead of "Detect existing FOLIO broker user".
  2. **Configure the Identity Provider Mapper:**
     - **Mapper Type:** "Attribute Importer"
     - **Attribute Name:** "uid" (or the IDP's username attribute)
     - **User Attribute Name:** `username` (instead of `external_system_id`)
     - **Sync mode override:** "Force"
  3. **Identity Provider Settings:**
     - Set **Principal type** to "Subject NameID" or "Attribute [Name]".
     - Ensure the IDP sends a username-like identifier.

  **Option 2: Email-Based Matching (Recommended for most IDPs)**
  1. **Configure the IdP Mapper:**
     - **Name:** "email"
     - **Mapper type:** "Attribute Importer"
     - **Attribute Name:** "EmailAddress" (or the IDP's email attribute)
     - **User Attribute Name:** "email"
     - **Sync mode override:** "Force"
  2. **Identity Provider Settings:**
     - **Principal type:** "Attribute [Name]"
     - **Principal attribute:** "EmailAddress"

  **Option 3: Custom Attribute Matching (Advanced)**
  - Use a custom Keycloak extension for flexible attribute matching.
  - Reference: [FOLIO Keycloak Extensions](https://github.com/folio-org/folio-keycloak/tree/KEYCLOAK-14-investigate-options-for-automatic-creation-of-identity-provider-links-in-keycloak-sso)
  - Allows matching based on any custom user attributes.
* **Key Configuration Requirements:**
  1. **Disable user creation:** Set "Allow create" to "off" in the Identity Provider settings.
  2. **Use the correct authentication flow:** Set "First login flow" to the custom "Detect and Set Existing User" flow.
  3. **Test thoroughly:** Decode the SAML response to verify the available attributes.
  4. **Verify attribute mapping:** Ensure IDP attribute names match the mapper configuration.
* **Validated Success Example:**
  - **Environment:** Stanford IDP integration
  - **Solution:** Used "Detect existing broker user" + username mapping
  - **IDP Attribute:** `uid` mapped to Keycloak `username`
  - **Result:** SSO authentication successful ✅
* **Troubleshooting Tips:**
  - Examine the SAML response to identify the available IDP attributes.
  - Test with different Principal type settings.
  - Verify that existing Keycloak user attributes match the IDP values.
  - Email-based matching is typically the most reliable if the IDP provides email.
* **Best Practices:**
  - Email-based matching is recommended for most scenarios.
  - Username matching is suitable when the IDP provides a consistent username attribute.
  - Always test the authentication flow thoroughly before production deployment.
  - Document IDP-specific attribute mappings for future reference.

*(Source: Slack thread C07SL94PAPR/1762209775.358279, SSO Configuration documentation, Updated: 2025-01-20)*

### 15. Best Practices & Troubleshooting

* Always monitor Rancher pod status post-deployment, acting quickly to address any evictions or failures.
* For unresolved or critical issues, create a Rancher Jira and inform the Kitfox team in `#folio-rancher-support` Slack (Reporting: instructions here).
* For routine problems (module update, entitlement, login, UI), use the recommended Jenkins pipelines, and consult Namespace useful info & tools and Grafana for troubleshooting.
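The advice above to act quickly on evictions can be scripted; a minimal sketch, assuming `kubectl` access (the function name and example namespace are illustrative — evicted pods show phase Failed with reason Evicted, and can then be removed with the cleanup command from section 12):

```bash
# Print the names of evicted pods in a namespace so they can be
# inspected and cleaned up before they mask newer failures.
evicted_pods() {
  kubectl get pods -n "$1" \
    -o jsonpath='{range .items[?(@.status.reason=="Evicted")]}{.metadata.name}{"\n"}{end}'
}

# Example: evicted_pods firebird
```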
**References:**

- (Source: Kubernetes Example Deployment, Updated: 2025-07-30)
- (Source: Rancher FAQ, Updated: 2025-07-30)
- (Source: How to recreate Eureka Rancher environment, Updated: 2025-03-06)
- (Source: Fix Eureka login issue, Updated: 2025-02-17)
- (Source: How to report an issue on Rancher environment, Updated: 2025-03-01)
- (Source: Build/Deploy UI for Eureka environment, Updated: 2025-03-03)
- (Source: Create Eureka environment, Updated: 2025-03-01)
- (Source: Jira Tickets including RANCHER-1990 and others, Updated: 2025-01-09)

---

## Additional Known Issues and Workarounds (Consolidated)

| Category | Symptoms / Errors | Likely Root Cause | Resolution / Workaround | Source |
| --- | --- | --- | --- | --- |
| Tenant enablement fails (dependencies) | `RequestValidationException: Missing dependencies found ...` | Descriptor or deployment order issues | Validate dependencies; deploy in order: Kong → Keycloak → mgr-* → tenant modules; wait for pods to be Ready | Kubernetes Example Deployment; RANCHER-1817; RANCHER-1860 |
| doPostTenant connection refused | Connection refused: localhost:8080/8081 in flow logs | Sidecar routing/port mismatch, module not bound | Ensure sidecar/env vars are correct (OKAPI_URL in *-keycloak; SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION set); verify 8082 mapping; restart | RANCHER-1860; RANCHER-1666; RANCHER-1652; RANCHER-1681; RANCHER-1667 |
| Keycloak not ready / DB not populated | Pod not listening on 8080; schema missing | Wrong image/DB config or secrets | Use the folio-keycloak image; verify keycloak-credentials (KC_DB_*); ensure the external DB is reachable; keep resource limits adequate | RANCHER-1497; RANCHER-1498; RANCHER-1500 |
| Edge modules discovery/ingress | Edge routes broken; 404; missing wildcards | Edge pointing to sidecar; ingress path wrong; secrets mapping | Point edge-* to the service, not the sidecar; fix secret mapping; disable okapi integration for edge; ensure TLS/certs; add '*' to the dcbService path | RANCHER-1747; RANCHER-1939; RANCHER-1769; RANCHER-1948; RANCHER-1427 |
| Random 502 during publishing/descriptors | 502/504 via Kong during discovery | Proxy timeouts; upstream not ready | Increase Kong/NGINX timeouts; retry when modules are healthy | Kubernetes Example Deployment; RANCHER-1764 |
| k8s connection resets | Connection reset errors across the env | Underlying infra/network | Enable KC DEBUG logs; analyze ALB logs; retry after stabilization | RANCHER-1690; RANCHER-1671 |
| DNS propagation in daily snapshot | Env URLs not resolvable immediately | Jenkins/propagation lag | Apply the pipeline fix; wait/trigger a retry | RANCHER-1741 |
| High CPU / scaling pressure | Pods throttled/evicted; doPostTenant fails with 500 | Insufficient node capacity | Increase quotas; scale the ASG; set module resources | RANCHER-1742; Kubernetes Example Deployment |
| ErrImagePull for mgr-* / sidecar / *-keycloak | Pull from ECR fails | Wrong registry reference | Switch to folioorg DockerHub images; update charts/values; add DockerHub auth to helper pods | RANCHER-2033; RANCHER-1654; RANCHER-1657; RANCHER-2010 |
| Module cannot start (OOM) | mod-inventory OOM/fails to start | Insufficient memory limits | Raise module memory requests/limits per config type | RANCHER-1889 |
| Env creation/update pipeline failures | createNamespaceFromBranch/deployModuleFromFeatureBranch fails | Various (DB failure, Jenkins glitches) | Re-run; if it persists, delete + recreate the env; verify the DB; review pipeline logs | RANCHER-1977; RANCHER-2085; RANCHER-2014; RANCHER-2149 |
| UI bundle build fails | buildAndDeployUIEureka pipeline broken | Pipeline step regression | Investigate/patch the pipeline; rebuild | RANCHER-2181; Build/Deploy UI for Eureka environment |
| Required env vars missing | mgr/*-keycloak connectivity issues | OKAPI_URL/PLATFORM not set | Set OKAPI_URL for *-keycloak; ensure PLATFORM=eureka on the listed modules | RANCHER-1666; RANCHER-1652; RANCHER-1829 |
| Sidecar unknown requests routing | Requests misrouted to the admin URL | Wrong env mapping | Use SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION with the Kong inner URL; restore KONG_ADMIN_URL | RANCHER-1681 |
| Security on POST to mgr-* | 401/403 on POST | Authorization enforcement enabled | Obtain a token; follow the security env setup | Kubernetes Example Deployment (Deploying EUREKA on Kubernetes) |
| Jenkins module deployment stuck at helm | Job hangs 30+ minutes at the helm deploy step; no progress | Resource pressure, Jenkins agent issues, pod evictions, problematic modules | Start the environment first; re-run the job; check pod events; clean up evicted pods; address resource issues | Slack C017RFAGBK2/1759250204.744059; RANCHER-1970; RANCHER-1996; RANCHER-1805 |
| **Existing namespace conflict** | **createNamespaceFromBranch fails; resource conflicts** | **Previous environment not properly deleted** | **MANDATORY: delete the existing namespace first using deleteNamespace, wait for completion, then retry creation** | **Jenkins build #6280; Slack C08FXR6L6G5/1760449472.893459** |
| **mod-search feature branch deployment fails** | **deployModuleFromFeatureBranchEureka fails during the Helm phase; runtime/startup issues** | **Complex app-platform-complete integration conflicts; feature branch compatibility** | **Use the SIMPLIFIED=true parameter to bypass application descriptor flows; proven successful for DB/capability changes** | **Slack C017RFAGBK2/1760536397.064279; Jenkins build #3877** |
| **SSO Keycloak user matching fails** | **Authentication fails when the IDP doesn't provide external_system_id; "existing user not found" errors** | **The IDP provides different attributes (uid, email) instead of external_system_id** | **Use username-based matching with "Detect existing broker user" + uid→username mapping, or email-based matching** | **Slack C07SL94PAPR/1762209775.358279; Stanford IDP success case** |

### Notes

- For step-by-step environment (re)creation, see:
  - Create: Create Eureka environment
  - Delete: Delete Eureka environment
  - Recreate: How to recreate Eureka Rancher environment
- For UI build/update: Build/Deploy UI for Eureka environment
- For login problems: Fix Eureka login issue
- For troubleshooting endpoints/hostnames/TLS: Rancher FAQ and Hostnames Configuration

### 16. mod-serials-management schema reset workaround

- Workaround:
  1. In Rancher, scale the `mod-serials-management` deployment to 0 replicas.
  2. Using pgAdmin, delete (CASCADE) the `mod_serials_management__system` database schema.
  3. Scale `mod-serials-management` back up to 1 replica.

### 17. mod-service-interaction 502 Bad Gateway during deployModuleFromFeatureBranchEureka (schema reset workaround)

- Symptoms:
  - Jenkins deployModuleFromFeatureBranchEureka fails with HTTP 502: "An invalid response was received from the upstream server" (org.folio.utilities.RequestException).
- Likely root cause:
  - A stale or inconsistent PostgreSQL schema `mod_service_interaction__system` causing the module to fail handling requests via Kong during the pipeline steps.
- Workaround (validated in edev/firebird):
  1. In Rancher, scale the `mod-service-interaction` deployment to 0 replicas.
  2. In pgAdmin, drop (CASCADE) the schema `mod_service_interaction__system`.
  3. Scale `mod-service-interaction` back up to 1 replica.
  4. Wait until the pod is Running/Ready.
  5. Re-run the failed Jenkins job.
- Notes:
  - Use the [Namespace useful info & tools](https://folio-org.atlassian.net/wiki/spaces/FOLIJET/pages/508690454/Namespace+useful+info+tools) page to construct the pgAdmin URL and find credentials for the target namespace.
  - Scope: non-production/testing environments only.
- Source: This Slack thread (2025-09-09); Namespace useful info & tools.
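The two schema-reset workarounds above follow the same pattern; a minimal sketch, assuming `kubectl` and `psql` access to the namespace's database (pgAdmin works equally well; `DB_URL`, the function name, and the placeholder arguments are assumptions). Non-production/testing environments only:

```bash
# Scale the module down, drop its stale __system schema, scale it back up,
# then wait for the pod to become Ready before re-running the Jenkins job.
reset_module_schema() {
  local ns="$1" module="$2" schema="$3"
  kubectl -n "$ns" scale deployment "$module" --replicas=0
  # DB_URL is a placeholder for the namespace's PostgreSQL connection string.
  psql "$DB_URL" -c "DROP SCHEMA ${schema} CASCADE;"
  kubectl -n "$ns" scale deployment "$module" --replicas=1
  kubectl -n "$ns" rollout status deployment "$module"
}

# Example: reset_module_schema firebird mod-service-interaction mod_service_interaction__system
```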

### 18. Deploying Locally Built Container Images in Eureka Environments

* **Issue:**
  - Need to deploy a locally built module container image to a Rancher environment.
  - Local Maven has specific versions of dependent modules that are not available in the standard repositories.
  - Uncertainty about whether Eureka supports direct container image swapping like OKAPI environments.
* **Symptoms:**
  - Developer has a locally built container but needs to deploy it to a Rancher environment.
  - Standard Jenkins pipelines don't accept custom container image parameters.
  - Module dependencies require specific versions only available locally.
* **PROVEN Solutions:**

  **Option 1: Direct Container Image Swap (Simplest - Confirmed Working)**
  1. **Build your container locally** using the FOLIO community Dockerfile.
  2. **Push it to a public container registry** (Docker Hub, GitHub Container Registry, etc.).
     - The Rancher cluster must have access to the registry.
     - No access to folioci required - any public repo works.
  3. **Edit the deployment in the Rancher UI:**
     - Navigate to your namespace in Rancher.
     - Find the module deployment.
     - Edit the container image reference to point to your custom image.
     - **Important:** This works the same as in OKAPI environments.

  **Option 2: Use Jenkins Build Pipeline with Custom Branch**
  - Push your changes to a feature branch.
  - Use the buildAndPushModule + deployModuleFromFeatureBranchEureka pipeline combination.
  - Ensure dependencies are available in your branch.
* **Important Considerations:**
  - **No module descriptor changes needed:** Direct container swap works without descriptor updates.
  - **Public registry support:** Any publicly accessible container registry works.
  - **Temporary changes:** Environment automation will overwrite changes during the next provisioning.
  - **Module descriptor compatibility:** If descriptor changes are needed, use Kong/Keycloak API updates.
* **Validated Success Case:**
  - **Environment:** Eureka Rancher environments
  - **Method:** Direct container image swap in the Rancher UI
  - **Outcome:** "No harm in trying besides potentially breaking the module/cluster" - the method works reliably
  - **Confirmation:** Same container swapping capability as OKAPI environments
* **Best Practices:**
  - Test in a development environment first.
  - Ensure the container follows FOLIO standards.
  - Document the custom image version for team reference.
  - Consider pushing changes to a feature branch for reproducible deployments.
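The direct image swap in Option 1 can also be done with `kubectl` instead of the Rancher UI; a hedged sketch, assuming the container inside the deployment has the same name as the module (typical, but verify in your chart) and that the image has already been pushed to a registry the cluster can reach (the function name and image names are illustrative):

```bash
# Point an existing module deployment at a custom, locally built image.
# Remember: the next environment provisioning will overwrite this change.
swap_module_image() {
  local ns="$1" module="$2" image="$3"
  kubectl -n "$ns" set image "deployment/$module" "$module=$image"
  kubectl -n "$ns" rollout status "deployment/$module" --timeout=300s
}

# Example (image name is illustrative):
#   docker build -t docker.io/myuser/mod-example:local-test .
#   docker push docker.io/myuser/mod-example:local-test
#   swap_module_image volaris mod-example docker.io/myuser/mod-example:local-test
```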