BusyBee Developer Env Setup

This document outlines the steps to set up a development environment with a smaller resource footprint than the standard FOLIO Vagrant box. The resulting environment is sufficient to run Karate tests.

Requirements

  • Docker Engine & Docker Compose are available on the host machine.

    This guide is written with Rancher Desktop in mind. Other Docker distributions can work, but they may surface issues that have not been triaged yet.

    • If Rancher Desktop is used on Windows, network tunneling must be enabled via Preferences > WSL > Network.
  • Access to the BusyBee source code at https://github.com/Olamshin/busybee or a similar repository.

Base Services

Every FOLIO cluster needs supporting services such as Postgres and Kafka to function. The docker-compose file below instantiates these services. It is a decent starting point, and its default credentials are used throughout this document.

Copy the file below into a directory somewhere. Make sure it is named docker-compose.yml

docker-compose.yml
services:
  zookeeper:
    image: bitnami/zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ALLOW_ANONYMOUS_LOGIN: "yes"
    ports:
      - 2181:2181

  kafka:
    image: bitnami/kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
      - 9092:9092
    environment:
      KAFKA_CFG_LISTENERS: INTERNAL://:9092,LOCAL://:29092
      KAFKA_CFG_ADVERTISED_LISTENERS: INTERNAL://host.docker.internal:9092,LOCAL://kafka:29092
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: LOCAL:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CFG_NODE_ID: 1
      KAFKA_CFG_LOG_RETENTION_BYTES: -1
      KAFKA_CFG_LOG_RETENTION_HOURS: -1

  postgres:
    image: postgres:16.4-alpine
    container_name: postgres
    mem_limit: 2g
    environment:
      POSTGRES_PASSWORD: folio_admin
      POSTGRES_USER: folio_admin
      POSTGRES_DB: okapi_modules
    command: -c max_connections=200 -c shared_buffers=512MB -c log_duration=on -c log_min_duration_statement=0ms -c shared_preload_libraries=pg_stat_statements -c jit=off
    ports:
      - 5432:5432

  minio:
    image: 'minio/minio'
    command: server /data --console-address ":9001"
    ports:
      - 9000:9000
      - 9001:9001

  createbuckets: # This container will terminate after running its commands to create a bucket in minio
    image: minio/mc
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host add myminio http://host.docker.internal:9000 minioadmin minioadmin;
      /usr/bin/mc rm -r --force myminio/example-bucket;
      /usr/bin/mc mb myminio/example-bucket;
      exit 0;
      "

  okapi:
    image: 'folioci/okapi:latest'
    command: 'dev'
    ports:
      - 9130:9130
    environment: # be careful to leave a space character after every java option
      JAVA_OPTIONS: |-
        -Dhttp.port=9130 
        -Dokapiurl=http://host.docker.internal:9130 
        -Dstorage=postgres 
        -Dpostgres_username=folio_admin 
        -Dpostgres_password=folio_admin 
        -Dpostgres_database=okapi_modules 
        -Dpostgres_host=host.docker.internal 
        -Dhost=host.docker.internal
        -Dport_end=9170 
        -DdockerUrl=tcp://expose-docker-on-2375:2375 
    depends_on:
      - postgres
  
  expose-docker-on-2375:
    image: alpine/socat
    container_name: expose-docker-on-2375
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: "tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock"
    restart: always

  # elasticsearch:
  #   image: 'ghcr.io/zcube/bitnami-compat/elasticsearch:7.17.9'
  #   ports:
  #     - 9300:9300
  #     - 9200:9200
  #   environment:
  #     ELASTICSEARCH_PLUGINS:
  #       "analysis-icu,analysis-kuromoji,analysis-smartcn,analysis-nori,analysis-phonetic" 

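Optionally, the compose file can be syntax-checked before anything is started. This uses the standard Compose CLI validation command and is purely a sanity check:

docker-compose.yml Directory
# Validates docker-compose.yml; prints errors and exits non-zero if the YAML is invalid
docker-compose config --quiet
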
Linux

If the operating system is Linux, at the time of writing, host.docker.internal is not a hostname that is automatically resolvable inside containers. One workaround is to replace every instance of host.docker.internal in the docker-compose.yml file with 172.17.0.1 or your host's IP address, for example with the one-liner below. This thread provides some insight on how to enable host.docker.internal for Linux.
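
A minimal sketch of that substitution (assuming GNU sed and that 172.17.0.1 is the Docker bridge gateway on your machine):

docker-compose.yml Directory
# Replace host.docker.internal with the default Docker bridge gateway address;
# adjust the IP if your bridge network uses a different one.
sed -i 's/host\.docker\.internal/172.17.0.1/g' docker-compose.yml

Alternatively, on Docker Engine 20.10+ the name can be made resolvable by adding extra_hosts: ["host.docker.internal:host-gateway"] to each service, as the Windows variant below does.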

Windows

The docker-compose.yml below is adapted for Windows users; it adds extra_hosts entries so that containers can resolve host.docker.internal, plus a Kafka UI:

docker-compose.yml
# Adapted for Windows users
services:
  kafka-ui:
    image: provectuslabs/kafka-ui
    ports: 
      - 18080:8080
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: host.docker.internal:29092
      KAFKA_CLUSTERS_0_METRICS_PORT: 9997
      DYNAMIC_CONFIG_ENABLED: 'true'
    depends_on:
      - zookeeper
      - kafka
    extra_hosts: ["host.docker.internal:host-gateway"]

  zookeeper:
    image: bitnami/zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ALLOW_ANONYMOUS_LOGIN: "yes"
    ports:
      - 2181:2181
    extra_hosts: ["host.docker.internal:host-gateway"]
 
  kafka:
    image: bitnami/kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
      - 9092:9092
    environment:
      KAFKA_CFG_LISTENERS: INTERNAL://:9092,LOCAL://:29092
      KAFKA_CFG_ADVERTISED_LISTENERS: INTERNAL://host.docker.internal:9092,LOCAL://kafka:29092
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: LOCAL:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CFG_NODE_ID: 1
      KAFKA_CFG_LOG_RETENTION_BYTES: -1
      KAFKA_CFG_LOG_RETENTION_HOURS: -1
    extra_hosts: ["host.docker.internal:host-gateway"]
 
  postgres:
    image: postgres:16.4-alpine
    container_name: postgres
    mem_limit: 2g
    environment:
      POSTGRES_PASSWORD: folio_admin
      POSTGRES_USER: folio_admin
      POSTGRES_DB: okapi_modules
    command: -c max_connections=200 -c shared_buffers=512MB -c log_duration=on -c log_min_duration_statement=0ms -c shared_preload_libraries=pg_stat_statements -c jit=off
    ports:
      - 5432:5432
 
  minio:
    image: 'minio/minio'
    command: server /data --console-address ":9001"
    ports:
      - 9000:9000
      - 9001:9001
 
  createbuckets: # This container will terminate after running its commands to create a bucket in minio
    image: minio/mc
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host add myminio http://host.docker.internal:9000 minioadmin minioadmin;
      /usr/bin/mc rm -r --force myminio/example-bucket;
      /usr/bin/mc mb myminio/example-bucket;
      exit 0;
      "
    extra_hosts: ["host.docker.internal:host-gateway"]
 
  okapi:
    image: 'folioci/okapi:latest'
    command: 'dev'
    ports:
      - 9130:9130
    environment: # be careful to leave a space character after every java option
      JAVA_OPTIONS: |-
        -Dhttp.port=9130
        -Dokapiurl=http://host.docker.internal:9130
        -Dstorage=postgres
        -Dpostgres_username=folio_admin
        -Dpostgres_password=folio_admin
        -Dpostgres_database=okapi_modules
        -Dpostgres_host=host.docker.internal
        -Dhost=host.docker.internal
        -Dport_end=9170
        -DdockerUrl=tcp://expose-docker-on-2375:2375
    extra_hosts: ["host.docker.internal:host-gateway"]
    depends_on:
      - postgres
   
  expose-docker-on-2375:
    image: alpine/socat
    container_name: expose-docker-on-2375
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: "tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock"
    restart: always
 
  # elasticsearch:
  #   image: 'ghcr.io/zcube/bitnami-compat/elasticsearch:7.17.9'
  #   ports:
  #     - 9300:9300
  #     - 9200:9200
  #   environment:
  #     ELASTICSEARCH_PLUGINS:
  #       "analysis-icu,analysis-kuromoji,analysis-smartcn,analysis-nori,analysis-phonetic"

Make sure you add a hostname entry to resolve host.docker.internal correctly on the Windows machine, such as the one below (replace 192.168.0.170 with your machine's IP address):

C:\Windows\System32\drivers\etc\hosts
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost

192.168.0.170 host.docker.internal


Execute the following in the same directory as the docker-compose.yml file above:

docker-compose.yml Directory
docker-compose up -d
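
To confirm that the containers came up, and to follow Okapi's startup log, the standard Compose commands can be used:

docker-compose.yml Directory
# List the containers and their state
docker-compose ps
# Follow Okapi's log output (Ctrl+C to stop following)
docker-compose logs -f okapi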

The following commands will wipe the database created by the docker-compose.yml file above. This is useful for starting from scratch.

docker-compose.yml Directory
docker-compose stop postgres
docker-compose rm postgres -f
docker-compose create postgres
docker-compose start postgres

Or just run the following to tear everything down:

docker-compose.yml Directory
docker-compose down

Ensure that the base services are accessible before continuing! Typically, if one of the services is accessible, the others most likely are as well. A couple of quick checks are shown after the list below.

  • postgres: Verify that you can connect to the postgres container at port 5432 via PgAdmin or a similar tool.
  • OKAPI: Confirm that the service at port 9130 returns a response.
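
A quick way to check both from the host, assuming curl and the PostgreSQL client tools are installed (the /_/proxy/modules endpoint is the same one used later in this guide):

Host terminal
# Postgres should report "accepting connections"
pg_isready -h localhost -p 5432 -U folio_admin
# Okapi should return a JSON array (possibly empty before any modules are registered)
curl -s http://localhost:9130/_/proxy/modules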

The only container that is expected to be stopped is the "createbuckets" container, which exits after creating a bucket in MinIO.


BusyBee

BusyBee is a tool that streamlines the HTTP calls to Okapi needed to manage the development environment. Get the source code at https://github.com/Olamshin/busybee.

BusyBee requires Python 3.11. Versions newer than 3.11 should not be used.
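
If several Python versions are installed, a dedicated virtual environment is a simple way to pin 3.11 (a minimal sketch; the python3.11 launcher name depends on how Python was installed):

BusyBee Directory
# Create and activate an isolated Python 3.11 environment
python3.11 -m venv .venv
source .venv/bin/activate
python --version   # should report 3.11.x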


Run the following command in the directory where BusyBee is located to install its required dependencies:

BusyBee Directory
pip install -r requirements.txt

The first invocation of BusyBee is expected to fail and will create a config file that must be updated before continuing. The file's location is printed in BusyBee's output; it should be in a .busybee folder in your home directory.

BusyBee Directory
python -m busybee


After the BusyBee config is updated, invoke BusyBee once more. You will be dropped into an interactive terminal where BusyBee-specific commands are supported.

To start creating the dev environment, run the following:

BusyBee
start


Working With Custom Versions Of Modules

FOLIO modules initialized by BusyBee are pinned to the versions listed in the provided install.json. During development, it is often necessary to make code changes on a branch and test them. This section describes how to accomplish this, using mod-inventory as an example.

Run the command below in the BusyBee terminal. It removes the Docker container that OKAPI created for mod-inventory.

BusyBee
undeploy -m mod-inventory

Start a custom version of mod-inventory and note the port on your local machine where it will be available; a rough sketch of building and starting the module is shown after the redirect step below. In our example, the custom mod-inventory is started on port 7000. Run the command below in a BusyBee terminal to redirect mod-inventory requests from OKAPI to the custom mod-inventory:

BusyBee
redirect -m mod-inventory -l http://host.docker.internal:7000

Now OKAPI will forward requests to the custom mod-inventory at port 7000.
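
For reference, here is a hedged sketch of one way to build and start the module on port 7000. The jar path and the http.port property are assumptions based on common FOLIO module conventions, not taken from this guide; check the module's README for the exact invocation and any required environment variables (database, Kafka, and so on):

mod-inventory Directory
# Sketch only: build the module from source and run it on port 7000.
# The jar location and the port property may differ per module.
mvn clean install -DskipTests
java -Dhttp.port=7000 -jar target/mod-inventory.jar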

To remove the redirect, run the command below in the BusyBee terminal:

BusyBee
redirect -m mod-inventory -rm

You can use this cURL-based script in a terminal session (on Windows, use Git Bash or a WSL2 terminal) to query Okapi and verify whether the redirection was successful, e.g. by running ./test_busybee_redirect.sh mod-invoice:

test_busybee_redirect.sh
#!/bin/bash
set -e

MODULE_NAME="${1:-mod-permissions}" 

echo "=== Started testing busybee redirect for module name <$MODULE_NAME> ==="

printf "\nModule ID:\n"
MODULE_ID=$(curl -s --location http://localhost:9130/_/proxy/modules?filter=$MODULE_NAME | python -c 'import json, sys; obj = json.load(sys.stdin); print(obj[0]["id"])')
echo $MODULE_ID
printf "\n"

printf "Module health:\n"
curl --location http://localhost:9130/_/discovery/health/$MODULE_ID
printf "\n\n"

echo "=== Stopped testing busybee redirect ==="
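
The script must be made executable before it can be invoked as shown above:

Host terminal
chmod +x test_busybee_redirect.sh
./test_busybee_redirect.sh mod-invoice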

Example output:

test_busybee_redirect.log
=== Started testing busybee redirect for module name <mod-invoice> ===

Module ID:
mod-invoice-5.9.0-SNAPSHOT.435

Module health:
[ {
  "instId" : "073f1a53-5267-4cff-b94f-7e73f9d842b7",
  "srvcId" : "mod-invoice-5.9.0-SNAPSHOT.435",
  "healthMessage" : "OK",
  "healthStatus" : true
} ]

=== Stopped testing busybee redirect ===

Or use the cURL utility directly:

Manual cURL commands
# Find current Module Id (version) for mod-invoice
curl --location 'http://localhost:9130/_/proxy/modules?filter=mod-invoice'

# Check health of mod-invoice
curl --location 'http://localhost:9130/_/discovery/health/mod-invoice-5.9.0-SNAPSHOT.435'

Example output:

Manual cURL command log
[ {
  "id" : "mod-invoice-5.9.0-SNAPSHOT.435",
  "name" : "Invoice business logic module"
} ]
[ {
  "instId" : "073f1a53-5267-4cff-b94f-7e73f9d842b7",
  "srvcId" : "mod-invoice-5.9.0-SNAPSHOT.435",
  "healthMessage" : "OK",
  "healthStatus" : true
} ]

To re-enable the original Docker container for the mod-inventory version from install.json, run the command below:

BusyBee
deploy -m mod-inventory



These steps can be executed for any module initialized with BusyBee.

There are more tips here: BusyBee Tips & Tricks


FOLIO UI

The FOLIO UI can be built by cloning the platform-complete repository with git. The "snapshot" branch of platform-complete is a better starting point than the "master" branch. At the time of writing, platform-complete requires a Node v18 runtime and Yarn installed via npm. After Node.js has been installed, Yarn can be installed globally with:

platform-complete Directory
npm i -g yarn

The command above needs to be run only once.
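
If the repository has not been cloned yet, a minimal sequence is shown below (assuming the standard folio-org location of the repository):

Parent Directory
# Clone platform-complete and switch to the snapshot branch
git clone https://github.com/folio-org/platform-complete.git
cd platform-complete
git checkout snapshot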


With the platform-complete repository as the current working directory, required node modules can be installed with:

platform-complete Directory
yarn install

The command above should only be executed once, unless the current branch of the platform-complete repo is updated, or install.json is modified to include/exclude modules or change module versions.


FOLIO UI can then be started with:

platform-complete Directory
yarn start
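
By default the Stripes development server listens on port 3000 (this can vary with local configuration), so once the build finishes the UI should be reachable at http://localhost:3000. A quick check:

Host terminal
# Should print an HTTP status code once the dev server is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000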