******** Work in Progress *************
The following relies heavily upon and expands TAMU's setup instructions at https://github.com/folio-org/folio-install/tree/kube-rancher/alternative-install/kubernetes-rancher/TAMU and the "2018 Q4 Folio on Rancher 2.0" page, for use in AWS. Many thanks for all the support from jroot at TAMU and the Sys-Ops SIG!
...
We are using our own certificates, so we need to add secrets before finishing all the steps in the Rancher installation.
Convert the key to an RSA PEM key with:
Code Block
openssl rsa -in /home/rld244/key.key -text > key.pem
Check that the cert and key match (the output modulus should be the same) with:
Code Block
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in key.pem | openssl md5
...
Log in to the Rancher Manager GUI
Go to the Rancher Server URL
First time logging in - set the admin password
Set the Rancher Server URL - I think this can be changed afterward in the GUI
Create a Kubernetes Cluster
On the Clusters tab click Add Cluster
Check Amazon EC2
Cluster Name like "folio-cluster"
Name Prefix like "folio-pool"
Count = 3
Check etcd, Control Plane and Worker
Create a Node Template
Set up a Rancher IAM user with adequate permissions. See AWS Quick Start; a CLI sketch follows this list. The access key and secret key will be used below.
Click + to create a new template
- Account Access
- Choose region
- Add the Access Key and Secret Key for the Rancher IAM user
- Zone and Network
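For reference, a minimal AWS CLI sketch of creating that IAM user. The user name and the broad EC2 managed policy here are placeholders for illustration - grant the narrower permissions described in the AWS Quick Start instead.
Code Block
# Sketch only: user name and policy are assumptions, not the required setup
aws iam create-user --user-name rancher-node-driver
aws iam attach-user-policy --user-name rancher-node-driver \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# The AccessKeyId and SecretAccessKey in the output go into the node template
aws iam create-access-key --user-name rancher-node-driver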
...
Set up EBS Volume for persistent storage
Provision an AWS EBS Volume and attach it to an existing instance (this can be a node in the cluster; see https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/nodes/ on how to get the SSH key - centos is the SSH user).
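As a sketch, the volume can also be provisioned and attached from the AWS CLI; the availability zone, size, volume ID and instance ID below are placeholders:
Code Block
# Sketch only: the volume must be in the same availability zone as the instance
aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp2
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/xvdf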
Make filesystem ext4
Check if there is a file system on it
Code Block
sudo file -s /dev/xvdf
If it comes back with "data" it doesn't have a filesystem on it
Otherwise it'll say something like "SGI XFS filesystem..."
Make the file system
Code Block
sudo mkfs -t ext4 /dev/xvdf
Mount the drive and add directories with 26:26 ownership and 700 permissions, then unmount the drive:
Code Block
sudo mount /dev/xvdf /mnt
cd /mnt
sudo mkdir data backup pgconf pgwal
sudo chown 26:26 data backup pgconf pgwal
sudo chmod 700 data backup pgconf pgwal
cd ..
sudo umount /mnt
Detach the volume from the instance in the AWS console
Add Persistent Volume on the cluster
- With folio-cluster selected choose Persistent Volumes from Storage dropdown
- Add Volume
- Name = folio-pv
- Capacity = 100GiB (right now)
- Volume Plugin = AWS EBS Disk
- Volume ID = volume ID from AWS
- Partition = 0
- Filesystem Type = ext4
- Read Only = No
Add a Persistent Volume Claim for Folio-Project
- With Folio-Project selected choose Workloads > Volumes and Add Volume
- Name = folio-pvc
- Namespace = folio-q4
- Select the Persistent Volume created above
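For reference, a rough kubectl equivalent of the PV and PVC created in the two GUI steps above (applied from the cluster's kubectl shell); the volume ID is a placeholder and the 100Gi capacity matches the size noted above:
Code Block
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: folio-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # volume ID from AWS
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: folio-pvc
  namespace: folio-q4
spec:
  storageClassName: ""
  volumeName: folio-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
EOF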
Create db-config secret
- With Folio-Project selected choose Resources > Secrets and Add Secret
- Name = db-config
- Available to all namespaces in this project
Paste the following values
PG_DATABASE = okapi
PG_PASSWORD = password
PG_PRIMARY_PASSWORD = password
PG_PRIMARY_PORT = 5432
PG_PRIMARY_USER = primaryuser
PG_ROOT_PASSWORD = password
PG_USER = okapi
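The same secret can be created from the cluster's kubectl shell, as a sketch like the one below (note this creates it only in the folio-q4 namespace, whereas the GUI option above makes it available to the whole project). The same pattern applies to the db-connect and diku-tenant-config secrets later on.
Code Block
# Replace the example passwords with real ones
kubectl create secret generic db-config -n folio-q4 \
  --from-literal=PG_DATABASE=okapi \
  --from-literal=PG_PASSWORD=password \
  --from-literal=PG_PRIMARY_PASSWORD=password \
  --from-literal=PG_PRIMARY_PORT=5432 \
  --from-literal=PG_PRIMARY_USER=primaryuser \
  --from-literal=PG_ROOT_PASSWORD=password \
  --from-literal=PG_USER=okapi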
Create db-connect secret
- With Folio-Project selected choose Resources > Secrets and Add Secret
- Name = db-connect
- Available to all namespaces in this project
Paste the following values:
DB_DATABASE = okapi_modules
DB_HOST = pg-folio
DB_MAXPOOLSIZE = 20
DB_PASSWORD = password
DB_PORT = 5432
DB_USERNAME = folio_admin
PG_DATABASE = okapi
PG_PASSWORD = password
PG_USER = okapi
Set up Cluster Service Accounts
- With Cluster: folio-cluster selected click Launch kubectl
From the terminal that opens run:
Code Block
kubectl create clusterrolebinding pgset-sa --clusterrole=admin --serviceaccount=folio-q4:pgset-sa --namespace=folio-q4
Code Block
touch rbac.yaml
vi rbac.yaml
Paste the following into rbac.yaml:
Code Block
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-rb-q4
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: folio-q4
Code Block
kubectl apply -f rbac.yaml
Deploy Postgres DB
With Folio-Project selected choose Catalog Apps > Launch
On All Catalogs dropdown choose Crunchy-Postgres
Choose statefulset
Name = pgset
Click customize next to "This application will be deployed into the ... namespace"
Click Use an existing namespace
Choose folio-q4 in Namespace dropdown
Launch
After it is running click pgset on Catalog Apps page
Click pgset under Workloads
Scale down to 0 and wait for the pods to be removed
In the vertical ... menu click Edit
Expand Environment Variables
Remove all prepopulated Environment Variables and add these:
WORK_MEM = 4MB
TEMP_BUFFERS = 16MB
SHARED_BUFFERS = 128MB
PGHOST = /tmp
PGDATA_PATH_OVERRIDE = folio-q4
PG_REPLICA_HOST = pgset-replica
PG_PRIMARY_HOST = pgset-primary
PG_MODE = set
PG_LOCALE = en_US.UTF-8
MAX_WAL_SENDERS = 2
MAX_CONNECTIONS = 500
ARCHIVE_MODE = off
Add From Source
- Type = Secret
- Source = db-config
- Key = All
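Roughly the same environment can be set from the kubectl shell; a sketch, assuming the catalog app created a StatefulSet named pgset in folio-q4:
Code Block
# Sketch only: unlike the GUI edit, this does not remove the prepopulated variables
kubectl set env statefulset/pgset -n folio-q4 \
  WORK_MEM=4MB TEMP_BUFFERS=16MB SHARED_BUFFERS=128MB PGHOST=/tmp \
  PGDATA_PATH_OVERRIDE=folio-q4 PG_REPLICA_HOST=pgset-replica \
  PG_PRIMARY_HOST=pgset-primary PG_MODE=set PG_LOCALE=en_US.UTF-8 \
  MAX_WAL_SENDERS=2 MAX_CONNECTIONS=500 ARCHIVE_MODE=off
# Pull in everything from the db-config secret as well
kubectl set env statefulset/pgset -n folio-q4 --from=secret/db-config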
Node Scheduling
Select Run all pods for this workload on a specific node
Volumes
Leave pgdata alone
Remove Volume called backup
Add Volume > Use an existing persistent volume (claim) > folio-pvc
Volume Name = folio-q4
Add multiple mounts under this Volume
- Mount Point = /pgdata/folio-q4 Sub Path in Volume = data
- Mount Point = /backup Sub Path in Volume = backup
- Mount Point = /pgconf Sub Path in Volume = pgconf
- Mount Point = /pgwal Sub Path in Volume = pgwal
Click Upgrade
Scale up to 2 pods
Add Service Discovery record
- With Folio-Project selected choose Workloads > Service Discovery > Add Record
- Name = pg-folio
- Namespace = folio-q4
- Resolves to = The set of pods which match a selector
- Add Selector and paste
- statefulset.kubernetes.io/pod-name=pgset-0
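For reference, the record corresponds to a Kubernetes Service that selects the pgset-0 pod; a sketch, with port 5432 assumed for Postgres:
Code Block
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: pg-folio
  namespace: folio-q4
spec:
  clusterIP: None
  selector:
    statefulset.kubernetes.io/pod-name: pgset-0
  ports:
    - port: 5432
      targetPort: 5432
EOF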
Deploy create-db Workload Job
- In AWS ECR console click on folio/create-db
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-database/
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS (see the sketch after this list)
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-db
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest)
- Expand Environment Variables > Add From Source
- Type = Secret
- Source = db-connect
- Key = All
- Click Launch
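The push commands shown by ECR look roughly like the sketch below (the account ID, region and repository are placeholders - copy the real commands from the console). The same sequence is used for every image built in the sections that follow.
Code Block
# Log Docker in to ECR, then build, tag and push the image
$(aws ecr get-login --no-include-email --region us-east-1)
docker build -t folio/create-db .
docker tag folio/create-db:latest 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest
docker push 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest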
Deploy Okapi Workload
- In the repo, on the kube-rancher branch, update the following in alternative-install/kubernetes-rancher/TAMU/okapi/hazelcast.xml (not sure what all Hazelcast does and what is really needed here)
- lines 56 - 79
- set aws enabled="true"
- add access-key and secret-key
- region = us-east-1
- security-group-name = rancher-nodes
- tag-key = Name
- tag-value = folio-rancher-node (this is set in the template used to create the cluster)
- set kubernetes enabled="false"
- In AWS ECR console click folio/okapi
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/okapi
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = okapi
- Workload Type = Scalable deployment of 1 pod
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/okapi:latest)
- Namespace = folio-q4
- Add Ports
- 9130 : TCP : Cluster IP (Internal only) Same as container port
- 5701 : same as 9130 above
- 5702 : same
- 5703 : same
- 5704 : same
Expand Environment Variables and paste:
PG_HOST = pg-folio
OKAPI_URL = http://okapi:9130
OKAPI_PORT = 9130
OKAPI_NODENAME = okapi1
OKAPI_LOGLEVEL = INFO
OKAPI_HOST = okapi
INITDB = true
HAZELCAST_VERTX_PORT = 5703
HAZELCAST_PORT = 5701
- Add From Source
- Type = Secret
- Source = db-connect
- Key = All
- Click Launch
- In the Rancher Manager with Folio-Project selected choose Workloads > Service Discovery
- Copy the Cluster IP under okapi
- Select Workloads > Workloads and click on okapi
- In the vertical ... menu click Edit
- Add 2 new Environment Variables
- HAZELCAST_IP = paste the Cluster IP from Service Discovery (should look something like 10.43.230.80)
- OKAPI_CLUSTERHOST = paste the Cluster IP from Service Discovery
- Set INITDB = false
- Click Upgrade
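A quick, optional sanity check from the cluster's kubectl shell (/_/version and /_/discovery/nodes are standard Okapi endpoints):
Code Block
kubectl -n folio-q4 get pods | grep okapi
kubectl -n folio-q4 port-forward deployment/okapi 9130:9130 &
curl http://localhost:9130/_/version
curl http://localhost:9130/_/discovery/nodes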
Deploy Folio Module Workloads
...
Deploy each module below as a workload in the folio-q4 namespace: an "x" in the db-connect column means attach the db-connect secret via Add From Source, and the last column is the number of pods in the scalable deployment.
mod | Env Vars | db-connect | Scalable Deployment (pods) |
folioorg/mod-feesfines:15.1.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-users:15.3.0 | JAVA_OPTIONS=-Xmx384m | x | 2 |
folioorg/mod-password-validator:1.0.1 | | x | 2 |
folioorg/mod-permissions:5.4.0 | JAVA_OPTIONS=-Xmx512m | x | 2 |
folioorg/mod-login:4.6.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-inventory-storage:14.0.0 | JAVA_OPTIONS=-Xmx512m | x | 2 |
folioorg/mod-configuration:5.0.1 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-authtoken:2.0.4 | JAVA_OPTIONS=-Djwt.signing.key=CorrectBatteryHorseStaple -Xmx256m | | 2 |
folioorg/mod-circulation-storage:6.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-circulation:14.1.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-inventory:11.0.0 | JAVA_OPTIONS=-Dorg.folio.metadata.inventory.storage.type=okapi -Xmx256m | | 2 |
folioorg/mod-codex-mux:2.3.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-codex-inventory:1.4.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-login-saml:1.2.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-notify:2.1.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-notes:2.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-users-bl:4.3.2 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-tags:0.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-codex-ekb:1.1.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-kb-ebsco:1.1.0 | EBSCO_RMAPI_BASE_URL=https://sandbox.ebsco.io JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-calendar:1.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-vendors:1.0.3 | JAVA_OPTIONS=-Xmx384m | x | 2 |
folioorg/mod-agreements:1.0.2 | JAVA_OPTIONS=-Xmx256m GRAILS_SERVER_HOST=mod-agreements GRAILS_SERVER_PORT=8080 | x | 2 |
folioorg/mod-marccat:1.2.0 | | | 2 |
folioorg/mod-template-engine:1.0.1 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-finance-storage:1.0.1 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-orders-storage:1.0.2 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-source-record-storage:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-source-record-manager:0.1.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-event-config:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-orders:1.0.2 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-erm-usage:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-erm-usage-harvester:1.0.0 | JAVA_OPTIONS=-Xmx256m CONFIG='{"okapiUrl": "http://okapi:9130"}' | | 2 |
folioorg/mod-gobi:1.0.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-data-import:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 1 |
folioorg/mod-patron:1.2.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-rtac:1.2.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-email:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-sender:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-audit:0.0.3 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-audit-filter:0.0.3 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-licenses:1.0.2 | JAVA_OPTIONS=-Xmx256m GRAILS_SERVER_HOST=mod-licenses GRAILS_SERVER_PORT=8080 | x | 2 |
folioorg/mod-oai-pmh:1.0.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-data-loader:latest | JAVA_OPTIONS=-Xmx256m | | 1 |
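The GUI steps for each row follow the same pattern as the Okapi workload; as a sketch, a single row can also be done from kubectl:
Code Block
# Sketch of one row of the table (mod-users as the example). The GUI
# "Add From Source" step corresponds to --from=secret/db-connect. Note that
# Rancher also creates a service-discovery record for the workload name;
# with plain kubectl you would have to expose the deployment yourself.
kubectl -n folio-q4 create deployment mod-users --image=folioorg/mod-users:15.3.0
kubectl -n folio-q4 set env deployment/mod-users JAVA_OPTIONS=-Xmx384m
kubectl -n folio-q4 set env deployment/mod-users --from=secret/db-connect
kubectl -n folio-q4 scale deployment/mod-users --replicas=2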
...
- Add Logo and Favicon to new directory in the project at alternative-install/kubernetes-rancher/TAMU/stripes-diku/tenant-assets
- Update branding section in alternative-install/kubernetes-rancher/TAMU/stripes-diku/stripes.config.js
- In alternative-install/kubernetes-rancher/TAMU/stripes-diku/Dockerfile
- In AWS ECR console click on folio/stripes
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/stripes-diku
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = stripes
- Workload Type = Run one pod on each node
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/stripes:latest)
- Add Port
- 3000 : TCP : Cluster IP (Internal only) : Same as container port
- Launch
Create diku-tenant-config secret
- With Folio-Project selected choose Resources > Secrets and Add Secret, Name = diku-tenant-config, and paste the following values:
ADMIN_PASSWORD = admin
ADMIN_USER = diku_admin
OKAPI_URL = http://okapi:9130
TENANT_DESC = My Library
TENANT_ID = diku
TENANT_NAME = My Library
Create create-tenant Workload job
- In AWS ECR console click on folio/create-tenant
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-tenant
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-tenant
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-tenant:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
Create create-deploy Workload job
- In AWS ECR console click on folio/create-deploy
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-deploy
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-deploy
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-deploy:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
Create bootstrap-superuser Workload job
- In AWS ECR console click on folio/bootstrap-superuser
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/bootstrap-superuser
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = bootstrap-superuser
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/bootstrap-superuser:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
Create create-refdata Workload job
- In AWS ECR console click on folio/create-refdata
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-refdata
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-refdata
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-refdata:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
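For reference, each of these one-shot jobs boils down to a Kubernetes Job that runs the pushed image with the diku-tenant-config secret injected; a sketch using create-tenant as the example (the image URI is the placeholder form used above):
Code Block
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: create-tenant
  namespace: folio-q4
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-tenant
          image: 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-tenant:latest
          envFrom:
            - secretRef:
                name: diku-tenant-config
EOF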
Ingress
We need to set up a load balancer here and change how the ingress works
Point folio-2018q4.myurl.org and okapi-2018q4.myurl.org to the public IP of the node in pool1
Add SSL certificate for ingress
- In the Rancher Manager with Folio-Project selected choose Resources > Certificates
- Add certificate
- Name = folio-2018q4
- Available to all namespaces in this project
- Private Key needs to be in RSA format see above on how to create the RSA format
- Add root cert
Ingress
- In the Rancher Manager with Folio-Project selected choose Workloads > Load Balancing and click Add Ingress twice, once for Okapi and once for Stripes
Okapi
- Name = okapi
- Specify a hostname to use
- Request Host = okapi-2018q4.myurl.org
- Under Target Backend click +Service 3 times for three services
- Path = <leave it blank the first time> Target = okapi Port = <should autofill with 9130>
- Path = / Target = okapi Port = <should autofill with 9130>
- Path = /_/ Target = okapi Port = <should autofill with 9130>
- SSL/TLS Certificates
Stripes
- Name = stripes
- Specify a hostname to use
- Request Host = folio-2018q4.myurl.org
- Under Target Backend click +Service 3 times for three services
- Path = <leave it blank the first time> Target = stripes Port = <should autofill with 3000>
- Path = / Target = stripes Port = <should autofill with 3000>
- Path = /_/ Target = stripes Port = <should autofill with 3000>
- SSL/TLS Certificates
- Add Certificate
- Choose a certificate
- Certificate made above should be in Certificate dropdown
- Host = folio-2018q4.myurl.org
- Login should come up at https://folio-2018q4.myurl.org now
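Roughly, the stripes ingress created above amounts to the manifest below (extensions/v1beta1 was the Ingress API on clusters of this vintage, and the certificate added in Rancher is assumed to be stored as a secret named folio-2018q4); the okapi ingress is the same shape with the okapi host and service:
Code Block
cat <<'EOF' | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: stripes
  namespace: folio-q4
spec:
  tls:
    - hosts:
        - folio-2018q4.myurl.org
      secretName: folio-2018q4
  rules:
    - host: folio-2018q4.myurl.org
      http:
        paths:
          - path: /
            backend:
              serviceName: stripes
              servicePort: 3000
EOF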
Get Okapi token and create secret
- After logging in choose Apps > Settings > Developer > Set Token
- Copy the Authentication token (JWT)
- In Rancher Manager with Folio-Project selected choose Resources > Secrets > Add Secret
- Name = x-okapi-token
- Key = X_OKAPI_TOKEN
- Value = <copied token>
Create create-sampdata Workload job (this might work as a way to inject our own records into the system)
- In AWS ECR console click on folio/create-sampdata
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/tenants/diku/create-sampdata
- Run commands from AWS to retrieve the login command, build the image, tag the image and push to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-sampdata
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-sampdata:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Under Environment Variables > Add From Source
- Type = Secret
- Source = x-okapi-token
- Key = All
- Launch