******** Work in Progress *************

The following relies heavily upon, and expands on, TAMU's setup instructions at https://github.com/folio-org/folio-install/tree/kube-rancher/alternative-install/kubernetes-rancher/TAMU and 2018 Q4 Folio on Rancher 2.0 for use in AWS. Many thanks for all the support from jroot at TAMU and the Sys-Ops SIG!

...

We are using our own certs, so we need to add secrets before finishing all the steps in the Rancher installation.

Convert the key to an RSA PEM key with:

Code Block
openssl rsa -in /home/rld244/key.key -text > key.pem

Check that the cert and key match (the two modulus MD5 hashes should be identical) with:

Code Block
openssl x509 -noout -modulus -in cert.pem | openssl md5 ;openssl rsa -noout -modulus -in key.pem | openssl md5

...

Login to Rancher Manager GUI

Go to Rancher Server URL

First time logging in - set admin password

Set Rancher Server URL - I think this can be changed afterward in the GUI

Create a Kubernetes Cluster

On Clusters Tab click Add Cluster

Check Amazon EC2

Cluster Name like "folio-cluster"

Name Prefix like "folio-pool"

Count = 3

Check etcd, Control Plane and Worker

Create a Node Template

Set up a Rancher IAM user with adequate permissions (see the AWS Quick Start). The access key and secret key will be used below.
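The exact permissions the Rancher IAM user needs come from the AWS Quick Start linked above. Purely as a sketch (the user name and policy ARN below are placeholders, not from the source), the CLI steps look roughly like:

Code Block
# Create the IAM user Rancher will use to provision EC2 nodes (example name)
aws iam create-user --user-name rancher-node-driver
# Attach a policy granting the EC2/IAM permissions listed in the AWS Quick Start
aws iam attach-user-policy --user-name rancher-node-driver \
  --policy-arn arn:aws:iam::<account-id>:policy/<rancher-node-policy>
# Generate the Access Key / Secret Key used in the node template below
aws iam create-access-key --user-name rancher-node-driver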

Click + to create a new template

  1. Account Access
    1. Choose region 
    2. Add the Access Key and Secret Key for the Rancher IAM user
  2. Zone and Network

...

  1. Click Next: Select a Security Group
    1. Choose one or more existing groups
      1. rancher-nodes (**Note**: Don't select "Standard: Automatically create a rancher-nodes group"; there is already one created called rancher-nodes. If this is the first time you are using Rancher in AWS, review the rancher-nodes security group that gets created.)
  2. Click Next: Set Instance options
    1. We are currently using an r5.large
    2. Root Disk Size = 40GB
    3. Current Official CentOS7 AMI as of 01/24/19 = ami-9887c6e7
    4. SSH User = centos
    5. IAM Instance Profile Name = RancherNodeRole (we still need to review what the Instance Profile specifically requires)
    6. Add AWS Tags as needed to track your instances in AWS
  3. Give the template a name like folio-template
  4. Under Engine Options make sure the Docker version is at least 18.09.2

Finish creating the cluster

...

Set up EBS Volume for persistent storage

Provision an AWS EBS Volume and attach it to an existing instance (this can be a node in the cluster; see https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/nodes/ for how to get the SSH key; the SSH user is centos).
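If you prefer the CLI to the AWS console for this step, the commands look roughly like the following (a sketch; the size, availability zone, volume ID, and instance ID are placeholders and must match your environment):

Code Block
# Create the volume in the same availability zone as the instance you will attach it to
aws ec2 create-volume --size 100 --volume-type gp2 --availability-zone us-east-1a
# Attach it to the instance; it typically shows up on the node as /dev/xvdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/xvdf
# After the directories below are created and the drive is unmounted, detach it again
aws ec2 detach-volume --volume-id vol-0123456789abcdef0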

Make filesystem ext4

Check if there is a file system on it

Code Block
sudo file -s /dev/xvdf

If it comes back with "data", it doesn't have a filesystem on it.

Otherwise it'll say something like "SGI XFS filesystem..."

Make the file system

Code Block
sudo mkfs -t ext4 /dev/xvdf

Mount the drive and add directories with 26:26 ownership (26 is the postgres user in the Crunchy containers) and 700 perms, then unmount the drive.

Code Block
sudo mount /dev/xvdf /mnt
cd /mnt
sudo mkdir data backup pgconf pgwal
sudo chown 26:26 data backup pgconf pgwal
sudo chmod 700 data backup pgconf pgwal
cd ..
sudo umount /mnt

Detach the volume from the instance in the AWS console

Add Persistent Volume on the cluster

  • With folio-cluster selected choose Persistent Volumes from Storage dropdown (a kubectl sketch of the result follows this list)
  • Add Volume
  • Name = folio-pv
  • Capacity = 100GiB (right now)
  • Volume Plugin = AWS EBS Disk
  • Volume ID = volume ID from AWS
  • Partition = 0
  • Filesystem Type = ext4
  • Read Only = No
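The Add Volume form above corresponds roughly to a PersistentVolume manifest like this (a sketch only; the volume ID is a placeholder and the ReadWriteOnce access mode is an assumed default):

Code Block
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: folio-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # the Volume ID from AWS
    fsType: ext4
    partition: 0
EOF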

Add a Persistent Volume Claim for Folio-Project

  • With Folio-Project selected choose Workloads > Volumes and Add Volume
  • Name = folio-pvc
  • Namespace = folio-q4
  • Select the Persistent Volume created above

Create db-config secret

  • With Folio-Project selected choose Resources > Secrets and Add Secret
  • Name = db-config
  • Available to all namespaces in this project
  • Paste the following values (a kubectl equivalent is sketched after this list)

    PG_DATABASE = okapi
    PG_PASSWORD = password
    PG_PRIMARY_PASSWORD = password
    PG_PRIMARY_PORT = 5432
    PG_PRIMARY_USER = primaryuser
    PG_ROOT_PASSWORD = password
    PG_USER = okapi
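
If you prefer the CLI, the same secret can be created with kubectl (a sketch for the folio-q4 namespace; the Rancher GUI takes care of making a project-scoped secret available across the project's namespaces). The db-connect secret below follows the same pattern.

Code Block
kubectl -n folio-q4 create secret generic db-config \
  --from-literal=PG_DATABASE=okapi \
  --from-literal=PG_PASSWORD=password \
  --from-literal=PG_PRIMARY_PASSWORD=password \
  --from-literal=PG_PRIMARY_PORT=5432 \
  --from-literal=PG_PRIMARY_USER=primaryuser \
  --from-literal=PG_ROOT_PASSWORD=password \
  --from-literal=PG_USER=okapi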


Create db-connect secret

  • With Folio-Project selected choose Resources > Secrets and Add Secret
  • Name = db-connect
  • Available to all namespaces in this project 
  • Paste the following values: 

    DB_DATABASE = okapi_modules
    DB_HOST = pg-folio
    DB_MAXPOOLSIZE = 20
    DB_PASSWORD = password
    DB_PORT = 5432
    DB_USERNAME = folio_admin
    PG_DATABASE = okapi
    PG_PASSWORD = password
    PG_USER = okapi


Set up Cluster Service Accounts

  • With Cluster: folio-cluster selected click Launch kubectl
  • From the terminal that opens run:

    Code Block
    kubectl create clusterrolebinding pgset-sa --clusterrole=admin --serviceaccount=folio-q4:pgset-sa --namespace=folio-q4


    Code Block
    touch rbac.yaml
    vi rbac.yaml


    Code Block
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: hazelcast-rb-q4
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: folio-q4


    Code Block
    kubectl apply -f rbac.yaml


Deploy Postgres DB

With Folio-Project selected choose Catalog Apps > Launch

On All Catalogs dropdown choose Crunchy-Postgres

Choose statefulset

Name = pgset

Click customize next to "This application will be deployed into the ... namespace"

Click Use an existing namespace

Choose folio-q4 in Namespace dropdown

Launch

After it is running click pgset on Catalog Apps page

Click pgset under Workloads

Scale down to 0 and wait for the pods to be removed

In the vertical ... click Edit

Expand Environment Variables

Remove all prepopulated Environment Variables and add these:

WORK_MEM = 4MB
TEMP_BUFFERS = 16MB
SHARED_BUFFERS = 128MB
PGHOST = /tmp
PGDATA_PATH_OVERRIDE = folio-q4
PG_REPLICA_HOST = pgset-replica
PG_PRIMARY_HOST = pgset-primary
PG_MODE = set
PG_LOCALE = en_US.UTF-8
MAX_WAL_SENDERS = 2
MAX_CONNECTIONS = 500
ARCHIVE_MODE = off

Add From Source

  • Type = Secret
  • Source = db-config
  • Key = All

Node Scheduling

Select Run all pods for this workload on a specific node

Volumes

Leave pgdata alone

Remove Volume called backup

Add Volume > Use an existing persistent volume (claim) > folio-pvc

Volume Name = folio-q4

Add multiple mounts under this Volume

  • Mount Point = /pgdata/folio-q4 Sub Path in Volume = data
  • Mount Point = /backup Sub Path in Volume = backup
  • Mount Point = /pgconf Sub Path in Volume = pgconf
  • Mount Point = /pgwal Sub Path in Volume = pgwal

Click Upgrade

Scale up to 2 pods

Add Service Discovery record

  • With Folio-Project selected choose Workloads > Service Discovery > Add Record (a kubectl sketch of the equivalent Service follows this list)
  • Name = pg-folio
  • Namespace = folio-q4
  • Resolves to = The set of pods which match a selector
  • Add Selector and paste
  • statefulset.kubernetes.io/pod-name=pgset-0
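This Service Discovery record is roughly equivalent to a headless Service with that pod selector (a sketch; the exact resource Rancher generates may differ):

Code Block
kubectl -n folio-q4 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: pg-folio
spec:
  clusterIP: None            # headless: pg-folio resolves directly to the matching pod
  selector:
    statefulset.kubernetes.io/pod-name: pgset-0
  ports:
    - port: 5432             # Postgres port used by the modules (DB_PORT above)
EOF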

Deploy create-db Workload Job

  • In AWS ECR console click on folio/create-db
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-database/
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS (a sketch of these commands follows this list)
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
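The push commands that AWS shows look roughly like this (a sketch, assuming the us-east-1 region, the aws CLI syntax current at the time, and the placeholder account ID used elsewhere on this page; the same pattern applies to every image built on this page):

Code Block
# Log in to ECR (newer aws CLI versions use `aws ecr get-login-password` instead)
$(aws ecr get-login --no-include-email --region us-east-1)
# Build, tag, and push the image from the current directory
docker build -t folio/create-db .
docker tag folio/create-db:latest 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest
docker push 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest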

Deploy Okapi Workload

  • In the repo, on the kube-rancher branch, in the file alternative-install/kubernetes-rancher/TAMU/okapi/hazelcast.xml, update the following (we are not sure exactly what Hazelcast does and what is really needed here)
    • lines 56 - 79
      • set aws enabled="true"
      • add access-key and secret-key
      • region = us-east-1
      • security-group-name = rancher-nodes
      • tag-key = Name
      • tag-value = folio-rancher-node (this is set in the template used to create the cluster)
      • set kubernetes enabled="false"
  • In AWS ECR console click folio/okapi
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/okapi
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
    • Name = okapi
    • Workload Type = Scalable deployment of 1 pod
    • Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/okapi:latest)
    • Namespace = folio-q4
    • Add Ports
      • 9130 : TCP : Cluster IP (Internal only) Same as container port
      • 5701 : same as 9130 above
      • 5702 : same
      • 5703 : same
      • 5704 : same
    • Expand Environment Variables and paste

      PG_HOST = pg-folio
      OKAPI_URL = http://okapi:9130
      OKAPI_PORT = 9130
      OKAPI_NODENAME = okapi1
      OKAPI_LOGLEVEL = INFO
      OKAPI_HOST = okapi
      INITDB = true
      HAZELCAST_VERTX_PORT = 5703
      HAZELCAST_PORT = 5701


    • Add From Source
      • Type = Secret
      • Source = db-connect
      • Key = All
    • Click Launch
  • In the Rancher Manager with Folio-Project selected choose Workloads > Service Discovery
  • Copy the Cluster IP under okapi
  • Select Workloads > Workloads and click on okapi
  • In the vertical ... click Edit
  • Add 2 new Environment Variables
    • HAZELCAST_IP = paste the Cluster IP from Service Discovery (should look something like 10.43.230.80)
    • OKAPI_CLUSTERHOST = paste the Cluster IP from Service Discovery
  • Set INITDB = false
  • Click Upgrade

Deploy Folio Module Workloads

...

  • Add Logo and Favicon to new directory in the project at alternative-install/kubernetes-rancher/TAMU/stripes-diku/tenant-assets
  • Update branding section in alternative-install/kubernetes-rancher/TAMU/stripes-diku/stripes.config.js
  • In alternative-install/kubernetes-rancher/TAMU/stripes-diku/Dockerfile 
    • set ARG OKAPI_URL=okapi-2018q4.myurl.org
    • under #Copy in files at this build layer add
      • COPY /tenant-assets/cornell-favicon.png /usr/local/bin/folio/stripes/tenant-assets/

      • COPY /tenant-assets/cornell-logo.png /usr/local/bin/folio/stripes/tenant-assets/

  • In AWS ECR console click on folio/stripes
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/stripes-diku
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
  • Launch

Create diku-tenant-config secret

  • With Folio-Project selected choose Resources > Secrets and Add Secret
  • Name = diku-tenant-config
  • Available to all namespaces in this project
  • Paste the following values:

    ADMIN_PASSWORD = admin
    ADMIN_USER = diku_admin
    OKAPI_URL = http://okapi:9130
    TENANT_DESC = My Library
    TENANT_ID = diku
    TENANT_NAME = My Library

Create create-tenant Workload job

  • In AWS ECR console click on folio/create-tenant
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-tenant
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy

Create create-deploy Workload job

  • In AWS ECR console click on folio/create-deploy
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-deploy
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy

Create bootstrap-superuser Workload job 

  • In AWS ECR console click on folio/bootstrap-superuser
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/bootstrap-superuser
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy

 

Create create-refdata Workload job 

  • In AWS ECR console click on folio/create-refdata
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-refdata
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy

Ingress

We need to set up a load balancer here and change how the ingress works; we are just learning about Global DNS in Rancher 2.2.

Point folio-2018q4.myurl.org and okapi-2018q4.myurl.org to the public IP of the node in pool1.
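If the domain is hosted in Route 53, the two records can be created from the CLI (a sketch; the hosted zone ID and IP are placeholders, and plain A records created in any other DNS provider work just as well):

Code Block
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC \
  --change-batch '{"Changes":[
    {"Action":"UPSERT","ResourceRecordSet":{"Name":"folio-2018q4.myurl.org","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}},
    {"Action":"UPSERT","ResourceRecordSet":{"Name":"okapi-2018q4.myurl.org","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}
  ]}'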

Add SSL certificate for ingress

  • In the Rancher Manager with Folio-Project selected choose Resources > Certificates
  • Add certificate
  • Name = folio-2018q4
  • Available to all namespaces in this project
  • Private Key needs to be in RSA format; see above for how to convert the key to RSA
  • Add root cert

Ingress

  • In the Rancher Manager with Folio-Project selected choose Workloads > Load Balancing > Add Ingress 2 times (a YAML sketch of the resulting okapi Ingress follows this list)
    Okapi
    • Name = okapi
    • Specify a hostname to use
    • Request Host = okapi-2018q4.myurl.org
    • Under Target Backend, click +Service 3 times to add the three services below
      • Path = <leave it blank the first time> Target = okapi Port = <should autofill with 9130>
      • Path = / Target = okapi Port = <should autofill with 9130>
      • Path = /_/ Target = okapi Port = <should autofill with 9130>
    • SSL/TLS Certificates
      • Add Certificate
      • Choose a certificate
      • Certificate should be in Certificate dropdown
      • Host = okapi-2018q4.myurl.org
    Stripes
    • Name = stripes
    • Specify a hostname to use
    • Request Host = folio-2018q4.myurl.org
    • Under Target Backend, click +Service 3 times to add the three services below
      • Path = <leave it blank the first time> Target = stripes Port = <should autofill with 3000>
      • Path = / Target = stripes Port = <should autofill with 3000>
      • Path = /_/ Target = stripes Port = <should autofill with 3000>
    • SSL/TLS Certificates
      • Add Certificate
      • Choose a certificate
      • Certificate made above should be in Certificate dropdown
      • Host = folio-2018q4.myurl.org
  • Login should come up at https://folio-2018q4.myurl.org now
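For reference, the okapi ingress configured above is roughly equivalent to the manifest below (a sketch using the extensions/v1beta1 Ingress API of that Kubernetes generation; it assumes the certificate added above is stored as a secret named folio-2018q4, and the stripes ingress differs only in name, host, target service, and port):

Code Block
kubectl -n folio-q4 apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: okapi
spec:
  tls:
    - hosts:
        - okapi-2018q4.myurl.org
      secretName: folio-2018q4
  rules:
    - host: okapi-2018q4.myurl.org
      http:
        paths:
          - backend:                 # no path set: default backend for the host
              serviceName: okapi
              servicePort: 9130
          - path: /
            backend:
              serviceName: okapi
              servicePort: 9130
          - path: /_/
            backend:
              serviceName: okapi
              servicePort: 9130
EOF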

Get Okapi token and create secret

  • After logging in choose Apps > Settings > Developer > Set Token
  • Copy the Authentication token (JWT)
  • In Rancher Manager with Folio-Project selected choose Resources > Secrets > Add Secret
    • Name = x-okapi-token
    • Key = X_OKAPI_TOKEN
    • Value = <copied token>

Create create-sampdata Workload job (this might work as a way to inject our own records into the system)

  • In AWS ECR console click on folio/create-sampdata
  • Click on View push commands
  • In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/tenants/diku/create-sampdata
  • Run the commands from AWS to retrieve the login command, build the image, tag the image, and push to AWS
  • In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
    • Name = create-sampdata
    • Workload Type = Job
    • Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-sampdata:latest)
    • Under Environment Variables > Add From Source
      • Type = Secret
      • Source = diku-tenant-config
      • Key = All
    • Under Environment Variables > Add From Source
      • Type = Secret
      • Source = x-okapi-token
      • Key = All
    • Launch