******** Work in Progress *************
The following relies heavily upon and expands TAMU's setup instructions at https://github.com/folio-org/folio-install/tree/kube-rancher/alternative-install/kubernetes-rancher/TAMU for running 2018 Q4 FOLIO on Rancher 2.0 in AWS. Many thanks for all the support from jroot at TAMU and the Sys-Ops SIG!
Note: we are referencing the TAMU code used with the 2018 Q4 release, as cloned from GitHub on Jan 24, 2019; it may have changed since.
Deploy HA Rancher Server (This probably could use more explanation)
Following the instructions from these sources
https://rancher.com/docs/rancher/v2.x/en/installation/ha/
https://itnext.io/setup-a-basic-kubernetes-cluster-with-ease-using-rke-a5f3cc44f26f
https://medium.com/@facktoreal/installing-rancher-2-ha-with-lets-encrypt-ca3e09bf19c1
Currently running:
3 CentOS7 t3.xlarge instances - meets the HA Node Requirements for a Small Deployment
AWS ECR - the repos referenced below will need to be created, and you'll need an AWS user with access to them
Setup rancher-ssh user
Create an SSH key pair without a passphrase. On each instance:
- Create a user called rancher-ssh
- Add the user to the wheel group and enable passwordless sudo for wheel
- Add the public key to the user's authorized_keys
- Verify SSH access with the key
- Verify passwordless sudo works
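A minimal sketch of those steps, run as root on each instance (the public key file name rancher-ssh.pub is an assumption):
useradd rancher-ssh
usermod -a -G wheel rancher-ssh
# allow the wheel group to sudo without a password
echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/wheel-nopasswd
chmod 440 /etc/sudoers.d/wheel-nopasswd
# install the public key generated earlier
mkdir -p /home/rancher-ssh/.ssh
cat rancher-ssh.pub >> /home/rancher-ssh/.ssh/authorized_keys
chmod 700 /home/rancher-ssh/.ssh
chmod 600 /home/rancher-ssh/.ssh/authorized_keys
chown -R rancher-ssh:rancher-ssh /home/rancher-ssh/.ssh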
Setup Docker on each instance
Install Docker and enable it to start on boot
curl https://releases.rancher.com/install-docker/17.03.sh | sh
systemctl enable docker
Create a dockerroot group and add user to it
groupadd dockerroot
usermod -a -G dockerroot rancher-ssh
Create /etc/docker/daemon.json
{ "group": "dockerroot" }
Restart Docker and verify the ssh user can run docker commands
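For example (a sketch; the private key path is an assumption):
sudo systemctl restart docker
# from your workstation, confirm the ssh user can run Docker commands
ssh -i ~/.ssh/rancher-ssh rancher-ssh@<node-ip> docker ps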
Setup Load Balancer
Add Elastic IPs to each node and setup load balancer following https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/nlb/
Create a CNAME record pointing the Rancher Server hostname at the AWS Load Balancer DNS name
Following examples from several sources:
https://itnext.io/setup-a-basic-kubernetes-cluster-with-ease-using-rke-a5f3cc44f26f
https://medium.com/@facktoreal/installing-rancher-2-ha-with-lets-encrypt-ca3e09bf19c1
https://rancher.com/docs/rancher/v2.x/en/installation/ha/kubernetes-rke/
Use RKE to deploy cluster
Create cluster.yml by hand or using rke config https://rancher.com/docs/rancher/v2.x/en/installation/ha/kubernetes-rke/
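A minimal cluster.yml sketch for the three nodes above (the addresses and key path are assumptions; adjust to your instances):
nodes:
  - address: 10.0.1.11
    user: rancher-ssh
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/rancher-ssh
  - address: 10.0.1.12
    user: rancher-ssh
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/rancher-ssh
  - address: 10.0.1.13
    user: rancher-ssh
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/rancher-ssh
Then bring the cluster up with:
rke up --config cluster.yml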
Deploy k8s, Helm and Tiller https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/
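The Helm/Tiller step from the Rancher docs of that era boils down to roughly (a sketch):
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller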
Deploy Rancher except the last bit after Adding TLS Secrets https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/
SSL certificates
We are using our own certs, so we need to add the secrets before finishing all the steps of the Rancher installation
Convert the key to an RSA PEM key with:
openssl rsa -in /home/rld244/key.key -text > key.pem
Check the cert and key match (the output modulus should be the same) with:
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in key.pem | openssl md5
Add secrets https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/
Finish last steps of Rancher install
Login to Rancher Manager GUI
Go to Rancher Server URL
First time logging in - set admin password
Set Rancher Server URL - I think this can be changed afterward in the GUI
Create a Kubernetes Cluster
On Clusters Tab click Add Cluster
Check Amazon EC2
Cluster Name like "folio-cluster"
Name Prefix like "folio-pool"
Count = 3
Check etcd, Control Plane and Worker
Create a Node Template
Setup a Rancher IAM user with adequate permissions. See AWS Quick Start. The access key and secret key will be used below.
Click + to create a new template
- Account Access
- Choose region
- Add the Access Key and Secret Key for the Rancher IAM user
- Zone and Network
- Select your availability zone and subnet
- Click Next: Select a Security Group
- Choose one or more existing groups
- rancher-nodes (**Note** Don't select "Standard: Automatically create a rancher-nodes group"; one called rancher-nodes already exists unless this is the first time you are using Rancher in AWS, in which case you'll want to review the rancher-nodes security group that gets created)
- Click Next: Set Instance options
- We are currently using an r5.large
- Root Disk Size = 40GB
- Current Official CentOS7 AMI as of 01/24/19 = ami-9887c6e7
- SSH User = centos
- IAM Instance Profile Name = RancherNodeRole (we still need to review what the Instance Profile specifically requires)
- Add AWS Tags as needed to track your instances in AWS
- Give the template a name like folio-template
Finish creating the cluster
Expand Cluster Options
Select Amazon under Cloud Provider
Click Create
Add crunchy-postgres Helm chart
With Global selected in the Rancher UI choose Catalogs > Add Catalog
- Name = crunchy-postgres
- Catalog URL = https://github.com/CrunchyData/crunchy-containers.git
- Branch = master
- Kind = Helm
- Click Create
Create a project
With the cluster selected in the upper lefthand corner dropdown click on Add Project
Name it Folio-Project and click Create
Under the new project click Add Namespace
Name it folio-q4
Set up EBS Volume for persistent storage
Provision an AWS EBS Volume and attach it to an existing instance (this can be a node in the cluster; see https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/nodes/ for how to get the SSH key; centos is the SSH user).
Make filesystem ext4
Check if there is a filesystem on it:
sudo file -s /dev/xvdf
If it comes back with "data" it doesn't have a filesystem on it; otherwise it'll say something like "SGI XFS filesystem..."
Make the file system
sudo mkfs -t ext4 /dev/xvdf
Mount the drive, create the directories with 26:26 ownership and 700 permissions, then unmount the drive:
sudo mount /dev/xvdf /mnt
cd /mnt
sudo mkdir data backup pgconf pgwal
sudo chown 26:26 data backup pgconf pgwal
sudo chmod 700 data backup pgconf pgwal
cd ..
sudo umount /mnt
Detach the volume from the instance in the AWS console
Add Persistent Volume on the cluster
- With folio-cluster selected choose Persistent Volumes from Storage dropdown
- Add Volume
- Name = folio-pv
- Capacity = 100GiB (our current setting)
- Volume Plugin = AWS EBS Disk
- Volume ID = volume ID from AWS
- Partition = 0
- Filesystem Type = ext4
- Read Only = No
Add a Persistent Volume Claim for Folio-Project
- With Folio-Project selected choose Workloads > Volumes and Add Volume
- Name = folio-pvc
- Namespace = folio-q4
- Select the Persistent Volume created above
Create db-config secret
- With Folio-Project selected choose Resources > Secrets and Add Secret
- Name = db-config
- Available to all namespaces in this project
Paste the following values
PG_DATABASE = okapi
PG_PASSWORD = password
PG_PRIMARY_PASSWORD = password
PG_PRIMARY_PORT = 5432
PG_PRIMARY_USER = primaryuser
PG_ROOT_PASSWORD = password
PG_USER = okapi
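If you prefer the command line, a sketch of the equivalent from the cluster's kubectl shell (db-connect below can be created the same way):
kubectl -n folio-q4 create secret generic db-config \
  --from-literal=PG_DATABASE=okapi \
  --from-literal=PG_PASSWORD=password \
  --from-literal=PG_PRIMARY_PASSWORD=password \
  --from-literal=PG_PRIMARY_PORT=5432 \
  --from-literal=PG_PRIMARY_USER=primaryuser \
  --from-literal=PG_ROOT_PASSWORD=password \
  --from-literal=PG_USER=okapi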
Create db-connect secret
- With Folio-Project selected choose Resources > Secrets and Add Secret
- Name = db-connect
- Available to all namespaces in this project
Paste the following values:
DB_DATABASE = okapi_modules
DB_HOST = pg-folio
DB_MAXPOOLSIZE = 20
DB_PASSWORD = password
DB_PORT = 5432
DB_USERNAME = folio_admin
PG_DATABASE = okapi
PG_PASSWORD = password
PG_USER = okapi
Setup Cluster Service Accounts
- With Cluster: folio-cluster selected click Launch kubectl
From the terminal that opens run:
kubectl create clusterrolebinding pgset-sa --clusterrole=admin --serviceaccount=folio-q4:pgset-sa --namespace=folio-q4
touch rbac.yaml
vi rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-rb-q4
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: folio-q4
kubectl apply -f rbac.yaml
Deploy Postgres DB
With Folio-Project selected choose Catalog Apps > Launch
On All Catalogs dropdown choose Crunchy-Postgres
Choose statefulset
Name = pgset
Click customize next to "This application will be deployed into the ... namespace"
Click Use an existing namespace
Choose folio-q4 in Namespace dropdown
Launch
After it is running click pgset on Catalog Apps page
Click pgset under Workloads
Scale down to 0 and wait for the pods to be removed
In vertical ... click Edit
Expand Environment Variables
Remove all prepopulated Environment Variables and add these:
WORK_MEM = 4MB
TEMP_BUFFERS = 16MB
SHARED_BUFFERS = 128MB
PGHOST = /tmp
PGDATA_PATH_OVERRIDE = folio-q4
PG_REPLICA_HOST = pgset-replica
PG_PRIMARY_HOST = pgset-primary
PG_MODE = set
PG_LOCALE = en_US.UTF-8
MAX_WAL_SENDERS = 2
MAX_CONNECTIONS = 500
ARCHIVE_MODE = off
Add From Source
- Type = Secret
- Source = db-config
- Key = All
Node Scheduling
Select Run all pods for this workload on a specific node
Volumes
Leave pgdata alone
Remove Volume called backup
Add Volume > Use an existing persistent volume (claim) > folio-pvc
Volume Name = folio-q4
Add multiple mounts under this Volume
- Mount Point = /pgdata/folio-q4 Sub Path in Volume = data
- Mount Point = /backup Sub Path in Volume = backup
- Mount Point = /pgconf Sub Path in Volume = pgconf
- Mount Point = /pgwal Sub Path in Volume = pgwal
Click Upgrade
Scale up to 2 pods
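The scaling can also be done from the kubectl shell (a sketch, assuming the statefulset keeps the name pgset):
kubectl -n folio-q4 scale statefulset pgset --replicas=0
# ...make the edits above, then...
kubectl -n folio-q4 scale statefulset pgset --replicas=2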
Add Service Discovery record
- With Folio-Project selected choose Workloads > Service Discovery > Add Record
- Name = pg-folio
- Namespace = folio-q4
- Resolves to = The set of pods which match a selector
- Add Selector and paste
- statefulset.kubernetes.io/pod-name=pgset-0
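What Rancher creates here is roughly a Service that selects the primary pod; a plain-YAML sketch of the same idea (the headless setting and port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: pg-folio
  namespace: folio-q4
spec:
  clusterIP: None   # headless; DNS resolves straight to the matching pod
  selector:
    statefulset.kubernetes.io/pod-name: pgset-0
  ports:
  - port: 5432   # Postgres default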
Deploy create-db Workload Job
- In AWS ECR console click on folio/create-db
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-database/
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS (see the sketch after this list)
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-db
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest)
- Expand Environment Variables > Add From Source
- Type = Secret
- Source = db-connect
- Key = All
- Click Launch
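The push commands ECR shows you look roughly like this (a sketch in the AWS CLI v1 syntax of that era; the account ID matches the placeholder above):
# retrieve and run the docker login command
$(aws ecr get-login --no-include-email --region us-east-1)
# build, tag and push from the create-database directory
docker build -t folio/create-db .
docker tag folio/create-db:latest 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest
docker push 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-db:latest
The same pattern applies to the okapi, stripes and deploy-job images below.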
Deploy Okapi Workload
- In the repo's kube-rancher branch, in the file alternative-install/kubernetes-rancher/TAMU/okapi/hazelcast.xml, update the following (we're not sure what all Hazelcast does and what is really needed here)
- lines 56 - 79
- set aws enabled="true"
- add access-key and secret-key
- region = us-east-1
- security-group-name = rancher-nodes
- tag-key = Name
- tag-value = folio-rancher-node (this is set in the template used to create the cluster)
- set kubernetes enabled="false"
- In AWS ECR console click folio/okapi
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/okapi
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = okapi
- Workload Type = Scalable deployment of 1 pod
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/okapi:latest)
- Namespace = folio-q4
- Add Ports
- 9130 : TCP : Cluster IP (Internal only) Same as container port
- 5701 : same as 9130 above
- 5702 : same
- 5703 : same
- 5704 : same
Expand Environment Variables and paste
PG_HOST = pg-folio
OKAPI_URL = http://okapi:9130
OKAPI_PORT = 9130
OKAPI_NODENAME = okapi1
OKAPI_LOGLEVEL = INFO
OKAPI_HOST = okapi
INITDB = true
HAZELCAST_VERTX_PORT = 5703
HAZELCAST_PORT = 5701
- Add From Source
- Type = Secret
- Source = db-connect
- Key = All
- Click Launch
- In the Rancher Manager with Folio-Project selected choose Workloads > Service Discovery
- Copy the Cluster IP under okapi
- Select Workloads > Workloads and click on okapi
- In the vertical ... click Edit
- Add 2 new Environment Variables
- HAZELCAST_IP = paste the Cluster IP from Service Discovery (should look something like 10.43.230.80)
- OKAPI_CLUSTERHOST = paste the Cluster IP from Service Discovery
- Set INITDB = false
- Click Upgrade
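You can also look up that Cluster IP from the kubectl shell instead of the UI (a sketch, assuming the service is named okapi):
kubectl -n folio-q4 get svc okapi -o jsonpath='{.spec.clusterIP}'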
Deploy Folio Module Workloads
Deploy one workload of type Scalable deployment, with the pod count from the "Scalable Deployment" column, for each of the following modules, adding the Environment Variables and the db-connect secret where indicated (we still need to figure out how to deploy all of these with Helm or YAML; one possible plain-YAML equivalent is sketched after the table). Name each workload after the module, e.g. folioorg/mod-feesfines:15.1.0 would be named mod-feesfines.
mod | Env Vars | db-connect | Scalable Deployment |
folioorg/mod-feesfines:15.1.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-users:15.3.0 | JAVA_OPTIONS=-Xmx384m | x | 2 |
folioorg/mod-password-validator:1.0.1 | | x | 2 |
folioorg/mod-permissions:5.4.0 | JAVA_OPTIONS=-Xmx512m | x | 2 |
folioorg/mod-login:4.6.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-inventory-storage:14.0.0 | JAVA_OPTIONS=-Xmx512m | x | 2 |
folioorg/mod-configuration:5.0.1 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-authtoken:2.0.4 | JAVA_OPTIONS=-Djwt.signing.key=CorrectBatteryHorseStaple -Xmx256m | | 2 |
folioorg/mod-circulation-storage:6.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-circulation:14.1.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-inventory:11.0.0 | JAVA_OPTIONS=-Dorg.folio.metadata.inventory.storage.type=okapi -Xmx256m | | 2 |
folioorg/mod-codex-mux:2.3.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-codex-inventory:1.4.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-login-saml:1.2.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-notify:2.1.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-notes:2.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-users-bl:4.3.2 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-tags:0.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-codex-ekb:1.1.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-kb-ebsco:1.1.0 | EBSCO_RMAPI_BASE_URL=https://sandbox.ebsco.io JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-calendar:1.2.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-vendors:1.0.3 | JAVA_OPTIONS=-Xmx384m | x | 2 |
folioorg/mod-agreements:1.0.2 | JAVA_OPTIONS=-Xmx256m GRAILS_SERVER_HOST=mod-agreements GRAILS_SERVER_PORT=8080 | x | 2 |
folioorg/mod-marccat:1.2.0 | | | 2 |
folioorg/mod-template-engine:1.0.1 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-finance-storage:1.0.1 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-orders-storage:1.0.2 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-source-record-storage:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-source-record-manager:0.1.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-event-config:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-orders:1.0.2 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-erm-usage:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-erm-usage-harvester:1.0.0 | JAVA_OPTIONS=-Xmx256m CONFIG='{"okapiUrl": "http://okapi:9130"}' | | 2 |
folioorg/mod-gobi:1.0.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-data-import:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 1 |
folioorg/mod-patron:1.2.0 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-rtac:1.2.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-email:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-sender:1.0.0 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-audit:0.0.3 | JAVA_OPTIONS=-Xmx256m | x | 2 |
folioorg/mod-audit-filter:0.0.3 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-licenses:1.0.2 | JAVA_OPTIONS=-Xmx256m GRAILS_SERVER_HOST=mod-licenses GRAILS_SERVER_PORT=8080 | x | 2 |
folioorg/mod-oai-pmh:1.0.1 | JAVA_OPTIONS=-Xmx256m | | 2 |
folioorg/mod-data-loader:latest | JAVA_OPTIONS=-Xmx256m | | 1 |
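As a starting point for scripting these deployments, a plain-YAML sketch of a single module's workload (mod-users, using the values from its table row; the labels are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mod-users
  namespace: folio-q4
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mod-users
  template:
    metadata:
      labels:
        app: mod-users
    spec:
      containers:
      - name: mod-users
        image: folioorg/mod-users:15.3.0
        env:
        - name: JAVA_OPTIONS
          value: "-Xmx384m"
        envFrom:
        - secretRef:
            name: db-connect   # the db-connect secret created above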
Deploy Stripes Module
- Add Logo and Favicon to new directory in the project at alternative-install/kubernetes-rancher/TAMU/stripes-diku/tenant-assets
- Update branding section in alternative-install/kubernetes-rancher/TAMU/stripes-diku/stripes.config.js
- Update alternative-install/kubernetes-rancher/TAMU/stripes-diku/Dockerfile as needed for your environment
- In AWS ECR console click on folio/stripes
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/stripes-diku
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = stripes
- Workload Type = Run one pod on each node
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/stripes:latest)
- Add Port
- 3000 : TCP : Cluster IP (Internal only) : Same as container port
- Launch
Create diku-tenant-config secret
- With Folio-Project selected choose Resources > Secrets and Add Secret
- Name = diku-tenant-config
- Available to all namespaces in this project
Paste the following values:
ADMIN_PASSWORD = admin
ADMIN_USER = diku_admin
OKAPI_URL = http://okapi:9130
TENANT_DESC = My Library
TENANT_ID = diku
TENANT_NAME = My Library
Create create-tenant Workload job
- In AWS ECR console click on folio/create-tenant
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-tenant
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-tenant
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-tenant:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
Create create-deploy Workload job
- In AWS ECR console click on folio/create-deploy
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-deploy
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-deploy
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-deploy:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
Create bootstrap-superuser Workload job
- In AWS ECR console click on folio/bootstrap-superuser
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/bootstrap-superuser
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = bootstrap-superuser
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/bootstrap-superuser:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
Create create-refdata Workload job
- In AWS ECR console click on folio/create-refdata
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/create-refdata
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-refdata
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-refdata:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Launch
Ingress
We still need to set up a load balancer here and change how the ingress works
Point folio-2018q4.myurl.org and okapi-2018q4.myurl.org at the public IP of the node in pool1
Add SSL certificate for ingress
- In the Rancher Manager with Folio-Project selected choose Resources > Certificates
- Add certificate
- Name = folio-2018q4
- Available to all namespaces in this project
- Private Key needs to be in RSA format see above on how to create the RSA format
- Add root cert
Ingress
- In the Rancher Manager with Folio-Project selected choose Workloads > Load Balancing and Add Ingress twice, once for Okapi and once for Stripes
Okapi ingress:
- Name = okapi
- Specify a hostname to use
- Request Host = okapi-2018q4.myurl.org
- Click Target Backend > +Service 3 times to add three services
- Path = <leave it blank the first time> Target = okapi Port = <should autofill with 9130>
- Path = / Target = okapi Port = <should autofill with 9130>
- Path = /_/ Target = okapi Port = <should autofill with 9130>
- SSL/TLS Certificates
Stripes ingress:
- Name = stripes
- Specify a hostname to use
- Request Host = folio-2018q4.myurl.org
- Click Target Backend > +Service 3 times to add three services
- Path = <leave it blank the first time> Target = stripes Port = <should autofill with 3000>
- Path = / Target = stripes Port = <should autofill with 3000>
- Path = /_/ Target = stripes Port = <should autofill with 3000>
- SSL/TLS Certificates
- Add Certificate
- Choose a certificate
- Certificate made above should be in Certificate dropdown
- Host = folio-2018q4.myurl.org
- Login should come up at https://folio-2018q4.myurl.org now
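For reference, the Stripes ingress Rancher builds here is roughly equivalent to this plain-YAML sketch (extensions/v1beta1 matches the Kubernetes versions Rancher shipped at the time; the TLS secret name is an assumption):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: stripes
  namespace: folio-q4
spec:
  tls:
  - hosts:
    - folio-2018q4.myurl.org
    secretName: folio-2018q4
  rules:
  - host: folio-2018q4.myurl.org
    http:
      paths:
      - path: /
        backend:
          serviceName: stripes
          servicePort: 3000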
Get Okapi token and create secret
- After logging in choose Apps > Settings > Developer > Set Token
- Copy the Authentication token (JWT)
- In Rancher Manager with Folio-Project selected choose Resources > Secrets > Add Secret
- Name = x-okapi-token
- Key = X_OKAPI_TOKEN
- Value = <copied token>
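Or create it from the kubectl shell (a sketch):
kubectl -n folio-q4 create secret generic x-okapi-token --from-literal=X_OKAPI_TOKEN=<copied token>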
Create create-sampdata Workload job (this might work as a way to inject our own records into the system)
- In AWS ECR console click on folio/create-sampdata
- Click on View push commands
- In terminal cd into the project folder alternative-install/kubernetes-rancher/TAMU/deploy-jobs/tenants/diku/create-sampdata
- Run the commands from AWS to retrieve the login command, build the image, tag the image and push it to AWS
- In the Rancher Manager with Folio-Project selected choose Workloads > Deploy
- Name = create-sampdata
- Workload Type = Job
- Copy Docker Image URI from AWS ECR repo and paste into Rancher Docker Image (should look something like 0101010101010101.dkr.ecr.us-east-1.amazonaws.com/folio/create-sampdata:latest)
- Under Environment Variables > Add From Source
- Type = Secret
- Source = diku-tenant-config
- Key = All
- Under Environment Variables > Add From Source
- Type = Secret
- Source = x-okapi-token
- Key = All
- Launch