Telepresence on Rancher
As of 12/19/2024, all newly created Rancher Folio environments have Telepresence installed by default, so every development team can connect remotely via the IDE and debug their modules.
P.S. If your env was created before the date above, it MUST BE re-created to enable Telepresence.
How to install required tools:
telepresence: Link
AWS CLI: Link
kubectl: Link
Once the env build/rebuild activity is done, please go to your team's namespace/project in the Rancher UI → Storage → ConfigMaps → telepresence-{{YourTeamName}}
Click on the ConfigMap and copy keyId and secretKey (they will be required later on)
If all 3 prerequisite tools are installed, please proceed with the following configuration:
AWS CLI:
Command to execute: aws configure (an example session is shown after these steps)
Paste the previously copied keyId into the AWS Access Key ID prompt
Paste the previously copied secretKey into the AWS Secret Access Key prompt
Type us-west-2 in the Default region name prompt
Type json in the Default output format prompt
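For reference, a typical aws configure session looks roughly like this (the key values shown are placeholders, not real credentials):
aws configure
AWS Access Key ID [None]: {{keyId}}
AWS Secret Access Key [None]: {{secretKey}}
Default region name [None]: us-west-2
Default output format [None]: json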
kubectl:
Command to execute: aws eks update-kubeconfig --region us-west-2 --name {{clusterName}}. The value for the {{clusterName}} placeholder depends on where your environment has been deployed; in our case it is folio-edev.
Once everything is in place, connect your local Telepresence to the Rancher environment:
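For example, for an environment deployed on the Eureka dev cluster the command would be:
aws eks update-kubeconfig --region us-west-2 --name folio-edev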
Command to execute:
telepresence connect --namespace {{envName}} --manager-namespace {{envName}}
In most cases envName is equal to your team name: eureka, thunderjet, spitfire, etc.
In case of a shared environment, simply use the env name, e.g. snapshot, sprint, and so on.
If everything went OK, you'll see something like the output below:
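For example, for the thunderjet team namespace the command would be:
telepresence connect --namespace thunderjet --manager-namespace thunderjet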
Environment variables
Modules deployed in a Eureka environment require more env vars to be present on module startup in IntelliJ than in the Okapi setup. When starting your module in IntelliJ, make sure to extract the required environment variables (e.g. eureka-common, db-credentials and kafka-credentials) from your cluster into an .env file or into individual env vars attached to the IntelliJ run configuration.
Find and include additional env vars from the Rancher UI; these can be found under the Storage > Secrets tab:
Alternatively, for the respective module you can determine which env vars need to be extracted and added to your local module environment under the Workloads > Pods > {{module_name}} > Config tab:
For example, for mod-orders we only require db-credentials and kafka-credentials to be extracted.
Also note that for the Feign client to work properly, make sure to include this env var in your module environment:
SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION=http://kong-{{namespace}}:8000
Environment variable extraction for an Okapi-based environment will follow the same pattern but may require changing the cluster from folio-edev to folio-dev on your Rancher UI.
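For illustration only, a minimal .env for mod-orders might look like the sketch below; the exact key names and values must be taken from the db-credentials and kafka-credentials Secrets in your namespace (the keys shown here are assumed, not guaranteed):
DB_HOST={{from db-credentials}}
DB_PORT={{from db-credentials}}
DB_USERNAME={{from db-credentials}}
DB_PASSWORD={{from db-credentials}}
DB_DATABASE={{from db-credentials}}
KAFKA_HOST={{from kafka-credentials}}
KAFKA_PORT={{from kafka-credentials}}
SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION=http://kong-{{namespace}}:8000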
Sidecar port-proxy:
In order for cross-module requests to work correctly in your Eureka environment, a port proxy must be added to route all traffic from your local module instance running in IntelliJ to localhost:8082 and then to the remote cluster itself. This rerouting is necessary to simulate a fully functional pod environment, where our module of interest and its companion sidecar are accessible on localhost:8082 inside the pod.
On Windows:
Make sure to use a shell terminal with admin privileges; otherwise corporate Windows Firewall rules may silently prevent ports from opening.
Add port proxy for both localhost and 127.0.0.1 loopback address on port 8082:
netsh interface portproxy add v4tov4 listenport=8082 listenaddress=127.0.0.1 connectport=8082 connectaddress={{module_name}}
netsh interface portproxy add v4tov4 listenport=8082 listenaddress=localhost connectport=8082 connectaddress={{module_name}}
List/check if port proxy rules are created:
netsh interface portproxy show all
There should be 2 records resembling:
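Roughly like this (mod-orders is used as an example module; the exact layout may vary by Windows version):
Listen on ipv4:             Connect to ipv4:

Address         Port        Address         Port
--------------- ----------  --------------- ----------
127.0.0.1       8082        mod-orders      8082
localhost       8082        mod-orders      8082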
Check if the port proxy process has been created:
netstat -ano | grep 8082 or netstat -ano | findstr ":8082"
There should be 1 record:
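Roughly like this (the owning PID will differ on your machine):
TCP    127.0.0.1:8082    0.0.0.0:0    LISTENING    4712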
Test the port proxy with curl; all three URLs (localhost, 127.0.0.1 and {{module_name}}) should produce a similar response, such as below:
Finally, when you are done with this module, you can remove the port proxy using:
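For example (the same checks as in the Linux | MacOS section below):
curl localhost:8082
curl 127.0.0.1:8082
curl {{module_name}}:8082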
netsh interface portproxy delete v4tov4 listenport=8082 listenaddress=127.0.0.1
netsh interface portproxy delete v4tov4 listenport=8082 listenaddress=localhost
On Linux | MacOS:
Add a port proxy listening on the localhost loopback address on port 8082:
socat TCP-LISTEN:8082,bind=localhost,fork TCP:{{module_name}}:8082 &
Check whether it is reachable with curl:
curl localhost:8082
curl 127.0.0.1:8082
curl {{module_name}}:8082
Remove the port proxy by killing the socat process:
killall socat
Module interception
To start intercepting a module, configure the environment variables and, in the case of Eureka, set up the necessary port proxies, then finally run the following command:
telepresence intercept -p {{local_instance_port}}:http {{module_name}}
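For example, if mod-orders is started locally on port 8081 (an arbitrary port chosen for this illustration), the command would be:
telepresence intercept -p 8081:http mod-orders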
Your {{local_instance_port}} will correspond to the port number used to start the module instance in IntelliJ, as shown below:
A successful interception will have tel-agent-init terminated as Completed, traffic-agent left in Running status, and the module in question returning “OK” from its /admin/health endpoint:
After meeting all the criteria you can now start your module in IntelliJ, and with that you are all set to test a feature or debug an issue using the resources on Rancher and your locally started module.
Kafka work stealing
If your module communicates with Kafka, there may be a problem with work stealing, i.e. the module still deployed in Rancher will steal messages that should be consumed by your local module instance. To mitigate this, specify an explicit ENV env var in the module's Deployment so that the module still deployed in Rancher consumes messages from non-existent topics.
For example, if we set ENV to NOP, it will override the default folio-{{cluster}}-{{namespace}} value specified in the Secrets and allow us to consume all messages from the folio-prefixed topics locally with our module started in IntelliJ.
When you are done with this module, make sure to remove the ENV env var or set it back to folio-{{cluster}}-{{namespace}}, so that the Rancher environment can continue functioning normally.
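A sketch of how to set and later revert it with kubectl, assuming the Deployment is named after the module (e.g. mod-orders in the thunderjet namespace); you can equally edit the Deployment in the Rancher UI:
kubectl set env deployment/mod-orders ENV=NOP -n thunderjet
kubectl set env deployment/mod-orders ENV=folio-{{cluster}}-{{namespace}} -n thunderjet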
Use a different module for interception
When you want to intercept some other module, it is advised to uninstall Telepresence traffic agents from other modules that still have them, to keep them from interfering with Okapi module installation or with Eureka module tenant entitlement during environment start-up or on module deployment from a feature branch. This is particularly critical, and a must-have operation, in case you are intercepting mod-consortia-keycloak, as it is known to affect normal ECS and Consortia Manager operation while the traffic agent is present. To uninstall a traffic agent from your module of interest, first check whether it has already been installed with the following command:
telepresence list -a
Should return something like this:
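Something along these lines (the exact wording may differ between Telepresence versions):
mod-users-keycloak: ready to intercept (traffic-agent already installed)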
Next uninstall the agent by running this command:
telepresence uninstall mod-users-keycloak
You should see a different output being returned from telepresence list -a, as shown below:
Additionally, you can verify the uninstallation using the Rancher UI, where we expect only the module itself and its associated folio-module-sidecar container to be present:
Clean-up
Once your task is complete and the interception is no longer needed, please execute:
telepresence leave {{InterceptedSvcName}}
telepresence uninstall {{InterceptedSvcName}} | more info: Known Issue
telepresence quit -s
If Telepresence is failing to connect, please find all active/running telepresence process(es) on your system and stop them, then try to connect again.
How to find active/running process(es) and terminate them:
Windows OS → use Task Manager (ctrl + shift + esc)
Linux | MacOS → in shell: sudo kill $(ps aux | grep 'telepresence' | awk '{print $2}')
try to connect again.
AFTERWORD (assuming that you've completed the AWS CLI & kubectl configuration from the previous steps):
If you need logs from any back-end module(s) streamed to your PC, please do the following:
Find the required pod via cmd: export podLog=$(kubectl get pod -l 'app.kubernetes.io/name={{ moduleName }}' -o=name -n {{ namespaceName }})
Start log streaming via cmd: nohup kubectl logs $podLog -c {{ moduleName }} -f -n {{ namespaceName }} >> /tmp/{{ moduleName }}.log &
View the target log via cat, grep, tail, head, vi(m), less or anything else
Example of commands:
export podLog=$(kubectl get pod -l 'app.kubernetes.io/name=mod-consortia-keycloak' -o=name -n snapshot)
nohup kubectl logs $podLog -c mod-consortia-keycloak -f -n snapshot >> /tmp/mod-consortia-keycloak.log &
tail -f /tmp/mod-consortia-keycloak.log
Expected results should look like the below: