
Telepresence on Rancher


As of 12/19/2024, all newly created Rancher FOLIO environments have Telepresence installed by default, so every development team can connect remotely from their IDE and debug.

P.S. If your environment was created before the date above, it MUST BE re-created to enable Telepresence.

How to install the required tools:

  • telepresence: Link

  • AWS CLI: Link

  • kubectl: Link
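
Before proceeding, you can quickly verify that all three tools are available on your PATH (version numbers will vary):

telepresence version

aws --version

kubectl version --client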

Once the environment build/rebuild is done, go to your team’s namespace/project in the Rancher UI → Storage → ConfigMaps → telepresence-{{YourTeamName}}

(screenshot: the telepresence-{{YourTeamName}} ConfigMap in the Rancher UI)

Click on the ConfigMap and copy keyId and secretKey (they will be required later on).

If all three prerequisite tools are installed, proceed with the following configuration:

AWS CLI:

  • Command to execute: aws configure

  • paste the previously copied keyId into the AWS Access Key ID prompt

  • paste the previously copied secretKey into the AWS Secret Access Key prompt

  • type us-west-2 in the Default region name prompt

  • type json in the Default output format prompt

(screenshot: aws configure prompts)
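
Alternatively, if you prefer to skip the interactive prompts, the same values can be set non-interactively; the {{keyId}} and {{secretKey}} placeholders below stand for the values copied from the ConfigMap:

aws configure set aws_access_key_id {{keyId}}

aws configure set aws_secret_access_key {{secretKey}}

aws configure set region us-west-2

aws configure set output json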

kubectl:

  • Command to execute: aws eks update-kubeconfig --region us-west-2 --name {{clusterName}}. The value for the {{clusterName}} placeholder depends on where your environment has been deployed; in our case it is folio-edev.

(screenshot: aws eks update-kubeconfig output)
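
For example, for an environment deployed to the folio-edev cluster:

aws eks update-kubeconfig --region us-west-2 --name folio-edev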

Once everything is in place, connect your local Telepresence to the Rancher environment:

  • Command to execute:

telepresence connect --namespace {{envName}} --manager-namespace {{envName}}

In most cases envName is equal to your team name: eureka, thunderjet, spitfire, etc.

In the case of a shared environment, simply use the environment name instead, i.e. snapshot, sprint, and so on.
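
For example, assuming your team namespace is thunderjet, the command would be:

telepresence connect --namespace thunderjet --manager-namespace thunderjet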

If everything went OK, you will see something like the output below:

(screenshot: successful telepresence connect output)

Environment variables

Modules deployed in a Eureka environment require additional env vars to be present on module startup in IntelliJ. When starting your module in IntelliJ, make sure to extract the required environment variables (e.g. eureka-common, db-credentials and kafka-credentials) from your cluster into a .env file, or attach them as individual env vars to the IntelliJ run configuration.

(screenshot: environment variables attached to an IntelliJ run configuration)
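
One possible way to dump such a secret into a .env file is sketched below; it assumes kubectl (configured above) and jq are available, and module.env is just an example file name:

kubectl get secret db-credentials -n {{envName}} -o json | jq -r '.data | to_entries[] | "\(.key)=\(.value | @base64d)"' >> module.env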

Find and include additional env vars from the Rancher UI; these can be found under the Storage > Secrets tab:

(screenshot: Storage > Secrets tab in the Rancher UI)

Alternatively, for the respective module you can determine which variables need to be extracted and added to your local module environment under the Workloads > Pods > {{module_name}} > Config tab:

(screenshot: Workloads > Pods > {{module_name}} > Config tab)

For example, for mod-orders we only require db-credentials and kafka-credentials to be extracted.

Also note that for the Feign client to work properly, make sure to include this env var in your module environment:

SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION=http://kong-{{namespace}}:8000
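
For example, for the thunderjet namespace this would be:

SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION=http://kong-thunderjet:8000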

Sidecar port-proxy:

In order for cross-module requests to work correctly, a port proxy must be added so that all traffic from your local module instance running in IntelliJ is routed to localhost:8082 and from there to the remote cluster. This rerouting is necessary to simulate a fully functional pod environment, where the module of interest has its companion sidecar accessible on localhost:8082 inside the pod.

On Windows:

Add port proxy for both localhost and 127.0.0.1 loopback address on port 8082:

netsh interface portproxy add v4tov4 listenport=8082 listenaddress=127.0.0.1 connectport=8082 connectaddress={{module_name}}

netsh interface portproxy add v4tov4 listenport=8082 listenaddress=localhost connectport=8082 connectaddress={{module_name}}
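
For example, if the module of interest is mod-orders (used as an example elsewhere on this page), the commands would be:

netsh interface portproxy add v4tov4 listenport=8082 listenaddress=127.0.0.1 connectport=8082 connectaddress=mod-orders

netsh interface portproxy add v4tov4 listenport=8082 listenaddress=localhost connectport=8082 connectaddress=mod-orders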

List/check if they are created:

netsh interface portproxy show all

There should be two records resembling the following:

(screenshot: netsh interface portproxy show all output)

Test the port proxy with curl; all three URLs (localhost, 127.0.0.1 and {{module_name}}) should produce a similar response, such as below:

(screenshot: similar curl responses from all three URLs)
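
The checks themselves are the same as in the Linux | MacOS section further below:

curl localhost:8082

curl 127.0.0.1:8082

curl {{module_name}}:8082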

Finally, when you are done with this module, you can remove the port proxy using:

netsh interface portproxy delete v4tov4 listenport=8082 listenaddress=127.0.0.1

netsh interface portproxy delete v4tov4 listenport=8082 listenaddress=localhost

On Linux | MacOS:

Add port proxy for all addresses on port 8082:

nc -lk 8082 | nc {{module_name}} 8082
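
Note that a single nc pipe only forwards traffic in one direction, so responses may not reach the client on some systems. If that happens, a bidirectional alternative such as socat (if installed) can be used instead; this is a sketch, not part of the original setup:

socat TCP-LISTEN:8082,fork,reuseaddr TCP:{{module_name}}:8082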

Check whether it is reachable with curl:

curl localhost:8082

curl 127.0.0.1:8082

curl {{module_name}}:8082

Remove port proxy by finding the process id and killing it:

sudo lsof -i :8082

sudo kill -9 <PID>

Clean-up

Once your task is complete and the interception is no longer needed, please execute:

  • telepresence leave {{InterceptedSvcName}}

  • telepresence quit -s
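
If you are not sure whether anything is still intercepted or connected, telepresence list and telepresence status can be used to check before quitting:

  • telepresence list

  • telepresence status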

If Telepresence is failing to connect, find all active/running telepresence process(es) on your system and stop them.

How to find active/running process(es) and terminate them:

  • Windows OS → use Task Manager (ctrl + shift + esc)

  • Linux | MacOS → in shell: sudo kill $(ps aux | grep 'telepresence' | awk '{print $2}')

Then try to connect again.
