As of 12/19/2024, all newly created Rancher Folio environments have Telepresence installed by default, so every development team can connect remotely from an IDE and debug their modules.

...

Once the environment build/rebuild is complete, go to your team’s namespace/project in the Rancher UI → Storage → ConfigMaps → telepresence-{{YourTeamName}}

...

Click on the ConfigMap and copy keyId and secretKey (they will be required later on)

...

If everything went OK, you’ll see output similar to the example below

...

Environment variables

Modules deployed in a Eureka environment require more environment variables on startup than the Okapi setup does. Before starting your module in IntelliJ, extract the required environment variables (e.g. eureka-common, db-credentials and kafka-credentials) from your cluster into an .env file, or attach them as individual environment variables to the IntelliJ run configuration.

...

Find and include the additional env vars from the Rancher UI; they can be found under the Storage > Secrets tab:

...

Alternatively, you can determine which variables a given module actually needs, and therefore which must be extracted into your local module environment, under Workloads > Pods > {{module_name}} > Config tab:

...

For example, for mod-orders we only require db-credentials and kafka-credentials to be extracted.
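As a sketch of the extraction step: Secret values in Kubernetes are base64-encoded, so they need decoding when written to an .env file. The snippet below demonstrates the transformation with jq on a sample Secret JSON; the SECRET_JSON variable stands in for the output of a `kubectl get secret db-credentials -o json` call, and the key names and values are illustrative, not actual cluster data:

```shell
# Stand-in for: kubectl get secret db-credentials -n <namespace> -o json
# (the data values are base64-encoded, as in any Kubernetes Secret;
# these illustrative values decode to "postgres" and "5432")
SECRET_JSON='{"data":{"DB_HOST":"cG9zdGdyZXM=","DB_PORT":"NTQzMg=="}}'

# Decode every key/value pair into KEY=VALUE lines for the run config
echo "$SECRET_JSON" \
  | jq -r '.data | to_entries[] | "\(.key)=\(.value|@base64d)"' > module.env

cat module.env
```

Against a real cluster, replace the SECRET_JSON assignment with the kubectl call and repeat for each required secret (kafka-credentials, etc.).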

Also note: for the Feign Client to work properly, make sure to include this env var in your module environment:

SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION=http://kong-{{namespace}}:8000
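For instance, if the .env file attached to the run configuration is named module.env and the team namespace is folio-dev (both illustrative names, not values from this document), the variable can be appended like so:

```shell
# folio-dev is a placeholder namespace; substitute your team's namespace
echo 'SIDECAR_FORWARD_UNKNOWN_REQUESTS_DESTINATION=http://kong-folio-dev:8000' >> module.env
```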

Sidecar port-proxy:

For cross-module requests to work correctly, a port proxy must route all traffic from your local module instance running in IntelliJ to localhost:8082, and from there to the remote cluster. This rerouting simulates a fully functional pod environment, in which the module of interest and its companion sidecar are both reachable on localhost:8082 inside the pod.

On Windows:

Add a port proxy for both the localhost and 127.0.0.1 loopback addresses on port 8082:

netsh interface portproxy add v4tov4 listenport=8082 listenaddress=127.0.0.1 connectport=8082 connectaddress={{module_name}}

netsh interface portproxy add v4tov4 listenport=8082 listenaddress=localhost connectport=8082 connectaddress={{module_name}}

List the port proxies to check that they were created:

netsh interface portproxy show all

There should be 2 records resembling:

...

Test the port proxy with curl; all three URLs (localhost, 127.0.0.1 and {{module_name}}) should produce a similar response, such as the one below:

...

Finally, when you are done with this module, you can remove the port proxies using:

netsh interface portproxy delete v4tov4 listenport=8082 listenaddress=127.0.0.1

netsh interface portproxy delete v4tov4 listenport=8082 listenaddress=localhost

On Linux | MacOS:

Add a port proxy on port 8082. A plain one-way pipe would discard the responses, so a named pipe carries them back to the client:

mkfifo /tmp/sidecar-proxy

nc -l 8082 -k < /tmp/sidecar-proxy | nc {{module_name}} 8082 > /tmp/sidecar-proxy

Check whether it is reachable with curl:

curl localhost:8082

curl 127.0.0.1:8082

curl {{module_name}}:8082

Remove the port proxy by finding the process IDs and killing them:

sudo lsof -i :8082

sudo kill -9 <PID>

Clean-up

Once your task is complete and the interception is no longer needed, please execute:

  • telepresence leave {{InterceptedSvcName}}

  • telepresence quit -s
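The two steps above can be wrapped in a small helper. This is only a sketch; the DRY_RUN switch is an assumption of this example (not a telepresence feature) that prints the commands instead of running them, so the logic can be checked without a cluster:

```shell
# Sketch of a clean-up helper for a finished intercept session.
# DRY_RUN=1 prints the commands instead of executing them.
cleanup_intercept() {
  svc="$1"
  run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
  }
  run telepresence leave "$svc"   # drop the intercept for this service
  run telepresence quit -s        # stop the local telepresence daemons
}

# Example dry run against a hypothetical intercepted service name
DRY_RUN=1 cleanup_intercept mod-orders
```

With DRY_RUN=1 this prints the two telepresence commands; without it, they are executed as-is.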

If Telepresence fails to connect, find all active/running Telepresence processes on your system and stop them.

How to find active/running process(es) and terminate them:

  • Windows OS → use Task Manager (ctrl + shift + esc)

  • Linux | MacOS → in shell: sudo kill $(pgrep telepresence)

Then try to connect again.