Pacing up development on Kubernetes

Vaibhav Rajput
5 min read · Jun 1, 2020


After working on Kubernetes for some time, I have to say that when it comes to development on a Kubernetes cluster, the process of testing and debugging can be really slow. Say you have a cluster running on AWS EKS that runs a service you want to update. Typically, the process would be: change your code ➔ build an image ➔ push the image to a registry (say AWS ECR) ➔ update the cluster to use the new image ➔ wait for the new image to be pulled into the pod ➔ perform the tests. That’s a lot of work!

If your service were independent of other services, then maybe you could have replicated that component in a local cluster and tested it there. But what if it depends on several other services or databases? Now you can’t replicate the whole setup on a local system unless, of course, you have a machine with massive computing power.

So we need a way to run our test service inside the Kubernetes environment, but in a way that our local systems don’t suffer the load. The best solution that I could find for this was Telepresence. Telepresence works by running your code locally, as a normal local process, and then forwarding requests to and from the Kubernetes cluster.

How it works

Telepresence deploys a two-way network proxy in a pod running in your Kubernetes cluster, connected to your local machine using the kubectl port-forward command. This pod proxies data such as environment variables, secrets, and volumes from your Kubernetes environment to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed over the proxy to the remote Kubernetes cluster. With this setup established, your service gets full access to other services in the remote cluster and vice versa.
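
As a quick illustration of this transparent override (a minimal sketch, assuming your kubeconfig points at a running cluster; kubernetes.default is just the cluster’s built-in API service, and any in-cluster DNS name would do), you can resolve an in-cluster hostname from a plain local process run under Telepresence:

$ telepresence --run python3 -c "import socket; print(socket.gethostbyname('kubernetes.default'))"

The name resolves to a cluster-internal IP even though the process runs on your laptop, because the DNS lookup was routed over the proxy.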

Establishing the proxy

The proxy connection can be established in two ways, namely vpn-tcp and inject-tcp. You can choose between them using the --method flag.

vpn-tcp

This is the default method. A VPN-like tunnel is created using a program called sshuttle, which tunnels the packets over an SSH connection and forwards DNS queries to a DNS proxy in the cluster. sshuttle forwards the traffic to the IPs of Kubernetes Pods and Services via the proxy Pod running in the cluster.
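
Since vpn-tcp is the default, the flag can normally be omitted; spelled out explicitly, a session looks like the sketch below (myservice:8000 is a placeholder for whatever your cluster actually runs):

$ telepresence --method vpn-tcp --run curl http://myservice:8000/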

inject-tcp

When using --method inject-tcp, the proxying is implemented using the LD_PRELOAD mechanism on Linux or the DYLD_INSERT_LIBRARIES mechanism on macOS, where a shared library is injected into a process to override library calls. In particular, it overrides DNS resolution and TCP connections and routes them via a SOCKS proxy to the cluster. The SOCKS proxy runs in the Kubernetes pod and uses Tor’s extended SOCKSv5 protocol, which adds support for DNS lookups. kubectl port-forward creates a tunnel to this SOCKS proxy.
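
Used explicitly, it looks like the sketch below (again with a placeholder service). One caveat worth knowing: since LD_PRELOAD only affects dynamically linked executables, inject-tcp cannot intercept statically linked binaries, such as typical Go programs.

$ telepresence --method inject-tcp --run curl http://myservice:8000/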

Let's get started

Let’s start off by installing Telepresence.

On macOS, you can install it using Homebrew with the following commands:

$ brew cask install osxfuse
$ brew install datawire/blackbird/telepresence

On Debian-based Linux distributions, use the shell script as follows:

$ curl -s https://packagecloud.io/install/repositories/datawireio/telepresence/script.deb.sh | sudo bash
$ sudo apt install --no-install-recommends telepresence

For any other distribution, or for updated steps, refer to the official installation instructions.

Testing your local container in the cluster

Now that we have installed Telepresence, we’ll try to run a container locally and use it in place of an existing pod in a remote Kubernetes cluster via the two-way proxy. Any service that tries to contact the existing pod will have its traffic redirected to the local container, and similarly, any communication that originates from the local container and is destined for other services will be proxied into the cluster. To do this, you just need to build a container image and run the following command:

$ telepresence --swap-deployment foo --docker-run --rm -v $(pwd):/app -p 8080:8080 myimage:tag python app.py

Now let’s break down this command to understand what it does

  • --swap-deployment foo : Assumes we already have a foo deployment running in our cluster, which we will temporarily replace
  • --docker-run : Tells Telepresence to run a Docker container
  • --rm : Tells Docker to discard the container when it terminates, just to keep your system clean
  • -v $(pwd):/app : Mounts the current directory at /app inside the Docker container. This is the same way we mount a volume when running any container.
  • -p 8080:8080 : Maps port 8080 of the container to port 8080 of localhost. You can use this mapping if you need to make requests to your service directly.
  • myimage:tag : The image we will use to run our service.
  • python app.py : This is the command that will run inside the container. This (and all the other flags) can be changed according to your application’s requirements; a minimal sketch of such an app follows this list.
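
To make the example concrete, here is a minimal, hypothetical app.py that would fit the command above: a plain Python HTTP server listening on port 8080 to match the -p 8080:8080 mapping. Your real service would replace this.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any pod or service contacting the swapped deployment lands here
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from my local container!\n")

# Port 8080 matches the -p 8080:8080 mapping in the telepresence command
HTTPServer(("", 8080), Handler).serve_forever()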

For more details on connecting using a Kubernetes client library and a service account in different languages, refer here.

Testing your local process inside the cluster

It’s not just containers: you can also test a simple local process, like an HTTP server, inside a cluster. Let’s take a look at how.

First, start by creating a helloworld.py Python script that runs an HTTP server serving a simple ‘Hello World!’ on port 8080. You may see the code here. Once you have the script ready, create a deployment named hello-world and a service exposing it on port 8080 using the following command:
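
In case the link is inconvenient, a minimal sketch of such a script, assuming nothing beyond the description above (an HTTP server answering ‘Hello World!’ on port 8080), could look like this:

from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a plain-text greeting to every GET request
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello World!\n")

# Port 8080 matches the port we will expose through Telepresence
HTTPServer(("", 8080), HelloHandler).serve_forever()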

$ telepresence --new-deployment hello-world --expose 8080

Now in the resulting shell, start your HTTP server like this

$ python3 helloworld.py

Now any component of the cluster that tries to access the hello-world service will reach your local HTTP server. Moreover, you can kill the server, update its code, and spin it back up, and the changes will be reflected in the service immediately.
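
One quick way to verify this from inside the cluster is to run a throwaway pod against the service (a sketch assuming the hello-world service created above; busybox is just a convenient image that ships wget):

$ kubectl run test --rm -it --restart=Never --image=busybox -- wget -qO- http://hello-world:8080/
Hello World!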

Contacting services locally

Telepresence can also proxy one-off local commands into the cluster. In this scenario, we will try to access a service running in the remote cluster from our local machine.

Note that this service does not have an external IP assigned to it, and yet we are able to access it from an external source.

Create a deployment and expose it using a service called myservice, say on port 8000.
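
One hypothetical way to set this pair up (datawire/hello-world is an image from the Telepresence tutorials that listens on port 8000; any image serving on that port would do):

$ kubectl create deployment myservice --image=datawire/hello-world
$ kubectl expose deployment myservice --port=8000

Once the pod and the service are up, try to access it using the following command.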

$ telepresence --run curl http://myservice:8000/

Here, Telepresence creates a new deployment, which runs a proxy. It then runs curl locally in a way that proxies networking through that deployment. Once curl exits, the deployment is cleaned up.

More to try your hands on

You can work with Telepresence using many other client libraries and languages, and even with other kinds of clusters like OpenShift and minikube. A good place to start is this demo. Go on and test the speed at which you can develop now.
