An arsenal of Kubernetes tools
Kubernetes 101
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It comes packed with features like automated scheduling, self-healing, automated rollouts and rollbacks, load balancing, and many more.
With more and more developers and enthusiasts adopting microservice architectures and moving towards containerization, the Kubernetes community is growing rapidly. Plenty of developers are working on tools and services that can be plugged into a Kubernetes cluster, making clusters more manageable, robust and smart. In this blog, I will walk you through an arsenal of tools that I have tried my hands on and found tremendously useful. Here are a few of them …
Helm and Tiller: Package manager
Helm and Tiller give you access to a plethora of pluggable packages, and if you are exploring tools for your cluster, you are likely to hear their names very often. Helm is the client half of the package manager, used to install, view and manage the packages (charts) to be installed (released). Tiller is the server half of the package manager, which communicates with the Kubernetes API to manage those charts.
The first step is to install Helm on the local system, through which we will be installing the charts. To do so, follow these steps:
cd /tmp
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod u+x get_helm.sh
./get_helm.sh
Once finished, we will be able to see the client version using the helm version command, but an error will also be printed noting that Tiller is missing. Here comes the second step: establishing a remote connection to Tiller. To do so, run the commands mentioned below.
NOTE: These commands are for setting up Tiller in an RBAC-enabled cluster. For other clusters, follow https://v2.helm.sh/docs/using_helm/#installing-tiller
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
Try executing helm version now.
Now Helm is ready to release charts into your cluster. To dig deeper into how to use Helm, follow the docs. And stay tuned for my deep-dive blog on Helm.
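As a quick, hedged illustration of the workflow (nginx-ingress is just an example chart from the preconfigured stable repo), a typical Helm v2 session looks like this:
helm search nginx                                    # find charts in the configured repos
helm install stable/nginx-ingress --name my-ingress  # ask Tiller to release the chart
helm ls                                              # list the releases Tiller manages
helm delete my-ingress --purge                       # remove the release entirely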
Harbor: Image Registry
Kubernetes works with the concept of containers and hence requires a Container Image Registry to store and manage those container images. My weapon of choice to meet this requirement is Harbor. Harbor is an open-source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted.
helm repo add harbor https://helm.goharbor.io
helm install --name harbor-release harbor/harbor \
  --namespace image-registry \
  --set expose.type=loadBalancer \
  --set expose.tls.commonName=harbor \
  --set registry.registry.image.repository=____________________
In the above command, set the value of registry.registry.image.repository to the repository URL where you wish to store your container images.
Once deployed, the Harbor dashboard can be accessed through the link printed on the CLI and can be logged into using the below-mentioned default credentials:
ID: admin
PASSWORD: Harbor12345
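With those credentials you can also log Docker into the registry and push images. A hedged sketch, where the hostname is a placeholder and library is Harbor's default project:
docker login harbor.example.com -u admin -p Harbor12345
docker tag my-app:1.0 harbor.example.com/library/my-app:1.0
docker push harbor.example.com/library/my-app:1.0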
If you look into the deployed components of Harbor, you will see a component called Clair, yet another spectacle of this arsenal.
Clair
Clair is an open-source project for the static analysis of vulnerabilities in application containers. It ingests vulnerability metadata from a configured set of sources and stores it in a database. The Clair API can then be used to query this database for the vulnerabilities of a particular image.
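Harbor wires Clair in for you, but as a hedged sketch of what a direct query against Clair's v1 API could look like (the host and layer digest are placeholders; 6060 is Clair's default API port):
curl -s "http://clair.example.com:6060/v1/layers/<layer-digest>?features&vulnerabilities" \
  | jq '.Layer.Features[] | select(.Vulnerabilities)'   # only features with known CVEs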
Fluentd: Logging
Logging in Kubernetes is a vast topic on its own. To fully understand the multiple ways to configure logging in a cluster at different levels (node and cluster), Kubernetes has a well-defined documentation page.
As shown in the documentation, you will need a logging agent: a dedicated tool that exposes or pushes logs to a backend. Fluentd acts as the logging agent by being deployed as a DaemonSet, which runs a copy of its pod on each node and pushes the logs from that node to a backend.
Fluentd can be configured with many backends and dashboards, depending on your requirements. One widely popular backend, Elasticsearch, can be deployed with Fluentd in a cluster as shown in this documentation.
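As a hedged starting point, the fluent/fluentd-kubernetes-daemonset project ships ready-made manifests; the sketch below assumes its Elasticsearch variant and an Elasticsearch backend you already run (edit the FLUENT_ELASTICSEARCH_* environment variables in the manifest to match it):
kubectl apply -f https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch.yaml
kubectl get pods -n kube-system -o wide | grep fluentd   # expect one pod per node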
Istio: Service Mesh
As the cluster grows bigger, there are many resources to manage. At some point it becomes difficult to keep track of all the services, and you need a tool to manage them. Istio lets you connect, secure, control, and observe these services. It is a completely open-source service mesh that layers transparently onto existing distributed applications. It is also a platform, with APIs that let it integrate into any logging platform, telemetry or policy system. Here are the steps that you can follow to deploy Istio onto your cluster:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.0 sh -
export PATH="$PATH:/root/istio-1.0.0/bin"
cd /root/istio-1.0.0
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system
To install Istio and enforce mutual TLS authentication by default, use istio-demo-auth.yaml as follows:
kubectl apply -f install/kubernetes/istio-demo-auth.yaml
This will deploy Pilot, Mixer, an Ingress Controller, an Egress Controller, and the Istio CA (Certificate Authority):
Pilot — Responsible for configuring the Envoy and Mixer at runtime.
Proxy / Envoy — Sidecar proxies per microservice that handle ingress/egress traffic between services in the cluster and from a service to external services (see the injection sketch after this list).
Mixer — Creates a portability layer on top of infrastructure backends. Enforces policies such as ACLs, rate limits, quotas, authentication, request tracing and telemetry collection at an infrastructure level.
Citadel / Istio CA — Secures service-to-service communication over TLS, providing a key management system to automate key and certificate generation, distribution, rotation, and revocation.
Ingress/Egress — Configure path-based routing for inbound and outbound external traffic.
Control Plane API — The underlying orchestrator, such as Kubernetes or HashiCorp Nomad.
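As promised in the Envoy item above, here is a hedged sketch of how those sidecars typically get attached to workloads; automatic injection is enabled per namespace (default is just an example):
kubectl label namespace default istio-injection=enabled   # mutating webhook now injects sidecars
kubectl get pods -n default                               # new pods show 2/2 containers (app + istio-proxy)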
Once deployed, three endpoints are created, namely Grafana, Jaeger, and Service Graph.
The Grafana dashboard can be accessed at port 3000. It shows the total number of requests currently being processed, along with the number of errors and the response time of each call.
The Jaeger dashboard can be accessed at port 16686. Jaeger provides tracing information for each HTTP request, showing which calls are made and where the time was spent within each request.
The Service Graph can be accessed at port 8088/dotviz. As a system grows, it can be hard to visualise the dependencies between services; the Service Graph draws a dependency tree of how the system connects.
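These dashboards are not exposed outside the cluster by default. A hedged way to reach them locally, assuming the service names deployed by the 1.0 demo manifest (verify with kubectl get svc -n istio-system):
kubectl -n istio-system port-forward svc/grafana 3000:3000 &
kubectl -n istio-system port-forward svc/jaeger-query 16686:16686 &
kubectl -n istio-system port-forward svc/servicegraph 8088:8088 &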
Prometheus + Grafana: Interactive monitoring and visualization
Kubernetes comes with self-healing capabilities and can handle quite a few incidents on its own. However, it is still crucial to have a monitoring and alerting system integrated into your cluster. My tool of choice for these tasks is Prometheus, an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
Prometheus uses a multi-dimensional time-series database which can be queried using PromQL. Furthermore, for even better visualization of this data, Prometheus can be paired with API consumers like Grafana, yet another open-source solution for analytics and monitoring. It enables you to create custom dashboards with a cool contemporary look, courtesy of the default dark skin.
To summarize this architecture, Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts.
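To make the querying side concrete, here is a hedged PromQL sample issued through Prometheus' HTTP API, assuming the port-forward to localhost:9090 from the next step and the cAdvisor metrics that the Operator stack scrapes by default:
# Per-pod CPU usage over the last 5 minutes
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)' | jq .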
Getting started with Prometheus:
helm install stable/prometheus-operator --name prometheus-operator --namespace monitoring
kubectl port-forward -n monitoring prometheus-prometheus-operator-prometheus-0 9090
kubectl port-forward $(kubectl get pods --selector=app=grafana -n monitoring --output=jsonpath="{.items..metadata.name}") -n monitoring 3000
Once deployed, the Prometheus and Grafana dashboards can be accessed at localhost:9090 and localhost:3000 respectively. Use the below-mentioned default credentials whenever prompted:
ID: admin
PASSWORD: prom-operator
Web UI Dashboard: Honorable mentions
My preference has always been the classic black terminal over a fancy UI, but for presentation and management, a dashboard can be pretty handy at times.
Before moving to the Dashboard, let us first understand metrics-server.
The Kubernetes metrics-server is an aggregator of resource usage data in your cluster. It is responsible for collecting resource metrics from kubelets and exposing them to the Kubernetes API server through the Metrics API.
DOWNLOAD_URL=$(curl --silent "https://api.github.com/repos/kubernetes-sigs/metrics-server/releases/latest" | jq -r .tarball_url)
DOWNLOAD_VERSION=$(grep -o '[^/v]*$' <<< $DOWNLOAD_URL)
curl -Ls $DOWNLOAD_URL -o metrics-server-$DOWNLOAD_VERSION.tar.gz
mkdir metrics-server-$DOWNLOAD_VERSION
tar -xzf metrics-server-$DOWNLOAD_VERSION.tar.gz --directory metrics-server-$DOWNLOAD_VERSION --strip-components 1
kubectl apply -f metrics-server-$DOWNLOAD_VERSION/deploy/1.8+/
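Once the metrics-server pods are up, a quick sanity check that the Metrics API is actually being served:
kubectl get apiservice v1beta1.metrics.k8s.io   # should eventually report Available=True
kubectl top nodes                               # CPU/memory usage per node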
Once metrics-server is configured, your Dashboard is ready to come up and consume it:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
Now we need to create a service account. A hedged sketch of the YAML (the eks-admin name and cluster-admin binding are inferred from the token command further down) is below:
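# Hedged reconstruction: a ServiceAccount named eks-admin bound to the
# cluster-admin ClusterRole (the name matches the grep in the token step below)
cat <<'EOF' > admin-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
EOF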
kubectl apply -f admin-service-account.yaml
Now we generate the token to log into the Dashboard:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Once we have the token, just run
kubectl proxy
and open the Dashboard through the prompted proxy address, using the token to log in.
Parting note
There is no limit to the creativity and potential of the Kubernetes community, and the same goes for this list. There are still plenty of amazing tools like Tekton, Flux, Open Policy Agent, Envoy, Telepresence, Thanos and many more, which I'll be writing about soon in my upcoming blogs.
And for the explorers out there wanting more, your wonderland lies at the CNCF Landscape: the CNCF's attempt to categorize most of the projects and product offerings in the cloud-native space. Go check it out!