This post is part of the “Kubernetes Primer for Security Professionals” series, and covers multiple deployment options for a Kubernetes lab, ranging from more lightweight ones (like running Kubernetes locally) to more realistic ones (like deploying a multi-node cluster) suitable for security research.


Option 1 - Run Kubernetes Locally

Especially at the beginning, while you are getting accustomed to Kubernetes, a local installation will probably be enough. Here we are going to discuss the main alternatives currently available.

Minikube vs Docker for Mac

Minikube is a tool that makes it easy to run Kubernetes locally: it runs a local, single-node Kubernetes cluster inside a VM for users looking to try Kubernetes or develop with it day-to-day. In addition, it is supported on all three major desktop operating systems (macOS, Linux, and Windows).

At the same time, if you run macOS you are in luck, because Kubernetes now comes bundled with Docker for Mac. So which one is the better option?

The Codefresh team wrote an article describing the pros and cons of the two products. To summarize it:

  • Minikube is a mature solution available for all major operating systems. Its main advantage is that it provides a unified way of working with a local Kubernetes cluster regardless of the operating system. It is perfect for people who work across machines with different operating systems and who have some basic familiarity with Kubernetes and Docker.
  • Docker for Mac is a very user-friendly solution with good integration with the macOS UI, but it comes with limited configuration options.

Between the two, I ended up using Minikube for my first approach towards Kubernetes.

Set Up Minikube

First of all, see “Installing Minikube” in the official Kubernetes documentation for instructions on how to obtain the latest release. Once installed successfully, you can start Minikube by typing minikube start:

❯ minikube start
😄  minikube v1.0.0 on darwin (amd64)
🤹  Downloading Kubernetes v1.14.0 images in the background ...
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
💿  Downloading Minikube ISO ...
 142.88 MB / 142.88 MB [============================================] 100.00% 0s
📶  "minikube" IP address is 192.168.0.10
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.2-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
💾  Downloading kubelet v1.14.0
💾  Downloading kubeadm v1.14.0
🚜  Pulling images required by Kubernetes v1.14.0 ...
🚀  Launching Kubernetes v1.14.0 using kubeadm ...
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

The setup process creates a new virtual machine (VirtualBox-based, in my case), pulls all the needed images, and then creates a new Kubernetes context called “minikube”.
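At any point, you can check the state of the VM and of the main cluster components with minikube status (the exact output below is trimmed and varies by Minikube version):

❯ minikube status
host: Running
kubelet: Running
apiserver: Running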

If you are working with multiple Kubernetes clusters and different environments, you should be familiar with the concept of switching contexts. You can view the available contexts with kubectl config get-contexts:

❯ kubectl config get-contexts
CURRENT   NAME                 CLUSTER                      AUTHINFO             NAMESPACE
          docker-for-desktop   docker-for-desktop-cluster   docker-for-desktop
*         minikube             minikube                     minikube
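If needed, you can switch the active context with kubectl config use-context:

❯ kubectl config use-context minikube
Switched to context "minikube".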

Once you’ve ensured “minikube” is the default context, let’s obtain the cluster information with kubectl as a smoke test:

❯ kubectl cluster-info
Kubernetes master is running at https://192.168.99.101:8443
KubeDNS is running at https://192.168.99.101:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

You can also use Minikube to access the Kubernetes dashboard:

❯ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:52012/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser...
[Screenshot: the Kubernetes Dashboard]

Hello World with Minikube

To complete the example, we can follow the “Hello Minikube” tutorial from the official documentation to run a simple Hello World Node.js app on Kubernetes using Minikube:

❯ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
deployment.apps/hello-node created

❯ kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           2m43s

❯ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed

❯ kubectl get services
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.111.188.237   <pending>     8080:30567/TCP   4s
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          21m

❯ minikube service hello-node
🎉  Opening kubernetes service default/hello-node in default browser...
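Once you are done experimenting, you can clean everything up:

# Remove the service and the deployment
❯ kubectl delete service hello-node
❯ kubectl delete deployment hello-node

# Stop the local cluster (or use `minikube delete` to remove the VM entirely)
❯ minikube stop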

Option 2 - Deploy a Deliberately Vulnerable Cluster

Minikube and Docker for Mac work fine if you just want to try Kubernetes out, or for developers who have to work with it daily (e.g., by having local environments for quick development sprints).

However, if you are doing security research, you will need access to configuration and security settings of the Kubernetes deployment itself.

Run Kubernetes on a Vagrant VM

Liz Rice summarized this in her “Kubernetes in Vagrant with kubeadm” blog post:

[…] However, a lot of what I’m doing at the moment relates to security settings that you might configure on your Kubernetes cluster. For example, if I’m working on kube-bench a lot of the tests look at the parameters passed to the API Server executable. Neither Minikube nor Docker for Mac use standard installation tools like kubeadm or kops that you might use for a production cluster, so for my work I was looking for ways to tweak parameters in ways that are not the same as on a regular production server.

[…] I decided that I’d be better off running exactly the same code that a Kubernetes user might run on a production cluster. And I couldn’t see any reason not to try running that in a regular Linux VM on my local machine.

She ended up providing a fully annotated Vagrantfile to reproduce her setup. This is definitely worth a try if you need a quick (and still local) deployment to use for demos, talks, etc.
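If you want to give it a go, the workflow is the standard Vagrant one (assuming you saved her Vagrantfile in the current directory, and that it defines a single machine):

# Create and provision the VM described by the Vagrantfile
❯ vagrant up

# SSH into the VM once provisioning completes
❯ vagrant ssh

# Tear everything down when done
❯ vagrant destroy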

Run Insecure Configurations with Kind

Kind is a tool for running local Kubernetes clusters using Docker container “nodes”, bootstrapping them with kubeadm.

What’s interesting here is that Rory McCune put together a collection of kind configuration files recreating some of the common insecure configurations you can see in Kubernetes clusters. The samples Rory put together are no longer compatible with the latest version of Kind, so you can use my fork, which should work out of the box.

# Install kind
❯ go get sigs.k8s.io/kind

# Download sample configs
❯ git clone https://github.com/marco-lancini/kind-of-insecure.git
❯ cd kind-of-insecure

# Create vulnerable cluster
❯ kind create cluster --config insecure-port.yaml --name insecure
Creating cluster "insecure" ...
 ✓ Ensuring node image (kindest/node:v1.15.0) 🖼
 ✓ Preparing nodes 📦📦📦📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="insecure")"
kubectl cluster-info

❯ export KUBECONFIG="$(kind get kubeconfig-path --name="insecure")"
❯ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:57546
KubeDNS is running at https://127.0.0.1:57546/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

❯ kubectl get nodes
NAME                     STATUS   ROLES    AGE   VERSION
insecure-control-plane   Ready    master   75s   v1.15.0
insecure-worker          Ready    <none>   30s   v1.15.0
insecure-worker2         Ready    <none>   28s   v1.15.0
insecure-worker3         Ready    <none>   30s   v1.15.0

# Test for insecure port
❯ curl http://127.0.0.1:8080/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    ...
    "/logs",
    "/metrics",
    "/openapi/v2",
    "/version"
  ]
}
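To give an idea of what these configurations look like, here is an illustrative sketch of how a kind config file can expose the API server’s (deprecated) unauthenticated insecure port. Treat it as an approximation: the actual insecure-port.yaml in the repository may differ in apiVersion and patch format.

# Illustrative kind config: enable and expose the API server insecure port
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta1
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      insecure-port: "8080"            # unauthenticated HTTP port
      insecure-bind-address: "0.0.0.0" # listen on all interfaces
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8080 # map the insecure port to the host,
    hostPort: 8080      # making it reachable at 127.0.0.1:8080
- role: worker
- role: worker
- role: worker

This is also why the curl above succeeds without credentials: requests hitting the insecure port bypass authentication and authorization entirely.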

Option 3 - Deploy a Multi-Node Production Ready Cluster

Kubespray aims to be a one-stop shop for deploying a production-ready Kubernetes cluster: it supports the most popular Linux distributions and can be deployed on basically any existing provider (AWS, GCP, Azure, OpenStack, vSphere, Packet, Oracle Cloud Infrastructure, or bare metal). In addition, it is highly composable, allowing you to select the core components, network plugins, and applications to be deployed.

Set Up Kubespray

To use Kubespray, I will assume you have at least two available hosts, with at least 1.5 GB of memory each (although 2048 MB would be better). For my setup, I created two hosts running Ubuntu Server with statically assigned IP addresses (192.168.1.111 and 192.168.1.112) and with key-based SSH authentication (for which I created a new key pair named k8s_key).

With our hosts up and running, we can use Ansible to quickly deploy our components in an automated fashion. Since I don’t want to run Ansible straight from my host, I created a Docker container (hereinafter called the “ansible_worker”) to play the role of the control machine. This container can be downloaded freely from the GitHub Container Registry (more on this in a moment).
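For reference, such a control-machine image doesn’t need much beyond Python and an SSH client. A minimal sketch of a comparable Dockerfile (illustrative only; the published ansible_worker image may well differ) could be:

# Minimal sketch of an Ansible control-machine image (illustrative only)
FROM python:3-alpine
# SSH client to reach the nodes; build dependencies for the Python
# packages that Kubespray's requirements.txt pulls in later
RUN apk add --no-cache openssh-client git gcc musl-dev libffi-dev openssl-dev make
WORKDIR /src
CMD ["/bin/sh"]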

Here is the full process to deploy our Kubernetes lab:

  • First, let’s clone the Kubespray repository:
❯ git clone https://github.com/kubernetes-sigs/kubespray.git
❯ cd kubespray
  • Then, we can pull the ansible_worker image from the GitHub Container Registry, mounting both the kubespray and the ~/.ssh/ folders as volumes:
❯ docker run -ti --rm -v $(pwd):/kubespray -v ~/.ssh/:/root/.ssh/ ghcr.io/marco-lancini/ansible-worker:latest
  • From within the ansible_worker container, add the identity we configured for the key-based SSH authentication:
/src $ cd /kubespray/
/kubespray $ eval "$(ssh-agent)"
/kubespray $ ssh-add ~/.ssh/k8s_key
Identity added: /root/.ssh/k8s_key (/root/.ssh/k8s_key)
  • Install the kubespray dependencies and create a new inventory starting from the sample:
# Install dependencies from requirements.txt
/kubespray $ pip3 install -r requirements.txt
# Copy inventory/sample as inventory/mycluster
/kubespray $ cp -r inventory/sample inventory/mycluster
  • Update the Ansible inventory file inventory/mycluster/inventory.ini to reflect your setup. Here is mine with just 2 hosts:
[all]
node1 ansible_host=192.168.1.111
node2 ansible_host=192.168.1.112

[kube-master]
node1

[etcd]
node1

[kube-node]
node2

[k8s-cluster:children]
kube-master
kube-node
  • Review and change the parameters in the inventory/mycluster/group_vars folder, specifically in inventory/mycluster/group_vars/all/all.yml and inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml.
  • It can be useful to set the following two variables to true in inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml: kubeconfig_localhost (to make a copy of the kubeconfig on the host that runs Ansible, in {{ inventory_dir }}/artifacts) and kubectl_localhost (to download kubectl onto the host that runs Ansible, in {{ bin_dir }}):
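# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kubeconfig_localhost: true
kubectl_localhost: true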

  • Finally, we can deploy Kubernetes by running the cluster.yml playbook (here vagrant is my dummy user on the 2 hosts):
/kubespray $ ansible-playbook -b -v --become-user=root -i inventory/mycluster/inventory.ini -u vagrant --private-key=~/.ssh/k8s_key cluster.yml

PLAY [localhost] **************************************************************************************************************************************************************

TASK [Check ansible version >=2.7.6] ******************************************************************************************************************************************
Monday 15 April 2019  15:07:04 +0000 (0:00:00.156)       0:00:00.156 **********
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}
 [WARNING]: Could not match supplied host pattern, ignoring: bastion


...[omitted for brevity]...


PLAY RECAP ********************************************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0
node1                      : ok=367  changed=114  unreachable=0    failed=0
node2                      : ok=274  changed=81   unreachable=0    failed=0

Monday 15 April 2019  15:17:00 +0000 (0:00:00.042)       0:10:01.818 **********
===============================================================================
bootstrap-os : Install python and pip --------------------------------------------------------------------------------------------------------------------------------- 59.41s
download : file_download | Download item ------------------------------------------------------------------------------------------------------------------------------ 32.60s
kubernetes/master : kubeadm | Initialize first master ----------------------------------------------------------------------------------------------------------------- 27.96s
download : container_download | download images for kubeadm config images --------------------------------------------------------------------------------------------- 27.47s
kubernetes/preinstall : Install packages requirements ----------------------------------------------------------------------------------------------------------------- 26.87s
container-engine/docker : ensure docker packages are installed -------------------------------------------------------------------------------------------------------- 21.75s
download : file_download | Download item ------------------------------------------------------------------------------------------------------------------------------ 17.63s
kubernetes/kubeadm : Join to cluster ---------------------------------------------------------------------------------------------------------------------------------- 16.57s
container-engine/docker : Docker | pause while Docker restarts -------------------------------------------------------------------------------------------------------- 10.24s
etcd : wait for etcd up ------------------------------------------------------------------------------------------------------------------------------------------------ 8.30s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------------------------------- 8.28s
download : file_download | Download item ------------------------------------------------------------------------------------------------------------------------------- 8.12s
download : file_download | Download item ------------------------------------------------------------------------------------------------------------------------------- 8.07s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------------------------------- 8.05s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------------------------------- 7.75s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------------------------------- 7.69s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------------------------------- 6.53s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------------------------------------------------------------------------------------------------------ 6.28s
download : container_download | Download containers if pull is required or told to always pull (all nodes) ------------------------------------------------------------- 6.20s
etcd : Configure | Check if etcd cluster is healthy -------------------------------------------------------------------------------------------------------------------- 5.61s

Interact with the Cluster

Having set kubeconfig_localhost: true means that Ansible automatically makes a copy of the kubeconfig file on the host that runs Ansible (the ansible_worker). Since we are sharing the working folder as a Docker volume, we can then access the same kubeconfig from our host (on which I assume you’ll have already installed kubectl).

Let’s perform a smoke test to ensure everything works as expected (notice how we are manually specifying the --kubeconfig location so that kubectl knows how to access our new cluster):

❯ kubectl --kubeconfig=./inventory/mycluster/artifacts/admin.conf cluster-info
Kubernetes master is running at https://192.168.1.111:6443
coredns is running at https://192.168.1.111:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://192.168.1.111:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

❯ kubectl --kubeconfig=./inventory/mycluster/artifacts/admin.conf get nodes
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   8m      v1.13.5
node2   Ready    <none>   7m25s   v1.13.5

❯ kubectl --kubeconfig=./inventory/mycluster/artifacts/admin.conf proxy
Starting to serve on 127.0.0.1:8001

# To access the dashboard browse to:
#    http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

To avoid specifying the --kubeconfig location for every single command, you can copy admin.conf to ~/.kube/config. If you’ve never heard of the kubeconfig file, I’d recommend reading the “Configure Access to Multiple Clusters” article from the Kubernetes docs.
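For example (note this overwrites any existing kubeconfig, so back it up first if you have one):

❯ cp ./inventory/mycluster/artifacts/admin.conf ~/.kube/config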

Hello World with Kubespray

To complete the example, let’s run through the “Hello Minikube” tutorial again:

❯ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
deployment.apps/hello-node created

❯ kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           50s

❯ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed

❯ kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.233.35.126   <pending>     8080:30723/TCP   40s
kubernetes   ClusterIP      10.233.0.1      <none>        443/TCP          103m

# Expose the service locally
❯ kubectl port-forward svc/hello-node 30723:8080
Forwarding from [::1]:30723 -> 8080
Forwarding from 127.0.0.1:30723 -> 8080
Handling connection for 30723
Handling connection for 30723
[Screenshot: the hello-node service exposed to localhost]
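With the port-forward running, you can verify from another terminal that the app responds (the tutorial’s sample app simply replies with a greeting):

❯ curl http://127.0.0.1:30723/
Hello World!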

Additional documentation on the usage of Kubespray can be found on its GitHub page.


Option 4 - Deploy to Cloud

Although I haven’t explored this avenue yet (I didn’t want to worry about the ongoing costs of a cloud lab), I felt I had to mention at least one solution for the sake of completeness.

The quickest (and most configurable) way to deploy to the cloud is “Kubernetes the Easy Way”, which bootstraps Kubernetes on Google Cloud Platform. Otherwise, Kubespray itself can deploy to AWS, GCP, or Azure.


Option 5 - Deploy on Baremetal

In “Kubernetes Lab on Baremetal”, I described my personal approach to deploying my own Kubernetes lab on bare metal, specifically on an Intel NUC.

The setup described in that post has been automated as part of k8s-lab-plz, a modular Kubernetes Lab which provides an easy and streamlined way to deploy a test cluster with support for different components. You can read more about it at: Introducing k8s-lab-plz: A modular Kubernetes Lab.

In particular, you can refer to the Baremetal Setup page of the documentation for specific instructions.


Bonus

I’ve also come across some other interesting installation methods that might come in handy at some point:

  • k8s-lab-plz: A modular Kubernetes lab which provides an easy and streamlined way to deploy a test cluster with support for different components.
  • k3s: A lightweight Kubernetes, claimed to be easy to install, with half the memory footprint, all in a binary of less than 40 MB. Great for edge, IoT, CI, and ARM.
  • kind-of-insecure: A collection of kind configuration files that can be used to create deliberately vulnerable clusters, for the purposes of security testing/training.
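As a taste of how lightweight k3s is, its documented one-line installer gets you a running single-node cluster:

# Install and start k3s (requires root)
❯ curl -sfL https://get.k3s.io | sh -

# Verify the node is up
❯ sudo k3s kubectl get node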

Conclusion

In this post, part of the “Kubernetes Primer for Security Professionals” series, we explored some deployment options for a custom Kubernetes lab.

We saw how, although Minikube and Docker for Mac work fine for anyone who just wants to try Kubernetes out, or for developers who have to work with it daily, security research requires access to the configuration and security settings of the Kubernetes deployment itself. Here, Kubespray can help by deploying a “production ready” cluster, using Ansible to automate provisioning.

The full Kubespray configuration used in this article, together with a handy cheatsheet, can be found in the related GitHub repository: https://github.com/marco-lancini/offensive-infrastructure/tree/master/kubernetes.

I hope you found this post useful and interesting, and I’m keen to get feedback on it! If you find the information shared in this series useful, if something is missing, or if you have ideas on how to improve it, please leave a comment in the area below, or let me know on Twitter.