Reading time ~20 minutes
Kubernetes Lab on Baremetal
- The Hardware
- Install CoreOS
- Install Kubernetes
- Ingress Controllers and LoadBalancing on Baremetal
- Volumes and Stateful Deployments
- Automate the Setup
- Remotely Access the Cluster
- Conclusions
In “Deploy Your Own Kubernetes Lab” I covered multiple deployment options for a Kubernetes lab, ranging from more lightweight (like running Kubernetes locally) to more realistic ones (like deploying a multi-node cluster) suitable for security research.
In this blog post, I’m going to detail the steps I took to deploy my own Kubernetes Lab on baremetal, and on an Intel NUC in particular.
The Hardware
I was looking for a self-contained option, which - most importantly - didn’t take up much space, so I ended up settling on an Intel NUC, starting with 250GB of storage and 32GB of RAM.
It might be worth noting that, for the initial setup phase, it is also useful to have a small keyboard (like this one) and a monitor (a 7-inch one is just fine) around.
At a high level, my home network diagram looks like the one below:

Install CoreOS
As the title of this post implies, the aim was to have a Kubernetes cluster running directly on baremetal, hence deciding which operating system to rely on was almost straightforward: Fedora CoreOS (FCOS) is a minimal operating system specifically designed for running containerized workloads securely and at scale.
Let’s see how to get it running on the Intel NUC.
Prepare a Bootable USB
The first step in the installation process involves burning a Fedora CoreOS ISO onto a bootable USB stick. The latest stable version of the ISO for baremetal installations can be found directly on the Fedora website (33.20210301.3.1 at the time of writing).
From there, it is simply a matter of burning the ISO, which, on macOS, can be done using tools like Etcher. Once launched, select the CoreOS ISO and the USB device to use, and Etcher will take care of creating a bootable USB from it.

Prepare an Ignition Config
For those new to FCOS (me included before creating this lab), it might be worth explaining what an Ignition file actually is. An Ignition file specifies the configuration for provisioning FCOS instances: the process begins with a YAML configuration file, which gets translated by the FCOS Configuration Transpiler (fcct) into a machine-friendly JSON, which is the final configuration file for Ignition. FCOS ingests the Ignition file only on first boot, applying the whole configuration or failing to boot in case of errors.
The Fedora documentation proved to be excellent in detailing how to create a basic Ignition file that modifies the default FCOS user (named core) to allow logins with an SSH key.
First, on your workstation, create a file (named config.fcc) with the following content, and make sure to replace the line starting with ssh-rsa with the contents of your SSH public key file:
➜ cat config.fcc
variant: fcos
version: 1.3.0
passwd:
  users:
    - name: core
      groups:
        - docker
        - wheel
        - sudo
      ssh_authorized_keys:
        - ssh-rsa AAAA...
In the config above, we are basically telling FCOS to add the default user named core to three additional groups (docker, wheel, and sudo), as well as to allow key-based authentication with the public SSH key specified in the ssh_authorized_keys section. The public key will be provisioned to the FCOS machine via Ignition, whereas the private counterpart needs to be available to your user on your local workstation, in order to remotely authenticate over SSH.
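If you don't already have a keypair to dedicate to the lab, generating one is quick; a minimal sketch (the key path and comment below are arbitrary choices, not requirements):
# Generate a dedicated keypair for the lab (hypothetical path/comment)
➜ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_fcos -C "fcos-lab"
# The public key to paste into config.fcc:
➜ cat ~/.ssh/id_rsa_fcos.pub
ssh-rsa AAAA...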
Next, we need to use fcct, the Fedora CoreOS Config Transpiler, to produce a JSON Ignition file from the YAML FCC file. An easy way to use fcct is to run it in a container:
➜ docker run --rm -i quay.io/coreos/fcct:release --pretty --strict < config.fcc > config.ign
➜ cat config.ign
{
  "ignition": {
    "version": "3.2.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "docker",
          "wheel",
          "sudo"
        ],
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAA..."
        ]
      }
    ]
  }
}
Since this config.ign will be needed to boot FCOS, we need to make it temporarily available to devices on the local network. There are multiple ways to accomplish this: I opted to quickly spin up updog (a replacement for Python's SimpleHTTPServer):

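If you'd rather not install updog, any static file server will do; as a minimal sketch, Python's built-in http.server run from the directory containing config.ign (the directory name is hypothetical, and serving on port 80 typically requires sudo):
➜ cd ~/fcos-lab            # hypothetical directory containing config.ign
➜ sudo python3 -m http.server 80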
Install from Live USB
With the Ignition config ready, plug the USB stick into the Intel NUC, turn it on, and make sure to select that media as the preferred boot option. If the ISO has been burnt correctly, you should end up in a shell as the core user.
The actual installation can be accomplished in a quite straightforward way with coreos-installer:
$ sudo coreos-installer install /dev/sda \
--insecure-ignition --ignition-url http://192.168.1.150/config.ign
The command above instructs coreos-installer to use the Ignition config we are making available to the local network from our workstation (192.168.1.150 in my case). The --insecure-ignition flag is needed if the Ignition file is served over plaintext HTTP rather than TLS.
After a reboot of the Intel NUC, you should be able to SSH into it from your workstation:
❯ ssh core@192.168.1.151
Fedora CoreOS 33.20210217.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/c/server/coreos/
[core@192 ~]$ id
uid=1000(core) gid=1000(core) groups=1000(core),4(adm),10(wheel),16(sudo),190(systemd-journal)
And that’s it! FCOS is now up and running. Next step is installing Kubernetes on it.
Install Kubernetes
The installation process for Kubernetes is a bit more lengthy, and can be broken up into a few sections: installation of dependencies, installation of the cluster, and network setup.
Install Dependencies
While looking around (i.e., Googling) for the most effective way to deploy a vanilla Kubernetes on FCOS I came across a really detailed article from Matthias Preu (Fedora CoreOS - Basic Kubernetes Setup) describing exactly this process. Note that the remainder of this sub-section has been based heavily on Matthias’ setup, and you should refer to his blog post for a detailed explanation of each installation step.
First, setup CRI-O as the container runtime:
# Activating Fedora module repositories
$ sed -i -z s/enabled=0/enabled=1/ /etc/yum.repos.d/fedora-modular.repo
$ sed -i -z s/enabled=0/enabled=1/ /etc/yum.repos.d/fedora-updates-modular.repo
$ sed -i -z s/enabled=0/enabled=1/ /etc/yum.repos.d/fedora-updates-testing-modular.repo
# Setting up the CRI-O module
$ mkdir /etc/dnf/modules.d
$ cat <<EOF > /etc/dnf/modules.d/cri-o.module
[cri-o]
name=cri-o
stream=1.17
profiles=
state=enabled
EOF
# Installing CRI-O
$ rpm-ostree install cri-o
$ systemctl reboot
$ modprobe overlay && modprobe br_netfilter
$ cat <<EOF > /etc/modules-load.d/crio-net.conf
overlay
br_netfilter
EOF
$ cat <<EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sysctl --system
$ sed -i -z s+/usr/share/containers/oci/hooks.d+/etc/containers/oci/hooks.d+ /etc/crio/crio.conf
Next, install all the tooling required to manage the cluster (kubeadm, kubelet and kubectl):
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
$ rpm-ostree install kubelet kubeadm kubectl
$ systemctl reboot
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
$ systemctl enable --now cri-o && systemctl enable --now kubelet
$ echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" | tee /etc/sysconfig/kubelet
Install the Cluster
Before starting the installation of the cluster itself, a custom cluster configuration needs to be created:
[root@cluster core]$ cat <<EOF > clusterconfig.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
controllerManager:
  extraArgs:
    flex-volume-plugin-dir: "/etc/kubernetes/kubelet-plugins/volume/exec"
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
EOF
- kubernetesVersion: the Kubernetes version to deploy (1.20.5 in my case).
- podSubnet: the subnet used to allocate the pods' IP addresses. Pay attention that a 10.244.X.X/X prefix is required by Flannel (the chosen networking solution) when used in conjunction with kubeadm.
With the config ready, we can use kubeadm to install the cluster:
[root@cluster core]$ kubeadm init --config clusterconfig.yml
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cluster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.151]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [cluster localhost] and IPs [192.168.1.151 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [cluster localhost] and IPs [192.168.1.151 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 70.502118 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node cluster as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node cluster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9fcige.wjsr2lub81pr86tc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.151:6443 --token <redacted> \
--discovery-token-ca-cert-hash sha256:<redacted>
As can be seen from the output of kubeadm itself, we can now grant the core user (or any local user, actually) access to the cluster by copying the kubeconfig file into its .kube directory:
[core@cluster ~]$ mkdir -p $HOME/.kube
[core@cluster ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[core@cluster ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
The same config can also be copied (e.g., via scp) onto your workstation, so you can interact with the cluster remotely without having to SSH into the NUC (a sketch of the copy step is shown after the output below):
➜ kubectx cluster # alias for the cluster
➜ k cluster-info
+ kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.151:6443
KubeDNS is running at https://192.168.1.151:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
➜ kg nodes -o wide
+ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cluster Ready control-plane,master 60m v1.20.5 192.168.1.151 <none> Fedora CoreOS 33.20210301.3.1 5.10.19-200.fc33.x86_64 cri-o://1.19.1
From the output above you can see how the control plane is reachable at the NUC's local IP address (192.168.1.151 in my case).
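As a rough sketch of that copy step (the destination path is an arbitrary choice; you may prefer to merge the file into an existing ~/.kube/config instead):
# Copy the admin kubeconfig from the NUC to the workstation (paths are assumptions)
➜ scp core@192.168.1.151:/home/core/.kube/config ~/.kube/config-nuc
# Point kubectl at it for the current shell session
➜ export KUBECONFIG=~/.kube/config-nuc
➜ kubectl cluster-info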
Network Setup
Although it might seem everything is set up, there are still a couple of steps missing.
First, since I only have one node available, it is necessary to allow the master node itself to schedule pods. This is done by removing a taint:
➜ k taint nodes --all node-role.kubernetes.io/master-
Second, we need to deploy a networking solution like Flannel:
[core@cluster ~]$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
[core@cluster ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After this, you should have all the necessary components for a basic Kubernetes cluster up and running:
➜ kgpo --all-namespaces
+ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-74ff55c5b-2qdkf 1/1 Running 0 3d17h
kube-system coredns-74ff55c5b-5blfn 1/1 Running 0 3d17h
kube-system etcd-cluster 1/1 Running 0 3d17h
kube-system kube-apiserver-cluster 1/1 Running 0 3d17h
kube-system kube-controller-manager-cluster 1/1 Running 0 3d17h
kube-system kube-flannel-ds-22ltx 1/1 Running 0 3d17h
kube-system kube-proxy-2lbvn 1/1 Running 0 3d17h
kube-system kube-scheduler-cluster 1/1 Running 0 3d17h
Ingress Controllers and LoadBalancing on Baremetal
We now have a fully functional cluster running on baremetal, but at some point you will have to expose some services. This is often accomplished with an NGINX Ingress Controller (source), an Ingress controller which uses NGINX as a reverse proxy and load balancer.
Unlike clusters running in the cloud, where network load balancers are available on-demand and can be configured simply via Kubernetes manifests, baremetal clusters require a slightly different setup to offer the same kind of access to external clients.

Install NGINX Controller
First of all, let’s deploy the NGINX Ingress Controller:
➜ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
After a few seconds, we should be able to see that the Ingress Controller pods have started in the ingress-nginx namespace:
➜ kubectl get pods -n ingress-nginx \
-l app.kubernetes.io/name=ingress-nginx --watch
+ kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-ppzs6 0/1 Completed 0 48s
ingress-nginx-admission-patch-x86wg 0/1 Completed 1 48s
ingress-nginx-controller-67897c9494-bht6p 1/1 Running 0 48s
Install MetalLB
As per NGINX’s documentation, MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

MetalLB can be installed by applying a couple of manifests:
# Enable Strict ARP mode
➜ kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
# Create namespace
➜ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
namespace/metallb-system created
# Deploy
➜ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
podsecuritypolicy.policy/controller configured
podsecuritypolicy.policy/speaker configured
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller unchanged
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker unchanged
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker unchanged
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
# Create secret (on first install only)
➜ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
secret/memberlist created
This will deploy MetalLB to the cluster, under the metallb-system namespace. The main components are:
- metallb-system/controller (deployment): the cluster-wide controller that handles IP address assignments.
- metallb-system/speaker (daemonset): the component that speaks the protocol(s) needed to make the services reachable.
- memberlist (secret): contains the secretkey used to encrypt the communication between speakers for fast dead node detection.
- Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.
After a few seconds, we can verify the status of the installation:
➜ kgpo -n metallb-system
+ kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-65db86ddc6-zbqjn 1/1 Running 0 90s
speaker-2brq5 1/1 Running 0 90s
Although running, MetalLB's components will remain idle until they are provided with a ConfigMap. In this regard, MetalLB requires a dedicated pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. Bear in mind that this pool of IPs must be dedicated to MetalLB's use: the Kubernetes node IPs, or IPs handed out by a DHCP server, cannot be reused for this purpose.
➜ cat metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.160-192.168.1.190
➜ k apply -f metallb-config.yaml
+ kubectl apply -f metallb-config.yaml
configmap/config created
After creating this ConfigMap (for my setup I chose 192.168.1.160-192.168.1.190 as the reserved addresses), MetalLB will take ownership of the IP addresses in the pool and will update the External IP field of each Service of type LoadBalancer.
Install HAProxy
Finally, the last component we need is the HAProxy Ingress Controller, which can be used to route traffic from outside the cluster to services within the cluster.
As per documentation, we first need to add the HAProxy Ingress’ Helm repository:
➜ helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
"haproxy-ingress" has been added to your repositories
Next, we need to create a haproxy-ingress-values.yaml file with custom parameters and use it during the installation with Helm:
➜ cat haproxy-ingress-values.yaml
controller:
  hostNetwork: true
➜ helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
    --create-namespace --namespace haproxy \
    --version 0.12.1 \
    -f haproxy-ingress-values.yaml
NAME: haproxy-ingress
LAST DEPLOYED: Sat Mar 20 14:56:10 2021
NAMESPACE: haproxy
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
HAProxy Ingress has been installed!
HAProxy is exposed as a `LoadBalancer` type service.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running:
kubectl --namespace haproxy get services haproxy-ingress -o wide -w
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: haproxy
  name: example
  namespace: default
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 8080
        path: /
To verify the successful installation of HAProxy:
➜ kubectl --namespace haproxy get services haproxy-ingress -o wide -w
+ kubectl --namespace haproxy get services haproxy-ingress -o wide -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
haproxy-ingress LoadBalancer 10.100.29.75 192.168.1.160 80:30349/TCP,443:32039/TCP 3m53s app.kubernetes.io/instance=haproxy-helm,app.kubernetes.io/name=haproxy-ingress
As can be seen in the output above, MetalLB updated the External IP of the haproxy-ingress Service (which is of type LoadBalancer), and assigned it one of the reserved IP addresses (192.168.1.160 in this case).

Testing
If you followed along, you should have the following pods currently running in your cluster:
➜ kgpo --all-namespaces
+ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
haproxy haproxy-ingress-54c586f8b8-94mbd 1/1 Running 0 3d4h
ingress-nginx ingress-nginx-admission-create-ppzs6 0/1 Completed 0 3d5h
ingress-nginx ingress-nginx-admission-patch-x86wg 0/1 Completed 1 3d5h
ingress-nginx ingress-nginx-controller-67897c9494-bht6p 1/1 Running 0 3d5h
kube-system coredns-74ff55c5b-2qdkf 1/1 Running 0 3d21h
kube-system coredns-74ff55c5b-5blfn 1/1 Running 0 3d21h
kube-system etcd-cluster 1/1 Running 0 3d21h
kube-system kube-apiserver-cluster 1/1 Running 0 3d21h
kube-system kube-controller-manager-cluster 1/1 Running 0 3d21h
kube-system kube-flannel-ds-22ltx 1/1 Running 0 3d21h
kube-system kube-proxy-2lbvn 1/1 Running 0 3d21h
kube-system kube-scheduler-cluster 1/1 Running 0 3d21h
metallb-system controller-65db86ddc6-zbqjn 1/1 Running 0 3d5h
metallb-system speaker-2brq5 1/1 Running 0 3d5h
Let’s go and try to deploy a Service within the cluster. For this, we can use the “Expose an Application with NGINX Plus Ingress Controller” walkthrough as a starting point:
➜ cat sample-deployment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: bookinfo-ingress
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: product.192.168.1.151.nip.io # IP of the NUC
    http:
      paths:
      - path: /
        backend:
          serviceName: productpage
          servicePort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: test
  labels:
    app: productpage
    service: productpage
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 9080
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  namespace: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  namespace: test
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.15.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
Note how we had to specify the IP address of the Intel NUC as part of the Ingress' host (product.192.168.1.151.nip.io).
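The nip.io suffix makes this work without any local DNS configuration: nip.io is a public wildcard DNS service, so any hostname embedding an IP address resolves to that address. A quick check from the workstation:
➜ dig +short product.192.168.1.151.nip.io
192.168.1.151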
Let’s apply this manifest:
➜ k apply -f sample-deployment.yaml
+ kubectl apply -f sample-deployment.yaml
namespace/test configured
ingress.networking.k8s.io/bookinfo-ingress created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
➜ kgsvc
+ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
productpage LoadBalancer 10.106.62.46 192.168.1.161 80:32225/TCP 4s
➜ kging
+ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
bookinfo-ingress <none> product.192.168.1.151.nip.io 80 22s
We can see how http://product.192.168.1.151.nip.io is getting exposed via the bookinfo-ingress and will be reachable by clients within the local network:

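For a quick test from any machine on the local network, you can fetch the page with curl (the /productpage path is the one served by the bookinfo sample application; expect HTML back):
# Fetch the first few lines of the product page through the HAProxy ingress
➜ curl -s http://product.192.168.1.151.nip.io/productpage | head -n 5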
Volumes and Stateful Deployments
The last thing I wanted to try was the cluster’s compatibility with volumes and stateful deployments. Luckily, it turned out that the standard setup worked out of the box:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: '/mnt/data'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: 'http-server'
      volumeMounts:
        - mountPath: '/usr/share/nginx/html'
          name: task-pv-storage
- The task-pv-volume PersistentVolume: a hostPath volume which uses a directory on the Node (the Intel NUC) to emulate network-attached storage.
- The task-pv-claim PersistentVolumeClaim: used by pods to request physical storage.
- The task-pv-pod Pod: a sample Pod which attaches the task-pv-claim PVC.
Apply the manifest and, after a few moments, the Volume will show as Bound:
+ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 37s
➜ kg pvc
+ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
task-pv-claim Bound task-pv-volume 10Gi RWO manual 33s
From here we can quickly test the setup by creating a text file under the /mnt/data directory of the Intel NUC, and then trying to access it from the test pod:
# Create file on the host
[core@cluster data]$ echo "Hello from Kubernetes storage" > /mnt/data/index.html
# Exec on the pod and validate access
➜ kubectl exec -it task-pv-pod -- /bin/bash
root@task-pv-pod:/# apt update && apt install curl
root@task-pv-pod:/# curl http://localhost/
Hello from Kubernetes storage
Automate the Setup
The setup described in this post has been automated as part of k8s-lab-plz, a modular Kubernetes Lab which provides an easy and streamlined way to deploy a test cluster with support for different components. You can read more about it at: Introducing k8s-lab-plz: A modular Kubernetes Lab.
In particular, you can refer to the Baremetal Setup page of the documentation for specific instructions.
Remotely Access the Cluster
To take the setup a step further, in Remotely Access your Kubernetes Lab with Cloudflare Tunnel I explain how to use Cloudflare Tunnel to connect the Intel NUC to the Cloudflare network, and Auditable Terminal to connect to it remotely using nothing more than a browser.
Conclusions
In this blog post, part of the “Kubernetes Primer for Security Professionals” series, I described the approach I took to deploy my own Kubernetes Lab on baremetal, and on an Intel NUC in particular.
I hope you found this post useful and interesting, and I’m keen to get feedback on it! If you find the information shared was useful, if something is missing, or if you have ideas on how to improve it, please let me know on Twitter.