This week Google introduced GKE Autopilot, described as “a fully managed, hardened Kubernetes cluster out of the box, for true hands-free operations”.

I was curious to take a look at it, so if you don’t have time to play with it, I did it for you.


Setup from Scratch

Here is a step-by-step walkthrough of how to get a GKE Autopilot cluster running, starting from a new (empty) GCP project:

1. Enable the GKE APIs.
2. Go to the GKE Console and create a cluster.
3. Choose Autopilot as the cluster type.
4. Choose a name, a region, and Private cluster as the networking type.
5. In the Networking settings, tick “Access control plane using its external IP address”. I selected this for simplicity, but you can refer to “Creating a private cluster” in the docs for a fully private cluster.
6. After a few minutes (~5), the cluster will be ready.
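
If you prefer the command line over the console, the same cluster can also be created with a couple of gcloud commands. This is just a minimal sketch, using the same name, region, and project as above and skipping the private networking options:

# Enable the GKE API on the project (only needed once per project)
gcloud services enable container.googleapis.com --project testing

# Create an Autopilot cluster (Autopilot clusters are regional)
gcloud container clusters create-auto autopilot-cluster-test \
    --region europe-west1 \
    --project testing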

Inspecting Defaults

With the cluster up and running, let’s start by taking a look at the security defaults.

Shielded nodes and Workload Identity are enabled by default, whereas other controls like Binary Authorization and Google Groups for RBAC are disabled.

Security Defaults.

From the networking point of view, the control plane can be accessed via a public IP address (“Control plane address range”), alongside the pod and service address ranges. It is also worth noting that control plane authorized networks and network policies are disabled by default.

Networking Defaults.
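
For the record, the same security and networking defaults can also be inspected via gcloud. The snippet below is a sketch assuming the cluster created above; the field names come from the GKE API and might differ slightly across versions:

# Dump only the security- and networking-related fields of the cluster
gcloud container clusters describe autopilot-cluster-test \
    --region europe-west1 \
    --format="yaml(shieldedNodes, workloadIdentityConfig, binaryAuthorization, networkPolicy, masterAuthorizedNetworksConfig, privateClusterConfig)"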

At the same time, the main dashboard provides a handy view of the general cluster logs and the Autoscaler logs.

Cluster Logs.
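
If needed, the same logs can also be queried outside the dashboard. As an illustrative example (assuming the Cloud Logging integration is enabled, which should be the default on Autopilot), the most recent cluster-level entries can be fetched with:

# Read the 10 most recent log entries emitted by the GKE cluster resource
gcloud logging read 'resource.type="k8s_cluster" AND resource.labels.cluster_name="autopilot-cluster-test"' \
    --limit 10 \
    --project testing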


Connecting to the Cluster

Two options are available: either from the workloads page in the dashboard or via the command line:

Connecting to the Cluster.

Connecting to the cluster via Cloud Shell turned out to be very quick:

user@cloudshell:~ (testing)$ gcloud container clusters get-credentials autopilot-cluster-test --region europe-west1 --project testing
Fetching cluster endpoint and auth data.
kubeconfig entry generated for autopilot-cluster-test.
user@cloudshell:~ (testing)$ kubectl cluster-info
Kubernetes control plane is running at https://X.X.X.X
GLBCDefaultBackend is running at https://X.X.X.X/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://X.X.X.X/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
KubeDNSUpstream is running at https://X.X.X.X/api/v1/namespaces/kube-system/services/kube-dns-upstream:dns/proxy
Metrics-server is running at https://X.X.X.X/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

user@cloudshell:~ (testing)$ kubectl get nodes
NAME                                                  STATUS   ROLES    AGE     VERSION
gk3-autopilot-cluster-te-default-pool-60aae818-fstx   Ready    <none>   4m35s   v1.18.12-gke.1210
gk3-autopilot-cluster-te-default-pool-f8420c4e-lmv9   Ready    <none>   4m35s   v1.18.12-gke.1210

Creating Workloads

One of the main advantages of Autopilot is that it allows customers to focus on workloads rather than on managing the cluster itself.

This is also reflected in the small number of steps required to deploy a simple “hello world”.

1. Go to the Workloads page.
2. Select a container image and environment variables.
3. Select the application name, namespace, and labels.
4. After a few minutes, the deployment will be ready.
5. Expose the deployment via a Load Balancer.
6. Shortly after, the Service will be available.
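
For reference, the same “hello world” can be reproduced from Cloud Shell with two kubectl commands. This is just a sketch, using a plain nginx image and the resource names from my test:

# Create a deployment running a single nginx container
kubectl create deployment nginx-1 --image=nginx:latest

# Expose it to the internet via an external load balancer
kubectl expose deployment nginx-1 --name=nginx-1-service \
    --type=LoadBalancer --port=80 --target-port=80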

We can also validate the status of the deployment via Cloud Shell:

user@cloudshell:~ (testing)$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
nginx-1-7744c8886d-xmcdx   1/1     Running   0          5m45s

user@cloudshell:~ (testing)$ kubectl get svc
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
kubernetes        ClusterIP      10.75.128.1    <none>         443/TCP        15m
nginx-1-service   LoadBalancer   10.75.128.61   34.77.172.90   80:32161/TCP   2m12s
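
As a final sanity check, the external IP assigned to the service (34.77.172.90 in my test, it will differ in yours) can be queried directly:

# Confirm the nginx welcome page is served on the LoadBalancer IP
curl -I http://34.77.172.90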

Conclusions

This post described my first interaction with GKE Autopilot, so that if you don’t have time to play with it, I did it for you.

Overall, it provides a streamlined and quick experience to get from zero to a fully deployed “hello world” service. What I’m interested in exploring next are the security implications of this setup.

I hope you found this post useful and interesting, and I’m keen to get feedback on it! If you found the information useful, if something is missing, or if you have ideas on how to improve it, please let me know on Twitter.