On the 3rd of December 2018, a critical security vulnerability affecting the Kubernetes API server was announced. Unsurprisingly, the announcement got a lot of traction (especially on Twitter).

In this post I’ll try to dissect the information currently available.

This post has been updated to expand on the technical details of the vulnerability and to add a section explaining how to verify whether a cluster is currently affected.

1. The Issue

The official announcement provides a quick overview of the vulnerability, but it is actually issue #71411 that provides the technical details (I encourage you to go read it thoroughly if you want the full picture):

CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H (9.8, critical)

With a specially crafted request, users that are authorized to establish a connection through the Kubernetes API server to a backend server can then send arbitrary requests over the same connection directly to that backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection.

So we know that it is rated as critical and that it affects the API server. In particular, the breakdown below shows which conditions need to be met for a deployment to be affected, along with the corresponding impact:

Precondition 1: the deployment runs extension API servers (like the metrics server) that are directly accessible from the Kubernetes API server’s network.

Impact:

  • An API call to any aggregated API server endpoint can be escalated to perform any API request against that aggregated API server.
  • In default configurations, all users (authenticated and unauthenticated) are allowed to perform discovery API calls that allow this escalation.

Precondition 2: the deployment grants pod exec/attach/portforward permissions to users that are not expected to have full access to kubelet APIs.

Impact:

  • A pod exec/attach/portforward API call can be escalated to perform any API request against the kubelet API on the node specified in the pod spec (e.g. listing all pods on the node, running arbitrary commands inside those pods, and obtaining the command output).
  • Pod exec/attach/portforward permissions are included in the admin/edit/view RBAC roles intended for namespace-constrained users.

Another aggravating factor is that it might prove difficult to detect whether this vulnerability has been exploited, since unauthorized requests might not appear in the audit logs. Although these requests do appear in the kubelet or aggregated API server logs, they are indistinguishable from correctly authorized and proxied requests via the Kubernetes API server (as stated in issue #71411).

Since then, the team at Gravitational has published a writeup in which they explain the “verify backend upgrade connection” commit and the bug’s actual impact. Here is the excerpt with the description of the vulnerability:

[…] Kubernetes API isn’t just basic HTTPS. To support remote administrative tasks, K8s also allows upgrading apiserver connections to full, live, end-to-end HTTP/1.1 websockets.

The CVE-2018-1002105 vulnerability comes from the way this websocket upgrade was handled: if the request contained the Connection: Upgrade http header, the master apiserver would forward the request and bridge the live socket to the aggregate. The problem occurs in the event that the websocket connection fails to complete. Prior to the fix, the apiserver could be tricked into assuming the pass-thru connection successfully landed even when it had triggered an error code. From that “half-open” and authenticated websocket state, the connected client could send follow-up HTTP requests to the aggregated endpoint, essentially masquerading itself as the master apiserver.
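The sequence described above can be sketched at the wire level. The pod name, namespace, query string, and kubelet endpoint below are hypothetical and chosen purely for illustration (in practice, exec upgrades may use SPDY rather than plain websockets, depending on the client); this shows the shape of the exchange, not a working exploit.

```shell
# Illustrative only: the wire-level shape of the attack described above.
# Pod name, namespace, and query parameters are hypothetical.

# 1) An upgrade request crafted to fail on the backend (e.g. bogus
#    parameters). Pre-fix API servers left the proxied connection open
#    even though the backend answered with an error:
upgrade_req="$(printf 'POST /api/v1/namespaces/default/pods/web/exec?command=bogus HTTP/1.1\r\nHost: apiserver\r\nConnection: Upgrade\r\nUpgrade: SPDY/3.1\r\n\r\n')"

# 2) A follow-up request sent over the same, now "half-open" connection.
#    The backend sees it as coming from the API server's own TLS identity:
followup_req="$(printf 'GET /runningpods/ HTTP/1.1\r\nHost: kubelet\r\n\r\n')"

printf '%s\n---\n%s\n' "$upgrade_req" "$followup_req"
```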

2. Mitigations that Should Already be in Place

Reading the preconditions and impact above, a few points come to mind:

Possible exploit: anonymous user to aggregated API server escalation.
Natural mitigation: it is already recommended to disable unauthenticated (anonymous) access to the API server in a production cluster (kube-apiserver ... --anonymous-auth=false). If anonymous access is needed for load balancing and/or health checks, it is usually recommended to explicitly grant RBAC privileges to the system:anonymous user and the system:unauthenticated group instead.

Possible exploit: authenticated user to aggregated API server escalation.
Natural mitigation: none comes to mind (short of removing access to all aggregated APIs, which defeats the purpose).

Possible exploit: authorized pod exec/attach/portforward to kubelet API escalation.
Natural mitigation: it is already recommended to deny privilege escalation via interactive shells or attaching to privileged containers in production (kube-apiserver ... --admission-control=...,DenyEscalatingExec).
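As a quick sanity check for the two flag-based mitigations above, the sketch below greps a file containing the kube-apiserver command line. The helper name is made up, and the manifest path shown in the comment is typical for kubeadm-style static pods but is an assumption; adjust it for your distribution.

```shell
# Sketch: check a kube-apiserver manifest (or any dump of its command
# line) for the two mitigations discussed above. The helper name and
# manifest path are illustrative, not an official tool.
check_flags() {
  # $1: path to a file containing the kube-apiserver command line,
  #     e.g. /etc/kubernetes/manifests/kube-apiserver.yaml on kubeadm
  if grep -q -- '--anonymous-auth=false' "$1"; then
    echo "OK: anonymous auth disabled"
  else
    echo "WARN: anonymous auth not explicitly disabled"
  fi
  if grep -q 'DenyEscalatingExec' "$1"; then
    echo "OK: DenyEscalatingExec admission plugin enabled"
  else
    echo "WARN: DenyEscalatingExec not enabled"
  fi
}
```

Usage would look like `check_flags /etc/kubernetes/manifests/kube-apiserver.yaml`, run on a control-plane node.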

3. Who is Affected?

Kubernetes v1.10.11, v1.11.5, and v1.12.3 have been released to address CVE-2018-1002105.

Therefore, these are the currently affected versions:

  • Kubernetes v1.0.x-1.9.x
  • Kubernetes v1.10.0-1.10.10 (fixed in v1.10.11)
  • Kubernetes v1.11.0-1.11.4 (fixed in v1.11.5)
  • Kubernetes v1.12.0-1.12.2 (fixed in v1.12.3)

As you can see, there are currently no patches for versions earlier than 1.10, so upgrading to a supported version is imperative.
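The version ranges above can be turned into a quick check. The sketch below is a minimal helper (the function name is made up) that classifies a version string against the affected ranges listed above; it assumes `sort -V` is available (GNU coreutils or busybox).

```shell
# Sketch: decide whether a Kubernetes version string falls in the
# affected ranges listed above. Assumes `sort -V` (GNU coreutils).
is_vulnerable() {
  v="${1#v}"                        # strip a leading "v", e.g. v1.10.5
  case "$v" in
    1.10.*) fixed=1.10.11 ;;
    1.11.*) fixed=1.11.5 ;;
    1.12.*) fixed=1.12.3 ;;
    1.[0-9].*) echo yes; return ;;  # 1.0.x-1.9.x: no patch exists
    *) echo no; return ;;           # 1.13+ shipped with the fix
  esac
  # vulnerable if v sorts strictly before the first patched release
  if [ "$v" != "$fixed" ] && \
     [ "$(printf '%s\n%s\n' "$v" "$fixed" | sort -V | head -n1)" = "$v" ]; then
    echo yes
  else
    echo no
  fi
}

is_vulnerable v1.10.10   # prints "yes"
is_vulnerable v1.12.3    # prints "no"
```

Against a live cluster, you could feed it the server version, e.g. `is_vulnerable "$(kubectl version -o json | jq -r .serverVersion.gitVersion)"`.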

4. How To Check If You Are Affected?

In the original GitHub issue #71411, @liggitt proposes a one-liner that can be used to check whether a cluster has aggregated APIs enabled:

kubectl get apiservices \
  -o 'jsonpath={range .items[?(@.spec.service.name!="")]}{.metadata.name}{"\n"}{end}'

Alternatively, the team at Gravitational has created a vulnerability test utility that checks for two things:

  1. whether the cluster allows unauthenticated access to the API (which would in turn allow unauthenticated access to aggregated API endpoints);
  2. whether the API server leaves the connection open after a malformed request (which indicates that the cluster is susceptible to CVE-2018-1002105).

docker run -it --rm -v $HOME/.kube/config:/kubeconfig quay.io/gravitational/cve-2018-1002105:latest