Container Platform Security at Cruise

Best practices for enterprise-grade Kubernetes security.

Authors: Karl Isenberg & Mike Ruth

Kubernetes Logo in Armor

This is part two of our ongoing series on the Cruise PaaS:

  1. Building a Container Platform
  2. Container Platform Security

Stay tuned for more on networking, observability, and deployment!

Safety is one of our core values at Cruise. It’s why we challenge our cars to master the complexities of double-parked vehicles in San Francisco. It’s also why security is a top priority in everything we do.

However, security isn’t just a checkbox you mark off on project designs — it’s continual improvements made at multiple layers of the stack. Since security improvements often generate new requirements for existing projects, it’s good to minimize disruption by planning ahead. Because of this, security was one of the first areas we invested in when building out our internal Platform as a Service (PaaS), kickstarting our iteration towards production readiness.

In our previous post, Building a Container Platform at Cruise, we covered how the Cruise PaaS spans multiple Google Kubernetes Engine (GKE) clusters across multiple Google Cloud Platform (GCP) environments and projects, with a number of add-ons that extend GKE's functionality and make it work on our private hybrid-cloud network.

In this post, we’ll cruise through some of the many security domains that intersect with container platforms and explore how we tackled their challenges:

  1. Identity
  2. Authentication
  3. Authorization
  4. Secrets
  5. Encryption

Identity

To better understand how all of these domains interact with one another, we first need to look at identity. An identity is the representation of a person or program interacting with a system. An identity takes one of two types, user or service, depending on its use case. Both types include a compound unique identifier and a set of credentials made up of multiple factors.

Here are some example identifiers and credentials:

Table describing user identity and service identity with example unique identifiers and credentials.

Having an identity is a prerequisite to securely interacting with a service or system of services, but it is not enough by itself. In order to prove you are who you say you are, your identity needs to be managed and authenticated. Usually, this is done by a separate service, to avoid having to implement the same functionality into every service and to enable auditing and transactions across multiple services.

For identity management, we leverage Okta as our Identity Provider (IdP). Okta enables a Single Sign-On (SSO) experience for users between systems with Multi-Factor Authentication (MFA). Okta isn’t required for GKE or Kubernetes — we could have used another IdP or manually managed users within GCP itself, but Okta provides integration points and management tools that make it easier to secure a wide variety of systems.

Something neither Okta nor most IdPs provide is a universal service identity. Each platform and cloud provider tends to implement its own service account management (if any at all), so we're forced either to overload a user identity within Okta or to use the built-in primitives of GCP and Kubernetes. Cruise has primarily chosen the latter approach for service identity, though an Okta service identity is occasionally needed for the few services that interact with the Okta API directly.

For GCP service identity, we use GCP Service Accounts (GCP SA). GCP service accounts can be granted permissions in GCP through Google Cloud IAM. GKE automatically maps GCP service accounts to user accounts within Kubernetes, allowing PaaS to leverage GCP SAs as unique identities.

Within Kubernetes, we use Kubernetes service accounts to establish the service identity of workloads running in pods. This allows applications to authenticate to the GKE API server and allows other privileged services to look up which service account belongs to which application (using the TokenReview API).
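For example, a privileged service can validate a pod's bearer token by submitting a TokenReview to the API server. A minimal sketch, with a placeholder token value:

```yaml
# TokenReview request: POST this to the Kubernetes API (e.g. via
# kubectl create -f). The response's status.user.username identifies
# the service account, e.g. system:serviceaccount:<namespace>:<name>.
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  # The service account JWT presented by the workload (placeholder value).
  token: "<service-account-jwt>"
```

This leads us into our next topic: how does authentication work on Cruise's PaaS?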

Authentication

Authentication is the means by which we confirm that an identity is who it claims to be. Together, identifiers and credentials can be used to distinguish one identity from another and establish non-repudiation: high-confidence authenticity, proof of origin, and proof of integrity.

There are multiple factors that can be used to authenticate identities:

  1. Something you know (knowledge factor)
  2. Something you have (ownership factor)
  3. Something you are (inherence factor; most common with user identities)
  4. Somewhere you are (location factor)

Password, Secure Token Smartphone App, Fingerprint, Map Location

Multi-factor Authentication

For Cruise PaaS, multi-factor authentication (MFA) is achieved through Duo, integrated with Okta’s IdP. Okta is configured to require a password (knowledge factor) and a secure token (ownership factor). The Duo Mobile app can generate a secure token via either push notification or a time-based one-time passcode (TOTP). Additionally, Duo can enforce security profiles on devices that require authentication to access, by passcode, certificate, or biometric scan (fingerprint or facial recognition). Duo can also be configured to track or enforce geolocation (location factor).

For user identities, credentials are often memorized or stored in a password manager, itself accessed by one or more authentication factors. For services and other programmatic workloads, secrets management is a harder problem to solve. Check out the Secrets section to see how we securely manage credentials.

For now, let’s take a closer look at how users and services authenticate within Cruise’s PaaS.

Google has invested heavily in OAuth2, so it may come as no surprise that GCP relies heavily on it for both user and service authentication alike. For users authenticating to GCP, this means authenticating with a password and second factor through an associated IdP. Behind the scenes, this does one of two things, depending on whether the user is authenticating manually via a browser or programmatically via GCP's CLI (gcloud) or API.

  1. Browsers: The browser SSO workflow uses the SAML protocol. Provided the user has properly authenticated, the SAML assertion is stored for the remainder of the session (or the lifetime of the assertion, whichever comes first). Backend services then transparently validate the user's session on each interaction using the assertion, rather than requiring the user to sign in on every request.
  2. Programs: The newer OIDC protocol is used for programmatic interactions. The user or service identity logs in with its credentials and Google generates a signed access token for use in subsequent interactions. The OIDC access token is the basis for API and CLI authentication, analogous to the SAML assertion stored in the browser flow. For terminal access, most users use the gcloud CLI, which handles the OIDC authentication flow and caches the access token.

Once authenticated with the gcloud CLI, users can fetch kubectl credentials with it, granting them access to the Cruise PaaS via kubectl, the Kubernetes CLI, provided their identity has the required role bindings. Users then only have to manage their GCP credentials and can generate Kubernetes credentials on demand.
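For illustration, the user entry that `gcloud container clusters get-credentials` writes into a kubeconfig looks roughly like this sketch (cluster and project names are hypothetical); kubectl shells out to gcloud for a fresh OIDC access token whenever the cached one expires:

```yaml
# Sketch of a kubeconfig user entry generated by gcloud (names are
# hypothetical); kubectl invokes gcloud to obtain short-lived tokens.
users:
- name: gke_my-project_us-central1_my-cluster
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: /usr/local/bin/gcloud
        cmd-args: config config-helper --format=json
        token-key: '{.credential.access_token}'
        expiry-key: '{.credential.token_expiry}'
```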

Services can use the same identity translation process, from GCP SA to Kubernetes user. For example, some of our continuous integration and deployment (CI/CD) automation uses GCP SAs to generate kubectl credentials for deployment to Cruise PaaS. This reduces the number of credentials that need to be managed in CI/CD, since it often needs to make other GCP API calls to services like Google Cloud Storage (GCS) or Cloud SQL. GCP SA credentials can even be generated on-demand, with a TTL, using the Vault Google Cloud Secrets Engine, providing another layer of identity translation to reduce the amount of credentials stored in CI/CD. We’ll talk about Vault a bit more in the upcoming Secrets section.

Recently, Google introduced GKE Workload Identity, which allows Kubernetes SAs to act as GCP SAs, so that pods can authenticate with GCP. This replaces the legacy pattern of using GCE instance metadata, which would allow every pod on the node to have access to the same GCP SA credentials.
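Enabling this is mostly annotation-driven. A minimal sketch, assuming hypothetical project, namespace, and account names (the GCP side also needs an IAM binding granting roles/iam.workloadIdentityUser to the Kubernetes SA):

```yaml
# Kubernetes service account allowed to act as a GCP service account
# via Workload Identity (all names are hypothetical).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    iam.gke.io/gcp-service-account: my-app@my-project.iam.gserviceaccount.com
```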

This feature is great for simplicity, but even without GKE, you can use the Vault Kubernetes Auth Method. With the Vault Kubernetes Auth Backend configured, pods can log into Vault using their Kubernetes SA, and use Vault Secret Engines to generate credentials for other systems, like GCP.

In order for both of these methods to work, we depend on the native Kubernetes feature that allows configuring service accounts for pods. Kubernetes handles generating service account credentials and injecting them into pods based on the configuration of the Deployment, StatefulSet, or CronJob that spawned the pod as one of its replicas. The workload operator just needs to create the service account and configure the resource to use it.

Kubernetes injects the Kubernetes SA credentials (a JWT) into the pod using a volume mount. The pod can then use that JWT as a bearer token in subsequent interactions with the Kubernetes API. This way, all replicas of the pod can authenticate as the same service identity. As a result, it's really Kubernetes that manages the workload identity across replicas. GKE and Vault just allow translating that into an identity from another IdP.
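Concretely, the wiring looks something like this sketch (names are hypothetical); Kubernetes mounts the token at a well-known path inside each container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Every replica authenticates as this service identity; the JWT is
      # mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.
      serviceAccountName: my-app
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:latest  # hypothetical image
```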

Now that we have explained how authentication works, let’s take a look at what happens after a user or service authenticates to PaaS.

Authorization

Authorization is the means by which we enforce what an authenticated identity may access. There are many types of access control, but within the context of container platforms, we typically use Role-Based Access Control (RBAC).

With RBAC, roles are sets of permissions, and role bindings are relationships between roles and actors (individual identities or groups of identities); actors are granted permissions by defining role bindings. The role or the role binding also specifies the resources the permissions apply to.

Figure: Groups, Permissions, and Role Based Access Control (RBAC)
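In Kubernetes terms, a role and a binding that grants it to a group might look like this minimal sketch (all names are hypothetical):

```yaml
# A namespaced role granting read-only access to pods...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# ...bound to a group, so every member inherits the permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-readers
  namespace: my-namespace
subjects:
- kind: Group
  name: my-team@example.com  # hypothetical group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```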

Putting identities into groups makes it easier to bind roles without repeatedly assigning the same roles and permissions to each individual identity. Groups are generally a resource type provided by an IdP; for integration with GCP and GKE, we use Google groups provided by G Suite. In most authentication flows, group membership is either a field within the credential itself (such as a JWT's claims) or a property that can be queried from the associated IdP.

After looking up an identity's group membership, it is the responsibility of the specific resource's authorization flow to determine whether membership should be checked against a local set of groups and permissions, or whether a trusted authorization resource must be queried for the proper permissions. In PaaS, this means referencing group membership within G Suite against associated role bindings within GKE. Let's see how this works.

As mentioned earlier, GKE integrates with GCP and G Suite to provide authentication, identity management, group management, and authorization within GCP.

While roles could always be bound to individual identities, it wasn't until recently that GKE allowed binding roles to Google groups. Unfortunately, it is still not possible to add GCP SAs to Google groups, so permission management is a little more complicated and manual than it needs to be.

To solve these problems, in 2018 we wrote (and open-sourced) RBACSync. RBACSync connects with the G Suite API and the Kubernetes API to manage role bindings and group membership in Kubernetes using a Custom Resource Definition (CRD). At a high level, it takes a configuration YAML with a list of group names, role names, and namespaces, and generates role bindings within the given namespace for the set of groups and roles defined in the configuration. In this way, identities (users and service accounts alike) are granted CRUD-like permissions against Kubernetes resources based on the roles bound; a rough sketch of such a config follows the figure below.

Figure: RBACSync high level workflow and example config
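Here's a rough sketch of the shape an RBACSync config takes; the field names are approximate, so consult the RBACSync repository for the actual CRD schema:

```yaml
# Approximate sketch of an RBACSync config; see the RBACSync repo for
# the real schema. Group and namespace names are hypothetical.
apiVersion: rbacsync.getcruise.com/v1alpha
kind: RBACSyncConfig
metadata:
  name: my-team-access
  namespace: my-namespace
spec:
  bindings:
  - group: my-team@example.com   # G Suite group to sync
    roleRef:
      kind: ClusterRole
      name: edit                 # role bound within this namespace
      apiGroup: rbac.authorization.k8s.io
```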

If you want to learn more about RBACSync, check out how we manage Kubernetes RBAC.

Secrets

Secrets can be anything you want to keep private, but in the context of container platforms, they are mostly credentials: tokens, passwords, certificates, encryption keys, and so on. Kubernetes comes with its own secret storage and injection mechanism, which is especially valuable for bootstrapping, but the built-in secrets solution is generally insufficient when platforms span multiple clusters.

It’s also worth noting that most Kubernetes deployments don’t encrypt secrets at rest by default — it wasn’t until Kubernetes v1.13 that the encryption-at-rest feature graduated to beta. Thankfully, GKE solves this problem for us by managing its own encrypted-at-rest database, with support for application-layer secrets encryption currently in beta.
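For reference, on self-managed clusters that feature is driven by a config file passed to the API server via --encryption-provider-config. A minimal sketch, with placeholder key material:

```yaml
# Encrypts Secrets with AES-CBC before they are written to etcd.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: "<base64-encoded-32-byte-key>"  # placeholder
  - identity: {}  # fallback so existing unencrypted data stays readable
```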

Despite GKE having better secrets management than vanilla Kubernetes, it was important for us to create well-defined patterns for managing and fetching secrets that could be used across the entire organization, including environments with multiple clusters, platforms, and clouds. To this end, we opted to use HashiCorp Vault to manage and store secrets. Like many of HashiCorp’s products, Vault looks to solve a specific set of problems for many different clouds and platforms. Two of the greatest benefits it provided us were:

  1. Authentication methods that leverage existing identity primitives from multiple IdPs (Okta users, GCP service accounts, Kubernetes service accounts, etc.)
  2. Authorization that supports RBAC and group membership.

These capabilities allow Vault to unify diverse systems across multiple environments with centralized multi-tenant access control.

For example, as mentioned in the authentication section, PaaS workloads use the Vault Kubernetes Auth Method to log into Vault using their Kubernetes service account and retrieve secrets at initialization or runtime. Those workload secrets can then be managed by teams of engineers in a central place, without compromising the fine-grained access control that isolates the environments and workloads from each other.

Since most workloads don’t integrate directly with Vault and are not Vault-aware, we needed a way to fetch secrets so they were available when and where our workloads required them. To do this in a standard and automated way, Cruise’s security team wrote an open source sidecar container: Daytona.

Daytona takes over a few responsibilities to make workloads more agnostic to the secrets backend (Vault). Daytona:

  1. Authenticates to Vault by leveraging the Kubernetes service account bound to the pod
  2. Fetches secrets needed by the workload
  3. Writes the secrets to an in-memory volume (to avoid leaking to persistent storage)
  4. Shares the volume with the workload container
  5. Updates the secret at runtime, when it changes in Vault (optional)

Figure: Secrets injection with Vault and Daytona

The workload, or the entrypoint script that manages the workload, reads the secrets from the shared volume and uses them as needed.
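Putting it all together, a pod running a Daytona sidecar looks roughly like this sketch; the image path and environment variables here are illustrative, so consult the Daytona repository for its actual configuration options:

```yaml
# Sketch of a pod with a Daytona sidecar (image path and env vars are
# illustrative; see the Daytona repo for real configuration options).
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app   # identity Daytona uses to log into Vault
  volumes:
  - name: secrets
    emptyDir:
      medium: Memory           # tmpfs, so secrets never hit persistent disk
  containers:
  - name: daytona
    image: gcr.io/my-project/daytona:latest   # illustrative image path
    env:
    - name: VAULT_ADDR         # address of the Vault server
      value: https://vault.example.com
    volumeMounts:
    - name: secrets
      mountPath: /secrets
  - name: my-app
    image: gcr.io/my-project/my-app:latest
    volumeMounts:
    - name: secrets            # workload reads secrets from the shared volume
      mountPath: /secrets
```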

With a GitOps workflow to manage Vault configuration, we can easily manage access to secrets with pull requests and approvals.

If you’re interested in learning more about secrets management at Cruise, stay tuned for an upcoming blog post dedicated to secrets management in cloud agnostic environments.

Daytona is open source software, so give it a try!

Encryption

Encryption is a broad topic, but we can break it down into two categories:

  1. Encryption in Transit
  2. Encryption at Rest

Silly graphic of encryption in transit and at rest

Encryption in Transit & Encryption at Rest

Both of these categories center on certificates and how we authenticate them via a chain of trust, which implies managing a root Certificate Authority (CA).

There are a few common ways to manage your own chain of trust, but deploying and managing these implementations is often complicated and error prone. For PaaS, we trust GKE to manage and rotate an internal CA, signed by a trust chain from Google — this takes away a lot of the headaches we had when deploying our own Kubernetes clusters. For most other use cases, Cruise has implemented our own internal Public Key Infrastructure (PKI), leveraging Vault as an intermediate CA signed by an offline root to allow for manual revocation in case of emergency.

Encryption in Transit

One of the more challenging parts of securing PaaS has been ensuring all of our services communicate in a secure manner. This typically means using Transport Layer Security (TLS).

To better understand exactly where this was required, we performed threat modeling: analyzing how information moves throughout our architecture — visualized with a Data Flow Diagram (DFD) — and thinking like an attacker to identify the areas where we should focus our defensive efforts.

Here’s a list of some of the services in PaaS that need TLS:

  1. Kubernetes State Storage (etcd) API
  2. Kubernetes API
  3. Kubelet API
  4. Workload Ingress

Thankfully, all the items on that list except workload ingress are encrypted by GKE automatically. In fact, since the Kubernetes master nodes are managed by Google, the backend storage (usually etcd, when not on GKE) isn’t even accessible externally. In GKE, control plane communication is encrypted by default, with authentication, authorization, and encryption (via TLS) enforced by the Kubernetes API server. GKE also handles signing and rotating the certificates and key pairs for the API server, the kubelets on the nodes it provisions, and the internal CA that these certificates are minted from. This is a huge win compared to rolling your own deployment of Kubernetes, where you would have to implement all of this yourself.

Encryption for workload ingress falls on us to implement. In PaaS, we have both public and private ingress, which means two different domains, two different ingress routing stacks, and two different certificate authorities. We’ll dig into this a little deeper in a future platform networking blog post, but the sketch below shows the general shape.
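In the meantime, here's a minimal sketch of how TLS termination is typically declared for workload ingress in Kubernetes; the hostnames, secret name, and service names are hypothetical:

```yaml
# Ingress terminating TLS with a certificate stored in a Secret
# (all names are hypothetical).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls   # Secret holding tls.crt and tls.key
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8443
```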

Encryption at Rest

In transit, we can assume the public internet is not implicitly trustworthy, and if Zero Trust best practices tell us anything, we probably shouldn’t trust our private intranet either. Taking this a step further, implicitly trusting the people with access to our physical hardware (and its virtual cloud analogs) is also undesirable. With this in mind, we know that the following data resides on persistent storage, and as a result, we look to encrypt it at rest:

  1. Kubernetes State Storage
  2. Kubernetes Node Disks
  3. Kubernetes Service Account Credentials
  4. Workload Secrets
  5. Workload Volumes

Again, GKE handles encrypting the majority of these, most obviously state storage, node disks, and workload volumes. That leaves Kubernetes service account credentials and workload secrets to consider.

As we discussed in the Secrets section above, workload secrets are injected into temporary in-memory (tmpfs) volume mounts. Because these volumes are not persisted, we do not worry about encrypting them at rest.

Lastly, as mentioned in the Encryption in Transit section above, GKE manages the signing and rotation of the certificate authority (CA) used by Kubernetes. The Kubernetes CA signs the certificates used by the API server and kubelets, and it also mints Kubernetes service account credentials (JWTs). And since GKE encrypts state storage, where service account credentials live, those credentials get the benefit of encryption at rest too.

Security concerns affect everything we do, but this post is already longer than most people will read and we do still have a self-driving car to build…

So if you’re building your own container platform, we recommend you do further reading on these other areas too:

  • Auditing (Compliance, Threat Detection, Alerting)
  • Platform Hardening (SecurityContext, Node Metadata Protection, Network Policies, Pod Security Policies)
  • Secure Supply Chains (Trusted Image Building, Vulnerability Scanning, Attestation)
  • Patch Management
  • Zero Trust Networking

In the next blog post of the series, we will take a look at some of the networking challenges that come with building a platform. Stay tuned for more about observability and deployment after that!

Interested in our tools? Try out RBACSync or Daytona today. Stay tuned to the Cruise blog for future open source announcements.
