SSO for Managed Kubernetes Services (Part 1)

Muralidharan K
4 min read · May 14, 2021

Recently I was working on setting up a unified authentication method for multiple managed Kubernetes clusters (EKS and GKE). My goal was to set up a seamless authentication experience for developers so that they don't need to worry about whether they are logging into clusters in GCP or AWS. I also wanted to leverage my existing identity management solution (Okta) as a single source of truth for identities, and avoid managing identities, groups, roles, etc. in the cloud service providers' IAM.

As you know, cloud providers use their own IAM for authenticating users against their managed Kubernetes offerings. AWS uses IAM principals like users and roles for EKS authentication, while GKE manages authentication mainly through Google Cloud users and Google Cloud service accounts. GKE also supports Google Group-based RBAC (Beta at the moment). But in both cases we have to manage roles or groups in the provider's native IAM service, which is redundant for me as I'm already managing users and groups in Okta, and it also becomes difficult when you have to manage clusters across multiple AWS accounts and GCP projects.

So I decided to explore OIDC authentication for Kubernetes, so that I could use Okta for authentication, and Okta group membership plus Kubernetes RBAC for authorization on the clusters. This also helps us avoid managing groups and group memberships in the cloud service provider's IAM. As you can see in the documentation here, Kubernetes natively supports OIDC authentication, but it's not that easy to set up, especially for managed Kubernetes offerings. I will explain in a bit why that is.

As you probably noticed in the Kubernetes documentation, we have to update the API server configuration with the following parameters to support OIDC authentication. But the whole point of managed Kubernetes is that the control plane is managed by the cloud service provider, so they will not let you directly update the API server configuration.

--oidc-issuer-url
--oidc-client-id
--oidc-username-claim
--oidc-groups-claim
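
On a self-managed cluster, these flags would go directly on the kube-apiserver. A minimal sketch, reusing the same placeholder Okta issuer URL and client ID as the rest of this post:

--oidc-issuer-url=https://my.okta.com          # must serve /.well-known/openid-configuration over HTTPS
--oidc-client-id=ThisisanExampleforOktaTest    # client ID of the Okta OIDC app
--oidc-username-claim=email                    # token claim to use as the Kubernetes username
--oidc-groups-claim=groups                     # token claim to use as the Kubernetes group list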

However, AWS now supports integration with an external OIDC provider. We can easily associate an Okta OIDC app with an EKS cluster from the console, with eksctl, or through Terraform/CloudFormation templates while setting up the cluster (an eksctl sketch follows the console steps below). For this example, I have already created an OIDC application in Okta. You can follow the instructions here to set up an OIDC app. While creating the app, we need to set the "Login redirect URI" to "http://localhost:8080". Once you are done with the configuration, note down the OIDC Issuer URL and Client ID; you will use this information to associate the OIDC provider with the EKS cluster. To associate an OIDC provider with a cluster:

  1. Log in to your AWS account
  2. Go to the EKS console
  3. Click on your cluster
  4. Go to the Configuration tab
  5. Go to the Authentication sub-tab
  6. Click on "Associate an OIDC Provider" and fill in the details, including the Issuer URL and Client ID, as shown below.
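
If you prefer eksctl, the same association can be declared in the cluster config. This is a minimal sketch, assuming a recent eksctl version (identity provider support landed around v0.38); the cluster name and region are placeholders:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster     # placeholder cluster name
  region: us-east-1    # placeholder region
identityProviders:
  - name: okta
    type: oidc
    issuerURL: https://my.okta.com
    clientID: ThisisanExampleforOktaTest
    usernameClaim: email
    groupsClaim: groups

Then apply it with:

$ eksctl associate identityprovider -f cluster.yaml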

Once the OIDC provider has been associated with the EKS cluster, we can use any OIDC helper (I'm using kubelogin in this example) to get the id_token, refresh token, etc., and to configure the corresponding values in the kubeconfig.

$ kubectl oidc-login setup --oidc-issuer-url https://my.okta.com --oidc-client-id ThisisanExampleforOktaTest --oidc-client-secret=ThisisanExampleforOktaTestSecret --oidc-extra-scope groups,email

When you run the above command, a browser window pops up and asks for your Okta login. On a successful login, you are redirected back to the terminal with the response below.

authentication in progress...

## 2. Verify authentication

You got a token with the following claims:

{
  "sub": "ThisisanExampleforOktaTest",
  "email": "youremail@domain.com",
  "ver": 1,
  "iss": "https://my.okta.com",
  "aud": "ThisisanExampleforOktaTest",
  "iat": 1620894715,
  "exp": 1620898315,
  "jti": "ID.sdfs-ycycyxc",
  "amr": [
    "pwd",
    "swk",
    "mfa"
  ],
  "idp": "yxcycvyvxvv",
  "nonce": "cy<xcycyxcycyc",
  "auth_time": 1620198075,
  "at_hash": "dtUFgiqPaExample"
}

## 3. Bind a cluster role

Run the following command:

kubectl create clusterrolebinding oidc-cluster-admin --clusterrole=cluster-admin --user='https://my.okta.com#ThisisanExampleforOktaTest'

## 4. Set up the Kubernetes API server

Add the following options to the kube-apiserver:

--oidc-issuer-url=https://my.okta.com
--oidc-client-id=ThisisanExampleforOktaTest

## 5. Set up the kubeconfig

Run the following command:

kubectl config set-credentials oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://my.okta.com \
  --exec-arg=--oidc-client-id=ThisisanExampleforOktaTest \
  --exec-arg=--oidc-client-secret=ThisisanExampleforOktaTestSecret \
  --exec-arg=--oidc-extra-scope=groups \
  --exec-arg=--oidc-extra-scope=email

## 6. Verify cluster access

Make sure you can access the Kubernetes cluster:

kubectl --user=oidc get nodes

You can switch the default context to oidc:

kubectl config set-context --current --user=oidc

You can share the kubeconfig with your team members for on-boarding.

Follow the instructions in the response to map the OIDC user to a Kubernetes role and set up the kubeconfig.

You can ignore step 4, as we have already associated the OIDC provider with the EKS cluster.
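
For reference, the set-credentials command from step 5 writes an exec credential plugin block like this into your kubeconfig (same placeholder values as above); kubectl runs the plugin to fetch a fresh id_token whenever it talks to the cluster:

users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
        - oidc-login
        - get-token
        - --oidc-issuer-url=https://my.okta.com
        - --oidc-client-id=ThisisanExampleforOktaTest
        - --oidc-client-secret=ThisisanExampleforOktaTestSecret
        - --oidc-extra-scope=groups
        - --oidc-extra-scope=email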

This works perfectly: once you create a ClusterRoleBinding that binds the Okta group or user to a cluster role (a group-based sketch follows below), your developers can log in to the cluster without using an AWS principal. They just need an OIDC helper like kubelogin. But GKE does not support external OIDC providers yet, which means this is only a partial solution for me.
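
For example, assuming your token's groups claim includes an Okta group named k8s-developers (a hypothetical name), a group-based binding to the built-in view role might look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: okta-developers-view
subjects:
  - kind: Group
    name: k8s-developers    # group name as it appears in the Okta "groups" claim (hypothetical)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io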

Then I came across this presentation from Josh Van Leeuwen about kube-oidc-proxy and decided to give it a go.

To be continued…

