`kubectl` auth changes in GKE v1.25
I was prompted by a Stack Overflow question, “How to remove warning in kubectl with gcp auth plugin?”, to try this new mechanism for myself. It’s described by Google in the post Here’s what to know about changes to kubectl authentication coming in GKE v1.25.
One question I’d not considered is: how does the change manifest? Thinking about it, I realized it’s probably evident in the users section of the kubectl config. A long time ago, I wrote a blog post Kubernetes Engine: kubectl config that explains how kubectl leverages (!) gcloud to get an access token for GKE.
To recap: when you’re using GKE, each time you run gcloud container clusters get-credentials, gcloud appends cluster, context and user entries to your kubectl config file (${HOME}/.kube/config or ${KUBECONFIG}). We’re mostly interested in the user entry for this discussion. For example:
users:
- name: gke_{project}_{zone}_{name}
  user:
    auth-provider:
      config:
        access-token: ya29.a0ARrdaM...
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: '2022-00-00T00:00:00Z'
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
When you use kubectl, the CLI runs the command represented by {cmd-path} {cmd-args}, i.e. (in my case):
/usr/lib/google-cloud-sdk/bin/gcloud config config-helper \
--format=json
Which should return:
{
  "configuration": {
    "active_configuration": "default",
    "properties": {
      "core": {
        "account": "{account}"
      }
    }
  },
  "credential": {
    "access_token": "ya29.a0ARrdaM...",
    "id_token": "eyJhbGci...",
    "token_expiry": "2022-00-00T00:00:00Z"
  },
  "sentinels": {
    "config_sentinel": "{home}/.config/gcloud/config_sentinel"
  }
}
kubectl then uses expiry-key ({.credential.token_expiry}) to extract the expiry (i.e. "2022-00-00T00:00:00Z") and token-key ({.credential.access_token}) to extract the access token (i.e. "ya29.a0ARrdaM..."), updates the kubectl config user entry with these values, and uses the token to authenticate. See how they match?
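That extraction step can be sketched in Python. This is an illustration of the JSONPath-style lookup, not kubectl’s actual implementation; the sample payload mirrors the config-helper output above, with the same redacted values:

```python
import json

# Sample `gcloud config config-helper --format=json` output
# (values redacted as in the post).
config_helper_output = json.loads("""
{
  "credential": {
    "access_token": "ya29.a0ARrdaM...",
    "id_token": "eyJhbGci...",
    "token_expiry": "2022-00-00T00:00:00Z"
  }
}
""")

def lookup(payload: dict, path: str) -> str:
    """Resolve a JSONPath-like key such as '{.credential.access_token}'."""
    keys = path.strip("{}").lstrip(".").split(".")
    value = payload
    for key in keys:
        value = value[key]
    return value

# The paths configured as token-key and expiry-key in the user entry.
token = lookup(config_helper_output, "{.credential.access_token}")
expiry = lookup(config_helper_output, "{.credential.token_expiry}")
```

With the payload above, token is the access-token value and expiry is the expiry value that get written back into the user entry.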
All that changes after gke-gcloud-auth-plugin is:
- installed
- enabled (export USE_GKE_GCLOUD_AUTH_PLUGIN=True)
- gcloud container clusters get-credentials is run

NOTE the plugin must be enabled and the kubectl config user entry must be (re)created before you’ll see the change.
In my case, after doing this, the same (!) user entry is now:
users:
- name: gke_{project}_{zone}_{name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
So, if you want to confirm that you’ve configured gke-gcloud-auth-plugin correctly, checking that your user entry has been updated in this way is the way to do it.
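If you want to script that check, here’s a sketch. The user entry is represented as a plain dict matching the YAML above (in practice you’d parse it out of your kubeconfig); the helper name is mine, not part of any tool:

```python
# A user entry as it appears after the plugin is enabled (see YAML above).
user_entry = {
    "name": "gke_{project}_{zone}_{name}",
    "user": {
        "exec": {
            "apiVersion": "client.authentication.k8s.io/v1beta1",
            "command": "gke-gcloud-auth-plugin",
            "provideClusterInfo": True,
        }
    },
}

def uses_gke_auth_plugin(entry: dict) -> bool:
    """True if the user entry delegates auth to gke-gcloud-auth-plugin."""
    exec_cfg = entry.get("user", {}).get("exec", {})
    return exec_cfg.get("command") == "gke-gcloud-auth-plugin"
```

An entry still carrying the old auth-provider block has no exec section, so the check returns False for it.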