[Solved] Kubectl Error: exec plugin: invalid apiversion “client.authentication.k8s.io/v1alpha1”


When trying to connect to an AWS EKS cluster, you might face the exec plugin: invalid apiVersion “client.authentication.k8s.io/v1alpha1” error.

I have faced this issue myself, and in this blog we will look at how to rectify it, along with a few other associated issues.

Solution 1: Upgrade Kubectl Version

First, ensure that the kubectl version you are using matches the cluster version (kubectl is supported within one minor version of the API server, older or newer). If you have an outdated kubectl version, you will get the following error while using kubectl.

error: exec plugin: invalid apiVersion “client.authentication.k8s.io/v1alpha1”

To rectify the issue, follow the official installing or updating kubectl documentation and upgrade to the correct kubectl version.
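As a quick sketch of the version-skew check, the following script compares client and server minor versions. The version numbers here are stand-ins for illustration; in practice you would parse them from `kubectl version --output=json`.

```shell
#!/bin/sh
# Stand-in values; in practice parse them from `kubectl version --output=json`.
client_minor=24   # e.g. kubectl v1.24.x
server_minor=27   # e.g. EKS cluster on v1.27

skew=$((server_minor - client_minor))
abs_skew=${skew#-}   # absolute value of the skew

# kubectl is supported within one minor version of the API server
if [ "$abs_skew" -gt 1 ]; then
  echo "kubectl is $abs_skew minor versions away from the server; upgrade it"
else
  echo "kubectl version skew is fine"
fi
```

With the stand-in values above, the script reports a skew of 3 and recommends an upgrade.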

Solution 2: Add IAM User/Role to aws-auth config

In AWS, when an Elastic Kubernetes Service (EKS) cluster is created, the IAM user (or role) that created it becomes the owner of the cluster and is initially the only identity that can access it.

So having the EKS cluster kubeconfig file alone does not grant you access to the cluster, because IAM and Kubernetes RBAC are linked through the aws-auth configmap present inside the Kubernetes cluster.

So even if you have AWS administrative permissions, you cannot access the cluster resources unless your user is mapped in the aws-auth configmap.

If other IAM users or instances with IAM roles need to access the cluster, the aws-auth configmap must be modified to include their user/role information; otherwise, those users get the client.authentication.k8s.io error when they try to access the cluster.

You might also get the following error.

E1109 13:10:58.239932    2065 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)

To get detailed information about the error, use the following command.

kubectl cluster-info --v=8

The output provides more details about the error.

I1110 05:44:53.853393    1929 round_trippers.go:463] GET https://824095A4FBC63C52E062D5517991B706.sk1.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue
I1110 05:44:53.853501    1929 round_trippers.go:469] Request Headers:
I1110 05:44:53.853583    1929 round_trippers.go:473]     User-Agent: kubectl/v1.28.3 (linux/amd64) kubernetes/a8a1abc
I1110 05:44:53.853675    1929 round_trippers.go:473]     Accept: application/json, */*
I1110 05:44:55.411928    1929 round_trippers.go:574] Response Status: 401 
I1110 05:44:55.412666    1929 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I1110 05:44:55.413353    1929 helpers.go:246] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
error: You must be logged in to the server (Unauthorized)

Map IAM User/Role Details in AWS-Auth

To rectify the issue, add the IAM user/role to the aws-auth configmap using the steps given below.

Open the aws-auth configmap for editing from an instance or workstation where you have admin privileges, using the following command.

kubectl edit configmap aws-auth -n kube-system

Add the new IAM user's information to grant access to the EKS cluster.

The following example includes both user and role mappings: the userarn field is for IAM users and rolearn is for IAM roles. Edit it as per your needs.

Replace 814200988517 with your AWS account ID and eks-dev-user with your IAM user name. If you are using roles, replace role/test-role with your role name.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::814200988517:role/eksctl-custom-cluster-nodegroup-ng-NodeInstanceRole-ieXb6Zk7Og64
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::814200988517:role/test-role
      username: test-role
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::814200988517:user/eks-dev-user
      username: eks-dev-user

Note: system:masters is a built-in group bound to the cluster-admin ClusterRole, which has full administrative privileges. If you want to add IAM users with limited privileges to the cluster, you need to create a Kubernetes ClusterRole or Role with the required privileges, bind it to a group, and map that group in the aws-auth configmap instead of system:masters.
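As a sketch of the limited-privilege approach, the following manifest creates a read-only ClusterRole and binds it to a group; the names eks-read-only and eks-readers are hypothetical, and you would list the group under groups in your aws-auth mapping instead of system:masters.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-read-only          # hypothetical name
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-read-only-binding  # hypothetical name
subjects:
  - kind: Group
    name: eks-readers          # reference this group in aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-read-only
  apiGroup: rbac.authorization.k8s.io
```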

To verify that the configuration is properly done, use the following command. Replace custom-cluster with your cluster name.

eksctl get iamidentitymapping --cluster custom-cluster --region us-west-2

You will get the following output if the configurations are correct.

ARN                                                                                             USERNAME                                GROUPS                         ACCOUNT
arn:aws:iam::814200988517:role/eksctl-custom-cluster-nodegroup-ng-NodeInstanceRole-ieXb6Zk7Og64 system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
arn:aws:iam::814200988517:user/eks-dev-user                                                     eks-dev-user                            system:masters

eksctl Identity Mapping

If you are using eksctl to manage the EKS cluster, you can use its identity mapping commands to add users and roles to the aws-auth configmap.

For example,

eksctl create iamidentitymapping --cluster <clusterName> --region=<region> --arn arn:aws:iam::123456:role/testing --group system:masters --username admin

Refer to the eksctl identity mapping documentation to learn more.

Solution 3: Upgrade AWS CLI to 2.0 or Higher

If you are using an AWS CLI version older than 2.x, you could encounter this error, because older versions of aws eks get-token emit tokens with the deprecated v1alpha1 API version that newer kubectl releases reject.

Package managers might install older versions by default, so always prefer the official AWS CLI binaries to get the latest version in your projects.
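A quick way to check is to parse the major version out of the aws --version output. The version string below is a stand-in for illustration; in practice you would capture it with `ver_line=$(aws --version 2>&1)`.

```shell
#!/bin/sh
# Stand-in for: ver_line=$(aws --version 2>&1)
# Example real output: "aws-cli/2.13.25 Python/3.11.5 Linux/5.15 ..."
ver_line="aws-cli/2.13.25 Python/3.11.5 Linux/5.15"

major=${ver_line#aws-cli/}   # strip the "aws-cli/" prefix
major=${major%%.*}           # keep only the major version number

if [ "$major" -lt 2 ]; then
  echo "AWS CLI v$major detected; upgrade to v2 or higher"
else
  echo "AWS CLI v$major detected; no upgrade needed"
fi
```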


I created an EKS cluster using eksctl from my workstation.

I faced the client.authentication.k8s.io issue when trying to access the EKS cluster from an EC2 instance. Even though the instance had administrative privileges, it couldn't connect to the cluster.

Based on my research I figured out that it could be a problem with kubectl and aws-auth. First, I updated kubectl to rule out that issue. Even after updating kubectl, the issue persisted.

Then I added the IAM role to the aws-auth configmap from my workstation, where I had created the EKS cluster. After that, I was able to access the EKS cluster from the EC2 instance.

I have also tested this for an IAM user and it worked without any issues.

If your issues are not resolved even after configuring the steps mentioned in the blog, drop a comment with the error and we will take a look.
