@lusoal
Created October 20, 2022 20:35
IRSA Demonstration Script

Set up RBAC

Creating a new user:

export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export AWS_DEFAULT_REGION=us-east-2
aws iam create-user --user-name rbac-user
aws iam create-access-key --user-name rbac-user | tee /tmp/create_output.json

To make it easy to switch back and forth between the admin user you created the cluster with and the new rbac-user, run the following command to create a script that, when sourced, sets the active user to rbac-user:

cat << EoF > rbacuser_creds.sh
#!/bin/bash
export AWS_SECRET_ACCESS_KEY=$(jq -r .AccessKey.SecretAccessKey /tmp/create_output.json)
export AWS_ACCESS_KEY_ID=$(jq -r .AccessKey.AccessKeyId /tmp/create_output.json)
EoF

chmod +x rbacuser_creds.sh

Map users to K8s

Next, we'll define a Kubernetes user called rbac-user and map it to its IAM user counterpart.

eksctl create iamidentitymapping \
  --cluster irsa-demonstration \
  --arn arn:aws:iam::${ACCOUNT_ID}:user/rbac-user \
  --username rbac-user

To verify that the mapping was created correctly, run the following:

kubectl get cm aws-auth -n kube-system -o yaml
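
You should see an entry for rbac-user under mapUsers, similar to the following (illustrative; your account ID will differ):

  mapUsers: |
    - userarn: arn:aws:iam::<ACCOUNT_ID>:user/rbac-user
      username: rbac-user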

Test the User

Source the script to load the rbac-user's AWS IAM environment variables into your current shell (executing it with ./ would only set them in a subshell):

source rbacuser_creds.sh

Validate that we are using the correct user:

aws sts get-caller-identity
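
The returned ARN should end in user/rbac-user, similar to the following (the user ID and account ID here are placeholders):

{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "<ACCOUNT_ID>",
    "Arn": "arn:aws:iam::<ACCOUNT_ID>:user/rbac-user"
}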

Making an API call:

kubectl get pods -n kube-system

You should get a response back similar to:

Error from server (Forbidden): pods is forbidden: User "rbac-user" cannot list resource "pods" in API group "" in the namespace "kube-system"

Just creating the user doesn't give that user access to any resources in the cluster. In order to achieve that, we'll need to define a role, and then bind the user to that role. We'll do that next.

Create Role and Role Binding

First, unset the rbac-user credentials so that we are acting as the cluster admin again:

unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID

Validate that we are using the correct user:

aws sts get-caller-identity

Now that we're the admin user again, we'll create a role called pod-reader that provides list, get, and watch access for pods and deployments, but only in the kube-system namespace. Run the following to create this role:

cat << EoF > rbacuser-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["list","get","watch"]
- apiGroups: ["extensions","apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
EoF

We have the user, we have the role, and now we'll bind them together with a RoleBinding resource. Run the following to create this RoleBinding:

cat << EoF > rbacuser-role-binding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: kube-system
subjects:
- kind: User
  name: rbac-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EoF

Next, we apply the Role and RoleBinding we created:

kubectl apply -f rbacuser-role.yaml
kubectl apply -f rbacuser-role-binding.yaml
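
Optionally, confirm that both objects now exist in the kube-system namespace:

kubectl get role pod-reader -n kube-system
kubectl get rolebinding read-pods -n kube-system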

Verify the Role and Role Binding

To switch back to rbac-user, source the credentials script again:

source rbacuser_creds.sh

As rbac-user, issue the following to get pods in the kube-system namespace:

kubectl get pods -n kube-system
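
This time the pod list should be returned. You can also check the permissions directly with kubectl auth can-i; the first command should print yes and the second no, since the role only grants read verbs:

kubectl auth can-i list pods -n kube-system
kubectl auth can-i delete pods -n kube-system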

Try running the same command again, but outside of the kube-system namespace:

kubectl get pods -n default

You should get an error similar to:

Error from server (Forbidden): pods is forbidden: User "rbac-user" cannot list resource "pods" in API group "" in the namespace "default"

Cleanup

unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID
rm rbacuser_creds.sh
rm rbacuser-role.yaml
rm rbacuser-role-binding.yaml
aws iam delete-access-key --user-name=rbac-user --access-key-id=$(jq -r .AccessKey.AccessKeyId /tmp/create_output.json)
aws iam delete-user --user-name rbac-user
rm /tmp/create_output.json

Next, remove the rbac-user identity mapping from the aws-auth ConfigMap:

eksctl delete iamidentitymapping --cluster irsa-demonstration --arn arn:aws:iam::${ACCOUNT_ID}:user/rbac-user

IAM Roles for Service Accounts

You can associate an IAM role with a Kubernetes service account. This service account can then provide AWS permissions to the containers in any pod that uses that service account. With this feature, you no longer need to provide extended permissions to the Amazon EKS node IAM role so that pods on that node can call AWS APIs. The applications in the pod’s containers can then use an AWS SDK or the AWS CLI to make API requests to authorized AWS services.

IAM OIDC provider for your cluster

Your EKS cluster has an OpenID Connect issuer URL associated with it, and this will be used when configuring the IAM OIDC Provider. You can check it with:

aws eks describe-cluster --name irsa-demonstration --query cluster.identity.oidc.issuer --output text
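
The issuer URL looks similar to the following (the ID at the end is unique per cluster and shown here as a placeholder):

https://oidc.eks.us-east-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE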

Create an IAM OIDC provider for your cluster

eksctl utils associate-iam-oidc-provider --cluster irsa-demonstration --approve

If you go to Identity providers in the IAM console and click on the OIDC provider link, you will see that an OIDC provider has been created for your cluster.
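
If you prefer the CLI, you can also list the OIDC providers in your account and look for one whose ARN ends with the issuer ID from the previous step:

aws iam list-open-id-connect-providers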

Create an IAM role and attach an IAM policy

In this demonstration we will use the AWS managed policy named "AmazonS3ReadOnlyAccess", which allows Get and List actions on all your S3 buckets. You can look up its ARN with:

aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3ReadOnlyAccess`].Arn'

Now you will create an IAM role bound to a service account with read-only access to S3:

eksctl create iamserviceaccount \
    --name iam-test \
    --namespace workshop \
    --cluster irsa-demonstration \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
    --approve \
    --override-existing-serviceaccounts

Associate an IAM role with a service account

You can see that an IAM role (see the Annotations field below) is associated with the service account iam-test that we just created:

kubectl describe sa iam-test -n workshop
Name:                iam-test
Namespace:           workshop
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::936068047509:role/eksctl-irsa-demonstration-addon-iamserviceac-Role1-19EEFEL2XHZP1
Image pull secrets:  <none>
Mountable secrets:   iam-test-token-xbksb
Tokens:              iam-test-token-xbksb
Events:              <none>
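
Under the hood, the EKS Pod Identity Webhook uses this annotation to inject an AWS_ROLE_ARN environment variable and a projected web identity token (AWS_WEB_IDENTITY_TOKEN_FILE) into any pod that runs with this service account; the AWS SDK and CLI pick these up automatically. Once the S3 test job below has run, you can see the injected variables on its pod spec with, for example:

kubectl get pod -l app=eks-iam-test-s3 -n workshop -o jsonpath='{.items[0].spec.containers[0].env}'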

Test Success Case (List S3 buckets)

Start by testing whether the service account we created can list the S3 buckets.

Let's create job-s3.yaml, which will output the result of the command aws s3 ls (this job should succeed).

mkdir ./irsa

cat <<EoF> ./irsa/job-s3.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: eks-iam-test-s3
  namespace: workshop
spec:
  template:
    metadata:
      labels:
        app: eks-iam-test-s3
    spec:
      serviceAccountName: iam-test
      containers:
      - name: eks-iam-test
        image: amazon/aws-cli:latest
        args: ["s3", "ls"]
      restartPolicy: Never
EoF

kubectl apply -f ./irsa/job-s3.yaml

Make sure your job has completed:

kubectl get job -l app=eks-iam-test-s3 -n workshop
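
The job should show 1/1 completions, similar to the following (duration and age will vary):

NAME              COMPLETIONS   DURATION   AGE
eks-iam-test-s3   1/1           6s         30s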

Let's check the logs to verify that the command ran successfully.

kubectl logs -l app=eks-iam-test-s3 -n workshop
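
The output should be a listing of the S3 buckets in your account, for example (the bucket names here are placeholders):

2022-10-20 20:35:01 my-example-bucket
2022-10-20 20:35:02 my-other-example-bucket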

Test Failure Case

Now let's confirm that the service account cannot list EC2 instances. Create job-ec2.yaml, which will output the result of the command aws ec2 describe-instances --region us-east-2 (this job should fail).

cat <<EoF> ./irsa/job-ec2.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: eks-iam-test-ec2
  namespace: workshop
spec:
  template:
    metadata:
      labels:
        app: eks-iam-test-ec2
    spec:
      serviceAccountName: iam-test
      containers:
      - name: eks-iam-test
        image: amazon/aws-cli:latest
        args: ["ec2", "describe-instances", "--region", "us-east-2"]
      restartPolicy: Never
  backoffLimit: 0
EoF

kubectl apply -f ./irsa/job-ec2.yaml

Let's verify the job status:

kubectl get job -l app=eks-iam-test-ec2 -n workshop
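
The output should show 0/1 completions, similar to the following (duration and age will vary):

NAME               COMPLETIONS   DURATION   AGE
eks-iam-test-ec2   0/1           30s        30s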

As you can see, the job didn't complete. Let's get the pod status:

kubectl get po -n workshop

The output should be similar to:

NAME                     READY   STATUS      RESTARTS   AGE
eks-iam-test-ec2-kqrqp   0/1     Error       0          37s
eks-iam-test-s3-m9gp5    0/1     Completed   0          2m51s

Finally, we will review the logs:

kubectl logs -l app=eks-iam-test-ec2 -n workshop

Output

An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.

Cleanup

kubectl delete -f ./irsa/job-s3.yaml
kubectl delete -f ./irsa/job-ec2.yaml

eksctl delete iamserviceaccount \
    --name iam-test \
    --namespace workshop \
    --cluster irsa-demonstration \
    --wait

rm -rf ./irsa/