Creating a new user:
export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export AWS_DEFAULT_REGION=us-east-2
aws iam create-user --user-name rbac-user
aws iam create-access-key --user-name rbac-user | tee /tmp/create_output.json
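The JSON written to /tmp/create_output.json should look roughly like this (example values, secret redacted); the script below reads the AccessKeyId and SecretAccessKey fields from it:
{
    "AccessKey": {
        "UserName": "rbac-user",
        "AccessKeyId": "AKIAEXAMPLEKEYID",
        "Status": "Active",
        "SecretAccessKey": "<redacted>",
        "CreateDate": "2024-01-01T00:00:00+00:00"
    }
}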
To make it easy to switch back and forth between the admin user you created the cluster with and this new rbac-user, run the following command to create a script that, when sourced, sets the active user to rbac-user:
cat << EoF > rbacuser_creds.sh
#!/bin/bash
export AWS_SECRET_ACCESS_KEY=$(jq -r .AccessKey.SecretAccessKey /tmp/create_output.json)
export AWS_ACCESS_KEY_ID=$(jq -r .AccessKey.AccessKeyId /tmp/create_output.json)
EoF
chmod +x rbacuser_creds.sh
Next, we'll define a Kubernetes user called rbac-user and map it to its IAM user counterpart:
eksctl create iamidentitymapping \
--cluster irsa-demonstration \
--arn arn:aws:iam::${ACCOUNT_ID}:user/rbac-user \
--username rbac-user
To verify everything populated and was created correctly, run the following:
kubectl get cm aws-auth -n kube-system -o yaml
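Alongside the existing mapRoles data, the mapUsers section of the output should now contain an entry for rbac-user roughly like this sketch (your account ID will differ):
  mapUsers: |
    - userarn: arn:aws:iam::000000000000:user/rbac-user
      username: rbac-user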
Issue the following command to source the rbac-user's AWS IAM environment variables into your current shell:
. rbacuser_creds.sh
Validate that we are using the correct user:
aws sts get-caller-identity
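The Arn in the output should point at rbac-user, for example (IDs are placeholders):
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "000000000000",
    "Arn": "arn:aws:iam::000000000000:user/rbac-user"
}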
Making an API call:
kubectl get pods -n kube-system
You should get a response back similar to:
Error from server (Forbidden): pods is forbidden: User "rbac-user" cannot list resource "pods" in API group "" in the namespace "kube-system"
Just creating the user doesn't give that user access to any resources in the cluster. In order to achieve that, we'll need to define a role and then bind the user to that role. We'll do that next, as the admin user. First, unset the rbac-user credentials to switch back to the admin user:
unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID
Validate that we are using the correct user:
aws sts get-caller-identity
Now that we're the admin user again, we'll create a role called pod-reader that provides list, get, and watch access for pods and deployments, but only in the kube-system namespace. Run the following to create this role:
cat << EoF > rbacuser-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["list","get","watch"]
- apiGroups: ["extensions","apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
EoF
We have the user and we have the role; now we'll bind them together with a RoleBinding resource. Run the following to create this RoleBinding:
cat << EoF > rbacuser-role-binding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: kube-system
subjects:
- kind: User
  name: rbac-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EoF
Next, we apply the Role and RoleBinding we created:
kubectl apply -f rbacuser-role.yaml
kubectl apply -f rbacuser-role-binding.yaml
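Optionally, while still acting as the admin user, you can sanity-check the binding with kubectl's impersonation support (a quick check that assumes your admin identity is allowed to impersonate users, which the cluster creator normally is):
kubectl auth can-i list pods --namespace kube-system --as rbac-user   # expect "yes"
kubectl auth can-i list pods --namespace default --as rbac-user       # expect "no"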
To switch back to rbac-user, source the credentials script again:
. rbacuser_creds.sh
As rbac-user, issue the following to get pods in the kube-system namespace:
kubectl get pods -n kube-system
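This time the request should succeed and list the kube-system pods, similar to (names, counts, and ages will vary):
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-6kfgx             1/1     Running   0          40m
coredns-6f6c4f9b4d-2xv5m   1/1     Running   0          50m
coredns-6f6c4f9b4d-8z7jj   1/1     Running   0          50m
kube-proxy-vg8kq           1/1     Running   0          40m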
Try running the same command again, but outside of the kube-system namespace:
kubectl get pods -n default
You should get an error similar to:
Error from server (Forbidden): pods is forbidden: User "rbac-user" cannot list resource "pods" in API group "" in the namespace "default"
To clean up, switch back to the admin user and delete the IAM resources we created:
unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID
rm rbacuser_creds.sh
rm rbacuser-role.yaml
rm rbacuser-role-binding.yaml
aws iam delete-access-key --user-name=rbac-user --access-key-id=$(jq -r .AccessKey.AccessKeyId /tmp/create_output.json)
aws iam delete-user --user-name rbac-user
rm /tmp/create_output.json
Next, remove the rbac-user mapping from the aws-auth ConfigMap:
eksctl delete iamidentitymapping --cluster irsa-demonstration --arn arn:aws:iam::${ACCOUNT_ID}:user/rbac-user
You can associate an IAM role with a Kubernetes service account. This service account can then provide AWS permissions to the containers in any pod that uses that service account. With this feature, you no longer need to provide extended permissions to the Amazon EKS node IAM role so that pods on that node can call AWS APIs. The applications in the pod’s containers can then use an AWS SDK or the AWS CLI to make API requests to authorized AWS services.
Your EKS cluster has an OpenID Connect issuer URL associated with it, and this will be used when configuring the IAM OIDC Provider. You can check it with:
aws eks describe-cluster --name irsa-demonstration --query cluster.identity.oidc.issuer --output text
To use IAM roles for service accounts, an IAM OIDC provider must exist for the cluster. Create it with:
eksctl utils associate-iam-oidc-provider --cluster irsa-demonstration --approve
If you go to Identity Providers in the IAM Console and click on the OIDC provider link, you will see that an OIDC provider has been created for your cluster.
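You can also confirm from the command line that the provider exists; one way (the inner command extracts the issuer ID from the URL you checked above) is:
aws iam list-open-id-connect-providers | grep $(aws eks describe-cluster --name irsa-demonstration --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)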
In this demonstration we will use the AWS managed policy named "AmazonS3ReadOnlyAccess", which allows Get and List access to all of your S3 buckets. Look up its ARN:
aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3ReadOnlyAccess`].Arn'
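The command should return the policy's ARN, similar to:
[
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
]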
Now you will create an IAM role bound to a service account with read-only access to S3:
eksctl create iamserviceaccount \
--name iam-test \
--namespace workshop \
--cluster irsa-demonstration \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve \
--override-existing-serviceaccounts
You can see that an IAM role is associated with the Service Account iam-test that we just created (see the Annotations in the output below):
kubectl describe sa iam-test -n workshop
Name:                iam-test
Namespace:           workshop
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::936068047509:role/eksctl-irsa-demonstration-addon-iamserviceac-Role1-19EEFEL2XHZP1
Image pull secrets:  <none>
Mountable secrets:   iam-test-token-xbksb
Tokens:              iam-test-token-xbksb
Events:              <none>
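If you'd like to inspect the trust policy eksctl attached to that role, one way is to pull the role name out of the annotation above and query IAM (a sketch that assumes the eks.amazonaws.com/role-arn annotation shown above is present):
ROLE_ARN=$(kubectl get sa iam-test -n workshop -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}')
aws iam get-role --role-name ${ROLE_ARN##*/} --query Role.AssumeRolePolicyDocument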
Start by testing whether the service account we created can list S3 buckets.
Let's create job-s3.yaml, which will output the result of the command aws s3 ls (this job should succeed).
mkdir ./irsa
cat <<EoF> ./irsa/job-s3.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: eks-iam-test-s3
  namespace: workshop
spec:
  template:
    metadata:
      labels:
        app: eks-iam-test-s3
    spec:
      serviceAccountName: iam-test
      containers:
      - name: eks-iam-test
        image: amazon/aws-cli:latest
        args: ["s3", "ls"]
      restartPolicy: Never
EoF
kubectl apply -f ./irsa/job-s3.yaml
Make sure your job completed:
kubectl get job -l app=eks-iam-test-s3 -n workshop
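The job should report 1/1 completions, similar to (duration and age will vary):
NAME              COMPLETIONS   DURATION   AGE
eks-iam-test-s3   1/1           6s         30s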
Let's check the logs to verify that the command ran successfully:
kubectl logs -l app=eks-iam-test-s3 -n workshop
Now let's confirm that the service account cannot list EC2 instances. Create job-ec2.yaml, which will output the result of the command aws ec2 describe-instances --region us-east-2 (this job should fail):
cat <<EoF> ./irsa/job-ec2.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: eks-iam-test-ec2
  namespace: workshop
spec:
  template:
    metadata:
      labels:
        app: eks-iam-test-ec2
    spec:
      serviceAccountName: iam-test
      containers:
      - name: eks-iam-test
        image: amazon/aws-cli:latest
        args: ["ec2", "describe-instances", "--region", "us-east-2"]
      restartPolicy: Never
  backoffLimit: 0
EoF
kubectl apply -f ./irsa/job-ec2.yaml
Let's verify the job status:
kubectl get job -l app=eks-iam-test-ec2 -n workshop
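The job should report 0/1 completions, similar to:
NAME               COMPLETIONS   DURATION   AGE
eks-iam-test-ec2   0/1           45s        45s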
As you can see, the job didn't complete. Let's get the Pod status:
kubectl get po -n workshop
The output should be similar to:
NAME                     READY   STATUS      RESTARTS   AGE
eks-iam-test-ec2-kqrqp   0/1     Error       0          37s
eks-iam-test-s3-m9gp5    0/1     Completed   0          2m51s
Finally, we will review the logs:
kubectl logs -l app=eks-iam-test-ec2 -n workshop
Output:
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Once you're done, clean up the jobs, the IAM service account, and the local manifests:
kubectl delete -f ./irsa/job-s3.yaml
kubectl delete -f ./irsa/job-ec2.yaml
eksctl delete iamserviceaccount \
--name iam-test \
--namespace workshop \
--cluster irsa-demonstration \
--wait
rm -rf ./irsa/
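Optionally, confirm the service account is gone (this should return a NotFound error):
kubectl get sa iam-test -n workshop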