
Install the AWS EBS CSI driver

The Amazon Elastic Block Store Container Storage Interface (CSI) Driver provides a CSI interface used by Container Orchestrators to manage the lifecycle of Amazon EBS volumes. It's a convenient way to consume EBS storage, which works consistently with other CSI-based tooling (for example, you can dynamically expand and snapshot volumes).

Tell me about the features...
  • Static Provisioning - Associate an externally-created EBS volume with a PersistentVolume (PV) for consumption within Kubernetes.
  • Dynamic Provisioning - Automatically create EBS volumes and associated PersistentVolumes (PVs) from PersistentVolumeClaims (PVCs). Parameters can be passed via a StorageClass for fine-grained control over volume creation.
  • Mount Options - Mount options can be specified in the PersistentVolume (PV) resource to define how the volume should be mounted.
  • NVMe Volumes - Consume NVMe volumes from EC2 Nitro instances.
  • Block Volumes - Consume an EBS volume as a raw block device.
  • Volume Snapshots - Create and restore snapshots taken from a volume in Kubernetes.
  • Volume Resizing - Expand the volume by specifying a new size in the PersistentVolumeClaim (PVC).
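Dynamic provisioning and volume resizing are both driven by a StorageClass. As an illustrative sketch (parameter names follow the upstream aws-ebs-csi-driver documentation; the class name is my own), a StorageClass requesting encrypted gp3 volumes which may later be expanded looks like this:

```yaml
# Illustrative StorageClass exercising aws-ebs-csi-driver parameters
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3-encrypted # hypothetical name, for illustration
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer # provision only when a pod consumes the claim
allowVolumeExpansion: true              # permits PVC resizing
parameters:
  type: gp3
  encrypted: "true"
  csi.storage.k8s.io/fstype: ext4
```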

Ingredients

Preparation

EBS CSI Driver Namespace

We need a namespace to deploy our HelmRelease and associated YAMLs into. Per the flux design, I create this example yaml in my flux repo at /bootstrap/namespaces/namespace-aws-ebs-csi-driver.yaml:

/bootstrap/namespaces/namespace-aws-ebs-csi-driver.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: aws-ebs-csi-driver

EBS CSI Driver HelmRepository

We're going to install the EBS CSI Driver helm chart from the aws-ebs-csi-driver repository, so I create the following in my flux repo (assuming it doesn't already exist):

/bootstrap/helmrepositories/helmrepository-aws-ebs-csi-driver.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: aws-ebs-csi-driver
  namespace: flux-system
spec:
  interval: 15m
  url: https://kubernetes-sigs.github.io/aws-ebs-csi-driver

EBS CSI Driver Kustomization

Now that the "global" elements of this deployment (just the HelmRepository in this case) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at /aws-ebs-csi-driver/. I create this example Kustomization in my flux repo:

/bootstrap/kustomizations/kustomization-aws-ebs-csi-driver.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: aws-ebs-csi-driver
  namespace: flux-system
spec:
  interval: 30m
  path: ./aws-ebs-csi-driver
  prune: true # remove any elements later removed from the above path
  timeout: 10m # if not set, this defaults to the interval duration (30m above)
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: aws-ebs-csi-driver
      namespace: aws-ebs-csi-driver

Fast-track your fluxing! 🚀

Is crafting all these YAMLs by hand too much of a PITA?

"Premix" is a git repository, which includes an ansible playbook to auto-create all the necessary files in your flux repository, for each chosen recipe!

Let the machines do the TOIL! 🏋️‍♂️

EBS CSI Driver HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy aws-ebs-csi-driver into the cluster. We start with a basic HelmRelease YAML, like this example:

/aws-ebs-csi-driver/helmrelease-aws-ebs-csi-driver.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: aws-ebs-csi-driver
  namespace: aws-ebs-csi-driver
spec:
  chart:
    spec:
      chart: aws-ebs-csi-driver
      version: 2.24.x # auto-update to semver bugfixes only (1)
      sourceRef:
        kind: HelmRepository
        name: aws-ebs-csi-driver
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: aws-ebs-csi-driver
  values: # paste contents of upstream values.yaml below, indented 4 spaces (2)
  1. I like to set this to the semver minor version of the current EBS CSI Driver helm chart, so that I'll inherit bug fixes but not new features (since I'll need to manually update my values to accommodate new releases anyway)
  2. Paste the full contents of the upstream values.yaml here, indented 4 spaces under the values: key

If we deploy this HelmRelease as-is, we'll inherit every default from the upstream EBS CSI Driver helm chart. That's rarely what we want, so my preference is to take the entire contents of the EBS CSI Driver helm chart's values.yaml, and paste these (indented) under the values key. This means I can make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, which makes future chart upgrades simpler.
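For example, one tweak you'd typically make after pasting the upstream values is to pin the controller's service account name (the controller.serviceAccount keys exist in the upstream chart's values.yaml; treat this as a fragment of the full file, not a replacement for it):

```yaml
  values:
    controller:
      serviceAccount:
        create: true
        name: ebs-csi-controller-sa # must match the service account used for IRSA
```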

Why not put values in a separate ConfigMap?

Didn't you previously advise to put helm chart values into a separate ConfigMap?

Yes, I did. And in practice, I've changed my mind.

Why? Because having the helm values directly in the HelmRelease offers the following advantages:

  1. If you use the YAML extension in VSCode, you'll see a full path to the YAML elements, which can make grokking complex charts easier.
  2. When flux detects a change to a value in a HelmRelease, this forces an immediate reconciliation of the HelmRelease, as opposed to the ConfigMap solution, which requires waiting on the next scheduled reconciliation.
  3. Renovate can parse HelmRelease YAMLs and create PRs when they contain docker image references which can be updated.
  4. In practice, adapting a HelmRelease to match upstream chart changes is no different to adapting a ConfigMap, and so there's no real benefit to splitting the chart values into a separate ConfigMap, IMO.

Then work your way through the values you pasted, and change any which are specific to your configuration.

Install EBS CSI Driver!

Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using flux reconcile source git flux-system. You should see the kustomization appear...

~ โฏ flux get kustomizations aws-ebs-csi-driver
NAME        READY   MESSAGE                         REVISION        SUSPENDED
aws-ebs-csi-driver  True    Applied revision: main/70da637  main/70da637    False
~ โฏ

The helmrelease should be reconciled...

~ โฏ flux get helmreleases -n aws-ebs-csi-driver aws-ebs-csi-driver
NAME        READY   MESSAGE                             REVISION    SUSPENDED
aws-ebs-csi-driver  True    Release reconciliation succeeded    v2.24.x     False
~ โฏ

And you should have happy pods in the aws-ebs-csi-driver namespace:

~ โฏ k get pods -n aws-ebs-csi-driver -l app.kubernetes.io/name=aws-ebs-csi-driver
NAME                                  READY   STATUS    RESTARTS   AGE
ebs-csi-controller-77bddb4c95-2bzw5   5/5     Running   1 (10h ago)   37h
ebs-csi-controller-77bddb4c95-qr2hk   5/5     Running   0             37h
ebs-csi-node-4f8kz                    3/3     Running   0             37h
ebs-csi-node-fq8bn                    3/3     Running   0             37h
~ โฏ

Setup IRSA

Before you can attach EBS volumes with aws-ebs-csi-driver, it's necessary to perform some AWS IAM acronym-salad first 🥗...

The CSI driver pods need access to your AWS account in order to provision EBS volumes. You could feed them classic access key/secret key credentials, but a more "sophisticated" method is to use "IAM roles for service accounts", or IRSA.

IRSA lets you associate a Kubernetes service account with an IAM role, so instead of stashing access secrets somewhere in a namespace (and in your GitOps repo1), you simply tell AWS "grant the service account batcave-music in the namespace bat-ertainment the ability to use my streamToAlexa IAM role".
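Under the hood, this association is expressed in the IAM role's trust policy, which federates to the cluster's OIDC provider and scopes the role to one service account. A rough sketch of what eksctl generates for us below (the account ID, region and OIDC provider ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:aws-ebs-csi-driver:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
```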

Before we start, we have to use eksctl to generate an IAM OIDC provider for the cluster, if we don't already have one. I ran:

eksctl utils associate-iam-oidc-provider --cluster=funkypenguin-authentik-test --approve

(It's harmless to run this more than once - if you already have an IAM OIDC provider associated, the command will simply error out.)

Once complete, I ran the following to grant the ebs-csi-controller-sa service account in the aws-ebs-csi-driver namespace the power to use the AWS-managed AmazonEBSCSIDriverPolicy policy, which exists for exactly this purpose:

eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace aws-ebs-csi-driver \
    --cluster funkypenguin-authentik-test \
    --role-name AmazonEKS_EBS_CSI_DriverRole \
    --override-existing-serviceaccounts \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve

This will annotate the existing serviceaccount in the aws-ebs-csi-driver namespace, with the role to be attached.

Confirm it's worked by describing the serviceAccount (kubectl describe serviceaccount ebs-csi-controller-sa -n aws-ebs-csi-driver) - you should see an annotation indicating the attached role, like this:

Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::6831384437293:role/AmazonEKS_EBS_CSI_DriverRole

Troubleshooting

If it doesn't work for some reason (like you ran the command once with a typo!), you may find yourself unable to re-run the command. CloudFormation logs will show you that the action is failing because the role name already exists. To work around this, grab the ARN of the existing role, and change the command slightly:

eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace aws-ebs-csi-driver \
    --cluster funkypenguin-authentik-test \
    --attach-role-arn arn:aws:iam::683179697293:role/AmazonEKS_EBS_CSI_DriverRole \
    --override-existing-serviceaccounts \
    --approve

How do I know it's working?

So the AWS EBS CSI driver is installed, but how do we know it's working, especially that IRSA voodoo?

Check pod logs

First off, check the pod logs for any errors, by running:

kubectl logs -n aws-ebs-csi-driver -l app.kubernetes.io/name=aws-ebs-csi-driver

If you see nasty errors about EBS access denied, then revisit the IRSA magic above. If not, proceed with the acid test 🧪 below...

Create resources
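The PVC created below references a StorageClass named ebs-sc. If your cluster doesn't have one yet, a minimal sketch (assuming the driver's default volume type is acceptable) would be:

```yaml
# Minimal StorageClass backing the ebs-sc name assumed by the test PVC below
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer # defer provisioning until a pod needs the volume
```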

Create PVCs

Create a PVC (PersistentVolumeClaim), by running:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aws-ebs-csi-test
  labels:
    test: aws-ebs-csi
    funkypenguin-is: a-smartass  
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 128Mi
EOF

Examine the PVC, and note that it's in a Pending state (this is normal - with a WaitForFirstConsumer StorageClass, the volume isn't provisioned until a pod consumes the claim):

kubectl get pvc -l test=aws-ebs-csi

Create Pod

Now create a pod to consume the PVC, by running:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: aws-ebs-csi-test
  labels:
    test: aws-ebs-csi
    funkypenguin-is: a-smartass  
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: ebs-volume
      mountPath: /i-am-a-volume
    ports:
    - containerPort: 80
  volumes:
  - name: ebs-volume
    persistentVolumeClaim:
      claimName: aws-ebs-csi-test
EOF

Ensure the pod has started successfully (this indicates the PVC was correctly attached) by running:

kubectl get pod -l test=aws-ebs-csi

Clean up

Assuming that the pod is in a Running state, then your EBS provisioning, and all the background AWS plumbing, worked!

Clean up your mess, little cloud-monkey 🐵, by running:

kubectl delete pod -l funkypenguin-is=a-smartass
kubectl delete pvc -l funkypenguin-is=a-smartass

Summary

What have we achieved? We're now able to persist data in our EKS cluster, and have left the door open for future options like snapshots, volume expansion, etc.


Created:

  • AWS EBS CSI driver installed and tested in our EKS cluster
  • Future support for Velero with csi-snapshots, and volume expansion

Chef's notes 📓


  1. Negated somewhat with Sealed Secrets ↩

Tip your waiter (sponsor) 👏

Did you receive excellent service? Want to compliment the chef? (..and support development of current and future recipes!) Sponsor me on GitHub / Ko-Fi / Patreon, or see the contribute page for more (free or paid) ways to say thank you! 👏

Employ your chef (engage) 🤝

Is this too much of a geeky PITA? Do you just want results, stat? I do this for a living - I'm a full-time Kubernetes contractor, providing consulting and engineering expertise to businesses needing short-term, short-notice support in the cloud-native space, including AWS/Azure/GKE, Kubernetes, CI/CD and automation.

Learn more about working with me here.

Flirt with waiter (subscribe) 💌

Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the RSS feed, or leave your email address below, and we'll keep you updated.

EBS CSI Driver resources 📝

Your comments? 💬