Kubernetes PVC with ReadWriteMany on AWS

You can use Amazon EFS to create a PersistentVolume with the ReadWriteMany access mode.

Amazon EKS announced support for the Amazon EFS CSI driver on September 19, 2019, which makes it simple to configure elastic file storage for both EKS and self-managed Kubernetes clusters running on AWS using standard Kubernetes interfaces.

Applications running in Kubernetes can use EFS file systems to share data between pods in a scale-out group, or with other applications running within or outside of Kubernetes.

EFS can also help make Kubernetes applications highly available, because all data written to EFS is replicated across multiple AWS Availability Zones. If a Kubernetes pod is terminated and relaunched, the CSI driver reconnects the EFS file system, even if the pod comes back up in a different Availability Zone.

You can deploy the Amazon EFS CSI driver to an Amazon EKS cluster by following the EKS EFS CSI user guide; the essential steps are:

Step 1: Deploy the Amazon EFS CSI Driver

kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

Note: This command requires version 1.14 or greater of kubectl.
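Once the driver is deployed, you can check that its pods are running (exact pod names may vary slightly between driver versions):

```shell
# The EFS CSI driver runs its node component as a DaemonSet in kube-system;
# you should see one efs-csi-node pod per worker node.
kubectl get pods -n kube-system | grep efs-csi
```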

Step 2: Create an Amazon EFS file system for your Amazon EKS cluster

Step 2.1: Create a security group that allows inbound NFS traffic for your Amazon EFS mount points.

Step 2.2: Add a rule to your security group to allow inbound NFS traffic from your VPC CIDR range.

Step 2.3: Create the Amazon EFS file system configured with the security group you just created.
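The three sub-steps above can be sketched with the AWS CLI. The security group name, VPC ID, CIDR range, subnet ID, and the resulting IDs below are placeholders; substitute your own values:

```shell
# Step 2.1: create a security group for the EFS mount targets
# (vpc-0123456789abcdef0 is a placeholder for your cluster's VPC)
aws ec2 create-security-group \
  --group-name efs-mount-sg \
  --description "Allow NFS from the cluster VPC" \
  --vpc-id vpc-0123456789abcdef0

# Step 2.2: allow inbound NFS (TCP port 2049) from your VPC CIDR range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2049 \
  --cidr 192.168.0.0/16

# Step 2.3: create the file system, then one mount target per subnet,
# attaching the security group created above
aws efs create-file-system --creation-token eks-efs
aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0
```

Repeat the `create-mount-target` call for each subnet your worker nodes run in, so pods in every Availability Zone can reach the file system.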

With that in place, you can use EFS with the ReadWriteMany access mode in your EKS cluster via the following sample manifest files:

1. efs-storage-class.yaml: Create the storage class

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

kubectl apply -f efs-storage-class.yaml

2. efs-pv.yaml: Create PersistentVolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ftp-efs-pv
spec:
  storageClassName: efs-sc
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Gi # Doesn't really matter, as EFS does not enforce it anyway
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-642da695

Note: replace the volumeHandle value with your own Amazon EFS file system ID.

3. efs-pvc.yaml: Create PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ftp-pv-claim
  labels:
    app: ftp-storage-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: efs-sc
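To actually use the claim, reference it from a Pod spec. A minimal sketch, in which the pod name, image, and mount path are just illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
      volumeMounts:
        - name: efs-storage
          mountPath: /data    # the EFS file system appears here
  volumes:
    - name: efs-storage
      persistentVolumeClaim:
        claimName: ftp-pv-claim   # the PVC defined above
```

Because the access mode is ReadWriteMany, multiple pods (even on different nodes) can mount this claim and write to the shared file system concurrently.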

That should be it. Refer to the aforementioned official user guide for a detailed explanation; there you can also find an example app to verify your setup.


Using EFS without automatic provisioning

The EFS provisioner may be beta, but EFS itself is not. Since EFS volumes can be mounted via NFS, you can simply create a PersistentVolume with an NFS volume source manually -- assuming that automatic provisioning is not a hard requirement on your side:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-efs-volume
spec:
  capacity:
    storage: 100Gi # Doesn't really matter, as EFS does not enforce it anyway
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1
    - rsize=1048576
    - wsize=1048576
    - timeo=600
    - retrans=2
  nfs:
    path: /
    server: fs-XXXXXXXX.efs.eu-central-1.amazonaws.com

You can then claim this volume using a PersistentVolumeClaim and use it in a Pod (or multiple Pods) as usual.
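For example, a claim that binds to the PersistentVolume above could look like this (the claim name is just an example; the empty storageClassName disables dynamic provisioning for this claim):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # no storage class: bind to a manually created PV
  volumeName: my-efs-volume   # bind explicitly to the PV defined above
  resources:
    requests:
      storage: 100Gi          # must not exceed the PV's declared capacity
```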

Alternative solutions

If automatic provisioning is a hard requirement for you, there are alternative solutions you might look at: several distributed filesystems can be rolled out on your cluster to offer ReadWriteMany storage on top of Kubernetes and/or AWS. For example, you might take a look at Rook (which is essentially a Kubernetes operator for Ceph). It is also officially still in a pre-release phase, but I have already worked with it a bit and it runs reasonably well. There is also the GlusterFS operator, which already seems to have a few stable releases.