In this series of articles, I will discuss the available Kubernetes storage solutions, with a complete walkthrough for deploying each of them on Kubernetes. The series is intended for readers who are already familiar with Kubernetes storage architecture and concepts and want to deploy storage for stateful applications in Kubernetes.
These commands install the NFS server and export /data so that it is accessible to the Kubernetes cluster. In a multi-node Kubernetes cluster, you should allow access from all Kubernetes worker nodes in the export configuration.
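For reference, a minimal server-side setup might look like the following sketch. It assumes a Debian/Ubuntu host, and the 192.168.0.0/24 subnet is a placeholder for your cluster's network:

```shell
# Install the NFS server package (Debian/Ubuntu)
apt install -y nfs-kernel-server

# Create the directory to export
mkdir -p /data
chown nobody:nogroup /data

# Export /data to the cluster subnet (placeholder CIDR; adjust to your network)
echo "/data 192.168.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports

# Re-export all directories defined in /etc/exports
exportfs -ra
```

The no_subtree_check option avoids subtree-checking overhead on exports, and sync ensures writes are committed before the server replies, which is the safer default for shared data.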
2- Prepare Kubernetes worker nodes:
Now, to connect to the NFS server, the Kubernetes nodes need the NFS client package. Run the following command on all Kubernetes worker nodes, and on control-plane nodes as well if they also act as workers.
apt install -y nfs-common
Important tip! Each storage solution may require client packages to connect to the storage server. You should install them on all Kubernetes worker nodes.
For NFS, the nfs-common package is required.
3- Using NFS in Kubernetes:
Method 1 — Connecting to NFS directly with Pod manifest:
To connect to the NFS storage directly using the Pod manifest, use the NFSVolumeSource in the PodSpec. Here is an example:
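A minimal Pod manifest using an nfs volume could look like the following sketch. The server address 192.168.1.100 is a placeholder for your NFS server, and nginx is an arbitrary example image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: nfs-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-volume
      nfs:
        server: 192.168.1.100   # placeholder: your NFS server address
        path: /data             # the path exported by the NFS server
```

Note that with this method the NFS details are hard-coded into every Pod spec, which is why PersistentVolumes and dynamic provisioning are usually preferred.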
Method 3 — Dynamic provisioning using StorageClass:
To provision PersistentVolumes dynamically through a StorageClass, you must install an NFS provisioner. I use nfs-subdir-external-provisioner for this. The following commands install everything we need using the Helm package manager.
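A typical installation with Helm looks like the following sketch, based on the nfs-subdir-external-provisioner project's documented chart values. The NFS server address is a placeholder:

```shell
# Add the provisioner's Helm repository and refresh the local index
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

# Install the provisioner, pointing it at the NFS server and exported path
# (replace 192.168.1.100 with your NFS server's address)
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.100 \
  --set nfs.path=/data
```

The chart creates a StorageClass (named nfs-client by default) that PersistentVolumeClaims can reference via storageClassName; the provisioner then creates a subdirectory under /data for each claim.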