Publish Date: 2024-03-23
While the NFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
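For comparison, this is roughly what a statically provisioned NFS PV looks like when created by hand; with the dynamic provisioner shown in this demo, such objects are generated automatically for every claim (the name and capacity below are placeholders):
# Sketch of a manually created (static) NFS PV; name and capacity are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-static-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.5.117
    path: /mnt/data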
In today's demo we will walk through the steps needed to set up an open-source NFS dynamic storage provisioner on OpenShift.
The following instructions were applied on an OpenShift cluster v4.12.42 and can be executed from any client machine that has the oc Command Line Interface (CLI) and administrator rights, using the "default" namespace.
You can always check for the latest steps or changes to this third-party provisioner, which is documented at Kubernetes NFS Subdir External Provisioner.
The first thing we need is, of course, an NFS share configured with the necessary space and reachable from our OpenShift cluster.
Execute the showmount command against your NFS server to get the exported path:
showmount -e 192.168.5.117
Export list for 192.168.5.117:
/mnt/data *
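If you want a quick sanity check before deploying anything, you can mount the export manually from a machine on the same network; /mnt/test below is just a temporary placeholder mount point:
# Temporarily mount the export to verify connectivity and write access, then clean up
mkdir -p /mnt/test
mount -t nfs 192.168.5.117:/mnt/data /mnt/test
touch /mnt/test/write-test && rm /mnt/test/write-test
umount /mnt/test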
After that, download the setup files for the provisioner from GitHub at Kubernetes NFS Subdir External Provisioner, then change into the repository and find the needed YAML files under nfs-subdir-external-provisioner/deploy/:
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
cd nfs-subdir-external-provisioner/deploy/
Note: all of the commands are executed under the "default" project. If you prefer a different namespace, just replace each mention of default with your namespace in rbac.yaml, deployment.yaml, and the oc adm command.
Run these two commands to deploy the provisioner security settings:
oc create -f rbac.yaml
oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:nfs-client-provisioner
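To confirm the security objects are in place, you can list the service account and RBAC objects that rbac.yaml is expected to create (the names below assume the stock upstream files):
# Check that the provisioner service account and RBAC objects exist
oc get sa nfs-client-provisioner -n default
oc get clusterrole,clusterrolebinding,role,rolebinding -n default | grep nfs-client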
Edit the deployment.yaml file to include the proper NFS host and path everywhere they appear (in both the env section and the volumes section).
#note: replace the NFS server address and path below with your own values
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.5.117
            - name: NFS_PATH
              value: /mnt/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.5.117
            path: /mnt/data
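Instead of editing deployment.yaml by hand, you can also substitute the server and path with sed, in the same spirit as the class-name substitution used later. The example below assumes the file still contains the upstream placeholder values; adjust the left-hand patterns to match whatever your copy contains:
# Replace the placeholder NFS server and path (adjust the patterns if your copy differs)
sed -i 's@10.3.243.101@192.168.5.117@g; s@/ifs/kubernetes@/mnt/data@g' deployment.yaml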
Run this command to deploy the NFS-client provisioner.
oc create -f deployment.yaml
deployment.apps/nfs-client-provisioner created
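You can wait for the provisioner pod to come up before continuing:
oc rollout status deployment/nfs-client-provisioner -n default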
Edit the class.yaml file to use the desired names. In this example the storage class name is nfs-client:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  pathPattern: "${.PVC.namespace}-${.PVC.name}"
  archiveOnDelete: "false"
Now run the following commands: the sed command renames the class from managed-nfs-storage to nfs-client in the sample YAML files, and oc create then creates the storage class for your dynamic provisioner:
sed -i 's@managed-nfs-storage@nfs-client@g' test-claim.yaml test-pod.yaml class.yaml
oc create -f class.yaml
storageclass.storage.k8s.io/nfs-client created
Now you can check that the class was created successfully:
oc get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 40s
thin kubernetes.io/vsphere-volume Delete Immediate false 6d1h
thin-csi (default) csi.vsphere.vmware.com Delete WaitForFirstConsumer true 6d1h
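Optionally, if you want new PVCs to land on NFS without specifying a storage class, you can make nfs-client the cluster default. The oc get sc output above shows thin-csi is currently the default, so unset that first; both patches use the standard storageclass.kubernetes.io/is-default-class annotation:
# Optional: make nfs-client the default storage class (and unset the current default)
oc patch storageclass thin-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
oc patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'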
Now we'll test our NFS storage class using the external provisioner by deploying a test pod:
oc create -f test-claim.yaml -f test-pod.yaml
pod/test-pod created
Now check the pods in the default namespace:
oc get pods
nfs-client-provisioner-6c47f48954-bctx9 1/1 Running 0 16m
test-pod 0/1 Completed 0 100s
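You can also confirm that the claim was bound and that a matching PV was provisioned automatically (the claim name test-claim comes from the bundled test-claim.yaml):
oc get pvc test-claim -n default
oc get pv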
and check the directory created for the PVC at the NFS share mount point (run this on the NFS server):
ls -l /mnt/data
drwxrwxrwx. 2 root root 4096 Mar 23 11:28 default-test-claim
It's clear from the output that the pod ran and the directory for the PVC was created successfully at our NFS share mount point.
Now, to delete our test pod and claim, we use the following command:
oc delete -f test-pod.yaml -f test-claim.yaml
persistentvolumeclaim "test-claim" deleted
We can confirm the deletion by checking that the test pod is gone and that the corresponding folder was removed from the NFS storage mount point (since archiveOnDelete is set to "false"):
oc get pods
nfs-client-provisioner-6c47f48954-bctx9 1/1 Running 0 43m
ls -l /mnt/data
total 0
In today's demo we saw how to create a dynamic storage class using the NFS Subdir External Provisioner.
Just keep in mind that the provisioned storage is not guaranteed: you may allocate more than the NFS share's total size, and the share may not have enough free space left to actually accommodate a request.
Also, the provisioned storage limit is not enforced; an application can expand to use all of the available storage regardless of the requested size.
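One way to at least cap how much storage can be requested per namespace (it does not limit actual on-disk usage, only what PVCs may request) is a ResourceQuota scoped to the nfs-client storage class; the quota below is a sketch with placeholder limits:
# Sketch: cap PVC requests against the nfs-client class in a namespace (placeholder limits)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: nfs-client-quota
  namespace: default
spec:
  hard:
    nfs-client.storageclass.storage.k8s.io/requests.storage: 50Gi
    nfs-client.storageclass.storage.k8s.io/persistentvolumeclaims: "10"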