Kubernetes Part 11: How-To Configure Dynamic Persistent Volumes with a Synology NAS


 


In my last blog we configured static persistent volumes. This required that the storage (an NFS share) for the application had to be prepared in advance, before the persistent volume ("pv") could be used by the persistent volume claim ("pvc").

With a dynamic persistent volume claim, this is not necessary. When a pvc makes a claim on a storage class, space on the pv is claimed automatically. In the setup below I will explain how to configure this on our Kubernetes cluster, with our Synology NAS providing the NFS storage.

The following actions need to be executed on the k8s-master via SSH (PuTTY).

- Install Git

sudo apt install git

After git is installed, enter the command below to clone the nfs-subdir-external-provisioner repository:

git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git

This will clone all the content of the nfs-subdir-external-provisioner repository to your home folder on the k8s-master.
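You can quickly verify that the clone succeeded and list the files we will use in the next steps:

ls nfs-subdir-external-provisioner/deploy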

You should now have an nfs-subdir-external-provisioner subfolder in your home directory. Next, copy the three files we need to adjust:

# Make a new directory in your home folder
cd $HOME
mkdir nfs-client-provisioner

# Copy the required files from the github source
cp nfs-subdir-external-provisioner/deploy/deployment-arm.yaml nfs-client-provisioner/deployment-arm.yaml
cp nfs-subdir-external-provisioner/deploy/class.yaml nfs-client-provisioner/class.yaml
cp nfs-subdir-external-provisioner/deploy/rbac.yaml nfs-client-provisioner/rbac.yaml


Now you need to adjust these files similar to the settings below. In the examples the kubedata NFS share has already been configured on the NAS or NFS server; see my previous blog on how to configure this on your Synology NAS.
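Before you continue, you can verify that the NAS actually exports the share. A quick check from the k8s-master, assuming the nfs-common package is installed and xxx.xxx.xxx.xxx is the IP address of your NAS:

# Install the NFS client tools if they are missing
sudo apt install nfs-common

# List the exports of the NAS; the kubedata share should appear in the output
showmount -e xxx.xxx.xxx.xxx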

- Modify the deployment-arm.yaml file to match your NFS server and share via the following command.

nano nfs-client-provisioner/deployment-arm.yaml

Change the file with your own values, similar to the example below, then save it with Ctrl-O and exit with Ctrl-X.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner  # < change namespace to nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-dynamic-storage # < new provisioner name
            - name: NFS_SERVER
              value: xxx.xxx.xxx.xxx # < IP address of your NAS server
            - name: NFS_PATH
              value: /volume1/kubedata # < example value of your nfs share
      volumes:
        - name: nfs-client-root
          nfs:
            server: xxx.xxx.xxx.xxx # < IP address of your NAS server
            path: /volume1/kubedata # < example value of your nfs share

- Change the namespace settings in the rbac.yaml file via the following command

nano nfs-client-provisioner/rbac.yaml

Change the settings similar to the example below

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner  # < change namespace to nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner  # < change namespace to nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner  # < change namespace to nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-client-provisioner  # < change namespace to nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-client-provisioner  # < change namespace to nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
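As an alternative to editing each file by hand in nano, you can patch the namespace in both files in one go. A sketch using sed, assuming the files are in the nfs-client-provisioner folder we created earlier:

NS=nfs-client-provisioner
sed -i "s/namespace:.*/namespace: $NS/g" nfs-client-provisioner/rbac.yaml nfs-client-provisioner/deployment-arm.yaml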

- Change the provisioner in the class.yaml file via the following command

nano nfs-client-provisioner/class.yaml

Change the provisioner value similar to the example below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-dynamic-storage # < new provisioner name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

And save the file in nano (Ctrl-O and Ctrl-X).
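Optionally, you can mark managed-nfs-storage as the default storage class of the cluster, so that pvc's without an explicit storage class also use it. This is not required for this blog; if you want it, run the command below after the class has been created in the deployment step that follows:

kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'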

Now deploy the nfs-client-provisioner to your Kubernetes cluster via the following four commands:

kubectl create namespace nfs-client-provisioner 
kubectl apply -f nfs-client-provisioner/class.yaml
kubectl apply -f nfs-client-provisioner/rbac.yaml
kubectl apply -f nfs-client-provisioner/deployment-arm.yaml

Check if the deployment was successful via the following command:

kubectl get all -n nfs-client-provisioner

You should see the nfs-client-provisioner pod with STATUS Running.
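If the pod does not reach the Running state, the two commands below usually point to the cause (for example a wrong NFS server IP or path). They select the pod via its app label, so you don't need the exact pod name:

# Show events such as NFS mount errors
kubectl describe pod -n nfs-client-provisioner -l app=nfs-client-provisioner

# Show the provisioner logs
kubectl logs -n nfs-client-provisioner -l app=nfs-client-provisioner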


- Test NFS dynamic provisioning

Now everything is in place for dynamic provisioning on an NFS share, and we should be able to test it. To test it, I have made a yaml file that creates an nginx (webserver) deployment that will use the nfs-client-provisioner for its persistent storage.

The test is similar to the one for the static persistent volume. The only difference is that we don't have to create a persistent volume ourselves. The persistent volume claim yaml file will use a storage class called managed-nfs-storage (see class.yaml), which will trigger the nfs-client-provisioner to provision the storage (meaning, in our case, "create a directory on the NFS share"). So for the test we only need to create a persistent volume claim yaml file that our test nginx website will use.

- Create a persistent volume claim in Kubernetes

nano pvc-nfs-kubedata-nginx-1-dynamic.yaml

Cut and paste the text below and save the file

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-kubedata-nginx-1
spec:
  storageClassName: managed-nfs-storage # < the dynamic nfs storage class we created earlier
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Apply the file via the following command

kubectl apply -f pvc-nfs-kubedata-nginx-1-dynamic.yaml

Check if the persistent volume claim has been created successfully. You can do this by checking the kubedata folder on your NAS or NFS server.
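You can also verify the claim from the cluster itself. The pvc should report STATUS Bound, and a matching pv should have been created automatically by the provisioner:

kubectl get pvc pvc-nfs-kubedata-nginx-1
kubectl get pv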






You will see that the nfs-client-provisioner has automatically created a folder (default-pvc-nfs-kubedata-nginx-pvc-etc....). Next we will test how to use it.

- Create an index.html file on the NAS

The first thing we do is create a "website" via an index.html file on the NAS.

Cut and paste the text below into notepad on your desktop, and save it as index.html in the newly created folder (default-pvc-nfs-kubedata-nginx-pvc-etc....) on the NAS.

<!DOCTYPE html>
<html>
<head>
<style>
</style>
</head>
<body>

<h1>Kubernetes - Webtest 1</h1>
<p>This page is located on a dynamic persistent volume, and run on a k8s-cluster!</p>

</body>
</html>

It should look similar to this on the Synology NAS (or your NFS server).
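If you prefer the command line over the Synology web interface, you can also place the file there from the k8s-master by mounting the share temporarily. A sketch, assuming nfs-common is installed and index.html is in your current directory; replace the folder name with the actual name of the dynamically created folder on your share:

sudo mount -t nfs xxx.xxx.xxx.xxx:/volume1/kubedata /mnt
sudo cp index.html /mnt/default-pvc-nfs-kubedata-nginx-pvc-etc..../   # use the real folder name here
sudo umount /mnt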







- Create the nginx website deployment in Kubernetes. This is the same procedure as with static persistent volumes.

SSH (PuTTY) into your k8s-master and do the following

nano deploy-nginx-1-k8s.yaml

Cut and paste the text below and save the file 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    app: nginx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      volumes:
      - name: nginx-1-volume
        persistentVolumeClaim:
          claimName: pvc-nfs-kubedata-nginx-1
      containers:
        - image: nginx
          name: nginx-1
          imagePullPolicy: Always
          resources:
           limits:
             memory: 512Mi
             cpu: "1"
           requests:
             memory: 256Mi
             cpu: "0.2"
          volumeMounts:
          - name: nginx-1-volume
            mountPath: /usr/share/nginx/html
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-1-service
spec:
  selector:
    app: nginx-1
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80 
  type: LoadBalancer
---


Apply the deployment file with the following command

kubectl apply -f deploy-nginx-1-k8s.yaml

This yaml file will create a service and a deployment. 
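You can confirm that both objects were created and that the pod has mounted the claim:

# The deployment, pod and service should all show up in the default namespace
kubectl get deployment,pods,service -n default

# Under Volumes you should see the pvc-nfs-kubedata-nginx-1 claim
kubectl describe pod -l app=nginx-1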






Next, check the service to find out the IP on which the nginx-1 website is running.

kubectl get service -n default

You should see something similar to the output below. The EXTERNAL-IP value (masked here) is the IP to access the nginx-1 website.

erikdebont@k8s-master:~$ kubectl get service -n default
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes        ClusterIP      10.96.0.1       <none>          443/TCP        46d
nginx-1-service   LoadBalancer   10.103.42.209   xxx.xxx.xxx.xxx 80:31520/TCP   18m
erikdebont@k8s-master:~$

If you open a web browser to http://xxx.xxx.xxx.xxx (your external IP), you should see the test webpage, which is the index.html file we put in the dynamically created folder (default-pvc-nfs-kubedata-nginx-pvc-etc....). As you can see, NGINX running in Kubernetes is using the webpage data located on the Synology NAS.
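You can run the same test from the k8s-master without a browser, with xxx.xxx.xxx.xxx replaced by your external IP. It should return the HTML of the index.html file we placed on the NAS:

curl http://xxx.xxx.xxx.xxx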






Hope this blog was helpful. If you have any questions, do not hesitate to leave a comment in the comment section below.


More info:

The files created and used in this blog can be found here:

Comments

1. quay.io/external_storage/nfs-client-provisioner-arm:latest can be replaced with k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 now, as it is a cross-arch image and works with arm64.

   Reply: Thanks for the heads-up. I have tested it, and updated the blogpost.