Kubernetes Part 15: Deploy Nextcloud - YAML with application and database containers, and configuring a secret

In this blog I will show you how I have set up my Nextcloud application in my Kubernetes cluster. This is also an example of an application with a frontend application container (Nextcloud) and a database backend container (MariaDB). As a bonus, I have included some small setup changes for Nextcloud so it can handle larger files. It's a working configuration, but it currently only works with a single database pod and a single application pod. In the future I would like to improve this setup so it can run multiple pods as a cluster.

The first thing we do is create a namespace called "nextcloud". You can run this command on the master node or, if you have kubectl installed and configured, on your workstation.

kubectl create namespace nextcloud

The next thing we do is create a secret containing the MariaDB (MySQL) passwords we want to use for the Nextcloud database. Replace the placeholder values with your own.

More info about kubernetes secrets:  https://kubernetes.io/docs/concepts/configuration/secret/

kubectl create secret generic nextcloud-db-secret -n nextcloud \
    --from-literal=MYSQL_ROOT_PASSWORD=<<myrootpassword>> \
    --from-literal=MYSQL_USER=nextcloud \
    --from-literal=MYSQL_PASSWORD=<<nextclouddatabasepassword>>

You can check if the secret has been created properly via the command

kubectl describe secret nextcloud-db-secret -n nextcloud

You should see something similar to the screenshot below.


The account and password configured in nextcloud-db-secret will be used by the database deployment.
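Note that secret values are stored base64-encoded, not encrypted. A quick sketch of how to read a value back (the jsonpath key matches the --from-literal keys used above):

```shell
# Read the MYSQL_USER value back from the secret (requires cluster access):
#   kubectl get secret nextcloud-db-secret -n nextcloud \
#     -o jsonpath='{.data.MYSQL_USER}' | base64 -d
# The round trip below shows the base64 encoding kubectl applies to secret data:
printf '%s' 'nextcloud' | base64        # -> bmV4dGNsb3Vk
printf '%s' 'bmV4dGNsb3Vk' | base64 -d  # -> nextcloud
```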

I have created two YAML files for the Nextcloud deployment: one for the application part (Nextcloud) and a second one for the database part (MariaDB). It's debatable whether to use a container instead of an external MariaDB database, but for educational purposes we use a MariaDB container as the database backend.

We now start with the database yaml part.

The first thing we configure is the persistent volume ("pv") for the MariaDB container. It's similar to the previous applications I have shown in my blogs. As always, replace the placeholder values with your own.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-db-pv-nfs # < name of the persistent volume ("pv") in kubernetes
  namespace: nextcloud      # < note: PersistentVolumes are cluster-scoped, so this field is effectively ignored
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi            # < max. size we reserve for the pv
  accessModes:
    - ReadWriteOnce         # < One pod can write to storage
  persistentVolumeReclaimPolicy: Retain # < the volume is kept (not deleted) after the claim is released
  nfs:
    path: /volume1/data/nextcloud-db  # < Name of your NFS share with subfolder
    server: xxx.xxx.xxx.xxx               # < IP number of your NFS server             
    readOnly: false

Next we need to create a persistent volume claim ("pvc"). Please keep in mind that the accessModes of the persistent volume claim need to be the same as those of the persistent volume.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-db-pvc  # < name of the persistent volume claim ("pvc")
  namespace: nextcloud    # < namespace where to place the pvc
spec:
  storageClassName: ""
  volumeName: nextcloud-db-pv-nfs
  accessModes:
    - ReadWriteOnce       # < One pod can write to storage. Same value as pv
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi        # < how much data can the pvc claim from pv
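Once you have applied these two manifests (kubectl apply -f <file>), you can verify that the claim is bound to the volume; the STATUS column should show "Bound" for both:

```shell
kubectl get pv nextcloud-db-pv-nfs
kubectl get pvc nextcloud-db-pvc -n nextcloud
```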

After the volumes have been configured, we are going to configure the database pod. In our case it will be a MariaDB container configured as a statefulset, instead of a deployment. I have chosen this option since we are using a single-node MariaDB, which works better as a statefulset than as a deployment, just to avoid data corruption if you change the replicas to more than one.

More info about statefulsets: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

In the service part of the yaml file, we open port 3306, which is the default database port for MariaDB.

apiVersion: apps/v1
kind: StatefulSet      # < kind of installation (statefulset vs Deployment)
metadata:
  name: nextcloud-db   # < name of the deployment
  namespace: nextcloud # < namespace where to place the statefulset and pods
  labels:
    app: nextcloud     # < label for tagging and reference
spec:
  serviceName: nextcloud-db-service       # < name of the service   (see service yaml part)
  replicas: 1                             # < number of pods to deploy
  selector:
    matchLabels:
      pod-label: nextcloud-db-pod         # < pod-label for tagging and reference
  template:
    metadata:
      labels:
        pod-label: nextcloud-db-pod
    spec:
      terminationGracePeriodSeconds: 1800
      volumes:
      - name: nextcloud-db-storage
        persistentVolumeClaim:             # < link the volume to the pvc
          claimName: nextcloud-db-pvc      # < pvc name we created in the previous yaml
      containers:
      - name: mariadb
        image: linuxserver/mariadb
        imagePullPolicy: Always
        env:                               # < environment variables. See https://hub.docker.com/r/linuxserver/mariadb
        - name: PGID
          value: "100"                     # < group "user"
        - name: PUID
          value: "1041"                    # < user "docker"
        - name: TZ
          value: Europe/Amsterdam
        - name: MYSQL_DATABASE
          value: nextcloud
        envFrom:
        - secretRef:
            name: nextcloud-db-secret      # < reference to the created secret
        volumeMounts:
        - name: nextcloud-db-storage       # < the volume mount in the container. Note the relation volumelabel->pvc->pv
          mountPath: /config               # < mount location in the container
          subPath: mariadb-config          # < subpath mounted under /config in the container
---
kind: Service
apiVersion: v1
metadata:
  name: nextcloud-db-service           # < service name (see link with statefulset yaml)
  namespace: nextcloud
spec:
  selector:
    pod-label: nextcloud-db-pod        # < reference to the statefulset (connects the service with the statefulset)
  ports:
    - name: mysql
      protocol: TCP
      port: 3306

As you can see, a difference with a deployment is that you don't configure a network port in the statefulset yaml; there is only a link with a service, in which the ports are configured.
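To apply the pieces above, assuming you saved them in a single file called nextcloud-db.yaml (a hypothetical filename; the pv, pvc, statefulset and service can share one file, separated by ---):

```shell
kubectl apply -f nextcloud-db.yaml
```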

After applying the yaml files (for a how-to, see my previous blogs), we can check if the database pod is running via the command


kubectl get all -n nextcloud

You should see something similar to the screenshot below.


Our nextcloud-db statefulset is running.

For the second part, we will deploy the Nextcloud application as a single-node deployment.

We start with the persistent volume ("pv") for the Nextcloud application. It's similar to the previous applications I have shown in my blog. As always, replace the placeholder values with your own.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-server-pv-nfs
  namespace: nextcloud
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /volume1/data/nextcloud-server
    readOnly: false

Next we need to create a persistent volume claim ("pvc"). Please keep in mind that the accessModes of the persistent volume claim need to be the same as those of the persistent volume. Since Nextcloud will be used to store a lot of data, I have configured the reserved volume at 1Ti (1 terabyte). This can of course be changed to any other value you require.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-server-pvc
  namespace: nextcloud
spec:
  storageClassName: "" 
  volumeName: nextcloud-server-pv-nfs
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Ti

Next, apply the deployment and the service for the Nextcloud server part.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud                  # < name of the deployment and reference
  namespace: nextcloud             # < namespace where to place the deployment and pods
  labels:                         
    app: nextcloud                 # < label for tagging and reference
spec:
  replicas: 1                      # < number of pods to deploy
  selector:
    matchLabels:
      app: nextcloud
  strategy:
    rollingUpdate:
      maxSurge: 0                  # < The number of pods that can be created above the desired amount of pods during an update
      maxUnavailable: 1            # < The number of pods that can be unavailable during the update process
    type: RollingUpdate            # < New pods are added gradually, and old pods are terminated gradually
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      volumes:
      - name: nfs-nextcloud                # < linkname of the volume for the pvc
        persistentVolumeClaim:
          claimName: nextcloud-server-pvc  # < pvc name we created in the previous yaml
      - name: nextcloud-ssl
        secret:
          secretName: nextcloud-net-tls    # < link to certificate (see ingress yaml)
      containers:
      - image: ghcr.io/linuxserver/nextcloud  # < the name of the docker image we will use
        name: nextcloud                    # < name of container
        imagePullPolicy: Always            # < always use the latest image when creating container/pod
        env:                               # < environment variables. See https://hub.docker.com/r/linuxserver/nextcloud
        - name: PGID
          value: "100" # < group "user"
        - name: PUID
          value: "1041" # < user "docker"
        - name: TZ
          value: Europe/Amsterdam
        ports:
         - containerPort: 443              # < required network portnumber. See https://hub.docker.com/r/linuxserver/nextcloud
           name: https-443
           protocol: TCP
        volumeMounts:                      # < the volume mount in the container. Look at the relation volumelabel->pvc->pv
         - mountPath: /config              
           name: nfs-nextcloud
           subPath: config
         - mountPath: /data
           name: nfs-nextcloud
           subPath: data 
---
kind: Service
apiVersion: v1
metadata:
  name: nextcloud-service                  # < name of the service
  namespace: nextcloud                     # < namespace where to place service
spec: 
  selector:
    app: nextcloud                         # < reference to the deployment (connects service with the deployment)                                                      
  ports:
    - name: https-443
      protocol: TCP
      port: 443

And finally the Ingress yaml, to configure the ingress-nginx controller and create a Let's Encrypt certificate.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud         # < name of ingress entry 
  namespace: nextcloud    # < namespace where place the ingress entry
  annotations:
    kubernetes.io/ingress.class: "nginx"   # < use the nginx ingress controller
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # < communicate in https with the backend (service/pod)
    cert-manager.io/cluster-issuer: "letsencrypt-prod"     # < use letsencrypt-prod application in kubernetes to generate ssl certificate
    nginx.ingress.kubernetes.io/proxy-body-size: 10240m    # < allow the nginx controller to handle large files (10240 MB = 10 GB)
spec:
  rules:
  - host: nextcloud.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix  # pathType no longer has a default value in v1; "Exact", "Prefix", or "ImplementationSpecific" must be specified
        backend:
          service:
            name: nextcloud-service   
            port:
              number: 443
  tls: # < placing a host in the TLS config will indicate a cert should be created
  - hosts:
    - nextcloud.mydomain.com
    secretName: nextcloud.mydomain.com-tls # < cert-manager will store the created certificate in this secret.
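After the ingress has been applied, cert-manager should pick up the annotation and issue the certificate. A quick way to follow this (assuming cert-manager and the letsencrypt-prod cluster issuer are set up as in my earlier blogs):

```shell
kubectl get certificate -n nextcloud             # READY should become True
kubectl describe ingress nextcloud -n nextcloud
```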

After applying the yaml files, check if everything is running OK. Do this by typing the command we used earlier:

kubectl get all -n nextcloud

You should see something similar to the screenshot below.




What you need to do now is note the IP address of the nextcloud-db-service, configured with port 3306. In this example it's 10.104.116.75.

Next, go to the Nextcloud application website in a browser. You should be able to use the hostname as configured in the Ingress yaml.


You need to create an admin user with a password. Select MySQL/MariaDB as the database, and fill in the following details.

Database user: nextcloud
Database password: << the password you have created in the Secret config >>
Database: nextcloud
Server: IP address database container with port 3306 (in the example above this is 10.104.116.75:3306)
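As an alternative to the service IP, you can use the cluster-internal DNS name of the service, which stays stable even if the service is recreated with a different IP:

```
Server: nextcloud-db-service.nextcloud.svc.cluster.local:3306
```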

If everything went well, you now have Nextcloud rolled out to your kubernetes cluster.

The complete yaml files described in this blog can be found on the GitHub page here


BONUS

As a bonus, you can make the changes to the nextcloud-server files as shown below, so it can handle large files (10 GB) instead of the default 2 MB. The files are located on the NFS server in the nextcloud-server directory.


1. ../nextcloud-server/config/www/nextcloud/.user.ini (under the line "output_buffering=0")
php_value upload_max_filesize 10G
php_value post_max_size 10G
php_value max_input_time 3600
php_value max_execution_time 3600
 
2. ../nextcloud-server/config/nginx/site-confs/default (under the line fastcgi_intercept_errors on;)
 fastcgi_connect_timeout 60;
 fastcgi_send_timeout 1800;
 fastcgi_read_timeout 1800;
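After editing these files, the running container needs to pick up the changes; a pod restart is the simplest way (the deployment name below is the one from the yaml above):

```shell
kubectl rollout restart deployment nextcloud -n nextcloud
kubectl rollout status deployment nextcloud -n nextcloud
```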

If you have any questions, do not hesitate to leave a comment.
