Kubernetes Part 12: Deploy Heimdall - YAML basics with ingress


After building the Kubernetes cluster with the Raspberry Pis (see link), it's time to deploy some applications so we can actually use the cluster in our home environment. The goal of this blog post is to get acquainted with deploying applications via YAML files (manifests). I am planning to write three blog posts about application deployments in Kubernetes. In this post we will deploy Heimdall, an application dashboard that makes it very easy to organise your web pages and web applications on a single dashboard.

We will start with Heimdall since it is a very straightforward installation. In the next blogs I will describe how to deploy a Plex server (which has some more advanced networking settings) and Nextcloud, for which we will deploy both an application and a database container.

I personally prefer to use YAML files to deploy applications on my Kubernetes cluster. Deploying applications via Helm or Rancher is very easy, but with YAML files you really learn and understand how Kubernetes operates.

I will first split up the YAML into its specific configurations, and at the end I will merge them together, so you will have one deployment YAML file for the application Heimdall.

The first thing we do is create a namespace. You can compare a namespace in Kubernetes with a directory on your PC: it basically organizes a group of files or objects that belong to each other. You're not obliged to create a separate namespace, but you will find that putting everything in the default namespace becomes unclear after deploying several applications.

Creating a namespace in a YAML file is very simple. You basically instruct the Kubernetes API to create a namespace called heimdall.

apiVersion: v1
kind: Namespace
metadata:
  name: heimdall

The second thing we need to do is create a persistent volume. As mentioned in my previous blogs, no data will be stored in the pods themselves, so I store all my data on my Synology NAS via NFS. Therefore we need to create a persistent volume.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: heimdall-pv-nfs   # < name of the persistent volume ("pv") in kubernetes
  namespace: heimdall     # < namespace where to place the pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi          # < max. size we reserve for the pv
  accessModes:
    - ReadWriteMany       # < multiple pods can write to the storage
  persistentVolumeReclaimPolicy: Retain   # < keep the volume and its data when the claim is deleted
  mountOptions:           # < mount options specific for nfs 4.1, remove if an nfs server < 4.1 is used
    - hard
    - nfsvers=4.1
  nfs:
    server: xxx.xxx.xxx.xxx            # < IP address of your NFS server
    path: "/volume1/kubedata/heimdall" # < name of your NFS share with subfolder
    readOnly: false

The "persistentVolumeReclaimPolicy: Retain" means the persistent volume and its data are kept after the persistent volume claim has been removed. The volume then gets the status "Released"; it has to be made available again manually before a new persistent volume claim can bind to it.
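A short sketch of how you could inspect a retained volume and manually release it again. The patch removes the reference to the deleted claim; only do this if you are sure the old data may be re-exposed to a new claim:

```shell
# Check the status of the persistent volume (STATUS column shows Bound/Released)
kubectl get pv heimdall-pv-nfs

# Manually release a retained volume so a new pvc can bind to it again,
# by removing the reference to the deleted claim (use with care)
kubectl patch pv heimdall-pv-nfs --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```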

The third yaml we create is the persistent volume claim. With the persistent volume claim a pod can reserve (claim) the storage it will use.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heimdall-pvc          # < name of the persistent volume claim ("pvc")
  namespace: heimdall         # < namespace where to place the pvc
spec:
  storageClassName: ""
  volumeName: heimdall-pv-nfs # < the pv it will "claim" storage from. Created in the previous yaml
  accessModes:
    - ReadWriteMany           # < multiple pods can write to the storage. Same value as the pv
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi            # < how much storage the pvc claims from the pv

There are two things to keep in mind when configuring the persistent volume claim.

  1. The access mode must always be the same as that of the persistent volume (for example ReadWriteOnce).
  2. The storage size of the pvc (for example 1Gi) must always be lower than or equal to that of the persistent volume.
The storage size matters for the binding of the pvc: a claim of 500Mi or 1Gi will bind to a pv sized 1Gi, but a claim of 2Gi will stay in the Pending state, because it does not fit. Also note that a pv can be bound by only one pvc at a time, so you cannot bind two claims of 500Mi each to a single pv of 1Gi.
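After applying the pv and pvc yaml you can verify that the claim has actually bound to the volume; both should report the status "Bound":

```shell
# List the persistent volume and the claim; the STATUS column should read "Bound"
kubectl get pv heimdall-pv-nfs
kubectl get pvc heimdall-pvc -n heimdall
```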

Ok, so we have configured our external storage. Now it is time to configure the deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: heimdall        # < name of the deployment
  namespace: heimdall   # < namespace where to place the deployment and pods
  labels:
    app: heimdall       # < label for tagging and reference
spec:
  replicas: 1           # < number of pods to deploy
  selector:
    matchLabels:
      app: heimdall
  strategy:
    rollingUpdate:
      maxSurge: 0       # < the number of pods that can be created above the desired amount of pods during an update
      maxUnavailable: 1 # < the number of pods that can be unavailable during the update process
    type: RollingUpdate # < new pods are added gradually, and old pods are terminated gradually
  template:
    metadata:
      labels:
        app: heimdall
    spec:
      volumes:
      - name: nfs-heimdall      # < link name of the volume for the pvc
        persistentVolumeClaim:
          claimName: heimdall-pvc # < pvc name we created in the previous yaml
      - name: heimdall-ssl
        secret:
          secretName: heimdall-mydomain-tls # < the ssl certificate secret, will be created in the ingress yaml
      containers:
      - image: ghcr.io/linuxserver/heimdall # < the name of the docker image we will use
        name: heimdall                      # < name of the container
        imagePullPolicy: Always             # < always use the latest image when creating the container/pod
        env:                                # < the environment variables required (see container documentation)
        - name: PGID
          value: "100"  # < group "user"
        - name: PUID
          value: "1041" # < user "docker"
        - name: TZ
          value: Europe/Amsterdam
        ports:                              # < the ports required (see container documentation)
        - containerPort: 80
          name: http-80
          protocol: TCP
        - containerPort: 443
          name: https-443
          protocol: TCP
        volumeMounts:                       # < the volume mounts in the container. Note the relation volume name -> pvc -> pv
        - mountPath: /config                # < mount location in the container
          name: nfs-heimdall                # < volume name configured earlier in this yaml file
          subPath: config                   # < subfolder on the nfs share to be mounted

As you can see in the yaml file above, the Heimdall container requires port 80 and 443, and a volume mount on the /config directory to store its persistent configuration data.
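Before we expose the pod with a service and an ingress, you can already do a quick smoke test with a port-forward straight to the deployment (the local port 8080 is an arbitrary choice):

```shell
# Forward local port 8080 to port 80 of the heimdall pod
kubectl port-forward -n heimdall deployment/heimdall 8080:80

# In a second terminal: fetch the heimdall start page
curl -s http://localhost:8080
```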

In the next yaml we will configure the service. A service is required to expose a pod to the cluster. As you can see, the selector app value connects the service to the Heimdall app we created in the deployment yaml. This yaml file basically instructs Kubernetes to expose port 80 and 443 to the rest of the Kubernetes cluster, so the pod can communicate with other pods and services within the cluster.

apiVersion: v1
kind: Service
metadata:
  name: heimdall-service    # < name of the service
  namespace: heimdall       # < namespace where to place the service
spec:
  selector:
    app: heimdall           # < reference to the deployment (connects the service with the pods of this deployment)
  ports:
    - name: http-80
      protocol: TCP
      port: 80
    - name: https-443
      protocol: TCP
      port: 443

In the last yaml file we create an entry in the ingress controller. A service is required to expose the application within the cluster, but to expose an application (pods) to the outside world we need to create an ingress, loadbalancer or nodeport entry.

For Heimdall we create an ingress entry, so we can also have cert-manager create a valid Let's Encrypt ssl certificate. Please keep in mind to configure port 80 via NAT on your router to the IP of the ingress controller, and to create a public DNS entry for your application hostname (for example heimdall.mydomain.com) pointing to the public address of your router. Let's Encrypt always checks if it can connect to the hostname on port 80 before issuing the ssl certificate.
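If the certificate does not appear right away, cert-manager's own resources usually tell you why. A few commands I use to follow the process (the resource names are the cert-manager defaults):

```shell
# The certificate resource; the READY column should become "True"
kubectl get certificate -n heimdall

# If it stays "False", the describe output and the underlying
# order/challenge resources show what Let's Encrypt is complaining about
kubectl describe certificate -n heimdall
kubectl get order,challenge -n heimdall
```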

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: heimdall-ingress   # < name of the ingress entry
  namespace: heimdall      # < namespace where to place the ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # < use the nginx ingress controller
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # < use https to communicate with the backend (service/pod)
    cert-manager.io/cluster-issuer: "letsencrypt-prod" # < use the letsencrypt-prod clusterissuer in kubernetes to generate the ssl certificate
spec:
  rules:
  - host: heimdall.mydomain.com  # < hostname to access heimdall
    http:
      paths:
      - path: /
        backend:
          serviceName: heimdall-service # < connect the ingress entry to the service created earlier
          servicePort: 443
  tls: # < placing a host in the TLS config will indicate a cert should be created
  - hosts:
    - heimdall.mydomain.com  # < hostname to access heimdall
    secretName: heimdall-mydomain-tls # < cert-manager will store the created certificate in this secret

When all these yaml files have been created, you can merge them into one. This is very easy: just paste them underneath each other in a text editor, separated by three hyphens ("---"). The end result should look something like this
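If you prefer the command line over cut-and-paste, the files can also be merged with a small awk one-liner that prints the "---" separator between them. The stand-in filenames and contents below are hypothetical; use your own yaml files:

```shell
cd "$(mktemp -d)"
# Two stand-in manifest files; in reality these are the yaml files from this post
echo 'kind: Namespace' > 01-namespace.yml
echo 'kind: Service'   > 02-service.yml

# FNR==1 is true at the first line of each file; NR>1 skips the very first file,
# so "---" is printed between the files but not before the first one
awk 'FNR==1 && NR>1 {print "---"} 1' 01-namespace.yml 02-service.yml \
  > deploy-heimdall-ssl-example.yml

cat deploy-heimdall-ssl-example.yml
# kind: Namespace
# ---
# kind: Service
```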


Once you have adjusted and saved the file, you can apply it to your Kubernetes cluster via the command

kubectl apply -f deploy-heimdall-ssl-example.yml

You can check if everything is installed ok via the command:

kubectl get all -n heimdall
You should see something like this:

You can also check the ingress entry via this command:

kubectl get ingress -n heimdall
If everything went ok, the big moment is here: open a browser, type your Heimdall hostname and see if the application works. At the top of this blog you see a screenshot from Heimdall.
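You can do the same check from the command line; the hostname below is the example from this post, replace it with your own:

```shell
# Request the start page over https; a working setup should return an HTTP 200
# status and a certificate issued by Let's Encrypt
curl -I https://heimdall.mydomain.com
```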

I hope this blog is useful for you. If you have any questions or remarks, do not hesitate to leave a comment.

The yaml files mentioned in my blogs can be found at my GitHub account here.