Kubernetes Part 16: Deploy Jellyfin (alternative to Plex) - New ingress yaml format

In this blog I will explain how to configure Jellyfin for your Raspberry Pi Kubernetes cluster. Jellyfin is a free, open-source media system (similar to Plex). I switched from Plex to Jellyfin because I ran into issues when using Plex with Android Auto in my car.

After using Jellyfin for a couple of weeks, I can say it runs very well on Kubernetes. The configuration of the yaml files is very similar to Plex, so if you already have a yaml file configured for your Plex media server, you will recognise the similarities.

Just like for every other application in Kubernetes, the first part of the yaml file creates a namespace. For Jellyfin it's called "jellyfin", just to keep it simple.

apiVersion: v1
kind: Namespace
metadata:
  name: jellyfin
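
You can apply each part as you go. A quick check, assuming you save this part as jellyfin-namespace.yaml (the filename is just an example):

kubectl apply -f jellyfin-namespace.yaml
kubectl get namespace jellyfin   # should show the namespace with status "Active"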

The next part is the creation of the persistent volumes, which in our case are two directories (maybe more, depending on how you have arranged your media data). For a detailed explanation on how to configure NFS on your Synology NAS click here. If you don't use a Synology NAS, please make sure you use NFS 4 or higher to avoid lock issues, since Jellyfin uses a SQLite database.
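
If you run a generic Linux NFS server instead, the exports could look something like the lines below (a sketch only; the share paths and subnet are example values, adjust them to your own setup):

/volume1/jellyfin 192.168.1.0/24(rw,sync,no_subtree_check)
/volume1/data     192.168.1.0/24(rw,sync,no_subtree_check)

You can check which NFS version a client actually negotiated by mounting the share and looking for vers=4 in the output of: mount | grep nfs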

In our example we have a /jellyfin share and a /data share, so we create two persistent volumes. As in my previous blogs, the placeholder values (such as xxx.xxx.xxx.xxx) are example values and should be replaced by your own.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-pv-nfs-config   # < name of the persistent volume ("pv") in kubernetes
  namespace: jellyfin            # < namespace where to place the pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi                   # < max. size we reserve for the pv
  accessModes:
    - ReadWriteMany                # < Multiple pods can write to storage 
  persistentVolumeReclaimPolicy: Retain # < the volume (and its data) is kept after the claim is released
  nfs:
    path: /volume1/jellyfin        # < Name of your NFS share with subfolder
    server: xxx.xxx.xxx.xxx        # < IP number of your NFS server
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-pv-nfs-data
  namespace: jellyfin
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti                   # < max. size we reserve for the pv. A bigger value than the config pv
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /volume1/data            
    server: xxx.xxx.xxx.xxx
    readOnly: false
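
After applying this part you can check that both volumes were created (names as defined above):

kubectl get pv
# jellyfin-pv-nfs-config and jellyfin-pv-nfs-data should be listed with STATUS "Available"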

Two persistent volumes also require two persistent volume claims. Please keep in mind that the access mode has to have the same value as the persistent volume, otherwise the claim won't work.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-pvc-config   # < name of the persistent volume claim ("pvc")
  namespace: jellyfin         # < namespace where to place the pvc
spec:
  storageClassName: ""
  volumeName: jellyfin-pv-nfs-config  # < the pv it claims storage from. Created in the previous yaml.
  accessModes:
    - ReadWriteMany             # < Multiple pods can write to storage. Same value as the pv
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi              # < how much storage the pvc claims from the pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-pvc-data
  namespace: jellyfin
spec:
  storageClassName: ""
  volumeName: jellyfin-pv-nfs-data
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Ti              # < same size and unit as the data pv
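
Once the claims are applied, check that they bind to the volumes:

kubectl get pvc -n jellyfin
# jellyfin-pvc-config and jellyfin-pvc-data should both report STATUS "Bound"
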
The next step is the deployment of the Jellyfin container itself. It's a pretty straightforward configuration, with just a few specifics:

- PGID, PUID - the group ID and user ID used for accessing the NFS share. They are entered below as ASCII escape codes; an example of how to look up the numeric values follows after this list.
- For hardware acceleration on a Raspberry Pi, the relevant settings need to be uncommented. Keep in mind that this will only work if you are using the 32-bit Raspberry Pi OS on your worker nodes. If you have followed my blog, we are using Ubuntu 20.04 64-bit as the OS on the worker nodes, so these settings won't work in our setup.
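
To look up the PUID and PGID values, run the id command on your NAS or any Linux host that knows the NFS user (the username "mediauser" is just an example):

id mediauser
# uid=1044(mediauser) gid=65541(mediagroup) groups=65541(mediagroup)  <- example output

The uid value goes into PUID and the gid value into PGID.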

A detailed explanation of the linuxserver/jellyfin image can be found here

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jellyfin
  name: jellyfin
  namespace: jellyfin
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      volumes:
      - name: nfs-jellyfin-config
        persistentVolumeClaim:
          claimName: jellyfin-pvc-config
      - name: nfs-jellyfin-data
        persistentVolumeClaim:
          claimName: jellyfin-pvc-data
      # The settings below are commented out and can be used after removing the "#"
      # - name: device-vcsm # Only needed if you want to use your Raspberry Pi MMAL video decoding (Enabled as OpenMax H264 decode in gui settings).
      #   hostPath:
      #     path: /dev/vcsm 
      # - name: device-vchiq  #Only needed if you want to use your Raspberry Pi OpenMax video encoding.
      #   hostPath:
      #    path: /dev/vchiq
      # - name: device-video10  #Only needed if you want to use your Raspberry Pi V4L2 video encoding.
      #   hostPath:
      #     path: /dev/video10 
      # - name: device-video11  #Only needed if you want to use your Raspberry Pi V4L2 video encoding.
      #   hostPath:
      #     path: /dev/video11 
      # - name: device-video12  #Only needed if you want to use your Raspberry Pi V4L2 video encoding.
      #   hostPath:
      #      path: /dev/video12 
      containers:
      - env:
        - name: JELLYFIN_PublishedServerUrl
          value: xxx.xxx.xxx.xxx # < The IP address of your jellyfin server (see the service config)
        - name: PGID
          value: "\x36\x35\x35\x34\x31" # < ASCII code for '65541'
        - name: PUID
          value: "\x31\x30\x34\x34" # < ASCII code for '1044'
        - name: TZ
          value: Europe/Amsterdam
        securityContext:
          privileged: true # Container must run as privileged inside of the pod, required for hardware acceleration
        image: ghcr.io/linuxserver/jellyfin
        imagePullPolicy: Always
        name: jellyfin
        ports:
        - containerPort: 8096
          name: http-tcp
          protocol: TCP
        - containerPort: 8920
          name: https-tcp
          protocol: TCP
        - containerPort: 1900
          name: dlna-udp
          protocol: UDP
        - containerPort: 7359
          name: discovery-udp
          protocol: UDP      
        resources: {}
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /config
          name: nfs-jellyfin-config
        - mountPath: /data
          name: nfs-jellyfin-data
        # Below are the mount paths for the devices used for hardware acceleration
        # They are commented out and can be used after removing the "#"
        # - mountPath: /dev/vcsm
        #   name: device-vcsm
        # - mountPath: /dev/vchiq
        #   name: device-vchiq
        # - mountPath: /dev/video10
        #   name: device-video10
        # - mountPath: /dev/video11
        #   name: device-video11
        # - mountPath: /dev/video12
        #   name: device-video12
      dnsPolicy: ClusterFirst
      restartPolicy: Always
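
Apply the deployment and check that the pod comes up (assuming you saved this part as jellyfin-deployment.yaml; the filename is just an example):

kubectl apply -f jellyfin-deployment.yaml
kubectl get pods -n jellyfin                 # the jellyfin pod should reach STATUS "Running"
kubectl logs -n jellyfin deployment/jellyfin # check the container logs if it does not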

The next part of the yaml file is the service part. This is a bit more complex than usual, since we need to configure both UDP ports and TCP ports. In Kubernetes (at least until version 1.19) it was not possible to configure UDP and TCP on the same IP. I think it should be possible in Kubernetes 1.20 or higher, but I have not tested this yet. Not to worry, MetalLB is able to resolve this issue: it is possible in MetalLB to share the external IP between two service configurations.

This is configured in the service yaml below for the UDP part of the service. Pay attention to the annotation "metallb.universe.tf/allow-shared-ip: jellyfin". By giving it the same value in the UDP yaml and the TCP yaml, MetalLB will share the IP.

Service yaml file for the UDP connections. For the load balancer IP you should use a free one from the reserved range in your MetalLB configuration (see here)

kind: Service
apiVersion: v1
metadata:
  name: jellyfin-udp       # < name of the service
  namespace: jellyfin      # < namespace where to place service
  annotations:
    metallb.universe.tf/allow-shared-ip: jellyfin # < annotation to combine the Service IP, make sure it has the same value as in the service TCP yaml
spec:
  selector:
    app: jellyfin          # < reference to the deployment (connects the service with the deployment)
  ports:
  - port: 1900             # < port to open on the outside on the server
    targetPort: 1900       # < targetport. port on the pod to passthrough
    name: dlna-udp         # < reference name for the port in the deployment yaml
    protocol: UDP
  - port: 7359
    targetPort: 7359
    name: discovery-udp
    protocol: UDP
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx # < IP to access your jellyfin server. Should be a free one from the MetalLB range and the same as in the TCP yaml
  sessionAffinity: ClientIP # This is necessary for multi-replica deployments
---
kind: Service
apiVersion: v1
metadata:
  name: jellyfin-tcp       # < name of the service
  namespace: jellyfin      # < namespace where to place service
  annotations:
    metallb.universe.tf/allow-shared-ip: jellyfin # < annotation name to combine the Service IP, make sure it's same name as in the service UDP yaml
spec:
  selector:
    app: jellyfin          # < reference to the deployment (connects the service with the deployment)
  ports:
  - port: 8096             # < port to open on the outside on the server
    targetPort: 8096       # < targetport. port on the pod to passthrough
    name: http-tcp         # < reference name for the port in the deployment yaml
    protocol: TCP
  - port: 8920
    targetPort: 8920
    name: https-tcp
    protocol: TCP
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx # < IP to access your jellyfin server. Should be a free one from the MetalLB range and the same as in the UDP yaml
  sessionAffinity: ClientIP # This is necessary for multi-replica deployments
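
After applying both services you can verify that MetalLB assigned the same external IP to both:

kubectl get svc -n jellyfin
# jellyfin-udp and jellyfin-tcp should both show the same EXTERNAL-IP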

And finally the ingress yaml file, so you can access your Jellyfin webserver via a secured https connection. With the ingress settings below you can access it via https://jellyfin.mydomain.com. Do not forget to configure your public/private DNS and port forwarding (see this blog on how to configure this).

This ingress configuration uses the new networking.k8s.io/v1 api, which has some changes in the config compared to my previous examples of ingress yaml files

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin           # < name of the ingress entry
  namespace: jellyfin      # < namespace where to place the ingress entry
  annotations:
    kubernetes.io/ingress.class: "nginx" # < use the nginx ingress controller
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # < communicate in https with the backend (service/pod). With a "#" in front, http will be used.
    cert-manager.io/cluster-issuer: "letsencrypt-prod" # < use the letsencrypt-prod application in kubernetes to generate the ssl certificate
    # nginx.ingress.kubernetes.io/app-root: # < set the app root here if it's different from the root directory (like /web)
spec:
  rules:
  - host: jellyfin.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix # < pathType no longer has a default value in v1; "Exact", "Prefix", or "ImplementationSpecific" must be specified
        backend:
          service:
            name: jellyfin-tcp
            port:
              name: http-tcp # < same name as the port in the service tcp yaml
  tls: # < placing a host in the TLS config will indicate a cert should be created
  - hosts:
    - jellyfin.mydomain.com
    secretName: jellyfin.mydomain.com-tls # < cert-manager will store the created certificate in this secret

If the deployment went OK, you should be able to access Jellyfin via your browser (example: https://jellyfin.mydomain.com) and see the Jellyfin setup screen. More info about setting up the Jellyfin media server can be found here
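
You can also follow the certificate request from the command line (assuming cert-manager is installed as described in my earlier blogs):

kubectl get certificate -n jellyfin           # READY should become "True"
kubectl describe ingress jellyfin -n jellyfin # shows the assigned address and events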

Hope this blog was helpful, especially in combining UDP and TCP in two service yaml files with the MetalLB allow-shared-ip feature, and the new ingress yaml format.

If you have any questions, do not hesitate to leave a comment. 

The complete yaml file described in this blog can be found on the GitHub page here.

Comments

  1. There is possibly an error in the port mapping in the deployment:
    - containerPort: 7359
      name: discovery-udp
      protocol: TCP <-- should be UDP

    Otherwise this is amazing! Especially with passing the devices to the container for hw acceleration.

    Reply:
      Thanks for the heads-up! I have adjusted the error.
  2. Hi, you mention hardware passthrough will only work with a 32-bit Raspberry Pi. Does this limitation also apply to x86 Intel hardware? I have a Dell OptiPlex with a 9th gen Intel CPU, and I did passthrough of /dev/dri, but I can't get hardware transcoding to work. I already checked the permissions and verified the directory is mounted in the container.

    Reply:
      No, the limitation does not apply to x86 Intel hardware. It is related to the combination of operating system and CPU you are using. At the time of writing there was no hardware acceleration support for a 64-bit OS on the Raspberry Pi. The hardware acceleration manifest was specifically written for the Raspberry Pi, but you should be able to change it for Intel. I think you need to mount the /dev/dri/renderD128 device; adding that to the manifest should do it. You can find more specific details here: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel
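
      A rough sketch of the extra volume and mount in the deployment yaml (a hypothetical, untested adaptation of the Raspberry Pi device entries above):

        volumes:              # pod-level volumes
        - name: device-dri
          hostPath:
            path: /dev/dri    # Intel GPU render devices, e.g. /dev/dri/renderD128
        volumeMounts:         # in the jellyfin container spec
        - mountPath: /dev/dri
          name: device-dri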