Kubernetes Part 14: Deploy Plexserver - Yaml with advanced networking

 



In this blog I will describe how I have set up my Plex server on my Kubernetes cluster. It is of course possible to use Helm for this, but I want to use it as an example of how to configure a Kubernetes deployment with both UDP and TCP connections on multiple ports. I have also put my Plex server behind an Ingress controller and configured it with multiple instances. The deployment below deploys one pod, but you can increase that to multiple pods.

Can a Plex server run on a Raspberry Pi, you might ask? Yes, I have always run Plex on "lightweight" servers. It is perfectly doable if you convert your video and audio files in such a way that all clients can play them without transcoding, for example video in HEVC format with AC-3 audio, and audio in FLAC or MP3 format. Now, back to Kubernetes.

Just like with Heimdall, the first part of the yaml file creates a namespace called "plexserver".

apiVersion: v1
kind: Namespace
metadata:
  name: plexserver
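
If you save this part as a separate file you can already apply and verify it. A minimal example, assuming the snippet above is saved as namespace.yaml (the file name is just an example):

kubectl apply -f namespace.yaml
kubectl get namespace plexserver     # should list the freshly created namespace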

The next part is the creation of the persistent volumes, which in our case are two directories (maybe more, depending on how you have arranged your Plex data). For a detailed explanation on how to configure NFS on your Synology NAS click here. If you don't use a Synology NAS, please make sure you use NFS 4 or higher to avoid lock issues, since Plex uses a SQLite database.
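
Before creating the volumes it can be useful to verify the export and the NFS version from one of your nodes. A quick check, assuming the NFS client tools are installed and using the example IP and share from the yaml below:

showmount -e xxx.xxx.xxx.xxx                                       # list the exports of the NFS server
sudo mount -t nfs -o vers=4.1 xxx.xxx.xxx.xxx:/volume1/plex /mnt   # test-mount the share with NFS 4.1
mount | grep /mnt                                                  # shows the negotiated NFS version
sudo umount /mnt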

In our example case we have a /plex share and a /data share, so we create two persistent volumes. As in my previous blogs, red values are example values and should be replaced by your own.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: plexserver-pv-nfs-config   # < name of the persistent volume ("pv") in kubernetes
  namespace: plexserver            # < namespace where to place the pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Gi                   # < max. size we reserve for the pv
  accessModes:
    - ReadWriteMany                # < Multiple pods can write to storage 
  persistentVolumeReclaimPolicy: Retain # < The persistent volume can be reclaimed
  nfs:
    path: /volume1/plex            # < Name of your NFS share with subfolder
    server: xxx.xxx.xxx.xxx        # < IP number of your NFS server
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: plexserver-pv-nfs-data
  namespace: plexserver
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti                   # < max. size we reserve for the pv. A bigger value than for the config pv
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /volume1/data            
    server: xxx.xxx.xxx.xxx
    readOnly: false
Two persistent volumes also require two persistent volume claims. Please keep in mind that the access mode has to be the same value as in the persistent volume, otherwise the claim won't work.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plexserver-pvc-config   # < name of the persistent volume claim ("pvc")
  namespace: plexserver         # < namespace where to place the pvc
spec:
  storageClassName: ""
  volumeName: plexserver-pv-nfs-config  # < the pv it will "claim" storage from. Created in the previous yaml.
  accessModes:
    - ReadWriteMany             # < Multiple pods can write to storage. Same value as pv
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi              # < How much data can the pvc claim from pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plexserver-pvc-data
  namespace: plexserver
spec:
  storageClassName: ""
  volumeName: plexserver-pv-nfs-data
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Ti              # < How much data can the pvc claim from pv
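
After applying the volumes and claims, both claims should report the status Bound. A quick check:

kubectl get pv                   # both persistent volumes should be listed
kubectl get pvc -n plexserver    # the STATUS of both claims should be Bound
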
Next is the deployment of the plexserver container. Unfortunately we cannot use the official docker image from Plex itself, since it only runs on the AMD64 platform. This is not really a problem, since linuxserver.io maintains an up-to-date docker image of the Plex server which also runs on the ARM64 (Raspberry Pi) platform. The deployment yaml is pretty straightforward, with the exception of the following environment variables.

- PLEX_CLAIM - Optionally you can obtain a claim token from https://plex.tv/claim and enter it here.
- PGID, PUID - The group ID and user ID used for accessing the NFS shares. In the yaml below they are entered as hex-escaped ASCII strings (see the example after this list on how to find the right values).
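
To find the numeric user and group ID that own the Plex data, you can run something like the following on the NFS server (the username nasuser is only an example). The resulting numbers, 1035 and 100 in this blog, go into the PUID and PGID variables.

id nasuser    # example output: uid=1035(nasuser) gid=100(users) groups=100(users)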

A detailed explanation of the linuxserver/plex image can be found here 

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: plexserver               # < label for tagging and reference
  name: plexserver                # < name of the deployment
  namespace: plexserver           # < namespace where to place the deployment and pods
spec:
  replicas: 1                     # < number of pods to deploy
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: plexserver
  strategy:
    rollingUpdate:
      maxSurge: 0                 # < The number of pods that can be created above the desired amount of pods during an update
      maxUnavailable: 1           # < The number of pods that can be unavailable during the update process
    type: RollingUpdate           # < New pods are added gradually, and old pods are terminated gradually
  template:
    metadata:
      labels:
        app: plexserver
    spec:
      volumes:
      - name: nfs-plex-config     # < linkname of the volume for the pvc
        persistentVolumeClaim:
          claimName: plexserver-pvc-config  # < pvc name we created in the previous yaml
      - name: nfs-data
        persistentVolumeClaim:
          claimName: plexserver-pvc-data
      containers:
      - env:                       # < environment variables. See https://hub.docker.com/r/linuxserver/plex
        - name: PLEX_CLAIM
          value: claim-XwVPsHsaakdfaq66tha9
        - name: PGID
          value: "\x31\x30\x30"    # < ASCII code for '100'
        - name: PUID
          value: "\x31\x30\x33\x35" # < ASCII code for '1035'
        - name: VERSION
          value: latest
        - name: TZ
          value: Europe/Amsterdam  # < Timezone
        image: ghcr.io/linuxserver/plex   # < the name of the docker image we will use
        imagePullPolicy: Always    # < always use the latest image when creating container/pod
        name: plexserver           # < name of container
        ports:
        - containerPort: 32400     # < required network portnumber. See https://hub.docker.com/r/linuxserver/plex
          name: pms-web            # < reference name from the port in the service yaml
          protocol: TCP
        - containerPort: 32469
          name: dlna-tcp
          protocol: TCP
        - containerPort: 1900
          name: dlna-udp
          protocol: UDP
        - containerPort: 3005
          name: plex-companion
          protocol: TCP  
        - containerPort: 5353
          name: discovery-udp
          protocol: UDP  
        - containerPort: 8324
          name: plex-roku
          protocol: TCP  
        - containerPort: 32410
          name: gdm-32410
          protocol: UDP
        - containerPort: 32412
          name: gdm-32412
          protocol: UDP
        - containerPort: 32413
          name: gdm-32413
          protocol: UDP
        - containerPort: 32414
          name: gdm-32414
          protocol: UDP
        resources: {}
        stdin: true
        tty: true
        volumeMounts:            # < the volume mount in the container. Look at the relation volumelabel->pvc->pv
        - mountPath: /config     # < mount location in the container
          name: nfs-plex-config  # < volumelabel configured earlier in the yaml file
        - mountPath: /data
          name: nfs-data 
      restartPolicy: Always
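
With the namespace, volumes, claims and deployment in place you can roll everything out and follow the start-up of the pod. A minimal example, assuming all parts above are combined in a single file called plexserver.yaml (the file name is just an example):

kubectl apply -f plexserver.yaml
kubectl rollout status deployment/plexserver -n plexserver    # waits until the pod is running
kubectl logs -n plexserver deployment/plexserver --tail=20    # check the container log for errors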

The next part of the yaml file is the service part. This is a bit more complex than usual, since we need to configure both UDP and TCP ports. In Kubernetes (at least until version 1.19) it was not possible to configure UDP and TCP on the same load balancer IP. I think it should be possible in Kubernetes 1.20 or higher, but I have not tested this yet. Not to worry, MetalLB is able to resolve this issue: it is possible in MetalLB to share the external IP between two service configurations.

This is configured in the service yaml below for the UDP part of the service. Pay attention to the annotation "metallb.universe.tf/allow-shared-ip: plexserver". By giving it the same value in the UDP yaml and the TCP yaml, MetalLB will share the IP.

Service yaml file for UDP connections. For the load balancer IP you should use a free one from the reserved range in your MetalLB configuration (see here).

kind: Service
apiVersion: v1
metadata:
  name: plex-udp              # < name of the service
  namespace: plexserver       # < namespace where to place service
  annotations:
    metallb.universe.tf/allow-shared-ip: plexserver # < annotation to combine the service IPs, make sure it's the same name as in the TCP service yaml
spec:
  selector:
    app: plexserver           # < reference to the deployment (connects the service with the deployment)
  ports:
  - port: 1900                # < port to open on the outside on the server
    targetPort: 1900          # < targetport. port on the pod to passthrough
    name: dlna-udp            # < reference name for the port in the deployment yaml
    protocol: UDP
  - port: 5353
    targetPort: 5353
    name: discovery-udp
    protocol: UDP
  - port: 32410
    targetPort: 32410
    name: gdm-32410
    protocol: UDP
  - port: 32412
    targetPort: 32412
    name: gdm-32412
    protocol: UDP
  - port: 32413
    targetPort: 32413
    name: gdm-32413
    protocol: UDP
  - port: 32414
    targetPort: 32414
    name: gdm-32414
    protocol: UDP
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx  # < IP to access your plexserver. Should be one from the MetalLB range and the same as in the TCP yaml
Service yaml file for TCP connections

kind: Service
apiVersion: v1
metadata:
  name: plex-tcp              # < name of the service
  namespace: plexserver       # < namespace where to place service
  annotations:
    metallb.universe.tf/allow-shared-ip: plexserver  # < annotation to combine the service IPs, make sure it's the same name as in the UDP service yaml
spec:
  selector:
    app: plexserver           # < reference to the deployment (connects the service with the deployment)
  ports:                      
  - port: 32400               # < port to open on the outside on the server
    targetPort: 32400         # < targetport. port on the pod to passthrough
    name: pms-web             # < reference name for the port in the deployment yaml
    protocol: TCP
  - port: 3005
    targetPort: 3005
    name: plex-companion
    protocol: TCP
  - port: 8324
    targetPort: 8324
    name: plex-roku
    protocol: TCP
  - port: 32469
    targetPort: 32469
    name: dlna-tcp
    protocol: TCP
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx  # < IP to access your plexserver. Should be one from the MetalLB range and the same as in the UDP yaml
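
After applying both service yaml files, the UDP service and the TCP service should report the same external IP, which confirms that the MetalLB allow-shared-ip annotation does its job:

kubectl get svc -n plexserver    # plex-udp and plex-tcp should show the same EXTERNAL-IP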

And finally the ingress yaml file, so you can access your Plex web interface via a secured https connection. In a standard plexserver setup you access the server via the address http://plexserver.mydomain.com:32400/web. With the ingress settings below you can access it via https://plexserver.mydomain.com. Do not forget to configure your public/private DNS and port forwarding (see this blog on how to configure this). It is also required to forward port 32400 from your router to the MetalLB load balancer IP address you have configured in your service yaml files.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: plexserver            # < name of ingress entry
  namespace: plexserver       # < namespace where to place the ingress entry
  annotations:
    kubernetes.io/ingress.class: "nginx"                    # < use the nginx ingress controller
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # < communicate in https with the backend (service/pod)
    cert-manager.io/cluster-issuer: "letsencrypt-prod"      # < use letsencrypt-prod in kubernetes to generate the ssl certificate
    nginx.ingress.kubernetes.io/app-root: /web              # < the root directory of the plex webserver
spec:
  rules:
  - host: plexserver.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: plex-tcp
          servicePort: pms-web  # < same label as the port in the service tcp yaml
  tls:                          # < placing a host in the TLS config will indicate a cert should be created
  - hosts:
    - plexserver.mydomain.com
    secretName: plexserver.mydomain.com-tls # < cert-manager will store the created certificate in this secret
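
After applying the ingress you can check that it gets an address and that cert-manager issues the certificate. A quick check, assuming cert-manager and the letsencrypt-prod cluster issuer are configured as in my earlier blogs:

kubectl get ingress -n plexserver        # should show the host and the address of the ingress controller
kubectl get certificate -n plexserver    # READY should become True once the certificate has been issued
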
If the deployment went OK, you should be able to access Plex via your browser (example: https://plexserver.mydomain.com) and see the Plex setup screen. More info about setting up the Plex media server can be found here.

Hope this blog was helpful, especially in combining UDP and TCP in two service yaml files with the MetalLB allow-shared-ip feature.

If you have any questions, do not hesitate to leave a comment. In my next blog I will explain how to combine a database deployment and an application deployment by deploying Nextcloud.

The complete yaml file described in this blog can be found on the GitHub page here.

Comments

  1. Hi there! Great post, really like it that I do not have to fiddle with a helm chart (manifests are more akin to docker compose, closer to my heart).
    I just have one question: I do not seem to be able to enable the "remote access" on the server settings. Here is my slightly modified code:
    https://pastebin.com/56dV2FEc
    My nginx ingress is on 192.168.0.240 and the plex loadbalancer is set to .249. I tried forwarding port 32400 to the latter (and to the former) but no joy. (The web server is accessible on the ingress.)
    Any suggestion?

    Thanks,
    Fabrice

  2. Anonymous

    Hi faber! I think the port forwarding from your router for port 32400 should go directly to 192.168.0.249 in your case. Your network setup should be similar to this:

    For web access
    - https://plex.example.com -> via Ingress https://192.168.0.240 -> https://192.168.0.249:32400/web

    For remote server access
    - https://plex.example.com:32400 -> https://192.168.0.249:32400

    This setup should work for remote access on the server settings.

    Kind Regards,
    Erik

  3. Hi Erik,

    Many thanks for the quick reply.
    I think the error might have been on my end (probably around the MetalLB deployment) and it seems to be OK now that I redeployed this on my rebuilt cluster...

    Thanks again!
    Fabrice

  4. where do you put movies?

    1. Anonymous

      In this example on an NFS server (Synology NAS). See https://www.debontonline.com/2020/10/part-10-how-to-configure-persistent.html for more details.

