About

Whilst Plex Media Server is a fairly monolithic build, it is interesting to be able to run multiple concurrent instances in order to attain some form of high availability, load-balancing and failover. Good reasons to do so include spreading traffic amongst the machines hosting Plex, as well as leveraging GPU acceleration that might be present on multiple machines across the network.

This page follows a simple setup involving Plex running with Docker and caddy acting as a load-balancer, along with various quirks and optimizations possible at all levels that allow a user to scale Plex instances throughout a cluster of computers.

Architecture

The setup involves a single Internet uplink, although it would be more practical if two Internet uplinks were available, such that clients external to the network could benefit from the traffic being spread across two different uplinks. The setup is easy to modify, though, and can scale up if a second connection were to be added.

In this setup, plex1.DOMAIN.duckdns.org, plex2.DOMAIN.duckdns.org and plex.DOMAIN.duckdns.org are three subdomains of a domain named DOMAIN registered with the duckdns.org service, and all of them point to the same IP address both externally and internally.

Externally, all three FQDNs map to an external address x.x.x.x, whilst internally the FQDNs map to an internal address y.y.y.y representing a virtual IP that points to the caddy server (intuitively, the y.y.y.y IP address could itself be a virtual IP mapping into a Docker swarm).

The differing DNS mappings are handled via ISC BIND9 using split views, providing resolution for addresses internal to the local LAN and, optionally, external DNS resolution of the domains. The split DNS is not documented here, but a good write-up can be found on its own dedicated page, which also uses duckdns.org as an example TLD.

Nodes node1 and node2 (with potential scaling to other nodes and other machines) represent machines that run an instance of the Plex Media Server. Initially, the plan was to run Plex Media Server within a Docker swarm, such that the scaling would have been orchestrated by the swarm itself, but due to current Docker swarm limitations, Plex is run using a SystemD service file that is placed on all nodes. The SystemD service file can be found on the page describing Docker swarm limitations and is to be distributed to all the nodes on which scaling the Plex Media Server is desired.

Managing DuckDNS

DuckDNS is a free dynamic DNS service that happens to map all subdomains to the same IP address, in contrast to other free DDNS services that limit the number of subdomains that can be created. Whilst maintaining internal DNS resolution can be achieved with the ISC BIND DNS server as documented previously, that is only part of the job: the duckdns.org domain itself must be updated on each external IP address change.

ddclient can be used to update the DuckDNS domain with a configuration file similar to the following:

# ddclient configuration for DuckDNS
#
# /etc/ddclient.conf

daemon=60
syslog=yes
mail=root
mail-failure=root
pid=/run/ddclient.pid

use=cmd, \
cmd="/usr/bin/dig +short myip.opendns.com @resolver1.opendns.com", \
protocol=duckdns, \
password='...', \
DOMAIN.duckdns.org

where:

  • DOMAIN.duckdns.org is the FQDN domain registered with DuckDNS, and
  • password must be set to the account token issued by DuckDNS upon sign-up

The command:

/usr/bin/dig +short myip.opendns.com @resolver1.opendns.com

queries the OpenDNS resolver for the special hostname myip.opendns.com, which resolves to the external IP address of the network, such that DuckDNS will be updated whenever the address changes.
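
To test the configuration once, in the foreground and without daemonizing, ddclient can be invoked manually; the flags below are standard ddclient options:

ddclient -daemon=0 -debug -verbose -noquiet

If everything is set up correctly, the output will show the detected external IP address and whether DuckDNS accepted the update.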

In principle, one of the design goals is for the cluster to be resilient and to self-adapt to a change of location, such that all configured services use mechanisms that dynamically adjust to any network change.

Plex Media Server and The Docker Swarm

As per the documentation on issues with Docker swarm and graphics acceleration, for this setup Plex will be run as an individual service on all nodes within the Docker swarm. However, once the issues with seccomp ("Secure Computing") and Docker swarm are resolved, deploying Plex to machines might just as well be orchestrated by Docker swarm, because the deployment and management of the servers does not matter too much in terms of technology.

Here is a shorter version of the architecture schematic drawn in the previous section but now adapted to highlight the connectivity with Docker:

              +
              | plex1, plex2, plex.duckdns.org
              | 
              | :80, :443
              + caddy
              |
      :32401 / \ :32402
    docker1 /   \ docker2 
           /     \
   :32400 /       \ :32400
   plex1 +         + plex2 
 

It is important to note that docker1 and docker2 have their external container ports configured as 32401 and 32402 respectively, as per the Docker configuration parameters:

-p 32401:32400

respectively:

-p 32402:32400

where 32400 is the default port of the Plex Media Server running within each Docker container.

The reason to do this, particularly due to the single ISP uplink, is that the external access address is slightly modified such that one can distinguish between Plex servers via the port, given that the IP address remains the external IP address of the single ISP uplink. This helps with accessing Plex remotely, and both Plex instances should have their external ports set to 32401, respectively 32402, for plex1 and plex2.

For example, the second ("plex2") server should be configured with the external port manually specified as 32402 under the Plex remote access settings in order to enable "Remote Access".

Within the Docker container, Plex Media Server will still be listening on port 32400, which does not matter because the container is isolated (and, given that Docker swarm is not used until the issue is fixed upstream, the containers will be running on different machines anyway).
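
As a quick sanity check on each node, the published port mapping can be confirmed by listing the running containers:

docker ps --format '{{.Names}} -> {{.Ports}}'

On docker1 the plex container should show 0.0.0.0:32401->32400/tcp, and on docker2 0.0.0.0:32402->32400/tcp.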

Leveraging the Docker configuration, bear in mind that the SystemD service file must be changed per node in order to ensure that the two Plex instances have disjoint storage locations for their internal files, whilst both must also map the shared storage in order to index media. The two files from the sections below must be copied to the docker1 and docker2 nodes respectively, then enabled and started via systemctl, as shown after the unit files.

Plex1 SystemD

[Unit]
Description=Plex Media Server Docker Container
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStartPre=/usr/bin/docker pull lscr.io/linuxserver/plex:latest
# remove any stale container left behind by an unclean shutdown; the leading
# dash tells SystemD to ignore failure when no such container exists
ExecStartPre=-/usr/bin/docker rm -f plex
ExecStart=/usr/bin/docker run --name=plex \
  --rm \
  --interactive \
  --user 0:0 \
  --privileged \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e VERSION=docker \
  --device=/dev/dri/renderD128:/dev/dri/renderD128 \
  -p 32401:32400 \
  -v /mnt/docker/data/docker1-plex/data:/config \
  -v /mnt/docker/data/docker1-plex/transcode:/transcode \
  -v /mnt/docker/data/docker1-plex/repair:/repair \
  -v /mnt/storage/Movies:/movies \
  -v /mnt/storage/TV:/tv \
  lscr.io/linuxserver/plex:latest

[Install]
WantedBy=multi-user.target

Plex2 SystemD

[Unit]
Description=Plex Media Server Docker Container
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStartPre=/usr/bin/docker pull lscr.io/linuxserver/plex:latest
# remove any stale container left behind by an unclean shutdown; the leading
# dash tells SystemD to ignore failure when no such container exists
ExecStartPre=-/usr/bin/docker rm -f plex
ExecStart=/usr/bin/docker run --name=plex \
  --rm \
  --interactive \
  --user 0:0 \
  --privileged \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e VERSION=docker \
  --device=/dev/dri/renderD128:/dev/dri/renderD128 \
  -p 32402:32400 \
  -v /mnt/docker/data/docker2-plex/data:/config \
  -v /mnt/docker/data/docker2-plex/transcode:/transcode \
  -v /mnt/docker/data/docker2-plex/repair:/repair \
  -v /mnt/storage/Movies:/movies \
  -v /mnt/storage/TV:/tv \
  lscr.io/linuxserver/plex:latest

[Install]
WantedBy=multi-user.target
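
Assuming each unit file is saved as /etc/systemd/system/plex.service on its respective node (the unit name is an assumption; any name will do), the service can then be enabled and started:

# reload SystemD so the new unit file is picked up
systemctl daemon-reload
# enable at boot and start immediately
systemctl enable --now plex.service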

Fail-Over and Load-Balancing with Caddy

The purpose of the domains plex1.DOMAIN.duckdns.org and plex2.DOMAIN.duckdns.org (as well as any further scale-up) is to be able to access the individual Plex instances on the individual nodes in the cluster. The generic, unnumbered domain plex.DOMAIN.duckdns.org is the domain used to access the system as a whole, acting as the main point of access with further access to the two Plex instances at plex1.DOMAIN.duckdns.org and plex2.DOMAIN.duckdns.org.

Typically, for such a job, something like HAProxy would do well, but given that most homelab setups use caddy these days, it makes sense to stick to caddy, which, fortunately enough, can do load-balancing as well. Here is a sample minimal configuration covering all three domains:

plex.DOMAIN.duckdns.org {
        tls mail@mail.com

        reverse_proxy docker1:32401 docker2:32402 {
                lb_policy uri_hash
                lb_try_duration 5s
                lb_try_interval 250ms

                fail_duration 10s
                max_fails 1
                unhealthy_status 5xx
                unhealthy_latency 5s
                unhealthy_request_count 1

                trusted_proxies y.y.y.0/24

                header_up Host {host}
                header_up X-Real-IP {remote}
                header_up X-Forwarded-Host {hostport}
                header_up X-Forwarded-For {remote}
                header_up X-Forwarded-Proto {scheme}
        }
}

plex1.DOMAIN.duckdns.org {
        tls mail@mail.com

        reverse_proxy docker1:32401 {
                trusted_proxies y.y.y.0/24

                header_up Host {host}
                header_up X-Real-IP {remote}
                header_up X-Forwarded-Host {hostport}
                header_up X-Forwarded-For {remote}
                header_up X-Forwarded-Proto {scheme}
        }
}

plex2.DOMAIN.duckdns.org {
        tls mail@mail.com

        reverse_proxy docker2:32402 {
                trusted_proxies y.y.y.0/24

                header_up Host {host}
                header_up X-Real-IP {remote}
                header_up X-Forwarded-Host {hostport}
                header_up X-Forwarded-For {remote}
                header_up X-Forwarded-Proto {scheme}
        }
}

The configuration defines a reverse-proxy, as per the architecture schematic, for all of the plex1, plex2 and plex.DOMAIN.duckdns.org domains. HTTPS is ensured via the tls configuration option followed by an e-mail address that must be set to something sensible - a good document on configuring HTTPS with caddy via wildcard certificates is to be found on our caddy FUSS page.

Just to test: at this point, plex1.DOMAIN.duckdns.org should correspond to the docker1 machine on the local network and plex2.DOMAIN.duckdns.org should correspond to the docker2 machine on the local network.

Finally, the load-balancing address plex.DOMAIN.duckdns.org is configured with caddy to provide load-balancing between the Plex servers at docker1 (corresponding externally to plex1.DOMAIN.duckdns.org), respectively docker2 (corresponding externally to plex2.DOMAIN.duckdns.org). The load-balancer policy chosen is uri_hash, deliberately so, with the hope of distributing unique URL requests across the array of Plex servers, given some observational knowledge that Plex Media Server internally relies on hashes to reference various assets and media. The rest of the load-balancing configuration is just meant to detect when either Plex server goes down and becomes inaccessible, such that caddy routes clients to the Plex servers that are up and running.
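
One way to verify the routing is to query Plex's unauthenticated identity endpoint through each of the three domains; the machineIdentifier attribute in the returned XML should differ between plex1 and plex2, whilst plex.DOMAIN.duckdns.org should return one of the two depending on how uri_hash routes the request:

curl -s https://plex1.DOMAIN.duckdns.org/identity
curl -s https://plex2.DOMAIN.duckdns.org/identity
curl -s https://plex.DOMAIN.duckdns.org/identity

Stopping one of the Plex containers and repeating the last command should demonstrate the fail-over, with caddy routing all requests to the remaining instance.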

Sharing of Common Assets

Given the observational knowledge that Plex uses hashes internally to reference assets, it might just be the case that metadata and media referred to via hashes can be shared between Plex instances.

Here is a filesystem layout overview containing the relevant folders that are mapped into the Docker containers running Plex Media Server, including a new folder named plex-shared whose contents are symlinked into both per-instance folders.

/mnt/docker
     +
     |
     +---+ /data/
             +
             |
             +---+ /docker1-plex/ (mapped to /config in docker1 for plex1)
             |
             +---+ /docker2-plex/ (mapped to /config in docker2 for plex2)
             |
             +---+ plex-shared (contains folders common to docker1-plex and docker2-plex)

Whilst data bound to the different Plex instances cannot be shared, such as the SQLite databases and others, both due to the lack of concurrency-safe operations and given that the various Plex instances might write non-context-free data to the bound files, some of the folders contain data that seems to be context-free. Two such folders are identified:

  • Media and,
  • Metadata

both being folders created by Plex Media Server: the first folder, Media, contains references to media imported into the Plex servers, whilst Metadata contains assets such as music album art, movie posters, etc., all referenced by hashes that might be the same between Plex instances.

That being said, the following symlinks can be made, for docker1 and docker2, in order to share the two folders Media and Metadata between the Plex instances:

mkdir -p /mnt/docker/data/plex-shared/{Media,Metadata}

# move aside (or remove) any pre-existing Media and Metadata folders first,
# otherwise ln will create the links inside the existing directories
ln -sf /mnt/docker/data/plex-shared/{Media,Metadata} \
    /mnt/docker/data/docker1-plex/data/Library/Application\ Support/Plex\ Media\ Server/

ln -sf /mnt/docker/data/plex-shared/{Media,Metadata} \
    /mnt/docker/data/docker2-plex/data/Library/Application\ Support/Plex\ Media\ Server/

Now the Plex servers can be restarted and made to re-index the media files, hopefully sharing metadata between the instances such that the same data does not have to be downloaded and stored twice.
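
A minimal way to verify the links and restart the instances, assuming the plex.service unit name from the previous sections:

# verify that both instances now point at the shared folders
ls -ld /mnt/docker/data/docker{1,2}-plex/data/Library/Application\ Support/Plex\ Media\ Server/{Media,Metadata}

# restart the container on each node
systemctl restart plex.service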

