About

This document describes a setup where caddy acts as a front-facing "amplified" reverse-proxy, working as a switchboard for a media-server homelab that runs most of the Servarr software components.

Diagram and Assumptions

A reserved external domain name just for the media home entertainment setup will be:

  • media.home-entertainment.tld

The internal domain name of the media server will be:

  • media.tld

The local internal DNS server is configured using views such that both the internal hostname of the media server, media.tld, and the external DNS name media.home-entertainment.tld reserved for the project resolve to the same local LAN IP address. That is, accessing either media.tld or media.home-entertainment.tld from a browser on the local LAN should result in connecting to the media server on the local LAN.
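As a sketch, assuming a BIND name server and a placeholder LAN of 192.168.1.0/24, such a split-horizon setup using views could look roughly like the following, where the ACL, zone names and file paths are illustrative only:

// named.conf (excerpt); ACL, zone names and file paths are placeholders
view "internal" {
        match-clients { 192.168.1.0/24; 127.0.0.0/8; };
        // both names resolve to the LAN address of the media server,
        // so the same zone file can be reused for both origins
        zone "media.tld" {
                type master;
                file "/etc/bind/zones/db.media.lan";
        };
        zone "media.home-entertainment.tld" {
                type master;
                file "/etc/bind/zones/db.media.lan";
        };
};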

Dynamic DNS

It is possible to register a regular domain name for the project, but the goal was to keep resource costs and setup complexity extremely low, so the chosen variant was to use an online dynamic DNS (DDNS) service. In this context, dynamic DNS does not refer to the typical BIND-style DNS update performed on IP address reservation, but rather to an online service that exposes a programmatic hook which can be used to update the external IP address of a domain name supplied by the "dynamic DNS provider"; a task trivially accomplished with tools such as ddclient.
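As an illustration, assuming a provider that speaks the widespread dyndns2 protocol (the server, login and password below are placeholders that depend entirely on the chosen provider), a minimal ddclient configuration could look like:

# /etc/ddclient.conf - a minimal sketch, provider specifics vary
daemon=300                      # re-check the external IP every 5 minutes
use=web, web=checkip.dyndns.org # discover the external IP via the web
protocol=dyndns2                # a protocol spoken by many DDNS providers
server=members.dyndns.org       # placeholder provider endpoint
login=USERNAME
password='PASSWORD'
home-entertainment.tld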

Similarly, another domain name, auth.home-entertainment.tld, should be registered and should point to the same IP address as media.home-entertainment.tld, such that both domains point to the external IP address of the home entertainment system.

That is, the setup described within this document uses just two domain names, media.home-entertainment.tld and auth.home-entertainment.tld, without registering additional sub-domain names for each Servarr (Sonarr, Lidarr, etc.). Fortunately, most dynamic DNS services allow sub-domains of the reserved domain, such as media or auth. One trick is to configure both the media and auth sub-domains as canonical names (CNAME records) of the top-level reserved dynamic DNS domain, home-entertainment.tld (note that PTR records, despite the name, serve reverse lookups and will not alias a name). By configuring the sub-domains as canonical names of the top-level domain, whenever the IP address is updated via a tool such as ddclient, the rest of the sub-domain names will follow the updated IP address. A mistake would be to create A records for media and auth, because A records can be configured separately to point to different IP addresses, such that there is no guarantee that ddclient would update them.

To conclude, the following DNS records are established:

home-entertainment.tld.        A        EXTERNAL_IP
media                          CNAME    home-entertainment.tld.
auth                           CNAME    home-entertainment.tld.

where:

  • EXTERNAL_IP is set to the external IP address of the media home lab.

Setting up Servarr Instances

The Servarrs that will be used here are:

  • Sonarr, retrieves TV shows,
  • Radarr, retrieves movies,
  • Readarr, retrieves books,
  • Lidarr, retrieves music,
  • Bazarr, retrieves subtitles for Sonarr and Radarr,
  • Prowlarr, a centralized indexer for all the @rr servers (similar to Jackett)

The configuration of the individual instances does not matter too much, except that they will have to be configured to use a base path instead of the root path.

That is, all Servarr servers will be configured with a base URL, such that the following host and path pairs should be equivalent:

  • media.tld/sonarr, media.home-entertainment.tld/sonarr,
  • media.tld/radarr, media.home-entertainment.tld/radarr

and so on for all Servarr servers.

This can be configured in the "General" section of each Servarr.
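For reference, the same setting ends up in each Servarr's config.xml file; an illustrative excerpt for Sonarr (the location of config.xml varies with the install method) would be:

<!-- config.xml (excerpt); UrlBase mirrors the "URL Base" field
     found under Settings -> General -->
<Config>
  <Port>8989</Port>
  <UrlBase>/sonarr</UrlBase>
</Config>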

While setting up the Servarrs, it is possible to disable authentication completely, or to create the necessary exclusions for non-routable LAN IP addresses to bypass authentication, and then access the Servarrs directly.

Setting up Organizr and Services

Although not a canonical @rr, Organizr is a wrapping tool that provides an interface where the various Servarrs can be added as tabs and then accessed conveniently from a unified interface. Organizr is not alone, being perhaps just the oldest; there are alternatives such as Heimdall or Muximux. Most of what these interfaces do is load websites inside an iframe, with the required fixes to bypass CORS security enforcement.

After installing Organizr, the service has to be configured by logging in, opening the tab editor and then adding a Servarr. Two settings are important:

  • Tab URL,
  • Tab Local URL

where:

  • Tab URL should be set to the external hostname:
    • for Sonarr, the FQDN URL would be https://media.home-entertainment.tld/sonarr
  • Tab Local URL should be set to the internal IP address and port pair of the Servarr service:
    • for Sonarr listening on 8989 the local FQDN URL would be http://media.tld:8989/sonarr

Organizr's job will now be to pass each Servarr to the corresponding local or external FQDN URL:

                     Organizr
                         +
                         |
        media.home-entertainment.tld or media.tld
                         |
            +--------+---+----+--------+----. . .
            |        |        |        |
        /sonarr  /radarr  /readarr  /lidarr

in different tabs.

Setting up the Reverse Proxies

For convenience, two reverse-proxies are configured using caddy. The first reverse-proxy lives on the internal media.tld server and allows accessing the Servarr instances via their base URL paths. The following is a sample caddy configuration that reverse-proxies multiple Servarr services such that they can be accessed via the LAN FQDN of the "media" server plus the base URL that the respective Servarr has been configured with:

:80 {
        reverse_proxy 127.0.0.1:32400

        reverse_proxy /kavita* {
                to 127.0.0.1:5000
        }

        reverse_proxy /radarr* {
                to 127.0.0.1:7878
        }

        reverse_proxy /sonarr* {
                to 127.0.0.1:8989
        }

        reverse_proxy /lidarr* {
                to 127.0.0.1:8686
        }

        reverse_proxy /readarr* {
                to 127.0.0.1:8787
        }

        reverse_proxy /whisparr* {
                to 127.0.0.1:6969
        }

        reverse_proxy /bazarr* {
                to 127.0.0.1:6767
        }

        reverse_proxy /prowlarr* {
                to 127.0.0.1:9696
        }

        reverse_proxy /unmanic* {
                to 127.0.0.1:8888
        }

        reverse_proxy /unpackerr* {
                to 127.0.0.1:5656
        }

        redir /services /organizr/
        redir /organizr /organizr/
        handle_path /organizr/* {
                rewrite /organizr* /{uri}
                reverse_proxy 127.0.0.1:2468 {
                        header_up Host {host}
                        header_up X-Real-IP {remote}
                        header_up X-Forwarded-Host {hostport}
                        header_up X-Forwarded-For {remote}
                        header_up X-Forwarded-Proto {scheme}
                }
        }
}

The last entry configures not only /organizr but also an additional /services URL path that redirects to /organizr. Building on the previous schematic, here is an updated one showing the layout of the various services:

                           v media.home-entertainment.tld/
                           | media.tld/
                           |
                           |
           +---------------+-------------------+
           |                                   |
           |                                   +
           |                               Organizr
           |               media.home-entertainment.tld/services
           |               media.home-entertainment.tld/organizr
           |                       media.tld/services
           |                       media.tld/organizr
           |                                   +
           +                                   |
          Plex               +--------+--------+--------+--------+--------+
      Media Server           |        |        |        |        |        |
                         /sonarr  /radarr  /readarr  /lidarr  /prowlarr  /bazarr

Now, corresponding to the first line in the configuration, 127.0.0.1:32400, the default destination for any HTTP(S) request reaching the media server on port 80 is proxied to port 32400 on the local machine, which is the default listening port of the Plex Media Server.

The second reverse-proxy server lives on the front-facing Internet gateway and reverse-proxies external requests to the internal media server. The reason for adding another caddy server to the setup is separation of concerns: the media server remains a self-standing entity that can be made part of any LAN (i.e. a container or a virtual machine), while the caddy on the Internet gateway acts for one particular configured LAN. The limitation, of course, is that the HTTP and HTTPS ports are unique, and other services within the LAN might need exposing as well.

The gateway caddy reverse-proxy will also be responsible for authenticating clients external to the local LAN, because it is convenient to protect the entire allotment of Servarrs at once instead of going through all of the services and turning on authentication individually. Even if authentication were enabled on all the Servarrs, as well as on additional services that do not even have built-in authentication, this is still one more level of separation: requests coming over the external Internet connection are processed on the front-facing server instead of simply being NATed into the local network (specifically, to the media machine).

That being said, here is the complete configuration for caddy on the Internet-facing gateway:

{
        log {
                output file /var/log/caddy/access.log {
                        roll_size 250mb
                        roll_keep 5
                        roll_keep_for 720h
                }
        }

        order authenticate before respond
        order authorize before reverse_proxy

        security {
                local identity store localdb {
                        realm local
                        #path /tmp/users.json
                        path /etc/caddy/auth/local/users.json
                }
                authentication portal media {
                        enable identity store localdb
                        cookie domain media.home-entertainment.tld
                        cookie lifetime 86400
                        ui {
                                theme basic
                                links {
                                        "Services" /services
                                }
                        }
                }
                authorization policy admin_policy {
                        set auth url https://auth.home-entertainment.tld
                        allow roles authp/user authp/admin
                }
        }

        order replace after encode
}

auth.home-entertainment.tld {
        tls some@email.tld
        
        authenticate with media
}

media.home-entertainment.tld {
        tls some@email.tld

        # Expose iCal without authorization and only based on API key authentication.
        # /sonarr/feed/calendar/Sonarr.ics?apikey=
        # /radarr/feed/v3/calendar/Radarr.ics?apikey=
        @noauth {
                not path_regexp \/.+?\/feed(/.*?)*\/calendar\/.+?\.ics$
                not remote_ip 192.168.1.0/24
        }

        handle @noauth {
                authorize with admin_policy
        }

        reverse_proxy media.tld {
                header_up Host {host}
                header_up X-Real-IP {remote}
                header_up X-Forwarded-Host {hostport}
                header_up X-Forwarded-For {remote}
                header_up X-Forwarded-Proto {scheme}
        }
}

When a request arrives at the front-facing Internet gateway, the caddy server, based on the previous configuration, will perform the following steps, in order:

  • the browser accessing the caddy server is checked for the presence of an authentication cookie, along with the IP address of the client,
  • if a cookie is present or the request stems from the local network 192.168.1.0/24, then the request is forwarded to the media server at media.tld,
  • if there is no authentication cookie and the request originates outside the local LAN, caddy redirects the request to auth.home-entertainment.tld, where the user is prompted to authenticate using a form-based authenticator,
  • upon successful authentication, the user is redirected back to the original FQDN and URL requested, thereby hopefully reaching one of the Servarrs

The only step that has been left out is the following exception:

not path_regexp \/.+?\/feed(/.*?)*\/calendar\/.+?\.ics$

that allows Servarr paths matching the regular expression, such as:

        # /sonarr/feed/calendar/Sonarr.ics?apikey=
        # /radarr/feed/v3/calendar/Radarr.ics?apikey=

to be accessed without needing authentication.
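As a quick sanity check, with APIKEY standing in for the key found under a Servarr's General settings, the exclusion can be verified from outside the LAN with something along the lines of:

# should return the calendar directly, without a redirect to the
# authentication portal; APIKEY is a placeholder
curl -I "https://media.home-entertainment.tld/sonarr/feed/calendar/Sonarr.ics?apikey=APIKEY"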

Mostly, this is fine, because accessing the iCal service of any Servarr requires an API key, such that even if the service is publicly exposed, a user would still need an API key to effectively pull the iCal calendar. Furthermore, given that caddy was configured with a TLS e-mail via the settings:

... {
        tls some@email.tld
        ...
}

then caddy will automatically obtain SSL / TLS certificates for the domains media.home-entertainment.tld and auth.home-entertainment.tld. It is also possible to set up wildcard SSL / TLS certificates, but that requires a DNS provider such as Cloudflare in order to solve the Let's Encrypt or ZeroSSL DNS challenges for certificate generation.
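For completeness, a sketch of such a wildcard setup, assuming a caddy build that includes the Cloudflare DNS module and an API token exported as the environment variable CF_API_TOKEN, might look like:

*.home-entertainment.tld {
        tls some@email.tld {
                dns cloudflare {env.CF_API_TOKEN}
        }
}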

Populating Google Calendar with Servarr Events

With all of this in place, it is possible to retrieve the iCal URLs generated by the various Servarrs and import them into an online calendar such as Google Calendar. Importing the events is possible thanks to the front-facing caddy reverse-proxy having the necessary regular-expression exclusions that pass through access to the Servarr calendars.

The procedure to do so is very simple:

  • grab the URL from one of the Servarr instances, for example, from Sonarr https://media.home-entertainment.tld/sonarr/feed/calendar/Sonarr.ics?apikey=...,
  • log-in to Google calendar,
  • navigate to the "Other calendars" section, press the plus button and select "From URL",
  • enter the full URL retrieved from the Servarr instance; in this example, https://media.home-entertainment.tld/sonarr/feed/calendar/Sonarr.ics?apikey=...

The Google Calendar page might need reloading but, if everything is set up correctly, the events should show up after some time. One indicator that adding the Servarr iCal link to Google Calendar did not work is that the URL remains visible in the same section where the iCal link was added. In other words, after adding the iCal link, and after Google retrieves the Servarr calendar, the name in the list of calendars should change from the long URL to the Servarr calendar name (i.e. "Sonarr TV Schedule" for Sonarr, or "Readarr Book Schedule" for Readarr, etc.).

Plex vs. Jellyfin

While Jellyfin is open-source, Plex Media Server tends to be more resilient when the server is behind a NAT or, even worse, a carrier-grade NAT (CGNAT), because the server can still be accessed through plex.tv even if the port is blocked and direct traffic is severed. Both Plex and Jellyfin are otherwise similar, with the drawbacks that Plex requires a USD 100 subscription to view and record TV using an external TV tuner (although the subscription is a lifetime subscription, and has been for decades), whereas Jellyfin packs an E-Book "reader" of sorts that is barely functional and does not even distinguish between audio books and paper books.

While not much can be done about the lifetime subscription, and adding a TV tuner to the media server is really a completionist's dream, the built-in "e-book" reader can be replaced by something like Kavita, which is a proper E-Book reader supporting various standards. Kavita is not perfect, but the Jellyfin built-in "e-book" reader is practically unusable anyway.

The following mobile applications are advanced and well-developed for Plex Media Server:

  • Plex, the application itself that can be used to do anything you'd do from a computer but from a mobile device,
  • Plexamp, an application along the lines of WinAmp but with the mandatory iTunes feature of being able to download songs from your own collection onto the mobile device in order to play the music without an Internet connection.

Converting Shows and Movies Automatically

Whether installing Node-RED or using Unmanic, it is important to remember that, depending on your needs, some formats are much better than others. For starters, most modern browsers can directly decode MP4 files, such that even if more advanced formats such as WebM exist, it is still much preferable to transcode everything to MP4.

When Plex or Jellyfin are asked to play back a video, if the format is not supported by the browser accessing Plex or Jellyfin, then FFmpeg is used to transcode the video on the fly and pass the result to the browser. Of course, FFmpeg transcoding costs a lot of resources, and even if FFmpeg is invoked with options to perform the transcode on the GPU, it is still one more moving part that is not really necessary when the entire collection of videos can simply be converted to an MP4 container with H.264 video (encoded via x264).

Some inspiration can be taken from the lowest common denominator FFmpeg FUSS section or from the custom FFmpeg script for Unmanic, either of which should make converted videos compatible with any browser. The general guidelines to follow seem to be (a sample invocation follows the list below):

  • MP4 container (Matroska / MKV is far more advanced, as an example, yet it is not a container compatible with all browsers),
  • H.264 (via the x264 encoder) instead of HEVC (again, due to browser compatibility),
  • the FFmpeg flag -movflags +faststart is almost crucial, because it instructs FFmpeg to move all the metadata to the start of the file, such that skimming through the video is instant without requiring any buffering,
  • some major reductions in file size can be performed, iff:
    • the quality is not important, then SDTV for TV shows is great,
    • separate audio tracks for various languages are not necessary, then only English can be maintained and subtitles can be added using Bazarr,
    • depending on the audio setup, 128k for the audio bitrate "should be enough for everyone",
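Putting the guidelines together, a hedged sample invocation (the file names and quality settings are placeholders to be tuned) would be:

#!/bin/sh
# convert a video to an MP4 container with H.264 video and 128k AAC audio;
# -movflags +faststart moves the metadata to the front of the file such
# that playback and seeking start instantly
ffmpeg -i "input.mkv" \
        -c:v libx264 -preset slow -crf 23 \
        -c:a aac -b:a 128k \
        -movflags +faststart \
        "output.mp4"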

Regardless of the method chosen, one good way to convert videos on the fly as they are added is to use inotify, or to look for software like Unmanic that leverages inotify in order to be told by the filesystem that a new file has been added, so that the video file gets converted automatically.
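A minimal sketch of the inotify approach, using inotifywait from the inotify-tools package (the watch directory and conversion settings are placeholders):

#!/bin/sh
# watch a directory tree and convert newly completed files to MP4
WATCH_DIR="/mnt/media/incoming"
inotifywait -m -r -e close_write --format '%w%f' "$WATCH_DIR" | \
        while read -r FILE; do
                case "$FILE" in
                        *.mkv|*.avi|*.wmv)
                                ffmpeg -i "$FILE" \
                                        -c:v libx264 -c:a aac -b:a 128k \
                                        -movflags +faststart \
                                        "${FILE%.*}.mp4"
                                ;;
                esac
        done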

Unpacking Releases

An older custom was to pack everything in archives, or archives within archives, very neatly and very tightly. The reasoning back then was that the Internet was still slow overall, and the physical media used to transfer data were limited technologies such as floppies, magnetic tapes or CD/DVD-ROMs. The packing served both to squeeze out even the smallest byte and to allow splitting an archive across multiple floppies, tapes or CD/DVD-ROMs.

Meanwhile, high-speed Internet made all of that obsolete, customs have adapted, and it is now typically considered unsuitable to upload releases to a tracker packed using archiving tools. Still, from time to time, especially if Usenet is used, a release gets downloaded that is archived, such that the Servarrs cannot read it and do not recognize it.

Unpackerr is a tool that can be set up to interact with the Servarrs and unpack downloads if they are archived, notifying the various Servarrs that the file was updated. The configuration itself is straight-forward, set-and-forget, with ample documentation within the configuration file itself.
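As an indicative excerpt, assuming the TOML layout of unpackerr.conf with a placeholder URL, API key and download path, a Sonarr entry would look roughly like:

## unpackerr.conf (excerpt); url, api_key and paths are placeholders
[[sonarr]]
 url = "http://127.0.0.1:8989/sonarr"
 api_key = "SONARR_API_KEY"
 paths = ["/mnt/archie/Downloads"]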

Download Clients

All the Servarrs can use both a torrent and a Usenet client in order to download releases, such that one client of each kind is required. Empirically, picking nzbget as the Usenet downloader and flood with rTorrent for torrents is perhaps the best choice.

Ideally, the setup would consist of:

  • multiple rTorrent clients running on the NAS machine where the physical storage is to be found,
  • all the Servarr servers connecting through flood, which multiplexes the rTorrent torrent clients

Typically, seed-box-like environments necessary to maintain a media library require hundreds or perhaps thousands of torrents to be seeded, and even if each torrent is just a book consisting of a PDF file of negligible size, it is still the case that not many torrent clients are able to scale all that well. At the very least, two torrent clients should be run: one for books and another for everything else (in particular, when backfilling is not necessary).

For the multiple rTorrent clients, docker can be used, but a manual semi-automated setup should work as well.

Semi-Automated rTorrent Deployment

Here is a sample systemd service file that can be used to spawn instances of rTorrent by issuing just one command:

[Unit]
Description=rTorrent Instance %i
After=network.target

[Service]
Environment=DOWNLOAD_DIRECTORY=/mnt/archie/Downloads
Environment=SESSION_DIRECTORY=/opt/rtorrent/instances/%i
Environment=SESSION=%i
Environment=PORT=62296
Environment=SCGI_PORT=5000
Environment=DHT_PORT=6881
Environment=RTORRENT_CONFIG_PAYLOAD=ZGlyZWN0b3J5LmRlZmF1bHQuc2V0ID0gJHtET1dOTE9BRF9ESVJFQ1RPUll9CnNlc3Npb24ucGF0aC5zZXQgPSAke1NFU1NJT05fRElSRUNUT1JZfQpuZXR3b3JrLmJpbmRfYWRkcmVzcy5zZXQgPSAwLjAuMC4wCiMgc2V0IG9uIHRoZSBjb21tYW5kIGxpbmUKI25ldHdvcmsucG9ydF9yYW5nZS5zZXQgPSAKbmV0d29yay5wb3J0X3JhbmRvbS5zZXQgPSBubwpuZXR3b3JrLnNlbmRfYnVmZmVyLnNpemUuc2V0ID0gMTI4TQoKcGllY2VzLmhhc2gub25fY29tcGxldGlvbi5zZXQgPSBubwpwaWVjZXMucHJlbG9hZC50eXBlLnNldCA9IDEKcGllY2VzLnByZWxvYWQubWluX3NpemUuc2V0ID0gMjYyMTQ0CnBpZWNlcy5wcmVsb2FkLm1pbl9yYXRlLnNldCA9IDUxMjAKcGllY2VzLm1lbW9yeS5tYXguc2V0ID0gMTAyNE0KCnRyYWNrZXJzLnVzZV91ZHAuc2V0ID0geWVzCgpwcm90b2NvbC5lbmNyeXB0aW9uLnNldCA9IGFsbG93X2luY29taW5nLHRyeV9vdXRnb2luZyxlbmFibGVfcmV0cnkKcHJvdG9jb2wucGV4LnNldCA9IHllcwoKZGh0Lm1vZGUuc2V0ID0gYXV0bwojIHNldCBvbiB0aGUgY29tbWFuZCBsaW5lCiNkaHQucG9ydC5zZXQgPSAKCnN5c3RlbS5kYWVtb24uc2V0ID0gdHJ1ZQoKIyBzZXQgb24gdGhlIGNvbW1hbmQgbGluZQojbmV0d29yay5zY2dpLm9wZW5fcG9ydCA9IAo=
ExecStartPre=/usr/bin/env sh -c "mkdir -p ${SESSION_DIRECTORY}"
ExecStartPre=/usr/bin/env sh -c "echo ${RTORRENT_CONFIG_PAYLOAD} | base64 -d | envsubst > ${SESSION_DIRECTORY}/rtorrent.rc"
ExecStartPre=/usr/bin/env sh -c "rm -f ${SESSION_DIRECTORY}/rtorrent.lock"
ExecStart=/usr/bin/env bash -c "/usr/bin/rtorrent  -n -o import=${SESSION_DIRECTORY}/rtorrent.rc -o network.port_range.set=$((${PORT} + ${SESSION}))-$((${PORT} + ${SESSION})) -o network.scgi.open_port=0.0.0.0:$((${SCGI_PORT} + ${SESSION})) -o dht.port.set=$((${DHT_PORT} + ${SESSION}))"
ExecStopPost=/usr/bin/env bash -c "rm -f ${SESSION_DIRECTORY}/rtorrent.{lock,rc}"
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=rtorrent-%i
User=root
Group=root

[Install]
WantedBy=multi-user.target

In order to use the service, copy the contents to a file at /etc/systemd/system/rtorrent@.service and then issue:

systemctl daemon-reload

in order to reload the service files.

Now, whenever an rTorrent instance must be spawned, the following commands have to be issued:

systemctl enable rtorrent@0.service

followed by:

systemctl start rtorrent@0.service

where:

  • 0 is an instance number, preferably incremented sequentially for each rTorrent client to start.

For example, opening up two rTorrent client instances via:

systemctl enable rtorrent@{0,1}.service
systemctl start rtorrent@{0,1}.service

results in two rTorrent processes being spawned:

1383724 ?        Ssl    0:00 /usr/bin/rtorrent -n -o import=/opt/rtorrent/instances/0/rtorrent.rc -o network.port_range.set=62296-62296 -o network.scgi.open_port=0.0.0.0:5000 -o dht.port.set=6881
1383775 ?        Ssl    0:00 /usr/bin/rtorrent -n -o import=/opt/rtorrent/instances/1/rtorrent.rc -o network.port_range.set=62297-62297 -o network.scgi.open_port=0.0.0.0:5001 -o dht.port.set=6882

As you can observe, the torrent clients listen on incremental ports; this is due to the systemd file being crafted specifically to add the rTorrent instance number to the base listening port and SCGI port in order to build a separate instance.

Similarly, a configuration file is generated on the fly by unpacking the Base64 string, substituting the environment variables via envsubst and then writing a configuration file specific to the started rTorrent instance. The Base64 payload is just an rTorrent configuration file, rtorrent.rc, with the following contents:

directory.default.set = ${DOWNLOAD_DIRECTORY}
session.path.set = ${SESSION_DIRECTORY}
network.bind_address.set = 0.0.0.0
# set on the command line
#network.port_range.set =
network.port_random.set = no
network.send_buffer.size.set = 128M

pieces.hash.on_completion.set = no
pieces.preload.type.set = 1
pieces.preload.min_size.set = 262144
pieces.preload.min_rate.set = 5120
pieces.memory.max.set = 1024M

trackers.use_udp.set = yes

protocol.encryption.set = allow_incoming,try_outgoing,enable_retry
protocol.pex.set = yes

dht.mode.set = auto
# set on the command line
#dht.port.set =

system.daemon.set = true

# set on the command line
#network.scgi.open_port =

In order to make changes to the dynamically generated configuration, simply save the configuration to a file called rtorrent.rc and then issue:

cat rtorrent.rc | base64 | xargs | sed 's/ //g'

to encode the file to a Base64 string. Then, substitute the existing Base64 string in the systemd service file, referenced by RTORRENT_CONFIG_PAYLOAD, with the newly generated Base64 string of the rTorrent configuration file.
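On GNU systems, the same result can be obtained in a single step by disabling line wrapping:

base64 -w 0 rtorrent.rc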

YouTube

Currently there is no Servarr that covers downloading from YouTube, but yt-dlp does a good job and is able to download entire channels as well as YouTube playlists. Neither Plex nor Jellyfin has any special handling for YouTube, such that a standard video folder is suitable for YouTube videos. In any case, it is a good idea to remove any YouTube-based TV shows or series added to Sonarr and instead use yt-dlp to automate the download of YouTube content, as explained on the "downloading YouTube channel" page.

Compared to using a YouTube subscription downloader, one alternative is to use yt-dlp from the command line and create a script that runs on a schedule in order to download YouTube videos or playlists. For example, here is a script that downloads all the playlists, with their videos, created by "The Psychedelic Muse" YouTube channel.

#!/bin/sh
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2024 - License: MIT            ##
###########################################################################
 
/usr/bin/yt-dlp \
	--no-progress \
	--output "/nas/YouTube/The Psychedelic Muse/%(playlist_title)s/%(title)s - %(upload_date)s [%(id)s].%(ext)s" \
	--download-archive /opt/yt-dlp/archives/the_psychedelic_muse.txt \
	--write-info-json \
	--write-subs \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTa3q0xT7dl4YAPrIw9pGAzf" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTa4" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTaFOoEHFOa" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTaiWR" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTak0DqATB0QRl" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTaLH9ydnYvIJEfjgjHWY2pL" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTaTVcFGBKvvvSJuV17LP1hq" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTaU6yc0aUbKaYOSLrELkmad" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTawMUeOTlk8ifOHjLw4LaIu" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTaWo3A6IcztvlEKFvdu0dNP" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTaXSPt" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTb0jYxvWDjjmSGvIJ2MPhEB" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTb5DwcyvCC4jAlZCIWSGq0X" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTbIoE7AjU" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTbjJ5CdILI79FO41m8fwx7y" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTboykOvcVtmTsYbFbhrKIZj" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTbuRasLMP" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTY2Hcf2xeuevAvoJX9B75ar" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTYCNpARJt6UyF6kF2FSbXdf" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTYhIjxhdN38GcQTSOtE2lXL" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTYj57RduCoUDdZ8n6ntJP3f" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTYouXChnsCjEMrF3OOEvXUx" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTYQVqP9gQGT0FDWlZuVuT66" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTYShYoUr5buIv9Y7wM" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTZ2nLRYn1s5QirnAk186z4B" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTZ3FHyW44EwDK6Mve9" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTZ9ESqsuo2I97UHszfY26ij" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTZhY7wHjCIvCM" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTZipz4AE0d0BNrGuht4kk80" \
	"https://www.youtube.com/playlist?list=PLobEMHfBbtTZNwTuPlWLNFMu1HEgXvbHz" \
	2>&1 | logger

where:

  • /nas/YouTube/The Psychedelic Muse/ should be changed to the root directory where the videos should be stored.

The script takes care to create a separate folder for each playlist, as well as to use a specific format that can be recognized by YouTube metadata scanners for Jellyfin. This is done by using the yt-dlp substitution sequences in order to embed various metadata within the created filenames and directory paths:

/nas/YouTube/The Psychedelic Muse/%(playlist_title)s/%(title)s - %(upload_date)s [%(id)s].%(ext)s

For The Psychedelic Muse in particular, making folders for each playlist makes sense, because The Psychedelic Muse typically releases entire albums by sorting each song into a file within a playlist. However, depending on how a YouTube channel is organized, the substitution sequence will more than likely need to be amended. All the substitution parameters within the path string can be looked up on the yt-dlp documentation page.

Working around Bans & Throttles

Since the acquisition of YouTube by Google, YouTube has started off on an authoritarian spree: first by removing the dislike button, then by refusing to fix one of their own bugs that conflicted with browser addons, and recently by making great strides to force people to pay for a subscription. That being said, one of the recent problems at the time of writing (circa 2024) is that downloading with yt-dlp might result in errors claiming that you cannot watch any more videos "with this app" and similar, which is really just a very evasive way of saying that your IP address got throttled and banned.

The solution is to use yt-dlp together with "yt-dlp-youtube-oauth2" in order to log in to YouTube and only then download the video. This helps in the sense that, for now, YouTube widens the throttle bandwidth if a user is at least registered with YouTube.

In order to use yt-dlp-youtube-oauth2, all that is required is to install yt-dlp using pip and then yt-dlp-youtube-oauth2 using pip as well, which yields the most compatible configuration and allows downloading from YouTube again. The command-line invocation changes slightly to include:

 --username oauth2 --password ''

which, if yt-dlp and the yt-dlp-youtube-oauth2 addon have been successfully installed, will make yt-dlp ask the user, the very first time, to supply a code using a browser. After that, the videos should download as normal.
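Summarizing the installation, and assuming the plugin is published under the name suggested above (VIDEO_ID is a placeholder), the steps would be along the lines of:

# install or upgrade yt-dlp and the oauth2 plugin via pip
python3 -m pip install --upgrade yt-dlp
python3 -m pip install --upgrade yt-dlp-youtube-oauth2

# on the first run, yt-dlp prints a code to be entered in a browser
yt-dlp --username oauth2 --password '' "https://www.youtube.com/watch?v=VIDEO_ID"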

Integration with PVRs (Plex and Jellyfin)

Once YouTube content can be downloaded, the question is how to manage that content. Typically, TV shows are canonically downloaded via torrents and Usenet, such that they benefit from a naming convention and are also indexed and recognized by PVRs such as Plex Media Server or Jellyfin via online databases such as TMDB.

However, content downloaded from YouTube is not really indexed, does not have any metadata aside from what YouTube offers, and there is no database (yet) holding publicly accessible information that would make automation possible. With that said, most people would file content from YouTube under an "Other Videos" collection but, interestingly enough, doing so has drawbacks because the PVRs then neither recognize nor treat the content as sequential or as a series. Counter-intuitively, the content should, in fact, be imported into Plex (or Jellyfin) as a regular "TV Show" collection (or TV series), with the script mentioned before changed a little to name files incrementally.

Here is an example of downloading the "Grade A Under A" channel using an incremental naming convention via yt-dlp:

#!/bin/sh
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2024 - License: MIT            ##
###########################################################################
 
# Acquire a lock.
LOCK_FILE="/tmp/gradeaundera"
if mkdir "$LOCK_FILE" >/dev/null 2>&1; then
    trap '{ rm -rf "$LOCK_FILE"; }' QUIT TERM EXIT INT HUP
else
    exit 0
fi
 
mkdir -p /scratch/yt-dlp/archives
yt-dlp \
        --output "/mnt/storage/YouTube/Grade A Under A/%(upload_date>%Y)s/Grade A Under A - S%(upload_date>%Y)sE%(upload_date>%m)s%(upload_date>%d)s - %(title)s - [%(id)s].%(ext)s" \
        --download-archive "/scratch/yt-dlp/archives/gradeaundera.txt" \
        --write-info-json \
        --write-subs \
        "https://www.youtube.com/channel/UCz7iJPVTBGX6DNO1RNI2Fcg/videos" 2>&1 | logger

The secret here is to name the downloaded videos sequentially by using the year for the "season" and the month followed by the day of publication for the episode. This results in an incremental sequence where later episodes appear after earlier episodes, which can easily be indexed and handled by PVRs. When playing back, each season will be a year and all videos can be played back and binge-watched, which is something that could not be done if the videos were added to the PVR as "Other Videos".

Using Plug-ins and Scanners

Perhaps a better way to integrate YouTube videos, such that they are properly rendered with appropriate metadata, is to use a third-party scanner and a YouTube plugin. The "Absolute Series Scanner" Python file should be downloaded to the "Scanners/Series/" folder under the Plex Media Server library folder, and the "YouTube Agent" bundle must be downloaded and saved to the "Plug-ins" folder by following the instructions on their development pages.

The YouTube videos will have to be downloaded using the following yt-dlp output notation:

%(uploader)s [%(channel_id)s]/%(uploader)s - %(title)s [%(id)s].%(ext)s

where the ID of the channel and the ID of the video are included in the name of the folder and of the downloaded video, respectively.

Now, when indexing the folder, Plex Media Server will be able to recognize the videos with the proper title, uploader as well as add season folders.
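For illustration, a hedged yt-dlp invocation using the notation above (the library root, archive path and channel ID are placeholders):

#!/bin/sh
# download a channel using the naming convention expected by the
# Absolute Series Scanner and the YouTube agent
yt-dlp \
        --output "/mnt/storage/YouTube/%(uploader)s [%(channel_id)s]/%(uploader)s - %(title)s [%(id)s].%(ext)s" \
        --download-archive "/scratch/yt-dlp/archives/CHANNEL.txt" \
        --write-info-json \
        "https://www.youtube.com/channel/CHANNEL_ID/videos" 2>&1 | logger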

