About

The following is a description of a full Servarr stack presented as a deployable collection of services that require a minimal amount of setup. The stack implements some technologies created by Wizardry and Steamworks, such as service resolution, as well as other helper services that address issues related to the Servarr stack, for example a notification mechanism via Gotify and a backfill mechanism via huntarr / cleanuparr.

Ingredients

The following section describes the contents of this stack.

Media Viewers

  • Plex Media Server is included as the default media viewer, being just a little more refined than the free and open-source Jellyfin; Plex Media Server is actually a stunningly good deal, requiring a one-time payment of $100 for a lifetime pass and access to the TV/TV tuner area of the application, along with other goodies. The price has stayed the same since forever and, in spite of being commercial, Plex does not seem to want to gouge its users for more money by pestering them with other hidden payments.

Download Clients

  • qBittorrent - in principle, rTorrent's minimalism might have made it a better option, but qBittorrent is supported by more software and some of the Servarr helpers, such as cleanuparr, do not even support rTorrent

Notification System

  • gotify - a self-hosted notification server; this implements the notification mechanism mentioned in the introduction, allowing the various Servarrs and helpers to push notifications

Official Servarrs

  • bazarr, radarr, lidarr, readarr, whisparr, prowlarr and sonarr; all of these constitute the standard Servarr stack as listed on the Servarr wiki, with readarr, the book manager, perhaps being the only one in a state of flux, apparently due to infighting between developers and other political interests,

Servarr Helpers

  • autobrr - a helper Servarr that works on the principle that private torrent trackers typically announce their releases on IRC first and only after a longer delay does the release appear on the private tracker website; autobrr connects to multiple private trackers at once and monitors their IRC channels for releases. This solves the issue that live content such as TV shows is typically downloaded by sonarr much later, only when the episode appears on the tracker website, by helping sonarr pull the TV episode much faster, ideally right after it is recorded by a group and announced as a release on the tracker IRC channel,
  • huntarr (swaparr) / cleanuparr - huntarr and its built-in side-arr swaparr are two helper Servarrs that solve the backfill issue that Servarr has, namely the fact that most Servarrs are conceived to only download recent content and are not actually designed to look for past missing content and then download the corresponding media to fill the library (we took a jab at this as well using shell scripts, with relatively good results); cleanuparr is created by the same author as huntarr and swaparr and is advertised as a more extended version of swaparr, a Servarr that will scan default Servarrs like sonarr or radarr, remove stalled and slow downloads, as well as initiate searches for missing media,
  • byparr - byparr is a replacement for FlareSolverr, a Cloudflare bypasser, implemented as a proxy service that prowlarr can use in order to automatically search for releases; it comes as a response to FlareSolverr development being relatively slow and without updates,
  • checkrr - checkrr is a no-interface Servarr that fulfills the very useful role of going through the media stash and looking for corrupt or broken media; this can happen, for example, during migrations or due to data loss where files become corrupt or partially lost, such that checkrr will find the broken media and delete it

Utility

  • watchtower - watchtower is a zero-configuration service stack updater that will search for updated docker images, download them, update the local store and run the updated container automatically,
  • registry - a local registry implementation for Docker that is a must-have in case some of the services have to be manually rebuilt in order to include or modify parts of the services,
  • traefik - traefik is an automatic reverse proxy designed for Docker that can automatically set up reverse paths and solve domain-name issues for a Docker stack; for this stack, traefik is combined with the avahi-based mDNS publisher created by Wizardry and Steamworks to provide seamless local domain-name resolution, which makes all the Docker services available on the local network by browsing to their names directly, for example http://sonarr.local or http://autobrr.local, URLs that are automatically generated and to which a local browser can simply be pointed (a quick check is sketched below)
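
A quick way to verify that the mDNS publishing works, assuming the stack is up and that the hostnames follow the pattern above, is to resolve and probe one of the names from another machine on the local network:

# resolve the published mDNS name (requires the avahi-utils package)
avahi-resolve -4 -n sonarr.local
# probe the web interface exposed through the traefik reverse proxy
curl -I http://sonarr.local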

Usage

The stack can be checked out via Subversion, by issuing:

svn co https://svn.grimore.org/servarr-stack

The following is a filesystem overview after checking out the repository via Subversion:

.
├── dockerfiles
│   ├── readarr
│   │   ├── Dockerfile
│   │   ├── Dockerfile.aarch64
│   │   ├── Jenkinsfile
│   │   ├── jenkins-vars.yml
│   │   ├── LICENSE
│   │   ├── package_versions.txt
│   │   ├── README.md
│   │   ├── readme-vars.yml
│   │   └── root
│   │       ├── donate.txt
│   │       └── etc
│   │           └── s6-overlay
│   │               └── s6-rc.d
│   │                   ├── init-config-end
│   │                   │   └── dependencies.d
│   │                   │       └── init-readarr-config
│   │                   ├── init-readarr-config
│   │                   │   ├── dependencies.d
│   │                   │   │   └── init-config
│   │                   │   ├── run
│   │                   │   ├── type
│   │                   │   └── up
│   │                   ├── svc-readarr
│   │                   │   ├── data
│   │                   │   │   └── check
│   │                   │   ├── dependencies.d
│   │                   │   │   └── init-services
│   │                   │   ├── notification-fd
│   │                   │   ├── run
│   │                   │   └── type
│   │                   └── user
│   │                       └── contents.d
│   │                           ├── init-readarr-config
│   │                           └── svc-readarr
│   └── whisparr
│       ├── Dockerfile
│       └── rootfs
│           └── usr
│               └── local
│                   └── bin
│                       └── run
└── services
    ├── autobrr.service
    ├── bazarr.service
    ├── byparr.service
    ├── checkrr.service
    ├── cleanuparr.service
    ├── gotify.service
    ├── huntarr.service
    ├── lidarr.service
    ├── plex.service
    ├── prowlarr.service
    ├── qbittorrent.service
    ├── radarr.service
    ├── readarr.service
    ├── registry.service
    ├── sonarr.service
    ├── traefik.service
    ├── watchtower.service
    └── whisparr.service

22 directories, 41 files

Regrettably, some of the Docker services have to be built using the provided Dockerfile within the dockerfiles top-level folder and the corresponding service sub-folder. Right now, the following services require a manual build:

  • readarr - readarr is in a state of flux due to developer disagreements, such that official Docker images were dropped, thereby requiring a third-party source for Docker images,
  • whisparr - whisparr unfortunately does not have any official images and automatic image builders like LinuxServer.io for some reason do not have images available for whisparr, such that these images have to be built locally

Building these images can be accomplished by changing directory into the corresponding sub-folder, for example dockerfiles/whisparr, and then issuing a build using docker build, as sketched below. The stack provides the registry image such that it can be deployed before building, in order to have somewhere to store the image after it is built. The stack references localhost:5000 in multiple files, which represents the hostname and port that the registry runs on.
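
As a hedged sketch, assuming the repository was checked out into a folder named servarr-stack and that the registry service is already running on localhost:5000, building and publishing the whisparr image would look roughly like the following:

# change into the sub-folder containing the whisparr Dockerfile
cd servarr-stack/dockerfiles/whisparr
# build the image and tag it such that it points at the local registry
docker build -t localhost:5000/whisparr:latest .
# push the freshly built image into the local registry
docker push localhost:5000/whisparr:latest

The same two docker commands apply to readarr, with the tag changed accordingly.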

The service files within the services top-level folder contain the pre-configured systemd unit files that should be dropped into /etc/systemd/system, followed by reloading the unit files with systemctl daemon-reload and then finally starting the services using systemctl start or making them start on boot using systemctl enable. Some of these service files contain local information that needs to be changed. For example, checking the default Servarrs for health is implemented by making a call to the Servarr inside the container using the locally-generated API key, such that some of the Servarr service files within the top-level services folder have to be updated in order to set the local API key (a hedged lookup example follows the folder list below). The paths mentioned in the service files are also local, but they are not many and are easily (re)created. Here is a description of the folders mentioned within the service files:

  • /mnt/archie - is the directory where the media is stored
  • /mnt/swarm/downloads - is a directory where temporary downloads are stored (before being moved to some sub-folder of /mnt/archie),
  • /mnt/swarm/docker/data - is a directory where all the Docker services store their local configurations
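
As a hedged illustration, the Servarrs keep their locally-generated API key in a config.xml file underneath their configuration folder, so, assuming that sonarr stores its configuration under /mnt/swarm/docker/data/sonarr, the key could be read out with something along the lines of:

# print the API key from sonarr's configuration file (the path is an assumption)
grep -oP '(?<=<ApiKey>)[^<]+' /mnt/swarm/docker/data/sonarr/config.xml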

Coming back to the folders themselves, either create them and/or adjust the mount points of the various media, or use regular expressions to batch-update all the service files so that the paths match the local configuration you might have (a sketch follows the list below). As described in our computer engineering section, it is a good idea to map these folders to different storage technologies given that their usage patterns are very different. Here is a recommended rundown for the aforementioned folders:

  • /mnt/archie - is used as long-term storage such that it is best placed on a slow-but-large storage medium like a spinning 3.5" disk. Given its usage pattern, one could even configure the drive to spin down because this device will rarely be used for writing and will only be used for reading when the media is watched via, say, the Plex Media Player,
  • /mnt/swarm/downloads - represents the storage folder for downloads and we recommend placing this folder on a USB thumb drive (even more so if USB3 is available) because its contents are expendable; if the folder is lost completely, nothing of value is lost,
  • /mnt/swarm/docker/data - is a long-term configuration storage folder with some read access and some write access, sitting between /mnt/archie and /mnt/swarm/downloads in terms of IO activity; the folder is somewhat vital in that it stores the current configuration of the services, but it is not critical like /mnt/archie, which stores the actual data
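
For instance, creating the default folders or batch-rewriting the paths could look like the following sketch, where /srv/media and /srv/downloads are purely hypothetical local replacements:

# create the default folders referenced by the service files
mkdir -p /mnt/archie /mnt/swarm/downloads /mnt/swarm/docker/data
# or, alternatively, rewrite the paths in all the copied service files to match a local layout
sed -i 's|/mnt/archie|/srv/media|g; s|/mnt/swarm/downloads|/srv/downloads|g' /etc/systemd/system/*.service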

Setup Guideline

With the previous observations in mind, here are the broad steps required to set the stack up, with some possible intermediary steps that can be omitted:

  • install the operating system and required tools,
  • check out the Subversion repository containing the stack,
  • move all the files ending in .service from the services top-level folder to the /etc/systemd/system/ folder and reload the system services by issuing the command systemctl daemon-reload,
  • launch the registry service with systemctl start registry,
  • switch to the dockerfiles sub-folder of the checked-out Subversion stack and build the sub-folders (to date, readarr and whisparr need building), for example by using docker build -t localhost:5000/whisparr:latest ., followed by pushing the images onto the local registry by issuing docker push localhost:5000/whisparr:latest,
  • go back to the system service folder at /etc/systemd/system/ and go through the service files copied over in order to change local parameters such as paths or API keys,
  • start the remaining services using systemctl start,
  • connect to each started service and configure the service for usage

With these operations completed, the stack should be fully started and it will also start offering services on the local network via mDNS.
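
A condensed, hedged rendition of the steps above in shell form, assuming a system with Docker, systemd and Subversion already installed, might look like the following:

# check out the stack
svn co https://svn.grimore.org/servarr-stack
# install the systemd service files and reload systemd
cp servarr-stack/services/*.service /etc/systemd/system/
systemctl daemon-reload
# bring up the local registry first so that built images can be pushed to it
systemctl start registry
# build and push the images that have no official source (repeat for readarr)
( cd servarr-stack/dockerfiles/whisparr && \
  docker build -t localhost:5000/whisparr:latest . && \
  docker push localhost:5000/whisparr:latest )
# after editing the service files for local paths and API keys,
# start the remaining services and make them start on boot
systemctl enable --now plex qbittorrent prowlarr sonarr radarr traefik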

The service files are configurable to some degree; for example, the mDNS publisher for Docker containers allows customizing the local domain.
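
Such tweaks are best kept in a systemd drop-in override rather than in the shipped service file itself; purely as an assumption about how the mDNS publisher takes its parameters, an override for traefik.service could pass the desired domain through an environment variable:

# /etc/systemd/system/traefik.service.d/override.conf - a hedged example;
# MDNS_DOMAIN is a hypothetical variable name standing in for whatever
# setting the mDNS publisher actually reads the domain from
[Service]
Environment=MDNS_DOMAIN=lan

# apply the override
systemctl daemon-reload && systemctl restart traefik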

Hardware

The stack is designed with minimalism in mind and we were surprised that we were able to squeeze the project into USD200, with USD100 going towards the mini PC and the other USD100 going towards the 3.5" hard-drive enclosure. The stack was deployed to a custom hardware NAS, also built in-house by Wizardry and Steamworks, because the design turned out way smaller and even more reliable than using an inexpensive NAS enclosure. One very helpful requirement would be to make sure that the hardware running the operating system and the Docker services has some updated and low-overhead means of accelerating transcoding - lots of mini PCs ship with an Intel GPU that supports QSV for hardware transcoding of streams.

The following image is a screenshot of the glances program showing a system overview with all the services running and Plex playing some music.

Remember that this is a custom NAS enclosure that runs a fully automated media ecosystem that automatically pulls media from all over the Internet and then centralizes that data to be served by Plex Media Server. Quite frankly, the only process that stresses the system is Plex Media Server, which has lots of built-in background tasks performing media file accounting and, admittedly, those tasks could be further limited to lower the stress on the machine.

One ironic observation is that the custom hardware NAS that we built seems to be even more stable than a setup using an inexpensive RAID enclosure, which we ended up attributing to "the myth of a cheap RAID NAS". Intuitively, daisy-chaining liabilities results in a larger overall liability, even accounting for the extra perks that RAID would offer such as data deduplication or data mirroring. For computer engineering in general, the realization hinges on "cheap" hardware that is missing crucial functionality like SATA NCQ, contains trashy controllers (like the SATA backplane itself) or rubbish RAID controllers (this even happens with industrial equipment like the nightmarish HP RAID controllers), which leads to intermittent failures and spurious disconnects that require the whole system to be rebooted. This is even mentioned for USB drives in the inexpensive NAS solutions write-ups, where it is suggested that daisy-chaining a bunch of USB3 drives via a USB3 hub might be the most cost-effective solution. However, it turned out that a more monolithic build, with fewer moving parts and a much, much lower price than the "inexpensive NAS" (due to not having to buy smaller individual drives), leads, as can be observed, to an autonomous uptime of 15 days.

Or, very simply put, one large 3.5" drive connected via USB beats the hell out of multiple 2.5" external drives bridged at the logical level through an "inexpensive" RAID solution.

The custom NAS rebuild is the result of mitigating data loss after four separate 2.5" Seagate SATA drives failed at the same time (all of them having roughly 3 years of RAID/ZFS usage), almost taking down the whole ZFS tank. We recently transferred a large amount of data using one of these "failed drives" after placing it back into its original external enclosure and using it as an external drive and, guess what, we experienced no errors during the transfer, neither while moving the data onto the drive nor afterwards while moving the data off the drive. We speculate that the usage pattern of the drive itself triggers some software or hardware error on its controller circuit that makes the drive unsuitable for continuous usage, but when the drive is used casually as an external "from time to time" hard-drive for ferrying files, the drive will be fine and will have a much longer lifespan.

