The following is a description of a full Servarr stack presented as a deployable collection of services that require a minimal amount of setup. The stack implements some technologies created by Wizardry and Steamworks, such as service resolution, as well as other helper services that address issues related to the Servarr stack, for example a notification mechanism via Gotify and a backfill mechanism via huntarr / cleanuparr.
The following section describes the contents of this stack. Every service is announced on the local network via mDNS, such that URLs like http://sonarr.local or http://autobrr.local will automatically be created and a local browser can be pointed directly at the generated addresses.

The stack can be checked out via Subversion by issuing:
svn co https://svn.grimore.org/servarr-stack
The following is a filesystem overview after checking out the repository via Subversion:
.
├── dockerfiles
│   ├── readarr
│   │   ├── Dockerfile
│   │   ├── Dockerfile.aarch64
│   │   ├── Jenkinsfile
│   │   ├── jenkins-vars.yml
│   │   ├── LICENSE
│   │   ├── package_versions.txt
│   │   ├── README.md
│   │   ├── readme-vars.yml
│   │   └── root
│   │       ├── donate.txt
│   │       └── etc
│   │           └── s6-overlay
│   │               └── s6-rc.d
│   │                   ├── init-config-end
│   │                   │   └── dependencies.d
│   │                   │       └── init-readarr-config
│   │                   ├── init-readarr-config
│   │                   │   ├── dependencies.d
│   │                   │   │   └── init-config
│   │                   │   ├── run
│   │                   │   ├── type
│   │                   │   └── up
│   │                   ├── svc-readarr
│   │                   │   ├── data
│   │                   │   │   └── check
│   │                   │   ├── dependencies.d
│   │                   │   │   └── init-services
│   │                   │   ├── notification-fd
│   │                   │   ├── run
│   │                   │   └── type
│   │                   └── user
│   │                       └── contents.d
│   │                           ├── init-readarr-config
│   │                           └── svc-readarr
│   └── whisparr
│       ├── Dockerfile
│       └── rootfs
│           └── usr
│               └── local
│                   └── bin
│                       └── run
└── services
    ├── autobrr.service
    ├── bazarr.service
    ├── byparr.service
    ├── checkrr.service
    ├── cleanuparr.service
    ├── gotify.service
    ├── huntarr.service
    ├── lidarr.service
    ├── plex.service
    ├── prowlarr.service
    ├── qbittorrent.service
    ├── radarr.service
    ├── readarr.service
    ├── registry.service
    ├── sonarr.service
    ├── traefik.service
    ├── watchtower.service
    └── whisparr.service

22 directories, 41 files
Regrettably, some of the Docker services will have to be built using the provided Dockerfile within the corresponding service sub-folders of the dockerfiles top-level folder. Right now, the readarr and whisparr services require a manual build, which can be accomplished by changing directory into the service folder, for example dockerfiles/whisparr, and then issuing a build with docker build. The stack provides the registry image such that it can be deployed before building the images, in order to store the images after they are built. The stack references localhost:5000 in multiple files, which represents the hostname and port that the registry image runs on.
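For instance, assuming the registry image from this stack is already running and reachable at localhost:5000, the two images could be built and pushed as follows; the :latest tags, and the readarr tag name in particular, are just examples:

# build the whisparr image and tag it so that it points at the local registry
cd dockerfiles/whisparr
docker build -t localhost:5000/whisparr:latest .
# push the freshly built image onto the local registry
docker push localhost:5000/whisparr:latest

# repeat for readarr
cd ../readarr
docker build -t localhost:5000/readarr:latest .
docker push localhost:5000/readarr:latest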
The service files within the services top-level folder contain the pre-configured systemd unit files that should be dropped into /etc/systemd/system, followed by reloading the unit files with systemctl daemon-reload and then finally starting them with systemctl or making them start on boot with systemctl enable. Some of these service files contain local information that will need to be changed. For example, checking the default Servarrs for health is implemented by making a call to the Servarr inside the container using the locally-generated API key, such that the Servarr service files within the top-level services folder will have to be updated in order to set the local API key.
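As an illustration of the kind of health check mentioned above, a minimal sketch against Sonarr could look like the following; the hostname, the default port 8989 and the placeholder API key are assumptions and have to be replaced with the values of the local instance:

# query the Sonarr v3 health endpoint using the locally-generated API key;
# an HTTP 401 response means the API key has not been set correctly
curl -H "X-Api-Key: 0123456789abcdef0123456789abcdef" \
    http://localhost:8989/api/v3/health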
The paths mentioned in the service files are also local, but they are not many and are easily (re)created. Here is a description of the folders mentioned within the service files:

- /mnt/archie - the directory where the media is stored,
- /mnt/swarm/downloads - a directory where temporary downloads are stored (before being moved to some sub-folder of /mnt/archie),
- /mnt/swarm/docker/data - a directory where all the Docker services store their local configurations.

Either create these folders and/or adjust the mount points of the various media, or use regular expressions to batch-update all the service files so that the paths match the local configuration, as sketched below.
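For example, assuming the media actually lives at /srv/media instead of /mnt/archie (a hypothetical path), the service files can be batch-updated in place before copying them over:

# rewrite the media path in every service file; adjust the replacement to the local mount point
sed -i 's|/mnt/archie|/srv/media|g' services/*.service

# list any service file that still references the old path
grep -l '/mnt/archie' services/*.service || echo "all paths updated"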
As described in our computer engineering section, it is a good idea to map these folders onto different storage technologies given that their usage patterns are very different. Here is a recommended rundown for the aforementioned folders:

- /mnt/archie - is used as long-term storage, such that it is best placed on a slow-but-large storage medium like a spinning 3.5" disk. Given its usage pattern, one could even configure the drive to spin down, because this device will rarely be written to and will only be read when the media is watched via, say, the Plex Media Player,
- /mnt/swarm/downloads - represents the storage folder for downloads and we recommend placing this folder onto a USB thumb drive (even more so if USB3 is available) because this folder can be lost completely without anything of value being lost,
- /mnt/swarm/docker/data - is a long-term configuration storage folder, with some read and some write access, that sits between /mnt/archie and /mnt/swarm/downloads in terms of IO activity; the folder is somewhat vital in the sense that it stores the current configuration of the services, but it is not critical like /mnt/archie, which stores the actual data.

With the previous observations in mind, here are the large steps required to set this up, with some possible intermediary steps that can be omitted:
- copy the service files from the services top-level folder to the /etc/systemd/system/ folder and reload the system services by issuing the command systemctl daemon-reload,
- start the registry service with systemctl start registry,
- change directory into the dockerfiles sub-folder of the checked-out Subversion stack and build the sub-folders that require it (to date, readarr and whisparr need building), for example by using docker build -t localhost:5000/whisparr:latest ., followed by docker push localhost:5000/whisparr:latest to push the image onto the local registry,
- go through the service files copied over to /etc/systemd/system/ in order to change local parameters such as paths or API keys,
- start the services using systemctl start,
- with these operations completed, the stack should be fully started and it will also start offering services on the local network via mDNS; the whole sequence is condensed into the command sketch following this list.
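Leaving out the editing of paths and API keys, the sequence above might be condensed into the following minimal sketch; the list of units passed to systemctl enable is only an example and should cover every service that is wanted at boot:

# install the pre-configured unit files and reload systemd
cp services/*.service /etc/systemd/system/
systemctl daemon-reload

# bring up the local registry first so that locally-built images can be pushed to it
systemctl start registry

# build and push the readarr and whisparr images as shown earlier, then,
# after adjusting paths and API keys in the unit files, start the services
# and make them come up at boot
systemctl enable --now sonarr radarr prowlarr qbittorrent plex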
The service files are configurable to some degree, for example the mDNS publisher for Docker containers allows customizing the local domain.
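Once the stack is up, the mDNS announcements can be verified from any other machine on the local network; the sketch below assumes Avahi is the mDNS responder and that the default .local domain is kept:

# list everything currently announced via mDNS on the local network
avahi-browse --all --resolve --terminate

# resolve one of the automatically generated hostnames
avahi-resolve --name sonarr.local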
The stack is designed with minimalism in mind and we were surprised that we were able to squeeze the project into USD 200, with USD 100 going to the mini PC and the other USD 100 going to the 3.5" hard-drive enclosure. The stack was deployed to a custom hardware NAS, also built in-house by Wizardry and Steamworks, because the design turned out way smaller and even more reliable than using an inexpensive NAS enclosure. One very helpful requirement is to make sure that the hardware that runs the operating system and the Docker services has some modern, low-overhead means of accelerating transcoding; lots of mini PCs ship with an Intel GPU that supports QSV for hardware transcoding of streams.
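As a quick way to check whether a candidate machine exposes an Intel GPU that can be used for hardware transcoding, assuming the VA-API tooling is installed on the host, the following can be run:

# the presence of a render node indicates a GPU that can be handed to the containers
ls -l /dev/dri/

# list the codecs the GPU can decode and encode through VA-API (provided by the libva-utils package)
vainfo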
The following image is a screenshot of the glances program showing a system overview with all the services running and Plex playing some music.
Remember that this is a custom NAS enclosure running a fully automated media ecosystem that pulls media from all over the Internet and then centralizes that data to be read by Plex Media Player. Quite frankly, the only process that stresses the system is Plex Media Player itself, which has lots of built-in background tasks that perform media file accounting; admittedly, those tasks could be further limited to lower the stress on the machine.
One ironic observation is that the custom hardware NAS that we built seems to be even more stable than a setup using an inexpensive RAID enclosure, which we ended up attributing to "the myth of a cheap RAID NAS". Intuitively, daisy-chaining liabilities results in a larger overall liability, not a smaller one, even accounting for the extra perks that RAID would offer such as data deduplication or data mirroring. For computer engineering in general, the realization hinges on "cheap" hardware that is missing crucial functionality like SATA NCQ, or that contains trashy controllers (like the SATA backplane itself) or rubbish RAID controllers (this even happens with industrial equipment, like the nightmarish HP RAID controllers), all of which leads to intermittent failures and spurious disconnects that require the whole system to be rebooted. This is even mentioned for USB drives in the inexpensive NAS solutions write-ups, where it is suggested that daisy-chaining a bunch of USB3 drives via a USB3 hub might be the most cost-effective solution. However, it turned out that a more monolithic build, with fewer moving parts and much, much cheaper than the "inexpensive NAS" (due to not having to buy smaller individual drives), leads, as can be observed, to an autonomous uptime of 15 days.
Or, very simply put, one large 3.5" drive connected via USB beats the hell out of multiple 2.5" external drives bridged at the logical level through an "inexpensive" RAID solution.
The custom NAS rebuild is the result of mitigating data loss after four separate 2.5" Seagate SATA drives failed at the same time (all of them with roughly 3 years of RAID/ZFS usage), almost taking down the whole ZFS tank. We recently transferred a large amount of data using one of these "failed drives" after placing it back into its original external enclosure and using it as an external drive and, guess what, we experienced no errors, either while transferring the data onto the drive or afterward while transferring the data off the drive. We speculate that the usage pattern of the drive itself triggers some software or hardware error in its controller circuit that makes the drive unsuitable for continuous usage, but when the drive is used casually as an external "from time to time" hard-drive for ferrying files, the drive will be fine and will have a much longer lifespan.