The proper link format is:

https://mydomain.com/SERVARR/feed/calendar/SERVARR.ics?apikey=API_KEY

where:

SERVARR is one of the Servarr applications (lidarr, radarr, readarr, sonarr or whisparr),
API_KEY is the API key for that Servarr application.

Even though Servarr PVRs are not meant to perform backfill searches, the following short script can be scheduled with cron and will periodically connect to all the PVRs in order to trigger the "search missing" functionality.
#!/usr/bin/env bash
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2024 - License: GNU GPLv3     ##
###########################################################################
# This is a small script that can be used to automatically search for    #
# missing content for different PVR servers under the Servarr family.    #
###########################################################################

###########################################################################
##                            CONFIGURATION                              ##
###########################################################################

# Path to a lock file.
LOCK_FILE='/tmp/@rr-missing-state.lock'
# Turn on verbosity for debugging.
VERBOSE=0
# All SERVARR_* variables are configured sequentially where the first URL
# within SERVARR_COMMAND_URL matches the first API key in SERVARR_API_KEY.
# {lidarr,radarr,readarr,sonarr,whisparr}.tld should be changed to the
# server name of the respective PVR.
SERVARR_COMMAND_URL=(
    http://lidarr.tld/lidarr/api/v1/command
    http://radarr.tld/radarr/api/v3/command
    http://readarr.tld/readarr/api/v1/command
    http://sonarr.tld/sonarr/api/v3/command
    http://whisparr.tld/whisparr/api/v3/command
)
# The API key for each PVR, in sequence, must be added line-by-line to
# the following array definition.
SERVARR_API_KEY=(
    60b725f10c9c85c70d97880dfe8191b3
    5d47bb807bf03f3248c00151c0b00382
    66b9b1109eb98010edf7f135565b0579
    a3db842660bf5d8d432db9d743629fec
    b04999d7fa5215eb7f103e99226baa7f
)
# This variable need not be changed unless the sequence changes.
SERVARR_NAME=(
    lidarr
    radarr
    readarr
    sonarr
    whisparr
)
# This variable need not be changed unless the sequence changes.
SERVARR_COMMAND_NAME=(
    MissingAlbumSearch
    MissingMoviesSearch
    MissingBookSearch
    MissingEpisodeSearch
    MissingEpisodeSearch
)

###########################################################################
##                              INTERNALS                                ##
###########################################################################

# Acquire a lock; mkdir is atomic (KILL cannot be trapped, so it is omitted).
if mkdir "$LOCK_FILE" >/dev/null 2>&1; then
    trap '{ rm -rf "$LOCK_FILE"; }' QUIT TERM EXIT INT HUP
else
    exit 0
fi

for INDEX in "${!SERVARR_COMMAND_URL[@]}"; do
    SEARCH=$(curl -L -s \
        -d "{ \"name\": \"${SERVARR_COMMAND_NAME[$INDEX]}\" }" \
        -H "Content-Type: application/json" \
        -X POST \
        "${SERVARR_COMMAND_URL[$INDEX]}?apikey=${SERVARR_API_KEY[$INDEX]}" | \
        jq -r .status)
    [ "$VERBOSE" = 1 ] && case "$SEARCH" in
        "started")
            echo "Search for missing started on ${SERVARR_NAME[$INDEX]}"
            ;;
        *)
            echo "Failed to search for missing content on ${SERVARR_NAME[$INDEX]}"
            ;;
    esac
done
In order to use the script, change the variables in the CONFIGURATION section of the script accordingly and then schedule the script via cron to run periodically (on Debian, simply drop the script into /etc/cron.daily). Note that backfill searches might add a large number of missing items to the queue, as well as grab them and hand the downloads over to the download client, such that it is recommended to run this script at most once per day. The script works well with the script meant to periodically remove stalled downloads from the PVRs.
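Where /etc/cron.daily is not available, a system crontab entry achieves the same effect; the install path below is an assumption and should be adjusted to wherever the script was saved:

```
# /etc/crontab entry: trigger the backfill search once per day at 04:00;
# /usr/local/bin/servarr-missing.sh is an assumed install path.
0 4 * * * root /usr/local/bin/servarr-missing.sh
```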
A companion service can be used to check whether the downloaded media files are sane; one such service is checkrr, which can be run under Docker and is able to crawl media collections and then issue a redownload request to Sonarr, Radarr and Lidarr in order to download the media again.
Sometimes when accessing Activity→Queue, due to various reasons and bugs, the message "Failed to load queue" might be displayed. Typically what happens is that some release is stuck in the queue and gets corrupted such that the queue fails to be displayed.

The easiest fix, short of reporting the bug, is to shut down the Servarr (Sonarr, Radarr, Lidarr or Whisparr), access the database and truncate the PendingReleases table.
For example, using the command-line SQLite client (and for the Radarr Servarr):
sqlite3 radarr.db
followed by:
DELETE FROM PendingReleases;
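As a sketch, the same truncation can be performed non-interactively in a single command. The demo below first builds a throwaway database (demo.db, an illustrative file) with a PendingReleases table so the command can be tried safely; against the real radarr.db, only the DELETE line applies, and the Servarr must be stopped first:

```shell
# Build a throwaway database with a PendingReleases table for illustration.
rm -f demo.db
sqlite3 demo.db 'CREATE TABLE PendingReleases (Id INTEGER); INSERT INTO PendingReleases VALUES (1);'
# Truncate the table non-interactively, exactly as one would for radarr.db.
sqlite3 demo.db 'DELETE FROM PendingReleases;'
# Verify the table is now empty (prints 0).
sqlite3 demo.db 'SELECT COUNT(*) FROM PendingReleases;'
```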
Restarting the Servarr should now fix the issue and allow the queue to be displayed correctly.
One of the problems when spanning multiple trackers is that some private trackers require special handling or are more important than public trackers due to their entry requirements. Similarly, downloading large amounts of files is to be expected with automated systems, due to the heuristics used when searching for, adding and rejecting files in order to match the requirements, which sometimes ends up leaving files behind (which is not really a problem, given that seeding older files will just improve the tracker ratio).
One valid combination that matches a scalable goal would be between the "flood" frontend and the "rTorrent" torrent client. Both of these can be run within Docker in order to remove the requirement of having to pollute the base system with third-party requirements (such as a node.js stack for the "flood" frontend).
Jesse Chan, the creator of "flood", has a nice stack that includes the "rTorrent" client with modifications to make "rTorrent" work seamlessly with "flood". There are several Docker containers available, including a container with both "flood" and "rTorrent", but perhaps the best topology is to run one single "flood" instance and, through that instance, manage multiple "rTorrent" clients - with a single "rTorrent" client being able to handle perhaps up to a thousand torrent files at the same time.
The following sections include service files that can be dropped into /etc/systemd/system on a machine that is supposed to run both "flood" and "rTorrent" in order to start the services on boot. In other words, save the contents of both service files, say as flood.service, respectively rtorrent.service, drop them into /etc/systemd/system, issue systemctl daemon-reload and then start both of them with systemctl start flood, respectively systemctl start rtorrent.
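The steps above can be condensed into the following sequence (assuming both unit files were saved in the current working directory; enabling the units makes them start on boot):

```
install -m 644 flood.service rtorrent.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now flood rtorrent
```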
There are no special requirements to running "flood", the main change from the original Jesse Chan configuration being that "flood" is in this case redirected to use the path /config as its local storage (the official container uses the XDG convention with the state directory placed at $HOME/.local/share/flood, which is unnecessarily complicated given containerization). The trick is to omit the HOME environment variable declaration and then use --rundir /config.
[Unit]
Description=Flood Docker Container
After=docker.service
Requires=docker.service
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=5s
ExecStartPre=/usr/bin/docker pull jesec/flood
ExecStart=/usr/bin/docker run --name=flood \
    --rm \
    --interactive \
    --user 0:0 \
    --privileged \
    -p 3000:3000 \
    -v /mnt/docker/data/flood:/config \
    -v /mnt/downloads/:/mnt/downloads/ \
    jesec/flood \
    --port 3000 \
    --rundir /config
ExecStop=/usr/bin/docker rm -f flood
TimeoutSec=300

[Install]
WantedBy=multi-user.target
The "rTorrent" Docker service file will start "rTorrent" from Jesse Chan but with the internal built-in configuration file disabled such that the SCGI port can be mapped to a TCP connection.

"rTorrent" normally uses socket files for its SCGI service, given that the service exposes full control of the rTorrent client, which can even lead to privilege-escalation exploits yielding root on the hosting machine. However, Docker is specifically a sandboxed environment, and it is a violation of Docker architecture to connect two independent applications through files instead of through the networking stack. For that purpose, the service file opens up port 5000 for SCGI, allowing XML-RPC communications with the outside world, and it is up to the administrator to secure port 5000 as necessary. With that being said, "flood" will then connect to the "rTorrent" client over port 5000.
[Unit]
Description=rTorrent Docker Container
After=docker.service
Requires=docker.service
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=5s
ExecStartPre=/usr/bin/docker pull jesec/rtorrent
ExecStart=/usr/bin/docker run --name=rtorrent \
    --rm \
    -h HOST_NAME \
    --interactive \
    --user 0:0 \
    --privileged \
    -p 62296:62296 \
    -p 62296:62296/udp \
    -p 6881/udp \
    -p 5000:5000 \
    -v /mnt/docker/data/rtorrent:/config \
    -v /mnt/downloads/:/mnt/downloads/ \
    jesec/rtorrent \
    -n \
    -b 0.0.0.0 \
    -d /mnt/downloads/ \
    -o session.path.set=/config,system.daemon.set=true,network.port_range.set=62296-62296,dht.port.set=6881,network.scgi.open_port=0.0.0.0:5000
ExecStop=/usr/bin/docker rm -f rtorrent
TimeoutSec=300

[Install]
WantedBy=multi-user.target
The main highlights are:

- HOST_NAME will become the hostname of the container and it should be set to some string that does not change,
- 62296 is the torrent listening port,
- 6881 is the DHT / PEX port,
- 5000 is the SCGI / XML-RPC port,
- /mnt/docker/data/rtorrent is the host path where the "rTorrent" configuration is stored and will be mapped into the Docker container under /config, with the "rTorrent" client setting /config as the session path,
- /mnt/downloads/ is the host path to a directory where downloads will be placed; the path is left as a full absolute path when mapped into the container, such as /mnt/downloads instead of being mapped into the root, as in /downloads, in order to provide compatibility with Servarr servers that like the path to be the same both on the download client and on the Servarr instance.
Note that all the "rTorrent" options are passed via the command-line -o option and that no other configuration file is being used.
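Once the container is running, the SCGI endpoint can be probed without "flood". The following is a minimal sketch: it frames an XML-RPC call in the SCGI netstring format and sends it over TCP; rtorrent.tld is a placeholder for the host running the container, and the helper names are illustrative.

```python
import socket

def scgi_frame(body: bytes) -> bytes:
    # SCGI frames its headers as a netstring: "<len>:<headers>,<body>";
    # CONTENT_LENGTH must come first and SCGI=1 marks the protocol version.
    headers = (b"CONTENT_LENGTH\x00" + str(len(body)).encode() + b"\x00"
               b"SCGI\x001\x00")
    return str(len(headers)).encode() + b":" + headers + b"," + body

def scgi_call(host: str, port: int, body: bytes, timeout: float = 5.0) -> bytes:
    # Send one XML-RPC payload over SCGI and return the raw response.
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(scgi_frame(body))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)

if __name__ == "__main__":
    # system.listMethods enumerates the XML-RPC methods rTorrent exposes.
    payload = (b'<?xml version="1.0"?><methodCall>'
               b"<methodName>system.listMethods</methodName>"
               b"<params/></methodCall>")
    try:
        # rtorrent.tld is a placeholder for the Docker host.
        print(scgi_call("rtorrent.tld", 5000, payload).decode(errors="replace"))
    except OSError as error:
        print(f"SCGI endpoint not reachable: {error}")
```

A long list of method names in the response confirms that port 5000 is reachable and that "flood" will be able to drive the client.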
With this configuration in place, it is now possible to duplicate the "rTorrent" configuration, give it a different name, adjust the ports and paths and run another instance of "rTorrent" that would process torrents for different trackers.
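As a sketch, the second instance would only need a handful of lines of the service file changed (the name, paths and ports below are illustrative); the SCGI container port stays 5000 but is exposed on a different host port, which is then the port the "flood" instance connects to for this client:

```
ExecStart=/usr/bin/docker run --name=rtorrent-second \
    --rm \
    -h HOST_NAME_SECOND \
    --interactive \
    --user 0:0 \
    --privileged \
    -p 62297:62297 \
    -p 62297:62297/udp \
    -p 6882/udp \
    -p 5001:5000 \
    -v /mnt/docker/data/rtorrent-second:/config \
    -v /mnt/downloads/:/mnt/downloads/ \
    jesec/rtorrent \
    -n \
    -b 0.0.0.0 \
    -d /mnt/downloads/ \
    -o session.path.set=/config,system.daemon.set=true,network.port_range.set=62297-62297,dht.port.set=6882,network.scgi.open_port=0.0.0.0:5000
ExecStop=/usr/bin/docker rm -f rtorrent-second
```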