Perhaps one upside of the switch from SysV init to systemd is the ability to create unit files which can facilitate running multiple instances of the same daemon in a distribution-compliant and maintainable way. This short tutorial jots down some ideas about system initialisation scripts and hopefully provides some insight into why multiple instances of the same daemon may be needed. As an example, the tutorial will run multiple instances of the polipo system daemon.
Systemd unit files containing the @ character are special template units that can be created in order to run a systemd service with some given context. For instance, suppose that the file /etc/systemd/system/polipo@.service is created; then running:

systemctl enable polipo@something.service

will create an instance of the service that shall be run by the system with the context something passed to the service file. Further instances can be created, for example:

systemctl enable polipo@tor.service
systemctl enable polipo@i2p.service

will create two further instances, with the context set to tor and i2p respectively.
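As a quick sanity check (this is general systemd behaviour, not something specific to polipo), enabling an instance of a template unit merely creates a symlink named after the instance that points back at the template file. Assuming the template declares WantedBy = multi-user.target, as the unit later in this tutorial does, the result of the command above can be inspected with:

ls -l /etc/systemd/system/multi-user.target.wants/polipo@something.service
# expected to be a symlink pointing at /etc/systemd/system/polipo@.service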
SystemV did not provide a flexible way to create multiple instances of the same initialization script; it was left up to the user to create their own init scripts and place them in /etc/init.d.
The context, in this case something, tor or i2p, can then be retrieved in the service file using the placeholder %i, which will be expanded to something, tor or i2p respectively when each instance is run.
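To make the expansion concrete before getting to polipo, here is a minimal, throwaway template unit; the name echoer@.service and the command are purely illustrative and are not used anywhere else in this tutorial:

# /etc/systemd/system/echoer@.service - hypothetical example
[Unit]
Description = Example service for instance %i

[Service]
# %i is replaced with whatever follows the @ at instantiation time, so
# "systemctl start echoer@tor" ends up running: /bin/echo running instance tor
ExecStart = /bin/echo running instance %i

[Install]
WantedBy = multi-user.target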
In complex server setups, it is sometimes required to create multiple instances of the same daemon in order to listen on multiple addresses. polipo is a caching HTTP proxy that is frequently used in a proxy chain in order to relay HTTP queries from a downstream proxy such as squid to an upstream proxy such as the tor network. For this example, polipo can be used as a middle proxy to relay requests coming from squid, through polipo and then to - and here is the problem - both tor and i2p. However, polipo by itself cannot do split proxying depending on the TLD (unlike squid). As such, two separate instances of polipo must be run, one of them relaying to tor and the other relaying to i2p.
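For completeness, the two per-instance configuration files that will later be referenced as /etc/polipo/config-%i could look roughly like the sketch below; the listening ports are arbitrary choices for this example, while 9050 and 4444 are the customary defaults for tor's SOCKS port and i2p's HTTP proxy:

# /etc/polipo/config-tor (sketch)
proxyAddress = "127.0.0.1"
proxyPort = 8124                      # squid relays .onion requests here
socksParentProxy = "127.0.0.1:9050"   # tor's SOCKS listener
socksProxyType = socks5

# /etc/polipo/config-i2p (sketch)
proxyAddress = "127.0.0.1"
proxyPort = 8125                      # squid relays .i2p requests here
parentProxy = "127.0.0.1:4444"        # i2p's HTTP proxy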
The first step is to disable the distribution-provided polipo service file, by issuing:
systemctl disable polipo
yet leaving the file untouched such that the distribution will not have trouble upgrading.
Next, as per the previous section, the file /etc/systemd/system/polipo@.service is created with the following contents:
[Unit]
Description = Polipo proxy server using configuration %i
After = network.target

[Service]
Type=forking
Restart=always
User=root
Group=root
ExecStart = /usr/bin/polipo -c /etc/polipo/config-%i pidFile=/var/run/polipo/polipo-%i.pid logFile=/var/log/polipo/polipo-%i.log daemonise=true
ExecStartPre = -/bin/mkdir -p /var/run/polipo
ExecStartPre = -/bin/chown proxy:proxy /var/run/polipo
ExecStartPre = -/bin/chmod 755 /var/run/polipo
PIDFile = /var/run/polipo/polipo-%i.pid

[Install]
WantedBy = multi-user.target
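One detail that is easy to forget, and a general systemd requirement rather than anything specific to this setup: after creating or editing a file under /etc/systemd/system, systemd has to be told to re-read its configuration before the new template can be enabled:

systemctl daemon-reload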
As mentioned, if the file is saved and the command:

systemctl enable polipo@tor

is issued, then an instance is created that will pass tor to the service file when it is read in by systemd. The placeholder %i will be expanded, such that for all file references in the previous service file, the %i placeholder will be substituted by tor.
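If in doubt, the expansion can be inspected without starting anything: systemctl cat prints the unit file that will be used for the instance, and systemctl show prints the unit properties after the %i specifier has been resolved:

systemctl cat polipo@tor.service
systemctl show -p ExecStart polipo@tor.service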
In this case polipo, just like any other system daemon, must create and access files, such that running two instances of polipo would make the daemons overwrite each other's files, leading to a certain amount of chaos. By using a unit file with multiple instances, the polipo@tor service will now create its PID file at /var/run/polipo/polipo-tor.pid whilst the polipo@i2p instance will create its PID file at /var/run/polipo/polipo-i2p.pid - the same concept applies to all the other instances created via systemctl.
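Once both instances have been enabled and started, the separation should be visible on disk; the file names below simply follow from the pidFile setting in the unit above:

systemctl start polipo@tor.service polipo@i2p.service
ls /var/run/polipo/
# expected: polipo-i2p.pid  polipo-tor.pid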
Since standardization is lacking on Unix, various system daemons have different requirements that must be satisfied for a daemon to start. Not only is the program itself dependent on some invariant of the environment, but programs also depend on the Linux flavour and on where the distribution places its files.
A good rule is to check how the original distribution-provided script runs the daemon. Following the example, and looking at /etc/init.d/polipo on a Debian distribution, the top of the script contains the following snippet:
# Make sure /var/run/polipo exists.
if [ ! -e /var/run/$NAME ] ; then
    mkdir -p /var/run/$NAME
    chown proxy:proxy /var/run/$NAME
    chmod 755 /var/run/$NAME
fi
The snippet ensures that the directory /var/run/polipo exists and, if it does not, creates the directory and sets the appropriate ownership and permissions such that the polipo daemon can create a PID file in that directory. This dependency has nothing to do with polipo itself but rather with the way that Debian provides the software, ready to run and reliable enough to restart.
Without a doubt, if the procedure that creates /var/run/polipo is not somehow wired into the polipo@.service file, then the daemons will all fail to start. In order to wire in the creation of /var/run/polipo, the ExecStartPre directive is used in the polipo@.service file to maintain the same behaviour:
ExecStartPre = -/bin/mkdir -p /var/run/polipo
ExecStartPre = -/bin/chown proxy:proxy /var/run/polipo
ExecStartPre = -/bin/chmod 755 /var/run/polipo
The - in front of the commands ensures that the unit will not fail to start in case one of the commands cannot be executed successfully (for example, if the directory cannot be created by the mkdir command). All the ExecStartPre directives will be executed sequentially before the actual ExecStart directive is executed.
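As an aside, on more recent systemd versions the same directory preparation can be expressed declaratively with the RuntimeDirectory directive instead of a chain of ExecStartPre commands; note that systemd then creates the directory owned by the unit's User and Group, so the snippet below is a sketch of an alternative rather than a drop-in replacement for the commands above:

[Service]
# systemd creates /run/polipo before the service starts and removes it on stop
RuntimeDirectory = polipo
RuntimeDirectoryMode = 0755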