Docker

Reasons for Using Docker

  • The most useful paradigm that Docker introduces is the prioritization of data over software.
    • In other words, given the sheer amount of available software, a very large trend has been brand loyalty to particular software packages, almost like being part of a fandom, when, in fact, any software that can solve a problem and generate and / or process the desired data is good software to use. With Docker, the adulation of software over data is over.
  • Linux was never designed with tidiness in mind (except for later standards such as the FHS) such that it is fully expected to unpack all the crap in /bin or on top of / and then just deal with the fallout when some file goes astray and starts creating problems. For that reason, Docker is a great solution because it keeps all the requirements for whatever software is run within a container, without polluting the operating system. In some ways, it is the only "sandboxing" feature Linux ever had, with fewer requirements than hypervisor virtualization and more management tools than a common chroot jail.
  • A lot of software has been created over the years by people with varying backgrounds in computer science, much of it, in fact, by scientists working in the natural sciences, and such software tends to depend on obscure libraries with esoteric requirements: obsolete, hard to find and mostly hard to recompile. Much of the time, you really just want to get the software up and running without delving into the ins and outs of how a package is compiled. Unfortunately, the other shoe dropping is that containerized software runs in a black box, and a tangential yet pertinent consequence is that the user is unable to assess how bad the software really is in terms of quality. Sometimes, software that is too difficult to compile should just not be used.
    • There is a corollary to the former where, ironically, some software is so badly designed that Docker even seems to complicate matters further (sometimes people even suspect that this is done deliberately in order to maintain control over the project).

Reasons for Not Using Docker

  • Whilst Docker does its best to present itself as open-source and deeply involved with the "community", the reality is that Docker is a private company whose decisions might not necessarily align with those of the "community", nor with open-source principles (remember when Google made its long-standing free translation service non-free overnight and started charging for it, such that a large slice of the Internet probably still contains third-party translators built to work with Google Translate that can now just be scrapped because they will never work again?).
    • The first warning (concerning Docker) a user gets is when they start working with Docker and find out for the very first time that their image pulls from Docker Hub are, in fact, rate-limited and that, in order to pull images more often, users need to sign up and, for some setups, even become paying customers.
  • Unless the user takes it upon themselves to recompile everything and build containers themselves (there are a few…), pre-made containers are available from a vast array of third-party individuals that may or may not keep the container updated with new software versions, and that may or may not refrain from sliding backdoors into containers that are made public.
    • The typical mitigation is to only use official containers, yet that is not always possible and the task to containerize every single application seems fairly daunting, even in terms of workload.
  • Unless users partition their containers, there is a lot of data duplication and, with the Docker paradigm, service duplication, where two containers might contain the same software (ie: the same database engine).
    • The typical mitigation for this is to split containers up, manually, yet this leads back to the same problem of having the extra workload of (even) breaking up official containers.
  • On the low level, Docker containers are treated as disposable and interaction with the software inside is rather brutal. For example, when a container is shut down, the entire environment is just torn away and disposed of, thereby potentially leaving static data in an unknown state. One example thereof is terminating a container whilst a program inside the container holds a file open, possibly trashing the file when the handle is removed abruptly; other examples include the corruption of databases such as SQLite that make a point of being gracefully opened and gracefully closed for the sake of a consistent (deterministic) workflow.
  • There are some semantic gaps that are justified with a lot of dark magic; for instance, there is no way to "just restart" a container within a swarm. The only way to "restart" is to dispose of the entire environment and then re-create it from scratch, the justification thereof being a bleak mention of how containers in a swarm "are just sent out there" so they cannot be restarted (compared to containers outside a swarm, which can indeed be restarted via "start" and "stop").
  • Typically, every single "fix" that is suggested involves tearing down the setup and then, admittedly using scripts, recreating the environment from scratch, thereby potentially occluding setup mistakes and bad architectural choices, even ones involving the usage of Docker itself.
  • Even though the Docker swarm has (had?) potential, one architectural problem is that the swarm is designed to have a handful of "manager nodes" and a plethora of "worker nodes", with the "manager nodes" being the only ones capable of pushing containers to the swarm as well as performing other administrative tasks. This is a design flaw, and a missed opportunity, because the (few) "manager nodes" become single points of failure, and the entire swarm then needs a central managing system (even if that consists of the manager nodes themselves); making all nodes manager nodes does not work as a workaround to obtain a P2P-like topology!
  • Another problem with the Docker swarm is that there is no way to perform tasks that should be possible (in fact, some of these tasks are "possible", but via various commands that … hack the result):
    • migrating a container onto a given node is not possible,
    • balancing the swarm across the cluster is performed by some crutch-command that is really just meant to refresh the container, not to make the swarm balance itself,
    • (even though this requires extra code) there is no way to let the swarm automatically balance and re-balance itself depending on what machine containers end up on, the only attainable balancing algorithm being just an equal-share spread of containers across machines, but without accounting for the necessary computational requirements of the containers themselves (ie: distribute a workload / CPU bound heavy container to a more powerful machine),
  • The networking side of Docker seems to be neatly brushed under the carpet: opening up containers in certain ways or deploying to a swarm ends up creating various network bridges, overlay networks and other paraphernalia, sometimes even resulting in cutting off the server's connection to the local network, as well as generating "redundant networks" with containers ending up with multiple IP addresses.
  • Networks seem to always be created in the class A private address space (10…) such that if any such network already exists, just installing docker and starting a container will interfere with the local networks.
  • It is very difficult to customize low-level properties of the networks created by Docker, simply because the design of Docker extends across multiple domains (applications, networking, etc.) without fully covering all the use cases. For example, it is very difficult to configure Docker to set a custom MTU for the various interfaces it creates, and typically one would resort to udev rules to configure the interfaces.
  • On a philosophical level, Docker tends to phase out the importance of various software packages and reduces all the talk to "just installing a bunch of applications" - so the official Apache container does not include a module you need? Doesn't matter, scrap the container and find some third-party re-packaged Apache container and use that instead. The result thereof is hiding the need to be well-acquainted or experienced with well-established Linux software packages (Apache, Bind, ISC DHCP), and adding the philosophy of "just scrapping" the current workflow makes Docker the ultimate tool for dumbing down IT in general.
  • There is a problem with precedence (or a clash of interests) when accounting for software that has a built-in update feature, because Docker would rather have the user upgrade the container itself than have the application within update itself. Depending on who packaged the application, a container update might not appear as soon as would be desirable compared to the built-in application update feature.
  • The security gained via containerization is also security lost due to many containers being built on outdated or obsolete distribution versions that no longer receive security updates and are hence vulnerable forever.

Docker Templates

Automatically Update Container Images

The Watchtower container can be run alongside the other containers; it will automatically monitor the various Docker containers and will then automatically stop the containers, update the image and restart the containers.

Similarly, for swarms, shepherd can be deployed within a swarm in order to update containers; a Wizardry and Steamworks guide exists for setting it up.

Rebalance Swarm

The following command:

docker service ls -q | xargs -n1 docker service update --detach=false --force

will list all the services within a swarm and then force the services to be re-balanced and distributed between the nodes of the swarm.

After executing the command, all nodes can be checked with:

docker ps

to see which service got redistributed to which node.

One idea is to run the rebalancing command using crontab in order to periodically rebalance the swarm.
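As a sketch, assuming the rebalancing command from above is used verbatim, a crontab entry (edited via crontab -e on a manager node) could look like the following; the schedule is purely illustrative:

```shell
# Illustrative crontab entry: force a service refresh (and thus a re-spread)
# every night at 04:00; this must run on a swarm manager node.
0 4 * * * docker service ls -q | xargs -n1 docker service update --detach=false --force
```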

Run a Shell in Container within a Swarm

Typically, to open a console within a running container, the user would write:

docker exec -it CONTAINER bash

where:

  • CONTAINER is the name or hash of the container to start "bash" within

However, given that containers are distributed in a swarm, one should first locate the node on which the container is running by issuing:

docker service ps SERVICE

where:

  • SERVICE is the name of a service running in the swarm

The output will display, in one of the columns, the current node that the container is executing on. Knowing the node, a shell on that node has to be accessed, and then the command:

docker ps 

can be used to retrieve the container ID (first column).

Finally, the console can be started within the distributed container by issuing:

docker exec -it CONTAINER_ID sh

where:

  • CONTAINER_ID is the id of the container running in the swarm on the local node
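The node lookup above can also be collapsed into a single command; as a sketch, with SERVICE_NAME standing in for the service in question:

```shell
# Print the node(s) currently running the service's tasks.
docker service ps SERVICE_NAME \
    --filter "desired-state=running" \
    --format "{{.Node}}"
```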

Pushing to Private Registry

The syntax is as follows:

docker login <REGISTRY_HOST>:<REGISTRY_PORT>
docker tag <IMAGE_ID> <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
docker push <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
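For illustration, with hypothetical values throughout (a registry at registry.example.com:5000 and an image ID abc123 for an application called myapp at version 1.0), the sequence becomes:

```shell
docker login registry.example.com:5000
docker tag abc123 registry.example.com:5000/myapp:1.0
docker push registry.example.com:5000/myapp:1.0
```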

Restart Docker if Worker cannot Find Swarm Manager

If a worker cannot find the swarm manager when it starts up then, at the time of writing, Docker simply terminates. This is problematic because the manager might come online after a while, such that the workers should just wait and retry the connection.

On some Linux distributions, such as Debian, Docker is started via a service file located at /lib/systemd/system/docker.service and it can be copied to /etc/systemd/system with some modifications in order to make SystemD restart Docker if it terminates.

On Debian, the service file is missing the RestartSec configuration line, such that it should be added to /etc/systemd/system/docker.service after being copied. Here is the full service file with the added line:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service
Wants=network-online.target containerd.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/sbin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_OPTS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
RestartSec=10s

[Install]
WantedBy=multi-user.target

With this change, SystemD will try to bring Docker back up 10 seconds after it has failed. Unfortunately, this fix has to be applied on all nodes in a Docker swarm.

Achieving an Even-Distribution of Services Across the Swarm

Unfortunately, services do not spread evenly through the swarm, such that re-balancing is necessary. In fact, the strategy for distributing services across the swarm is surprisingly bad, with the manager node taking upon itself most of the services and very few being left over for the last node of the swarm. It seems Docker spreads services with a bucket-fill-like strategy where services are only spread out if the current node is deemed somehow full.

Irrespective of the lack of a strategy, here is one constructed command:

docker service ls | \
    awk '{ print $1 }' | \
    tail -n +2 | \
    xargs docker service ps --format "{{.Node}}" --filter "desired-state=running" | \
    awk ' { node[$0]++ } END { for (i in node) print node[i] } ' | \
    awk '{ x += $0; y += $0 ^ 2 } END { print int(sqrt( y/NR - (x/NR) ^ 2)) }' | \
    xargs -I{} test {} -gt 2 && docker service ls -q | xargs -n1 docker service update --detach=false --force

that performs the following operations in order:

  • retrieves the services in the swarm,
  • gets the number of services running per node,
  • computes the standard deviation of the services running per node,
  • if the standard deviation is greater than $2$ then the rebalancing command is run in order to distribute services across the swarm.

In other words, the distribution strategy of the cluster is to place an equal share of services on each available node.

Intuitively, the command can be placed in a cron script and, compared to just calling the swarm re-distribution command, the script should have no effect when the services are distributed evenly across the nodes due to the standard deviation falling well under $2$ (with $0$ being the theoretical point that the standard deviation should be when the services are evenly spread out).
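The standard-deviation step of the pipeline can be tried in isolation. For instance, feeding it a hypothetical per-node distribution of 8, 1 and 0 services prints 3, which is greater than 2 and would therefore trigger the rebalance:

```shell
# Feed a hypothetical services-per-node distribution (8, 1, 0) through the
# same awk standard-deviation filter used by the pipeline; prints 3.
printf '%s\n' 8 1 0 | \
    awk '{ x += $0; y += $0 ^ 2 } END { print int(sqrt( y/NR - (x/NR) ^ 2)) }'
```

An even distribution such as 2, 2, 2 yields 0 and leaves the swarm untouched.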

Building using The Distributed Compiler

Some packages have to be compiled manually such that it is beneficial to use a distributed compiler in order to distribute the compilation workload across multiple computers. However, the system should be flexible enough to include the edge case when a distributed compiler is not available.

To that end, here is a Dockerfile that is meant to define some variables such that "distcc" will be used to distribute the compilation across a range of computers:

FROM debian:latest AS builder

# define compilation variables
ARG DISTCC_HOSTS=""
ARG CC=gcc
ARG CXX=g++

# install required packages
RUN apt-get --assume-yes update && apt-get --assume-yes upgrade && \
    apt-get --assume-yes install \
        build-essential \
        gcc \
        g++ \
        automake \
        distcc

# ... compile ...
RUN DISTCC_HOSTS="${DISTCC_HOSTS}" CC=${CC} CXX=${CXX} make

and the invocation will be as follows:

docker build \
    -t TAG \
    --build-arg DISTCC_HOSTS="a:35001 b:35002" \
    --build-arg CC=distcc \
    --build-arg CXX=distcc \
    .

where:

  • TAG is a tag to use for the build (can be used to upload to a registry),
  • DISTCC_HOSTS, CC and CXX are the build arguments setting the compiler to distcc and the hosts to be used to compile (in this case, computers a and b listening on ports 35001 and 35002, respectively)

If you would like a ready-made container for distcc, you can use the Wizardry and Steamworks build.

Opening up a Port Across Multiple Replicas

Even though multiple replicas of a container can exist on the same system or be spread out through a swarm, due to the nature of TCP/IP a given port can only be bound by a single process at a time, such that when starting a series of clones of a program, there must exist a way to specify a port range or a series of ports for the instances of the program being launched.

The syntax is as follows:

START_PORT-END_PORT:CONTAINER_PORT

where:

  • START_PORT and END_PORT delimit a range from a starting port to an ending port from which the clones of the programs will select the host port they publish and,
  • CONTAINER_PORT represents the port for the program running within the container that will be exposed.

Interestingly, this feature does not work as expected: whilst the ports will be used across all nodes within the swarm for all replicas of the service, all ports will be replicated by all nodes, such that accessing ports within the port range successively will lead to a service on a different node within the Docker swarm. If stickiness is desired, the current solution at the time of writing is to either use jwilder/nginx-proxy or to just declare multiple services of the same image with the constraints set appropriately for each node in the swarm.
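As a sketch with made-up values, outside a swarm the same range syntax can be exercised by launching several clones with docker run, where each clone is assigned one free host port from the range (nginx is just a placeholder image):

```shell
# Each clone picks one free host port from 8080-8089 for container port 80.
docker run -d -p 8080-8089:80 nginx
docker run -d -p 8080-8089:80 nginx
# "docker ps" then shows which host port each clone received.
```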

Restarting Containers on a Schedule

Depending on the application, in some rare cases containers must be restarted periodically. For example, the invidious documentation states that invidious should be restarted at least once per day or invidious will stop working. There are multiple ways to accomplish that, for instance by using the system scheduler, such as cron on Linux, but the most compact way seems to be to use docker-cli and trigger a restart of the service. For example, the following additional service can be added to the invidious compose file in order to restart invidious at 8pm:

  invidious-restarter:
    image: docker:cli
    restart: unless-stopped
    volumes: ["/var/run/docker.sock:/var/run/docker.sock"]
    entrypoint: ["/bin/sh","-c"]
    command:
      - |
        while true; do
          if [ "$$(date +'%H:%M')" = '20:00' ]; then
            docker restart invidious
          fi
          sleep 60
        done

When running under a swarm, it gets a little more complicated because the controlling commands are only available on manager nodes, such that the supplementary service has to be deployed only on manager nodes in order to restart the service. Here is the modified snippet:

  invidious-restarter:
    image: docker:cli
    restart: unless-stopped
    volumes: ["/var/run/docker.sock:/var/run/docker.sock"]
    entrypoint: ["/bin/sh","-c"]
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    command:
      - |
        while true; do
          if [ "$$(date +'%H:%M')" = '20:00' ]; then
            docker service ls --filter name=general_invidious --format "{{.ID}}" | \
                head -n 1 | \
                xargs -I{} docker service update --force --with-registry-auth "{}"
          fi
          sleep 60
        done

that will make sure that the general_invidious service will be restarted every day at 8pm.

Docker Swarm - Server Contingencies

Here are some useful changes to mitigate various issues with running a Docker swarm:

  • RAM and OOM
    • Docker does not take the available amount of RAM per machine into account, such that any process distributed to a machine that ends up consuming more RAM than the machine has available will simply use up all the RAM on that machine until the machine hangs. An additional OOM killer, such as the SystemD OOM killer (systemd-oomd), could be used to prevent a process started by Docker from grinding the machine to a halt.
  • CPU and hangs
    • The hangcheck-timer module could be used in the absence of a hardware watchdog to reboot the machine in case the machine has stalled for a long time.
    • Linux control-groups can be used in order to limit the total amount of CPU and RAM allocated to Docker as a whole because Docker does not implement any upper limit itself. This would have to be tailored to all nodes in a Docker swarm.
  • Storage
    • Nodes will be similar to storageless thin-clients mounted over NFS, such that log files on each node should be irrelevant. Trimming the logs down in size, in particular under SystemD systems that tend to allow a very large log size, is a good idea.
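For the log-trimming contingency under SystemD, one possible approach (the size is illustrative) is:

```shell
# Shrink the existing journals on a node down to roughly 100 MiB.
journalctl --vacuum-size=100M
# For a persistent cap, set "SystemMaxUse=100M" in /etc/systemd/journald.conf
# and restart systemd-journald afterwards.
```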

Dumping Running Container Process Identifier to Files

The following script was written in order to query the currently running containers on a machine running Docker and then create a directory and write, within that directory, PID files containing the PIDs of the services being run within the Docker containers.

The script was used for monitoring services on multiple machines in a Docker swarm where it was found necessary to retrieve the PID of the services within a Docker container without breaking container isolation.

#!/usr/bin/env bash
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2024 - License: MIT            ##
###########################################################################
 
# path to the swarm state directory where PID files will be stored
STATE_DIRECTORY=/run/swarm
 
if [ ! -d "$STATE_DIRECTORY" ]; then
    mkdir -p "$STATE_DIRECTORY"
fi
 
DOCKER_SWARM_SERVICES=$(docker container ls --format "{{.ID}}" | \
    xargs docker inspect -f '{{.State.Pid}} {{(index .Config.Labels "com.docker.stack.namespace")}} {{(index .Config.Labels "com.docker.swarm.service.name")}}')
while IFS= read -r LINE; do
    read -r PID NAMESPACE FULLNAME <<< "$LINE"
    IFS='_' read -r NAMESPACE NAME <<< "$FULLNAME"
    PIDFILE="$STATE_DIRECTORY/$NAME"".pid"
    if [ ! -f "$PIDFILE" ]; then
        echo $PID >"$PIDFILE"
        continue
    fi
    test $(cat "$PIDFILE") -eq $PID || \
        echo $PID >"$PIDFILE"
done <<< "$DOCKER_SWARM_SERVICES"
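Assuming the script above is saved as /usr/local/bin/swarm-pids.sh (an illustrative path) and marked executable, it can then be run periodically from cron:

```shell
# Illustrative crontab entry: refresh the PID files once per minute.
* * * * * /usr/local/bin/swarm-pids.sh
```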

Searching Logs on the Command Line

It seems that the Docker logs command will print out the logs on stderr, such that piping the output to grep or other tools will not work properly. In order to make piping work, stderr has to be redirected to stdout and then piped to whatever tool needs to be used:

docker service logs --follow general_mosquitto 2>&1 | grep PING
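The effect of the redirection can be illustrated with a stand-in function that, like the Docker logs command, writes to stderr:

```shell
# Stand-in for a command that, like "docker service logs", writes to stderr.
emit() { echo "2024-01-01 PING received" >&2; }

# Without the redirect, grep sees an empty stdout stream:
emit 2>/dev/null | grep -c PING || true    # prints 0
# With stderr merged into stdout, the pipe works as intended:
emit 2>&1 | grep -c PING                   # prints 1
```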

Repositories not Signed in Docker Container

Sometimes the errors claiming that repositories are not signed during a Docker build are due to a lack of space on the hard-drive. The errors are along the lines of:

#7 0.692 Get:1 http://deb.debian.org/debian bookworm InRelease [151 kB]
#7 0.771 Get:2 http://deb.debian.org/debian bookworm-updates InRelease [55.4 kB]
#7 0.814 Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
#7 0.869 Err:1 http://deb.debian.org/debian bookworm InRelease
#7 0.869   At least one invalid signature was encountered.
#7 0.954 Err:2 http://deb.debian.org/debian bookworm-updates InRelease
#7 0.954   At least one invalid signature was encountered.
#7 1.066 Err:3 http://deb.debian.org/debian-security bookworm-security InRelease
#7 1.066   At least one invalid signature was encountered.
#7 1.101 Reading package lists...
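Since the root cause is usually a full disk, checking and reclaiming space is a sensible first step:

```shell
# Check free space where Docker stores its data (default /var/lib/docker).
df -h /var/lib/docker
# Reclaim space from unused images, containers and build cache; the
# --volumes flag also removes unused volumes, so use it with care.
docker system prune --all --volumes
```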

Healthcheck within Docker Compose file vs. Healthcheck within Dockerfile

Both Docker compose files and Dockerfiles allow the creation of health checks; in both cases the health check command is executed inside the container, the difference being that health checks placed within compose files are defined per-deployment and override whatever the image defines, whilst health checks within Dockerfiles are baked into the image and travel with it.

If possible, it is preferable to create health checks within the Dockerfile when building a container image, mainly because this represents a separation of concerns and ensures that any deployment of the image receives the health check without further configuration.

Getting Docker UTF-8 Support on Debian

Some software requires the console to be set to UTF-8, in particular software that deals with the Linux command line, such as Jenkins. By default, the debian and debian-slim images are configured with a POSIX locale, such that the locale has to be changed to UTF-8 during the build process of the image.

The following snippet should be inserted into a Dockerfile that inherits from debian or debian-slim images in order to set the locale to UTF-8:

# UTF-8 support
RUN apt-get update && \
    apt-get install --assume-yes locales && \
    echo "en_US.UTF-8 UTF-8" | tee -a /etc/locale.gen && \
    locale-gen
    
# set environment variables
ENV LC_ALL=en_US.UTF-8
ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US.UTF-8

Docker Resource Consumption Accounting using Linux Control Groups

Docker implements special support for c-groups in order to allow controlling the resource usage of Docker itself. In order to enable c-groups, edit or create /etc/docker/daemon.json and add the following contents:

{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "cgroup-parent": "docker_limits.slice"
}

The configuration will:

  • enable the native SystemD c-groups driver,
  • use the docker_limits.slice slice to limit resource consumption.

In turn, the file docker_limits.slice is placed at /etc/systemd/system/docker_limits.slice and contains the following:

[Unit]
Description=Slice that limits Docker resources
Before=slices.target

[Slice]
CPUAccounting=true
CPUQuota=90%
MemoryAccounting=true
MemoryHigh=2G
MemoryMax=2.5G

that enables both CPU and RAM accounting, sets the maximum CPU usage to $90\%$, throttles memory allocations past $2GiB$ (MemoryHigh) and sets the hard memory limit to $2.5GiB$ (MemoryMax).

Lastly, in order to check the resource usage of Docker, the systemd-cgtop tool can be used, which displays the resource consumption per c-group.

Docker Services "Just Not Starting" in a Docker Swarm

Docker on its own performs no accounting of the resources consumed by services running within a Docker swarm, and the only distribution strategy for services is an equal "spread". Depending on which node is up and at what time, the distribution of services does not even end up equal across nodes, such that fairer service distribution solutions make sense in order to keep a balance of services across the set of nodes.

However, even with equally distributed services, Docker does not and cannot know what amount of CPU or RAM a service might require at runtime, such that a runtime solution that shifts services around the swarm would make more sense. One way to check the CPU consumption is to check all the services and see what total CPU usage they collectively generate, and then repeat the same procedure for RAM and/or other resources that the services might consume.

Without accounting for resource consumption, it often happens that the Docker managers of a swarm place services on the same node, such that the node ends up overloaded and unable to answer requests. This section explores possibilities to mitigate such Denial of Service issues that stem from the inability to predict the amount of resource usage ahead of time, in order to ensure that services placed on a node do not end up slowing the node down due to their high resource consumption patterns.

Pinning

Similar to processor affinity in multitasking systems, one obvious solution is to pin the heavy services to different nodes in order to ensure that they do not all run together. This works by changing the service constraints to pin each service to a different node.

Here is a snippet from a Docker compose service:

    deploy:
      labels:
        - shepherd.enable=true
        - shepherd.auth.config=docker
      replicas: 1
      placement:
        max_replicas_per_node: 1
        constraints:
          - node.hostname == docker2

where the node.hostname == docker2 constraint makes sure that the service will run on the node with the hostname docker2.

Although this is a fine solution, it will not work in terms of load-balancing and adaptability because when the node docker2 becomes unavailable, the Docker managers would simply not know where to place the service. Furthermore, manually pinning services to nodes adds a level of locality that is unbecoming of a cluster - in other words, if all services are pinned, why even bother running a cluster and not just run the software on the nodes directly?

By Specification

Fortunately, Docker does perform the minimal level of accounting necessary in order to be aware of how many resources each node has, such that working by specification, which is the best option, is very much possible. Here is an example excerpt from a Docker compose service:

    deploy:
      labels:
        - shepherd.enable=true
        - shepherd.auth.config=docker
      replicas: 1
      placement:
        max_replicas_per_node: 1
#        constraints:
#          - node.hostname == docker2
      resources:
        reservations:
          cpus: '1'
          memory: 1G

Now, instead of pinning the service to the node with the hostname docker2, the service is defined (or specified) to require a full core (cpus: '1') and $1GiB$ of RAM. When the service is deployed to the swarm, each node that is a potential candidate for deployment will cross-check the requirements against the resources available and, if the required amount of CPU and RAM is not met, the node will reject the service. This process carries on until a node either accepts the service or the service enters a fail state, which can be observed with docker service ps hinting that no node is available that would match the deployment requirements.

It is not even required to provide a specification for all services; adding the requirements for services that seem to generate heavy load should be sufficient.
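When no node can satisfy the reservations, the pending state can be inspected as follows, with SERVICE_NAME standing in for the service in question:

```shell
# Show the untruncated error column; when reservations cannot be satisfied
# the output contains a message along the lines of "no suitable node".
docker service ps --no-trunc SERVICE_NAME
```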

Automatic Configuration Reload for Docker Software

Typically, *nix daemons are not meant to restart or reload themselves, especially as a consequence of a changed configuration, which means that software running within a Docker container will require the container to be restarted in order for the daemon to reload its configuration. It is, however, possible to implement a generic solution that should work across the board for any sort of software running within a container, based on filesystem primitives such as INOTIFY.

The script is fairly simple and consists of just one command watching a directory and then raising an alarm when files are changed within that directory:

#!/usr/bin/env bash
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2024 - License: MIT            ##
###########################################################################
# This script can be used to make a daemon reload its configuration       #
# whenever a change occurs within a defined directory, presumably the     #
# same directory where the configuration is stored in the first place.    #
#                                                                         #
# The script requires the "inotify-tools" package to be installed or      #
# whatever other package provides the "inotifywait" command line tool.    #
# Next, the script must be modified to make the necessary changes in the  #
# "CONFIGURATION" section where the path to the directory to be watched   #
# is specified and to also define a command that should be used to reload #
# the daemon. Note that whatever the command contains, must also be       #
# installed for the script to work.                                       #
#                                                                         #
# The script has to run permanently for the entire duration that the      #
# process that it is monitoring is running. This can be accomplished by   #
# starting the script using "supervisord" or any other tool that can run  #
# daemons, including bash scripts.                                        #
###########################################################################
 
###########################################################################
#                             CONFIGURATION                               #
###########################################################################
 
MONITOR_DIRECTORY=/data
RELOAD_COMMAND="kill -s HUP `pidof freeradius`"
 
###########################################################################
#                               INTERNALS                                 #
###########################################################################
 
# alarm(2): wait the given number of seconds, then run the reload command
function alarm {
    sleep "$1"
    eval "$RELOAD_COMMAND"
}
 
ALARM_PID=0
# KILL cannot be trapped; clean up any pending alarm on all other exits.
trap '{ test "$ALARM_PID" = 0 || kill -9 "$ALARM_PID" 2>/dev/null; }' QUIT TERM EXIT INT HUP

# Process substitution is used instead of a pipe such that the loop runs
# in the current shell and updates to ALARM_PID remain visible to the trap.
while IFS=$'\n' read -r LINE; do
    if [ -d /proc/"$ALARM_PID" ]; then
        kill -9 "$ALARM_PID" &>/dev/null || true
    fi
    alarm "5" &
    ALARM_PID=$!
done < <(inotifywait -q -m "$MONITOR_DIRECTORY" -r \
    -e "modify" -e "create" -e "delete")

When the alarm fires, the script executes a user-defined command that is supposed to make the daemon reload its configuration. In this example the command is kill -s HUP `pidof freeradius` and is meant to make FreeRADIUS reload its configuration by delivering a HUP signal. Both the directory to be watched and the reload command can be adjusted to match whatever other daemon must be monitored for configuration changes.
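As the script comments suggest, the watcher can be kept running with supervisord; a minimal sketch of a program stanza, assuming the script has been saved as /usr/local/bin/watch-reload.sh (the path and program name are placeholders):

```ini
[program:watch-reload]
command=/usr/local/bin/watch-reload.sh
autostart=true
autorestart=true
; capture the inotify output and any reload errors for debugging
stdout_logfile=/var/log/watch-reload.log
redirect_stderr=true
```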

Enumerate Services that Have not Been Replicated Completely Within a Docker Swarm

The following script can be used to list the services in a Docker swarm that have not fully replicated across the swarm; only the names of such services are printed. In order to use the script, save it to a file and make the file executable.

#!/usr/bin/env bash
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2024 - License: MIT            ##
###########################################################################
# This script is meant to enumerate Docker swarm service names that have  #
# not yet replicated across the swarm. The script compares the number of  #
# replicas that have been distributed across the swarm with the number of #
# total expected replicas and prints the service name in case there is a  #
# mismatch between the two.                                               #
###########################################################################
 
for DATA in \
    `docker service ls --format="{{.Name}},{{.Replicas}}" | \
         perl -pe 's/\(.+?\)//g'`; do
    NAME=$(printf '%s' "$DATA" | awk -F',' '{ print $1 }')
    RATIO=$(printf '%s' "$DATA" | awk -F',' '{ print $2 }')

    A=$(printf '%s' "$RATIO" | awk -F'/' '{ print $1 }')
    B=$(printf '%s' "$RATIO" | awk -F'/' '{ print $2 }')

    # If the number of replicas is equal to the number of expected
    # replicas then assume that the service has already been properly
    # distributed across the swarm.
    if [ "$A" = "$B" ]; then
        continue
    fi

    echo "$NAME"
done
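The name and ratio extraction can also be collapsed into a single awk invocation; the following sketch substitutes hypothetical sample output for docker service ls in order to illustrate the comparison:

```shell
# hypothetical sample of: docker service ls --format '{{.Name}},{{.Replicas}}'
printf '%s\n' 'web,3/3' 'db,1/2' 'cache,0/1' | \
    awk -F'[,/]' '$2 != $3 { print $1 }'
# prints the services that have not fully replicated: db, cache
```

Splitting on both the comma and the slash yields the service name, the current replica count and the expected replica count as the first three fields.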

Computing the Total Amount of CPU Usage For All Running Containers

The following command lists the containers running on a Docker node and sums up the CPU usage across all of them. Note that docker stats reports per-container statistics, such that identical CPU percentages from different containers must not be deduplicated before summing; the --format flag is used to select the CPU column directly.

docker stats --no-stream --format '{{.CPUPerc}}' | tr -d '%' | awk '{s+=$1} END {print s}'
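The summing stage can be verified in isolation by feeding it hypothetical per-container percentages; the duplicate 0.50% values stand for two distinct containers and must both be counted:

```shell
# hypothetical per-container CPU percentages, as reported by docker stats
printf '%s\n' '0.50%' '1.25%' '0.50%' | \
    tr -d '%' | awk '{s+=$1} END {print s}'
# prints 2.25, the total CPU usage across the three containers
```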

fuss/docker.txt · Last modified: 2024/12/20 06:17 by office
