Reasons for Not Using Docker

A U.S. Senator on the implementation of rigorous cryptocurrency legislation: "This is a solution to a problem that does not yet exist."

  • whilst Docker does its best to present itself as open-source and deeply involved with the "community", the reality is that Docker is a private company whose decisions might not necessarily align with those of the "community", nor with open-source principles (remember when Google made its twenty-year-old free translation service non-free overnight and started charging for it, such that a large slice of the Internet probably still contains third-party translators built against Google Translate that can now just be scrapped because they will never work again?),
    • the first warning (concerning Docker) a user gets is when they start working with Docker and find out for the very first time that image pulls are, in fact, rate-limited and that, in order to pull images more often, users need to sign up and, additionally, for some setups, become paying customers,
  • Docker addresses the problem of Linux being a messy operating system (given that a lot of people involved with Linux distributions work excessively hard to ensure that at least the distribution packages work well together) by effectively re-inventing the concept of sandboxed applications, yet it comes across as a band-aid for a much deeper underlying problem, in particular regarding Linux dependency tracking and dependency hell,
  • unless the user takes it upon themselves to recompile everything and build containers themselves (there are a few…), pre-made containers are available from a vast array of third-party individuals that may or may not keep the container updated with new software, and that may or may not slide backdoors into the containers they make public,
    • the typical mitigation is to only use official containers, yet that is not always possible and the task of containerizing every single application seems fairly daunting, even just in terms of workload,
  • unless users partition their containers, there is a lot of data duplication and, now with the Docker paradigm, service duplication, where two containers might contain the same software (i.e. the same database engine),
    • the typical mitigation for this is to split containers up manually, yet this leads back to the same problem of having the extra workload of breaking up (even) official containers,
  • at the low level, Docker containers are treated as disposable and interaction with the software inside is rather brutal; for example, when a container is shut down, the entire environment is just torn away and disposed of, thereby potentially leaving static data in an unknown state (one example thereof is terminating a container whilst a program inside the container holds a file open, possibly trashing the file when the handle is removed abruptly); other examples include the corruption of databases such as SQLite that make a point of being gracefully opened and gracefully closed for the sake of a consistent (deterministic) workflow,
  • there are some semantic shortcomings that are justified with a lot of dark magic; for instance, there is no way to "just restart" a container within a swarm, the only way to "restart" being to dispose of the entire environment and then re-create it from scratch, the justification thereof being a bleak mention of how containers in a swarm "are just sent out there" so they cannot be restarted (compared to containers outside a swarm, which can indeed be restarted via "start" and "stop"),
  • typically, every single "fix" that is suggested involves tearing down the setup and then, admittedly using scripts, recreating the environment from scratch, thereby potentially occluding setup mistakes and bad architectural choices, including the very choice of using Docker,
  • even though the Docker swarm has (had?) potential, one architectural problem is that the swarm is designed to have a handful of "managing nodes" and a plethora of "worker nodes", with the "managing nodes" being the only ones capable of pushing containers to the swarm as well as performing other administrative tasks; this is a design flaw and a missed opportunity, because the (few) "managing nodes" become single points of failure and the entire swarm ends up needing a central managing system (even if that system consists of multiple managing nodes) (making all nodes manager nodes does not work as a workaround to obtain a P2P-like topology!),
  • another problem with the Docker swarm is that there is no way to perform tasks that should be possible (in fact, some of these tasks are "possible", but via various commands that … hack the result):
    • migrating a container onto a given node is not possible,
    • balancing the swarm across the cluster is performed by some crutch-command that is really just meant to refresh the container, not to make the swarm balance itself,
    • (even though this would require extra code) there is no way to let the swarm automatically balance and re-balance itself depending on which machine containers end up on, the only attainable balancing algorithm being just an equal-share spread of containers across machines, without accounting for the computational requirements of the containers themselves (i.e. distributing a workload-heavy / CPU-bound container to a more powerful machine),
  • the networking side of Docker seems to be neatly brushed under the carpet, where opening up containers in certain ways or deploying to a swarm ends up creating various network bridges, overlay networks and other paraphernalia, sometimes even resulting in cutting off the server's connection to the local network, as well as generating "redundant networks" with containers ending up with multiple IP addresses,
  • networks seem to always be created in the class A private address space (10.0.0.0/8), such that if any such network already exists, just installing Docker and starting a container will interfere with the local networks,
  • it is very difficult to customize low-level properties of the networks created by Docker, simply because the design of Docker extends to multiple domains (applications, networking, etc.) without fully covering all the usage cases; for example, it is very difficult to configure Docker to set a custom MTU for the various interfaces it creates and typically one would resort to udev rules to configure the interfaces (a configuration sketch is given after this list),
  • on a philosophical level, Docker tends to phase out the importance of various software packages and reduces everything to talk of "just installing a bunch of applications" - so the official Apache container does not contain a module you need? Doesn't matter, scrap the container and find some third-party re-packaged Apache container and use that instead; the result is to hide the need to be well-acquainted or experienced with well-established Linux software packages (Apache, Bind, ISC DHCP), and adding the philosophy of "just scrapping" the current workflow makes Docker the ultimate tool for dumbing down IT in general,
  • another dumbing-down effect is achieved due to Docker facilitating (in the sense of making easy) the deployment of software with esoteric requirements, complex setups, built on outdated libraries and with dwindling documentation; software that would otherwise have rightly perished due to being overly obscure is now easy to install within a container where all the dubious requirements are neatly arranged (of course, at the expense of security due to outdated libraries, the lack of documentation misleading users, the lack of updates and the other problems that emerge when software that was not designed in a clean way can now just be deployed within a container),
  • there is a problem of precedence (or a clash of interests) when accounting for software that has a built-in update feature, because Docker would rather have the user upgrade the container itself than have the application within update itself; however, depending on who packaged the application, a container update might not appear as soon as would be desirable compared to the built-in application update feature
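
Regarding the networking complaints above, some of the low-level defaults can at least be nudged via the daemon configuration. The following is a minimal sketch of /etc/docker/daemon.json that pins the address pool used for newly created networks and lowers the MTU of the default bridge; the particular subnet and MTU values are purely illustrative and have to be adapted to the local network, and the daemon has to be restarted for the changes to take effect:

{
  "mtu": 1450,
  "default-address-pools": [
    { "base": "172.31.0.0/16", "size": 24 }
  ]
}

For networks created explicitly, the MTU can also be passed per network, for example via docker network create --opt com.docker.network.driver.mtu=1450 NETWORK_NAME, where NETWORK_NAME is a placeholder for the network to be created.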

Automatically Update Container Images

The Watchtower container can be run alongside the other containers; it will automatically monitor the various Docker containers and will then automatically stop the containers, update the image and restart the containers.
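
For instance, something along the following lines starts Watchtower in the background with access to the Docker socket so that it can monitor and re-create the other containers (a minimal sketch; the image name and flags should be checked against the Watchtower documentation, and --cleanup, which removes superseded images, is optional):

docker run -d \
  --name watchtower \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup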

Similarly, for swarms, shepherd can be deployed within the swarm in order to update containers.
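
A minimal sketch of deploying shepherd as a swarm service, assuming the image is published as containrrr/shepherd (adjust the image name to whatever the project currently publishes); the service is constrained to a manager node and is handed the Docker socket so that it can update the other services:

docker service create \
  --name shepherd \
  --constraint "node.role==manager" \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  containrrr/shepherd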

Rebalance Swarm

The following command:

docker service ls -q | xargs -n1 docker service update --detach=false --force

will list all the services within the swarm and then force the services to be re-balanced and redistributed across the nodes of the swarm.

After executing the command, all nodes can be checked with:

docker ps

to see which service got redistributed to which node.

One idea is to run the rebalancing command using crontab in order to periodically rebalance the swarm.
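
For example, the following crontab entry (to be installed on a manager node, for a user allowed to talk to the Docker daemon) would rebalance the swarm every day at 04:00; the schedule is, of course, arbitrary:

0 4 * * * docker service ls -q | xargs -n1 docker service update --detach=false --force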

Run a Shell in a Container within a Swarm

Typically, to open a console within a running container, the user would write:

docker exec -it CONTAINER bash

where:

  • CONTAINER is the name or hash of the running container to start "bash" within

However, given that containers are distributed in a swarm, one should first locate the node on which the container is running by issuing:

docker service ps SERVICE

where:

  • SERVICE is the name of the service running in the swarm

The NODE column of the output displays the node that the container is currently executing on. Knowing the node, a shell on that node has to be accessed and then the command:

docker ps 

can be used to retrieve the container ID (first column).

Finally, the console can be started within the distributed container by issuing:

docker exec -it CONTAINER_ID sh

where:

  • CONTAINER_ID is the id of the container running in the swarm on the local node
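
As a small convenience, once on the correct node, the local container ID belonging to a given service can also be looked up directly by filtering docker ps on the service name (swarm task containers are named after the service), which allows the lookup and the shell invocation to be combined; in the following sketch SERVICE is a placeholder for the service name and the filter is assumed to match only the intended container:

docker exec -it $(docker ps --filter "name=SERVICE" --format "{{.ID}}" | head -n1) sh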

Pushing to Private Registry

The syntax is as follows:

docker login <REGISTRY_HOST>:<REGISTRY_PORT>
docker tag <IMAGE_ID> <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
docker push <REGISTRY_HOST>:<REGISTRY_PORT>/<APPNAME>:<APPVERSION>
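
As a purely illustrative example, with registry.example.com:5000 as the registry, myapp as the application name, 1.0.2 as the version and 3f2a1b4c5d6e as the local image ID (all of these being placeholder values):

docker login registry.example.com:5000
docker tag 3f2a1b4c5d6e registry.example.com:5000/myapp:1.0.2
docker push registry.example.com:5000/myapp:1.0.2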
