About

rclone is a utility that can be used to synchronize cloud storage accounts or local filesystems with one another from the command line. In the past, we have created Docker images that bundle RcloneBrowser, a project that adds a graphical interface to rclone and then uses a virtual X11 server to display that interface in a web browser, but one of the problems with the image is that it runs on top of X11 and is thus very CPU-expensive without a GPU (and, furthermore, passing a GPU to Docker in a swarm is not possible at the time of writing).

rclone's support for cloud storage providers is exhaustive, with a very large repertoire that cannot be ignored and that even surpasses the FUSE-like filesystem drivers that were once very important in the Linux world. That being said, while RcloneBrowser is slow, it is not easy to find a substitute for rclone itself, such that a more efficient setup like the one described on this page is very welcome. Another problem is that setting up access to cloud storage such as Google Drive is not trivial at all, given the excessive security restrictions that Google implements, and rclone itself has a very long tutorial on how to set up Google Drive as a remote (needless to say, it requires registering an application, as if the drive were to be made public, but then only generating an internal API key to access the drive, something that turns out to be contextually insecure yet required due to the security restrictions).
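
For orientation, once that tutorial has been followed and an OAuth client has been registered, the resulting Google Drive section of rclone.conf ends up looking roughly like the following sketch; the remote name gdrive is arbitrary and all the values shown are placeholders that rclone config fills in interactively:

[gdrive]
type = drive
client_id = 0123456789.apps.googleusercontent.com
client_secret = PLACEHOLDER-CLIENT-SECRET
scope = drive
token = {"access_token":"PLACEHOLDER","token_type":"Bearer","refresh_token":"PLACEHOLDER","expiry":"2025-01-01T00:00:00Z"}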

Diagram

The way this will be set up is to have a Docker container running rclone with the web interface started and ready to process remote requests. The rclone container will also have all the necessary paths mounted and set up. Then, a different container will be set up running cronicle, a cron-like utility with a web interface, that will access rclone remotely and run various synchronization jobs.

<-------------------------------------------------->

    +----------+  +--------+
    | cronicle |  | rclone |
    +----+-----+  +----+---+        
         |             ^            Docker swarm
         |             |
         +-------------+

<------------------------------------------------>

In fact, it is possible to set up rclone as the only container and then schedule jobs externally by using rclone from a different machine, but running cronicle makes this much more convenient.
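
As an illustration, assuming some other machine has the rclone binary installed and can reach the container at the hypothetical hostname rclone.tld, a plain crontab entry would be enough to schedule the synchronization described later on this page without cronicle:

# crontab entry: every night at 3AM, ask the remote rclone instance to bisync
# two of its configured paths (credentials match the compose file further down)
0 3 * * * rclone rc --url=http://rclone.tld:5572 --user admin --pass admin sync/bisync path1=talk:/ path2=local:/mnt/talk/ resilient=true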

Setting up Cronicle

A cronicle Dockerfile is already provided but will more than likely need adjustments in case other functionality is desired, because the image builds in a bunch of tools that might not be necessary. The following stanza from the Dockerfile installs the rclone tool, which is necessary in order to be able to schedule jobs remotely:

RUN curl -L -s https://downloads.rclone.org/v1.69.1/rclone-v1.69.1-linux-amd64.zip -o /tmp/kitchen/rclone-v1.69.1-linux-amd64.zip && \
    cd /tmp/kitchen && \
    unzip rclone-v1.69.1-linux-amd64.zip && \
    cp rclone-v1.69.1-linux-amd64/rclone /usr/local/bin && \
    chmod +x /usr/local/bin/rclone

Other than that, the image can be built, pushed to a local registry and then redistributed to a swarm if necessary.
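
As a sketch, assuming a local registry reachable at the hypothetical address registry.tld:5000, building and distributing the image amounts to:

# build the cronicle image from the provided Dockerfile and tag it for the registry
docker build -t registry.tld:5000/cronicle:latest .
# push the image so that all nodes in the swarm can pull it
docker push registry.tld:5000/cronicle:latest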

Setting up rclone

One of the ways to run rclone within Docker is to use the Bitnami image for rclone and then run a custom command. The following is a Docker compose file that can be used to run rclone within a swarm with a customized startup command that starts the web interface:

version: '3.8'
services:
  rclone:
    image: bitnami/rclone:latest
    user: root
    command: rcd --rc-web-gui --rc-web-gui-no-open-browser --rc-addr 0.0.0.0:5572 --rc-user admin --rc-pass admin --config=/config/rclone.conf --cache-dir=/cache --rc-web-gui-update --rc-web-gui-force-update --rc-enable-metrics
    ports:
      - 5572:5572
    volumes:
      - /mnt/docker/data/rclone/config:/config
      - /mnt/docker/data/rclone/cache:/cache
      - /mnt/data:/mnt/data
    environment:
      - GROUP_ID=0
      - USER_ID=0
      - TZ=Etc/UTC
    deploy:
      labels:
        - shepherd.enable=true
        - shepherd.auth.config=docker
      replicas: 1
      placement:
        max_replicas_per_node: 1
      resources:
        reservations:
          cpus: '1.0'
          memory: 256M

The Docker compose file will achieve the following:

  • will run the rclone web interface, binding on all interfaces and listening on port 5572,
  • will define a username admin and a password admin; these should be customized because the same credentials will be used remotely by the rclone binary

It might be necessary to create an empty configuration file on the host at /mnt/docker/data/rclone/config/rclone.conf, for instance by issuing touch /mnt/docker/data/rclone/config/rclone.conf, in order to ensure that rclone starts properly.
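
Once the stack is deployed, one way to verify that the remote control interface is up and that the credentials work is to call the built-in no-op endpoint, where rclone.tld is a placeholder for the hostname or IP address of the Docker host:

# should print {} (or echo back any passed parameters) if rcd is reachable
curl -u admin:admin -X POST http://rclone.tld:5572/rc/noop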

Once the container is running, it is possible to create a proper configuration file and set up all the cloud providers by finding the container ID:

docker container ls | grep rclone

and then starting a shell:

docker container exec -it ... bash

within the container.

From there on out, it is just a matter of running rclone on the command line in order to set up the drives.
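
For instance, the interactive configuration wizard can be pointed at the configuration file on the mounted volume, such that the remotes created (named talk and local in the examples on this page) survive container restarts:

# run inside the container; writes to the /config volume mounted by compose
rclone config --config /config/rclone.conf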

Setting up Cronicle Jobs

The cronicle job to set up now consists of just a shell script (or multiple shell scripts, if preferred; it does not matter) that invokes the rclone command line:

#!/usr/bin/env bash
 
export RCLONE_VERBOSE=1
rclone rc \
    --url=http://rclone.tld:5572 \
    --user admin \
    --pass admin \
    sync/bisync path1=talk:/ path2=local:/mnt/talk/ resync=true resilient=true

where:

  • rclone.tld is the internal hostname or IP address of the machine that is running the rclone container,
  • admin should be changed for both username and password to match the rclone compose file from the previous section,
  • the rest of the command, sync/bisync path1=talk:/ path2=local:/mnt/talk/ resync=true resilient=true, is a remote rclone command that:
    • will synchronize the paths talk:/ and local:/mnt/talk/ configured inside the rclone container created previously with Docker compose,
    • will perform a resync in order to establish the initial baseline between the paths (this need only be performed once),
    • will allow subsequent synchronizations to retry after recoverable errors via resilient=true

Now it's a matter of just scheduling this small bash script in order to make the remote rclone synchronize the paths.
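
Note that long-running transfers can also be started asynchronously by appending _async=true to the same command, in which case rcd immediately returns a job ID that can be polled later; a minimal sketch:

#!/usr/bin/env bash

# start the bisync in the background on the rclone side; prints { "jobid": ... }
rclone rc \
    --url=http://rclone.tld:5572 \
    --user admin \
    --pass admin \
    sync/bisync path1=talk:/ path2=local:/mnt/talk/ resilient=true _async=true

# check on the job later (jobid=1 stands in for whatever identifier was returned)
rclone rc \
    --url=http://rclone.tld:5572 \
    --user admin \
    --pass admin \
    job/status jobid=1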

