Please be aware that the application is distributed as-is and is not recommended for use in a production environment.
Don't forget to make regular backups of important data.
Automatic updates are disabled by default; you can enable them only for the containers you need.
- main features
- deploy
- check process
- update process
- private registries
- custom labels
- notifications
- auth
- api
- env
- screenshots
- contributing
- development
- todo
- Web UI with authentication
- Multiple hosts support
- Socket proxy support
- Crontab scheduling
- Notifications to a wide range of services
- Per-container config (check only or auto-update)
- Automatic/manual check and update
- Automatic/manual image pruning
- Linked containers support (compose and custom)
- Private registries support
- Basic container control (start, stop, etc.)
- Container detailed info (inspect, logs)
---
Use docker-compose.app.yml or the following docker commands.
```shell
# create volume
docker volume create tugtainer_data

# pull image
docker pull ghcr.io/quenary/tugtainer:1

# run container
docker run -d -p 9412:80 \
  --name=tugtainer \
  --restart=unless-stopped \
  -v tugtainer_data:/tugtainer \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  ghcr.io/quenary/tugtainer:1
```
Important
Keep in mind that you cannot update the agent or the socket-proxy from within the app, because they are used to communicate with the Docker CLI. Avoid including these containers in a docker-compose project that contains other containers you want to update automatically, as this will result in an error during the update. To keep them up to date, you can enable "check" only to receive notifications, then recreate them manually or with another tool, such as Portainer.
---
To manage remote hosts from one UI, you have to deploy the Tugtainer Agent. To do so, you can use docker-compose.agent.yml or the following docker commands.
After deploying the agent, go to Menu -> Hosts in the UI and add it with the respective parameters.
Remember that the machine running the agent must be reachable from the primary instance.
Don't forget to change the AGENT_SECRET variable; it is used to sign backend-agent requests.
The backend and the agent communicate over HTTP, so you can put a reverse proxy in front of the agent for HTTPS.
```shell
# pull image
docker pull ghcr.io/quenary/tugtainer-agent:1

# run container
docker run -d -p 9413:8001 \
  --name=tugtainer-agent \
  --restart=unless-stopped \
  -e AGENT_SECRET="CHANGE_ME!" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  ghcr.io/quenary/tugtainer-agent:1
```
---
You can use Tugtainer and Tugtainer Agent without mounting the Docker socket directly.
docker-compose.app.yml and docker-compose.agent.yml use this approach by default.
Manual setup:
- Deploy a socket proxy, e.g. https://hub.docker.com/r/linuxserver/socket-proxy
- Enable at least CONTAINERS, IMAGES, POST, INFO, PING for the check feature, and additionally NETWORKS for the update feature;
- Set the env var DOCKER_HOST="tcp://my-socket-proxy:port" on the Tugtainer (or agent) container(s).
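A minimal sketch of the steps above, assuming the linuxserver/socket-proxy image (its env flags and default port 2375 follow that image's documentation; the network and container names here are examples):

```shell
# Create a shared network so Tugtainer can reach the proxy by name.
docker network create proxy_net

# Deploy the socket proxy with the permissions the check/update features need.
docker run -d --name=my-socket-proxy \
  --network=proxy_net \
  -e CONTAINERS=1 -e IMAGES=1 -e POST=1 -e INFO=1 -e PING=1 \
  -e NETWORKS=1 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  lscr.io/linuxserver/socket-proxy:latest

# Point Tugtainer at the proxy instead of mounting the socket.
docker run -d -p 9412:80 --name=tugtainer \
  --network=proxy_net \
  --restart=unless-stopped \
  -e DOCKER_HOST="tcp://my-socket-proxy:2375" \
  -v tugtainer_data:/tugtainer \
  ghcr.io/quenary/tugtainer:1
```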
The check process for a container consists of the following steps:
- Verify that the container is suitable for checking (i.e. its image is not a local-only image);
- Pull the image (if enabled in the settings; disabled by default), which may be handy if you're using a registry proxy;
- Request the current digest of the image from its registry;
- Compare the digests;
- If they differ, the container is marked as "available".
The scheduled process includes all enabled hosts and all containers selected for auto-check.
The manual process includes all containers regardless of the auto-check toggle (or a single container if you've clicked one).
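The comparison step can be pictured as follows; this is a toy sketch, not the app's actual implementation, and the digest values are made up:

```shell
# Local image digest vs. the digest currently served by the registry.
local_digest="sha256:1111"
remote_digest="sha256:2222"

# If they differ, an update is available for the container.
if [ "$local_digest" != "$remote_digest" ]; then
  echo "available"
else
  echo "not_available"
fi
```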
---
- Containers of a host are processed as a single set;
- A global dependency graph is constructed for all containers from:
  - Compose dependencies (the `com.docker.compose.depends_on` label, for containers with the same `com.docker.compose.project` and `com.docker.compose.project.config_files` labels)
  - Custom dependencies (the `dev.quenary.tugtainer.depends_on` label)
- Dependencies are directional: if container A depends on B, B must be started before A and stopped after A;
- Containers without dependencies are treated as independent nodes in the graph.
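If you want to see the label values that feed the graph for one of your containers, you can read them with the docker CLI (the container name `my_app` is a placeholder):

```shell
# Print the compose and custom dependency labels of a container.
docker inspect --format '{{ index .Config.Labels "com.docker.compose.depends_on" }}' my_app
docker inspect --format '{{ index .Config.Labels "dev.quenary.tugtainer.depends_on" }}' my_app
```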
---
- A global dependency graph is built:
  - protected containers are skipped;
  - not running containers are skipped by default (can be changed in the settings);
- A set of updatable containers is calculated:
  - an updatable container is one that is marked as available and selected for auto-update, or was manually selected for update;
- A set of affected containers is calculated:
  - it includes all containers that depend (directly or transitively) on any updatable container;
  - it excludes the updatable containers themselves;
- A unified topological execution order is built from the dependency graph;
- Images are pulled for the updatable containers;
- All involved containers (updatable and affected) are stopped once, in order from most dependent to most depended-upon;
- Then, in reverse order (from most depended-upon to most dependent):
  - updatable containers are recreated and started;
  - affected containers are started.
The scheduled process is performed for all enabled hosts.
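The ordering rule above can be illustrated with a toy two-container chain where `app` depends on `db` (names are made up; this only demonstrates the order, not the real update code):

```shell
# Stop from most dependent to most depended-upon, start in reverse.
stop_order="app db"      # db is stopped after app
start_order="db app"     # db is started before app

for c in $stop_order;  do echo "stop $c";  done
for c in $start_order; do echo "start $c"; done
```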
To use private registries, you have to mount docker config to Tugtainer or Tugtainer Agent, depending on where the container with the private image is located.
- Create the config on the host machine using one of these methods:
  - Log into the registry:
    ```shell
    docker login <registry>
    ```
  - Or create the config manually:
    ```json
    {
      "auths": {
        "<registry>": {
          "auth": "base64 encoded 'username:password_or_token'"
        }
      }
    }
    ```
- Mount the config into the Tugtainer (Agent) container as a read-only volume:
  ```shell
  -v $HOME/.docker/config.json:/root/.docker/config.json:ro
  ```
  or the equivalent in a docker-compose file.
- That's all you need to do; the Docker CLI will take care of the rest.
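If you write the config by hand, the `auth` value is the base64 encoding of `username:password_or_token`; for example (credentials are placeholders):

```shell
# Encode placeholder credentials for the "auth" field.
printf 'myuser:mytoken' | base64
# → bXl1c2VyOm15dG9rZW4=
```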
---
`dev.quenary.tugtainer.protected=true`
This label indicates that the container cannot be stopped. This means that even if there is a new image for the container, it cannot be updated from the app. This label is primarily used for tugtainer itself and tugtainer-agent, as well as for socket-proxy in the provided docker-compose files.
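For example, the label can be attached at container creation time (the container and image names here are placeholders):

```shell
# Mark a container as protected so it is never stopped or updated by the app.
docker run -d --name=my-proxy \
  --label dev.quenary.tugtainer.protected=true \
  my-proxy-image:latest
```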
---
`dev.quenary.tugtainer.depends_on="my_postgres,my_redis"`
This label is an alternative to the docker compose depends_on label. It allows you to declare that a container depends on another container, even if they are not in the same compose project. The value is a list of container names, separated by commas.
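For example (container and image names are placeholders):

```shell
# Declare that my_app must be stopped before, and started after,
# my_postgres and my_redis, even across compose projects.
docker run -d --name=my_app \
  --label dev.quenary.tugtainer.depends_on="my_postgres,my_redis" \
  my_app_image:latest
```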
The app uses Apprise to send notifications and Jinja2 to generate their content. You can view the documentation for each of them for more details.
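Notification endpoints are configured as Apprise service URLs; a few example schemes from the Apprise documentation (all values are placeholders):

```
discord://webhook_id/webhook_token
tgram://bottoken/ChatID
mailto://user:pass@gmail.com
```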
Jinja2 custom filters:
- `any_worthy` - checks that at least one of the items has a result equal to "available", "updated", "rolled_back" or "failed"
Jinja2 context schema:
```json
{
  "hostname": "Tugtainer container hostname",
  "results": [
    {
      "host_id": 0,
      "host_name": "string",
      "items": [
        {
          "container": {
            "id": "string",
            "image": "string",
            "...other keys of 'docker container inspect' in snake_case": {}
          },
          "local_image": {
            "id": "string",
            "repo_digests": [
              "digest1",
              "digest2"
            ],
            "...other keys of 'docker image inspect' in snake_case": {}
          },
          "remote_image": {
            "...same schema as for local_image": {}
          },
          "local_digests": [
            "list of platform specific image digests"
          ],
          "remote_digests": [
            "list of platform specific image digests"
          ],
          "result": "not_available|available|available(notified)|updated|rolled_back|failed|None"
        }
      ],
      "prune_result": "string"
    }
  ]
}
```

"result" options:
- "not_available": No new image found.
- "available": New image available for the container.
- "available(notified)": New image available for the container, but it was in the previous notification. The app preserves digests of new images, so if another new image has appeared, the result will still be "available".
- "updated": Container successfully recreated with the new image.
- "rolled_back": The app failed to recreate the container, but was able to restore it with the old image.
- "failed": The app failed to recreate the container.
The notification is sent only if the body is not empty. For instance, if there are only containers with "available(notified)" results, the body will be empty (with the default template) and the notification will not be sent.
If you want to restore the default template, it's here
The app uses password authorization by default. The password is stored in a file in encrypted form.
Auth cookies are not domain-specific and not HTTPS-only, but you can change this using env variables.
Starting with v1.6.0, you can use an OpenID Connect provider instead of a password. This can also be configured using env variables.
The backend API is served under the /api base path.
- Swagger UI: `/api/docs`
- Redoc UI: `/api/redoc`

Public endpoints:
- `GET /api/public/health`
- `GET /api/public/version`
- `GET /api/public/summary` (requires `ENABLE_PUBLIC_API=true`)
- `GET /api/public/update_count` (requires `ENABLE_PUBLIC_API=true`)
- `GET /api/public/is_update_available` (requires `ENABLE_PUBLIC_API=true`)
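For example, assuming the app is published on port 9412 as in the deploy section:

```shell
# Always available:
curl http://localhost:9412/api/public/health
curl http://localhost:9412/api/public/version

# Requires ENABLE_PUBLIC_API=true:
curl http://localhost:9412/api/public/is_update_available
```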
Environment variables are not required, but you can still define some. There is an .env.example file containing the list of variables with descriptions.
Contributions are welcome. Please follow these guidelines to keep the project consistent and maintainable.
---
- Use the Conventional Commits format for all commit messages, e.g. `feat(backend): add user authentication`. Common types: feat, fix, docs, refactor, test, chore
- Keep commits focused, avoid mixing unrelated changes
---
- Follow the existing code style and structure
- Prefer clear, readable solutions
- Avoid introducing unnecessary dependencies
---
- All new features must include unit tests
- If you modify existing functionality, update or add/extend the related tests
- Ensure all tests pass before submitting changes
- Ensure lint and typechecks pass before submitting changes (see backend/frontend readme for details)
---
- Provide a clear description of what was changed and why
- Reference related issues if applicable
- Keep pull requests focused, avoid mixing unrelated changes
---
- If a breaking change is required, consider opening an issue and discussing it first
- Update documentation (this file) when behavior changes
- Angular for the frontend
- Python for the backend and the agent

See /backend/README.md and /frontend/README.md for more details.
- Add unit tests
- Dozzle integration or something more universal (list of urls for redirects?)
- Swarm support?
- Try to add release notes (from labels or something)




