services/open-webui/.env.example (25 additions, 0 deletions)
#version=1.1
#URL=https://github.com/tailscale-dev/ScaleTail
#COMPOSE_PROJECT_NAME= # Optional: only use when running multiple deployments on the same infrastructure.

# Service Configuration
SERVICE=open-webui # Service name. Used as hostname in Tailscale and for container naming (app-${SERVICE}).
IMAGE_URL=ghcr.io/open-webui/open-webui:main # Docker image URL from container registry.

# Network Configuration
SERVICEPORT=8080 # Port to expose to local network. Uncomment the "ports:" section in compose.yaml to enable.
DNS_SERVER=9.9.9.9 # Preferred DNS server for Tailscale. Uncomment the "dns:" section in compose.yaml to enable.

# Tailscale Configuration
TS_AUTHKEY= # Auth key from https://tailscale.com/admin/authkeys. See: https://tailscale.com/kb/1085/auth-keys#generate-an-auth-key for instructions.

# Open WebUI Configuration
# Point to your Ollama instance - can be local or remote.
# Examples:
# Ollama on same Docker host: http://host.docker.internal:11434
# Ollama on LAN: http://192.168.1.x:11434
# Ollama over Tailnet: http://100.x.x.x:11434
# Leave blank to configure a different provider (e.g. OpenAI) via the UI.
OLLAMA_BASE_URL=http://host.docker.internal:11434
WEBUI_SECRET_KEY= # Random secret key for session security. Generate with: openssl rand -hex 32
TZ=Europe/Amsterdam # Timezone for the container.
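The `openssl rand -hex 32` one-liner above is the suggested way to fill `WEBUI_SECRET_KEY`. If `openssl` is not available, an equivalent sketch in Python:

```python
import secrets

# Equivalent of `openssl rand -hex 32`: 32 random bytes rendered as 64 hex characters.
key = secrets.token_hex(32)
print(key)
```

Paste the printed value into `.env` as `WEBUI_SECRET_KEY=<key>`.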
services/open-webui/README.md (40 additions, 0 deletions)
# Open WebUI with Tailscale Sidecar Configuration

This Docker Compose configuration sets up [Open WebUI](https://openwebui.com/) with Tailscale as a sidecar container to keep the app reachable over your Tailnet.

## Open WebUI

[Open WebUI](https://openwebui.com/) is a feature-rich, self-hosted AI platform that provides a ChatGPT-style interface for local and cloud-based AI models. It supports Ollama and any OpenAI-compatible API. Pairing it with Tailscale means your private AI interface is securely accessible from any of your devices without exposing it to the public internet.

## Configuration Overview

In this setup, the `tailscale-open-webui` service runs Tailscale, which manages secure networking for Open WebUI. The `open-webui` service utilizes the Tailscale network stack via Docker's `network_mode: service:` configuration. This keeps the app Tailnet-only unless you intentionally expose ports.

## What to document for users

- **Prerequisites**: Docker and Docker Compose installed. No special group membership, GPU, or devices required for CPU-only inference. A Tailscale account with an auth key from <https://tailscale.com/admin/authkeys>.
- **Volumes**: Pre-create `./open-webui-data` before deploying to avoid Docker creating a root-owned directory: `mkdir -p ./open-webui-data ./config ./ts/state`
- **MagicDNS/Serve**: Enable MagicDNS and HTTPS in your Tailscale admin console before deploying. The serve config proxies to port `8080` — this is hardcoded in the `configs` block and does not consume `.env` values. Uncomment `TS_ACCEPT_DNS=true` in `compose.yaml` if DNS resolution issues arise.
- **Ollama**: Set `OLLAMA_BASE_URL` in `.env` to point at your Ollama instance. Options:
- Same Docker host: `http://host.docker.internal:11434`
- LAN machine: `http://<local-ip>:11434` (use the private IP of the machine running Ollama)
- Another Tailnet device: `http://100.x.x.x:11434`
- Leave blank to configure a different provider (e.g. OpenAI) via the UI after first launch.
- **Ports**: The `0.0.0.0:${SERVICEPORT}:${SERVICEPORT}` mapping is commented out by default. Uncomment only if LAN access is required alongside Tailnet access.
- **Gotchas**:
- Create your admin account immediately after first launch — Open WebUI is open to registration until the first user is created.
- Open WebUI requires WebSocket support — ensure nothing in your network path blocks WebSocket connections.
- After adding new models to Ollama, refresh the model list in Open WebUI via **Settings → Connections**.
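A common first-launch failure is a malformed `OLLAMA_BASE_URL`. A minimal shape check for the URL forms listed above (the helper `check_ollama_url` is hypothetical, not part of Open WebUI):

```python
from urllib.parse import urlparse

def check_ollama_url(url: str) -> bool:
    """Rough shape check for OLLAMA_BASE_URL: http(s) scheme, a host, and an explicit port."""
    if not url:  # blank is valid: a different provider is configured via the UI instead
        return True
    parsed = urlparse(url)
    return (
        parsed.scheme in ("http", "https")
        and bool(parsed.hostname)
        and parsed.port is not None
    )
```

For example, `check_ollama_url("http://host.docker.internal:11434")` passes, while a value missing the scheme does not.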

## Files to check

Check the following files before deploying, as some variables must be defined upfront.

- `.env` // Main variables: `TS_AUTHKEY`, `SERVICE`, `IMAGE_URL`, `OLLAMA_BASE_URL`, `WEBUI_SECRET_KEY`

## Resources

- [Open WebUI Documentation](https://docs.openwebui.com/)
- [Open WebUI GitHub](https://github.com/open-webui/open-webui)
- [Tailscale Serve docs](https://tailscale.com/kb/1242/tailscale-serve)
- [Tailscale Docker guide](https://tailscale.com/blog/docker-tailscale-guide)
services/open-webui/compose.yaml (70 additions, 0 deletions)
configs:
  ts-serve:
    content: |
      {"TCP":{"443":{"HTTPS":true}},
      "Web":{"$${TS_CERT_DOMAIN}:443":
      {"Handlers":{"/":
      {"Proxy":"http://127.0.0.1:8080"}}}},
      "AllowFunnel":{"$${TS_CERT_DOMAIN}:443":false}}

services:
  # Make sure you have updated/checked the .env file with the correct variables.
  # All ${...} variables used below must be defined there.
  # Tailscale Sidecar Configuration
  tailscale:
    image: tailscale/tailscale:latest # Image to be used
    container_name: tailscale-${SERVICE} # Name for local container management
    hostname: ${SERVICE} # Name used within your Tailscale environment
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_SERVE_CONFIG=/config/serve.json # Tailscale Serve configuration to expose the web interface on your local Tailnet
      - TS_USERSPACE=false
      - TS_ENABLE_HEALTH_CHECK=true # Enable healthcheck endpoint: "/healthz"
      - TS_LOCAL_ADDR_PORT=127.0.0.1:41234 # The <addr>:<port> for the healthz endpoint
      #- TS_ACCEPT_DNS=true # Uncomment when using MagicDNS
      - TS_AUTH_ONCE=true
    configs:
      - source: ts-serve
        target: /config/serve.json
    volumes:
      - ./config:/config # Config folder used to store Tailscale files - you may need to change the path
      - ./ts/state:/var/lib/tailscale # Tailscale requirement - you may need to change the path
    devices:
      - /dev/net/tun:/dev/net/tun # Network device required for Tailscale to work
    cap_add:
      - net_admin # Tailscale requirement
    #ports:
    #  - 0.0.0.0:${SERVICEPORT}:${SERVICEPORT} # Binds port ${SERVICEPORT} to the local network - may be removed if only exposure to your Tailnet is required
    # If any DNS issues arise, use your preferred DNS provider by uncommenting the config below
    #dns:
    #  - ${DNS_SERVER}
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://127.0.0.1:41234/healthz"] # Check Tailscale has a Tailnet IP and is operational
> **Collaborator:** In your PR additional notes you state:
>
> > The standard 41234/healthz health check endpoint is not available when using TS_USERSPACE=true. This config uses tailscale status as the health check instead.
>
> In the current configuration I see the USERSPACE and health check back at the default configuration. Please double check.

> **Collaborator:** Just tried locally with an already running service. Changed the variable TS_USERSPACE from false to true, restarted the service, and the standard 41234/healthz health check still worked:
>
> ```yaml
> ...
>   tailscale-nginx:
>     image: tailscale/tailscale:latest
>     container_name: tailscale-nginx
>     hostname: proxy
>     environment:
>       #- TS_AUTHKEY=
>       - TS_STATE_DIR=/var/lib/tailscale
> #      - TS_SERVE_CONFIG=/config/serve.json
>       - TS_USERSPACE=true
> ...
> ```

> **Author:** I updated my PR and removed the section regarding the /healthz endpoint issues. I tested with `TS_LOCAL_ADDR_PORT=127.0.0.1:41234` and `TS_ENABLE_HEALTH_CHECK=true`, and it worked and deployed. I also set `TS_USERSPACE` to both true and false, and it worked in both cases.
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy
      start_period: 10s # Time to wait before starting health checks
    restart: always

  # Open WebUI
  application:
    image: ${IMAGE_URL} # Image to be used
    network_mode: service:tailscale # Sidecar configuration to route Open WebUI through Tailscale
    container_name: app-${SERVICE} # Name for local container management
    environment:
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY}
      - TZ=${TZ}
    volumes:
      - ./open-webui-data:/app/backend/data
    depends_on:
      tailscale:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "pgrep", "-f", "open-webui"] # Check if the open-webui process is running
      interval: 1m # How often to perform the check
      timeout: 10s # Time to wait for the check to succeed
      retries: 3 # Number of retries before marking as unhealthy
      start_period: 30s # Time to wait before starting health checks
    restart: always
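The `$$` in the `ts-serve` config is Docker Compose's escape for a literal `$`: Compose writes `${TS_CERT_DOMAIN}` into `serve.json`, and Tailscale substitutes the machine's cert domain when it loads the file. A minimal sketch of both substitution steps (the domain `open-webui.example.ts.net` is a hypothetical placeholder):

```python
import json

# The config block exactly as written in compose.yaml ($$ escapes $ for Compose).
raw = (
    '{"TCP":{"443":{"HTTPS":true}},'
    '"Web":{"$${TS_CERT_DOMAIN}:443":'
    '{"Handlers":{"/":'
    '{"Proxy":"http://127.0.0.1:8080"}}}},'
    '"AllowFunnel":{"$${TS_CERT_DOMAIN}:443":false}}'
)

after_compose = raw.replace("$$", "$")  # what Compose writes to /config/serve.json
rendered = after_compose.replace("${TS_CERT_DOMAIN}", "open-webui.example.ts.net")

config = json.loads(rendered)  # must be valid JSON once substituted
print(config["Web"]["open-webui.example.ts.net:443"]["Handlers"]["/"]["Proxy"])
```

The parsed result confirms HTTPS on 443 proxies to the Open WebUI port `8080` and that Funnel (public exposure) stays disabled.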