Atlas Lab is a localhost-first self-hosted platform made of a Node.js/TypeScript CLI, a layered Docker Compose stack, and an operational React dashboard served by the gateway. It is designed to provide Git hosting, project and design collaboration, internal documentation, collaborative markdown notes, browser-delivered Obsidian vaults, optional local AI services with Open WebUI, Ollama, and n8n, browser-based development workbenches, and structured image/volume backup workflows on a single machine.
Atlas Lab is built for a practical goal: run a repeatable local engineering platform without depending on custom DNS, hosts-file edits, scattered bootstrap scripts, or ad hoc reverse-proxy plumbing.
- 🧱 An always-on core layer with Atlas Dashboard, Gitea, Plane, Penpot, BookStack, HedgeDoc, and Obsidian
- 🧠 An optional AI LLM layer with Open WebUI, Ollama, and n8n
- 🛠️ An optional workbench layer with browser-based Node and Python environments plus shared PostgreSQL
- 🔒 HTTPS-only ingress on `localhost`
- 📦 A self-contained npm package that can run without a local repository checkout
- 💾 Persistent state stored in named Docker volumes
- 💽 Single-file backup and restore for Docker images and volumes

- no internal DNS
- no `hosts` file edits
- no disposable init containers in Compose
- no hard dependency on a checked-out repo
- one coherent operational flow across development, packaging, and day-to-day use
- 🏗️ Architecture
- 🌐 Services, Ports, and URLs
- 🕸️ Docker Networks
- 💾 Persistence
- 🧪 Host Requirements
- ⚙️ Central Configuration
- 🚀 Quick Start
- 🛠️ CLI Workflows
- 🖥️ Atlas Dashboard
- 💽 Backup and Restore
- 🔑 Default Credentials
- 📂 Repository Layout
- 🩺 Troubleshooting
- 🛡️ Security Notes
- 📜 License
- 🔗 Official References
Atlas Lab is split into three explicit layers:
| Layer | Status | Includes | Purpose |
|---|---|---|---|
| `core` | always on | gateway, Atlas Dashboard, Gitea, Plane, Penpot, BookStack, HedgeDoc, Obsidian, and their backing data services | baseline platform |
| `ai-llm` | optional | Open WebUI, Ollama, n8n, AI gateway | local AI workflows and automation |
| `workbench` | optional | Node Forge, Python Grid, shared PostgreSQL, workbench gateway | browser-based development |
The project went through three shapes:
- subpath-based reverse proxying
- custom hostnames such as `*.lab.home.arpa`
- the current `localhost` + dedicated HTTPS ports model
The current model is the most pragmatic for a single-machine lab:
- predictable URLs
- fewer frontend issues than subpath routing
- no local DNS to maintain
- no `hosts` file maintenance
Bootstrap is handled by the TypeScript CLI rather than by throwaway Compose init containers.
The CLI:
- starts the stack
- runs host preflight checks
- reconciles runtime state
- bootstraps Gitea
- bootstraps the initial BookStack admin
- aligns the Plane instance admin
- aligns the Penpot root profile
- aligns the n8n owner bootstrap account when the AI LLM layer is enabled
- reconciles Ollama only when the AI LLM layer is enabled
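The bootstrap steps above are idempotent: each one checks the current state before acting, so rerunning the whole sequence is safe. A minimal sketch of the pattern (state is faked here with a marker file; the real CLI queries each service's API instead):

```shell
# Illustrative only: a bootstrap step that is safe to rerun because it
# checks for existing state before acting.
state_dir=$(mktemp -d)

bootstrap_admin() {
  if [ -f "$state_dir/admin-created" ]; then
    echo "admin already present; skipping"
  else
    touch "$state_dir/admin-created"
    echo "admin created"
  fi
}

bootstrap_admin  # first run performs the action
bootstrap_admin  # second run is a no-op
```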
All public web entry points are exposed over HTTPS on localhost.
The only host-level TCP service exposed directly is PostgreSQL from the workbench layer.
| Service | Layer | URL / Endpoint | Notes |
|---|---|---|---|
| Atlas Dashboard | core | `https://localhost:8443/` | operational dashboard |
| Gitea | core | `https://localhost:8444/` | Git forge, issues, reviews |
| Plane | core | `https://localhost:8445/` | project planning and issue tracking |
| Open WebUI | ai-llm | `https://localhost:8446/` | only with `--with-ai-llm` |
| Ollama | ai-llm | `https://localhost:8447/` | HTTPS API |
| Penpot | core | `https://localhost:8448/` | collaborative design workspace |
| BookStack | core | `https://localhost:8449/` | internal wiki and knowledge base |
| Node Forge | workbench | `https://localhost:8450/` | Node / TypeScript workspace |
| Python Grid | workbench | `https://localhost:8451/` | Python workspace |
| HedgeDoc | core | `https://localhost:8452/` | collaborative markdown notes |
| n8n | ai-llm | `https://localhost:8453/` | workflow automation and agent orchestration |
| Obsidian | core | `https://localhost:8454/` | browser knowledge vault |
| PostgreSQL | workbench | `localhost:15432` | host-side desktop access |
- browsers always go through the gateway
- optional layers never start implicitly
- host-side PostgreSQL clients must use `localhost:15432`, not `postgres-dev`
| Network | Type | Purpose |
|---|---|---|
| `edge-net` | exposed | published ingress ports |
| `apps-net` | internal | Gitea, BookStack, HedgeDoc, Obsidian, and shared browser-facing core services |
| `ai-llm-net` | internal | Open WebUI, Ollama, and n8n |
| `data-net` | internal | data services and infrastructure databases |
| `workbench-net` | internal | workbenches and PostgreSQL |
| `workbench-host-net` | bridge | host-side PostgreSQL bind |
| `services-egress-net` | selective egress | outbound access for core services |
| `workbench-egress-net` | selective egress | outbound access for workbench services |
- `postgres-dev` exists only inside Docker networking
- desktop tools should connect to `localhost:15432`
- the gateway remains the only public browser entry point
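As a rough illustration of this segmentation (a sketch, not copied from the repo's Compose files): internal networks publish no ports, and only the gateway joins the exposed `edge-net`.

```yaml
# Illustrative Compose shape; network names follow the table above,
# everything else is an assumption.
networks:
  edge-net: {}
  apps-net:
    internal: true

services:
  gateway:
    networks: [edge-net, apps-net]
    ports:
      - "8443:8443"
  gitea:
    networks: [apps-net]   # reachable only through the gateway
```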
Atlas Lab uses named Docker volumes for runtime state.
Key volumes include:
- `gateway-certs`
- `gateway-config`
- `gateway-site`
- `gateway-data`
- `gitea-data`
- `gitea-db`
- `bookstack-config`
- `bookstack-db`
- `hedgedoc-db`
- `hedgedoc-uploads`
- `obsidian-config`
- `ollama-data`
- `n8n-data`
- `open-webui-data`
- `postgres-dev-data`
- workbench home/workspace volumes for Node and Python
Recreating containers does not wipe state. Removing the volumes does.
- `Docker Engine` with `Docker Compose v2`
- `Node.js >= 20`
- `npm`
The AI LLM layer requires:
- an `NVIDIA` GPU
- a working `nvidia-smi` on the host
- Docker configured with NVIDIA GPU support

- CPU: `4 vCPU` or better
- RAM: `8 GB` minimum, `12-16 GB` preferred
- disk: `20 GB` free or more
- VRAM: `8 GB` or more for comfortable Ollama usage
- `8443`, `8444`, `8445`, `8446`, `8447`, `8448`, `8449`, `8450`, `8451`, `8452`, `8453`, `8454`
- `15432` when `workbench` is enabled
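A quick way to see whether these ports are already taken, similar in spirit to the CLI's preflight checks (a pure-shell sketch; `ss` is assumed to be available, as on most Linux hosts):

```shell
# Report which of the lab's host ports are already bound.
checked=0
for port in 8443 8444 8445 8446 8447 8448 8449 8450 8451 8452 8453 8454 15432; do
  if ss -ltn 2>/dev/null | grep -q ":$port "; then
    echo "port $port: in use"
  else
    echo "port $port: free"
  fi
  checked=$((checked + 1))
done
```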
On restrictive PowerShell setups, prefer the .cmd shims:
```shell
npm.cmd --version
atlas-lab.cmd status
```

The lab uses a self-signed certificate for `localhost`.
Certificate download URL: `https://localhost:8443/assets/lab.crt`
Git for Windows with schannel may require importing that certificate into the Windows trust store.
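As an alternative to touching the Windows trust store, Git can be pointed at the certificate directly by switching its TLS backend from schannel to OpenSSL. This is standard Git for Windows configuration, not project-specific behavior, and the certificate path below is illustrative:

```shell
# Assumes lab.crt was downloaded from the gateway's certificate URL.
git config --global http.sslBackend openssl
git config --global http.sslCAInfo "$HOME/lab.crt"
git config --global --get http.sslBackend
```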
The main runtime configuration lives in `env/lab.env`.
Key variables include:
- `APP_VERSION`
- `LAB_HTTPS_PORT`, `GITEA_HTTPS_PORT`, `PLANE_HTTPS_PORT`, `PENPOT_HTTPS_PORT`
- `BOOKSTACK_HTTPS_PORT`, `OPENWEBUI_HTTPS_PORT`, `OLLAMA_HTTPS_PORT`, `HEDGEDOC_HTTPS_PORT`, `N8N_HTTPS_PORT`
- `NODE_DEV_HTTPS_PORT`, `PYTHON_DEV_HTTPS_PORT`
- `POSTGRES_DEV_HOST_PORT`
- `OLLAMA_CHAT_MODEL`, `OLLAMA_EMBEDDING_MODEL`, `OLLAMA_RUNTIME_MODELS`
- `GITEA_ROOT_USERNAME`, `GITEA_ROOT_PASSWORD`
- `BOOKSTACK_ROOT_NAME`, `BOOKSTACK_ROOT_EMAIL`, `BOOKSTACK_ROOT_PASSWORD`
- `PLANE_ROOT_EMAIL`, `PLANE_ROOT_PASSWORD`
- `OPENWEBUI_ROOT_EMAIL`, `OPENWEBUI_ROOT_PASSWORD`
- `PENPOT_ROOT_EMAIL`, `PENPOT_ROOT_PASSWORD`
- `N8N_ROOT_EMAIL`, `N8N_ROOT_PASSWORD`
Rule of thumb:
- change ports, versions, credentials, and models in `env/lab.env`
- change routing and runtime content in `config/gateway/templates/`
- change CLI behavior in `src/`
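For example, a few overrides in `env/lab.env` might look like this (keys come from the list above; the values shown are placeholders, not the shipped defaults):

```shell
# env/lab.env -- example overrides only
GITEA_HTTPS_PORT=9444
OLLAMA_CHAT_MODEL=llama3.1:8b
GITEA_ROOT_USERNAME=root
GITEA_ROOT_PASSWORD=ChangeMe!2026
```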
```shell
# verify prerequisites
docker version
docker compose version
node --version
npm --version

# install dependencies and start the core layer
npm install
npm run dev -- up

# optionally include additional layers
npm run dev -- up --with-ai-llm
npm run dev -- up --with-workbench
npm run dev -- up --with-ai-llm --with-workbench

# inspect and verify
npm run dev -- status
npm run dev -- doctor --smoke
npm run dev -- doctor --with-ai-llm --smoke

# stop the stack
npm run dev -- down
```

| Mode | Command | Purpose |
|---|---|---|
| dev mode | `npm run dev -- up` | runs the TypeScript source with tsx |
| CLI build | `npm run build` | bundles the CLI into `dist/` |
| dashboard build | `npm run build:atlas-dashboard` | typechecks and builds the dashboard |
| dashboard typecheck | `npm run typecheck:atlas-dashboard` | checks dashboard TypeScript |
| local dashboard dev | `npm run dev:atlas-dashboard` | starts local dashboard development |
| versioning | `npm run set:version` | updates managed version files and creates the release commit |
| local pack | `npm run pack:local` | creates a self-contained npm tarball |
| global install | `npm install -g .` | installs `atlas-lab` globally |
| Command | Role |
|---|---|
| `atlas-lab up` | starts core only |
| `atlas-lab up --with-ai-llm` | adds the AI LLM layer |
| `atlas-lab up --with-workbench` | adds the workbench layer |
| `atlas-lab up --with-ai-llm --with-workbench` | starts the full lab |
| `atlas-lab bootstrap` | reruns core bootstrap |
| `atlas-lab bootstrap --with-ai-llm` | reruns bootstrap, n8n owner alignment, and Ollama reconciliation |
| `atlas-lab doctor` | runs host and configuration checks |
| `atlas-lab doctor --smoke` | adds smoke tests for the core layer |
| `atlas-lab doctor --with-ai-llm --smoke` | adds smoke tests for the AI LLM layer |
| `atlas-lab status` | shows Compose/runtime status |
| `atlas-lab down` | stops the stack |
| `atlas-lab save-images` | exports Docker images to a single archive |
| `atlas-lab restore-images` | restores Docker images from an archive |
| `atlas-lab save-volumes` | exports Docker volumes to a single archive |
| `atlas-lab restore-volumes` | restores Docker volumes from an archive |
The global npm package already includes:
- Compose files
- `env/lab.env`
- gateway templates
- custom Dockerfiles
- dashboard sources
- bootstrap scripts
This allows atlas-lab to run without a local repository checkout.
```shell
npm run pack:local
npm install -g .\cli-node-docker-atlas-lab-<version>.tgz
atlas-lab status
```

The dashboard frontend lives in `apps/atlas-dashboard/`.
Its toolchain config now lives alongside the app. The dashboard is used to:
- visualize layer state
- surface operational links
- expose local markdown briefings
- show credentials and runtime notes
- support `it`/`en` localization
```shell
npm run dev:atlas-dashboard
```

Optional layers can be simulated with:

- `ATLAS_DASHBOARD_DEV_AI_LLM_ENABLED`
- `ATLAS_DASHBOARD_DEV_AI_IMAGE_ENABLED`
- `ATLAS_DASHBOARD_DEV_WORKBENCH_ENABLED`
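A hedged example of wiring this up in a POSIX shell before starting the dev server (whether the flags accept `true` as a value is an assumption; check the dashboard source for the exact format):

```shell
# Simulate the optional layers for local dashboard development.
export ATLAS_DASHBOARD_DEV_AI_LLM_ENABLED=true
export ATLAS_DASHBOARD_DEV_WORKBENCH_ENABLED=true
echo "ai-llm simulated: $ATLAS_DASHBOARD_DEV_AI_LLM_ENABLED"
```

Run `npm run dev:atlas-dashboard` from the same shell so the dev server inherits the variables.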
Atlas Lab supports backup and restore for both Docker images and Docker volumes.
- one `.tar.gz` archive for selected images
- one `.tar.gz` archive for selected volumes
- embedded manifest metadata
- realtime progress logs during export and restore
- support for `core`, `ai-llm`, and `workbench` layer selection
```shell
npm run dev -- save-images --with-ai-llm --with-workbench
npm run dev -- restore-images --input .\backups\images\atlas-lab-images.tar.gz

npm run dev -- down
npm run dev -- save-volumes --with-ai-llm --with-workbench
npm run dev -- restore-volumes --input .\backups\volumes\atlas-lab-volumes.tar.gz
```

Bootstrap is idempotent and reconciles Gitea, BookStack, Plane, Penpot, and, when `ai-llm` is enabled, n8n and Ollama.
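The single-archive shape these commands produce can be illustrated without Docker: one `.tar.gz` holding a small manifest plus the payload. The file names and manifest format below are invented for the demo; the real manifest layout is internal to the CLI.

```shell
# Build a toy archive with the same structure: manifest + payload in one .tar.gz.
work=$(mktemp -d)
mkdir -p "$work/volumes/gitea-data"
echo "demo" > "$work/volumes/gitea-data/app.ini"
printf '{"volumes":["gitea-data"]}\n' > "$work/manifest.json"
tar -czf "$work/atlas-lab-volumes.tar.gz" -C "$work" manifest.json volumes
tar -tzf "$work/atlas-lab-volumes.tar.gz"   # lists manifest.json and the payload tree
```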
⚠️ These credentials are intended for trusted local environments and are configurable through `env/lab.env`.
| Service | URL / Endpoint | Credentials |
|---|---|---|
| Atlas Dashboard | `https://localhost:8443/` | no dedicated login |
| Gitea | `https://localhost:8444/` | `root` / `RootGitea!2026` |
| Plane | `https://localhost:8445/` | `root@plane.local` / `RootPlane!2026` |
| Open WebUI | `https://localhost:8446/` | `root@openwebui.local` / `RootOpenWebUI!2026` |
| Ollama | `https://localhost:8447/` | gateway basic auth `root` / `RootOllama!2026` |
| Penpot | `https://localhost:8448/` | `root@penpot.local` / `RootPenpot!2026` |
| BookStack | `https://localhost:8449/` | `root@bookstack.local` / `RootBookStack!2026` |
| HedgeDoc | `https://localhost:8452/` | local email accounts enabled; create the first account in-app |
| n8n | `https://localhost:8453/` | owner bootstrap `root@n8n.local` / `RootN8NApp!2026` |
| Obsidian | `https://localhost:8454/` | basic auth `atlas` / `RootObsidian!2026` |
| PostgreSQL host-side | `localhost:15432` | `postgres` / `RootPostgresDev!2026` |
For DBeaver and other desktop PostgreSQL clients:
- host: `localhost`
- port: `15432`
- database: `lab`
- username: `postgres`
- password: `RootPostgresDev!2026`
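The same settings expressed as a PostgreSQL connection URL, an equivalent form that many CLI clients and ORMs accept (special characters in the password may need URL-encoding depending on the client):

```
postgres://postgres:RootPostgresDev!2026@localhost:15432/lab
```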
| Area | Purpose | Paths |
|---|---|---|
| CLI shell | entrypoint, command registration, terminal rendering | `src/cli/`, `bin/` |
| domain services | runtime orchestration, diagnostics, integrations, archive workflows | `src/services/` |
| shared contracts | config schemas, Docker helpers, utilities, shared types | `src/config/`, `src/lib/`, `src/types/`, `src/utils/` |
| dashboard | React frontend plus local Vite and TS config | `apps/atlas-dashboard/` |
| runtime assets | packaged env files and gateway templates | `env/`, `config/gateway/templates/` |
| infrastructure | Compose layers, Dockerfiles, startup scripts | `infra/docker/` |
| verification and tooling | unit tests, release helpers, CI support | `tests/`, `scripts/`, `.github/` |
Source tree:
```
src/
  cli/
    commands/
    ui/
  config/
  lib/
  services/
    archive/
    diagnostics/
    integrations/
    orchestration/
    runtime/
  types/
  utils/
```
Key files:
- `package.json`
- `LICENSE`
- `env/lab.env`
- `infra/docker/compose.yml`
- `infra/docker/compose.ai-llm.yml`
- `infra/docker/compose.workbench.yml`
- `src/cli/atlas-lab.ts`
- `src/cli/create-cli-app.ts`
- `src/services/orchestration/stack.service.ts`
- `src/services/runtime/project.service.ts`
- `src/lib/compose.ts`
- `config/gateway/templates/Caddyfile.template`
- `config/gateway/templates/runtime/lab-config.json.template`
- `infra/docker/images/gateway/bootstrap-gateway.sh`
Expected behavior. The lab uses a self-signed certificate.
One of the configured lab ports (8443-8454 or 15432) is occupied or excluded by the system.
```shell
atlas-lab status
docker ps --format "table {{.Names}}\t{{.Ports}}\t{{.Status}}"
```

This is usually a Docker daemon GPU pass-through issue, not an Ollama issue.
```shell
nvidia-smi -L
docker info
```

Workbenches are not part of the core layer. Start them explicitly:

```shell
npm run dev -- up --with-workbench
```

Verify:
- the AI LLM layer is enabled
- `OLLAMA_CHAT_MODEL`, `OLLAMA_EMBEDDING_MODEL`, and `OLLAMA_RUNTIME_MODELS` are set
- `https://localhost:8447/api/tags` responds
- the AI LLM bootstrap has run
Atlas Lab is intended for:
- local use
- technical lab environments
- trusted networks
- development and prototyping
It is not an internet-facing production deployment hardened out of the box.
If you want to harden it further:
- move secrets into an external secret-management system
- replace the default certificate with one signed by an internal CA
- tighten network segmentation further
- define recurring backup policies
- audit logs and default credentials
This project is distributed under the MIT license.
- license file: `LICENSE`
- npm metadata: `package.json`
- Compose startup order: https://docs.docker.com/compose/how-tos/startup-order/
- Compose profiles: https://docs.docker.com/compose/how-tos/profiles/
- Compose networks: https://docs.docker.com/reference/compose-file/networks/
- Docker networking drivers: https://docs.docker.com/engine/network/drivers/
- Caddyfile concepts: https://caddyserver.com/docs/caddyfile
- Caddy global options: https://caddyserver.com/docs/caddyfile/options
- Caddy reverse proxy directive: https://caddyserver.com/docs/caddyfile/directives/reverse_proxy
- Gitea install with Docker: https://docs.gitea.com/installation/install-with-docker
- Gitea admin CLI: https://docs.gitea.com/administration/command-line
- n8n self-hosted user management: https://docs.n8n.io/hosting/configuration/user-management-self-hosted/
- n8n Docker install: https://docs.n8n.io/hosting/installation/docker/
- BookStack installation: https://www.bookstackapp.com/docs/admin/installation
- BookStack commands: https://www.bookstackapp.com/docs/admin/commands/
- HedgeDoc Docker image: https://docs.hedgedoc.org/setup/docker/
- HedgeDoc configuration: https://docs.hedgedoc.org/configuration/
- Obsidian Docker Hub image: https://hub.docker.com/r/linuxserver/obsidian
- Obsidian image source repository: https://github.com/linuxserver/docker-obsidian
- Open WebUI environment configuration: https://docs.openwebui.com/getting-started/env-configuration/
- Open WebUI reverse proxy notes: https://docs.openwebui.com/tutorials/integrations/unraid
- Ollama FAQ: https://docs.ollama.com/faq
- Ollama API reference: https://github.com/ollama/ollama/blob/main/docs/api.md
- code-server official docs: https://coder.com/docs/code-server/latest
- npm package.json scripts: https://docs.npmjs.com/cli/v11/configuring-npm/package-json
- npm link: https://docs.npmjs.com/cli/v11/commands/npm-link
- npm pack: https://docs.npmjs.com/cli/v11/commands/npm-pack
- Node.js child_process: https://nodejs.org/api/child_process.html
Atlas Lab is a complete local platform with:
- an always-on core plane
- optional AI LLM and workbench layers
- a dark-first React dashboard
- a globally installable TypeScript CLI
- self-contained npm packaging
- structured backup and restore workflows
- MIT licensing