The README describes a Kubernetes + ZeroTier deployment of private IPFS nodes as part of the FlatRacoon Network Stack. This file traces the manifest structure, private swarm configuration, integration points, and how the deployment is checked.
Deploy IPFS nodes inside Kubernetes and bind them to the ZeroTier overlay network for secure, decentralised storage. Part of the FlatRacoon Network Stack.
How it works. The deployment consists of five Kubernetes manifests in `manifests/`:

- `statefulset.yaml` — declares the IPFS nodes as a `StatefulSet` (stable network identities, ordered scaling). Each pod mounts a `PersistentVolumeClaim` for the IPFS datastore. The pod spec binds the IPFS swarm listener exclusively to the ZeroTier network interface address, preventing public internet exposure.
- `service.yaml` — internal `ClusterIP` service exposing ports 4001 (swarm), 5001 (API), and 8080 (gateway).
- `pvc.yaml` — `PersistentVolumeClaim` declarations for each node's datastore.
- `configmap.yaml` — mounts the Nickel-rendered IPFS configuration into each pod at `/data/ipfs/config`.
- `secret.yaml` — holds the base64-encoded private swarm key (`swarm.key`).
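As a rough sketch of the shape (resource names, image, and sizes here are illustrative, not the repo's actual values), the StatefulSet might look like:

```yaml
# Hypothetical sketch only; names and values are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ipfs
spec:
  serviceName: ipfs
  replicas: 3
  selector:
    matchLabels:
      app: ipfs
  template:
    metadata:
      labels:
        app: ipfs
    spec:
      containers:
        - name: ipfs
          image: ipfs/kubo              # official Kubo image
          ports:
            - containerPort: 4001       # swarm (bound to the ZeroTier address)
            - containerPort: 5001       # API (loopback only)
            - containerPort: 8080       # gateway
          volumeMounts:
            - name: datastore
              mountPath: /data/ipfs
      volumes:
        - name: datastore
          persistentVolumeClaim:
            claimName: ipfs-datastore   # actual per-node PVCs are declared in pvc.yaml
```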
The configuration is generated from Nickel sources in `configs/`:

- `ipfs-config.ncl` computes the full IPFS config JSON (Bootstrap empty, MDNS disabled, DHT routing type none, API on 127.0.0.1 only);
- `swarm.ncl` generates the PSK swarm key in the standard swarm.key format (`/key/swarm/psk/1.0.0/`, `/base16/`, and 64 hex characters, one per line);
- `bootstrap.ncl` holds the private bootstrap peer address list.
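Concretely, the generated `swarm.key` is a three-line text file (the key value below is a placeholder):

```
/key/swarm/psk/1.0.0/
/base16/
<64 lowercase hex characters>
```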
The private routing configuration ("Routing": {"Type": "none"}) combined
with an empty Bootstrap list and disabled MDNS means these nodes form a
fully isolated swarm — they discover each other only via the explicit
bootstrap list, all of whose entries resolve to ZeroTier IP addresses.
Honest caveat. The README marks status as "Production-ready / 100% complete"
but notes that a Helm chart alternative and automated cluster scaling are
planned next steps. The current manifests are static YAML — changing the
replica count requires editing statefulset.yaml directly; no values.yaml
abstraction exists yet.
Critical path. configs/swarm.ncl (generate PSK) →
scripts/generate-swarm-key.sh (encode as base64) →
manifests/secret.yaml (mount into pod) →
statefulset.yaml mounts secret at /data/ipfs/swarm.key →
IPFS daemon starts with private swarm → nodes discover each other via
configs/bootstrap.ncl peer list.
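A minimal sketch of what `scripts/generate-swarm-key.sh` plausibly does (an assumption for illustration; the real script may use different tooling for randomness):

```shell
#!/bin/sh
# Sketch: generate a 32-byte PSK, write it in the three-line swarm.key
# format, then base64-encode it for embedding in manifests/secret.yaml.
key_hex=$(head -c 32 /dev/urandom | od -An -v -tx1 | tr -d ' \n')
printf '/key/swarm/psk/1.0.0/\n/base16/\n%s\n' "$key_hex" > swarm.key
secret_b64=$(base64 < swarm.key | tr -d '\n')
echo "$secret_b64"
```

The base64 step exists only because Kubernetes Secret `data` fields must be base64-encoded; it is encoding, not encryption.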
ZeroTier-only binding, no public DHT. Private swarm key, private routing, MDNS disabled. Encrypted mesh between nodes.
How it works. The IPFS config rendered by configs/ipfs-config.ncl sets:
"Swarm": {"DisableNatPortMap": true} (prevents UPnP/NAT-PMP port mapping
to the public internet); "Routing": {"Type": "none"} (disables DHT — no
announcements to the public IPFS network); "Discovery": {"MDNS": {"Enabled":
false}} (no local network multicast discovery). The "Bootstrap": [] empty
list ensures no public bootstrap nodes are contacted at startup. The only
reachable swarm addresses are the ZeroTier IP:4001 pairs listed in
configs/bootstrap.ncl.
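Pieced together from the settings above, the relevant slice of the rendered config is (shown as the JSON Kubo reads; the full rendered file contains more keys):

```json
{
  "Bootstrap": [],
  "Discovery": { "MDNS": { "Enabled": false } },
  "Routing": { "Type": "none" },
  "Swarm": { "DisableNatPortMap": true }
}
```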
The ZeroTier integration is provided by the zerotier-k8s-link sibling repo,
which ensures the ZT network interface is present on each Kubernetes node and
that the ZT subnet is routable within the cluster. The statefulset.yaml
uses a node-affinity or host-network configuration to bind IPFS to the ZT
interface address.
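One possible shape of that binding (a sketch assuming the host-network variant; the repo may use node affinity instead):

```yaml
# Hypothetical fragment: run on the host network so the pod can bind the
# swarm listener directly to the node's ZeroTier interface address.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS while on the host network
```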
Honest caveat. The swarm key is stored in a Kubernetes Secret (base64
encoded), which provides namespace isolation but not encryption at rest.
Integration with Vault or poly-secret-mcp for key retrieval is listed in
the Inputs table in the README but not yet implemented in the manifests.
Critical path. zerotier-k8s-link ensures ZT interface is present →
manifests/secret.yaml provides PSK → configmap.yaml provides routing
config with DHT disabled → statefulset.yaml starts Kubo daemon → daemon
only accepts connections from ZT-addressed bootstrap peers.
```
just deploy            # kubectl apply -f manifests/
just cluster-status
deno test --allow-read tests/
```

The `tests/` directory contains structural validation:
- All five manifests exist in `manifests/`.
- All three Nickel config files exist in `configs/`.
- All three scripts exist in `scripts/`.
- `ipfs-overlay.manifest.ncl` (the machine-readable module manifest) is present and parseable.
- The `hooks/` directory exists for lifecycle hook scripts.
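The structural checks amount to asserting that a fixed file list exists. A hedged shell equivalent (the real suite is Deno, and this helper name is made up):

```shell
#!/bin/sh
# check_files DIR FILE... : succeed only if every FILE exists under DIR.
check_files() {
  dir=$1; shift
  for f in "$@"; do
    [ -e "$dir/$f" ] || { echo "missing: $dir/$f" >&2; return 1; }
  done
}
# The kind of assertion tests/ makes:
#   check_files manifests statefulset.yaml service.yaml pvc.yaml configmap.yaml secret.yaml
```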
scripts/health-check.sh polls the IPFS API endpoint at /ipfs/health
(mapped to the internal service.yaml ClusterIP) and exits non-zero if
any node is unreachable or reports a non-OK swarm state.
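A sketch of that polling loop (hypothetical helper; the real script presumably curls the API endpoint and inspects the reported swarm state):

```shell
#!/bin/sh
# check_nodes PROBE ENDPOINT... : run PROBE against each endpoint; print a
# status line per endpoint and exit non-zero if any probe fails.
# PROBE stands in for something like `curl -fsS --max-time 5` in the real script.
check_nodes() {
  probe=$1; shift
  fail=0
  for ep in "$@"; do
    if $probe "$ep" >/dev/null 2>&1; then
      echo "OK   $ep"
    else
      echo "FAIL $ep"
      fail=1
    fi
  done
  return $fail
}
```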
| Consumer | How it uses this repo |
|---|---|
| `zerotier-k8s-link` | Direct dependency. ipfs-overlay binds exclusively to the ZeroTier network interface that `zerotier-k8s-link` provides. |
|  | ipfs-overlay is the storage layer of the FlatRacoon stack. The machine-readable manifest (`ipfs-overlay.manifest.ncl`) … |
|  | The Ansible … |
| Gossamer | Gossamer's distributed asset storage backend optionally routes large binary assets through the private IPFS cluster managed by this repo. |
| sustainabot | sustainabot pins important repository snapshots to the private IPFS cluster for archival. The pin command uses the IPFS API endpoint exposed by `service.yaml`. |
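For the sustainabot row, pinning through the private API would hit Kubo's standard pin endpoint; a small sketch (the service hostname is illustrative, and this helper is made up):

```shell
#!/bin/sh
# pin_url HOST CID : build the Kubo RPC URL for pinning CID via HOST:5001.
pin_url() {
  printf 'http://%s:5001/api/v0/pin/add?arg=%s' "$1" "$2"
}
# Usage (hypothetical service DNS name):
#   curl -X POST "$(pin_url ipfs.default.svc.cluster.local bafy...)"
```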
| Path | What's There |
|---|---|
| `manifests/statefulset.yaml` | Kubernetes `StatefulSet` declaring the IPFS nodes. |
| `manifests/service.yaml` | Internal `ClusterIP` service (ports 4001, 5001, 8080). |
| `manifests/pvc.yaml` | `PersistentVolumeClaim` declarations for each node's datastore. |
| `manifests/configmap.yaml` | Mounts the rendered IPFS JSON config. Generated from `configs/ipfs-config.ncl`. |
| `manifests/secret.yaml` | Contains the base64-encoded `swarm.key`. |
| `configs/ipfs-config.ncl` | Nickel source for the full IPFS configuration JSON. Sets Bootstrap empty, MDNS disabled, DHT type `none`. |
| `configs/swarm.ncl` | Nickel source for PSK swarm key generation in the standard `/key/swarm/psk/1.0.0/` format. |
| `configs/bootstrap.ncl` | Nickel source for the private bootstrap peer address list (ZeroTier IPs only). |
| `scripts/…` | Node initialisation: sets up the IPFS datastore directory, copies the swarm key into place, and initialises the Kubo daemon. |
| `scripts/generate-swarm-key.sh` | Generates a new random PSK and base64-encodes it for `manifests/secret.yaml`. |
| `scripts/health-check.sh` | Polls the IPFS API endpoint and exits non-zero if any node is unreachable. |
| `ipfs-overlay.manifest.ncl` | Machine-readable module manifest (Nickel). |
| `hooks/` | Lifecycle hook scripts (pre-deploy, post-deploy, pre-undeploy). |
|  | Architecture notes and operational runbook. |
| `justfile` | Entry points: `just deploy`, `just cluster-status`. |
|  | Must-pass checks (RSR standard). |
| `tests/` | Deno test suite for structural validation. |
|  | Planned next steps: Helm chart, automated scaling, Vault integration. |
|  | Licence philosophy statement. |
|  | AI agent manifest for this repo. |
|  | Visual architecture map and completion dashboard. |
|  | Current project state (A2ML format). |
|  | Architecture decisions and ADRs. |
|  | Ecosystem position and related projects. |
Open an issue on GitHub or reach out to Jonathan D.A. Jewell <j.d.a.jewell@open.ac.uk> — happy to explain anything in more detail.