pyCluster is a modern DX cluster core written in Python.
It keeps the familiar telnet-style operator experience, adds a public web UI and a System Operator web console, and remains compatible with legacy cluster ecosystems such as DXSpider-family node links.
- public web UI: https://pycluster.ai3i.net
- public telnet listeners:
  - pycluster.ai3i.net:7300
  - pycluster.ai3i.net:7373
  - pycluster.ai3i.net:8000
- Telnet-first DX cluster workflow with modernized operator output
- Public web UI for users and a dedicated web console for system operators
- SQLite persistence, CTY refresh tooling, and fail2ban integration
- Validated deploy path across modern Debian, Ubuntu, Fedora, and Red Hat-family Linux
- serves DX-style telnet access for users and operators
- provides a public web UI for viewing and posting cluster traffic
- provides a System Operator web console for runtime, protocol, user, and peer management
- stores spots, messages, and user preferences in SQLite
- supports node linking with profile-aware behavior for legacy cluster families
- ships with deployment tooling for systemd-based Linux hosts
- integrates with fail2ban for login-abuse protection
- supports age-based cleanup for spots, messages, and bulletins
- maintains local CTY data with optional automatic refresh from Country Files
pyCluster is not just trying to mimic old command names. It is trying to keep the parts of legacy cluster software that matter while improving the parts that usually feel neglected.
Key improvements:
- cleaner telnet output and more human-readable replies
- explicit operator command namespace with `sysop/*`
- public web UI for normal users
- System Operator web console for runtime and policy management
- clearer link and protocol visibility
- more protective routing and duplicate-handling behavior built into the core engine
- per-user access matrix for telnet and web
- integrated audit and security visibility
- structured auth-failure logging with fail2ban support
- age-based retention controls with daily cleanup
- bundled and refreshable CTY data instead of relying on stale host copies
- Linux-first deployment with systemd tooling
pyCluster is designed to reduce the amount of defensive cluster administration that older systems often push onto the operator.
In practice that means:
- duplicate and loop-resistant behavior is handled primarily in core logic rather than depending on heavy manual route-filter tuning
- in normal deployments you can usually link to multiple partner nodes without first writing special defensive route filters
- duplicate suppression, routing protections, and peer-state handling are intended to make multi-link operation work safely by default
- peer cleanup, policy-drop accounting, and protocol-health visibility are built in
- operators can still apply filters and policy controls when needed, but normal operation should not require constant route-filter micromanagement
- the goal is safer default behavior with fewer admin headaches, not recreating a large manual-maintenance burden
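The duplicate-suppression idea described above can be sketched in a few lines. This is an illustration only, not pyCluster's actual internals: the key fields (spotter, DX call, rounded frequency) and the 300-second window are assumptions chosen to show the technique.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Spot:
    spotter: str
    dx_call: str
    freq_khz: float
    time_s: int  # epoch seconds


class DupeFilter:
    """Suppress re-broadcast of spots already seen recently.

    Illustrative sketch: the key fields and window length are
    assumptions, not pyCluster's real dedupe policy.
    """

    def __init__(self, window_s: int = 300) -> None:
        self.window_s = window_s
        self._seen: dict[tuple, int] = {}

    def is_dupe(self, spot: Spot) -> bool:
        # Round frequency so 14025.0 and 14025.1 collapse to one key,
        # and normalize case so AI3I and ai3i match.
        key = (spot.spotter.upper(), spot.dx_call.upper(), round(spot.freq_khz))
        last = self._seen.get(key)
        self._seen[key] = spot.time_s
        return last is not None and spot.time_s - last < self.window_s
```

With a filter like this applied at the core, a spot relayed back by a second partner node within the window is dropped automatically instead of requiring a hand-written route filter.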
pyCluster is usable today as a single-node cluster with web and telnet access, persistent storage, peer linking, and operator controls. The codebase is still evolving, but it is no longer just a prototype.
Primary human and compatibility interface.
- user prompt: `N0CALL-1>`
- sysop prompt: `N0CALL-1#`
- DX-style command surface with `show/*`, `set/*`, `unset/*`, aliases, and `sysop/*`
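The namespace/verb shape of that command surface can be sketched as a small splitter. This is a hypothetical helper, not pyCluster's parser (which also handles aliases and abbreviation matching); it only shows how `show/dx`-style input decomposes.

```python
def split_command(line: str) -> tuple[str, str, list[str]]:
    """Split a DX-style command line into (namespace, verb, args).

    Hypothetical helper for illustration: pyCluster's real parser is
    richer, but the namespace/verb split is the core idea.
    """
    head, *args = line.split()
    # "show/dx" -> ("show", "dx"); a bare word like "bye" has no verb.
    ns, _, verb = head.lower().partition("/")
    return ns, verb, args
```

A dispatcher can then route the `sysop` namespace through operator-privilege checks while `show`/`set`/`unset` stay on the user path.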
User-facing browser interface.
- spot list and filters
- cluster view
- watch lists and recent matches
- operate tab for login and posting
- profile editing for normal users
Operator-facing browser console.
- node presentation and MOTD
- user and access management
- peer and link management
- protocol health and policy drops
- audit and security views
Get the code with SSH:

```
cd /usr/src
git clone git@github.com:AI3I/pyCluster.git
cd pyCluster
```

Or with HTTPS:

```
cd /usr/src
git clone https://github.com/AI3I/pyCluster.git
cd pyCluster
```

Update an existing checkout:

```
git pull --ff-only
```

Run locally for development:

```
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
pycluster --config ./config/pycluster.toml serve
```

Deploy on a supported Linux host:

```
sudo ./deploy/install.sh
sudo ./deploy/doctor.sh
```

For a host-level install, cloning into /usr/src/pyCluster is the recommended layout.
The deploy scripts create the pycluster system user and group automatically; the installer does not require the operator to create that account first.
The installed runtime tree is placed under /home/pycluster/pyCluster.
Typical deployed layout:
```
/usr/src/pyCluster                  # administrator-managed checkout used for install/upgrade
/home/pycluster/pyCluster/          # live runtime tree
├── config/
│   ├── pycluster.toml              # active node configuration
│   └── strings.toml                # hot-reloadable operator text
├── data/
│   └── pycluster.db                # live SQLite database
├── logs/
│   └── proto/                      # protocol (PCxx) trace logs
└── src/                            # installed application code
/var/log/pycluster/authfail.log     # authentication failure log watched by fail2ban
/root/pycluster-initial-sysop.txt   # bootstrap SYSOP credentials note (needed post-install!)
```
Upgrade an existing deployment:
```
git pull --ff-only
sudo ./deploy/upgrade.sh
sudo ./deploy/doctor.sh
```

For upgrades from 1.0.0 through 1.0.3, `deploy/upgrade.sh` performs the required cumulative state conversion for older installs:

- hashes any legacy plaintext passwords still stored in `user_prefs`
- seeds `config/strings.toml` if it is missing
- preserves compatibility with older `pycluster.toml` files by supplying defaults for newer optional config sections such as `[qrz]`
- preserves the existing `config/pycluster.toml`, data, and logs in place
Default listeners:
- telnet: 0.0.0.0:7300
- sysop web: 127.0.0.1:8080
- public web: 127.0.0.1:8081
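As an orientation aid, those defaults would look something like the following in the node configuration. This fragment is hypothetical: the section and key names here are assumptions for illustration, so check the shipped `config/pycluster.toml` for the real schema.

```toml
# Hypothetical fragment -- section and key names are illustrative
# assumptions; consult the shipped config/pycluster.toml for the
# actual schema.
[telnet]
bind = "0.0.0.0"
port = 7300

[web.sysop]
bind = "127.0.0.1"
port = 8080

[web.public]
bind = "127.0.0.1"
port = 8081
```

Binding the web listeners to 127.0.0.1 by default assumes a reverse proxy (or SSH tunnel) in front of them for remote access.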
Production deployment is handled through the checked-in deploy/ scripts and systemd units.
Validated deployment targets:
- Debian 12 and 13
- Ubuntu 24.04 LTS and 25.10
- Fedora 42 and 43 with SELinux enforcing
- CentOS Stream 9 and 10 with SELinux enforcing
- AlmaLinux 8, 9, and 10 with SELinux enforcing
- Rocky Linux 8, 9, and 10 with SELinux enforcing
Likely install candidates (not yet tested):
- Fedora 44 with SELinux enforcing (official release April 14, 2026)
- Red Hat 8, 9 and 10 with SELinux enforcing (presumed working)
Deployment notes:
- `install.sh`, `upgrade.sh`, `repair.sh`, and `uninstall.sh` have been validated on the distributions above
- Fedora, CentOS Stream, AlmaLinux, and Rocky Linux installs on very small 1 GB hosts may require temporary swap during package installation; the deploy scripts now handle that automatically
- RHEL support is expected to track the validated Fedora, CentOS Stream, AlmaLinux, and Rocky Linux path, but has not yet been tested on a subscription-backed Red Hat host
- Oracle Linux is likely to work as a Red Hat-family target, but has not yet been directly validated
- Raspberry Pi OS / Raspbian is not yet validated, though 64-bit Debian- or Ubuntu-style images are the most likely to work cleanly
- Older baselines should not be attempted:
  - Debian 11
  - Ubuntu 22.04 LTS
  - CentOS 7 / RHEL 7 / Oracle Linux 7 and below
- pyCluster requires Python 3.11+, so older distro baselines without a current Python runtime are out of scope for the supported deployment path
Typical install:
```
sudo ./deploy/install.sh
sudo ./deploy/doctor.sh
```

Initial System Operator web access uses the SYSOP account. The generated bootstrap password is printed prominently by the installer, written to /root/pycluster-initial-sysop.txt, and interactive installs pause for explicit acknowledgement so the credentials are not missed.
Typical upgrade:
```
sudo ./deploy/upgrade.sh
sudo ./deploy/doctor.sh
```

If you are moving an existing node from 1.0.0 to 1.0.1, run that upgrade path instead of reinstalling. The upgrader handles the 1.0.1 state conversion in place.
Installed services:
- `pycluster.service`
- `pyclusterweb.service`
- `pycluster-cty-refresh.timer`
- `pycluster-retention.timer`
Minimum practical deployment:
- 1 vCPU
- 1 GB RAM
- 10 GB storage
- persistent network connectivity
Recommended small production node:
- 2 vCPU
- 2 GB RAM
- 20 GB SSD-backed storage
Notes:
- SQLite works well at this scale
- reverse proxy, fail2ban, and package upgrades are more comfortable with 2 GB RAM
- very small Fedora or Red Hat-family hosts may temporarily need swap during package operations
pyCluster supports:
- local callsign blocking
- per-user access controls for telnet and web
- structured auth-failure logging
- shipped `fail2ban` filters and jails
- imported exact-IP blocks from DXSpider `badip.local`
- sysop visibility for recent auth failures and current bans
Auth-failure log retention:
- shipped logrotate policy for `/var/log/pycluster/authfail.log`
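The value of structured auth-failure logging is that each failure becomes one stable, regex-friendly line that fail2ban can match. The sketch below is illustrative: pyCluster's real authfail.log layout and the shipped failregex may differ, but the mechanism is the same.

```python
import re
from datetime import datetime, timezone


def authfail_line(ip: str, login: str, when: datetime) -> str:
    """Format one auth-failure record as a single structured line.

    Illustrative only: the real authfail.log layout may differ. The
    point is one stable, machine-matchable line per failed login.
    """
    stamp = when.strftime("%Y-%m-%d %H:%M:%S")
    return f"{stamp} AUTHFAIL login={login} ip={ip}"


# A fail2ban-style failregex for the format above; <HOST> is the
# placeholder fail2ban replaces with its own IP-matching pattern.
FAILREGEX = r"AUTHFAIL login=\S+ ip=<HOST>"


def matches(line: str) -> bool:
    """Check a log line the way fail2ban would apply the failregex."""
    pattern = FAILREGEX.replace("<HOST>", r"(?P<host>\S+)")
    return re.search(pattern, line) is not None
```

Because the source IP sits in a fixed `ip=` field, repeated failures from one host can be counted and banned without fragile multi-line parsing.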
pyCluster ships with a bundled cty.dat, and install/upgrade perform a best-effort refresh from Country Files.
Manual refresh:
```
python3 ./scripts/update_cty.py --config ./config/pycluster.toml
```

Automatic refresh: `pycluster-cty-refresh.timer`
pyCluster can automatically prune older operational data.
- spots, messages, and bulletins can be retained for configurable day counts
- the System Operator web UI exposes:
  - ability to enable age-based cleanup
  - per-category day values
  - ad-hoc, on-demand cleanup
- scheduled cleanup runs daily through `pycluster-retention.timer`
- User Manual
- Administration Manual
- Installation
- Migration
- Configuration
- Feature Highlights
- Telnet Commands
- Telnet Command Reference
- System Operator Web
- Public Web UI
- Node Linking
- Security
- Operations
- Architecture
- Roadmap
- Project History
pyCluster is created and led by John D. Lewis, AI3I, with help from OpenAI's ChatGPT and Codex and Anthropic's Claude.
Special thanks for advice, assistance, consideration and testing:
- Eric Tichansky, NO3M
- Howard Leadmon, WB3FFV
- Joe Reed, N9JR
See CONTRIBUTING.md.
See CHANGELOG.md.