pyCluster

pyCluster is a modern DX cluster core written in Python.

It keeps the familiar telnet-style operator experience, adds a public web UI and a System Operator web console, and remains compatible with legacy cluster ecosystems such as DXSpider-family node links.

🔴 Live Demo

  • public web UI: https://pycluster.ai3i.net
  • public telnet listeners:
    • pycluster.ai3i.net:7300
    • pycluster.ai3i.net:7373
    • pycluster.ai3i.net:8000

✨ Highlights

  • Telnet-first DX cluster workflow with modernized operator output
  • Public web UI for users and a dedicated web console for system operators
  • SQLite persistence, CTY refresh tooling, and fail2ban integration
  • Validated deploy path across modern Debian, Ubuntu, Fedora, and Red Hat-family Linux

🧭 What pyCluster Does

  • serves DX-style telnet access for users and operators
  • provides a public web UI for viewing and posting cluster traffic
  • provides a System Operator web console for runtime, protocol, user, and peer management
  • stores spots, messages, and user preferences in SQLite
  • supports node linking with profile-aware behavior for legacy cluster families
  • ships with deployment tooling for systemd-based Linux hosts
  • integrates with fail2ban for login-abuse protection
  • supports age-based cleanup for spots, messages, and bulletins
  • maintains local CTY data with optional automatic refresh from Country Files

Where pyCluster Improves on Legacy Cluster Software

pyCluster does not just mimic old command names. It keeps the parts of legacy cluster software that matter and improves the parts that are usually neglected.

Key improvements:

  • cleaner telnet output and more human-readable replies
  • explicit operator command namespace with sysop/*
  • public web UI for normal users
  • System Operator web console for runtime and policy management
  • clearer link and protocol visibility
  • more protective routing and duplicate-handling behavior built into the core engine
  • per-user access matrix for telnet and web
  • integrated audit and security visibility
  • structured auth-failure logging with fail2ban support
  • age-based retention controls with daily cleanup
  • bundled and refreshable CTY data instead of relying on stale host copies
  • Linux-first deployment with systemd tooling

Less Manual Admin Work

pyCluster is designed to reduce the amount of defensive cluster administration that older systems often push onto the operator.

In practice that means:

  • duplicate and loop-resistant behavior is handled primarily in core logic rather than depending on heavy manual route-filter tuning
  • in normal deployments you can usually link to multiple partner nodes without first writing special defensive route filters
  • duplicate suppression, routing protections, and peer-state handling are intended to make multi-link operation work safely by default
  • peer cleanup, policy-drop accounting, and protocol-health visibility are built in
  • operators can still apply filters and policy controls when needed, but normal operation should not require constant route-filter micromanagement
  • the goal is safer default behavior with fewer admin headaches, not recreating a large manual-maintenance burden

📌 Current Status

pyCluster is usable today as a single-node cluster with web and telnet access, persistent storage, peer linking, and operator controls. The codebase is still evolving, but it is no longer just a prototype.

🖥️ Interfaces

Telnet

Primary human and compatibility interface.

  • user prompt: N0CALL-1>
  • sysop prompt: N0CALL-1#
  • DX-style command surface with show/*, set/*, unset/*, aliases, and sysop/*
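For a quick feel of the interface: the trailing prompt character distinguishes the session type, and commands live in the namespaces listed above. The specific commands shown here are hypothetical examples, not a guaranteed command list:

```
N0CALL-1> show/dx           (user session: show/* query commands)
N0CALL-1> set/name John     (user session: set/* preference commands)
N0CALL-1# sysop/links       (sysop session: operator-only sysop/* namespace)
```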

Public Web UI

User-facing browser interface.

  • spot list and filters
  • cluster view
  • watch lists and recent matches
  • operate tab for login and posting
  • profile editing for normal users

System Operator Web UI

Operator-facing browser console.

  • node presentation and MOTD
  • user and access management
  • peer and link management
  • protocol health and policy drops
  • audit and security views

🚀 Quick Start

Get the code with SSH:

cd /usr/src
git clone git@github.com:AI3I/pyCluster.git
cd pyCluster

Or with HTTPS:

cd /usr/src
git clone https://github.com/AI3I/pyCluster.git
cd pyCluster

Update an existing checkout:

git pull --ff-only

Run locally for development:

python3 -m venv .venv
source .venv/bin/activate
pip install -e .

pycluster --config ./config/pycluster.toml serve

Deploy on a supported Linux host:

sudo ./deploy/install.sh
sudo ./deploy/doctor.sh

For a host-level install, clone into /usr/src/pyCluster (the recommended layout). The deploy scripts create the pycluster system user and group automatically, so there is no need to create that account first. The installed runtime tree lives under /home/pycluster/pyCluster.

Typical deployed layout:

/usr/src/pyCluster                  # administrator-managed checkout used for install/upgrade
/home/pycluster/pyCluster/          # live runtime tree
├── config/
│   ├── pycluster.toml              # active node configuration
│   └── strings.toml                # hot-reloadable operator text
├── data/
│   └── pycluster.db                # live SQLite database
├── logs/
│   └── proto/                      # protocol (PCxx) trace logs
└── src/                            # installed application code

/var/log/pycluster/authfail.log     # authentication failure log watched by fail2ban
/root/pycluster-initial-sysop.txt   # bootstrap SYSOP credentials note (needed post-install!)
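For orientation, a sketch of what config/pycluster.toml might contain. All section and key names here are illustrative assumptions (only the [qrz] section name and the default ports appear elsewhere in this README); the shipped config file is the authoritative schema:

```toml
# Illustrative sketch only -- not the real schema.
[node]
callsign = "N0CALL-1"          # node callsign shown in prompts

[telnet]
bind = "0.0.0.0"
port = 7300                    # default telnet listener

[web]
public_bind = "127.0.0.1:8081" # public web UI
sysop_bind  = "127.0.0.1:8080" # System Operator console

[qrz]                          # optional section; defaults supplied on upgrade
enabled = false
```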

Upgrade an existing deployment:

git pull --ff-only
sudo ./deploy/upgrade.sh
sudo ./deploy/doctor.sh

For upgrades from 1.0.0 through 1.0.3, deploy/upgrade.sh performs the required cumulative state conversion for older installs:

  • hashes any legacy plaintext passwords still stored in user_prefs
  • seeds config/strings.toml if it is missing
  • preserves compatibility with older pycluster.toml files by supplying defaults for newer optional config sections such as [qrz]
  • preserves the existing config/pycluster.toml, data, and logs in place

Default listeners:

  • telnet: 0.0.0.0:7300
  • sysop web: 127.0.0.1:8080
  • public web: 127.0.0.1:8081
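Because both web listeners bind to loopback by default, internet-facing access typically sits behind a reverse proxy. A minimal nginx sketch for the public web UI (the server name and plain-HTTP setup are illustrative assumptions; TLS termination is left to the operator):

```nginx
# Hypothetical reverse-proxy sketch for the pyCluster public web UI.
server {
    listen 80;
    server_name cluster.example.org;            # illustrative hostname

    location / {
        proxy_pass http://127.0.0.1:8081;       # default public web listener
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```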

πŸ› οΈ Deployment

Production deployment is handled through the checked-in deploy/ scripts and systemd units.

Validated deployment targets:

  • Debian 12 and 13
  • Ubuntu 24.04 LTS and 25.10
  • Fedora 42 and 43 with SELinux enforcing
  • CentOS Stream 9 and 10 with SELinux enforcing
  • AlmaLinux 8, 9, and 10 with SELinux enforcing
  • Rocky Linux 8, 9, and 10 with SELinux enforcing

Likely install candidates (not yet tested):

  • Fedora 44 with SELinux enforcing (official release April 14, 2026)
  • Red Hat Enterprise Linux 8, 9, and 10 with SELinux enforcing (presumed working)

Deployment notes:

  • install.sh, upgrade.sh, repair.sh, and uninstall.sh have been validated on the distributions above
  • Fedora, CentOS Stream, AlmaLinux, and Rocky Linux installs on very small 1 GB hosts may require temporary swap during package installation; the deploy scripts now handle that automatically
  • RHEL support is expected to track the validated Fedora, CentOS Stream, AlmaLinux, and Rocky Linux path, but has not yet been tested on a subscription-backed Red Hat host
  • Oracle Linux is likely to work as a Red Hat-family target, but has not yet been directly validated
  • Raspberry Pi OS / Raspbian is not yet validated, though 64-bit Debian- or Ubuntu-style images are the most likely to work cleanly
  • Older baselines should not be attempted:
    • Debian 11
    • Ubuntu 22.04 LTS
    • CentOS 7 / RHEL 7 / Oracle Linux 7 and below
  • pyCluster requires Python 3.11+, so older distro baselines without a current Python runtime are out of scope for the supported deployment path

Typical install:

sudo ./deploy/install.sh
sudo ./deploy/doctor.sh

Initial System Operator web access uses the SYSOP account. The generated bootstrap password is printed prominently by the installer, written to /root/pycluster-initial-sysop.txt, and interactive installs pause for explicit acknowledgement so the credentials are not missed.

Typical upgrade:

sudo ./deploy/upgrade.sh
sudo ./deploy/doctor.sh

If you are moving an existing node from an older 1.0.x release, run the upgrade path rather than reinstalling. The upgrader performs the required state conversions in place.

Installed services:

  • pycluster.service
  • pyclusterweb.service
  • pycluster-cty-refresh.timer
  • pycluster-retention.timer

📦 Hardware Requirements

Minimum practical deployment:

  • 1 vCPU
  • 1 GB RAM
  • 10 GB storage
  • persistent network connectivity

Recommended small production node:

  • 2 vCPU
  • 2 GB RAM
  • 20 GB SSD-backed storage

Notes:

  • SQLite works well at this scale
  • reverse proxy, fail2ban, and package upgrades are more comfortable with 2 GB RAM
  • very small Fedora or Red Hat-family hosts may temporarily need swap during package operations

πŸ” Security

pyCluster supports:

  • local callsign blocking
  • per-user access controls for telnet and web
  • structured auth-failure logging
  • shipped fail2ban filters and jails
  • imported exact-IP blocks from DXSpider badip.local
  • sysop visibility for recent auth failures and current bans

Auth-failure log retention:

  • shipped logrotate policy for /var/log/pycluster/authfail.log
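pyCluster ships its own fail2ban filters and jails; purely to illustrate how such a jail is wired to the auth-failure log, a sketch follows. The filter name and thresholds are assumptions, not necessarily what ships:

```ini
# /etc/fail2ban/jail.d/pycluster.local -- illustrative sketch
[pycluster]
enabled  = true
filter   = pycluster-auth       # assumed name; use the shipped filter
logpath  = /var/log/pycluster/authfail.log
maxretry = 5
bantime  = 3600
```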

🌍 CTY Data

pyCluster ships with a bundled cty.dat, and install/upgrade perform a best-effort refresh from Country Files.

Manual refresh:

python3 ./scripts/update_cty.py --config ./config/pycluster.toml

Automatic refresh:

  • pycluster-cty-refresh.timer
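To show what CTY data looks like, here is a minimal parser sketch for the cty.dat record layout (eight colon-separated header fields, then a comma-separated prefix list terminated by ";"). This is a simplified illustration of the file format, not pyCluster's actual loader; real files also carry exact-call entries ("=W1AW") and zone overrides in (), [] that this sketch ignores:

```python
# Minimal cty.dat parsing sketch -- illustrative, not pyCluster's loader.
SAMPLE = """United States:            05:  08:  NA:   37.53:    91.67:     5.0:  K:
    K,W,N,AA;
"""

def parse_cty(text: str) -> dict:
    entities = {}
    for record in text.split(";"):
        if not record.strip():
            continue
        # Fields: name, CQ zone, ITU zone, continent, lat, lon, UTC offset,
        # primary prefix; everything after the 8th colon is the prefix list.
        parts = [p.strip() for p in record.split(":")]
        name, cq, cont = parts[0], int(parts[1]), parts[3]
        prefixes = [p.strip() for p in parts[8].split(",") if p.strip()]
        entities[name] = {"cq": cq, "continent": cont, "prefixes": prefixes}
    return entities

cty = parse_cty(SAMPLE)
print(cty["United States"]["cq"])               # prints 5
print("W" in cty["United States"]["prefixes"])  # prints True
```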

🧹 Retention and Cleanup

pyCluster can automatically prune older operational data.

  • spots, messages, and bulletins can be retained for configurable day counts
  • the System Operator web UI exposes:
    • ability to enable age-based cleanup
    • per-category day values
    • ad-hoc, on-demand cleanup
  • scheduled cleanup runs daily through:
    • pycluster-retention.timer
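Age-based cleanup amounts to deleting rows older than a per-category cutoff. A self-contained sketch of that technique against a throwaway SQLite table (the table and column names are assumptions for illustration, not pyCluster's actual schema):

```python
# Retention sketch: prune rows older than a cutoff from a SQLite table.
# Table/column names are illustrative, not pyCluster's real schema.
import sqlite3
from datetime import datetime, timedelta, timezone

def prune_spots(conn: sqlite3.Connection, max_age_days: int) -> int:
    """Delete spots older than max_age_days; return how many were removed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    cur = conn.execute("DELETE FROM spots WHERE created_at < ?",
                       (cutoff.isoformat(),))
    conn.commit()
    return cur.rowcount

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spots (dx TEXT, created_at TEXT)")
now = datetime.now(timezone.utc)
conn.executemany(
    "INSERT INTO spots VALUES (?, ?)",
    [("K3XYZ", (now - timedelta(days=40)).isoformat()),   # stale spot
     ("AI3I",  (now - timedelta(days=1)).isoformat())],   # fresh spot
)
removed = prune_spots(conn, max_age_days=30)
print(removed)                                            # prints 1
remaining = conn.execute("SELECT COUNT(*) FROM spots").fetchone()[0]
print(remaining)                                          # prints 1
```

Storing timestamps as ISO-8601 text keeps the comparison correct, since same-format ISO strings sort chronologically.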

📚 Documentation

πŸ™ Credits

pyCluster is created and led by John D. Lewis, AI3I, with help from OpenAI ChatGPT/Codex and Anthropic Claude.

Special thanks for advice, assistance, consideration, and testing:

  • Eric Tichansky, NO3M
  • Howard Leadmon, WB3FFV
  • Joe Reed, N9JR

🤝 Contributing

See CONTRIBUTING.md.

🕒 Change Log

See CHANGELOG.md.
