Want a fast GUI on Linux? → Adaptive CycleScanner Desktop — charts, CSV import, and sliders on top of this same Rust-backed library.
Whitepaper (PDF): WHITEPAPER.pdf — theory and methodology behind the cycle + drift framework (from cycles_with_drift_sexy_version).
A Python package for adaptive cycle scanning: the heavy work runs in Rust (via PyO3), so it typically runs around 200× faster than a pure-Python version—while you still install with pip and write normal Python.
Author: Edward Samokhvalov · License: MIT · Source code: github.com/algomaschine/adaptive_cycle_scanner
## What does this do?
You give it a stream of prices (or similar numbers). It tries to find which cycle length (in bars) best explains the wiggles in your data right now—useful for charts, backtests, or live dashboards.
## Why not only Python?
The same math in plain Python can be slow on long series. The core here is written in Rust and exposed to Python, so update() and fit() stay fast.
## What you need
- Python 3.9 or newer
- To install a pre-built wheel: nothing else.
- To build from this source folder: Rust and `pip install maturin`.
Once the project is on PyPI (after you or the maintainer publishes it):

```bash
python3 -m pip install adaptive_cycle_scanner
```

From a wheel file you already built (the file name will match your Python version and OS):

```bash
python3 -m pip install target/wheels/adaptive_cycle_scanner-*.whl
```

Build the wheel yourself (from the repository root):

```bash
python3 -m pip install maturin
python3 -m maturin build --release
python3 -m pip install target/wheels/adaptive_cycle_scanner-*.whl
```

```python
import adaptive_cycle_scanner as acr

scanner = acr.AdaptiveCycleScanner(
    min_cycle=5,
    max_cycle=200,
    forgetting_factor=0.98,
    bartels_window=200,
    genuineness_threshold=49.0,
    use_regime_detection=True,
)

# One new price at a time
out = scanner.update(1800.0)
print("Dominant cycle (bars):", out["dominant_length"])
print("Phase / amplitude:", out["dominant_phase"], out["dominant_amplitude"])

# Or many prices at once
fit_out = scanner.fit([1800.0, 1801.1, 1802.0, 1801.7])
state = scanner.get_state()
```

`out` is a regular Python dict with keys like `dominant_length`, `trend`, `strengths`, and more (see below).
If you already installed the package, you can scan prices from a spreadsheet export without writing your own loop.

- Make a CSV with a header row and a column named `close` or `price` (one number per row). Example (`examples/sample_prices.csv` in this repo):

  ```
  close
  100.0
  100.2
  100.5
  ...
  ```

- From the folder where you cloned the repo:

  ```bash
  python3 examples/read_csv.py
  ```

  That uses the bundled sample file. To use your own file:

  ```bash
  python3 examples/read_csv.py /path/to/your_prices.csv
  ```

- You should see how many rows were read and the last estimated dominant cycle length (in bars). The script uses only Python's built-in `csv` module (no pandas).
Do it in a few lines yourself: read the numbers into a list, then call `fit()`:

```python
import csv

import adaptive_cycle_scanner as acr

prices = []
with open("your_file.csv", newline="") as f:
    for row in csv.DictReader(f):
        prices.append(float(row["close"]))  # or your column name

scanner = acr.AdaptiveCycleScanner(min_cycle=5, max_cycle=80)
out = scanner.fit(prices)
print("Last dominant cycle (bars):", out["dominant_lengths"][-1])
```

| Idea | What to tweak |
|---|---|
| Which cycle lengths to try | `min_cycle` … `max_cycle` (in bars) |
| How fast old data fades | `forgetting_factor` (closer to 1 = longer memory) |
| How much history for statistics | `bartels_window` |
| How “sure” a cycle must look | `genuineness_threshold` |
| Spot regime / behavior changes | `use_regime_detection` and `change_point_threshold` |
Full constructor list and defaults are in the reference at the bottom.
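For intuition on `forgetting_factor`: a sample that is k bars old is weighted by λ^k, so the effective memory is roughly 1/(1 − λ) bars. A quick back-of-the-envelope helper (generic math, not part of the library's API):

```python
def effective_window(forgetting_factor: float) -> float:
    """Approximate memory of exponential forgetting: the weights
    lambda**k sum to 1 / (1 - lambda), i.e. about that many bars
    still carry meaningful weight."""
    return 1.0 / (1.0 - forgetting_factor)

print(effective_window(0.98))   # default: roughly 50 bars of memory
print(effective_window(0.995))  # closer to 1: roughly 200 bars
```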
Roughly: which cycle won, its phase and strength, the Kalman trend, per-length scores, and whether a regime change was flagged.
Keys include: `dominant_length`, `dominant_phase`, `dominant_amplitude`, `trend`, `strengths`, `genuineness`, `regime_change_detected`.
Batch version: parallel lists of `lengths`, `phases`, `amplitudes`, `trends`, plus `strengths_list`, `genuineness_list`, `regime_change_flags`, and the ending Kalman state in `kalman_level_end` / `kalman_slope_end`.
```bash
python3 -m pydoc adaptive_cycle_scanner
```

Or in a REPL: `help(adaptive_cycle_scanner.AdaptiveCycleScanner)`.
The package also ships type hints (`py.typed` + `.pyi`) so editors can autocomplete.
| File | What it shows |
|---|---|
| `examples/read_csv.py` | Start here: load CSV → `fit()` (uses `sample_prices.csv` by default) |
| `examples/basic_update.py` | Live-style `update()` loop |
| `examples/fit_series.py` | Batch `fit()` |
| `examples/state_debug.py` | Inspecting internal state |
| `smoke_test.py` | Quick “did it install?” check |
```bash
python3 examples/read_csv.py
python3 examples/basic_update.py
python3 examples/fit_series.py
python3 examples/state_debug.py
```

If you have the older pure-Python scanner next to this project, you can run:

```bash
python3 parity_check.py
```

It checks that outputs line up in a practical sense (not necessarily bit-identical).
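“Line up in a practical sense” means agreeing within a tolerance rather than bit-for-bit, which is the right expectation when the same floating-point math runs in two languages. A generic comparison helper of the kind such a check relies on (an illustration, not `parity_check.py`'s actual code):

```python
def close_enough(a: float, b: float, rel: float = 1e-6, abs_tol: float = 1e-9) -> bool:
    """True when a and b agree to a relative tolerance, with an
    absolute floor so values near zero do not fail spuriously."""
    return abs(a - b) <= max(abs_tol, rel * max(abs(a), abs(b)))

print(close_enough(1.0 + 5e-7, 1.0))  # True: within relative tolerance
print(close_enough(1.0, 1.001))       # False: a real discrepancy
```

Python's standard library offers the same idea as `math.isclose` if you prefer not to hand-roll it.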
A separate app uses this library for charts and controls: adaptive_cyclescanner_desktop on GitHub (also linked at the top of this README).
Build a wheel:

```bash
python3 -m pip install maturin
python3 -m maturin build --release
```

Publish to TestPyPI (practice), then PyPI (the real pip):

```bash
python3 -m maturin publish --repository testpypi
python3 -m maturin publish
```

You need a PyPI account and an API token. Wheels you build locally only match your OS/CPU/Python; for wide support, use CI (e.g. maturin-action) to build for Linux, macOS, and Windows.
- `KalmanTrendFilter` — trend filter used for detrending (also usable on its own).
- `BayesianChangePointDetector` — online change-point helper used when regime detection is on.
- `AdaptiveCycleScanner` — the main scanner.
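To show the idea behind the trend filter, here is a generic local-linear-trend Kalman filter (level + slope, observing price as a noisy level). This is a textbook sketch, not the package's `KalmanTrendFilter` implementation, though the constructor's `kalman_*` parameters correspond to the same noise terms:

```python
class LocalLinearKalman:
    """Minimal local-linear-trend Kalman filter (level + slope).
    A generic sketch of the technique, not the library's class."""

    def __init__(self, q_level=1e-4, q_slope=1e-6, r=1.0):
        self.level = 0.0
        self.slope = 0.0
        # State covariance [[p00, p01], [p10, p11]]; start uncertain.
        self.p = [[1.0, 0.0], [0.0, 1.0]]
        self.q_level, self.q_slope, self.r = q_level, q_slope, r

    def update(self, z):
        # Predict: level drifts by slope; covariance grows by process noise.
        level_pred = self.level + self.slope
        (a, b), (c, d) = self.p
        p00 = a + b + c + d + self.q_level
        p01 = b + d
        p10 = c + d
        p11 = d + self.q_slope
        # Correct with the observed price z (we observe the level only).
        s = p00 + self.r
        k0, k1 = p00 / s, p10 / s
        resid = z - level_pred
        self.level = level_pred + k0 * resid
        self.slope += k1 * resid
        self.p = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.level, self.slope
```

On a clean linear ramp the slope estimate converges to the true per-bar drift, which is what makes the residual (price minus level) a detrended series worth scanning for cycles.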
Constructor defaults:

- `min_cycle=5`, `max_cycle=200`
- `forgetting_factor=0.98`
- `bartels_window=200`
- `genuineness_threshold=49.0`
- `change_point_threshold=0.8`
- `kalman_process_level=1e-4`, `kalman_process_slope=1e-6`, `kalman_observation=1.0`
- `use_regime_detection=True`
- `cusum_threshold=2.0`
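As an illustration of what `cusum_threshold` gates, here is a generic two-sided CUSUM detector (a standard construction, not the package's `BayesianChangePointDetector`): it accumulates deviations from a reference level and flags a bar when either side crosses the threshold.

```python
def cusum_flags(values, threshold=2.0, drift=0.0):
    """Two-sided CUSUM: return the indices where the cumulative
    deviation from the current reference level crosses the threshold,
    re-anchoring the reference after each detection."""
    flags = []
    pos = neg = 0.0
    ref = values[0]
    for i, x in enumerate(values):
        pos = max(0.0, pos + (x - ref) - drift)  # upward shifts
        neg = max(0.0, neg - (x - ref) - drift)  # downward shifts
        if pos > threshold or neg > threshold:
            flags.append(i)
            pos = neg = 0.0
            ref = x  # treat the new level as the baseline
    return flags

print(cusum_flags([0.0] * 20 + [5.0] * 20))  # [20]: one jump detected
```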
Methods:

- `update(new_price)` → dict
- `fit(prices)` → dict
- `get_state()` → dict
Internal state fields: `_lengths`, `_t`, `_regime_change_detected`, `_last_dominant_length`
Pipeline: Kalman detrending → Goertzel-style cycle scoring (with forgetting) → Bartels-style genuineness → pick a dominant cycle → optional Bayesian-style regime flags on the dominant-length history.
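To make the cycle-scoring step concrete, here is a toy batch version of the idea: project the (already detrended) series onto a sine/cosine pair for each candidate period and keep the strongest. The real scanner does this recursively with exponential forgetting, but the scoring principle is the same:

```python
import math

def cycle_power(xs, period):
    # Projection of the series onto a sine/cosine pair at this period;
    # large power means this period explains much of the variance.
    c = sum(v * math.cos(2.0 * math.pi * i / period) for i, v in enumerate(xs))
    s = sum(v * math.sin(2.0 * math.pi * i / period) for i, v in enumerate(xs))
    return (c * c + s * s) / len(xs)

def dominant_length(xs, min_cycle=5, max_cycle=50):
    # Scan integer periods and keep the one with the highest power.
    return max(range(min_cycle, max_cycle + 1), key=lambda p: cycle_power(xs, p))

# A synthetic series with a clean 20-bar cycle is recovered exactly.
xs = [math.sin(2.0 * math.pi * i / 20.0) for i in range(200)]
print(dominant_length(xs))  # 20
```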
- This repo is the computational core + Python bindings, not the GUI.
- Speed vs pure Python is workload-dependent; ~200× is a typical order of magnitude for the hot path vs a naive Python loop implementing the same class of algorithm.