The bottleneck is not the processor. It is the shape of the cell.
A benchmarking library that compares cubic (6-connected) and FCC/rhombic dodecahedral (12-connected) lattice topologies across graph theory, spatial operations, signal processing, and embedding organization.
| Metric | FCC vs Cubic | Scale |
|---|---|---|
| Average shortest path | 30% shorter | 125 – 8,000 nodes |
| Graph diameter | 40% smaller | 125 – 8,000 nodes |
| Algebraic connectivity | 2.4× higher | 125 – 8,000 nodes |
| Flood fill reach | 55% more nodes | 125 – 8,000 nodes |
| NN query speed | 17% faster | 125 – 8,000 nodes |
| Signal reconstruction | 4-10× lower MSE | 216 – 1,000 samples |
| Reconstruction isotropy | 5-20× more uniform | 216 – 1,000 samples |
| Embedding neighbor recall | +15-26pp at 1-hop | 125 – 1,000 nodes |
| Information diffusion | 1.4-2× faster | 125 – 1,000 nodes |
| Edge cost | ~2× more edges | (the price) |
These ratios are stable across all tested scales, consistent with their origin in Voronoi cell geometry rather than in sample size.
| Metric | Corpus vs Uniform | Scale |
|---|---|---|
| Fiedler ratio (direction-weighted) | 2.3× → 6.1× | 125 – 8,000 nodes |
| Path advantage | 30% → 60% shorter | 125 – 1,000 nodes |
| Consensus speedup | 1.0× → 6.7× | 125 nodes |
| Prime-vertex coherence | p = 0.000025 | Single cell (40,320 permutations) |
Heterogeneous edge weights amplify the FCC advantage. Direction-based weighting — mapping structured values to the 6 direction pairs of the FCC lattice — nearly triples the Fiedler ratio. The mechanism is bottleneck resilience: FCC routes around suppressed edges that strangle cubic lattices.
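The bottleneck mechanism is easy to probe outside the library. The sketch below is an illustration, not the packaged benchmark: it builds small cubic and FCC graphs directly from integer coordinates, suppresses the weight of one direction family of edges, and compares how much algebraic connectivity (the Fiedler value) each topology retains. The lattice sizes, the suppressed direction, and the 0.05 suppression factor are all assumptions chosen for illustration.

```python
# Sketch (not the library's benchmark): suppress one direction family of
# edge weights and measure how much of the Fiedler value survives.
import itertools
import networkx as nx
import numpy as np

def cubic_graph(n):
    """6-connected grid on n^3 integer points; each edge tagged by its axis."""
    G = nx.Graph()
    for p in itertools.product(range(n), repeat=3):
        for axis in range(3):
            q = list(p)
            q[axis] += 1
            if q[axis] < n:
                G.add_edge(p, tuple(q), direction=axis)
    return G

FCC_DIRS = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]

def fcc_graph(n):
    """12-connected FCC graph on integer points with even coordinate sum."""
    pts = {p for p in itertools.product(range(n), repeat=3) if sum(p) % 2 == 0}
    G = nx.Graph()
    G.add_nodes_from(pts)
    for p in pts:
        for d, v in enumerate(FCC_DIRS):
            q = tuple(a + b for a, b in zip(p, v))
            if q in pts:
                G.add_edge(p, q, direction=d)
    return G

def fiedler(G, weight=None):
    """Second-smallest Laplacian eigenvalue (dense numpy is fine at this size)."""
    A = nx.to_numpy_array(G, weight=weight)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

def retained_connectivity(G, suppressed, factor=0.05):
    """Fraction of the unweighted Fiedler value kept after suppression."""
    for _, _, data in G.edges(data=True):
        data["weight"] = factor if data["direction"] == suppressed else 1.0
    return fiedler(G, weight="weight") / fiedler(G, weight=None)

print("cubic retains:", retained_connectivity(cubic_graph(6), suppressed=2))
print("fcc retains:  ", retained_connectivity(fcc_graph(8), suppressed=0))
```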
Computation is built on the cube. Memory is linear. Pixels are square. Voxels are cubic. Nobody chose this — it accumulated. Descartes gave us orthogonal coordinates. Von Neumann gave us linear memory. The cubic lattice is the spatial expression of Cartesian geometry.
Is the cube optimal? This library measures the alternative: the face-centered cubic lattice, whose Voronoi cells are rhombic dodecahedra. 12 faces instead of 6. The densest sphere packing in three dimensions (Kepler, proved by Hales 2005, formally verified 2017). The lattice that nature uses for copper, aluminum, and gold.
```bash
pip install rhombic          # minimal (numpy + networkx)
pip install "rhombic[viz]"   # add matplotlib for plots
pip install "rhombic[all]"   # everything including dev tools
```

Reproduce all results:

```bash
python -m rhombic.benchmark
```

Use in code:
```python
from rhombic.lattice import CubicLattice, FCCLattice

cubic = CubicLattice(n=10)  # 1000 nodes, 6-connected
fcc = FCCLattice(n=6)       # ~864 nodes, 12-connected

# Convert to networkx for any graph analysis
G_cubic = cubic.to_networkx()
G_fcc = fcc.to_networkx()
```

Compare topologies:
```python
from rhombic.lattice import CubicLattice, FCCLattice

cubic, fcc = CubicLattice(5), FCCLattice(5)
print(f"Cubic: {cubic.stats().connectivity}-connected, {cubic.stats().node_count} nodes")
print(f"FCC: {fcc.stats().connectivity}-connected, {fcc.stats().node_count} nodes")
# Cubic: 6-connected, 125 nodes
# FCC: 12-connected, 500 nodes
```

Four metrics, three scales, consistent ratios. The FCC lattice outperforms the cubic lattice on every measure of routing efficiency and structural robustness. The cost is bounded: roughly 2× the edges buys roughly 30% shorter paths and 2.4× higher algebraic connectivity.
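The graph-theory ratios in the first table can be spot-checked from these objects. A minimal sketch, assuming only the `to_networkx()` conversion shown above plus standard networkx calls; exact values shift slightly with lattice size.

```python
import networkx as nx
from rhombic.lattice import CubicLattice, FCCLattice

# Roughly matched node counts, as in the usage example above.
graphs = {"cubic": CubicLattice(n=10).to_networkx(),
          "fcc": FCCLattice(n=6).to_networkx()}

for name, G in graphs.items():
    print(name,
          "nodes:", G.number_of_nodes(),
          "edges:", G.number_of_edges(),
          "avg path:", round(nx.average_shortest_path_length(G), 2),
          "diameter:", nx.diameter(G))
```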
The routing advantage carries over to spatial operations. FCC flood fill reaches 55% more nodes per hop. Nearest-neighbor queries are 17% faster. Range queries return 24% more nodes per volume (denser packing). The cost: range query time scales with density, running 3-5× slower for sphere/box queries at 8,000 nodes.
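Flood-fill reach is just the size of a BFS ball. A minimal sketch, assuming the same `to_networkx()` API; the source node and hop counts are illustrative, and a corner source under-counts reach for both lattices equally.

```python
import networkx as nx
from rhombic.lattice import CubicLattice, FCCLattice

def reach(G, source, hops):
    """Number of nodes within `hops` edges of `source`, excluding the source."""
    return len(nx.single_source_shortest_path_length(G, source, cutoff=hops)) - 1

for name, G in [("cubic", CubicLattice(n=10).to_networkx()),
                ("fcc", FCCLattice(n=6).to_networkx())]:
    src = next(iter(G.nodes))  # arbitrary source; an interior node reaches more
    print(name, [reach(G, src, h) for h in (1, 2, 3)])
```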
Signal reconstruction shows the same pattern. FCC spatial sampling produces 4-10× lower MSE and 5-20× more isotropic reconstruction than cubic sampling at matched sample counts. The advantage peaks in the mid-frequency range (10-60% of Nyquist) and grows with scale, from +6 dB at 216 samples to +10 dB at 1,000. Above Nyquist, both lattices alias and cubic's axis alignment accidentally helps.
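A crude geometric proxy for why this happens (not the library's reconstruction benchmark): at matched sampling density, a random location in the volume sits closer to its nearest FCC sample, and the spread of those gaps is tighter, which is what drives lower interpolation error and better isotropy. The sketch assumes scipy for the KD-tree on top of the minimal install; the domain size and sample counts are illustrative.

```python
# Proxy sketch: distance from random query points to the nearest sample,
# cubic grid vs FCC points at roughly matched sample counts.
import itertools
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 8  # cubic grid: n^3 samples in the box [0, n-1]^3

cubic = np.array(list(itertools.product(range(n), repeat=3)), dtype=float)

# FCC: integer points with even coordinate sum on a finer grid, rescaled so
# the two point sets fill the same box at nearly the same density.
m = int(round(n * 2 ** (1 / 3)))   # ~2x more raw grid points, half survive
fcc = np.array([p for p in itertools.product(range(m), repeat=3)
                if sum(p) % 2 == 0], dtype=float) * (n - 1) / (m - 1)

queries = rng.uniform(0, n - 1, size=(50_000, 3))
for name, pts in [("cubic", cubic), ("fcc", fcc)]:
    gaps, _ = cKDTree(pts).query(queries)
    print(f"{name}: {len(pts)} samples, mean gap {gaps.mean():.3f}, "
          f"p95 gap {np.quantile(gaps, 0.95):.3f}")
```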
Does the FCC advantage survive when the lattice organizes high-dimensional embedding data? It does: FCC captures 15-26 more percentage points of an embedding's true nearest neighbors at 1-hop. Information diffuses 1.4-2× faster. Consensus converges 1.58× faster at moderate scale (500 nodes), though per-neighbor weight dilution reduces the advantage at 1,000 nodes.
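The diffusion and consensus numbers track algebraic connectivity, and a toy simulation makes the trade-off visible. A minimal sketch, not the library's experiment: run average consensus x ← x − εLx on each lattice graph and count steps until node values agree. The 1/(max degree + 1) step size is exactly the per-neighbor dilution mentioned above; sizes, seed, and tolerance are assumptions.

```python
import networkx as nx
import numpy as np
from rhombic.lattice import CubicLattice, FCCLattice

def steps_to_consensus(G, tol=1e-3, seed=0):
    """Steps of x <- x - eps * L x until the spread of node values is < tol."""
    A = nx.to_numpy_array(G)
    L = np.diag(A.sum(axis=1)) - A
    eps = 1.0 / (A.sum(axis=1).max() + 1.0)   # per-neighbor dilution
    x = np.random.default_rng(seed).normal(size=len(G))
    for step in range(1, 50_000):
        x = x - eps * (L @ x)
        if x.max() - x.min() < tol:
            return step
    return None

print("cubic:", steps_to_consensus(CubicLattice(n=10).to_networkx()))
print("fcc:  ", steps_to_consensus(FCCLattice(n=6).to_networkx()))
```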
A proof-of-concept ANN index that organizes high-dimensional embeddings on lattice topology. At matched node counts, the FCC index captures +7 to +20 percentage points more true nearest neighbors at 1-hop than the cubic index. The only variable is the connectivity pattern.
```python
from rhombic.index import FCCIndex, CubicIndex, brute_force_knn

fcc = FCCIndex.from_target_nodes(dim=384, target_nodes=500).build(embeddings)
results = fcc.query(query_vector, k=10, hops=1)
recall = fcc.recall_at_k(queries, ground_truth, k=10, hops=1)
```

What happens when edges carry heterogeneous weights? Seven experiments across two scales (lattice and single-cell). The FCC advantage amplifies under structured weights: direction-based corpus weighting pushes the Fiedler ratio from 2.3× to 6.1×. Prime-vertex coherence is significant at the optimal mapping (p = 0.000025 against 40,320 alternative permutations). Spectral bottleneck creation is universal across 24-edge polytopes, not specific to the rhombic dodecahedron.
Thirteen experiments across four model families (1.1B–14B parameters), demonstrating that a cybernetic feedback mechanism discovers rhombic dodecahedral geometry in multi-channel LoRA bridge matrices.
Key finding: When the Steersman (contrastive + spectral feedback) is active at channel count n=6, 100% of bridge matrices develop block-diagonal structure aligned to the three coordinate planes of the rhombic dodecahedron. Without the Steersman: 0%. The co-planar/cross-planar coupling ratio peaks at 82,854:1. Structure locks in by step 200, survives adversarial initialization, and costs 0.17% validation loss.
| Finding | Value |
|---|---|
| Block-diagonal rate (cybernetic n=6) | 100% (42,500+ matrices) |
| Block-diagonal rate (non-cybernetic) | 0% (570 matrices) |
| Peak co-planar/cross-planar ratio | 82,854:1 |
| Lock-in speed | ~200 steps (half-life 123 steps) |
| Adversarial initialization suppression | 99.5% in 900 steps |
| Bridge Fiedler bifurcation (n=6 vs n≠6) | 1,020× |
| Val loss cost of topology | 0.17% max |
| Scale invariance | 1.1B, 7B, 14B (Fiedler converges to ~0.10) |
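The co-planar/cross-planar ratio in the table can be computed from any 6×6 bridge matrix once each channel is assigned to one of the three coordinate planes. A minimal sketch with a hypothetical assignment (two channels per plane, in order) and a toy block-diagonal bridge; the library's actual mapping lives in `rhombic.nn` and may differ.

```python
# Sketch: co-planar vs cross-planar coupling energy of a 6x6 bridge matrix.
# The channel -> plane assignment and the ratio definition (mean squared
# co-planar coupling over mean squared cross-planar coupling) are assumptions.
import numpy as np

PLANES = np.array([0, 0, 1, 1, 2, 2])  # channel i belongs to plane PLANES[i]

def coupling_ratio(bridge):
    bridge = np.asarray(bridge, dtype=float)
    off = ~np.eye(6, dtype=bool)                       # ignore self-coupling
    same = (PLANES[:, None] == PLANES[None, :]) & off  # co-planar, off-diagonal
    co = (bridge[same] ** 2).mean()
    cross = (bridge[~same & off] ** 2).mean()
    return co / cross

# Toy block-diagonal bridge: strong coupling inside each 2x2 plane block,
# tiny noise everywhere else.
toy = 1e-3 * np.random.default_rng(0).normal(size=(6, 6))
for p in range(3):
    toy[2 * p: 2 * p + 2, 2 * p: 2 * p + 2] += np.array([[1.0, 0.8], [0.8, 1.0]])
print(f"co/cross ratio: {coupling_ratio(toy):.2e}")
```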
A 7-round adversarial audit is complete: 232 findings, 87 fixed, zero CRITICAL or MAJOR findings remaining. Papers 2 and 3 are both submission-ready.
- Audit trail — full findings, hub validations, rewrite log
- Cross-phase synthesis — the complete argument across all four rungs: cultural genealogy, empirical evidence, cybernetic interpretation, and practical recommendations
- Reproducible by default. Every result has code that generates it.
- The geometry is the argument. The numbers are the evidence.
- Cost is always reported alongside benefit.
- Sparse results are data, not failure.
rhombic-agent — a Hermes Agent that thinks in 12 dimensions. 9 custom tools + 3 conversational skills. Ask it to run experiments, generate visualizations, and explain the geometry.
- Interactive demo — try it in your browser
- The essay — the thesis for humans
- PyPI — `pip install rhombic`
- Full synthesis — the complete argument across all four rungs
- Weighted extensions — what happens under heterogeneous weights
TeLoRA adds a learnable n×n coupling matrix — the bridge — between the A and B projections in LoRA, adding n² parameters per layer. When the bridge is the identity matrix, the architecture reduces exactly to standard LoRA.
The bridge does not improve fine-tuning loss. It provides something LoRA cannot: a compact, interpretable diagnostic of adapter behavior — an n²-parameter summary of what training discovered, readable without inference or evaluation. At n=6, a cybernetic feedback mechanism (the Steersman) discovers rhombic dodecahedral geometry in the bridge.
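A minimal sketch of the idea, assuming PyTorch; this is not the library's `RhombiLoRALinear`, and the names, scaling, and defaults are illustrative.

```python
import torch
import torch.nn as nn

class BridgedLoRALinear(nn.Module):
    """LoRA with an n x n bridge between the down- and up-projections."""

    def __init__(self, base: nn.Linear, n: int = 6, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # frozen pretrained weights
            p.requires_grad_(False)
        self.A = nn.Linear(base.in_features, n, bias=False)   # down-projection
        self.B = nn.Linear(n, base.out_features, bias=False)  # up-projection
        self.bridge = nn.Parameter(torch.eye(n))  # identity => exactly LoRA
        self.scale = alpha / n
        nn.init.zeros_(self.B.weight)             # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x) @ self.bridge.T)

layer = BridgedLoRALinear(nn.Linear(384, 384), n=6)
print(layer(torch.randn(2, 384)).shape)  # torch.Size([2, 384])
```

With `bridge` held at the identity, the forward pass is the standard LoRA update; training the bridge adds the n² diagnostic parameters described above.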
- Architecture: `rhombic.nn` — RhombiLoRALinear, topology, bridge init
- Training: `scripts/train_cybernetic.py` — full Steersman pipeline
- 20 experimental learnings: LEARNINGS.md
- Paper 1: The Shape of the Cell — four-domain topology comparison (arXiv cs.DS)
- Paper 2: Structured Edge Weights Amplify FCC Lattice Topology — bottleneck resilience under heterogeneous weights
- Paper 3: The Learnable Bridge — cybernetic feedback discovers rhombic dodecahedral geometry in multi-channel LoRA (13 experiments, 4 model families, 7-round audit)
See CONTRIBUTING.md. We're looking for new topologies, new metrics, and new rungs on the experimental ladder.
MPL-2.0 — Use freely. Modifications to library files shared back to the commons.




