A C++ library implementing traditional fractal image compression with plans for AI-powered optimization.
Fractal image compression works by representing an image as a set of self-similar transformations. The image is partitioned into range blocks, and for each range block, the algorithm finds a larger domain block that can be transformed (scaled, rotated, flipped) to closely approximate it.
- ✅ Traditional fractal compression/decompression
- ✅ Configurable block sizes and search parameters
- ✅ PGM image format support
- ✅ Cross-platform (Linux, macOS, Windows)
- 🚧 AI-powered domain selection (planned)
- 🚧 GPU acceleration (planned)
- 🚧 Mobile platform support (Android/iOS) (planned)
- C++17 compatible compiler (GCC 7+, Clang 5+, MSVC 2017+)
- CMake 3.15+
- Optional: OpenMP for parallel encoding
```bash
mkdir build
cd build
cmake ..
cmake --build . --config Release
```

```bash
# Run demo with test pattern
./fractal_demo

# Compress your own image (PGM format)
./fractal_demo input.pgm
```

```cpp
#include "encoder.h"
#include "decoder.h"
#include "image_io.h"

// Load image
auto img = fractal::loadPGM("input.pgm");

// Configure compression
fractal::CompressionParams params;
params.range_size = 8;       // Range block size
params.domain_size = 16;     // Domain block size (2x range)
params.domain_step = 4;      // Search step size
params.max_iterations = 10;  // Decoding iterations

// Encode
fractal::FractalEncoder encoder;
auto mappings = encoder.encode(img, params);
fractal::saveMappings(mappings, "compressed.fic");

// Decode
fractal::FractalDecoder decoder;
auto decoded = decoder.decode(mappings, img.width(), img.height(), params);
fractal::savePGM(decoded, "output.pgm");
```

- Partition image into non-overlapping range blocks (e.g., 8×8)
- Create overlapping domain blocks (e.g., 16×16) with configurable step size
- For each range block:
- Search all domain blocks
- Try 8 geometric transformations (4 rotations × 2 flip states)
- Compute optimal contrast/brightness using least squares
- Select mapping with minimum MSE
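The contrast/brightness step above has a closed form: choosing s and o to minimize Σᵢ(s·dᵢ + o − rᵢ)² over the flattened pixel vectors is ordinary least squares. A minimal standalone sketch of that fit (illustrative, not the library's actual encoder code):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Least-squares fit of r ≈ s*d + o for one range/domain pair, where
// d holds the (downsampled) domain pixels and r the range pixels.
struct Affine { double s; double o; };

Affine fitContrastBrightness(const std::vector<double>& d,
                             const std::vector<double>& r) {
    const double n = static_cast<double>(d.size());
    double sd = 0, sr = 0, sdd = 0, sdr = 0;
    for (std::size_t i = 0; i < d.size(); ++i) {
        sd  += d[i];
        sr  += r[i];
        sdd += d[i] * d[i];
        sdr += d[i] * r[i];
    }
    const double denom = n * sdd - sd * sd;
    // A flat domain block makes contrast undefined; fall back to mean offset.
    const double s = (denom != 0.0) ? (n * sdr - sd * sr) / denom : 0.0;
    const double o = (sr - s * sd) / n;
    return {s, o};
}
```

The residual MSE of this fit is what the encoder compares across candidate domains; in practice s is also clamped (e.g., |s| < 1) so the decoder's iteration stays contractive.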
- Start with initial gray image
- For each iteration:
- Extract domain blocks from current image
- Apply stored transformations
- Write results to range positions
- Converge after fixed iterations (typically 8-10)
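Why a fixed iteration count suffices: each stored mapping is contractive, so repeated application shrinks the error geometrically from any starting image. A toy single-pixel illustration (assumes |s| < 1; not library code):

```cpp
#include <cassert>
#include <cmath>

// Repeatedly applying a contractive affine map x <- s*x + o drives x
// toward the fixed point o / (1 - s), regardless of the starting value.
double iterateMap(double x, double s, double o, int iterations) {
    for (int i = 0; i < iterations; ++i) x = s * x + o;
    return x;
}
```

With s = 0.5, ten iterations leave only 2⁻¹⁰ of the initial error, which is why 8-10 iterations are typically enough to converge visually.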
Typical encoding times on a modern CPU:
- 256×256 image: ~10-30 seconds
- 512×512 image: ~2-5 minutes
Compression ratios: 10:1 to 50:1 depending on image content and parameters.
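Where those ratios come from, as back-of-envelope arithmetic: a W×H 8-bit image is W·H bytes raw, while the fractal code stores one fixed-size record per range block. Assuming roughly 8 bytes per mapping (an assumption, not the actual record size):

```cpp
#include <cassert>

// Rough compression ratio for a w×h 8-bit image with block×block range
// blocks, assuming a fixed number of bytes per stored mapping.
int roughRatio(int w, int h, int block, int bytes_per_mapping) {
    const int mappings = (w / block) * (h / block);
    return (w * h) / (mappings * bytes_per_mapping);
}
```

For a 256×256 image this gives about 8:1 at 8×8 range blocks and 32:1 at 16×16, consistent with the quoted range once header overhead and parameter choices are factored in.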
Simple grayscale image format. Can convert from other formats using ImageMagick:
```bash
convert input.png -colorspace Gray output.pgm
```

Binary format storing:
- Number of mappings
- For each mapping:
- Domain position (x, y)
- Range position (x, y)
- Transform parameters (scale, offset, rotation, flips)
- Error metric
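A sketch of how such a record could be serialized; the field widths and ordering here are illustrative assumptions, not the library's actual `.fic` layout:

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <vector>

// Hypothetical fixed-width record mirroring the fields listed above.
struct MappingRecord {
    std::uint16_t domain_x, domain_y;  // domain position
    std::uint16_t range_x, range_y;    // range position
    float         scale, offset;       // contrast / brightness
    std::uint8_t  rotation;            // 0-3 quarter turns
    std::uint8_t  flip;                // 0 = none, 1 = mirrored
    float         error;               // MSE of the fit
};

// Writes the mapping count followed by the raw records.
void writeMappings(const std::vector<MappingRecord>& maps, const char* path) {
    std::ofstream out(path, std::ios::binary);
    const std::uint32_t count = static_cast<std::uint32_t>(maps.size());
    out.write(reinterpret_cast<const char*>(&count), sizeof(count));
    out.write(reinterpret_cast<const char*>(maps.data()),
              static_cast<std::streamsize>(maps.size() * sizeof(MappingRecord)));
}
```

A production format would serialize field-by-field with a fixed endianness rather than dumping the struct, since padding and byte order are platform-dependent.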
- Smaller (4×4): More mappings, better quality, slower encoding
- Larger (16×16): Fewer mappings, lower quality, faster encoding
- Recommended: 8×8
- Must be larger than range_size (typically 2×)
- Affects compression ratio and quality
- Smaller (1-2): Exhaustive search, best quality, very slow
- Larger (8-16): Sparse search, lower quality, much faster
- Recommended: 4 for balance
- More iterations: Better convergence, slower decoding
- Fewer iterations: Faster decoding, may not fully converge
- Recommended: 8-10
The next phase will integrate a neural network to predict optimal domain-range mappings:
- Domain selection: Predict top-K candidate domains instead of exhaustive search
- Transform prediction: Estimate optimal rotation/flip/scale parameters
- Expected speedup: 10-100× faster encoding with minimal quality loss
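The top-K idea can be sketched independently of any model: given per-domain scores from a predictor (hypothetical here), the encoder would evaluate only the k best-scoring domains instead of all of them:

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Returns the indices of the k highest-scoring domain blocks, so the
// encoder can restrict its search to those candidates.
std::vector<int> topKDomains(const std::vector<float>& scores, int k) {
    std::vector<int> idx(scores.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](int a, int b) { return scores[a] > scores[b]; });
    idx.resize(k);
    return idx;
}
```

Searching k candidates instead of all N domains cuts encoding cost by roughly N/k, which is where the projected 10-100× speedup would come from.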
- Android JNI bindings
- iOS Swift/Objective-C bindings
- Optimized for ARM processors
- Quantized AI models for mobile deployment
- Fisher, Y. (1995). Fractal Image Compression: Theory and Application
- Barnsley, M. F., & Hurd, L. P. (1993). Fractal Image Compression
MIT License
Contributions welcome! Areas of interest:
- Performance optimization
- Additional image format support
- GPU acceleration
- AI model development
- Mobile platform support
Built as a modern implementation of fractal compression with planned AI enhancements.