Releases: AlessioCappello2/TrStream
v3.0.0 - Revolut integration and refactoring
This release introduces full Revolut integration into the TrStream pipeline and includes a strategic refactoring to reduce code duplication and improve reuse across services.
Highlights
- Revolut Integration Stack
  - Hybrid architecture combining local components (generator, worker) with publicly exposed services (webhook, event queue)
  - Integrated the Revolut Sandbox API for authentication and transaction event testing
  - Introduced a custom CLI to facilitate authentication, webhook registration, and account listing
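A CLI like the one described above is often structured around subcommands. The sketch below is purely illustrative: the command and option names (`auth`, `register-webhook`, `list-accounts`, `--client-id`, `--url`) are assumptions, not the project's actual interface.

```python
import argparse


def build_cli() -> argparse.ArgumentParser:
    # Hypothetical subcommand layout for a Revolut helper CLI;
    # names are illustrative, not TrStream's real interface.
    parser = argparse.ArgumentParser(prog="trstream-revolut")
    sub = parser.add_subparsers(dest="command", required=True)

    auth = sub.add_parser("auth", help="obtain an access token from the Revolut sandbox")
    auth.add_argument("--client-id", required=True)

    hook = sub.add_parser("register-webhook", help="register a webhook URL for transaction events")
    hook.add_argument("--url", required=True)

    sub.add_parser("list-accounts", help="list sandbox accounts")
    return parser
```

Grouping authentication, webhook registration, and account listing under one entry point keeps the sandbox workflow in a single tool.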
- Services Refactoring
  - Defined base modules to centralize shared components across services (reader/writer, producer/consumer, configuration, logging)
  - Refactored the configuration structure and updated the YAML definitions
  - Removed relative imports to improve module clarity and packaging consistency
  - Standardized environment variable ingestion within the Docker images
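A common shape for such a shared base module is a single dataclass that every service loads from the environment. This is a minimal sketch under assumed setting names (`KAFKA_BOOTSTRAP`, `S3_ENDPOINT`, `LOG_LEVEL` are illustrative, not TrStream's actual variables).

```python
import os
from dataclasses import dataclass
from typing import Mapping, Optional


@dataclass(frozen=True)
class ServiceConfig:
    # Hypothetical shared settings; field names are illustrative.
    kafka_bootstrap: str
    s3_endpoint: str
    log_level: str


def load_config(env: Optional[Mapping[str, str]] = None) -> ServiceConfig:
    # Read settings from the environment (as injected into the Docker
    # images), falling back to local-development defaults.
    env = os.environ if env is None else env
    return ServiceConfig(
        kafka_bootstrap=env.get("KAFKA_BOOTSTRAP", "localhost:9092"),
        s3_endpoint=env.get("S3_ENDPOINT", "http://localhost:9000"),
        log_level=env.get("LOG_LEVEL", "INFO"),
    )
```

Centralizing this in one base module means every service ingests environment variables the same way, which is what makes the Docker-level standardization possible.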
v2.0.0 - Stripe integration and storage update
This release introduces full Stripe integration into the TrStream pipeline, along with key storage and processing updates.
Highlights
- Stripe Integration Stack
  - Fully containerized webhook and event generator services
  - Host-based Stripe CLI event forwarding for real and test events
  - Automated scripts for starting and stopping the stack
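Events forwarded by the Stripe CLI still need to be verified by the webhook service. Stripe signs each delivery with HMAC-SHA256 over `"{timestamp}.{raw body}"` and sends the result in the `Stripe-Signature` header; the sketch below verifies that scheme with the standard library only (in production, `stripe.Webhook.construct_event` from the official library does the same job).

```python
import hashlib
import hmac
import time


def verify_stripe_signature(payload: bytes, sig_header: str,
                            secret: str, tolerance: int = 300) -> bool:
    # Parse the Stripe-Signature header, e.g. "t=1699999999,v1=abcd...".
    parts = dict(item.split("=", 1) for item in sig_header.split(","))
    timestamp, expected = parts["t"], parts["v1"]
    # Stripe signs "{timestamp}.{raw body}" with HMAC-SHA256 using the
    # endpoint secret (printed by `stripe listen` for local forwarding).
    signed_payload = timestamp.encode() + b"." + payload
    digest = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    # Reject stale timestamps to limit replay attacks.
    if abs(time.time() - int(timestamp)) > tolerance:
        return False
    return hmac.compare_digest(digest, expected)
```

Verifying against the raw request body (before any JSON parsing) is essential, since re-serialized JSON rarely matches the signed bytes.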
- Storage and processing updates
  - Updated MinIO paths and partitioning logic
  - Event normalization for Stripe and internal producers
  - Introduction of a medallion architecture: raw → processed → analytics
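The medallion layers map naturally onto object-key prefixes with Hive-style date partitions. This is a hedged sketch of one such layout; the actual prefixes and partition columns in TrStream may differ.

```python
from datetime import datetime


def object_key(layer: str, source: str, event_time: datetime, batch_id: str) -> str:
    # Hypothetical MinIO key layout for the medallion layers
    # (raw -> processed -> analytics); illustrative, not TrStream's
    # exact partitioning scheme.
    assert layer in {"raw", "processed", "analytics"}
    return (
        f"{layer}/source={source}/"
        f"year={event_time.year:04d}/month={event_time.month:02d}/day={event_time.day:02d}/"
        f"{batch_id}.parquet"
    )
```

Keeping `source=` as a partition column is what lets downstream queries treat Stripe events and internally generated events uniformly after normalization.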
- Service adaptations
  - The partitioner evolved into a processor, and the compacter was slightly adapted to handle the new storage layout
  - Schema- and source-aware processing maintained for query-ready outputs
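The compacter's core job can be sketched in a few lines: fold many small objects into one larger one. The real service works on Parquet objects in MinIO; newline-delimited text files stand in here so the sketch stays dependency-free.

```python
from pathlib import Path
from typing import List


def compact(small_files: List[Path], out_file: Path) -> int:
    # Simplified stand-in for Parquet compaction: merge many small
    # event files into a single larger file, preserving record order
    # by sorted file name. Returns the number of records written.
    records = []
    for path in sorted(small_files):
        records.extend(path.read_text().splitlines())
    out_file.write_text("\n".join(records) + "\n")
    return len(records)
```

Fewer, larger objects reduce per-file overhead in analytical scans, which is why compaction sits between the processed and analytics layers.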
This release enables local testing of Stripe events, improves data organization, and lays the groundwork for further integrations (e.g. the Revolut API, Airflow scheduling).
v1.0.0 - Stable pipeline architecture
This release marks the first stable version of TrStream, a distributed real-time transaction processing pipeline.
Highlights
- End-to-end streaming pipeline built on Kafka and S3-compatible storage
- Horizontally scalable producers and consumers
- Immutable Parquet-based data lake with explicit lifecycle stages
- Dedicated partitioning and compaction services for analytical workloads
- SQL query layer powered by DuckDB with a Streamlit-based editor
- Fully containerized local environment with Docker Compose
- Kafka UI integration for observability
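The SQL layer can query the Parquet lake directly: DuckDB's `read_parquet()` accepts glob patterns, and `s3://` URLs work against MinIO once the `httpfs` extension is loaded and credentials are configured. The sketch below builds one such query; the bucket name and column names are illustrative assumptions, not TrStream's actual schema.

```python
def analytics_query(bucket: str, source: str) -> str:
    # Hypothetical DuckDB query over the analytics layer; bucket
    # layout and columns (amount) are illustrative. The "**" glob
    # recurses through the Hive-style date partitions.
    return (
        "SELECT source, count(*) AS events, sum(amount) AS total_amount "
        f"FROM read_parquet('s3://{bucket}/analytics/source={source}/**/*.parquet') "
        "GROUP BY source"
    )
```

Queries like this are what the Streamlit-based editor submits, so the data lake stays query-ready without a separate warehouse.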
Architecture
The pipeline models a modern fintech-style data flow:
- event-driven ingestion
- durable raw event storage for auditing
- lakehouse-style optimization for analytics
Tooling
- Helper scripts for building, running, scaling, and shutting down the stack
- Health checks and deterministic startup order
- Clear separation of responsibilities across services
Status
This release is intended as a stable foundation.
Future work will focus on:
- integration with real transaction APIs (e.g. Stripe, Revolut)
- metrics and observability improvements
- advanced analytics and fraud detection use cases