A Rust API server built with Actix-web that includes configuration management, database integration, MQTT messaging, and OpenAPI documentation. The application uses PostgreSQL (Supabase or local) for data persistence and Mosquitto for MQTT communication.
The application follows a clean architecture pattern with the following layers:
- Routes: Handle HTTP requests and responses
- Services: Business logic layer for exchanging messages and accessing telemetry
- Repository: Data access layer with database abstraction
- Messaging: Communication layer for interacting with ground stations
- Models: Data structures and DTOs
The app was developed with:
- `actix_web` and `utoipa` for the HTTP server and API documentation, respectively
- `rumqttc` for MQTT integration
- `sqlx` for Postgres integration
- `psql` for the database and `mosquitto` for the MQTT broker
The server uses a `config.toml` file for configuration. The following sections are available:
- Server: `host` (server host address, default `127.0.0.1`) and `port` (server port, default `8080`)
- Database: `url` (database connection string) and `pool_size` (connection pool size)
- Message broker: `host` (message broker address), `port` (port for the connection), and `keep_alive` (keepalive message interval)
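A minimal `config.toml` following the sections above might look like this. The exact section names and the `pool_size`/`keep_alive` values are illustrative assumptions, not copied from the repository:

```toml
[server]
host = "127.0.0.1"
port = 8080

[database]
url = "postgresql://postgres:postgres@localhost:5433/rustar-api"
pool_size = 5          # illustrative value

[message_broker]
host = "127.0.0.1"
port = 1883            # Mosquitto's default port
keep_alive = 30        # seconds between keepalive pings (illustrative)
```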
The server uses environment variables with the `API_` prefix for configuration. You can set these in a `.env` file (recommended) or export them directly.
Important: Copy `.env.example` to `.env` and configure it for your environment:
cp .env.example .env
# Edit .env with your settings

`.env.example` provides two configurations:
- Local Development (Docker) - Uses containerized PostgreSQL
- Supabase (Remote) - Uses hosted Supabase database
Key variables:
- `API_SERVER_HOST` / `API_SERVER_PORT`: server binding
- `API_DATABASE_URL` / `DATABASE_URL`: database connection (both should point to the same DB)
- `API_SKIP_MIGRATIONS`: set to `true` when using Supabase (migrations already applied)
- `API_MESSAGE_BROKER_HOST` / `API_MESSAGE_BROKER_PORT`: MQTT broker connection
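Putting the key variables together, a local-development `.env` could look like the sketch below. The values are illustrative (taken from defaults mentioned elsewhere in this README); check `.env.example` for the authoritative set:

```
# Local Development (Docker) - illustrative values
API_SERVER_HOST=0.0.0.0
API_SERVER_PORT=8080
API_DATABASE_URL=postgresql://postgres:postgres@localhost:5433/rustar-api
DATABASE_URL=postgresql://postgres:postgres@localhost:5433/rustar-api
API_SKIP_MIGRATIONS=false
API_MESSAGE_BROKER_HOST=localhost
API_MESSAGE_BROKER_PORT=1883
```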
The easiest way to run the application locally with all dependencies:
# 1. Clone and enter the repository
git clone <repository-url>
cd rustar-api
# 2. Copy and configure environment
cp .env.example .env
# Uncomment "Option 1: Local PostgreSQL" in .env for local development
# 3. Start all services (PostgreSQL, Mosquitto, API)
docker compose up -d
# 4. Check logs
docker compose logs -f api
# 5. Verify database tables were created
docker compose exec postgres psql -U postgres -d rustar-api -c "\dt"

The API will be available at http://localhost:9090
# Stop services but keep data
docker compose down
# Stop and remove all data (fresh start)
docker compose down -v

If you prefer to run the API directly without Docker:
Option A: Use Docker Compose (Recommended)
- Mosquitto is included in `docker-compose.yaml` and starts automatically
Option B: Manual Installation
- Install mosquitto
- Run the broker: `mosquitto -p 1883`
- Update `.env` with the correct host and port
Option A: Use Supabase (Production/Remote)
- Set your Supabase connection string in `.env`:

  API_DATABASE_URL=postgresql://postgres:YOUR_PASSWORD@db.gxrcklaazsihvgbxxddy.supabase.co:5432/postgres?sslmode=require
  DATABASE_URL=postgresql://postgres:YOUR_PASSWORD@db.gxrcklaazsihvgbxxddy.supabase.co:5432/postgres?sslmode=require
  API_SKIP_MIGRATIONS=true  # Migrations already applied on Supabase

- The schema is already deployed on Supabase. No additional setup needed.
Option B: Use Local PostgreSQL
- Install PostgreSQL or use Docker: `docker compose up -d postgres`
- Install sqlx-cli: `cargo install sqlx-cli --no-default-features --features postgres`
- Set the local database URL in `.env`:

  API_DATABASE_URL=postgresql://postgres:postgres@localhost:5433/rustar-api
  DATABASE_URL=postgresql://postgres:postgres@localhost:5433/rustar-api
  API_SKIP_MIGRATIONS=false  # Run migrations on startup

- Migrations run automatically when the API starts. To run manually: `sqlx migrate run`
- (Optional) Generate test data: `cargo run --bin seed_data`
- Ensure `.env` is configured (see the Environment Variables section above)
- Run the API: `cargo run --bin api`
- The server will:
  - Connect to the database
  - Run migrations (unless `API_SKIP_MIGRATIONS=true`)
  - Connect to the MQTT broker
  - Start listening on the configured host:port
- `GET /api/telemetry/{satellite}/latest?amount=10`: Get the latest telemetry data for a satellite
- `GET /api/telemetry/{satellite}/history?startTime=<unix>&endTime=<unix>`: Get historic telemetry data
- `GET /api/tracking/position?sat_id=<id>&gs_id=<id>&epoch=<unix>`: Calculate satellite position using TLE
- `POST /api/control/command`: Send commands to the satellite via MQTT
- `GET /config`: View current server configuration
- `GET /swagger-ui/`: Interactive OpenAPI documentation
The application uses the following tables (see migrations/ for details):
- `satellites`: Satellite information and TLE data
- `ground_stations`: Ground station locations (latitude, longitude, altitude)
- `telemetry`: Telemetry data from satellites
- `jobs`: Scheduled communication jobs between satellites and ground stations
- `jobs_status_updates`: Job execution status tracking
When you need to modify the database schema:
# 1. Create a new migration file
sqlx migrate add your_migration_name
# 2. Edit the generated file in migrations/ with your SQL
# Example: migrations/20251026184727_your_migration_name.sql
# 3. Apply locally (automatic on Docker restart or manually)
sqlx migrate run
# For Docker:
docker compose build api --no-cache # Rebuild with new migration
docker compose up -d                 # Restart services

After creating and testing a migration locally, apply it to the production Supabase database:
# 1. Export environment variables from .env
export $(cat .env | grep -v '^#' | xargs)
# 2. Run the migration script
./scripts/apply_migrations_to_supabase.sh
# Or with automatic confirmation:
AUTO_CONFIRM=1 ./scripts/apply_migrations_to_supabase.sh

The script will:
- ✅ Detect which migrations are already applied in Supabase
- ✅ Show you the pending migrations
- ✅ Apply only the new migrations
- ✅ Register them in the `_sqlx_migrations` table
- ✅ Skip migrations that would cause conflicts (tables already exist)
Complete workflow for a schema change:
# 1. Create migration
sqlx migrate add add_new_column
# 2. Edit migrations/TIMESTAMP_add_new_column.sql
# Add your SQL: ALTER TABLE satellites ADD COLUMN status TEXT;
# 3. Apply migration locally
docker compose up -d postgres
sqlx migrate run --database-url "postgresql://postgres:postgres@localhost:5433/rustar-api"
# 4. If you have new SQL queries in code, update the SQLX cache
DATABASE_URL="postgresql://postgres:postgres@localhost:5433/rustar-api" cargo sqlx prepare --workspace
# 5. Test with Docker
docker compose build api
docker compose up -d
docker compose logs -f api # Verify migration ran
# 6. Apply to Supabase
export $(cat .env | grep -v '^#' | xargs)
./scripts/apply_migrations_to_supabase.sh
# 7. Commit changes
git add migrations/ .sqlx/ # Don't forget .sqlx/ !
git commit -m "Add new column to satellites table"

Note: For a more detailed workflow when adding new endpoints, see the Adding New Endpoints and Migrations section below.
If you need to export the current Supabase schema (e.g., to create a fresh local copy):
# 1. Export full schema from Supabase
./scripts/export_supabase_schema.sh
# 2. Extract only public schema (removes auth, storage, etc.)
./scripts/extract_public_schema.sh
# Result: public_schema_only.sql

When you add new endpoints that interact with the database, you need to update the SQLX query cache for Docker builds. This is a critical step that many developers forget.
Follow these steps when adding new database-backed endpoints:
# Create a new migration file
sqlx migrate add add_your_feature
# Edit the generated file in migrations/TIMESTAMP_add_your_feature.sql
# Example: ALTER TABLE ground_stations ADD COLUMN description TEXT;

# Start local database
docker compose up -d postgres
# Wait a few seconds for DB to be ready, then apply migrations
sqlx migrate run --database-url "postgresql://postgres:postgres@localhost:5433/rustar-api"

Create your repository, service, and route handlers:
- Add queries in `src/repository/your_feature.rs`
- Add business logic in `src/services/your_feature_service.rs`
- Add HTTP handlers in `src/routes/your_feature.rs`
- Register routes and OpenAPI docs in `src/main.rs`
This step is required for Docker builds to work! The Dockerfile uses `SQLX_OFFLINE=true`, which means it needs pre-compiled query metadata.
# Ensure database is running and has latest migrations
docker compose up -d postgres
sleep 3
# Regenerate the query cache (creates/updates .sqlx/ directory)
DATABASE_URL="postgresql://postgres:postgres@localhost:5433/rustar-api" cargo sqlx prepare --workspace

This command will:
- ✅ Connect to your local database
- ✅ Analyze all SQL queries in your code
- ✅ Generate type-safe metadata in the `.sqlx/` directory
- ✅ Allow Docker to build without a database connection
Important: Always commit the .sqlx/ directory to git!
git add .sqlx/
git add src/
git add migrations/ # if you created new migrations
git commit -m "Add new feature with database queries"

# Run the API directly with cargo
cargo run --bin api
# Test your endpoints
curl http://localhost:9090/api/your-endpoint

# Rebuild and restart (the .sqlx cache makes this work)
docker compose down
docker compose build api
docker compose up -d
# Check logs
docker compose logs -f api
# Test endpoints
curl http://localhost:9090/api/your-endpoint

# Export environment variables
export $(cat .env | grep -v '^#' | xargs)
# Apply migrations to Supabase
./scripts/apply_migrations_to_supabase.sh

Cause: You added new SQL queries but didn't regenerate the `.sqlx/` cache.
Solution:
docker compose up -d postgres
DATABASE_URL="postgresql://postgres:postgres@localhost:5433/rustar-api" cargo sqlx prepare --workspace
docker compose build api
docker compose up -d

Cause: Your local database doesn't have the latest migrations applied.
Solution:
sqlx migrate run --database-url "postgresql://postgres:postgres@localhost:5433/rustar-api"
DATABASE_URL="postgresql://postgres:postgres@localhost:5433/rustar-api" cargo sqlx prepare --workspace

Cause: Docker cached an old image.
Solution:
docker compose down
docker compose build api # Will use new code and .sqlx cache
docker compose up -d

# When you add/modify SQL queries:
docker compose up -d postgres
DATABASE_URL="postgresql://postgres:postgres@localhost:5433/rustar-api" cargo sqlx prepare --workspace
git add .sqlx/
docker compose build api
docker compose up -d
# When you add migrations:
sqlx migrate run --database-url "postgresql://postgres:postgres@localhost:5433/rustar-api"
DATABASE_URL="postgresql://postgres:postgres@localhost:5433/rustar-api" cargo sqlx prepare --workspace
./scripts/apply_migrations_to_supabase.sh  # For production

The server is structured with:
- `src/main.rs`: Main application entry point
- `src/config.rs`: Configuration management
- `src/models/`: Data models and DTOs
- `src/repository/`: Database access layer
- `src/services/`: Business logic layer
- `src/routes/`: HTTP route handlers
- `src/database/`: Database connection management
- `src/messaging/`: MQTT broker integration
- `migrations/`: Database schema migrations
- `scripts/`: Utility scripts for schema export and management
# Build release binary
cargo build --release
# Or build Docker image
docker build -t rustar-api .

Run the test suite with:

cargo test

If you see migration conflicts:
# Clean and restart
docker compose down -v
docker compose up -d

- Verify PostgreSQL is running: `docker compose ps`
- Check that `.env` has the correct `API_DATABASE_URL`
- For Supabase, ensure the password is correct and `?sslmode=require` is included
If you see sqlx macro errors in the editor (for example "invalid port number" or macro expansion failures from `query!`/`query_as!`), it's usually because the language server or `cargo check` can't connect to the configured `DATABASE_URL`. Here are three safe options to remove those warnings:
Use a local PostgreSQL for editor checks (recommended for development)
- Ensure your local DB is running (Docker Compose is easiest): `docker compose up -d postgres`
- Set the local DB URL in your `.env` (example already configured for local dev):

  # in .env
  DATABASE_URL=postgresql://postgres:postgres@localhost:5433/postgres
  API_DATABASE_URL=postgresql://postgres:postgres@localhost:5433/postgres

- Prepare the sqlx offline cache (this writes `.sqlx/`): `DATABASE_URL="postgresql://postgres:postgres@localhost:5433/postgres" cargo sqlx prepare`
- Run a local check to validate the macros: `DATABASE_URL="postgresql://postgres:postgres@localhost:5433/postgres" cargo check`

If the MQTT connection fails:

- Verify Mosquitto is running: `docker compose logs mosquitto`
- Check `API_MESSAGE_BROKER_HOST` and `API_MESSAGE_BROKER_PORT` in `.env`