- Efficient offline-capable synchronization
- Minimal client-server round trips
- Robust conflict detection and resolution
- Stateless, scalable server-side design
- Simple to reason about but extensible
- Pull → Push model: client pulls recent changes, then pushes local changes
- Repository generation (epoch) is separate from the change stream cursor (`since.version` / `change_id`): `repository_generation` is a monotonic integer on the server; it increments only when an administrator performs a hard repository reset (wiping observation/attachment sync state).
- Clients must send `x-repository-generation` on sync pull, push, attachment manifest, and attachment upload; responses include the current epoch in JSON (and may repeat it in the header).
- If the client's epoch lags or diverges, the server responds with 409 Conflict and the stable error code `repository_reset_required`. Clients must realign (typically by pulling fresh state after wiping local observation/attachment data that no longer matches the server).
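A minimal client-side sketch of this check, assuming a `/sync/pull` endpoint and a JSON error body that carries the error code (the endpoint path and response field names are illustrative, not mandated here):

```typescript
// Illustrative client-side pull guarded by the repository generation (epoch).
// The endpoint path and JSON field names are assumptions, not fixed by this spec.
async function pullChanges(baseUrl: string, epoch: number, sinceChangeId: number) {
  const res = await fetch(`${baseUrl}/sync/pull?after_change_id=${sinceChangeId}`, {
    headers: { "x-repository-generation": String(epoch) },
  });
  const body = await res.json();

  if (res.status === 409 && body.code === "repository_reset_required") {
    // Epoch lags or diverged: wipe stale local observation/attachment state,
    // adopt the server's epoch, then pull again from scratch.
    throw new Error("repository_reset_required: full resync needed");
  }

  // Responses include the current epoch in JSON; persist it for later requests.
  return { epoch: body.repository_generation ?? epoch, records: body.records };
}
```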
- Each record contains:
  - `id`
  - `schemaType`
  - `schemaVersion`
  - `data`
  - `hash` (computed from `data`, `schemaType`, and `schemaVersion`)
  - `last_modified` (server-assigned timestamp; order can be inferred from `change_id`, so strict monotonicity is not required)
  - `last_modified_by` (username from JWT)
  - `change_id` (strictly increasing integer, server-assigned)
  - `deleted` (soft delete flag)
  - `origin_client_id` (for provenance)
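For illustration, these fields could be expressed as the following TypeScript shape; the concrete types are inferred from the descriptions above and are not a normative wire format:

```typescript
// Illustrative record shape; names follow the field list above,
// concrete types are assumptions rather than a fixed wire format.
interface SyncRecord {
  id: string;
  schemaType: string;
  schemaVersion: string;
  data: Record<string, unknown>;
  hash: string;             // computed from data + schemaType + schemaVersion
  last_modified: string;    // server-assigned timestamp (ISO 8601 assumed)
  last_modified_by: string; // username from JWT
  change_id: number;        // strictly increasing, server-assigned
  deleted: boolean;         // soft delete flag
  origin_client_id: string; // provenance
}
```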
- Each record has a strictly increasing `change_id`, assigned server-side
- Client stores last seen `change_id` per `schemaType`
- Pull returns all records where `change_id > last_seen`
Pros:
- No dependence on system clocks
- No ambiguity about ordering
- Enables clean pagination, partial pull, and deduplication
Server considerations:
- Maintain a global `change_id` sequence, assigned per record
- Mirror `change_id` to the audit log
Each form submission is an entity.
- Each form type (JSONForms schema) defines an implicit "entity" type
- This matches how ODK-X and DHIS2 Tracker often operate
- SchemaType + Version provides namespacing for evolution
Evaluation:
- ✅ Good for flexibility and multi-purpose platforms
- 🚫 Makes cross-form relationships more complex (if needed)
In addition to `data` (the form payload), each observation may include optional root-level fields. Clients and servers must treat all of these as optional; sync must succeed when they are absent.
| Field | Type | Purpose |
|---|---|---|
| `geolocation` | object or null | Optional GPS fix (latitude, longitude, accuracy, etc.) |
| `author` | string or null | Optional creator label (e.g. username) |
| `device_id` | string or null | Optional stable device/client identifier |
| `tags` | array of strings or null | Optional tags for labeling, future extensions, or data-cleaning workflows |
These fields are stored with the observation and returned on pull; push payloads may omit any of them.
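An illustrative push payload fragment with all optional root-level fields present (values and the schema name are invented for the example):

```typescript
// Example observation (illustrative values) showing the optional root-level fields.
const observation = {
  id: "obs-example-1",
  schemaType: "household_survey",
  schemaVersion: "1.0",
  data: { head_of_household: "A. Example" },
  geolocation: { latitude: -1.2921, longitude: 36.8219, accuracy: 8.5 },
  author: "enumerator_01",
  device_id: "device-ab12cd",
  tags: ["pilot", "needs-review"],
};
```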
- The server validates that uploaded attachments match the `_hash` declared in the record reference
- If an attachment is missing when a record references it, `_sync_state` remains `awaiting_upload`
- If an attachment is deleted but still referenced, `_sync_state` becomes `missing`
- Clients are responsible for checking `_sync_state` before using attachments
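A minimal sketch of such a client-side check (the reference shape mirrors the attachment example later in this document):

```typescript
// Only use an attachment once it has actually been synced.
interface AttachmentRef {
  _id: string;
  _sync_state: "awaiting_upload" | "synced" | "orphaned" | "missing";
  _hash: string;
}

function isAttachmentUsable(ref: AttachmentRef): boolean {
  return ref._sync_state === "synced";
}
```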
- If server’s hash ≠ client’s last seen hash, treat as conflict
- Allow server to:
  - Accept overwrite with warning
  - Store previous version in a `conflicts` table
- Conflict info returned in `warnings` array during push
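The exact shape of conflict warnings is not specified here; the sketch below assumes a per-record entry in the push response's `warnings` array:

```typescript
// Hypothetical warning entry in the push response `warnings` array.
interface ConflictWarning {
  recordId: string;
  message: string;       // e.g. "server hash differed from client's last seen hash"
  server_hash?: string;  // field names assumed for illustration
  client_hash?: string;
}

function logConflicts(warnings: ConflictWarning[]): void {
  for (const w of warnings) {
    console.warn(`Conflict on ${w.recordId}: ${w.message}`);
  }
}
```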
- Managed as a separate collection, but referenced from within record `data`
- Each file has:
  - `id` (UUID or content-addressed hash, assigned by client)
  - `hash` (SHA-256)
  - `size`
  - `last_modified` (server-assigned, monotonic)
  - `change_id` (for consistent delta sync)
  - `sync_state` (e.g. `awaiting_upload`, `synced`, `orphaned`, `missing`)
- In `data`, attachments are represented as objects with structured metadata. Example:

  ```json
  {
    "profile_photo": {
      "_id": "att-uuid-1",
      "_sync_state": "awaiting_upload",
      "_hash": "abc123..."
    },
    "greeting": {
      "_id": "att-uuid-2",
      "_sync_state": "synced",
      "_hash": "def456..."
    }
  }
  ```

- Server indexes attachment references at push time and tracks missing or orphaned attachments
- If a record references an attachment not yet uploaded, the server logs it with `_sync_state = awaiting_upload`
- Once uploaded, the attachment's `sync_state` transitions to `synced` and its `change_id` is incremented
- `/attachments/manifest?after_change_id=XYZ` provides attachment delta sync
- Clients are responsible for tracking which attachments they have downloaded
- Orphaned attachments (not referenced by any record for a defined window) are eligible for cleanup
- Optional: `/attachments/cleanup` endpoint for explicit removal
- ETag support for efficient downloading
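A sketch of attachment delta sync against the manifest endpoint above; only the endpoint and its `after_change_id` parameter come from this spec, while the response field names are assumptions:

```typescript
// Pull the attachment manifest delta and decide what still needs downloading.
// The response shape (the `attachments` array and its fields) is assumed.
async function syncAttachmentManifest(
  baseUrl: string,
  afterChangeId: number,
  have: Set<string> // IDs of attachments already downloaded locally
) {
  const res = await fetch(
    `${baseUrl}/attachments/manifest?after_change_id=${afterChangeId}`
  );
  const manifest = await res.json();

  // Clients track which attachments they already hold; only fetch synced, missing-locally files.
  return manifest.attachments.filter(
    (a: { id: string; sync_state: string }) =>
      a.sync_state === "synced" && !have.has(a.id)
  );
}
```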
- Each record points to `schemaType` + `schemaVersion`
- Never mutate existing record structure
- Schema validation performed at push using version-specific schema
- Future: tooling to migrate data across schema versions
- All routes require JWT with role claim
- Roles: `read-only`, `read-write`
- Token refresh support
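One possible server-side enforcement sketch; the `role` claim name and the helper itself are assumptions rather than a requirement of this design:

```typescript
// Minimal role check applied after JWT verification (verification itself omitted).
type Role = "read-only" | "read-write";

function requireRole(claims: { role?: Role }, needed: Role): void {
  const allowed =
    claims.role === "read-write" ||
    (needed === "read-only" && claims.role === "read-only");
  if (!allowed) {
    throw new Error("403 Forbidden: insufficient role");
  }
}
```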
- API versions follow Semantic Versioning (MAJOR.MINOR.PATCH)
- Major version increments indicate breaking changes requiring client updates
- Minor version increments add new functionality in a backward-compatible manner
- Patch version increments represent backward-compatible bug fixes
- Clients MUST send `x-ode-version` on API requests
- Example: `x-ode-version: 1.0.0`
- Server validates major-version compatibility and returns `426 Upgrade Required` if the header is missing, invalid, or incompatible
- Supported: Currently maintained and recommended for use
- Deprecated: Still functional but marked for future removal
- Sunset: No longer available, returns 410 Gone
- GET `/api/versions` endpoint lists all available API versions and their status
- Version-mismatch responses include `x-synkronus-version` to help clients detect compatible versions
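A client-side sketch of the version handshake; the `x-ode-version` and `x-synkronus-version` headers and the `/api/versions` endpoint come from this spec, while the surrounding helper is illustrative:

```typescript
// Send the client API version and handle an incompatible-version response.
async function callWithVersion(url: string, clientVersion = "1.0.0") {
  const res = await fetch(url, { headers: { "x-ode-version": clientVersion } });

  if (res.status === 426) {
    // The server reports its own version; a client can prompt for an upgrade
    // or consult GET /api/versions for supported versions and their status.
    const serverVersion = res.headers.get("x-synkronus-version");
    throw new Error(`Upgrade required: server is at ${serverVersion ?? "unknown"}`);
  }
  return res;
}
```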
- Within the same major version:
- Existing endpoints will never be removed
- Required request parameters will never be added
- Response field semantics will never change
- New optional fields may be added to responses
- New endpoints may be added
- A major version will remain supported for at least 12 months after a newer major version is released
- `sync_log` table: records who synced, when, and with what result
- `audit_log`: append-only log of all updates with `old_hash`, `new_hash`, `change_id`, and `user`
- Partial pull (filter by form type or custom query)
- Soft delete cleanup mechanism
- Record provenance (which user/client created/updated it)
- All sync endpoints support pagination using cursor-based tokens
- Each response includes a `next_page_token` when more data is available
- Tokens are opaque, base64-encoded strings containing cursors and limits

```json
{
  "records": [...],
  "next_page_token": "eyJsYXN0X2NoYW5nZV9pZCI6MTIzNCwibGltaXQiOjUwfQ==",
  "has_more": true
}
```

- Default batch size: 50 records
- Maximum batch size: 500 records
- Clients can request smaller batches with the `limit` parameter
- Clients MUST NOT assume all responses will contain the requested number of records
- Server sets a reasonable timeout for each batch operation (typically 30 seconds)
- If timeout is reached during processing, the server returns a partial result
- Partial results include a valid `next_page_token` to resume from
- Clients MUST check the `has_more` flag to determine if additional requests are needed
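A sketch of a paged pull loop driven by `next_page_token` and `has_more`; the endpoint path and the name of the page-token query parameter are assumptions:

```typescript
// Drain a paged pull, following next_page_token until has_more is false.
async function pullAll(baseUrl: string, limit = 50) {
  const records: unknown[] = [];
  let pageToken: string | undefined;

  do {
    const params = new URLSearchParams({ limit: String(limit) });
    if (pageToken) params.set("page_token", pageToken); // parameter name assumed
    const res = await fetch(`${baseUrl}/sync/pull?${params}`);
    const body = await res.json();

    records.push(...body.records);
    pageToken = body.has_more ? body.next_page_token : undefined;
  } while (pageToken);

  return records;
}
```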
- Clients SHOULD retry with exponential backoff on 429 or 5xx responses
- Servers SHOULD implement rate limiting based on response time metrics
- For massive datasets, servers MAY return a 202 Accepted with a job ID
The server automatically generates multiple quality variants for supported image types:
| Quality Level | Description | Max Dimensions | Usage |
|---|---|---|---|
| `original` | Unmodified source file | No limit | Archive, printing |
| `large` | High quality | 2048px | Detailed viewing |
| `medium` | Standard quality | 1024px | Normal display |
| `small` | Thumbnail | 320px | Previews, lists |
- Variants maintain aspect ratio and are never enlarged
- Metadata (e.g., EXIF) is preserved in `original` but stripped from other variants
- For non-image files, only `original` is available
- Client specifies desired quality via the `quality` query parameter
- Example: `/attachments/123?quality=medium`
- If omitted, `medium` is the default for images
- Server responds with the appropriate `Content-Type` header
- The response includes a `vary: accept-encoding, quality` header
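An illustrative download helper combining the `quality` parameter with ETag revalidation (the caching approach shown is one possible use of the ETag support mentioned earlier, not a mandated flow):

```typescript
// Download an attachment variant, revalidating against a previously seen ETag.
async function getAttachment(
  baseUrl: string,
  id: string,
  quality: "original" | "large" | "medium" | "small" = "medium",
  knownEtag?: string
) {
  const headers: Record<string, string> = {};
  if (knownEtag) headers["If-None-Match"] = knownEtag;

  const res = await fetch(`${baseUrl}/attachments/${id}?quality=${quality}`, { headers });
  if (res.status === 304) return null; // cached copy is still valid

  return {
    etag: res.headers.get("etag"),
    contentType: res.headers.get("content-type"),
    bytes: new Uint8Array(await res.arrayBuffer()),
  };
}
```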
- Each sync push operation MUST include a client-generated `transmission_id` (UUID v4)
- Server stores this ID with successful operations for a retention period (default: 24 hours)
- Duplicate pushes with the same `transmission_id` within the retention period are ignored
- Server returns the original success response for duplicate operations

```json
{
  "transmission_id": "550e8400-e29b-41d4-a716-446655440000",
  "records": [...],
  "change_cutoff": 1234
}
```

- For network failures during transmission, clients MUST retry with the same `transmission_id`
- For 4xx errors (except 429), clients SHOULD NOT retry with the same payload
- For 5xx errors or 429, clients SHOULD implement exponential backoff
- Maximum retry count: 5 attempts with delays of 1s, 2s, 4s, 8s, 16s
- Server may accept some records but reject others
- Response includes arrays of `successes` and `failures`
- On retry, the client SHOULD only resend failed records
- Each record in `failures` includes error details and validation messages
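Putting the push rules together, a client retry loop might look like the sketch below; the push endpoint path and the exact field names inside `failures` entries are assumptions:

```typescript
// Idempotent push with exponential backoff; only failed records are resent.
async function pushWithRetry(baseUrl: string, records: object[], changeCutoff: number) {
  const transmissionId = crypto.randomUUID(); // same ID reused on every retry
  const delays = [1000, 2000, 4000, 8000, 16000]; // max 5 attempts: 1s, 2s, 4s, 8s, 16s
  let pending = records;

  for (let attempt = 0; attempt < delays.length; attempt++) {
    const res = await fetch(`${baseUrl}/sync/push`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        transmission_id: transmissionId,
        records: pending,
        change_cutoff: changeCutoff,
      }),
    });

    if (res.ok) {
      const body = await res.json();
      if (!body.failures || body.failures.length === 0) return body;
      // Partial success: keep only records the server rejected (field names assumed).
      const failedIds = new Set(body.failures.map((f: { recordId: string }) => f.recordId));
      pending = pending.filter((r: any) => failedIds.has(r.id));
    } else if (res.status === 429 || res.status >= 500) {
      // Back off and retry with the same transmission_id.
      await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
    } else {
      // Other 4xx: do not retry with the same payload.
      throw new Error(`Push rejected with status ${res.status}`);
    }
  }
  throw new Error("Push failed after maximum retry attempts");
}
```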
- 400 Bad Request: Malformed request structure
- 422 Unprocessable Entity: Schema validation failures
- 409 Conflict: Conflicts with server state
- 413 Payload Too Large: Request exceeds size limits
Validation errors follow RFC 7807 (Problem Details for HTTP APIs) format:
```json
{
  "type": "https://synkronus.org/docs/errors/validation",
  "title": "Validation Error",
  "status": 422,
  "detail": "One or more records failed validation",
  "errors": [
    {
      "recordId": "abc-123",
      "schemaType": "patient",
      "schemaVersion": "1.2",
      "path": "data.age",
      "message": "Age must be a positive integer",
      "code": "TYPE_ERROR"
    }
  ]
}
```

- If the server doesn't support the client's schema version:
  - Returns 422 with `"code": "UNSUPPORTED_SCHEMA_VERSION"`
  - Includes `supported_versions` array in response
- If a schema is deprecated but still supported:
  - Accepts the data
  - Includes a warning in the response
  - Suggests a migration timeline
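A client-side sketch of handling the unsupported-schema-version case; the placement of `supported_versions` and the per-error `code` field follow the examples above and are otherwise assumed:

```typescript
// React to a 422 response carrying UNSUPPORTED_SCHEMA_VERSION.
async function handleSchemaVersionError(res: Response) {
  if (res.status !== 422) return;
  const problem = await res.json();

  const unsupported = (problem.errors ?? []).filter(
    (e: { code?: string }) => e.code === "UNSUPPORTED_SCHEMA_VERSION"
  );
  if (unsupported.length > 0) {
    // The server advertises which schema versions it accepts
    // (top-level placement of supported_versions is an assumption).
    console.warn("Unsupported schema version; server supports:", problem.supported_versions);
  }
}
```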
- Transport layer:
- Use standard HTTPS REST API
- Enable gzip compression at reverse proxy (e.g. Caddy, Nginx)
- Server MUST support compressed request/response bodies (gzip, deflate, brotli)
- All endpoints support HTTP/2 for efficient connection reuse
- Avoids complexity of gRPC/protobuf while remaining debuggable
- In transit: HTTPS enforced with Let's Encrypt
- At rest:
- Database encryption via Postgres (at-rest encryption provided by the underlying database / storage layer)
- Attachments optionally encrypted at rest
- All secrets stored via `.env` or environment variables
- ODK Classic: simple full pull/push
- ODK-X: delta + sync log + client-side IDs
- DHIS2 Tracker: metadata-driven forms with conflict tracking