From 704cdac72badbe3201ffd08b7f74fd655f8c088b Mon Sep 17 00:00:00 2001 From: shum Date: Tue, 31 Mar 2026 12:58:42 +0000 Subject: [PATCH 01/24] xftp: add PostgreSQL backend design spec --- ...2026-03-25-xftp-postgres-backend-design.md | 300 ++++++++++++++++++ 1 file changed, 300 insertions(+) create mode 100644 plans/2026-03-25-xftp-postgres-backend-design.md diff --git a/plans/2026-03-25-xftp-postgres-backend-design.md b/plans/2026-03-25-xftp-postgres-backend-design.md new file mode 100644 index 000000000..a4488cea0 --- /dev/null +++ b/plans/2026-03-25-xftp-postgres-backend-design.md @@ -0,0 +1,300 @@ +# XFTP Server PostgreSQL Backend + +## Overview + +Add PostgreSQL backend support to xftp-server, following the SMP server pattern. Supports bidirectional migration between STM (in-memory with StoreLog) and PostgreSQL backends. + +## Goals + +- PostgreSQL-backed file metadata storage as an alternative to STM + StoreLog +- Polymorphic server code via `FileStoreClass` typeclass with associated `StoreMonad` (following `MsgStoreClass` pattern) +- Bidirectional migration: StoreLog <-> PostgreSQL via CLI commands +- Shared `server_postgres` cabal flag (same flag enables both SMP and XFTP Postgres support) +- INI-based backend selection at runtime + +## Non-Goals + +- Hybrid mode (STM caching + Postgres persistence as a distinct user-facing mode) +- Soft deletion / `deletedTTL` (XFTP uses random IDs with no reuse concern) +- Storing file data in PostgreSQL (files remain on disk) +- Separate cabal flag for XFTP Postgres + +## Architecture + +### FileStoreClass Typeclass + +Polymorphic over `StoreMonad`, following the `MsgStoreClass` pattern with injective type family: + +```haskell +class FileStoreClass s where + type StoreMonad s = (m :: Type -> Type) | m -> s + type FileStoreConfig s :: Type + + -- Lifecycle + newFileStore :: FileStoreConfig s -> IO s + closeFileStore :: s -> IO () + + -- File operations + addFile :: s -> SenderId -> FileInfo -> RoundedFileTime -> 
ServerEntityStatus -> StoreMonad s (Either XFTPErrorType ()) + setFilePath :: s -> SenderId -> FilePath -> StoreMonad s (Either XFTPErrorType ()) + addRecipient :: s -> SenderId -> FileRecipient -> StoreMonad s (Either XFTPErrorType ()) + getFile :: s -> SFileParty p -> XFTPFileId -> StoreMonad s (Either XFTPErrorType (FileRec, C.APublicAuthKey)) + deleteFile :: s -> SenderId -> StoreMonad s (Either XFTPErrorType ()) + blockFile :: s -> SenderId -> BlockingInfo -> Bool -> StoreMonad s (Either XFTPErrorType ()) + deleteRecipient :: s -> RecipientId -> FileRec -> StoreMonad s () + ackFile :: s -> RecipientId -> StoreMonad s (Either XFTPErrorType ()) + + -- Expiration + expiredFiles :: s -> Int64 -> StoreMonad s [(SenderId, Maybe FilePath, Word32)] + + -- Storage and stats (for init-time computation) + getUsedStorage :: s -> IO Int64 + getFileCount :: s -> IO Int +``` + +- STM backend: `StoreMonad s ~ STM` +- Postgres backend: `StoreMonad s ~ DBStoreIO` (i.e., `ReaderT DBTransaction IO`) + +Store operations executed via a runner: `atomically` for STM, `withTransaction` for Postgres. + +### FileRec and TVar Fields + +`FileRec` retains its `TVar` fields (matching SMP's `PostgresQueue` pattern): + +```haskell +data FileRec = FileRec + { senderId :: SenderId + , fileInfo :: FileInfo + , filePath :: TVar (Maybe FilePath) + , recipientIds :: TVar (Set RecipientId) + , createdAt :: RoundedFileTime + , fileStatus :: TVar ServerEntityStatus + } +``` + +- **STM backend**: TVars are the source of truth, as currently. +- **Postgres backend**: `getFile` reads from DB and creates a `FileRec` with fresh TVars populated from the DB row. Typeclass mutation methods (`setFilePath`, `blockFile`, etc.) update both the DB (persistence) and the TVars (in-session consistency). The `recipientIds` set is populated from a subquery on the `recipients` table. + +### usedStorage Ownership + +`usedStorage :: TVar Int64` moves from the store to `XFTPEnv`. 
The store typeclass does **not** manage `usedStorage` — it only provides `getUsedStorage` for init-time computation. + +- **STM init**: StoreLog replay calls `setFilePath` (which only sets the filePath TVar — the STM `setFilePath` implementation is changed to **not** update `usedStorage`). After replay, `getUsedStorage` computes the sum over all file sizes (matching current `countUsedStorage` behavior). +- **Postgres init**: `getUsedStorage` executes `SELECT COALESCE(SUM(file_size), 0) FROM files`. +- **Runtime**: Server manages `usedStorage` TVar directly for reserve/commit/rollback during uploads, and adjusts after `deleteFile`/`blockFile` calls. + +**Note on `getUsedStorage` semantics**: The current STM `countUsedStorage` sums all file sizes unconditionally (including files without `filePath` set, i.e., created but not yet uploaded). The Postgres `getUsedStorage` matches this: `SELECT SUM(file_size) FROM files` (no `WHERE file_path IS NOT NULL`). In practice, orphaned files (created but never uploaded) are rare and short-lived (expired within 48h), so the difference is negligible. A future improvement could filter by `file_path IS NOT NULL` in both backends to reflect actual disk usage more accurately. + +### Server.hs Refactoring + +`Server.hs` becomes polymorphic over `FileStoreClass s`. A `runStore` helper dispatches `StoreMonad` execution (`atomically` for STM, `withTransaction` for Postgres). + +**Call sites requiring changes** (exhaustive list): + +1. **`receiveServerFile`** (line 563): `atomically $ writeTVar filePath (Just fPath)` → `runStore $ setFilePath store senderId fPath`. The `reserve` logic (line 551-555) stays as direct TVar manipulation on `usedStorage` from `XFTPEnv`. + +2. **`verifyXFTPTransmission`** (line 453): `atomically $ verify =<< getFile st party fId` — the `getFile` call and subsequent `readTVar fileStatus` are in a single `atomically` block. 
Refactored to: `runStore $ getFile st party fId`, then read `fileStatus` from the returned `FileRec`'s TVar (safe for both backends — STM TVar is the source of truth, Postgres TVar is a fresh snapshot from DB). + +3. **`retryAdd`** (line 516): Signature `XFTPFileId -> STM (Either XFTPErrorType a)` → `XFTPFileId -> StoreMonad s (Either XFTPErrorType a)`. The `atomically` call (line 520) replaced with `runStore`. + +4. **`deleteOrBlockServerFile_`** (line 620): Parameter `FileStore -> STM (Either XFTPErrorType ())` → `FileStoreClass s => s -> StoreMonad s (Either XFTPErrorType ())`. The `atomically` call (line 626) replaced with `runStore`. After the store action, server adjusts `usedStorage` TVar in `XFTPEnv` based on `fileInfo.size`. + +5. **`ackFileReception`** (line 601): `atomically $ deleteRecipient st rId fr` → `runStore $ deleteRecipient st rId fr`. + +6. **Control port `CPDelete`/`CPBlock`** (lines 371, 377): `atomically $ getFile fs SFRecipient fileId` → `runStore $ getFile fs SFRecipient fileId`. + +7. **`expireServerFiles`** (line 636): Replace per-file `expiredFilePath` iteration with bulk `runStore $ expiredFiles st old`, which returns `[(SenderId, Maybe FilePath, Word32)]` — the `Word32` file size is needed so the server can adjust the `usedStorage` TVar after each deletion. The `itemDelay` between files applies to the deletion loop over the returned list, not the store query itself. + +8. **`restoreServerStats`** (line 694): `FileStore {files, usedStorage} <- asks store` accesses store fields directly. Refactored to: `usedStorage` from `XFTPEnv` via `asks usedStorage`, file count via `getFileCount store` (new typeclass method). STM: `M.size <$> readTVarIO files`. Postgres: `SELECT COUNT(*) FROM files`. 
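+
+For illustration, the `runStore` helper could take the following shape (a sketch only; the `RunStore` class, the `dbStore` field, and the `withTransaction` argument order are assumptions for this document, not existing API):
+
+```haskell
+-- Sketch: dispatch StoreMonad execution per backend.
+class FileStoreClass s => RunStore s where
+  runStore :: s -> StoreMonad s a -> IO a
+
+instance RunStore STMFileStore where
+  runStore _ = atomically
+
+instance RunStore PostgresFileStore where
+  -- one database transaction per store action
+  runStore st action = withTransaction (dbStore st) $ runReaderT action
+```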
+ +### Store Config Selection + +GADT in `Env.hs`: + +```haskell +data XFTPStoreConfig s where + XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore +#if defined(dbServerPostgres) + XSCDatabase :: PostgresFileStoreCfg -> XFTPStoreConfig PostgresFileStore +#endif +``` + +`XFTPEnv` becomes polymorphic: + +```haskell +data XFTPEnv s = XFTPEnv + { config :: XFTPServerConfig + , store :: s + , usedStorage :: TVar Int64 + , storeLog :: Maybe (StoreLog 'WriteMode) + , ... + } +``` + +The `M` monad (`ReaderT (XFTPEnv s) IO`) and all functions in `Server.hs` gain `FileStoreClass s =>` constraints. + +## Module Structure + +``` +src/Simplex/FileTransfer/Server/ + Store.hs -- FileStoreClass typeclass + shared types (FileRec, FileRecipient, etc.) + Store/ + STM.hs -- STMFileStore (extracted from current Store.hs) + Postgres.hs -- PostgresFileStore [CPP-guarded] + Postgres/ + Migrations.hs -- Schema migrations [CPP-guarded] + Config.hs -- PostgresFileStoreCfg [CPP-guarded] + StoreLog.hs -- Unchanged (interchange format for both backends + migration) + Env.hs -- XFTPStoreConfig GADT, polymorphic XFTPEnv + Main.hs -- Store selection, migration CLI commands + Server.hs -- Polymorphic over FileStoreClass +``` + +## PostgreSQL Schema + +Initial migration (`20260325_initial`): + +```sql +CREATE TABLE files ( + sender_id BYTEA NOT NULL PRIMARY KEY, + file_size INT4 NOT NULL, + file_digest BYTEA NOT NULL, + sender_key BYTEA NOT NULL, + file_path TEXT, + created_at INT8 NOT NULL, + status TEXT NOT NULL DEFAULT 'active' +); + +CREATE TABLE recipients ( + recipient_id BYTEA NOT NULL PRIMARY KEY, + sender_id BYTEA NOT NULL REFERENCES files ON DELETE CASCADE, + recipient_key BYTEA NOT NULL +); + +CREATE INDEX idx_recipients_sender_id ON recipients (sender_id); +CREATE INDEX idx_files_created_at ON files (created_at); +``` + +- `file_size` is `INT4` matching `Word32` in `FileInfo.size` +- `sender_key` and `recipient_key` stored as `BYTEA` using `StrEncoding` serialization (includes 
type tag for `APublicAuthKey` algebraic type — Ed25519 or X25519 variant) +- `file_path` nullable (set after upload completes via `setFilePath`) +- `ON DELETE CASCADE` for recipients when file is hard-deleted +- `created_at` stores rounded epoch seconds (1-hour precision, `RoundedFileTime`) +- `status` as TEXT via `StrEncoding` (`ServerEntityStatus`: `EntityActive`, `EntityBlocked info`, `EntityOff`) +- Hard deletes (no `deleted_at` column) +- No PL/pgSQL functions needed; row-level locking via `SELECT ... FOR UPDATE` on `setFilePath` to prevent duplicate uploads +- `used_storage` computed on startup: `SELECT COALESCE(SUM(file_size), 0) FROM files` (matches STM `countUsedStorage` — all files, see usedStorage Ownership section) + +### Postgres Operations + +Key query patterns: + +- **`addFile`**: `INSERT INTO files (...) VALUES (...)`, return `DUPLICATE_` on unique violation. +- **`setFilePath`**: `UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL`, `FOR UPDATE` row lock. Only persists the path; `usedStorage` managed by server. +- **`addRecipient`**: `INSERT INTO recipients (...)`, plus check for duplicates. No need for `recipientIds` TVar update — Postgres derives it from the table. +- **`getFile`** (sender): `SELECT ... FROM files WHERE sender_id = ?`, returns auth key from `sender_key` column. +- **`getFile`** (recipient): `SELECT f.*, r.recipient_key FROM recipients r JOIN files f ON ... WHERE r.recipient_id = ?`. +- **`deleteFile`**: `DELETE FROM files WHERE sender_id = ?` (recipients cascade). +- **`blockFile`**: `UPDATE files SET status = ? WHERE sender_id = ?`, optionally with file path clearing when `deleted = True`. +- **`expiredFiles`**: `SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ?` — single query replaces per-file iteration, includes `file_size` for `usedStorage` adjustment. 
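+
+Expanded for illustration, the recipient-party lookup could read (column list inferred from the schema above; not final query text):
+
+```sql
+-- illustrative expansion of the recipient getFile query
+SELECT f.sender_id, f.file_size, f.file_digest, f.sender_key,
+       f.file_path, f.created_at, f.status, r.recipient_key
+FROM recipients r
+JOIN files f ON f.sender_id = r.sender_id
+WHERE r.recipient_id = ?
+```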
+ +## INI Configuration + +New keys in `[STORE_LOG]` section: + +```ini +[STORE_LOG] +enable: on +store_files: memory # memory | database +db_connection: postgresql://xftp@/xftp_server_store +db_schema: xftp_server +db_pool_size: 10 +db_store_log: off +expire_files_hours: 48 +``` + +`store_files` selects the backend (`store_files` rather than `store_queues` because XFTP stores files, not queues): +- `memory` -> `XSCMemory` (current behavior) +- `database` -> `XSCDatabase` (requires `server_postgres` build flag) + +### PostgresFileStoreCfg + +```haskell +data PostgresFileStoreCfg = PostgresFileStoreCfg + { dbOpts :: DBOpts -- connstr, schema, poolSize, createSchema + , dbStoreLogPath :: Maybe FilePath + , confirmMigrations :: MigrationConfirmation + } +``` + +No `deletedTTL` (hard deletes). + +### Default DB Options + +```haskell +defaultXFTPDBOpts :: DBOpts +defaultXFTPDBOpts = DBOpts + { connstr = "postgresql://xftp@/xftp_server_store" + , schema = "xftp_server" + , poolSize = 10 + , createSchema = False + } +``` + +## Migration CLI + +Bidirectional migration via StoreLog as interchange format: + +``` +xftp-server database import files [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] +xftp-server database export files [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] +``` + +CLI options reuse `dbOptsP` parser from `Simplex.Messaging.Server.CLI`. + +### Import (StoreLog -> PostgreSQL) + +1. Read and replay StoreLog into temporary `STMFileStore` +2. Connect to PostgreSQL, run schema migrations +3. Batch-insert file records into `files` table +4. Batch-insert recipient records into `recipients` table +5. Report counts + +### Export (PostgreSQL -> StoreLog) + +1. Connect to PostgreSQL +2. Open new StoreLog file for writing +3. 
Fold over all file records, writing per file (in this order, matching existing `writeFileStore`): `AddFile` (with `ServerEntityStatus` — this preserves `EntityBlocked` state), `AddRecipients`, then `PutFile` (if `file_path` is set) +4. Report counts + +Note: `AddFile` carries `ServerEntityStatus` which includes `EntityBlocked info`, so blocking state is preserved through export/import without needing separate `BlockFile` log entries. + +File data on disk is untouched by migration — only metadata moves between backends. + +## Cabal Integration + +Shared `server_postgres` flag. New Postgres modules added to existing conditional block: + +```cabal +if flag(server_postgres) + cpp-options: -DdbServerPostgres + exposed-modules: + ...existing SMP modules... + Simplex.FileTransfer.Server.Store.Postgres + Simplex.FileTransfer.Server.Store.Postgres.Migrations + Simplex.FileTransfer.Server.Store.Postgres.Config +``` + +CPP guards (`#if defined(dbServerPostgres)`) in: +- `Store.hs` — Postgres `FromField`/`ToField` instances for XFTP-specific types if needed +- `Env.hs` — `XSCDatabase` constructor +- `Main.hs` — database CLI commands, store selection for `database` mode, Postgres imports +- `Server.hs` — Postgres-specific imports if needed + +## Testing + +- **Unit tests**: `PostgresFileStore` operations — add/get/delete/block/expire, duplicate detection, auth errors +- **Migration round-trip**: STM store → export to StoreLog → import to Postgres → export back → verify equality (including blocked file status) +- **Integration test**: run xftp-server with Postgres backend, perform file upload/download/delete cycle +- **Tests location**: in `tests/` alongside existing XFTP tests, guarded by `server_postgres` CPP flag +- **Test database**: PostgreSQL on `localhost:5432`, using a dedicated `xftp_server_test` schema (dropped and recreated per test run, following `xftp-web` test cleanup pattern) From 7a76102001ee54b60e9f62312667c8846266fc98 Mon Sep 17 00:00:00 2001 From: shum Date: 
Wed, 1 Apr 2026 07:53:50 +0000 Subject: [PATCH 02/24] update doc --- ...2026-03-25-xftp-postgres-backend-design.md | 265 ++++++++++++++---- 1 file changed, 211 insertions(+), 54 deletions(-) diff --git a/plans/2026-03-25-xftp-postgres-backend-design.md b/plans/2026-03-25-xftp-postgres-backend-design.md index a4488cea0..0512fe7d7 100644 --- a/plans/2026-03-25-xftp-postgres-backend-design.md +++ b/plans/2026-03-25-xftp-postgres-backend-design.md @@ -7,27 +7,19 @@ Add PostgreSQL backend support to xftp-server, following the SMP server pattern. ## Goals - PostgreSQL-backed file metadata storage as an alternative to STM + StoreLog -- Polymorphic server code via `FileStoreClass` typeclass with associated `StoreMonad` (following `MsgStoreClass` pattern) +- Polymorphic server code via `FileStoreClass` typeclass with IO-based methods (following `QueueStoreClass` pattern) - Bidirectional migration: StoreLog <-> PostgreSQL via CLI commands - Shared `server_postgres` cabal flag (same flag enables both SMP and XFTP Postgres support) - INI-based backend selection at runtime -## Non-Goals - -- Hybrid mode (STM caching + Postgres persistence as a distinct user-facing mode) -- Soft deletion / `deletedTTL` (XFTP uses random IDs with no reuse concern) -- Storing file data in PostgreSQL (files remain on disk) -- Separate cabal flag for XFTP Postgres - ## Architecture ### FileStoreClass Typeclass -Polymorphic over `StoreMonad`, following the `MsgStoreClass` pattern with injective type family: +IO-based typeclass following the `QueueStoreClass` pattern — each method is a self-contained IO action, with the implementation responsible for its own atomicity (STM backend wraps in `atomically`, Postgres backend uses database transactions): ```haskell class FileStoreClass s where - type StoreMonad s = (m :: Type -> Type) | m -> s type FileStoreConfig s :: Type -- Lifecycle @@ -35,27 +27,76 @@ class FileStoreClass s where closeFileStore :: s -> IO () -- File operations - addFile :: s -> 
SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> StoreMonad s (Either XFTPErrorType ()) - setFilePath :: s -> SenderId -> FilePath -> StoreMonad s (Either XFTPErrorType ()) - addRecipient :: s -> SenderId -> FileRecipient -> StoreMonad s (Either XFTPErrorType ()) - getFile :: s -> SFileParty p -> XFTPFileId -> StoreMonad s (Either XFTPErrorType (FileRec, C.APublicAuthKey)) - deleteFile :: s -> SenderId -> StoreMonad s (Either XFTPErrorType ()) - blockFile :: s -> SenderId -> BlockingInfo -> Bool -> StoreMonad s (Either XFTPErrorType ()) - deleteRecipient :: s -> RecipientId -> FileRec -> StoreMonad s () - ackFile :: s -> RecipientId -> StoreMonad s (Either XFTPErrorType ()) - - -- Expiration - expiredFiles :: s -> Int64 -> StoreMonad s [(SenderId, Maybe FilePath, Word32)] + addFile :: s -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> IO (Either XFTPErrorType ()) + setFilePath :: s -> SenderId -> FilePath -> IO (Either XFTPErrorType ()) + addRecipient :: s -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ()) + getFile :: s -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, C.APublicAuthKey)) + deleteFile :: s -> SenderId -> IO (Either XFTPErrorType ()) + blockFile :: s -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ()) + deleteRecipient :: s -> RecipientId -> FileRec -> IO () + ackFile :: s -> RecipientId -> IO (Either XFTPErrorType ()) + + -- Expiration (with LIMIT for Postgres; called in a loop until empty) + expiredFiles :: s -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] -- Storage and stats (for init-time computation) getUsedStorage :: s -> IO Int64 getFileCount :: s -> IO Int ``` -- STM backend: `StoreMonad s ~ STM` -- Postgres backend: `StoreMonad s ~ DBStoreIO` (i.e., `ReaderT DBTransaction IO`) +- STM backend: each method wraps its STM transaction in `atomically` internally. +- Postgres backend: each method runs its query via `withDB` / database connection internally. 
+ +No polymorphic monad or `runStore` dispatcher needed — unlike `MsgStoreClass`, XFTP file operations are individually atomic and don't require grouping multiple operations into backend-dependent transactions. + +### PostgresFileStore Data Type + +```haskell +data PostgresFileStore = PostgresFileStore + { dbStore :: DBStore + , dbStoreLog :: Maybe (StoreLog 'WriteMode) + } +``` + +- `dbStore` — connection pool created via `createDBStore`, runs schema migrations on init. +- `dbStoreLog` — optional parallel log file (enabled by `db_store_log` INI setting). When present, every mutation (`addFile`, `setFilePath`, `deleteFile`, `blockFile`, `addRecipient`, `ackFile`) also writes to this log via a `withLog` wrapper. `withLog` is called AFTER the DB operation succeeds (so the log reflects committed state only). Log write failures are non-fatal (logged as warnings, do not fail the DB operation). This provides an audit trail and enables recovery via export. + +`closeFileStore` for Postgres calls `closeDBStore` (closes connection pool) then `mapM_ closeStoreLog dbStoreLog` (flushes and closes the parallel log). For STM, it closes the storeLog. Called from a `finally` block during server shutdown, matching SMP's `stopServer` → `closeMsgStore` → `closeQueueStore` pattern. + +### STMFileStore Type + +After extracting from current `Store.hs`, `STMFileStore` retains the file and recipient maps but no longer owns `usedStorage` (moved to `XFTPEnv`): + +```haskell +data STMFileStore = STMFileStore + { files :: TMap SenderId FileRec + , recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey) + } +``` + +`closeFileStore` for STM is a no-op (TMaps are garbage-collected; the env-level `storeLog` is closed separately by the server). -Store operations executed via a runner: `atomically` for STM, `withTransaction` for Postgres. 
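+
+As a sketch of the IO-based instance shape (assuming `TM.lookup` over the `files` TMap and an `AUTH` error for an unknown sender id; details illustrative):
+
+```haskell
+instance FileStoreClass STMFileStore where
+  type FileStoreConfig STMFileStore = ()
+  -- each method is internally atomic; setFilePath no longer touches usedStorage
+  setFilePath st sId fPath = atomically $
+    TM.lookup sId (files st) >>= \case
+      Nothing -> pure $ Left AUTH
+      Just FileRec {filePath} -> Right <$> writeTVar filePath (Just fPath)
+```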
+### Error Handling + +Postgres operations follow SMP's `withDB` / `handleDuplicate` pattern: + +```haskell +withDB :: Text -> PostgresFileStore -> (DB.Connection -> IO (Either XFTPErrorType a)) -> IO (Either XFTPErrorType a) +withDB op st action = + E.try (withTransaction (dbStore st) action) >>= \case + Right r -> pure r + Left (e :: SomeException) -> logError ("STORE: " <> op <> ", " <> tshow e) $> Left INTERNAL + +handleDuplicate :: SqlError -> IO (Either XFTPErrorType a) +handleDuplicate e = case constraintViolation e of + Just (UniqueViolation _) -> pure $ Left DUPLICATE_ + _ -> E.throwIO e +``` + +- All DB operations wrapped in `withDB` — catches exceptions, logs, returns `INTERNAL`. +- Unique constraint violations caught by `handleDuplicate` and mapped to `DUPLICATE_`. +- UPDATE operations verified with `assertUpdated` — returns `AUTH` if 0 rows affected (matching SMP pattern, prevents silent failures when WHERE clause doesn't match). +- Critical sections (DB write + TVar update) wrapped in `uninterruptibleMask_` to prevent async exceptions from leaving inconsistent state between DB and TVars. ### FileRec and TVar Fields @@ -73,13 +114,13 @@ data FileRec = FileRec ``` - **STM backend**: TVars are the source of truth, as currently. -- **Postgres backend**: `getFile` reads from DB and creates a `FileRec` with fresh TVars populated from the DB row. Typeclass mutation methods (`setFilePath`, `blockFile`, etc.) update both the DB (persistence) and the TVars (in-session consistency). The `recipientIds` set is populated from a subquery on the `recipients` table. +- **Postgres backend**: `getFile` reads from DB and creates a `FileRec` with fresh TVars populated from the DB row (matching SMP's `mkQ` pattern — `newTVarIO` per load). Mutation methods (`setFilePath`, `blockFile`, etc.) update both the DB (persistence) and the TVars (in-session consistency). 
The `recipientIds` TVar is initialized to `S.empty` — no subquery needed because no server code reads `recipientIds` directly; all recipient operations go through the typeclass methods (`addRecipient`, `deleteRecipient`, `ackFile`), which query the `recipients` table for Postgres. ### usedStorage Ownership `usedStorage :: TVar Int64` moves from the store to `XFTPEnv`. The store typeclass does **not** manage `usedStorage` — it only provides `getUsedStorage` for init-time computation. -- **STM init**: StoreLog replay calls `setFilePath` (which only sets the filePath TVar — the STM `setFilePath` implementation is changed to **not** update `usedStorage`). After replay, `getUsedStorage` computes the sum over all file sizes (matching current `countUsedStorage` behavior). +- **STM init**: StoreLog replay calls `setFilePath` (which only sets the filePath TVar — the STM `setFilePath` implementation is changed to **not** update `usedStorage`). Similarly, STM `deleteFile` (Store.hs line 117) and `blockFile` (line 125) are changed to **not** update `usedStorage` — the server handles all `usedStorage` adjustments externally. After replay, `getUsedStorage` computes the sum over all file sizes (matching current `countUsedStorage` behavior). - **Postgres init**: `getUsedStorage` executes `SELECT COALESCE(SUM(file_size), 0) FROM files`. - **Runtime**: Server manages `usedStorage` TVar directly for reserve/commit/rollback during uploads, and adjusts after `deleteFile`/`blockFile` calls. @@ -87,25 +128,25 @@ data FileRec = FileRec ### Server.hs Refactoring -`Server.hs` becomes polymorphic over `FileStoreClass s`. A `runStore` helper dispatches `StoreMonad` execution (`atomically` for STM, `withTransaction` for Postgres). +`Server.hs` becomes polymorphic over `FileStoreClass s`. Since all typeclass methods are IO, call sites replace `atomically` with direct IO calls to the store. **Call sites requiring changes** (exhaustive list): -1. 
**`receiveServerFile`** (line 563): `atomically $ writeTVar filePath (Just fPath)` → `runStore $ setFilePath store senderId fPath`. The `reserve` logic (line 551-555) stays as direct TVar manipulation on `usedStorage` from `XFTPEnv`. +1. **`receiveServerFile`** (line 563): `atomically $ writeTVar filePath (Just fPath)` → `setFilePath store senderId fPath`. The `reserve` logic (line 551-555) stays as direct TVar manipulation on `usedStorage` from `XFTPEnv`. -2. **`verifyXFTPTransmission`** (line 453): `atomically $ verify =<< getFile st party fId` — the `getFile` call and subsequent `readTVar fileStatus` are in a single `atomically` block. Refactored to: `runStore $ getFile st party fId`, then read `fileStatus` from the returned `FileRec`'s TVar (safe for both backends — STM TVar is the source of truth, Postgres TVar is a fresh snapshot from DB). +2. **`verifyXFTPTransmission`** (line 453): `atomically $ verify =<< getFile st party fId` — the `getFile` call and subsequent `readTVar fileStatus` are in a single `atomically` block. Refactored to: `getFile st party fId` (IO), then `readTVarIO (fileStatus fr)` from the returned `FileRec` (safe for both backends — STM TVar is the source of truth, Postgres TVar is a fresh snapshot from DB). -3. **`retryAdd`** (line 516): Signature `XFTPFileId -> STM (Either XFTPErrorType a)` → `XFTPFileId -> StoreMonad s (Either XFTPErrorType a)`. The `atomically` call (line 520) replaced with `runStore`. +3. **`retryAdd`** (line 516): Signature `XFTPFileId -> STM (Either XFTPErrorType a)` → `XFTPFileId -> IO (Either XFTPErrorType a)`. The `atomically` call (line 520) replaced with `liftIO`. -4. **`deleteOrBlockServerFile_`** (line 620): Parameter `FileStore -> STM (Either XFTPErrorType ())` → `FileStoreClass s => s -> StoreMonad s (Either XFTPErrorType ())`. The `atomically` call (line 626) replaced with `runStore`. After the store action, server adjusts `usedStorage` TVar in `XFTPEnv` based on `fileInfo.size`. +4. 
**`deleteOrBlockServerFile_`** (line 620): Parameter `FileStore -> STM (Either XFTPErrorType ())` → `FileStoreClass s => s -> IO (Either XFTPErrorType ())`. The `atomically` call (line 626) removed — the store method is already IO. After the store action, server adjusts `usedStorage` TVar in `XFTPEnv` based on `fileInfo.size`. -5. **`ackFileReception`** (line 601): `atomically $ deleteRecipient st rId fr` → `runStore $ deleteRecipient st rId fr`. +5. **`ackFileReception`** (line 605): `atomically $ deleteRecipient st rId fr` → `deleteRecipient st rId fr`. -6. **Control port `CPDelete`/`CPBlock`** (lines 371, 377): `atomically $ getFile fs SFRecipient fileId` → `runStore $ getFile fs SFRecipient fileId`. +6. **Control port `CPDelete`/`CPBlock`** (lines 371, 377): `atomically $ getFile fs SFRecipient fileId` → `getFile fs SFRecipient fileId`. -7. **`expireServerFiles`** (line 636): Replace per-file `expiredFilePath` iteration with bulk `runStore $ expiredFiles st old`, which returns `[(SenderId, Maybe FilePath, Word32)]` — the `Word32` file size is needed so the server can adjust the `usedStorage` TVar after each deletion. The `itemDelay` between files applies to the deletion loop over the returned list, not the store query itself. +7. **`expireServerFiles`** (line 636): Replace per-file `expiredFilePath` iteration with batched `expiredFiles st old batchSize`, which returns `[(SenderId, Maybe FilePath, Word32)]` — the `Word32` file size is needed so the server can adjust the `usedStorage` TVar after each deletion. Called in a loop until the returned list is empty. The `itemDelay` between files applies to the deletion loop over each batch, not the query itself. STM backend ignores the batch size limit (returns all expired files from TMap scan); Postgres uses `LIMIT`. -8. **`restoreServerStats`** (line 694): `FileStore {files, usedStorage} <- asks store` accesses store fields directly. 
Refactored to: `usedStorage` from `XFTPEnv` via `asks usedStorage`, file count via `getFileCount store` (new typeclass method). STM: `M.size <$> readTVarIO files`. Postgres: `SELECT COUNT(*) FROM files`. +8. **`restoreServerStats`** (line 694): `FileStore {files, usedStorage} <- asks store` accesses store fields directly. Refactored to: `usedStorage` from `XFTPEnv` via `asks usedStorage`, file count via `getFileCount store`. STM: `M.size <$> readTVarIO files`. Postgres: `SELECT COUNT(*) FROM files`. ### Store Config Selection @@ -133,6 +174,65 @@ data XFTPEnv s = XFTPEnv The `M` monad (`ReaderT (XFTPEnv s) IO`) and all functions in `Server.hs` gain `FileStoreClass s =>` constraints. +**StoreLog lifecycle per backend:** + +- **STM mode**: `storeLog = Just sl` (current behavior — append-only log for persistence and recovery). +- **Postgres mode**: `storeLog = Nothing` (main storeLog disabled — Postgres is the source of truth). The optional parallel `dbStoreLog` inside `PostgresFileStore` provides audit/recovery if enabled via `db_store_log` INI setting. + +The existing `withFileLog` pattern in Server.hs continues to work unchanged — it maps over `Maybe (StoreLog 'WriteMode)`, which is `Nothing` in Postgres mode so the calls become no-ops. + +### Main.hs Store Type Dispatch + +The `Start` CLI command gains a `--confirm-migrations` flag (default `MCConsole` — manual prompt, matching SMP's `StartOptions`). For automated deployments, `--confirm-migrations up` auto-applies forward migrations. The import command uses `MCYesUp` (always auto-apply). 
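+
+A possible flag parser sketch (optparse-applicative; the flag spelling and accepted values are assumptions — `MCConsole`/`MCYesUp` are the constructors named above):
+
+```haskell
+confirmMigrationsP :: Parser MigrationConfirmation
+confirmMigrationsP =
+  option
+    (maybeReader $ \case "console" -> Just MCConsole; "up" -> Just MCYesUp; _ -> Nothing)
+    ( long "confirm-migrations"
+        <> metavar "CONFIRM"
+        <> value MCConsole -- default: prompt on console
+        <> help "Confirm database migrations: console or up"
+    )
+```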
+ +Following SMP's existential dispatch pattern (`AStoreType` + `run`), `Main.hs` selects the store type from INI config and dispatches to the polymorphic server: + +```haskell +runServer ini = do + let storeType = fromRight "memory" $ lookupValue "STORE_LOG" "store_files" ini + case storeType of + "memory" -> run $ XSCMemory (enableStoreLog $> storeLogFilePath) + "database" -> +#if defined(dbServerPostgres) + run $ XSCDatabase PostgresFileStoreCfg {..} +#else + exitError "server not compiled with Postgres support" +#endif + _ -> exitError $ "Invalid store_files value: " <> storeType + where + run :: FileStoreClass s => XFTPStoreConfig s -> IO () + run storeCfg = do + env <- newXFTPServerEnv storeCfg config + runReaderT (xftpServer config) env +``` + +**`newXFTPServerEnv` refactored signature:** + +```haskell +newXFTPServerEnv :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO (XFTPEnv s) +newXFTPServerEnv storeCfg config = do + (store, storeLog) <- case storeCfg of + XSCMemory storeLogPath -> do + st <- newFileStore () + sl <- mapM (`readWriteFileStore` st) storeLogPath + pure (st, sl) + XSCDatabase dbCfg -> do + st <- newFileStore dbCfg + pure (st, Nothing) -- main storeLog disabled for Postgres + usedStorage <- newTVarIO =<< getUsedStorage store + ... + pure XFTPEnv {config, store, usedStorage, storeLog, ...} +``` + +### Startup Config Validation + +Following SMP's `checkMsgStoreMode` pattern, `Main.hs` validates config before starting: + +- **`store_files=database` + StoreLog file exists** (without `db_store_log=on`): Error — "StoreLog file present but store_files is `database`. Use `xftp-server database import` to migrate, or set `db_store_log: on`." +- **`store_files=database` + schema doesn't exist**: Error — "Create schema in PostgreSQL or use `xftp-server database import`." +- **`store_files=memory` + Postgres schema exists**: Warning — "Postgres schema exists but store_files is `memory`. Data in Postgres will not be used." 
+- **Binary compiled without `server_postgres` + `store_files=database`**: Error — "Server not compiled with Postgres support." + ## Module Structure ``` @@ -176,27 +276,53 @@ CREATE INDEX idx_files_created_at ON files (created_at); ``` - `file_size` is `INT4` matching `Word32` in `FileInfo.size` -- `sender_key` and `recipient_key` stored as `BYTEA` using `StrEncoding` serialization (includes type tag for `APublicAuthKey` algebraic type — Ed25519 or X25519 variant) +- `sender_key` and `recipient_key` stored as `BYTEA` using binary encoding via `C.encodePubKey` / `C.decodePubKey` (matching SMP's `ToField`/`FromField` instances for `APublicAuthKey` — includes algorithm type tag in the binary format) - `file_path` nullable (set after upload completes via `setFilePath`) - `ON DELETE CASCADE` for recipients when file is hard-deleted - `created_at` stores rounded epoch seconds (1-hour precision, `RoundedFileTime`) - `status` as TEXT via `StrEncoding` (`ServerEntityStatus`: `EntityActive`, `EntityBlocked info`, `EntityOff`) - Hard deletes (no `deleted_at` column) -- No PL/pgSQL functions needed; row-level locking via `SELECT ... 
FOR UPDATE` on `setFilePath` to prevent duplicate uploads +- No PL/pgSQL functions needed; `setFilePath` uses `WHERE file_path IS NULL` to prevent duplicate uploads (the `UPDATE` itself acquires a row-level lock) - `used_storage` computed on startup: `SELECT COALESCE(SUM(file_size), 0) FROM files` (matches STM `countUsedStorage` — all files, see usedStorage Ownership section) +### Migrations Module + +Following SMP's `QueueStore/Postgres/Migrations.hs` pattern: + +```haskell +module Simplex.FileTransfer.Server.Store.Postgres.Migrations (xftpServerMigrations) where + +import Data.List (sortOn) +import Data.Text (Text) +import Simplex.Messaging.Agent.Store.Shared (Migration (..)) +import Text.RawString.QQ (r) + +xftpServerMigrations :: [Migration] +xftpServerMigrations = sortOn name $ map (\(name, up, down) -> Migration {name, up, down}) schemaMigrations + +schemaMigrations :: [(String, Text, Maybe Text)] +schemaMigrations = + [ ("20260325_initial", m20260325_initial, Nothing) -- no down migration for initial + ] + +m20260325_initial :: Text +m20260325_initial = [r| ... CREATE TABLE files ... |] +``` + +The `Migration` type (from `Simplex.Messaging.Agent.Store.Shared`) has fields `{name :: String, up :: Text, down :: Maybe Text}`. Initial migration has `Nothing` for `down`. Future migrations should include `Just down_migration` for rollback support. Called via `createDBStore dbOpts xftpServerMigrations (MigrationConfig confirmMigrations Nothing)`. + ### Postgres Operations Key query patterns: - **`addFile`**: `INSERT INTO files (...) VALUES (...)`, return `DUPLICATE_` on unique violation. -- **`setFilePath`**: `UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL`, `FOR UPDATE` row lock. Only persists the path; `usedStorage` managed by server. +- **`setFilePath`**: `UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL`, verified with `assertUpdated` (returns `AUTH` if 0 rows affected — file not found or already uploaded). 
The `WHERE file_path IS NULL` prevents duplicate uploads; the `UPDATE` acquires a row lock implicitly. Only persists the path; `usedStorage` managed by server. - **`addRecipient`**: `INSERT INTO recipients (...)`, plus check for duplicates. No need for `recipientIds` TVar update — Postgres derives it from the table. - **`getFile`** (sender): `SELECT ... FROM files WHERE sender_id = ?`, returns auth key from `sender_key` column. - **`getFile`** (recipient): `SELECT f.*, r.recipient_key FROM recipients r JOIN files f ON ... WHERE r.recipient_id = ?`. - **`deleteFile`**: `DELETE FROM files WHERE sender_id = ?` (recipients cascade). -- **`blockFile`**: `UPDATE files SET status = ? WHERE sender_id = ?`, optionally with file path clearing when `deleted = True`. -- **`expiredFiles`**: `SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ?` — single query replaces per-file iteration, includes `file_size` for `usedStorage` adjustment. +- **`blockFile`**: `UPDATE files SET status = ? WHERE sender_id = ?`. When `deleted = True`, the server adjusts `usedStorage` externally (matching current STM behavior where `blockFile` only updates status and storage, not `filePath`). +- **`expiredFiles`**: `SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ? LIMIT ?` — batched query replaces per-file iteration, includes `file_size` for `usedStorage` adjustment. Called in a loop until no rows returned. ## INI Configuration @@ -217,6 +343,30 @@ expire_files_hours: 48 - `memory` -> `XSCMemory` (current behavior) - `database` -> `XSCDatabase` (requires `server_postgres` build flag) +### INI Template Generation (`xftp-server init`) + +The `iniFileContent` function in `Main.hs` must be updated to generate the new keys in the `[STORE_LOG]` section. 
Following SMP's `iniDbOpts` pattern with `optDisabled'` (prefixes `"# "` when value equals default), Postgres keys are generated commented out by default: + +```ini +[STORE_LOG] +enable: on + +# File storage mode: `memory` or `database` (PostgreSQL). +store_files: memory + +# Database connection settings for PostgreSQL database (`store_files: database`). +# db_connection: postgresql://xftp@/xftp_server_store +# db_schema: xftp_server +# db_pool_size: 10 + +# Write database changes to store log file +# db_store_log: off + +expire_files_hours: 48 +``` + +Reuses `iniDBOptions` from `Simplex.Messaging.Server.CLI` for runtime parsing (falls back to defaults when keys are commented out or missing). `enableDbStoreLog'` pattern (`settingIsOn "STORE_LOG" "db_store_log"`) controls `dbStoreLogPath`. + ### PostgresFileStoreCfg ```haskell @@ -246,26 +396,32 @@ defaultXFTPDBOpts = DBOpts Bidirectional migration via StoreLog as interchange format: ``` -xftp-server database import files [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] -xftp-server database export files [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] +xftp-server database import [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] +xftp-server database export [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] ``` +No `--table` flag needed (unlike SMP which has queues/messages/all) — XFTP has a single entity type (files + recipients, always migrated together). + CLI options reuse `dbOptsP` parser from `Simplex.Messaging.Server.CLI`. ### Import (StoreLog -> PostgreSQL) -1. Read and replay StoreLog into temporary `STMFileStore` -2. Connect to PostgreSQL, run schema migrations -3. Batch-insert file records into `files` table -4. Batch-insert recipient records into `recipients` table -5. Report counts +1. Confirm: prompt user with database connection details and StoreLog path +2. Read and replay StoreLog into temporary `STMFileStore` +3. 
Connect to PostgreSQL, run schema migrations (`createSchema = True`, `confirmMigrations = MCYesUp`) +4. Batch-insert file records into `files` table using PostgreSQL COPY protocol (matching SMP's `batchInsertQueues` pattern for performance). Progress reported every 10k files. +5. Batch-insert recipient records into `recipients` table using COPY protocol +6. Verify counts: `SELECT COUNT(*) FROM files` / `recipients` — warn if mismatch +7. Rename StoreLog to `.bak` (prevents accidental re-import, preserves original for rollback) +8. Report counts ### Export (PostgreSQL -> StoreLog) -1. Connect to PostgreSQL -2. Open new StoreLog file for writing -3. Fold over all file records, writing per file (in this order, matching existing `writeFileStore`): `AddFile` (with `ServerEntityStatus` — this preserves `EntityBlocked` state), `AddRecipients`, then `PutFile` (if `file_path` is set) -4. Report counts +1. Confirm: prompt user with database connection details and output path. Fail if output file already exists. +2. Connect to PostgreSQL +3. Open new StoreLog file for writing +4. Fold over all file records, writing per file (in this order, matching existing `writeFileStore`): `AddFile` (with `ServerEntityStatus` — this preserves `EntityBlocked` state), `AddRecipients`, then `PutFile` (if `file_path` is set) +5. Report counts Note: `AddFile` carries `ServerEntityStatus` which includes `EntityBlocked info`, so blocking state is preserved through export/import without needing separate `BlockFile` log entries. @@ -293,8 +449,9 @@ CPP guards (`#if defined(dbServerPostgres)`) in: ## Testing +- **Parameterized server tests**: Existing `xftpServerTests` refactored to accept a store type parameter (following SMP's `SpecWith (ASrvTransport, AStoreType)` pattern). 
The same server tests run against both STM and Postgres backends — STM tests run unconditionally, Postgres tests added under `#if defined(dbServerPostgres)` with `postgressBracket` for database lifecycle (drop → create → test → drop). - **Unit tests**: `PostgresFileStore` operations — add/get/delete/block/expire, duplicate detection, auth errors -- **Migration round-trip**: STM store → export to StoreLog → import to Postgres → export back → verify equality (including blocked file status) -- **Integration test**: run xftp-server with Postgres backend, perform file upload/download/delete cycle +- **Migration round-trip**: STM store → export to StoreLog → import to Postgres → export back → verify StoreLog equality (including blocked file status) - **Tests location**: in `tests/` alongside existing XFTP tests, guarded by `server_postgres` CPP flag -- **Test database**: PostgreSQL on `localhost:5432`, using a dedicated `xftp_server_test` schema (dropped and recreated per test run, following `xftp-web` test cleanup pattern) +- **Test database**: PostgreSQL on `localhost:5432`, using a dedicated `xftp_server_test` schema (dropped and recreated per test run via `postgressBracket`, following SMP's test database lifecycle pattern) +- **Test fixtures**: `testXFTPStoreDBOpts :: DBOpts` with `createSchema = True`, `confirmMigrations = MCYesUp`, in `tests/XFTPClient.hs` From 1bf3211d6ebe33fef7802188e022ac394e81503a Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 11:30:36 +0000 Subject: [PATCH 03/24] adjust styling --- ...2026-03-25-xftp-postgres-backend-design.md | 91 +++++++++++-------- 1 file changed, 53 insertions(+), 38 deletions(-) diff --git a/plans/2026-03-25-xftp-postgres-backend-design.md b/plans/2026-03-25-xftp-postgres-backend-design.md index 0512fe7d7..78a32a507 100644 --- a/plans/2026-03-25-xftp-postgres-backend-design.md +++ b/plans/2026-03-25-xftp-postgres-backend-design.md @@ -20,7 +20,7 @@ IO-based typeclass following the `QueueStoreClass` pattern — 
each method is a ```haskell class FileStoreClass s where - type FileStoreConfig s :: Type + type FileStoreConfig s -- Lifecycle newFileStore :: FileStoreConfig s -> IO s @@ -53,8 +53,8 @@ No polymorphic monad or `runStore` dispatcher needed — unlike `MsgStoreClass`, ```haskell data PostgresFileStore = PostgresFileStore - { dbStore :: DBStore - , dbStoreLog :: Maybe (StoreLog 'WriteMode) + { dbStore :: DBStore, + dbStoreLog :: Maybe (StoreLog 'WriteMode) } ``` @@ -69,8 +69,8 @@ After extracting from current `Store.hs`, `STMFileStore` retains the file and re ```haskell data STMFileStore = STMFileStore - { files :: TMap SenderId FileRec - , recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey) + { files :: TMap SenderId FileRec, + recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey) } ``` @@ -81,11 +81,14 @@ data STMFileStore = STMFileStore Postgres operations follow SMP's `withDB` / `handleDuplicate` pattern: ```haskell -withDB :: Text -> PostgresFileStore -> (DB.Connection -> IO (Either XFTPErrorType a)) -> IO (Either XFTPErrorType a) +withDB :: Text -> PostgresFileStore -> (DB.Connection -> IO (Either XFTPErrorType a)) -> ExceptT XFTPErrorType IO a withDB op st action = - E.try (withTransaction (dbStore st) action) >>= \case - Right r -> pure r - Left (e :: SomeException) -> logError ("STORE: " <> op <> ", " <> tshow e) $> Left INTERNAL + ExceptT $ E.try (withTransaction (dbStore st) action) >>= either logErr pure + where + logErr :: E.SomeException -> IO (Either XFTPErrorType a) + logErr e = logError ("STORE: " <> err) $> Left INTERNAL + where + err = op <> ", withDB, " <> tshow e handleDuplicate :: SqlError -> IO (Either XFTPErrorType a) handleDuplicate e = case constraintViolation e of @@ -104,12 +107,12 @@ handleDuplicate e = case constraintViolation e of ```haskell data FileRec = FileRec - { senderId :: SenderId - , fileInfo :: FileInfo - , filePath :: TVar (Maybe FilePath) - , recipientIds :: TVar (Set RecipientId) - , createdAt :: 
RoundedFileTime - , fileStatus :: TVar ServerEntityStatus + { senderId :: SenderId, + fileInfo :: FileInfo, + filePath :: TVar (Maybe FilePath), + recipientIds :: TVar (Set RecipientId), + createdAt :: RoundedFileTime, + fileStatus :: TVar ServerEntityStatus } ``` @@ -164,11 +167,11 @@ data XFTPStoreConfig s where ```haskell data XFTPEnv s = XFTPEnv - { config :: XFTPServerConfig - , store :: s - , usedStorage :: TVar Int64 - , storeLog :: Maybe (StoreLog 'WriteMode) - , ... + { config :: XFTPServerConfig, + store :: s, + usedStorage :: TVar Int64, + storeLog :: Maybe (StoreLog 'WriteMode), + ... } ``` @@ -290,23 +293,34 @@ CREATE INDEX idx_files_created_at ON files (created_at); Following SMP's `QueueStore/Postgres/Migrations.hs` pattern: ```haskell -module Simplex.FileTransfer.Server.Store.Postgres.Migrations (xftpServerMigrations) where +module Simplex.FileTransfer.Server.Store.Postgres.Migrations + ( xftpServerMigrations, + ) +where import Data.List (sortOn) import Data.Text (Text) -import Simplex.Messaging.Agent.Store.Shared (Migration (..)) +import Simplex.Messaging.Agent.Store.Shared import Text.RawString.QQ (r) -xftpServerMigrations :: [Migration] -xftpServerMigrations = sortOn name $ map (\(name, up, down) -> Migration {name, up, down}) schemaMigrations - -schemaMigrations :: [(String, Text, Maybe Text)] -schemaMigrations = - [ ("20260325_initial", m20260325_initial, Nothing) -- no down migration for initial +xftpSchemaMigrations :: [(String, Text, Maybe Text)] +xftpSchemaMigrations = + [ ("20260325_initial", m20260325_initial, Nothing) ] +xftpServerMigrations :: [Migration] +xftpServerMigrations = sortOn name $ map migration xftpSchemaMigrations + where + migration (name, up, down) = Migration {name, up, down = down} + m20260325_initial :: Text -m20260325_initial = [r| ... CREATE TABLE files ... |] +m20260325_initial = + [r| +CREATE TABLE files ( + sender_id BYTEA NOT NULL PRIMARY KEY, + ... 
+); + |] ``` The `Migration` type (from `Simplex.Messaging.Agent.Store.Shared`) has fields `{name :: String, up :: Text, down :: Maybe Text}`. Initial migration has `Nothing` for `down`. Future migrations should include `Just down_migration` for rollback support. Called via `createDBStore dbOpts xftpServerMigrations (MigrationConfig confirmMigrations Nothing)`. @@ -371,9 +385,9 @@ Reuses `iniDBOptions` from `Simplex.Messaging.Server.CLI` for runtime parsing (f ```haskell data PostgresFileStoreCfg = PostgresFileStoreCfg - { dbOpts :: DBOpts -- connstr, schema, poolSize, createSchema - , dbStoreLogPath :: Maybe FilePath - , confirmMigrations :: MigrationConfirmation + { dbOpts :: DBOpts, + dbStoreLogPath :: Maybe FilePath, + confirmMigrations :: MigrationConfirmation } ``` @@ -383,12 +397,13 @@ No `deletedTTL` (hard deletes). ```haskell defaultXFTPDBOpts :: DBOpts -defaultXFTPDBOpts = DBOpts - { connstr = "postgresql://xftp@/xftp_server_store" - , schema = "xftp_server" - , poolSize = 10 - , createSchema = False - } +defaultXFTPDBOpts = + DBOpts + { connstr = "postgresql://xftp@/xftp_server_store", + schema = "xftp_server", + poolSize = 10, + createSchema = False + } ``` ## Migration CLI From 2caf2e54e2cef3e599c561e65bc2bc2915877034 Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 12:53:04 +0000 Subject: [PATCH 04/24] add implementation plan --- ...03-25-xftp-postgres-implementation-plan.md | 648 ++++++++++++++++++ 1 file changed, 648 insertions(+) create mode 100644 plans/2026-03-25-xftp-postgres-implementation-plan.md diff --git a/plans/2026-03-25-xftp-postgres-implementation-plan.md b/plans/2026-03-25-xftp-postgres-implementation-plan.md new file mode 100644 index 000000000..2ae334670 --- /dev/null +++ b/plans/2026-03-25-xftp-postgres-implementation-plan.md @@ -0,0 +1,648 @@ +# XFTP PostgreSQL Backend — Implementation Plan + +> **For agentic workers:** REQUIRED: Use superpowers-extended-cc:subagent-driven-development (if subagents available) or 
superpowers-extended-cc:executing-plans to implement this plan. Steps use checkbox (`- [ ]`) syntax for tracking. + +**Goal:** Add PostgreSQL backend support to xftp-server as an alternative to STM + StoreLog, with bidirectional migration. + +**Architecture:** Introduce `FileStoreClass` typeclass (IO-based, following `QueueStoreClass` pattern). Extract current STM store into `Store/STM.hs`, make `Server.hs` polymorphic, then add `Store/Postgres.hs` behind `server_postgres` CPP flag. `usedStorage` moves from store to `XFTPEnv` so the server manages quota tracking externally. + +**Tech Stack:** Haskell, postgresql-simple, STM, fourmolu, cabal with CPP flags + +**Design spec:** `plans/2026-03-25-xftp-postgres-backend-design.md` + +--- + +## File Structure + +**Existing files modified:** +- `src/Simplex/FileTransfer/Server/Store.hs` — rewritten: becomes typeclass + shared types +- `src/Simplex/FileTransfer/Server/Env.hs` — polymorphic `XFTPEnv s`, `XFTPStoreConfig` GADT +- `src/Simplex/FileTransfer/Server.hs` — polymorphic over `FileStoreClass s` +- `src/Simplex/FileTransfer/Server/StoreLog.hs` — update for IO store functions +- `src/Simplex/FileTransfer/Server/Main.hs` — INI config, dispatch, CLI commands +- `simplexmq.cabal` — new modules +- `tests/XFTPClient.hs` — Postgres test fixtures +- `tests/Test.hs` — Postgres test group + +**New files created:** +- `src/Simplex/FileTransfer/Server/Store/STM.hs` — `STMFileStore` (extracted from current `Store.hs`) +- `src/Simplex/FileTransfer/Server/Store/Postgres.hs` — `PostgresFileStore` [CPP-guarded] +- `src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs` — `PostgresFileStoreCfg` [CPP-guarded] +- `src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs` — schema SQL [CPP-guarded] +- `tests/CoreTests/XFTPStoreTests.hs` — Postgres store unit tests [CPP-guarded] + +--- + +## Task 1: Move `usedStorage` from `FileStore` to `XFTPEnv` + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store.hs` +- Modify: 
`src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` + +- [ ] **Step 1: Remove `usedStorage` from `FileStore` in `Store.hs`** + + 1. Remove `usedStorage :: TVar Int64` field from `FileStore` record (line 47). + 2. Remove `usedStorage <- newTVarIO 0` from `newFileStore` (line 75) and drop the field from the record construction (line 76). + 3. In `setFilePath` (line 92-97): remove `modifyTVar' (usedStorage st) (+ fromIntegral (size fileInfo))` — keep only `writeTVar filePath (Just fPath)`. Change pattern from `\FileRec {fileInfo, filePath}` to `\FileRec {filePath}` (fileInfo is now unused — `-Wunused-matches` error). + 4. In `deleteFile` (line 112-119): remove `modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo)`. Change outer pattern match from `FileStore {files, recipients, usedStorage}` to `FileStore {files, recipients}`. Change inner pattern from `Just FileRec {fileInfo, recipientIds}` to `Just FileRec {recipientIds}` (`fileInfo` is now unused — `-Wunused-matches` error). + 5. In `blockFile` (line 122-127): remove `when deleted $ modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo)`. Change pattern match from `st@FileStore {usedStorage}` to `st`. The `deleted` parameter and `fileInfo` in the inner pattern become unused — prefix with `_` or remove from pattern to avoid `-Wunused-matches`. + +- [ ] **Step 2: Add `usedStorage` to `XFTPEnv` in `Env.hs`** + + 1. Add `usedStorage :: TVar Int64` field to `XFTPEnv` record (between `store` and `storeLog`, line 93). + 2. In `newXFTPServerEnv` (line 112-126): replace lines 117-118: + ``` + used <- countUsedStorage <$> readTVarIO (files store) + atomically $ writeTVar (usedStorage store) used + ``` + with: + ``` + usedStorage <- newTVarIO =<< countUsedStorage <$> readTVarIO (files store) + ``` + 3. Add `usedStorage` to the `pure XFTPEnv {..}` construction. + +- [ ] **Step 3: Update all `usedStorage` access sites in `Server.hs`** + + 1. 
Line 552: `us <- asks $ usedStorage . store` → `us <- asks usedStorage`. + 2. Line 569: `us <- asks $ usedStorage . store` → `us <- asks usedStorage`. + 3. Line 639: `usedStart <- readTVarIO $ usedStorage st` → `usedStart <- readTVarIO =<< asks usedStorage`. + 4. Line 647: `usedEnd <- readTVarIO $ usedStorage st` → `usedEnd <- readTVarIO =<< asks usedStorage`. + 5. Line 694: `FileStore {files, usedStorage} <- asks store` → split into `FileStore {files} <- asks store` and `usedStorage <- asks usedStorage`. + 6. In `deleteOrBlockServerFile_` (line 620): after `void $ atomically $ storeAction st`, add usedStorage adjustment — `us <- asks usedStorage` then `atomically $ modifyTVar' us $ subtract (fromIntegral $ size fileInfo)` when file had a path (check `path` from `readTVarIO filePath` earlier in the function). + +- [ ] **Step 4: Build and verify** + + Run: `cabal build` + +- [ ] **Step 5: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 6: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git commit -m "refactor(xftp): move usedStorage from FileStore to XFTPEnv" + ``` + +--- + +## Task 2: Add `getUsedStorage`, `getFileCount`, `expiredFiles` functions + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` + +- [ ] **Step 1: Add three new functions to `Store.hs`** + + 1. Add to exports: `getUsedStorage`, `getFileCount`, `expiredFiles`. + 2. Remove `expiredFilePath` from exports AND delete the function definition (dead code → `-Wunused-binds` error). 
Also remove `($>>=)` from import `Simplex.Messaging.Util (ifM, ($>>=))` → `Simplex.Messaging.Util (ifM)` — `$>>=` was only used by `expiredFilePath`. + 3. Add import: `qualified Data.Map.Strict as M` (needed for `M.foldl'` in `getUsedStorage` and `M.toList` in `expiredFiles`). + 4. Implement: + ```haskell + getUsedStorage :: FileStore -> IO Int64 + getUsedStorage FileStore {files} = + M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 <$> readTVarIO files + + getFileCount :: FileStore -> IO Int + getFileCount FileStore {files} = M.size <$> readTVarIO files + + expiredFiles :: FileStore -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] + expiredFiles FileStore {files} old _limit = do + fs <- readTVarIO files + fmap catMaybes . forM (M.toList fs) $ \(sId, FileRec {fileInfo = FileInfo {size}, filePath, createdAt = RoundedSystemTime createdAt}) -> + if createdAt + fileTimePrecision < old + then do + path <- readTVarIO filePath + pure $ Just (sId, path, size) + else pure Nothing + ``` + 5. Add imports: `Data.Maybe (catMaybes)`, `Data.Word (Word32)` (note: `qualified Data.Map.Strict as M` already added in item 3). + +- [ ] **Step 2: Replace `countUsedStorage` in `Env.hs`** + + 1. Replace `countUsedStorage <$> readTVarIO (files store)` with `getUsedStorage store` in `newXFTPServerEnv`. + 2. Remove `countUsedStorage` function definition and its export. + 3. Remove `qualified Data.Map.Strict as M` import if no longer used. + +- [ ] **Step 3: Update `restoreServerStats` in `Server.hs` to use `getFileCount`** + + In `restoreServerStats` (line 694-696): replace `FileStore {files} <- asks store` and `_filesCount <- M.size <$> readTVarIO files` with `st <- asks store` and `_filesCount <- liftIO $ getFileCount st` (eliminates the `FileStore` pattern match — `files` binding no longer needed). + +- [ ] **Step 4: Replace `expireServerFiles` iteration in `Server.hs`** + + 1. Replace the body of `expireServerFiles` (lines 636-660). 
Remove `files' <- readTVarIO (files st)` and the `forM_ (M.keys files')` loop. + 2. New body: call `expiredFiles st old 10000` in a loop. For each `(sId, filePath_, fileSize)` in returned list: apply `itemDelay`, remove disk file if present, call `atomically $ deleteFile st sId`, adjust `usedStorage` TVar by `fileSize`, increment `filesExpired` stat. Loop until `expiredFiles` returns `[]`. + 3. Remove `Data.Map.Strict` import from Server.hs if no longer needed (was used for `M.size` and `M.keys` — now replaced by `getFileCount` and `expiredFiles`). + +- [ ] **Step 5: Build and verify** + + Run: `cabal build` + +- [ ] **Step 6: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git commit -m "refactor(xftp): add getUsedStorage, getFileCount, expiredFiles store functions" + ``` + +--- + +## Task 3: Change `Store.hs` functions from STM to IO + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` +- Modify: `src/Simplex/FileTransfer/Server/StoreLog.hs` + +- [ ] **Step 1: Change all Store.hs function signatures from STM to IO** + + For each of: `addFile`, `setFilePath`, `addRecipient`, `getFile`, `deleteFile`, `blockFile`, `deleteRecipient`, `ackFile`: + 1. Change return type from `STM (Either XFTPErrorType ...)` to `IO (Either XFTPErrorType ...)` (or `STM ()` to `IO ()` for `deleteRecipient`). + 2. Wrap the function body in `atomically $ do ...`. + 3. Keep `withFile` and `newFileRec` as internal STM helpers (called inside the `atomically` blocks). + +- [ ] **Step 2: Update Server.hs call sites — remove `atomically` wrappers** + + 1. 
Line 563 (`receiveServerFile`): change `atomically $ writeTVar filePath (Just fPath)` → add `st <- asks store` then `void $ liftIO $ setFilePath st senderId fPath` (design call site #1 — `store` is not in scope in `receiveServerFile`'s `receive` helper, so bind via `asks`; `void` avoids `-Wunused-do-bind` warning on the `Either` result). + 2. Line 453 (`verifyXFTPTransmission`): split `atomically $ verify =<< getFile st party fId` into: `liftIO (getFile st party fId)` (IO→M lift), then pattern match on result, use `readTVarIO (fileStatus fr)` instead of `readTVar`. + 3. Lines 371, 377 (control port `CPDelete`/`CPBlock`): change `ExceptT $ atomically $ getFile fs SFRecipient fileId` → `ExceptT $ liftIO $ getFile fs SFRecipient fileId` (inside `unliftIO u $ do` block which runs in M monad — `liftIO` required to lift IO into M). + 4. Line 508 (`addFile` in `createFile`): the `ExceptT $ addFile st sId file ts EntityActive` — `addFile` is now IO, `ExceptT` wraps IO directly. Remove any `atomically`. + 5. Line 514 (`addRecipient`): same — `ExceptT . addRecipient st sId` works directly in IO. + 6. Line 516 (`retryAdd`): change parameter type from `(XFTPFileId -> STM (Either XFTPErrorType a))` to `(XFTPFileId -> IO (Either XFTPErrorType a))`. Line 520: change `atomically (add fId)` to `liftIO (add fId)`. + 7. Line 605 (`ackFileReception`): change `atomically $ deleteRecipient st rId fr` to `liftIO $ deleteRecipient st rId fr`. + 8. Line 620 (`deleteOrBlockServerFile_`): change third parameter type from `(FileStore -> STM (Either XFTPErrorType ()))` to `(FileStore -> IO (Either XFTPErrorType ()))`. Line 626: change `void $ atomically $ storeAction st` to `void $ liftIO $ storeAction st`. + 9. `expireServerFiles` `delete` helper: change `atomically $ deleteFile st sId` to `liftIO $ deleteFile st sId` (deleteFile is now IO; `liftIO` required because the helper runs in M monad, not IO). 
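+
+The conversion pattern from Steps 1 and 2, reduced to a self-contained sketch (toy types, not the simplexmq definitions; the real `setFilePath` returns `Either XFTPErrorType ()`):
+
+```haskell
+{-# LANGUAGE NamedFieldPuns #-}
+
+import Control.Concurrent.STM
+import qualified Data.Map.Strict as M
+
+type SenderId = String
+
+newtype Store = Store {files :: TVar (M.Map SenderId (TVar (Maybe FilePath)))}
+
+-- Before: an STM action; every call site wraps it with `atomically`.
+setFilePathSTM :: Store -> SenderId -> FilePath -> STM (Either String ())
+setFilePathSTM Store {files} sId path = do
+  fs <- readTVar files
+  case M.lookup sId fs of
+    Nothing -> pure $ Left "AUTH"
+    Just v -> Right <$> writeTVar v (Just path)
+
+-- After: `atomically` moves inside the store operation, so call sites in
+-- the `M` monad just use `liftIO` (the same shape a Postgres backend needs).
+setFilePath :: Store -> SenderId -> FilePath -> IO (Either String ())
+setFilePath st sId path = atomically $ setFilePathSTM st sId path
+```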
+ +- [ ] **Step 3: Update `StoreLog.hs` — remove `atomically` from replay** + + In `readFileStore` (line 93), function `addToStore`: + 1. Change `atomically (addToStore lr)` to `addToStore lr` — store functions are now IO. + 2. The `addToStore` body calls `addFile`, `setFilePath`, `deleteFile`, `blockFile`, `ackFile` — all IO now, no `atomically` needed. + 3. For `AddRecipients`: `runExceptT $ mapM_ (ExceptT . addRecipient st sId) rcps` — `addRecipient` returns `IO (Either ...)`, so `ExceptT . addRecipient st sId` works directly. + +- [ ] **Step 4: Build and verify** + + Run: `cabal build` + +- [ ] **Step 5: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 6: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs + git commit -m "refactor(xftp): change file store operations from STM to IO" + ``` + +--- + +## Task 4: Extract `FileStoreClass` typeclass, move STM impl to `Store/STM.hs` + +**Files:** +- Rewrite: `src/Simplex/FileTransfer/Server/Store.hs` +- Create: `src/Simplex/FileTransfer/Server/Store/STM.hs` +- Modify: `src/Simplex/FileTransfer/Server/StoreLog.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` +- Modify: `simplexmq.cabal` + +- [ ] **Step 1: Create `Store/STM.hs` — move all implementation code** + + 1. Create directory `src/Simplex/FileTransfer/Server/Store/`. + 2. Create `src/Simplex/FileTransfer/Server/Store/STM.hs`. + 3. Move from `Store.hs`: `FileStore` data type (rename to `STMFileStore`), all function implementations, internal helpers (`withFile`, `newFileRec`), all STM-specific imports. + 4. Rename all `FileStore` references to `STMFileStore` in the new file. + 5. 
Module declaration: `module Simplex.FileTransfer.Server.Store.STM` exporting only `STMFileStore (..)` — do NOT export standalone functions (`addFile`, `setFilePath`, etc.) to avoid name collisions with the typeclass methods from `Store.hs`. + +- [ ] **Step 2: Rewrite `Store.hs` as the typeclass module** + + 1. Add `{-# LANGUAGE TypeFamilies #-}` pragma to `Store.hs` (required for `type FileStoreConfig s` associated type). + 2. Keep in `Store.hs`: `FileRec (..)`, `FileRecipient (..)`, `RoundedFileTime`, `fileTimePrecision` definitions and their `StrEncoding` instance. + 3. Add `FileStoreClass` typeclass: + ```haskell + class FileStoreClass s where + type FileStoreConfig s + + -- Lifecycle + newFileStore :: FileStoreConfig s -> IO s + closeFileStore :: s -> IO () + + -- File operations + addFile :: s -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> IO (Either XFTPErrorType ()) + setFilePath :: s -> SenderId -> FilePath -> IO (Either XFTPErrorType ()) + addRecipient :: s -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ()) + getFile :: s -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, C.APublicAuthKey)) + deleteFile :: s -> SenderId -> IO (Either XFTPErrorType ()) + blockFile :: s -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ()) + deleteRecipient :: s -> RecipientId -> FileRec -> IO () + ackFile :: s -> RecipientId -> IO (Either XFTPErrorType ()) + + -- Expiration + expiredFiles :: s -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] + + -- Stats + getUsedStorage :: s -> IO Int64 + getFileCount :: s -> IO Int + ``` + 4. Do NOT re-export from `Store/STM.hs` — this would create a circular module dependency (Store.hs imports Store/STM.hs, Store/STM.hs imports Store.hs). Consumers must import `Store.STM` directly where they need `STMFileStore`. + 5. Remove all STM-specific imports that are no longer needed. + +- [ ] **Step 3: Add `FileStoreClass` instance in `Store/STM.hs`** + + 1. 
Import `FileStoreClass` from `Simplex.FileTransfer.Server.Store`. + 2. Inline all implementations directly in the instance body (do NOT delegate to standalone functions — the standalone names collide with typeclass method names, causing ambiguous occurrences for importers): + ```haskell + instance FileStoreClass STMFileStore where + type FileStoreConfig STMFileStore = () + newFileStore () = do + files <- TM.emptyIO + recipients <- TM.emptyIO + pure STMFileStore {files, recipients} + closeFileStore _ = pure () + addFile st sId fileInfo createdAt status = atomically $ ... + setFilePath st sId fPath = atomically $ ... + -- ... (each method's body is the existing function body, inlined) + ``` + 3. Remove the standalone top-level function definitions — they are now instance methods. Keep only `withFile` and `newFileRec` as internal helpers used by the instance methods. + +- [ ] **Step 4: Update importers** + + 1. `Env.hs`: add `import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))`. Change `FileStore` → `STMFileStore` in `XFTPEnv` type and `newXFTPServerEnv`. Change `store <- newFileStore` to `store <- newFileStore ()` (typeclass method now takes `FileStoreConfig STMFileStore` which is `()`). Keep `import Simplex.FileTransfer.Server.Store` for `FileRec`, `FileRecipient`, `FileStoreClass`, etc. + 2. `Server.hs`: add `import Simplex.FileTransfer.Server.Store.STM`. Change `FileStore` → `STMFileStore` in any explicit type annotations. Import `FileStoreClass` from `Simplex.FileTransfer.Server.Store`. + 3. `StoreLog.hs`: add `import Simplex.FileTransfer.Server.Store.STM` to access concrete `STMFileStore` type and store functions used during log replay. Change `FileStore` → `STMFileStore` in `readWriteFileStore` and `writeFileStore` parameter types. + +- [ ] **Step 5: Update cabal file** + + Add `Simplex.FileTransfer.Server.Store.STM` to `exposed-modules` in the `!flag(client_library)` section, alongside existing XFTP server modules. 
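The class/instance shape in steps 2–3 — and the reason method bodies are inlined in the instance — can be sketched self-contained with toy types (plain `String`/`Int` stand in for the real `SenderId`/`FileRec`/`XFTPErrorType`):

```haskell
{-# LANGUAGE TypeFamilies #-}

import Control.Concurrent.STM
import qualified Data.Map.Strict as M

type SenderId = String -- toy stand-in for illustration only

class FileStoreClass s where
  type FileStoreConfig s
  newFileStore :: FileStoreConfig s -> IO s
  addFile :: s -> SenderId -> Int -> IO (Either String ())
  getFileCount :: s -> IO Int

newtype STMFileStore = STMFileStore {files :: TVar (M.Map SenderId Int)}

instance FileStoreClass STMFileStore where
  type FileStoreConfig STMFileStore = ()
  newFileStore () = STMFileStore <$> newTVarIO M.empty
  -- bodies live directly in the instance: a standalone addFile exported
  -- from Store.STM would be ambiguous with the class method of the same name
  addFile st sId size = atomically $ do
    fs <- readTVar (files st)
    if M.member sId fs
      then pure (Left "DUPLICATE_")
      else Right () <$ writeTVar (files st) (M.insert sId size fs)
  getFileCount st = M.size <$> readTVarIO (files st)

main :: IO ()
main = do
  st <- newFileStore () :: IO STMFileStore
  print =<< addFile st "abc" 100 -- Right ()
  print =<< addFile st "abc" 100 -- Left "DUPLICATE_"
  print =<< getFileCount st -- 1
```

Consumers that only use class methods never need `Store.STM`; they import it solely to name the concrete `STMFileStore` type, which is what keeps the two modules acyclic.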
+ +- [ ] **Step 6: Build and verify** + + Run: `cabal build` + +- [ ] **Step 7: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 8: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Store/STM.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Store/STM.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs simplexmq.cabal + git commit -m "refactor(xftp): extract FileStoreClass typeclass, move STM impl to Store.STM" + ``` + +--- + +## Task 5: Make `XFTPEnv` and `Server.hs` polymorphic over `FileStoreClass` + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` +- Modify: `src/Simplex/FileTransfer/Server/Main.hs` +- Modify: `tests/XFTPClient.hs` (if it calls `runXFTPServerBlocking` directly) + +- [ ] **Step 1: Make `XFTPEnv` polymorphic in `Env.hs`** + + 1. Add `XFTPStoreConfig` GADT: `data XFTPStoreConfig s where XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore`. + 2. Change `data XFTPEnv` to `data XFTPEnv s` — field `store :: FileStore` becomes `store :: s`. + 3. Change `newXFTPServerEnv :: XFTPServerConfig -> IO XFTPEnv` to `newXFTPServerEnv :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO (XFTPEnv s)`. + 4. Pattern match on `XSCMemory storeLogPath` in `newXFTPServerEnv` body. Create store via `newFileStore ()`, storeLog via `mapM (`readWriteFileStore` st) storeLogPath`. + +- [ ] **Step 2: Make `Server.hs` polymorphic** + + 1. Change `type M a = ReaderT XFTPEnv IO a` to `type M s a = ReaderT (XFTPEnv s) IO a`. + 2. Add `FileStoreClass s =>` constraint to all functions using `M s a`. 
Use `forall s.` in signatures of functions that have `where`-block bindings with `M s` type annotations — `ScopedTypeVariables` requires explicit `forall` to bring `s` into scope for inner type signatures (matching SMP's `smpServer :: forall s. MsgStoreClass s => ...` pattern). Full list: `xftpServer`, `processRequest`, `verifyXFTPTransmission`, `processXFTPRequest` and all its `where`-bound functions (`createFile`, `addRecipients`, `receiveServerFile`, `sendServerFile`, `deleteServerFile`, `ackFileReception`, `retryAdd`, `addFileRetry`, `addRecipientRetry`), `deleteServerFile_`, `blockServerFile`, `deleteOrBlockServerFile_`, `expireServerFiles`, `randomId`, `getFileId`, `withFileLog`, `incFileStat`, `saveServerStats`, `restoreServerStats`, `randomDelay` (inside `#ifdef slow_servers` CPP block). Also update `encodeXftp` (line 236) and `runCPClient` (line 339) which use explicit `ReaderT XFTPEnv IO` instead of the `M` alias — change to `ReaderT (XFTPEnv s) IO`. + 3. Change `runXFTPServerBlocking` and `runXFTPServer` to take `XFTPStoreConfig s` parameter. + 4. Add `closeFileStore store` call to the server shutdown path (in the `finally` block or `stopServer` equivalent — after saving stats, before logging "Server stopped"). This ensures Postgres connection pool and `dbStoreLog` are properly closed. For STM this is a no-op. + +- [ ] **Step 3: Update `Main.hs` dispatch** + + 1. In `runServer`: construct `XSCMemory (enableStoreLog $> storeLogFilePath)`. + 2. Add dispatch function that calls the updated `runXFTPServer` (which creates `started` internally): + ```haskell + run :: FileStoreClass s => XFTPStoreConfig s -> IO () + run storeCfg = runXFTPServer storeCfg serverConfig + ``` + 3. Call `run` with the `XSCMemory` config. + +- [ ] **Step 4: Update test helper if needed** + + If `tests/XFTPClient.hs` calls `runXFTPServerBlocking` directly, update the call to pass an `XSCMemory` config. Check the `withXFTPServer` / `serverBracket` helper. 
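Step 2's `forall s.` requirement is easy to reproduce in miniature (a toy `M` alias stands in for the real `ReaderT (XFTPEnv s) IO`):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- toy stand-in: M s a plays the role of ReaderT (XFTPEnv s) IO a
type M s a = s -> IO a

class FileStoreClass s where
  storeName :: s -> String

instance FileStoreClass () where
  storeName _ = "stm"

-- Without the explicit `forall s.`, the inner signature `helper :: M s ()`
-- would introduce a *fresh* s with no FileStoreClass instance in scope,
-- and the call to storeName would not typecheck.
processRequest :: forall s. FileStoreClass s => M s ()
processRequest st = helper st
  where
    helper :: M s () -- this s is the outer one, via ScopedTypeVariables
    helper st' = putStrLn ("store: " ++ storeName st')

main :: IO ()
main = processRequest ()
```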
+ +- [ ] **Step 5: Build and verify** + + Run: `cabal build && cabal build test:simplexmq-test` + +- [ ] **Step 6: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/Main.hs + git add src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/Main.hs tests/XFTPClient.hs simplexmq.cabal + git commit -m "refactor(xftp): make XFTPEnv and server polymorphic over FileStoreClass" + ``` + +--- + +## Task 6: Add Postgres config, migrations, and store skeleton + +**Files:** +- Create: `src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs` +- Create: `src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs` +- Create: `src/Simplex/FileTransfer/Server/Store/Postgres.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `simplexmq.cabal` + +- [ ] **Step 1: Create `Store/Postgres/Config.hs`** + + ```haskell + module Simplex.FileTransfer.Server.Store.Postgres.Config + ( PostgresFileStoreCfg (..), + defaultXFTPDBOpts, + ) + where + + import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..)) + import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation) + + data PostgresFileStoreCfg = PostgresFileStoreCfg + { dbOpts :: DBOpts, + dbStoreLogPath :: Maybe FilePath, + confirmMigrations :: MigrationConfirmation + } + + defaultXFTPDBOpts :: DBOpts + defaultXFTPDBOpts = + DBOpts + { connstr = "postgresql://xftp@/xftp_server_store", + schema = "xftp_server", + poolSize = 10, + createSchema = False + } + ``` + +- [ ] **Step 2: Create `Store/Postgres/Migrations.hs`** + + Full migration module with `xftpServerMigrations :: [Migration]` and `m20260325_initial` containing CREATE TABLE SQL for `files` and `recipients` tables plus indexes. 
Follow SMP's `QueueStore/Postgres/Migrations.hs` pattern exactly: tuple list → `sortOn name . map migration`. + +- [ ] **Step 3: Create `Store/Postgres.hs` with stub instance** + + 1. Define `PostgresFileStore` with `dbStore :: DBStore` and `dbStoreLog :: Maybe (StoreLog 'WriteMode)`. + 2. `instance FileStoreClass PostgresFileStore` with `error "not implemented"` for all methods except `newFileStore` (calls `createDBStore` + opens `dbStoreLog`) and `closeFileStore` (closes both). `type FileStoreConfig PostgresFileStore = PostgresFileStoreCfg`. + 3. Add `withDB`, `handleDuplicate`, `assertUpdated`, `withLog` helpers. + +- [ ] **Step 4: Add `XSCDatabase` GADT constructor in `Env.hs` (CPP-guarded)** + + ```haskell + #if defined(dbServerPostgres) + import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore) + import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg) + #endif + + data XFTPStoreConfig s where + XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore + #if defined(dbServerPostgres) + XSCDatabase :: PostgresFileStoreCfg -> XFTPStoreConfig PostgresFileStore + #endif + ``` + +- [ ] **Step 5: Update cabal** + + Add to existing `if flag(server_postgres)` block: + ``` + Simplex.FileTransfer.Server.Store.Postgres + Simplex.FileTransfer.Server.Store.Postgres.Config + Simplex.FileTransfer.Server.Store.Postgres.Migrations + ``` + +- [ ] **Step 6: Build both ways** + + Run: `cabal build && cabal build -fserver_postgres` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store/Postgres.hs src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs src/Simplex/FileTransfer/Server/Env.hs + git add src/Simplex/FileTransfer/Server/Store/Postgres.hs src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs src/Simplex/FileTransfer/Server/Env.hs simplexmq.cabal + git commit -m "feat(xftp): add PostgreSQL store skeleton with schema 
migration" + ``` + +--- + +## Task 7: Implement `PostgresFileStore` operations + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store/Postgres.hs` + +- [ ] **Step 1: Implement `addFile`** + + `INSERT INTO files (sender_id, file_size, file_digest, sender_key, file_path, created_at, status) VALUES (?,?,?,?,NULL,?,?)`. Catch unique violation with `handleDuplicate` → `DUPLICATE_`. Call `withLog "addFile"` after. + +- [ ] **Step 2: Implement `getFile`** + + For `SFSender`: `SELECT ... FROM files WHERE sender_id = ?`. Construct `FileRec` with `newTVarIO` per TVar field. `recipientIds = S.empty`. + For `SFRecipient`: `SELECT f.*, r.recipient_key FROM recipients r JOIN files f ON r.sender_id = f.sender_id WHERE r.recipient_id = ?`. + +- [ ] **Step 3: Implement `setFilePath`** + + `UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL`. Use `assertUpdated`. Call `withLog "setFilePath"`. + +- [ ] **Step 4: Implement `addRecipient`** + + `INSERT INTO recipients (recipient_id, sender_id, recipient_key) VALUES (?,?,?)`. `handleDuplicate` → `DUPLICATE_`. Call `withLog "addRecipient"`. + +- [ ] **Step 5: Implement `deleteFile`, `blockFile`** + + `deleteFile`: `DELETE FROM files WHERE sender_id = ?` (CASCADE). `withLog "deleteFile"`. + `blockFile`: `UPDATE files SET status = ? WHERE sender_id = ?`. `assertUpdated`. `withLog "blockFile"`. + +- [ ] **Step 6: Implement `deleteRecipient`, `ackFile`** + + `deleteRecipient`: `DELETE FROM recipients WHERE recipient_id = ?`. `withLog "deleteRecipient"`. + `ackFile`: same + return `Left AUTH` if 0 rows. + +- [ ] **Step 7: Implement `expiredFiles`, `getUsedStorage`, `getFileCount`** + + `expiredFiles`: `SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ? LIMIT ?`. + `getUsedStorage`: `SELECT COALESCE(SUM(file_size), 0) FROM files`. + `getFileCount`: `SELECT COUNT(*) FROM files`. 
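As a sketch of the error mapping used in steps 1 and 4 (the actual simplexmq helper may differ), `handleDuplicate` can catch postgresql-simple's `SqlError` and translate PostgreSQL's `unique_violation` code `23505` into `DUPLICATE_`:

```haskell
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}

import qualified Control.Exception as E
import Database.PostgreSQL.Simple (SqlError (..))
import Simplex.FileTransfer.Transport (XFTPErrorType (..))

-- sketch only: run a DB action, mapping a unique-constraint violation
-- (PostgreSQL error code 23505) to XFTP's DUPLICATE_ error
handleDuplicate :: IO a -> IO (Either XFTPErrorType a)
handleDuplicate action =
  E.try action >>= \case
    Right r -> pure $ Right r
    Left e
      | sqlState e == "23505" -> pure $ Left DUPLICATE_
      | otherwise -> E.throwIO e
```

Rethrowing anything other than `23505` keeps genuine connection or constraint failures visible instead of silently converting them into protocol errors.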
+ +- [ ] **Step 8: Add `ToField`/`FromField` instances** + + For `RoundedFileTime` (Int64 wrapper), `ServerEntityStatus` (Text via StrEncoding), `C.APublicAuthKey` (Binary via `encodePubKey`/`decodePubKey`). Check SMP's `QueueStore/Postgres.hs` for existing instances to import. + +- [ ] **Step 9: Wrap mutation operations in `uninterruptibleMask_`** + + Operations that combine a DB write with a TVar update (e.g., `getFile` constructs `FileRec` with `newTVarIO`) must be wrapped in `E.uninterruptibleMask_` to prevent async exceptions from leaving inconsistent state. Follow SMP's `addQueue_`, `deleteStoreQueue` pattern. + +- [ ] **Step 10: Build** + + Run: `cabal build -fserver_postgres` + +- [ ] **Step 11: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store/Postgres.hs + git add src/Simplex/FileTransfer/Server/Store/Postgres.hs + git commit -m "feat(xftp): implement PostgresFileStore operations" + ``` + +--- + +## Task 8: Add INI config, Main.hs dispatch, startup validation + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Main.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` + +- [ ] **Step 1: Update `iniFileContent` in `Main.hs`** + + Add to `[STORE_LOG]` section: `store_files: memory`, commented-out `db_connection`, `db_schema`, `db_pool_size`, `db_store_log` keys. Follow SMP's `optDisabled'` pattern for commented defaults. + +- [ ] **Step 2: Add `StartOptions` and `--confirm-migrations` flag** + + ```haskell + data StartOptions = StartOptions + { confirmMigrations :: MigrationConfirmation + } + ``` + Add to `Start` command parser with default `MCConsole`. Thread through to `runServer`. + +- [ ] **Step 3: Add store_files INI parsing and CPP-guarded Postgres dispatch** + + In `runServer`: read `store_files` from INI (`fromRight "memory" $ lookupValue "STORE_LOG" "store_files" ini`). 
Add `"database"` branch (CPP-guarded) that constructs `PostgresFileStoreCfg` using `iniDBOptions ini defaultXFTPDBOpts` and `enableDbStoreLog'` pattern. Non-postgres build: `exitError`. + +- [ ] **Step 4: Add `XSCDatabase` branch in `newXFTPServerEnv` (`Env.hs`)** + + CPP-guarded pattern match on `XSCDatabase dbCfg`: `newFileStore dbCfg`, `storeLog = Nothing`. + +- [ ] **Step 5: Add startup config validation** + + Add `checkFileStoreMode` (CPP-guarded) before `run`: validate conflicting storeLog file + database mode, missing schema, etc. per design doc. + +- [ ] **Step 6: Build both ways** + + Run: `cabal build && cabal build -fserver_postgres` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Main.hs src/Simplex/FileTransfer/Server/Env.hs + git add src/Simplex/FileTransfer/Server/Main.hs src/Simplex/FileTransfer/Server/Env.hs + git commit -m "feat(xftp): add PostgreSQL INI config, store dispatch, startup validation" + ``` + +--- + +## Task 9: Add database import/export CLI commands + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Main.hs` + +- [ ] **Step 1: Add `Database` CLI command (CPP-guarded)** + + Add `Database StoreCmd DBOpts` constructor to `CliCommand`. Add `database` subcommand parser with `import`/`export` subcommands + `dbOptsP defaultXFTPDBOpts`. + +- [ ] **Step 2: Implement `importFileStoreToDatabase`** + + 1. `confirmOrExit` with database details. + 2. Create temporary `STMFileStore`, replay StoreLog via `readWriteFileStore`. + 3. Create `PostgresFileStore` with `createSchema = True`, `confirmMigrations = MCYesUp`. + 4. Batch-insert files using PostgreSQL COPY protocol. Progress every 10k. + 5. Batch-insert recipients using COPY protocol. + 6. Verify counts: `SELECT COUNT(*)` — warn on mismatch. + 7. Rename StoreLog to `.bak`. + 8. Report counts. + +- [ ] **Step 3: Implement `exportDatabaseToStoreLog`** + + 1. `confirmOrExit`. Fail if output file exists. + 2. 
Create `PostgresFileStore` from config. + 3. Open StoreLog for writing. + 4. Fold over file records: write `AddFile` (with status), `AddRecipients`, `PutFile` per file. + 5. Close StoreLog, report counts. + +- [ ] **Step 4: Build** + + Run: `cabal build -fserver_postgres` + +- [ ] **Step 5: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Main.hs + git add src/Simplex/FileTransfer/Server/Main.hs + git commit -m "feat(xftp): add database import/export CLI commands" + ``` + +--- + +## Task 10: Add Postgres tests + +**Files:** +- Modify: `tests/XFTPClient.hs` +- Modify: `tests/Test.hs` +- Create: `tests/CoreTests/XFTPStoreTests.hs` + +- [ ] **Step 1: Add test fixtures in `tests/XFTPClient.hs`** + + ```haskell + testXFTPStoreDBOpts :: DBOpts + testXFTPStoreDBOpts = + DBOpts + { connstr = "postgresql://test_xftp_server_user@/test_xftp_server_db", + schema = "xftp_server_test", + poolSize = 10, + createSchema = True + } + ``` + Add `testXFTPDBConnectInfo :: ConnectInfo` matching the connection string. + +- [ ] **Step 2: Add Postgres server test group in `tests/Test.hs`** + + CPP-guarded block that runs existing `xftpServerTests` with Postgres store config, wrapped in `postgressBracket testXFTPDBConnectInfo`. Parameterize `withXFTPServer` to accept store config if needed. + +- [ ] **Step 3: Create `tests/CoreTests/XFTPStoreTests.hs` — unit tests** + + Test `PostgresFileStore` operations directly: + - `addFile` + `getFile SFSender` round-trip. + - `addFile` duplicate → `DUPLICATE_`. + - `getFile` nonexistent → `AUTH`. + - `setFilePath` + verify `WHERE file_path IS NULL` guard. + - `addRecipient` + `getFile SFRecipient` round-trip. + - `deleteFile` cascades recipients. + - `blockFile` + verify status. + - `expiredFiles` batch semantics. + - `getUsedStorage`, `getFileCount` correctness. 
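A couple of the step-3 cases, sketched with hspec — `withTestStore`, `testFileInfo`, and `testRcvKey` are hypothetical fixtures, and the `EntityActive`/`getFileTime` usage follows the plan above rather than verified signatures:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Test.Hspec

-- withTestStore (hypothetical): create a fresh PostgresFileStore against
-- the test database and drop the schema afterwards
xftpStoreTests :: Spec
xftpStoreTests = around withTestStore $ do
  it "addFile rejects a duplicate sender ID" $ \st -> do
    t <- getFileTime
    addFile st "sndr1" testFileInfo t EntityActive `shouldReturn` Right ()
    addFile st "sndr1" testFileInfo t EntityActive `shouldReturn` Left DUPLICATE_
    getFileCount st `shouldReturn` 1
  it "deleteFile cascades to recipients" $ \st -> do
    t <- getFileTime
    Right () <- addFile st "sndr2" testFileInfo t EntityActive
    Right () <- addRecipient st "sndr2" (FileRecipient "rcpt1" testRcvKey)
    deleteFile st "sndr2" `shouldReturn` Right ()
    ackFile st "rcpt1" `shouldReturn` Left AUTH -- recipient row cascaded away
```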
+ +- [ ] **Step 4: Add migration round-trip test** + + Create `STMFileStore` with test data (files + recipients + blocked status) → export to StoreLog → import to Postgres → export back → compare StoreLog files byte-for-byte. + +- [ ] **Step 5: Build and run tests** + + ```bash + cabal build -fserver_postgres test:simplexmq-test + cabal test --test-show-details=streaming --test-option=--match="/XFTP/" -fserver_postgres + ``` + +- [ ] **Step 6: Format and commit** + + ```bash + fourmolu -i tests/CoreTests/XFTPStoreTests.hs tests/XFTPClient.hs + git add tests/CoreTests/XFTPStoreTests.hs tests/XFTPClient.hs tests/Test.hs + git commit -m "test(xftp): add PostgreSQL backend tests" + ``` From d703cfae8724b6d41de07b69ec8c8e07205936d4 Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 12:59:48 +0000 Subject: [PATCH 05/24] refactor: move usedStorage from FileStore to XFTPEnv --- src/Simplex/FileTransfer/Server.hs | 16 ++++++++++------ src/Simplex/FileTransfer/Server/Env.hs | 5 +++-- src/Simplex/FileTransfer/Server/Store.hs | 20 +++++++------------- 3 files changed, 20 insertions(+), 21 deletions(-) diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs index 6e0a9735a..75d16e310 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -549,7 +549,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case | bs == 0 || bs > s -> pure $ FRErr SIZE | otherwise -> drain (s - bs) reserve = do - us <- asks $ usedStorage . store + us <- asks usedStorage quota <- asks $ fromMaybe maxBound . fileSizeQuota . config atomically . stateTVar us $ \used -> let used' = used + fromIntegral size in if used' <= quota then (True, used') else (False, used) @@ -566,7 +566,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case liftIO $ atomicModifyIORef'_ (filesSize stats) (+ fromIntegral size) pure FROk Left e -> do - us <- asks $ usedStorage . 
store + us <- asks usedStorage atomically $ modifyTVar' us $ subtract (fromIntegral size) liftIO $ whenM (doesFileExist fPath) (removeFile fPath) `catch` logFileError pure $ FRErr e @@ -624,6 +624,9 @@ deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExce ExceptT $ first (\(_ :: SomeException) -> FILE_IO) <$> try (forM_ path $ \p -> whenM (doesFileExist p) (removeFile p >> deletedStats stats)) st <- asks store void $ atomically $ storeAction st + forM_ path $ \_ -> do + us <- asks usedStorage + atomically $ modifyTVar' us $ subtract (fromIntegral $ size fileInfo) lift $ incFileStat stat where deletedStats stats = do @@ -636,7 +639,8 @@ getFileTime = getRoundedSystemTime expireServerFiles :: Maybe Int -> ExpirationConfig -> M () expireServerFiles itemDelay expCfg = do st <- asks store - usedStart <- readTVarIO $ usedStorage st + us <- asks usedStorage + usedStart <- readTVarIO us old <- liftIO $ expireBeforeEpoch expCfg files' <- readTVarIO (files st) logNote $ "Expiration check: " <> tshow (M.size files') <> " files" @@ -644,7 +648,7 @@ expireServerFiles itemDelay expCfg = do mapM_ threadDelay itemDelay atomically (expiredFilePath st sId old) >>= mapM_ (maybeRemove $ delete st sId) - usedEnd <- readTVarIO $ usedStorage st + usedEnd <- readTVarIO us logNote $ "Used " <> mbs usedStart <> " -> " <> mbs usedEnd <> ", " <> mbs (usedStart - usedEnd) <> " reclaimed." where mbs bs = tshow (bs `div` 1048576) <> "mb" @@ -691,9 +695,9 @@ restoreServerStats = asks (serverStatsBackupFile . 
config) >>= mapM_ restoreStat liftIO (strDecode <$> B.readFile f) >>= \case Right d@FileServerStatsData {_filesCount = statsFilesCount, _filesSize = statsFilesSize} -> do s <- asks serverStats - FileStore {files, usedStorage} <- asks store + FileStore {files} <- asks store _filesCount <- M.size <$> readTVarIO files - _filesSize <- readTVarIO usedStorage + _filesSize <- readTVarIO =<< asks usedStorage liftIO $ setFileServerStats s d {_filesCount, _filesSize} renameFile f $ f <> ".bak" logNote "server stats restored" diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs index d4c58df66..f38cc5e9d 100644 --- a/src/Simplex/FileTransfer/Server/Env.hs +++ b/src/Simplex/FileTransfer/Server/Env.hs @@ -91,6 +91,7 @@ defaultInactiveClientExpiration = data XFTPEnv = XFTPEnv { config :: XFTPServerConfig, store :: FileStore, + usedStorage :: TVar Int64, storeLog :: Maybe (StoreLog 'WriteMode), random :: TVar ChaChaDRG, serverIdentity :: C.KeyHash, @@ -115,7 +116,7 @@ newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCrede store <- newFileStore storeLog <- mapM (`readWriteFileStore` store) storeLogFile used <- countUsedStorage <$> readTVarIO (files store) - atomically $ writeTVar (usedStorage store) used + usedStorage <- newTVarIO used forM_ fileSizeQuota $ \quota -> do logNote $ "Total / available storage: " <> tshow quota <> " / " <> tshow (quota - used) when (quota < used) $ logWarn "WARNING: storage quota is less than used storage, no files can be uploaded!" 
@@ -123,7 +124,7 @@ newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCrede httpServerCreds <- mapM loadServerCredential httpCredentials Fingerprint fp <- loadFingerprint xftpCredentials serverStats <- newFileServerStats =<< getCurrentTime - pure XFTPEnv {config, store, storeLog, random, tlsServerCreds, httpServerCreds, serverIdentity = C.KeyHash fp, serverStats} + pure XFTPEnv {config, store, usedStorage, storeLog, random, tlsServerCreds, httpServerCreds, serverIdentity = C.KeyHash fp, serverStats} countUsedStorage :: M.Map k FileRec -> Int64 countUsedStorage = M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 diff --git a/src/Simplex/FileTransfer/Server/Store.hs b/src/Simplex/FileTransfer/Server/Store.hs index eec481a21..e3860eae6 100644 --- a/src/Simplex/FileTransfer/Server/Store.hs +++ b/src/Simplex/FileTransfer/Server/Store.hs @@ -25,7 +25,6 @@ module Simplex.FileTransfer.Server.Store where import Control.Concurrent.STM -import Control.Monad import qualified Data.Attoparsec.ByteString.Char8 as A import Data.Int (Int64) import Data.Set (Set) @@ -43,8 +42,7 @@ import Simplex.Messaging.Util (ifM, ($>>=)) data FileStore = FileStore { files :: TMap SenderId FileRec, - recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey), - usedStorage :: TVar Int64 + recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey) } data FileRec = FileRec @@ -72,8 +70,7 @@ newFileStore :: IO FileStore newFileStore = do files <- TM.emptyIO recipients <- TM.emptyIO - usedStorage <- newTVarIO 0 - pure FileStore {files, recipients, usedStorage} + pure FileStore {files, recipients} addFile :: FileStore -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> STM (Either XFTPErrorType ()) addFile FileStore {files} sId fileInfo createdAt status = @@ -91,9 +88,8 @@ newFileRec senderId fileInfo createdAt status = do setFilePath :: FileStore -> SenderId -> FilePath -> STM (Either XFTPErrorType ()) setFilePath st sId fPath 
= - withFile st sId $ \FileRec {fileInfo, filePath} -> do + withFile st sId $ \FileRec {filePath} -> do writeTVar filePath (Just fPath) - modifyTVar' (usedStorage st) (+ fromIntegral (size fileInfo)) pure $ Right () addRecipient :: FileStore -> SenderId -> FileRecipient -> STM (Either XFTPErrorType ()) @@ -110,19 +106,17 @@ addRecipient st@FileStore {recipients} senderId (FileRecipient rId rKey) = -- this function must be called after the file is deleted from the file system deleteFile :: FileStore -> SenderId -> STM (Either XFTPErrorType ()) -deleteFile FileStore {files, recipients, usedStorage} senderId = do +deleteFile FileStore {files, recipients} senderId = do TM.lookupDelete senderId files >>= \case - Just FileRec {fileInfo, recipientIds} -> do + Just FileRec {recipientIds} -> do readTVar recipientIds >>= mapM_ (`TM.delete` recipients) - modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo) pure $ Right () _ -> pure $ Left AUTH -- this function must be called after the file is deleted from the file system blockFile :: FileStore -> SenderId -> BlockingInfo -> Bool -> STM (Either XFTPErrorType ()) -blockFile st@FileStore {usedStorage} senderId info deleted = - withFile st senderId $ \FileRec {fileInfo, fileStatus} -> do - when deleted $ modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo) +blockFile st senderId info _deleted = + withFile st senderId $ \FileRec {fileStatus} -> do writeTVar fileStatus $! 
EntityBlocked info pure $ Right () From 8e449b84767a7a2139118ce4a8ea3f096297428a Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 13:07:53 +0000 Subject: [PATCH 06/24] refactor: add getUsedStorage, getFileCount, expiredFiles store functions --- src/Simplex/FileTransfer/Server.hs | 36 +++++++++++------------- src/Simplex/FileTransfer/Server/Env.hs | 7 +---- src/Simplex/FileTransfer/Server/Store.hs | 35 ++++++++++++++++------- 3 files changed, 43 insertions(+), 35 deletions(-) diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs index 75d16e310..e94d26df4 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -31,7 +31,6 @@ import qualified Data.ByteString.Char8 as B import Data.Int (Int64) import Data.List.NonEmpty (NonEmpty) import qualified Data.List.NonEmpty as L -import qualified Data.Map.Strict as M import Data.Maybe (fromMaybe, isJust) import qualified Data.Text as T import qualified Data.Text.IO as T @@ -642,26 +641,25 @@ expireServerFiles itemDelay expCfg = do us <- asks usedStorage usedStart <- readTVarIO us old <- liftIO $ expireBeforeEpoch expCfg - files' <- readTVarIO (files st) - logNote $ "Expiration check: " <> tshow (M.size files') <> " files" - forM_ (M.keys files') $ \sId -> do - mapM_ threadDelay itemDelay - atomically (expiredFilePath st sId old) - >>= mapM_ (maybeRemove $ delete st sId) + filesCount <- liftIO $ getFileCount st + logNote $ "Expiration check: " <> tshow filesCount <> " files" + expireLoop st us old usedEnd <- readTVarIO us logNote $ "Used " <> mbs usedStart <> " -> " <> mbs usedEnd <> ", " <> mbs (usedStart - usedEnd) <> " reclaimed." 
where mbs bs = tshow (bs `div` 1048576) <> "mb" - maybeRemove del = maybe del (remove del) - remove del filePath = - ifM - (doesFileExist filePath) - ((removeFile filePath >> del) `catch` \(e :: SomeException) -> logError $ "failed to remove expired file " <> tshow filePath <> ": " <> tshow e) - del - delete st sId = do - withFileLog (`logDeleteFile` sId) - void . atomically $ deleteFile st sId -- will not update usedStorage if sId isn't in store - incFileStat filesExpired + expireLoop st us old = do + expired <- liftIO $ expiredFiles st old 10000 + forM_ expired $ \(sId, filePath_, fileSize) -> do + mapM_ threadDelay itemDelay + forM_ filePath_ $ \fp -> + whenM (doesFileExist fp) $ + removeFile fp `catch` \(e :: SomeException) -> logError $ "failed to remove expired file " <> tshow fp <> ": " <> tshow e + withFileLog (`logDeleteFile` sId) + void . atomically $ deleteFile st sId + atomically $ modifyTVar' us $ subtract (fromIntegral fileSize) + incFileStat filesExpired + unless (null expired) $ expireLoop st us old randomId :: Int -> M ByteString randomId n = atomically . C.randomBytes n =<< asks random @@ -695,8 +693,8 @@ restoreServerStats = asks (serverStatsBackupFile . 
config) >>= mapM_ restoreStat liftIO (strDecode <$> B.readFile f) >>= \case Right d@FileServerStatsData {_filesCount = statsFilesCount, _filesSize = statsFilesSize} -> do s <- asks serverStats - FileStore {files} <- asks store - _filesCount <- M.size <$> readTVarIO files + st <- asks store + _filesCount <- liftIO $ getFileCount st _filesSize <- readTVarIO =<< asks usedStorage liftIO $ setFileServerStats s d {_filesCount, _filesSize} renameFile f $ f <> ".bak" diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs index f38cc5e9d..dfa3da105 100644 --- a/src/Simplex/FileTransfer/Server/Env.hs +++ b/src/Simplex/FileTransfer/Server/Env.hs @@ -15,7 +15,6 @@ module Simplex.FileTransfer.Server.Env defFileExpirationHours, defaultFileExpiration, newXFTPServerEnv, - countUsedStorage, ) where import Control.Logger.Simple @@ -23,7 +22,6 @@ import Control.Monad import Crypto.Random import Data.Int (Int64) import Data.List.NonEmpty (NonEmpty) -import qualified Data.Map.Strict as M import Data.Time.Clock (getCurrentTime) import Data.Word (Word32) import Data.X509.Validation (Fingerprint (..)) @@ -115,7 +113,7 @@ newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCrede random <- C.newRandom store <- newFileStore storeLog <- mapM (`readWriteFileStore` store) storeLogFile - used <- countUsedStorage <$> readTVarIO (files store) + used <- getUsedStorage store usedStorage <- newTVarIO used forM_ fileSizeQuota $ \quota -> do logNote $ "Total / available storage: " <> tshow quota <> " / " <> tshow (quota - used) @@ -126,9 +124,6 @@ newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCrede serverStats <- newFileServerStats =<< getCurrentTime pure XFTPEnv {config, store, usedStorage, storeLog, random, tlsServerCreds, httpServerCreds, serverIdentity = C.KeyHash fp, serverStats} -countUsedStorage :: M.Map k FileRec -> Int64 -countUsedStorage = M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + 
fromIntegral size) 0 - data XFTPRequest = XFTPReqNew FileInfo (NonEmpty RcvPublicAuthKey) (Maybe BasicAuth) | XFTPReqCmd XFTPFileId FileRec FileCmd diff --git a/src/Simplex/FileTransfer/Server/Store.hs b/src/Simplex/FileTransfer/Server/Store.hs index e3860eae6..0a3de4b10 100644 --- a/src/Simplex/FileTransfer/Server/Store.hs +++ b/src/Simplex/FileTransfer/Server/Store.hs @@ -17,18 +17,24 @@ module Simplex.FileTransfer.Server.Store deleteFile, blockFile, deleteRecipient, - expiredFilePath, getFile, ackFile, + expiredFiles, + getUsedStorage, + getFileCount, fileTimePrecision, ) where import Control.Concurrent.STM +import Control.Monad (forM) import qualified Data.Attoparsec.ByteString.Char8 as A import Data.Int (Int64) +import qualified Data.Map.Strict as M +import Data.Maybe (catMaybes) import Data.Set (Set) import qualified Data.Set as S +import Data.Word (Word32) import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..), XFTPFileId) import Simplex.FileTransfer.Transport (XFTPErrorType (..)) import qualified Simplex.Messaging.Crypto as C @@ -38,7 +44,7 @@ import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..)) import Simplex.Messaging.SystemTime import Simplex.Messaging.TMap (TMap) import qualified Simplex.Messaging.TMap as TM -import Simplex.Messaging.Util (ifM, ($>>=)) +import Simplex.Messaging.Util (ifM) data FileStore = FileStore { files :: TMap SenderId FileRec, @@ -133,14 +139,6 @@ getFile st party fId = case party of Just (sId, rKey) -> withFile st sId $ pure . Right . 
(,rKey) _ -> pure $ Left AUTH -expiredFilePath :: FileStore -> XFTPFileId -> Int64 -> STM (Maybe (Maybe FilePath)) -expiredFilePath FileStore {files} sId old = - TM.lookup sId files - $>>= \FileRec {filePath, createdAt = RoundedSystemTime createdAt} -> - if createdAt + fileTimePrecision < old - then Just <$> readTVar filePath - else pure Nothing - ackFile :: FileStore -> RecipientId -> STM (Either XFTPErrorType ()) ackFile st@FileStore {recipients} recipientId = do TM.lookupDelete recipientId recipients >>= \case @@ -150,6 +148,23 @@ ackFile st@FileStore {recipients} recipientId = do pure $ Right () _ -> pure $ Left AUTH +expiredFiles :: FileStore -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] +expiredFiles FileStore {files} old _limit = do + fs <- readTVarIO files + fmap catMaybes . forM (M.toList fs) $ \(sId, FileRec {fileInfo = FileInfo {size}, filePath, createdAt = RoundedSystemTime createdAt}) -> + if createdAt + fileTimePrecision < old + then do + path <- readTVarIO filePath + pure $ Just (sId, path, size) + else pure Nothing + +getUsedStorage :: FileStore -> IO Int64 +getUsedStorage FileStore {files} = + M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 <$> readTVarIO files + +getFileCount :: FileStore -> IO Int +getFileCount FileStore {files} = M.size <$> readTVarIO files + withFile :: FileStore -> SenderId -> (FileRec -> STM (Either XFTPErrorType a)) -> STM (Either XFTPErrorType a) withFile FileStore {files} sId a = TM.lookup sId files >>= \case From b0da98273b810a6bf424021beadb8083b7627c95 Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 13:14:31 +0000 Subject: [PATCH 07/24] refactor: change file store operations from STM to IO --- src/Simplex/FileTransfer/Server.hs | 36 ++++++++++----------- src/Simplex/FileTransfer/Server/Store.hs | 32 +++++++++--------- src/Simplex/FileTransfer/Server/StoreLog.hs | 2 +- 3 files changed, 35 insertions(+), 35 deletions(-) diff --git a/src/Simplex/FileTransfer/Server.hs 
b/src/Simplex/FileTransfer/Server.hs index e94d26df4..711e5e082 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -367,13 +367,13 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira CPDelete fileId -> withUserRole $ unliftIO u $ do fs <- asks store r <- runExceptT $ do - (fr, _) <- ExceptT $ atomically $ getFile fs SFRecipient fileId + (fr, _) <- ExceptT $ liftIO $ getFile fs SFRecipient fileId ExceptT $ deleteServerFile_ fr liftIO . hPutStrLn h $ either (\e -> "error: " <> show e) (\() -> "ok") r CPBlock fileId info -> withUserRole $ unliftIO u $ do fs <- asks store r <- runExceptT $ do - (fr, _) <- ExceptT $ atomically $ getFile fs SFRecipient fileId + (fr, _) <- ExceptT $ liftIO $ getFile fs SFRecipient fileId ExceptT $ blockServerFile fr info liftIO . hPutStrLn h $ either (\e -> "error: " <> show e) (\() -> "ok") r CPHelp -> hPutStrLn h "commands: stats-rts, delete, help, quit" @@ -449,16 +449,15 @@ verifyXFTPTransmission thAuth (tAuth, authorized, (corrId, fId, cmd)) = verifyCmd :: SFileParty p -> M VerificationResult verifyCmd party = do st <- asks store - atomically $ verify =<< getFile st party fId + liftIO (getFile st party fId) >>= \case + Right (fr, k) -> do + status <- readTVarIO (fileStatus fr) + pure $ case status of + EntityActive -> XFTPReqCmd fId fr cmd `verifyWith` k + EntityBlocked info -> VRFailed $ BLOCKED info + EntityOff -> noFileAuth + Left _ -> pure noFileAuth where - verify = \case - Right (fr, k) -> result <$> readTVar (fileStatus fr) - where - result = \case - EntityActive -> XFTPReqCmd fId fr cmd `verifyWith` k - EntityBlocked info -> VRFailed $ BLOCKED info - EntityOff -> noFileAuth - Left _ -> pure noFileAuth noFileAuth = dummyVerifyCmd thAuth tAuth authorized corrId `seq` VRFailed AUTH -- TODO verify with DH authorization req `verifyWith` k = if verifyCmdAuthorization thAuth tAuth authorized corrId k then VRVerified req else VRFailed AUTH @@ -512,11 +511,11 @@ 
processXFTPRequest HTTP2Body {bodyPart} = \case let rcp = FileRecipient rId rpk ExceptT $ addRecipient st sId rcp pure rcp - retryAdd :: Int -> (XFTPFileId -> STM (Either XFTPErrorType a)) -> M (Either XFTPErrorType a) + retryAdd :: Int -> (XFTPFileId -> IO (Either XFTPErrorType a)) -> M (Either XFTPErrorType a) retryAdd 0 _ = pure $ Left INTERNAL retryAdd n add = do fId <- getFileId - atomically (add fId) >>= \case + liftIO (add fId) >>= \case Left DUPLICATE_ -> retryAdd (n - 1) add r -> pure r addRecipients :: XFTPFileId -> NonEmpty RcvPublicAuthKey -> M FileResponse @@ -558,8 +557,9 @@ processXFTPRequest HTTP2Body {bodyPart} = \case receiveChunk (XFTPRcvChunkSpec fPath size digest) >>= \case Right () -> do stats <- asks serverStats + st <- asks store withFileLog $ \sl -> logPutFile sl senderId fPath - atomically $ writeTVar filePath (Just fPath) + void $ liftIO $ setFilePath st senderId fPath incFileStat filesUploaded incFileStat filesCount liftIO $ atomicModifyIORef'_ (filesSize stats) (+ fromIntegral size) @@ -601,7 +601,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case ackFileReception rId fr = do withFileLog (`logAckFile` rId) st <- asks store - atomically $ deleteRecipient st rId fr + liftIO $ deleteRecipient st rId fr incFileStat fileDownloadAcks pure FROk @@ -616,13 +616,13 @@ blockServerFile fr@FileRec {senderId} info = do withFileLog $ \sl -> logBlockFile sl senderId info deleteOrBlockServerFile_ fr filesBlocked $ \st -> blockFile st senderId info True -deleteOrBlockServerFile_ :: FileRec -> (FileServerStats -> IORef Int) -> (FileStore -> STM (Either XFTPErrorType ())) -> M (Either XFTPErrorType ()) +deleteOrBlockServerFile_ :: FileRec -> (FileServerStats -> IORef Int) -> (FileStore -> IO (Either XFTPErrorType ())) -> M (Either XFTPErrorType ()) deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExceptT $ do path <- readTVarIO filePath stats <- asks serverStats ExceptT $ first (\(_ :: SomeException) -> FILE_IO) <$> try (forM_ 
path $ \p -> whenM (doesFileExist p) (removeFile p >> deletedStats stats)) st <- asks store - void $ atomically $ storeAction st + void $ liftIO $ storeAction st forM_ path $ \_ -> do us <- asks usedStorage atomically $ modifyTVar' us $ subtract (fromIntegral $ size fileInfo) @@ -656,7 +656,7 @@ expireServerFiles itemDelay expCfg = do whenM (doesFileExist fp) $ removeFile fp `catch` \(e :: SomeException) -> logError $ "failed to remove expired file " <> tshow fp <> ": " <> tshow e withFileLog (`logDeleteFile` sId) - void . atomically $ deleteFile st sId + void $ liftIO $ deleteFile st sId atomically $ modifyTVar' us $ subtract (fromIntegral fileSize) incFileStat filesExpired unless (null expired) $ expireLoop st us old diff --git a/src/Simplex/FileTransfer/Server/Store.hs b/src/Simplex/FileTransfer/Server/Store.hs index 0a3de4b10..2ea460761 100644 --- a/src/Simplex/FileTransfer/Server/Store.hs +++ b/src/Simplex/FileTransfer/Server/Store.hs @@ -78,8 +78,8 @@ newFileStore = do recipients <- TM.emptyIO pure FileStore {files, recipients} -addFile :: FileStore -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> STM (Either XFTPErrorType ()) -addFile FileStore {files} sId fileInfo createdAt status = +addFile :: FileStore -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> IO (Either XFTPErrorType ()) +addFile FileStore {files} sId fileInfo createdAt status = atomically $ ifM (TM.member sId files) (pure $ Left DUPLICATE_) $ do f <- newFileRec sId fileInfo createdAt status TM.insert sId f files @@ -92,14 +92,14 @@ newFileRec senderId fileInfo createdAt status = do fileStatus <- newTVar status pure FileRec {senderId, fileInfo, filePath, recipientIds, createdAt, fileStatus} -setFilePath :: FileStore -> SenderId -> FilePath -> STM (Either XFTPErrorType ()) -setFilePath st sId fPath = +setFilePath :: FileStore -> SenderId -> FilePath -> IO (Either XFTPErrorType ()) +setFilePath st sId fPath = atomically $ withFile st sId $ \FileRec {filePath} -> 
do writeTVar filePath (Just fPath) pure $ Right () -addRecipient :: FileStore -> SenderId -> FileRecipient -> STM (Either XFTPErrorType ()) -addRecipient st@FileStore {recipients} senderId (FileRecipient rId rKey) = +addRecipient :: FileStore -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ()) +addRecipient st@FileStore {recipients} senderId (FileRecipient rId rKey) = atomically $ withFile st senderId $ \FileRec {recipientIds} -> do rIds <- readTVar recipientIds mem <- TM.member rId recipients @@ -111,8 +111,8 @@ addRecipient st@FileStore {recipients} senderId (FileRecipient rId rKey) = pure $ Right () -- this function must be called after the file is deleted from the file system -deleteFile :: FileStore -> SenderId -> STM (Either XFTPErrorType ()) -deleteFile FileStore {files, recipients} senderId = do +deleteFile :: FileStore -> SenderId -> IO (Either XFTPErrorType ()) +deleteFile FileStore {files, recipients} senderId = atomically $ do TM.lookupDelete senderId files >>= \case Just FileRec {recipientIds} -> do readTVar recipientIds >>= mapM_ (`TM.delete` recipients) @@ -120,27 +120,27 @@ deleteFile FileStore {files, recipients} senderId = do _ -> pure $ Left AUTH -- this function must be called after the file is deleted from the file system -blockFile :: FileStore -> SenderId -> BlockingInfo -> Bool -> STM (Either XFTPErrorType ()) -blockFile st senderId info _deleted = +blockFile :: FileStore -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ()) +blockFile st senderId info _deleted = atomically $ withFile st senderId $ \FileRec {fileStatus} -> do writeTVar fileStatus $! 
EntityBlocked info pure $ Right () -deleteRecipient :: FileStore -> RecipientId -> FileRec -> STM () -deleteRecipient FileStore {recipients} rId FileRec {recipientIds} = do +deleteRecipient :: FileStore -> RecipientId -> FileRec -> IO () +deleteRecipient FileStore {recipients} rId FileRec {recipientIds} = atomically $ do TM.delete rId recipients modifyTVar' recipientIds $ S.delete rId -getFile :: FileStore -> SFileParty p -> XFTPFileId -> STM (Either XFTPErrorType (FileRec, C.APublicAuthKey)) -getFile st party fId = case party of +getFile :: FileStore -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, C.APublicAuthKey)) +getFile st party fId = atomically $ case party of SFSender -> withFile st fId $ pure . Right . (\f -> (f, sndKey $ fileInfo f)) SFRecipient -> TM.lookup fId (recipients st) >>= \case Just (sId, rKey) -> withFile st sId $ pure . Right . (,rKey) _ -> pure $ Left AUTH -ackFile :: FileStore -> RecipientId -> STM (Either XFTPErrorType ()) -ackFile st@FileStore {recipients} recipientId = do +ackFile :: FileStore -> RecipientId -> IO (Either XFTPErrorType ()) +ackFile st@FileStore {recipients} recipientId = atomically $ do TM.lookupDelete recipientId recipients >>= \case Just (sId, _) -> withFile st sId $ \FileRec {recipientIds} -> do diff --git a/src/Simplex/FileTransfer/Server/StoreLog.hs b/src/Simplex/FileTransfer/Server/StoreLog.hs index c82beda29..8175aca73 100644 --- a/src/Simplex/FileTransfer/Server/StoreLog.hs +++ b/src/Simplex/FileTransfer/Server/StoreLog.hs @@ -96,7 +96,7 @@ readFileStore f st = mapM_ (addFileLogRecord . LB.toStrict) . 
LB.lines =<< LB.re addFileLogRecord s = case strDecode s of Left e -> B.putStrLn $ "Log parsing error (" <> B.pack e <> "): " <> B.take 100 s Right lr -> - atomically (addToStore lr) >>= \case + addToStore lr >>= \case Left e -> B.putStrLn $ "Log processing error (" <> bshow e <> "): " <> B.take 100 s _ -> pure () addToStore = \case From 6f4bf647ede4cf4db039adf8679da596e1d4318d Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 13:22:14 +0000 Subject: [PATCH 08/24] refactor: extract FileStoreClass typeclass, move STM impl to Store.STM --- simplexmq.cabal | 1 + src/Simplex/FileTransfer/Server.hs | 7 +- src/Simplex/FileTransfer/Server/Env.hs | 5 +- src/Simplex/FileTransfer/Server/Store.hs | 160 ++++--------------- src/Simplex/FileTransfer/Server/Store/STM.hs | 127 +++++++++++++++ src/Simplex/FileTransfer/Server/StoreLog.hs | 9 +- 6 files changed, 170 insertions(+), 139 deletions(-) create mode 100644 src/Simplex/FileTransfer/Server/Store/STM.hs diff --git a/simplexmq.cabal b/simplexmq.cabal index 3ad23df09..329187a81 100644 --- a/simplexmq.cabal +++ b/simplexmq.cabal @@ -241,6 +241,7 @@ library Simplex.FileTransfer.Server.Prometheus Simplex.FileTransfer.Server.Stats Simplex.FileTransfer.Server.Store + Simplex.FileTransfer.Server.Store.STM Simplex.FileTransfer.Server.StoreLog Simplex.Messaging.Server Simplex.Messaging.Server.CLI diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs index 711e5e082..6bbf7b85f 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -51,6 +51,7 @@ import Simplex.FileTransfer.Server.Env import Simplex.FileTransfer.Server.Prometheus import Simplex.FileTransfer.Server.Stats import Simplex.FileTransfer.Server.Store +import Simplex.FileTransfer.Server.Store.STM (STMFileStore) import Simplex.FileTransfer.Server.StoreLog import Simplex.FileTransfer.Transport import qualified Simplex.Messaging.Crypto as C @@ -500,12 +501,12 @@ processXFTPRequest HTTP2Body {bodyPart} = 
\case let rIds = L.map (\(FileRecipient rId _) -> rId) rcps pure $ FRSndIds sId rIds pure $ either FRErr id r - addFileRetry :: FileStore -> FileInfo -> Int -> RoundedFileTime -> M (Either XFTPErrorType XFTPFileId) + addFileRetry :: STMFileStore -> FileInfo -> Int -> RoundedFileTime -> M (Either XFTPErrorType XFTPFileId) addFileRetry st file n ts = retryAdd n $ \sId -> runExceptT $ do ExceptT $ addFile st sId file ts EntityActive pure sId - addRecipientRetry :: FileStore -> Int -> XFTPFileId -> RcvPublicAuthKey -> M (Either XFTPErrorType FileRecipient) + addRecipientRetry :: STMFileStore -> Int -> XFTPFileId -> RcvPublicAuthKey -> M (Either XFTPErrorType FileRecipient) addRecipientRetry st n sId rpk = retryAdd n $ \rId -> runExceptT $ do let rcp = FileRecipient rId rpk @@ -616,7 +617,7 @@ blockServerFile fr@FileRec {senderId} info = do withFileLog $ \sl -> logBlockFile sl senderId info deleteOrBlockServerFile_ fr filesBlocked $ \st -> blockFile st senderId info True -deleteOrBlockServerFile_ :: FileRec -> (FileServerStats -> IORef Int) -> (FileStore -> IO (Either XFTPErrorType ())) -> M (Either XFTPErrorType ()) +deleteOrBlockServerFile_ :: FileRec -> (FileServerStats -> IORef Int) -> (STMFileStore -> IO (Either XFTPErrorType ())) -> M (Either XFTPErrorType ()) deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExceptT $ do path <- readTVarIO filePath stats <- asks serverStats diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs index dfa3da105..ce1ebecbf 100644 --- a/src/Simplex/FileTransfer/Server/Env.hs +++ b/src/Simplex/FileTransfer/Server/Env.hs @@ -30,6 +30,7 @@ import qualified Network.TLS as T import Simplex.FileTransfer.Protocol (FileCmd, FileInfo (..), XFTPFileId) import Simplex.FileTransfer.Server.Stats import Simplex.FileTransfer.Server.Store +import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..)) import Simplex.FileTransfer.Server.StoreLog import Simplex.FileTransfer.Transport 
(VersionRangeXFTP) import qualified Simplex.Messaging.Crypto as C @@ -88,7 +89,7 @@ defaultInactiveClientExpiration = data XFTPEnv = XFTPEnv { config :: XFTPServerConfig, - store :: FileStore, + store :: STMFileStore, usedStorage :: TVar Int64, storeLog :: Maybe (StoreLog 'WriteMode), random :: TVar ChaChaDRG, @@ -111,7 +112,7 @@ defaultFileExpiration = newXFTPServerEnv :: XFTPServerConfig -> IO XFTPEnv newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCredentials, httpCredentials} = do random <- C.newRandom - store <- newFileStore + store <- newFileStore () storeLog <- mapM (`readWriteFileStore` store) storeLogFile used <- getUsedStorage store usedStorage <- newTVarIO used diff --git a/src/Simplex/FileTransfer/Server/Store.hs b/src/Simplex/FileTransfer/Server/Store.hs index 2ea460761..a3a4d5795 100644 --- a/src/Simplex/FileTransfer/Server/Store.hs +++ b/src/Simplex/FileTransfer/Server/Store.hs @@ -1,55 +1,30 @@ {-# LANGUAGE DataKinds #-} {-# LANGUAGE GADTs #-} -{-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} -{-# LANGUAGE TupleSections #-} +{-# LANGUAGE TypeFamilies #-} module Simplex.FileTransfer.Server.Store - ( FileStore (..), + ( FileStoreClass (..), FileRec (..), FileRecipient (..), RoundedFileTime, - newFileStore, - addFile, - setFilePath, - addRecipient, - deleteFile, - blockFile, - deleteRecipient, - getFile, - ackFile, - expiredFiles, - getUsedStorage, - getFileCount, fileTimePrecision, ) where import Control.Concurrent.STM -import Control.Monad (forM) import qualified Data.Attoparsec.ByteString.Char8 as A import Data.Int (Int64) -import qualified Data.Map.Strict as M -import Data.Maybe (catMaybes) import Data.Set (Set) -import qualified Data.Set as S import Data.Word (Word32) -import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..), XFTPFileId) -import Simplex.FileTransfer.Transport (XFTPErrorType (..)) +import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty, 
XFTPFileId) +import Simplex.FileTransfer.Transport (XFTPErrorType) import qualified Simplex.Messaging.Crypto as C import Simplex.Messaging.Encoding.String -import Simplex.Messaging.Protocol (BlockingInfo, RcvPublicAuthKey, RecipientId, SenderId) -import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..)) +import Simplex.Messaging.Protocol (BlockingInfo, RecipientId, SenderId) +import Simplex.Messaging.Server.QueueStore (ServerEntityStatus) import Simplex.Messaging.SystemTime -import Simplex.Messaging.TMap (TMap) -import qualified Simplex.Messaging.TMap as TM -import Simplex.Messaging.Util (ifM) - -data FileStore = FileStore - { files :: TMap SenderId FileRec, - recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey) - } data FileRec = FileRec { senderId :: SenderId, @@ -65,108 +40,33 @@ type RoundedFileTime = RoundedSystemTime 3600 fileTimePrecision :: Int64 fileTimePrecision = 3600 -- truncate creation time to 1 hour -data FileRecipient = FileRecipient RecipientId RcvPublicAuthKey +data FileRecipient = FileRecipient RecipientId C.APublicAuthKey deriving (Show) instance StrEncoding FileRecipient where strEncode (FileRecipient rId rKey) = strEncode rId <> ":" <> strEncode rKey strP = FileRecipient <$> strP <* A.char ':' <*> strP -newFileStore :: IO FileStore -newFileStore = do - files <- TM.emptyIO - recipients <- TM.emptyIO - pure FileStore {files, recipients} - -addFile :: FileStore -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> IO (Either XFTPErrorType ()) -addFile FileStore {files} sId fileInfo createdAt status = atomically $ - ifM (TM.member sId files) (pure $ Left DUPLICATE_) $ do - f <- newFileRec sId fileInfo createdAt status - TM.insert sId f files - pure $ Right () - -newFileRec :: SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> STM FileRec -newFileRec senderId fileInfo createdAt status = do - recipientIds <- newTVar S.empty - filePath <- newTVar Nothing - fileStatus <- newTVar status - pure FileRec 
{senderId, fileInfo, filePath, recipientIds, createdAt, fileStatus} - -setFilePath :: FileStore -> SenderId -> FilePath -> IO (Either XFTPErrorType ()) -setFilePath st sId fPath = atomically $ - withFile st sId $ \FileRec {filePath} -> do - writeTVar filePath (Just fPath) - pure $ Right () - -addRecipient :: FileStore -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ()) -addRecipient st@FileStore {recipients} senderId (FileRecipient rId rKey) = atomically $ - withFile st senderId $ \FileRec {recipientIds} -> do - rIds <- readTVar recipientIds - mem <- TM.member rId recipients - if rId `S.member` rIds || mem - then pure $ Left DUPLICATE_ - else do - writeTVar recipientIds $! S.insert rId rIds - TM.insert rId (senderId, rKey) recipients - pure $ Right () - --- this function must be called after the file is deleted from the file system -deleteFile :: FileStore -> SenderId -> IO (Either XFTPErrorType ()) -deleteFile FileStore {files, recipients} senderId = atomically $ do - TM.lookupDelete senderId files >>= \case - Just FileRec {recipientIds} -> do - readTVar recipientIds >>= mapM_ (`TM.delete` recipients) - pure $ Right () - _ -> pure $ Left AUTH - --- this function must be called after the file is deleted from the file system -blockFile :: FileStore -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ()) -blockFile st senderId info _deleted = atomically $ - withFile st senderId $ \FileRec {fileStatus} -> do - writeTVar fileStatus $! EntityBlocked info - pure $ Right () - -deleteRecipient :: FileStore -> RecipientId -> FileRec -> IO () -deleteRecipient FileStore {recipients} rId FileRec {recipientIds} = atomically $ do - TM.delete rId recipients - modifyTVar' recipientIds $ S.delete rId - -getFile :: FileStore -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, C.APublicAuthKey)) -getFile st party fId = atomically $ case party of - SFSender -> withFile st fId $ pure . Right . 
(\f -> (f, sndKey $ fileInfo f)) - SFRecipient -> - TM.lookup fId (recipients st) >>= \case - Just (sId, rKey) -> withFile st sId $ pure . Right . (,rKey) - _ -> pure $ Left AUTH - -ackFile :: FileStore -> RecipientId -> IO (Either XFTPErrorType ()) -ackFile st@FileStore {recipients} recipientId = atomically $ do - TM.lookupDelete recipientId recipients >>= \case - Just (sId, _) -> - withFile st sId $ \FileRec {recipientIds} -> do - modifyTVar' recipientIds $ S.delete recipientId - pure $ Right () - _ -> pure $ Left AUTH - -expiredFiles :: FileStore -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] -expiredFiles FileStore {files} old _limit = do - fs <- readTVarIO files - fmap catMaybes . forM (M.toList fs) $ \(sId, FileRec {fileInfo = FileInfo {size}, filePath, createdAt = RoundedSystemTime createdAt}) -> - if createdAt + fileTimePrecision < old - then do - path <- readTVarIO filePath - pure $ Just (sId, path, size) - else pure Nothing - -getUsedStorage :: FileStore -> IO Int64 -getUsedStorage FileStore {files} = - M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 <$> readTVarIO files - -getFileCount :: FileStore -> IO Int -getFileCount FileStore {files} = M.size <$> readTVarIO files - -withFile :: FileStore -> SenderId -> (FileRec -> STM (Either XFTPErrorType a)) -> STM (Either XFTPErrorType a) -withFile FileStore {files} sId a = - TM.lookup sId files >>= \case - Just f -> a f - _ -> pure $ Left AUTH +class FileStoreClass s where + type FileStoreConfig s + + -- Lifecycle + newFileStore :: FileStoreConfig s -> IO s + closeFileStore :: s -> IO () + + -- File operations + addFile :: s -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> IO (Either XFTPErrorType ()) + setFilePath :: s -> SenderId -> FilePath -> IO (Either XFTPErrorType ()) + addRecipient :: s -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ()) + getFile :: s -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, 
C.APublicAuthKey)) + deleteFile :: s -> SenderId -> IO (Either XFTPErrorType ()) + blockFile :: s -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ()) + deleteRecipient :: s -> RecipientId -> FileRec -> IO () + ackFile :: s -> RecipientId -> IO (Either XFTPErrorType ()) + + -- Expiration (with LIMIT for Postgres; called in a loop until empty) + expiredFiles :: s -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] + + -- Storage and stats (for init-time computation) + getUsedStorage :: s -> IO Int64 + getFileCount :: s -> IO Int diff --git a/src/Simplex/FileTransfer/Server/Store/STM.hs b/src/Simplex/FileTransfer/Server/Store/STM.hs new file mode 100644 index 000000000..7859d06aa --- /dev/null +++ b/src/Simplex/FileTransfer/Server/Store/STM.hs @@ -0,0 +1,127 @@ +{-# LANGUAGE DataKinds #-} +{-# LANGUAGE GADTs #-} +{-# LANGUAGE LambdaCase #-} +{-# LANGUAGE NamedFieldPuns #-} +{-# LANGUAGE TupleSections #-} +{-# LANGUAGE TypeFamilies #-} + +module Simplex.FileTransfer.Server.Store.STM + ( STMFileStore (..), + ) +where + +import Control.Concurrent.STM +import Control.Monad (forM) +import Data.Int (Int64) +import qualified Data.Map.Strict as M +import Data.Maybe (catMaybes) +import Data.Set (Set) +import qualified Data.Set as S +import Data.Word (Word32) +import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..), XFTPFileId) +import Simplex.FileTransfer.Server.Store +import Simplex.FileTransfer.Transport (XFTPErrorType (..)) +import qualified Simplex.Messaging.Crypto as C +import Simplex.Messaging.Protocol (BlockingInfo, RcvPublicAuthKey, RecipientId, SenderId) +import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..)) +import Simplex.Messaging.SystemTime +import Simplex.Messaging.TMap (TMap) +import qualified Simplex.Messaging.TMap as TM +import Simplex.Messaging.Util (ifM) + +data STMFileStore = STMFileStore + { files :: TMap SenderId FileRec, + recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey) + } + +instance 
FileStoreClass STMFileStore where + type FileStoreConfig STMFileStore = () + + newFileStore () = do + files <- TM.emptyIO + recipients <- TM.emptyIO + pure STMFileStore {files, recipients} + + closeFileStore _ = pure () + + addFile STMFileStore {files} sId fileInfo createdAt status = atomically $ + ifM (TM.member sId files) (pure $ Left DUPLICATE_) $ do + f <- newFileRec sId fileInfo createdAt status + TM.insert sId f files + pure $ Right () + + setFilePath st sId fPath = atomically $ + withSTMFile st sId $ \FileRec {filePath} -> do + writeTVar filePath (Just fPath) + pure $ Right () + + addRecipient st@STMFileStore {recipients} senderId (FileRecipient rId rKey) = atomically $ + withSTMFile st senderId $ \FileRec {recipientIds} -> do + rIds <- readTVar recipientIds + mem <- TM.member rId recipients + if rId `S.member` rIds || mem + then pure $ Left DUPLICATE_ + else do + writeTVar recipientIds $! S.insert rId rIds + TM.insert rId (senderId, rKey) recipients + pure $ Right () + + getFile st party fId = atomically $ case party of + SFSender -> withSTMFile st fId $ pure . Right . (\f -> (f, sndKey $ fileInfo f)) + SFRecipient -> + TM.lookup fId (recipients st) >>= \case + Just (sId, rKey) -> withSTMFile st sId $ pure . Right . (,rKey) + _ -> pure $ Left AUTH + + deleteFile STMFileStore {files, recipients} senderId = atomically $ do + TM.lookupDelete senderId files >>= \case + Just FileRec {recipientIds} -> do + readTVar recipientIds >>= mapM_ (`TM.delete` recipients) + pure $ Right () + _ -> pure $ Left AUTH + + blockFile st senderId info _deleted = atomically $ + withSTMFile st senderId $ \FileRec {fileStatus} -> do + writeTVar fileStatus $! 
EntityBlocked info + pure $ Right () + + deleteRecipient STMFileStore {recipients} rId FileRec {recipientIds} = atomically $ do + TM.delete rId recipients + modifyTVar' recipientIds $ S.delete rId + + ackFile st@STMFileStore {recipients} recipientId = atomically $ do + TM.lookupDelete recipientId recipients >>= \case + Just (sId, _) -> + withSTMFile st sId $ \FileRec {recipientIds} -> do + modifyTVar' recipientIds $ S.delete recipientId + pure $ Right () + _ -> pure $ Left AUTH + + expiredFiles STMFileStore {files} old _limit = do + fs <- readTVarIO files + fmap catMaybes . forM (M.toList fs) $ \(sId, FileRec {fileInfo = FileInfo {size}, filePath, createdAt = RoundedSystemTime createdAt}) -> + if createdAt + fileTimePrecision < old + then do + path <- readTVarIO filePath + pure $ Just (sId, path, size) + else pure Nothing + + getUsedStorage STMFileStore {files} = + M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 <$> readTVarIO files + + getFileCount STMFileStore {files} = M.size <$> readTVarIO files + +-- Internal STM helpers + +newFileRec :: SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> STM FileRec +newFileRec senderId fileInfo createdAt status = do + recipientIds <- newTVar S.empty + filePath <- newTVar Nothing + fileStatus <- newTVar status + pure FileRec {senderId, fileInfo, filePath, recipientIds, createdAt, fileStatus} + +withSTMFile :: STMFileStore -> SenderId -> (FileRec -> STM (Either XFTPErrorType a)) -> STM (Either XFTPErrorType a) +withSTMFile STMFileStore {files} sId a = + TM.lookup sId files >>= \case + Just f -> a f + _ -> pure $ Left AUTH diff --git a/src/Simplex/FileTransfer/Server/StoreLog.hs b/src/Simplex/FileTransfer/Server/StoreLog.hs index 8175aca73..dc65e4a22 100644 --- a/src/Simplex/FileTransfer/Server/StoreLog.hs +++ b/src/Simplex/FileTransfer/Server/StoreLog.hs @@ -32,6 +32,7 @@ import Data.Map.Strict (Map) import qualified Data.Map.Strict as M import Simplex.FileTransfer.Protocol 
(FileInfo (..)) import Simplex.FileTransfer.Server.Store +import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..)) import Simplex.Messaging.Encoding.String import Simplex.Messaging.Protocol (BlockingInfo, RcvPublicAuthKey, RecipientId, SenderId) import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..)) @@ -87,10 +88,10 @@ logBlockFile s fId = logFileStoreRecord s . BlockFile fId logAckFile :: StoreLog 'WriteMode -> RecipientId -> IO () logAckFile s = logFileStoreRecord s . AckFile -readWriteFileStore :: FilePath -> FileStore -> IO (StoreLog 'WriteMode) +readWriteFileStore :: FilePath -> STMFileStore -> IO (StoreLog 'WriteMode) readWriteFileStore = readWriteStoreLog readFileStore writeFileStore -readFileStore :: FilePath -> FileStore -> IO () +readFileStore :: FilePath -> STMFileStore -> IO () readFileStore f st = mapM_ (addFileLogRecord . LB.toStrict) . LB.lines =<< LB.readFile f where addFileLogRecord s = case strDecode s of @@ -108,8 +109,8 @@ readFileStore f st = mapM_ (addFileLogRecord . LB.toStrict) . LB.lines =<< LB.re AckFile rId -> ackFile st rId addRecipients sId rcps = mapM_ (ExceptT . 
addRecipient st sId) rcps -writeFileStore :: StoreLog 'WriteMode -> FileStore -> IO () -writeFileStore s FileStore {files, recipients} = do +writeFileStore :: StoreLog 'WriteMode -> STMFileStore -> IO () +writeFileStore s STMFileStore {files, recipients} = do allRcps <- readTVarIO recipients readTVarIO files >>= mapM_ (logFile allRcps) where From ff254b451b7d3324188b1715e6830be1d9723ebb Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 13:34:35 +0000 Subject: [PATCH 09/24] refactor: make XFTPEnv and server polymorphic over FileStoreClass --- src/Simplex/FileTransfer/Server.hs | 93 +++++++++++++------------ src/Simplex/FileTransfer/Server/Env.hs | 19 +++-- src/Simplex/FileTransfer/Server/Main.hs | 5 +- tests/XFTPClient.hs | 4 +- 4 files changed, 65 insertions(+), 56 deletions(-) diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs index 6bbf7b85f..dc10e7533 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -51,7 +51,6 @@ import Simplex.FileTransfer.Server.Env import Simplex.FileTransfer.Server.Prometheus import Simplex.FileTransfer.Server.Stats import Simplex.FileTransfer.Server.Store -import Simplex.FileTransfer.Server.Store.STM (STMFileStore) import Simplex.FileTransfer.Server.StoreLog import Simplex.FileTransfer.Transport import qualified Simplex.Messaging.Crypto as C @@ -88,7 +87,7 @@ import UnliftIO.Concurrent (threadDelay) import UnliftIO.Directory (canonicalizePath, doesFileExist, removeFile, renameFile) import qualified UnliftIO.Exception as E -type M a = ReaderT XFTPEnv IO a +type M s a = ReaderT (XFTPEnv s) IO a data XFTPTransportRequest = XFTPTransportRequest { thParams :: THandleParamsXFTP 'TServer, @@ -112,19 +111,19 @@ corsPreflightHeaders = ("Access-Control-Max-Age", "86400") ] -runXFTPServer :: XFTPServerConfig -> IO () -runXFTPServer cfg = do +runXFTPServer :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO () +runXFTPServer storeCfg cfg = do started <- 
newEmptyTMVarIO - runXFTPServerBlocking started cfg + runXFTPServerBlocking started storeCfg cfg -runXFTPServerBlocking :: TMVar Bool -> XFTPServerConfig -> IO () -runXFTPServerBlocking started cfg = newXFTPServerEnv cfg >>= runReaderT (xftpServer cfg started) +runXFTPServerBlocking :: FileStoreClass s => TMVar Bool -> XFTPStoreConfig s -> XFTPServerConfig -> IO () +runXFTPServerBlocking started storeCfg cfg = newXFTPServerEnv storeCfg cfg >>= runReaderT (xftpServer cfg started) data Handshake = HandshakeSent C.PrivateKeyX25519 | HandshakeAccepted (THandleParams XFTPVersion 'TServer) -xftpServer :: XFTPServerConfig -> TMVar Bool -> M () +xftpServer :: forall s. FileStoreClass s => XFTPServerConfig -> TMVar Bool -> M s () xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpiration, fileExpiration, xftpServerVRange} started = do mapM_ (expireServerFiles Nothing) fileExpiration restoreServerStats @@ -137,7 +136,7 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira ) `finally` stopServer where - runServer :: M () + runServer :: M s () runServer = do srvCreds@(chain, pk) <- asks tlsServerCreds httpCreds_ <- asks httpServerCreds @@ -168,7 +167,7 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira Nothing -> pure () Just thParams -> processRequest req0 {thParams} | otherwise -> liftIO . 
sendResponse $ H.responseNoBody N.ok200 (corsHeaders addCORS') - xftpServerHandshakeV1 :: X.CertificateChain -> C.APrivateSignKey -> TMap SessionId Handshake -> XFTPTransportRequest -> M (Maybe (THandleParams XFTPVersion 'TServer)) + xftpServerHandshakeV1 :: X.CertificateChain -> C.APrivateSignKey -> TMap SessionId Handshake -> XFTPTransportRequest -> M s (Maybe (THandleParams XFTPVersion 'TServer)) xftpServerHandshakeV1 chain serverSignKey sessions XFTPTransportRequest {thParams = thParams0@THandleParams {sessionId}, request, reqBody = HTTP2Body {bodyHead}, sendResponse, sniUsed, addCORS} = do s <- atomically $ TM.lookup sessionId sessions r <- runExceptT $ case s of @@ -227,39 +226,41 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira liftIO . sendResponse $ H.responseNoBody N.ok200 (corsHeaders addCORS) pure Nothing Nothing -> throwE HANDSHAKE - sendError :: XFTPErrorType -> M (Maybe (THandleParams XFTPVersion 'TServer)) + sendError :: XFTPErrorType -> M s (Maybe (THandleParams XFTPVersion 'TServer)) sendError err = do runExceptT (encodeXftp err) >>= \case Right bs -> liftIO . 
sendResponse $ H.responseBuilder N.ok200 (corsHeaders addCORS) bs Left _ -> logError $ "Error encoding handshake error: " <> tshow err pure Nothing - encodeXftp :: Encoding a => a -> ExceptT XFTPErrorType (ReaderT XFTPEnv IO) Builder + encodeXftp :: Encoding a => a -> ExceptT XFTPErrorType (ReaderT (XFTPEnv s) IO) Builder encodeXftp a = byteString <$> liftHS (C.pad (smpEncode a) xftpBlockSize) liftHS = liftEitherWith (const HANDSHAKE) - stopServer :: M () + stopServer :: M s () stopServer = do withFileLog closeStoreLog + st <- asks store + liftIO $ closeFileStore st saveServerStats logNote "Server stopped" - expireFilesThread_ :: XFTPServerConfig -> [M ()] + expireFilesThread_ :: XFTPServerConfig -> [M s ()] expireFilesThread_ XFTPServerConfig {fileExpiration = Just fileExp} = [expireFiles fileExp] expireFilesThread_ _ = [] - expireFiles :: ExpirationConfig -> M () + expireFiles :: ExpirationConfig -> M s () expireFiles expCfg = do let interval = checkInterval expCfg * 1000000 forever $ do liftIO $ threadDelay' interval expireServerFiles (Just 100000) expCfg - serverStatsThread_ :: XFTPServerConfig -> [M ()] + serverStatsThread_ :: XFTPServerConfig -> [M s ()] serverStatsThread_ XFTPServerConfig {logStatsInterval = Just interval, logStatsStartTime, serverStatsLogFile} = [logServerStats logStatsStartTime interval serverStatsLogFile] serverStatsThread_ _ = [] - logServerStats :: Int64 -> Int64 -> FilePath -> M () + logServerStats :: Int64 -> Int64 -> FilePath -> M s () logServerStats startAt logInterval statsFilePath = do initialDelay <- (startAt -) . fromIntegral . (`div` 1000000_000000) . diffTimeToPicoseconds . 
utctDayTime <$> liftIO getCurrentTime liftIO $ putStrLn $ "server stats log enabled: " <> statsFilePath @@ -300,12 +301,12 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira ] liftIO $ threadDelay' interval - prometheusMetricsThread_ :: XFTPServerConfig -> [M ()] + prometheusMetricsThread_ :: XFTPServerConfig -> [M s ()] prometheusMetricsThread_ XFTPServerConfig {prometheusInterval = Just interval, prometheusMetricsFile} = [savePrometheusMetrics interval prometheusMetricsFile] prometheusMetricsThread_ _ = [] - savePrometheusMetrics :: Int -> FilePath -> M () + savePrometheusMetrics :: Int -> FilePath -> M s () savePrometheusMetrics saveInterval metricsFile = do labelMyThread "savePrometheusMetrics" liftIO $ putStrLn $ "Prometheus metrics saved every " <> show saveInterval <> " seconds to " <> metricsFile @@ -324,11 +325,11 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira let fd = periodStatDataCounts $ _filesDownloaded d pure FileServerMetrics {statsData = d, filesDownloadedPeriods = fd, rtsOptions} - controlPortThread_ :: XFTPServerConfig -> [M ()] + controlPortThread_ :: XFTPServerConfig -> [M s ()] controlPortThread_ XFTPServerConfig {controlPort = Just port} = [runCPServer port] controlPortThread_ _ = [] - runCPServer :: ServiceName -> M () + runCPServer :: ServiceName -> M s () runCPServer port = do cpStarted <- newEmptyTMVarIO u <- askUnliftIO @@ -336,7 +337,7 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira labelMyThread "control port server" runLocalTCPServer cpStarted port $ runCPClient u where - runCPClient :: UnliftIO (ReaderT XFTPEnv IO) -> Socket -> IO () + runCPClient :: UnliftIO (ReaderT (XFTPEnv s) IO) -> Socket -> IO () runCPClient u sock = do labelMyThread "control port client" h <- socketToHandle sock ReadWriteMode @@ -395,7 +396,7 @@ data ServerFile = ServerFile sbState :: LC.SbState } -processRequest :: XFTPTransportRequest -> M () 
+processRequest :: FileStoreClass s => XFTPTransportRequest -> M s () processRequest XFTPTransportRequest {thParams, reqBody = body@HTTP2Body {bodyHead}, sendResponse, addCORS} | B.length bodyHead /= xftpBlockSize = sendXFTPResponse ("", NoEntity, FRErr BLOCK) Nothing | otherwise = @@ -430,7 +431,7 @@ processRequest XFTPTransportRequest {thParams, reqBody = body@HTTP2Body {bodyHea done #ifdef slow_servers -randomDelay :: M () +randomDelay :: M s () randomDelay = do d <- asks $ responseDelay . config when (d > 0) $ do @@ -440,14 +441,14 @@ randomDelay = do data VerificationResult = VRVerified XFTPRequest | VRFailed XFTPErrorType -verifyXFTPTransmission :: Maybe (THandleAuth 'TServer) -> SignedTransmission FileCmd -> M VerificationResult +verifyXFTPTransmission :: forall s. FileStoreClass s => Maybe (THandleAuth 'TServer) -> SignedTransmission FileCmd -> M s VerificationResult verifyXFTPTransmission thAuth (tAuth, authorized, (corrId, fId, cmd)) = case cmd of FileCmd SFSender (FNEW file rcps auth') -> pure $ XFTPReqNew file rcps auth' `verifyWith` sndKey file FileCmd SFRecipient PING -> pure $ VRVerified XFTPReqPing FileCmd party _ -> verifyCmd party where - verifyCmd :: SFileParty p -> M VerificationResult + verifyCmd :: SFileParty p -> M s VerificationResult verifyCmd party = do st <- asks store liftIO (getFile st party fId) >>= \case @@ -463,7 +464,7 @@ verifyXFTPTransmission thAuth (tAuth, authorized, (corrId, fId, cmd)) = -- TODO verify with DH authorization req `verifyWith` k = if verifyCmdAuthorization thAuth tAuth authorized corrId k then VRVerified req else VRFailed AUTH -processXFTPRequest :: HTTP2Body -> XFTPRequest -> M (FileResponse, Maybe ServerFile) +processXFTPRequest :: forall s. 
FileStoreClass s => HTTP2Body -> XFTPRequest -> M s (FileResponse, Maybe ServerFile) processXFTPRequest HTTP2Body {bodyPart} = \case XFTPReqNew file rks auth -> noFile =<< ifM allowNew (createFile file rks) (pure $ FRErr AUTH) where @@ -482,7 +483,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case XFTPReqPing -> noFile FRPong where noFile resp = pure (resp, Nothing) - createFile :: FileInfo -> NonEmpty RcvPublicAuthKey -> M FileResponse + createFile :: FileInfo -> NonEmpty RcvPublicAuthKey -> M s FileResponse createFile file rks = do st <- asks store r <- runExceptT $ do @@ -501,25 +502,25 @@ processXFTPRequest HTTP2Body {bodyPart} = \case let rIds = L.map (\(FileRecipient rId _) -> rId) rcps pure $ FRSndIds sId rIds pure $ either FRErr id r - addFileRetry :: STMFileStore -> FileInfo -> Int -> RoundedFileTime -> M (Either XFTPErrorType XFTPFileId) + addFileRetry :: s -> FileInfo -> Int -> RoundedFileTime -> M s (Either XFTPErrorType XFTPFileId) addFileRetry st file n ts = retryAdd n $ \sId -> runExceptT $ do ExceptT $ addFile st sId file ts EntityActive pure sId - addRecipientRetry :: STMFileStore -> Int -> XFTPFileId -> RcvPublicAuthKey -> M (Either XFTPErrorType FileRecipient) + addRecipientRetry :: s -> Int -> XFTPFileId -> RcvPublicAuthKey -> M s (Either XFTPErrorType FileRecipient) addRecipientRetry st n sId rpk = retryAdd n $ \rId -> runExceptT $ do let rcp = FileRecipient rId rpk ExceptT $ addRecipient st sId rcp pure rcp - retryAdd :: Int -> (XFTPFileId -> IO (Either XFTPErrorType a)) -> M (Either XFTPErrorType a) + retryAdd :: Int -> (XFTPFileId -> IO (Either XFTPErrorType a)) -> M s (Either XFTPErrorType a) retryAdd 0 _ = pure $ Left INTERNAL retryAdd n add = do fId <- getFileId liftIO (add fId) >>= \case Left DUPLICATE_ -> retryAdd (n - 1) add r -> pure r - addRecipients :: XFTPFileId -> NonEmpty RcvPublicAuthKey -> M FileResponse + addRecipients :: XFTPFileId -> NonEmpty RcvPublicAuthKey -> M s FileResponse addRecipients sId rks = do st <- asks store 
r <- runExceptT $ do @@ -530,7 +531,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case let rIds = L.map (\(FileRecipient rId _) -> rId) rcps pure $ FRRcvIds rIds pure $ either FRErr id r - receiveServerFile :: FileRec -> M FileResponse + receiveServerFile :: FileRec -> M s FileResponse receiveServerFile FileRec {senderId, fileInfo = FileInfo {size, digest}, filePath} = case bodyPart of Nothing -> pure $ FRErr SIZE -- TODO validate body size from request before downloading, once it's populated @@ -573,7 +574,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case receiveChunk spec = do t <- asks $ fileTimeout . config liftIO $ fromMaybe (Left TIMEOUT) <$> timeout t (runExceptT $ receiveFile getBody spec) - sendServerFile :: FileRec -> RcvPublicDhKey -> M (FileResponse, Maybe ServerFile) + sendServerFile :: FileRec -> RcvPublicDhKey -> M s (FileResponse, Maybe ServerFile) sendServerFile FileRec {senderId, filePath, fileInfo = FileInfo {size}} rDhKey = do readTVarIO filePath >>= \case Just path -> ifM (doesFileExist path) sendFile (pure (FRErr AUTH, Nothing)) @@ -592,13 +593,13 @@ processXFTPRequest HTTP2Body {bodyPart} = \case _ -> pure (FRErr INTERNAL, Nothing) _ -> pure (FRErr NO_FILE, Nothing) - deleteServerFile :: FileRec -> M FileResponse + deleteServerFile :: FileRec -> M s FileResponse deleteServerFile fr = either FRErr (\() -> FROk) <$> deleteServerFile_ fr logFileError :: SomeException -> IO () logFileError e = logError $ "Error deleting file: " <> tshow e - ackFileReception :: RecipientId -> FileRec -> M FileResponse + ackFileReception :: RecipientId -> FileRec -> M s FileResponse ackFileReception rId fr = do withFileLog (`logAckFile` rId) st <- asks store @@ -606,18 +607,18 @@ processXFTPRequest HTTP2Body {bodyPart} = \case incFileStat fileDownloadAcks pure FROk -deleteServerFile_ :: FileRec -> M (Either XFTPErrorType ()) +deleteServerFile_ :: FileStoreClass s => FileRec -> M s (Either XFTPErrorType ()) deleteServerFile_ fr@FileRec {senderId} = do 
withFileLog (`logDeleteFile` senderId) deleteOrBlockServerFile_ fr filesDeleted (`deleteFile` senderId) -- this also deletes the file from storage, but doesn't include it in delete statistics -blockServerFile :: FileRec -> BlockingInfo -> M (Either XFTPErrorType ()) +blockServerFile :: FileStoreClass s => FileRec -> BlockingInfo -> M s (Either XFTPErrorType ()) blockServerFile fr@FileRec {senderId} info = do withFileLog $ \sl -> logBlockFile sl senderId info deleteOrBlockServerFile_ fr filesBlocked $ \st -> blockFile st senderId info True -deleteOrBlockServerFile_ :: FileRec -> (FileServerStats -> IORef Int) -> (STMFileStore -> IO (Either XFTPErrorType ())) -> M (Either XFTPErrorType ()) +deleteOrBlockServerFile_ :: FileStoreClass s => FileRec -> (FileServerStats -> IORef Int) -> (s -> IO (Either XFTPErrorType ())) -> M s (Either XFTPErrorType ()) deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExceptT $ do path <- readTVarIO filePath stats <- asks serverStats @@ -636,7 +637,7 @@ deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExce getFileTime :: IO RoundedFileTime getFileTime = getRoundedSystemTime -expireServerFiles :: Maybe Int -> ExpirationConfig -> M () +expireServerFiles :: FileStoreClass s => Maybe Int -> ExpirationConfig -> M s () expireServerFiles itemDelay expCfg = do st <- asks store us <- asks usedStorage @@ -662,21 +663,21 @@ expireServerFiles itemDelay expCfg = do incFileStat filesExpired unless (null expired) $ expireLoop st us old -randomId :: Int -> M ByteString +randomId :: Int -> M s ByteString randomId n = atomically . C.randomBytes n =<< asks random -getFileId :: M XFTPFileId +getFileId :: M s XFTPFileId getFileId = fmap EntityId . randomId =<< asks (fileIdSize . config) -withFileLog :: (StoreLog 'WriteMode -> IO a) -> M () +withFileLog :: (StoreLog 'WriteMode -> IO a) -> M s () withFileLog action = liftIO . 
mapM_ action =<< asks storeLog -incFileStat :: (FileServerStats -> IORef Int) -> M () +incFileStat :: (FileServerStats -> IORef Int) -> M s () incFileStat statSel = do stats <- asks serverStats liftIO $ atomicModifyIORef'_ (statSel stats) (+ 1) -saveServerStats :: M () +saveServerStats :: M s () saveServerStats = asks (serverStatsBackupFile . config) >>= mapM_ (\f -> asks serverStats >>= liftIO . getFileServerStatsData >>= liftIO . saveStats f) @@ -686,7 +687,7 @@ saveServerStats = B.writeFile f $ strEncode stats logNote "server stats saved" -restoreServerStats :: M () +restoreServerStats :: FileStoreClass s => M s () restoreServerStats = asks (serverStatsBackupFile . config) >>= mapM_ restoreStats where restoreStats f = whenM (doesFileExist f) $ do diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs index ce1ebecbf..f03dc2f12 100644 --- a/src/Simplex/FileTransfer/Server/Env.hs +++ b/src/Simplex/FileTransfer/Server/Env.hs @@ -9,6 +9,7 @@ module Simplex.FileTransfer.Server.Env ( XFTPServerConfig (..), + XFTPStoreConfig (..), XFTPEnv (..), XFTPRequest (..), defaultInactiveClientExpiration, @@ -87,9 +88,12 @@ defaultInactiveClientExpiration = checkInterval = 3600 -- seconds, 1 hours } -data XFTPEnv = XFTPEnv +data XFTPStoreConfig s where + XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore + +data XFTPEnv s = XFTPEnv { config :: XFTPServerConfig, - store :: STMFileStore, + store :: s, usedStorage :: TVar Int64, storeLog :: Maybe (StoreLog 'WriteMode), random :: TVar ChaChaDRG, @@ -109,11 +113,14 @@ defaultFileExpiration = checkInterval = 2 * 3600 -- seconds, 2 hours } -newXFTPServerEnv :: XFTPServerConfig -> IO XFTPEnv -newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCredentials, httpCredentials} = do +newXFTPServerEnv :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO (XFTPEnv s) +newXFTPServerEnv storeCfg config@XFTPServerConfig {fileSizeQuota, xftpCredentials, 
httpCredentials} = do random <- C.newRandom - store <- newFileStore () - storeLog <- mapM (`readWriteFileStore` store) storeLogFile + (store, storeLog) <- case storeCfg of + XSCMemory storeLogPath -> do + st <- newFileStore () + sl <- mapM (`readWriteFileStore` st) storeLogPath + pure (st, sl) used <- getUsedStorage store usedStorage <- newTVarIO used forM_ fileSizeQuota $ \quota -> do diff --git a/src/Simplex/FileTransfer/Server/Main.hs b/src/Simplex/FileTransfer/Server/Main.hs index 101fe945b..42c53d32c 100644 --- a/src/Simplex/FileTransfer/Server/Main.hs +++ b/src/Simplex/FileTransfer/Server/Main.hs @@ -28,7 +28,7 @@ import Options.Applicative import Simplex.FileTransfer.Chunks import Simplex.FileTransfer.Description (FileSize (..)) import Simplex.FileTransfer.Server (runXFTPServer) -import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration) +import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), XFTPStoreConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration) import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange) import qualified Simplex.Messaging.Crypto as C import Simplex.Messaging.Encoding.String @@ -194,7 +194,8 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do when (isJust webHttpPort || isJust webHttpsParams') $ serveStaticFiles EmbeddedWebParams {webStaticPath = path, webHttpPort, webHttpsParams = webHttpsParams'} Nothing -> pure () - runXFTPServer serverConfig + let storeCfg = XSCMemory $ storeLogFile serverConfig + runXFTPServer storeCfg serverConfig where isOnion = \case THOnionHost _ -> True; _ -> False enableStoreLog = settingIsOn "STORE_LOG" "enable" ini diff --git a/tests/XFTPClient.hs b/tests/XFTPClient.hs index 85a1d21b8..6fcc32669 100644 --- a/tests/XFTPClient.hs +++ b/tests/XFTPClient.hs @@ -14,7 +14,7 @@ import SMPClient (serverBracket) import 
Simplex.FileTransfer.Client import Simplex.FileTransfer.Description import Simplex.FileTransfer.Server (runXFTPServerBlocking) -import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defaultFileExpiration, defaultInactiveClientExpiration) +import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), XFTPStoreConfig (..), defaultFileExpiration, defaultInactiveClientExpiration) import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange) import Simplex.Messaging.Protocol (XFTPServer) import Simplex.Messaging.Transport.HTTP2 (httpALPN) @@ -58,7 +58,7 @@ withXFTPServerCfgNoALPN cfg = withXFTPServerCfg cfg {transportConfig = (transpor withXFTPServerCfg :: HasCallStack => XFTPServerConfig -> (HasCallStack => ThreadId -> IO a) -> IO a withXFTPServerCfg cfg = serverBracket - (\started -> runXFTPServerBlocking started cfg) + (\started -> runXFTPServerBlocking started (XSCMemory $ storeLogFile cfg) cfg) (threadDelay 10000) withXFTPServerThreadOn :: HasCallStack => (HasCallStack => ThreadId -> IO a) -> IO a From cde9f5054479237c2cbe11c0f02748be69f8425d Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 13:52:54 +0000 Subject: [PATCH 10/24] feat: add PostgreSQL store skeleton with schema migration --- simplexmq.cabal | 3 + src/Simplex/FileTransfer/Server/Env.hs | 13 +++ .../FileTransfer/Server/Store/Postgres.hs | 104 ++++++++++++++++++ .../Server/Store/Postgres/Config.hs | 25 +++++ .../Server/Store/Postgres/Migrations.hs | 47 ++++++++ 5 files changed, 192 insertions(+) create mode 100644 src/Simplex/FileTransfer/Server/Store/Postgres.hs create mode 100644 src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs create mode 100644 src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs diff --git a/simplexmq.cabal b/simplexmq.cabal index 329187a81..21b02ce0b 100644 --- a/simplexmq.cabal +++ b/simplexmq.cabal @@ -282,6 +282,9 @@ library Simplex.Messaging.Notifications.Server.Store.Postgres 
Simplex.Messaging.Notifications.Server.Store.Types Simplex.Messaging.Notifications.Server.StoreLog + Simplex.FileTransfer.Server.Store.Postgres + Simplex.FileTransfer.Server.Store.Postgres.Config + Simplex.FileTransfer.Server.Store.Postgres.Migrations Simplex.Messaging.Server.MsgStore.Postgres Simplex.Messaging.Server.QueueStore.Postgres Simplex.Messaging.Server.QueueStore.Postgres.Migrations diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs index f03dc2f12..3a09f3143 100644 --- a/src/Simplex/FileTransfer/Server/Env.hs +++ b/src/Simplex/FileTransfer/Server/Env.hs @@ -1,3 +1,4 @@ +{-# LANGUAGE CPP #-} {-# LANGUAGE DataKinds #-} {-# LANGUAGE DuplicateRecordFields #-} {-# LANGUAGE GADTs #-} @@ -32,6 +33,10 @@ import Simplex.FileTransfer.Protocol (FileCmd, FileInfo (..), XFTPFileId) import Simplex.FileTransfer.Server.Stats import Simplex.FileTransfer.Server.Store import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..)) +#if defined(dbServerPostgres) +import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore) +import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg) +#endif import Simplex.FileTransfer.Server.StoreLog import Simplex.FileTransfer.Transport (VersionRangeXFTP) import qualified Simplex.Messaging.Crypto as C @@ -90,6 +95,9 @@ defaultInactiveClientExpiration = data XFTPStoreConfig s where XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore +#if defined(dbServerPostgres) + XSCDatabase :: PostgresFileStoreCfg -> XFTPStoreConfig PostgresFileStore +#endif data XFTPEnv s = XFTPEnv { config :: XFTPServerConfig, @@ -121,6 +129,11 @@ newXFTPServerEnv storeCfg config@XFTPServerConfig {fileSizeQuota, xftpCredential st <- newFileStore () sl <- mapM (`readWriteFileStore` st) storeLogPath pure (st, sl) +#if defined(dbServerPostgres) + XSCDatabase dbCfg -> do + st <- newFileStore dbCfg + pure (st, Nothing) +#endif used <- getUsedStorage store usedStorage <- newTVarIO used forM_ 
fileSizeQuota $ \quota -> do diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs new file mode 100644 index 000000000..22f7c2a34 --- /dev/null +++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs @@ -0,0 +1,104 @@ +{-# LANGUAGE DataKinds #-} +{-# LANGUAGE GADTs #-} +{-# LANGUAGE LambdaCase #-} +{-# LANGUAGE NamedFieldPuns #-} +{-# LANGUAGE OverloadedStrings #-} +{-# LANGUAGE QuasiQuotes #-} +{-# LANGUAGE ScopedTypeVariables #-} +{-# LANGUAGE TypeFamilies #-} + +module Simplex.FileTransfer.Server.Store.Postgres + ( PostgresFileStore (..), + withDB, + withDB', + handleDuplicate, + assertUpdated, + withLog, + ) +where + +import qualified Control.Exception as E +import Control.Logger.Simple +import Control.Monad +import Control.Monad.Except +import Control.Monad.Trans.Except (throwE) +import Control.Monad.IO.Class +import Data.Functor (($>)) +import Data.Int (Int64) +import Data.Text (Text) +import Database.PostgreSQL.Simple (SqlError) +import Database.PostgreSQL.Simple.Errors (ConstraintViolation (..), constraintViolation) +import qualified Database.PostgreSQL.Simple as DB +import GHC.IO (catchAny) +import Simplex.FileTransfer.Server.Store +import Simplex.FileTransfer.Server.Store.Postgres.Config +import Simplex.FileTransfer.Server.Store.Postgres.Migrations (xftpServerMigrations) +import Simplex.FileTransfer.Server.StoreLog +import Simplex.FileTransfer.Transport (XFTPErrorType (..)) +import Simplex.Messaging.Agent.Store.Postgres (createDBStore, closeDBStore) +import Simplex.Messaging.Agent.Store.Postgres.Common (DBStore, withTransaction) +import Simplex.Messaging.Agent.Store.Shared (MigrationConfig (..)) +import Simplex.Messaging.Server.StoreLog (openWriteStoreLog) +import Simplex.Messaging.Util (tshow) +import System.Exit (exitFailure) +import System.IO (IOMode (..)) + +data PostgresFileStore = PostgresFileStore + { dbStore :: DBStore, + dbStoreLog :: Maybe (StoreLog 'WriteMode) + } + +instance 
FileStoreClass PostgresFileStore where + type FileStoreConfig PostgresFileStore = PostgresFileStoreCfg + + newFileStore PostgresFileStoreCfg {dbOpts, dbStoreLogPath, confirmMigrations} = do + dbStore <- either err pure =<< createDBStore dbOpts xftpServerMigrations (MigrationConfig confirmMigrations Nothing) + dbStoreLog <- mapM (openWriteStoreLog True) dbStoreLogPath + pure PostgresFileStore {dbStore, dbStoreLog} + where + err e = do + logError $ "STORE: newFileStore, error opening PostgreSQL database, " <> tshow e + exitFailure + + closeFileStore PostgresFileStore {dbStore, dbStoreLog} = do + closeDBStore dbStore + mapM_ closeStoreLog dbStoreLog + + addFile _ _ _ _ _ = error "PostgresFileStore.addFile: not implemented" + setFilePath _ _ _ = error "PostgresFileStore.setFilePath: not implemented" + addRecipient _ _ _ = error "PostgresFileStore.addRecipient: not implemented" + getFile _ _ _ = error "PostgresFileStore.getFile: not implemented" + deleteFile _ _ = error "PostgresFileStore.deleteFile: not implemented" + blockFile _ _ _ _ = error "PostgresFileStore.blockFile: not implemented" + deleteRecipient _ _ _ = error "PostgresFileStore.deleteRecipient: not implemented" + ackFile _ _ = error "PostgresFileStore.ackFile: not implemented" + expiredFiles _ _ _ = error "PostgresFileStore.expiredFiles: not implemented" + getUsedStorage _ = error "PostgresFileStore.getUsedStorage: not implemented" + getFileCount _ = error "PostgresFileStore.getFileCount: not implemented" + +-- Helpers + +withDB :: forall a. 
Text -> PostgresFileStore -> (DB.Connection -> IO (Either XFTPErrorType a)) -> ExceptT XFTPErrorType IO a +withDB op st action = + ExceptT $ E.try (withTransaction (dbStore st) action) >>= either logErr pure + where + logErr :: E.SomeException -> IO (Either XFTPErrorType a) + logErr e = logError ("STORE: " <> err) $> Left INTERNAL + where + err = op <> ", withDB, " <> tshow e + +withDB' :: Text -> PostgresFileStore -> (DB.Connection -> IO a) -> ExceptT XFTPErrorType IO a +withDB' op st action = withDB op st $ fmap Right . action + +assertUpdated :: ExceptT XFTPErrorType IO Int64 -> ExceptT XFTPErrorType IO () +assertUpdated = (>>= \n -> when (n == 0) (throwE AUTH)) + +handleDuplicate :: SqlError -> IO (Either XFTPErrorType a) +handleDuplicate e = case constraintViolation e of + Just (UniqueViolation _) -> pure $ Left DUPLICATE_ + _ -> E.throwIO e + +withLog :: MonadIO m => Text -> PostgresFileStore -> (StoreLog 'WriteMode -> IO ()) -> m () +withLog op PostgresFileStore {dbStoreLog} action = + forM_ dbStoreLog $ \sl -> liftIO $ action sl `catchAny` \e -> + logWarn $ "STORE: " <> op <> ", withLog, " <> tshow e diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs b/src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs new file mode 100644 index 000000000..a0dd5d7ba --- /dev/null +++ b/src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs @@ -0,0 +1,25 @@ +{-# LANGUAGE OverloadedStrings #-} + +module Simplex.FileTransfer.Server.Store.Postgres.Config + ( PostgresFileStoreCfg (..), + defaultXFTPDBOpts, + ) +where + +import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..)) +import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation) + +data PostgresFileStoreCfg = PostgresFileStoreCfg + { dbOpts :: DBOpts, + dbStoreLogPath :: Maybe FilePath, + confirmMigrations :: MigrationConfirmation + } + +defaultXFTPDBOpts :: DBOpts +defaultXFTPDBOpts = + DBOpts + { connstr = "postgresql://xftp@/xftp_server_store", + schema = "xftp_server", + 
poolSize = 10, + createSchema = False + } diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs b/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs new file mode 100644 index 000000000..1914ecbd6 --- /dev/null +++ b/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs @@ -0,0 +1,47 @@ +{-# LANGUAGE NamedFieldPuns #-} +{-# LANGUAGE OverloadedStrings #-} +{-# LANGUAGE QuasiQuotes #-} + +module Simplex.FileTransfer.Server.Store.Postgres.Migrations + ( xftpServerMigrations, + ) +where + +import Data.List (sortOn) +import Data.Text (Text) +import Simplex.Messaging.Agent.Store.Shared +import Text.RawString.QQ (r) + +xftpSchemaMigrations :: [(String, Text, Maybe Text)] +xftpSchemaMigrations = + [ ("20260325_initial", m20260325_initial, Nothing) + ] + +-- | The list of migrations in ascending order by date +xftpServerMigrations :: [Migration] +xftpServerMigrations = sortOn name $ map migration xftpSchemaMigrations + where + migration (name, up, down) = Migration {name, up, down = down} + +m20260325_initial :: Text +m20260325_initial = + [r| +CREATE TABLE files ( + sender_id BYTEA NOT NULL PRIMARY KEY, + file_size INT4 NOT NULL, + file_digest BYTEA NOT NULL, + sender_key BYTEA NOT NULL, + file_path TEXT, + created_at INT8 NOT NULL, + status TEXT NOT NULL DEFAULT 'active' +); + +CREATE TABLE recipients ( + recipient_id BYTEA NOT NULL PRIMARY KEY, + sender_id BYTEA NOT NULL REFERENCES files ON DELETE CASCADE, + recipient_key BYTEA NOT NULL +); + +CREATE INDEX idx_recipients_sender_id ON recipients (sender_id); +CREATE INDEX idx_files_created_at ON files (created_at); +|] From ae4888fc6ea7a4a9a25582e79f51d9a375beebd2 Mon Sep 17 00:00:00 2001 From: shum Date: Wed, 1 Apr 2026 14:10:53 +0000 Subject: [PATCH 11/24] feat: implement PostgresFileStore operations --- .../FileTransfer/Server/Store/Postgres.hs | 141 +++++++++++++++--- 1 file changed, 123 insertions(+), 18 deletions(-) diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs 
b/src/Simplex/FileTransfer/Server/Store/Postgres.hs index 22f7c2a34..fea00fbc9 100644 --- a/src/Simplex/FileTransfer/Server/Store/Postgres.hs +++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs @@ -3,7 +3,6 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} -{-# LANGUAGE QuasiQuotes #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} @@ -21,27 +20,35 @@ import qualified Control.Exception as E import Control.Logger.Simple import Control.Monad import Control.Monad.Except -import Control.Monad.Trans.Except (throwE) import Control.Monad.IO.Class +import Control.Monad.Trans.Except (throwE) import Data.Functor (($>)) -import Data.Int (Int64) +import Data.Int (Int32, Int64) +import qualified Data.Set as S import Data.Text (Text) -import Database.PostgreSQL.Simple (SqlError) +import Data.Word (Word32) +import Database.PostgreSQL.Simple (Binary (..), Only (..), SqlError) import Database.PostgreSQL.Simple.Errors (ConstraintViolation (..), constraintViolation) import qualified Database.PostgreSQL.Simple as DB import GHC.IO (catchAny) +import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..)) import Simplex.FileTransfer.Server.Store import Simplex.FileTransfer.Server.Store.Postgres.Config import Simplex.FileTransfer.Server.Store.Postgres.Migrations (xftpServerMigrations) import Simplex.FileTransfer.Server.StoreLog import Simplex.FileTransfer.Transport (XFTPErrorType (..)) -import Simplex.Messaging.Agent.Store.Postgres (createDBStore, closeDBStore) +import Simplex.Messaging.Agent.Store.Postgres (closeDBStore, createDBStore) import Simplex.Messaging.Agent.Store.Postgres.Common (DBStore, withTransaction) import Simplex.Messaging.Agent.Store.Shared (MigrationConfig (..)) +import qualified Simplex.Messaging.Crypto as C +import Simplex.Messaging.Protocol (SenderId) +import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..)) +import Simplex.Messaging.Server.QueueStore.Postgres () import 
 Simplex.Messaging.Server.StoreLog (openWriteStoreLog)
 import Simplex.Messaging.Util (tshow)
 import System.Exit (exitFailure)
 import System.IO (IOMode (..))
+import UnliftIO.STM

 data PostgresFileStore = PostgresFileStore
   { dbStore :: DBStore,
@@ -64,19 +71,117 @@ instance FileStoreClass PostgresFileStore where
     closeDBStore dbStore
     mapM_ closeStoreLog dbStoreLog

-  addFile _ _ _ _ _ = error "PostgresFileStore.addFile: not implemented"
-  setFilePath _ _ _ = error "PostgresFileStore.setFilePath: not implemented"
-  addRecipient _ _ _ = error "PostgresFileStore.addRecipient: not implemented"
-  getFile _ _ _ = error "PostgresFileStore.getFile: not implemented"
-  deleteFile _ _ = error "PostgresFileStore.deleteFile: not implemented"
-  blockFile _ _ _ _ = error "PostgresFileStore.blockFile: not implemented"
-  deleteRecipient _ _ _ = error "PostgresFileStore.deleteRecipient: not implemented"
-  ackFile _ _ = error "PostgresFileStore.ackFile: not implemented"
-  expiredFiles _ _ _ = error "PostgresFileStore.expiredFiles: not implemented"
-  getUsedStorage _ = error "PostgresFileStore.getUsedStorage: not implemented"
-  getFileCount _ = error "PostgresFileStore.getFileCount: not implemented"
-
--- Helpers
+  addFile st sId fileInfo@FileInfo {sndKey, size, digest} createdAt status =
+    E.uninterruptibleMask_ $ runExceptT $ do
+      void $ withDB "addFile" st $ \db ->
+        E.try
+          ( DB.execute
+              db
+              "INSERT INTO files (sender_id, file_size, file_digest, sender_key, created_at, status) VALUES (?,?,?,?,?,?)"
+              (sId, (fromIntegral size :: Int32), Binary digest, Binary (C.encodePubKey sndKey), createdAt, status)
+          )
+          >>= either handleDuplicate (pure . Right)
+      withLog "addFile" st $ \s -> logAddFile s sId fileInfo createdAt status
+
+  setFilePath st sId fPath = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "setFilePath" st $ \db ->
+      DB.execute db "UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL" (fPath, sId)
+    withLog "setFilePath" st $ \s -> logPutFile s sId fPath
+
+  addRecipient st senderId (FileRecipient rId rKey) = E.uninterruptibleMask_ $ runExceptT $ do
+    void $ withDB "addRecipient" st $ \db ->
+      E.try
+        ( DB.execute
+            db
+            "INSERT INTO recipients (recipient_id, sender_id, recipient_key) VALUES (?,?,?)"
+            (rId, senderId, Binary (C.encodePubKey rKey))
+        )
+        >>= either handleDuplicate (pure . Right)
+    withLog "addRecipient" st $ \s -> logAddRecipients s senderId (pure $ FileRecipient rId rKey)
+
+  getFile st party fId = runExceptT $ case party of
+    SFSender ->
+      withDB "getFile" st $ \db -> do
+        rs <-
+          DB.query
+            db
+            "SELECT file_size, file_digest, sender_key, file_path, created_at, status FROM files WHERE sender_id = ?"
+            (Only fId)
+        case rs of
+          [(size, digest, sndKeyBs, path, createdAt, status)] ->
+            case C.decodePubKey sndKeyBs of
+              Right sndKey -> do
+                let fileInfo = FileInfo {sndKey, size = fromIntegral (size :: Int32), digest}
+                fr <- mkFileRec fId fileInfo path createdAt status
+                pure $ Right (fr, sndKey)
+              Left _ -> pure $ Left INTERNAL
+          _ -> pure $ Left AUTH
+    SFRecipient ->
+      withDB "getFile" st $ \db -> do
+        rs <-
+          DB.query
+            db
+            "SELECT f.file_size, f.file_digest, f.sender_key, f.file_path, f.created_at, f.status, f.sender_id, r.recipient_key FROM recipients r JOIN files f ON r.sender_id = f.sender_id WHERE r.recipient_id = ?"
+            (Only fId)
+        case rs of
+          [(size, digest, sndKeyBs, path, createdAt, status, senderId, rcpKeyBs)] ->
+            case (C.decodePubKey sndKeyBs, C.decodePubKey rcpKeyBs) of
+              (Right sndKey, Right rcpKey) -> do
+                let fileInfo = FileInfo {sndKey, size = fromIntegral (size :: Int32), digest}
+                fr <- mkFileRec senderId fileInfo path createdAt status
+                pure $ Right (fr, rcpKey)
+              _ -> pure $ Left INTERNAL
+          _ -> pure $ Left AUTH
+
+  deleteFile st sId = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "deleteFile" st $ \db ->
+      DB.execute db "DELETE FROM files WHERE sender_id = ?" (Only sId)
+    withLog "deleteFile" st $ \s -> logDeleteFile s sId
+
+  blockFile st sId info _deleted = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "blockFile" st $ \db ->
+      DB.execute db "UPDATE files SET status = ? WHERE sender_id = ?" (EntityBlocked info, sId)
+    withLog "blockFile" st $ \s -> logBlockFile s sId info
+
+  deleteRecipient st rId _fr =
+    void $ runExceptT $ withDB' "deleteRecipient" st $ \db ->
+      DB.execute db "DELETE FROM recipients WHERE recipient_id = ?" (Only rId)
+
+  ackFile st rId = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "ackFile" st $ \db ->
+      DB.execute db "DELETE FROM recipients WHERE recipient_id = ?" (Only rId)
+    withLog "ackFile" st $ \s -> logAckFile s rId
+
+  expiredFiles st old limit =
+    fmap toResult $ withTransaction (dbStore st) $ \db ->
+      DB.query
+        db
+        "SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ? ORDER BY created_at LIMIT ?"
+        (fileTimePrecision, old, limit)
+    where
+      toResult :: [(SenderId, Maybe FilePath, Int32)] -> [(SenderId, Maybe FilePath, Word32)]
+      toResult = map (\(sId, path, size) -> (sId, path, fromIntegral size))
+
+  getUsedStorage st =
+    withTransaction (dbStore st) $ \db -> do
+      [Only total] <- DB.query_ db "SELECT COALESCE(SUM(file_size::INT8), 0) FROM files"
+      pure total
+
+  getFileCount st =
+    withTransaction (dbStore st) $ \db -> do
+      [Only count] <- DB.query_ db "SELECT COUNT(*) FROM files"
+      pure (fromIntegral (count :: Int64))
+
+-- Internal helpers
+
+mkFileRec :: SenderId -> FileInfo -> Maybe FilePath -> RoundedFileTime -> ServerEntityStatus -> IO FileRec
+mkFileRec senderId fileInfo path createdAt status = do
+  filePath <- newTVarIO path
+  recipientIds <- newTVarIO S.empty
+  fileStatus <- newTVarIO status
+  pure FileRec {senderId, fileInfo, filePath, recipientIds, createdAt, fileStatus}
+
+-- DB helpers

 withDB :: forall a. Text -> PostgresFileStore -> (DB.Connection -> IO (Either XFTPErrorType a)) -> ExceptT XFTPErrorType IO a
 withDB op st action =

From d6b6cd5c88ec0e9a1582afec14e4e24f5b56b112 Mon Sep 17 00:00:00 2001
From: shum
Date: Wed, 1 Apr 2026 14:35:22 +0000
Subject: [PATCH 12/24] feat: add PostgreSQL INI config, store dispatch,
 startup validation

---
 src/Simplex/FileTransfer/Server/Env.hs  | 53 ++++++++++++++++++++++++-
 src/Simplex/FileTransfer/Server/Main.hs | 48 ++++++++++++++++++----
 2 files changed, 92 insertions(+), 9 deletions(-)

diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs
index 3a09f3143..e5289ecb6 100644
--- a/src/Simplex/FileTransfer/Server/Env.hs
+++ b/src/Simplex/FileTransfer/Server/Env.hs
@@ -3,6 +3,7 @@
 {-# LANGUAGE DuplicateRecordFields #-}
 {-# LANGUAGE GADTs #-}
 {-# LANGUAGE KindSignatures #-}
+{-# LANGUAGE RankNTypes #-}
 {-# LANGUAGE LambdaCase #-}
 {-# LANGUAGE NamedFieldPuns #-}
 {-# LANGUAGE OverloadedStrings #-}
@@ -17,6 +18,8 @@ module Simplex.FileTransfer.Server.Env
     defFileExpirationHours,
     defaultFileExpiration,
     newXFTPServerEnv,
+    runWithStoreConfig,
+    checkFileStoreMode,
   )
 where

 import Control.Logger.Simple
@@ -31,11 +34,17 @@ import Network.Socket
 import qualified Network.TLS as T
 import Simplex.FileTransfer.Protocol (FileCmd, FileInfo (..), XFTPFileId)
 import Simplex.FileTransfer.Server.Stats
+import Data.Ini (Ini)
 import Simplex.FileTransfer.Server.Store
 import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))
+import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation)
 #if defined(dbServerPostgres)
+import Data.Functor (($>))
 import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore)
-import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg)
+import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg (..), defaultXFTPDBOpts)
+import Simplex.Messaging.Server.CLI (iniDBOptions, settingIsOn)
+import System.Directory (doesFileExist)
+import System.Exit (exitFailure)
 #endif
 import Simplex.FileTransfer.Server.StoreLog
 import Simplex.FileTransfer.Transport (VersionRangeXFTP)
@@ -149,3 +158,45 @@ data XFTPRequest
   = XFTPReqNew FileInfo (NonEmpty RcvPublicAuthKey) (Maybe BasicAuth)
   | XFTPReqCmd XFTPFileId FileRec FileCmd
   | XFTPReqPing
+
+-- | Select and run the store config based on INI settings.
+-- CPP guards for Postgres are handled here so Main.hs stays CPP-free.
+runWithStoreConfig ::
+  Ini ->
+  String ->
+  Maybe FilePath ->
+  FilePath ->
+  MigrationConfirmation ->
+  (forall s. FileStoreClass s => XFTPStoreConfig s -> IO ()) ->
+  IO ()
+runWithStoreConfig _ini storeType storeLogFile_ _storeLogFilePath _confirmMigrations run = case storeType of
+  "memory" -> run $ XSCMemory storeLogFile_
+#if defined(dbServerPostgres)
+  "database" -> run $ XSCDatabase dbCfg
+    where
+      enableDbStoreLog' = settingIsOn "STORE_LOG" "db_store_log" _ini
+      dbStoreLogPath = enableDbStoreLog' $> _storeLogFilePath
+      dbCfg = PostgresFileStoreCfg {dbOpts = iniDBOptions _ini defaultXFTPDBOpts, dbStoreLogPath, confirmMigrations = _confirmMigrations}
+#else
+  "database" -> error "Error: server binary is compiled without support for PostgreSQL database.\nPlease re-compile with `cabal build -fserver_postgres`."
+#endif
+  _ -> error $ "Invalid store_files value: " <> storeType
+
+-- | Validate startup config when store_files=database.
+checkFileStoreMode :: Ini -> String -> FilePath -> IO ()
+#if defined(dbServerPostgres)
+checkFileStoreMode ini storeType storeLogFilePath = case storeType of
+  "database" -> do
+    storeLogExists <- doesFileExist storeLogFilePath
+    let dbStoreLogOn = settingIsOn "STORE_LOG" "db_store_log" ini
+    when (storeLogExists && isNothing_ dbStoreLogOn) $ do
+      putStrLn $ "Error: store log file " <> storeLogFilePath <> " exists but store_files is `database`."
+      putStrLn "Use `file-server database import` to migrate, or set `db_store_log: on`."
+      exitFailure
+  _ -> pure ()
+  where
+    isNothing_ Nothing = True
+    isNothing_ _ = False
+#else
+checkFileStoreMode _ _ _ = pure ()
+#endif
diff --git a/src/Simplex/FileTransfer/Server/Main.hs b/src/Simplex/FileTransfer/Server/Main.hs
index 42c53d32c..f39825aa3 100644
--- a/src/Simplex/FileTransfer/Server/Main.hs
+++ b/src/Simplex/FileTransfer/Server/Main.hs
@@ -28,11 +28,12 @@ import Options.Applicative
 import Simplex.FileTransfer.Chunks
 import Simplex.FileTransfer.Description (FileSize (..))
 import Simplex.FileTransfer.Server (runXFTPServer)
-import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), XFTPStoreConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration)
+import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration, runWithStoreConfig, checkFileStoreMode)
 import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange)
 import qualified Simplex.Messaging.Crypto as C
 import Simplex.Messaging.Encoding.String
 import Simplex.Messaging.Protocol (ProtoServerWithAuth (..), pattern XFTPServer)
+import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation (..))
 import Simplex.Messaging.Server.CLI
 import Simplex.Messaging.Server.Expiration
 import Simplex.Messaging.Server.Information (ServerPublicInfo (..))
@@ -66,9 +67,9 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do
       doesFileExist iniFile >>= \case
         True -> genOnline cfgPath certOpts
         _ -> exitError $ "Error: server is not initialized (" <> iniFile <> " does not exist).\nRun `" <> executableName <> " init`."
-    Start ->
+    Start opts ->
       doesFileExist iniFile >>= \case
-        True -> readIniFile iniFile >>= either exitError runServer
+        True -> readIniFile iniFile >>= either exitError (runServer opts)
         _ -> exitError $ "Error: server is not initialized (" <> iniFile <> " does not exist).\nRun `" <> executableName <> " init`."
     Delete -> do
       confirmOrExit
@@ -126,6 +127,14 @@
           \# and restoring it when the server is started.\n\
           \# Log is compacted on start (deleted objects are removed).\n"
           <> ("enable: " <> onOff enableStoreLog <> "\n\n")
+          <> "# File storage mode: `memory` or `database` (PostgreSQL).\n\
+             \store_files: memory\n\n\
+             \# Database connection settings for PostgreSQL database (`store_files: database`).\n\
+             \# db_connection: postgresql://xftp@/xftp_server_store\n\
+             \# db_schema: xftp_server\n\
+             \# db_pool_size: 10\n\n\
+             \# Write database changes to store log file\n\
+             \# db_store_log: off\n\n"
           <> "# Expire files after the specified number of hours.\n"
           <> ("expire_files_hours: " <> tshow defFileExpirationHours <> "\n\n")
          <> "log_stats: off\n\
@@ -173,7 +182,7 @@
           \# TLS credentials for HTTPS web server on the same port as XFTP.\n\
           \# cert: " <> T.pack (cfgPath `combine` "web.crt") <> "\n\
           \# key: " <> T.pack (cfgPath `combine` "web.key") <> "\n"
-    runServer ini = do
+    runServer StartOptions {confirmMigrations} ini = do
       hSetBuffering stdout LineBuffering
       hSetBuffering stderr LineBuffering
       fp <- checkSavedFingerprint cfgPath defaultX509Config
@@ -194,8 +203,10 @@
           when (isJust webHttpPort || isJust webHttpsParams') $ serveStaticFiles EmbeddedWebParams {webStaticPath = path, webHttpPort, webHttpsParams = webHttpsParams'}
         Nothing -> pure ()
-      let storeCfg = XSCMemory $ storeLogFile serverConfig
-      runXFTPServer storeCfg serverConfig
+      let storeType = fromRight "memory" $ T.unpack <$> lookupValue "STORE_LOG" "store_files" ini
+      checkFileStoreMode ini storeType storeLogFilePath
+      runWithStoreConfig ini storeType (storeLogFile serverConfig) storeLogFilePath confirmMigrations $
+        \storeCfg -> runXFTPServer storeCfg serverConfig
       where
         isOnion = \case THOnionHost _ -> True; _ -> False
         enableStoreLog = settingIsOn "STORE_LOG" "enable" ini
@@ -290,9 +301,13 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do
 data CliCommand
   = Init InitOptions
   | OnlineCert CertOptions
-  | Start
+  | Start StartOptions
   | Delete

+newtype StartOptions = StartOptions
+  { confirmMigrations :: MigrationConfirmation
+  }
+
 data InitOptions = InitOptions
   { enableStoreLog :: Bool,
     signAlgorithm :: SignAlgorithm,
@@ -309,7 +324,7 @@ cliCommandP cfgPath logPath iniFile =
   hsubparser
     ( command "init" (info (Init <$> initP) (progDesc $ "Initialize server - creates " <> cfgPath <> " and " <> logPath <> " directories and configuration files"))
       <> command "cert" (info (OnlineCert <$> certOptionsP) (progDesc $ "Generate new online TLS server credentials (configuration: " <> iniFile <> ")"))
-      <> command "start" (info (pure Start) (progDesc $ "Start server (configuration: " <> iniFile <> ")"))
+      <> command "start" (info (Start <$> startOptsP) (progDesc $ "Start server (configuration: " <> iniFile <> ")"))
       <> command "delete" (info (pure Delete) (progDesc "Delete configuration and log files"))
     )
   where
@@ -376,3 +391,20 @@
             <> metavar "PATH"
         )
       pure InitOptions {enableStoreLog, signAlgorithm, ip, fqdn, filesPath, fileSizeQuota, webStaticPath}
+    startOptsP :: Parser StartOptions
+    startOptsP = do
+      confirmMigrations <-
+        option
+          parseConfirmMigrations
+          ( long "confirm-migrations"
+              <> metavar "CONFIRM_MIGRATIONS"
+              <> help "Confirm PostgreSQL database migration: up, down (default is manual confirmation)"
+              <> value MCConsole
+          )
+      pure StartOptions {confirmMigrations}
+      where
+        parseConfirmMigrations :: ReadM MigrationConfirmation
+        parseConfirmMigrations = eitherReader $ \case
+          "up" -> Right MCYesUp
+          "down" -> Right MCYesUpDown
+          _ -> Left "invalid migration confirmation, pass 'up' or 'down'"

From aacd873dff460118f45db4eced981baea88f4f7f Mon Sep 17 00:00:00 2001
From: shum
Date: Wed, 1 Apr 2026 14:54:40 +0000
Subject: [PATCH 13/24] feat: add database import/export CLI commands

---
 src/Simplex/FileTransfer/Server/Env.hs        |  24 ++-
 src/Simplex/FileTransfer/Server/Main.hs       |  33 +++-
 .../FileTransfer/Server/Store/Postgres.hs     | 154 +++++++++++++++++-
 3 files changed, 204 insertions(+), 7 deletions(-)

diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs
index e5289ecb6..73773ff88 100644
--- a/src/Simplex/FileTransfer/Server/Env.hs
+++ b/src/Simplex/FileTransfer/Server/Env.hs
@@ -20,6 +20,8 @@ module Simplex.FileTransfer.Server.Env
     newXFTPServerEnv,
     runWithStoreConfig,
     checkFileStoreMode,
+    importToDatabase,
+    exportFromDatabase,
   )
 where

 import Control.Logger.Simple
@@ -40,7 +42,7 @@ import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))
 import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation)
 #if defined(dbServerPostgres)
 import Data.Functor (($>))
-import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore)
+import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore, importFileStore, exportFileStore)
 import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg (..), defaultXFTPDBOpts)
 import Simplex.Messaging.Server.CLI (iniDBOptions, settingIsOn)
 import System.Directory (doesFileExist)
@@ -200,3 +202,23 @@ checkFileStoreMode ini storeType storeLogFilePath = case storeType of
 #else
 checkFileStoreMode _ _ _ = pure ()
 #endif
+
+-- | Import StoreLog to PostgreSQL database.
+importToDatabase :: FilePath -> Ini -> MigrationConfirmation -> IO ()
+#if defined(dbServerPostgres)
+importToDatabase storeLogFilePath ini _confirmMigrations = do
+  let dbCfg = PostgresFileStoreCfg {dbOpts = iniDBOptions ini defaultXFTPDBOpts, dbStoreLogPath = Nothing, confirmMigrations = _confirmMigrations}
+  importFileStore storeLogFilePath dbCfg
+#else
+importToDatabase _ _ _ = error "Error: server binary is compiled without support for PostgreSQL database.\nPlease re-compile with `cabal build -fserver_postgres`."
+#endif
+
+-- | Export PostgreSQL database to StoreLog.
+exportFromDatabase :: FilePath -> Ini -> MigrationConfirmation -> IO ()
+#if defined(dbServerPostgres)
+exportFromDatabase storeLogFilePath ini _confirmMigrations = do
+  let dbCfg = PostgresFileStoreCfg {dbOpts = iniDBOptions ini defaultXFTPDBOpts, dbStoreLogPath = Nothing, confirmMigrations = _confirmMigrations}
+  exportFileStore storeLogFilePath dbCfg
+#else
+exportFromDatabase _ _ _ = error "Error: server binary is compiled without support for PostgreSQL database.\nPlease re-compile with `cabal build -fserver_postgres`."
+#endif
diff --git a/src/Simplex/FileTransfer/Server/Main.hs b/src/Simplex/FileTransfer/Server/Main.hs
index f39825aa3..9f5045300 100644
--- a/src/Simplex/FileTransfer/Server/Main.hs
+++ b/src/Simplex/FileTransfer/Server/Main.hs
@@ -12,7 +12,7 @@ module Simplex.FileTransfer.Server.Main
     xftpServerCLI_,
   )
 where

-import Control.Monad (when)
+import Control.Monad (unless, when)
 import Data.Either (fromRight)
 import Data.Functor (($>))
 import Data.Ini (lookupValue, readIniFile)
@@ -28,7 +28,7 @@ import Options.Applicative
 import Simplex.FileTransfer.Chunks
 import Simplex.FileTransfer.Description (FileSize (..))
 import Simplex.FileTransfer.Server (runXFTPServer)
-import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration, runWithStoreConfig, checkFileStoreMode)
+import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration, runWithStoreConfig, checkFileStoreMode, importToDatabase, exportFromDatabase)
 import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange)
 import qualified Simplex.Messaging.Crypto as C
 import Simplex.Messaging.Encoding.String
@@ -71,6 +71,10 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do
       doesFileExist iniFile >>= \case
         True -> readIniFile iniFile >>= either exitError (runServer opts)
         _ -> exitError $ "Error: server is not initialized (" <> iniFile <> " does not exist).\nRun `" <> executableName <> " init`."
+    Database cmd ->
+      doesFileExist iniFile >>= \case
+        True -> readIniFile iniFile >>= either exitError (runDatabaseCmd cmd)
+        _ -> exitError $ "Error: server is not initialized (" <> iniFile <> " does not exist).\nRun `" <> executableName <> " init`."
     Delete -> do
       confirmOrExit
         "WARNING: deleting the server will make all queues inaccessible, because the server identity (certificate fingerprint) will change.\nTHIS CANNOT BE UNDONE!"
@@ -85,6 +89,21 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do
     executableName = "file-server"
     storeLogFilePath = combine logPath "file-server-store.log"
     defaultStaticPath = combine logPath "www"
+    runDatabaseCmd cmd ini = case cmd of
+      SCImport -> do
+        storeLogExists <- doesFileExist storeLogFilePath
+        unless storeLogExists $ exitError $ "Error: store log file " <> storeLogFilePath <> " does not exist."
+        confirmOrExit
+          ("Import store log " <> storeLogFilePath <> " to PostgreSQL database?")
+          "Import cancelled."
+        importToDatabase storeLogFilePath ini MCYesUp
+      SCExport -> do
+        storeLogExists <- doesFileExist storeLogFilePath
+        when storeLogExists $ exitError $ "Error: store log file " <> storeLogFilePath <> " already exists."
+        confirmOrExit
+          ("Export PostgreSQL database to store log " <> storeLogFilePath <> "?")
+          "Export cancelled."
+        exportFromDatabase storeLogFilePath ini MCConsole
     initializeServer InitOptions {enableStoreLog, signAlgorithm, ip, fqdn, filesPath, fileSizeQuota, webStaticPath = webStaticPath_} = do
       clearDirIfExists cfgPath
       clearDirIfExists logPath
@@ -302,8 +321,11 @@ data CliCommand
   = Init InitOptions
   | OnlineCert CertOptions
   | Start StartOptions
+  | Database StoreCmd
   | Delete

+data StoreCmd = SCImport | SCExport
+
 newtype StartOptions = StartOptions
   { confirmMigrations :: MigrationConfirmation
   }
@@ -325,6 +347,7 @@ cliCommandP cfgPath logPath iniFile =
     ( command "init" (info (Init <$> initP) (progDesc $ "Initialize server - creates " <> cfgPath <> " and " <> logPath <> " directories and configuration files"))
       <> command "cert" (info (OnlineCert <$> certOptionsP) (progDesc $ "Generate new online TLS server credentials (configuration: " <> iniFile <> ")"))
       <> command "start" (info (Start <$> startOptsP) (progDesc $ "Start server (configuration: " <> iniFile <> ")"))
+      <> command "database" (info (Database <$> storeCmdP) (progDesc "Import/export file store to/from PostgreSQL database"))
       <> command "delete" (info (pure Delete) (progDesc "Delete configuration and log files"))
     )
   where
@@ -408,3 +431,9 @@
           "up" -> Right MCYesUp
           "down" -> Right MCYesUpDown
           _ -> Left "invalid migration confirmation, pass 'up' or 'down'"
+    storeCmdP :: Parser StoreCmd
+    storeCmdP =
+      hsubparser
+        ( command "import" (info (pure SCImport) (progDesc "Import store log file into PostgreSQL database"))
+            <> command "export" (info (pure SCExport) (progDesc "Export PostgreSQL database to store log file"))
+        )
diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs
index fea00fbc9..08ca4ce98 100644
--- a/src/Simplex/FileTransfer/Server/Store/Postgres.hs
+++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs
@@ -1,3 +1,4 @@
+{-# LANGUAGE BangPatterns #-}
 {-# LANGUAGE DataKinds #-}
 {-# LANGUAGE GADTs #-}
 {-# LANGUAGE LambdaCase #-}
@@ -13,6 +14,8 @@ module Simplex.FileTransfer.Server.Store.Postgres
     handleDuplicate,
     assertUpdated,
     withLog,
+    importFileStore,
+    exportFileStore,
   )
 where

 import
@@ -22,32 +25,45 @@
 import Control.Monad
 import Control.Monad.Except
 import Control.Monad.IO.Class
 import Control.Monad.Trans.Except (throwE)
+import Data.ByteString (ByteString)
+import Data.ByteString.Builder (Builder)
+import qualified Data.ByteString.Builder as BB
+import qualified Data.ByteString.Lazy as LB
 import Data.Functor (($>))
 import Data.Int (Int32, Int64)
+import Data.List (intersperse)
+import qualified Data.List.NonEmpty as L
+import qualified Data.Map.Strict as M
 import qualified Data.Set as S
 import Data.Text (Text)
 import Data.Word (Word32)
 import Database.PostgreSQL.Simple (Binary (..), Only (..), SqlError)
-import Database.PostgreSQL.Simple.Errors (ConstraintViolation (..), constraintViolation)
 import qualified Database.PostgreSQL.Simple as DB
+import qualified Database.PostgreSQL.Simple.Copy as DB
+import Database.PostgreSQL.Simple.Errors (ConstraintViolation (..), constraintViolation)
+import Database.PostgreSQL.Simple.ToField (Action (..), ToField (..))
 import GHC.IO (catchAny)
 import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..))
 import Simplex.FileTransfer.Server.Store
 import Simplex.FileTransfer.Server.Store.Postgres.Config
 import Simplex.FileTransfer.Server.Store.Postgres.Migrations (xftpServerMigrations)
+import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))
 import Simplex.FileTransfer.Server.StoreLog
 import Simplex.FileTransfer.Transport (XFTPErrorType (..))
 import Simplex.Messaging.Agent.Store.Postgres (closeDBStore, createDBStore)
 import Simplex.Messaging.Agent.Store.Postgres.Common (DBStore, withTransaction)
-import Simplex.Messaging.Agent.Store.Shared (MigrationConfig (..))
+import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..))
+import Simplex.Messaging.Agent.Store.Shared (MigrationConfig (..), MigrationConfirmation (..))
 import qualified Simplex.Messaging.Crypto as C
-import Simplex.Messaging.Protocol (SenderId)
+import Simplex.Messaging.Protocol (RcvPublicAuthKey, RecipientId, SenderId)
+import Simplex.Messaging.Transport (EntityId (..))
 import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
 import Simplex.Messaging.Server.QueueStore.Postgres ()
 import Simplex.Messaging.Server.StoreLog (openWriteStoreLog)
 import Simplex.Messaging.Util (tshow)
+import System.Directory (renameFile)
 import System.Exit (exitFailure)
-import System.IO (IOMode (..))
+import System.IO (IOMode (..), hFlush, stdout)
 import UnliftIO.STM

 data PostgresFileStore = PostgresFileStore
@@ -207,3 +223,133 @@ withLog :: MonadIO m => Text -> PostgresFileStore -> (StoreLog 'WriteMode -> IO ()) -> m ()
 withLog op PostgresFileStore {dbStoreLog} action =
   forM_ dbStoreLog $ \sl ->
     liftIO $ action sl `catchAny` \e -> logWarn $ "STORE: " <> op <> ", withLog, " <> tshow e
+
+-- Import: StoreLog -> PostgreSQL
+
+importFileStore :: FilePath -> PostgresFileStoreCfg -> IO ()
+importFileStore storeLogFilePath dbCfg = do
+  putStrLn $ "Reading store log: " <> storeLogFilePath
+  stmStore <- newFileStore () :: IO STMFileStore
+  sl <- readWriteFileStore storeLogFilePath stmStore
+  closeStoreLog sl
+  allFiles <- readTVarIO (files stmStore)
+  allRcps <- readTVarIO (recipients stmStore)
+  let fileCount = M.size allFiles
+      rcpCount = M.size allRcps
+  putStrLn $ "Loaded " <> show fileCount <> " files, " <> show rcpCount <> " recipients."
+  let dbCfg' = dbCfg {dbOpts = (dbOpts dbCfg) {createSchema = True}, confirmMigrations = MCYesUp}
+  pgStore <- newFileStore dbCfg' :: IO PostgresFileStore
+  putStrLn "Importing files..."
+  fCnt <- withTransaction (dbStore pgStore) $ \db -> do
+    DB.copy_
+      db
+      "COPY files (sender_id, file_size, file_digest, sender_key, file_path, created_at, status) FROM STDIN WITH (FORMAT csv)"
+    iforM_ (M.toList allFiles) $ \i (sId, fr) -> do
+      DB.putCopyData db =<< fileRecToCSV sId fr
+      when (i > 0 && i `mod` 10000 == 0) $ putStr (" " <> show i <> " files\r") >> hFlush stdout
+    DB.putCopyEnd db
+    [Only cnt] <- DB.query_ db "SELECT COUNT(*) FROM files"
+    pure (cnt :: Int64)
+  putStrLn $ "Imported " <> show fCnt <> " files."
+  putStrLn "Importing recipients..."
+  rCnt <- withTransaction (dbStore pgStore) $ \db -> do
+    DB.copy_
+      db
+      "COPY recipients (recipient_id, sender_id, recipient_key) FROM STDIN WITH (FORMAT csv)"
+    iforM_ (M.toList allRcps) $ \i (rId, (sId, rKey)) -> do
+      DB.putCopyData db $ recipientToCSV rId sId rKey
+      when (i > 0 && i `mod` 10000 == 0) $ putStr (" " <> show i <> " recipients\r") >> hFlush stdout
+    DB.putCopyEnd db
+    [Only cnt] <- DB.query_ db "SELECT COUNT(*) FROM recipients"
+    pure (cnt :: Int64)
+  putStrLn $ "Imported " <> show rCnt <> " recipients."
+  when (fromIntegral fileCount /= fCnt) $
+    putStrLn $ "WARNING: expected " <> show fileCount <> " files, got " <> show fCnt
+  when (fromIntegral rcpCount /= rCnt) $
+    putStrLn $ "WARNING: expected " <> show rcpCount <> " recipients, got " <> show rCnt
+  closeFileStore pgStore
+  renameFile storeLogFilePath (storeLogFilePath <> ".bak")
+  putStrLn $ "Store log renamed to " <> storeLogFilePath <> ".bak"
+
+-- Export: PostgreSQL -> StoreLog
+
+exportFileStore :: FilePath -> PostgresFileStoreCfg -> IO ()
+exportFileStore storeLogFilePath dbCfg = do
+  pgStore <- newFileStore dbCfg :: IO PostgresFileStore
+  sl <- openWriteStoreLog False storeLogFilePath
+  putStrLn "Exporting files..."
+  -- Load all recipients into a map for lookup
+  rcpMap <- withTransaction (dbStore pgStore) $ \db ->
+    DB.fold_
+      db
+      "SELECT recipient_id, sender_id, recipient_key FROM recipients ORDER BY sender_id"
+      M.empty
+      ( \acc (rId, sId, rKeyBs :: ByteString) ->
+          case C.decodePubKey rKeyBs of
+            Right rKey -> pure $! M.insertWith (++) sId [FileRecipient rId rKey] acc
+            Left _ -> putStrLn ("WARNING: invalid recipient key for " <> show rId) $> acc
+      )
+  -- Fold over files, writing StoreLog records
+  (!fCnt, !rCnt) <- withTransaction (dbStore pgStore) $ \db ->
+    DB.fold_
+      db
+      "SELECT sender_id, file_size, file_digest, sender_key, file_path, created_at, status FROM files ORDER BY created_at"
+      (0 :: Int, 0 :: Int)
+      ( \(!fc, !rc) (sId, size :: Int32, digest :: ByteString, sndKeyBs :: ByteString, path :: Maybe String, createdAt, status) ->
+          case C.decodePubKey sndKeyBs of
+            Right sndKey -> do
+              let fileInfo = FileInfo {sndKey, size = fromIntegral size, digest}
+              logAddFile sl sId fileInfo createdAt status
+              let rcps = M.findWithDefault [] sId rcpMap
+                  rc' = rc + length rcps
+              forM_ (L.nonEmpty rcps) $ logAddRecipients sl sId
+              forM_ path $ logPutFile sl sId
+              pure (fc + 1, rc')
+            Left _ -> do
+              putStrLn $ "WARNING: invalid sender key for " <> show sId
+              pure (fc, rc)
+      )
+  closeStoreLog sl
+  closeFileStore pgStore
+  putStrLn $ "Exported " <> show fCnt <> " files, " <> show rCnt <> " recipients to " <> storeLogFilePath
+
+-- CSV helpers for COPY protocol
+
+iforM_ :: Monad m => [a] -> (Int -> a -> m ()) -> m ()
+iforM_ xs f = zipWithM_ f [0 ..] xs
+
+fileRecToCSV :: SenderId -> FileRec -> IO ByteString
+fileRecToCSV sId FileRec {fileInfo = FileInfo {sndKey, size, digest}, filePath, createdAt, fileStatus} = do
+  path <- readTVarIO filePath
+  status <- readTVarIO fileStatus
+  pure $ LB.toStrict $ BB.toLazyByteString $ mconcat (BB.char7 ',' `intersperse` fields path status) <> BB.char7 '\n'
+  where
+    fields path status =
+      [ renderField (toField (Binary (unEntityId sId))),
+        renderField (toField (fromIntegral size :: Int32)),
+        renderField (toField (Binary digest)),
+        renderField (toField (Binary (C.encodePubKey sndKey))),
+        nullable (toField <$> path),
+        renderField (toField createdAt),
+        BB.char7 '"' <> renderField (toField status) <> BB.char7 '"'
+      ]
+
+recipientToCSV :: RecipientId -> SenderId -> RcvPublicAuthKey -> ByteString
+recipientToCSV rId sId rKey =
+  LB.toStrict $ BB.toLazyByteString $ mconcat (BB.char7 ',' `intersperse` fields) <> BB.char7 '\n'
+  where
+    fields =
+      [ renderField (toField (Binary (unEntityId rId))),
+        renderField (toField (Binary (unEntityId sId))),
+        renderField (toField (Binary (C.encodePubKey rKey)))
+      ]
+
+renderField :: Action -> Builder
+renderField = \case
+  Plain bld -> bld
+  Escape s -> BB.byteString s
+  EscapeByteA s -> BB.string7 "\\x" <> BB.byteStringHex s
+  EscapeIdentifier s -> BB.byteString s
+  Many as -> mconcat (map renderField as)
+
+nullable :: Maybe Action -> Builder
+nullable = maybe mempty renderField

From dea62cc349642a3654a39dc15e13c2dd94c66e27 Mon Sep 17 00:00:00 2001
From: shum
Date: Wed, 1 Apr 2026 15:52:01 +0000
Subject: [PATCH 14/24] test: add PostgreSQL backend tests

---
 simplexmq.cabal                               |   1 +
 .../FileTransfer/Server/Store/Postgres.hs     |   2 +-
 src/Simplex/FileTransfer/Server/StoreLog.hs   |   1 +
 tests/CoreTests/XFTPStoreTests.hs             | 284 ++++++++++++++++++
 tests/Test.hs                                 |   9 +
 tests/XFTPClient.hs                           |  67 +++++
 tests/XFTPServerTests.hs                      | 118 +++++++-
 7 files changed, 480 insertions(+), 2 deletions(-)
 create mode 100644 tests/CoreTests/XFTPStoreTests.hs

diff --git a/simplexmq.cabal b/simplexmq.cabal
index 21b02ce0b..919dd272a 100644
--- a/simplexmq.cabal
+++ b/simplexmq.cabal
@@ -527,6 +527,7 @@ test-suite simplexmq-test
   if flag(server_postgres)
     other-modules:
       AgentTests.NotificationTests
+      CoreTests.XFTPStoreTests
       NtfClient
       NtfServerTests
       PostgresSchemaDump
diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs
index 08ca4ce98..b35e3cd72 100644
--- a/src/Simplex/FileTransfer/Server/Store/Postgres.hs
+++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs
@@ -180,7 +180,7 @@ instance FileStoreClass PostgresFileStore where
   getUsedStorage st =
     withTransaction (dbStore st) $ \db -> do
-      [Only total] <- DB.query_ db "SELECT COALESCE(SUM(file_size::INT8), 0) FROM files"
+      [Only total] <- DB.query_ db "SELECT COALESCE(SUM(file_size::INT8), 0)::INT8 FROM files"
       pure total

   getFileCount st =
diff --git a/src/Simplex/FileTransfer/Server/StoreLog.hs b/src/Simplex/FileTransfer/Server/StoreLog.hs
index dc65e4a22..a6747257b 100644
--- a/src/Simplex/FileTransfer/Server/StoreLog.hs
+++ b/src/Simplex/FileTransfer/Server/StoreLog.hs
@@ -10,6 +10,7 @@ module Simplex.FileTransfer.Server.StoreLog
     FileStoreLogRecord (..),
     closeStoreLog,
     readWriteFileStore,
+    writeFileStore,
     logAddFile,
     logPutFile,
     logAddRecipients,
diff --git a/tests/CoreTests/XFTPStoreTests.hs b/tests/CoreTests/XFTPStoreTests.hs
new file mode 100644
index 000000000..91e395976
--- /dev/null
+++ b/tests/CoreTests/XFTPStoreTests.hs
@@ -0,0 +1,284 @@
+{-# LANGUAGE DataKinds #-}
+{-# LANGUAGE NamedFieldPuns #-}
+{-# LANGUAGE OverloadedStrings #-}
+{-# LANGUAGE ScopedTypeVariables #-}
+
+module CoreTests.XFTPStoreTests (xftpStoreTests, xftpMigrationTests) where
+
+import Control.Monad
+import qualified Data.ByteString.Char8 as B
+import Data.Word (Word32)
+import qualified Data.Set as S
+import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..))
+import Simplex.FileTransfer.Server.Store
+import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore)
+import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg)
+import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))
+import Simplex.FileTransfer.Server.StoreLog
+import Simplex.FileTransfer.Transport (XFTPErrorType (..))
+import qualified Simplex.Messaging.Crypto as C
+import Simplex.Messaging.Protocol (BlockingInfo (..), BlockingReason (..), EntityId (..))
+import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
+import Simplex.Messaging.SystemTime (RoundedSystemTime (..))
+import Simplex.FileTransfer.Server.Store.Postgres (importFileStore, exportFileStore)
+import Simplex.FileTransfer.Server.StoreLog (readWriteFileStore, writeFileStore)
+import Simplex.Messaging.Server.StoreLog (openWriteStoreLog)
+import System.Directory (doesFileExist, removeFile)
+import Test.Hspec hiding (fit, it)
+import UnliftIO.STM
+import Util
+import XFTPClient (testXFTPPostgresCfg)
+
+xftpStoreTests :: Spec
+xftpStoreTests = describe "PostgresFileStore operations" $ do
+  it "should add and get file by sender" testAddGetFileSender
+  it "should add and get file by recipient" testAddGetFileRecipient
+  it "should reject duplicate file" testDuplicateFile
+  it "should return AUTH for nonexistent file" testGetNonexistent
+  it "should set file path with IS NULL guard" testSetFilePath
+  it "should reject duplicate recipient" testDuplicateRecipient
+  it "should delete file and cascade recipients" testDeleteFileCascade
+  it "should block file and update status" testBlockFile
+  it "should ack file reception" testAckFile
+  it "should return expired files with limit" testExpiredFiles
+  it "should compute used storage and file count" testStorageAndCount
+
+xftpMigrationTests :: Spec
+xftpMigrationTests = describe "XFTP migration round-trip" $ do
+  it "should export to StoreLog and import back to Postgres preserving data" testMigrationRoundTrip
+
+-- Test helpers
+
+withPgStore :: (PostgresFileStore -> IO ()) -> IO ()
+withPgStore test = do
+  st <- newFileStore testXFTPPostgresCfg :: IO PostgresFileStore
+  test st
+  closeFileStore st
+
+testSenderId :: EntityId
+testSenderId = EntityId "sender001_______"
+
+testRecipientId :: EntityId
+testRecipientId = EntityId "recipient001____"
+
+testRecipientId2 :: EntityId
+testRecipientId2 = EntityId "recipient002____"
+
+testFileInfo :: C.APublicAuthKey -> FileInfo
+testFileInfo sndKey =
+  FileInfo
+    { sndKey,
+      size = 128000 :: Word32,
+      digest = "test_digest_bytes_here___"
+    }
+
+testCreatedAt :: RoundedFileTime
+testCreatedAt = RoundedSystemTime 1000000
+
+-- Tests
+
+testAddGetFileSender :: Expectation
+testAddGetFileSender = withPgStore $ \st -> do
+  g <- C.newRandom
+  (sk, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
+  let fileInfo = testFileInfo sk
+  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
+  result <- getFile st SFSender testSenderId
+  case result of
+    Right (FileRec {senderId, fileInfo = fi, createdAt}, key) -> do
+      senderId `shouldBe` testSenderId
+      sndKey fi `shouldBe` sk
+      size fi `shouldBe` 128000
+      createdAt `shouldBe` testCreatedAt
+      key `shouldBe` sk
+    Left e -> expectationFailure $ "getFile failed: " <> show e
+
+testAddGetFileRecipient :: Expectation
+testAddGetFileRecipient = withPgStore $ \st -> do
+  g <- C.newRandom
+  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
+  (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
+  let fileInfo = testFileInfo sndKey
+  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
+  addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right ()
+  result <- getFile st SFRecipient testRecipientId
+  case result of
+    Right (FileRec {senderId}, key) -> do
+      senderId `shouldBe` testSenderId
+      key `shouldBe` rcpKey
+    Left e -> expectationFailure $ "getFile failed: " <> show e
+
+testDuplicateFile :: Expectation
+testDuplicateFile = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Left DUPLICATE_ + +testGetNonexistent :: Expectation +testGetNonexistent = withPgStore $ \st -> do + getFile st SFSender testSenderId >>= (`shouldBe` Left AUTH) . fmap (const ()) + getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . fmap (const ()) + +testSetFilePath :: Expectation +testSetFilePath = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + setFilePath st testSenderId "/tmp/test_file" `shouldReturn` Right () + -- Second setFilePath should fail (file_path IS NULL guard) + setFilePath st testSenderId "/tmp/other_file" `shouldReturn` Left AUTH + -- Verify path was set + result <- getFile st SFSender testSenderId + case result of + Right (FileRec {filePath}, _) -> readTVarIO filePath `shouldReturn` Just "/tmp/test_file" + Left e -> expectationFailure $ "getFile failed: " <> show e + +testDuplicateRecipient :: Expectation +testDuplicateRecipient = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Left DUPLICATE_ + +testDeleteFileCascade :: Expectation +testDeleteFileCascade = withPgStore $ \st -> do + g <- C.newRandom + 
(sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right () + deleteFile st testSenderId `shouldReturn` Right () + -- File and recipient should both be gone + getFile st SFSender testSenderId >>= (`shouldBe` Left AUTH) . fmap (const ()) + getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . fmap (const ()) + +testBlockFile :: Expectation +testBlockFile = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + let blockInfo = BlockingInfo {reason = BRContent, notice = Nothing} + blockFile st testSenderId blockInfo False `shouldReturn` Right () + result <- getFile st SFSender testSenderId + case result of + Right (FileRec {fileStatus}, _) -> readTVarIO fileStatus `shouldReturn` EntityBlocked blockInfo + Left e -> expectationFailure $ "getFile failed: " <> show e + +testAckFile :: Expectation +testAckFile = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right () + ackFile st testRecipientId `shouldReturn` Right () + -- Recipient gone, but file still exists + getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . 
fmap (const ()) + result <- getFile st SFSender testSenderId + case result of + Right _ -> pure () + Left e -> expectationFailure $ "getFile failed: " <> show e + +testExpiredFiles :: Expectation +testExpiredFiles = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + oldTime = RoundedSystemTime 100000 + newTime = RoundedSystemTime 999999999 + -- Add old and new files + addFile st (EntityId "old_file________") fileInfo oldTime EntityActive `shouldReturn` Right () + void $ setFilePath st (EntityId "old_file________") "/tmp/old" + addFile st (EntityId "new_file________") fileInfo newTime EntityActive `shouldReturn` Right () + -- Query expired with cutoff that only catches old file + expired <- expiredFiles st 500000 100 + length expired `shouldBe` 1 + case expired of + [(sId, path, sz)] -> do + sId `shouldBe` EntityId "old_file________" + path `shouldBe` Just "/tmp/old" + sz `shouldBe` 128000 + _ -> expectationFailure "expected 1 expired file" + +testStorageAndCount :: Expectation +testStorageAndCount = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + getUsedStorage st `shouldReturn` 0 + getFileCount st `shouldReturn` 0 + let fileInfo = testFileInfo sndKey + addFile st (EntityId "file_a__________") fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addFile st (EntityId "file_b__________") fileInfo testCreatedAt EntityActive `shouldReturn` Right () + getFileCount st `shouldReturn` 2 + used <- getUsedStorage st + used `shouldBe` 256000 -- 128000 * 2 + +-- Migration round-trip test + +testMigrationRoundTrip :: Expectation +testMigrationRoundTrip = do + let storeLogPath = "tests/tmp/xftp-migration-test.log" + storeLogPath2 = "tests/tmp/xftp-migration-test2.log" + -- 1. 
Create STM store with test data + stmStore <- newFileStore () :: IO STMFileStore + g <- C.newRandom + (sndKey1, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey1, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (sndKey2, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo1 = testFileInfo sndKey1 + fileInfo2 = FileInfo {sndKey = sndKey2, size = 64000, digest = "other_digest____________"} + sId1 = EntityId "migration_file_1" + sId2 = EntityId "migration_file_2" + rId1 = EntityId "migration_rcp_1_" + addFile stmStore sId1 fileInfo1 testCreatedAt EntityActive `shouldReturn` Right () + void $ setFilePath stmStore sId1 "/tmp/file1" + addRecipient stmStore sId1 (FileRecipient rId1 rcpKey1) `shouldReturn` Right () + let testBlockInfo = BlockingInfo {reason = BRSpam, notice = Nothing} + addFile stmStore sId2 fileInfo2 testCreatedAt (EntityBlocked testBlockInfo) `shouldReturn` Right () + -- 2. Write to StoreLog + sl <- openWriteStoreLog False storeLogPath + writeFileStore sl stmStore + closeStoreLog sl + -- 3. Import StoreLog to Postgres + importFileStore storeLogPath testXFTPPostgresCfg + -- StoreLog should be renamed to .bak + doesFileExist storeLogPath `shouldReturn` False + doesFileExist (storeLogPath <> ".bak") `shouldReturn` True + -- 4. Export from Postgres back to StoreLog + exportFileStore storeLogPath2 testXFTPPostgresCfg + -- 5. 
Read exported StoreLog into a new STM store and verify + stmStore2 <- newFileStore () :: IO STMFileStore + sl2 <- readWriteFileStore storeLogPath2 stmStore2 + closeStoreLog sl2 + -- Verify file 1 + result1 <- getFile stmStore2 SFSender sId1 + case result1 of + Right (FileRec {fileInfo = fi, filePath, fileStatus}, _) -> do + size fi `shouldBe` 128000 + readTVarIO filePath `shouldReturn` Just "/tmp/file1" + readTVarIO fileStatus `shouldReturn` EntityActive + Left e -> expectationFailure $ "getFile sId1 failed: " <> show e + -- Verify recipient + result1r <- getFile stmStore2 SFRecipient rId1 + case result1r of + Right (_, key) -> key `shouldBe` rcpKey1 + Left e -> expectationFailure $ "getFile rId1 failed: " <> show e + -- Verify file 2 (blocked) + result2 <- getFile stmStore2 SFSender sId2 + case result2 of + Right (FileRec {fileInfo = fi, fileStatus}, _) -> do + size fi `shouldBe` 64000 + readTVarIO fileStatus `shouldReturn` EntityBlocked (BlockingInfo {reason = BRSpam, notice = Nothing}) + Left e -> expectationFailure $ "getFile sId2 failed: " <> show e + -- Cleanup + removeFile (storeLogPath <> ".bak") + removeFile storeLogPath2 diff --git a/tests/Test.hs b/tests/Test.hs index 63f97d807..830321561 100644 --- a/tests/Test.hs +++ b/tests/Test.hs @@ -46,12 +46,15 @@ import AgentTests.SchemaDump (schemaDumpTest) #endif #if defined(dbServerPostgres) +import CoreTests.XFTPStoreTests (xftpStoreTests, xftpMigrationTests) import NtfServerTests (ntfServerTests) import NtfClient (ntfTestServerDBConnectInfo, ntfTestStoreDBOpts) import PostgresSchemaDump (postgresSchemaDumpTest) import SMPClient (testServerDBConnectInfo, testStoreDBOpts) import Simplex.Messaging.Notifications.Server.Store.Migrations (ntfServerMigrations) import Simplex.Messaging.Server.QueueStore.Postgres.Migrations (serverMigrations) +import XFTPClient (testXFTPDBConnectInfo) +import XFTPServerTests (xftpServerTestsPg) #endif #if defined(dbPostgres) || defined(dbServerPostgres) @@ -152,6 +155,12 @@ main = do 
describe "XFTP file description" fileDescriptionTests describe "XFTP CLI" xftpCLITests describe "XFTP agent" xftpAgentTests +#if defined(dbServerPostgres) + around_ (postgressBracket testXFTPDBConnectInfo) $ do + describe "XFTP Postgres store operations" xftpStoreTests + describe "XFTP migration round-trip" xftpMigrationTests + describe "XFTP server (PostgreSQL backend)" xftpServerTestsPg +#endif #if defined(dbPostgres) describe "XFTP Web Client" $ xftpWebTests (dropAllSchemasExceptSystem testDBConnectInfo) #else diff --git a/tests/XFTPClient.hs b/tests/XFTPClient.hs index 6fcc32669..a9707af63 100644 --- a/tests/XFTPClient.hs +++ b/tests/XFTPClient.hs @@ -1,3 +1,4 @@ +{-# LANGUAGE CPP #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} @@ -15,11 +16,18 @@ import Simplex.FileTransfer.Client import Simplex.FileTransfer.Description import Simplex.FileTransfer.Server (runXFTPServerBlocking) import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), XFTPStoreConfig (..), defaultFileExpiration, defaultInactiveClientExpiration) +import Simplex.FileTransfer.Server.Store (FileStoreClass) import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange) import Simplex.Messaging.Protocol (XFTPServer) import Simplex.Messaging.Transport.HTTP2 (httpALPN) import Simplex.Messaging.Transport.Server import Test.Hspec hiding (fit, it) +#if defined(dbServerPostgres) +import Database.PostgreSQL.Simple (ConnectInfo (..), defaultConnectInfo) +import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg (..), defaultXFTPDBOpts) +import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..)) +import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation (..)) +#endif xftpTest :: HasCallStack => (HasCallStack => XFTPClient -> IO ()) -> Expectation xftpTest test = runXFTPTest test `shouldReturn` () @@ -192,3 +200,62 @@ testXFTPServerConfigEd25519SNI = { addCORSHeaders = True } } + +-- 
Store-parameterized server bracket + +withXFTPServerCfgStore :: (HasCallStack, FileStoreClass s) => XFTPStoreConfig s -> XFTPServerConfig -> (HasCallStack => ThreadId -> IO a) -> IO a +withXFTPServerCfgStore storeCfg cfg = + serverBracket + (\started -> runXFTPServerBlocking started storeCfg cfg) + (threadDelay 10000) + +withXFTPServerStore :: (HasCallStack, FileStoreClass s) => XFTPStoreConfig s -> IO a -> IO a +withXFTPServerStore storeCfg = withXFTPServerCfgStore storeCfg testXFTPServerConfig . const + +#if defined(dbServerPostgres) +testXFTPDBConnectInfo :: ConnectInfo +testXFTPDBConnectInfo = + defaultConnectInfo + { connectUser = "test_xftp_server_user", + connectDatabase = "test_xftp_server_db" + } + +testXFTPStoreDBOpts :: DBOpts +testXFTPStoreDBOpts = + defaultXFTPDBOpts + { connstr = "postgresql://test_xftp_server_user@/test_xftp_server_db", + schema = "xftp_server_test", + poolSize = 10, + createSchema = True + } + +testXFTPPostgresCfg :: PostgresFileStoreCfg +testXFTPPostgresCfg = + PostgresFileStoreCfg + { dbOpts = testXFTPStoreDBOpts, + dbStoreLogPath = Nothing, + confirmMigrations = MCYesUp + } + +withXFTPServerPg :: HasCallStack => IO a -> IO a +withXFTPServerPg = withXFTPServerStore (XSCDatabase testXFTPPostgresCfg) + +xftpTestPg :: HasCallStack => (HasCallStack => XFTPClient -> IO ()) -> Expectation +xftpTestPg test = runXFTPTestPg test `shouldReturn` () + +runXFTPTestPg :: HasCallStack => (HasCallStack => XFTPClient -> IO a) -> IO a +runXFTPTestPg test = withXFTPServerPg $ testXFTPClient test + +xftpTestPg2 :: HasCallStack => (HasCallStack => XFTPClient -> XFTPClient -> IO ()) -> Expectation +xftpTestPg2 test = xftpTestPgN 2 _test + where + _test [h1, h2] = test h1 h2 + _test _ = error "expected 2 handles" + +xftpTestPgN :: forall a. 
HasCallStack => Int -> (HasCallStack => [XFTPClient] -> IO a) -> IO a +xftpTestPgN nClients test = withXFTPServerPg $ run nClients [] + where + run :: Int -> [XFTPClient] -> IO a + run 0 hs = test hs + run n hs = testXFTPClient $ \h -> run (n - 1) (h : hs) +#endif diff --git a/tests/XFTPServerTests.hs b/tests/XFTPServerTests.hs index 0af3d7eca..3d58b3bc2 100644 --- a/tests/XFTPServerTests.hs +++ b/tests/XFTPServerTests.hs @@ -1,3 +1,4 @@ +{-# LANGUAGE CPP #-} {-# LANGUAGE DataKinds #-} {-# LANGUAGE DuplicateRecordFields #-} {-# LANGUAGE NamedFieldPuns #-} @@ -6,7 +7,11 @@ {-# LANGUAGE PatternSynonyms #-} {-# LANGUAGE ScopedTypeVariables #-} -module XFTPServerTests where +module XFTPServerTests (xftpServerTests +#if defined(dbServerPostgres) + , xftpServerTestsPg +#endif + ) where import AgentTests.FunctionalAPITests (runRight_) import Control.Concurrent (threadDelay) @@ -51,6 +56,10 @@ import Test.Hspec hiding (fit, it) import UnliftIO.STM import Util import XFTPClient +#if defined(dbServerPostgres) +import Simplex.FileTransfer.Server.Env (XFTPStoreConfig (..)) +import XFTPClient (testXFTPPostgresCfg, withXFTPServerCfgStore, xftpTestPg, xftpTestPg2, xftpTestPgN) +#endif xftpServerTests :: Spec xftpServerTests = @@ -598,3 +607,110 @@ testStaleWebSession = decoded <- either (error . show) pure $ C.unPad respBody decoded `shouldBe` smpEncode SESSION +#if defined(dbServerPostgres) +xftpServerTestsPg :: Spec +xftpServerTestsPg = + before_ (createDirectoryIfMissing False xftpServerFiles) . 
after_ (removeDirectoryRecursive xftpServerFiles) $ do + describe "XFTP file chunk delivery (PostgreSQL)" $ do + it "should create, upload and receive file chunk (1 client)" testFileChunkDeliveryPg + it "should create, upload and receive file chunk (2 clients)" testFileChunkDelivery2Pg + it "should create, add recipients, upload and receive file chunk" testFileChunkDeliveryAddRecipientsPg + it "should delete file chunk (1 client)" testFileChunkDeletePg + it "should delete file chunk (2 clients)" testFileChunkDelete2Pg + it "should acknowledge file chunk reception (1 client)" testFileChunkAckPg + it "should acknowledge file chunk reception (2 clients)" testFileChunkAck2Pg + it "should not allow uploading chunks after specified storage quota" testFileStorageQuotaPg + it "should expire chunks after set interval" testFileChunkExpirationPg + +testFileChunkDeliveryPg :: Expectation +testFileChunkDeliveryPg = xftpTestPg $ \c -> runRight_ $ runTestFileChunkDelivery c c + +testFileChunkDelivery2Pg :: Expectation +testFileChunkDelivery2Pg = xftpTestPg2 $ \s r -> runRight_ $ runTestFileChunkDelivery s r + +testFileChunkDeliveryAddRecipientsPg :: Expectation +testFileChunkDeliveryAddRecipientsPg = xftpTestPgN 4 $ \hs -> case hs of + [s, r1, r2, r3] -> runRight_ $ do + g <- liftIO C.newRandom + (sndKey, spKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey1, rpKey1) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey2, rpKey2) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey3, rpKey3) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + bytes <- liftIO $ createTestChunk testChunkPath + digest <- liftIO $ LC.sha256Hash <$> LB.readFile testChunkPath + let file = FileInfo {sndKey, size = chSize, digest} + chunkSpec = XFTPChunkSpec {filePath = testChunkPath, chunkOffset = 0, chunkSize = chSize} + (sId, [rId1]) <- createXFTPChunk s spKey file [rcvKey1] Nothing + [rId2, rId3] <- addXFTPRecipients s spKey sId [rcvKey2, rcvKey3] + uploadXFTPChunk s 
spKey sId chunkSpec + let testReceiveChunk r rpKey rId fPath = do + downloadXFTPChunk g r rpKey rId $ XFTPRcvChunkSpec fPath chSize digest + liftIO $ B.readFile fPath `shouldReturn` bytes + testReceiveChunk r1 rpKey1 rId1 "tests/tmp/received_chunk1" + testReceiveChunk r2 rpKey2 rId2 "tests/tmp/received_chunk2" + testReceiveChunk r3 rpKey3 rId3 "tests/tmp/received_chunk3" + _ -> error "expected 4 handles" + +testFileChunkDeletePg :: Expectation +testFileChunkDeletePg = xftpTestPg $ \c -> runRight_ $ runTestFileChunkDelete c c + +testFileChunkDelete2Pg :: Expectation +testFileChunkDelete2Pg = xftpTestPg2 $ \s r -> runRight_ $ runTestFileChunkDelete s r + +testFileChunkAckPg :: Expectation +testFileChunkAckPg = xftpTestPg $ \c -> runRight_ $ runTestFileChunkAck c c + +testFileChunkAck2Pg :: Expectation +testFileChunkAck2Pg = xftpTestPg2 $ \s r -> runRight_ $ runTestFileChunkAck s r + +testFileStorageQuotaPg :: Expectation +testFileStorageQuotaPg = do + let cfg = testXFTPServerConfig {fileSizeQuota = Just $ chSize * 2} + withXFTPServerCfgStore (XSCDatabase testXFTPPostgresCfg) cfg $ \_ -> + testXFTPClient $ \c -> runRight_ $ do + g <- liftIO C.newRandom + (sndKey, spKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey, rpKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + bytes <- liftIO $ createTestChunk testChunkPath + digest <- liftIO $ LC.sha256Hash <$> LB.readFile testChunkPath + let file = FileInfo {sndKey, size = chSize, digest} + chunkSpec = XFTPChunkSpec {filePath = testChunkPath, chunkOffset = 0, chunkSize = chSize} + download rId = do + downloadXFTPChunk g c rpKey rId $ XFTPRcvChunkSpec "tests/tmp/received_chunk1" chSize digest + liftIO $ B.readFile "tests/tmp/received_chunk1" `shouldReturn` bytes + (sId1, [rId1]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId1 chunkSpec + download rId1 + (sId2, [rId2]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId2 chunkSpec + download rId2 
+ (sId3, [_rId3]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId3 chunkSpec + `catchError` (liftIO . (`shouldBe` PCEProtocolError QUOTA)) + deleteXFTPChunk c spKey sId1 + uploadXFTPChunk c spKey sId3 chunkSpec + +testFileChunkExpirationPg :: Expectation +testFileChunkExpirationPg = + withXFTPServerCfgStore (XSCDatabase testXFTPPostgresCfg) testXFTPServerConfig {fileExpiration} $ \_ -> + testXFTPClient $ \c -> runRight_ $ do + g <- liftIO C.newRandom + (sndKey, spKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey, rpKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + bytes <- liftIO $ createTestChunk testChunkPath + digest <- liftIO $ LC.sha256Hash <$> LB.readFile testChunkPath + let file = FileInfo {sndKey, size = chSize, digest} + chunkSpec = XFTPChunkSpec {filePath = testChunkPath, chunkOffset = 0, chunkSize = chSize} + (sId, [rId]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId chunkSpec + downloadXFTPChunk g c rpKey rId $ XFTPRcvChunkSpec "tests/tmp/received_chunk1" chSize digest + liftIO $ B.readFile "tests/tmp/received_chunk1" `shouldReturn` bytes + liftIO $ threadDelay 1000000 + downloadXFTPChunk g c rpKey rId (XFTPRcvChunkSpec "tests/tmp/received_chunk2" chSize digest) + `catchError` (liftIO . (`shouldBe` PCEProtocolError AUTH)) + deleteXFTPChunk c spKey sId + `catchError` (liftIO . (`shouldBe` PCEProtocolError AUTH)) + where + fileExpiration = Just ExpirationConfig {ttl = 1, checkInterval = 1} +#endif + From d101a9b764b3b77671c98613f028077174b898d9 Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 12:40:40 +0000 Subject: [PATCH 15/24] fix: map ForeignKeyViolation to AUTH in addRecipient When a file is concurrently deleted while addRecipient runs, the FK constraint on recipients.sender_id raises ForeignKeyViolation. Previously this propagated as INTERNAL; now it returns AUTH (file not found). 
--- src/Simplex/FileTransfer/Server/Store/Postgres.hs | 1 + 1 file changed, 1 insertion(+) diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs index b35e3cd72..9677b646b 100644 --- a/src/Simplex/FileTransfer/Server/Store/Postgres.hs +++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs @@ -217,6 +217,7 @@ assertUpdated = (>>= \n -> when (n == 0) (throwE AUTH)) handleDuplicate :: SqlError -> IO (Either XFTPErrorType a) handleDuplicate e = case constraintViolation e of Just (UniqueViolation _) -> pure $ Left DUPLICATE_ + Just (ForeignKeyViolation _ _) -> pure $ Left AUTH _ -> E.throwIO e withLog :: MonadIO m => Text -> PostgresFileStore -> (StoreLog 'WriteMode -> IO ()) -> m () From dd395b4a06466321293a2c0d5b8ec3c157e49a6a Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 12:41:30 +0000 Subject: [PATCH 16/24] fix: only decrement usedStorage for uploaded files on expiration expireServerFiles unconditionally subtracted file_size from usedStorage for every expired file, including files that were never uploaded (no file_path). Since reserve only increments usedStorage during upload, expiring never-uploaded files caused usedStorage to drift negative. 
--- src/Simplex/FileTransfer/Server.hs | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs index dc10e7533..fd6781acb 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -659,7 +659,8 @@ expireServerFiles itemDelay expCfg = do removeFile fp `catch` \(e :: SomeException) -> logError $ "failed to remove expired file " <> tshow fp <> ": " <> tshow e withFileLog (`logDeleteFile` sId) void $ liftIO $ deleteFile st sId - atomically $ modifyTVar' us $ subtract (fromIntegral fileSize) + forM_ filePath_ $ \_ -> + atomically $ modifyTVar' us $ subtract (fromIntegral fileSize) incFileStat filesExpired unless (null expired) $ expireLoop st us old From 0d28333919fcc0c76feaafec2db27d335502dfed Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 12:42:16 +0000 Subject: [PATCH 17/24] fix: handle setFilePath error in receiveServerFile setFilePath result was discarded with void. If it failed (file deleted concurrently, or double-upload where file_path IS NULL guard rejected the second write), the server still reported FROk, incremented stats, and left usedStorage permanently inflated. Now the error is checked: on failure, reserved storage is released and AUTH is returned. 
--- src/Simplex/FileTransfer/Server.hs | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs index fd6781acb..125bc4600 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -560,12 +560,17 @@ processXFTPRequest HTTP2Body {bodyPart} = \case Right () -> do stats <- asks serverStats st <- asks store - withFileLog $ \sl -> logPutFile sl senderId fPath - void $ liftIO $ setFilePath st senderId fPath - incFileStat filesUploaded - incFileStat filesCount - liftIO $ atomicModifyIORef'_ (filesSize stats) (+ fromIntegral size) - pure FROk + liftIO (setFilePath st senderId fPath) >>= \case + Right () -> do + withFileLog $ \sl -> logPutFile sl senderId fPath + incFileStat filesUploaded + incFileStat filesCount + liftIO $ atomicModifyIORef'_ (filesSize stats) (+ fromIntegral size) + pure FROk + Left _e -> do + us <- asks usedStorage + atomically $ modifyTVar' us $ subtract (fromIntegral size) + pure $ FRErr AUTH Left e -> do us <- asks usedStorage atomically $ modifyTVar' us $ subtract (fromIntegral size) From e5f664815fd624885fff7e8cfa28bfac0c5529fa Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 12:43:27 +0000 Subject: [PATCH 18/24] fix: escape double quotes in COPY CSV status field The status field (e.g. "blocked,reason=spam,notice={...}") is quoted in CSV for COPY protocol, but embedded double quotes from BlockingInfo notice (JSON) were not escaped. This could break CSV parsing during import. Now double quotes are escaped as "" per CSV spec. 
--- src/Simplex/FileTransfer/Server/Store/Postgres.hs | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs index 9677b646b..1be3c228b 100644 --- a/src/Simplex/FileTransfer/Server/Store/Postgres.hs +++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs @@ -26,6 +26,7 @@ import Control.Monad.Except import Control.Monad.IO.Class import Control.Monad.Trans.Except (throwE) import Data.ByteString (ByteString) +import qualified Data.ByteString.Char8 as B import Data.ByteString.Builder (Builder) import qualified Data.ByteString.Builder as BB import qualified Data.ByteString.Lazy as LB @@ -331,7 +332,7 @@ fileRecToCSV sId FileRec {fileInfo = FileInfo {sndKey, size, digest}, filePath, renderField (toField (Binary (C.encodePubKey sndKey))), nullable (toField <$> path), renderField (toField createdAt), - BB.char7 '"' <> renderField (toField status) <> BB.char7 '"' + quotedField (toField status) ] recipientToCSV :: RecipientId -> SenderId -> RcvPublicAuthKey -> ByteString @@ -354,3 +355,10 @@ renderField = \case nullable :: Maybe Action -> Builder nullable = maybe mempty renderField + +quotedField :: Action -> Builder +quotedField a = BB.char7 '"' <> escapeQuotes (renderField a) <> BB.char7 '"' + where + escapeQuotes bld = + let bs = LB.toStrict $ BB.toLazyByteString bld + in BB.byteString $ B.concatMap (\c -> if c == '"' then "\"\"" else B.singleton c) bs From c1f978a4af6a7c3776e0a1be4ad0b165fa39e144 Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 12:44:14 +0000 Subject: [PATCH 19/24] fix: reject upload to blocked file in Postgres setFilePath In Postgres mode, getFile returns a snapshot TVar for fileStatus. If a file is blocked between getFile and setFilePath, the stale status check passes but the upload should be rejected. Added status = 'active' to the UPDATE WHERE clause so blocked files cannot receive uploads. 
--- src/Simplex/FileTransfer/Server/Store/Postgres.hs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs index 1be3c228b..30f75d7df 100644 --- a/src/Simplex/FileTransfer/Server/Store/Postgres.hs +++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs @@ -102,7 +102,7 @@ instance FileStoreClass PostgresFileStore where setFilePath st sId fPath = E.uninterruptibleMask_ $ runExceptT $ do assertUpdated $ withDB' "setFilePath" st $ \db -> - DB.execute db "UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL" (fPath, sId) + DB.execute db "UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL AND status = 'active'" (fPath, sId) withLog "setFilePath" st $ \s -> logPutFile s sId fPath addRecipient st senderId (FileRecipient rId rKey) = E.uninterruptibleMask_ $ runExceptT $ do From e831d5a022383c0c468dc7e30b45e8dd1d5d8167 Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 12:45:06 +0000 Subject: [PATCH 20/24] fix: add CHECK constraint on file_size > 0 Prevents negative or zero file_size values at the database level. Without this, corrupted data from import or direct DB access could cause incorrect storage accounting (getUsedStorage sums file_size, and expiredFiles casts to Word32 which wraps negative values). 
--- .../Server/Store/Postgres/Migrations.hs | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs b/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs index 1914ecbd6..84f6b209e 100644 --- a/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs +++ b/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs @@ -14,7 +14,8 @@ import Text.RawString.QQ (r) xftpSchemaMigrations :: [(String, Text, Maybe Text)] xftpSchemaMigrations = - [ ("20260325_initial", m20260325_initial, Nothing) + [ ("20260325_initial", m20260325_initial, Nothing), + ("20260402_file_size_check", m20260402_file_size_check, Just down_m20260402_file_size_check) ] -- | The list of migrations in ascending order by date @@ -45,3 +46,15 @@ CREATE TABLE recipients ( CREATE INDEX idx_recipients_sender_id ON recipients (sender_id); CREATE INDEX idx_files_created_at ON files (created_at); |] + +m20260402_file_size_check :: Text +m20260402_file_size_check = + [r| +ALTER TABLE files ADD CONSTRAINT check_file_size_positive CHECK (file_size > 0); +|] + +down_m20260402_file_size_check :: Text +down_m20260402_file_size_check = + [r| +ALTER TABLE files DROP CONSTRAINT check_file_size_positive; +|] From c306e9bcd3ae52bd2b44fed1d74aab37a4d9173d Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 12:45:52 +0000 Subject: [PATCH 21/24] fix: check for existing data before database import importFileStore now checks if the target database already contains files and aborts with an error. Previously, importing into a non-empty database would fail mid-COPY on duplicate primary keys, leaving the database in a partially imported state. 
--- src/Simplex/FileTransfer/Server/Store/Postgres.hs | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs index 30f75d7df..8922149bf 100644 --- a/src/Simplex/FileTransfer/Server/Store/Postgres.hs +++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs @@ -241,6 +241,11 @@ importFileStore storeLogFilePath dbCfg = do putStrLn $ "Loaded " <> show fileCount <> " files, " <> show rcpCount <> " recipients." let dbCfg' = dbCfg {dbOpts = (dbOpts dbCfg) {createSchema = True}, confirmMigrations = MCYesUp} pgStore <- newFileStore dbCfg' :: IO PostgresFileStore + existingCount <- getFileCount pgStore + when (existingCount > 0) $ do + putStrLn $ "WARNING: database already contains " <> show existingCount <> " files. Import will fail on duplicate keys." + putStrLn "Drop the existing schema first or use a fresh database." + exitFailure putStrLn "Importing files..." fCnt <- withTransaction (dbStore pgStore) $ \db -> do DB.copy_ From e659f4a64efdf15bf9ba19ff1fb2ab6bde946da8 Mon Sep 17 00:00:00 2001 From: shum Date: Thu, 2 Apr 2026 13:16:58 +0000 Subject: [PATCH 22/24] fix: clean up disk file when setFilePath fails in receiveServerFile When setFilePath fails (file deleted or blocked concurrently, or duplicate upload), the uploaded file was left orphaned on disk with no DB record pointing to it. Now the file is removed on failure, matching the cleanup in the receiveChunk error path. 
---
 src/Simplex/FileTransfer/Server.hs | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs
index 125bc4600..4faed8499 100644
--- a/src/Simplex/FileTransfer/Server.hs
+++ b/src/Simplex/FileTransfer/Server.hs
@@ -570,6 +570,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case
         Left _e -> do
           us <- asks usedStorage
           atomically $ modifyTVar' us $ subtract (fromIntegral size)
+          liftIO $ whenM (doesFileExist fPath) (removeFile fPath) `catch` logFileError
           pure $ FRErr AUTH
       Left e -> do
         us <- asks usedStorage

From 1c6f68873a696f9816badb51716fcc8d77238452 Mon Sep 17 00:00:00 2001
From: shum
Date: Thu, 2 Apr 2026 13:21:55 +0000
Subject: [PATCH 23/24] fix: check storeAction result in
 deleteOrBlockServerFile_

The store action result (deleteFile/blockFile) was discarded with void.
If the DB row was already deleted by a concurrent operation, the
function still decremented usedStorage, causing drift. Now the error
propagates via ExceptT, skipping the usedStorage adjustment.
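Why ExceptT fixes it: wrapping an `IO (Either e a)` step in `ExceptT`
makes a `Left` short-circuit the rest of the do-block, so everything
after the store action is skipped on failure. A minimal sketch,
assuming mtl's Control.Monad.Except:

```haskell
import Control.Monad.Except (ExceptT (..), runExceptT)
import Control.Monad.IO.Class (liftIO)

-- The wrapped action returns Left, so the following liftIO step never runs
-- and the whole computation returns Left "AUTH".
demo :: IO (Either String ())
demo = runExceptT $ do
  ExceptT $ pure (Left "AUTH") -- short-circuits here
  liftIO $ putStrLn "adjusting usedStorage" -- skipped on failure
```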
---
 src/Simplex/FileTransfer/Server.hs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs
index 4faed8499..7ea6120ab 100644
--- a/src/Simplex/FileTransfer/Server.hs
+++ b/src/Simplex/FileTransfer/Server.hs
@@ -630,7 +630,7 @@ deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExce
   stats <- asks serverStats
   ExceptT $ first (\(_ :: SomeException) -> FILE_IO) <$> try (forM_ path $ \p -> whenM (doesFileExist p) (removeFile p >> deletedStats stats))
   st <- asks store
-  void $ liftIO $ storeAction st
+  ExceptT $ liftIO $ storeAction st
   forM_ path $ \_ -> do
     us <- asks usedStorage
     atomically $ modifyTVar' us $ subtract (fromIntegral $ size fileInfo)

From 464e083c3a0b1d94617467b4218c8bed080f9423 Mon Sep 17 00:00:00 2001
From: shum
Date: Thu, 2 Apr 2026 15:50:14 +0000
Subject: [PATCH 24/24] fix: check deleteFile result in expireServerFiles
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

deleteFile result was discarded with void. If a concurrent delete
already removed the file, deleteFile returned AUTH but usedStorage was
still decremented — causing double-decrement drift. Now the usedStorage
adjustment and filesExpired stat only run on success.
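The invariant can be modeled in isolation: tie the decrement to the
successful ownership transition, so two racing deletes subtract at most
once. A standalone STM sketch (toy types, not the server's store):

```haskell
import Control.Concurrent.STM

-- One logical delete: the first caller flips the flag and decrements
-- the counter; any later caller sees Left and leaves it untouched.
deleteOnce :: TVar Bool -> TVar Int -> Int -> STM (Either String ())
deleteOnce presentVar usedVar size = do
  present <- readTVar presentVar
  if present
    then do
      writeTVar presentVar False
      modifyTVar' usedVar (subtract size)
      pure (Right ())
    else pure (Left "AUTH")
```

In the server the DB delete and the counter live in different systems,
so the same effect is achieved by checking the Either result before
adjusting the counter.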
---
 src/Simplex/FileTransfer/Server.hs | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs
index 7ea6120ab..fc57b777a 100644
--- a/src/Simplex/FileTransfer/Server.hs
+++ b/src/Simplex/FileTransfer/Server.hs
@@ -664,10 +664,12 @@ expireServerFiles itemDelay expCfg = do
           whenM (doesFileExist fp) $
             removeFile fp `catch` \(e :: SomeException) -> logError $ "failed to remove expired file " <> tshow fp <> ": " <> tshow e
         withFileLog (`logDeleteFile` sId)
-        void $ liftIO $ deleteFile st sId
-        forM_ filePath_ $ \_ ->
-          atomically $ modifyTVar' us $ subtract (fromIntegral fileSize)
-        incFileStat filesExpired
+        liftIO (deleteFile st sId) >>= \case
+          Right () -> do
+            forM_ filePath_ $ \_ ->
+              atomically $ modifyTVar' us $ subtract (fromIntegral fileSize)
+            incFileStat filesExpired
+          Left _ -> pure ()
       unless (null expired) $ expireLoop st us old
 
 randomId :: Int -> M s ByteString