diff --git a/plans/2026-03-25-xftp-postgres-backend-design.md b/plans/2026-03-25-xftp-postgres-backend-design.md new file mode 100644 index 0000000000..78a32a5075 --- /dev/null +++ b/plans/2026-03-25-xftp-postgres-backend-design.md @@ -0,0 +1,472 @@ +# XFTP Server PostgreSQL Backend + +## Overview + +Add PostgreSQL backend support to xftp-server, following the SMP server pattern. Supports bidirectional migration between STM (in-memory with StoreLog) and PostgreSQL backends. + +## Goals + +- PostgreSQL-backed file metadata storage as an alternative to STM + StoreLog +- Polymorphic server code via `FileStoreClass` typeclass with IO-based methods (following `QueueStoreClass` pattern) +- Bidirectional migration: StoreLog <-> PostgreSQL via CLI commands +- Shared `server_postgres` cabal flag (same flag enables both SMP and XFTP Postgres support) +- INI-based backend selection at runtime + +## Architecture + +### FileStoreClass Typeclass + +IO-based typeclass following the `QueueStoreClass` pattern — each method is a self-contained IO action, with the implementation responsible for its own atomicity (STM backend wraps in `atomically`, Postgres backend uses database transactions): + +```haskell +class FileStoreClass s where + type FileStoreConfig s + + -- Lifecycle + newFileStore :: FileStoreConfig s -> IO s + closeFileStore :: s -> IO () + + -- File operations + addFile :: s -> SenderId -> FileInfo -> RoundedSystemTime -> ServerEntityStatus -> IO (Either XFTPErrorType ()) + setFilePath :: s -> SenderId -> FilePath -> IO (Either XFTPErrorType ()) + addRecipient :: s -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ()) + getFile :: s -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, C.APublicAuthKey)) + deleteFile :: s -> SenderId -> IO (Either XFTPErrorType ()) + blockFile :: s -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ()) + deleteRecipient :: s -> RecipientId -> FileRec -> IO () + ackFile :: s -> RecipientId -> IO (Either
XFTPErrorType ()) + + -- Expiration (with LIMIT for Postgres; called in a loop until empty) + expiredFiles :: s -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] + + -- Storage and stats (for init-time computation) + getUsedStorage :: s -> IO Int64 + getFileCount :: s -> IO Int +``` + +- STM backend: each method wraps its STM transaction in `atomically` internally. +- Postgres backend: each method runs its query via `withDB` / database connection internally. + +No polymorphic monad or `runStore` dispatcher needed — unlike `MsgStoreClass`, XFTP file operations are individually atomic and don't require grouping multiple operations into backend-dependent transactions. + +### PostgresFileStore Data Type + +```haskell +data PostgresFileStore = PostgresFileStore + { dbStore :: DBStore, + dbStoreLog :: Maybe (StoreLog 'WriteMode) + } +``` + +- `dbStore` — connection pool created via `createDBStore`, runs schema migrations on init. +- `dbStoreLog` — optional parallel log file (enabled by `db_store_log` INI setting). When present, every mutation (`addFile`, `setFilePath`, `deleteFile`, `blockFile`, `addRecipient`, `ackFile`) also writes to this log via a `withLog` wrapper. `withLog` is called AFTER the DB operation succeeds (so the log reflects committed state only). Log write failures are non-fatal (logged as warnings, do not fail the DB operation). This provides an audit trail and enables recovery via export. + +`closeFileStore` for Postgres calls `closeDBStore` (closes connection pool) then `mapM_ closeStoreLog dbStoreLog` (flushes and closes the parallel log). For STM, it closes the storeLog. Called from a `finally` block during server shutdown, matching SMP's `stopServer` → `closeMsgStore` → `closeQueueStore` pattern. 
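A minimal sketch of the `withLog` wrapper described above (the helper names `writeEntry` and `warn` are assumptions, not the actual codebase API): by the time it runs, the DB write has already committed, and a failed log append is reported but never propagated.

```haskell
import Control.Exception (SomeException, try)
import Data.Foldable (forM_)

-- Sketch, assuming this shape: append one entry to the optional parallel
-- store log. 'writeEntry' stands in for the StoreLog append and 'warn' for
-- the server logger. A log failure is reported and swallowed, so it can
-- never fail the DB operation that already succeeded.
withLog :: Maybe l -> (l -> IO ()) -> (String -> IO ()) -> IO ()
withLog mLog writeEntry warn = forM_ mLog $ \sl -> do
  r <- try (writeEntry sl) :: IO (Either SomeException ())
  case r of
    Left e -> warn ("STORE: log write failed: " <> show e)
    Right () -> pure ()
```

With `dbStoreLog = Nothing` the `forM_` makes the call a no-op, so mutation methods can invoke it unconditionally after the DB write.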
+ +### STMFileStore Type + +After extracting from current `Store.hs`, `STMFileStore` retains the file and recipient maps but no longer owns `usedStorage` (moved to `XFTPEnv`): + +```haskell +data STMFileStore = STMFileStore + { files :: TMap SenderId FileRec, + recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey) + } +``` + +`closeFileStore` for STM is a no-op (TMaps are garbage-collected; the env-level `storeLog` is closed separately by the server). + +### Error Handling + +Postgres operations follow SMP's `withDB` / `handleDuplicate` pattern: + +```haskell +withDB :: Text -> PostgresFileStore -> (DB.Connection -> IO (Either XFTPErrorType a)) -> ExceptT XFTPErrorType IO a +withDB op st action = + ExceptT $ E.try (withTransaction (dbStore st) action) >>= either logErr pure + where + logErr :: E.SomeException -> IO (Either XFTPErrorType a) + logErr e = logError ("STORE: " <> err) $> Left INTERNAL + where + err = op <> ", withDB, " <> tshow e + +handleDuplicate :: SqlError -> IO (Either XFTPErrorType a) +handleDuplicate e = case constraintViolation e of + Just (UniqueViolation _) -> pure $ Left DUPLICATE_ + _ -> E.throwIO e +``` + +- All DB operations wrapped in `withDB` — catches exceptions, logs, returns `INTERNAL`. +- Unique constraint violations caught by `handleDuplicate` and mapped to `DUPLICATE_`. +- UPDATE operations verified with `assertUpdated` — returns `AUTH` if 0 rows affected (matching SMP pattern, prevents silent failures when WHERE clause doesn't match). +- Critical sections (DB write + TVar update) wrapped in `uninterruptibleMask_` to prevent async exceptions from leaving inconsistent state between DB and TVars. 
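A sketch of the assumed `assertUpdated` helper (`XFTPErrorType` is reduced here to a stand-in with only the constructors used): postgresql-simple's `execute` returns the affected row count, and zero means the `WHERE` clause matched nothing.

```haskell
import Data.Int (Int64)

-- Stand-in for the real XFTPErrorType (only the constructors used here).
data XFTPErrorType = AUTH | DUPLICATE_ | INTERNAL deriving (Eq, Show)

-- Sketch mirroring SMP's assertUpdated: an UPDATE affecting 0 rows (row
-- missing, or a guard such as "file_path IS NULL" filtered it out) becomes
-- AUTH instead of a silent success.
assertUpdated :: Int64 -> Either XFTPErrorType ()
assertUpdated 0 = Left AUTH
assertUpdated _ = Right ()
```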
+ +### FileRec and TVar Fields + +`FileRec` retains its `TVar` fields (matching SMP's `PostgresQueue` pattern): + +```haskell +data FileRec = FileRec + { senderId :: SenderId, + fileInfo :: FileInfo, + filePath :: TVar (Maybe FilePath), + recipientIds :: TVar (Set RecipientId), + createdAt :: RoundedSystemTime, + fileStatus :: TVar ServerEntityStatus + } +``` + +- **STM backend**: TVars are the source of truth, as currently. +- **Postgres backend**: `getFile` reads from DB and creates a `FileRec` with fresh TVars populated from the DB row (matching SMP's `mkQ` pattern — `newTVarIO` per load). Mutation methods (`setFilePath`, `blockFile`, etc.) update both the DB (persistence) and the TVars (in-session consistency). The `recipientIds` TVar is initialized to `S.empty` — no subquery needed because no server code reads `recipientIds` directly; all recipient operations go through the typeclass methods (`addRecipient`, `deleteRecipient`, `ackFile`), which query the `recipients` table for Postgres. + +### usedStorage Ownership + +`usedStorage :: TVar Int64` moves from the store to `XFTPEnv`. The store typeclass does **not** manage `usedStorage` — it only provides `getUsedStorage` for init-time computation. + +- **STM init**: StoreLog replay calls `setFilePath` (which only sets the filePath TVar — the STM `setFilePath` implementation is changed to **not** update `usedStorage`). Similarly, STM `deleteFile` (Store.hs line 117) and `blockFile` (line 125) are changed to **not** update `usedStorage` — the server handles all `usedStorage` adjustments externally. After replay, `getUsedStorage` computes the sum over all file sizes (matching current `countUsedStorage` behavior). +- **Postgres init**: `getUsedStorage` executes `SELECT COALESCE(SUM(file_size), 0) FROM files`. +- **Runtime**: Server manages `usedStorage` TVar directly for reserve/commit/rollback during uploads, and adjusts after `deleteFile`/`blockFile` calls.
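The runtime bookkeeping above can be sketched as plain STM on the env-level TVar (function names and the exact quota check are assumptions; the real logic lives in `Server.hs`):

```haskell
import Control.Concurrent.STM
import Data.Int (Int64)

-- Reserve quota before accepting an upload; False means the configured
-- quota would be exceeded and the upload is rejected.
reserveStorage :: TVar Int64 -> Int64 -> Int64 -> STM Bool
reserveStorage usedStorage quota size = do
  used <- readTVar usedStorage
  if used + size > quota
    then pure False
    else True <$ writeTVar usedStorage (used + size)

-- Roll back a reservation on failed upload, or release storage after a
-- deleteFile / blockFile (deleted = True) store call succeeds.
releaseStorage :: TVar Int64 -> Int64 -> STM ()
releaseStorage usedStorage size = modifyTVar' usedStorage (subtract size)
```

On successful upload the reservation simply stays counted, so there is no separate commit step.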
+ +**Note on `getUsedStorage` semantics**: The current STM `countUsedStorage` sums all file sizes unconditionally (including files without `filePath` set, i.e., created but not yet uploaded). The Postgres `getUsedStorage` matches this: `SELECT SUM(file_size) FROM files` (no `WHERE file_path IS NOT NULL`). In practice, orphaned files (created but never uploaded) are rare and short-lived (expired within 48h), so the difference is negligible. A future improvement could filter by `file_path IS NOT NULL` in both backends to reflect actual disk usage more accurately. + +### Server.hs Refactoring + +`Server.hs` becomes polymorphic over `FileStoreClass s`. Since all typeclass methods are IO, call sites replace `atomically` with direct IO calls to the store. + +**Call sites requiring changes** (exhaustive list): + +1. **`receiveServerFile`** (line 563): `atomically $ writeTVar filePath (Just fPath)` → `setFilePath store senderId fPath`. The `reserve` logic (line 551-555) stays as direct TVar manipulation on `usedStorage` from `XFTPEnv`. + +2. **`verifyXFTPTransmission`** (line 453): `atomically $ verify =<< getFile st party fId` — the `getFile` call and subsequent `readTVar fileStatus` are in a single `atomically` block. Refactored to: `getFile st party fId` (IO), then `readTVarIO (fileStatus fr)` from the returned `FileRec` (safe for both backends — STM TVar is the source of truth, Postgres TVar is a fresh snapshot from DB). + +3. **`retryAdd`** (line 516): Signature `XFTPFileId -> STM (Either XFTPErrorType a)` → `XFTPFileId -> IO (Either XFTPErrorType a)`. The `atomically` call (line 520) replaced with `liftIO`. + +4. **`deleteOrBlockServerFile_`** (line 620): Parameter `FileStore -> STM (Either XFTPErrorType ())` → `FileStoreClass s => s -> IO (Either XFTPErrorType ())`. The `atomically` call (line 626) removed — the store method is already IO. After the store action, server adjusts `usedStorage` TVar in `XFTPEnv` based on `fileInfo.size`. + +5. 
**`ackFileReception`** (line 605): `atomically $ deleteRecipient st rId fr` → `deleteRecipient st rId fr`. + +6. **Control port `CPDelete`/`CPBlock`** (lines 371, 377): `atomically $ getFile fs SFRecipient fileId` → `getFile fs SFRecipient fileId`. + +7. **`expireServerFiles`** (line 636): Replace per-file `expiredFilePath` iteration with batched `expiredFiles st old batchSize`, which returns `[(SenderId, Maybe FilePath, Word32)]` — the `Word32` file size is needed so the server can adjust the `usedStorage` TVar after each deletion. Called in a loop until the returned list is empty. The `itemDelay` between files applies to the deletion loop over each batch, not the query itself. STM backend ignores the batch size limit (returns all expired files from TMap scan); Postgres uses `LIMIT`. + +8. **`restoreServerStats`** (line 694): `FileStore {files, usedStorage} <- asks store` accesses store fields directly. Refactored to: `usedStorage` from `XFTPEnv` via `asks usedStorage`, file count via `getFileCount store`. STM: `M.size <$> readTVarIO files`. Postgres: `SELECT COUNT(*) FROM files`. + +### Store Config Selection + +GADT in `Env.hs`: + +```haskell +data XFTPStoreConfig s where + XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore +#if defined(dbServerPostgres) + XSCDatabase :: PostgresFileStoreCfg -> XFTPStoreConfig PostgresFileStore +#endif +``` + +`XFTPEnv` becomes polymorphic: + +```haskell +data XFTPEnv s = XFTPEnv + { config :: XFTPServerConfig, + store :: s, + usedStorage :: TVar Int64, + storeLog :: Maybe (StoreLog 'WriteMode), + ... + } +``` + +The `M` monad (`ReaderT (XFTPEnv s) IO`) and all functions in `Server.hs` gain `FileStoreClass s =>` constraints. + +**StoreLog lifecycle per backend:** + +- **STM mode**: `storeLog = Just sl` (current behavior — append-only log for persistence and recovery). +- **Postgres mode**: `storeLog = Nothing` (main storeLog disabled — Postgres is the source of truth). 
The optional parallel `dbStoreLog` inside `PostgresFileStore` provides audit/recovery if enabled via `db_store_log` INI setting. + +The existing `withFileLog` pattern in Server.hs continues to work unchanged — it maps over `Maybe (StoreLog 'WriteMode)`, which is `Nothing` in Postgres mode so the calls become no-ops. + +### Main.hs Store Type Dispatch + +The `Start` CLI command gains a `--confirm-migrations` flag (default `MCConsole` — manual prompt, matching SMP's `StartOptions`). For automated deployments, `--confirm-migrations up` auto-applies forward migrations. The import command uses `MCYesUp` (always auto-apply). + +Following SMP's existential dispatch pattern (`AStoreType` + `run`), `Main.hs` selects the store type from INI config and dispatches to the polymorphic server: + +```haskell +runServer ini = do + let storeType = fromRight "memory" $ lookupValue "STORE_LOG" "store_files" ini + case storeType of + "memory" -> run $ XSCMemory (enableStoreLog $> storeLogFilePath) + "database" -> +#if defined(dbServerPostgres) + run $ XSCDatabase PostgresFileStoreCfg {..} +#else + exitError "server not compiled with Postgres support" +#endif + _ -> exitError $ "Invalid store_files value: " <> storeType + where + run :: FileStoreClass s => XFTPStoreConfig s -> IO () + run storeCfg = do + env <- newXFTPServerEnv storeCfg config + runReaderT (xftpServer config) env +``` + +**`newXFTPServerEnv` refactored signature:** + +```haskell +newXFTPServerEnv :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO (XFTPEnv s) +newXFTPServerEnv storeCfg config = do + (store, storeLog) <- case storeCfg of + XSCMemory storeLogPath -> do + st <- newFileStore () + sl <- mapM (`readWriteFileStore` st) storeLogPath + pure (st, sl) + XSCDatabase dbCfg -> do + st <- newFileStore dbCfg + pure (st, Nothing) -- main storeLog disabled for Postgres + usedStorage <- newTVarIO =<< getUsedStorage store + ... 
+ pure XFTPEnv {config, store, usedStorage, storeLog, ...} +``` + +### Startup Config Validation + +Following SMP's `checkMsgStoreMode` pattern, `Main.hs` validates config before starting: + +- **`store_files=database` + StoreLog file exists** (without `db_store_log=on`): Error — "StoreLog file present but store_files is `database`. Use `xftp-server database import` to migrate, or set `db_store_log: on`." +- **`store_files=database` + schema doesn't exist**: Error — "Create schema in PostgreSQL or use `xftp-server database import`." +- **`store_files=memory` + Postgres schema exists**: Warning — "Postgres schema exists but store_files is `memory`. Data in Postgres will not be used." +- **Binary compiled without `server_postgres` + `store_files=database`**: Error — "Server not compiled with Postgres support." + +## Module Structure + +``` +src/Simplex/FileTransfer/Server/ + Store.hs -- FileStoreClass typeclass + shared types (FileRec, FileRecipient, etc.) + Store/ + STM.hs -- STMFileStore (extracted from current Store.hs) + Postgres.hs -- PostgresFileStore [CPP-guarded] + Postgres/ + Migrations.hs -- Schema migrations [CPP-guarded] + Config.hs -- PostgresFileStoreCfg [CPP-guarded] + StoreLog.hs -- Unchanged (interchange format for both backends + migration) + Env.hs -- XFTPStoreConfig GADT, polymorphic XFTPEnv + Main.hs -- Store selection, migration CLI commands + Server.hs -- Polymorphic over FileStoreClass +``` + +## PostgreSQL Schema + +Initial migration (`20260325_initial`): + +```sql +CREATE TABLE files ( + sender_id BYTEA NOT NULL PRIMARY KEY, + file_size INT4 NOT NULL, + file_digest BYTEA NOT NULL, + sender_key BYTEA NOT NULL, + file_path TEXT, + created_at INT8 NOT NULL, + status TEXT NOT NULL DEFAULT 'active' +); + +CREATE TABLE recipients ( + recipient_id BYTEA NOT NULL PRIMARY KEY, + sender_id BYTEA NOT NULL REFERENCES files ON DELETE CASCADE, + recipient_key BYTEA NOT NULL +); + +CREATE INDEX idx_recipients_sender_id ON recipients (sender_id); +CREATE 
INDEX idx_files_created_at ON files (created_at); +``` + +- `file_size` is `INT4` matching `Word32` in `FileInfo.size` +- `sender_key` and `recipient_key` stored as `BYTEA` using binary encoding via `C.encodePubKey` / `C.decodePubKey` (matching SMP's `ToField`/`FromField` instances for `APublicAuthKey` — includes algorithm type tag in the binary format) +- `file_path` nullable (set after upload completes via `setFilePath`) +- `ON DELETE CASCADE` for recipients when file is hard-deleted +- `created_at` stores rounded epoch seconds (1-hour precision, `RoundedSystemTime`) +- `status` as TEXT via `StrEncoding` (`ServerEntityStatus`: `EntityActive`, `EntityBlocked info`, `EntityOff`) +- Hard deletes (no `deleted_at` column) +- No PL/pgSQL functions needed; `setFilePath` uses `WHERE file_path IS NULL` to prevent duplicate uploads (the `UPDATE` itself acquires a row-level lock) +- `used_storage` computed on startup: `SELECT COALESCE(SUM(file_size), 0) FROM files` (matches STM `countUsedStorage` — all files, see usedStorage Ownership section) + +### Migrations Module + +Following SMP's `QueueStore/Postgres/Migrations.hs` pattern: + +```haskell +module Simplex.FileTransfer.Server.Store.Postgres.Migrations + ( xftpServerMigrations, + ) +where + +import Data.List (sortOn) +import Data.Text (Text) +import Simplex.Messaging.Agent.Store.Shared +import Text.RawString.QQ (r) + +xftpSchemaMigrations :: [(String, Text, Maybe Text)] +xftpSchemaMigrations = + [ ("20260325_initial", m20260325_initial, Nothing) + ] + +xftpServerMigrations :: [Migration] +xftpServerMigrations = sortOn name $ map migration xftpSchemaMigrations + where + migration (name, up, down) = Migration {name, up, down} + +m20260325_initial :: Text +m20260325_initial = + [r| +CREATE TABLE files ( + sender_id BYTEA NOT NULL PRIMARY KEY, + ... +); + |] +``` + +The `Migration` type (from `Simplex.Messaging.Agent.Store.Shared`) has fields `{name :: String, up :: Text, down :: Maybe Text}`.
Initial migration has `Nothing` for `down`. Future migrations should include `Just down_migration` for rollback support. Called via `createDBStore dbOpts xftpServerMigrations (MigrationConfig confirmMigrations Nothing)`. + +### Postgres Operations + +Key query patterns: + +- **`addFile`**: `INSERT INTO files (...) VALUES (...)`, return `DUPLICATE_` on unique violation. +- **`setFilePath`**: `UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL`, verified with `assertUpdated` (returns `AUTH` if 0 rows affected — file not found or already uploaded). The `WHERE file_path IS NULL` prevents duplicate uploads; the `UPDATE` acquires a row lock implicitly. Only persists the path; `usedStorage` managed by server. +- **`addRecipient`**: `INSERT INTO recipients (...)`, mapping unique violations to `DUPLICATE_` via `handleDuplicate`. No need for `recipientIds` TVar update — Postgres derives it from the table. +- **`getFile`** (sender): `SELECT ... FROM files WHERE sender_id = ?`, returns auth key from `sender_key` column. +- **`getFile`** (recipient): `SELECT f.*, r.recipient_key FROM recipients r JOIN files f ON ... WHERE r.recipient_id = ?`. +- **`deleteFile`**: `DELETE FROM files WHERE sender_id = ?` (recipients cascade). +- **`blockFile`**: `UPDATE files SET status = ? WHERE sender_id = ?`. The store method only updates `status` — it never touches `filePath` or the file on disk; when `deleted = True`, the server adjusts `usedStorage` externally (matching the refactored STM `blockFile`, which also leaves `usedStorage` to the server). +- **`expiredFiles`**: `SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ? LIMIT ?` — batched query replaces per-file iteration, includes `file_size` for `usedStorage` adjustment. Called in a loop until no rows returned.
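The batch-drain pattern described for `expiredFiles` can be sketched as a generic driver (`expireLoop` is a hypothetical name; the real loop lives in `expireServerFiles`). Termination relies on `process` deleting each entry before the next fetch:

```haskell
-- Sketch: 'fetch' is the LIMIT-ed expiredFiles query and 'process' deletes
-- one file and adjusts usedStorage. Returns the number of entries handled;
-- stops as soon as a fetch returns an empty batch.
expireLoop :: Monad m => m [a] -> (a -> m ()) -> m Int
expireLoop fetch process = go 0
  where
    go n = do
      batch <- fetch
      if null batch
        then pure n
        else do
          mapM_ process batch
          go (n + length batch)
```

The STM backend, which ignores the limit and returns the full expired set, still terminates: everything is deleted in the first pass, so the second fetch is empty.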
+ +## INI Configuration + +New keys in `[STORE_LOG]` section: + +```ini +[STORE_LOG] +enable: on +store_files: memory # memory | database +db_connection: postgresql://xftp@/xftp_server_store +db_schema: xftp_server +db_pool_size: 10 +db_store_log: off +expire_files_hours: 48 +``` + +`store_files` selects the backend (`store_files` rather than `store_queues` because XFTP stores files, not queues): +- `memory` -> `XSCMemory` (current behavior) +- `database` -> `XSCDatabase` (requires `server_postgres` build flag) + +### INI Template Generation (`xftp-server init`) + +The `iniFileContent` function in `Main.hs` must be updated to generate the new keys in the `[STORE_LOG]` section. Following SMP's `iniDbOpts` pattern with `optDisabled'` (prefixes `"# "` when value equals default), Postgres keys are generated commented out by default: + +```ini +[STORE_LOG] +enable: on + +# File storage mode: `memory` or `database` (PostgreSQL). +store_files: memory + +# Database connection settings for PostgreSQL database (`store_files: database`). +# db_connection: postgresql://xftp@/xftp_server_store +# db_schema: xftp_server +# db_pool_size: 10 + +# Write database changes to store log file +# db_store_log: off + +expire_files_hours: 48 +``` + +Reuses `iniDBOptions` from `Simplex.Messaging.Server.CLI` for runtime parsing (falls back to defaults when keys are commented out or missing). `enableDbStoreLog'` pattern (`settingIsOn "STORE_LOG" "db_store_log"`) controls `dbStoreLogPath`. + +### PostgresFileStoreCfg + +```haskell +data PostgresFileStoreCfg = PostgresFileStoreCfg + { dbOpts :: DBOpts, + dbStoreLogPath :: Maybe FilePath, + confirmMigrations :: MigrationConfirmation + } +``` + +No `deletedTTL` (hard deletes). 
+ +### Default DB Options + +```haskell +defaultXFTPDBOpts :: DBOpts +defaultXFTPDBOpts = + DBOpts + { connstr = "postgresql://xftp@/xftp_server_store", + schema = "xftp_server", + poolSize = 10, + createSchema = False + } +``` + +## Migration CLI + +Bidirectional migration via StoreLog as interchange format: + +``` +xftp-server database import [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] +xftp-server database export [--database DB_CONN] [--schema DB_SCHEMA] [--pool-size N] +``` + +No `--table` flag needed (unlike SMP which has queues/messages/all) — XFTP has a single entity type (files + recipients, always migrated together). + +CLI options reuse `dbOptsP` parser from `Simplex.Messaging.Server.CLI`. + +### Import (StoreLog -> PostgreSQL) + +1. Confirm: prompt user with database connection details and StoreLog path +2. Read and replay StoreLog into temporary `STMFileStore` +3. Connect to PostgreSQL, run schema migrations (`createSchema = True`, `confirmMigrations = MCYesUp`) +4. Batch-insert file records into `files` table using PostgreSQL COPY protocol (matching SMP's `batchInsertQueues` pattern for performance). Progress reported every 10k files. +5. Batch-insert recipient records into `recipients` table using COPY protocol +6. Verify counts: `SELECT COUNT(*) FROM files` / `recipients` — warn if mismatch +7. Rename StoreLog to `.bak` (prevents accidental re-import, preserves original for rollback) +8. Report counts + +### Export (PostgreSQL -> StoreLog) + +1. Confirm: prompt user with database connection details and output path. Fail if output file already exists. +2. Connect to PostgreSQL +3. Open new StoreLog file for writing +4. Fold over all file records, writing per file (in this order, matching existing `writeFileStore`): `AddFile` (with `ServerEntityStatus` — this preserves `EntityBlocked` state), `AddRecipients`, then `PutFile` (if `file_path` is set) +5. 
Report counts + +Note: `AddFile` carries `ServerEntityStatus` which includes `EntityBlocked info`, so blocking state is preserved through export/import without needing separate `BlockFile` log entries. + +File data on disk is untouched by migration — only metadata moves between backends. + +## Cabal Integration + +Shared `server_postgres` flag. New Postgres modules added to existing conditional block: + +```cabal +if flag(server_postgres) + cpp-options: -DdbServerPostgres + exposed-modules: + ...existing SMP modules... + Simplex.FileTransfer.Server.Store.Postgres + Simplex.FileTransfer.Server.Store.Postgres.Migrations + Simplex.FileTransfer.Server.Store.Postgres.Config +``` + +CPP guards (`#if defined(dbServerPostgres)`) in: +- `Store.hs` — Postgres `FromField`/`ToField` instances for XFTP-specific types if needed +- `Env.hs` — `XSCDatabase` constructor +- `Main.hs` — database CLI commands, store selection for `database` mode, Postgres imports +- `Server.hs` — Postgres-specific imports if needed + +## Testing + +- **Parameterized server tests**: Existing `xftpServerTests` refactored to accept a store type parameter (following SMP's `SpecWith (ASrvTransport, AStoreType)` pattern). The same server tests run against both STM and Postgres backends — STM tests run unconditionally, Postgres tests added under `#if defined(dbServerPostgres)` with `postgressBracket` for database lifecycle (drop → create → test → drop). 
+- **Unit tests**: `PostgresFileStore` operations — add/get/delete/block/expire, duplicate detection, auth errors +- **Migration round-trip**: STM store → export to StoreLog → import to Postgres → export back → verify StoreLog equality (including blocked file status) +- **Tests location**: in `tests/` alongside existing XFTP tests, guarded by `server_postgres` CPP flag +- **Test database**: PostgreSQL on `localhost:5432`, using a dedicated `xftp_server_test` schema (dropped and recreated per test run via `postgressBracket`, following SMP's test database lifecycle pattern) +- **Test fixtures**: `testXFTPStoreDBOpts :: DBOpts` with `createSchema = True`, `confirmMigrations = MCYesUp`, in `tests/XFTPClient.hs` diff --git a/plans/2026-03-25-xftp-postgres-implementation-plan.md b/plans/2026-03-25-xftp-postgres-implementation-plan.md new file mode 100644 index 0000000000..2ae334670e --- /dev/null +++ b/plans/2026-03-25-xftp-postgres-implementation-plan.md @@ -0,0 +1,648 @@ +# XFTP PostgreSQL Backend — Implementation Plan + +> **For agentic workers:** REQUIRED: Use superpowers-extended-cc:subagent-driven-development (if subagents available) or superpowers-extended-cc:executing-plans to implement this plan. Steps use checkbox (`- [ ]`) syntax for tracking. + +**Goal:** Add PostgreSQL backend support to xftp-server as an alternative to STM + StoreLog, with bidirectional migration. + +**Architecture:** Introduce `FileStoreClass` typeclass (IO-based, following `QueueStoreClass` pattern). Extract current STM store into `Store/STM.hs`, make `Server.hs` polymorphic, then add `Store/Postgres.hs` behind `server_postgres` CPP flag. `usedStorage` moves from store to `XFTPEnv` so the server manages quota tracking externally. 
+ +**Tech Stack:** Haskell, postgresql-simple, STM, fourmolu, cabal with CPP flags + +**Design spec:** `plans/2026-03-25-xftp-postgres-backend-design.md` + +--- + +## File Structure + +**Existing files modified:** +- `src/Simplex/FileTransfer/Server/Store.hs` — rewritten: becomes typeclass + shared types +- `src/Simplex/FileTransfer/Server/Env.hs` — polymorphic `XFTPEnv s`, `XFTPStoreConfig` GADT +- `src/Simplex/FileTransfer/Server.hs` — polymorphic over `FileStoreClass s` +- `src/Simplex/FileTransfer/Server/StoreLog.hs` — update for IO store functions +- `src/Simplex/FileTransfer/Server/Main.hs` — INI config, dispatch, CLI commands +- `simplexmq.cabal` — new modules +- `tests/XFTPClient.hs` — Postgres test fixtures +- `tests/Test.hs` — Postgres test group + +**New files created:** +- `src/Simplex/FileTransfer/Server/Store/STM.hs` — `STMFileStore` (extracted from current `Store.hs`) +- `src/Simplex/FileTransfer/Server/Store/Postgres.hs` — `PostgresFileStore` [CPP-guarded] +- `src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs` — `PostgresFileStoreCfg` [CPP-guarded] +- `src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs` — schema SQL [CPP-guarded] +- `tests/CoreTests/XFTPStoreTests.hs` — Postgres store unit tests [CPP-guarded] + +--- + +## Task 1: Move `usedStorage` from `FileStore` to `XFTPEnv` + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` + +- [ ] **Step 1: Remove `usedStorage` from `FileStore` in `Store.hs`** + + 1. Remove `usedStorage :: TVar Int64` field from `FileStore` record (line 47). + 2. Remove `usedStorage <- newTVarIO 0` from `newFileStore` (line 75) and drop the field from the record construction (line 76). + 3. In `setFilePath` (line 92-97): remove `modifyTVar' (usedStorage st) (+ fromIntegral (size fileInfo))` — keep only `writeTVar filePath (Just fPath)`. 
Change pattern from `\FileRec {fileInfo, filePath}` to `\FileRec {filePath}` (fileInfo is now unused — `-Wunused-matches` error). + 4. In `deleteFile` (line 112-119): remove `modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo)`. Change outer pattern match from `FileStore {files, recipients, usedStorage}` to `FileStore {files, recipients}`. Change inner pattern from `Just FileRec {fileInfo, recipientIds}` to `Just FileRec {recipientIds}` (`fileInfo` is now unused — `-Wunused-matches` error). + 5. In `blockFile` (line 122-127): remove `when deleted $ modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo)`. Change pattern match from `st@FileStore {usedStorage}` to `st`. The `deleted` parameter and `fileInfo` in the inner pattern become unused — prefix with `_` or remove from pattern to avoid `-Wunused-matches`. + +- [ ] **Step 2: Add `usedStorage` to `XFTPEnv` in `Env.hs`** + + 1. Add `usedStorage :: TVar Int64` field to `XFTPEnv` record (between `store` and `storeLog`, line 93). + 2. In `newXFTPServerEnv` (line 112-126): replace lines 117-118: + ``` + used <- countUsedStorage <$> readTVarIO (files store) + atomically $ writeTVar (usedStorage store) used + ``` + with: + ``` + usedStorage <- newTVarIO =<< countUsedStorage <$> readTVarIO (files store) + ``` + 3. Add `usedStorage` to the `pure XFTPEnv {..}` construction. + +- [ ] **Step 3: Update all `usedStorage` access sites in `Server.hs`** + + 1. Line 552: `us <- asks $ usedStorage . store` → `us <- asks usedStorage`. + 2. Line 569: `us <- asks $ usedStorage . store` → `us <- asks usedStorage`. + 3. Line 639: `usedStart <- readTVarIO $ usedStorage st` → `usedStart <- readTVarIO =<< asks usedStorage`. + 4. Line 647: `usedEnd <- readTVarIO $ usedStorage st` → `usedEnd <- readTVarIO =<< asks usedStorage`. + 5. Line 694: `FileStore {files, usedStorage} <- asks store` → split into `FileStore {files} <- asks store` and `usedStorage <- asks usedStorage`. + 6. 
In `deleteOrBlockServerFile_` (line 620): after `void $ atomically $ storeAction st`, add usedStorage adjustment — `us <- asks usedStorage` then `atomically $ modifyTVar' us $ subtract (fromIntegral $ size fileInfo)` when file had a path (check `path` from `readTVarIO filePath` earlier in the function). + +- [ ] **Step 4: Build and verify** + + Run: `cabal build` + +- [ ] **Step 5: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 6: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git commit -m "refactor(xftp): move usedStorage from FileStore to XFTPEnv" + ``` + +--- + +## Task 2: Add `getUsedStorage`, `getFileCount`, `expiredFiles` functions + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` + +- [ ] **Step 1: Add three new functions to `Store.hs`** + + 1. Add to exports: `getUsedStorage`, `getFileCount`, `expiredFiles`. + 2. Remove `expiredFilePath` from exports AND delete the function definition (dead code → `-Wunused-binds` error). Also remove `($>>=)` from import `Simplex.Messaging.Util (ifM, ($>>=))` → `Simplex.Messaging.Util (ifM)` — `$>>=` was only used by `expiredFilePath`. + 3. Add import: `qualified Data.Map.Strict as M` (needed for `M.foldl'` in `getUsedStorage` and `M.toList` in `expiredFiles`). + 4. 
Implement: + ```haskell + getUsedStorage :: FileStore -> IO Int64 + getUsedStorage FileStore {files} = + M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 <$> readTVarIO files + + getFileCount :: FileStore -> IO Int + getFileCount FileStore {files} = M.size <$> readTVarIO files + + expiredFiles :: FileStore -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] + expiredFiles FileStore {files} old _limit = do + fs <- readTVarIO files + fmap catMaybes . forM (M.toList fs) $ \(sId, FileRec {fileInfo = FileInfo {size}, filePath, createdAt = RoundedSystemTime createdAt}) -> + if createdAt + fileTimePrecision < old + then do + path <- readTVarIO filePath + pure $ Just (sId, path, size) + else pure Nothing + ``` + 5. Add imports: `Data.Maybe (catMaybes)`, `Data.Word (Word32)` (note: `qualified Data.Map.Strict as M` already added in item 3). + +- [ ] **Step 2: Replace `countUsedStorage` in `Env.hs`** + + 1. Replace `countUsedStorage <$> readTVarIO (files store)` with `getUsedStorage store` in `newXFTPServerEnv`. + 2. Remove `countUsedStorage` function definition and its export. + 3. Remove `qualified Data.Map.Strict as M` import if no longer used. + +- [ ] **Step 3: Update `restoreServerStats` in `Server.hs` to use `getFileCount`** + + In `restoreServerStats` (line 694-696): replace `FileStore {files} <- asks store` and `_filesCount <- M.size <$> readTVarIO files` with `st <- asks store` and `_filesCount <- liftIO $ getFileCount st` (eliminates the `FileStore` pattern match — `files` binding no longer needed). + +- [ ] **Step 4: Replace `expireServerFiles` iteration in `Server.hs`** + + 1. Replace the body of `expireServerFiles` (lines 636-660). Remove `files' <- readTVarIO (files st)` and the `forM_ (M.keys files')` loop. + 2. New body: call `expiredFiles st old 10000` in a loop. 
For each `(sId, filePath_, fileSize)` in returned list: apply `itemDelay`, remove disk file if present, call `atomically $ deleteFile st sId`, adjust `usedStorage` TVar by `fileSize`, increment `filesExpired` stat. Loop until `expiredFiles` returns `[]`. + 3. Remove `Data.Map.Strict` import from Server.hs if no longer needed (was used for `M.size` and `M.keys` — now replaced by `getFileCount` and `expiredFiles`). + +- [ ] **Step 5: Build and verify** + + Run: `cabal build` + +- [ ] **Step 6: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs + git commit -m "refactor(xftp): add getUsedStorage, getFileCount, expiredFiles store functions" + ``` + +--- + +## Task 3: Change `Store.hs` functions from STM to IO + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` +- Modify: `src/Simplex/FileTransfer/Server/StoreLog.hs` + +- [ ] **Step 1: Change all Store.hs function signatures from STM to IO** + + For each of: `addFile`, `setFilePath`, `addRecipient`, `getFile`, `deleteFile`, `blockFile`, `deleteRecipient`, `ackFile`: + 1. Change return type from `STM (Either XFTPErrorType ...)` to `IO (Either XFTPErrorType ...)` (or `STM ()` to `IO ()` for `deleteRecipient`). + 2. Wrap the function body in `atomically $ do ...`. + 3. Keep `withFile` and `newFileRec` as internal STM helpers (called inside the `atomically` blocks). + +- [ ] **Step 2: Update Server.hs call sites — remove `atomically` wrappers** + + 1. 
Line 563 (`receiveServerFile`): change `atomically $ writeTVar filePath (Just fPath)` → add `st <- asks store` then `void $ liftIO $ setFilePath st senderId fPath` (design call site #1 — `store` is not in scope in `receiveServerFile`'s `receive` helper, so bind via `asks`; `void` avoids `-Wunused-do-bind` warning on the `Either` result). + 2. Line 453 (`verifyXFTPTransmission`): split `atomically $ verify =<< getFile st party fId` into: `liftIO (getFile st party fId)` (IO→M lift), then pattern match on result, use `readTVarIO (fileStatus fr)` instead of `readTVar`. + 3. Lines 371, 377 (control port `CPDelete`/`CPBlock`): change `ExceptT $ atomically $ getFile fs SFRecipient fileId` → `ExceptT $ liftIO $ getFile fs SFRecipient fileId` (inside `unliftIO u $ do` block which runs in M monad — `liftIO` required to lift IO into M). + 4. Line 508 (`addFile` in `createFile`): the `ExceptT $ addFile st sId file ts EntityActive` — `addFile` is now IO, `ExceptT` wraps IO directly. Remove any `atomically`. + 5. Line 514 (`addRecipient`): same — `ExceptT . addRecipient st sId` works directly in IO. + 6. Line 516 (`retryAdd`): change parameter type from `(XFTPFileId -> STM (Either XFTPErrorType a))` to `(XFTPFileId -> IO (Either XFTPErrorType a))`. Line 520: change `atomically (add fId)` to `liftIO (add fId)`. + 7. Line 605 (`ackFileReception`): change `atomically $ deleteRecipient st rId fr` to `liftIO $ deleteRecipient st rId fr`. + 8. Line 620 (`deleteOrBlockServerFile_`): change third parameter type from `(FileStore -> STM (Either XFTPErrorType ()))` to `(FileStore -> IO (Either XFTPErrorType ()))`. Line 626: change `void $ atomically $ storeAction st` to `void $ liftIO $ storeAction st`. + 9. `expireServerFiles` `delete` helper: change `atomically $ deleteFile st sId` to `liftIO $ deleteFile st sId` (deleteFile is now IO; `liftIO` required because the helper runs in M monad, not IO). 
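  Taken together, the rewrites above follow one mechanical shape. Item 1 as a hedged before/after sketch (the surrounding bindings are assumed from the descriptions above, not verified against `Server.hs`):

  ```haskell
  -- Before: receiveServerFile wrote the TVar directly
  atomically $ writeTVar filePath (Just fPath)

  -- After: route the write through the store API so a non-STM backend can
  -- persist the path; `void` discards the Either result (-Wunused-do-bind)
  st <- asks store
  void $ liftIO $ setFilePath st senderId fPath
  ```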
+ +- [ ] **Step 3: Update `StoreLog.hs` — remove `atomically` from replay** + + In `readFileStore` (line 93), function `addToStore`: + 1. Change `atomically (addToStore lr)` to `addToStore lr` — store functions are now IO. + 2. The `addToStore` body calls `addFile`, `setFilePath`, `deleteFile`, `blockFile`, `ackFile` — all IO now, no `atomically` needed. + 3. For `AddRecipients`: `runExceptT $ mapM_ (ExceptT . addRecipient st sId) rcps` — `addRecipient` returns `IO (Either ...)`, so `ExceptT . addRecipient st sId` works directly. + +- [ ] **Step 4: Build and verify** + + Run: `cabal build` + +- [ ] **Step 5: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 6: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs + git commit -m "refactor(xftp): change file store operations from STM to IO" + ``` + +--- + +## Task 4: Extract `FileStoreClass` typeclass, move STM impl to `Store/STM.hs` + +**Files:** +- Rewrite: `src/Simplex/FileTransfer/Server/Store.hs` +- Create: `src/Simplex/FileTransfer/Server/Store/STM.hs` +- Modify: `src/Simplex/FileTransfer/Server/StoreLog.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` +- Modify: `simplexmq.cabal` + +- [ ] **Step 1: Create `Store/STM.hs` — move all implementation code** + + 1. Create directory `src/Simplex/FileTransfer/Server/Store/`. + 2. Create `src/Simplex/FileTransfer/Server/Store/STM.hs`. + 3. Move from `Store.hs`: `FileStore` data type (rename to `STMFileStore`), all function implementations, internal helpers (`withFile`, `newFileRec`), all STM-specific imports. + 4. Rename all `FileStore` references to `STMFileStore` in the new file. + 5. 
Module declaration: `module Simplex.FileTransfer.Server.Store.STM` exporting only `STMFileStore (..)` — do NOT export standalone functions (`addFile`, `setFilePath`, etc.) to avoid name collisions with the typeclass methods from `Store.hs`. + +- [ ] **Step 2: Rewrite `Store.hs` as the typeclass module** + + 1. Add `{-# LANGUAGE TypeFamilies #-}` pragma to `Store.hs` (required for `type FileStoreConfig s` associated type). + 2. Keep in `Store.hs`: `FileRec (..)`, `FileRecipient (..)`, `RoundedFileTime`, `fileTimePrecision` definitions and their `StrEncoding` instance. + 3. Add `FileStoreClass` typeclass: + ```haskell + class FileStoreClass s where + type FileStoreConfig s + + -- Lifecycle + newFileStore :: FileStoreConfig s -> IO s + closeFileStore :: s -> IO () + + -- File operations + addFile :: s -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> IO (Either XFTPErrorType ()) + setFilePath :: s -> SenderId -> FilePath -> IO (Either XFTPErrorType ()) + addRecipient :: s -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ()) + getFile :: s -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, C.APublicAuthKey)) + deleteFile :: s -> SenderId -> IO (Either XFTPErrorType ()) + blockFile :: s -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ()) + deleteRecipient :: s -> RecipientId -> FileRec -> IO () + ackFile :: s -> RecipientId -> IO (Either XFTPErrorType ()) + + -- Expiration + expiredFiles :: s -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)] + + -- Stats + getUsedStorage :: s -> IO Int64 + getFileCount :: s -> IO Int + ``` + 4. Do NOT re-export from `Store/STM.hs` — this would create a circular module dependency (Store.hs imports Store/STM.hs, Store/STM.hs imports Store.hs). Consumers must import `Store.STM` directly where they need `STMFileStore`. + 5. Remove all STM-specific imports that are no longer needed. + +- [ ] **Step 3: Add `FileStoreClass` instance in `Store/STM.hs`** + + 1. 
Import `FileStoreClass` from `Simplex.FileTransfer.Server.Store`.
  2. Inline all implementations directly in the instance body (do NOT delegate to standalone functions — the standalone names collide with typeclass method names, causing ambiguous occurrences for importers):
  ```haskell
  instance FileStoreClass STMFileStore where
    type FileStoreConfig STMFileStore = ()
    newFileStore () = do
      files <- TM.emptyIO
      recipients <- TM.emptyIO
      pure STMFileStore {files, recipients}
    closeFileStore _ = pure ()
    addFile st sId fileInfo createdAt status = atomically $ ...
    setFilePath st sId fPath = atomically $ ...
    -- ... (each method's body is the existing function body, inlined)
  ```
  3. Remove the standalone top-level function definitions — they are now instance methods. Keep only `withFile` and `newFileRec` as internal helpers used by the instance methods.

- [ ] **Step 4: Update importers**

  1. `Env.hs`: add `import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))`. Change `FileStore` → `STMFileStore` in `XFTPEnv` type and `newXFTPServerEnv`. Change `store <- newFileStore` to `store <- newFileStore ()` (typeclass method now takes `FileStoreConfig STMFileStore` which is `()`). Keep `import Simplex.FileTransfer.Server.Store` for `FileRec`, `FileRecipient`, `FileStoreClass`, etc.
  2. `Server.hs`: add `import Simplex.FileTransfer.Server.Store.STM`. Change `FileStore` → `STMFileStore` in any explicit type annotations. Import `FileStoreClass` from `Simplex.FileTransfer.Server.Store`.
  3. `StoreLog.hs`: add `import Simplex.FileTransfer.Server.Store.STM` to access the concrete `STMFileStore` type (the store operations used during log replay are the `FileStoreClass` methods from `Simplex.FileTransfer.Server.Store` — `Store.STM` exports only the type, per Step 1). Change `FileStore` → `STMFileStore` in `readWriteFileStore` and `writeFileStore` parameter types.

- [ ] **Step 5: Update cabal file**

  Add `Simplex.FileTransfer.Server.Store.STM` to `exposed-modules` in the `!flag(client_library)` section, alongside existing XFTP server modules. 
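  The resulting import direction from Steps 1–2 can be sketched as two module heads (export-list details beyond `STMFileStore (..)` are assumptions):

  ```haskell
  -- Store.hs: shared types plus the typeclass; imports nothing from Store.STM
  module Simplex.FileTransfer.Server.Store
    ( FileStoreClass (..),
      FileRec (..),
      FileRecipient (..),
      RoundedFileTime (..),
      fileTimePrecision,
    )
  where

  -- Store/STM.hs: concrete backend; exports only the type, so the
  -- FileStoreClass methods remain the single set of operation names
  module Simplex.FileTransfer.Server.Store.STM (STMFileStore (..)) where

  import Simplex.FileTransfer.Server.Store
  ```

  The two module heads are shown in one block for comparison only — they live in separate files.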
+ +- [ ] **Step 6: Build and verify** + + Run: `cabal build` + +- [ ] **Step 7: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 8: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Store/STM.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs + git add src/Simplex/FileTransfer/Server/Store.hs src/Simplex/FileTransfer/Server/Store/STM.hs src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/StoreLog.hs simplexmq.cabal + git commit -m "refactor(xftp): extract FileStoreClass typeclass, move STM impl to Store.STM" + ``` + +--- + +## Task 5: Make `XFTPEnv` and `Server.hs` polymorphic over `FileStoreClass` + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `src/Simplex/FileTransfer/Server.hs` +- Modify: `src/Simplex/FileTransfer/Server/Main.hs` +- Modify: `tests/XFTPClient.hs` (if it calls `runXFTPServerBlocking` directly) + +- [ ] **Step 1: Make `XFTPEnv` polymorphic in `Env.hs`** + + 1. Add `XFTPStoreConfig` GADT: `data XFTPStoreConfig s where XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore`. + 2. Change `data XFTPEnv` to `data XFTPEnv s` — field `store :: FileStore` becomes `store :: s`. + 3. Change `newXFTPServerEnv :: XFTPServerConfig -> IO XFTPEnv` to `newXFTPServerEnv :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO (XFTPEnv s)`. + 4. Pattern match on `XSCMemory storeLogPath` in `newXFTPServerEnv` body. Create store via `newFileStore ()`, storeLog via `mapM (`readWriteFileStore` st) storeLogPath`. + +- [ ] **Step 2: Make `Server.hs` polymorphic** + + 1. Change `type M a = ReaderT XFTPEnv IO a` to `type M s a = ReaderT (XFTPEnv s) IO a`. + 2. Add `FileStoreClass s =>` constraint to all functions using `M s a`. 
Use `forall s.` in signatures of functions that have `where`-block bindings with `M s` type annotations — `ScopedTypeVariables` requires explicit `forall` to bring `s` into scope for inner type signatures (matching SMP's `smpServer :: forall s. MsgStoreClass s => ...` pattern). Full list: `xftpServer`, `processRequest`, `verifyXFTPTransmission`, `processXFTPRequest` and all its `where`-bound functions (`createFile`, `addRecipients`, `receiveServerFile`, `sendServerFile`, `deleteServerFile`, `ackFileReception`, `retryAdd`, `addFileRetry`, `addRecipientRetry`), `deleteServerFile_`, `blockServerFile`, `deleteOrBlockServerFile_`, `expireServerFiles`, `randomId`, `getFileId`, `withFileLog`, `incFileStat`, `saveServerStats`, `restoreServerStats`, `randomDelay` (inside `#ifdef slow_servers` CPP block). Also update `encodeXftp` (line 236) and `runCPClient` (line 339) which use explicit `ReaderT XFTPEnv IO` instead of the `M` alias — change to `ReaderT (XFTPEnv s) IO`. + 3. Change `runXFTPServerBlocking` and `runXFTPServer` to take `XFTPStoreConfig s` parameter. + 4. Add `closeFileStore store` call to the server shutdown path (in the `finally` block or `stopServer` equivalent — after saving stats, before logging "Server stopped"). This ensures Postgres connection pool and `dbStoreLog` are properly closed. For STM this is a no-op. + +- [ ] **Step 3: Update `Main.hs` dispatch** + + 1. In `runServer`: construct `XSCMemory (enableStoreLog $> storeLogFilePath)`. + 2. Add dispatch function that calls the updated `runXFTPServer` (which creates `started` internally): + ```haskell + run :: FileStoreClass s => XFTPStoreConfig s -> IO () + run storeCfg = runXFTPServer storeCfg serverConfig + ``` + 3. Call `run` with the `XSCMemory` config. + +- [ ] **Step 4: Update test helper if needed** + + If `tests/XFTPClient.hs` calls `runXFTPServerBlocking` directly, update the call to pass an `XSCMemory` config. Check the `withXFTPServer` / `serverBracket` helper. 
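  The net type-level change of this task, sketched against the `Server.hs` diff in this patch (bodies elided; the explicit `forall` is what lets `where`-block signatures refer to the same `s`):

  ```haskell
  type M s a = ReaderT (XFTPEnv s) IO a

  xftpServer :: forall s. FileStoreClass s => XFTPServerConfig -> TMVar Bool -> M s ()
  xftpServer cfg started = do
    -- ...
    where
      runServer :: M s () -- resolves against the forall-bound s above
      runServer = do
        -- ...
  ```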
+ +- [ ] **Step 5: Build and verify** + + Run: `cabal build && cabal build test:simplexmq-test` + +- [ ] **Step 6: Run existing tests** + + Run: `cabal test --test-show-details=streaming --test-option=--match="/XFTP/"` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/Main.hs + git add src/Simplex/FileTransfer/Server/Env.hs src/Simplex/FileTransfer/Server.hs src/Simplex/FileTransfer/Server/Main.hs tests/XFTPClient.hs simplexmq.cabal + git commit -m "refactor(xftp): make XFTPEnv and server polymorphic over FileStoreClass" + ``` + +--- + +## Task 6: Add Postgres config, migrations, and store skeleton + +**Files:** +- Create: `src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs` +- Create: `src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs` +- Create: `src/Simplex/FileTransfer/Server/Store/Postgres.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` +- Modify: `simplexmq.cabal` + +- [ ] **Step 1: Create `Store/Postgres/Config.hs`** + + ```haskell + module Simplex.FileTransfer.Server.Store.Postgres.Config + ( PostgresFileStoreCfg (..), + defaultXFTPDBOpts, + ) + where + + import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..)) + import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation) + + data PostgresFileStoreCfg = PostgresFileStoreCfg + { dbOpts :: DBOpts, + dbStoreLogPath :: Maybe FilePath, + confirmMigrations :: MigrationConfirmation + } + + defaultXFTPDBOpts :: DBOpts + defaultXFTPDBOpts = + DBOpts + { connstr = "postgresql://xftp@/xftp_server_store", + schema = "xftp_server", + poolSize = 10, + createSchema = False + } + ``` + +- [ ] **Step 2: Create `Store/Postgres/Migrations.hs`** + + Full migration module with `xftpServerMigrations :: [Migration]` and `m20260325_initial` containing CREATE TABLE SQL for `files` and `recipients` tables plus indexes. 
Follow SMP's `QueueStore/Postgres/Migrations.hs` pattern exactly: tuple list → `sortOn name . map migration`. + +- [ ] **Step 3: Create `Store/Postgres.hs` with stub instance** + + 1. Define `PostgresFileStore` with `dbStore :: DBStore` and `dbStoreLog :: Maybe (StoreLog 'WriteMode)`. + 2. `instance FileStoreClass PostgresFileStore` with `error "not implemented"` for all methods except `newFileStore` (calls `createDBStore` + opens `dbStoreLog`) and `closeFileStore` (closes both). `type FileStoreConfig PostgresFileStore = PostgresFileStoreCfg`. + 3. Add `withDB`, `handleDuplicate`, `assertUpdated`, `withLog` helpers. + +- [ ] **Step 4: Add `XSCDatabase` GADT constructor in `Env.hs` (CPP-guarded)** + + ```haskell + #if defined(dbServerPostgres) + import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore) + import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg) + #endif + + data XFTPStoreConfig s where + XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore + #if defined(dbServerPostgres) + XSCDatabase :: PostgresFileStoreCfg -> XFTPStoreConfig PostgresFileStore + #endif + ``` + +- [ ] **Step 5: Update cabal** + + Add to existing `if flag(server_postgres)` block: + ``` + Simplex.FileTransfer.Server.Store.Postgres + Simplex.FileTransfer.Server.Store.Postgres.Config + Simplex.FileTransfer.Server.Store.Postgres.Migrations + ``` + +- [ ] **Step 6: Build both ways** + + Run: `cabal build && cabal build -fserver_postgres` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store/Postgres.hs src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs src/Simplex/FileTransfer/Server/Env.hs + git add src/Simplex/FileTransfer/Server/Store/Postgres.hs src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs src/Simplex/FileTransfer/Server/Env.hs simplexmq.cabal + git commit -m "feat(xftp): add PostgreSQL store skeleton with schema 
migration" + ``` + +--- + +## Task 7: Implement `PostgresFileStore` operations + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Store/Postgres.hs` + +- [ ] **Step 1: Implement `addFile`** + + `INSERT INTO files (sender_id, file_size, file_digest, sender_key, file_path, created_at, status) VALUES (?,?,?,?,NULL,?,?)`. Catch unique violation with `handleDuplicate` → `DUPLICATE_`. Call `withLog "addFile"` after. + +- [ ] **Step 2: Implement `getFile`** + + For `SFSender`: `SELECT ... FROM files WHERE sender_id = ?`. Construct `FileRec` with `newTVarIO` per TVar field. `recipientIds = S.empty`. + For `SFRecipient`: `SELECT f.*, r.recipient_key FROM recipients r JOIN files f ON r.sender_id = f.sender_id WHERE r.recipient_id = ?`. + +- [ ] **Step 3: Implement `setFilePath`** + + `UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL`. Use `assertUpdated`. Call `withLog "setFilePath"`. + +- [ ] **Step 4: Implement `addRecipient`** + + `INSERT INTO recipients (recipient_id, sender_id, recipient_key) VALUES (?,?,?)`. `handleDuplicate` → `DUPLICATE_`. Call `withLog "addRecipient"`. + +- [ ] **Step 5: Implement `deleteFile`, `blockFile`** + + `deleteFile`: `DELETE FROM files WHERE sender_id = ?` (CASCADE). `withLog "deleteFile"`. + `blockFile`: `UPDATE files SET status = ? WHERE sender_id = ?`. `assertUpdated`. `withLog "blockFile"`. + +- [ ] **Step 6: Implement `deleteRecipient`, `ackFile`** + + `deleteRecipient`: `DELETE FROM recipients WHERE recipient_id = ?`. `withLog "deleteRecipient"`. + `ackFile`: same + return `Left AUTH` if 0 rows. + +- [ ] **Step 7: Implement `expiredFiles`, `getUsedStorage`, `getFileCount`** + + `expiredFiles`: `SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ? LIMIT ?`. + `getUsedStorage`: `SELECT COALESCE(SUM(file_size), 0) FROM files`. + `getFileCount`: `SELECT COUNT(*) FROM files`. 
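  The pattern shared by Steps 1–6 — run the query, map the affected-row count, then mirror to the parallel log only on success — as a hedged sketch of `setFilePath` (helper signatures are assumed from the Task 6 skeleton; `logPutFile` is assumed to be the matching `StoreLog.hs` writer):

  ```haskell
  setFilePath :: PostgresFileStore -> SenderId -> FilePath -> IO (Either XFTPErrorType ())
  setFilePath st sId path = do
    r <- withDB "setFilePath" st $ \db ->
      assertUpdated -- 0 updated rows -> Left AUTH (assumed semantics)
        <$> DB.execute
          db
          "UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL"
          (path, sId)
    -- dbStoreLog is written only after the DB update succeeded
    when (isRight r) $ withLog "setFilePath" st $ \sl -> logPutFile sl sId path
    pure r
  ```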
+ +- [ ] **Step 8: Add `ToField`/`FromField` instances** + + For `RoundedFileTime` (Int64 wrapper), `ServerEntityStatus` (Text via StrEncoding), `C.APublicAuthKey` (Binary via `encodePubKey`/`decodePubKey`). Check SMP's `QueueStore/Postgres.hs` for existing instances to import. + +- [ ] **Step 9: Wrap mutation operations in `uninterruptibleMask_`** + + Operations that combine a DB write with a TVar update (e.g., `getFile` constructs `FileRec` with `newTVarIO`) must be wrapped in `E.uninterruptibleMask_` to prevent async exceptions from leaving inconsistent state. Follow SMP's `addQueue_`, `deleteStoreQueue` pattern. + +- [ ] **Step 10: Build** + + Run: `cabal build -fserver_postgres` + +- [ ] **Step 11: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Store/Postgres.hs + git add src/Simplex/FileTransfer/Server/Store/Postgres.hs + git commit -m "feat(xftp): implement PostgresFileStore operations" + ``` + +--- + +## Task 8: Add INI config, Main.hs dispatch, startup validation + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Main.hs` +- Modify: `src/Simplex/FileTransfer/Server/Env.hs` + +- [ ] **Step 1: Update `iniFileContent` in `Main.hs`** + + Add to `[STORE_LOG]` section: `store_files: memory`, commented-out `db_connection`, `db_schema`, `db_pool_size`, `db_store_log` keys. Follow SMP's `optDisabled'` pattern for commented defaults. + +- [ ] **Step 2: Add `StartOptions` and `--confirm-migrations` flag** + + ```haskell + data StartOptions = StartOptions + { confirmMigrations :: MigrationConfirmation + } + ``` + Add to `Start` command parser with default `MCConsole`. Thread through to `runServer`. + +- [ ] **Step 3: Add store_files INI parsing and CPP-guarded Postgres dispatch** + + In `runServer`: read `store_files` from INI (`fromRight "memory" $ lookupValue "STORE_LOG" "store_files" ini`). 
Add `"database"` branch (CPP-guarded) that constructs `PostgresFileStoreCfg` using `iniDBOptions ini defaultXFTPDBOpts` and `enableDbStoreLog'` pattern. Non-postgres build: `exitError`. + +- [ ] **Step 4: Add `XSCDatabase` branch in `newXFTPServerEnv` (`Env.hs`)** + + CPP-guarded pattern match on `XSCDatabase dbCfg`: `newFileStore dbCfg`, `storeLog = Nothing`. + +- [ ] **Step 5: Add startup config validation** + + Add `checkFileStoreMode` (CPP-guarded) before `run`: validate conflicting storeLog file + database mode, missing schema, etc. per design doc. + +- [ ] **Step 6: Build both ways** + + Run: `cabal build && cabal build -fserver_postgres` + +- [ ] **Step 7: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Main.hs src/Simplex/FileTransfer/Server/Env.hs + git add src/Simplex/FileTransfer/Server/Main.hs src/Simplex/FileTransfer/Server/Env.hs + git commit -m "feat(xftp): add PostgreSQL INI config, store dispatch, startup validation" + ``` + +--- + +## Task 9: Add database import/export CLI commands + +**Files:** +- Modify: `src/Simplex/FileTransfer/Server/Main.hs` + +- [ ] **Step 1: Add `Database` CLI command (CPP-guarded)** + + Add `Database StoreCmd DBOpts` constructor to `CliCommand`. Add `database` subcommand parser with `import`/`export` subcommands + `dbOptsP defaultXFTPDBOpts`. + +- [ ] **Step 2: Implement `importFileStoreToDatabase`** + + 1. `confirmOrExit` with database details. + 2. Create temporary `STMFileStore`, replay StoreLog via `readWriteFileStore`. + 3. Create `PostgresFileStore` with `createSchema = True`, `confirmMigrations = MCYesUp`. + 4. Batch-insert files using PostgreSQL COPY protocol. Progress every 10k. + 5. Batch-insert recipients using COPY protocol. + 6. Verify counts: `SELECT COUNT(*)` — warn on mismatch. + 7. Rename StoreLog to `.bak`. + 8. Report counts. + +- [ ] **Step 3: Implement `exportDatabaseToStoreLog`** + + 1. `confirmOrExit`. Fail if output file exists. + 2. 
Create `PostgresFileStore` from config. + 3. Open StoreLog for writing. + 4. Fold over file records: write `AddFile` (with status), `AddRecipients`, `PutFile` per file. + 5. Close StoreLog, report counts. + +- [ ] **Step 4: Build** + + Run: `cabal build -fserver_postgres` + +- [ ] **Step 5: Format and commit** + + ```bash + fourmolu -i src/Simplex/FileTransfer/Server/Main.hs + git add src/Simplex/FileTransfer/Server/Main.hs + git commit -m "feat(xftp): add database import/export CLI commands" + ``` + +--- + +## Task 10: Add Postgres tests + +**Files:** +- Modify: `tests/XFTPClient.hs` +- Modify: `tests/Test.hs` +- Create: `tests/CoreTests/XFTPStoreTests.hs` + +- [ ] **Step 1: Add test fixtures in `tests/XFTPClient.hs`** + + ```haskell + testXFTPStoreDBOpts :: DBOpts + testXFTPStoreDBOpts = + DBOpts + { connstr = "postgresql://test_xftp_server_user@/test_xftp_server_db", + schema = "xftp_server_test", + poolSize = 10, + createSchema = True + } + ``` + Add `testXFTPDBConnectInfo :: ConnectInfo` matching the connection string. + +- [ ] **Step 2: Add Postgres server test group in `tests/Test.hs`** + + CPP-guarded block that runs existing `xftpServerTests` with Postgres store config, wrapped in `postgressBracket testXFTPDBConnectInfo`. Parameterize `withXFTPServer` to accept store config if needed. + +- [ ] **Step 3: Create `tests/CoreTests/XFTPStoreTests.hs` — unit tests** + + Test `PostgresFileStore` operations directly: + - `addFile` + `getFile SFSender` round-trip. + - `addFile` duplicate → `DUPLICATE_`. + - `getFile` nonexistent → `AUTH`. + - `setFilePath` + verify `WHERE file_path IS NULL` guard. + - `addRecipient` + `getFile SFRecipient` round-trip. + - `deleteFile` cascades recipients. + - `blockFile` + verify status. + - `expiredFiles` batch semantics. + - `getUsedStorage`, `getFileCount` correctness. 
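  One case from the list above as a hedged hspec-style sketch (`withPostgresStore`, `sId`, `testFileInfo`, and `ts` are hypothetical fixtures, not existing helpers):

  ```haskell
  it "addFile returns DUPLICATE_ for an existing sender ID" $
    withPostgresStore $ \st -> do
      Right () <- addFile st sId testFileInfo ts EntityActive
      -- second insert hits the unique constraint, mapped via handleDuplicate
      r <- addFile st sId testFileInfo ts EntityActive
      r `shouldBe` Left DUPLICATE_
  ```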
+ +- [ ] **Step 4: Add migration round-trip test** + + Create `STMFileStore` with test data (files + recipients + blocked status) → export to StoreLog → import to Postgres → export back → compare StoreLog files byte-for-byte. + +- [ ] **Step 5: Build and run tests** + + ```bash + cabal build -fserver_postgres test:simplexmq-test + cabal test --test-show-details=streaming --test-option=--match="/XFTP/" -fserver_postgres + ``` + +- [ ] **Step 6: Format and commit** + + ```bash + fourmolu -i tests/CoreTests/XFTPStoreTests.hs tests/XFTPClient.hs + git add tests/CoreTests/XFTPStoreTests.hs tests/XFTPClient.hs tests/Test.hs + git commit -m "test(xftp): add PostgreSQL backend tests" + ``` diff --git a/simplexmq.cabal b/simplexmq.cabal index 3ad23df095..919dd272a6 100644 --- a/simplexmq.cabal +++ b/simplexmq.cabal @@ -241,6 +241,7 @@ library Simplex.FileTransfer.Server.Prometheus Simplex.FileTransfer.Server.Stats Simplex.FileTransfer.Server.Store + Simplex.FileTransfer.Server.Store.STM Simplex.FileTransfer.Server.StoreLog Simplex.Messaging.Server Simplex.Messaging.Server.CLI @@ -281,6 +282,9 @@ library Simplex.Messaging.Notifications.Server.Store.Postgres Simplex.Messaging.Notifications.Server.Store.Types Simplex.Messaging.Notifications.Server.StoreLog + Simplex.FileTransfer.Server.Store.Postgres + Simplex.FileTransfer.Server.Store.Postgres.Config + Simplex.FileTransfer.Server.Store.Postgres.Migrations Simplex.Messaging.Server.MsgStore.Postgres Simplex.Messaging.Server.QueueStore.Postgres Simplex.Messaging.Server.QueueStore.Postgres.Migrations @@ -523,6 +527,7 @@ test-suite simplexmq-test if flag(server_postgres) other-modules: AgentTests.NotificationTests + CoreTests.XFTPStoreTests NtfClient NtfServerTests PostgresSchemaDump diff --git a/src/Simplex/FileTransfer/Server.hs b/src/Simplex/FileTransfer/Server.hs index 6e0a9735a6..fc57b777ae 100644 --- a/src/Simplex/FileTransfer/Server.hs +++ b/src/Simplex/FileTransfer/Server.hs @@ -31,7 +31,6 @@ import qualified 
Data.ByteString.Char8 as B import Data.Int (Int64) import Data.List.NonEmpty (NonEmpty) import qualified Data.List.NonEmpty as L -import qualified Data.Map.Strict as M import Data.Maybe (fromMaybe, isJust) import qualified Data.Text as T import qualified Data.Text.IO as T @@ -88,7 +87,7 @@ import UnliftIO.Concurrent (threadDelay) import UnliftIO.Directory (canonicalizePath, doesFileExist, removeFile, renameFile) import qualified UnliftIO.Exception as E -type M a = ReaderT XFTPEnv IO a +type M s a = ReaderT (XFTPEnv s) IO a data XFTPTransportRequest = XFTPTransportRequest { thParams :: THandleParamsXFTP 'TServer, @@ -112,19 +111,19 @@ corsPreflightHeaders = ("Access-Control-Max-Age", "86400") ] -runXFTPServer :: XFTPServerConfig -> IO () -runXFTPServer cfg = do +runXFTPServer :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO () +runXFTPServer storeCfg cfg = do started <- newEmptyTMVarIO - runXFTPServerBlocking started cfg + runXFTPServerBlocking started storeCfg cfg -runXFTPServerBlocking :: TMVar Bool -> XFTPServerConfig -> IO () -runXFTPServerBlocking started cfg = newXFTPServerEnv cfg >>= runReaderT (xftpServer cfg started) +runXFTPServerBlocking :: FileStoreClass s => TMVar Bool -> XFTPStoreConfig s -> XFTPServerConfig -> IO () +runXFTPServerBlocking started storeCfg cfg = newXFTPServerEnv storeCfg cfg >>= runReaderT (xftpServer cfg started) data Handshake = HandshakeSent C.PrivateKeyX25519 | HandshakeAccepted (THandleParams XFTPVersion 'TServer) -xftpServer :: XFTPServerConfig -> TMVar Bool -> M () +xftpServer :: forall s. 
FileStoreClass s => XFTPServerConfig -> TMVar Bool -> M s () xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpiration, fileExpiration, xftpServerVRange} started = do mapM_ (expireServerFiles Nothing) fileExpiration restoreServerStats @@ -137,7 +136,7 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira ) `finally` stopServer where - runServer :: M () + runServer :: M s () runServer = do srvCreds@(chain, pk) <- asks tlsServerCreds httpCreds_ <- asks httpServerCreds @@ -168,7 +167,7 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira Nothing -> pure () Just thParams -> processRequest req0 {thParams} | otherwise -> liftIO . sendResponse $ H.responseNoBody N.ok200 (corsHeaders addCORS') - xftpServerHandshakeV1 :: X.CertificateChain -> C.APrivateSignKey -> TMap SessionId Handshake -> XFTPTransportRequest -> M (Maybe (THandleParams XFTPVersion 'TServer)) + xftpServerHandshakeV1 :: X.CertificateChain -> C.APrivateSignKey -> TMap SessionId Handshake -> XFTPTransportRequest -> M s (Maybe (THandleParams XFTPVersion 'TServer)) xftpServerHandshakeV1 chain serverSignKey sessions XFTPTransportRequest {thParams = thParams0@THandleParams {sessionId}, request, reqBody = HTTP2Body {bodyHead}, sendResponse, sniUsed, addCORS} = do s <- atomically $ TM.lookup sessionId sessions r <- runExceptT $ case s of @@ -227,39 +226,41 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira liftIO . sendResponse $ H.responseNoBody N.ok200 (corsHeaders addCORS) pure Nothing Nothing -> throwE HANDSHAKE - sendError :: XFTPErrorType -> M (Maybe (THandleParams XFTPVersion 'TServer)) + sendError :: XFTPErrorType -> M s (Maybe (THandleParams XFTPVersion 'TServer)) sendError err = do runExceptT (encodeXftp err) >>= \case Right bs -> liftIO . 
sendResponse $ H.responseBuilder N.ok200 (corsHeaders addCORS) bs Left _ -> logError $ "Error encoding handshake error: " <> tshow err pure Nothing - encodeXftp :: Encoding a => a -> ExceptT XFTPErrorType (ReaderT XFTPEnv IO) Builder + encodeXftp :: Encoding a => a -> ExceptT XFTPErrorType (ReaderT (XFTPEnv s) IO) Builder encodeXftp a = byteString <$> liftHS (C.pad (smpEncode a) xftpBlockSize) liftHS = liftEitherWith (const HANDSHAKE) - stopServer :: M () + stopServer :: M s () stopServer = do withFileLog closeStoreLog + st <- asks store + liftIO $ closeFileStore st saveServerStats logNote "Server stopped" - expireFilesThread_ :: XFTPServerConfig -> [M ()] + expireFilesThread_ :: XFTPServerConfig -> [M s ()] expireFilesThread_ XFTPServerConfig {fileExpiration = Just fileExp} = [expireFiles fileExp] expireFilesThread_ _ = [] - expireFiles :: ExpirationConfig -> M () + expireFiles :: ExpirationConfig -> M s () expireFiles expCfg = do let interval = checkInterval expCfg * 1000000 forever $ do liftIO $ threadDelay' interval expireServerFiles (Just 100000) expCfg - serverStatsThread_ :: XFTPServerConfig -> [M ()] + serverStatsThread_ :: XFTPServerConfig -> [M s ()] serverStatsThread_ XFTPServerConfig {logStatsInterval = Just interval, logStatsStartTime, serverStatsLogFile} = [logServerStats logStatsStartTime interval serverStatsLogFile] serverStatsThread_ _ = [] - logServerStats :: Int64 -> Int64 -> FilePath -> M () + logServerStats :: Int64 -> Int64 -> FilePath -> M s () logServerStats startAt logInterval statsFilePath = do initialDelay <- (startAt -) . fromIntegral . (`div` 1000000_000000) . diffTimeToPicoseconds . 
utctDayTime <$> liftIO getCurrentTime liftIO $ putStrLn $ "server stats log enabled: " <> statsFilePath @@ -300,12 +301,12 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira ] liftIO $ threadDelay' interval - prometheusMetricsThread_ :: XFTPServerConfig -> [M ()] + prometheusMetricsThread_ :: XFTPServerConfig -> [M s ()] prometheusMetricsThread_ XFTPServerConfig {prometheusInterval = Just interval, prometheusMetricsFile} = [savePrometheusMetrics interval prometheusMetricsFile] prometheusMetricsThread_ _ = [] - savePrometheusMetrics :: Int -> FilePath -> M () + savePrometheusMetrics :: Int -> FilePath -> M s () savePrometheusMetrics saveInterval metricsFile = do labelMyThread "savePrometheusMetrics" liftIO $ putStrLn $ "Prometheus metrics saved every " <> show saveInterval <> " seconds to " <> metricsFile @@ -324,11 +325,11 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira let fd = periodStatDataCounts $ _filesDownloaded d pure FileServerMetrics {statsData = d, filesDownloadedPeriods = fd, rtsOptions} - controlPortThread_ :: XFTPServerConfig -> [M ()] + controlPortThread_ :: XFTPServerConfig -> [M s ()] controlPortThread_ XFTPServerConfig {controlPort = Just port} = [runCPServer port] controlPortThread_ _ = [] - runCPServer :: ServiceName -> M () + runCPServer :: ServiceName -> M s () runCPServer port = do cpStarted <- newEmptyTMVarIO u <- askUnliftIO @@ -336,7 +337,7 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira labelMyThread "control port server" runLocalTCPServer cpStarted port $ runCPClient u where - runCPClient :: UnliftIO (ReaderT XFTPEnv IO) -> Socket -> IO () + runCPClient :: UnliftIO (ReaderT (XFTPEnv s) IO) -> Socket -> IO () runCPClient u sock = do labelMyThread "control port client" h <- socketToHandle sock ReadWriteMode @@ -368,13 +369,13 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira CPDelete fileId -> 
withUserRole $ unliftIO u $ do fs <- asks store r <- runExceptT $ do - (fr, _) <- ExceptT $ atomically $ getFile fs SFRecipient fileId + (fr, _) <- ExceptT $ liftIO $ getFile fs SFRecipient fileId ExceptT $ deleteServerFile_ fr liftIO . hPutStrLn h $ either (\e -> "error: " <> show e) (\() -> "ok") r CPBlock fileId info -> withUserRole $ unliftIO u $ do fs <- asks store r <- runExceptT $ do - (fr, _) <- ExceptT $ atomically $ getFile fs SFRecipient fileId + (fr, _) <- ExceptT $ liftIO $ getFile fs SFRecipient fileId ExceptT $ blockServerFile fr info liftIO . hPutStrLn h $ either (\e -> "error: " <> show e) (\() -> "ok") r CPHelp -> hPutStrLn h "commands: stats-rts, delete, help, quit" @@ -395,7 +396,7 @@ data ServerFile = ServerFile sbState :: LC.SbState } -processRequest :: XFTPTransportRequest -> M () +processRequest :: FileStoreClass s => XFTPTransportRequest -> M s () processRequest XFTPTransportRequest {thParams, reqBody = body@HTTP2Body {bodyHead}, sendResponse, addCORS} | B.length bodyHead /= xftpBlockSize = sendXFTPResponse ("", NoEntity, FRErr BLOCK) Nothing | otherwise = @@ -430,7 +431,7 @@ processRequest XFTPTransportRequest {thParams, reqBody = body@HTTP2Body {bodyHea done #ifdef slow_servers -randomDelay :: M () +randomDelay :: M s () randomDelay = do d <- asks $ responseDelay . config when (d > 0) $ do @@ -440,31 +441,30 @@ randomDelay = do data VerificationResult = VRVerified XFTPRequest | VRFailed XFTPErrorType -verifyXFTPTransmission :: Maybe (THandleAuth 'TServer) -> SignedTransmission FileCmd -> M VerificationResult +verifyXFTPTransmission :: forall s. 
FileStoreClass s => Maybe (THandleAuth 'TServer) -> SignedTransmission FileCmd -> M s VerificationResult verifyXFTPTransmission thAuth (tAuth, authorized, (corrId, fId, cmd)) = case cmd of FileCmd SFSender (FNEW file rcps auth') -> pure $ XFTPReqNew file rcps auth' `verifyWith` sndKey file FileCmd SFRecipient PING -> pure $ VRVerified XFTPReqPing FileCmd party _ -> verifyCmd party where - verifyCmd :: SFileParty p -> M VerificationResult + verifyCmd :: SFileParty p -> M s VerificationResult verifyCmd party = do st <- asks store - atomically $ verify =<< getFile st party fId + liftIO (getFile st party fId) >>= \case + Right (fr, k) -> do + status <- readTVarIO (fileStatus fr) + pure $ case status of + EntityActive -> XFTPReqCmd fId fr cmd `verifyWith` k + EntityBlocked info -> VRFailed $ BLOCKED info + EntityOff -> noFileAuth + Left _ -> pure noFileAuth where - verify = \case - Right (fr, k) -> result <$> readTVar (fileStatus fr) - where - result = \case - EntityActive -> XFTPReqCmd fId fr cmd `verifyWith` k - EntityBlocked info -> VRFailed $ BLOCKED info - EntityOff -> noFileAuth - Left _ -> pure noFileAuth noFileAuth = dummyVerifyCmd thAuth tAuth authorized corrId `seq` VRFailed AUTH -- TODO verify with DH authorization req `verifyWith` k = if verifyCmdAuthorization thAuth tAuth authorized corrId k then VRVerified req else VRFailed AUTH -processXFTPRequest :: HTTP2Body -> XFTPRequest -> M (FileResponse, Maybe ServerFile) +processXFTPRequest :: forall s. 
FileStoreClass s => HTTP2Body -> XFTPRequest -> M s (FileResponse, Maybe ServerFile) processXFTPRequest HTTP2Body {bodyPart} = \case XFTPReqNew file rks auth -> noFile =<< ifM allowNew (createFile file rks) (pure $ FRErr AUTH) where @@ -483,7 +483,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case XFTPReqPing -> noFile FRPong where noFile resp = pure (resp, Nothing) - createFile :: FileInfo -> NonEmpty RcvPublicAuthKey -> M FileResponse + createFile :: FileInfo -> NonEmpty RcvPublicAuthKey -> M s FileResponse createFile file rks = do st <- asks store r <- runExceptT $ do @@ -502,25 +502,25 @@ processXFTPRequest HTTP2Body {bodyPart} = \case let rIds = L.map (\(FileRecipient rId _) -> rId) rcps pure $ FRSndIds sId rIds pure $ either FRErr id r - addFileRetry :: FileStore -> FileInfo -> Int -> RoundedFileTime -> M (Either XFTPErrorType XFTPFileId) + addFileRetry :: s -> FileInfo -> Int -> RoundedFileTime -> M s (Either XFTPErrorType XFTPFileId) addFileRetry st file n ts = retryAdd n $ \sId -> runExceptT $ do ExceptT $ addFile st sId file ts EntityActive pure sId - addRecipientRetry :: FileStore -> Int -> XFTPFileId -> RcvPublicAuthKey -> M (Either XFTPErrorType FileRecipient) + addRecipientRetry :: s -> Int -> XFTPFileId -> RcvPublicAuthKey -> M s (Either XFTPErrorType FileRecipient) addRecipientRetry st n sId rpk = retryAdd n $ \rId -> runExceptT $ do let rcp = FileRecipient rId rpk ExceptT $ addRecipient st sId rcp pure rcp - retryAdd :: Int -> (XFTPFileId -> STM (Either XFTPErrorType a)) -> M (Either XFTPErrorType a) + retryAdd :: Int -> (XFTPFileId -> IO (Either XFTPErrorType a)) -> M s (Either XFTPErrorType a) retryAdd 0 _ = pure $ Left INTERNAL retryAdd n add = do fId <- getFileId - atomically (add fId) >>= \case + liftIO (add fId) >>= \case Left DUPLICATE_ -> retryAdd (n - 1) add r -> pure r - addRecipients :: XFTPFileId -> NonEmpty RcvPublicAuthKey -> M FileResponse + addRecipients :: XFTPFileId -> NonEmpty RcvPublicAuthKey -> M s FileResponse addRecipients 
sId rks = do st <- asks store r <- runExceptT $ do @@ -531,7 +531,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case let rIds = L.map (\(FileRecipient rId _) -> rId) rcps pure $ FRRcvIds rIds pure $ either FRErr id r - receiveServerFile :: FileRec -> M FileResponse + receiveServerFile :: FileRec -> M s FileResponse receiveServerFile FileRec {senderId, fileInfo = FileInfo {size, digest}, filePath} = case bodyPart of Nothing -> pure $ FRErr SIZE -- TODO validate body size from request before downloading, once it's populated @@ -549,7 +549,7 @@ processXFTPRequest HTTP2Body {bodyPart} = \case | bs == 0 || bs > s -> pure $ FRErr SIZE | otherwise -> drain (s - bs) reserve = do - us <- asks $ usedStorage . store + us <- asks usedStorage quota <- asks $ fromMaybe maxBound . fileSizeQuota . config atomically . stateTVar us $ \used -> let used' = used + fromIntegral size in if used' <= quota then (True, used') else (False, used) @@ -559,21 +559,28 @@ processXFTPRequest HTTP2Body {bodyPart} = \case receiveChunk (XFTPRcvChunkSpec fPath size digest) >>= \case Right () -> do stats <- asks serverStats - withFileLog $ \sl -> logPutFile sl senderId fPath - atomically $ writeTVar filePath (Just fPath) - incFileStat filesUploaded - incFileStat filesCount - liftIO $ atomicModifyIORef'_ (filesSize stats) (+ fromIntegral size) - pure FROk + st <- asks store + liftIO (setFilePath st senderId fPath) >>= \case + Right () -> do + withFileLog $ \sl -> logPutFile sl senderId fPath + incFileStat filesUploaded + incFileStat filesCount + liftIO $ atomicModifyIORef'_ (filesSize stats) (+ fromIntegral size) + pure FROk + Left _e -> do + us <- asks usedStorage + atomically $ modifyTVar' us $ subtract (fromIntegral size) + liftIO $ whenM (doesFileExist fPath) (removeFile fPath) `catch` logFileError + pure $ FRErr AUTH Left e -> do - us <- asks $ usedStorage . 
store + us <- asks usedStorage atomically $ modifyTVar' us $ subtract (fromIntegral size) liftIO $ whenM (doesFileExist fPath) (removeFile fPath) `catch` logFileError pure $ FRErr e receiveChunk spec = do t <- asks $ fileTimeout . config liftIO $ fromMaybe (Left TIMEOUT) <$> timeout t (runExceptT $ receiveFile getBody spec) - sendServerFile :: FileRec -> RcvPublicDhKey -> M (FileResponse, Maybe ServerFile) + sendServerFile :: FileRec -> RcvPublicDhKey -> M s (FileResponse, Maybe ServerFile) sendServerFile FileRec {senderId, filePath, fileInfo = FileInfo {size}} rDhKey = do readTVarIO filePath >>= \case Just path -> ifM (doesFileExist path) sendFile (pure (FRErr AUTH, Nothing)) @@ -592,38 +599,41 @@ processXFTPRequest HTTP2Body {bodyPart} = \case _ -> pure (FRErr INTERNAL, Nothing) _ -> pure (FRErr NO_FILE, Nothing) - deleteServerFile :: FileRec -> M FileResponse + deleteServerFile :: FileRec -> M s FileResponse deleteServerFile fr = either FRErr (\() -> FROk) <$> deleteServerFile_ fr logFileError :: SomeException -> IO () logFileError e = logError $ "Error deleting file: " <> tshow e - ackFileReception :: RecipientId -> FileRec -> M FileResponse + ackFileReception :: RecipientId -> FileRec -> M s FileResponse ackFileReception rId fr = do withFileLog (`logAckFile` rId) st <- asks store - atomically $ deleteRecipient st rId fr + liftIO $ deleteRecipient st rId fr incFileStat fileDownloadAcks pure FROk -deleteServerFile_ :: FileRec -> M (Either XFTPErrorType ()) +deleteServerFile_ :: FileStoreClass s => FileRec -> M s (Either XFTPErrorType ()) deleteServerFile_ fr@FileRec {senderId} = do withFileLog (`logDeleteFile` senderId) deleteOrBlockServerFile_ fr filesDeleted (`deleteFile` senderId) -- this also deletes the file from storage, but doesn't include it in delete statistics -blockServerFile :: FileRec -> BlockingInfo -> M (Either XFTPErrorType ()) +blockServerFile :: FileStoreClass s => FileRec -> BlockingInfo -> M s (Either XFTPErrorType ()) blockServerFile 
fr@FileRec {senderId} info = do withFileLog $ \sl -> logBlockFile sl senderId info deleteOrBlockServerFile_ fr filesBlocked $ \st -> blockFile st senderId info True -deleteOrBlockServerFile_ :: FileRec -> (FileServerStats -> IORef Int) -> (FileStore -> STM (Either XFTPErrorType ())) -> M (Either XFTPErrorType ()) +deleteOrBlockServerFile_ :: FileStoreClass s => FileRec -> (FileServerStats -> IORef Int) -> (s -> IO (Either XFTPErrorType ())) -> M s (Either XFTPErrorType ()) deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExceptT $ do path <- readTVarIO filePath stats <- asks serverStats ExceptT $ first (\(_ :: SomeException) -> FILE_IO) <$> try (forM_ path $ \p -> whenM (doesFileExist p) (removeFile p >> deletedStats stats)) st <- asks store - void $ atomically $ storeAction st + ExceptT $ liftIO $ storeAction st + forM_ path $ \_ -> do + us <- asks usedStorage + atomically $ modifyTVar' us $ subtract (fromIntegral $ size fileInfo) lift $ incFileStat stat where deletedStats stats = do @@ -633,47 +643,50 @@ deleteOrBlockServerFile_ FileRec {filePath, fileInfo} stat storeAction = runExce getFileTime :: IO RoundedFileTime getFileTime = getRoundedSystemTime -expireServerFiles :: Maybe Int -> ExpirationConfig -> M () +expireServerFiles :: FileStoreClass s => Maybe Int -> ExpirationConfig -> M s () expireServerFiles itemDelay expCfg = do st <- asks store - usedStart <- readTVarIO $ usedStorage st + us <- asks usedStorage + usedStart <- readTVarIO us old <- liftIO $ expireBeforeEpoch expCfg - files' <- readTVarIO (files st) - logNote $ "Expiration check: " <> tshow (M.size files') <> " files" - forM_ (M.keys files') $ \sId -> do - mapM_ threadDelay itemDelay - atomically (expiredFilePath st sId old) - >>= mapM_ (maybeRemove $ delete st sId) - usedEnd <- readTVarIO $ usedStorage st + filesCount <- liftIO $ getFileCount st + logNote $ "Expiration check: " <> tshow filesCount <> " files" + expireLoop st us old + usedEnd <- readTVarIO us logNote $ 
"Used " <> mbs usedStart <> " -> " <> mbs usedEnd <> ", " <> mbs (usedStart - usedEnd) <> " reclaimed." where mbs bs = tshow (bs `div` 1048576) <> "mb" - maybeRemove del = maybe del (remove del) - remove del filePath = - ifM - (doesFileExist filePath) - ((removeFile filePath >> del) `catch` \(e :: SomeException) -> logError $ "failed to remove expired file " <> tshow filePath <> ": " <> tshow e) - del - delete st sId = do - withFileLog (`logDeleteFile` sId) - void . atomically $ deleteFile st sId -- will not update usedStorage if sId isn't in store - incFileStat filesExpired - -randomId :: Int -> M ByteString + expireLoop st us old = do + expired <- liftIO $ expiredFiles st old 10000 + forM_ expired $ \(sId, filePath_, fileSize) -> do + mapM_ threadDelay itemDelay + forM_ filePath_ $ \fp -> + whenM (doesFileExist fp) $ + removeFile fp `catch` \(e :: SomeException) -> logError $ "failed to remove expired file " <> tshow fp <> ": " <> tshow e + withFileLog (`logDeleteFile` sId) + liftIO (deleteFile st sId) >>= \case + Right () -> do + forM_ filePath_ $ \_ -> + atomically $ modifyTVar' us $ subtract (fromIntegral fileSize) + incFileStat filesExpired + Left _ -> pure () + unless (null expired) $ expireLoop st us old + +randomId :: Int -> M s ByteString randomId n = atomically . C.randomBytes n =<< asks random -getFileId :: M XFTPFileId +getFileId :: M s XFTPFileId getFileId = fmap EntityId . randomId =<< asks (fileIdSize . config) -withFileLog :: (StoreLog 'WriteMode -> IO a) -> M () +withFileLog :: (StoreLog 'WriteMode -> IO a) -> M s () withFileLog action = liftIO . mapM_ action =<< asks storeLog -incFileStat :: (FileServerStats -> IORef Int) -> M () +incFileStat :: (FileServerStats -> IORef Int) -> M s () incFileStat statSel = do stats <- asks serverStats liftIO $ atomicModifyIORef'_ (statSel stats) (+ 1) -saveServerStats :: M () +saveServerStats :: M s () saveServerStats = asks (serverStatsBackupFile . config) >>= mapM_ (\f -> asks serverStats >>= liftIO . 
getFileServerStatsData >>= liftIO . saveStats f) @@ -683,7 +696,7 @@ saveServerStats = B.writeFile f $ strEncode stats logNote "server stats saved" -restoreServerStats :: M () +restoreServerStats :: FileStoreClass s => M s () restoreServerStats = asks (serverStatsBackupFile . config) >>= mapM_ restoreStats where restoreStats f = whenM (doesFileExist f) $ do @@ -691,9 +704,9 @@ restoreServerStats = asks (serverStatsBackupFile . config) >>= mapM_ restoreStat liftIO (strDecode <$> B.readFile f) >>= \case Right d@FileServerStatsData {_filesCount = statsFilesCount, _filesSize = statsFilesSize} -> do s <- asks serverStats - FileStore {files, usedStorage} <- asks store - _filesCount <- M.size <$> readTVarIO files - _filesSize <- readTVarIO usedStorage + st <- asks store + _filesCount <- liftIO $ getFileCount st + _filesSize <- readTVarIO =<< asks usedStorage liftIO $ setFileServerStats s d {_filesCount, _filesSize} renameFile f $ f <> ".bak" logNote "server stats restored" diff --git a/src/Simplex/FileTransfer/Server/Env.hs b/src/Simplex/FileTransfer/Server/Env.hs index d4c58df66c..73773ff887 100644 --- a/src/Simplex/FileTransfer/Server/Env.hs +++ b/src/Simplex/FileTransfer/Server/Env.hs @@ -1,7 +1,9 @@ +{-# LANGUAGE CPP #-} {-# LANGUAGE DataKinds #-} {-# LANGUAGE DuplicateRecordFields #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE KindSignatures #-} +{-# LANGUAGE RankNTypes #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} @@ -9,13 +11,17 @@ module Simplex.FileTransfer.Server.Env ( XFTPServerConfig (..), + XFTPStoreConfig (..), XFTPEnv (..), XFTPRequest (..), defaultInactiveClientExpiration, defFileExpirationHours, defaultFileExpiration, newXFTPServerEnv, - countUsedStorage, + runWithStoreConfig, + checkFileStoreMode, + importToDatabase, + exportFromDatabase, ) where import Control.Logger.Simple @@ -23,7 +29,6 @@ import Control.Monad import Crypto.Random import Data.Int (Int64) import Data.List.NonEmpty (NonEmpty) -import 
qualified Data.Map.Strict as M import Data.Time.Clock (getCurrentTime) import Data.Word (Word32) import Data.X509.Validation (Fingerprint (..)) @@ -31,7 +36,18 @@ import Network.Socket import qualified Network.TLS as T import Simplex.FileTransfer.Protocol (FileCmd, FileInfo (..), XFTPFileId) import Simplex.FileTransfer.Server.Stats +import Data.Ini (Ini) import Simplex.FileTransfer.Server.Store +import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..)) +import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation) +#if defined(dbServerPostgres) +import Data.Functor (($>)) +import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore, importFileStore, exportFileStore) +import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg (..), defaultXFTPDBOpts) +import Simplex.Messaging.Server.CLI (iniDBOptions, settingIsOn) +import System.Directory (doesFileExist) +import System.Exit (exitFailure) +#endif import Simplex.FileTransfer.Server.StoreLog import Simplex.FileTransfer.Transport (VersionRangeXFTP) import qualified Simplex.Messaging.Crypto as C @@ -88,9 +104,16 @@ defaultInactiveClientExpiration = checkInterval = 3600 -- seconds, 1 hours } -data XFTPEnv = XFTPEnv +data XFTPStoreConfig s where + XSCMemory :: Maybe FilePath -> XFTPStoreConfig STMFileStore +#if defined(dbServerPostgres) + XSCDatabase :: PostgresFileStoreCfg -> XFTPStoreConfig PostgresFileStore +#endif + +data XFTPEnv s = XFTPEnv { config :: XFTPServerConfig, - store :: FileStore, + store :: s, + usedStorage :: TVar Int64, storeLog :: Maybe (StoreLog 'WriteMode), random :: TVar ChaChaDRG, serverIdentity :: C.KeyHash, @@ -109,13 +132,21 @@ defaultFileExpiration = checkInterval = 2 * 3600 -- seconds, 2 hours } -newXFTPServerEnv :: XFTPServerConfig -> IO XFTPEnv -newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCredentials, httpCredentials} = do +newXFTPServerEnv :: FileStoreClass s => XFTPStoreConfig s -> XFTPServerConfig -> IO (XFTPEnv s) 
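An aside on the storage-accounting change above: `usedStorage` moves out of the store and into `XFTPEnv`, computed once at startup via `getUsedStorage` and tracked in a mutable cell thereafter, with uploads admitted against the quota by a compare-and-update. The following is a minimal, self-contained model of that pattern — `Env`, `newEnv`, and `reserve` are hypothetical stand-ins (the real code uses a `TVar` and `stateTVar`; an `IORef` is used here to keep the sketch base-only):

```haskell
{-# LANGUAGE NamedFieldPuns #-}

import Data.Int (Int64)
import Data.IORef

-- Hypothetical model of the env fields added in the diff:
-- used storage is read from the backend once at init, then tracked here.
data Env = Env {usedStorage :: IORef Int64, quota :: Int64}

newEnv :: Int64 -> Int64 -> IO Env
newEnv used q = do
  us <- newIORef used
  pure Env {usedStorage = us, quota = q}

-- Mirrors the quota check in receiveServerFile's `reserve`:
-- admit the upload only if the new total stays within quota,
-- updating the counter atomically in the same step.
reserve :: Env -> Int64 -> IO Bool
reserve Env {usedStorage, quota} size =
  atomicModifyIORef' usedStorage $ \used ->
    let used' = used + size
     in if used' <= quota then (used', True) else (used, False)

main :: IO ()
main = do
  env <- newEnv 900 1000
  ok1 <- reserve env 100 -- fits exactly at the quota
  ok2 <- reserve env 1   -- would exceed the quota, rejected
  print (ok1, ok2)
```

Note the failure paths in the diff (`setFilePath` error, `receiveChunk` error, expiration) all subtract the reserved size back out of this counter, which only stays correct because the reservation and release both go through the same single cell.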
+newXFTPServerEnv storeCfg config@XFTPServerConfig {fileSizeQuota, xftpCredentials, httpCredentials} = do random <- C.newRandom - store <- newFileStore - storeLog <- mapM (`readWriteFileStore` store) storeLogFile - used <- countUsedStorage <$> readTVarIO (files store) - atomically $ writeTVar (usedStorage store) used + (store, storeLog) <- case storeCfg of + XSCMemory storeLogPath -> do + st <- newFileStore () + sl <- mapM (`readWriteFileStore` st) storeLogPath + pure (st, sl) +#if defined(dbServerPostgres) + XSCDatabase dbCfg -> do + st <- newFileStore dbCfg + pure (st, Nothing) +#endif + used <- getUsedStorage store + usedStorage <- newTVarIO used forM_ fileSizeQuota $ \quota -> do logNote $ "Total / available storage: " <> tshow quota <> " / " <> tshow (quota - used) when (quota < used) $ logWarn "WARNING: storage quota is less than used storage, no files can be uploaded!" @@ -123,12 +154,71 @@ newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCrede httpServerCreds <- mapM loadServerCredential httpCredentials Fingerprint fp <- loadFingerprint xftpCredentials serverStats <- newFileServerStats =<< getCurrentTime - pure XFTPEnv {config, store, storeLog, random, tlsServerCreds, httpServerCreds, serverIdentity = C.KeyHash fp, serverStats} - -countUsedStorage :: M.Map k FileRec -> Int64 -countUsedStorage = M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 + pure XFTPEnv {config, store, usedStorage, storeLog, random, tlsServerCreds, httpServerCreds, serverIdentity = C.KeyHash fp, serverStats} data XFTPRequest = XFTPReqNew FileInfo (NonEmpty RcvPublicAuthKey) (Maybe BasicAuth) | XFTPReqCmd XFTPFileId FileRec FileCmd | XFTPReqPing + +-- | Select and run the store config based on INI settings. +-- CPP guards for Postgres are handled here so Main.hs stays CPP-free. +runWithStoreConfig :: + Ini -> + String -> + Maybe FilePath -> + FilePath -> + MigrationConfirmation -> + (forall s. 
FileStoreClass s => XFTPStoreConfig s -> IO ()) -> + IO () +runWithStoreConfig _ini storeType storeLogFile_ _storeLogFilePath _confirmMigrations run = case storeType of + "memory" -> run $ XSCMemory storeLogFile_ +#if defined(dbServerPostgres) + "database" -> run $ XSCDatabase dbCfg + where + enableDbStoreLog' = settingIsOn "STORE_LOG" "db_store_log" _ini + dbStoreLogPath = enableDbStoreLog' $> _storeLogFilePath + dbCfg = PostgresFileStoreCfg {dbOpts = iniDBOptions _ini defaultXFTPDBOpts, dbStoreLogPath, confirmMigrations = _confirmMigrations} +#else + "database" -> error "Error: server binary is compiled without support for PostgreSQL database.\nPlease re-compile with `cabal build -fserver_postgres`." +#endif + _ -> error $ "Invalid store_files value: " <> storeType + +-- | Validate startup config when store_files=database. +checkFileStoreMode :: Ini -> String -> FilePath -> IO () +#if defined(dbServerPostgres) +checkFileStoreMode ini storeType storeLogFilePath = case storeType of + "database" -> do + storeLogExists <- doesFileExist storeLogFilePath + let dbStoreLogOn = settingIsOn "STORE_LOG" "db_store_log" ini + when (storeLogExists && isNothing_ dbStoreLogOn) $ do + putStrLn $ "Error: store log file " <> storeLogFilePath <> " exists but store_files is `database`." + putStrLn "Use `file-server database import` to migrate, or set `db_store_log: on`." + exitFailure + _ -> pure () + where + isNothing_ Nothing = True + isNothing_ _ = False +#else +checkFileStoreMode _ _ _ = pure () +#endif + +-- | Import StoreLog to PostgreSQL database. 
+importToDatabase :: FilePath -> Ini -> MigrationConfirmation -> IO () +#if defined(dbServerPostgres) +importToDatabase storeLogFilePath ini _confirmMigrations = do + let dbCfg = PostgresFileStoreCfg {dbOpts = iniDBOptions ini defaultXFTPDBOpts, dbStoreLogPath = Nothing, confirmMigrations = _confirmMigrations} + importFileStore storeLogFilePath dbCfg +#else +importToDatabase _ _ _ = error "Error: server binary is compiled without support for PostgreSQL database.\nPlease re-compile with `cabal build -fserver_postgres`." +#endif + +-- | Export PostgreSQL database to StoreLog. +exportFromDatabase :: FilePath -> Ini -> MigrationConfirmation -> IO () +#if defined(dbServerPostgres) +exportFromDatabase storeLogFilePath ini _confirmMigrations = do + let dbCfg = PostgresFileStoreCfg {dbOpts = iniDBOptions ini defaultXFTPDBOpts, dbStoreLogPath = Nothing, confirmMigrations = _confirmMigrations} + exportFileStore storeLogFilePath dbCfg +#else +exportFromDatabase _ _ _ = error "Error: server binary is compiled without support for PostgreSQL database.\nPlease re-compile with `cabal build -fserver_postgres`." 
+#endif diff --git a/src/Simplex/FileTransfer/Server/Main.hs b/src/Simplex/FileTransfer/Server/Main.hs index 101fe945bb..9f50453005 100644 --- a/src/Simplex/FileTransfer/Server/Main.hs +++ b/src/Simplex/FileTransfer/Server/Main.hs @@ -12,7 +12,7 @@ module Simplex.FileTransfer.Server.Main xftpServerCLI_, ) where -import Control.Monad (when) +import Control.Monad (unless, when) import Data.Either (fromRight) import Data.Functor (($>)) import Data.Ini (lookupValue, readIniFile) @@ -28,11 +28,12 @@ import Options.Applicative import Simplex.FileTransfer.Chunks import Simplex.FileTransfer.Description (FileSize (..)) import Simplex.FileTransfer.Server (runXFTPServer) -import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration) +import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration, runWithStoreConfig, checkFileStoreMode, importToDatabase, exportFromDatabase) import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange) import qualified Simplex.Messaging.Crypto as C import Simplex.Messaging.Encoding.String import Simplex.Messaging.Protocol (ProtoServerWithAuth (..), pattern XFTPServer) +import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation (..)) import Simplex.Messaging.Server.CLI import Simplex.Messaging.Server.Expiration import Simplex.Messaging.Server.Information (ServerPublicInfo (..)) @@ -66,9 +67,13 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do doesFileExist iniFile >>= \case True -> genOnline cfgPath certOpts _ -> exitError $ "Error: server is not initialized (" <> iniFile <> " does not exist).\nRun `" <> executableName <> " init`." 
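The `runWithStoreConfig` function above is the key dispatch trick: the store type `s` is existentially hidden behind the INI setting, and a rank-2 continuation (`forall s. FileStoreClass s => XFTPStoreConfig s -> IO ()`) lets `Main.hs` stay polymorphic and CPP-free while each branch picks a concrete backend. A minimal, self-contained model of the pattern, with toy stand-in names (`MemStore`, `storeName`, `runWithStore` are illustrative, not the real API):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Toy analogue of FileStoreClass: one method is enough to show dispatch.
class FileStoreClass s where
  storeName :: s -> String

-- Toy analogue of the STM backend (a Postgres instance would be a
-- second, CPP-guarded branch in the real code).
data MemStore = MemStore

instance FileStoreClass MemStore where
  storeName _ = "memory"

-- The rank-2 continuation: the caller supplies code polymorphic in s,
-- and each branch instantiates s at a concrete backend type. Without
-- the forall, the continuation's type would be fixed by the first
-- branch and the function could not offer multiple backends.
runWithStore :: String -> (forall s. FileStoreClass s => s -> IO ()) -> IO ()
runWithStore "memory" run = run MemStore
runWithStore other _ = error ("Invalid store_files value: " <> other)

main :: IO ()
main = runWithStore "memory" (putStrLn . storeName)
```

This is the same shape as SMP's queue-store selection: the `error` branch here corresponds to the diff's non-Postgres fallback for `"database"` when the binary is built without `server_postgres`.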
- Start -> + Start opts -> doesFileExist iniFile >>= \case - True -> readIniFile iniFile >>= either exitError runServer + True -> readIniFile iniFile >>= either exitError (runServer opts) + _ -> exitError $ "Error: server is not initialized (" <> iniFile <> " does not exist).\nRun `" <> executableName <> " init`." + Database cmd -> + doesFileExist iniFile >>= \case + True -> readIniFile iniFile >>= either exitError (runDatabaseCmd cmd) _ -> exitError $ "Error: server is not initialized (" <> iniFile <> " does not exist).\nRun `" <> executableName <> " init`." Delete -> do confirmOrExit @@ -84,6 +89,21 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do executableName = "file-server" storeLogFilePath = combine logPath "file-server-store.log" defaultStaticPath = combine logPath "www" + runDatabaseCmd cmd ini = case cmd of + SCImport -> do + storeLogExists <- doesFileExist storeLogFilePath + unless storeLogExists $ exitError $ "Error: store log file " <> storeLogFilePath <> " does not exist." + confirmOrExit + ("Import store log " <> storeLogFilePath <> " to PostgreSQL database?") + "Import cancelled." + importToDatabase storeLogFilePath ini MCYesUp + SCExport -> do + storeLogExists <- doesFileExist storeLogFilePath + when storeLogExists $ exitError $ "Error: store log file " <> storeLogFilePath <> " already exists." + confirmOrExit + ("Export PostgreSQL database to store log " <> storeLogFilePath <> "?") + "Export cancelled." 
+ exportFromDatabase storeLogFilePath ini MCConsole initializeServer InitOptions {enableStoreLog, signAlgorithm, ip, fqdn, filesPath, fileSizeQuota, webStaticPath = webStaticPath_} = do clearDirIfExists cfgPath clearDirIfExists logPath @@ -126,6 +146,14 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do \# and restoring it when the server is started.\n\ \# Log is compacted on start (deleted objects are removed).\n" <> ("enable: " <> onOff enableStoreLog <> "\n\n") + <> "# File storage mode: `memory` or `database` (PostgreSQL).\n\ + \store_files: memory\n\n\ + \# Database connection settings for PostgreSQL database (`store_files: database`).\n\ + \# db_connection: postgresql://xftp@/xftp_server_store\n\ + \# db_schema: xftp_server\n\ + \# db_pool_size: 10\n\n\ + \# Write database changes to store log file\n\ + \# db_store_log: off\n\n" <> "# Expire files after the specified number of hours.\n" <> ("expire_files_hours: " <> tshow defFileExpirationHours <> "\n\n") <> "log_stats: off\n\ @@ -173,7 +201,7 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do \# TLS credentials for HTTPS web server on the same port as XFTP.\n\ \# cert: " <> T.pack (cfgPath `combine` "web.crt") <> "\n\ \# key: " <> T.pack (cfgPath `combine` "web.key") <> "\n" - runServer ini = do + runServer StartOptions {confirmMigrations} ini = do hSetBuffering stdout LineBuffering hSetBuffering stderr LineBuffering fp <- checkSavedFingerprint cfgPath defaultX509Config @@ -194,7 +222,10 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do when (isJust webHttpPort || isJust webHttpsParams') $ serveStaticFiles EmbeddedWebParams {webStaticPath = path, webHttpPort, webHttpsParams = webHttpsParams'} Nothing -> pure () - runXFTPServer serverConfig + let storeType = fromRight "memory" $ T.unpack <$> lookupValue "STORE_LOG" "store_files" ini + checkFileStoreMode ini storeType storeLogFilePath + runWithStoreConfig ini storeType (storeLogFile serverConfig) 
storeLogFilePath confirmMigrations $
+      \storeCfg -> runXFTPServer storeCfg serverConfig
   where
     isOnion = \case THOnionHost _ -> True; _ -> False
     enableStoreLog = settingIsOn "STORE_LOG" "enable" ini
@@ -289,9 +320,16 @@ xftpServerCLI_ generateSite serveStaticFiles cfgPath logPath = do
 data CliCommand
   = Init InitOptions
   | OnlineCert CertOptions
-  | Start
+  | Start StartOptions
+  | Database StoreCmd
   | Delete
 
+data StoreCmd = SCImport | SCExport
+
+newtype StartOptions = StartOptions
+  { confirmMigrations :: MigrationConfirmation
+  }
+
 data InitOptions = InitOptions
   { enableStoreLog :: Bool,
     signAlgorithm :: SignAlgorithm,
@@ -308,7 +346,8 @@ cliCommandP cfgPath logPath iniFile =
   hsubparser
     ( command "init" (info (Init <$> initP) (progDesc $ "Initialize server - creates " <> cfgPath <> " and " <> logPath <> " directories and configuration files"))
         <> command "cert" (info (OnlineCert <$> certOptionsP) (progDesc $ "Generate new online TLS server credentials (configuration: " <> iniFile <> ")"))
-        <> command "start" (info (pure Start) (progDesc $ "Start server (configuration: " <> iniFile <> ")"))
+        <> command "start" (info (Start <$> startOptsP) (progDesc $ "Start server (configuration: " <> iniFile <> ")"))
+        <> command "database" (info (Database <$> storeCmdP) (progDesc "Import/export file store to/from PostgreSQL database"))
         <> command "delete" (info (pure Delete) (progDesc "Delete configuration and log files"))
     )
   where
@@ -375,3 +414,26 @@ cliCommandP cfgPath logPath iniFile =
             <> metavar "PATH"
         )
       pure InitOptions {enableStoreLog, signAlgorithm, ip, fqdn, filesPath, fileSizeQuota, webStaticPath}
+    startOptsP :: Parser StartOptions
+    startOptsP = do
+      confirmMigrations <-
+        option
+          parseConfirmMigrations
+          ( long "confirm-migrations"
+              <> metavar "CONFIRM_MIGRATIONS"
+              <> help "Confirm PostgreSQL database migration: up, down (default is manual confirmation)"
+              <> value MCConsole
+          )
+      pure StartOptions {confirmMigrations}
+      where
+        parseConfirmMigrations :: ReadM MigrationConfirmation
+        parseConfirmMigrations = eitherReader $ \case
+          "up" -> Right MCYesUp
+          "down" -> Right MCYesUpDown
+          _ -> Left "invalid migration confirmation, pass 'up' or 'down'"
+    storeCmdP :: Parser StoreCmd
+    storeCmdP =
+      hsubparser
+        ( command "import" (info (pure SCImport) (progDesc "Import store log file into PostgreSQL database"))
+            <> command "export" (info (pure SCExport) (progDesc "Export PostgreSQL database to store log file"))
+        )
diff --git a/src/Simplex/FileTransfer/Server/Store.hs b/src/Simplex/FileTransfer/Server/Store.hs
index eec481a21d..a3a4d57955 100644
--- a/src/Simplex/FileTransfer/Server/Store.hs
+++ b/src/Simplex/FileTransfer/Server/Store.hs
@@ -1,51 +1,30 @@
 {-# LANGUAGE DataKinds #-}
 {-# LANGUAGE GADTs #-}
-{-# LANGUAGE LambdaCase #-}
 {-# LANGUAGE NamedFieldPuns #-}
 {-# LANGUAGE OverloadedStrings #-}
-{-# LANGUAGE TupleSections #-}
+{-# LANGUAGE TypeFamilies #-}
 
 module Simplex.FileTransfer.Server.Store
-  ( FileStore (..),
+  ( FileStoreClass (..),
     FileRec (..),
     FileRecipient (..),
     RoundedFileTime,
-    newFileStore,
-    addFile,
-    setFilePath,
-    addRecipient,
-    deleteFile,
-    blockFile,
-    deleteRecipient,
-    expiredFilePath,
-    getFile,
-    ackFile,
     fileTimePrecision,
   )
 where
 
 import Control.Concurrent.STM
-import Control.Monad
 import qualified Data.Attoparsec.ByteString.Char8 as A
 import Data.Int (Int64)
 import Data.Set (Set)
-import qualified Data.Set as S
-import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..), XFTPFileId)
-import Simplex.FileTransfer.Transport (XFTPErrorType (..))
+import Data.Word (Word32)
+import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty, XFTPFileId)
+import Simplex.FileTransfer.Transport (XFTPErrorType)
 import qualified Simplex.Messaging.Crypto as C
 import Simplex.Messaging.Encoding.String
-import Simplex.Messaging.Protocol (BlockingInfo, RcvPublicAuthKey, RecipientId, SenderId)
-import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
+import Simplex.Messaging.Protocol (BlockingInfo, RecipientId, SenderId)
+import Simplex.Messaging.Server.QueueStore (ServerEntityStatus)
 import Simplex.Messaging.SystemTime
-import Simplex.Messaging.TMap (TMap)
-import qualified Simplex.Messaging.TMap as TM
-import Simplex.Messaging.Util (ifM, ($>>=))
-
-data FileStore = FileStore
-  { files :: TMap SenderId FileRec,
-    recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey),
-    usedStorage :: TVar Int64
-  }
 
 data FileRec = FileRec
   { senderId :: SenderId,
@@ -61,103 +40,33 @@
 type RoundedFileTime = RoundedSystemTime 3600
 
 fileTimePrecision :: Int64
 fileTimePrecision = 3600 -- truncate creation time to 1 hour
 
-data FileRecipient = FileRecipient RecipientId RcvPublicAuthKey
+data FileRecipient = FileRecipient RecipientId C.APublicAuthKey
   deriving (Show)
 
 instance StrEncoding FileRecipient where
   strEncode (FileRecipient rId rKey) = strEncode rId <> ":" <> strEncode rKey
   strP = FileRecipient <$> strP <* A.char ':' <*> strP
 
-newFileStore :: IO FileStore
-newFileStore = do
-  files <- TM.emptyIO
-  recipients <- TM.emptyIO
-  usedStorage <- newTVarIO 0
-  pure FileStore {files, recipients, usedStorage}
-
-addFile :: FileStore -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> STM (Either XFTPErrorType ())
-addFile FileStore {files} sId fileInfo createdAt status =
-  ifM (TM.member sId files) (pure $ Left DUPLICATE_) $ do
-    f <- newFileRec sId fileInfo createdAt status
-    TM.insert sId f files
-    pure $ Right ()
-
-newFileRec :: SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> STM FileRec
-newFileRec senderId fileInfo createdAt status = do
-  recipientIds <- newTVar S.empty
-  filePath <- newTVar Nothing
-  fileStatus <- newTVar status
-  pure FileRec {senderId, fileInfo, filePath, recipientIds, createdAt, fileStatus}
-
-setFilePath :: FileStore -> SenderId -> FilePath -> STM (Either XFTPErrorType ())
-setFilePath st sId fPath =
-  withFile st sId $ \FileRec {fileInfo, filePath} -> do
-    writeTVar filePath (Just fPath)
-    modifyTVar' (usedStorage st) (+ fromIntegral (size fileInfo))
-    pure $ Right ()
-
-addRecipient :: FileStore -> SenderId -> FileRecipient -> STM (Either XFTPErrorType ())
-addRecipient st@FileStore {recipients} senderId (FileRecipient rId rKey) =
-  withFile st senderId $ \FileRec {recipientIds} -> do
-    rIds <- readTVar recipientIds
-    mem <- TM.member rId recipients
-    if rId `S.member` rIds || mem
-      then pure $ Left DUPLICATE_
-      else do
-        writeTVar recipientIds $! S.insert rId rIds
-        TM.insert rId (senderId, rKey) recipients
-        pure $ Right ()
-
--- this function must be called after the file is deleted from the file system
-deleteFile :: FileStore -> SenderId -> STM (Either XFTPErrorType ())
-deleteFile FileStore {files, recipients, usedStorage} senderId = do
-  TM.lookupDelete senderId files >>= \case
-    Just FileRec {fileInfo, recipientIds} -> do
-      readTVar recipientIds >>= mapM_ (`TM.delete` recipients)
-      modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo)
-      pure $ Right ()
-    _ -> pure $ Left AUTH
-
--- this function must be called after the file is deleted from the file system
-blockFile :: FileStore -> SenderId -> BlockingInfo -> Bool -> STM (Either XFTPErrorType ())
-blockFile st@FileStore {usedStorage} senderId info deleted =
-  withFile st senderId $ \FileRec {fileInfo, fileStatus} -> do
-    when deleted $ modifyTVar' usedStorage $ subtract (fromIntegral $ size fileInfo)
-    writeTVar fileStatus $! EntityBlocked info
-    pure $ Right ()
-
-deleteRecipient :: FileStore -> RecipientId -> FileRec -> STM ()
-deleteRecipient FileStore {recipients} rId FileRec {recipientIds} = do
-  TM.delete rId recipients
-  modifyTVar' recipientIds $ S.delete rId
-
-getFile :: FileStore -> SFileParty p -> XFTPFileId -> STM (Either XFTPErrorType (FileRec, C.APublicAuthKey))
-getFile st party fId = case party of
-  SFSender -> withFile st fId $ pure . Right . (\f -> (f, sndKey $ fileInfo f))
-  SFRecipient ->
-    TM.lookup fId (recipients st) >>= \case
-      Just (sId, rKey) -> withFile st sId $ pure . Right . (,rKey)
-      _ -> pure $ Left AUTH
-
-expiredFilePath :: FileStore -> XFTPFileId -> Int64 -> STM (Maybe (Maybe FilePath))
-expiredFilePath FileStore {files} sId old =
-  TM.lookup sId files
-    $>>= \FileRec {filePath, createdAt = RoundedSystemTime createdAt} ->
-      if createdAt + fileTimePrecision < old
-        then Just <$> readTVar filePath
-        else pure Nothing
-
-ackFile :: FileStore -> RecipientId -> STM (Either XFTPErrorType ())
-ackFile st@FileStore {recipients} recipientId = do
-  TM.lookupDelete recipientId recipients >>= \case
-    Just (sId, _) ->
-      withFile st sId $ \FileRec {recipientIds} -> do
-        modifyTVar' recipientIds $ S.delete recipientId
-        pure $ Right ()
-    _ -> pure $ Left AUTH
-
-withFile :: FileStore -> SenderId -> (FileRec -> STM (Either XFTPErrorType a)) -> STM (Either XFTPErrorType a)
-withFile FileStore {files} sId a =
-  TM.lookup sId files >>= \case
-    Just f -> a f
-    _ -> pure $ Left AUTH
+class FileStoreClass s where
+  type FileStoreConfig s
+
+  -- Lifecycle
+  newFileStore :: FileStoreConfig s -> IO s
+  closeFileStore :: s -> IO ()
+
+  -- File operations
+  addFile :: s -> SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> IO (Either XFTPErrorType ())
+  setFilePath :: s -> SenderId -> FilePath -> IO (Either XFTPErrorType ())
+  addRecipient :: s -> SenderId -> FileRecipient -> IO (Either XFTPErrorType ())
+  getFile :: s -> SFileParty p -> XFTPFileId -> IO (Either XFTPErrorType (FileRec, C.APublicAuthKey))
+  deleteFile :: s -> SenderId -> IO (Either XFTPErrorType ())
+  blockFile :: s -> SenderId -> BlockingInfo -> Bool -> IO (Either XFTPErrorType ())
+  deleteRecipient :: s -> RecipientId -> FileRec -> IO ()
+  ackFile :: s -> RecipientId -> IO (Either XFTPErrorType ())
+
+  -- Expiration (with LIMIT for Postgres; called in a loop until empty)
+  expiredFiles :: s -> Int64 -> Int -> IO [(SenderId, Maybe FilePath, Word32)]
+
+  -- Storage and stats (for init-time computation)
+  getUsedStorage :: s -> IO Int64
+  getFileCount :: s -> IO Int
diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres.hs b/src/Simplex/FileTransfer/Server/Store/Postgres.hs
new file mode 100644
index 0000000000..8922149bf7
--- /dev/null
+++ b/src/Simplex/FileTransfer/Server/Store/Postgres.hs
@@ -0,0 +1,369 @@
+{-# LANGUAGE BangPatterns #-}
+{-# LANGUAGE DataKinds #-}
+{-# LANGUAGE GADTs #-}
+{-# LANGUAGE LambdaCase #-}
+{-# LANGUAGE NamedFieldPuns #-}
+{-# LANGUAGE OverloadedStrings #-}
+{-# LANGUAGE ScopedTypeVariables #-}
+{-# LANGUAGE TypeFamilies #-}
+
+module Simplex.FileTransfer.Server.Store.Postgres
+  ( PostgresFileStore (..),
+    withDB,
+    withDB',
+    handleDuplicate,
+    assertUpdated,
+    withLog,
+    importFileStore,
+    exportFileStore,
+  )
+where
+
+import qualified Control.Exception as E
+import Control.Logger.Simple
+import Control.Monad
+import Control.Monad.Except
+import Control.Monad.IO.Class
+import Control.Monad.Trans.Except (throwE)
+import Data.ByteString (ByteString)
+import qualified Data.ByteString.Char8 as B
+import Data.ByteString.Builder (Builder)
+import qualified Data.ByteString.Builder as BB
+import qualified Data.ByteString.Lazy as LB
+import Data.Functor (($>))
+import Data.Int (Int32, Int64)
+import Data.List (intersperse)
+import qualified Data.List.NonEmpty as L
+import qualified Data.Map.Strict as M
+import qualified Data.Set as S
+import Data.Text (Text)
+import Data.Word (Word32)
+import Database.PostgreSQL.Simple (Binary (..), Only (..), SqlError)
+import qualified Database.PostgreSQL.Simple as DB
+import qualified Database.PostgreSQL.Simple.Copy as DB
+import Database.PostgreSQL.Simple.Errors (ConstraintViolation (..), constraintViolation)
+import Database.PostgreSQL.Simple.ToField (Action (..), ToField (..))
+import GHC.IO (catchAny)
+import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..))
+import Simplex.FileTransfer.Server.Store
+import Simplex.FileTransfer.Server.Store.Postgres.Config
+import Simplex.FileTransfer.Server.Store.Postgres.Migrations (xftpServerMigrations)
+import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))
+import Simplex.FileTransfer.Server.StoreLog
+import Simplex.FileTransfer.Transport (XFTPErrorType (..))
+import Simplex.Messaging.Agent.Store.Postgres (closeDBStore, createDBStore)
+import Simplex.Messaging.Agent.Store.Postgres.Common (DBStore, withTransaction)
+import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..))
+import Simplex.Messaging.Agent.Store.Shared (MigrationConfig (..), MigrationConfirmation (..))
+import qualified Simplex.Messaging.Crypto as C
+import Simplex.Messaging.Protocol (RcvPublicAuthKey, RecipientId, SenderId)
+import Simplex.Messaging.Transport (EntityId (..))
+import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
+import Simplex.Messaging.Server.QueueStore.Postgres ()
+import Simplex.Messaging.Server.StoreLog (openWriteStoreLog)
+import Simplex.Messaging.Util (tshow)
+import System.Directory (renameFile)
+import System.Exit (exitFailure)
+import System.IO (IOMode (..), hFlush, stdout)
+import UnliftIO.STM
+
+data PostgresFileStore = PostgresFileStore
+  { dbStore :: DBStore,
+    dbStoreLog :: Maybe (StoreLog 'WriteMode)
+  }
+
+instance FileStoreClass PostgresFileStore where
+  type FileStoreConfig PostgresFileStore = PostgresFileStoreCfg
+
+  newFileStore PostgresFileStoreCfg {dbOpts, dbStoreLogPath, confirmMigrations} = do
+    dbStore <- either err pure =<< createDBStore dbOpts xftpServerMigrations (MigrationConfig confirmMigrations Nothing)
+    dbStoreLog <- mapM (openWriteStoreLog True) dbStoreLogPath
+    pure PostgresFileStore {dbStore, dbStoreLog}
+    where
+      err e = do
+        logError $ "STORE: newFileStore, error opening PostgreSQL database, " <> tshow e
+        exitFailure
+
+  closeFileStore PostgresFileStore {dbStore, dbStoreLog} = do
+    closeDBStore dbStore
+    mapM_ closeStoreLog dbStoreLog
+
+  addFile st sId fileInfo@FileInfo {sndKey, size, digest} createdAt status =
+    E.uninterruptibleMask_ $ runExceptT $ do
+      void $ withDB "addFile" st $ \db ->
+        E.try
+          ( DB.execute
+              db
+              "INSERT INTO files (sender_id, file_size, file_digest, sender_key, created_at, status) VALUES (?,?,?,?,?,?)"
+              (sId, (fromIntegral size :: Int32), Binary digest, Binary (C.encodePubKey sndKey), createdAt, status)
+          )
+          >>= either handleDuplicate (pure . Right)
+      withLog "addFile" st $ \s -> logAddFile s sId fileInfo createdAt status
+
+  setFilePath st sId fPath = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "setFilePath" st $ \db ->
+      DB.execute db "UPDATE files SET file_path = ? WHERE sender_id = ? AND file_path IS NULL AND status = 'active'" (fPath, sId)
+    withLog "setFilePath" st $ \s -> logPutFile s sId fPath
+
+  addRecipient st senderId (FileRecipient rId rKey) = E.uninterruptibleMask_ $ runExceptT $ do
+    void $ withDB "addRecipient" st $ \db ->
+      E.try
+        ( DB.execute
+            db
+            "INSERT INTO recipients (recipient_id, sender_id, recipient_key) VALUES (?,?,?)"
+            (rId, senderId, Binary (C.encodePubKey rKey))
+        )
+        >>= either handleDuplicate (pure . Right)
+    withLog "addRecipient" st $ \s -> logAddRecipients s senderId (pure $ FileRecipient rId rKey)
+
+  getFile st party fId = runExceptT $ case party of
+    SFSender ->
+      withDB "getFile" st $ \db -> do
+        rs <-
+          DB.query
+            db
+            "SELECT file_size, file_digest, sender_key, file_path, created_at, status FROM files WHERE sender_id = ?"
+            (Only fId)
+        case rs of
+          [(size, digest, sndKeyBs, path, createdAt, status)] ->
+            case C.decodePubKey sndKeyBs of
+              Right sndKey -> do
+                let fileInfo = FileInfo {sndKey, size = fromIntegral (size :: Int32), digest}
+                fr <- mkFileRec fId fileInfo path createdAt status
+                pure $ Right (fr, sndKey)
+              Left _ -> pure $ Left INTERNAL
+          _ -> pure $ Left AUTH
+    SFRecipient ->
+      withDB "getFile" st $ \db -> do
+        rs <-
+          DB.query
+            db
+            "SELECT f.file_size, f.file_digest, f.sender_key, f.file_path, f.created_at, f.status, f.sender_id, r.recipient_key FROM recipients r JOIN files f ON r.sender_id = f.sender_id WHERE r.recipient_id = ?"
+            (Only fId)
+        case rs of
+          [(size, digest, sndKeyBs, path, createdAt, status, senderId, rcpKeyBs)] ->
+            case (C.decodePubKey sndKeyBs, C.decodePubKey rcpKeyBs) of
+              (Right sndKey, Right rcpKey) -> do
+                let fileInfo = FileInfo {sndKey, size = fromIntegral (size :: Int32), digest}
+                fr <- mkFileRec senderId fileInfo path createdAt status
+                pure $ Right (fr, rcpKey)
+              _ -> pure $ Left INTERNAL
+          _ -> pure $ Left AUTH
+
+  deleteFile st sId = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "deleteFile" st $ \db ->
+      DB.execute db "DELETE FROM files WHERE sender_id = ?" (Only sId)
+    withLog "deleteFile" st $ \s -> logDeleteFile s sId
+
+  blockFile st sId info _deleted = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "blockFile" st $ \db ->
+      DB.execute db "UPDATE files SET status = ? WHERE sender_id = ?" (EntityBlocked info, sId)
+    withLog "blockFile" st $ \s -> logBlockFile s sId info
+
+  deleteRecipient st rId _fr =
+    void $ runExceptT $ withDB' "deleteRecipient" st $ \db ->
+      DB.execute db "DELETE FROM recipients WHERE recipient_id = ?" (Only rId)
+
+  ackFile st rId = E.uninterruptibleMask_ $ runExceptT $ do
+    assertUpdated $ withDB' "ackFile" st $ \db ->
+      DB.execute db "DELETE FROM recipients WHERE recipient_id = ?" (Only rId)
+    withLog "ackFile" st $ \s -> logAckFile s rId
+
+  expiredFiles st old limit =
+    fmap toResult $ withTransaction (dbStore st) $ \db ->
+      DB.query
+        db
+        "SELECT sender_id, file_path, file_size FROM files WHERE created_at + ? < ? ORDER BY created_at LIMIT ?"
+        (fileTimePrecision, old, limit)
+    where
+      toResult :: [(SenderId, Maybe FilePath, Int32)] -> [(SenderId, Maybe FilePath, Word32)]
+      toResult = map (\(sId, path, size) -> (sId, path, fromIntegral size))
+
+  getUsedStorage st =
+    withTransaction (dbStore st) $ \db -> do
+      [Only total] <- DB.query_ db "SELECT COALESCE(SUM(file_size::INT8), 0)::INT8 FROM files"
+      pure total
+
+  getFileCount st =
+    withTransaction (dbStore st) $ \db -> do
+      [Only count] <- DB.query_ db "SELECT COUNT(*) FROM files"
+      pure (fromIntegral (count :: Int64))
+
+-- Internal helpers
+
+mkFileRec :: SenderId -> FileInfo -> Maybe FilePath -> RoundedFileTime -> ServerEntityStatus -> IO FileRec
+mkFileRec senderId fileInfo path createdAt status = do
+  filePath <- newTVarIO path
+  recipientIds <- newTVarIO S.empty
+  fileStatus <- newTVarIO status
+  pure FileRec {senderId, fileInfo, filePath, recipientIds, createdAt, fileStatus}
+
+-- DB helpers
+
+withDB :: forall a. Text -> PostgresFileStore -> (DB.Connection -> IO (Either XFTPErrorType a)) -> ExceptT XFTPErrorType IO a
+withDB op st action =
+  ExceptT $ E.try (withTransaction (dbStore st) action) >>= either logErr pure
+  where
+    logErr :: E.SomeException -> IO (Either XFTPErrorType a)
+    logErr e = logError ("STORE: " <> err) $> Left INTERNAL
+      where
+        err = op <> ", withDB, " <> tshow e
+
+withDB' :: Text -> PostgresFileStore -> (DB.Connection -> IO a) -> ExceptT XFTPErrorType IO a
+withDB' op st action = withDB op st $ fmap Right . action
+
+assertUpdated :: ExceptT XFTPErrorType IO Int64 -> ExceptT XFTPErrorType IO ()
+assertUpdated = (>>= \n -> when (n == 0) (throwE AUTH))
+
+handleDuplicate :: SqlError -> IO (Either XFTPErrorType a)
+handleDuplicate e = case constraintViolation e of
+  Just (UniqueViolation _) -> pure $ Left DUPLICATE_
+  Just (ForeignKeyViolation _ _) -> pure $ Left AUTH
+  _ -> E.throwIO e
+
+withLog :: MonadIO m => Text -> PostgresFileStore -> (StoreLog 'WriteMode -> IO ()) -> m ()
+withLog op PostgresFileStore {dbStoreLog} action =
+  forM_ dbStoreLog $ \sl -> liftIO $ action sl `catchAny` \e ->
+    logWarn $ "STORE: " <> op <> ", withLog, " <> tshow e
+
+-- Import: StoreLog -> PostgreSQL
+
+importFileStore :: FilePath -> PostgresFileStoreCfg -> IO ()
+importFileStore storeLogFilePath dbCfg = do
+  putStrLn $ "Reading store log: " <> storeLogFilePath
+  stmStore <- newFileStore () :: IO STMFileStore
+  sl <- readWriteFileStore storeLogFilePath stmStore
+  closeStoreLog sl
+  allFiles <- readTVarIO (files stmStore)
+  allRcps <- readTVarIO (recipients stmStore)
+  let fileCount = M.size allFiles
+      rcpCount = M.size allRcps
+  putStrLn $ "Loaded " <> show fileCount <> " files, " <> show rcpCount <> " recipients."
+  let dbCfg' = dbCfg {dbOpts = (dbOpts dbCfg) {createSchema = True}, confirmMigrations = MCYesUp}
+  pgStore <- newFileStore dbCfg' :: IO PostgresFileStore
+  existingCount <- getFileCount pgStore
+  when (existingCount > 0) $ do
+    putStrLn $ "WARNING: database already contains " <> show existingCount <> " files. Import will fail on duplicate keys."
+    putStrLn "Drop the existing schema first or use a fresh database."
+    exitFailure
+  putStrLn "Importing files..."
+  fCnt <- withTransaction (dbStore pgStore) $ \db -> do
+    DB.copy_
+      db
+      "COPY files (sender_id, file_size, file_digest, sender_key, file_path, created_at, status) FROM STDIN WITH (FORMAT csv)"
+    iforM_ (M.toList allFiles) $ \i (sId, fr) -> do
+      DB.putCopyData db =<< fileRecToCSV sId fr
+      when (i > 0 && i `mod` 10000 == 0) $ putStr (" " <> show i <> " files\r") >> hFlush stdout
+    DB.putCopyEnd db
+    [Only cnt] <- DB.query_ db "SELECT COUNT(*) FROM files"
+    pure (cnt :: Int64)
+  putStrLn $ "Imported " <> show fCnt <> " files."
+  putStrLn "Importing recipients..."
+  rCnt <- withTransaction (dbStore pgStore) $ \db -> do
+    DB.copy_
+      db
+      "COPY recipients (recipient_id, sender_id, recipient_key) FROM STDIN WITH (FORMAT csv)"
+    iforM_ (M.toList allRcps) $ \i (rId, (sId, rKey)) -> do
+      DB.putCopyData db $ recipientToCSV rId sId rKey
+      when (i > 0 && i `mod` 10000 == 0) $ putStr (" " <> show i <> " recipients\r") >> hFlush stdout
+    DB.putCopyEnd db
+    [Only cnt] <- DB.query_ db "SELECT COUNT(*) FROM recipients"
+    pure (cnt :: Int64)
+  putStrLn $ "Imported " <> show rCnt <> " recipients."
+  when (fromIntegral fileCount /= fCnt) $
+    putStrLn $ "WARNING: expected " <> show fileCount <> " files, got " <> show fCnt
+  when (fromIntegral rcpCount /= rCnt) $
+    putStrLn $ "WARNING: expected " <> show rcpCount <> " recipients, got " <> show rCnt
+  closeFileStore pgStore
+  renameFile storeLogFilePath (storeLogFilePath <> ".bak")
+  putStrLn $ "Store log renamed to " <> storeLogFilePath <> ".bak"
+
+-- Export: PostgreSQL -> StoreLog
+
+exportFileStore :: FilePath -> PostgresFileStoreCfg -> IO ()
+exportFileStore storeLogFilePath dbCfg = do
+  pgStore <- newFileStore dbCfg :: IO PostgresFileStore
+  sl <- openWriteStoreLog False storeLogFilePath
+  putStrLn "Exporting files..."
+  -- Load all recipients into a map for lookup
+  rcpMap <- withTransaction (dbStore pgStore) $ \db ->
+    DB.fold_
+      db
+      "SELECT recipient_id, sender_id, recipient_key FROM recipients ORDER BY sender_id"
+      M.empty
+      ( \acc (rId, sId, rKeyBs :: ByteString) ->
+          case C.decodePubKey rKeyBs of
+            Right rKey -> pure $! M.insertWith (++) sId [FileRecipient rId rKey] acc
+            Left _ -> putStrLn ("WARNING: invalid recipient key for " <> show rId) $> acc
+      )
+  -- Fold over files, writing StoreLog records
+  (!fCnt, !rCnt) <- withTransaction (dbStore pgStore) $ \db ->
+    DB.fold_
+      db
+      "SELECT sender_id, file_size, file_digest, sender_key, file_path, created_at, status FROM files ORDER BY created_at"
+      (0 :: Int, 0 :: Int)
+      ( \(!fc, !rc) (sId, size :: Int32, digest :: ByteString, sndKeyBs :: ByteString, path :: Maybe String, createdAt, status) ->
+          case C.decodePubKey sndKeyBs of
+            Right sndKey -> do
+              let fileInfo = FileInfo {sndKey, size = fromIntegral size, digest}
+              logAddFile sl sId fileInfo createdAt status
+              let rcps = M.findWithDefault [] sId rcpMap
+                  rc' = rc + length rcps
+              forM_ (L.nonEmpty rcps) $ logAddRecipients sl sId
+              forM_ path $ logPutFile sl sId
+              pure (fc + 1, rc')
+            Left _ -> do
+              putStrLn $ "WARNING: invalid sender key for " <> show sId
+              pure (fc, rc)
+      )
+  closeStoreLog sl
+  closeFileStore pgStore
+  putStrLn $ "Exported " <> show fCnt <> " files, " <> show rCnt <> " recipients to " <> storeLogFilePath
+
+-- CSV helpers for COPY protocol
+
+iforM_ :: Monad m => [a] -> (Int -> a -> m ()) -> m ()
+iforM_ xs f = zipWithM_ f [0 ..] xs
+
+fileRecToCSV :: SenderId -> FileRec -> IO ByteString
+fileRecToCSV sId FileRec {fileInfo = FileInfo {sndKey, size, digest}, filePath, createdAt, fileStatus} = do
+  path <- readTVarIO filePath
+  status <- readTVarIO fileStatus
+  pure $ LB.toStrict $ BB.toLazyByteString $ mconcat (BB.char7 ',' `intersperse` fields path status) <> BB.char7 '\n'
+  where
+    fields path status =
+      [ renderField (toField (Binary (unEntityId sId))),
+        renderField (toField (fromIntegral size :: Int32)),
+        renderField (toField (Binary digest)),
+        renderField (toField (Binary (C.encodePubKey sndKey))),
+        nullable (toField <$> path),
+        renderField (toField createdAt),
+        quotedField (toField status)
+      ]
+
+recipientToCSV :: RecipientId -> SenderId -> RcvPublicAuthKey -> ByteString
+recipientToCSV rId sId rKey =
+  LB.toStrict $ BB.toLazyByteString $ mconcat (BB.char7 ',' `intersperse` fields) <> BB.char7 '\n'
+  where
+    fields =
+      [ renderField (toField (Binary (unEntityId rId))),
+        renderField (toField (Binary (unEntityId sId))),
+        renderField (toField (Binary (C.encodePubKey rKey)))
+      ]
+
+renderField :: Action -> Builder
+renderField = \case
+  Plain bld -> bld
+  Escape s -> BB.byteString s
+  EscapeByteA s -> BB.string7 "\\x" <> BB.byteStringHex s
+  EscapeIdentifier s -> BB.byteString s
+  Many as -> mconcat (map renderField as)
+
+nullable :: Maybe Action -> Builder
+nullable = maybe mempty renderField
+
+quotedField :: Action -> Builder
+quotedField a = BB.char7 '"' <> escapeQuotes (renderField a) <> BB.char7 '"'
+  where
+    escapeQuotes bld =
+      let bs = LB.toStrict $ BB.toLazyByteString bld
+       in BB.byteString $ B.concatMap (\c -> if c == '"' then "\"\"" else B.singleton c) bs
diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs b/src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs
new file mode 100644
index 0000000000..a0dd5d7bad
--- /dev/null
+++ b/src/Simplex/FileTransfer/Server/Store/Postgres/Config.hs
@@ -0,0 +1,25 @@
+{-# LANGUAGE OverloadedStrings #-}
+
+module Simplex.FileTransfer.Server.Store.Postgres.Config
+  ( PostgresFileStoreCfg (..),
+    defaultXFTPDBOpts,
+  )
+where
+
+import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..))
+import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation)
+
+data PostgresFileStoreCfg = PostgresFileStoreCfg
+  { dbOpts :: DBOpts,
+    dbStoreLogPath :: Maybe FilePath,
+    confirmMigrations :: MigrationConfirmation
+  }
+
+defaultXFTPDBOpts :: DBOpts
+defaultXFTPDBOpts =
+  DBOpts
+    { connstr = "postgresql://xftp@/xftp_server_store",
+      schema = "xftp_server",
+      poolSize = 10,
+      createSchema = False
+    }
diff --git a/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs b/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs
new file mode 100644
index 0000000000..84f6b209e7
--- /dev/null
+++ b/src/Simplex/FileTransfer/Server/Store/Postgres/Migrations.hs
@@ -0,0 +1,60 @@
+{-# LANGUAGE NamedFieldPuns #-}
+{-# LANGUAGE OverloadedStrings #-}
+{-# LANGUAGE QuasiQuotes #-}
+
+module Simplex.FileTransfer.Server.Store.Postgres.Migrations
+  ( xftpServerMigrations,
+  )
+where
+
+import Data.List (sortOn)
+import Data.Text (Text)
+import Simplex.Messaging.Agent.Store.Shared
+import Text.RawString.QQ (r)
+
+xftpSchemaMigrations :: [(String, Text, Maybe Text)]
+xftpSchemaMigrations =
+  [ ("20260325_initial", m20260325_initial, Nothing),
+    ("20260402_file_size_check", m20260402_file_size_check, Just down_m20260402_file_size_check)
+  ]
+
+-- | The list of migrations in ascending order by date
+xftpServerMigrations :: [Migration]
+xftpServerMigrations = sortOn name $ map migration xftpSchemaMigrations
+  where
+    migration (name, up, down) = Migration {name, up, down = down}
+
+m20260325_initial :: Text
+m20260325_initial =
+  [r|
+CREATE TABLE files (
+  sender_id BYTEA NOT NULL PRIMARY KEY,
+  file_size INT4 NOT NULL,
+  file_digest BYTEA NOT NULL,
+  sender_key BYTEA NOT NULL,
+  file_path TEXT,
+  created_at INT8 NOT NULL,
+  status TEXT NOT NULL DEFAULT 'active'
+);
+
+CREATE TABLE recipients (
+  recipient_id BYTEA NOT NULL PRIMARY KEY,
+  sender_id BYTEA NOT NULL REFERENCES files ON DELETE CASCADE,
+  recipient_key BYTEA NOT NULL
+);
+
+CREATE INDEX idx_recipients_sender_id ON recipients (sender_id);
+CREATE INDEX idx_files_created_at ON files (created_at);
+|]
+
+m20260402_file_size_check :: Text
+m20260402_file_size_check =
+  [r|
+ALTER TABLE files ADD CONSTRAINT check_file_size_positive CHECK (file_size > 0);
+|]
+
+down_m20260402_file_size_check :: Text
+down_m20260402_file_size_check =
+  [r|
+ALTER TABLE files DROP CONSTRAINT check_file_size_positive;
+|]
diff --git a/src/Simplex/FileTransfer/Server/Store/STM.hs b/src/Simplex/FileTransfer/Server/Store/STM.hs
new file mode 100644
index 0000000000..7859d06aa4
--- /dev/null
+++ b/src/Simplex/FileTransfer/Server/Store/STM.hs
@@ -0,0 +1,127 @@
+{-# LANGUAGE DataKinds #-}
+{-# LANGUAGE GADTs #-}
+{-# LANGUAGE LambdaCase #-}
+{-# LANGUAGE NamedFieldPuns #-}
+{-# LANGUAGE TupleSections #-}
+{-# LANGUAGE TypeFamilies #-}
+
+module Simplex.FileTransfer.Server.Store.STM
+  ( STMFileStore (..),
+  )
+where
+
+import Control.Concurrent.STM
+import Control.Monad (forM)
+import Data.Int (Int64)
+import qualified Data.Map.Strict as M
+import Data.Maybe (catMaybes)
+import Data.Set (Set)
+import qualified Data.Set as S
+import Data.Word (Word32)
+import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..), XFTPFileId)
+import Simplex.FileTransfer.Server.Store
+import Simplex.FileTransfer.Transport (XFTPErrorType (..))
+import qualified Simplex.Messaging.Crypto as C
+import Simplex.Messaging.Protocol (BlockingInfo, RcvPublicAuthKey, RecipientId, SenderId)
+import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
+import Simplex.Messaging.SystemTime
+import Simplex.Messaging.TMap (TMap)
+import qualified Simplex.Messaging.TMap as TM
+import Simplex.Messaging.Util (ifM)
+
+data STMFileStore = STMFileStore
+  { files :: TMap SenderId FileRec,
+    recipients :: TMap RecipientId (SenderId, RcvPublicAuthKey)
+  }
+
+instance FileStoreClass STMFileStore where
+  type FileStoreConfig STMFileStore = ()
+
+  newFileStore () = do
+    files <- TM.emptyIO
+    recipients <- TM.emptyIO
+    pure STMFileStore {files, recipients}
+
+  closeFileStore _ = pure ()
+
+  addFile STMFileStore {files} sId fileInfo createdAt status = atomically $
+    ifM (TM.member sId files) (pure $ Left DUPLICATE_) $ do
+      f <- newFileRec sId fileInfo createdAt status
+      TM.insert sId f files
+      pure $ Right ()
+
+  setFilePath st sId fPath = atomically $
+    withSTMFile st sId $ \FileRec {filePath} -> do
+      writeTVar filePath (Just fPath)
+      pure $ Right ()
+
+  addRecipient st@STMFileStore {recipients} senderId (FileRecipient rId rKey) = atomically $
+    withSTMFile st senderId $ \FileRec {recipientIds} -> do
+      rIds <- readTVar recipientIds
+      mem <- TM.member rId recipients
+      if rId `S.member` rIds || mem
+        then pure $ Left DUPLICATE_
+        else do
+          writeTVar recipientIds $! S.insert rId rIds
+          TM.insert rId (senderId, rKey) recipients
+          pure $ Right ()
+
+  getFile st party fId = atomically $ case party of
+    SFSender -> withSTMFile st fId $ pure . Right . (\f -> (f, sndKey $ fileInfo f))
+    SFRecipient ->
+      TM.lookup fId (recipients st) >>= \case
+        Just (sId, rKey) -> withSTMFile st sId $ pure . Right . (,rKey)
+        _ -> pure $ Left AUTH
+
+  deleteFile STMFileStore {files, recipients} senderId = atomically $ do
+    TM.lookupDelete senderId files >>= \case
+      Just FileRec {recipientIds} -> do
+        readTVar recipientIds >>= mapM_ (`TM.delete` recipients)
+        pure $ Right ()
+      _ -> pure $ Left AUTH
+
+  blockFile st senderId info _deleted = atomically $
+    withSTMFile st senderId $ \FileRec {fileStatus} -> do
+      writeTVar fileStatus $! EntityBlocked info
+      pure $ Right ()
+
+  deleteRecipient STMFileStore {recipients} rId FileRec {recipientIds} = atomically $ do
+    TM.delete rId recipients
+    modifyTVar' recipientIds $ S.delete rId
+
+  ackFile st@STMFileStore {recipients} recipientId = atomically $ do
+    TM.lookupDelete recipientId recipients >>= \case
+      Just (sId, _) ->
+        withSTMFile st sId $ \FileRec {recipientIds} -> do
+          modifyTVar' recipientIds $ S.delete recipientId
+          pure $ Right ()
+      _ -> pure $ Left AUTH
+
+  expiredFiles STMFileStore {files} old _limit = do
+    fs <- readTVarIO files
+    fmap catMaybes . forM (M.toList fs) $ \(sId, FileRec {fileInfo = FileInfo {size}, filePath, createdAt = RoundedSystemTime createdAt}) ->
+      if createdAt + fileTimePrecision < old
+        then do
+          path <- readTVarIO filePath
+          pure $ Just (sId, path, size)
+        else pure Nothing
+
+  getUsedStorage STMFileStore {files} =
+    M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0 <$> readTVarIO files
+
+  getFileCount STMFileStore {files} = M.size <$> readTVarIO files
+
+-- Internal STM helpers
+
+newFileRec :: SenderId -> FileInfo -> RoundedFileTime -> ServerEntityStatus -> STM FileRec
+newFileRec senderId fileInfo createdAt status = do
+  recipientIds <- newTVar S.empty
+  filePath <- newTVar Nothing
+  fileStatus <- newTVar status
+  pure FileRec {senderId, fileInfo, filePath, recipientIds, createdAt, fileStatus}
+
+withSTMFile :: STMFileStore -> SenderId -> (FileRec -> STM (Either XFTPErrorType a)) -> STM (Either XFTPErrorType a)
+withSTMFile STMFileStore {files} sId a =
+  TM.lookup sId files >>= \case
+    Just f -> a f
+    _ -> pure $ Left AUTH
diff --git a/src/Simplex/FileTransfer/Server/StoreLog.hs b/src/Simplex/FileTransfer/Server/StoreLog.hs
index c82beda29b..a6747257b2 100644
--- a/src/Simplex/FileTransfer/Server/StoreLog.hs
+++ b/src/Simplex/FileTransfer/Server/StoreLog.hs
@@ -10,6 +10,7 @@ module Simplex.FileTransfer.Server.StoreLog
     FileStoreLogRecord (..),
     closeStoreLog,
     readWriteFileStore,
+    writeFileStore,
     logAddFile,
     logPutFile,
     logAddRecipients,
@@ -32,6 +33,7 @@ import Data.Map.Strict (Map)
 import qualified Data.Map.Strict as M
 import Simplex.FileTransfer.Protocol (FileInfo (..))
 import Simplex.FileTransfer.Server.Store
+import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))
 import Simplex.Messaging.Encoding.String
 import Simplex.Messaging.Protocol (BlockingInfo, RcvPublicAuthKey, RecipientId, SenderId)
 import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
@@ -87,16 +89,16 @@ logBlockFile s fId = logFileStoreRecord s . BlockFile fId
 logAckFile :: StoreLog 'WriteMode -> RecipientId -> IO ()
 logAckFile s = logFileStoreRecord s . AckFile
 
-readWriteFileStore :: FilePath -> FileStore -> IO (StoreLog 'WriteMode)
+readWriteFileStore :: FilePath -> STMFileStore -> IO (StoreLog 'WriteMode)
 readWriteFileStore = readWriteStoreLog readFileStore writeFileStore
 
-readFileStore :: FilePath -> FileStore -> IO ()
+readFileStore :: FilePath -> STMFileStore -> IO ()
 readFileStore f st = mapM_ (addFileLogRecord . LB.toStrict) . LB.lines =<< LB.readFile f
   where
     addFileLogRecord s = case strDecode s of
       Left e -> B.putStrLn $ "Log parsing error (" <> B.pack e <> "): " <> B.take 100 s
       Right lr ->
-        atomically (addToStore lr) >>= \case
+        addToStore lr >>= \case
          Left e -> B.putStrLn $ "Log processing error (" <> bshow e <> "): " <> B.take 100 s
          _ -> pure ()
    addToStore = \case
@@ -108,8 +110,8 @@ readFileStore f st = mapM_ (addFileLogRecord . LB.toStrict) . LB.lines =<< LB.re
       AckFile rId -> ackFile st rId
     addRecipients sId rcps = mapM_ (ExceptT . addRecipient st sId) rcps
 
-writeFileStore :: StoreLog 'WriteMode -> FileStore -> IO ()
-writeFileStore s FileStore {files, recipients} = do
+writeFileStore :: StoreLog 'WriteMode -> STMFileStore -> IO ()
+writeFileStore s STMFileStore {files, recipients} = do
   allRcps <- readTVarIO recipients
   readTVarIO files >>= mapM_ (logFile allRcps)
   where
diff --git a/tests/CoreTests/XFTPStoreTests.hs b/tests/CoreTests/XFTPStoreTests.hs
new file mode 100644
index 0000000000..91e3959767
--- /dev/null
+++ b/tests/CoreTests/XFTPStoreTests.hs
@@ -0,0 +1,284 @@
+{-# LANGUAGE DataKinds #-}
+{-# LANGUAGE NamedFieldPuns #-}
+{-# LANGUAGE OverloadedStrings #-}
+{-# LANGUAGE ScopedTypeVariables #-}
+
+module CoreTests.XFTPStoreTests (xftpStoreTests, xftpMigrationTests) where
+
+import Control.Monad
+import qualified Data.ByteString.Char8 as B
+import Data.Word (Word32)
+import qualified Data.Set as S
+import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..))
+import Simplex.FileTransfer.Server.Store
+import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore)
+import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg)
+import Simplex.FileTransfer.Server.Store.STM (STMFileStore (..))
+import Simplex.FileTransfer.Server.StoreLog
+import Simplex.FileTransfer.Transport (XFTPErrorType (..))
+import qualified Simplex.Messaging.Crypto as C
+import Simplex.Messaging.Protocol (BlockingInfo (..), BlockingReason (..), EntityId (..))
+import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
+import Simplex.Messaging.SystemTime (RoundedSystemTime (..))
+import Simplex.FileTransfer.Server.Store.Postgres (importFileStore, exportFileStore)
+import Simplex.FileTransfer.Server.StoreLog (readWriteFileStore, writeFileStore)
+import Simplex.Messaging.Server.StoreLog (openWriteStoreLog)
+import System.Directory (doesFileExist, removeFile)
+import Test.Hspec hiding (fit, it)
+import UnliftIO.STM
+import Util
+import XFTPClient (testXFTPPostgresCfg)
+
+xftpStoreTests :: Spec
+xftpStoreTests = describe "PostgresFileStore operations" $ do
+  it "should add and get file by sender" testAddGetFileSender
+  it "should add and get file by recipient" testAddGetFileRecipient
+  it "should reject duplicate file" testDuplicateFile
+  it "should return AUTH for nonexistent file" testGetNonexistent
+  it "should set file path with IS NULL guard" testSetFilePath
+  it "should reject duplicate recipient" testDuplicateRecipient
+  it "should delete file and cascade recipients" testDeleteFileCascade
+  it "should block file and update status" testBlockFile
+  it "should ack file reception" testAckFile
+  it "should return expired files with limit" testExpiredFiles
+  it "should compute used storage and file count" testStorageAndCount
+
+xftpMigrationTests :: Spec
+xftpMigrationTests = describe "XFTP migration round-trip" $ do
+  it "should export to StoreLog and import back to Postgres preserving data" testMigrationRoundTrip
+
+-- Test helpers
+
+withPgStore :: (PostgresFileStore -> IO ()) -> IO ()
+withPgStore test = do
+  st <- newFileStore testXFTPPostgresCfg :: IO PostgresFileStore
+  test st
+  closeFileStore st
+
+testSenderId :: EntityId
+testSenderId = EntityId "sender001_______"
+
+testRecipientId :: EntityId
+testRecipientId = EntityId "recipient001____"
+
+testRecipientId2 :: EntityId
+testRecipientId2 = EntityId "recipient002____"
+
+testFileInfo :: C.APublicAuthKey -> FileInfo
+testFileInfo sndKey =
+  FileInfo
+    { sndKey,
+      size = 128000 :: Word32,
+      digest = "test_digest_bytes_here___"
+    }
+
+testCreatedAt :: RoundedFileTime
+testCreatedAt = RoundedSystemTime 1000000
+
+-- Tests
+
+testAddGetFileSender :: Expectation
+testAddGetFileSender = withPgStore $ \st -> do
+  g <- C.newRandom
+  (sk, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
+  let fileInfo = testFileInfo sk
+  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
+  result <- getFile st SFSender testSenderId
+  case result of
+    Right (FileRec {senderId, fileInfo = fi, createdAt}, key) -> do
+      senderId `shouldBe` testSenderId
+      sndKey fi `shouldBe` sk
+      size fi `shouldBe` 128000
+      createdAt `shouldBe` testCreatedAt
+      key `shouldBe` sk
+    Left e -> expectationFailure $ "getFile failed: " <> show e
+
+testAddGetFileRecipient :: Expectation
+testAddGetFileRecipient = withPgStore $ \st -> do
+  g <- C.newRandom
+  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
+  (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
+  let fileInfo = testFileInfo sndKey
+  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
+  addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right ()
+  result <- getFile st SFRecipient testRecipientId
+  case result of
+    Right (FileRec {senderId}, key) -> do
+      senderId `shouldBe` testSenderId
+      key `shouldBe` rcpKey
+    Left e -> expectationFailure $ "getFile failed: " <> show e
+
+testDuplicateFile :: Expectation
+testDuplicateFile = withPgStore $ \st -> do
+  g <- C.newRandom
+  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
+  let fileInfo = testFileInfo sndKey
+  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
+  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Left DUPLICATE_
+
+testGetNonexistent :: Expectation
+testGetNonexistent = withPgStore $ \st -> do
+  getFile st SFSender testSenderId >>= (`shouldBe` Left AUTH) . fmap (const ())
+  getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) .
fmap (const ()) + +testSetFilePath :: Expectation +testSetFilePath = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + setFilePath st testSenderId "/tmp/test_file" `shouldReturn` Right () + -- Second setFilePath should fail (file_path IS NULL guard) + setFilePath st testSenderId "/tmp/other_file" `shouldReturn` Left AUTH + -- Verify path was set + result <- getFile st SFSender testSenderId + case result of + Right (FileRec {filePath}, _) -> readTVarIO filePath `shouldReturn` Just "/tmp/test_file" + Left e -> expectationFailure $ "getFile failed: " <> show e + +testDuplicateRecipient :: Expectation +testDuplicateRecipient = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Left DUPLICATE_ + +testDeleteFileCascade :: Expectation +testDeleteFileCascade = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right () + deleteFile st testSenderId `shouldReturn` Right () + -- File and recipient should both be gone + getFile st SFSender testSenderId >>= (`shouldBe` Left AUTH) . 
fmap (const ()) + getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . fmap (const ()) + +testBlockFile :: Expectation +testBlockFile = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + let blockInfo = BlockingInfo {reason = BRContent, notice = Nothing} + blockFile st testSenderId blockInfo False `shouldReturn` Right () + result <- getFile st SFSender testSenderId + case result of + Right (FileRec {fileStatus}, _) -> readTVarIO fileStatus `shouldReturn` EntityBlocked blockInfo + Left e -> expectationFailure $ "getFile failed: " <> show e + +testAckFile :: Expectation +testAckFile = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right () + ackFile st testRecipientId `shouldReturn` Right () + -- Recipient gone, but file still exists + getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . 
fmap (const ()) + result <- getFile st SFSender testSenderId + case result of + Right _ -> pure () + Left e -> expectationFailure $ "getFile failed: " <> show e + +testExpiredFiles :: Expectation +testExpiredFiles = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo = testFileInfo sndKey + oldTime = RoundedSystemTime 100000 + newTime = RoundedSystemTime 999999999 + -- Add old and new files + addFile st (EntityId "old_file________") fileInfo oldTime EntityActive `shouldReturn` Right () + void $ setFilePath st (EntityId "old_file________") "/tmp/old" + addFile st (EntityId "new_file________") fileInfo newTime EntityActive `shouldReturn` Right () + -- Query expired with cutoff that only catches old file + expired <- expiredFiles st 500000 100 + length expired `shouldBe` 1 + case expired of + [(sId, path, sz)] -> do + sId `shouldBe` EntityId "old_file________" + path `shouldBe` Just "/tmp/old" + sz `shouldBe` 128000 + _ -> expectationFailure "expected 1 expired file" + +testStorageAndCount :: Expectation +testStorageAndCount = withPgStore $ \st -> do + g <- C.newRandom + (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + getUsedStorage st `shouldReturn` 0 + getFileCount st `shouldReturn` 0 + let fileInfo = testFileInfo sndKey + addFile st (EntityId "file_a__________") fileInfo testCreatedAt EntityActive `shouldReturn` Right () + addFile st (EntityId "file_b__________") fileInfo testCreatedAt EntityActive `shouldReturn` Right () + getFileCount st `shouldReturn` 2 + used <- getUsedStorage st + used `shouldBe` 256000 -- 128000 * 2 + +-- Migration round-trip test + +testMigrationRoundTrip :: Expectation +testMigrationRoundTrip = do + let storeLogPath = "tests/tmp/xftp-migration-test.log" + storeLogPath2 = "tests/tmp/xftp-migration-test2.log" + -- 1. 
Create STM store with test data + stmStore <- newFileStore () :: IO STMFileStore + g <- C.newRandom + (sndKey1, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcpKey1, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (sndKey2, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + let fileInfo1 = testFileInfo sndKey1 + fileInfo2 = FileInfo {sndKey = sndKey2, size = 64000, digest = "other_digest____________"} + sId1 = EntityId "migration_file_1" + sId2 = EntityId "migration_file_2" + rId1 = EntityId "migration_rcp_1_" + addFile stmStore sId1 fileInfo1 testCreatedAt EntityActive `shouldReturn` Right () + void $ setFilePath stmStore sId1 "/tmp/file1" + addRecipient stmStore sId1 (FileRecipient rId1 rcpKey1) `shouldReturn` Right () + let testBlockInfo = BlockingInfo {reason = BRSpam, notice = Nothing} + addFile stmStore sId2 fileInfo2 testCreatedAt (EntityBlocked testBlockInfo) `shouldReturn` Right () + -- 2. Write to StoreLog + sl <- openWriteStoreLog False storeLogPath + writeFileStore sl stmStore + closeStoreLog sl + -- 3. Import StoreLog to Postgres + importFileStore storeLogPath testXFTPPostgresCfg + -- StoreLog should be renamed to .bak + doesFileExist storeLogPath `shouldReturn` False + doesFileExist (storeLogPath <> ".bak") `shouldReturn` True + -- 4. Export from Postgres back to StoreLog + exportFileStore storeLogPath2 testXFTPPostgresCfg + -- 5. 
Read exported StoreLog into a new STM store and verify + stmStore2 <- newFileStore () :: IO STMFileStore + sl2 <- readWriteFileStore storeLogPath2 stmStore2 + closeStoreLog sl2 + -- Verify file 1 + result1 <- getFile stmStore2 SFSender sId1 + case result1 of + Right (FileRec {fileInfo = fi, filePath, fileStatus}, _) -> do + size fi `shouldBe` 128000 + readTVarIO filePath `shouldReturn` Just "/tmp/file1" + readTVarIO fileStatus `shouldReturn` EntityActive + Left e -> expectationFailure $ "getFile sId1 failed: " <> show e + -- Verify recipient + result1r <- getFile stmStore2 SFRecipient rId1 + case result1r of + Right (_, key) -> key `shouldBe` rcpKey1 + Left e -> expectationFailure $ "getFile rId1 failed: " <> show e + -- Verify file 2 (blocked) + result2 <- getFile stmStore2 SFSender sId2 + case result2 of + Right (FileRec {fileInfo = fi, fileStatus}, _) -> do + size fi `shouldBe` 64000 + readTVarIO fileStatus `shouldReturn` EntityBlocked (BlockingInfo {reason = BRSpam, notice = Nothing}) + Left e -> expectationFailure $ "getFile sId2 failed: " <> show e + -- Cleanup + removeFile (storeLogPath <> ".bak") + removeFile storeLogPath2 diff --git a/tests/Test.hs b/tests/Test.hs index 63f97d8070..8303215610 100644 --- a/tests/Test.hs +++ b/tests/Test.hs @@ -46,12 +46,15 @@ import AgentTests.SchemaDump (schemaDumpTest) #endif #if defined(dbServerPostgres) +import CoreTests.XFTPStoreTests (xftpStoreTests, xftpMigrationTests) import NtfServerTests (ntfServerTests) import NtfClient (ntfTestServerDBConnectInfo, ntfTestStoreDBOpts) import PostgresSchemaDump (postgresSchemaDumpTest) import SMPClient (testServerDBConnectInfo, testStoreDBOpts) import Simplex.Messaging.Notifications.Server.Store.Migrations (ntfServerMigrations) import Simplex.Messaging.Server.QueueStore.Postgres.Migrations (serverMigrations) +import XFTPClient (testXFTPDBConnectInfo) +import XFTPServerTests (xftpServerTestsPg) #endif #if defined(dbPostgres) || defined(dbServerPostgres) @@ -152,6 +155,12 @@ main = 
do describe "XFTP file description" fileDescriptionTests describe "XFTP CLI" xftpCLITests describe "XFTP agent" xftpAgentTests +#if defined(dbServerPostgres) + around_ (postgressBracket testXFTPDBConnectInfo) $ do + describe "XFTP Postgres store operations" xftpStoreTests + describe "XFTP migration round-trip" xftpMigrationTests + describe "XFTP server (PostgreSQL backend)" xftpServerTestsPg +#endif #if defined(dbPostgres) describe "XFTP Web Client" $ xftpWebTests (dropAllSchemasExceptSystem testDBConnectInfo) #else diff --git a/tests/XFTPClient.hs b/tests/XFTPClient.hs index 85a1d21b84..a9707af633 100644 --- a/tests/XFTPClient.hs +++ b/tests/XFTPClient.hs @@ -1,3 +1,4 @@ +{-# LANGUAGE CPP #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} @@ -14,12 +15,19 @@ import SMPClient (serverBracket) import Simplex.FileTransfer.Client import Simplex.FileTransfer.Description import Simplex.FileTransfer.Server (runXFTPServerBlocking) -import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defaultFileExpiration, defaultInactiveClientExpiration) +import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), XFTPStoreConfig (..), defaultFileExpiration, defaultInactiveClientExpiration) +import Simplex.FileTransfer.Server.Store (FileStoreClass) import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange) import Simplex.Messaging.Protocol (XFTPServer) import Simplex.Messaging.Transport.HTTP2 (httpALPN) import Simplex.Messaging.Transport.Server import Test.Hspec hiding (fit, it) +#if defined(dbServerPostgres) +import Database.PostgreSQL.Simple (ConnectInfo (..), defaultConnectInfo) +import Simplex.FileTransfer.Server.Store.Postgres.Config (PostgresFileStoreCfg (..), defaultXFTPDBOpts) +import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts (..)) +import Simplex.Messaging.Agent.Store.Shared (MigrationConfirmation (..)) +#endif xftpTest :: HasCallStack => (HasCallStack => XFTPClient -> IO 
()) -> Expectation xftpTest test = runXFTPTest test `shouldReturn` () @@ -58,7 +66,7 @@ withXFTPServerCfgNoALPN cfg = withXFTPServerCfg cfg {transportConfig = (transpor withXFTPServerCfg :: HasCallStack => XFTPServerConfig -> (HasCallStack => ThreadId -> IO a) -> IO a withXFTPServerCfg cfg = serverBracket - (\started -> runXFTPServerBlocking started cfg) + (\started -> runXFTPServerBlocking started (XSCMemory $ storeLogFile cfg) cfg) (threadDelay 10000) withXFTPServerThreadOn :: HasCallStack => (HasCallStack => ThreadId -> IO a) -> IO a @@ -192,3 +200,62 @@ testXFTPServerConfigEd25519SNI = { addCORSHeaders = True } } + +-- Store-parameterized server bracket + +withXFTPServerCfgStore :: (HasCallStack, FileStoreClass s) => XFTPStoreConfig s -> XFTPServerConfig -> (HasCallStack => ThreadId -> IO a) -> IO a +withXFTPServerCfgStore storeCfg cfg = + serverBracket + (\started -> runXFTPServerBlocking started storeCfg cfg) + (threadDelay 10000) + +withXFTPServerStore :: (HasCallStack, FileStoreClass s) => XFTPStoreConfig s -> IO a -> IO a +withXFTPServerStore storeCfg = withXFTPServerCfgStore storeCfg testXFTPServerConfig . 
const + +#if defined(dbServerPostgres) +testXFTPDBConnectInfo :: ConnectInfo +testXFTPDBConnectInfo = + defaultConnectInfo + { connectUser = "test_xftp_server_user", + connectDatabase = "test_xftp_server_db" + } + +testXFTPStoreDBOpts :: DBOpts +testXFTPStoreDBOpts = + defaultXFTPDBOpts + { connstr = "postgresql://test_xftp_server_user@/test_xftp_server_db", + schema = "xftp_server_test", + poolSize = 10, + createSchema = True + } + +testXFTPPostgresCfg :: PostgresFileStoreCfg +testXFTPPostgresCfg = + PostgresFileStoreCfg + { dbOpts = testXFTPStoreDBOpts, + dbStoreLogPath = Nothing, + confirmMigrations = MCYesUp + } + +withXFTPServerPg :: HasCallStack => IO a -> IO a +withXFTPServerPg = withXFTPServerStore (XSCDatabase testXFTPPostgresCfg) + +xftpTestPg :: HasCallStack => (HasCallStack => XFTPClient -> IO ()) -> Expectation +xftpTestPg test = runXFTPTestPg test `shouldReturn` () + +runXFTPTestPg :: HasCallStack => (HasCallStack => XFTPClient -> IO a) -> IO a +runXFTPTestPg test = withXFTPServerPg $ testXFTPClient test + +xftpTestPg2 :: HasCallStack => (HasCallStack => XFTPClient -> XFTPClient -> IO ()) -> Expectation +xftpTestPg2 test = xftpTestPgN 2 _test + where + _test [h1, h2] = test h1 h2 + _test _ = error "expected 2 handles" + +xftpTestPgN :: forall a. 
HasCallStack => Int -> (HasCallStack => [XFTPClient] -> IO a) -> IO a +xftpTestPgN nClients test = withXFTPServerPg $ run nClients [] + where + run :: Int -> [XFTPClient] -> IO a + run 0 hs = test hs + run n hs = testXFTPClient $ \h -> run (n - 1) (h : hs) +#endif diff --git a/tests/XFTPServerTests.hs b/tests/XFTPServerTests.hs index 0af3d7ecaa..3d58b3bc21 100644 --- a/tests/XFTPServerTests.hs +++ b/tests/XFTPServerTests.hs @@ -1,3 +1,4 @@ +{-# LANGUAGE CPP #-} {-# LANGUAGE DataKinds #-} {-# LANGUAGE DuplicateRecordFields #-} {-# LANGUAGE NamedFieldPuns #-} @@ -6,7 +7,11 @@ {-# LANGUAGE PatternSynonyms #-} {-# LANGUAGE ScopedTypeVariables #-} -module XFTPServerTests where +module XFTPServerTests (xftpServerTests +#if defined(dbServerPostgres) + , xftpServerTestsPg +#endif + ) where import AgentTests.FunctionalAPITests (runRight_) import Control.Concurrent (threadDelay) @@ -51,6 +56,10 @@ import Test.Hspec hiding (fit, it) import UnliftIO.STM import Util import XFTPClient +#if defined(dbServerPostgres) +import Simplex.FileTransfer.Server.Env (XFTPStoreConfig (..)) +import XFTPClient (testXFTPPostgresCfg, withXFTPServerCfgStore, xftpTestPg, xftpTestPg2, xftpTestPgN) +#endif xftpServerTests :: Spec xftpServerTests = @@ -598,3 +607,110 @@ testStaleWebSession = decoded <- either (error . show) pure $ C.unPad respBody decoded `shouldBe` smpEncode SESSION +#if defined(dbServerPostgres) +xftpServerTestsPg :: Spec +xftpServerTestsPg = + before_ (createDirectoryIfMissing False xftpServerFiles) . 
after_ (removeDirectoryRecursive xftpServerFiles) $ do + describe "XFTP file chunk delivery (PostgreSQL)" $ do + it "should create, upload and receive file chunk (1 client)" testFileChunkDeliveryPg + it "should create, upload and receive file chunk (2 clients)" testFileChunkDelivery2Pg + it "should create, add recipients, upload and receive file chunk" testFileChunkDeliveryAddRecipientsPg + it "should delete file chunk (1 client)" testFileChunkDeletePg + it "should delete file chunk (2 clients)" testFileChunkDelete2Pg + it "should acknowledge file chunk reception (1 client)" testFileChunkAckPg + it "should acknowledge file chunk reception (2 clients)" testFileChunkAck2Pg + it "should not allow uploading chunks after specified storage quota" testFileStorageQuotaPg + it "should expire chunks after set interval" testFileChunkExpirationPg + +testFileChunkDeliveryPg :: Expectation +testFileChunkDeliveryPg = xftpTestPg $ \c -> runRight_ $ runTestFileChunkDelivery c c + +testFileChunkDelivery2Pg :: Expectation +testFileChunkDelivery2Pg = xftpTestPg2 $ \s r -> runRight_ $ runTestFileChunkDelivery s r + +testFileChunkDeliveryAddRecipientsPg :: Expectation +testFileChunkDeliveryAddRecipientsPg = xftpTestPgN 4 $ \hs -> case hs of + [s, r1, r2, r3] -> runRight_ $ do + g <- liftIO C.newRandom + (sndKey, spKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey1, rpKey1) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey2, rpKey2) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey3, rpKey3) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + bytes <- liftIO $ createTestChunk testChunkPath + digest <- liftIO $ LC.sha256Hash <$> LB.readFile testChunkPath + let file = FileInfo {sndKey, size = chSize, digest} + chunkSpec = XFTPChunkSpec {filePath = testChunkPath, chunkOffset = 0, chunkSize = chSize} + (sId, [rId1]) <- createXFTPChunk s spKey file [rcvKey1] Nothing + [rId2, rId3] <- addXFTPRecipients s spKey sId [rcvKey2, rcvKey3] + uploadXFTPChunk s 
spKey sId chunkSpec + let testReceiveChunk r rpKey rId fPath = do + downloadXFTPChunk g r rpKey rId $ XFTPRcvChunkSpec fPath chSize digest + liftIO $ B.readFile fPath `shouldReturn` bytes + testReceiveChunk r1 rpKey1 rId1 "tests/tmp/received_chunk1" + testReceiveChunk r2 rpKey2 rId2 "tests/tmp/received_chunk2" + testReceiveChunk r3 rpKey3 rId3 "tests/tmp/received_chunk3" + _ -> error "expected 4 handles" + +testFileChunkDeletePg :: Expectation +testFileChunkDeletePg = xftpTestPg $ \c -> runRight_ $ runTestFileChunkDelete c c + +testFileChunkDelete2Pg :: Expectation +testFileChunkDelete2Pg = xftpTestPg2 $ \s r -> runRight_ $ runTestFileChunkDelete s r + +testFileChunkAckPg :: Expectation +testFileChunkAckPg = xftpTestPg $ \c -> runRight_ $ runTestFileChunkAck c c + +testFileChunkAck2Pg :: Expectation +testFileChunkAck2Pg = xftpTestPg2 $ \s r -> runRight_ $ runTestFileChunkAck s r + +testFileStorageQuotaPg :: Expectation +testFileStorageQuotaPg = do + let cfg = testXFTPServerConfig {fileSizeQuota = Just $ chSize * 2} + withXFTPServerCfgStore (XSCDatabase testXFTPPostgresCfg) cfg $ \_ -> + testXFTPClient $ \c -> runRight_ $ do + g <- liftIO C.newRandom + (sndKey, spKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey, rpKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + bytes <- liftIO $ createTestChunk testChunkPath + digest <- liftIO $ LC.sha256Hash <$> LB.readFile testChunkPath + let file = FileInfo {sndKey, size = chSize, digest} + chunkSpec = XFTPChunkSpec {filePath = testChunkPath, chunkOffset = 0, chunkSize = chSize} + download rId = do + downloadXFTPChunk g c rpKey rId $ XFTPRcvChunkSpec "tests/tmp/received_chunk1" chSize digest + liftIO $ B.readFile "tests/tmp/received_chunk1" `shouldReturn` bytes + (sId1, [rId1]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId1 chunkSpec + download rId1 + (sId2, [rId2]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId2 chunkSpec + download rId2 
+ (sId3, [_rId3]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId3 chunkSpec + `catchError` (liftIO . (`shouldBe` PCEProtocolError QUOTA)) + deleteXFTPChunk c spKey sId1 + uploadXFTPChunk c spKey sId3 chunkSpec + +testFileChunkExpirationPg :: Expectation +testFileChunkExpirationPg = + withXFTPServerCfgStore (XSCDatabase testXFTPPostgresCfg) testXFTPServerConfig {fileExpiration} $ \_ -> + testXFTPClient $ \c -> runRight_ $ do + g <- liftIO C.newRandom + (sndKey, spKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + (rcvKey, rpKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g + bytes <- liftIO $ createTestChunk testChunkPath + digest <- liftIO $ LC.sha256Hash <$> LB.readFile testChunkPath + let file = FileInfo {sndKey, size = chSize, digest} + chunkSpec = XFTPChunkSpec {filePath = testChunkPath, chunkOffset = 0, chunkSize = chSize} + (sId, [rId]) <- createXFTPChunk c spKey file [rcvKey] Nothing + uploadXFTPChunk c spKey sId chunkSpec + downloadXFTPChunk g c rpKey rId $ XFTPRcvChunkSpec "tests/tmp/received_chunk1" chSize digest + liftIO $ B.readFile "tests/tmp/received_chunk1" `shouldReturn` bytes + liftIO $ threadDelay 1000000 + downloadXFTPChunk g c rpKey rId (XFTPRcvChunkSpec "tests/tmp/received_chunk2" chSize digest) + `catchError` (liftIO . (`shouldBe` PCEProtocolError AUTH)) + deleteXFTPChunk c spKey sId + `catchError` (liftIO . (`shouldBe` PCEProtocolError AUTH)) + where + fileExpiration = Just ExpirationConfig {ttl = 1, checkInterval = 1} +#endif +
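The tests above exercise `FileStoreClass` methods as self-contained IO actions, each responsible for its own atomicity. A minimal standalone sketch of that dispatch pattern, using a toy in-memory store (the `ToyFileStore`, `ToyStore`, and `ToyErr` names are illustrative stand-ins, not the real simplexmq types; an `IORef`-held `Map` stands in for the STM/Postgres backends):

```haskell
import Data.IORef
import qualified Data.Map.Strict as M

-- Hypothetical stand-in for SenderId
type SenderId = String

-- Toy error type mirroring the DUPLICATE_/AUTH results checked in the tests
data ToyErr = DUPLICATE_ | AUTH deriving (Eq, Show)

-- One-IO-action-per-method class, mirroring the FileStoreClass shape:
-- each method is individually atomic, no shared transaction monad.
class ToyFileStore s where
  addFile' :: s -> SenderId -> Int -> IO (Either ToyErr ())
  deleteFile' :: s -> SenderId -> IO (Either ToyErr ())
  usedStorage :: s -> IO Int

newtype ToyStore = ToyStore (IORef (M.Map SenderId Int))

instance ToyFileStore ToyStore where
  addFile' (ToyStore ref) sId sz =
    atomicModifyIORef' ref $ \m ->
      if M.member sId m
        then (m, Left DUPLICATE_)           -- duplicate sender id rejected
        else (M.insert sId sz m, Right ())
  deleteFile' (ToyStore ref) sId =
    atomicModifyIORef' ref $ \m ->
      if M.member sId m
        then (M.delete sId m, Right ())
        else (m, Left AUTH)                 -- unknown file reported as AUTH
  usedStorage (ToyStore ref) = sum . M.elems <$> readIORef ref

main :: IO ()
main = do
  st <- ToyStore <$> newIORef M.empty
  r1 <- addFile' st "sender001" 128000
  r2 <- addFile' st "sender001" 128000     -- duplicate add must fail
  used <- usedStorage st
  print (r1, r2, used)                     -- (Right (),Left DUPLICATE_,128000)
```

The real backends substitute `atomically` over `TVar`s (STM) or a `withDB` transaction (Postgres) for the `atomicModifyIORef'` here; callers see the same `IO (Either e ())` surface either way, which is what lets the server and these tests stay polymorphic in the store.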