rocksdbffm is an experimental Java wrapper for RocksDB using the Foreign Function & Memory (FFM) API (Project Panama).
The project aims to provide a more maintainable alternative to the traditional JNI-based rocksdbjni.
The target is JDK 25+ because of java.lang.foreign.
The native library is built from the RocksDB source via zig cc / zig c++ as a drop-in C/C++ compiler (PORTABLE=1 make shared_lib). Zig bundles clang and libc++ for every target, enabling hermetic cross-compilation without a separate sysroot or system toolchain.
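A typical invocation looks like the following; the submodule path and flags are illustrative, not the project's exact build script:

```shell
# Build the RocksDB shared library with Zig standing in as the C/C++ compiler.
# PORTABLE=1 avoids -march=native, so the binary runs on any CPU of the target arch.
CC="zig cc" CXX="zig c++" PORTABLE=1 make -C rocksdb shared_lib
```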
See in particular the post Expanding RocksDB’s Java FFI and the Rocksjava-presentation slides.
There is often a significant delay between new features appearing in the RocksDB C++ core and their availability in the Java JNI wrappers. This is largely due to the complexity of maintaining C++ glue code. By using FFM, we can map C headers directly in Java, simplifying the process of supporting new C++ features.
The code is mechanically generated, and it can be inspected easily (as it is normal Java code).
FFM is also much safer than JNI: memory errors surface as Java exceptions instead of crashing the whole JVM.
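The binding mechanism is plain JDK code. Below is a minimal sketch of an FFM downcall, shown against libc's `strlen` as a stand-in for a `rocksdb_*` symbol (the real bindings in this project are generated, and load the RocksDB shared library instead of the default lookup):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class FfmDowncallDemo {
    private static final Linker LINKER = Linker.nativeLinker();

    // size_t strlen(const char *s) — JAVA_LONG models size_t on 64-bit platforms.
    private static final MethodHandle STRLEN = LINKER.downcallHandle(
            LINKER.defaultLookup().find("strlen").orElseThrow(),
            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

    static long strlen(String s) {
        // A confined arena frees the native copy deterministically on close.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom(s); // NUL-terminated native copy
            return (long) STRLEN.invokeExact(cString);
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(strlen("rocksdb")); // 7
    }
}
```

Swapping `defaultLookup()` for a `SymbolLookup.libraryLookup` over librocksdb is all it takes to bind a C API function; no C++ glue code is compiled.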
Exposing MemorySegment methods:

- Pinnable Slices: utilizes rocksdb_get_pinned for zero-copy reads.
- MemorySegment & ByteBuffer: support for java.lang.foreign.MemorySegment and direct ByteBuffer for data transfer between Java and native code.
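The three representations interoperate in plain JDK code; a small illustration of the conversions involved (not rocksdbffm API):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class TierInteropDemo {
    // Moves a key through all three tiers: byte[] -> native MemorySegment
    // -> direct ByteBuffer view -> byte[].
    public static byte[] roundTrip(byte[] key) {
        try (Arena arena = Arena.ofConfined()) {
            // byte[] tier -> native memory (one copy)
            MemorySegment seg = arena.allocate(key.length);
            MemorySegment.copy(key, 0, seg, ValueLayout.JAVA_BYTE, 0, key.length);

            // The same native memory viewed as a direct ByteBuffer (zero copy)
            ByteBuffer direct = seg.asByteBuffer();

            // ...and back out to the heap tier
            byte[] out = new byte[direct.remaining()];
            direct.get(out);
            return out;
        }
    }

    public static void main(String[] args) {
        byte[] key = "user:42".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(roundTrip(key), StandardCharsets.UTF_8));
    }
}
```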
Benchmarks performed on JDK 25 (Apple M-series), RocksDB v10.10.1. Each tier uses the same single pre-seeded key so the numbers reflect pure call overhead, not cache miss variance.
| Operation | API tier | FFM (ops/s) | JNI (ops/s) | Gain |
|---|---|---|---|---|
| Reads | byte[] | 7,196,554 | 3,619,125 | +99% |
| Reads | DirectByteBuffer | 8,077,135 | 3,656,113 | +121% |
| Reads | MemorySegment | 8,149,510 | — | — |
| Writes | byte[] | 671,213 | 608,496 | +10% |
| Writes | DirectByteBuffer | 694,166 | 590,923 | +17% |
| Writes | MemorySegment | 686,889 | — | — |
| Batch writes (100 ops) | byte[] | 23,936 | 16,813 | +42% |
Both libraries use PinnableSlice for reads. Read gains (~2×) come from the absence of JNI frame setup and
thread-state transitions — FFM downcall stubs are JIT-compiled directly. MemorySegment is the fastest read tier
because segments backed by a confined arena carry no GC scope-check overhead on the hot path. Write gains are smaller
because WAL/memtable I/O dominates. Batch write gains multiply because per-call overhead is paid 100× per iteration.
```shell
./scripts/benchmark.sh
```

Builds everything, runs both FFM and JNI suites, and prints a side-by-side comparison table.
This project is currently experimental. The table below tracks parity with rocksdbjni.
| Feature | Status | Notes |
|---|---|---|
| DB Open/Create | ✅ | Options, CreateIfMissing, ReadOnly |
| Put/Get/Delete | ✅ | byte[], ByteBuffer, MemorySegment; zero-copy via PinnableSlice |
| WriteBatch | ✅ | Atomic multi-op writes |
| Transactions (pessimistic) | ✅ | TransactionDB, savepoints, get-for-update |
| Checkpoints | ✅ | Point-in-time on-disk snapshot |
| Table Options | ✅ | BlockBasedTableConfig, LRUCache, FilterPolicy (Bloom) |
| Iterators | ✅ | seekToFirst/Last, seek, seekForPrev, next/prev; all three access tiers |
| Snapshots | ✅ | Point-in-time consistent reads; ReadOptions.setSnapshot, sequence numbers |
| Flush | ✅ | flush(FlushOptions), flushWal(boolean sync); sync/async modes |
| DB Properties | ✅ | getProperty(DBProperty) → Optional<String>, getLongProperty(DBProperty) → OptionalLong |
| Statistics | ✅ | TickerType, HistogramType, StatsLevel |
| Compression | ✅ | CompressionType enum (NO/Snappy/zlib/bz2/LZ4/LZ4HC/Xpress/Zstd); Options.setCompression; CompressionType.getSupportedTypes() runtime probe |
| Column Families | ❌ | Key namespace isolation |
| MultiGet | ❌ | Bulk reads |
| DeleteRange | ✅ | Range tombstones; deleteRange on RocksDB and WriteBatch; all three access tiers |
| Compaction control | ✅ | compactRange (all three tiers + CompactOptions), suggestCompactRange, disableFileDeletions, enableFileDeletions |
| SST File Ingest | ✅ | SstFileWriter (put/delete/deleteRange/merge), RocksDB.ingestExternalFile; IngestExternalFileOptions |
| Backup Engine | ✅ | BackupEngine, BackupEngineOptions, RestoreOptions, BackupInfo, BackupId; incremental backup/restore; purge; verify |
| TTL DB | ✅ | openWithTtl(path, Duration); lazy expiry via compaction; full API available |
| Optimistic Transactions | ✅ | OptimisticTransactionDB; conflict detection at commit; OptimisticTransactionOptions |
| CompactionFilter | ❌ | Custom compaction logic |
| WAL Iterator | ✅ | WalIterator, WalBatchResult; getUpdatesSince(SequenceNumber), getLatestSequenceNumber; CDC/replication/auditing |
| Rate Limiter | ✅ | RateLimiter; writes-only, reads-only, all-IO modes; auto-tuned variant; Options.setRateLimiter |
| SST File Manager | ✅ | SstFileManager; disk-space limits, trash-deletion rate, compaction buffer; Env; Options.setSstFileManager, Options.setEnv |
| Secondary DB | ✅ | SecondaryDB; tryCatchUpWithPrimary, get, iterator, snapshot, properties |
| Blob DB | ✅ | BlobDB; blob options on Options; blob properties (BLOB_STATS, NUM_BLOB_FILES, …); PrepopulateBlobCache |
| Logger | ✅ | Logger + callback |
| Custom Comparators | ❌ | User-defined key ordering |
| Advanced column family | ❌ | |
| Advanced memtable config | ❌ | |
| Perf Context | ✅ | PerfContext, PerfLevel, PerfMetric; setPerfLevel, reset, metric, report |
| Persistent Cache | 🚫 | Not exposed in rocksdb/c.h — C++ only (NewPersistentCache); requires a custom C shim to bridge |
| Background Jobs | 🚧 | Tier 1: cancelAllBackgroundWork, disableManualCompaction, enableManualCompaction, waitForCompact(WaitForCompactOptions); Tier 3–5 (Options tuning, Env thread pools, FIFO/Universal options) pending |
These features are planned but not yet implemented:

- Merge / MergeOperator: merge on RocksDB, WriteBatch, and SstFileWriter; setUInt64AddMergeOperator on Options; custom MergeOperator via FFM upcall stubs.
Several deliberate decisions set this library apart from rocksdbjni.
Requires JDK 25+. The API uses java.lang.foreign (FFM), records, sealed types, and pattern matching where they reduce
boilerplate or improve safety. There is no legacy compatibility shim.
Every operation that can fail throws RocksDBException (an unchecked exception). rocksdbjni historically returned
null, -1, or relied on status objects that callers could silently ignore. Here a failure is always loud.
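The shape of such an unchecked failure type is straightforward; a hypothetical sketch (the actual RocksDBException in this project may carry different fields, e.g. a parsed status code):

```java
// Hypothetical sketch of an unchecked, always-loud failure type.
public class UncheckedStoreException extends RuntimeException {
    private final String status; // e.g. the raw C error string from a rocksdb_* call

    public UncheckedStoreException(String status) {
        super(status);
        this.status = status;
    }

    public String status() {
        return status;
    }
}
```

Because it extends RuntimeException, a caller cannot silently discard the failure the way a returned null or -1 can be ignored: the exception propagates unless it is explicitly caught.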
Raw numeric types carry no unit information and cannot be validated at construction time.
| Concept | rocksdbjni | rocksdbffm |
|---|---|---|
| Cache / buffer sizes | long (bytes, silently) | MemorySize.ofMB(64) |
| Snapshot position | long | SequenceNumber |
Both types are immutable, Comparable, and reject invalid values at construction — an illegal value cannot be created
and therefore cannot be passed anywhere.
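A record with a compact constructor is enough to get this guarantee. A minimal sketch; the real MemorySize in rocksdbffm may offer more factories and units:

```java
public record MemorySize(long bytes) implements Comparable<MemorySize> {
    public MemorySize {
        // Compact constructor: validation runs on every construction path,
        // so an invalid MemorySize can never exist.
        if (bytes < 0) {
            throw new IllegalArgumentException("size must be non-negative: " + bytes);
        }
    }

    public static MemorySize ofMB(long megabytes) {
        // multiplyExact turns silent overflow into a loud ArithmeticException.
        return new MemorySize(Math.multiplyExact(megabytes, 1024L * 1024L));
    }

    @Override
    public int compareTo(MemorySize other) {
        return Long.compare(bytes, other.bytes);
    }
}
```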
All methods that accept a filesystem location (open, checkpoint, backup, …) take java.nio.file.Path instead of
String. This prevents confusion between absolute and relative paths, integrates naturally with the NIO file API, and
rules out accidentally passing non-path strings.
This is a heavily AI-driven project. We intend to continue using AI as a cornerstone of our development process, from mapping C headers to optimizing the FFM implementation.
- JDK 25+.
- Zig (any 0.15.x build).
```shell
# Build RocksDB from the submodule (first time or after a clean)
mvn generate-resources -Pnative-build

# Run unit tests
mvn test
```

This project is licensed under the same terms as RocksDB (LevelDB/Apache 2.0).
The project is open to contributions, particularly in the following areas:
- Implementing missing RocksDB C API features in Java.
- Benchmarking and performance profiling of the Java-to-Native boundary.
- Improving the safety and lifecycle management of native objects.
- Create a community around this project with the intent to merge it back into rocksdb.
- If that fails and the community is aligned:
  - Run it as a separate project (like rust-rocksdb).
  - Deploy to Maven Central.
- Add arena-accepting overloads to the byte[] API tier (Zig-style caller-owned allocator): db.put(arena, key, value) / db.get(arena, key) / db.delete(arena, key). This lets callers amortize arena create/destroy over a batch of calls instead of paying it per call.