[PECOBLR-2321] Result Set Heartbeat / Keep-Alive for Ongoing Query Executions#1415
Conversation
Design doc for PECOBLR-2321: periodic heartbeat polling to keep server-side result state alive while the client consumes results slowly. Key design points:
- Periodic GetStatementStatus (SEA) / GetOperationStatus (Thrift)
- Opt-in via EnableHeartbeat connection parameter (default false)
- Configurable interval (default 60s, aligned with ADBC C# driver)
- Zero-leak guarantee: stops on ResultSet.close, Statement.close, Connection.close, end-of-results, or server terminal state
- Error resilience: 10 consecutive failures before self-stop
- Includes cross-driver survey (ADBC C#, Python, Go, Node.js)
- Mermaid diagrams: sequence flows, state machine, class diagram

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Add periodic heartbeat polling to keep server-side result state alive
while the client consumes results slowly. Prevents warehouse auto-stop
from destroying in-progress results.
New files:
- ResultHeartbeatManager: per-connection manager with shared
ScheduledExecutorService (daemon thread). Manages start/stop/shutdown
lifecycle for heartbeats across all statements.
- ResultHeartbeatManagerTest: 7 unit tests covering lifecycle,
idempotency, interval, shutdown, re-execution.
Connection parameters:
- EnableHeartbeat (default 0): opt-in to enable heartbeat polling
- HeartbeatIntervalSeconds (default 60): polling interval
Protocol support:
- SEA: GET /sql/statements/{id} via checkStatementAlive()
- Thrift: GetOperationStatus via checkStatementAlive()
Heartbeat eligibility (skipped when not needed):
- SEA inline (InlineJsonResult): all data in memory, no server state
- Update count / metadata results: no data to keep alive
- Direct results: server already closed the operation
- Null execution result: nothing to fetch
Error resilience:
- 10 consecutive failures before self-stop (transient error tolerance)
- Single success resets failure counter
- Terminal states (CLOSED/ERROR/CANCELED/TIMEDOUT) stop heartbeat
Cleanup guarantees (zero leak):
- next() returns false → stop
- ResultSet.close() → stop
- Statement.close() → safety net stop
- Connection.close() → shutdown all
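The error-resilience rule above (10 consecutive failures stop the heartbeat, any success resets the count) can be condensed into a small sketch. The class and method names here are illustrative, not the driver's actual code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the failure-tolerance rule (hypothetical names): 10 consecutive
// failures stop the heartbeat, a single success resets the counter.
// AtomicInteger because the counter is written from the scheduler thread.
final class HeartbeatFailureCounter {
    static final int MAX_CONSECUTIVE_FAILURES = 10;
    private final AtomicInteger consecutiveFailures = new AtomicInteger(0);

    /** Record one failed tick; returns true when the heartbeat should self-stop. */
    boolean recordFailure() {
        return consecutiveFailures.incrementAndGet() >= MAX_CONSECUTIVE_FAILURES;
    }

    /** A single successful tick resets the failure counter. */
    void recordSuccess() {
        consecutiveFailures.set(0);
    }

    int current() {
        return consecutiveFailures.get();
    }
}
```

The counter deliberately tolerates intermittent errors: only an unbroken run of ten failures terminates the poller.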
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Metadata operations (getTables, getColumns, etc.) can return large result sets that may need heartbeat. Only UPDATE statements are excluded (they return update count, no data to keep alive).

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Direct results mean the server already closed the operation and delivered all data inline. No heartbeat needed — detect via ExecutionState.CLOSED in the constructor rather than waiting for the first poll to discover it.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
- No heartbeat during executeAsync() wait — user controls polling via getExecutionResult(). Heartbeat starts only when ResultSet is constructed and user begins consuming results.
- Updated eligibility table: SEA inline doesn't need heartbeat (all data in memory), added update count row.
- Added execution phase vs consumption phase diagram.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Extract isHeartbeatEligible() as package-visible method for testing. Add 10 tests covering all eligibility/ineligibility scenarios:

Eligible (heartbeat starts):
- SEA cloud fetch (Arrow) — statement alive for URL refresh
- Thrift inline — data fetched on-demand, server can evict
- Thrift cloud fetch (Arrow) — operation handle alive
- Metadata queries — can return large result sets

Ineligible (heartbeat skipped):
- SEA inline (JSON) — all data in memory
- Direct results (CLOSED) — server already closed
- Update count (DML) — no result rows
- Null execution result — nothing to fetch
- Async PENDING — user controls polling
- Async RUNNING — user controls polling

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
1. consecutiveFailures → AtomicInteger (was a plain int written by the scheduler thread, with no happens-before guarantee)
2. Stopped flag prevents RPC on a closed client/session. stopHeartbeat sets the AtomicBoolean flag BEFORE cancel(false); an in-flight heartbeat tick checks the flag before the RPC and skips if set. Exceptions during shutdown don't count as consecutive failures.
3. Constructor leak: verified startHeartbeatIfEnabled() is already the last line of the constructor, after all throwing code. No change needed — already safe.
4. HeartbeatIntervalSeconds bounds check: reject <= 0 (use default 60), warn for > 3600 (heartbeat may not keep the operation alive).

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
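Point 4's bounds check might look roughly like the following. The class name, method name, and warning text are assumptions for illustration, not the actual patch:

```java
// Sketch of the HeartbeatIntervalSeconds validation described in point 4
// (hypothetical names): non-positive values fall back to the 60s default;
// values above an hour are accepted but warned about, since a heartbeat
// that infrequent may not keep the operation alive.
final class HeartbeatIntervalValidator {
    static final int DEFAULT_INTERVAL_SECONDS = 60;
    static final int WARN_THRESHOLD_SECONDS = 3600;

    static int sanitize(int requestedSeconds) {
        if (requestedSeconds <= 0) {
            return DEFAULT_INTERVAL_SECONDS; // reject, use default
        }
        if (requestedSeconds > WARN_THRESHOLD_SECONDS) {
            System.err.println(
                "HeartbeatIntervalSeconds > 3600: heartbeat may not keep the operation alive");
        }
        return requestedSeconds;
    }
}
```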
1. Null-defense on Thrift response: if getOperationState() returns null, assume alive and log a warning (prevents NPE)
2. Better logging for heartbeat failures: first failure at INFO, terminal (10th) failure at WARN with statement ID and error message. Users will now see early signals instead of a cryptic "operation not found" on next()
3. Stop old heartbeat on re-execute: resetForNewExecution() now explicitly calls stopHeartbeat(oldStatementId) before clearing state. Prevents wasteful 10-failure self-termination of orphaned heartbeats
4. Document cloud-fetch prefetch interaction: noted that StreamingChunkProvider/RemoteChunkProvider background RPCs act as an implicit heartbeat. Explicit heartbeat is still useful for gaps (all chunks downloaded, prefetch paused, sliding window full)

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Tests rewritten to be deterministic (CountDownLatch, no Thread.sleep):
- testStoppedFlagSetOnStop: get flag after start, verify set on stop
- testStoppedFlagSetOnShutdown: same pattern, verify set on shutdown
- testStopRacingWithScheduledTick: verify stopped flag prevents RPC
- testShutdownWithBlockedTask: verify shutdownNow fires after 5s
- testReExecutionReplacesHeartbeat: verify old task stops

Fix: startHeartbeat resets the stopped flag on a new start (was stale after a stop-then-restart cycle).
Add DEBUG log on successful heartbeat start with statementId, resultType, and interval for support diagnostics.

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
Verified against dogfood warehouse:
- testHeartbeatKeepsResultsAliveDuringSlowConsumption: execute query, read first row, pause 15s (3 heartbeats at 5s interval), read remaining 99 rows successfully. All 100 rows returned.
- testHeartbeatStopsOnResultSetClose: verify clean shutdown after close

Run with:
DATABRICKS_HOST=... DATABRICKS_TOKEN=... DATABRICKS_HTTP_PATH=... \
mvn -pl jdbc-core test -Dtest="HeartbeatIntegrationTest"

Signed-off-by: Gopal Lal <gopal.lal@databricks.com>
Co-authored-by: Isaac
// Get the stopped flag from the manager — shared between the heartbeat task and
// stopHeartbeat(). Prevents RPC on a just-closed client/session: stopHeartbeat sets
// the flag before cancel(false), so an in-flight tick sees it and skips the RPC.
final java.util.concurrent.atomic.AtomicBoolean stopped = mgr.getStoppedFlag(statementId);
[CRITICAL] Orphan stopped flag — heartbeat RPC never actually fires
The stopped flag is captured here at line 328 before mgr.startHeartbeat(...) is called at line 378. Inside ResultHeartbeatManager.startHeartbeat():
// ResultHeartbeatManager.java
void startHeartbeat(StatementId statementId, Runnable heartbeatTask) {
...
stopHeartbeat(statementId); // line 63 — REMOVES this flag from map AND sets it to true
getStoppedFlag(statementId).set(false); // line 66 — computeIfAbsent creates a NEW AtomicBoolean
...
}

So the AtomicBoolean captured by the closure here is the removed/orphaned one — permanently set to true. The new flag in the map (which mgr.stopHeartbeat(...) later mutates from DatabricksResultSet.stopHeartbeat, Statement.close, Connection.close) is invisible to the closure.
Net effect: every tick, if (stopped.get()) return; short-circuits → client.checkStatementAlive(statementId) is never called. The whole feature is non-functional.
The integration test only passes because warehouses don't actually expire results in 15s — so the absence of heartbeats isn't observed.
Fix options (any one):
- Capture the flag after mgr.startHeartbeat(...) returns.
- Reuse the same AtomicBoolean in startHeartbeat/stopHeartbeat (don't remove from the map — just set(true)/set(false)).
- Have the closure call mgr.getStoppedFlag(statementId).get() per tick instead of holding a captured reference.
Add a unit test that asserts client.checkStatementAlive is invoked at least once via the production wiring — currently no such test exists.
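A minimal sketch of the second fix option, one long-lived flag per statement that is flipped in place and never removed, under assumed simplified names (the real manager keys by StatementId, not String):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of fix option 2 (illustrative, simplified API): the AtomicBoolean for
// a statement stays in the map across start/stop cycles and is mutated in
// place, so a reference captured by the heartbeat closure is always the live flag.
final class StoppedFlags {
    private final ConcurrentMap<String, AtomicBoolean> flags = new ConcurrentHashMap<>();

    AtomicBoolean getStoppedFlag(String statementId) {
        // New statements start out "stopped" until startHeartbeat flips the flag.
        return flags.computeIfAbsent(statementId, id -> new AtomicBoolean(true));
    }

    void startHeartbeat(String statementId) {
        getStoppedFlag(statementId).set(false); // same instance, not a replacement
    }

    void stopHeartbeat(String statementId) {
        getStoppedFlag(statementId).set(true); // do NOT remove from the map
    }
}
```

With this shape, the ordering bug disappears: it no longer matters whether the closure captures the flag before or after startHeartbeat runs, because there is only one flag per statement.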
// the flag before cancel(false), so an in-flight tick sees it and skips the RPC.
final java.util.concurrent.atomic.AtomicBoolean stopped = mgr.getStoppedFlag(statementId);

Runnable heartbeatTask =
[CRITICAL] Lambda strong-captures this — abandoned ResultSet keeps warehouse alive forever
This lambda invokes stopHeartbeat() (instance method, line 342, 373) and reads statementId (instance field, lines 336/340/352/353/358/367/369). Both implicitly capture this — the entire DatabricksResultSet, including executionResult (Arrow buffers, chunk providers, potentially MB of cached row data).
The future is held in ResultHeartbeatManager.activeHeartbeats for the connection's lifetime. So:
- A user that does stmt.executeQuery(...).next() once and abandons the ResultSet reference (a real-world bug, but a JDBC driver shouldn't amplify it) will:
  - Never trigger next() → false or close() (the only auto-stop paths)
  - Have the entire ResultSet and its data retained until Connection.close() — typically hours in pooled environments
  - Have the heartbeat poll forever, holding the warehouse open and accumulating cost
- This is the exact "cost forever" failure mode the design doc Requirements §3 explicitly tries to prevent.
- It is also a denial-of-service amplifier: an app opening 10k orphaned result sets per hour holds 10k Arrow batches in heap until Connection.close().
The C# ADBC reference avoids this: its poller is per-statement with linked cancellation, so even GC of the statement helps. The Java implementation here is connection-scoped, so GC of the ResultSet alone won't help — the future keeps a hard reference back to the ResultSet.
Fix: Don't capture this. Pull statementId and mgr (or just Runnable stopFn = () -> mgr.stopHeartbeat(localStatementId)) into locals so the lambda has no implicit this reference. Verify with javap -p -c (no synthetic this$0 field on the lambda class) or a simple unit test that holds a WeakReference<DatabricksResultSet> and asserts it's collectable after the strong reference is dropped.
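One way to sketch the suggested fix: build the task in a static factory whose lambda can only close over its parameters, never an enclosing instance. All names and the probe/stop indirection here are illustrative stand-ins for the driver's wiring:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the no-`this`-capture pattern: a lambda created inside a static
// method cannot capture an enclosing instance, so the scheduled task holds
// no reference back to the ResultSet or its cached row data.
final class HeartbeatTasks {
    static Runnable build(
            String statementId, AtomicBoolean stopped, Runnable probe, Runnable stopFn) {
        return () -> {
            if (stopped.get()) {
                return; // stopHeartbeat set the flag; skip the RPC
            }
            try {
                probe.run(); // stands in for client.checkStatementAlive(statementId)
            } catch (RuntimeException e) {
                stopFn.run(); // simplified: real code counts consecutive failures first
            }
        };
    }
}
```

Because the task only references its parameters, dropping the last strong reference to the ResultSet leaves it collectable even while the scheduled future is still registered.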
try {
  DatabricksConnection conn =
      (DatabricksConnection) parentStatement.getStatement().getConnection();
[CRITICAL] Pooled connections (HikariCP, DBCP, DatabricksPooledConnection) silently get NO heartbeat
This direct cast (DatabricksConnection) parentStatement.getStatement().getConnection() will throw ClassCastException for any pooled connection wrapper:
- DatabricksPooledConnection returns a JDK dynamic Proxy declaring Connection.class, IDatabricksConnectionInternal.class (see DatabricksPooledConnection.java:155-158) — not DatabricksConnection.
- HikariCP returns HikariProxyConnection; DBCP returns PoolGuardConnectionWrapper — same story.
The exception is swallowed by the outer catch (Exception e) { LOGGER.debug(...) } at line 384-386 (and again at line 401-402 for stopHeartbeat). Result: users opt in to EnableHeartbeat=1 on the most common Java connection pool deployment, get no protection, and see no error — just a DEBUG line they have to enable to find.
Fix (one of):
- connection.unwrap(DatabricksConnection.class) — works through the proxy via IDatabricksConnectionInternal.
- Add getHeartbeatManager() to IDatabricksConnectionInternal so the pool proxy forwards it transparently.
Option 2 is cleaner and matches how the rest of the driver handles pooled access.
this.cachedTelemetryCollector = resolveTelemetryCollector(parentStatement);
this.isClosed = false;
this.wasNull = false;
startHeartbeatIfEnabled();
[CRITICAL] Heartbeat never starts on Thrift result sets — feature is dead-on-arrival on the Thrift path
The Thrift constructor (this method, lines 153-196) does not call startHeartbeatIfEnabled(). Only the SEA constructor at line 127 does.
All Thrift result sets are constructed via DatabricksThriftAccessor (executeStatement, getStatementResult, etc.) using this constructor — so on a transportMode=thrift connection with EnableHeartbeat=1, the manager is created and the eligibility logic correctly returns true for THRIFT_INLINE / THRIFT_ARROW_ENABLED, but no heartbeat ever starts.
Per the design doc's eligibility table, Thrift inline (data only on cluster, server-evictable) is one of the most critical scenarios this feature is meant to cover. It's silently broken.
The eligibility tests in ResultSetHeartbeatEligibilityTest.testThriftInlineIsEligible / testThriftArrowIsEligible mock the instance via reflection and bypass the constructor entirely, so they pass while production reality is broken.
Fix: Add startHeartbeatIfEnabled(); at the end of this constructor (line 196). Add a real-constructor smoke test that builds a Thrift DatabricksResultSet via the production constructor and asserts mgr.getActiveHeartbeatCount() == 1.
statementSet.remove(statement);
}
if (heartbeatManager != null) {
  heartbeatManager.shutdown();
[HIGH] heartbeatManager.shutdown() is skipped if any statement.close() throws — scheduler + thread leak
for (IDatabricksStatementInternal statement : statementSet) {
statement.close(false); // makes RPCs — can throw
statementSet.remove(statement);
}
if (heartbeatManager != null) {
heartbeatManager.shutdown(); // never reached on throw above
}

statement.close(false) issues a closeStatement RPC — any network/server error throws SQLException out of this loop. The heartbeatManager.shutdown() and session.close() calls below it are skipped, leaking:
- The ScheduledExecutorService daemon thread (yes, daemon — but still leaks until JVM exit)
- All scheduled futures and references they hold (see the this-capture issue on DatabricksResultSet.java:330-376)
Fix: Wrap in try/finally so heartbeatManager.shutdown() always runs. Also catch per-statement exceptions so the loop completes:
try {
for (IDatabricksStatementInternal statement : statementSet) {
try { statement.close(false); } catch (Exception e) {
LOGGER.warn("Error closing statement: {}", e.getMessage());
}
statementSet.remove(statement);
}
} finally {
if (heartbeatManager != null) {
heartbeatManager.shutdown();
}
}

private static ResultHeartbeatManager createHeartbeatManager(
    IDatabricksConnectionContext connectionContext) {
  if (connectionContext instanceof DatabricksConnectionContext) {
[HIGH] instanceof DatabricksConnectionContext silently disables heartbeat for any other context impl
isHeartbeatEnabled() and getHeartbeatIntervalSeconds() live on the concrete class DatabricksConnectionContext, not on the IDatabricksConnectionContext interface. Any test mock, test double, or alternate implementation of IDatabricksConnectionContext falls through to return null — heartbeat silently disabled.
This pattern also makes the feature impossible to enable from any future context implementation (e.g., a wrapped/decorated context for telemetry or testing) without modifying this exact instanceof check.
Fix: Add the two methods to IDatabricksConnectionContext with default impls and drop the instanceof:
// IDatabricksConnectionContext.java
default boolean isHeartbeatEnabled() { return false; }
default int getHeartbeatIntervalSeconds() { return 60; }

Then this method becomes:
private static ResultHeartbeatManager createHeartbeatManager(IDatabricksConnectionContext ctx) {
if (ctx.isHeartbeatEnabled()) {
return new ResultHeartbeatManager(ctx.getHeartbeatIntervalSeconds());
}
return null;
}

 * @param statementId statement to check status for
 * @return true if the statement is still in a non-terminal state (alive), false if terminal
 */
default boolean checkStatementAlive(StatementId statementId) throws SQLException
[HIGH] Default checkStatementAlive returns false — caller treats this as terminal and self-stops
The default return false is interpreted as "terminal state" by the heartbeat task at DatabricksResultSet.java:336-343:
boolean alive = client.checkStatementAlive(statementId);
...
if (!alive) {
LOGGER.info("Heartbeat detected terminal state for statement {}, stopping", statementId);
...
stopHeartbeat();
}

Any future IDatabricksClient implementation (test fakes, custom transports, third-party impls) that doesn't override this method will:
- Stop on the first heartbeat tick
- Emit a misleading INFO log saying the statement is in terminal state — when in reality the client doesn't support heartbeat
The two production impls do override, so this is academic for production today, but the default semantics are wrong and surprising.
Fix (one of):
- Make this method abstract (no default) — forces every IDatabricksClient implementer to deal with it explicitly.
- Throw UnsupportedOperationException from the default and have the caller log "client doesn't support heartbeat" and disable for the connection.
- Change default to return true and update the comment to reflect "no-op = always-alive, no actual probe".
Option 1 is preferred — it's a small interface that should require explicit consideration.
GetStatementRequest request = new GetStatementRequest().setStatementId(statementId);
Request req = new Request(Request.GET, getStatusPath, apiClient.serialize(request));
req.withHeaders(getHeaders("getStatement"));
GetStatementResponse response = apiClient.execute(req, GetStatementResponse.class);
[HIGH] No per-RPC timeout — HeartbeatRequestTimeoutSeconds is documented but never implemented
The design doc (docs/design/HEARTBEAT_KEEP_ALIVE.md:423) lists HeartbeatRequestTimeoutSeconds with default 30s. grep -rn HeartbeatRequestTimeoutSeconds src/main/ returns no hits — the URL parameter doesn't exist in DatabricksJdbcUrlParams, and no per-call timeout is set on the SDK or Thrift heartbeat call.
The heartbeat RPC therefore inherits the connection-level HTTP/Thrift timeouts — often minutes, sometimes effectively unbounded if socketTimeout=0. Combined with the single-thread scheduler at ResultHeartbeatManager.java:42, this has three concrete consequences:
- Single-point starvation: one stuck heartbeat blocks every other heartbeat on the connection — every other registered statement misses ticks → results expire while the warehouse is still being kept alive (wrong outcome on both axes).
- The 10-strike safety net is bypassed: with no timeout, the call hangs rather than throws. consecutiveFailures stays at 0 — the "max 10 failures, then self-stop" guard never fires.
- Connection.close()'s 5s awaitTermination cannot abort the call: Apache HTTP socket I/O is not interruptible by shutdownNow(). Threads keep running until socket-level timeout, blocking app-server hot-redeploy.
Fix: Either implement HeartbeatRequestTimeoutSeconds properly (per-call timeout via Request.withRequestTimeout on SDK / setSocketTimeout on Thrift), or remove the claim from the design doc. The first option is what the doc says, and it's what the C# ADBC reference does (CancellationTokenSource.CancelAfter(_requestTimeoutSeconds)).
"Starting heartbeat for statement {} with interval {}s", statementId, intervalSeconds);

ScheduledFuture<?> future =
    scheduler.scheduleAtFixedRate(
[HIGH] Use scheduleWithFixedDelay instead of scheduleAtFixedRate — current code bursts on slow ticks
scheduleAtFixedRate semantics: if a tick takes longer than the interval (e.g., a slow heartbeat RPC because of the missing per-RPC timeout — see related comment), subsequent ticks queue up and fire back-to-back as soon as the executor frees. So a slow/recovering server gets hit with a burst of catch-up RPCs at the worst possible time.
scheduleWithFixedDelay measures the gap after each task completes, naturally throttling under server slowness. It's a one-line change and matches the C# ADBC reference (await Task.Delay(...) AFTER each poll completes).
// before
scheduler.scheduleAtFixedRate(heartbeatTask, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
// after
scheduler.scheduleWithFixedDelay(heartbeatTask, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);

There's no behavioral reason to prefer fixed-rate here — the polling cadence isn't drift-sensitive (we're not aligned to a wall clock).
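The property the comment relies on can be demonstrated in isolation. This small demo (not driver code; names are mine) measures the gap between the starts of two consecutive slow ticks under scheduleWithFixedDelay: the second tick waits for the first to complete plus the full delay, so catch-up bursts cannot occur:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Demo of scheduleWithFixedDelay semantics: the delay is measured from the
// END of each run, so a tick slower than the interval still leaves a full
// delay before the next tick (fixed-rate would fire again immediately).
final class FixedDelayDemo {
    /** Returns the gap in ms between the starts of two consecutive slow ticks. */
    static long gapBetweenTicks(long delayMs, long taskDurationMs) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        final long[] starts = new long[2];
        final int[] tick = {0};
        CountDownLatch done = new CountDownLatch(2);
        scheduler.scheduleWithFixedDelay(() -> {
            if (tick[0] < 2) {
                starts[tick[0]++] = System.nanoTime();
                done.countDown();
                try { Thread.sleep(taskDurationMs); } catch (InterruptedException ignored) { }
            }
        }, 0, delayMs, TimeUnit.MILLISECONDS);
        try { done.await(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        scheduler.shutdownNow();
        return (starts[1] - starts[0]) / 1_000_000;
    }
}
```

With a 100ms delay and a 150ms task, fixed-delay yields a start-to-start gap of at least ~250ms, whereas fixed-rate would have tried to fire again at the 100ms mark.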
return state != StatementState.CANCELED
    && state != StatementState.CLOSED
    && state != StatementState.FAILED;
} catch (IOException e) {
[HIGH] catch (IOException e) is too narrow — SDK runtime exceptions bypass the wrapping
apiClient.execute(...) does NOT only throw IOException. It also throws:
- DatabricksException / DatabricksError — for HTTP 4xx/5xx responses (e.g., 401 token expired returns DatabricksError, not IOException)
- RuntimeException — for serialization / NPE on malformed responses
These propagate uncaught past this try/catch. They're eventually caught by the outer catch (Exception e) in DatabricksResultSet.java:344, but:
- They bypass the wrapping into DatabricksSQLException (SDK_CLIENT_ERROR) — losing the structured error code surface.
- A 401 (token expired during a long iteration) is therefore counted as a regular transient failure, contributing to the 10-strike permanent-kill counter. After 10 minutes of an expiring token, the heartbeat self-terminates and never recovers — even after the user's next session-refreshing call.
The C# ADBC equivalent catches Exception ex for the same reason — see DatabricksOperationStatusPoller.cs:149.
Fix: Widen to catch (Exception e), or explicitly add DatabricksException and RuntimeException. Same goes for DatabricksThriftServiceClient.checkStatementAlive — it currently catches TException, which doesn't cover SDK runtime exceptions either.
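The widening can be sketched like this. The wrapper exception and functional interface are hypothetical scaffolding to make the point runnable; the real fix simply edits the existing catch clause:

```java
// Sketch of the widened error handling: any exception from the status probe,
// whether IOException, an SDK runtime error, or a serialization NPE, is
// funneled into one checked wrapper so the caller's failure counting and
// structured error surface see every case uniformly.
final class HeartbeatProbe {
    static final class HeartbeatProbeException extends Exception {
        HeartbeatProbeException(Throwable cause) { super(cause); }
    }

    interface StatusCall {
        boolean isAlive() throws Exception;
    }

    static boolean checkAlive(StatusCall call) throws HeartbeatProbeException {
        try {
            return call.isAlive();
        } catch (Exception e) { // widened from catch (IOException e)
            throw new HeartbeatProbeException(e);
        }
    }
}
```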
if (statementId != null) {
  ResultHeartbeatManager mgr = connection.getHeartbeatManager();
  if (mgr != null) {
    mgr.stopHeartbeat(statementId);
[HIGH] Statement.cancel() does not stop the heartbeat
This cancel() calls cancelStatement on the server but does not call mgr.stopHeartbeat(statementId). Only close() (line 175-181) and resetForNewExecution() (line 982-988) clear the heartbeat.
After cancel() returns, the heartbeat keeps polling against a cancelled operation. In the happy path the server returns CANCELED_STATE and the heartbeat task self-stops on the terminal-state check — fine. But if there's a race or "operation not found" before the server registers the cancel, those errors count as transient failures, churning the 10-strike counter and emitting WARN/INFO log noise for up to ~10 minutes after a successful cancel.
Fix: Add a heartbeat stop to cancel(), mirroring the pattern in close():
public void cancel() throws SQLException {
...
if (statementId != null) {
ResultHeartbeatManager mgr = connection.getHeartbeatManager();
if (mgr != null) {
mgr.stopHeartbeat(statementId);
}
}
this.connection.getSession().getDatabricksClient().cancelStatement(statementId);
...
}
Code Review Squad — Critical findings

I ran a multi-perspective AI review (security, architecture, language, ops, performance, tests, maintainability, agent-compat, devil's advocate) and verified the findings below by reading the actual code on this branch (commit …).

Most important: the feature does not work as written

Two independent verified bugs make the heartbeat permanently no-op on every code path:
The integration test passes only because real warehouses don't actually expire results in 15s, so the absence of heartbeats isn't observed. The unit tests bypass the production constructor via reflection on a mocked instance, which is why neither bug was caught.

What I posted as inline comments

Critical (4):
High (7):
Other concerns worth addressing (not posted inline to keep this thread focused)
Recommendation

This PR is not safe to merge until at least the four critical bugs (especially the orphan-flag one, which makes the entire feature dead code) are fixed and there are real tests that would have caught them — i.e., a test that actually verifies client.checkStatementAlive is invoked.

Feedback? Drop it in #code-review-squad-feedback.
Summary
Design + implementation for PECOBLR-2321: periodic heartbeat polling to keep server-side result state alive while the client consumes results slowly.
Problem
When users read query results slowly (pausing between next() calls), the warehouse can auto-stop after its idle timeout. For inline results (data only on cluster, not uploaded to cloud storage), this means permanent data loss. The user gets errors like INVALID_HANDLE_STATUS or "operation not found".

Solution
A ResultHeartbeatManager that periodically calls GetStatementStatus (SEA) or GetOperationStatus (Thrift) to signal the server that results are still being consumed. Opt-in via the EnableHeartbeat=1 connection parameter (default false due to cost implications).

Design doc
docs/design/HEARTBEAT_KEEP_ALIVE.md — includes cross-driver survey, Mermaid diagrams (sequence flows, state machine, class diagram), and detailed lifecycle analysis.

Heartbeat eligibility (skipped when not needed)
getExecutionResult() — heartbeat starts only when the ResultSet is constructed

Error resilience
Zero-leak guarantee
Heartbeat stops in 4 places:
next() returns false, ResultSet.close(), Statement.close(), Connection.close()

Implementation
- ResultHeartbeatManager — per-connection manager with ScheduledExecutorService (daemon thread)
- ResultHeartbeatManagerTest — 7 unit tests
- DatabricksJdbcUrlParams — EnableHeartbeat (default 0), HeartbeatIntervalSeconds (default 60)
- DatabricksConnectionContext — getter methods
- DatabricksConnection — creates/shuts down manager
- DatabricksResultSet — starts heartbeat in constructor, stops on close/next-false
- DatabricksStatement — safety net stop in close()
- IDatabricksClient — checkStatementAlive() default method
- DatabricksSdkClient — SEA heartbeat via GET /sql/statements/{id}
- DatabricksThriftServiceClient — Thrift heartbeat via GetOperationStatus