POC for supporting Spotter 3 #85

Draft

Rifdhan wants to merge 5 commits into main from poc-support-spotter-3

Conversation

@Rifdhan (Collaborator) commented Mar 18, 2026

Not intended to be merged, just sharing for reference purposes

Rifdhan Nazeer added 4 commits March 11, 2026 14:12
- Use latest TS API SDK, update code to use latest bindings
- Add code to support local dev
- Add code to use KV storage from Durable Object inside MCP Server
- Service layer and tool definitions
- Accumulate and retrieve latest messages asynchronously
- Store and retrieve streaming message data using SQLite storage
- Add TTL for storage of 30 sec, automatically reset when data is touched
@gemini-code-assist (Contributor) commented

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces foundational support for Spotter 3 by upgrading the ThoughtSpot REST API SDK and integrating new functionalities for agent-based conversations. It enables the creation, management, and interaction with agent conversations, including handling streaming responses and maintaining conversation state over time.

Highlights

  • Spotter 3 API Integration: Integrated new API endpoints and tools to support agent conversations for Spotter 3, including synchronous and asynchronous messaging capabilities.
  • Dependency Update: Updated the @thoughtspot/rest-api-sdk to version 2.22.0 to align with the new Spotter 3 features.
  • Asynchronous Conversation State Management: Implemented mechanisms for managing streaming conversation state, including storing and retrieving conversation messages and setting TTL timeouts for state eviction.
  • New Tool Definitions: Added new tool definitions in server.json and mcp-server.ts for creating agent conversations, sending messages, sending asynchronous messages, and retrieving message updates.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  ¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces support for Spotter 3 by updating dependencies, adding new API endpoints for agent conversations, and modifying the MCP server to handle these new functionalities. The changes include updating package dependencies, adding new schemas and tool definitions for agent conversations, and modifying the instrumentedMCPServer function to include conversation state management. The review focuses on identifying potential issues related to correctness and maintainability, specifically concerning the new agent conversation features and the integration of conversation state management.

```typescript
console.log('>>> streaming response update with # lines', lines.length);

for (const line of lines) {
  if (!line.startsWith("data: ")) continue;
```

critical

It's crucial to validate the line before attempting to parse it as JSON. If the line does not start with "data: ", it might contain invalid data or an error message. Attempting to parse such a line could lead to unexpected errors and potentially expose the system to security vulnerabilities. Add a check to ensure the line starts with "data: " before proceeding with the JSON parsing.
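As a hedged illustration of that validation pattern (the helper name and shape below are assumptions for this sketch, not code from the PR):

```typescript
// Illustrative helper showing the defensive pattern: only lines with the
// "data: " prefix are parsed, and malformed JSON is dropped rather than
// allowed to throw out of the stream-processing loop.
function parseSseLine(line: string): unknown | undefined {
  if (!line.startsWith("data: ")) return undefined; // comments, blanks, non-data fields
  try {
    return JSON.parse(line.slice("data: ".length));
  } catch {
    return undefined; // malformed payload: skip instead of crashing the loop
  }
}
```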

Comment on lines +525 to +593
```typescript
async callSendAgentMessageAsync(request: z.infer<typeof CallToolRequestSchema>) {
  const { conversationId, messages } = SendAgentMessageSchema.parse(request.params.arguments);
  const response = await this.getThoughtSpotService().sendAgentMessageStreaming(conversationId, { messages });

  const reader = response.body?.getReader();
  if (!reader) {
    throw new Error("Failed to get reader from response body");
  }

  const updateConversationState = async (isDone: boolean, latestMessages?: AgentMessage[]) => {
    let conversationState = await this.getConversationState(conversationId);
    if (!conversationState) {
      conversationState = {
        latestMessages: [],
        isDone: false,
      };
    }

    await this.updateConversationStateAndResetTtlTimeout(conversationId, {
      ...conversationState,
      isDone,
      ...(latestMessages ? { latestMessages: [...conversationState.latestMessages, ...latestMessages as AgentMessage[]] } : {}),
    });
  };

  setTimeout(async () => {
    const decoder = new TextDecoder();
    let buffer = "";

    while (true) {
      const { done, value } = await reader.read();
      if (done) {
        console.log('>>> streaming response done');
        await updateConversationState(true);
        break;
      }

      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split("\n");
      buffer = lines.pop() || "";
      console.log('>>> streaming response update with # lines', lines.length);

      for (const line of lines) {
        if (!line.startsWith("data: ")) continue;

        try {
          const data = JSON.parse(line.slice(6));
          for (const item of data) {
            if (item.type === 'text-chunk') {
              await updateConversationState(false, [{
                type: 'text-chunk',
                text: item.content,
              }]);
            } else if (item.type === 'answer') {
              await updateConversationState(false, [{
                type: 'answer',
                answerTitle: item.metadata.title,
                answerQuery: item.metadata.sage_query,
              }]);
            } else {
              console.log('>>> unknown item in event stream', item);
            }
          }
        } catch (error) {
          console.log('>>> error parsing line', line, error);
        }
      }
    }
  });
```

high

The callSendAgentMessageAsync function uses setTimeout to process the streaming response. This approach might not be reliable in all environments, especially if the worker is terminated before the timeout completes. Consider using a more robust mechanism for handling asynchronous tasks, such as a queue or a dedicated background process. Also, there is no error handling if the reader.read() fails.
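One hedged sketch of an alternative: hand the long-running promise to a runtime hook (e.g. Cloudflare's `ExecutionContext.waitUntil`) instead of a fire-and-forget `setTimeout`. The function names below are illustrative assumptions, not code from the PR.

```typescript
// The runtime keeps the worker alive until the promise settles, and the
// catch handler covers read failures that the setTimeout version ignored.
function startBackgroundProcessing(
  waitUntil: (p: Promise<unknown>) => void,
  processStream: () => Promise<void>,
): void {
  waitUntil(
    processStream().catch((error) => {
      console.error(">>> stream processing failed", error);
    }),
  );
}
```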

Comment on lines +529 to +531
```typescript
const reader = response.body?.getReader();
if (!reader) {
  throw new Error("Failed to get reader from response body");
```

high

Failing to get a reader from the response body is a critical error that should be handled more gracefully. Instead of throwing an error, consider returning an error response to the client with a descriptive message. This will prevent the entire worker from crashing and provide more informative feedback to the user.

Comment on lines +7 to +16
```typescript
export function instrumentedMCPServer<T extends BaseMCPServer>(MCPServer: new (
  ctx: Context,
  getConversationState: (
    conversationId: string,
  ) => Promise<StreamingConversationState | undefined>,
  updateConversationStateAndResetTtlTimeout: (
    conversationId: string,
    newState: StreamingConversationState,
  ) => Promise<void>,
) => T, config: ResolveConfigFn) {
```

medium

The number of parameters in this function signature is getting quite large. Consider using an object to group related parameters for better readability and maintainability. This is especially important as more features are added to the MCP server.

For example, instead of passing getConversationState and updateConversationStateAndResetTtlTimeout as separate parameters, you could create a ConversationStateHandlers object that encapsulates these functions.
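A minimal sketch of that suggestion. All names here beyond `StreamingConversationState` are illustrative assumptions, not part of the codebase, and the state shape is simplified.

```typescript
interface StreamingConversationState {
  latestMessages: unknown[];
  isDone: boolean;
}

// Grouping the two state callbacks behind one interface, so the
// instrumentedMCPServer constructor shrinks to (ctx, handlers).
interface ConversationStateHandlers {
  getConversationState(
    conversationId: string,
  ): Promise<StreamingConversationState | undefined>;
  updateConversationStateAndResetTtlTimeout(
    conversationId: string,
    newState: StreamingConversationState,
  ): Promise<void>;
}

// A trivial in-memory implementation, useful for tests (TTL handling omitted):
function inMemoryHandlers(): ConversationStateHandlers {
  const store = new Map<string, StreamingConversationState>();
  return {
    async getConversationState(conversationId) {
      return store.get(conversationId);
    },
    async updateConversationStateAndResetTtlTimeout(conversationId, newState) {
      store.set(conversationId, newState);
    },
  };
}
```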


```typescript
async init() {
  await this.server.init();
  this.ctx.storage
```

medium

This line this.ctx.storage does not appear to be doing anything. Is this intentional? If not, it should be removed to avoid confusion.

Comment on lines +500 to +510
```typescript
async callCreateAgentConversation(request: z.infer<typeof CallToolRequestSchema>) {
  const { metadata_context, conversation_settings } = CreateAgentConversationSchema.parse(request.params.arguments);
  const conversation = await this.getThoughtSpotService().createAgentConversation({
    metadata_context,
    conversation_settings,
  });

  return this.createStructuredContentSuccessResponse(
    { conversation_id: conversation.conversation_id },
    "Agent conversation created successfully"
  );
```

medium

The success message "Agent conversation created successfully" is a string literal. Consider defining this as a constant to improve maintainability and avoid typos. This applies to other success messages in this file as well.
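A small illustrative sketch of that suggestion. The first string is from the PR; the object name and the second entry are assumed examples.

```typescript
// Centralising success messages as constants avoids typos at call sites.
const SuccessMessages = {
  agentConversationCreated: "Agent conversation created successfully",
  agentMessageSent: "Agent message sent successfully",
} as const;

// Call sites would then reference the constant instead of the literal, e.g.:
// this.createStructuredContentSuccessResponse(result, SuccessMessages.agentConversationCreated)
```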

Comment on lines +605 to +611
```typescript
// Wait up to 5 seconds for the conversation state to be available
let conversationState: StreamingConversationState | undefined;
for (let i = 0; i < 5; i++) {
  conversationState = await this.getConversationState(conversationId);
  if (conversationState?.latestMessages?.length ?? 0 > 0) break;
  await new Promise(resolve => setTimeout(resolve, 1000));
}
```

medium

Waiting for a fixed duration (5 seconds) for the conversation state might not be sufficient in all cases. Consider using a more adaptive approach, such as a retry mechanism with exponential backoff, to handle cases where the conversation state is not immediately available. This will make the system more resilient to temporary delays or network issues.
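A hedged sketch of such a backoff loop; the helper name, delays, and attempt count are illustrative, not from the PR.

```typescript
// Retry with exponential backoff: poll until a result appears or the
// attempt budget is exhausted, doubling the delay between attempts.
async function pollWithBackoff<T>(
  poll: () => Promise<T | undefined>,
  { maxAttempts = 5, initialDelayMs = 250, factor = 2 } = {},
): Promise<T | undefined> {
  let delayMs = initialDelayMs;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await poll();
    if (result !== undefined) return result; // state became available
    // Back off before the next attempt: 250 ms, 500 ms, 1 s, 2 s, ...
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    delayMs *= factor;
  }
  return undefined; // caller decides how to treat a timeout
}
```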

Comment on lines +613 to +614
```typescript
if (!conversationState) {
  console.log('>>> no latest status for conversationId, assuming done and evicted', conversationId);
```

medium

Logging "no latest status for conversationId, assuming done and evicted" suggests a potential issue with conversation state management. Investigate why the conversation state is being evicted prematurely and consider implementing a mechanism to prevent this from happening, such as increasing the TTL or using a more persistent storage solution.

```typescript
  });
}

setTimeout(async () => {
```
Member:

Why do you need to wrap this in a setTimeout ?

Collaborator Author:

This was my simple way to defer event processing to async and return from the original request synchronously. We can go with something more idiomatic in the final version.

```typescript
  break;
}

buffer += decoder.decode(value, { stream: true });
```
Member:

Seems this stream parsing logic can be moved down to service layer

Collaborator Author:

Yes, can do

```typescript
  console.log('>>> no latest status for conversationId, assuming done and evicted', conversationId);
}

await this.updateConversationStateAndResetTtlTimeout(conversationId, {
```
Member:

Why do we need to update here ?

Member:

There is a chance you override updates coming from the backend in another thread, since you are updating them here.

Collaborator Author:

Yes, I wanted to rethink the message storage contract for the prod version

```typescript
  await this.cancelSchedule(oldState.ttlTimeoutId);
}

const schedule = await this.schedule(30, 'clearConversationState' as any, {
```
Member:

Why do we do delete and update here? Instead of just appending new messages ?

Collaborator Author:

Not sure I followed, there is no delete and update here? Are you referring to the scheduled timer?

Member:

Yes, is this the TTL ? We delete the conversation after 30 sec ?

Collaborator Author:

Yes I used 30s in the POC to validate it was working but in reality it will be longer of course


```typescript
// Wait up to 5 seconds for the conversation state to be available
let conversationState: StreamingConversationState | undefined;
for (let i = 0; i < 5; i++) {
```
Member:

How does this breaking loop help? Should we also take the last received message id as an argument? That way we can check whether more messages were received after the last one, and wait for them before responding.

Collaborator Author:

Yes, the approach here is simplified for the POC, but the overall concept is that I wanted a max timeout for getUpdates so that it can't get stuck waiting indefinitely for updates in case the Spotter BE crashed or something.
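The max-timeout plus last-seen-id idea discussed above can be sketched as follows; the helper name, signature, and timings are illustrative assumptions, not code from this PR.

```typescript
// Poll for messages newer than the caller's last-seen count, but never
// past a hard deadline, so getUpdates cannot block indefinitely if the
// backend stops producing messages.
async function waitForNewMessages(
  getMessages: () => Promise<string[]>,
  lastSeenCount: number,
  { deadlineMs = 5000, pollIntervalMs = 500 } = {},
): Promise<string[]> {
  const deadline = Date.now() + deadlineMs;
  while (Date.now() < deadline) {
    const messages = await getMessages();
    if (messages.length > lastSeenCount) {
      return messages.slice(lastSeenCount); // only the messages the caller hasn't seen
    }
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
  return []; // deadline reached: return empty rather than block forever
}
```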
