
[New Version] update 0.0.4 docs #11

Merged
wangxingjun778 merged 1 commit into main from dev on Feb 28, 2026
Conversation

@wangxingjun778 (Collaborator)

No description provided.

@wangxingjun778 wangxingjun778 merged commit 195e7e8 into main Feb 28, 2026
1 check passed
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request updates the Sirchmunk documentation for the new 0.0.4 release. Key additions include a comprehensive Docker deployment guide and the introduction of a new, faster 'FAST' search mode. The changes ensure that all user-facing documentation, including API references and SDK examples, is current with the latest features and operational details, providing clearer instructions for deployment and usage.

Highlights

  • Version Update: Sirchmunk has been updated to version 0.0.4, reflected across the documentation.
  • Docker Deployment Support: New documentation and guides have been added to facilitate Docker-based deployment, offering a simplified containerized setup.
  • Introduction of FAST Search Mode: A new 'FAST' search mode, now the default, provides significantly quicker retrieval (2-5 seconds) with fewer LLM calls.
  • Documentation Enhancements: Various guides, including CLI, Configuration, MCP Integration, Python SDK, and API Reference, have been updated to reflect the new search modes, their parameters, and LLM requirements.
  • Web UI Demo Update: The Web UI demo in the showcase has been updated from a GIF to an embedded video for a better user experience.


Changelog
  • content/_index.md
    • Updated the Sirchmunk version announcement to v0.0.4, highlighting Docker support, FAST search mode, and simplified deployment.
  • content/_index.zh.md
    • Updated the Sirchmunk version announcement to v0.0.4 in Chinese, highlighting Docker support, FAST search mode, and simplified deployment.
  • content/blog/in-context-search/index.md
    • Updated the 'Primary Mechanism' table entry for Sirchmunk to include 'Greedy Cascade (FAST) / Monte Carlo Sampling (DEEP)'.
    • Added a new paragraph explaining the characteristics and performance of FAST and DEEP search modes.
  • content/blog/in-context-search/index.zh.md
    • Updated the '核心机制' table entry for Sirchmunk to include '贪心级联(FAST)/ 蒙特卡洛采样(DEEP)' in Chinese.
    • Added a new paragraph explaining the characteristics and performance of FAST and DEEP search modes in Chinese.
  • content/blog/technical-deep-dive/index.md
    • Added a new paragraph detailing the two distinct search modes, FAST and DEEP, including their operational specifics and performance metrics.
  • content/blog/technical-deep-dive/index.zh.md
    • Added a new paragraph detailing the two distinct search modes, FAST and DEEP, including their operational specifics and performance metrics in Chinese.
  • content/docs/getting-started.md
    • Added a new card linking to the 'Docker Deployment' guide in the getting started section.
  • content/docs/getting-started.zh.md
    • Added a new card linking to the 'Docker 部署' guide in the getting started section in Chinese.
  • content/docs/guide/cli.md
    • Updated CLI search command examples to explicitly mention 'FAST mode by default'.
    • Added a new example demonstrating how to use the --mode DEEP option for comprehensive analysis.
  • content/docs/guide/cli.zh.md
    • Updated CLI search command examples to explicitly mention '默认 FAST 模式' in Chinese.
    • Added a new example demonstrating how to use the --mode DEEP option for comprehensive analysis in Chinese.
  • content/docs/guide/configuration.md
    • Updated the description for LLM_API_KEY to indicate its requirement for both FAST and DEEP modes.
    • Changed the default search mode parameter from DEEP to FAST.
    • Clarified LLM requirements for FAST and DEEP modes, emphasizing FAST mode's speed advantage.
  • content/docs/guide/configuration.zh.md
    • Updated the description for LLM_API_KEY to indicate its requirement for both FAST and DEEP modes in Chinese.
    • Changed the default search mode parameter from DEEP to FAST in Chinese.
    • Clarified LLM requirements for FAST and DEEP modes, emphasizing FAST mode's speed advantage in Chinese.
  • content/docs/guide/docker.md
    • Added a new document providing detailed instructions for Docker deployment, including available images, quick start commands, volume mounts, and environment variables.
  • content/docs/guide/docker.zh.md
    • Added a new document providing detailed instructions for Docker deployment in Chinese, including available images, quick start commands, volume mounts, and environment variables.
  • content/docs/guide/mcp.md
    • Updated the mode parameter description to include FAST as the default option.
    • Added a new entry for FAST mode in the search mode table, detailing its description and LLM requirement.
  • content/docs/guide/mcp.zh.md
    • Updated the mode parameter description to include FAST as the default option in Chinese.
    • Added a new entry for FAST mode in the search mode table, detailing its description and LLM requirement in Chinese.
  • content/docs/guide/python-sdk.md
    • Updated Python SDK examples to explicitly demonstrate both the default FAST mode and the DEEP mode.
    • Adjusted comments for the mode parameter to reflect FAST as the new default.
    • Removed redundant mode: "DEEP" specifications from Python SDK examples.
  • content/docs/guide/python-sdk.zh.md
    • Updated Python SDK examples to explicitly demonstrate both the default FAST mode and the DEEP mode in Chinese.
    • Adjusted comments for the mode parameter to reflect FAST as the new default in Chinese.
    • Removed redundant mode: "DEEP" specifications from Python SDK examples in Chinese.
  • content/docs/reference/_index.md
    • Changed the default mode in API request examples from DEEP to FAST.
    • Updated the reported API version in the health check endpoint to 0.0.4.
    • Added separate cURL examples to illustrate usage of both FAST and DEEP search modes.
    • Removed explicit mode: "DEEP" from Python and JavaScript API call examples.
  • content/docs/reference/_index.zh.md
    • Changed the default mode in API request examples from DEEP to FAST in Chinese.
    • Updated the reported API version in the health check endpoint to 0.0.4 in Chinese.
    • Added separate cURL examples to illustrate usage of both FAST and DEEP search modes in Chinese.
    • Removed explicit mode: "DEEP" from Python and JavaScript API call examples in Chinese.
  • content/docs/reference/i18n.md
    • Clarified that both FAST and DEEP modes require an LLM API key, and highlighted FAST mode's significant speed advantage.
  • content/docs/reference/i18n.zh.md
    • Clarified that both FAST and DEEP modes require an LLM API key in Chinese, and highlighted FAST mode's significant speed advantage.
  • content/showcase/web-ui/index.md
    • Replaced the static GIF demonstration of the Web UI with an embedded video for improved presentation.
  • content/showcase/web-ui/index.zh.md
    • Replaced the static GIF demonstration of the Web UI with an embedded video for improved presentation in Chinese.
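The API-reference entries above mention separate cURL examples for the FAST and DEEP modes. A rough sketch of what such calls might look like, assuming a hypothetical `/search` endpoint path and JSON field names (port 8584 matches the Docker snippets quoted later in this thread; none of these names are taken from the actual Sirchmunk API reference):

```shell
# Sketch only: the /search path and payload field names are assumptions,
# not the verified Sirchmunk API schema.

# FAST is now the default mode, so no "mode" field is required:
curl -X POST "http://localhost:8584/search" \
  -H "Content-Type: application/json" \
  -d '{"query": "how does authentication work?"}'

# DEEP mode must be requested explicitly for comprehensive analysis:
curl -X POST "http://localhost:8584/search" \
  -H "Content-Type: application/json" \
  -d '{"query": "how does authentication work?", "mode": "DEEP"}'
```

Consult content/docs/reference/_index.md in this PR for the authoritative request shape.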

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment
Code Review

This pull request updates the documentation for the v0.0.4 release, introducing Docker support and the new FAST/DEEP search modes. The changes are comprehensive across both English and Chinese documents. I've identified a few inconsistencies in the new Docker documentation regarding environment variable names and example values, which could lead to user errors. My suggestions aim to align these new documents with the rest of the project's configuration standards.

```bash
-p 8584:8584 \
-e LLM_API_KEY="your-api-key-here" \
-e LLM_BASE_URL="https://api.openai.com/v1" \
-e LLM_MODEL_NAME="gpt-5.2" \
```


high

There are a couple of inconsistencies in this example that could confuse users:

  1. The environment variable is LLM_MODEL_NAME, but other documentation files (e.g., configuration.md) use LLM_MODEL.
  2. The model name gpt-5.2 appears to be a placeholder. Using gpt-4o would align with other examples in the docs.

Please update for consistency.

Suggested change
Before: -e LLM_MODEL_NAME="gpt-5.2" \
After: -e LLM_MODEL="gpt-4o" \

| Variable | Description | Default |
|---|---|---|
| `LLM_API_KEY` | Your LLM API key | *required* |
| `LLM_BASE_URL` | OpenAI-compatible API base URL | `https://api.openai.com/v1` |
| `LLM_MODEL_NAME` | Model name | `gpt-5.2` |


high

To maintain consistency with other documentation files (configuration.md), this variable should be LLM_MODEL. The default value should also be updated from the placeholder gpt-5.2 to gpt-4o for consistency.

Suggested change
Before: | `LLM_MODEL_NAME` | Model name | `gpt-5.2` |
After: | `LLM_MODEL` | Model name | `gpt-4o` |
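Putting the reviewer's two fixes together, the corrected `docker run` invocation from the guide would read roughly as follows. This is a sketch: it assumes the cn-beijing image tag quoted later in this thread and omits other flags (e.g. volume mounts) that the guide also documents.

```shell
# Sketch of the corrected deployment command: LLM_MODEL instead of
# LLM_MODEL_NAME, and gpt-4o instead of the gpt-5.2 placeholder.
# Volume mounts and other options from the full guide are omitted here.
docker run -d \
  -p 8584:8584 \
  -e LLM_API_KEY="your-api-key-here" \
  -e LLM_BASE_URL="https://api.openai.com/v1" \
  -e LLM_MODEL="gpt-4o" \
  modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/sirchmunk:ubuntu22.04-py312-0.0.4
```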

```bash
-p 8584:8584 \
-e LLM_API_KEY="your-api-key-here" \
-e LLM_BASE_URL="https://api.openai.com/v1" \
-e LLM_MODEL_NAME="gpt-5.2" \
```


high

The environment variable LLM_MODEL_NAME here is inconsistent with LLM_MODEL used in other documents (e.g. configuration.zh.md), which could lead to user misconfiguration. In addition, the model name gpt-5.2 appears to be a placeholder and is inconsistent with the gpt-4o used elsewhere in the docs. Please unify these to avoid confusion.

Suggested change
Before: -e LLM_MODEL_NAME="gpt-5.2" \
After: -e LLM_MODEL="gpt-4o" \

| Variable | Description | Default |
|---|---|---|
| `LLM_API_KEY` | LLM API key | *required* |
| `LLM_BASE_URL` | OpenAI-compatible API base URL | `https://api.openai.com/v1` |
| `LLM_MODEL_NAME` | Model name | `gpt-5.2` |


high

For consistency with other documents (e.g. configuration.zh.md), this environment variable should be LLM_MODEL. The default value should also be updated from the placeholder gpt-5.2 to gpt-4o.

Suggested change
Before: | `LLM_MODEL_NAME` | Model name | `gpt-5.2` |
After: | `LLM_MODEL` | Model name | `gpt-4o` |


```bash
# Pull the image
docker pull modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/sirchmunk:ubuntu22.04-py312-0.0.4
```

medium

The docker pull command hardcodes the cn-beijing registry. Since you provide both US and China registries, it would be beneficial to add a comment advising users to choose the one geographically closest to them for better pull speeds. The Chinese version of this document already includes a helpful comment for this.

Suggested change
Before:
docker pull modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/sirchmunk:ubuntu22.04-py312-0.0.4
After:
# Pull the image (choose the registry closest to your location)
docker pull modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/sirchmunk:ubuntu22.04-py312-0.0.4
