LLM chat directly in VS Code. No browser needed.
- Send entire file or selected text to an LLM
- Response inserted at cursor or replaces selection
- Supports Claude, OpenAI, and LM Studio (local)
- Configure API keys and models in VS Code settings
- VS Code
- API Keys: Obtain keys from Anthropic for Claude or OpenAI for GPT models.
- LM Studio: For local LLM use, ensure LM Studio is running on `http://localhost:1234`.
- Node.js: Required for building from source.
- Download the `.vsix` file from the latest release.
- Open VS Code and go to Extensions (`Ctrl+Shift+X`).
- Click `...` > Install from VSIX and select the downloaded file.
- Clone the repository: `git clone https://github.com/gdifiore/askdotmd.git`
- Install dependencies: `npm ci`
- Build the extension: `npm run webpack`
- Package the extension: `npm run package`
- Install the generated `.vsix` file as above.
API keys are stored in VS Code's encrypted SecretStorage, not in settings.json.
Set a key via the Command Palette (`Ctrl+Shift+P`):
- `Ask.md: Set API Key`: choose a provider and enter the key
- `Ask.md: Clear API Key`: remove a stored key
On first request to a provider, you will be prompted for the key if none is stored.
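The prompt-once flow can be sketched as below. `SecretStorageLike`, `InMemorySecrets`, and the `askdotmd.<provider>` key naming are assumptions for illustration; the real extension would use VS Code's `context.secrets`, which is Promise-based.

```typescript
// Minimal synchronous stand-in for VS Code's SecretStorage
// (the real API, vscode.ExtensionContext.secrets, returns Promises).
interface SecretStorageLike {
  get(key: string): string | undefined;
  store(key: string, value: string): void;
  delete(key: string): void;
}

class InMemorySecrets implements SecretStorageLike {
  private secrets = new Map<string, string>();
  get(key: string) { return this.secrets.get(key); }
  store(key: string, value: string) { this.secrets.set(key, value); }
  delete(key: string) { this.secrets.delete(key); }
}

// On first request: prompt for a key only when none is stored,
// then persist it for subsequent requests.
function getOrPromptKey(
  secrets: SecretStorageLike,
  provider: string,
  promptUser: () => string,
): string {
  const id = `askdotmd.${provider}`; // key naming is an assumption
  const existing = secrets.get(id);
  if (existing !== undefined) return existing;
  const entered = promptUser();
  secrets.store(id, entered);
  return entered;
}
```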
Other settings (Command Palette → Preferences: Open Settings):
- `askdotmd.defaultModel`: Default LLM (`claude`, `openai`, or `lmstudio`).
- `askdotmd.claudeModel` / `askdotmd.openaiModel` / `askdotmd.lmstudioModel`: Model name.
- `askdotmd.lmstudioApiUrl`: LM Studio endpoint (default `http://localhost:1234/v1/chat/completions`).
- `askdotmd.maxTokens`: Max tokens per response (default `4096`).
- `askdotmd.temperature`: Sampling temperature (default `0.7`).
- `askdotmd.showContextInfo`: Prepend filename/language/lines to the request (default `true`).
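Under these settings, a request to an OpenAI-compatible endpoint such as LM Studio's presumably looks like the sketch below. Only the endpoint and the defaults come from the settings above; the `buildRequest` helper and the context-info prefix format are illustrative assumptions.

```typescript
// Sketch of a chat-completions request body using the documented
// defaults (max_tokens 4096, temperature 0.7). The context-info
// prefix format is an assumption based on askdotmd.showContextInfo.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  max_tokens: number;
  temperature: number;
}

function buildRequest(
  model: string,
  fileName: string,
  language: string,
  lineCount: number,
  text: string,
  showContextInfo: boolean = true,
): ChatRequest {
  const context = showContextInfo
    ? `File: ${fileName} (${language}, ${lineCount} lines)\n\n`
    : "";
  return {
    model,
    messages: [{ role: "user", content: context + text }],
    max_tokens: 4096,  // askdotmd.maxTokens default
    temperature: 0.7,  // askdotmd.temperature default
  };
}
```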
- Open a file in VS Code
- Add a request in a comment (e.g., `// Generate a sorting function`)
- Trigger the command:
  - No selection: Sends entire file, inserts response at cursor
  - With selection: Sends only selection, replaces with response
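The two trigger modes can be modeled as pure string operations. This is a sketch under assumed names (`textToSend`, `applyResponse`); the real extension works on `vscode.TextEditor` selections and positions.

```typescript
// Decide what to send: the selection if one exists, otherwise the whole file.
function textToSend(fileText: string, selection: string | null): string {
  return selection !== null && selection.length > 0 ? selection : fileText;
}

// Apply the response: replace the selected range, or insert at the cursor.
function applyResponse(
  fileText: string,
  selection: { start: number; end: number } | null,
  cursorOffset: number,
  response: string,
): string {
  if (selection !== null) {
    return fileText.slice(0, selection.start) + response + fileText.slice(selection.end);
  }
  return fileText.slice(0, cursorOffset) + response + fileText.slice(cursorOffset);
}
```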
Access the command:
- Command Palette (`Ctrl+Shift+P`): `askdotmd: Send Request`
- Default keybinding: `Ctrl+Shift+L` (Mac: `Cmd+Shift+L`)
Select your LLM when prompted (Claude, OpenAI, or LM Studio).
- Requests should be in comments; automatic extraction not yet implemented
- Large files may exceed token limits (select specific sections instead)
- Subject to your LLM provider's rate limits and quotas
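For the token-limit caveat above, a crude pre-flight check can help decide when to select a section instead of sending the whole file. The chars/4 ratio is a rough English-text heuristic, not the model's actual tokenizer, and both function names are illustrative.

```typescript
// Rough token estimate: ~4 characters per token for English text.
// Real tokenizers (Anthropic's, OpenAI's tiktoken) vary by model.
function roughTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

function likelyTooLarge(text: string, contextLimit: number): boolean {
  return roughTokenCount(text) > contextLimit;
}
```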