@@ -54,7 +54,10 @@ knowcode semantic-search "How does parsing work?"
# 7. Start the intelligence server with watch mode
knowcode server --port 8080 --watch

- # 8. View statistics
+ # 8. Start MCP server for IDE integration
+ knowcode mcp-server --store .
+
+ # 9. View statistics
knowcode stats
```

@@ -155,9 +158,11 @@ knowcode server --port 8080
```

Once running, you can access endpoints like:
- - `GET /api/v1/context?target=MyClass`
+ - `GET /api/v1/context?target=MyClass&task_type=debug`
- `GET /api/v1/search?q=parser` (lexical search)
- `POST /api/v1/context/query` (semantic search)
+ - `GET /api/v1/trace_calls/{entity_id}?direction=callers&depth=3` (multi-hop call graph)
+ - `GET /api/v1/impact/{entity_id}` (deletion impact analysis)
- `POST /api/v1/reload` (to refresh data after a new `analyze` run)

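To make the endpoint list above concrete, here is a minimal Python sketch that queries a locally running server with `requests`. It is an illustration only: the response schema is not documented here, and the entity id passed to `trace_calls` is a placeholder.

```python
# Sketch: querying the local KnowCode intelligence server (assumed to be
# running on localhost:8080). Responses are printed raw because the exact
# JSON schema is not documented in this README.
import requests

BASE = "http://localhost:8080/api/v1"

# Task-aware context for a single entity
ctx = requests.get(
    f"{BASE}/context",
    params={"target": "MyClass", "task_type": "debug"},
    timeout=10,
)
ctx.raise_for_status()
print(ctx.json())

# Multi-hop call graph for an entity; "MyClass.parse" is a placeholder id
trace = requests.get(
    f"{BASE}/trace_calls/MyClass.parse",
    params={"direction": "callers", "depth": 3},
    timeout=10,
)
print(trace.json())
```
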
### `history`
@@ -207,6 +212,33 @@ models:
knowcode ask "How does the graph builder work?"
```

+ ### `mcp-server`
+ Start an MCP (Model Context Protocol) server for IDE agent integration.
+
+ ```bash
+ knowcode mcp-server [--store <path>]
+ ```
+
+ **Tools Exposed:**
+ - `search_codebase` - Search for code entities by name
+ - `get_entity_context` - Get detailed context for an entity
+ - `trace_calls` - Trace the call graph (callers/callees) to a given depth
+
+ **MCP Client Configuration (Claude Desktop, VS Code, etc.):**
+ ```json
+ {
+   "knowcode": {
+     "command": "knowcode",
+     "args": ["mcp-server", "--store", "/path/to/project"]
+   }
+ }
+ ```
+
+ **Installation with MCP support:**
+ ```bash
+ pip install "knowcode[mcp]"
+ ```
+
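As a rough illustration of how an agent could drive these tools outside an IDE, the sketch below uses the official `mcp` Python SDK (`pip install mcp`) to launch the server over stdio and call two of the tools. The argument names (`query`, `entity_id`, `direction`, `depth`) are assumptions inferred from the HTTP API above, not a documented schema.

```python
# Sketch: calling the KnowCode MCP server over stdio with the `mcp` client SDK.
# Tool argument names below are assumptions; check the server's tool schemas
# via list_tools() for the authoritative signatures.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="knowcode",
    args=["mcp-server", "--store", "/path/to/project"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server actually exposes
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Look up an entity by name ("GraphBuilder" is a placeholder)
            hits = await session.call_tool("search_codebase", {"query": "GraphBuilder"})
            print(hits.content)

            # Trace callers of a specific entity two hops out (placeholder id)
            callers = await session.call_tool(
                "trace_calls",
                {"entity_id": "GraphBuilder.build", "direction": "callers", "depth": 2},
            )
            print(callers.content)

asyncio.run(main())
```

In practice an MCP-capable IDE client performs this handshake itself; the JSON configuration shown above is normally all that is required.
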
## Supported Languages (MVP)

- **Python** (.py) - Full AST parsing (Supports Python 3.9 - 3.12)
@@ -229,6 +261,40 @@ KnowCode follows a layered architecture:

See [KnowCode.md](KnowCode.md) for the complete reference architecture.

+ ## Configuration
+
+ **`aimodels.yaml`** supports:
+
+ ```yaml
+ # LLM models for 'ask' command
+ natural_language_models:
+   - name: gemini-2.0-flash-lite
+     provider: google
+     api_key_env: GOOGLE_API_KEY
+
+ # Embedding models
+ embedding_models:
+   - name: voyage-3-lite
+     provider: voyageai
+     api_key_env: VOYAGE_API_KEY_1
+
+ # Reranking models (cross-encoder)
+ reranking_models:
+   - name: rerank-2.5
+     provider: voyageai
+     api_key_env: VOYAGE_API_KEY_1
+
+ # Config
+ config:
+   sufficiency_threshold: 0.8  # For local-first answering
+ ```
+
+ **Optional dependencies:**
+ ```bash
+ pip install "knowcode[mcp]"       # MCP server support
+ pip install "knowcode[voyageai]"  # VoyageAI reranking
+ ```
+
## Example Output

**Stats:**
@@ -303,6 +369,12 @@ See [KnowCode.md](KnowCode.md) for the full vision. The MVP focuses on:
- ✅ v1.4: Runtime signal integration
- ✅ v2.0: Intelligence Server mode (local API for local IDE agents)
- ✅ v2.1: Semantic search with embeddings, hybrid retrieval, and watch mode
+ - ✅ v2.2: Developer Q&A and IDE agent integration:
+   - Query classification and task-specific templates
+   - Multi-hop `trace_calls()` and impact analysis
+   - Local-first `smart_answer()` with sufficiency scoring
+   - MCP server for IDE integration
+   - VoyageAI cross-encoder reranking

**Future releases:**
- v3.0: Team sharing & Enterprise features (RBAC, SSO, etc.)