Thank you for your interest in contributing to Dynamic Context Pruning (DCP)!
## License and Contributions
This project uses the **GNU Affero General Public License v3.0 (AGPL-3.0)**.
### Contribution Agreement
By submitting a Pull Request to this project, you agree that:
1. Your contributions are licensed under the **AGPL-3.0**.
2. You grant the project maintainer(s) a non-exclusive, perpetual, irrevocable, worldwide, royalty-free, transferable license to use, modify, and re-license your contributions under any terms they choose, including commercial or proprietary licenses.
This arrangement ensures the project remains Open Source while providing a path for commercial sustainability.
## Getting Started
1. Fork the repository.
2. Create a feature branch.
3. Implement your changes and add tests if applicable.
4. Ensure all tests pass and the code is formatted.
# Dynamic Context Pruning (DCP)

Automatically reduces token usage in OpenCode by removing obsolete content from conversation history.
## Installation
DCP uses multiple tools and strategies to reduce context size:

### Tools
**Distill** — Exposes a `distill` tool that the AI can call to distill valuable context into concise summaries before removing the tool content.
**Compress** — Exposes a `compress` tool that the AI can call to collapse a large section of conversation (messages and tools) into a single summary.
**Prune** — Exposes a `prune` tool that the AI can call to remove completed or noisy tool content from context.
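All four tools reduce to the same underlying operation: the model names tool results it no longer needs, and DCP either drops them outright or replaces them with a summary. A minimal sketch of that operation (the types and function names here are illustrative, not DCP's actual internals):

```typescript
// Illustrative shape of a tracked tool result (assumed, not DCP's real type).
interface ToolResult {
  id: string;
  tool: string;
  output: string;
}

// `prune`-style removal: drop the listed results outright.
function prune(history: ToolResult[], ids: string[]): ToolResult[] {
  const drop = new Set(ids);
  return history.filter((r) => !drop.has(r.id));
}

// `distill`-style removal: replace the raw output with a model-written summary.
function distill(history: ToolResult[], id: string, summary: string): ToolResult[] {
  return history.map((r) =>
    r.id === id ? { ...r, output: `[distilled] ${summary}` } : r
  );
}
```

`compress` generalizes the same idea from a single result to a contiguous range of messages and tools.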
### Strategies
**Deduplication** — Identifies repeated tool calls (e.g., reading the same file multiple times) and keeps only the most recent output. Runs automatically on every request with zero LLM cost.
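A deduplication pass like this might be sketched as follows, assuming calls are keyed by tool name plus serialized arguments (shapes are illustrative, not DCP's actual code):

```typescript
interface ToolCall {
  tool: string;
  args: string; // serialized arguments, e.g. canonical JSON
  output: string;
}

// Keep only the most recent occurrence of each (tool, args) pair;
// earlier duplicate outputs are dropped from context.
function deduplicate(history: ToolCall[]): ToolCall[] {
  const lastIndex = new Map<string, number>();
  history.forEach((c, i) => lastIndex.set(`${c.tool}:${c.args}`, i));
  return history.filter((c, i) => lastIndex.get(`${c.tool}:${c.args}`) === i);
}
```

Because this is a pure comparison over existing history, it costs no LLM tokens to run.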
**Supersede Writes** — Removes write tool calls for files that have subsequently been read. When a file is written and later read, the original write content becomes redundant since the current file state is captured in the read result. Runs automatically on every request with zero LLM cost.
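Roughly, the check looks like this (an illustration, not DCP's implementation): for each write, see whether a later read targets the same file, and if so drop the write's input content:

```typescript
interface FileOp {
  tool: string;        // e.g. "write" or "read"
  filePath: string;
  index: number;       // position in the conversation
  content?: string;    // write input; this is the part that gets pruned
}

// Clear write inputs whose file was read again later: the read result
// already reflects the file's current state, so the write body is redundant.
function supersedeWrites(ops: FileOp[]): FileOp[] {
  return ops.map((op) => {
    const readLater = ops.some(
      (o) => o.tool === "read" && o.filePath === op.filePath && o.index > op.index
    );
    return op.tool === "write" && readLater ? { ...op, content: undefined } : op;
  });
}
```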
**Purge Errors** — Prunes tool inputs for tools that returned errors after a configurable number of turns (default: 4). Error messages are preserved for context, but the potentially large input content is removed. Runs automatically on every request with zero LLM cost.
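The idea can be sketched with assumed record shapes: inputs are cleared once an errored call is at least `turns` turns old, while the error message itself survives:

```typescript
interface ToolRecord {
  turn: number;          // message turn when the tool ran
  errored: boolean;
  input?: string;        // potentially large input, pruned once the error ages out
  errorMessage?: string; // always preserved for context
}

// Drop inputs for errored tools that are `turns` or more turns old.
function purgeErrors(history: ToolRecord[], currentTurn: number, turns = 4): ToolRecord[] {
  return history.map((r) =>
    r.errored && currentTurn - r.turn >= turns ? { ...r, input: undefined } : r
  );
}
```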
LLM providers like Anthropic and OpenAI cache prompts based on exact prefix matching.

**Trade-off:** You lose some cache read benefits but gain larger token savings from reduced context size and performance improvements through reduced context poisoning. In most cases, the token savings outweigh the cache miss cost—especially in long sessions where context bloat becomes significant.
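To make the trade-off concrete, here is a back-of-the-envelope cost model. All numbers are hypothetical (actual rates vary by provider); the point is only that a large context reduction can outweigh a modest drop in cache hit rate:

```typescript
// Illustrative rates only — not any provider's actual pricing.
const inputRate = 3.0;  // $ per million fresh input tokens (assumed)
const cachedRate = 0.3; // $ per million cached tokens (assumed 10% of fresh)

// Cost of one request given context size and the fraction served from cache.
function requestCost(tokens: number, cacheHitRatio: number): number {
  const cached = tokens * cacheHitRatio;
  const fresh = tokens - cached;
  return (cached * cachedRate + fresh * inputRate) / 1_000_000;
}

// Without DCP: 100k-token context, 85% served from cache.
const withoutDcp = requestCost(100_000, 0.85);
// With DCP: context pruned to 60k tokens, 80% served from cache.
const withDcp = requestCost(60_000, 0.80);
```

Under these assumed numbers the pruned request is cheaper despite the lower hit rate, because far fewer tokens are sent at all.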
> **Note:** In testing, cache hit rates were approximately 80% with DCP enabled vs 85% without for most providers.
**Best use cases:**

- **Request-based billing** — Providers that count usage in requests, such as GitHub Copilot and Google Antigravity, see no negative price impact.
- **Uniform token pricing** — Providers that bill cached tokens at the same rate as regular input tokens, such as Cerebras, see pure savings with no cache-miss penalty.

**Claude Subscriptions:** Anthropic subscription users (who receive "free" caching) may experience faster limit depletion than hit-rate ratios suggest due to the higher relative cost of cache misses. See [Claude Cache Limits](https://she-llac.com/claude-limits) for details.
## Configuration
DCP uses its own config file:
- Custom config directory: `$OPENCODE_CONFIG_DIR/dcp.jsonc` (or `dcp.json`), if `OPENCODE_CONFIG_DIR` is set
- Project: `.opencode/dcp.jsonc` (or `dcp.json`) in your project's `.opencode` directory
<details>
<summary><strong>Default Configuration</strong> (click to expand)</summary>

```jsonc
{
  // Enable debug logging to ~/.config/opencode/logs/dcp/
  "debug": false,
  // Notification display: "off", "minimal", or "detailed"
  "pruneNotification": "detailed",
  // Notification type: "chat" (in-conversation) or "toast" (system toast)
  "pruneNotificationType": "chat",
  // Slash commands configuration
  "commands": {
    "enabled": true,
    // Additional tools to protect from pruning via commands (e.g., /dcp sweep)
    "protectedTools": [],
  },
  // Protect from pruning for <turns> message turns past tool invocation
  "turnProtection": {
    "enabled": false,
    "turns": 4,
  },
  // Protect file operations from pruning via glob patterns
  // Patterns match tool parameters.filePath (e.g. read/write/edit)
  "protectedFilePatterns": [],
  // LLM-driven context pruning tools
  "tools": {
    // Shared settings for all prune tools
    "settings": {
      // Nudge the LLM to use prune tools (every <nudgeFrequency> tool results)
      "nudgeEnabled": true,
      "nudgeFrequency": 10,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
    // Removes tool content from context without preservation (for completed tasks or noise)
    "prune": {
      "enabled": true,
    },
    // Distills key findings into preserved knowledge before removing raw content
    "distill": {
      "enabled": true,
      // Show distillation content as an ignored message notification
      "showDistillation": false,
    },
    // Collapses a range of conversation content into a single summary
    "compress": {
      "enabled": true,
      // Show summary content as an ignored message notification
      "showCompression": true,
    },
  },
  // Automatic pruning strategies
  "strategies": {
    // Remove duplicate tool calls (same tool with same arguments)
    "deduplication": {
      "enabled": true,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
    // Prune write tool inputs when the file has been subsequently read
    "supersedeWrites": {
      "enabled": true,
    },
    // Prune tool inputs for errored tools after X turns
    "purgeErrors": {
      "enabled": true,
      // Number of turns before errored tool inputs are pruned
      "turns": 4,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
  },
}
```

</details>
### Commands
DCP provides a `/dcp` slash command:

- `/dcp stats` — Shows cumulative pruning statistics across all sessions.
- `/dcp sweep` — Prunes all tools since the last user message. Accepts an optional count: `/dcp sweep 10` prunes the last 10 tools. Respects `commands.protectedTools`.
### Protected Tools
By default, these tools are always protected from pruning across all strategies: