Add to your OpenCode config:

```jsonc
// opencode.jsonc
{
  "plugin": ["@tarquinen/opencode-dcp@latest"],
}
```

Using `@latest` ensures you always get the newest version automatically when OpenCode starts.

Restart OpenCode. The plugin will automatically start optimizing your sessions.
DCP reduces context size through a compress tool and automatic cleanup.

### Compress

Compress is a tool exposed to your model that selects a conversation range and replaces it with a technical summary. Think of it as a much smarter version of OpenCode's compaction process: instead of triggering statically when the session reaches its maximum context and summarizing the entire coding session, Compress lets the model decide when to activate based on task completion, and compress only the subset of messages covering the completed task. The resulting summaries are much more focused and precise than OpenCode's native compaction.

When a new compression overlaps an earlier one, the earlier summary is nested inside the new one — so information is preserved through layers of compression rather than diluted away. Additionally, protected tool outputs (such as subagents and skills) and protected file patterns are always kept in compression summaries, ensuring that the most important information is never lost. You can also enable `protectUserMessages` to preserve your messages verbatim during compression, though note that large prompts (e.g. copy-pasting log files into the prompt) will then never be compressed away.
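The nesting behavior described above can be sketched in a few lines. This is an illustrative simplification, not DCP's actual implementation — the `Entry` type and `compress` function are hypothetical names:

```typescript
// Hypothetical sketch of layered compression: when a new range overlaps an
// earlier summary, the earlier summary is carried along inside the new one.
type Entry =
  | { kind: "message"; text: string }
  | { kind: "summary"; text: string; nested: Entry[] };

function compress(log: Entry[], start: number, end: number, summary: string): Entry[] {
  // Entries in the range (including earlier summary nodes) are nested,
  // not discarded, so no layer of information is lost.
  const node: Entry = { kind: "summary", text: summary, nested: log.slice(start, end + 1) };
  return [...log.slice(0, start), node, ...log.slice(end + 1)];
}

let log: Entry[] = [
  { kind: "message", text: "read config" },
  { kind: "message", text: "ran tests" },
  { kind: "message", text: "fixed bug" },
];
// First compression covers messages 1-2; a later, broader one covers
// entries 0-1 (the summary node is now at index 1) and nests it.
log = compress(log, 1, 2, "tested and fixed the bug");
log = compress(log, 0, 1, "completed the config task");
```

Decompression then amounts to replacing a summary node with its `nested` entries again.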
### Deduplication
DCP provides a `/dcp` slash command:

- `/dcp stats` — Shows cumulative pruning statistics across all sessions.
- `/dcp sweep` — Prunes all tools since the last user message. Accepts an optional count: `/dcp sweep 10` prunes the last 10 tools. Respects `commands.protectedTools`.
- `/dcp manual [on|off]` — Toggle manual mode or set an explicit state. When on, the AI will not autonomously use context management tools.
- `/dcp compress [focus]` — Trigger a single compress tool execution. Optional focus text directs what range to compress.
- `/dcp decompress <n>` — Restore a specific active compression by ID (for example `/dcp decompress 2`). Running without an argument shows available compression IDs, token sizes, and topics.
- `/dcp recompress <n>` — Re-apply a user-decompressed compression by ID (for example `/dcp recompress 2`). Running without an argument shows recompressible IDs, token sizes, and topics.
LLM providers cache prompts based on exact prefix matching. When DCP prunes context, the prompt prefix changes, so later requests can no longer reuse the cache from that point on.

**Trade-off:** You lose some cache reads but gain token savings from reduced context size and fewer hallucinations from stale context. In most cases, especially in long sessions, the savings outweigh the cache miss cost.

> [!NOTE]
> In testing, cache hit rates were approximately 85% with DCP vs 90% without.
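Why pruning breaks the cache can be illustrated with a toy sketch (treating the prompt as a token list; names here are illustrative, not part of DCP):

```typescript
// Provider-side prompt caches reuse work only up to the first position
// where the new prompt diverges from a previously seen prompt.
function cachedPrefixLength(previous: string[], current: string[]): number {
  let i = 0;
  while (i < previous.length && i < current.length && previous[i] === current[i]) {
    i++;
  }
  return i;
}

const before = ["sys", "msg1", "msg2", "msg3", "msg4"];
// Pruning "msg2" shifts every later position, so only the two tokens
// before the pruned one can still be served from cache.
const after = ["sys", "msg1", "msg3", "msg4"];
console.log(cachedPrefixLength(before, after)); // 2
```

Everything after the earliest pruned message is a cache miss, which is why pruning old content costs some cache reads while still shrinking total context.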
**No impact for:**
- **Request-based billing** — Providers like GitHub Copilot that charge per request, not tokens.
- **Uniform token pricing** — Providers like Cerebras that bill cached and uncached tokens at the same rate.
## Limitations
**Subagents** — Disabled by default. Subagent sessions prioritize returning concise summaries to the main agent, and pruning could interfere with that. Opt in with `experimental.allowSubAgents: true`.
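Opting in might look like the following — a sketch only, assuming the `experimental` block sits alongside the plugin options shown earlier (the exact config location is an assumption, not confirmed by this README):

```jsonc
// opencode.jsonc — illustrative; only keys mentioned in this README
{
  "plugin": ["@tarquinen/opencode-dcp@latest"],
  "experimental": {
    // allow DCP to also manage subagent sessions (off by default)
    "allowSubAgents": true
  }
}
```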