Commit 68152e3

Merge pull request #420 from Opencode-DCP/dev

Dev

2 parents 1066e15 + 3e8377e

File tree: 1 file changed (+7 −11 lines)


README.md

Lines changed: 7 additions & 11 deletions
````diff
@@ -14,11 +14,11 @@ Add to your OpenCode config:
 ```jsonc
 // opencode.jsonc
 {
-  "plugin": ["@tarquinen/opencode-dcp@beta"],
+  "plugin": ["@tarquinen/opencode-dcp@latest"],
 }
 ```
 
-Using `@beta` ensures you always get the newest version automatically when OpenCode starts.
+Using `@latest` ensures you always get the newest version automatically when OpenCode starts.
 
 Restart OpenCode. The plugin will automatically start optimizing your sessions.
 
````
````diff
@@ -28,9 +28,9 @@ DCP reduces context size through a compress tool and automatic cleanup. Your ses
 
 ### Compress
 
-A tool exposed to your model that selects a conversation range and replaces it with a technical summary. When a new compression overlaps an earlier one, the earlier summary is nested inside the new one — so information is preserved through layers of compression rather than diluted away.
+Compress is a tool exposed to your model that selects a conversation range and replaces it with a technical summary. Think of it as a much smarter version of OpenCode's native compaction. Instead of triggering statically when the session reaches its maximum context and operating on the entire coding session, Compress lets the model pick when to activate based on task completion, and compress only the subset of messages containing the completed task. The summaries that replace session content are therefore much more focused and precise than native compaction.
 
-The model compresses at whatever scale fits: small ranges for noise cleanup, focused ranges for key findings, or broad ranges for completed work. Context thresholds (`minContextLimit`, `maxContextLimit`) and nudge settings control how aggressively the model is prompted to compress.
+When a new compression overlaps an earlier one, the earlier summary is nested inside the new one — so information is preserved through layers of compression rather than diluted away. Additionally, protected tool outputs (such as subagents and skills) and protected file patterns are always kept in compression summaries, ensuring that the most important information is never lost. You can also enable `protectUserMessages` to preserve your messages verbatim during compression, though note that large prompts (e.g. copy-pasting log files into the prompt) will then never be compressed away.
 
 ### Deduplication
 
````
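The thresholds and protections named in this hunk (`minContextLimit`, `maxContextLimit`, `protectUserMessages`) live in DCP's plugin configuration. A hypothetical sketch: the option names come from this README, but the nesting and values below are illustrative assumptions, not the plugin's documented schema.

```jsonc
// opencode.jsonc — illustrative sketch only; key placement and values
// are assumed, consult the DCP README for the real schema.
{
  "plugin": ["@tarquinen/opencode-dcp@latest"],
  "dcp": {
    "minContextLimit": 20000,    // assumed: no compression nudges below this context size
    "maxContextLimit": 150000,   // assumed: nudge aggressively as context approaches this
    "protectUserMessages": true  // keep user messages verbatim during compression
  }
}
```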

````diff
@@ -174,7 +174,6 @@ DCP provides a `/dcp` slash command:
 - `/dcp stats` — Shows cumulative pruning statistics across all sessions.
 - `/dcp sweep` — Prunes all tools since the last user message. Accepts an optional count: `/dcp sweep 10` prunes the last 10 tools. Respects `commands.protectedTools`.
 - `/dcp manual [on|off]` — Toggle manual mode or set explicit state. When on, the AI will not autonomously use context management tools.
-
 - `/dcp compress [focus]` — Trigger a single compress tool execution. Optional focus text directs what range to compress.
 - `/dcp decompress <n>` — Restore a specific active compression by ID (for example `/dcp decompress 2`). Running without an argument shows available compression IDs, token sizes, and topics.
 - `/dcp recompress <n>` — Re-apply a user-decompressed compression by ID (for example `/dcp recompress 2`). Running without an argument shows recompressible IDs, token sizes, and topics.
````
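The `commands.protectedTools` setting that `/dcp sweep` respects could be configured roughly as follows. This is a sketch: the key name appears in this README, but the surrounding structure and the example tool names are assumptions.

```jsonc
// Illustrative only: nesting and tool names are hypothetical.
{
  "dcp": {
    "commands": {
      "protectedTools": ["task", "skill"] // tools /dcp sweep will never prune (example names)
    }
  }
}
```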
````diff
@@ -223,17 +222,14 @@ LLM providers cache prompts based on exact prefix matching. When DCP prunes cont
 
 **Trade-off:** You lose some cache reads but gain token savings from reduced context size and fewer hallucinations from stale context. In most cases, especially in long sessions, the savings outweigh the cache miss cost.
 
-> **Note:** In testing, cache hit rates were approximately 85% with DCP vs 90% without.
+> [!NOTE]
+> In testing, cache hit rates were approximately 85% with DCP vs 90% without.
 
 **No impact for:**
 
-- **Request-based billing** — Providers like Github Copilot that charge per request, not tokens.
+- **Request-based billing** — Providers like GitHub Copilot that charge per request, not tokens.
 - **Uniform token pricing** — Providers like Cerebras that bill cached and uncached tokens at the same rate.
 
-## Limitations
-
-**Subagents** — Disabled by default. Subagent sessions prioritize returning concise summaries to the main agent, and pruning could interfere with that. Opt in with `experimental.allowSubAgents: true`.
-
 ## License
 
 AGPL-3.0-or-later
````
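The cache trade-off discussed in the last hunk follows from exact prefix matching: appending to a conversation preserves the cached prefix, while pruning mid-conversation invalidates everything from the edit point onward. A minimal sketch, with a hypothetical message serialization that is not DCP's actual code:

```typescript
// Providers cache prompts keyed on an exact prefix match over the
// serialized messages; the cache hit covers the longest shared prefix.
function cachedPrefixLength(previous: string[], current: string[]): number {
  let n = 0;
  while (n < previous.length && n < current.length && previous[n] === current[n]) {
    n++;
  }
  return n;
}

const firstRequest = ["sys", "user: fix bug", "tool: read(a.ts)", "tool: read(b.ts)", "user: thanks"];

// Appending keeps the entire earlier prompt as a cache hit.
const appended = [...firstRequest, "user: next task"];

// Pruning a stale tool output mid-conversation changes the prefix,
// so the cache only matches up to the edit point.
const pruned = firstRequest.filter((m) => m !== "tool: read(a.ts)");

console.log(cachedPrefixLength(firstRequest, appended)); // 5
console.log(cachedPrefixLength(firstRequest, pruned));   // 2
```

This is why the README frames pruning as trading some cache reads for a smaller context: every request after a prune re-pays for the tokens past the edit point once.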
