For conversational UI workflows where an LLM iteratively refines a spec ("change the button text", "add a row to the table"), resending the entire TOON spec each turn is wasteful.
A delta encoding mode could transmit only the changes:
```
set /children/0/props/label: "New text"
insert /children/2: {component: Badge, props: {text: New}}
remove /children/3
```
This is similar to RFC 6902 JSON Patch, expressed in TOON syntax.
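For comparison, the same three edits in standard RFC 6902 JSON Patch (the TOON delta syntax above is a proposed format, not something the spec defines):

```json
[
  { "op": "replace", "path": "/children/0/props/label", "value": "New text" },
  { "op": "add", "path": "/children/2", "value": { "component": "Badge", "props": { "text": "New" } } },
  { "op": "remove", "path": "/children/3" }
]
```

RFC 6902 uses `replace`/`add` where the proposal uses `set`/`insert`, but the path addressing (JSON Pointer, RFC 6901) is identical.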
Estimated savings on follow-up turns: 85-98% vs resending the full spec.
Requires:
- Stable node addressing (path-based or ID-based)
- Operations: set, insert, remove, move
- Shared state between encoder and decoder
Not useful for one-shot generation, but very high value for iterative refinement workflows.
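A minimal sketch of the decoder-side apply step, assuming path-based addressing over a JSON-like spec tree. All type and function names here (`DeltaOp`, `applyOp`, `resolve`) are hypothetical; `move` is omitted for brevity:

```typescript
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

// One delta operation, mirroring RFC 6902 semantics with the proposed op names.
type DeltaOp =
  | { op: "set"; path: string; value: Json }
  | { op: "insert"; path: string; value: Json }
  | { op: "remove"; path: string };

// Walk a /-separated path (e.g. "/children/0/props/label") down to the
// parent container of the addressed node, returning parent + final key.
function resolve(root: Json, path: string): { parent: any; key: string } {
  const parts = path.split("/").filter((p) => p.length > 0);
  let node: any = root;
  for (const part of parts.slice(0, -1)) {
    node = Array.isArray(node) ? node[Number(part)] : node[part];
  }
  return { parent: node, key: parts[parts.length - 1] };
}

// Apply one delta operation to the shared spec in place.
function applyOp(root: Json, d: DeltaOp): void {
  const { parent, key } = resolve(root, d.path);
  const idx = Number(key);
  if (d.op === "set") {
    if (Array.isArray(parent)) parent[idx] = d.value;
    else parent[key] = d.value;
  } else if (d.op === "insert") {
    (parent as Json[]).splice(idx, 0, d.value); // insert shifts later siblings right
  } else {
    if (Array.isArray(parent)) parent.splice(idx, 1);
    else delete parent[key];
  }
}
```

The shared-state requirement falls out of this directly: `applyOp` only works if both sides hold the same `root` from the previous turn, so any desync (e.g. a dropped message) forces a full-spec resend.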
Context: https://github.com/abhishekgahlot2/toon-json-render