314 changes: 262 additions & 52 deletions pnpm-lock.yaml

Large diffs are not rendered by default.

50 changes: 50 additions & 0 deletions templates/enrichment/.agents/skills/capture-learnings/SKILL.md
@@ -0,0 +1,50 @@
---
name: capture-learnings
description: >-
Capture and apply accumulated knowledge in learnings.md. Use when the user
corrects a mistake, when debugging reveals unexpected behavior, or when an
architectural decision should be recorded for future reference.
user-invocable: false
---

# Capture Learnings

This is background knowledge, not a slash command. Read `learnings.md` before starting significant work. Update it when you discover something worth remembering.

## When to Capture

Use judgment, not rules. Capture when:

- **Surprising behavior** — Something didn't work as expected and you figured out why
- **Repeated friction** — You hit the same issue twice; write it down so there's no third time
- **Architectural decisions** — Why something is done a certain way (the "why" isn't in the code)
- **API/library quirks** — Undocumented behavior, version-specific gotchas
- **Performance insights** — What's slow and what fixed it

Don't capture:

- Things that are obvious from reading the code
- Standard language/framework behavior
- Temporary debugging notes

## Format

Add entries to `learnings.md` at the project root. Match the existing format — typically a heading per topic with a brief explanation:

```markdown
## [Topic]

[What you learned and why it matters. Keep it to 2-3 sentences.]
```
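A filled-in entry might look like this (the topic and details are invented for illustration — match whatever is already in your project's `learnings.md`):

```markdown
## pnpm scripts and working directory

`pnpm script <name>` runs from the package root, not the invocation directory,
so relative data paths resolved to the wrong folder. Pass absolute paths.
```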

## Graduation

When a learning is referenced repeatedly, it's outgrowing `learnings.md`. Propose adding it to the relevant skill or creating a new skill via `create-skill`.

- Updating `learnings.md` is a Tier 1 modification (data — auto-apply)
- Updating a SKILL.md based on learnings is Tier 2 (source — verify after)

## Related Skills

- **self-modifying-code** — `learnings.md` updates are Tier 1; skill updates are Tier 2
- **create-skill** — When a learning graduates, create a skill from it
167 changes: 167 additions & 0 deletions templates/enrichment/.agents/skills/create-skill/SKILL.md
@@ -0,0 +1,167 @@
---
name: create-skill
description: >-
How to create new skills for an agent-native app. Use when adding a new
skill, documenting a pattern the agent should follow, or creating reusable
guidance for the agent.
---

# Create a Skill

## When to Use

Create a new skill when:

- There's a pattern the agent should follow repeatedly
- A workflow needs step-by-step guidance
- You want to scaffold files from a template

Don't create a skill when:

- The guidance already exists in another skill (extend it instead)
- You're documenting something the agent already knows (e.g., how to write TypeScript)
- The guidance is a one-off — put it in `AGENTS.md` or `learnings.md` instead

## 5-Question Interview

Before writing the skill, answer these:

1. **What should this skill enable?** — The core purpose in one sentence.
2. **Which agent-native rule does it serve?** — Rule 1 (files), Rule 2 (delegate), Rule 3 (scripts), Rule 4 (SSE), Rule 5 (self-modify), or "utility."
3. **When should it trigger?** — Describe the situations in natural language. Be slightly pushy — over-triggering is better than under-triggering.
4. **What type of skill?** — Pattern, Workflow, or Generator (see templates below).
5. **Does it need supporting files?** — References (read-only context) or none. Keep it minimal.
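For instance, the interview for a hypothetical `export-csv` skill might read:

```markdown
1. Enable: Export any data collection to a well-formed CSV file.
2. Rule: Rule 3 (scripts) — export runs as a script the agent invokes.
3. Trigger: When the user asks to download, export, or share data as a spreadsheet.
4. Type: Workflow.
5. Supporting files: None.
```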

## Skill Types and Templates

### Pattern (architectural rule)

For documenting how things should be done:

```markdown
---
name: my-pattern
description: >-
[Under 40 words. When should this trigger?]
---

# [Pattern Name]

## Rule

[One sentence: what must be true]

## Why

[Why this rule exists]

## How

[How to follow it, with code examples]

## Don't

[Common violations]

## Related Skills

[Which skills compose with this one]
```

### Workflow (step-by-step)

For multi-step implementation tasks:

```markdown
---
name: my-workflow
description: >-
[Under 40 words. When should this trigger?]
---

# [Workflow Name]

## Prerequisites

[What must be in place first]

## Steps

[Numbered steps with code examples]

## Verification

[How to confirm it worked]

## Troubleshooting

[Common issues and fixes]

## Related Skills
```

### Generator (scaffolding)

For creating files from templates:

```markdown
---
name: my-generator
description: >-
[Under 40 words. When should this trigger?]
---

# [Generator Name]

## Usage

[How to invoke — what args/inputs are needed]

## What Gets Created

[List of files and their purpose]

## Template

[The template content with placeholders]

## After Generation

[What to do next — wire up SSE, add routes, etc.]

## Related Skills
```

## Naming Conventions

- Hyphen-case only: `[a-z0-9-]`, max 64 characters
- Pattern skills: descriptive names (`storing-data`, `delegate-to-agent`)
- Workflow/generator skills: verb-noun (`create-script`, `capture-learnings`)
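The naming rule can be checked mechanically. A minimal sketch (the helper name is ours, not part of any API; it also rejects leading, trailing, and doubled hyphens, which the rule above doesn't spell out):

```typescript
// Valid skill names: hyphen-case, [a-z0-9-], at most 64 characters.
// Extra assumption: no leading/trailing or doubled hyphens.
const isValidSkillName = (name: string): boolean =>
  /^[a-z0-9]+(-[a-z0-9]+)*$/.test(name) && name.length <= 64;

console.log(isValidSkillName("storing-data")); // true
console.log(isValidSkillName("Create_Skill")); // false
```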

## Tips

- **Keep descriptions under 40 words** — They're loaded into context on every conversation.
- **Keep SKILL.md under 500 lines** — Move detailed content to `references/` files.
- **Use standard markdown headings** — No XML tags or custom formats.

## Anti-Patterns

- **Inline LLM calls** — Skills must not call LLMs directly (violates Rule 2)
- **Database patterns** — Skills must not introduce databases (violates Rule 1)
- **Ignoring SSE** — If a skill creates data files, mention wiring up `useFileWatcher`
- **Vague descriptions** — "Helps with development" won't trigger. Be specific about _when_.
- **Pure documentation** — Skills should guide action, not just explain concepts

## File Structure

```
.agents/skills/my-skill/
├── SKILL.md # Main skill (required)
└── references/ # Optional supporting context
└── detailed-guide.md
```

## Related Skills

- **capture-learnings** — When a learning graduates to reusable guidance, create a skill
- **self-modifying-code** — The agent can create new skills (Tier 2 modification)
90 changes: 90 additions & 0 deletions templates/enrichment/.agents/skills/delegate-to-agent/SKILL.md
@@ -0,0 +1,90 @@
---
name: delegate-to-agent
description: >-
How to delegate all AI work to the agent chat. Use when delegating AI work
from UI or scripts to the agent, when tempted to add inline LLM calls, or
when sending messages to the agent from application code.
---

# Delegate All AI to the Agent

## Rule

The UI and server never call an LLM directly. All AI work is delegated to the agent through the chat bridge.

## Why

The agent is the single AI interface. It has context about the full project, can read/write the database, and can run scripts. Inline LLM calls bypass this — they create a shadow AI that doesn't know what the agent knows and can't coordinate with it.

## How

**From the UI (client):**

```ts
import { sendToAgentChat } from "@agent-native/core";

sendToAgentChat({
message: "Generate a summary of this document",
context: documentContent, // optional hidden context (not shown in chat UI)
submit: true, // auto-submit to the agent
});
```

**From scripts (Node):**

```ts
import { agentChat } from "@agent-native/core";

agentChat.submit("Process the uploaded images and create thumbnails");
```

**From the UI, detecting when agent is done:**

```ts
import { useAgentChatGenerating } from "@agent-native/core";

function MyComponent() {
const isGenerating = useAgentChatGenerating();
// Show loading state while agent is working
}
```

## `submit` vs Prefill

The `submit` option controls whether the message is sent automatically or placed in the chat input for user review:

| `submit` value | Behavior | Use when |
| -------------- | --------------------------------------- | ----------------------------------------------------------------------------------- |
| `true` | Auto-submits to the agent immediately | Routine operations the user has already approved |
| `false` | Prefills the chat input for user review | High-stakes operations (deleting data, modifying code, API calls with side effects) |
| omitted | Uses the project's default setting | General-purpose delegation |

```ts
// Auto-submit: routine operation
sendToAgentChat({ message: "Update the project summary", submit: true });

// Prefill: let user review before sending
sendToAgentChat({
message: "Delete all projects older than 30 days",
submit: false,
});
```

## Don't

- Don't `import Anthropic from "@anthropic-ai/sdk"` in client or server code
- Don't `import OpenAI from "openai"` in client or server code
- Don't make direct API calls to any LLM provider
- Don't use AI SDK functions like `generateText()`, `streamText()`, etc.
- Don't build "AI features" that bypass the agent chat

## Exception

Scripts may call external APIs (image generation, search, etc.) — but the AI reasoning and orchestration still goes through the agent. A script is a tool the agent uses, not a replacement for the agent.

## Related Skills

- **scripts** — The agent invokes scripts via `pnpm script <name>` to perform complex operations
- **self-modifying-code** — The agent operates through the chat bridge to make code changes
- **storing-data** — The agent writes results to the database after processing requests
- **real-time-sync** — The UI updates automatically when the agent writes to the database
51 changes: 51 additions & 0 deletions templates/enrichment/.agents/skills/exa-enrichment/SKILL.md
@@ -0,0 +1,51 @@
---
name: exa-enrichment
description: >-
Exa Websets for CSV enrichment — search types, enrichment descriptions,
sizing, and troubleshooting. Use when creating websets, interpreting CSV
columns, tuning queries, or recovering failed enrichments.
---

# Exa enrichment (Websets)

## Exa Websets overview

**Websets** are **asynchronous** collections of **verified web data** built by Exa for a batch of entities (e.g. people or companies from a CSV).

- Each **item** is typically a **URL** (a candidate page) with optional **enrichments** (fields derived from that result).
- Items move through a pipeline of roughly **find → verify → enrich** (the exact stages are API-defined; think “discover candidates, validate, attach structured fields”).
- Jobs can take noticeable wall-clock time; scripts such as **`create-webset`** wait and merge; **`check-webset`** / **`get-results`** support partial progress and recovery.

## Search types

| Mode | When to use | Typical signals in CSV |
| ---- | ----------- | ---------------------- |
| **People** | Find professional identities | Person name + title + company, email local-part patterns, LinkedIn-ish columns |
| **Company** | Find org sites and firmographics | Domain, company legal name, industry |
| **Auto** | Mixed or ambiguous columns | Let the agent infer from column headers and sample rows |

**People** search targets LinkedIn-style / professional profiles. **Company** search targets corporate sites and structured company info.

## Enrichment types

- **Custom descriptions (natural language)** — Phrase what you want extracted, e.g. “Find the company’s annual revenue”, “What is this person’s current role?”. Prefer **specific, measurable** asks over vague ones (“tell me about them”).
- **Entity enrichments** — Higher-level bundles such as **company** info (workforce, headquarters, financials) or **person** info (job title, social links), depending on what the Exa integration in this app exposes.

When users ask for new columns (e.g. “LinkedIn URL”), map that to **clear enrichment descriptions** or the appropriate entity fields in code, not hand-wavy prompts.
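As a sketch, a request like “add a LinkedIn URL column” might map to descriptions such as these (the object shape and field names are illustrative assumptions, not the Exa Websets API schema):

```typescript
// Illustrative only: `column`/`description` are assumed names, not Exa's schema.
const enrichments = [
  { column: "linkedin_url", description: "The person's LinkedIn profile URL" },
  { column: "current_role", description: "Current job title and employer" },
];

// Vague phrasings to avoid: "tell me about them", "background info".
console.log(enrichments.map((e) => e.column)); // columns to add to the CSV
```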

## Best practices

1. **Specific search queries** — Narrow entities (name + company + domain) beat ultra-broad single-field searches.
2. **Include domain or URL columns** when available — Strongly improves match quality for companies and many people lookups.
3. **Limit webset size** for latency and cost — **Roughly 100–500 items** per batch is a good default; split huge CSVs when timeouts or poor quality appear.
4. **Enrichment text should be testable** — “Current employer as of 2024” beats “background info”.
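Practice 3 is easy to apply at submission time: split rows into batches before creating websets. A minimal sketch (the helper is ours, not part of any package):

```typescript
// Split CSV rows into batches no larger than `size` (default 500),
// keeping each webset within a workable item count.
function chunkRows<T>(rows: T[], size = 500): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// 1,200 rows → three batches with lengths 500, 500, 200
console.log(chunkRows(Array.from({ length: 1200 })).map((b) => b.length));
```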

## Common issues

| Symptom | Likely cause | What to try |
| ------- | ------------- | ----------- |
| Low match rate | Query too broad or under-specified | Add company/domain/title to search; tighten per-row query text |
| Missing / empty enrichments | Vague or non-extractable descriptions | Rephrase to a concrete fact; check if verify step dropped noisy URLs |
| Timeouts / stuck jobs | Webset too large or API slowness | Smaller batches; **`check-webset`** then **`get-results`**; retry merge only |

For app-specific wiring (script arguments, file paths), follow **`AGENTS.md`** and the **`scripts`** skill in this repo.