### 🚀 [Try It Now → Open the Chat →](https://codex-agent-runner.pages.dev)

*Works in your browser. One click. That's it.*

---
+
## ✨ What is this?

**Codex Agent Runner** is a beautiful chat interface that lets you talk to AI completely privately, right on your own computer. No API keys, no subscriptions, no data sent to the cloud.

Just you and your AI, having a conversation.

---
+
## 🎯 Get Started in 3 Steps

### Step 1: Install Ollama (free, takes 1 minute)

👉 **[Download Ollama at ollama.ai](https://ollama.ai)**

Ollama is a free tool that runs AI models on your computer. It works on Mac, Windows, and Linux.
+
### Step 2: Pull a model

After installing Ollama, open your Terminal (Mac/Linux) or Command Prompt (Windows) and run:

```
ollama pull llama3
```

This downloads the model to your machine. It's the only command you need before chatting.
+
### Step 3: Open the chat

👉 **[Open Codex Agent Runner](https://codex-agent-runner.pages.dev)**

The page will automatically detect your local Ollama, and you're ready to chat!
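Under the hood, detection can be as simple as pinging Ollama's local API. A minimal sketch, assuming the standard Ollama endpoint on port 11434 (the `checkOllama` helper is illustrative, not the app's actual code):

```javascript
// Hypothetical sketch: probe the default Ollama port to see if it's running.
// Ollama exposes GET /api/tags, which lists locally installed models.
async function checkOllama(baseUrl = 'http://localhost:11434') {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    return res.ok; // true when Ollama answered
  } catch {
    return false; // connection refused: Ollama isn't running
  }
}
```

If this returns `false`, the page can show an "offline" banner instead of the chat box.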
+
+---
+
## 💬 How to Chat

Just type naturally! You can also address specific AI personas:

| Type this… | What happens |
|---|---|
| `Hello, how are you?` | Chat directly with the AI |
| `@ollama explain black holes` | Talk to Ollama |
| `@copilot write me a Python function` | Talk to the Copilot persona |
| `@lucidia tell me a story` | Talk to the Lucidia persona |
| `@blackboxprogramming` | Talk to the BlackRoad AI |

All of these talk to your **local** Ollama; nothing is sent to any external server.
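Persona routing just strips a recognized `@name` prefix from your message before it reaches the model. A minimal sketch of that idea (a hypothetical reimplementation; the real logic is `parseHandle` in ollama.js):

```javascript
// Hypothetical sketch of @handle parsing: split off a known persona
// prefix and return the remaining prompt text.
const KNOWN_HANDLES = ['ollama', 'copilot', 'lucidia', 'blackboxprogramming'];

function splitHandle(text) {
  const match = text.match(/^@(\w+)\s*([\s\S]*)$/);
  if (match && KNOWN_HANDLES.includes(match[1].toLowerCase())) {
    return { handle: match[1].toLowerCase(), prompt: match[2] };
  }
  return { handle: null, prompt: text }; // plain message, no persona
}
```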
+
+---
+
## 🔒 Your Privacy, Guaranteed

- ✅ **Fully local** – your conversations never leave your machine
- ✅ **No account needed** – zero sign-up, zero tracking
- ✅ **Free forever** – no subscriptions or API costs
- ✅ **Open source** – see exactly what runs on your machine

---
+
## 🛠 For Developers

Want to use the API in your own project?

```js
import { ollamaChat, parseHandle } from './ollama.js';

// Parse the @handle prefix from user input
const { handle, prompt } = parseHandle('@lucidia explain quantum entanglement');

// Stream the response from local Ollama
await ollamaChat({
  model: 'llama3',
  messages: [{ role: 'user', content: prompt }],
  onChunk: (text) => process.stdout.write(text),
});
```
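On the wire, Ollama streams chat responses as newline-delimited JSON, one object per chunk. A minimal sketch of the parsing side (an illustrative helper, assuming the standard `/api/chat` streaming format with a `message.content` field per line):

```javascript
// Hypothetical sketch: extract text from an NDJSON stream chunk.
// Each line is a JSON object like {"message":{"content":"Hi"},"done":false}.
function extractText(ndjsonChunk) {
  let text = '';
  for (const line of ndjsonChunk.split('\n')) {
    if (!line.trim()) continue; // skip blank trailing lines
    const obj = JSON.parse(line);
    text += obj.message?.content ?? '';
  }
  return text;
}
```

A client like `ollamaChat` can feed each decoded chunk through a function like this and hand the result to `onChunk`.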
See [ollama.js](./ollama.js) for the full API.

---
## 🙋 Need Help?

- **Ollama shows "offline"?** Make sure Ollama is running: open the Ollama app or run `ollama serve` in your terminal.
- **No models available?** Run `ollama pull llama3` in your terminal.
- **Still stuck?** [Open an issue](https://github.com/blackboxprogramming/codex-agent-runner/issues) and we'll help!
---