Skip to content
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
11 changes: 8 additions & 3 deletions .github/workflows/deploy.yml
Original file line number Diff line number Diff line change
Expand Up @@ -36,6 +36,7 @@ jobs:

- name: Deploy to Cloudflare Pages
id: cloudflare_deploy
continue-on-error: true
uses: cloudflare/pages-action@v1
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
Expand All @@ -45,8 +46,8 @@ jobs:
gitHubToken: ${{ secrets.GITHUB_TOKEN }}

- name: Add deployment comment
if: github.event_name == 'pull_request'
uses: actions/github-script@v8
if: github.event_name == 'pull_request' && steps.cloudflare_deploy.outcome == 'success'
uses: actions/github-script@v7
with:
script: |
github.rest.issues.createComment({
Expand All @@ -57,7 +58,11 @@ jobs:
})

- name: Notify success
if: success()
if: steps.cloudflare_deploy.outcome == 'success'
run: |
echo "✅ Deployment successful!"
echo "🌐 URL: ${{ steps.cloudflare_deploy.outputs.url }}"

- name: Skip deploy notice
if: steps.cloudflare_deploy.outcome != 'success'
run: echo "ℹ️ Cloudflare deployment skipped or failed — check that CLOUDFLARE_API_TOKEN and CLOUDFLARE_ACCOUNT_ID secrets are configured."
2 changes: 1 addition & 1 deletion .github/workflows/security-scan.yml
Original file line number Diff line number Diff line change
Expand Up @@ -21,7 +21,7 @@ jobs:
strategy:
fail-fast: false
matrix:
language: ['javascript', 'typescript', 'python']
language: ['javascript']

steps:
- name: Checkout
Expand Down
118 changes: 84 additions & 34 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,16 +1,85 @@
# Codex Agent Runner
# Codex Agent Runner

Ollama chat client with handle-based routing. Routes `@ollama`, `@copilot`, `@lucidia`, and `@blackboxprogramming` mentions to a local Ollama instance with streaming responses.
> **Chat with AI. No accounts. No cloud. Your data stays on your device.**

## Usage
<div align="center">

### 🚀 [Try It Now — Open the Chat →](https://codex-agent-runner.pages.dev)

*Works in your browser. One click. That's it.*

</div>

---

## ✨ What is this?

**Codex Agent Runner** is a beautiful chat interface that lets you talk to AI — completely privately, right on your own computer. No API keys, no subscriptions, no data sent to the cloud.

Just you and your AI, having a conversation.

---

## 🎯 Get Started in 3 Steps

### Step 1 — Install Ollama (free, takes 1 minute)

👉 **[Download Ollama at ollama.ai](https://ollama.ai)**

Ollama is a free tool that runs AI models on your computer. It works on Mac, Windows, and Linux.

### Step 2 — Pull a model

After installing Ollama, open your Terminal (Mac/Linux) or Command Prompt (Windows) and run:

```
ollama pull llama3
```

That's the only command you'll ever need.

### Step 3 — Open the chat

👉 **[Open Codex Agent Runner](https://codex-agent-runner.pages.dev)**

The page will automatically detect your local Ollama and you're ready to chat!

---

## 💬 How to Chat

Just type naturally! You can also address specific AI personas:

| Type this… | What happens |
|---|---|
| `Hello, how are you?` | Chat directly with the AI |
| `@ollama explain black holes` | Talk to Ollama |
| `@copilot write me a Python function` | Talk to Copilot persona |
| `@lucidia tell me a story` | Talk to Lucidia persona |
| `@blackboxprogramming` | Talk to the BlackRoad AI |

All of these talk to your **local** Ollama — nothing is sent to any external server.

---

## 🔒 Your Privacy, Guaranteed

- ✅ **Fully offline** — your conversations never leave your machine
- ✅ **No account needed** — zero sign-up, zero tracking
- ✅ **Free forever** — no subscriptions or API costs
- ✅ **Open source** — see exactly what runs on your machine

---

## 🛠 For Developers

Want to use the API in your own project?

```js
import { ollamaChat, parseHandle } from './ollama.js';

// Parse @handle from user input
const { handle, prompt } = parseHandle('@lucidia explain quantum entanglement');

// Stream response from local Ollama
await ollamaChat({
model: 'llama3',
messages: [{ role: 'user', content: prompt }],
Expand All @@ -20,37 +89,18 @@ await ollamaChat({
});
```

## API

### `parseHandle(text)`

Strips a recognized `@handle` prefix and returns `{ handle, prompt }`.

### `ollamaChat(options)`
See [ollama.js](./ollama.js) for the full API.

Streams a chat completion from the local Ollama API.
---

| Option | Default | Description |
|--------|---------|-------------|
| `baseUrl` | `http://localhost:11434` | Ollama server URL |
| `model` | `llama3` | Model name |
| `messages` | — | OpenAI-style message array |
| `onChunk` | — | Called with each text chunk |
| `onDone` | — | Called when stream completes |
| `onError` | — | Called on failure |
## 🆘 Need Help?

## Requirements

- [Ollama](https://ollama.ai) running locally

## Project Structure

```
ollama.js # Chat client with handle parsing and streaming
ollama.test.js # Tests
index.html # Web interface
```
- **Ollama shows "offline"?** Make sure Ollama is running — open the Ollama app or run `ollama serve` in your terminal
- **No models available?** Run `ollama pull llama3` in your terminal
- **Still stuck?** [Open an issue](https://github.com/blackboxprogramming/codex-agent-runner/issues) and we'll help!

## License
---

Copyright 2026 BlackRoad OS, Inc. All rights reserved.
<div align="center">
Made with ❤️ by <a href="https://github.com/blackboxprogramming">BlackRoad OS</a>
</div>
Loading