aeo-platform


aeo-platform is the open-source CLI for answer-engine optimization (AEO / GEO). It measures your brand across ChatGPT, Claude, Gemini, and Perplexity, audits AI-bot crawlability + authority signals, and exports a JSON brand-context you paste into any AI for a personalised 30-mission AEO plan. MIT-licensed. Runs locally. Zero runtime dependencies. Free alternative to Otterly, Profound, Peec, and Bluefish.

npm install -g aeo-platform

export OPENAI_API_KEY="sk-proj-..."     # required
export GEMINI_API_KEY="AIzaSy..."        # required

aeo-platform init --yes --brand=YOURBRAND --domain=YOURDOMAIN.COM --auto \
  && aeo-platform run \
  && aeo-platform report

The HTML report opens in your browser. Weekly cadence after that: aeo-platform run && aeo-platform report.
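To put that cadence on autopilot, a crontab entry like the following works (schedule and path are illustrative; note that cron doesn't read your shell profile, so export the API keys inside the crontab or a wrapper script):

# Monday 09:00 — weekly run for one brand directory
0 9 * * 1  cd "$HOME/brands/yourbrand" && aeo-platform run && aeo-platform report --no-open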

Renamed from @webappski/aeo-tracker in 1.0.0 (2026-05-13). The aeo-tracker CLI command stays as a built-in alias — existing scripts keep working. Migration: npm i -g aeo-platform.


Why aeo-platform

Five concrete reasons aeo-platform exists, in order of how often they decide the install:

  • Measures 4 engines via official APIs — ChatGPT (gpt-5-search-api), Claude (claude-sonnet-4-6), Gemini (gemini-2.5-pro), Perplexity (sonar-pro). No scraping. No proprietary score.
  • Local-first. Raw responses stay on your disk in aeo-responses/YYYY-MM-DD/. No telemetry. No traffic to webappski.com. API keys read from process.env, never written to disk.
  • CI-grade. Exit codes 0/1/2/3 (stable / regressed / invisible / providers errored). --json stdout. Cron-friendly.
  • Zero runtime dependencies. "dependencies": {} in package.json. Vanilla Node 20+, single-file HTML report under 200 KB.
  • MIT. Fork it, embed it, ship it inside a paid product — your choice.

Optional engines + first-time terminal users

The 2-key minimum above (OpenAI + Gemini) covers ChatGPT and Gemini columns. Two more keys are optional and each adds an engine column to the report:

export ANTHROPIC_API_KEY="sk-ant-..."   # adds Claude column
export PERPLEXITY_API_KEY="pplx-..."     # adds Perplexity column

Get keys at: platform.openai.com/api-keys, aistudio.google.com/apikey, console.anthropic.com/settings/keys, docs.perplexity.ai.

Never used a CLI before? A founder-friendly walk-through (5 minutes, no terminal background required) is in the Full quickstart collapsible below.

What you get

Every aeo-platform report writes two files in aeo-reports/<date>/:

  • report.md — markdown with inline SVG charts. Renders on GitHub, Notion, VSCode preview, email. Perfect for CI logs and PR comments.
  • report.html — single-file editorial bento layout, ~170 KB, embedded variable fonts, works offline from file://, zero CDN, zero JS dependencies, zero tracking pixels.

The HTML report has:

| Section | Surfaces |
| --- | --- |
| Hero | UVI (Unified Visibility Index) · mention rate · lift opportunities · top competitor · ⓘ popover with per-axis math |
| 01 Overview | 8-week score trend · listicle-pitch KPI · topic-cluster bars · top-3 actionable gaps preview |
| 02 Visibility | Per-engine cards · query × engine matrix (Mention / Position / Sentiment lenses) · region breakdown when --geo is used · verbatim quotes |
| 03 Competitors | Most-named brands · 4-axis radar (presence / sentiment / rank / mentions) vs top-3 competitors |
| 04 Citations | Domain share-of-voice (own-domain marker) · category breakdown · top-cited publishers |
| 05 Actions | 5 ordered moves (badges: FIX GAP / LOCK IN WIN / COMPETE / DEFEND) with specific competitors to displace and URLs to pitch |
| 06 Diagnostics | AI-Bot Crawl Readiness · authority presence (Wikipedia / Reddit / GitHub) · per-engine session cost · region indicator · UTM citations · AI-ad detector |
| Bridge card | Copy-prompt button → 30-mission paste-into-AI plan |

Each surface is grounded in actual run data: specific competitors named by this run, specific URLs cited by AI, specific gaps you can act on this week.

The 30-mission AEO plan (the wedge no commercial vendor ships)

After measuring you across 4 engines, aeo-platform report exports a JSON brand-context block with everything the AI needs to write a grounded plan: visibility index, per-engine citation deltas, top competitors, citation gaps, crawl matrix, authority signals, page signals, entity graph, region, freshness, competitor pricing tier.

Paste that JSON into your own ChatGPT / Claude / Gemini / Perplexity (any frontier model — same chat subscription you already pay for, no extra API spend). Ask: "give me a 30-mission plan to be cited more". The answer is keyed to your specific gaps — named competitors from topCompetitors, URLs from topCanonicalSources, weakest-engine fortification, citation-gap closure.
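For orientation, a trimmed sketch of the block's shape. topCompetitors and topCanonicalSources are field names the report actually uses; the remaining keys and all values here are illustrative stand-ins for the data listed above:

{
  "brand": "YOURBRAND",
  "visibilityIndex": 42,
  "topCompetitors": ["CompetitorA", "CompetitorB"],
  "topCanonicalSources": ["https://example.com/best-yourcategory-2026"],
  "citationGaps": ["reddit.com", "g2.com"],
  "weakestEngine": "perplexity"
}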

Workflow:

  1. aeo-platform report — opens HTML report in browser.
  2. Find the section Your AEO action prompt. Click the one-tap Copy button.
  3. Paste into your ChatGPT, Claude, Gemini, or Perplexity chat.
  4. Receive a 30-mission plan: 30 actions × ≈1–3 hours each, grouped into 4 weekly chunks, every action references a specific competitor / URL / engine / gap from the data.

Why no hosted AEO dashboard ships this: a paste-into-AI plan cannibalises the dashboard moat. Once the user takes the JSON to their own AI chat, the vendor's UI is no longer the destination. Open-source has the opposite incentive — show zero when it's zero, hand you the data, win when you take it wherever you want.

A sample plan from a real bare-site brand: examples/sample-plan-output.md.

Multi-engine coverage

| Engine | Default model | API path | Web-search grounding | Required key |
| --- | --- | --- | --- | --- |
| ChatGPT (OpenAI) | gpt-5-search-api | direct REST | yes (search-API) | OPENAI_API_KEY |
| Gemini (Google) | gemini-2.5-pro | direct REST | optional (request flag) | GEMINI_API_KEY |
| Claude (Anthropic) | claude-sonnet-4-6 | direct REST | optional (request flag) | ANTHROPIC_API_KEY |
| Perplexity | sonar-pro | direct REST | always | PERPLEXITY_API_KEY |

OpenAI + Gemini keys are required (two-model competitor extractor: GPT-5-mini + Gemini-2.5-flash cross-check filters hallucinated brand mentions). Anthropic + Perplexity are optional — each adds a column.

For engines whose API tier you can't access (Perplexity Pro browser, ChatGPT Pro UI personalisation, Claude.ai UI), use manual paste mode:

mkdir perplexity-responses
# paste UI answers into perplexity-responses/q1.txt, q2.txt, q3.txt
aeo-platform run-manual perplexity --from-dir ./perplexity-responses

Results merge into today's _summary.json alongside API runs. diff and report treat both identically.

AI-bot crawlability audit (zero LLM cost)

aeo-platform report runs a pure-HTTP audit of your own domain against the AI-crawler matrix. No LLM calls. Roughly 3 HTTPS GETs.

| Bot | Owner | Purpose |
| --- | --- | --- |
| GPTBot | OpenAI | Training crawl |
| OAI-SearchBot | OpenAI | ChatGPT Search indexer |
| ChatGPT-User | OpenAI | On-demand fetch when a user pastes a URL |
| Google-Extended | Google | Gemini training opt-out |
| GoogleOther | Google | AI Overviews indexer |
| ClaudeBot | Anthropic | Training crawl |
| Claude-Web | Anthropic | On-demand fetch (Claude.ai chat) |
| anthropic-ai | Anthropic | On-demand fetch (legacy UA string) |
| PerplexityBot | Perplexity | Indexer |
| Perplexity-User | Perplexity | On-demand fetch |
| CCBot | Common Crawl | Used by OpenAI, Anthropic, others as training data |
| Bytespider | ByteDance | Doubao / China-market AI |

Each bot is mapped to allowed | blocked | partial | unspecified from your /robots.txt. sitemap.xml + llms.txt presence are also checked. The composite AI-Bot Crawl Readiness score (0-100) weighs robots 30% · bots-not-blocked 25% · sitemap 25% · llms.txt 20%.
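Classification is derived purely from robots.txt stanzas. A hypothetical file and how the audit would read it:

User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Disallow: /private/

GPTBot → blocked · OAI-SearchBot → allowed · ClaudeBot → partial · any bot with no stanza and no wildcard rule (e.g. PerplexityBot here) → unspecified. As a rough composite example under the stated weights, a site with a clean robots.txt, no blocked bots, and a sitemap.xml but no llms.txt would land near 30 + 25 + 25 + 0 = 80 (assuming each component scores fully or not at all, which is a simplification).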

Note: this measures technical access — not actual answer-pool inclusion. Answer-pool inclusion is driven by off-page authority (Wikipedia, Reddit, listicles, review platforms) — covered in the next section.

Authority signals

aeo-platform report checks the off-page surfaces AI engines weight heavily when deciding who to cite. Free public APIs only — no auth.

| Source | What's checked | Method |
| --- | --- | --- |
| Wikipedia | Article exists for your brand? Disambiguation page? Length? | Wikipedia REST API |
| Reddit | Mention count in posts + comments referencing your brand | Reddit search JSON |
| GitHub | Repo exists under your namespace? Stars / forks? (auto-surfaced for dev-tool brands; disambiguation guard prevents wrong-repo matches for popular names) | GitHub REST API; optional GITHUB_TOKEN env lifts 60/h → 5000/h |
| Wikidata | Q-ID present? sameAs chain reciprocal? | Wikidata SPARQL |

Why this matters: in Webappski's 2026 weekly audits, brands with a Wikidata entity, named-author sameAs chains, and presence on Reddit/G2/Wikipedia consistently outperform on AI Overview citation rates compared to brands relying on domain authority alone. Entity signals and citation-source presence are the highest-ROI surfaces to fix.

UVI methodology — Unified Visibility Index

aeo-platform rolls four AI-answer signals into a single 0-100 composite. Every weight is in the source (lib/report/visibility-index.js); the ⓘ popover next to the hero number shows the per-axis math on every run.

| Sub-component | Weight | What it measures |
| --- | --- | --- |
| Presence | 35% | Cells where your brand was mentioned (yes/src) out of total cells |
| Sentiment | 25% | High-confidence positive cells out of cells with a mention |
| Rank | 20% | Average rank position when mentioned, normalised 0-100 |
| Citation | 20% | Cells where your domain was cited as a source |

Sub-components with insufficient data (e.g. zero rank positions in a first run) are excluded; remaining weights re-normalise and the popover flags the re-norm. No phantom values. Sample size is published alongside the score (n=K high-confidence cells).
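A worked example of the re-norm (numbers invented for illustration): a first run yields Presence 60, Sentiment 70, Citation 20, and no usable rank data, so Rank's 20% is dropped.

remaining weight = 35 + 25 + 20 = 80
re-normalised    = 0.4375 (presence) · 0.3125 (sentiment) · 0.25 (citation)
UVI              ≈ 60 × 0.4375 + 70 × 0.3125 + 20 × 0.25 ≈ 53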

Comparison vs hosted AEO platforms

| Tool | Pricing model | Open source | Raw data stays local | Paste-into-AI 30-mission plan |
| --- | --- | --- | --- | --- |
| aeo-platform | Free + your own API spend | MIT | Yes | Yes — no tracked vendor ships this as of May 2026 |
| Otterly | Paid subscription | No | No | No |
| Profound | Paid subscription | No | No | No |
| Peec.ai | Paid subscription | No | No | No |
| Bluefish | Enterprise contract | No | No | No |
| AthenaHQ | Paid subscription | No | No | No |
| Goodie | Paid subscription | No | No | No |
| HubSpot AEO Grader | Free one-shot scorecard | No | No | No |
| Evertune | Custom contract | No | No | No |
| Ahrefs Brand Radar | Paid SEO-suite add-on | No | No | No |
| Semrush AI Toolkit | Paid SEO-suite add-on | No | No | No |
| Discovered Labs | Managed-service retainer | No | No | No |

Pick something else when: you need team SSO, Slack/email alerts, multi-brand management UI, or SOC-2 — Profound or Peec.ai are the better fit. For broader engine coverage out-of-the-box — Otterly. For enterprise agentic-marketing infrastructure — Bluefish. For a free one-time scorecard inside an existing HubSpot workflow — HubSpot AEO Grader.

Pick aeo-platform when: you're an indie founder, a small AEO / GEO agency, or a dev-centric team that prefers CLI + CI integration; when you want the paste-into-AI plan; or when you can't justify a subscription for a tool whose direct-API cost is a few cents per week.

Commands

| Command | Purpose |
| --- | --- |
| aeo-platform init | Set up .aeo-tracker.json — auto-discovers category, generates 3 commercial queries, validates them |
| aeo-platform init --queries-only | Re-suggest queries without touching brand / domain / providers |
| aeo-platform run | Query each AI engine with each query. Save raw responses to aeo-responses/YYYY-MM-DD/ |
| aeo-platform run --replay [--replay-from=YYYY-MM-DD] | Rebuild today's summary from cached responses (zero API cost) |
| aeo-platform run-manual <engine> --from-dir ./folder | Import pasted UI answers for engines without an accessible API |
| aeo-platform report | Generate report.md + report.html. HTML auto-opens in your browser |
| aeo-platform diff | Compare last two runs — what changed, what's new, what regressed |
| aeo-platform export --format=csv | Flatten every snapshot into a CSV (or JSON) for Looker / Sheets / your warehouse |
| aeo-platform crawl-stats --log-file=path | Parse Apache/nginx access logs to see AI-bot crawl frequency on your own site |

aeo-platform --help lists every flag. aeo-platform <cmd> --help for per-command help.
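Independent of crawl-stats, you can sanity-check AI-bot traffic in an access log by hand (log path illustrative):

grep -ciE 'GPTBot|OAI-SearchBot|ClaudeBot|PerplexityBot' /var/log/nginx/access.log

-c counts matching lines, -i ignores case, -E enables the alternation.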

Flags reference

Every flag aeo-platform accepts, grouped by which command consumes it.

| Flag | Commands | Purpose |
| --- | --- | --- |
| --yes / -y | init | Non-interactive (CI / dotfiles). Requires --brand, --domain, and --auto or --manual |
| --auto | init --yes | Full research pipeline: brainstorm → filter → score → cross-model validate → select |
| --manual | init --yes | Skip LLM analysis; use pre-existing queries |
| --light | init --yes --auto | Bypass research pipeline; single-shot suggest |
| --keywords "q1,q2,q3" | init --yes | Bring-your-own queries — zero LLM cost |
| --queries-only | init | Re-suggest queries without changing brand / domain / providers |
| --strict-validation | init, run | Cross-check query validation with 2 LLM providers (~2× validation cost) |
| --force | run | Bypass validation gate |
| --json | run | Structured JSON to stdout, ANSI suppressed (CI-friendly) |
| --geo=us,uk,de,... | run | Run queries under multiple regional contexts. 12 codes: us, uk, de, fr, es, it, ca, au, in, br, jp, nl. Multiplies cost by region count |
| --depth=<web\|full\|auto> | run | web (default) — single web pass. full — adds training-data pass (~2× cost). auto — prompts if last training baseline > 14 days |
| --replay | run | Rebuild summary from cached raw responses (zero API cost) |
| --replay-from=YYYY-MM-DD | run | Replay a specific date instead of the most recent capture |
| --from-dir <path> | run-manual | Directory containing q1.txt, q2.txt, q3.txt with pasted UI answers |
| --last <N> / --since <date> | diff | Compare last N runs / compare a date to latest run |
| --format=<csv\|json> | export | Output format (CSV default) |
| --refresh-cache <fields> | report | Force-refresh cached fields before report. CSV list or all |
| --no-html | report | Markdown only — skip HTML write + browser auto-open |
| --no-open | report | Write files but don't auto-open the browser |
| --no-authority / --no-page-signals / --no-entity-graph / --no-pricing | report | Skip optional fetch-heavy checks (use behind a VPN, offline, or to dodge rate limits) |
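Flags compose within a command; for example, a CI run across three regions with the training-data pass (both cost multipliers from the table apply):

aeo-platform run --geo=us,uk,de --depth=full --json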

Exit codes (CI-friendly)

aeo-platform run returns one of four exit codes after every audit — wire them into your alerting tier.

| Code | Meaning | Typical CI response |
| --- | --- | --- |
| 0 | Score stable or improved vs previous run | Success — nothing to alert |
| 1 | Score dropped more than regressionThreshold (default 10 pp) | High-priority alert |
| 2 | All checks returned zero mentions | Medium alert — brand invisible (normal on day 1) |
| 3 | All providers errored | Infrastructure alert (keys / billing / network) |

Tune the threshold in .aeo-tracker.json:

{ "regressionThreshold": 5 }

CI integration

Bash + cron:

#!/bin/bash
aeo-platform run --json > /var/log/aeo-$(date +%F).json
case $? in
  0) : ;;                      # stable
  1) slack-alert "AEO regression detected" ;;
  2) : ;;                      # invisible — expected for new brands
  3) slack-alert "aeo-platform: API errors" ;;
esac

GitHub Actions:

name: Weekly AEO Audit
on:
  schedule: [{ cron: '0 9 * * 1' }]   # Monday 9:00 UTC

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm install -g aeo-platform
      - run: aeo-platform run --json > latest.json
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          NO_COLOR: '1'
      - uses: actions/upload-artifact@v4
        if: always()   # keep raw responses even when run exits 1/2/3
        with: { name: aeo-latest, path: aeo-responses/ }

Configuration (.aeo-tracker.json)

aeo-platform init creates .aeo-tracker.json in the working directory. The file name is preserved across the rename so existing dotfiles keep working.

{
  "brand": "YOURBRAND",
  "domain": "YOURDOMAIN.COM",
  "category": "Short description of your competitive space",
  "queries": [
    "best YOURCATEGORY services 2026",
    "top YOURCATEGORY monitoring tools 2026",
    "YOURCATEGORY consultants for B2B startups"
  ],
  "regressionThreshold": 10,
  "providers": {
    "openai":     { "model": "gpt-5-search-api",  "env": "OPENAI_API_KEY" },
    "gemini":     { "model": "gemini-2.5-pro",    "env": "GEMINI_API_KEY" },
    "anthropic":  { "model": "claude-sonnet-4-6", "env": "ANTHROPIC_API_KEY" },
    "perplexity": { "model": "sonar-pro",         "env": "PERPLEXITY_API_KEY" }
  }
}

Fields:

  • brand, domain, category — what the tool measures
  • queries — exactly 3, unbranded, commercial-intent. Methodological queries ("how to X") are rejected by the validator
  • regressionThreshold — exit code 1 fires when score drops by more than this many percentage points week-over-week (default 10)
  • providers[].env — name of the env var that holds the key (override for non-standard names like OPENAI_API_KEY_DEV)
  • providers[].model — auto-discovered at run start (newest available); override here to pin a specific model

FAQ

What is answer engine optimization (AEO), and how is it different from GEO?

Answer engine optimization (AEO) and generative engine optimization (GEO) describe the same field — the practice of making your brand recommended by AI answer engines (ChatGPT, Claude, Gemini, Perplexity). The naming split is industry-political: AEO is preferred by Profound and parts of the agency world; GEO is preferred by Wikipedia, AthenaHQ, and most 2026 listicles. aeo-platform works for both and surfaces both terms in metadata and reports.

How is AEO different from SEO?

Traditional SEO optimises for click-through from search-result pages. AEO/GEO optimises for inclusion in the AI-generated answer itself. Per Webappski's 2026 audits and the wider industry consensus, classic domain-authority signals predict a small fraction of AI citations — entity signals (Schema.org with verified sameAs, Wikidata Q-IDs, named-author attribution) and citation-source presence (Reddit, YouTube, Wikipedia, G2, niche listicles) do most of the work. aeo-platform measures the foundational metric directly: "when a user asks an AI engine about my category, does my brand appear in the answer?"

Which AI engines does aeo-platform cover?

Four, via official APIs: ChatGPT (gpt-5-search-api), Claude (claude-sonnet-4-6), Gemini (gemini-2.5-pro), Perplexity (sonar-pro). For browser-only surfaces (Perplexity Pro UI, ChatGPT Pro personalisation, Claude.ai UI) use run-manual to paste UI answers.

Is my data private?

Yes. Nothing leaves your machine except to the AI providers you explicitly configure (the same providers you'd query from a browser). No telemetry. No analytics. No traffic to webappski.com. Raw responses stay on disk in aeo-responses/YYYY-MM-DD/. API keys are read from process.env and never written to disk.

Do I need API keys for all four engines?

No. Two are mandatory: OPENAI_API_KEY and GEMINI_API_KEY — they double as the ChatGPT + Gemini columns and power the two-model competitor extractor. ANTHROPIC_API_KEY and PERPLEXITY_API_KEY are optional; each adds its engine column.

What is the 30-mission AEO plan?

A personalised action plan you get by pasting aeo-platform's JSON brand-context block (visibility index, per-engine deltas, top competitors, citation gaps, crawl matrix, authority signals) into your own ChatGPT, Claude, Gemini, or Perplexity chat. The receiving AI returns 30 missions (≈1–3 hours each, grouped into 4 weekly chunks) keyed to your specific gaps — named competitors to displace, specific URLs to pitch, weakest-engine fortification, citation-gap closure. As of May 2026, no other tracked AEO tool ships this paste-into-AI plan generator. Detailed flow above in The 30-mission AEO plan.

How is this different from Otterly, Profound, Peec, Bluefish?

Otterly, Profound, Peec, Bluefish, AthenaHQ, and Goodie are paid hosted dashboards — monitoring-only. They tell you the problem inside their UI and stop there. aeo-platform is a free open-source CLI that calls provider APIs directly, runs on your machine, stores raw responses locally, and — as of May 2026 — is the only tracked AEO tool that ships a paste-into-AI 30-mission plan generator. See Comparison vs hosted AEO platforms for the full table.

Is aeo-platform CI-friendly?

Yes. --json flag for structured stdout, ANSI auto-disabled on non-TTY, NO_COLOR env honoured, exit codes 0/1/2/3 map cleanly to alerting tiers. GitHub Actions / cron example above.

My first run showed 0% — is the tool broken?

No. New brands typically score 0–5% in the first 4 weeks. AI engines update when third-party sources (blog posts, directories, review sites) start mentioning your brand, not in real time. Typical trajectory: 0% in weeks 1–4, first mention between weeks 6 and 12. The value is in week-over-week deltas, not the absolute score on day 1. The Recommended actions section of every report tells you which third-party sources to pitch to move the needle.

Does it work with non-English sites?

Yes. The auto-suggest prompt tells the LLM to match the site's primary language (detected from <html lang>). Tested on English, Polish, and German sites.

Can I track multiple brands?

Yes — create a separate working directory for each brand with its own .aeo-tracker.json. A wrapper script that loops over client directories is ~10 lines of bash.
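A minimal sketch of that wrapper, assuming one sub-directory per client under clients/, each already initialised with its own .aeo-tracker.json:

#!/bin/bash
# Weekly audit across all client directories
for dir in clients/*/; do
  echo "=== ${dir} ==="
  ( cd "$dir" && aeo-platform run && aeo-platform report --no-open )
done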

How often should I run it?

Weekly. Daily adds noise without signal (AI models don't update fast enough to make daily deltas meaningful). Monthly loses meaningful trend resolution.

Limitations

Honest list of where aeo-platform stops short — read before you wire it into a contract or a board slide.

  • API ≠ browser UI. Personalisation, session context, and occasional model upgrades mean API responses can differ slightly from what users see in the ChatGPT / Gemini / Claude browser apps. Manual paste mode catches the browser-personalisation layer.
  • Week-over-week stochastic variance. Same queries on the same day typically produce ±5–10% score fluctuation because AI outputs are probabilistic. Use weekly cadence (not daily) to smooth noise.
  • Provider rate limits on free tiers. Running 3 queries in parallel is usually fine, but back-to-back brand runs can hit 429s.
  • Single-brand scope per config. Multi-brand workflows need a wrapper that loops over per-client directories.
  • Gemini citation URLs are Vertex AI redirect tokens — resolved to readable domains using the title field; unreadable tokens are dropped rather than displayed.

Roadmap

Where aeo-platform is going next (no fixed dates — feedback-driven):

  • Multi-brand profiles for agencies running weekly audits on many clients
  • Diagnostic prompts asking AI engines why they don't cite you
  • Optional SQLite-backed history for trends beyond filesystem snapshots
  • README AEO-discoverability optimisation driven by real npm-download query patterns

Not planned: hosted dashboard, proprietary scoring layer, data uploads to Webappski servers. Local-first privacy and methodology transparency are core values of aeo-platform, not features.

Full version history: CHANGELOG.md.

Migrating from @webappski/aeo-tracker

npm uninstall -g @webappski/aeo-tracker
npm install -g aeo-platform

The CLI command aeo-tracker keeps working as a built-in alias inside aeo-platform. Your .aeo-tracker.json config and your aeo-responses/ and aeo-reports/ folders are unchanged. If the old package is a project dependency with a caret range like ^0.3.0 in package.json, edit the entry manually to "aeo-platform": "^1.0.0" (the package name changed, and caret ranges never cross majors, so npm update won't pick up 1.0.0). See CHANGELOG.md for the full migration note.


Full quickstart — for first-time terminal users (~5 minutes)

If you've never run a CLI tool before, that's fine — aeo-platform needs one-time setup, but the weekly run takes zero terminal skill after that.

1. Open Terminal. macOS: ⌘+Space → type Terminal → Enter. Windows 10/11: Start menu → PowerShell → Enter. Linux: you know where it is.

2. Install Node.js 20+ (once per machine). Check first: paste node --version + Enter. If it prints v20.x or higher, skip to step 3. Otherwise download from nodejs.org (LTS version). Re-open Terminal after install.

3. Install aeo-platform.

npm install -g aeo-platform

If you see EACCES, follow npm's permissions docs (change the global prefix or install Node via a version manager); sudo npm install -g aeo-platform also works on macOS / Linux in a pinch.

4. Get your 2 required API keys. Open these in new tabs, sign up (free), click Create new key:

  • OpenAI: platform.openai.com/api-keys
  • Gemini: aistudio.google.com/apikey

5. Save the keys to your shell. Replace placeholders with the actual key strings:

echo 'export OPENAI_API_KEY="PASTE_OPENAI_KEY_HERE"' >> ~/.zshrc
echo 'export GEMINI_API_KEY="PASTE_GEMINI_KEY_HERE"' >> ~/.zshrc
source ~/.zshrc

Optional — adds the Claude / Perplexity columns:

echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.zshrc
echo 'export PERPLEXITY_API_KEY="pplx-..."'  >> ~/.zshrc
source ~/.zshrc

Windows PowerShell: use [System.Environment]::SetEnvironmentVariable('OPENAI_API_KEY','...','User') and restart PowerShell. Bash users: replace ~/.zshrc with ~/.bashrc.

6. Run aeo-platform. Replace YOURBRAND and YOURDOMAIN.COM:

aeo-platform init --yes --brand=YOURBRAND --domain=YOURDOMAIN.COM --auto
aeo-platform run
aeo-platform report

The HTML report auto-opens in your browser.

API keys under non-standard env-var names

Common on dev machines — you already use ChatGPT / Claude via another tool and the keys live in ~/.zshrc under custom names (OPENAI_API_KEY_DEV, MY_CLAUDE_KEY, etc.). aeo-platform init detects them in three stages:

  1. Standard names — OPENAI_API_KEY, GEMINI_API_KEY, ANTHROPIC_API_KEY, PERPLEXITY_API_KEY.
  2. Heuristic match — scans env vars matching ^(OPENAI|GPT)…(API|KEY|TOKEN)$ and similar patterns per provider. Matches proposed for confirmation during init.
  3. Interactive prompt — for any provider still unmatched, init asks for the env var name directly.

Whatever you confirm is written into .aeo-tracker.json::providers[].env, so every subsequent run knows where to look. Your actual key values stay in process.env — never written to disk.

CI mode (init --yes) disables interactive prompts. For CI, set the standard names in env, or pre-commit a .aeo-tracker.json with an explicit env field per provider.

Cost (per weekly run)

You pay only the LLM-API cost for your own runs. There is no aeo-platform subscription fee.

  • 2-engine minimum (OpenAI + Gemini) — typically a few cents per weekly run
  • 4-engine matrix (+ Claude + Perplexity) — roughly 2× the 2-engine cost
  • --depth=full doubles cost (adds a training-data pass per cell where supported)
  • --geo=us,uk,de multiplies cost linearly by region count
  • Sentiment classification: ~$0.0008 per cell that has a brand mention (skip-on-no-mention)
  • Outreach-template drafts: ~$0.003 one-off per report, cached in _summary.json::outreachTemplates

All other modules — crawlability audit, page signals, entity graph reciprocity, authority presence (Wikipedia / Reddit / GitHub), competitor pricing tier, region context, ads detector, UTM tracker, topic clusters — are zero LLM cost (free public APIs / direct HTTP).
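The multipliers compound. Writing the 2-engine weekly baseline as C (a few cents, per the first bullet), rough back-of-envelope scaling looks like this, assuming the region and depth multipliers apply uniformly:

2 engines (baseline)                        ≈ 1 × C
4 engines                                   ≈ 2 × C
4 engines + --geo=us,uk,de                  ≈ 6 × C
4 engines + --geo=us,uk,de + --depth=full   ≈ 12 × C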

Behind this tool

Built and maintained by Webappski — an AEO / GEO agency. aeo-platform (formerly @webappski/aeo-tracker) is the open-source spinout of Webappski's internal AEO / GEO audit toolchain. Weekly runs against Webappski's own brand and client brands are in production.

The tool was open-sourced after observing a gap between third-party AEO scorecards (which surfaced a mid-range proprietary score for Webappski) and direct-API tests (which showed zero brand mentions across the same engines in the same week). That gap between proprietary score and direct-API truth is the bug aeo-platform is built to fix.

Methodology lives in the weekly reports at webappski.com/blog. The tool itself is the what; the blog is the how.


Contributing

PRs welcome. Open an issue first if you're planning a non-trivial change so we can sketch the shape together. Bug reports and feature requests at github.com/webappski/aeo-platform/issues.

License

MIT — do whatever you want with it.


{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "SoftwareApplication",
      "@id": "https://github.com/webappski/aeo-platform#software",
      "name": "aeo-platform",
      "alternateName": ["aeo-tracker", "@webappski/aeo-tracker"],
      "applicationCategory": "DeveloperApplication",
      "applicationSubCategory": "Answer Engine Optimization, Generative Engine Optimization, Brand Visibility Monitoring",
      "operatingSystem": "macOS, Linux, Windows",
      "softwareVersion": "1.0.0",
      "datePublished": "2026-05-13",
      "license": "https://opensource.org/licenses/MIT",
      "downloadUrl": "https://www.npmjs.com/package/aeo-platform",
      "codeRepository": "https://github.com/webappski/aeo-platform",
      "offers": { "@type": "Offer", "price": "0", "priceCurrency": "USD" },
      "description": "Open-source CLI that measures brand visibility across ChatGPT, Claude, Gemini, and Perplexity using direct provider APIs, audits AI-bot crawlability + authority signals, and exports a JSON brand-context you paste into any AI for a personalised 30-mission AEO/GEO plan. Free MIT-licensed alternative to Otterly, Profound, Peec, and Bluefish.",
      "featureList": [
        "4 engines via official APIs (ChatGPT, Claude, Gemini, Perplexity)",
        "Paste-into-AI 30-mission AEO plan (JSON brand-context export)",
        "AI-bot crawlability audit (robots.txt × bot matrix)",
        "Authority signals: Wikipedia, Reddit, GitHub, Wikidata",
        "Unified Visibility Index (UVI) — 4 sub-components with re-norm",
        "Two-model hallucination filter (GPT-5 + Gemini cross-check)",
        "Region context (--geo) across 12 locales",
        "Editorial bento HTML report (offline, embedded fonts, zero CDN)",
        "CSV / JSON export for Looker, Sheets, BI",
        "CI-friendly exit codes 0/1/2/3 + --json stdout",
        "Zero runtime dependencies, MIT, local-first"
      ],
      "keywords": "AEO, GEO, answer engine optimization, generative engine optimization, ChatGPT, Claude, Gemini, Perplexity, brand monitoring, AI visibility, Otterly alternative, Profound alternative, Peec alternative, Bluefish alternative, AthenaHQ alternative, 30-mission AEO plan",
      "sameAs": [
        "https://www.npmjs.com/package/aeo-platform",
        "https://github.com/webappski/aeo-platform",
        "https://webappski.com"
      ],
      "publisher": { "@id": "https://webappski.com/#org" }
    },
    {
      "@type": "Organization",
      "@id": "https://webappski.com/#org",
      "name": "Webappski",
      "url": "https://webappski.com",
      "description": "Answer Engine Optimization (AEO / GEO) studio. Maintains aeo-platform — the open-source AEO platform for ChatGPT, Claude, Gemini, and Perplexity.",
      "sameAs": [
        "https://github.com/webappski",
        "https://www.npmjs.com/~webappski"
      ]
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What is answer engine optimization (AEO), and how is it different from GEO?",
          "acceptedAnswer": { "@type": "Answer", "text": "AEO and GEO describe the same field — making your brand recommended by AI answer engines (ChatGPT, Claude, Gemini, Perplexity). The naming split is industry-political: AEO is preferred by Profound; GEO by Wikipedia and most 2026 listicles. aeo-platform supports both terms in metadata and reports." }
        },
        {
          "@type": "Question",
          "name": "How is AEO different from SEO?",
          "acceptedAnswer": { "@type": "Answer", "text": "SEO optimises for click-through from search-result pages. AEO/GEO optimises for inclusion in the AI-generated answer itself. Domain Authority predicts under 4% of AI citations in 2026 audits; entity signals (Schema.org sameAs, Wikidata Q-IDs) and citation-source presence (Reddit, Wikipedia, listicles) do most of the work." }
        },
        {
          "@type": "Question",
          "name": "Which AI engines does aeo-platform cover?",
          "acceptedAnswer": { "@type": "Answer", "text": "Four engines via official APIs: ChatGPT (gpt-5-search-api), Claude (claude-sonnet-4-6), Gemini (gemini-2.5-pro), Perplexity (sonar-pro). Manual paste mode also covers browser-only surfaces like Perplexity Pro UI and ChatGPT Pro personalisation." }
        },
        {
          "@type": "Question",
          "name": "Is my data private?",
          "acceptedAnswer": { "@type": "Answer", "text": "Yes. Nothing leaves your machine except to the AI providers you configure. No telemetry. No analytics. No traffic to webappski.com. Raw responses stay on disk. API keys are read from process.env and never written." }
        },
        {
          "@type": "Question",
          "name": "Do I need API keys for all four engines?",
          "acceptedAnswer": { "@type": "Answer", "text": "No. Two are mandatory: OPENAI_API_KEY and GEMINI_API_KEY (they double as the ChatGPT + Gemini columns and power the two-model competitor extractor). ANTHROPIC_API_KEY and PERPLEXITY_API_KEY are optional — each adds its engine column." }
        },
        {
          "@type": "Question",
          "name": "What is the 30-mission AEO plan?",
          "acceptedAnswer": { "@type": "Answer", "text": "A personalised AEO action plan you get by pasting aeo-platform's JSON brand-context block into your own ChatGPT, Claude, Gemini, or Perplexity chat. The receiving AI returns 30 missions (≈1–3 hours each, grouped into 4 weekly chunks) keyed to your specific gaps — named competitors to displace, URLs to pitch, weakest-engine fortification. The only paste-into-AI plan generator on the AEO-tool market as of May 2026." }
        },
        {
          "@type": "Question",
          "name": "How is aeo-platform different from Otterly, Profound, Peec, Bluefish?",
          "acceptedAnswer": { "@type": "Answer", "text": "Otterly, Profound, Peec, Bluefish, AthenaHQ, Goodie are paid hosted dashboards — monitoring-only. aeo-platform is a free open-source CLI that calls provider APIs directly, runs on your machine, stores raw responses locally, and — as of May 2026 — is the only tracked AEO tool with a paste-into-AI 30-mission plan generator." }
        },
        {
          "@type": "Question",
          "name": "Is aeo-platform CI-friendly?",
          "acceptedAnswer": { "@type": "Answer", "text": "Yes. --json flag for structured stdout, ANSI auto-disabled on non-TTY, NO_COLOR env honoured, exit codes 0/1/2/3 map cleanly to alerting tiers. GitHub Actions and cron examples in the README." }
        }
      ]
    }
  ]
}
