An AI-powered terminal news assistant that lets you search for news, read articles, fact-check claims, and have intelligent conversations, all from your command line.
- **Smart News Search** – Natural language search with automatic date parsing ("last year AI news" → searches 2024)
- **Robust Article Reading** – Multi-method scraping with 6 fallback strategies, including browser-based extraction
- **AI Summaries** – Context-aware summaries powered by a local LLM (Ollama)
- **Conversational** – Chat naturally with context-aware responses
- **Fact-Checking** – Verify claims against trusted sources (Snopes, PolitiFact, FactCheck.org)
- **Morning Briefing** – Geo-located dashboard with top news on startup
- **Sequential IDs** – Simple numeric IDs (1, 2, 3) for easy command typing
- **Autocomplete** – Tab completion for commands with smart defaults
- **Configurable** – Adjust article limits, choose your LLM model
- **Location-Aware** – Automatic country detection for localized news
- **Typo Correction** – LLM-powered input sanitization
Linux / macOS:

```bash
curl -sSL https://raw.githubusercontent.com/ikenai-lab/news-cli/main/install.sh | bash
```

Windows (PowerShell as Admin):

```powershell
irm https://raw.githubusercontent.com/ikenai-lab/news-cli/main/install.ps1 | iex
```

Global install via `uv` (if you already have `uv` installed):

```bash
uv tool install git+https://github.com/ikenai-lab/news-cli.git
```

The install scripts will:
- Check for and install `uv` (Python package manager)
- Check for and install Ollama (local LLM runtime)
- Clone the repository
- Install Python dependencies
- Pull the LLM model (~2GB)
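Once the script finishes, you can sanity-check that the prerequisites landed on your `PATH`. A small illustrative snippet (not part of the installer itself):

```shell
# Report whether each required tool is on PATH (purely illustrative).
check_tools() {
  for tool in uv ollama; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: installed"
    else
      echo "$tool: missing"
    fi
  done
}
check_tools
```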
```bash
# 1. Install prerequisites
# Linux/macOS:
curl -LsSf https://astral.sh/uv/install.sh | sh
curl -fsSL https://ollama.com/install.sh | sh

# Windows (using winget):
winget install astral-sh.uv
winget install Ollama.Ollama

# 2. Clone and set up
git clone https://github.com/ikenai-lab/news-cli.git
cd news-cli
uv sync

# 3. Optional: install a browser for JS-heavy sites
uv run playwright install chromium

# 4. Pull the LLM model
ollama pull llama3.2:3b
```

```bash
# Run with defaults
uv run news-cli

# Specify model and article limit
uv run news-cli --model llama3.2:3b --limit 10
```

| Option | Default | Description |
|---|---|---|
| `--model` | `llama3.2:3b` | Ollama model to use |
| `--limit` | `5` | Articles per search (1-20) |
You can set persistent defaults (saved to `~/.config/news-cli/config.json`) so you don't need to specify options every time.
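The config file itself is plain JSON. Assuming the keys mirror the CLI flags (an assumption, not verified against the source), a saved config might look like:

```json
{
  "model": "llama3.2:3b",
  "limit": 10
}
```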
```bash
# View current config
news-cli config

# Set default model
news-cli config --model llama3.2:3b

# Set default article limit
news-cli config --limit 10
```

Type `/` to see all available commands with autocomplete.
| Command | Description |
|---|---|
| `/read <id>` | Read and summarize article (e.g. `/read 1`) |
| `/open <id>` | Open article in browser (e.g. `/open 1`) |
| `/save-article <id>` | Save article content to a markdown file |
| `/save-session <file>` | Save conversation history to a JSON file |
| `/analyze <id>` | AI analysis for bias, tone, facts |
| `/fact-check <id>` | Verify claims against fact-check sites |
| `/similar <id>` | Find related news from different sources |
| `/limit <n>` | Set articles per search (1-20) |
| `/briefing` | Refresh the morning briefing |
| `/quit` or `/exit` | Exit the application |
Just type naturally! The AI understands:
"latest AI news"β Search"what happened with OpenAI last week"β Search with date filter"read the techcrunch article"β Reads matching article"give me article 3"β Reads article #3"read 1"β Reads first article in list
On startup, you'll see a personalized dashboard:
```
📰 Morning Briefing (Location: India)
──────────────── India Headlines ────────────────
┃ # ┃ Date       ┃ Source   ┃ Title ┃
┡━━━╇━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━┩
│ 1 │ 2025-12-28 │ ndtv.com │ ...   │
...
📰 12 articles loaded. Use /read <#> to read any article.
```
Verify claims in any article:
```
/fact-check 3

─────────────────── Fact-Check Results ───────────────────
┃ # ┃ Claim                 ┃ Sources ┃ Top Source ┃
┡━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━┩
│ 1 │ "AI will replace..."  │ 3       │ Snopes...  │
...
```
```
news-cli/
├── src/
│   ├── main.py           # CLI entry point
│   ├── agent.py          # NewsAgent with LLM integration
│   ├── startup.py        # Ollama checks + geolocation
│   ├── tools/
│   │   ├── search.py     # DuckDuckGo search with time filters
│   │   ├── scraper.py    # Multi-method article scraper
│   │   └── fact_check.py # Claim verification tool
│   └── ui/
│       ├── render.py     # Rich UI components
│       └── completer.py  # Slash command autocomplete
├── pyproject.toml
└── README.md
```
| Package | Purpose |
|---|---|
| `ollama` | Local LLM client |
| `ddgs` | DuckDuckGo search |
| `nodriver` | Stealth browser automation (replacing Selenium/Playwright) |
| `cloudscraper` | Cloudflare bypass |
| `trafilatura` | Article content extraction |
| `readability-lxml` | Fallback content extraction |
| `rich` | Terminal UI components |
| `typer` | CLI framework |
| `httpx` | HTTP client |
| `prompt-toolkit` | Command autocomplete |
The scraper uses a multi-layered approach with 6 fallback methods:
```
┌─────────────────────────────────────────────────────────┐
│ 1. Cloudscraper                                         │
│    ↳ Fast, lightweight Cloudflare bypass                │
│    ↓ (if fails)                                         │
│ 2. Nodriver (Stealth Browser)                           │
│    ↳ Chrome DevTools Protocol based (masked as user)    │
│    ↳ Handles heavy JS and complex anti-bot measures     │
│    ↓ (if fails)                                         │
│ 3. Direct Fetch (httpx + trafilatura)                   │
│    ↳ Standard HTTP with article extraction              │
│    ↓ (if fails)                                         │
│ 4. Archive.org (Wayback Machine)                        │
│    ↳ Check for cached snapshots if the live site fails  │
└─────────────────────────────────────────────────────────┘
```
For sites that block all scraping (like MSN), use `/open <id>` to view the article in your browser.
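The cascade above is a classic fallback-chain pattern: try the cheapest strategy first and escalate only on failure. A generic sketch in a few lines of Python (illustrative only; function and exception names here are hypothetical, not the project's actual scraper API):

```python
from typing import Callable, Optional

def fetch_with_chain(url: str, fetchers: list[Callable[[str], str]]) -> Optional[str]:
    """Try each fetcher in order; return the first successful result."""
    for fetch in fetchers:
        try:
            return fetch(url)
        except Exception:
            continue  # fall through to the next strategy
    return None  # every strategy failed

# Demo with stand-in fetchers: the first two "fail", the third succeeds.
def cloudscraper_fetch(url: str) -> str:
    raise RuntimeError("blocked by Cloudflare")

def nodriver_fetch(url: str) -> str:
    raise RuntimeError("browser unavailable")

def direct_fetch(url: str) -> str:
    return f"article text from {url}"

result = fetch_with_chain("https://example.com/story",
                          [cloudscraper_fetch, nodriver_fetch, direct_fetch])
print(result)  # article text from https://example.com/story
```

Ordering matters: the fast, lightweight strategies run first, and the expensive browser-based or archive lookups only pay their cost when everything cheaper has failed.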
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE for details.
