
lineate

Paste a link to an article, PDF, video, or thread and Lineate produces a private GitHub Gist that helps you prioritise what to read and internalise what you do read. It starts with takeaways, a devil’s-advocate critique, and a personalised “skip this” rationale, then includes the content or a semantically complete summary.

What it does

  • You can paste a link (or any text containing links) and get back cleaned and converted URLs.
  • Lineate can fetch content or transcripts, write them to a private GitHub Gist, and return the gist URL.
  • Lineate can also rewrite URLs to more extraction-friendly forms.
  • Processed URL(s) are opened in your browser by default.

Epistemic triage prefix

Every output starts with a short triage header that helps you prioritise what to read and engage more deeply with what you do read.

  • The main takeaways capture the central claims and conclusions.
  • A 60-word devil’s-advocate critique pressure-tests the claims and assumptions.
  • A 30-word “skip this” argument is written against your stated interests/persona. Persona configuration lives in PERSONA.txt.

Example output: https://gist.github.com/distbit0/4125df79e8a43b002a2d3a805b90e115

Supported inputs: any text containing URLs, or a local file path to a .pdf, .mp3, or .mp4.

  • Some converters only run when you pass --force-convert-all or append ## to a specific URL.
  • Converters that write gists include YouTube (/watch?, /live/), Twitter/X (/status/), Discord (discord.com), Telegram (t.me), PDFs (.pdf, arxiv.org/pdf/...), DocSend (docsend.com/view/...), GitBook (gitbook, docs.), Discourse (/t/...), Substack, Apple Podcasts, SoundCloud, Streameth, Rumble, and direct .mp3/.mp4.
  • URL normalisers include Medium (medium.com to scribe.rip), Google Docs export, Wikipedia mobile, Reddit (old.reddit.com), and GreaterWrong (lesswrong.com).
  • Warpcast (warpcast.com) is passed through unchanged.
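The URL-normaliser step above amounts to rewriting known hosts to extraction-friendly mirrors. The sketch below is illustrative only, not the repo’s actual code; the HOST_REWRITES table and normalise function are hypothetical names:

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical host-rewrite table mirroring the normalisers listed above.
HOST_REWRITES = {
    "medium.com": "scribe.rip",
    "www.reddit.com": "old.reddit.com",
    "www.lesswrong.com": "www.greaterwrong.com",
}

def normalise(url: str) -> str:
    """Rewrite known hosts to their extraction-friendly mirrors."""
    parts = urlparse(url)
    host = HOST_REWRITES.get(parts.netloc, parts.netloc)
    # Wikipedia mobile pages: drop the ".m" subdomain component.
    if host.endswith(".m.wikipedia.org"):
        host = host.replace(".m.wikipedia.org", ".wikipedia.org")
    return urlunparse(parts._replace(netloc=host))
```

For example, normalise("https://medium.com/@a/post") yields the scribe.rip mirror while leaving unknown hosts untouched.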

Quick start

  1. Install Python deps (managed via pyproject.toml):
    • uv sync
  2. Ensure external tools are installed (see below).
  3. Fill in .env with required keys.
  4. Set the LLM model in config.json.
  5. Run:
    • uv run --env-file .env -m lineate "https://..."
    • or just run with no args to use the clipboard.

Usage

uv run --env-file .env -m lineate [text-or-url]

Options:

  • --force-convert-all forces conversion for all URLs.
  • --summarise replaces the extracted body with a semantically complete summary.
  • --no-open prevents opening processed URLs in your browser.
  • --force-refresh forces refresh for all converters (equivalent to appending ###).
  • --force-no-convert skips conversion for all URLs.

Environment variables (.env)

Put these in this repo’s .env file.

Required for core functionality:

  • gh_api_key – GitHub personal access token with Gist scope (used by writeGist).

Required for LLM summarisation/title generation:

  • LLM_API_KEY – used for text-generation calls to the configured llm.api_base_url.

Required for audio/video transcription:

  • OPENAI_API_KEY – used for Whisper transcription.

Required for specific sources:

  • Discord: DISCORD_AUTH_TOKEN
  • Telegram: TELEGRAM_API_ID, TELEGRAM_API_HASH, TELEGRAM_SESSION_NAME
  • Twitter/X: TWITTER_BEARER_TOKEN, TWITTER_CT0_TOKEN, TWITTER_COOKIE
    • Optional: TWITTER_USER_AGENT, TWITTER_XCLIENTTXID, TWITTER_XCLIENTUUID

Notes:

  • Telethon will create a .session file named after TELEGRAM_SESSION_NAME in the repo.
  • Twitter/X credentials need to be refreshed if the session expires.
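A minimal .env covering just the core keys might look like this (placeholder values; the source-specific variables above are only needed if you use those sources):

```shell
# Required for core functionality (gist creation)
gh_api_key=your_github_pat_with_gist_scope
# Required for LLM summarisation/title generation
LLM_API_KEY=your_llm_provider_key
# Required for audio/video transcription
OPENAI_API_KEY=your_openai_key
```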


Config (config.json)

Set the LLM endpoint and model under llm. The repo default points at OpenRouter and uses an OpenRouter model id:

{
  "llm": {
    "api_base_url": "https://openrouter.ai/api/v1",
    "reasoning_effort": "high",
    "model": "qwen/qwen3.6-plus-preview:free",
    "pdf_model": "openai/gpt-5.4",
    "pdf_reasoning_effort": "medium"
  }
}

llm.model and llm.reasoning_effort are the defaults for the non-PDF text-generation paths. llm.pdf_model and llm.pdf_reasoning_effort are used only for the PDF-to-Markdown conversion path.
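That split can be illustrated with a small selector (a hypothetical helper for illustration, not code from the repo):

```python
import json

# Example config mirroring the structure above, with placeholder model ids.
config = json.loads("""
{
  "llm": {
    "api_base_url": "https://openrouter.ai/api/v1",
    "reasoning_effort": "high",
    "model": "example/text-model",
    "pdf_model": "example/pdf-model",
    "pdf_reasoning_effort": "medium"
  }
}
""")

def pick_model(llm_cfg: dict, is_pdf: bool) -> tuple:
    """Return (model, reasoning_effort) for the PDF or non-PDF path."""
    if is_pdf:
        return llm_cfg["pdf_model"], llm_cfg["pdf_reasoning_effort"]
    return llm_cfg["model"], llm_cfg["reasoning_effort"]
```

Non-PDF text generation would read llm.model/llm.reasoning_effort; PDF-to-Markdown conversion would read llm.pdf_model/llm.pdf_reasoning_effort.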

System dependencies

  • uv – Python package manager used to sync/install deps.
  • ffmpeg – required by pydub and yt-dlp for audio conversion.
  • yt-dlp – required for some sources (e.g., Rumble).
  • Clipboard helper for pyperclip:
    • Linux (X11): xclip or xsel (or wl-clipboard on Wayland)
    • macOS: pbcopy/pbpaste (built-in)

Install (CLI dependencies):

  • macOS (Homebrew):
    • brew install ffmpeg yt-dlp uv
  • Linux (Fedora):
    • sudo dnf install ffmpeg yt-dlp uv

Python dependencies used by lineate

Installed via uv sync from pyproject.toml. Key runtime packages:

  • requests, loguru, python-dotenv, pyperclip
  • openai (OpenRouter/OpenAI-compatible client for LLM calls, plus Whisper transcription)
  • pydub (audio slicing)
  • youtube-transcript-api, bs4
  • telethon (Telegram)
  • yt-dlp (Rumble)
  • soundcloud-lib
  • python-dateutil (Discord timestamps)
