saywhat

A command-line tool that takes piped-in text and asks an LLM (Large Language Model) questions about it. Perfect for quickly analyzing command output, log files, or any other text data with AI.
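
For example (the command here is just illustrative; any command's output works):

uptime | saywhat "how long has this machine been running, in plain English?"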

Features

  • Pipe any command output directly to an LLM for analysis
  • Works with OpenAI, local LLMs, or any OpenAI-compatible API
  • Secure configuration storage in your system config directory
  • Support for custom API endpoints (including localhost)
  • Configurable models and SSL settings

Installation

From Source

# Clone the repository
git clone https://github.com/dvnc0/saywhat.git
cd saywhat

# Build and install
cargo install --path .

Using Cargo

cargo install saywhat

Quick Start

  1. Initialize with your LLM provider:

    # For OpenAI
    saywhat init --api-url https://api.openai.com/v1 --api-key YOUR_API_KEY --model gpt-4
    
    # For a local LLM (like LM Studio or Ollama with OpenAI compatibility)
    saywhat init --api-url http://localhost:8080/v1 --model local-model
    
    # For self-signed certificates
    saywhat init --api-url https://your-server/v1 --api-key YOUR_KEY --insecure
  2. Ask questions about piped input:

    cat error.log | saywhat "what are the main errors?"

Usage Examples

Analyze System Information

df -h | saywhat "which disk is most full?"

Understand Command Output

ps aux | saywhat "which processes are using the most memory?"

Parse Log Files

tail -100 /var/log/app.log | saywhat "summarize the errors in this log"

Code Review

git diff | saywhat "what are the key changes in this diff?"

Summarize Documentation

cat README.md | saywhat "explain this in simple terms"

Network Analysis

netstat -tulpn | saywhat "what ports are listening and what services?"

File System Operations

ls -lah | saywhat "which files are larger than 1MB?"

Configuration

View Current Configuration

saywhat config

This displays:

  • Configuration file location
  • API URL
  • API Key (truncated for security)
  • Model name
  • SSL verification settings

Configuration File Location

Configuration is stored at:

  • Linux/macOS: ~/.config/saywhat/config.toml
  • Windows: %APPDATA%\saywhat\config.toml

Manual Configuration

You can also edit the config file directly:

api_url = "https://api.openai.com/v1"
api_key = "sk-..."
model = "gpt-4"
insecure = false
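
For example, on Linux/macOS you could open the file in your editor of choice (the path matches the location listed above; nano is only a fallback example):

${EDITOR:-nano} ~/.config/saywhat/config.toml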

Command Reference

saywhat init

Initialize or update configuration.

Options:

  • --api-url <URL> - API base URL (required)
  • --api-key <KEY> - API key for authentication (optional for local LLMs)
  • --model <MODEL> - Model to use (default: "gpt-4")
  • --insecure - Skip SSL certificate verification
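
Example (placeholder key; omitting --model keeps the default "gpt-4"):

saywhat init --api-url https://api.openai.com/v1 --api-key YOUR_API_KEY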

saywhat config

Display current configuration.

saywhat <question>

Ask a question about piped input.

Example:

echo "Hello World" | saywhat "what language is this?"

Compatible LLM Providers

saywhat works with any service that implements the OpenAI Chat Completions API (see the request sketch after this list):

  • OpenAI - https://api.openai.com/v1
  • Anthropic (via proxy) - Various proxy services available
  • Local LLMs - LM Studio, Ollama, and similar tools running in OpenAI-compatible mode
  • Self-hosted models - Any custom OpenAI-compatible endpoint
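
As a rough compatibility check, the endpoint should accept a standard Chat Completions request like the sketch below. This is the generic OpenAI-style request shape, not saywhat's exact payload; the URL, key, and model are placeholders:

curl -s https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Say hello"}]}'

If this returns JSON with a choices array, the endpoint should work with saywhat.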

Tips

  • For local LLMs, you typically don't need an API key
  • Use --insecure flag only for development/testing with self-signed certificates
  • The tool automatically detects if you're piping input or typing manually
  • Questions can be multiple words without quotes: saywhat what is this about
  • Use Ctrl+D to end manual input if no piped data is provided
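
For instance, a session without piped input might look like this (you type the text, then press Ctrl+D to send it; the response is omitted here):

saywhat "summarize this text"
The quarterly report shows revenue up 4% but margins down.
^D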

Privacy & Security

  • API keys are stored locally in your system's config directory
  • Config files are created with user-only permissions (a manual check is shown after this list)
  • No data is logged or stored by saywhat itself
  • All data transmission happens directly between your machine and your configured API endpoint
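
If you want to verify the permissions yourself, standard shell commands are enough; this is a manual check on Linux/macOS, not a saywhat feature:

ls -l ~/.config/saywhat/config.toml     # expect owner-only permissions, e.g. -rw-------
chmod 600 ~/.config/saywhat/config.toml # tighten them manually if needed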

Building from Source

Prerequisites

  • Rust 1.70 or newer
  • Cargo

Build

cargo build --release

The binary will be available at target/release/saywhat.
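
To try the freshly built binary without installing it (the question is just an example):

echo "Hello World" | ./target/release/saywhat "what language is this?"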

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

Unlicense

Troubleshooting

"No configuration found"

Run saywhat init first to set up your LLM provider.

"Request failed" errors

  • Check your API URL is correct
  • Verify your API key is valid
  • Ensure your internet connection is working
  • For local LLMs, make sure the server is running

SSL Certificate errors

If using a self-signed certificate, add the --insecure flag during init.

No response from LLM

  • Check that your model name is correct for your provider
  • Verify the API endpoint supports the Chat Completions format
  • Try with a different model

Changelog

v0.1.0

  • Initial release
  • Basic piping and question functionality
  • Configuration management
  • Support for OpenAI-compatible APIs
