A command-line tool that pipes text to an LLM (Large Language Model) and asks questions about it. Perfect for quickly analyzing command output, log files, or any text data using AI.
- Pipe any command output directly to an LLM for analysis
- Works with OpenAI, local LLMs, or any OpenAI-compatible API
- Secure configuration storage in your system config directory
- Support for custom API endpoints (including localhost)
- Configurable models and SSL settings
```shell
# Clone the repository
git clone https://github.com/yourusername/saywhat.git
cd saywhat

# Build and install
cargo install --path .
```
Initialize with your LLM provider:
```shell
# For OpenAI
saywhat init --api-url https://api.openai.com/v1 --api-key YOUR_API_KEY --model gpt-4

# For a local LLM (like LM Studio or Ollama with OpenAI compatibility)
saywhat init --api-url http://localhost:8080/v1 --model local-model

# For self-signed certificates
saywhat init --api-url https://your-server/v1 --api-key YOUR_KEY --insecure
```
Ask questions about piped input:
```shell
cat error.log | saywhat "what are the main errors?"
df -h | saywhat "which disk is most full?"
ps aux | saywhat "which processes are using the most memory?"
tail -100 /var/log/app.log | saywhat "summarize the errors in this log"
git diff | saywhat "what are the key changes in this diff?"
cat README.md | saywhat "explain this in simple terms"
netstat -tulpn | saywhat "what ports are listening and what services?"
ls -lah | saywhat "which files are larger than 1MB?"
```

View the current configuration:

```shell
saywhat config
```

This displays:
- Configuration file location
- API URL
- API Key (truncated for security)
- Model name
- SSL verification settings
Configuration is stored at:
- Linux/macOS: `~/.config/saywhat/config.toml`
- Windows: `%APPDATA%\saywhat\config.toml`
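For scripting around the tool, the per-OS locations above can be resolved programmatically. A minimal Python sketch (the directory names come from the list above, not from saywhat's own code; `config_path` is a hypothetical helper):

```python
import os

def config_path(platform: str, home: str) -> str:
    """Expected saywhat config location. `home` is the user's home
    directory, or the %APPDATA% directory on Windows."""
    if platform in ("linux", "macos"):
        return os.path.join(home, ".config", "saywhat", "config.toml")
    if platform == "windows":
        return os.path.join(home, "saywhat", "config.toml")
    raise ValueError(f"unsupported platform: {platform}")

print(config_path("linux", "/home/alice"))
# /home/alice/.config/saywhat/config.toml
```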
You can also edit the config file directly:
```toml
api_url = "https://api.openai.com/v1"
api_key = "sk-..."
model = "gpt-4"
insecure = false
```

`saywhat init` - Initialize or update configuration.
Options:
- `--api-url <URL>` - API base URL (required)
- `--api-key <KEY>` - API key for authentication (optional for local LLMs)
- `--model <MODEL>` - Model to use (default: "gpt-4")
- `--insecure` - Skip SSL certificate verification
`saywhat config` - Display current configuration.
`saywhat "<question>"` - Ask a question about piped input.
Example:
```shell
echo "Hello World" | saywhat "what language is this?"
```

saywhat works with any service that implements the OpenAI Chat Completions API:
- OpenAI - `https://api.openai.com/v1`
- Anthropic (via proxy) - various proxy services available
- Local LLMs:
  - LM Studio - `http://localhost:1234/v1`
  - Ollama (with OpenAI compatibility)
  - LocalAI
  - text-generation-webui
- Self-hosted models - Any custom OpenAI-compatible endpoint
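All of these endpoints accept the same request shape. As a reference, a minimal Chat Completions request body looks like the sketch below; the field names follow the public OpenAI spec, but how saywhat combines the question and the piped text internally is an assumption, and the model value is a placeholder:

```python
import json

# Minimal OpenAI-compatible Chat Completions request body.
payload = {
    "model": "gpt-4",  # placeholder: use whatever model your provider serves
    "messages": [
        # Assumed layout: the question and piped text share one user message.
        {"role": "user",
         "content": "what are the main errors?\n\n<piped text goes here>"},
    ],
}

body = json.dumps(payload)
print(body)
```

Any server that accepts a POST of this JSON at `/chat/completions` under its base URL should work with the tool.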
- For local LLMs, you typically don't need an API key
- Use the `--insecure` flag only for development/testing with self-signed certificates
- The tool automatically detects if you're piping input or typing manually
- Questions can be multiple words without quotes: `saywhat what is this about`
- Use Ctrl+D to end manual input if no piped data is provided
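The piped-vs-interactive detection mentioned above typically comes down to checking whether stdin is a terminal. A minimal Python sketch of the idea (saywhat itself is written in Rust; `has_piped_input` is a hypothetical name):

```python
import io

def has_piped_input(stream) -> bool:
    """True when a stdin-like stream is not an interactive terminal."""
    return not stream.isatty()

# A StringIO stands in for a pipe here, since it is never a TTY.
print(has_piped_input(io.StringIO("piped text")))  # True
```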
- API keys are stored locally in your system's config directory
- Config files are created with user-only permissions
- No data is logged or stored by saywhat itself
- All data transmission happens directly between your machine and your configured API endpoint
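On Unix, "user-only permissions" on a config file correspond to mode 0600. You can verify or enforce that yourself; a Python sketch using a temporary stand-in file:

```python
import os
import stat
import tempfile

# Create a stand-in config file and restrict it to the owner,
# mirroring the user-only permissions described above.
fd, path = tempfile.mkstemp(suffix=".toml")
os.close(fd)
os.chmod(path, 0o600)  # read/write for owner only

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
os.remove(path)
```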
- Rust 1.70 or newer
- Cargo
```shell
cargo build --release
```

The binary will be available at `target/release/saywhat`.
Contributions are welcome! Please feel free to submit a Pull Request.
Unlicense
Run `saywhat init` first to set up your LLM provider.
- Check your API URL is correct
- Verify your API key is valid
- Ensure your internet connection is working
- For local LLMs, make sure the server is running
If using a self-signed certificate, add the `--insecure` flag during init.
- Check that your model name is correct for your provider
- Verify the API endpoint supports the Chat Completions format
- Try with a different model
- Initial release
- Basic piping and question functionality
- Configuration management
- Support for OpenAI-compatible APIs