
ChadLLM

A lightning-fast desktop AI chat application built with Rust + GTK4 + Libadwaita for Linux GNOME, featuring a modular multi-LLM provider architecture.

🚀 Features

Core Features

  • Multi-LLM Provider Support: Unified interface for OpenAI and Grok (xAI), with Claude and Gemini support coming soon
  • Modular Architecture: Clean, extensible provider system for easy addition of new LLMs
  • Dynamic Provider Selection: Sleek side-by-side dropdowns for provider and model selection
  • Real-time Typography: Consistent font sizing across all UI components with live updates
  • Conversation History: Configurable context window (1-50 turns) via GSettings
  • Markdown Rendering: Beautiful code blocks, formatting, and syntax highlighting
  • Efficient API Calls: Optimized non-streaming approach for consistent token tracking
  • Copy Functionality: One-click copy buttons for AI responses, plus a keyboard shortcut
  • Font Customization: Adjustable font family and size (8-24pt) with 14pt default
  • Provider Icons: Visual provider identification with theme-aware logos
  • Terminal Aesthetic: Clean, minimalistic interface design

Technical Features

  • GPU-Accelerated Rendering: Smooth 60FPS animations
  • Memory Efficient: <50MB RAM usage
  • Fast Startup: Sub-100ms application launch
  • Wayland Support: Full compatibility with modern Linux desktops
  • Structured Logging: Comprehensive debugging and monitoring
  • Keyboard Shortcuts: Full keyboard navigation support
  • Analytics Dashboard: Token usage tracking and cost monitoring
  • Unified Preferences: Single settings dialog for all configuration
  • Auto-Apply Settings: Instant UI updates without Apply buttons

🖼️ Screenshot

ChadLLM Interface

Clean, modern interface with multi-provider support, message timestamps, and native GNOME integration

🏗️ Architecture

ChadLLM uses a modular multi-LLM provider architecture with a unified response system that ensures consistent token tracking and analytics across all providers.

src/llm/                      # Multi-LLM Provider System
├── manager.rs               # Provider manager and selection
├── unified/                 # Unified response system
│   ├── response.rs          # UnifiedLLMResponse struct
│   ├── usage.rs             # TokenUsage, TokenDetails
│   └── metadata.rs          # ResponseMetadata
└── providers/               # Provider implementations
    ├── trait.rs             # LLMProvider trait
    ├── openai/              # OpenAI provider (GPT-4o, GPT-4o-mini)
    ├── xai/                 # Grok provider (grok-2, grok-3, grok-4-fast)
    ├── anthropic/           # Claude provider (planned)
    └── google/              # Gemini provider (planned)

Unified Response System

All providers convert their native API responses into a standardized UnifiedLLMResponse format that includes:

  • Content: The actual response text
  • Usage Data: Token counts (input, output, total) for accurate analytics
  • Metadata: Provider info, model name, response ID, timestamps
  • Cost Calculation: Automatic pricing based on current provider rates

This ensures consistent analytics, cost tracking, and database storage regardless of the underlying provider.
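
As a rough sketch (field and method names here are illustrative, not the actual definitions in src/llm/unified/), the unified format might look like:

```rust
// Illustrative sketch of the unified response types; the real
// definitions live in src/llm/unified/ and may differ in detail.

#[derive(Debug, Clone)]
pub struct TokenUsage {
    pub input_tokens: u32,
    pub output_tokens: u32,
    pub total_tokens: u32,
}

#[derive(Debug, Clone)]
pub struct ResponseMetadata {
    pub provider: String,    // e.g. "openai"
    pub model: String,       // e.g. "gpt-4o"
    pub response_id: String,
    pub timestamp: u64,      // Unix seconds
}

#[derive(Debug, Clone)]
pub struct UnifiedLLMResponse {
    pub content: String,
    pub usage: TokenUsage,
    pub metadata: ResponseMetadata,
}

impl UnifiedLLMResponse {
    /// Cost in USD from hypothetical per-million-token rates.
    pub fn cost_usd(&self, input_rate: f64, output_rate: f64) -> f64 {
        self.usage.input_tokens as f64 / 1e6 * input_rate
            + self.usage.output_tokens as f64 / 1e6 * output_rate
    }
}
```

Because every provider converts into this one shape, the analytics dashboard and SQLite storage only ever deal with a single type.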

🚀 Quick Start

Prerequisites

Install the required system dependencies:

# Fedora/RHEL/CentOS
sudo dnf install gtk4-devel libadwaita-devel pkg-config

# Ubuntu/Debian
sudo apt install libgtk-4-dev libadwaita-1-dev pkg-config

# Arch Linux
sudo pacman -S gtk4 libadwaita pkg-config

Build and Run

cargo build
cargo run

Development Setup

# Install schemas and icons for development
make install-dev

# Run with debug logging
GSETTINGS_SCHEMA_DIR=data RUST_LOG=debug cargo run

# Build release version
cargo build --release

⚙️ Configuration

ChadLLM uses GNOME's GSettings for configuration. Here are the most common commands:

# View all settings
gsettings list-recursively com.chadllm.app

# Set API keys
gsettings set com.chadllm.app openai-api-key "sk-your-key-here"
gsettings set com.chadllm.app grok-api-key "xai-your-key-here"

# Set default provider and model
gsettings set com.chadllm.app default-provider "openai"
gsettings set com.chadllm.app default-model "gpt-4o"

# Set conversation history limit (1-50)
gsettings set com.chadllm.app conversation-history-limit 15

# Set font settings
gsettings set com.chadllm.app font-family "monospace"
gsettings set com.chadllm.app font-size 14.0

# Set color scheme (default/light/dark)
gsettings set com.chadllm.app color-scheme "dark"

# Reset to defaults
gsettings reset-recursively com.chadllm.app
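
Internally, applying the conversation-history-limit setting amounts to keeping only the most recent turns before each API call. A minimal sketch (a hypothetical helper, not the actual implementation):

```rust
/// Keep only the most recent `limit` turns of a conversation.
/// Hypothetical helper mirroring the conversation-history-limit
/// GSettings key; each turn is a (user, assistant) message pair.
fn trim_history(turns: &[(String, String)], limit: usize) -> &[(String, String)] {
    // If fewer turns exist than the limit, keep them all.
    let start = turns.len().saturating_sub(limit);
    &turns[start..]
}
```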

⌨️ Keyboard Shortcuts

  • Ctrl+, - Open Preferences
  • Ctrl+? - Show Keyboard Shortcuts
  • F1 - Show About dialog
  • Ctrl+Q - Quit application
  • Enter - Send message
  • Shift+Enter - Add new line in message
  • Ctrl+L - Clear chat history
  • Ctrl+C - Copy last AI response

🔧 Development

Project Structure

chadllm/
├── src/
│   ├── main.rs                   # Application entry point
│   ├── config/                   # GSettings configuration management
│   ├── llm/                      # Multi-LLM Provider System
│   │   ├── manager.rs            # Provider manager and selection
│   │   ├── unified/              # Unified response system
│   │   └── providers/            # Provider implementations
│   ├── ui/                       # Main UI, header bar, and dialogs
│   ├── chat/                     # Chat interface and message handling
│   ├── conversation/             # Conversation history management
│   ├── database/                 # SQLite database and analytics
│   ├── dashboard/                # Analytics dashboard
│   ├── preferences/              # Unified preferences dialog
│   └── markdown/                 # Markdown rendering
├── assets/                       # Application assets
│   ├── icons/                    # Provider icons (light/dark themes)
│   └── logos/                    # Application logos
├── data/                         # GNOME integration files
└── scripts/                      # Development and maintenance scripts

Adding New LLM Providers

  1. Create provider module in src/llm/providers/{provider}/
  2. Implement LLMProvider trait
  3. Add conversion logic to unified format
  4. Register provider in manager
  5. Add provider icons and configuration
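
In spirit, steps 2-4 look roughly like the following simplified sketch (names are illustrative; the real trait in src/llm/providers/trait.rs is async and returns the full unified response type):

```rust
// Simplified, synchronous sketch of the provider system.
// EchoProvider stands in for a real provider module.

pub trait LLMProvider {
    fn name(&self) -> &'static str;
    fn models(&self) -> &[&'static str];
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct EchoProvider;

impl LLMProvider for EchoProvider {
    fn name(&self) -> &'static str { "echo" }
    fn models(&self) -> &[&'static str] { &["echo-1"] }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        // A real provider would call its HTTP API here and convert
        // the native response into the unified format.
        Ok(format!("echo: {prompt}"))
    }
}

#[derive(Default)]
pub struct ProviderManager {
    providers: Vec<Box<dyn LLMProvider>>,
}

impl ProviderManager {
    /// Register a provider so it appears in the selection dropdown.
    pub fn register(&mut self, p: Box<dyn LLMProvider>) {
        self.providers.push(p);
    }

    /// Look up a registered provider by name.
    pub fn get(&self, name: &str) -> Option<&dyn LLMProvider> {
        self.providers.iter().find(|p| p.name() == name).map(|b| b.as_ref())
    }
}
```

Trait objects keep the manager oblivious to concrete provider types, which is what makes adding a new backend a self-contained change.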

See TECHNICAL.md for detailed implementation guide.

📊 Logging and Debugging

Control log levels with the RUST_LOG environment variable:

# Show essential info (recommended for normal usage)
RUST_LOG=info cargo run

# Show detailed debug info (for troubleshooting)
RUST_LOG=debug cargo run

# Debug specific modules
RUST_LOG=chadllm::chat=debug,chadllm::api=info cargo run

🎯 Performance Goals

  • Startup: Sub-100ms application launch
  • Memory: <50MB RAM usage
  • Rendering: <10ms redraws, 60FPS animations
  • UI: Responsive interface for thousands of messages

📄 License

MIT License

📚 Documentation

  • TECHNICAL.md - Detailed implementation and provider-development guide

Built for speed, designed for GNOME, powered by Rust.

GNOME Integration

  • Native GNOME Integration: Full Adwaita theming and GNOME HIG compliance
  • GSettings Configuration: Integrated with GNOME's settings system
  • Hamburger Menu: Standard GNOME menu with Preferences, Keyboard Shortcuts, About, and Quit
  • Preferences Dialog: Native GNOME preferences window
  • About Dialog: Standard GNOME about dialog with proper metadata
  • Desktop Integration: Proper .desktop file, AppData, and GSettings schema
