A simple bash script to easily find and run llamafile models with fuzzy matching and interactive selection.
- 🔍 Fuzzy pattern matching - Find models by partial name
- 🎯 Interactive selection - Use fzf to choose from multiple matches
- 🌈 Colorized output - User-friendly colored messages
- 📁 Flexible search - Searches current directory and models/ subdirectory
- ✅ Permission checking - Warns if llamafile isn't executable
- 🔗 Easy installation - One-command setup with symlink
Run this one-liner to install directly from GitHub:

```bash
curl -fsSL https://raw.githubusercontent.com/rikby/llamafile-bin/main/install.sh | bash
```

This will:
- Create `~/.config/llamafile/` directory (uses `$XDG_CONFIG_HOME` if set)
- Download the latest `llamafile.sh` script from GitHub
- Create a configuration file with default settings
- Create `~/.config/llamafile/models/` directory for your models
- Create a symlink at `~/bin/llamafile`
- Provide PATH setup instructions
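The layout steps above can be sketched roughly as follows (an illustration only, not the actual `install.sh`; see the real script for the download and configuration logic):

```shell
# Rough sketch of the installer's layout steps (illustrative; install.sh is authoritative)
CONFIG_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/llamafile"

# Create the config and models directories, plus ~/bin for the symlink
mkdir -p "$CONFIG_DIR/models" "$HOME/bin"

# (install.sh also downloads llamafile.sh into "$CONFIG_DIR" at this point)

# Link the script onto the PATH under the short name "llamafile"
ln -sf "$CONFIG_DIR/llamafile.sh" "$HOME/bin/llamafile"
echo "Symlink: $HOME/bin/llamafile -> $CONFIG_DIR/llamafile.sh"
```

Because the symlink lives in `~/bin`, updating `llamafile.sh` in place updates the command everywhere it is used.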
Alternatively, install manually:

- Clone this repository:

  ```bash
  git clone https://github.com/rikby/llamafile-bin.git
  cd llamafile-bin
  ```

- Run the installation script:

  ```bash
  ./install.sh
  ```
Add your llamafile models to the `~/.config/llamafile/models/` directory:

```bash
# Download or copy your .llamafile files to the models directory
cp /path/to/your/model.llamafile ~/.config/llamafile/models/

# Make sure they're executable
chmod +x ~/.config/llamafile/models/*.llamafile
```

Add `~/bin` to your PATH if it's not already there:

```bash
# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/bin:$PATH"
```

Basic usage:

```bash
# Show all .llamafile files and select with fzf
llamafile

# Find files matching a pattern
llamafile Qwen
llamafile llama
llamafile phi
```

More examples:

```bash
# Run any available model (opens fzf selector)
llamafile

# Find and run Qwen models
llamafile Qwen
# Finds: Qwen2.5-7B-Instruct-1M-Q4_K_M.llamafile

# Find models starting with "llama"
llamafile llama

# Pass arguments to the llamafile
llamafile Qwen --help
llamafile phi --chat
```

Resulting directory layout:

```
~/.config/llamafile/
├── llamafile.sh   # Main script (downloaded automatically)
├── config.sh      # Configuration file
└── models/        # Directory for your .llamafile files
    └── (your .llamafile files go here)

~/bin/
└── llamafile      # Symlink to ~/.config/llamafile/llamafile.sh
```
Note: Models are stored in ~/.config/llamafile/models/ and are not tracked in git.
- bash or zsh shell - The scripts are compatible with both
- fzf - For enhanced interactive file selection (optional but recommended)

```bash
# Install via Homebrew (macOS)
brew install fzf

# Install via apt (Ubuntu/Debian)
sudo apt install fzf

# Install via yum (RHEL/CentOS)
sudo yum install fzf
```
Note: If fzf is not installed, the script will fall back to a simple numbered list for selection.
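The numbered-list fallback could look roughly like this (assumed behavior; `pick_numbered` is a hypothetical name, using bash's built-in `select`):

```shell
# Hypothetical sketch of the fallback selector used when fzf is absent
pick_numbered() {
  local choice
  select choice in "$@"; do               # prints a numbered menu on stderr
    # $choice is empty when the typed number is out of range
    [ -n "$choice" ] && { printf '%s\n' "$choice"; return; }
    echo "Invalid selection, try again." >&2
  done
}

# pick_numbered a.llamafile b.llamafile   # type 1 or 2 at the prompt
```

`select` handles the numbering, prompting, and re-prompting on bad input for free, which keeps the fallback short.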
- Search: Looks for `*.llamafile` files in:
  - Current directory (`.`)
  - Configured models directory (`~/.config/llamafile/models/`)
- Match: Finds files matching the pattern `[pattern]*.llamafile`
- Select:
  - If one match: runs it directly
  - If multiple matches: opens fzf for selection
  - If no matches: shows an error message
- Execute: Runs the selected llamafile with any additional arguments
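The search and match steps above can be sketched like this (a simplified illustration; `find_llamafiles` is a hypothetical helper and the real `llamafile.sh` may differ):

```shell
# Simplified sketch of the search/match steps (not the actual script)
find_llamafiles() {
  local pattern="$1" dir f
  local models_dir="${XDG_CONFIG_HOME:-$HOME/.config}/llamafile/models"
  # Search the current directory, then the configured models directory
  for dir in . "$models_dir"; do
    # Expand the [pattern]*.llamafile glob; unmatched globs stay literal,
    # so the -e test filters them out
    for f in "$dir"/"$pattern"*.llamafile; do
      [ -e "$f" ] && printf '%s\n' "$f"
    done
  done
}

# One match would be executed directly; several would be piped to fzf:
# find_llamafiles Qwen | fzf
```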
The script provides helpful error messages for common issues:
- fzf not installed: Falls back to numbered list selection
- No matches found: `Error: No llamafile found matching 'pattern'.`
- Not executable: `Error: File 'filename' is not executable. Please run 'chmod +x "filename"'.`
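The not-executable warning could be implemented along these lines (a hedged sketch; `check_executable` is a hypothetical name, not necessarily what `llamafile.sh` uses):

```shell
# Hypothetical sketch of the permission check behind the warning above
check_executable() {
  local file="$1"
  if [ ! -x "$file" ]; then
    printf "Error: File '%s' is not executable. Please run 'chmod +x \"%s\"'.\n" \
      "$file" "$file" >&2
    return 1   # non-zero so callers can abort before trying to run the model
  fi
}
```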
You can download llamafile models from various sources:
- Hugging Face - Search for models with "llamafile" tag
- Mozilla's llamafile releases - Pre-built models
- Convert your own GGUF models using llamafile tools
Popular models to try:
- Qwen2.5-7B-Instruct - Great general purpose model
- Llama-3.1-8B-Instruct - Strong instruction following
- Phi-3-mini - Lightweight and fast
If a model won't run, make sure it's executable:

```bash
chmod +x ~/.config/llamafile/models/your-model.llamafile
```

Make sure `~/bin` is in your PATH:

```bash
echo $PATH | grep "$HOME/bin"
```

Make sure you have .llamafile files in the models directory:

```bash
ls -la ~/.config/llamafile/models/
```

If interactive selection isn't working, install fzf using your system's package manager (see Requirements section).
This project is released into the public domain. Feel free to use, modify, and distribute as needed.