Thank you for your interest in contributing to genai! This document provides guidelines and information for contributors.
- Rust (latest stable version)
- Git
- Fork the repository
- Clone your fork locally
- Create a new branch for your feature or bugfix
- Make your changes
- Run tests and ensure code quality
- Submit a pull request
```sh
# Run all tests
cargo test

# Run tests with output
cargo test -- --nocapture

# Run specific test file
cargo test --test live_api_tests
```

genai includes comprehensive live API tests that validate functionality against real AI providers. These tests are located in `/tests/live_api_tests.rs`.
The live API tests make actual API calls to providers like OpenRouter and Anthropic. To run these tests:
- Set API Keys as Environment Variables:

```sh
export OPENROUTER_API_KEY="your-openrouter-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# Optional: Cerebras
export CEREBRAS_API_KEY="your-cerebras-api-key"
```

- Run the Live API Tests:

```sh
cargo test --test live_api_tests -- --nocapture
```

The test suite includes comprehensive validation of:
- ✅ Basic Chat Functionality - Tests basic chat completion
- ✅ Streaming Support - Validates real-time streaming responses
- ✅ Tool/Function Calling - Tests function calling capabilities
- ✅ JSON Mode - Validates structured JSON output
- ✅ Image Processing - Tests image analysis functionality
- ✅ Multiple Providers - Cross-provider compatibility testing
- ✅ Error Handling - Validates proper error scenarios
- ✅ Model Resolution - Tests model name resolution
- Enabled Tests: Core functionality tests are enabled by default
- Ignored Tests: Some tests are marked with `#[ignore]` to avoid excessive API calls during development
- Environment Checks: Tests automatically skip if required API keys are not set
When adding new live API tests:
- Follow the existing patterns in `/tests/live_api_tests.rs`
- Include environment variable checks for required API keys
- Use the `TestResult` type for consistent error handling
- Add appropriate assertions and logging
- Consider marking expensive tests with `#[ignore]`
Example test structure:
```rust
#[tokio::test]
async fn test_new_feature() -> TestResult<()> {
    if !has_env_key("PROVIDER_API_KEY") {
        println!("Skipping test: PROVIDER_API_KEY not set");
        return Ok(());
    }

    let client = Client::default();
    let chat_req = ChatRequest::new(vec![
        ChatMessage::user("Test message"),
    ]);

    let result = client.exec_chat("model-name", chat_req, None).await?;
    let content = result.first_text().ok_or("Should have content")?;

    assert!(!content.is_empty(), "Content should not be empty");
    println!("✅ Test passed: {}", content);
    Ok(())
}
```

```sh
# Check formatting
cargo fmt --check

# Apply formatting
cargo fmt
```

```sh
# Run clippy with strict warnings
cargo clippy --all-targets --all-features -- -W clippy::all

# Run clippy with default settings
cargo clippy --all-targets --all-features
```

- Follow Rust idioms and conventions
- Use meaningful variable and function names
- Add documentation for public APIs
- Keep functions focused and small
- Handle errors appropriately
When testing with specific providers, ensure model names are current and available:
- OpenRouter: Use namespaced models (e.g., `openrouter::anthropic/claude-3.5-sonnet`)
- Anthropic: Use current model names (e.g., `claude-3-5-sonnet-20241022`)
- Other Providers: Check provider documentation for latest model names
- Never commit API keys to the repository
- Use environment variables for API keys
- Document required environment variables in test files
- Consider using `.env` files for local development (add them to `.gitignore`)
- Ensure all tests pass
- Run code formatting and linting
- Update documentation if needed
- Write clear commit messages
- Submit pull request with descriptive title and description
```text
feat: add new feature description
fix: resolve issue description
docs: update documentation
test: add or improve tests
refactor: code refactoring
```
- Check existing issues and pull requests
- Review the codebase and examples
- Ask questions in pull requests
- Refer to provider documentation for API-specific details
When working with specific AI providers, refer to their official documentation for API-specific details.
- Never expose API keys in code or commits
- Validate and sanitize user inputs when appropriate
- Follow security best practices for API integrations
- Report security vulnerabilities privately
Thank you for contributing to genai! 🚀