Code scanner to check for issues in prompts and LLM calls
KAI Data Center Builder
La Perf is a framework for AI performance benchmarking, covering LLMs, VLMs, and embeddings, with power-metrics collection.
AI Benchmark knowledge base: a comprehensive collection of the benchmark suites that major AI companies use to evaluate model performance.
Building an AI team to play Codenames using top Large Language Models (LLMs), evaluating performance, and pitting them against each other. Explore their strategy and capabilities in this interactive competition!
Arbitrary Numbers
A functionally operational, mathematically unhinged system for achieving 10× effective memory amplification on Apple Silicon using quantized fractal compression, complex-plane KV decomposition, and Euler-aligned swap geometry.
Test AI provider latency (TTFB, TTFT, TPS) in your CI/CD pipeline. Benchmark OpenAI, Anthropic, Google, and more.
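This entry's code is not shown here; as an illustration of the metrics it names, the following is a minimal Python sketch of how TTFT (time to first token) and TPS (tokens per second) can be measured over any streaming response. The `fake_stream` generator is a hypothetical stand-in for a real provider's streaming API.

```python
import time

def measure_stream(stream):
    """Measure time-to-first-token (TTFT) and tokens-per-second (TPS)
    over any iterator that yields tokens. A real benchmark would wrap
    a provider's streaming API call here instead."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # latency until the first token arrives
        count += 1
    total = time.perf_counter() - start
    tps = count / total if total > 0 else 0.0
    return ttft, tps

# Simulated token stream standing in for a real LLM response.
def fake_stream(n_tokens=20, delay=0.005):
    for i in range(n_tokens):
        time.sleep(delay)
        yield f"tok{i}"

ttft, tps = measure_stream(fake_stream())
```

TTFB would be measured the same way, but timed from request dispatch to the first response byte rather than the first decoded token.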
Chrome extension that removes old ChatGPT messages from the DOM to keep long conversations fast and responsive.
Extreme-performance Metal kernels for MLX. Optimized for Apple Silicon. Part of the Eco-Metal ecosystem.
A streamlined, easy-to-use AI performance evaluation and summary template with a modern HTML UI, including an accurate percentage chart, comparison with other models, and precision, recall, F1-score, and confusion-matrix reporting. Lets you create a results chart within 3 minutes.
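The template above reports standard classification metrics; as a reference for what those numbers mean, here is a small, self-contained Python sketch (not taken from the repo) computing the binary confusion matrix and the precision/recall/F1 scores derived from it.

```python
def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels, with 1 = positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def prf(y_true, y_pred):
    """Precision, recall, and F1 from the confusion matrix."""
    tp, fp, fn, _ = confusion_matrix(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: model predictions vs. ground truth.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p, r, f = prf(y_true, y_pred)  # each evaluates to 0.75 here
```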
AI Performance Engineering Cheatsheet: From Cloud to Edge.
⚙️ Streamline AI performance evaluation with a user-friendly HTML template for quick charts and model comparisons in just minutes.
Speedtest for AI. Test latency to every major AI provider from your terminal.