Life-Experimentalist / EquiLens (Python, 2 stars, updated Oct 27, 2025)
🎯 EquiLens - AI bias detection platform for LLMs via Ollama. Interactive CLI with corpus generation, multi-metric auditing, statistical analysis, and visualization. Features enhanced auditors, dynamic concurrency, resume capability, and rich progress tracking.
Alt links: https://equilens.pages.dev/ , https://life-experimentalist.github.io/EquiLens/
Topics: statistical-analysis, ai-ethics, bias-measurement, rich-cli, ollama-integration, uv-python, ai-research-tools, ai-bias-detection, llm-auditing
16246541-corp / indoctrine.ai (Python, 1 star, updated Nov 25, 2025)
Agent Indoctrination 🚀 - AI safety, bias, fairness, ethics, and compliance testing framework.
Topics: testing, ai, safety, compliance, bias, fairness, fairness-testing, fairness-ai, fairness-ml, ethical-ai, llm, llm-audit, llm-auditing
CSHVienna / LLMScholarBench (Jupyter Notebook, 0 stars, updated Feb 23, 2026)
LLM-based scholar recommendation auditing.
Topics: benchmarking, interventions, llm-auditing
Venkateshwar-PortoAI / facet-benchmark (Python, 0 stars, updated Apr 14, 2026)
A four-probe benchmark for measuring attribution faithfulness in multi-factor LLM reasoning. Eight frontier models, CI-verified numeric claims, Zenodo-archived.
Topics: benchmark, attribution, ai-safety, explainability, legal-nlp, mechanistic-interpretability, llm-evaluation, llm-auditing, trustworthy-ml, llm-faithfulness