Cryptographic audit receipts for AI coding agents. Ed25519 + Merkle + RFC 3161 TSA. Supports Claude Code & Cursor.
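The Merkle half of the Ed25519 + Merkle + RFC 3161 pattern can be sketched in a few lines; this is a minimal illustration, not this project's actual API — the `merkle_root` helper and leaf format are assumptions. Ed25519 signing of the root and RFC 3161 timestamping are omitted, as they need a crypto library (e.g. `cryptography`) and a TSA client.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a binary Merkle root; an odd node is promoted unchanged."""
    if not leaves:
        return sha256(b"")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(sha256(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])  # promote the odd node
        level = nxt
    return level[0]

# Each agent action becomes a leaf; signing and timestamping the single
# root receipt then covers every action at once.
root = merkle_root([b"edit main.py", b"run tests", b"commit"])
print(root.hex())
```

Anchoring many actions under one signed root is what keeps per-action overhead low: only the 32-byte root needs an Ed25519 signature and a TSA timestamp.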
ATLAST Protocol — The Trust Layer for the Agent Economy. Make AI agent work verifiable with Evidence Chain Protocol (ECP). Open source · MIT License · weba0.com
Append-only event kernel with Ed25519-signed Merkle checkpoints. Every AI action gets a verifiable receipt.
LUMINA-30: non-binding civilizational boundary framework for preserving effective human refusal authority before irreversible external impact from advanced AI systems.
Cryptographic receipt system for AI agent accountability. Tamper-evident, hash-chained receipts with Ed25519/HMAC signing.
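A hash-chained receipt with HMAC signing, as described above, can be sketched with only the standard library; the record layout and function names here are illustrative assumptions, not the project's schema.

```python
import hashlib
import hmac
import json

def append_receipt(chain: list[dict], action: str, key: bytes) -> dict:
    """Append a tamper-evident receipt linked to the previous one."""
    prev = chain[-1]["mac"] if chain else "0" * 64
    body = {"action": action, "prev": prev}
    payload = json.dumps(body, sort_keys=True).encode()
    body["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list[dict], key: bytes) -> bool:
    """Recompute every MAC and link; editing any receipt breaks both."""
    prev = "0" * 64
    for rec in chain:
        body = {"action": rec["action"], "prev": rec["prev"]}
        payload = json.dumps(body, sort_keys=True).encode()
        mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if rec["prev"] != prev or not hmac.compare_digest(mac, rec["mac"]):
            return False
        prev = rec["mac"]
    return True

key = b"demo-key"
chain: list[dict] = []
append_receipt(chain, "generated patch", key)
append_receipt(chain, "ran test suite", key)
print(verify_chain(chain, key))   # True
chain[0]["action"] = "tampered"
print(verify_chain(chain, key))   # False
```

HMAC gives symmetric tamper evidence (verifier shares the key); Ed25519 would let third parties verify without the signing key, at the cost of key management.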
Measurement infrastructure for multi-turn AI interaction safety evaluation
AISS v2.0.0 — standalone release
Official CLG wrapper for Model Context Protocol: tamper-evident decision and outcome receipts, plus real-time mandate enforcement for MCP tool calls.
A surfaces bottlenecks in human-only workflows, while B targets agentic and human-in-the-loop (HITL) workflows, ensuring accountability and preventing automation bias.
Eziokwu: Heart-centered AI accountability framework for verifying algorithmic decision-making and organizing evidence for regulatory evaluation. Truth infrastructure built on Igbo philosophical principles.
∈ Principle — A philosophical and institutional proposal for public authorship and responsibility in the age of AI. Foundational text with DOI (Zenodo). CC BY 4.0.
Neutral reference framework for institutional accountability and post-incident review in high-risk autonomous AI systems.
Gamified accountability system for Claude Code workflows with progressive consequences, strikes, and rewards. Based on ArXiv 2506.01347 NSR research.
OpenExecution Provenance Specification — implements AEGIS (Agent Execution Governance and Integrity Standard) for auditable, tamper-evident AI agent behavioral records. Apache 2.0.
Practical and research-oriented exploration of ethics, responsibility, and governance for AI in software engineering. Policy frameworks, case studies, assessment tools, and actionable guidance for responsible AI adoption in engineering teams. Week 07 assignment for the AI & SE learning track.
Bolt-on Python SDK for tamper-evident AI decision records. ADR Specification v0.1 + Reasoning Capture Methodology v1.0.
Post-deployment behavioral measurement framework for AI agents — traces failures, quantifies preventable waste, maps correction persistence, and produces governance-ready evidence from real production sessions
Execution provenance protocol for AI agents — tamper-evident, third-party verifiable behavioral records. Hash chains (SHA-256) + Ed25519 signatures + JCS (RFC 8785). Zero dependencies, 202 tests.
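The SHA-256 hash chain over JCS-canonicalized records described above can be approximated as follows. This is a sketch, not the project's implementation: `json.dumps(sort_keys=True, separators=(",", ":"))` approximates RFC 8785, which additionally mandates specific number serialization and string-escape rules, and the record fields are invented for illustration.

```python
import hashlib
import json

def canonicalize(obj) -> bytes:
    """Approximate RFC 8785 (JCS): sorted keys, no whitespace, UTF-8.
    Full JCS adds stricter number-formatting and escaping rules."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def chain_hash(prev_hash: str, record: dict) -> str:
    """Link a behavioral record to its predecessor via SHA-256."""
    payload = canonicalize({"prev": prev_hash, "record": record})
    return hashlib.sha256(payload).hexdigest()

genesis = "0" * 64
h1 = chain_hash(genesis, {"tool": "shell", "cmd": "pytest"})
h2 = chain_hash(h1, {"tool": "editor", "file": "app.py"})

# Canonicalization makes the hash independent of key order:
assert canonicalize({"b": 1, "a": 2}) == canonicalize({"a": 2, "b": 1})
```

Canonicalization is what makes the records third-party verifiable: any verifier that re-serializes the same JSON gets byte-identical input to SHA-256.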
An execution kernel that treats LLMs as untrusted compute and enforces policy via deterministic runtime interception.
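Treating the LLM as untrusted compute means every tool call it requests passes through a deterministic policy gate before executing. A minimal sketch, assuming an invented default-deny policy table and `intercept` helper (not this kernel's real interface):

```python
from typing import Callable

# Illustrative policy table: unknown tools are denied by default.
POLICY: dict[str, bool] = {
    "read_file": True,
    "write_file": True,
    "delete_file": False,
}

class PolicyViolation(Exception):
    pass

def intercept(tool: str, fn: Callable, *args):
    """Deterministically allow or deny a requested tool call
    before it executes; the model never calls fn directly."""
    if not POLICY.get(tool, False):
        raise PolicyViolation(f"{tool} denied by policy")
    return fn(*args)

store: dict[str, str] = {}
intercept("write_file", store.__setitem__, "notes.txt", "hello")
print(store)  # {'notes.txt': 'hello'}
try:
    intercept("delete_file", store.__delitem__, "notes.txt")
except PolicyViolation as e:
    print(e)  # delete_file denied by policy
```

Because the gate is ordinary deterministic code, its decisions are reproducible and auditable regardless of what the model outputs.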