# ai-accountability

Here are 27 public repositories matching this topic...

LUMINA-30: non-binding civilizational boundary framework for preserving effective human refusal authority before irreversible external impact from advanced AI systems.

  • Updated May 2, 2026

OpenExecution Provenance Specification — implements AEGIS (Agent Execution Governance and Integrity Standard) for auditable, tamper-evident AI agent behavioral records. Apache 2.0.

  • Updated Apr 10, 2026
  • Python

Practical and research-oriented exploration of ethics, responsibility, and governance for AI in software engineering. Policy frameworks, case studies, assessment tools, and actionable guidance for responsible AI adoption in engineering teams. Week 07 assignment for the AI & SE learning track.

  • Updated Dec 24, 2025
  • Jupyter Notebook

Execution provenance protocol for AI agents — tamper-evident, third-party verifiable behavioral records. Hash chains (SHA-256) + Ed25519 signatures + JCS (RFC 8785). Zero dependencies, 202 tests.

  • Updated Mar 11, 2026
  • JavaScript
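The last repository's core idea — a tamper-evident record chain where each entry commits to a SHA-256 hash of the canonicalized previous entry — can be sketched in a few lines. This is an illustrative sketch only, not that project's actual code: the `canonicalize`, `append_record`, and `verify` names are hypothetical, `json.dumps` with sorted keys is only a rough stand-in for full JCS (RFC 8785) canonicalization, and the Ed25519 signing step is omitted because it needs a third-party library.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def canonicalize(obj):
    # Rough stand-in for JCS (RFC 8785): sorted keys, no whitespace.
    # Real JCS also fixes number and string serialization rules.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def append_record(chain, record):
    # Each entry binds the previous entry's hash, so rewriting any
    # earlier record invalidates every hash after it (tamper evidence).
    prev = chain[-1]["hash"] if chain else GENESIS
    digest = hashlib.sha256(canonicalize({"record": record, "prev": prev})).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})
    return chain

def verify(chain):
    # Recompute every link; any edited record or broken link fails.
    prev = GENESIS
    for entry in chain:
        expected = hashlib.sha256(
            canonicalize({"record": entry["record"], "prev": prev})
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"agent": "a1", "action": "read_file"})
append_record(chain, {"agent": "a1", "action": "write_file"})
print(verify(chain))                          # True
chain[0]["record"]["action"] = "delete_file"  # tamper with history
print(verify(chain))                          # False
```

In the real protocol each entry's hash would additionally be signed with an Ed25519 key, so a third party can check both that the chain is internally consistent and that it was produced by the claimed agent.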
