A template for building structured, version-controlled knowledge bases with an ontology-first approach. Edit a config file, add markdown data, get a full HTML site + JSON API.
Knowledge as Code is a pattern created for PAICE.work PBC. It applies software engineering practices to knowledge management: plain text, Git-native, zero-dependency, ontology-driven, multi-output from a single source.
- AI Tool Watch — AI model capabilities across 12 products
- Every AI Law — global AI regulatory landscape
- Meeting Standards Reference — meeting facilitation standards
- Use this template — click "Use this template" on GitHub, or clone locally
- Edit `project.yml` — define your domain entities, groups, colors, and site identity
- Replace example data — see Replacing example data below
- Build — `node scripts/build.js` (or `npm run build`)
- Deploy — push to GitHub, Pages deploys automatically
- Static HTML site — homepage, list pages, detail pages, coverage matrix, timeline, comparison tool
- JSON API — programmatic access at `docs/api/v1/`
- Bridge pages — SEO-targeted pages like "Does X require Y?"
- Dark/light theme — with persistence
- Client-side search — lazy-loaded, keyboard-navigable
- Sortable tables — click any column header to sort
- MCP server — AI agent access to your knowledge base
- Discovery files — llms.txt, agents.json, RSS for machine consumption
- Zero dependencies — Node.js built-ins only
```
project.yml      # Domain configuration (edit this first)
data/
  examples/      # Example data (replace with your own)
    primary/     # Stable anchor entities (e.g., requirements, obligations)
    container/   # Grouping entities (e.g., frameworks, regulations)
    authority/   # Source entities (e.g., organizations, regulators)
    mapping/     # index.yml connecting containers to primaries
scripts/
  build.js       # Config-driven site generator
  validate.js    # Cross-reference validator
docs/            # Generated output (do not edit)
```
Every knowledge-as-code project has four entity roles:
Authority → Container → Provision → Primary
| Role | What it is | Example domains |
|---|---|---|
| Primary | Stable anchors that don't change when sources change | Requirements, Obligations, Capabilities, Controls |
| Container | Grouping entities that contain provisions | Regulations, Frameworks, Products, Standards |
| Authority | Source entities that produce containers | Regulators, Vendors, Standards bodies |
| Secondary | Mapping entities connecting containers to primaries | Provisions, Implementations, Mappings |
Primaries are stable; containers are unstable. When a framework is amended, its provisions change, but the underlying requirements persist.
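Concretely, the Secondary role is what absorbs that churn. A mapping entry in `mapping/index.yml` might look like the sketch below — the keys are illustrative, not the template's exact schema (see `data/_schema.md` for the real format):

```yaml
# Hypothetical mapping entry: one container provision → one stable primary.
- container: iso-27001            # the framework (unstable; changes when amended)
  provision: "A.5.1"              # the clause within that container
  primary: access-control-policy  # the stable requirement it maps to
```

When ISO 27001 is revised, only entries like this one need updating; the `access-control-policy` primary stays put.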
All domain-specific settings live in project.yml:
- Entity names — what to call each entity type (e.g., "Requirement" vs "Obligation")
- Groups — categories for primary entities, with dark/light mode colors
- Statuses — lifecycle states for containers, with colors
- Navigation — site nav items
- Bridge pages — which SEO pages to generate
- Theme — accent colors
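Putting those settings together, a minimal `project.yml` might look like the sketch below. Field names here are illustrative assumptions — consult the template's shipped `project.yml` for the authoritative schema:

```yaml
# Illustrative sketch only; the shipped project.yml is the real reference.
site:
  title: "My Knowledge Base"
entities:
  primary:
    name: "Requirement"       # what to call this entity type
    directory: "requirements" # where the build script looks for files
  container:
    name: "Framework"
    directory: "frameworks"
groups:
  - id: governance
    label: "Governance"
    color_light: "#1a6e52"    # light-mode color
    color_dark: "#4cc98f"     # dark-mode color
theme:
  accent: "#0a7cff"
```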
The template ships with example data in `data/examples/` (ISO 27001, NIST CSF). To replace it with your own domain:

1. Update `project.yml` — rename entity types, groups, statuses, and colors to match your domain. The directory names under `entities.*.directory` control where the build script looks for files.
2. Delete example files — remove the contents of `data/examples/requirements/`, `data/examples/frameworks/`, `data/examples/organizations/`, and `data/examples/mapping/index.yml`.
3. Create your data files — add markdown files following the format documented in `data/_schema.md`. Each entity type has specific frontmatter requirements and body structure.
4. Update the mapping file — create entries in `data/examples/mapping/index.yml` that connect your containers to your primaries.
5. Validate and build:

   ```
   node scripts/validate.js   # Check cross-references
   node scripts/build.js      # Generate the site
   ```
The build script looks for data in data/examples/ first, then data/. You can rename data/examples/ to data/ if you prefer a flatter structure.
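For orientation, each data file is markdown with YAML frontmatter. The sketch below shows the general shape of a primary entity; the field names are hypothetical, and `data/_schema.md` is the authoritative reference:

```markdown
---
id: access-control-policy   # hypothetical fields; see data/_schema.md
title: Access Control Policy
group: governance
last_verified: 2025-06-01
---

Organizations must define and enforce a policy governing who may
access which systems and data, and review it on a regular cycle.
```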
```
node scripts/build.js      # Build the site (or: npm run build)
node scripts/validate.js   # Validate cross-references (or: npm run validate)
node scripts/verify.js     # Check entity freshness (or: npm run verify)
```

- File-over-App — data in markdown files, not a database
- Zero dependencies — no npm install, no supply chain risk
- Bespoke static generation — the build script is the specification
- GitOps — Git is the single source of truth
Every Knowledge-as-Code site includes machine-readable discovery files:
- MCP Server — `mcp-server.js` provides read-only access to all entities via Model Context Protocol
- llms.txt — generated at `docs/llms.txt` with entity model, API endpoints, and entity listings
- agents.json — machine-readable metadata at `docs/agents.json` for agent discovery
- RSS feed — recent updates at `docs/index.xml`
- JSON API — programmatic access at `docs/api/v1/`
The MCP server exposes your knowledge base as tools that AI agents can call. Tool names are dynamically generated from your project.yml entity configuration.
Add to Claude Code (or any MCP-compatible client) via `mcp.json`:

```json
{
  "mcpServers": {
    "knowledge-base": {
      "command": "node",
      "args": ["mcp-server.js"],
      "description": "Read-only access to the knowledge base"
    }
  }
}
```

Test it directly:

```
node mcp-server.js
```

The server reads `project.yml` at startup and exposes tools for listing and retrieving each entity type. For example, with the default config you get tools like `list_requirements`, `get_requirement`, `list_frameworks`, `get_framework`, etc. The exact tool names depend on your entity configuration.
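The naming scheme is easy to sketch. Assuming tool names are the lowercase entity name with a `list_`/`get_` prefix and naive `s` pluralization (the real derivation lives in `mcp-server.js` and may be more sophisticated):

```javascript
// Sketch of MCP tool-name derivation from an entity config.
// Assumes naive "s" pluralization; mcp-server.js may differ.
function toolNames(entities) {
  const names = [];
  for (const entity of Object.values(entities)) {
    const base = entity.name.toLowerCase(); // "Requirement" -> "requirement"
    names.push(`list_${base}s`);            // list_requirements
    names.push(`get_${base}`);              // get_requirement
  }
  return names;
}

const entities = {
  primary: { name: "Requirement", directory: "requirements" },
  container: { name: "Framework", directory: "frameworks" },
};
console.log(toolNames(entities));
// → ["list_requirements", "get_requirement", "list_frameworks", "get_framework"]
```

Renaming an entity type in `project.yml` therefore renames the MCP tools on the next server start.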
Knowledge as Code includes a verification scaffold for detecting stale data:
- Add `last_verified: YYYY-MM-DD` to entity frontmatter
- Run `node scripts/verify.js` to check for staleness
- Configure the threshold in `project.yml` under `verification.staleness_days`
- See VERIFICATION.md for details on adding AI-assisted verification
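The staleness rule itself is just date arithmetic. A minimal sketch, assuming an entity is stale when its `last_verified` date is older than the configured `staleness_days` (the actual check lives in `scripts/verify.js`):

```javascript
// Sketch of the freshness check: stale when last_verified is older
// than the configured threshold. Not the verify.js implementation.
function isStale(lastVerified, stalenessDays, now = new Date()) {
  const verified = new Date(lastVerified);          // "YYYY-MM-DD"
  const ageDays = (now - verified) / (1000 * 60 * 60 * 24);
  return ageDays > stalenessDays;
}

console.log(isStale("2024-01-01", 90, new Date("2024-06-01"))); // true
console.log(isStale("2024-05-15", 90, new Date("2024-06-01"))); // false
```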
Knowledge as Code is part of a broader set of open standards:
- Graceful Boundaries — How services communicate operational limits to humans and agents
- Skill Provenance — Version identity that travels with agent skill bundles
- Siteline — AI agent readiness scanner for websites
- Knowledge as Code — The pattern definition and community hub
Knowledge as Code has six defining properties:
- Plain text canonical — knowledge in human-readable, version-controlled files
- Self-healing — automated verification detects when knowledge drifts from reality
- Multi-output — one source produces every format needed (HTML, JSON API, MCP, SEO pages)
- Zero-dependency — no external packages; nothing breaks when you come back in a year
- Git-native — Git is the collaboration layer, audit trail, and deployment trigger
- Ontology-driven — a vendor-neutral taxonomy maps to domain-specific implementations
Read the full pattern definition at knowledge-as-code.com.
Knowledge as Code is a PAICE.work project. See ATTRIBUTION.md for details.
When you use this template, update the following:
- Edit `project.yml` with your domain entities, colors, and site identity
- Replace the example data in `data/examples/` with your own
- Update `docs/CNAME` with your custom domain (or remove it)
- Push to GitHub — Pages deploys automatically via the included workflow
Knowledge as Code is a PAICE.work project. PAICE.work PBC is a public benefit corporation building infrastructure for productive collaboration between humans and autonomous agents. Structured, version-controlled, agent-accessible knowledge is a foundation for that collaboration.
MIT