
Aria Han

AI Systems Architect

Website Medium GitHub LinkedIn


I build systems that make AI agents predictable. Not smarter. Predictable.

Agents fail when they have to guess. The fix isn't better prompts; it's better contracts. Explicit scope. Typed learnings. Tiered routing based on complexity. The agent doesn't interpret. It executes against constraints.

Systems architecture applied to language models. Poet's precision for the prompts; engineer's rigor for the infrastructure.


Recent

What a Year of AI Taught Me About Freedom

Three startups in one year. The gap between idea and artifact collapsed. Corporations are deploying AI through workers who checked out years ago; the debt piles up everywhere they touch. AI isn't a tool. It's the gate torn off its hinges.

Stop Writing Markdown. Start Writing Memory.

Markdown is a human-readable format being used for machine-to-machine communication. I rebuilt my agent workflow around SQLite: three tables (context, learnings, errors), session startup hooks that inject ambient context before you type anything, zero rotting markdown. Representation is the bottleneck. Not intelligence. Not scale.
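The three-table layout can be sketched in a few lines of stdlib Python. This is a hypothetical reconstruction of the workflow described above, not the actual schema: column names, the `kind` values, and the startup query are assumptions.

```python
import sqlite3

# Illustrative schema for the three tables named above: context, learnings,
# errors. Column names are assumptions for the sake of the sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE context  (id INTEGER PRIMARY KEY, key TEXT, value TEXT);
CREATE TABLE learnings(id INTEGER PRIMARY KEY, category TEXT,
                       kind TEXT CHECK (kind IN ('failure','pattern','gotcha')),
                       note TEXT, evidence TEXT);
CREATE TABLE errors   (id INTEGER PRIMARY KEY, source TEXT, message TEXT);
""")

# A session-startup hook would run queries like this and inject the rows as
# ambient context before the first prompt is typed.
rows = conn.execute(
    "SELECT category, note FROM learnings WHERE kind = 'pattern'"
).fetchall()
```

Unlike a markdown note, each row is typed and queryable, so a hook can pull exactly the learnings relevant to the current session instead of dumping a whole file into context.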

Automations with Claude Code

Eight times a day, agents read my vaults, pull something from outside my usual orbit, publish the collision: personalized dispatches and Taper-style code-poems that build a self-growing gallery. The second brain metaphor is backwards. It shouldn't just store. It should think proactively.


The Thesis

Agents need constraints, not capability.

Most AI workflows operate on vibes. You describe a task. The agent interprets. If the output matches what you imagined, ship it. If not, iterate until exhausted.

I build systems that eliminate the interpretation step:

  • Contracts before code: Explicit acceptance criteria, scope boundaries, failure conditions. Written before implementation, not discovered during review.
  • Learnings as database objects: Not notes. Typed rows: failures, patterns, gotchas. Queryable by category. Evidence-gated. Surviving session boundaries.
  • Tiered routing: Complexity determines pipeline. 1-2 files: execute directly. 3-5 files: contract required. 6+: full verification with adversarial QA.
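The tiered-routing rule above is simple enough to state as code. A minimal sketch, using the file-count thresholds from the list; the `Tier` and `route` names are illustrative, not KERNEL's actual API.

```python
from enum import Enum

class Tier(Enum):
    DIRECT = "execute directly"
    CONTRACT = "contract required"
    FULL = "full verification with adversarial QA"

def route(files_touched: int) -> Tier:
    """Map task complexity (files touched) to a pipeline tier."""
    if files_touched <= 2:   # 1-2 files: execute directly
        return Tier.DIRECT
    if files_touched <= 5:   # 3-5 files: contract required
        return Tier.CONTRACT
    return Tier.FULL         # 6+: full verification
```

The point is that the pipeline is decided by a measurable property of the task, not by the agent's judgment.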

The agents aren't getting smarter. They're getting access to what their predecessors learned. That turns out to be enough.


Research

I measure how models compute, not whether they're correct.

latent-diagnostics: Representation-level analysis of LLMs via SAE attribution graphs. Grammar tasks show higher feature influence than reasoning tasks (d = 1.08). After length control, genuine computational regime differences emerge.

universal-spectroscopy-engine: Treats LLM activations as light spectra. 52% reduction in SAE reconstruction loss with structured vs natural language syntax. LLMs are vector computers pretending to be text processors.

experiments: Append-only specimen archive for LLM experiments.


Production

HeyContext: AI memory platform. Persistent context, psychological insight extraction, living projects.
Brink: iOS journaling with private AI and biometrics. SwiftUI, HealthKit.
HeyContent: Content management for creators. Cross-platform insights, conversational persona generation.

Infrastructure

KERNEL: AgentDB-first coding methodology. SQLite for agent memory; contracts before code; orchestration for complex work. Cursor version.
conductor: MCP server bridging Claude Desktop and Claude Code.
memory-pool: Memory isn't a timeline. Structured architecture for persistent AI context.
event-horizon: Physics-informed encryption. SYK scrambling, chimera camouflage, resonance locking.

Writing

Self-Learning Agent Civilization: The original system that started everything.

Stop Building Chatbots: Why the chat interface is a dead end.

KERNEL: Configuration that adapts as you work.

Semantic Drift is Quantum Decoherence: Multi-agent coordination through physics.

Why Prompt Engineering Can't Fix Hallucinations: The case for mechanistic intervention.


Python · TypeScript · Swift
FastAPI · Next.js · SvelteKit
Claude · SAEs · Modal

San Francisco

Email X
