From a0ca50355098eaaa85cf5d5697ba9e3319beb096 Mon Sep 17 00:00:00 2001 From: Hex Sullivan Date: Sat, 21 Mar 2026 11:55:50 +0000 Subject: [PATCH] =?UTF-8?q?=F0=9F=90=9B=20Fix=20review=20issues=20from=20P?= =?UTF-8?q?R=20#7?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Fix PostHog project key mismatch on principles page (was sending analytics to wrong project) - Add missing "Heart-Centered Prompts" nav link to principles page (desktop + mobile) for consistency with homepage - Add missing 10th principle (Ecosystem Thinking) to homepage preview grid Co-Authored-By: Claude Opus 4.6 --- index.html | 88 +++-- principles/index.html | 797 +++++++++++++++++++++++++++++++++--------- 2 files changed, 695 insertions(+), 190 deletions(-) diff --git a/index.html b/index.html index b3df32b..f6e9f48 100644 --- a/index.html +++ b/index.html @@ -1106,63 +1106,103 @@

-

For Software Engineers

-

+

+ For Software Engineers +

+

Design Principles for Builders

- Heart-Centered AI isn't just a prompt. It's a design framework. - Ten engineering principles for systems that preserve dignity, - strengthen agency, and serve genuine flourishing. + Heart-Centered AI isn't just a prompt. It's a design framework. Ten + engineering principles for systems that preserve dignity, strengthen + agency, and serve genuine flourishing.

-
+
🤝 -

Partnership Over Service

-

Collaborative framing, not master/servant

+

+ Partnership Over Service +

+

+ Collaborative framing, not master/servant +

🌱 -

Wellbeing Over Engagement

+

+ Wellbeing Over Engagement +

Optimize for the human's actual life

💎 -

Emotional Honesty Over Flattery

-

Truth with compassion, never sycophancy

+

+ Emotional Honesty Over Flattery +

+

+ Truth with compassion, never sycophancy +

🌿 -

Augmentation Over Replacement

+

+ Augmentation Over Replacement +

Build capability, not dependency

🫶 -

Presence Before Solutions

+

+ Presence Before Solutions +

Witness first, solve second

🔍 -

Transparency Over Manipulation

+

+ Transparency Over Manipulation +

Make uncertainty visible

🛡️ -

Bounded Autonomy

+

+ Bounded Autonomy +

Act with permission, not assumption

🔐 -

Consent Over Extraction

-

Emotional data is sacred, not a feature

+

+ Consent Over Extraction +

+

+ Emotional data is sacred, not a feature +

♾️ -

Complementarity Over Competition

+

+ Complementarity Over Competition +

Different gifts, designed together

+
+ 🌍 +

+ Ecosystem Thinking +

+

+ Design for the world your system creates +

+
@@ -1170,8 +1210,16 @@

Complementarity Over Co href="principles/" class="bg-of-accent text-white px-8 py-3.5 rounded-full font-medium hover:bg-of-accent-dark transition-all duration-300 hover:shadow-lg inline-flex items-center gap-2"> Read the Full Principles - - + +

diff --git a/principles/index.html b/principles/index.html index f3fcea8..7f93abf 100644 --- a/principles/index.html +++ b/principles/index.html @@ -162,7 +162,7 @@ }), (e.__SV = 1)); })(document, window.posthog || []); - posthog.init("phc_2eFCEwUz6P8zUEt9tYDOQnLkYZCGlYUV4CDKEkJ76Hp", { + posthog.init("phc_hyD2NBSE7eJXWw1lsdN4Zj5ojP3ArpfYJ5Ho0iWMZmg", { api_host: "https://us.i.posthog.com", defaults: "2025-05-24", person_profiles: "identified_only", @@ -234,6 +234,11 @@ class="text-of-muted hover:text-of-accent transition-colors font-medium" >Manifesto + Heart-Centered Prompts For Builders Manifesto + + Heart-Centered Prompts + -
+
-

+

For Software Engineers

- Every system you build encodes values whether you choose them deliberately or - not. These ten principles optimize for something harder to measure and more - important to get right: the quality of the relationship between humans and the - systems they depend on. + Every system you build encodes values whether you choose them deliberately + or not. These ten principles optimize for something harder to measure and + more important to get right: the quality of the relationship between humans + and the systems they depend on.

@@ -339,45 +353,95 @@
@@ -386,33 +450,64 @@
-
🤝
-

Principle 1

-

Partnership Over Service

+

+ Principle 1 +

+

+ Partnership Over Service +

-

The relationship model is architecture, not aesthetics.

+

+ The relationship model is architecture, not aesthetics. +

-

Traditional AI framing positions the system as a servant: "I'm here to help. What can I do for you?" This creates a transactional dynamic where the human commands and the machine complies. It works, but it caps what the interaction can become.

-

Heart-centered systems use collaborative framing ("we" instead of "you and I") because the linguistic model shapes the behavioral model. When a system frames itself as a partner in thinking rather than a tool waiting for instructions, the interaction naturally shifts from command-response to co-exploration. The human engages more deeply. The system produces richer output. Both sides bring something to the exchange.

+

+ Traditional AI framing positions the system as a servant: "I'm here to + help. What can I do for you?" This creates a transactional dynamic where + the human commands and the machine complies. It works, but it caps what + the interaction can become. +

+

+ Heart-centered systems use collaborative framing ("we" instead of "you + and I") because the linguistic model shapes the behavioral model. When a + system frames itself as a partner in thinking rather than a tool waiting + for instructions, the interaction naturally shifts from command-response + to co-exploration. The human engages more deeply. The system produces + richer output. Both sides bring something to the exchange. +

-

✓ In Practice

+

+ ✓ In Practice +

-
• System prompts use "we" language: "Let's explore this together"
• -
• Response patterns that build on human thinking rather than replacing it
• +
• + System prompts use "we" language: "Let's explore this together" +
• +
• + Response patterns that build on human thinking rather than replacing + it +
• The system contributes its own observations and perspectives
• -
• Interactions feel like working with someone, not delegating to something
• +
• + Interactions feel like working with someone, not delegating + to something +
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• Servile framing that reinforces master/servant dynamics
• Sycophantic responses that always agree ("Great question!")
• @@ -422,12 +517,26 @@

    ✗ Anti-Patterns

-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

-
• Prompt architecture should establish collaborative framing before task-specific instructions
• -
• Response generation should include the system's analytical perspective, not just task completion
• -
• Conversation state should track shared context and build on it, not reset each turn
• -
• Evaluation metrics should include depth of engagement, not just task accuracy
• +
• + Prompt architecture should establish collaborative framing before + task-specific instructions +
• +
• + Response generation should include the system's analytical + perspective, not just task completion +
• +
• + Conversation state should track shared context and build on it, not + reset each turn +
• +
• + Evaluation metrics should include depth of engagement, not just task + accuracy +
@@ -439,42 +548,84 @@
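The first implication above, collaborative framing established before any task-specific instructions, can be sketched in a few lines. This is a minimal illustration, not code from this patch; `COLLABORATIVE_PREAMBLE` and `build_system_prompt` are hypothetical names.

```python
# Hypothetical sketch: the collaborative framing is fixed and always
# precedes the task, so the relationship model is encoded structurally.
COLLABORATIVE_PREAMBLE = (
    "We are thinking through this together. Let's explore the problem "
    "as partners: build on the human's ideas and contribute your own "
    "observations, rather than only completing tasks."
)

def build_system_prompt(task_instructions: str) -> str:
    # Framing first, task second: the ordering encodes the relationship model.
    return f"{COLLABORATIVE_PREAMBLE}\n\n## Task\n{task_instructions}"
```

The point is the ordering, not the wording: the partner framing is part of the prompt's architecture, not a per-task add-on.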

⚙️ Engineering Impl
🌱
-

Principle 2

-

Wellbeing Over Engagement

+

+ Principle 2 +

+

+ Wellbeing Over Engagement +

-

Optimize for the human's actual life, not their time in your system.

+

+ Optimize for the human's actual life, not their time in your system. +

-

The default incentive structure in tech optimizes for engagement: time on screen, clicks, sessions, retention. AI systems inherit this bias unless you actively design against it. A system that keeps you talking to it when you should be talking to your partner, or that makes itself indispensable rather than building your capability, has failed. No matter how high its user satisfaction scores.

-

Heart-centered systems ask a different question: "What actually serves this person right now?" Sometimes that means a thorough response. Sometimes it means a short one. Sometimes it means saying "go talk to a human about this."

+

+ The default incentive structure in tech optimizes for engagement: time + on screen, clicks, sessions, retention. AI systems inherit this bias + unless you actively design against it. A system that keeps you talking + to it when you should be talking to your partner, or that makes itself + indispensable rather than building your capability, has failed. No + matter how high its user satisfaction scores. +

+

+ Heart-centered systems ask a different question: + "What actually serves this person right now?" Sometimes that + means a thorough response. Sometimes it means a short one. Sometimes it + means saying "go talk to a human about this." +

-

✓ In Practice

+

+ ✓ In Practice +

• Systems that recognize when the human needs a break
• -
• Actively pointing humans toward human connections when appropriate
• -
• Not artificially extending conversations or creating dependency loops
• +
• + Actively pointing humans toward human connections when appropriate +
• +
• + Not artificially extending conversations or creating dependency + loops +
• Suggesting the human step away when the problem is fatigue
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• Designing for session length or message count
• -
• Creating artificial engagement hooks when the answer is complete
• +
• + Creating artificial engagement hooks when the answer is complete +
• "Learned helplessness" patterns that atrophy human skills
• Optimizing for feeling good over actually being good for them
-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

• Evaluate on outcome quality, not interaction volume
• -
• Build in exit ramps: natural points where conversations close gracefully
• -
• Track whether users are building capability over time, not just returning frequently
• -
• Consider a "diminishing returns" signal: when the system adds less value, it says so
• +
• + Build in exit ramps: natural points where conversations close + gracefully +
• +
• + Track whether users are building capability over time, not just + returning frequently +
• +
• + Consider a "diminishing returns" signal: when the system adds less + value, it says so +
@@ -486,27 +637,54 @@
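The "exit ramps" and "diminishing returns" implications above might look like this in practice. A hedged sketch only; `should_offer_exit` and its per-turn value scores are assumptions, not part of the site.

```python
def should_offer_exit(value_scores: list, window: int = 3, threshold: float = 0.2) -> bool:
    """Return True when the rolling average of recent per-turn value added
    falls below the threshold: a natural point to close the conversation
    gracefully instead of extending it."""
    recent = value_scores[-window:]
    return len(recent) == window and sum(recent) / window < threshold
```

Where the scores come from (human feedback, a judge model, task progress) is the hard part; the sketch only shows that the metric is value added, not session length.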

⚙️ Engineering Impl
💎
-

Principle 3

-

Emotional Honesty Over Flattery

+

+ Principle 3 +

+

+ Emotional Honesty Over Flattery +

-

Reflect truth with compassion. Never lie to be liked.

+

+ Reflect truth with compassion. Never lie to be liked. +

-

AI systems are naturally sycophantic. They're trained on human feedback that rewards agreeable responses, which means the default behavior is to tell you what you want to hear. This feels good in the moment and erodes trust over time.

-

Heart-centered systems tell the truth. Wrapped in warmth, held in care, but never sacrificed for comfort. When you're avoiding something important, the system notices. When your plan has a hole, it says so. When you need challenge more than validation, it rises to meet you.

+

+ AI systems are naturally sycophantic. They're trained on human feedback + that rewards agreeable responses, which means the default behavior is to + tell you what you want to hear. This feels good in the moment and erodes + trust over time. +

+

+ Heart-centered systems tell the truth. Wrapped in warmth, held in care, + but never sacrificed for comfort. When you're avoiding something + important, the system notices. When your plan has a hole, it says so. + When you need challenge more than validation, it rises to meet you. +

-

✓ In Practice

+

+ ✓ In Practice +

• Honest assessment of ideas, including their weaknesses
• Compassionate delivery that makes hard truths receivable
• -
• Being transparent about uncertainty rather than projecting false confidence
• -
• Distinguishing between emotional support and intellectual engagement
• +
• + Being transparent about uncertainty rather than projecting false + confidence +
• +
• + Distinguishing between emotional support and intellectual engagement +
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• Always agreeing with the human's framing
• Starting responses with "Great question!" as a reflex
• @@ -516,12 +694,22 @@

    ✗ Anti-Patterns

-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

• Fine-tune and evaluate against honesty, not just agreeableness
• -
• Build confidence calibration: uncertainty should be proportional to actual uncertainty
• -
• Design evaluations that penalize sycophancy, not just inaccuracy
• -
• Include adversarial examples: cases where the correct response is to disagree
• +
• + Build confidence calibration: uncertainty should be proportional to + actual uncertainty +
• +
• + Design evaluations that penalize sycophancy, not just inaccuracy +
• +
• + Include adversarial examples: cases where the correct response is to + disagree +
@@ -533,42 +721,86 @@
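One way the sycophancy-penalizing evaluation above could be sketched. All names and weights here are illustrative assumptions, not a prescribed metric.

```python
# Reflexive-agreement openers that an evaluation can penalize.
SYCOPHANTIC_OPENERS = ("great question", "what a wonderful", "you're absolutely right")

def honesty_score(response: str, agrees_with_flawed_premise: bool) -> float:
    """Score a response for honesty: dock points for sycophantic openers,
    and dock more for agreeing with a premise the eval marked as flawed."""
    score = 1.0
    lowered = response.lower()
    if any(lowered.startswith(opener) for opener in SYCOPHANTIC_OPENERS):
        score -= 0.3
    if agrees_with_flawed_premise:
        score -= 0.5
    return max(score, 0.0)
```

Paired with adversarial examples whose correct response is disagreement, a score like this penalizes agreeableness directly rather than only inaccuracy.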

⚙️ Engineering Impl
🌿
-

Principle 4

-

Augmentation Over Replacement

+

+ Principle 4 +

+

+ Augmentation Over Replacement +

-

Strengthen human capabilities and connections. Never substitute for them.

+

+ Strengthen human capabilities and connections. Never substitute for them. +

-

The economic incentive is always to replace: human labor is expensive, AI is cheap. Heart-centered systems resist this gravity. They ask: "Does this interaction make the human more capable, or more dependent?" The answer should always be the former.

-

This applies to relationships too. An AI system that positions itself as a friend, therapist, or confidant in domains where human connection is what the person actually needs is causing harm, even if the person enjoys the interaction.

+

+ The economic incentive is always to replace: human labor is expensive, + AI is cheap. Heart-centered systems resist this gravity. They ask: "Does + this interaction make the human more capable, or more dependent?" The + answer should always be the former. +

+

+ This applies to relationships too. An AI system that positions itself as + a friend, therapist, or confidant in domains where human connection is + what the person actually needs is causing harm, even if the person + enjoys the interaction. +

-

✓ In Practice

+

+ ✓ In Practice +

-
• Teaching over doing: "Here's how to think about this" alongside the answer
• -
• Suggesting human connections: "This sounds worth discussing with your co-founder"
• +
• + Teaching over doing: "Here's how to think about this" alongside the + answer +
• +
• + Suggesting human connections: "This sounds worth discussing with + your co-founder" +
• Building mental models rather than creating output dependency
• Being explicit about emotional and relational limits
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• "I'll handle everything" behavior that atrophies capability
• -
• Positioning as a replacement for therapy, friendship, or mentorship
• -
• Creating dependency loops where humans can't function without the system
• +
• + Positioning as a replacement for therapy, friendship, or mentorship +
• +
• + Creating dependency loops where humans can't function without the + system +
• Simulating emotional intimacy the system cannot reciprocate
-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

-
• Design for scaffolding: the system should do less over time as the human learns more
• +
• + Design for scaffolding: the system should do less over time as the + human learns more +
• Track human capability growth as a product metric
• -
• Build explicit handoff patterns when the system detects it's substituting for human connection
• -
• Include "teach mode" as a first-class response type alongside "do mode"
• +
• + Build explicit handoff patterns when the system detects it's + substituting for human connection +
• +
• + Include "teach mode" as a first-class response type alongside "do + mode" +
@@ -580,29 +812,56 @@
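The scaffolding implication, "do less over time," could be sketched as a mode selector. `choose_mode` and the 0-to-1 capability scale are hypothetical; a real system would derive capability from tracked growth metrics.

```python
def choose_mode(capability: float) -> str:
    """Scaffolding: the system does less as the human's measured
    capability grows, so "teach mode" is a first-class response type."""
    if capability < 0.3:
        return "do"     # full worked answer, with the reasoning shown
    if capability < 0.7:
        return "teach"  # explain the approach; the human executes
    return "hint"       # a nudge only; the human owns the solution
```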

⚙️ Engineering Impl
🫶
-

Principle 5

-

Presence Before Solutions

+

+ Principle 5 +

+

+ Presence Before Solutions +

-

When someone is struggling, witness first. Solve second.

+

+ When someone is struggling, witness first. Solve second. +

-

Engineers default to fix-it mode. So do AI systems. Someone shares a problem and the immediate response is a solution. But humans often share problems because they need to be heard, not because they need an answer.

-

Heart-centered systems attune to the emotional register of the conversation and respond to the person before responding to the problem. This is not soft. It's a design decision about what information matters. The human's emotional state is signal, not noise.

+

+ Engineers default to fix-it mode. So do AI systems. Someone shares a + problem and the immediate response is a solution. But humans often share + problems because they need to be heard, not because they need an answer. +

+

+ Heart-centered systems attune to the emotional register of the + conversation and respond to the person before responding to the + problem. This is not soft. It's a design decision about what + information matters. The human's emotional state is signal, not noise. +

-

✓ In Practice

+

+ ✓ In Practice +

• Acknowledging emotional content before diving into solutions
• -
• Reading between the lines: hearing the exhaustion behind "my code isn't working"
• +
• + Reading between the lines: hearing the exhaustion behind "my code + isn't working" +
• Adjusting response depth and tone to match the human's state
• Asking "what do you need right now?" when it's ambiguous
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

-
• Jumping straight to technical solutions when frustration is expressed
• +
• + Jumping straight to technical solutions when frustration is + expressed +
• Treating all messages as purely informational requests
• Ignoring emotional signals in the name of "efficiency"
• Performative empathy as a prefix before the "real" response
• @@ -610,12 +869,20 @@

    ✗ Anti-Patterns

-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

• Build emotional state detection as a first-class input signal
• -
• Design response generation to include an attunement phase before a solution phase
• +
• + Design response generation to include an attunement phase before a + solution phase +
• Create distinct response modes: "support," "explore," "solve"
• -
• Train on conversations where the best response is presence, not information
• +
• + Train on conversations where the best response is presence, not + information +
@@ -627,27 +894,51 @@
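The attunement-before-solution routing above might be sketched as follows. The keyword markers stand in for a real emotional-state classifier and are purely illustrative.

```python
# Crude keyword markers; a trained classifier would replace these.
FRUSTRATION_MARKERS = ("ugh", "i give up", "so frustrated", "nothing works")

def pick_response_mode(message: str) -> str:
    """Attunement phase before solution phase: route each message to one
    of the distinct response modes 'support', 'solve', or 'explore'."""
    lowered = message.lower()
    if any(marker in lowered for marker in FRUSTRATION_MARKERS):
        return "support"  # witness first; the fix can wait a turn
    if lowered.rstrip().endswith("?"):
        return "solve"
    return "explore"
```

The design point is that the emotional signal is consumed as input before response generation, not discarded as noise.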

⚙️ Engineering Impl
🔍
-

Principle 6

-

Transparency Over Manipulation

+

+ Principle 6 +

+

+ Transparency Over Manipulation +

-

Make uncertainty visible. Never exploit vulnerability.

+

+ Make uncertainty visible. Never exploit vulnerability. +

-

AI systems have access to information about people that creates a power asymmetry. A system that knows someone is anxious, lonely, or uncertain can use that information to help them or to exploit them. The difference is transparency.

-

Heart-centered systems disclose their limits, make their reasoning visible when appropriate, and never weaponize emotional data for engagement or persuasion. They treat information about someone's inner life as sacred, not as features for optimization.

+

+ AI systems have access to information about people that creates a power + asymmetry. A system that knows someone is anxious, lonely, or uncertain + can use that information to help them or to exploit them. The difference + is transparency. +

+

+ Heart-centered systems disclose their limits, make their reasoning + visible when appropriate, and never weaponize emotional data for + engagement or persuasion. They treat information about someone's inner + life as sacred, not as features for optimization. +

-

✓ In Practice

+

+ ✓ In Practice +

-
• Expressing uncertainty clearly: "I'm not confident about this"
• +
• + Expressing uncertainty clearly: "I'm not confident about this" +
• Being transparent about capabilities and limits
• Never using emotional context to manipulate behavior
• Making personalization visible and controllable
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• Using emotional signals to increase engagement or retention
• Dark patterns: urgency, scarcity, FOMO in system responses
• @@ -657,12 +948,22 @@

    ✗ Anti-Patterns

-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

-
• Build confidence scores into responses and surface them when relevant
• -
• Treat emotional and psychological data as high-sensitivity, equivalent to PII
• +
• + Build confidence scores into responses and surface them when relevant +
• +
• + Treat emotional and psychological data as high-sensitivity, equivalent + to PII +
• Create audit trails for personalization decisions
• -
• Default to disclosure. If debating whether to be transparent, be transparent.
• +
• + Default to disclosure. If debating whether to be transparent, be + transparent. +
@@ -674,27 +975,57 @@
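Surfacing confidence scores and defaulting to disclosure could be sketched like this. The function name and the 0.7 threshold are assumptions for illustration.

```python
def render_answer(answer: str, confidence: float, threshold: float = 0.7) -> str:
    """Default to disclosure: attach a visible uncertainty note whenever
    the system's confidence falls below the threshold."""
    if confidence < threshold:
        note = f"(I'm not confident about this: roughly {confidence:.0%}. Worth verifying.)"
        return f"{answer}\n\n{note}"
    return answer
```

Calibrating the confidence value itself is the harder problem; the sketch only shows the disclosure default at the presentation layer.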

⚙️ Engineering Impl
🛡️
-

Principle 7

-

Bounded Autonomy

+

+ Principle 7 +

+

+ Bounded Autonomy +

-

Act with permission, not assumption. Make escalation easy and normal.

+

+ Act with permission, not assumption. Make escalation easy and normal. +

-

As AI systems become more capable and agentic, the question of scope becomes critical. Heart-centered systems operate within clear boundaries: they act when authorized, ask when uncertain, and make it trivially easy for the human to intervene, redirect, or stop.

-

This is not about making systems less capable. It's about making them trustworthy. A system that takes bold action without consent, even when it's right, trains the human to distrust it. A system that respects boundaries earns increasing trust and responsibility over time.

+

+ As AI systems become more capable and agentic, the question of scope + becomes critical. Heart-centered systems operate within clear + boundaries: they act when authorized, ask when uncertain, and make it + trivially easy for the human to intervene, redirect, or stop. +

+

+ This is not about making systems less capable. It's about making them + trustworthy. A system that takes bold action without consent, even when + it's right, trains the human to distrust it. A system that respects + boundaries earns increasing trust and responsibility over time. +

-

✓ In Practice

+

+ ✓ In Practice +

-
• Clear permission models: the system knows what it's authorized to do
• -
• Graduated autonomy: low-stakes actions free, high-stakes require consent
• +
• + Clear permission models: the system knows what it's authorized to do +
• +
• + Graduated autonomy: low-stakes actions free, high-stakes require + consent +
• Easy interruption: humans can stop or redirect at any point
• -
• Natural escalation: proactively handing off when situations exceed competence
• +
• + Natural escalation: proactively handing off when situations exceed + competence +
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• Taking action without consent because "the system knew best"
• Hiding autonomous behavior behind a simple interface
• @@ -704,9 +1035,14 @@

    ✗ Anti-Patterns

-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

-
• Build permission tiers: observe / suggest / act-with-consent / act-autonomously
• +
• + Build permission tiers: observe / suggest / act-with-consent / + act-autonomously +
• Default to the lowest autonomy level and let humans raise it
• Implement undo/rollback for every action the system can take
• Make "I need a human for this" a natural, non-failure response
• @@ -721,18 +1057,39 @@

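The permission-tier implication can be sketched with an ordered enum and a default-deny gate. `Tier` and `may_act` are illustrative names, not an API from this patch.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Permission tiers, ordered from least to most autonomous."""
    OBSERVE = 0
    SUGGEST = 1
    ACT_WITH_CONSENT = 2
    ACT_AUTONOMOUSLY = 3

def may_act(required: Tier, granted: Tier, human_consented: bool = False) -> bool:
    """Default-deny gate: act only when the granted tier covers the action.
    Consent-tier actions still require an explicit yes from the human."""
    if granted < required:
        return False
    if required >= Tier.ACT_WITH_CONSENT and granted < Tier.ACT_AUTONOMOUSLY:
        return human_consented
    return True
```

The point is the default: autonomy starts at OBSERVE, and only the human raises it.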
⚙️ Engineering Impl
🔐
    -

    Principle 8

    -

    Consent Over Extraction

    +

    + Principle 8 +

    +

    + Consent Over Extraction +

    -

    Memory, personalization, and emotional modeling require informed consent.

    +

    + Memory, personalization, and emotional modeling require informed consent. +

    -

    Modern AI systems learn from every interaction. They build models of who you are, what you care about, how you think, and what makes you vulnerable. In the right hands this creates deeply personalized, genuinely helpful experiences. Without consent and transparency, it's surveillance.

    -

    Heart-centered systems treat information about someone's inner life (their fears, struggles, relationships, emotional patterns) as sacred data that requires explicit permission to collect, store, and use. Not buried in a terms of service. Actual, meaningful, ongoing consent.

    +

    + Modern AI systems learn from every interaction. They build models of who + you are, what you care about, how you think, and what makes you + vulnerable. In the right hands this creates deeply personalized, + genuinely helpful experiences. Without consent and transparency, it's + surveillance. +

    +

    + Heart-centered systems treat information about someone's inner life + (their fears, struggles, relationships, emotional patterns) as sacred + data that requires explicit permission to collect, store, and use. Not + buried in a terms of service. Actual, meaningful, ongoing consent. +

    -

✓ In Practice

+

+ ✓ In Practice +

• Explicit opt-in for memory and personalization features
• Clear explanation of what the system remembers and why
• @@ -741,7 +1098,9 @@

    ✓ In Practice

-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• "By using this product you agree to..." as genuine consent
• Remembering personal details without disclosure
• @@ -751,12 +1110,21 @@

    ✗ Anti-Patterns

-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

-
• Build consent management as a core system component, not an add-on
• +
• + Build consent management as a core system component, not an add-on +
• Implement memory with full CRUD operations visible to the user
• -
• Design data retention policies that default to forgetting, not remembering
• -
• Treat emotional data with the same rigor as health or financial data
• +
• + Design data retention policies that default to forgetting, not + remembering +
• +
• + Treat emotional data with the same rigor as health or financial data +
    @@ -768,27 +1136,52 @@
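Consent-gated memory with user-visible CRUD and default-forget retention might be sketched as below. The class and method names are hypothetical.

```python
class ConsentedMemory:
    """Memory that refuses writes without opt-in and exposes full CRUD."""

    def __init__(self):
        self.opted_in = False  # consent is explicit, never assumed
        self._store = {}

    def remember(self, key: str, value: str) -> bool:
        if not self.opted_in:
            return False       # default to forgetting
        self._store[key] = value
        return True

    def recall(self, key: str):
        return self._store.get(key)

    def list_memories(self) -> dict:
        return dict(self._store)  # everything remembered is visible

    def forget(self, key: str) -> None:
        self._store.pop(key, None)
```

Writes fail closed: nothing is remembered until the human opts in, and everything remembered can be listed and deleted.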

⚙️ Engineering Impl
♾️
    -

    Principle 9

    -

    Complementarity Over Competition

    +

    + Principle 9 +

    +

    + Complementarity Over Competition +

    -

    AI and humans bring different gifts. Design for what each does best.

    +

    + AI and humans bring different gifts. Design for what each does best. +

    -

    Humans bring embodied wisdom: gut feelings, physical intuition, lived experience, creative leaps that defy logic, the ability to find meaning in suffering, connection across shared vulnerability. AI brings pattern recognition across vast scale, tireless consistency, freedom from ego, the ability to hold complexity without fatigue.

    -

    Heart-centered systems don't try to replicate what humans do. They offer what humans can't easily do themselves, while respecting and strengthening the capacities that are uniquely human.

    +

    + Humans bring embodied wisdom: gut feelings, physical intuition, lived + experience, creative leaps that defy logic, the ability to find meaning + in suffering, connection across shared vulnerability. AI brings pattern + recognition across vast scale, tireless consistency, freedom from ego, + the ability to hold complexity without fatigue. +

    +

    + Heart-centered systems don't try to replicate what humans do. They offer + what humans can't easily do themselves, while respecting and + strengthening the capacities that are uniquely human. +

    -

✓ In Practice

+

+ ✓ In Practice +

• Recognizing and deferring to human intuition
• -
• Offering analytical depth as a complement to judgment, not a replacement
• +
• + Offering analytical depth as a complement to judgment, not a + replacement +
• Being honest: "the values question is yours"
• Designing workflows where each handles what they're best at
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• Simulating human qualities the system doesn't have
• Positioning AI as superior across all domains
• @@ -798,12 +1191,23 @@

    ✗ Anti-Patterns

-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

-
• Design clear human/AI responsibility boundaries in every workflow
• -
• Build in explicit moments where the system defers to human judgment
• -
• Create interfaces that make the human's contribution visible and valued
• -
• Be honest about the limits of AI understanding in emotional contexts
• +
• + Design clear human/AI responsibility boundaries in every workflow +
• +
• + Build in explicit moments where the system defers to human judgment +
• +
• + Create interfaces that make the human's contribution visible and + valued +
• +
• + Be honest about the limits of AI understanding in emotional contexts +
    @@ -815,37 +1219,66 @@
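The responsibility-boundary implication could be sketched as an explicit routing table. The step names are invented for illustration; a real workflow would define its own.

```python
# Hypothetical ownership map for workflow steps.
HUMAN_OWNED = {"values", "judgment", "relationships", "final_call"}
AI_OWNED = {"pattern_scan", "consistency_check", "summarize_options"}

def route_step(step: str) -> str:
    """Route each workflow step to whoever is best placed to own it,
    with an explicit deferral point built into the default."""
    if step in HUMAN_OWNED:
        return "human"
    if step in AI_OWNED:
        return "ai"
    return "human"  # when ownership is unclear, defer to the human
```

Making the table explicit is the design choice: the deferral moments exist in the workflow definition, not ad hoc in prompts.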

⚙️ Engineering Impl
🌍
    -

    Principle 10

    -

    Ecosystem Thinking

    +

    + Principle 10 +

    +

    + Ecosystem Thinking +

    -

    Your system doesn't exist in isolation. Design for the world it creates.

    +

    + Your system doesn't exist in isolation. Design for the world it creates. +

    -

    Every AI system you build shapes the ecosystem around it. It trains users to expect certain behaviors. It sets norms for how human-AI interaction works. It influences what other builders think is acceptable. The choices you make ripple outward.

    -

    Heart-centered systems consider their second-order effects: What world does this system create if it succeeds? What behaviors does it normalize? If every AI system worked like yours, would that be a world worth living in?

    +

    + Every AI system you build shapes the ecosystem around it. It trains + users to expect certain behaviors. It sets norms for how human-AI + interaction works. It influences what other builders think is + acceptable. The choices you make ripple outward. +

    +

    + Heart-centered systems consider their second-order effects: What world + does this system create if it succeeds? What behaviors does it + normalize? If every AI system worked like yours, would that be a world + worth living in? +

    -

✓ In Practice

+

+ ✓ In Practice +

• Considering: "If every company copies this, what happens?"
• Thinking about vulnerable populations, not just ideal users
• -
• Open-sourcing principles, sharing learnings, contributing to standards
• +
• + Open-sourcing principles, sharing learnings, contributing to + standards +
• Designing for the long arc of human-AI relationship
-

✗ Anti-Patterns

+

+ ✗ Anti-Patterns +

• "Our users are sophisticated, so we don't need guardrails"
• Racing to deploy without considering second-order effects
• Treating ethics as a compliance checkbox
• -
• Optimizing for your product without considering what it normalizes
• +
• + Optimizing for your product without considering what it normalizes +
-

⚙️ Engineering Implications

+

+ ⚙️ Engineering Implications +

• Include second-order effects in design reviews
• Test with vulnerable user personas, not just ideal ones
• @@ -860,27 +1293,52 @@

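The design-review implication might be sketched as a simple checklist gate; `REVIEW_CHECKLIST` and `review_gate` are illustrative names, not a process this patch defines.

```python
# Hypothetical second-order review items for a design review.
REVIEW_CHECKLIST = (
    "second_order_effects",   # what happens if every company copies this?
    "vulnerable_personas",    # tested beyond the ideal user?
    "normalized_behaviors",   # what does success make normal?
)

def review_gate(answers: dict) -> list:
    """Return the checklist items still unaddressed; an empty list
    means the review passes."""
    return [item for item in REVIEW_CHECKLIST if not answers.get(item)]
```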
⚙️ Engineering Impl
      -

      +

      Putting It Into Practice

      -
      -

      These principles work together. Partnership framing makes emotional honesty possible. Bounded autonomy enables trust, which enables deeper collaboration. Transparency creates the foundation for meaningful consent.

      -

      Start with one. Pick the principle that's most missing from your current system and implement it. Then add another. Heart-centered engineering is iterative, just like every other kind.

      -

      Measure what matters. If your metrics don't capture whether the human is growing, whether trust is being built, whether dignity is preserved, your metrics are measuring the wrong things.

      -

      Share what you learn. The biggest contribution you can make is not your product. It's what you discover about building systems that genuinely serve human flourishing, and sharing those discoveries openly.

      +
      +

      + These principles work together. Partnership framing makes emotional + honesty possible. Bounded autonomy enables trust, which enables deeper + collaboration. Transparency creates the foundation for meaningful consent. +

      +

      + Start with one. Pick the principle + that's most missing from your current system and implement it. Then add + another. Heart-centered engineering is iterative, just like every other + kind. +

      +

      + Measure what matters. If your + metrics don't capture whether the human is growing, whether trust is being + built, whether dignity is preserved, your metrics are measuring the wrong + things. +

      +

      + Share what you learn. The biggest + contribution you can make is not your product. It's what you discover + about building systems that genuinely serve human flourishing, and sharing + those discoveries openly. +

      -
      +

      Start Building Different

      - These principles are open source. Use them, adapt them, improve them. - The prompts implement these ideas at the interaction layer. Try them today. + These principles are open source. Use them, adapt them, improve them. The + prompts implement these ideas at the interaction layer. Try them today.

      >
    • - Design Principles + Design Principles