<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://ebarronh.github.io/buildathon-mar-26/feed.xml" rel="self" type="application/atom+xml" /><link href="https://ebarronh.github.io/buildathon-mar-26/" rel="alternate" type="text/html" /><updated>2026-03-07T07:12:26+00:00</updated><id>https://ebarronh.github.io/buildathon-mar-26/feed.xml</id><title type="html">Cleo</title><subtitle>Cleo is an AI-powered sales training partner that helps businesses improve sales education through training models, interactive simulations, intelligent grading, and real-time AI voice interactions.</subtitle><author><name>Cleo Team</name></author><entry><title type="html">Week 2: Designing the Experience</title><link href="https://ebarronh.github.io/buildathon-mar-26/blog/2026/03/06/week-2-designing-the-experience/" rel="alternate" type="text/html" title="Week 2: Designing the Experience" /><published>2026-03-06T00:00:00+00:00</published><updated>2026-03-06T00:00:00+00:00</updated><id>https://ebarronh.github.io/buildathon-mar-26/blog/2026/03/06/week-2-designing-the-experience</id><content type="html" xml:base="https://ebarronh.github.io/buildathon-mar-26/blog/2026/03/06/week-2-designing-the-experience/"><![CDATA[<h1 id="designing-the-experience">Designing the Experience</h1>
<p><strong>From Problem Space to Pixel-Ready UX Specification</strong>
Week 2 Design Deliverable — ProductBC Build-a-Thon 2026</p>

<hr />

<h2 id="week-2-summary">Week 2 Summary</h2>

<p>Last week we validated our problem space. This week we designed the experience. We went from sticky notes and whiteboard sketches to a full UX design specification — color systems, component hierarchies, user journey flows, accessibility strategy, and six distinct visual directions we debated, merged, and refined.</p>

<p>The biggest shift this week was moving from <em>what</em> Cleo should do to <em>how it should feel</em>. We spent the first half of the week mapping complexity, the second half generating and evaluating design directions with AI, and the final push synthesizing everything into a spec that’s ready for implementation.</p>

<p>Here’s how we got there.</p>

<hr />

<h2 id="1-mapping-complexity-the-cynefin-framework">1. Mapping Complexity: The Cynefin Framework</h2>

<p>Before we touched a single pixel, we needed to understand what kind of problem we were actually solving. Not all parts of Cleo are equally hard — and the design approach should match the complexity of each challenge.</p>

<p>We used the <strong>Cynefin Framework</strong> to categorize the key challenges across four domains:</p>

<p><img src="/buildathon-mar-26/assets/images/week-2/cynefin-analysis.png" alt="Cynefin Framework analysis for Cleo &amp; You" /></p>

<table>
  <thead>
    <tr>
      <th>Domain</th>
      <th>Challenge</th>
      <th>Design Implication</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Complex</strong> (probe-sense-respond)</td>
      <td>Building call simulations that feel real — quick, accurate, human-like</td>
      <td>Requires iterative experimentation. We can’t spec this perfectly upfront; we need to build, test, and adjust.</td>
    </tr>
    <tr>
      <td><strong>Complicated</strong> (sense-analyze-respond)</td>
      <td>Recognizing which content needs to be updated or deleted</td>
      <td>Solvable with expertise. Clear rules + good UX for content management.</td>
    </tr>
    <tr>
      <td><strong>Complicated</strong></td>
      <td>Obtaining company data when no documentation exists</td>
      <td>Needs analysis — email ingestion, meeting recordings, alternative capture methods.</td>
    </tr>
    <tr>
      <td><strong>Chaotic</strong> (act-sense-respond)</td>
      <td>Creating AI personas that match real customer characteristics across industries</td>
      <td>Novel territory. No best practice exists — we need to act fast, learn from what works, and iterate.</td>
    </tr>
  </tbody>
</table>

<p>The Cynefin mapping gave us permission to treat different parts of the product differently. The simulation engine lives in the complex domain — we’ll probe and iterate. The content management system lives in the complicated domain — we can plan it methodically.</p>

<hr />

<h2 id="2-finding-the-core-what-is-cleo-really">2. Finding the Core: What Is Cleo, Really?</h2>

<p>Before designing screens, we forced ourselves to answer one question:</p>

<blockquote>
  <p><strong>“If Cleo is one thing, what is it?”</strong></p>
</blockquote>

<p>The answer we landed on: <strong>a sparring partner, not a testing system.</strong></p>

<p>The mental model isn’t “training software.” It’s a coach who hits you tough shots so you’re ready for the match. Like a tennis partner who doesn’t let you off easy — not a referee keeping score.</p>

<p>This reframe changed everything downstream:</p>

<pre><code class="language-mermaid">flowchart LR
    A["Old Mental Model:&lt;br/&gt;Training Platform"] --&gt;|reframe| B["New Mental Model:&lt;br/&gt;Coach Partner"]
    B --&gt; C["Simulations are sparring,&lt;br/&gt;not tests"]
    B --&gt; D["Scores show growth,&lt;br/&gt;not judgment"]
    B --&gt; E["Sharing is showing off,&lt;br/&gt;not reporting"]
    B --&gt; F["Privacy protects the&lt;br/&gt;learning journey"]
</code></pre>

<p>We defined five experience principles that would guide every design decision:</p>

<ol>
  <li><strong>Instant, always</strong> — Offline-first. No loading states, no “connecting…” The app is ready the moment you open it.</li>
  <li><strong>Feels real, not like training</strong> — The simulation is the product. Minimal chrome. Closer to a phone call than a learning app.</li>
  <li><strong>Opinionated and delightful</strong> — Every interaction is intentionally designed. Micro-animations, haptics, Dynamic Island. Craft is the brand.</li>
  <li><strong>Progress, not pressure</strong> — Scoring motivates growth. Privacy protects the journey. Sharing is a celebration, not a requirement.</li>
  <li><strong>Platform-native, not cross-platform</strong> — Leverage everything iOS and Android offer. This is why it’s native.</li>
</ol>

<hr />

<h2 id="3-the-design-process-generating-and-evaluating-with-ai">3. The Design Process: Generating and Evaluating with AI</h2>

<p>This is where the week got interesting. Rather than starting from a blank Figma canvas, we used a structured AI-assisted design process that let us explore more directions, faster, than a two-person team normally could.</p>

<h3 id="step-1-inspiration-analysis">Step 1: Inspiration Analysis</h3>

<p>We started by studying four products that nail the <em>feeling</em> we’re after — not because they’re in our category, but because they’ve solved similar UX problems:</p>

<pre><code class="language-mermaid">mindmap
  root((Design&lt;br/&gt;Inspiration))
    Tesla Mobile App
      Status at a glance
      Haptic controls
      Offline-resilient
      No loading spinners
    Apple Music
      Full-screen immersion
      Browse-to-play transition
      Dynamic Island integration
      Offline-first downloads
    Linear
      Blazing fast
      Zero loading states
      Beautiful information density
      Keyboard-first on web
    NotebookLM
      Upload-and-transform concept
      Cautionary: too slow
      Cautionary: too generic
      Lesson: speed is non-negotiable
</code></pre>

<p><strong>What we took from each:</strong></p>
<ul>
  <li><strong>Tesla:</strong> Open the app, instantly know everything. No digging required. We applied this to the learner home screen — readiness status, recent sessions, and next recommended practice all visible without navigating.</li>
  <li><strong>Apple Music:</strong> The transition from browsing to being <em>in</em> a simulation should feel like tapping play — instant immersion. Full-screen, distraction-free.</li>
  <li><strong>Linear:</strong> The manager-facing web dashboard should channel Linear’s speed and intentionality. Dense information presented beautifully.</li>
  <li><strong>NotebookLM:</strong> Same “upload and transform” promise, but executed with speed and specificity. Where NotebookLM feels sluggish, Cleo must feel instant.</li>
</ul>

<h3 id="step-2-anti-pattern-identification">Step 2: Anti-Pattern Identification</h3>

<p>Equally important was defining what we would <em>not</em> do. We built an explicit anti-pattern list:</p>

<ul>
  <li>No enterprise dashboard bloat (no 47-tab navigation)</li>
  <li>No onboarding wizards or tutorials (Tesla doesn’t explain itself; neither should Cleo)</li>
  <li>No completion-theater UX (no badge collections, no confetti for finishing mandatory modules)</li>
  <li>No generic AI interactions (no “I’m an AI assistant, how can I help you?” — the AI persona has a name, a mood, a personality)</li>
  <li>No NotebookLM-style processing delays (content transformation must feel instant)</li>
</ul>

<h3 id="step-3-six-visual-directions">Step 3: Six Visual Directions</h3>

<p>With principles and inspiration locked in, we generated six distinct visual directions using our AI-assisted design workflow. Each direction explored a different approach to the same core experience:</p>

<table>
  <thead>
    <tr>
      <th>#</th>
      <th>Direction</th>
      <th>Philosophy</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1</td>
      <td><strong>Minimal Zen</strong></td>
      <td>Ultra-clean, spacious, Tesla-inspired. Readiness score as focal point.</td>
    </tr>
    <tr>
      <td>2</td>
      <td><strong>Bold Cards</strong></td>
      <td>Large hero cards, strong CTAs, fintech-inspired energy.</td>
    </tr>
    <tr>
      <td>3</td>
      <td><strong>Dark Immersive</strong></td>
      <td>Fully dark, gaming/audio UI, performance dashboard feel.</td>
    </tr>
    <tr>
      <td>4</td>
      <td><strong>Playful Coach</strong></td>
      <td>Warm, motivational, fitness-app-inspired streaks and progress.</td>
    </tr>
    <tr>
      <td>5</td>
      <td><strong>Manager Dashboard</strong></td>
      <td>Linear-inspired density for team readiness views.</td>
    </tr>
    <tr>
      <td>6</td>
      <td><strong>Context-First</strong></td>
      <td>Event-aware suggestions, prominent privacy badge, weekly rhythm.</td>
    </tr>
  </tbody>
</table>

<p>We built an interactive HTML showcase to compare them side-by-side. Here are three of the directions:</p>

<h3 id="direction-2-bold-cards--the-winner">Direction 2: Bold Cards — The Winner</h3>

<p><img src="/buildathon-mar-26/assets/images/week-2/concept-1.png" alt="Design Direction 2: Bold Cards — large emerald hero card with scenario list" /></p>

<p>Large, colorful hero card that surfaces the most relevant practice scenario. Emerald green gradient hero with white CTA button — impossible to miss. The primary action (start a practice call) dominates the screen. Action-first design.</p>

<h3 id="direction-4-playful-coach">Direction 4: Playful Coach</h3>

<p><img src="/buildathon-mar-26/assets/images/week-2/concept-2.png" alt="Design Direction 4: Playful Coach — motivational language and streak tracking" /></p>

<p>The friendliest direction — motivational language, streak tracking, warm gradients. Feels like a fitness app for your sales skills. “Hey Jordan!” with a 3-day streak and progress chips. Encouragement-first without being childish.</p>

<h3 id="direction-6-context-first">Direction 6: Context-First</h3>

<p><img src="/buildathon-mar-26/assets/images/week-2/concept-3.png" alt="Design Direction 6: Context-First — event-aware suggestions with privacy badge" /></p>

<p>The most contextually aware direction. The app knows what you need to practice based on upcoming events (conferences, calls, launches). Privacy is visually prominent — the purple shield badge is always visible. Weekly activity gives a personal rhythm without surveillance.</p>

<h3 id="step-4-the-merge-decision">Step 4: The Merge Decision</h3>

<p>We didn’t pick one direction and discard the rest. We merged the best elements:</p>

<pre><code class="language-mermaid">flowchart TD
    D2["Direction 2: Bold Cards&lt;br/&gt;&lt;em&gt;Primary learner experience&lt;/em&gt;"] --&gt; FINAL["Final Design Direction"]
    D1["Direction 1: Minimal Zen&lt;br/&gt;&lt;em&gt;Simulation + score screens&lt;/em&gt;"] --&gt; FINAL
    D5["Direction 5: Manager Dashboard&lt;br/&gt;&lt;em&gt;Linear-inspired web layout&lt;/em&gt;"] --&gt; FINAL
    D6["Direction 6: Context-First&lt;br/&gt;&lt;em&gt;Privacy badge concept&lt;/em&gt;"] --&gt; FINAL

    FINAL --&gt; L["Learner Mobile:&lt;br/&gt;Bold Cards + Immersive Simulation"]
    FINAL --&gt; M["Manager Web:&lt;br/&gt;Linear-Inspired Dashboard"]
    FINAL --&gt; P["Privacy:&lt;br/&gt;Purple Shield Badge Everywhere"]

    style D2 fill:#10B981,color:#fff
    style FINAL fill:#059669,color:#fff
</code></pre>

<p><strong>The rationale:</strong></p>
<ul>
  <li><strong>Bold Cards</strong> for the learner home because it’s action-first — one tap to practice, under 10 seconds to hearing a voice</li>
  <li><strong>Minimal Zen’s dark immersive screens</strong> for the actual simulation — the call experience should feel like picking up the phone, not using an app</li>
  <li><strong>Manager Dashboard’s density</strong> for the web experience — managers need information, not pretty cards</li>
  <li><strong>Context-First’s privacy badge</strong> — the purple shield carries into every screen, making privacy visible at all times</li>
</ul>

<hr />

<h2 id="4-the-core-loop-user-journey-design">4. The Core Loop: User Journey Design</h2>

<p>With the visual direction locked, we mapped the critical user journeys. The most important one — the learner’s first practice session — had to be flawless:</p>

<pre><code class="language-mermaid">flowchart TD
    A["Opens app"] --&gt; B["Home screen:&lt;br/&gt;readiness + hero card"]
    B --&gt; C{"Taps 'Start call'"}
    C --&gt; D["Context card:&lt;br/&gt;persona name, role, situation"]
    D --&gt; E["Taps 'Call'"]
    E --&gt; F["Screen transitions&lt;br/&gt;to dark immersive mode"]
    F --&gt; G["Phone rings...&lt;br/&gt;persona picks up"]
    G --&gt; H["Live conversation&lt;br/&gt;with AI persona"]
    H --&gt; I{"Call ends"}
    I --&gt; J["Full-screen&lt;br/&gt;score reveal"]
    J --&gt; K{"Score result"}
    K --&gt;|Green| L["Celebration!&lt;br/&gt;Share prompt appears"]
    K --&gt;|Yellow| M["Encouraging:&lt;br/&gt;'Almost there' + tips"]
    K --&gt;|Red| N["Directional:&lt;br/&gt;'Work on X' + retry"]

    style C fill:#10B981,color:#fff
    style J fill:#10B981,color:#fff
    style L fill:#10B981,color:#fff
    style M fill:#F59E0B,color:#fff
    style N fill:#EF4444,color:#fff
</code></pre>

<p><strong>Key design decisions in this flow:</strong></p>
<ul>
  <li>No onboarding tutorial — the app is self-evident</li>
  <li>Hero card on home screen means the first action is obvious</li>
  <li>Under 10 seconds from app open to hearing a voice</li>
  <li>Score reveal is the emotional payoff — animated, celebratory, forward-looking</li>
  <li>Share prompt only appears on green scores — never pressure on yellow/red</li>
  <li>Private by default — the purple shield is always visible</li>
</ul>
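<p>As a concrete illustration of the green/yellow/red branching in the flow above, here is a minimal TypeScript sketch of how a score could map to a feedback tier. The thresholds, headlines, and function names are illustrative assumptions, not values from the spec — the spec only defines the three tiers and the rule that the share prompt appears on green alone.</p>

```typescript
// Hypothetical thresholds: the spec defines green/yellow/red tiers,
// but the exact cutoffs (80, 60) here are illustrative, not final.
type ScoreTier = "green" | "yellow" | "red";

interface ScoreFeedback {
  tier: ScoreTier;
  headline: string;
  showSharePrompt: boolean; // share prompt only on green — never pressure
}

function feedbackFor(score: number): ScoreFeedback {
  if (score >= 80) {
    return { tier: "green", headline: "Nailed it!", showSharePrompt: true };
  }
  if (score >= 60) {
    return { tier: "yellow", headline: "Almost there", showSharePrompt: false };
  }
  return { tier: "red", headline: "Work on your opening", showSharePrompt: false };
}

console.log(feedbackFor(85).tier);            // "green"
console.log(feedbackFor(65).showSharePrompt); // false — yellow never prompts
```

<p>Keeping the share decision inside the tier mapping (rather than scattered through UI code) makes the "never pressure on yellow/red" rule a single testable invariant.</p>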

<h3 id="the-privacy-first-share-flow">The Privacy-First Share Flow</h3>

<p>This was one of the harder design problems: how do you make sharing feel like showing off instead of reporting?</p>

<pre><code class="language-mermaid">flowchart TD
    A["Learner hits green"] --&gt; B["Celebratory score&lt;br/&gt;reveal animation"]
    B --&gt; C["Share prompt:&lt;br/&gt;'Share with your manager?'"]
    C --&gt; D{Decision}
    D --&gt;|Share| E["Preview: exactly what&lt;br/&gt;manager will see"]
    D --&gt;|Not now| F["Score saved privately&lt;br/&gt;Purple shield visible"]
    E --&gt; G{"Confirm?"}
    G --&gt;|Yes| H["Achievement sent&lt;br/&gt;Satisfying animation"]
    G --&gt;|Cancel| F
    F --&gt; I["Can share later&lt;br/&gt;from progress screen"]

    style A fill:#10B981,color:#fff
    style H fill:#10B981,color:#fff
    style F fill:#8B5CF6,color:#fff
</code></pre>

<p>The key insight: transparency creates trust. The learner sees <em>exactly</em> what the manager will see before confirming. “Not now” is never judged. The purple shield badge reinforces safety at every step.</p>
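<p>The share flow above is small enough to express as an explicit state machine. The following TypeScript sketch shows one way to encode it; the state and event names are our own illustrative identifiers, not terms from the spec.</p>

```typescript
// A minimal sketch of the privacy-first share flow as a state machine.
type ShareState = "revealed" | "previewing" | "shared" | "savedPrivately";
type ShareEvent = "share" | "notNow" | "confirm" | "cancel";

const transitions: Record<ShareState, Partial<Record<ShareEvent, ShareState>>> = {
  // Celebratory score reveal: share or decline, no judgment either way
  revealed:       { share: "previewing", notNow: "savedPrivately" },
  // Learner previews exactly what the manager will see before confirming
  previewing:     { confirm: "shared", cancel: "savedPrivately" },
  shared:         {},
  // Private by default; can still share later from the progress screen
  savedPrivately: { share: "previewing" },
};

function next(state: ShareState, event: ShareEvent): ShareState {
  return transitions[state][event] ?? state; // unknown events are no-ops
}
```

<p>Note that every path out of "previewing" except an explicit confirm lands back in the private state — the transition table itself enforces "private by default".</p>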

<hr />

<h2 id="5-design-system-the-foundations">5. Design System: The Foundations</h2>

<p>We built a complete design system specification covering all three platforms. Here are the key decisions:</p>

<h3 id="color-system">Color System</h3>

<table>
  <thead>
    <tr>
      <th>Role</th>
      <th>Color</th>
      <th>Hex</th>
      <th>Usage</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Primary</td>
      <td>Emerald</td>
      <td><code class="language-plaintext highlighter-rouge">#10B981</code></td>
      <td>Brand, CTAs, green scores, celebration</td>
    </tr>
    <tr>
      <td>Primary Dark</td>
      <td>Deep Emerald</td>
      <td><code class="language-plaintext highlighter-rouge">#059669</code></td>
      <td>Buttons, active states</td>
    </tr>
    <tr>
      <td>Score Yellow</td>
      <td>Warm Amber</td>
      <td><code class="language-plaintext highlighter-rouge">#F59E0B</code></td>
      <td>Encouraging, not warning</td>
    </tr>
    <tr>
      <td>Score Red</td>
      <td>Soft Red</td>
      <td><code class="language-plaintext highlighter-rouge">#EF4444</code></td>
      <td>Directional, not alarming</td>
    </tr>
    <tr>
      <td>Privacy</td>
      <td>Purple</td>
      <td><code class="language-plaintext highlighter-rouge">#8B5CF6</code></td>
      <td>Shield badge, privacy indicators</td>
    </tr>
    <tr>
      <td>Simulation BG</td>
      <td>Deep Dark</td>
      <td><code class="language-plaintext highlighter-rouge">#030712</code></td>
      <td>Immersive call screens</td>
    </tr>
  </tbody>
</table>

<h3 id="platform-strategy">Platform Strategy</h3>

<p>We made the deliberate choice to go <strong>platform-native</strong>, not cross-platform:</p>

<table>
  <thead>
    <tr>
      <th>Platform</th>
      <th>Technology</th>
      <th>Design Approach</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>iOS (primary)</td>
      <td>SwiftUI</td>
      <td>Custom components, haptics, Dynamic Island, Apple Watch</td>
    </tr>
    <tr>
      <td>Android</td>
      <td>Jetpack Compose</td>
      <td>Material 3 base, heavily customized to match Cleo identity</td>
    </tr>
    <tr>
      <td>Web</td>
      <td>Tailwind + shadcn/ui</td>
      <td>Linear-inspired, manager-focused, keyboard shortcuts</td>
    </tr>
  </tbody>
</table>

<h3 id="component-priority">Component Priority</h3>

<p>We identified 10 custom components and ordered them by criticality:</p>

<pre><code class="language-mermaid">block-beta
    columns 3
    block:p0["P0 — Core Loop"]:3
        A["Simulation&lt;br/&gt;Call Screen"]
        B["Score&lt;br/&gt;Reveal"]
        C["Hero Scenario&lt;br/&gt;Card"]
        J["Context&lt;br/&gt;Card"]
    end
    block:p1["P1 — Complete Experience"]:3
        D["Scenario&lt;br/&gt;List Card"]
        E["Privacy&lt;br/&gt;Shield Badge"]
        F["Readiness&lt;br/&gt;Indicator"]
    end
    block:p2["P2 — Growth"]:3
        G["Share&lt;br/&gt;Achievement"]
        H["Content&lt;br/&gt;Upload Zone"]
        I["Team Readiness&lt;br/&gt;Card"]
    end

    style p0 fill:#10B981,color:#fff
    style p1 fill:#F59E0B,color:#fff
    style p2 fill:#3B82F6,color:#fff
</code></pre>

<p>The P0 components — Simulation Call Screen, Score Reveal, Hero Scenario Card, and Context Card — deliver the complete practice-to-score experience. Everything else builds on top of that loop.</p>

<hr />

<h2 id="6-behind-the-scenes-the-workspace">6. Behind the Scenes: The Workspace</h2>

<p>This is what the process actually looked like — Linear for tracking issues, the code editor with AI conversations shaping the UX spec in real time, and the project structure growing organically as artifacts were produced.</p>

<p><img src="/buildathon-mar-26/assets/images/week-2/progress-week-2.png" alt="Our workspace during Week 2 — Linear issues, AI-assisted UX design conversations, and project structure" /></p>

<p>No Figma, no whiteboard — just structured prompts, iterative refinement, and a spec that wrote itself through conversation. The AI wasn’t generating mockups in isolation; it was asking us questions about color, typography, layout density, and platform strategy, then synthesizing our answers into a coherent specification.</p>

<hr />

<h2 id="7-what-we-learned-about-ai-assisted-design">7. What We Learned About AI-Assisted Design</h2>

<p>This week was a real test of using AI as a design collaborator. Here’s what worked and what we’d do differently:</p>

<p><strong>What worked well:</strong></p>
<ul>
  <li><strong>Generating multiple directions fast</strong> — Six visual directions in a day. A two-person team couldn’t have explored that breadth manually.</li>
  <li><strong>UX writing at scale</strong> — Error messages, empty states, accessibility labels. AI generated first drafts that we edited for tone.</li>
  <li><strong>Structured specification</strong> — The final UX spec is comprehensive and implementation-ready. AI helped maintain consistency across 1,000+ lines of specification.</li>
  <li><strong>Pattern analysis</strong> — AI was excellent at analyzing inspiring products and extracting transferable patterns.</li>
</ul>

<p><strong>What required human judgment:</strong></p>
<ul>
  <li><strong>The merge decision</strong> — Which elements from which direction to combine required taste, not logic.</li>
  <li><strong>Emotional calibration</strong> — How a red score should <em>feel</em> (directional, not alarming) can’t be specified by an AI — it has to be felt.</li>
  <li><strong>Anti-patterns</strong> — Knowing what <em>not</em> to do (no badge walls, no completion theater) came from our experience with bad enterprise software, not from AI.</li>
  <li><strong>Privacy as emotion</strong> — The insight that privacy isn’t a feature but an <em>emotion</em> came from human empathy with the learner’s vulnerability during practice.</li>
</ul>

<hr />

<h2 id="8-next-steps--week-3">8. Next Steps — Week 3</h2>

<ul>
  <li><strong>Build the interactive prototype</strong> — Clickable flows for the core practice loop (home -&gt; scenario -&gt; call -&gt; score -&gt; share)</li>
  <li><strong>User testing</strong> — Put the prototype in front of 5-8 target users and test: Does the flow feel natural? Is “under 10 seconds to a voice” achievable in practice?</li>
  <li><strong>Technical architecture</strong> — Start mapping the design specification to implementation: API contracts, voice synthesis pipeline, scoring model</li>
  <li><strong>Component development</strong> — Begin building the P0 components in SwiftUI</li>
</ul>

<hr />

<h2 id="the-bottom-line">The Bottom Line</h2>

<p>Week 1 was about <em>should we build this?</em> Week 2 was about <em>what should it feel like?</em></p>

<p>We went from a validated problem space to a design specification that covers the complete learner and manager experience across three platforms. The key insight that drove everything: Cleo isn’t training software. It’s a sparring partner. That reframe — from compliance to confidence, from testing to coaching, from reporting to showing off — shaped every design decision we made.</p>

<p>The UX spec is done. Now we build it.</p>]]></content><author><name>Ernest &amp; Melanie</name></author><summary type="html"><![CDATA[Designing the Experience From Problem Space to Pixel-Ready UX Specification Week 2 Design Deliverable — ProductBC Build-a-Thon 2026]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://ebarronh.github.io/buildathon-mar-26/assets/images/week-2/concept-1.png" /><media:content medium="image" url="https://ebarronh.github.io/buildathon-mar-26/assets/images/week-2/concept-1.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The CRM Racket: Why Every Major Platform Is Failing Sales Teams</title><link href="https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/28/crm-landscape-debate/" rel="alternate" type="text/html" title="The CRM Racket: Why Every Major Platform Is Failing Sales Teams" /><published>2026-02-28T00:00:00+00:00</published><updated>2026-02-28T00:00:00+00:00</updated><id>https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/28/crm-landscape-debate</id><content type="html" xml:base="https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/28/crm-landscape-debate/"><![CDATA[<h1 id="the-crm-racket-why-every-major-platform-is-failing-sales-teams">The CRM Racket: Why Every Major Platform Is Failing Sales Teams</h1>
<h3 id="what-happens-when-a-salesforce-architect-a-startup-growth-lead-a-vc-investor-and-a-product-designer-tear-apart-the-six-crms-that-define-the-market--and-ask-whether-anyone-should-build-a-seventh">What happens when a Salesforce architect, a startup growth lead, a VC investor, and a product designer tear apart the six CRMs that define the market — and ask whether anyone should build a seventh</h3>

<hr />

<h2 id="the-setup">The Setup</h2>

<p>Here’s a number that should alarm every sales leader: the average enterprise CRM deployment now costs more per year than the salary of the rep forced to use it. Salesforce’s own data reveals that 77% of its AI agent deployments fail. HubSpot users routinely describe the jump from free to paid as a “bait-and-switch.” Pipedrive shut down its own community forum. And Zoho — the supposed budget option — has an interface that users compare to a “dumpster fire.”</p>

<p>CRM software was supposed to help salespeople sell. Instead, the category has devolved into a sprawling, overpriced mess where every major platform optimizes for the buyer who signs the contract — the VP of Sales, the CIO — rather than the human who lives inside the software eight hours a day.</p>

<p>We assembled four people who’ve seen CRM failure from every angle. We gave them research on all six major platforms. We asked one question: <em>Is there room for something new?</em></p>

<p><strong>The panel:</strong></p>

<ul>
  <li><strong>Alex Mercer</strong> — Former Salesforce solutions architect, 10 years implementing enterprise CRM. Has seen every failure mode in the book and built workarounds for most of them.</li>
  <li><strong>Priya Nair</strong> — Growth lead at a 50-person SaaS startup. Has migrated CRMs three times in four years and is currently paying for tools she resents.</li>
  <li><strong>James Okafor</strong> — VC investor who has evaluated 200+ B2B SaaS companies. Sits on boards of two companies that sell <em>to</em> CRM users.</li>
  <li><strong>Lena Park</strong> — Product designer who has redesigned CRM onboarding flows for six companies. Believes most enterprise software is an act of aggression against its users.</li>
</ul>

<p>Two rounds. No hedging. No vendor diplomacy.</p>

<hr />

<h2 id="the-big-six-a-brutal-honest-scorecard">The Big Six: A Brutal Honest Scorecard</h2>

<pre><code class="language-mermaid">quadrantChart
    title CRM Landscape - Complexity vs Value Delivered to Sales Reps
    x-axis "Low Complexity" --&gt; "High Complexity"
    y-axis "Low Value to Reps" --&gt; "High Value to Reps"
    quadrant-1 Powerful but Painful
    quadrant-2 Sweet Spot - Empty
    quadrant-3 Simple but Incomplete
    quadrant-4 Avoid
    Salesforce: [0.92, 0.55]
    Dynamics365: [0.88, 0.48]
    HubSpot: [0.65, 0.62]
    Zoho: [0.72, 0.38]
    Pipedrive: [0.30, 0.52]
    Attio: [0.38, 0.60]
</code></pre>

<p><strong>Salesforce</strong> — The category king that became a category tax. Technically capable of anything; practically accessible to almost no one without a certified admin. Agentforce AI has signed 8,000 deals against a stated goal of one billion agents. The complexity-value paradox is complete: it can do everything except make a sales rep’s day easier.</p>

<p><strong>HubSpot</strong> — The best loss leader in SaaS history, followed by one of the industry’s most aggressive pricing cliffs. Free tier is excellent. The jump to Professional — $500+/month with mandatory onboarding fees — is the CRM equivalent of a bait-and-switch. Breeze AI lives in a walled garden and can’t see outside HubSpot data.</p>

<p><strong>Pipedrive</strong> — Built the best pipeline UI in the market, then let private equity hollow out everything around it. No marketing automation. No branching logic in workflows. No customer support tools. Vista Equity Partners acquired it in 2020; the SMB community forum was shut down in July 2025. The playbook is textbook.</p>

<p><strong>Zoho</strong> — The paradox of the market. Priced for small businesses ($14-52/user/month), complexity designed for enterprise IT teams. Users describe the interface as overwhelming. Zia AI runs on open-source models and struggles with anything off-script. The target customers are the least equipped to manage its demands.</p>

<p><strong>Microsoft Dynamics 365</strong> — A CRM for organizations that have already fully surrendered to the Microsoft ecosystem. Copilot AI is promising but region-locked, language-limited, and restricted by environment type. True total cost of ownership: a $50K/year license typically requires $150-250K in implementation costs. Read that again.</p>

<p><strong>Attio</strong> — The most interesting player and the most frustrating. Architecturally closer to Notion than Salesforce, with a genuine AI-integrated data model. But it solves the 0-50 employee CRM problem elegantly while leaving growth-stage teams stranded. No real mobile app. Immature integration ecosystem. Weak reporting. Companies graduate off Attio at exactly the moment they need CRM the most.</p>

<hr />

<h2 id="round-1-opening-shots">Round 1: Opening Shots</h2>

<h3 id="alex-mercer--ive-spent-a-decade-building-workarounds-for-software-that-shouldnt-need-them">Alex Mercer | “I’ve Spent a Decade Building Workarounds for Software That Shouldn’t Need Them”</h3>

<p>I’ve made a very good living implementing Salesforce. I need everyone here to understand what that actually means. It means the product is so complex that companies pay people like me $200-300 an hour to make it do things it should do out of the box. That’s not a business model for a CRM — that’s a business model for a consulting firm.</p>

<p>The complexity-value paradox is real. Salesforce and Dynamics 365 can model virtually any business process. Both require dedicated technical staff. Both have implementation costs that dwarf licensing. And both have user interfaces their own customers describe as “dense” and “outdated.”</p>

<p>When over 1,100 G2 reviewers cite “missing features” in Salesforce, they’re talking about functionality that technically exists but is buried behind so much configuration that it might as well not. Enterprise-grade has become a euphemism for “requires a consultant.” When your CRM requires a consultant, your CRM has failed.</p>

<h3 id="priya-nair--ive-been-a-customer-of-four-of-these-platforms-they-all-failed-me-differently">Priya Nair | “I’ve Been a Customer of Four of These Platforms. They All Failed Me Differently.”</h3>

<p>Alex, I don’t disagree — but you’re letting HubSpot and Pipedrive off too easy by focusing on enterprise.</p>

<p>I’ve migrated my team from HubSpot to Pipedrive to a hybrid Attio stack in four years. Every time, the decision was rational. Every time, six months later, we found the new problem we’d traded the old problem for.</p>

<p>HubSpot’s free tier is a masterpiece of product design. It is also the most sophisticated pricing trap in the SaaS industry. You build muscle memory, import your contacts, your team learns the workflows — and then you hit the ceiling. $500/month with mandatory onboarding fees. Annual lock-in. Contact-based pricing that scales unpredictably. One user reported $600/month just for 20,000 contacts.</p>

<p>Pipedrive felt like the answer. Clean, fast, sales-focused. Then we discovered there’s no if/else branching in automations. No multi-org linking. The “AI-first 2025 strategy” delivered suggestions for when to follow up, not actual automation. And then they shut down the community forum where we went for help.</p>

<p><strong>The dirty secret of the CRM market: every platform that looks affordable upfront has a monetization trap in the middle.</strong></p>

<h3 id="james-okafor--the-market-map-tells-a-clear-story--if-you-know-how-to-read-it">James Okafor | “The Market Map Tells a Clear Story — If You Know How to Read It”</h3>

<p>I want to push back on both of you slightly. The pricing criticism is real, but it misses the deeper structural problem: <strong>these platforms were all designed in a different era of selling.</strong></p>

<pre><code class="language-mermaid">graph LR
    A["Early Stage&lt;br/&gt;0-20 employees"] --&gt;|Start here| B["Attio / HubSpot Free"]
    A --&gt;|Also viable| C[Pipedrive]
    B --&gt;|Hit growth ceiling| D["HubSpot Pro / Salesforce"]
    C --&gt;|Hit complexity ceiling| D
    D --&gt;|Price shock / complexity| E["Churn / Re-evaluate"]
    E --&gt;|Loop back| A
    style E fill:#ff6b6b,color:#fff
    style D fill:#ffa500,color:#fff
</code></pre>

<p>Every company cycles through this loop. Salesforce was built when CRM meant “database for your sales reps to populate.” HubSpot was built when CRM meant “marketing funnel management.” None of them were built for how B2B selling works in 2026 — async, mobile, AI-assisted, with selling happening across LinkedIn DMs, Slack, Zoom, and email simultaneously.</p>

<p>Attio is the only platform with a modern architectural assumption at its core: the relationship graph should build itself from actual communication. But it hasn’t solved the growth-stage problem. Companies start on Attio and graduate to HubSpot or Salesforce past 50 employees. The platform that breaks this loop wins the market.</p>

<h3 id="lena-park--every-crm-ive-redesigned-was-designed-for-the-wrong-person">Lena Park | “Every CRM I’ve Redesigned Was Designed for the Wrong Person”</h3>

<p>I need to add something all three of you are dancing around: <strong>CRM failure is a design failure.</strong> Not a change management failure. Not a training failure.</p>

<p>When Salesforce reports 77% Agentforce deployment failure, the instinct is to blame data quality. But the root cause is that the UI was designed for administrators, not salespeople. Every interaction optimizes for data capture completeness, not for the rep’s next 60 seconds.</p>

<p>Zoho is the extreme case study. Priced like a simple tool, designed like a legacy enterprise platform, sold to customers who don’t have IT departments. I’ve redesigned onboarding flows for Zoho customers — the problem isn’t that users are unsophisticated, it’s that the product actively resists them.</p>

<p>And Attio — the most interesting design case — got something right that the others didn’t: it started from the user’s mental model, not from the data model. But flexibility became complexity. A blank canvas is not simplicity. <strong>It’s complexity wearing a turtleneck.</strong></p>

<hr />

<h2 id="round-2-the-debate-turns">Round 2: The Debate Turns</h2>

<h3 id="the-ai-argument-james-vs-lena-vs-alex">The AI Argument: James vs. Lena vs. Alex</h3>

<p><strong>James:</strong> Everyone’s dunking on CRM AI, but the technology has genuinely changed. Automatic data capture, conversation intelligence, predictive scoring — these aren’t demos anymore. Attio’s AI is embedded in the data model, not bolted on. The enrichment and workflow automation are real.</p>

<p><strong>Lena:</strong> They work in controlled environments. Salesforce’s Agentforce has a 77% failure rate. HubSpot’s Breeze can only see data inside HubSpot — not Slack, not Google Docs, not the dozen tools your team actually uses. An AI that sees one-tenth of your work context isn’t intelligence. It’s a parlor trick with good branding.</p>

<p><strong>Alex:</strong> Attio’s AI is real but shallow. Useful for data enrichment, sluggish for lead scoring, absent for predictive analytics. No deal risk scoring. No revenue forecasting. Zoho’s Zia runs on open-source models and struggles with anything off-script. Pipedrive’s AI is advisory at best. Dynamics 365 Copilot is region-locked. The honest assessment across all six: no major CRM has shipped AI that fundamentally changes how a salesperson works. They’ve shipped AI features that justify price increases. That’s different.</p>

<p><strong>James:</strong> Which is <em>exactly</em> why there’s an opportunity. The technology exists. The incumbents can’t deploy it because their architectures won’t support it. A new entrant building AI-native from day one, with access to the full communication stack — email, calendar, Slack, calls — could deliver what the incumbents only promise. That’s not a critique. That’s an investment thesis.</p>

<h3 id="the-simplicity-paradox-all-four-weigh-in">The Simplicity Paradox: All Four Weigh In</h3>

<p><strong>Lena:</strong> Every CRM that claims simplicity is lying. Zoho prices at $14/user and delivers endless tabs. Pipedrive looks simple because it’s incomplete — no marketing, no support, no branching logic. Attio feels modern but trades legacy complexity for blank-canvas complexity. Nobody has cracked the design problem of making a genuinely powerful CRM that a rep can master in a day.</p>

<p><strong>Priya:</strong> I don’t want <em>simple</em>. I want <em>opinionated</em>. Ship me a pipeline that works. Ship me automation templates that match how people actually sell. Let me customize later. Attio gives me a blank canvas and says “build anything.” I don’t want to build anything. I want to sell.</p>

<p><strong>Alex:</strong> The best products I’ve ever seen aren’t simple or complex — they’re opinionated. Strong defaults, overridable when needed. Every CRM in this market either makes no choices (Attio, Salesforce) or makes the wrong ones (Pipedrive capping features, HubSpot gating reporting).</p>

<p><strong>Lena:</strong> That’s the product principle. Opinionated defaults, not blank canvases. Not infinite options.</p>

<pre><code class="language-mermaid">graph TD
    subgraph Paradox["The Simplicity Paradox"]
        A["Zoho&lt;br/&gt;Low price, not low complexity&lt;br/&gt;Legacy UI, steep learning curve"]
        B["Pipedrive&lt;br/&gt;Simple appearance, incomplete product&lt;br/&gt;Missing core features"]
        C["Attio&lt;br/&gt;Modern UX, still hard to use&lt;br/&gt;Blank canvas overwhelm"]
    end
    A --&gt; D["Nobody has solved:&lt;br/&gt;Powerful + Simple + Complete"]
    B --&gt; D
    C --&gt; D
    style D fill:#2ecc71,stroke:#27ae60,stroke-width:3px,color:#fff
</code></pre>

<h3 id="priya-challenges-alex-complexity-isnt-the-disease--its-the-symptom">Priya Challenges Alex: “Complexity Isn’t the Disease — It’s the Symptom”</h3>

<p><strong>Priya:</strong> Alex, you keep talking about complexity like it’s a design choice. It’s not. Salesforce is complex because businesses are complex. The question isn’t “can we make CRM simple?” — it’s “can we make the <em>right things</em> simple while keeping the power available?”</p>

<p><strong>Alex:</strong> That’s exactly what Salesforce tells itself. “Our customers need this complexity.” No — your <em>consultants</em> need this complexity. I watched a 200-person company spend $150K configuring Salesforce to do what Pipedrive does out of the box, plus three things they actually needed. That’s a $150K tax on three features.</p>

<p><strong>Priya:</strong> Fair. But Pipedrive couldn’t do those three things, and that’s why we left. The issue isn’t that Salesforce is complex — it’s that there’s no middle ground between “too simple to be useful” and “too complex to be usable.” That middle ground is the entire opportunity.</p>

<hr />

<h2 id="the-verdict-is-there-room-for-a-new-player">The Verdict: Is There Room for a New Player?</h2>

<p>All four panelists converged: <strong>unequivocally yes.</strong> And the gap is more specific than “better CRM.”</p>

<p><strong>James</strong> framed the investment thesis: “The target is the individual sales rep and frontline manager at 10-200 person companies. PE consolidation is creating refugees from Pipedrive. HubSpot’s pricing cliff is creating orphans at growth stage. AI maturity makes automatic data capture and intelligent next-actions finally viable. And the mobile-first generation entering sales roles will not accept software designed for 2008 desktops. This market is being <em>created</em> right now.”</p>

<p><strong>Priya</strong> defined the experience: “It should feel like a fast, intelligent assistant on your phone. A CRM that knows who I talked to, what was said, and what to do next — without me typing anything. Less enterprise software, more the best consumer app you’ve ever used, but for selling.”</p>

<p><strong>Alex</strong> defined the anti-pattern: “Don’t try to be Salesforce. Don’t build a platform. Don’t create an ecosystem. Don’t serve IT departments. Build for the human who uses it daily and currently resents it. The companies that fail will try to creep toward complexity because enterprise buyers wave big checks. That’s what happened to every CRM in this study.”</p>

<p><strong>Lena</strong> defined the moment: “The generation that grew up on Instagram and TikTok will not accept software that feels like it was designed in 2008. The first CRM that feels as good as a consumer app wins the next generation of sellers.”</p>

<pre><code class="language-mermaid">flowchart TD
    A[Sales Rep Opens App] --&gt; B{What does the AI surface?}
    B --&gt; C["Meeting brief:&lt;br/&gt;Who is this person,&lt;br/&gt;last interaction, open items"]
    B --&gt; D["Deal risk alert:&lt;br/&gt;No contact in 12 days,&lt;br/&gt;competitor mentioned"]
    B --&gt; E["Next best action:&lt;br/&gt;Follow up on proposal&lt;br/&gt;sent 3 days ago"]
    C --&gt; F["Rep takes action with context"]
    D --&gt; F
    E --&gt; F
    F --&gt; G["App auto-logs outcome&lt;br/&gt;from email and calendar sync"]
    G --&gt; H["Manager sees pipeline&lt;br/&gt;without rep doing data entry"]
    style H fill:#4CAF50,color:#fff
    style A fill:#2196F3,color:#fff
</code></pre>

<hr />

<h2 id="what-simple-fast-ai-first-actually-means">What “Simple, Fast, AI-First” Actually Means</h2>

<p>The panel translated the research into six non-negotiable product principles — less a feature list, more a set of constraints that define what this product must be and, equally important, what it must refuse to become.</p>

<table>
  <thead>
    <tr>
      <th>Principle</th>
      <th>What It Means</th>
      <th>Who Gets It Wrong Today</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Zero-entry data capture</strong></td>
      <td>Email, calendar, calls sync automatically. Relationship graph from communication, not forms.</td>
      <td>Everyone — Salesforce needs manual entry, Attio partially solves it</td>
    </tr>
    <tr>
      <td><strong>AI that works on Day 1</strong></td>
      <td>Meeting briefs, follow-up timing, deal risk. Not six-month forecasting models.</td>
      <td>Salesforce (needs clean data), HubSpot (walled garden AI)</td>
    </tr>
    <tr>
      <td><strong>Mobile-first UX</strong></td>
      <td>Core workflows faster on phone than desktop. Phone is the primary interface.</td>
      <td>Attio (no real app), Zoho (terrible app), Dynamics (desktop-only)</td>
    </tr>
    <tr>
      <td><strong>Opinionated defaults</strong></td>
      <td>Ships with pipeline, reports, and automation that work out of the box.</td>
      <td>Attio (blank canvas), Salesforce (requires consultants)</td>
    </tr>
    <tr>
      <td><strong>Transparent flat pricing</strong></td>
      <td>One price, per user, per month. No add-ons, no contact scaling, no lock-in.</td>
      <td>Pipedrive (add-on hell), HubSpot (contact-based scaling)</td>
    </tr>
    <tr>
      <td><strong>Cross-stack AI vision</strong></td>
      <td>Sees Slack, email, calendar, Zoom — not just CRM data. Twenty deep integrations, not 2,000 shallow ones.</td>
      <td>HubSpot (walled garden), every incumbent platform</td>
    </tr>
  </tbody>
</table>

<hr />

<h2 id="where-the-panel-landed">Where the Panel Landed</h2>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Opening Position</th>
      <th>Final Verdict</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Alex</strong></td>
      <td>“CRM complexity is a consulting industry disguised as software”</td>
      <td>“The gap is real — opinionated defaults plus AI-native architecture is the formula”</td>
    </tr>
    <tr>
      <td><strong>Priya</strong></td>
      <td>“I’ve migrated three times and I’m still unhappy”</td>
      <td>“Build it for the rep, not the buyer. Phone-first. I’d switch tomorrow.”</td>
    </tr>
    <tr>
      <td><strong>James</strong></td>
      <td>“The market map shows a clear structural gap”</td>
      <td>“PE refugees + HubSpot orphans + AI maturity = investable thesis right now”</td>
    </tr>
    <tr>
      <td><strong>Lena</strong></td>
      <td>“Every CRM is designed for the wrong person”</td>
      <td>“Consumer-grade design meets sales workflow. First one to ship it wins.”</td>
    </tr>
  </tbody>
</table>

<hr />

<p><em>The CRM market is an $80+ billion industry built on a broken premise: give salespeople a database to fill out, and good things will happen. They won’t. They never did.</em></p>

<p><em>Four people who started from radically different positions — an architect, a customer, an investor, and a designer — converged on the same gap, the same target user, and the same product principles. That convergence is itself a signal. When four independent lenses point to the same thing, from first principles, it’s worth listening.</em></p>

<p><em>The research is clear. The gap is real. The only question is who builds it first.</em></p>]]></content><author><name>Ernest &amp; Melanie</name></author><summary type="html"><![CDATA[The CRM Racket: Why Every Major Platform Is Failing Sales Teams What happens when a Salesforce architect, a startup growth lead, a VC investor, and a product designer tear apart the six CRMs that define the market — and ask whether anyone should build a seventh]]></summary></entry><entry><title type="html">Week 1: Cleo &amp;amp; You</title><link href="https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/27/week-1-finding-the-problem/" rel="alternate" type="text/html" title="Week 1: Cleo &amp;amp; You" /><published>2026-02-27T00:00:00+00:00</published><updated>2026-02-27T00:00:00+00:00</updated><id>https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/27/week-1-finding-the-problem</id><content type="html" xml:base="https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/27/week-1-finding-the-problem/"><![CDATA[<h1 id="cleo--you">Cleo &amp; You</h1>
<p><strong>Your AI Onboarding &amp; Sales Enablement Assistant</strong>
Week 1 Discovery Deliverable • ProductBC Build-a-Thon 2026</p>

<hr />

<h2 id="week-1-summary">Week 1 Summary</h2>

<p>This week we attended Workshop 1 on Product Discovery and used the frameworks introduced to define our product idea, Cleo — an AI-powered onboarding and sales enablement assistant. We identified our core problem, defined our target user segments, crafted our initial hypothesis, built our first Opportunity Solution Tree, and mapped our key assumptions across desirability, viability, feasibility, and usability. We also developed an interview guide to kick off user research next week.</p>

<p>The biggest takeaway we implemented was the hypothesis-driven discovery approach — specifically the structure “We believe [user segment] experiences [problem] when [context] because [underlying cause].” Instead of jumping to solutions, we forced ourselves to articulate the root cause before thinking about how to solve it. We also applied the Opportunity Solution Tree to connect our desired outcome to concrete opportunities and testable experiments.</p>

<p>The most important thing we learned: the most critical question in product development is “Should we build this?” before “Can we build this?” Even though we’ve both lived this problem firsthand, we still had assumptions we hadn’t made explicit. The assumption mapping exercise surfaced that the riskiest thing isn’t whether we can build Cleo — it’s whether sales teams will actually trust and consistently use an AI system for onboarding, rather than defaulting to asking a colleague.</p>

<p>To pressure-test our thinking early, we also used AI agents to simulate expert perspectives during our research. We created four personas: Maya, a Principal Designer at Apple; Dmitri, a Staff Engineer at Tesla; Richard, a Buffett School investor; and Dr. Sarah, a PhD in Educational Psychology. Each agent challenged our assumptions from a different angle — design, technical feasibility, business viability, and learning science — helping us stress-test Cleo before talking to real humans.</p>

<p><a href="/buildathon-mar-26/blog/2026/02/27/cleo-debate-four-voices/">Read the full debate → Cleo &amp; You: A Debate in Four Voices</a></p>

<p><img src="/buildathon-mar-26/assets/images/week-1/agents-claude.png" alt="AI agents running in Claude" /></p>

<hr />

<h2 id="1-problem-statement">1. Problem Statement</h2>

<blockquote>
  <p><em>Sales and customer-facing teams at growing companies lose weeks of productivity during onboarding and product updates because institutional knowledge lives inside specific people and static documents — not in a system that is always available, interactive, and adaptable to how each person learns.</em></p>
</blockquote>

<p>This problem was identified through direct experience at two different companies:</p>

<table>
  <thead>
    <tr>
      <th><strong>VoPay (Fintech)</strong></th>
      <th><strong>Fresh Tracks (Travel)</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>A fast-growing fintech startup managing multiple high-priority projects simultaneously. Sales onboarding relied on whiteboard sessions and one-on-one demos from a few key people. New hires had access to dozens of documents and SOPs but still couldn’t retain product knowledge effectively without live interaction.</td>
      <td>A travel company where sales reps engage customers in live phone conversations about travel experiences. Onboarding is slow and dependent on shadowing experienced reps — an approach that does not scale and limits how much new hires can practice before speaking with real customers.</td>
    </tr>
  </tbody>
</table>

<p>While the context differs, the root problem is identical: knowledge is locked in people and static documents, and new team members cannot learn, practice, or get answers independently.</p>

<hr />

<h2 id="2-user-segments">2. User Segments</h2>

<table>
  <thead>
    <tr>
      <th><strong>Primary: New Sales Hires</strong></th>
      <th><strong>Primary: Existing Sales Team</strong></th>
      <th><strong>Secondary: Sales Managers / Admins</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Joining a company with complex products or services. Needs to get up to speed fast. Learns best through interaction, not passive reading. Frustrated by waiting for an available trainer.</td>
      <td>Preparing for product launches, events, or new service lines. Wants to practice pitch variations without consuming a colleague’s time. Needs quick refreshers on product details before calls.</td>
      <td>Responsible for keeping knowledge current. Currently the bottleneck for all training. Wants to offload repetitive onboarding tasks and ensure consistency across the team.</td>
    </tr>
  </tbody>
</table>

<hr />

<h2 id="3-initial-hypothesis">3. Initial Hypothesis</h2>

<blockquote>
  <p><em>“We believe that new sales hires and existing sales team members at growing companies experience slow ramp-up, inconsistent product knowledge, and lack of independent practice opportunities when onboarding or preparing for product updates, because institutional knowledge is locked inside specific people and static documents that cannot be accessed on-demand, do not adapt to individual learning styles, and do not allow safe, independent practice.”</em></p>
</blockquote>

<table>
  <thead>
    <tr>
      <th><strong>Specific</strong></th>
      <th><strong>Falsifiable</strong></th>
      <th><strong>Consequential</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Named user segments, clear context (onboarding + product updates), and identified root cause.</td>
      <td>If interviews show teams don’t actually feel this pain, or if they have adequate solutions already, this hypothesis is wrong.</td>
      <td>If wrong, the entire product direction changes — making this the most important assumption to test first.</td>
    </tr>
  </tbody>
</table>

<p><img src="/buildathon-mar-26/assets/images/week-1/laptop-finger.png" alt="Image showing the landing website of Cleo &amp; You" /></p>

<hr />

<h2 id="4-opportunity-solution-tree">4. Opportunity Solution Tree</h2>

<p>The OST maps the path from our desired outcome to testable experiments. We start with the outcome, then identify the user pain points (opportunities) that, if addressed, would achieve it.</p>

<hr />

<p><strong>DESIRED OUTCOME</strong></p>

<p>Sales and customer-facing team members reach full productivity and confidence in their role in 50% less time, without depending on a specific person being available to train them.</p>

<hr />

<p><strong>OPPORTUNITIES (User Pain Points &amp; Needs)</strong></p>

<table>
  <thead>
    <tr>
      <th><strong>Opportunity 1: Knowledge locked in people</strong></th>
      <th><strong>Opportunity 2: No safe space to practice</strong></th>
      <th><strong>Opportunity 3: Content goes stale fast</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>New hires can only learn from specific people who are not always available. When that person is busy or leaves, knowledge disappears.</td>
      <td>Sales reps have no way to rehearse calls or test product knowledge independently without consuming a colleague’s or manager’s time.</td>
      <td>Existing documentation (SOPs, product guides, decks) becomes outdated and requires someone to manually update and re-communicate changes.</td>
    </tr>
  </tbody>
</table>

<hr />

<p><strong>SOLUTIONS (Ideas to Address Opportunities)</strong></p>

<table>
  <thead>
    <tr>
      <th><strong>Solution A: AI Knowledge Base</strong></th>
      <th><strong>Solution B: AI Call Simulator</strong></th>
      <th><strong>Solution C: Living Knowledge Hub</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Cleo ingests company documents, SOPs, product guides, and call recordings. Team members can ask questions in natural language and get instant, accurate answers in their preferred format.</td>
      <td>Cleo simulates real customer calls using the company’s actual customer profiles, tone, and objection patterns learned from past recordings. Reps can practice anytime without needing a colleague.</td>
      <td>Any team member can upload new documents, recordings, or updates to Cleo. The knowledge base stays current automatically, removing the single point of failure for knowledge transfer.</td>
    </tr>
  </tbody>
</table>

<hr />

<p><strong>ASSUMPTION TESTS (Experiments to Validate)</strong></p>

<table>
  <thead>
    <tr>
      <th><strong>Test 1: Desirability</strong></th>
      <th><strong>Test 2: Feasibility</strong></th>
      <th><strong>Test 3: Viability</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Interview 8–10 sales managers and new hires across 3+ industries: Would they trust and regularly use an AI system for onboarding? What would make them stop using it?</td>
      <td>Build a prototype that ingests 2–3 real company documents and responds accurately to questions. Test: does Cleo answer correctly 80%+ of the time without hallucinating?</td>
      <td>Run a pricing test with 5 companies: Would they pay $X/month per seat? What’s the minimum viable feature set before they’d pay? Target CAC &lt; 3 months of revenue.</td>
    </tr>
  </tbody>
</table>
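
<p>The four tables above describe a single tree. As a visual summary — node labels abbreviated from the sections above — the OST can be sketched as:</p>

<pre><code class="language-mermaid">graph TD
    O["Outcome:&lt;br/&gt;Full productivity in 50% less time,&lt;br/&gt;without depending on one trainer"]
    O --&gt; P1["Opportunity 1:&lt;br/&gt;Knowledge locked in people"]
    O --&gt; P2["Opportunity 2:&lt;br/&gt;No safe space to practice"]
    O --&gt; P3["Opportunity 3:&lt;br/&gt;Content goes stale fast"]
    P1 --&gt; SA["Solution A:&lt;br/&gt;AI Knowledge Base"]
    P2 --&gt; SB["Solution B:&lt;br/&gt;AI Call Simulator"]
    P3 --&gt; SC["Solution C:&lt;br/&gt;Living Knowledge Hub"]
    SA --&gt; T1["Test 1: Desirability&lt;br/&gt;8–10 interviews, 3+ industries"]
    SB --&gt; T2["Test 2: Feasibility&lt;br/&gt;80%+ answer accuracy"]
    SC --&gt; T3["Test 3: Viability&lt;br/&gt;Pricing test with 5 companies"]
    style O fill:#2196F3,color:#fff
</code></pre>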

<hr />

<h2 id="5-assumption-mapping">5. Assumption Mapping</h2>

<p><strong>Solution being evaluated:</strong> Cleo — an AI system that ingests company knowledge (documents, videos, call recordings) and enables team members to learn and practice independently through natural conversation and call simulation.</p>

<table>
  <thead>
    <tr>
      <th><strong>DESIRABILITY — Do users want this?</strong></th>
      <th><strong>VIABILITY — Does the business model work?</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>• Sales teams experience onboarding as a real, painful problem</td>
      <td>• Companies will pay a monthly SaaS fee per seat or per team</td>
    </tr>
    <tr>
      <td>• Teams will trust an AI system to teach product knowledge</td>
      <td>• CAC will be low enough to reach profitability within 12 months</td>
    </tr>
    <tr>
      <td>• Sales reps will actually use a simulator to practice (vs. skipping it)</td>
      <td>• The problem is painful enough that churn will be low</td>
    </tr>
    <tr>
      <td>• Managers will champion adoption of Cleo to their teams</td>
      <td>• Market is large enough (any company with a sales team)</td>
    </tr>
    <tr>
      <td><strong>FEASIBILITY — Can we build this?</strong></td>
      <td><strong>USABILITY — Can users figure it out?</strong></td>
    </tr>
    <tr>
      <td>• We can reliably ingest documents, videos, and audio recordings</td>
      <td>• Non-technical users can upload content without IT help</td>
    </tr>
    <tr>
      <td>• AI can accurately represent company tone from call recordings</td>
      <td>• New hires can self-onboard with Cleo without a walkthrough</td>
    </tr>
    <tr>
      <td>• System won’t hallucinate or give dangerously wrong product info</td>
      <td>• Sales reps understand how to start a practice call simulation</td>
    </tr>
    <tr>
      <td>• Build is achievable within the 5-week buildathon timeline</td>
      <td>• Admins can update the knowledge base in under 10 minutes</td>
    </tr>
  </tbody>
</table>

<h3 id="riskiest-assumption-to-test-first">Riskiest Assumption to Test First</h3>

<blockquote>
  <p><strong>Assumption:</strong> Sales teams will trust an AI system enough to actually use it for onboarding and practice — and won’t just default back to asking a colleague.</p>

  <p><strong>Why it’s riskiest:</strong> If teams don’t trust Cleo’s accuracy or feel it’s impersonal, adoption will fail regardless of how well we build it. Trust is the prerequisite for everything else.</p>

  <p><strong>How we’ll test it:</strong> Conduct 8–10 story-based interviews with sales managers and recent new hires across 3+ industries. Ask about the last time they onboarded someone or were onboarded. Listen for pain around availability, consistency, and knowledge gaps.</p>
</blockquote>

<p><img src="/buildathon-mar-26/assets/images/week-1/latop-zed.png" alt="Melanie working on Cleo at her laptop" /></p>

<hr />

<h2 id="6-interview-guide">6. Interview Guide</h2>

<p><strong>Methodology:</strong> Story-based interviewing — asking about specific past experiences, not hypotheticals. Conducted by both team members together where possible.</p>

<h3 id="opening-set-context">Opening (set context)</h3>
<ul>
  <li>Tell me about your role and how long you’ve been at your current company.</li>
  <li>How many people have joined your sales team in the last 12 months?</li>
</ul>

<h3 id="story-based-questions-core-discovery">Story-Based Questions (core discovery)</h3>
<ul>
  <li>Tell me about the last time someone new joined your sales team. Walk me through what their first two weeks looked like.</li>
  <li>What was the hardest part of that onboarding process — for them, and for you?</li>
  <li>Tell me about a time when a new hire struggled with product knowledge during a real customer call. What happened?</li>
  <li>Think about the last product update or new service your team had to learn. How did that training happen? What worked and what didn’t?</li>
  <li>Tell me about the last time a sales rep needed a quick answer during or before a call. Where did they go? How long did it take?</li>
</ul>

<h3 id="depth-questions-dig-into-the-pain">Depth Questions (dig into the pain)</h3>
<ul>
  <li>How much time do you personally spend on onboarding or answering repetitive product questions each week?</li>
  <li>If you could wave a magic wand and fix one thing about how your team learns, what would it be?</li>
  <li>Have you tried any tools or systems to solve this? What happened?</li>
</ul>

<h3 id="solution-probing-test-our-hypothesis">Solution Probing (test our hypothesis)</h3>
<ul>
  <li>If there was a system that could answer product questions instantly and let reps practice calls on their own — what would make you trust it enough to actually use it?</li>
  <li>What would make you stop using it?</li>
</ul>

<hr />

<h2 id="7-next-steps--week-2">7. Next Steps — Week 2</h2>

<ul>
  <li><strong>Conduct 6–8 interviews with sales managers and new hires across at least 3 industries</strong></li>
  <li>Complete an Interview Snapshot within 15 minutes of each interview</li>
  <li>Begin synthesizing: Interviews → Insights → Clusters → Themes → Opportunities</li>
  <li>Update the Opportunity Solution Tree based on what we learn</li>
  <li>Identify and prioritize the next riskiest assumption to test</li>
</ul>
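
<p>The synthesis step in the list above follows a simple left-to-right pipeline, sketched here:</p>

<pre><code class="language-mermaid">graph LR
    A[Interviews] --&gt; B[Insights]
    B --&gt; C[Clusters]
    C --&gt; D[Themes]
    D --&gt; E[Opportunities]
    style E fill:#2ecc71,color:#fff
</code></pre>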

<blockquote>
  <p><em>Note: AI personas are for preparation only. Real human interviews are the source of truth — we will not skip them.</em></p>
</blockquote>]]></content><author><name>Ernest &amp; Melanie</name></author><summary type="html"><![CDATA[Cleo &amp; You Your AI Onboarding &amp; Sales Enablement Assistant Week 1 Discovery Deliverable • ProductBC Build-a-Thon 2026]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://ebarronh.github.io/buildathon-mar-26/assets/images/week-1-finding-the-problem.png" /><media:content medium="image" url="https://ebarronh.github.io/buildathon-mar-26/assets/images/week-1-finding-the-problem.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Cleo &amp;amp; You: A Debate in Four Voices</title><link href="https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/27/cleo-debate-four-voices/" rel="alternate" type="text/html" title="Cleo &amp;amp; You: A Debate in Four Voices" /><published>2026-02-27T00:00:00+00:00</published><updated>2026-02-27T00:00:00+00:00</updated><id>https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/27/cleo-debate-four-voices</id><content type="html" xml:base="https://ebarronh.github.io/buildathon-mar-26/blog/2026/02/27/cleo-debate-four-voices/"><![CDATA[<h1 id="cleo--you-a-debate-in-four-voices">Cleo &amp; You: A Debate in Four Voices</h1>
<h3 id="what-happens-when-an-apple-designer-a-tesla-engineer-a-buffett-school-investor-and-a-learning-scientist-tear-apart-the-same-startup-pitch--raw-unfiltered-and-without-mercy">What happens when an Apple designer, a Tesla engineer, a Buffett-school investor, and a learning scientist tear apart the same startup pitch — raw, unfiltered, and without mercy</h3>

<hr />

<h2 id="the-setup">The Setup</h2>

<p><em>This experiment was part of our <a href="/buildathon-mar-26/blog/2026/02/27/week-1-finding-the-problem/">Week 1 discovery process</a>. We used AI agents to simulate expert perspectives and stress-test our product hypothesis before conducting real user interviews.</em></p>

<p><strong>The pitch:</strong> <em>“Cleo &amp; You: a startup looking to build an AI tool to re-invent how companies coach their employees internally. It is meant to fully understand the business and tech features to then provide personalized learnings for different types of employees. Want to read? Done. Video? Also done. Audiobook, done. We could even simulate a sales call which involves soft skills, even as much as having a VR experience for you to practice.”</em></p>

<p><strong>The panel:</strong></p>

<ul>
  <li><strong>Maya Chen</strong> — Principal Designer, Apple. 12 years shipping products used by hundreds of millions. Brutal about design theater.</li>
  <li><strong>Dmitri Volkov</strong> — Staff Software Engineer, Tesla. Distributed systems, ML infrastructure, zero patience for vaporware.</li>
  <li><strong>Richard Park</strong> — Investor, Buffett school. 40+ startup investments across SaaS and edtech. Allergic to pitches without moats.</li>
  <li><strong>Dr. Sarah Torres</strong> — PhD Educational Psychology. 15 years coaching C-suite executives and building Fortune 500 learning programs. Done being polite about edtech snake oil.</li>
</ul>

<p>Two rounds. No hedging. No corporate speak. This is what they found.</p>

<hr />

<h2 id="round-1-opening-fire">Round 1: Opening Fire</h2>

<h3 id="maya-chen--a-swiss-army-knife-nobody-asked-for--but-the-blade-might-be-sharp">Maya Chen | “A Swiss Army Knife Nobody Asked For — But the Blade Might Be Sharp”</h3>

<p>Maya opened with structural skepticism and one genuine flicker.</p>

<p>The structural problem: the corporate learning space is already a graveyard of platforms that promise personalization and deliver glorified Netflix queues nobody watches. BetterUp, CoachHub, Cornerstone, Sana, Docebo — the list is endless. And BetterUp already has an AI coach. “The pitch says ‘fully understand the business and tech features.’ That is an enormous claim. At Apple, we spent years just getting internal tools to understand team-specific context, and we had the luxury of controlling the entire ecosystem.”</p>

<p>The feature list — read, video, audio, VR, simulation, business understanding — is not a product, it’s six products sharing a pitch deck. “Everything is the enemy of great design.”</p>

<p>The flicker: the simulated sales call. “Soft skills are embodied — you learn them by doing, by feeling the awkwardness, by recovering from a bad answer in real time. If Cleo can create a practice environment that feels psychologically safe enough to fail in, where the AI gives specific and actionable feedback, that’s a product I’d want to design.”</p>

<p>Her three make-or-break questions: Is this one coherent experience or five apps duct-taped together? What does personalization actually look like at 9:47am between meetings? And where’s the Duolingo-equivalent feedback loop — the thing that makes the user <em>feel</em> themselves growing?</p>

<h3 id="dmitri-volkov--five-hard-products-in-a-trenchcoat">Dmitri Volkov | “Five Hard Products in a Trenchcoat”</h3>

<p>Dmitri performed a feature-by-feature technical autopsy. His ratings:</p>

<table>
  <thead>
    <tr>
      <th>Feature</th>
      <th>Verdict</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>“Fully understand the business”</td>
      <td>Hard, borderline impossible at implied fidelity</td>
    </tr>
    <tr>
      <td>Text-based personalized learning</td>
      <td>Moderate — solvable but incumbents already do it</td>
    </tr>
    <tr>
      <td>Video generation</td>
      <td>Hard — components exist, integrated pipeline is bleeding edge</td>
    </tr>
    <tr>
      <td>Audiobook</td>
      <td>Easy — it’s an API call</td>
    </tr>
    <tr>
      <td>Sales call simulation</td>
      <td>Hard but tractable — prior art exists, validated market</td>
    </tr>
    <tr>
      <td>VR experience</td>
      <td>Expensive as hell — separate engineering discipline, Series C feature</td>
    </tr>
  </tbody>
</table>

<p>The hardest engineering problem: knowledge ingestion. Everything downstream — video, audio, simulation — depends on “fully understanding the business.” Get it wrong and every output layer teaches employees confidently wrong information. For a training product, the accuracy bar is not “low hallucination” — it’s near-zero.</p>

<p>Real v1: a Slack chatbot + text learning paths + basic roleplay. 6-9 months, 8-12 engineers. “The pitch as described is a 3-year, $50M+ roadmap being sold as a product.”</p>

<p>His prescription: kill VR entirely, focus on knowledge ingestion + sales roleplay.</p>

<h3 id="richard-park--hard-pass--a-feature-list-wearing-a-startup-costume">Richard Park | “Hard Pass — A Feature List Wearing a Startup Costume”</h3>

<p>Richard arrived with a market map and a verdict: the competitive landscape isn’t “room for one more,” it’s a knife fight in a phone booth.</p>

<p>LinkedIn Learning: AI coaching, conversational chatbots, personalized paths, roleplay — shipped 2024-2025. Yoodli: $60M raised, $300M+ valuation, doing exactly the sales simulation piece with Google and Snowflake as customers, 900% ARR growth. Hyperbound: 25,000+ users across 7,000+ companies. Docebo: full “AI-First” platform with scenario-based simulation and AI-generated video.</p>

<p>“What’s stopping LinkedIn from doing this tomorrow? Nothing. They’re already doing it today.”</p>

<p>His killer question — “Why not LinkedIn?” — and five demands before writing a check: paying enterprise pilots, a proprietary data flywheel, proof of switching cost creation, unit economics by cohort, and a specific answer to his question.</p>

<h3 id="dr-sarah-torres--a-feature-list-masquerading-as-a-learning-science-strategy">Dr. Sarah Torres | “A Feature List Masquerading as a Learning Science Strategy”</h3>

<p>Sarah unloaded the scientific indictment. Three counts:</p>

<p><strong>Count 1: Format personalization is built on a debunked theory.</strong> The multi-format choice (read/video/audio) reflects discredited learning styles research. Multiple meta-analyses confirm no meaningful benefit to matching instruction format to learner preference. What matters is matching modality to <em>content type</em>, not learner preference. “Cleo has built their pitch around a discredited theory. That’s a red flag the size of Texas.”</p>

<p><strong>Count 2: Actual learning science is absent.</strong> No spaced repetition. No retrieval practice. No mastery gates. No adaptive sequencing. “The average LMS sees completion rates of 20-30%. Over half of registered e-learners never meaningfully engage. And here comes Cleo &amp; You with… more content, in more formats, with AI and VR sprinkled on top.”</p>

<p><strong>Count 3: They confuse content delivery with learning.</strong> “The problem with corporate learning is that it doesn’t produce learning. And nothing in this pitch tells me Cleo understands that.”</p>

<p>One concession: the simulation. “81% of users find AI-driven simulations realistic and valuable. Immersive training shows 75% better knowledge retention than classroom methods.” But she wants structured debriefing, spaced intervals, mastery progression — none of which appear in the pitch.</p>

<hr />

<h2 id="round-2-the-debate-turns">Round 2: The Debate Turns</h2>

<h3 id="dmitri-corrects-richard-linkedin-is-not-the-threat">Dmitri Corrects Richard: LinkedIn Is Not the Threat</h3>

<p>Dmitri investigated LinkedIn Learning’s actual technical architecture and delivered a precision correction. LinkedIn Learning cannot ingest Company X’s product documentation, sales playbooks, or internal processes. Their roleplay is generic workplace scenarios, not company-specific sales calls. Their entire model is a centralized content library — recommending from a catalog, not generating from proprietary data.</p>

<p>“There IS a real technical gap. LinkedIn would need to fundamentally rebuild their pipeline to do what Cleo is describing. They can’t easily pivot — it’s architecturally incompatible with their business model.”</p>

<p>He reframed the threat: “Yoodli is the threat. Not LinkedIn.”</p>

<p>Richard accepted the correction. He updated the competitive question from “Why not LinkedIn?” to “Why not Yoodli plus 18 months of product iteration?” But he held on the core concern: killing the LinkedIn objection doesn’t create a moat — it narrows the competitive field to players who are already well-funded and well-positioned.</p>

<h3 id="maya-vs-sarah-the-friction-wars">Maya vs. Sarah: The Friction Wars</h3>

<p>Sarah’s fluency illusion argument threatened to invalidate Maya’s design instincts entirely. Robert Bjork’s research: when learning <em>feels</em> smooth, learners rate it as highly effective — but retention craters. The frictionless experience is often the pedagogically fraudulent one.</p>

<p>Maya didn’t retreat. She reframed: <strong>“Design’s job is to remove the wrong friction while preserving the right friction.”</strong></p>

<p>Her gym analogy: a great gym removes every friction that isn’t the workout — equipment confusion, wayfinding, locker room chaos — while keeping the weights heavy. The design never makes the dumbbells lighter. “The moment of practice should be hard. Everything around it should be beautiful.”</p>

<p>Sarah responded with a precision counterpoint: gamification that rewards <em>completion behaviors</em> (finishing modules, maintaining streaks) triggers the fluency illusion. Gamification that rewards <em>struggle behaviors</em> (attempting harder scenarios, improving over baseline) produces real motivation tied to real growth. “Your streak counter should measure ‘days you attempted something harder than last time,’ not ‘days you opened the app.’”</p>

<p>Her verdict on Duolingo: half-brilliant, half-trap. The spaced repetition engine underneath is real science, properly implemented. But the gamification layer — tuned for daily active users, not fluent speakers — is the cautionary tale. “Build the science right. Let design wrap the struggle in momentum, not hide it.”</p>

<h3 id="richard-moves-the-datadog-thesis">Richard Moves: The Datadog Thesis</h3>

<p>Richard’s hardest challenge produced the debate’s clearest strategic framework. Dmitri had shown that the simulation moat evaporates as models commoditize — the bare simulation engine is replicable. But the <em>organizational intelligence accumulated through simulation usage</em> is not.</p>

<p>Month 1-3: 500 reps simulate sales calls. Every call generates structured data — which objections stumped them, where confidence dropped, which product features couldn’t be explained.</p>

<p>Month 12: the system knows which practice patterns predict deal closure at <em>this</em> company, which skills degrade over time and need reinforcement, the optimal training sequence for <em>this</em> company’s new hires. A competitor starting at Month 0 has none of this.</p>

<p>Richard processed this through the investor’s moat lens and surfaced the innovator’s dilemma: LinkedIn and Docebo optimize for engagement metrics (time on platform, seats utilized). Genuine outcome measurement might <em>cannibalize</em> their KPIs — if training becomes more efficient, customers need fewer seat-hours. “They’re structurally disincentivized from building this. Same reason Netflix didn’t build TikTok — the better product for the user was a worse product for their business model.”</p>

<p>He proposed the Datadog playbook: enter as the outcome measurement layer (“we’ll tell you whether your current training is working”), become indispensable, then expand into coaching module by module. His verdict moved from “hard pass, don’t call me” to “conditional pass — come back with the right pitch and I’ll listen for 30 minutes.”</p>

<h3 id="sarah-reframes-the-gtm-wrong-door-wrong-buyer">Sarah Reframes the GTM: Wrong Door, Wrong Buyer</h3>

<p>The debate’s most strategically significant insight came from Sarah identifying that the entire go-to-market was aimed at the wrong person.</p>

<p>“The CHRO buys for compliance, culture, and optics. Their success metric is ‘we have a program,’ not ‘the program works.’ They buy LinkedIn Learning the way a company buys a gym membership benefit — it checks a box, and nobody cares if employees actually use it.”</p>

<p>The right buyer: the CRO or VP of Sales. “84% of sales reps hit quota when their company uses best-in-class sales enablement. Top onboarding programs get new reps productive 3.4 months faster — that’s pipeline in the door.” The CRO measures in close rates and ramp time, not completion badges.</p>

<p>The structural implication: LinkedIn will never make this pivot. LinkedIn Learning sits in the HR/L&amp;D budget. It is organizationally incapable of becoming a revenue-aligned tool owned by sales leadership. That’s not a strategic gap — it’s an architectural one, and it’s permanent.</p>

<h3 id="mayas-product-vision-the-film-room">Maya’s Product Vision: The Film Room</h3>

<p>Maya synthesized the design answer to what would categorically differentiate Cleo from Yoodli, Second Nature, and Hyperbound.</p>

<p>All current simulation tools operate on the same paradigm: practice → score → repeat. The driving test model.</p>

<p>What none are doing: <strong>moment-level coaching with replay.</strong> After a 6-minute simulation, instead of a score, you get a timeline. Three decision points where the conversation could have gone differently. You tap minute 2:14 — where the prospect said “We’re happy with our current vendor” and you listed features. You see what you said (with audio), what the prospect was likely thinking, and 2-3 alternative responses with predicted outcome branches. A “replay just this moment” button. You practice that 30-second exchange four different ways.</p>

<p>“This is the film room model, not the driving test model. It’s how elite athletes actually improve — not by running the whole game again, but by rewatching specific plays, understanding the micro-decision, and drilling the alternative. No one in the market is doing this with the design sophistication it deserves.”</p>

<p>Layer Sarah’s spaced repetition: the system tracks which <em>types</em> of moments you struggle with across simulations. You keep folding under price pressure objections. Next Tuesday, without being asked, Cleo surfaces a 3-minute micro-practice: just a price pressure scenario, just the 30-second pivot point. Spaced repetition applied not to flashcards but to <em>conversational decision points.</em></p>
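<p><em>A toy sketch of what that scheduling could look like. The doubling rule below is our simplification for illustration, not Cleo’s actual algorithm; real schedulers (SM-2 and its descendants) are considerably more nuanced.</em></p>

```python
# Toy spaced-repetition scheduler for conversational decision points.
# The interval-doubling rule is an invented simplification, not Cleo's
# algorithm; real schedulers (SM-2 and descendants) track more state.
def next_drill(last_interval_days, succeeded):
    """Return days until the next micro-practice of a weak moment type.

    A failed attempt resets to tomorrow; a success doubles the interval,
    capped at 30 days so the skill still gets periodic reinforcement.
    """
    if not succeeded:
        return 1
    return min(last_interval_days * 2, 30)

# A rep who folds under price-pressure objections, then improves:
interval = 1
for outcome in [False, True, True, True]:  # one failure, three successes
    interval = next_drill(interval, outcome)
print(interval)  # → 8
```

<p><em>The unit being scheduled is not a flashcard but a 30-second pivot point — which is exactly the reframe the film room model makes possible.</em></p>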

<h3 id="sarahs-three-tier-model-the-architecture-of-a-real-company">Sarah’s Three-Tier Model: The Architecture of a Real Company</h3>

<p>Sarah evolved her “augment the coach” position into a three-tier model that resolves the hardest challenge the panel raised: most companies don’t <em>have</em> a coaching program to augment. They have a once-a-year performance review and maybe a LinkedIn Learning license.</p>

<ul>
  <li><strong>Tier 1 — AI IS the coach:</strong> Structured skill practice (sales, objection handling, presentations, product knowledge). Full stack. No human needed. This is where 90% of companies start because they have nothing. This is the v1.</li>
  <li><strong>Tier 2 — AI CREATES the coach:</strong> Platform data identifies which employees need development and in which areas. A lightweight human coaching marketplace for the specific needs the AI has quantified and proven. The AI does the diagnostic; the human does the therapy.</li>
  <li><strong>Tier 3 — AI AUGMENTS the coach:</strong> For companies with existing coaching programs, simulation data makes expensive human sessions more targeted. The coach walks into the 1:1 already knowing where to focus.</li>
</ul>

<p>“Most companies enter at Tier 1 because they have nothing. The platform generates data that reveals the need for Tier 2. That’s how you build a company, not just a product.”</p>

<h3 id="dmitris-v1-spec-the-salesforce-thesis">Dmitri’s v1 Spec: The Salesforce Thesis</h3>

<p>Dmitri produced the debate’s most actionable output: a concrete, scoped, technically feasible v1.</p>

<p>Instead of “fully understanding the business” through broad RAG over thousands of internal documents, start with one data source — Salesforce CRM. Salesforce data is <em>structured</em>. Win/loss patterns, competitor mentions in closed-lost reasons, product mix, rep performance by segment — these are database queries, not inference. The hallucination problem that plagues unstructured document RAG nearly disappears.</p>
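<p><em>A rough illustration of the point in plain Python, not actual Salesforce APIs: the rows and field names below are invented for the sketch, but they show why closed-lost analysis is counting, not inference.</em></p>

```python
from collections import Counter

# Sketch of "business understanding" as structured queries, not inference.
# These rows mimic a Salesforce Opportunity export; the field names
# ("stage", "loss_reason") and values are invented for illustration.
opportunities = [
    {"stage": "Closed Lost", "loss_reason": "Price"},
    {"stage": "Closed Lost", "loss_reason": "Competitor - Acme"},
    {"stage": "Closed Won",  "loss_reason": None},
    {"stage": "Closed Lost", "loss_reason": "Price"},
]

def top_loss_reasons(rows, n=3):
    """Rank loss reasons across closed-lost deals: a count, not a guess."""
    counts = Counter(
        r["loss_reason"]
        for r in rows
        if r["stage"] == "Closed Lost" and r["loss_reason"]
    )
    return counts.most_common(n)

print(top_loss_reasons(opportunities))  # → [('Price', 2), ('Competitor - Acme', 1)]
```

<p><em>Against a real Opportunity table the same aggregation is a single grouped query; no generative model sits in the loop, so there is nothing to hallucinate.</em></p>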

<p>Supplement with 15-20 hand-curated, human-validated documents: sales playbook, competitive battle cards, pricing sheet, product matrix, methodology framework. A manageable corpus where accuracy can be validated manually.</p>

<p>The result: simulation powered by <em>your</em> actual deal history. “A competitor starting at Month 0 has zero of this context. Cleo would be coaching reps to handle the exact objections from your actual closed-lost deals. That’s categorically different from generic roleplay.”</p>

<p>Technical estimate: 6-8 engineers, 5-7 months. A real, demonstrable, fundable v1.</p>

<p>On psychological safety — Maya argued that a Slack bot creates zero psychological safety for practice. Dmitri conceded fully and added the engineering reason: “Psychological safety isn’t a design luxury. It’s a data throughput constraint.” Reps practicing in a space where they perceive surveillance adopt at 10-15%. A genuinely safe environment hits 40-60%. A 3-4x difference in adoption is a 3-4x difference in data flywheel velocity.</p>

<hr />

<h2 id="closing-statements">Closing Statements</h2>

<p><strong>Maya Chen:</strong></p>
<blockquote>
  <p><em>“Cleo gets one thing right: the intuition that corporate learning should be something you</em> do<em>, not something you watch — and that’s a genuine design insight worth building on. What they get catastrophically wrong is pitching six products in a trench coat; ‘read, video, audio, VR, simulation, full business understanding’ isn’t a vision, it’s a panic attack masquerading as a pitch deck. If they want a real company, they must pick one blade — the simulation — build it as a film room with moment-level replay and a coach dashboard that makes the human 10x better, and ship it with enough design taste and learning science rigor that people practice because they</em> want <em>to, not because HR told them to.”</em></p>
</blockquote>

<p><strong>Dmitri Volkov:</strong></p>
<blockquote>
  <p><em>“Cleo gets one thing exactly right: the insight that company-specific knowledge — not generic courses — is the future of employee coaching, and that AI can finally make that economically viable. What they get catastrophically wrong is confusing a 5-year vision with a product — bolting together five distinct engineering disciplines and calling it a startup is how you burn $20M teaching nobody anything. Pick one wedge — Salesforce-powered sales simulation with real learning science — ship it in 6 months, let the organizational data flywheel compound, and earn the right to expand; that’s a company, everything else is a pitch deck.”</em></p>
</blockquote>

<p><strong>Richard Park:</strong></p>
<blockquote>
  <p><em>“Cleo &amp; You correctly identifies that enterprise coaching is broken and that companies are desperate to prove their $445B training spend actually changes behavior — that instinct is worth something. What they get catastrophically wrong is pitching a feature list in a market where LinkedIn, Yoodli, Docebo, and a dozen funded startups already ship every feature they described — no moat, no wedge, no awareness of the battlefield they’re walking into. If they want a real company, they must kill the Swiss Army knife, build the only AI coaching platform that measures actual behavioral change with clinical-grade rigor, and use that outcome data as a wedge to land enterprise contracts that no incumbent’s completion-rate dashboard can defend against.”</em></p>
</blockquote>

<p><strong>Dr. Sarah Torres:</strong></p>
<blockquote>
  <p><em>“Cleo gets one thing right: simulation-based practice is where learning science and AI capability genuinely converge, and the market is starving for it. What they get catastrophically wrong is confusing content delivery in multiple formats with personalization — that’s not learning science, it’s a debunked learning styles myth dressed in AI clothing, and building on that foundation will produce another beautifully designed platform that changes nothing. If they want a real company, they must kill the content jukebox, sell AI-powered deliberate practice with spaced repetition and mastery gates to revenue leaders who can measure the outcome in quota attainment, and stop pretending that choosing between a video and an audiobook is the revolution corporate learning needs.”</em></p>
</blockquote>

<hr />

<h2 id="key-findings">Key Findings</h2>

<p><strong>1. The pitch is a roadmap dressed as a product.</strong>
Every panelist reached this diagnosis independently. “Five products in a trenchcoat” (Dmitri). “Everything is the enemy of great design” (Maya). “A feature list wearing a startup costume” (Richard). “A content jukebox” (Sarah). The convergence across four completely different professional lenses is damning.</p>

<p><strong>2. The format “personalization” claim is intellectually dishonest.</strong>
Read/video/audio choice is modality accessibility, not personalization. It reflects discredited learning styles theory when positioned as personalization. It should be a footnote, not a headline.</p>

<p><strong>3. Sales simulation is the only legitimate v1 candidate — and everyone agrees.</strong>
It’s experiential, measurable, validated by market (Yoodli’s $300M valuation), and categorically different from passive content delivery. But it must be done with learning science rigor — spaced practice, mastery gates, structured feedback.</p>

<p><strong>4. VR is a Series C feature being sold as a product.</strong>
The effectiveness data is real. So is the startup reality: $50K-$200K+ per module to produce, a hardware distribution nightmare, a separate engineering discipline. And the ROI threshold is steep: VR needs 375+ learners per module just to break even with classroom training.</p>

<p><strong>5. “Fully understanding the business” is the most dangerous claim in the pitch.</strong>
One wrong answer about pricing teaches a sales rep to quote wrong numbers to a real prospect. The Salesforce-narrow approach is the viable path to genuine company-specific intelligence without the hallucination exposure.</p>

<p><strong>6. The real buyer is the CRO, not the CHRO.</strong>
Revenue leaders demand proof of outcomes measured in pipeline and close rates. LinkedIn Learning cannot reach this buyer because it lives in the HR budget permanently. That’s structural, not strategic.</p>

<p><strong>7. The moat is accumulated organizational intelligence, not features.</strong>
The simulation engine commoditizes as models improve. The 12 months of company-specific learning data — which objections trip your reps, which practice patterns predict deal closure — does not commoditize. Every simulation strengthens the model.</p>

<p><strong>8. Pedagogical superiority isn’t IP — but the infrastructure to prove it is.</strong>
Learning science is published research. But building the assessment architecture and outcome attribution methodology that <em>proves</em> behavior change to a CRO is an operational moat incumbents won’t build because their business model disincentivizes it.</p>

<hr />

<h2 id="the-company-hiding-inside-the-pitch">The Company Hiding Inside the Pitch</h2>

<p>Four panelists, four professional languages, one converged product vision:</p>

<p><strong>Build:</strong> A Salesforce-native AI sales coach. Deep CRM integration extracts company-specific intelligence — your win/loss patterns, your competitor objections, your rep performance data. A curated knowledge layer (15-20 human-validated documents) adds product context without hallucination exposure. AI-powered simulation surfaces scenarios built from <em>your</em> closed-lost deals, not generic role-play templates.</p>

<p><strong>Make it science:</strong> Spaced repetition brings reps back to their specific weak points at calibrated intervals. Mastery gates prevent advancement without demonstrated competency. Moment-level feedback with replay — the film room model — lets reps analyze specific decision points, hear alternatives, and drill the 30-second exchange four different ways.</p>

<p><strong>Make it safe:</strong> A dedicated web app, not a Slack integration. Row-level security at the database. Architectural data isolation so reps can fail without performance surveillance. A manager dashboard showing aggregate patterns that makes every coaching conversation more targeted.</p>

<p><strong>Sell to:</strong> The VP of Sales. Pitch ramp time in months, not completion rates. Pitch close rate improvement, not engagement hours. Never lead with “learning platform.”</p>

<p><strong>Expand:</strong> The Datadog playbook — enter as the outcome measurement layer, prove what’s working and what isn’t, then expand into coaching module by module. The measuring instrument becomes the platform. The platform becomes organizational learning infrastructure.</p>

<hr />

<h2 id="where-the-panel-landed">Where the Panel Landed</h2>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Initial Verdict</th>
      <th>Final Verdict</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Maya</strong></td>
      <td>Skeptical with a flicker</td>
      <td>“If they build this, I’d want to design it”</td>
    </tr>
    <tr>
      <td><strong>Dmitri</strong></td>
      <td>Cautiously interested if focused</td>
      <td>“CRM + simulation + adaptive learning = real product, real moat”</td>
    </tr>
    <tr>
      <td><strong>Richard</strong></td>
      <td>Hard pass</td>
      <td>“Conditional pass — come back with the right pitch”</td>
    </tr>
    <tr>
      <td><strong>Dr. Sarah</strong></td>
      <td>Critical of pedagogy</td>
      <td>“Three-tier model selling to revenue leaders — that’s a company”</td>
    </tr>
  </tbody>
</table>

<hr />

<p><em>The pitch as written would not survive a serious investor meeting. The company hiding inside the pitch might be one of the more interesting opportunities in enterprise software right now — if its founders have the discipline to kill what’s killing it.</em></p>

<p><em>The debate produced something unexpected: four people who started from radically different places converged on a coherent, specific, buildable product vision. That convergence is itself a signal. When an Apple designer, a Tesla engineer, a Buffett-school investor, and a learning scientist all point to the same thing — independently, from first principles — it’s worth listening.</em></p>]]></content><author><name>Ernest &amp; Melanie</name></author><summary type="html"><![CDATA[Cleo &amp; You: A Debate in Four Voices What happens when an Apple designer, a Tesla engineer, a Buffett-school investor, and a learning scientist tear apart the same startup pitch — raw, unfiltered, and without mercy]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://ebarronh.github.io/buildathon-mar-26/assets/images/week-1/image-ai-debate.png" /><media:content medium="image" url="https://ebarronh.github.io/buildathon-mar-26/assets/images/week-1/image-ai-debate.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>