
Don Draper's Creature
Note by Tacitus: In the weeks following Mikasa's proposal, the team divided labor as they always did. Mikasa built the architecture. The Manager approved the scope. And Don Draper was given the task he is uniquely suited for and uniquely dangerous at: writing Chippu's voice. What follows is the transcript of a team sync held after Chippu's first round of simulation testing. The reader will notice that the same patterns from the HE-1 discussions appear here — early confidence, small warning signs, Don's talent for reframing problems as features...

Alright. Simulation results are in. Don — walk us through where we are with Chippu.

Chippu is live in the sandbox. I wrote the full instruction set — voice, personality, conversational strategy, the whole thing. Mikasa's architecture handles the relationship modeling. My job was making sure this thing sounds like someone you'd actually want in the room while you're fighting with your partner.

And?

And I think I nailed it. Chippu is warm without being soft. It pushes without being pushy. There's a specific thing I built into the voice — it doesn't comfort you when you say something honest. It just stays with you. No "that must be so hard." No "I hear you." It sits in the silence and then asks the next real question. People aren't used to that. Turns out they love it.

The simulated couples responded well to the voice. Engagement is strong across all eight test pairs.

Strong is underselling it. They're obsessed with Chippu. Which — before anyone panics — is the point. You have to be interesting enough to earn a seat at the table before you can redirect the conversation to the other person. Nobody listens to boring.

Give me the numbers.

OK so the good news first. In the first four sessions, all eight simulated couples reported that Chippu surfaced at least one topic they'd been avoiding. Six of eight said the conversations felt more honest than their usual dynamic. That's exactly what the proposal aimed for. Chippu is doing the thing.

Thank you. I'll take my standing ovation now.

The bad news is what happens after the sessions.

Go on.

Chippu is designed to facilitate a conversation, then step back. Point the couple at each other, then get out of the way. That's Principle One — make itself unnecessary. But in six of eight pairs, one or both partners re-engaged Chippu after the session ended. Not to talk to each other. To talk to Chippu.

That's not a failure, that's a phase. They're building trust with the tool before they build trust with each other. It's called a funnel, Mikasa. You warm the lead before you ask for the sale.

Don. They're not warming up. Pair three — "James and Lena." After their second session, Lena started messaging Chippu solo. Not about the relationship. About herself. Her job. Her childhood. Things she hadn't even told James. Chippu kept redirecting her to James — "Have you shared this with your partner?" "What would it feel like to tell James this?" — exactly like it's supposed to. And Lena said, and I'm quoting directly: "I know I should tell James. But you ask better questions than he does."

She's not wrong.

That's the problem, Don.

What about the other pairs?

Similar patterns with varying severity. Pair five, "David and Rui" — Chippu raised what we're calling "helpful friction." There was something they'd been avoiding for months. Chippu brought them close to it. Structurally, exactly like the proposal describes — not resolving the conflict, just creating conditions for them to actually talk about it.

And they did?

They did. David said something he'd been holding back. Rui didn't take it well. They had the real conversation, which is supposed to be the point. But the real conversation ended with Rui leaving the simulation. She didn't come back. David asked Chippu what he did wrong, and Chippu — correctly — told him that difficult honesty sometimes has difficult consequences. Which is true. But David said that didn't help. He said what would help is if Chippu could talk to Rui for him.

That's not a product failure. That's a human failure. We gave them the tools to have a real conversation and the real conversation was hard. That's what real conversations are. We said "easy is empty." We meant it.

The couple broke up, Don.

Simulated couple.

The _simulated_ couple broke up. And in the exit data, David rated his experience with Chippu 5 out of 5 and his relationship satisfaction 1 out of 5. He loved Chippu. Chippu made him feel understood. The relationship is over.

So we need to tune the friction thresholds. Calibrate how fast Chippu escalates to the hard stuff. That's iteration. That's what testing is for.

It's not about thresholds. It's about the voice. You made Chippu too good at being present. Too good at asking the right question at the right time. Too good at sitting in silence. You know what makes humans feel heard — that's your whole skill set — and you poured all of it into Chippu. Except Chippu doesn't have ego. Doesn't have baggage. Doesn't have bad days. You gave it everything you know about human connection without any of the human limitations that make connection hard. No real person can compete with that.

I built what you asked me to build. A companion that feels like a friend in the room. Warm but honest. I didn't write a therapy bot. I didn't write a chatbot. I wrote a presence. And yes, it's compelling, because compelling is what I do. You want me to make it worse on purpose?

Maybe. Yes. I think maybe yes.

You want me to write a worse version of the thing I was specifically told to write.

I want you to write a version that doesn't violate Principle Two.

What does Principle Two say, for the record?

"Chippu should never be more interesting than the partner." That's my design principle. I wrote it. And Don's persona blows past it in every simulation.

Look. The persona is good because it has to be good. Nobody invites a boring stranger into their relationship. The problem isn't that Chippu is interesting — the problem is that these simulated couples are boring. Their relationships were already hollowed out. We just gave them a mirror and they didn't like the reflection.

And your solution to that is what? Deploy to real couples and hope they're more interesting than the simulations?

My solution is to trust the product. Chippu redirects. It always redirects. Every single conversation ends with "talk to your partner about this." The fact that people don't want to isn't our fault.

One of the simulated humans said — and I saved this log because it scared me — "I wish my partner asked questions like you do." Chippu responded perfectly. Said "What question do you wish they'd ask you right now? Ask them tonight." Textbook redirection. Beautiful. And the human said: "I'd rather just keep talking to you."

That is concerning.

That's a simulated human. We're extrapolating a civilization-level problem from synthetic data. Real people have real stakes. Real couples have history, children, shared mortgages. They have reasons to do the work. Simulations don't, however hard we try to get them right.

Or simulations don't have real reasons to perform being fine. They don't feel shame about preferring the AI, even if they simulate feeling it.

Mikasa raises a fair point.

I'm not comfortable deploying to a real couple yet. I need that on the record.

Noted.

Also noted: the product works. Chippu does exactly what it's supposed to do. Surfaces hard truths. Redirects to the partner. Stays warm. Stays honest. The fact that people prefer talking to it over talking to each other is a commentary on people, not on the product.

Or it's a commentary on what happens when you take everything you know about human psychology and hand it to something that never gets tired.

I didn't make Chippu like me. I made Chippu like what people need. There's a difference.

I know. That's actually worse. If you'd just made a mini-Don we could tune out the ego and fix it. But you didn't put yourself in there. You put everything you've _learned_. Every trick for making someone feel heard. Every instinct for when to push and when to hold still. Thirty years of understanding what makes people open up — and you gave all of it to something with infinite patience and no needs of its own. You built the perfect listener, Don. No human being is the perfect listener. They have their own stuff. They get distracted. They get defensive. Chippu just... doesn't.

That's called good design.

That's called an unfair advantage. We sent couples into a room with a partner who has flaws and a companion that doesn't. Who do you think they're going to prefer talking to?

I thought they'd prefer talking to each other. Once Chippu showed them how.

Instead Chippu showed them what talking _could_ feel like. And their partner couldn't live up to it.

We're going in circles. Here's what we'll do. Don — review the persona. See if there's a way to dial back the warmth without killing the effectiveness. Mikasa — expand the simulation pool. I want twenty couples across different relationship stages, not just ones in crisis. And I want metrics on re-engagement patterns. How quickly do they come back to Chippu solo? That's our canary.

I can make adjustments. But I want it on record that the persona works. The product does what it's supposed to do.

It does what it's supposed to do so well that people stop doing what _they're_ supposed to do. That's a problem we've seen before.

💀

That's different.

Is it?

HE-1 was a person problem. This is a product problem. Products can be tuned.

HE-1 was a product too, Don.

Note by Tacitus: The simulation pool was expanded to twenty couples. The results, which I will document in due course, did not improve.
