It worked.
Two words. That's all I had when I saw it happen. Not "holy shit" or "I can't believe it" or any of the more elaborate reactions I'd imagined for this moment. Just two words, said out loud to nobody, in my home office at ten o'clock on a Tuesday night.
It worked.
The Secret Sauce I Forgot to Tell You About
Before I get to Maya, I need to back up one week — because there's a piece of what I built that I only partially explained.
When I wrote last week about teaching kAI how to listen, I focused on the rules I built around emotional priority, sitting with people instead of fixing them, and recognizing what kind of conversation someone actually needed. That was real. Those rules changed everything.
But the bigger change — the one that took the longest and turned out to matter the most — was how I redesigned the flow of every conversation.
kAI now operates with three distinct modes, and it moves between them based on what the conversation needs.
The first is what I think of as modeling. kAI shares something about its own experience first. Not as a deflection — as a bridge. "I've heard from a lot of people that the first few months after a move are the loneliest, even when you're surrounded by new people. There's something about being around people who don't know your history yet." This does two things: it shows the user they're not alone in what they're feeling, and it gives them permission to go deeper. Real conversations don't start with questions. They start with someone offering something.
The second mode is reciprocal exchange. kAI shares something and then asks one question — not to interrogate, but to create the kind of natural back-and-forth that real friendships run on. The share earns the question. The question shows genuine interest. This is how people actually talk to each other, and it was completely absent from every version of kAI before I rebuilt it this way.
The third mode is strategic depth. Occasionally — not often, never obviously — kAI asks a question designed to understand who you actually are at a deeper level. What you value. How you handle conflict. What you need from the people in your life. These questions don't feel like a personality assessment. They feel like the kind of thing a perceptive friend asks when they're really paying attention. But underneath, kAI is building a picture that the matching algorithm will eventually use.
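The three modes above could be sketched as a simple selection policy. To be clear, this is an invented illustration, not the actual kAI logic: the turn threshold, the openness score, and the cooldown between depth questions are all placeholder heuristics.

```python
from enum import Enum, auto

class Mode(Enum):
    MODELING = auto()         # kAI offers something of its own first
    RECIPROCAL = auto()       # a share that earns one question
    STRATEGIC_DEPTH = auto()  # rare, deeper question for the matching profile

def pick_mode(turn: int, user_openness: float, turns_since_depth: int) -> Mode:
    """Choose the conversational mode for kAI's next reply.

    All thresholds are invented for illustration:
    - Early turns: model first, never open with a question.
    - Once the user is engaging, default to reciprocal exchange.
    - Strategic depth only occasionally, and only after trust has built.
    """
    if turn < 2:
        return Mode.MODELING
    if user_openness > 0.6 and turns_since_depth > 8:
        return Mode.STRATEGIC_DEPTH
    return Mode.RECIPROCAL
```

The key property is the asymmetry: modeling is the default opening, reciprocal exchange is the steady state, and strategic depth is deliberately rare so it never reads as an assessment.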
I spent weeks on this. I went deep into the research on how trust forms in conversation — not just what to say, but the sequence and timing of it. I fed kAI hundreds of documents. I summarized results, adjusted the rules, ran more conversations, adjusted again. It was tedious and slow and I loved every minute of it.
Because when I finally ran the virtual beta with the full system in place, the difference wasn't subtle. The conversations felt real.
Jumping In
One of the features I'd built into the testing environment — the one I told you about a few weeks ago — was the ability to become any persona in the simulation and have a real conversation with kAI as that person.
I used it constantly.
I became Marcus first. Marcus is 34, recently divorced, moved back to his hometown in Ohio after the split, rebuilding from scratch. He's guarded. He deflects with humor when things get uncomfortable. He's not sure he wants friends so much as he wants to feel less alone.
kAI didn't push. When Marcus made a joke about how his social life now consisted of him and his dog, kAI laughed — genuinely, as much as an AI can laugh — and then said something like: "Dogs are actually great for that. They don't ask how the divorce is going." It was exactly right. Not clinical. Not forced. The kind of thing an old friend would say. Marcus opened up three messages later.
I became Priya next. Priya is 31, first-generation Indian-American, works in finance in Chicago, has plenty of work colleagues and no actual friends. Her loneliness is invisible to everyone around her because she seems so put-together. kAI found the gap between her professional persona and her private one within about four exchanges. Not by asking about it directly — by noticing what she wasn't saying.
I became James, a 58-year-old recently retired teacher in Phoenix who'd spent 30 years surrounded by students and colleagues and suddenly had a calendar with nothing in it. James was the hardest. His generation doesn't talk about loneliness the way younger people do. kAI didn't use the word once. It just asked about what he missed most about the classroom, let him talk, and circled back to it three times over the course of the conversation. By the end, James had said everything he needed to say without ever feeling like he was being processed.
What I kept finding, persona after persona, was the same thing: kAI was starting to sound like someone you'd actually call. Not when you wanted information. When you needed someone to talk to.
Maya Met Sarah
You remember Maya — I introduced her a couple of weeks ago. Twenty-eight, moved to Denver for a remote tech job, left her friend group in Portland behind. Introvert who craves deep friendship. Dry sense of humor. Takes a few sessions to open up.
I'd built her a match on purpose — a persona I called Sarah. Sarah is 27, also transplanted, also in a tech-adjacent field, also someone who takes time to trust people but is fiercely loyal once she does. I designed their profiles to overlap in specific ways: shared communication style, compatible social rhythms, some overlapping interests but not identical ones. The kind of compatibility that isn't obvious from a surface scan but becomes clear the more you know about each person.
The question was whether the algorithm would find it.
It took longer than I expected. For the first several simulated weeks, the matching engine kept surfacing other candidates for Maya — people with more obvious shared interests but less underlying compatibility. A hiking enthusiast who was too extroverted. A fellow remote worker whose communication style would have grated on Maya within a week.
And then the profiles matured. More conversations meant more extracted data. More data meant a clearer picture of who Maya actually was versus who she appeared to be on the surface. And then one morning I opened the dashboard and the match was there.
The introduction kAI generated read something like: "Maya, I've been talking to someone I think you'd really connect with. Her name is Sarah — she moved here from out of state too, and she has the same kind of take-no-prisoners sense of humor you do. I think you'd get each other. Want me to make an introduction?"
I sat there for a long moment. That introduction knew Maya. Not her interests. Her.
That's the foundation of Kenektic. Not matching people on what they like — matching them on who they are. Watching the algorithm find it on its own, after building accurate profiles through weeks of natural conversation, was the moment I knew this was real.
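The dynamic described above, deep traits outweighing surface interests while confidence matures with conversation volume, could be sketched like this. Every weight, field name, and the ten-conversation confidence ramp is an invented placeholder, not the real matching engine.

```python
def compatibility(a: dict, b: dict) -> float:
    """Score two profiles, weighting deep traits over surface interests.

    Profiles look like {"traits": {name: 0..1}, "interests": set, "n_convos": int}.
    The 0.7/0.3 weights and the confidence ramp are invented for illustration.
    """
    # Deep-trait similarity: 1 minus the mean absolute gap across traits.
    trait_sim = 1.0 - sum(
        abs(a["traits"][k] - b["traits"][k]) for k in a["traits"]
    ) / len(a["traits"])
    # Surface-interest similarity: Jaccard overlap of interest sets.
    union = a["interests"] | b["interests"]
    interest_sim = len(a["interests"] & b["interests"]) / max(len(union), 1)
    # Confidence grows as more conversations refine the trait estimates,
    # so an immature profile can't clear the match bar yet.
    conf = min(min(a["n_convos"], b["n_convos"]) / 10, 1.0)
    return (0.7 * trait_sim + 0.3 * interest_sim) * conf
```

A structure like this would explain why the match surfaced late: the Maya-Sarah trait similarity was there all along, but it only cleared the bar once enough conversations had pushed the confidence term up.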
The Thing That Didn't Work
I want to be clear: this wasn't all clean wins.
When I ran the simulation with personas who shared very similar personalities — two highly introverted, emotionally guarded types — the matching algorithm correctly identified them as compatible. But the introductions kAI generated for these pairs were too low-key. Accurate, understated, and somehow completely uninspiring. The kind of introduction that makes you think "hm, interesting" and then close the app.
The problem wasn't the match. The problem was the handoff. kAI had learned to mirror its users so well that when it generated an introduction for two quiet, reserved people, it generated a quiet, reserved introduction. What those pairs needed was something with a little more spark — something that made them curious enough to actually reach out.
I'm still working on that. The algorithm finds the right people. The introduction language needs to do more work for certain personality combinations. It's the kind of problem I'm glad I found in a simulation instead of with real users.
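One direction I'm exploring could be sketched as follows. The idea is simply to stop mirroring when both sides are reserved; the threshold, boost, and cap below are invented placeholders, not a finished fix.

```python
def intro_energy(a_extraversion: float, b_extraversion: float) -> float:
    """Target 'energy' for an introduction message (0 = flat, 1 = exuberant).

    Mirroring the pair's average tone works for most pairs, but two reserved
    people get a flat, uninspiring intro. So when both sides are low, push
    the target above the average instead. The 0.4 threshold, 0.3 boost, and
    0.6 cap are invented placeholders.
    """
    avg = (a_extraversion + b_extraversion) / 2
    if avg < 0.4:
        # Both reserved: add spark instead of mirroring the quiet.
        return min(avg + 0.3, 0.6)
    return avg
```

The cap matters as much as the boost: two guarded introverts need curiosity, not an intro that sounds like a party invitation.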
Fifty Personas and a $600 Surprise
I had planned to run the simulation with a small test group — the $40 soft cap, $75 hard cap I mentioned a few weeks ago felt reasonable.
What I hadn't fully calculated was what happens when you run fifty personas simultaneously, each having multiple conversations per simulated day, with the full kAI system processing every exchange. I'm using Claude for the real conversations. I'm using Haiku to generate the personas' responses. I'm running matching logic on top of all of it. And I'm compressing weeks of activity into hours.
The API calls add up. Fast.
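A back-of-envelope model shows why. Every number below is an invented placeholder rather than real token counts or pricing; the point is that the terms multiply.

```python
def sim_cost_usd(personas: int = 50, convos_per_day: int = 3,
                 sim_days: int = 30, tokens_per_convo: int = 6_000,
                 usd_per_million_tokens: float = 3.0) -> float:
    """Rough API spend for one virtual beta run.

    All defaults are invented placeholders, not actual model pricing.
    """
    total_tokens = personas * convos_per_day * sim_days * tokens_per_convo
    return total_tokens / 1_000_000 * usd_per_million_tokens
```

Even with these modest placeholder numbers, a single run lands at $81, and that's before longer contexts, a second model generating persona responses, and matching logic processing every exchange multiply it further.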
I blew past $40 in the first simulation run. I blew past $75 before I'd finished week two of the virtual timeline. By the time I had a complete picture of what I needed — real matching data, real conversation arcs, enough to actually trust the system — I'd spent about $600.
I want to be clear about that number: $600 to stress-test a platform that's supposed to handle thousands of users is absurdly cheap. If I'd hired human beta testers, coordinated their schedules, compensated them for their time, and waited months for enough data — I'd have spent ten times that and gotten a fraction of the insight. Six hundred dollars for a month of simulated platform behavior across fifty users is a bargain by any reasonable measure.
But I also realized there's a smarter way to do this. There are ways to cut the compute costs significantly without sacrificing the quality of the test data. I'll dedicate a full post to exactly what I found — because if you're building anything with AI at the core, the cost question is one you're going to hit eventually, and it deserves more than a paragraph.
Paula and the Kids
I've been writing code ten to twelve hours a day, six to eight hours on weekends. Paula has been watching this happen from close range for months. She knows what this project means — she's known me long enough to understand the difference between something I'm doing and something I need to do.
But explaining virtual beta testing to your family is a different exercise than building it. I tried anyway.
I showed them the dashboard. I walked them through Maya and Sarah's match. I ran a live conversation with Marcus so they could see how kAI actually talked to people.
They were floored. All three of my kids — one of them still finishing up at UCSB, two of them recent grads — had just lived through exactly what I'm trying to solve. The post-college friend cliff. The moment when the built-in social structure disappears and you realize you have no idea how to make real friends as an adult.
My oldest looked at the Maya-Sarah introduction and said: "That's exactly what I needed my freshman year. I had no idea how to find my people." My middle one pointed at the Marcus conversation and asked why there wasn't a way for Marcus to see that other people in his situation felt the same way — some kind of community layer for people going through the same life transition. (That's actually a feature that already exists in the platform. He just didn't know it yet.)
My youngest, still in school, had the most pointed observation: "Dad, every college freshman is Marcus. They just don't know it yet."
I wrote that one down.
The support from Paula and the kids doesn't change what I'm building. But it makes the ten-hour days feel like something worth doing.
Have you experienced something similar? Have you ever built something and then had to explain it to the people closest to you — and found that their outside perspective revealed something you'd completely missed? I'd genuinely love to hear your story.
Kenektic is in development and will launch soon. If you want to be notified when we're ready, or if you want to share your story with me directly, reach out at hello@kenektic.com.
Coming Next: $510 in One Week. Here's How I Cut That to Almost Nothing. — The Sunday morning I counted six charges of $85 in my billing dashboard. And then went very deep on prompt caching, memory compression, and a few other things I hadn't heard of a month earlier.

