People keep expecting frontier AI models to ship with guardrails sophisticated enough to handle every possible use case. That expectation is not realistic, and it is going to keep failing in painful ways.

This is the second half of an argument I started last week. The first half was about strategy: foundation models are a platform, not a product, and trying to compete with the frontier labs is a losing game.

This half is about something more important. It is about safety, regulation, and the structural failure of the direct-to-consumer chatbot model.

The Guardrails Problem

The set of things a model has to guard against is constantly changing. New regulations. New vulnerabilities. New categories of users with new needs and new risks.

A 14-year-old talking to a general-purpose chatbot about loneliness needs a fundamentally different conversation than a 45-year-old executive asking it to draft a memo. A user in California is now governed by chatbot laws that did not exist a year ago. A user in a moment of crisis needs an entirely different response architecture than a user asking for a recipe.

No frontier model can be everything to everyone and also be safe for everyone. The model is too broad. The use cases are too many. The regulation is moving too fast.

This is why I think the direct-to-consumer chatbot model is going to keep generating tragedies until it changes. Foundation models were not built to fill the role of friend, therapist, doctor, or lawyer. When you let hundreds of millions of people use a general-purpose AI to fill voids in their lives, some of those cases will end badly, no matter how much the labs invest in safety.

The Right Architecture

Foundation models should power companies that specialize in specific use cases, with the safety architecture, regulatory compliance, and behavioral guardrails built at the application layer where the use case is actually known.

Kenektic is built for friendship. We know exactly who our users are and what kind of conversations they are having. We can build crisis detection, age verification, professional boundary rules, and disclosures for ten different state laws because we know what we are protecting people from.

Anthropic cannot build all of that into Claude itself, because Claude also has to be useful to a developer writing code at three in the morning. The protections have to live where the context lives.
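To make that concrete, here is a minimal sketch of what application-layer guardrails could look like. Every name in it is a hypothetical illustration, not Kenektic's actual implementation: the crisis check, the prompt rules, and the model call are stand-ins for whatever a real product would build for its specific users and laws.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """What the application layer knows that a general-purpose model cannot."""
    age: int
    jurisdiction: str  # e.g. "CA", where new chatbot disclosure laws apply

# Stand-in for a real trained crisis classifier.
CRISIS_PHRASES = {"hurt myself", "no reason to live"}

def detect_crisis(message: str) -> bool:
    """Flag messages that need a crisis response instead of a model reply."""
    return any(phrase in message.lower() for phrase in CRISIS_PHRASES)

def build_system_prompt(user: UserContext) -> str:
    """Compose boundary and compliance rules per user, per jurisdiction."""
    rules = ["You are a companion, not a therapist, doctor, or lawyer."]
    if user.age < 18:
        rules.append("Use age-appropriate language and stricter topic limits.")
    if user.jurisdiction == "CA":
        rules.append("Periodically remind the user they are talking to an AI.")
    return "\n".join(rules)

def call_model(system: str, message: str) -> str:
    """Stand-in for any foundation-model API call."""
    return f"[model reply, constrained by: {system}]"

def respond(user: UserContext, message: str) -> str:
    # A user in crisis needs a different response architecture,
    # not a cleverer prompt: route around the model entirely.
    if detect_crisis(message):
        return ("It sounds like you are going through something serious. "
                "Here is a human you can talk to right now: ...")
    return call_model(build_system_prompt(user), message)
```

Nothing in that sketch is exotic. The point is that every branch depends on context only the application has: the user's age, their state, the kind of conversation they came for. A frontier lab shipping one model to everyone cannot write those branches.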

This is why Anthropic is starting to separate from the pack with enterprise customers. They have been explicit about embracing the application-layer model, where their technology powers companies that solve specific problems. Their enterprise revenue is growing dramatically. Their consumer offering is intentionally more limited than their competitors'.

That is the architecture I am betting on. Specialized businesses, built on top of a foundation model, with safety and product fit built where they belong.

The Two Structural Problems With Consumer AI

ChatGPT is still a chatbot served to a billion users. That model has two structural problems that I think are going to come due over the next couple of years.

The first is that the vast majority of those users are on the free tier. Building hundred-billion-dollar data centers to give a product away for free is not a sustainable business. At some point the math has to work, and the math does not work yet.

The second is that the novelty is going to wear off for most consumers. The average person whose understanding of AI is "it is a chatbot" is not going to keep coming back forever. A smaller group will become obsessed with it. Many more will use it occasionally for specific tasks. But the model where AI is the destination, where you go to a chatbot because going to a chatbot is the experience, is not going to hold a billion users at the level of engagement that justifies the spend.

OpenAI rolled out Sora because it was fun and grabbed attention. Two things happened: the novelty wore off fast, and it consumed massive compute at the expense of the core product.

That pattern is going to repeat.

What This Means for Jobs

People assume that AI as a platform means AI takes all the jobs. That is the wrong frame.

When AWS displaced the on-premises data center, an entire generation of system administrators saw their jobs change. But AWS also enabled thousands of new companies that needed engineers, designers, support staff, salespeople, finance teams, and operators. The net effect of the platform shift was not fewer jobs. It was different jobs, in different companies, doing different things.

The same will be true of foundation models. Some jobs will be automated out. That is real, and I am not going to pretend otherwise. But thousands of new businesses are going to be built on top of foundation models, businesses that could not have existed five years ago, and those businesses are going to need humans to run them. We do not know exactly what those jobs will look like yet, the same way we did not know in 2006 what a "DevOps engineer" was.

But the jobs will exist, because the businesses will exist, because the platform makes them possible.

The Future Worth Building Toward

Foundation models are infrastructure. Specialized companies are products. The human work moves into roles that did not exist before.

That is the version of this that creates more value than it destroys.

That is the version worth working toward.

Last week's Part 1 explained why building on top of foundation models is the only strategy that works. This is the companion piece on why direct-to-consumer is the wrong wrapper.

David Caplan is the founder and CEO of Kenektic, an AI-powered platform addressing the loneliness epidemic. More at kenektic.com.