The great AI deception

The decision-making class are being sold a bill of goods. They’re replacing accountable humans with unaccountable systems, and they can’t tell the difference.

The age-old divide persists between those who understand the technology and those who make decisions about it. The executive who sees AI as a feature to be added. The investor who knows the buzzwords but not the mathematics.

This asymmetry has always existed, but language models amplify it. The decision-makers literally cannot evaluate the quality of what they’re buying.

Language models excel at appearing competent. They’ll confidently cite non-existent legal precedents, invent plausible APIs, write code with subtle bugs. To someone with technical expertise, the hallucinations are obvious. To someone without? The output looks perfect.
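
To make “subtle” concrete, here is a hypothetical illustration, invented for this piece rather than drawn from any real model output: a few lines that read as plainly correct to a non-technical reviewer yet fail in exactly the quiet way described above.

    def reconcile(payments, invoices):
        """Check that received payments cover the invoiced totals."""
        # Reads as obviously correct, and passes any round-number spot
        # check a reviewer might try by hand. But comparing floating-point
        # sums for equality means perfectly matched ledgers get flagged as
        # discrepancies once fractional cents accumulate; it is the kind
        # of error no demo ever surfaces.
        return sum(payments) == sum(invoices)

To an engineer, the float comparison jumps off the page. To the buyer watching the demo, it is indistinguishable from working software.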

The traditional safeguards—code review, testing, validation—are being abandoned. The demos are too impressive. The potential cost savings too attractive. The fear of being left behind too acute.

When this bubble bursts, it won’t be because AI became sentient. It will be because of accumulated errors that seemed plausible at the time. Contracts citing imaginary cases. Code that corrupts data in subtle ways. Supply chains optimized for metrics that don’t map to reality.

The executives who replaced their knowledge workers will find themselves managing systems they cannot debug, cannot improve, and cannot truly understand. They traded human experts, who could be held accountable, for black boxes that can’t be.

The real winners are the platform providers—those who control the models and infrastructure. They’ve successfully monetized approximation itself. When businesses built on their platforms fail, the terms of service ensure they bear no liability.

History rhymes: railway mania, the dot-com bubble, the 2008 financial crisis. Each time, complexity was sold as sophistication. This time, companies will discover their AI-automated operations have been making critical errors no one can diagnose. “The AI handles that” will become this generation’s “too big to fail.”

A human developer, however overworked, has skin in the game—reputation, career, perhaps even conscience. The model has no such constraints. It will produce errors with the same confidence as correct answers.

The technology has legitimate uses—drafting, summarization, pattern recognition within bounded contexts. The tragedy is that those making purchasing decisions cannot distinguish between appropriate and inappropriate applications.

The deception isn’t that machines are becoming intelligent. It’s that the decision-makers can’t tell the difference.