For the last 80 years, computers have been calculators. Fancy ones, sure, with screens, keyboards, networks. But under the hood, they’re still just deterministic machines: you give them input, they crunch it through logic gates and silicon, and they spit out the exact same output every time. That’s the deal. That’s the contract.
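A trivial sketch of that contract (any pure function would do; SHA-256 is just a convenient stand-in):

```python
import hashlib

# A conventional program is a pure function of its input:
# the same bytes in always produce the same bytes out.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

print(digest(b"hello"))  # identical on every run, on every machine
print(digest(b"hello"))  # same digest again: that's the contract
```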
And then came AI.
AI doesn’t work like that. At all.
AI, and large language models in particular, is not deterministic. An LLM is a soup of probabilities and neural weights. When you talk to an AI, you’re not talking to a computer. You’re talking to something more like a human brain: a machine that guesses, infers, hallucinates, and sometimes nails it. And sometimes doesn’t.
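Here’s a toy model of a single decoding step. An LLM doesn’t look an answer up; it produces a probability distribution over tokens and samples from it. The candidates and weights below are made up purely for illustration:

```python
import random

# One toy "next-token" step: sample from a probability distribution
# instead of computing one fixed answer. Numbers are invented.
candidates = ["330 meters", "324 meters", "301 meters", "1,083 feet"]
weights = [0.55, 0.30, 0.10, 0.05]

for _ in range(3):
    answer = random.choices(candidates, weights=weights, k=1)[0]
    print(f"The Eiffel Tower is {answer} tall.")
# Run it again and the answers can differ.
# Same question, different output: the calculator contract is gone.
```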
That’s fine. That’s expected. But the problem?
AI still runs on computers.
The interface hasn’t changed. We’re still typing on keyboards, expecting precise answers. We’re still clicking buttons, expecting repeatability. But AI doesn’t think like that. And so the human-AI interface is totally broken.
Ask ChatGPT “What’s the height of the Eiffel Tower?” and you might get the right number. Or not. And when it’s wrong, people freak out: “How can it not know that?” But think about it: the whole model is on the order of a terabyte. It fits on a USB stick. You really believe all of humanity’s verified data fits in your pocket?
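Where does a figure like that come from? A rough back-of-the-envelope: a model’s weights occupy about parameter count times bytes per parameter. The parameter count below is hypothetical; frontier labs rarely publish theirs.

```python
# Back-of-the-envelope for a "terabyte-scale" model.
# 500 billion parameters is an assumed, illustrative figure.
params = 500e9        # hypothetical parameter count
bytes_per_param = 2   # 16-bit weights
print(f"{params * bytes_per_param / 1e12:.1f} TB")  # -> 1.0 TB
```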
It’s not Google. It’s not Wikipedia. It’s a brain. A tiny, weird, synthetic brain that talks to you via a command-line interface and autocomplete.
That’s the real nightmare: the medium is lying about the message.
We call them “smartphones” because we used to make calls with them — even though calling is now maybe 1% of what we do. The name stuck. And maybe we’ll keep talking to AI through keyboards and chatboxes. But eventually, we’ll need new metaphors. New expectations. New ways to interact.
Because what’s coming isn’t a better calculator.
It’s something else entirely.