Intelligence Everywhere
Frontier models will continue scaling, and combined with training innovation they'll become significantly more useful than they are today. But they won't reach general intelligence. A few providers will sell cloud access much like today, with expensive tiers out of reach for most people. A base model subscription will be a normal part of everyday life; poorer households will struggle to keep up.
There will be corrections. The application-layer die-off comes first — too many thin startups with no moat and no margin. The deeper correction comes later, when frontier scaling plateaus and the narrative of imminent general intelligence dissolves. But the underlying technology continues to grow regardless. The bubble is about misallocated capital and inflated expectations, not about whether the technology works. It does.
The slowdown may be a decade away, and until then, humanity will keep accelerating its spending on chips, datacenters, training, and inference. The energy question only gets louder.
On the frontier side, a closed-versus-open-source tension will persist around privacy and freedom — something like the Windows/Linux/macOS dynamic. Meanwhile, as models shrink and chips improve, local AI will diffuse into our everyday tools. Small models on small chips, embedded in objects we already use.
We’ve discovered practical intelligence that is cheap and distributable. We can put it in a hearing aid, a door, a toy, a pipe. The large model story is powerful and important, but it’s one application of a deeper capability. Everyone is watching the ceiling, asking how smart we can make the biggest model. The floor is what will transform everything. Intelligence is becoming small, cheap, and embeddable in anything. That’s the breakthrough of the century.