
The AI Bubble: Inevitable & Misunderstood
The Valley runs on borrowed courage and someone else's chips. We call it vision when it's really vertigo—that dizzy rush of watching everyone else jump first. The smart money doesn't chase the noise; it owns the silence underneath.
This AI fever dream follows the same script. Yes, it's a bubble. Valuations float like prayers, and half the "revolutionary" software is just ChatGPT wearing a fake mustache. But this bubble has a peculiar architecture. Its center was built not to burst.
Picture three circles, like targets painted on the future.
The bullseye belongs to companies too big to fail at failing. Microsoft, Google, Amazon—they've made AI their entire bet, not just this quarter's gamble. They're hoarding silicon and electricity like doomsday preppers, because the next decade of revenue depends on getting this right. If the returns don't materialize on schedule, they'll simply redefine what returns mean. That's what monopoly power buys you: the luxury of moving goalposts.
The middle ring is where the real work happens, quietly. These are the plumbers of the AI economy—building orchestration systems, data pipelines, evaluation frameworks. Unsexy, essential work. They don't get TED talks; they get contract renewals. While everyone else chases demos, they're laying pipe.
Then there's the outer ring: the carnival of "AI for everything" apps. This is where the bubble gets its name and its casualties. Venture-funded fever dreams that burn cash on inference costs while promising to revolutionize dog grooming or whatever. When the music stops, this is where the chairs disappear first. The crowd will watch this ring implode and declare the revolution over. They'll be wrong—again.
We've watched this movie before. The dot-com crash incinerated billions at the edges while the infrastructure hardened beneath: fiber-optic cable, data centers, and eventually the cloud itself. Bubbles are wasteful teachers. They burn away the nonsense and force the boring work of building standards. The tuition isn't split evenly.
The Forgotten Middle
Here's what everyone's missing: small businesses don't want the singularity. They want their spreadsheets to talk to each other. They want claims processed faster, meetings that could have been emails to actually become emails, and maybe—just maybe—to leave the office before sunset on Friday.
Small and midsize enterprises employ most of the world's workers and generate an enormous share of its output, yet they're treated like an afterthought in our breathless AI discourse. While enterprises negotiate million-dollar pilots and consumers chase shiny toys, SMEs are starving for software that removes work instead of replacing workers. They want the boring miracles: reconciliations that reconcile themselves, forms that fill themselves out, dashboards that actually dashboard.
This is where the middle ring finds its fortune. The winners won't sell "artificial intelligence"—that phrase has been focus-grouped to death anyway. They'll sell connection: APIs that actually work with existing software, rollback buttons that feel like insurance, human oversight that doesn't feel like extra homework. In this market, competitive advantage isn't another chat interface; it's eliminating the third Tuesday of every month that everyone spends manually updating systems that should have been talking all along.
Reading Tomorrow's Rules Today
Our regulatory conversation remains stubbornly backward-looking, fretting about AI oversight in the abstract while concrete standards grow teeth in plain sight. The future isn't coming; it's already here, dressed as compliance requirements.
Three examples point the way forward, if you're willing to look:
The EU Data Act hits September 12, 2025, demanding that cloud providers make switching actually possible—real portability, not the fake kind where your data technically belongs to you but good luck getting it out. Design for this now and you get two wins: compliance and the switching story that SMEs actually care about.
ISO/IEC 42001 gives the world an AI management standard, which means buyers can finally ask for process instead of promises. If your product can't map risks and controls to recognizable frameworks, expect procurement conversations to die slow, paperwork deaths.
The NIST AI Risk Management Framework, now extended with a Generative AI Profile, is becoming the common language of regulated industries. Translate your monitoring and escalation into this vocabulary and watch sales friction evaporate.
Layer on the EU AI Act's timeline—bans and obligations rolling in over the next few years—and the future looks less like the Wild West than a well-regulated logistics network. The durable strategies are obvious: portability by design, evidence trails by default, controls that follow models across vendors like faithful dogs.
What Stays When the Dust Settles
Yes, open-source models are improving and costs are falling. Good. Competition keeps the giants honest and fuels the tool layer. But even thriving open ecosystems run on someone's silicon, someone's electricity, someone's app stores. The majors don't need exclusivity to win; they need inevitability. They'll offer the cheapest default, the safest checkbox, the bundle that lets risk-averse CTOs sleep soundly.
The Playbook
Fund usefulness, not cleverness. Back companies that eliminate steps from workflows that already exist—finance ops, logistics, healthcare paperwork. Demand before-and-after metrics, rollback plans, named humans responsible when things break. If a product can't explain what it automated and where accountability lives, it's performance art.
Build for escape routes. Treat interoperability as a product feature, not a regulatory burden. Make exports, transparent schemas, and vendor-agnostic evaluation core to your offering. Price on problems solved, not tokens consumed.
Regulate where power pools. Focus on switching costs, bundling practices, and default placements in cloud and devices. Target the energy footprints of inference. Write procurement rules that reward management systems aligned with standards and evidence trails that speak established frameworks.
The AI bubble is real and will punish the loudest promises backed by the flimsiest moats. But the platform shift beneath it runs deeper—and its rulebook sits in plain view for anyone willing to read actual documents instead of tweets. The question isn't whether AI succeeds. It's which layer we empower in the process, and whether we aim that power toward the part of the economy where usefulness compounds fastest: millions of businesses that don't want a revolution, just fewer tabs, fewer mistakes, and the possibility of leaving work at work.