Here’s the thing about building AI services in 2025: it’s not just about clever ideas anymore. The real power lies in having access to massive foundation models and the computational resources to train them. And who has those in spades? The usual suspects: OpenAI, Anthropic, Google, and their ilk.
I’ve watched countless brilliant startups launch specialized AI tools - proper clever stuff for specific niches. Voice synthesis, image generation, code completion - you name it. But time and again, we see the same pattern play out: the moment these tools gain any real traction, the foundation model providers simply add the feature as a capability to their base models.
The “Just Add Water” Problem
Remember when ElevenLabs made a splash with its voice synthesis? Brilliant technology, genuinely impressive stuff. Then OpenAI casually added voice capabilities to ChatGPT, and suddenly everyone’s integration plans got a bit… complicated. It’s like watching someone build a lovely sandcastle right before high tide.
The pattern is rather predictable:
- Startup identifies specific AI use case
- Builds specialized solution
- Gains traction
- Foundation model provider adds capability
- Startup’s advantage evaporates
Scale Is More Than Just Computing Power
“But surely specialized tools will always be better?” I hear you ask. Well, that’s where it gets interesting. The big players have three compounding advantages:
- Data Flywheel: Every interaction with their models generates training data for improvement
- Integration Advantage: Their features work seamlessly together
- Distribution Power: They already have millions of developers using their APIs

The Uncomfortable Truth
Building an AI business on top of foundation models is rather like building a shop on someone else’s land - the landlord can always decide to open their own hypermarket right next door.
What This Means for Builders
Does this mean we should all pack up and go home? Not quite. But it does mean being strategic about where we place our bets. Here’s what I’ve learned from watching this pattern play out:
- Focus on Implementation: The value is increasingly in how you use AI, not in building basic AI capabilities
- Build for Gaps: Look for spaces the big players are likely to ignore
- Assume Commoditization: Plan for your core tech advantages to eventually become commodities
The Path Forward
The future of AI innovation isn’t in building better basic capabilities - it’s in finding novel applications and combinations of existing ones. It’s about understanding specific domains deeply enough to apply AI in ways the generalists won’t think of.

The truly exciting opportunities lie in building things that are so specific, so wonderfully niche, that the big players won’t bother replicating them. Because while they can swallow any layer they want, they can’t digest everything at once.
And that’s where we come in - not trying to out-compute the giants, but out-thinking them in the corners they’re too busy to notice. After all, the best innovations often happen in the margins, not the mainstream.
Now, if you’ll excuse me, I’ve got some wonderfully specific AI experiments to attend to. Probably won’t change the world, but they might just carve out an interesting corner of it.