OpenAI is no stranger to contrarian strategies: complex financial structures, research-heavy culture, a product behind a login wall, a solution in search of a problem.
But the most perplexing is its success (so far) in shipping both a consumer product (ChatGPT) and an enterprise product (APIs) at the same time, even though the two motions are always at odds.
Some thoughts:
Users can bypass ChatGPT and use the usage-based APIs directly. Many thin wrappers emulate the basic ChatGPT UI. You can easily store and export your conversations locally. And your data will never be used to train future models. What’s to stop competitors from building these experiences first with OpenAI models, then their own?
Ads will come for LLMs. Product, hotel, and other recommendations will show up in your output. It’s a matter of time. The go-to-market and feature set of an ad-supported LLM will be very different from an enterprise model.
Maybe there is a flywheel. The consumer application generates data; enterprise applications consume it. I imagine that's the current strategy at OpenAI. The flywheel might be powerful enough to subsidize the consumer version.
Maybe it’s a true platform shift. Just like the advent of word processors and spreadsheets, maybe LLMs are a fundamental tool that everyone will use (I think so). Maybe there’s a decade or two of “good times” for a company like OpenAI, just like Microsoft had before another platform shift (Google Docs) came around.
Great companies play by their own rules, so it will be interesting to see this play out.
Matt, I agree with this:
"Just like the advent of word processors and spreadsheets, maybe LLMs are a fundamental tool that everyone will use (I think so)."
It sure does seem that way after working with these tools nonstop for about a year, seeing how much more productive I can be as a result, and how much more I can learn about the world.
"Maybe there’s a decade or two of “good times” for a company like OpenAI, just like Microsoft had before another platform shift (Google Docs) came around."
I agree with the "good times" principle, but I feel like it might be more like 5 years, as the time between paradigm shifts inevitably gets shorter and shorter.
By the way, can you imagine if Elon were running things at OpenAI? 🥲😂 Interesting hypothesis.