Foundational Models Are Commodities
There are over 24 public LLMs from at least eight providers (OpenAI, Google, Meta, AI21, EleutherAI, Anthropic, Bloom, Salesforce, and more) for developers to choose from. You can even train one from scratch on only public data and still get strong results (see LLaMA).
Developers can switch out a model with a single line of code. In addition, new models are incorporated across libraries as soon as they are released.
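To make the one-line swap concrete, here is a minimal sketch. The provider classes below are hypothetical stand-ins, not real SDK calls; the point is only that when models sit behind a shared interface, switching is a single assignment.

```python
# Hypothetical sketch: two providers behind one shared interface.
# These classes are stand-ins, not real vendor SDKs.

class OpenAIModel:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"


class AnthropicModel:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"


# Switching models is a one-line change:
model = OpenAIModel()  # model = AnthropicModel()
print(model.complete("Summarize this report."))
```

Libraries like LangChain follow the same pattern: the model object is a drop-in parameter, so the rest of the pipeline never changes.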
There are still trade-offs between latency, cost, size, training data, and more when choosing the right model. But the verdict is in:
Foundational models are commodities.
And yet, foundational models by themselves are not enough.
It isn't easy to orchestrate calls between LLMs, internal databases, and APIs.
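A sketch of what that orchestration looks like when hand-rolled, with every component stubbed out (the function names and the ticket-answering scenario are illustrative assumptions, not a real system):

```python
# Illustrative sketch: chaining an internal database, an LLM, and a
# downstream API by hand. All three components are stubs.

def query_db(user_id: str) -> dict:
    # Stand-in for an internal database lookup.
    return {"user_id": user_id, "plan": "pro"}


def call_llm(prompt: str) -> str:
    # Stand-in for a foundation-model API call.
    return f"LLM answer based on: {prompt}"


def call_crm_api(message: str) -> dict:
    # Stand-in for a downstream API (e.g., posting to a CRM).
    return {"status": "sent", "message": message}


def answer_ticket(user_id: str, question: str) -> dict:
    record = query_db(user_id)                     # 1. fetch internal data
    prompt = f"User on {record['plan']} plan asks: {question}"
    answer = call_llm(prompt)                      # 2. call the LLM
    return call_crm_api(answer)                    # 3. push the result out
```

Even this toy version needs glue code for retries, error handling, and data shuttling between each hop; that glue is exactly what the raw model does not provide.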
Techniques like chain-of-thought prompting can improve reasoning ability, but they don't come out of the box.
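At its simplest, chain-of-thought is just prompt construction the developer has to add themselves. A minimal sketch using the zero-shot "Let's think step by step" trigger:

```python
# Minimal sketch: zero-shot chain-of-thought is just a prompt template.

def with_chain_of_thought(question: str) -> str:
    # Appending a reasoning trigger nudges the model to show
    # intermediate steps before its final answer.
    return f"Q: {question}\nA: Let's think step by step."


prompt = with_chain_of_thought(
    "If I have 3 apples and eat 1, how many remain?"
)
```

Few-shot variants work the same way, except the template is prefixed with worked examples instead of a trigger phrase.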
Augmenting the context (e.g., by first filtering candidate items via a vector similarity search) requires extra infrastructure.
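The core of that retrieval step can be sketched in a few lines: embed the query, rank documents by cosine similarity, and keep only the top hits for the prompt. This toy version assumes embeddings are already computed; in practice you also need an embedding model and a vector store.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query_vec: list[float], docs: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    # docs: (text, embedding) pairs with precomputed embeddings.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]


docs = [
    ("refund policy", [1.0, 0.0]),
    ("hiring process", [0.0, 1.0]),
    ("billing FAQ", [0.9, 0.1]),
]
relevant = top_k([1.0, 0.0], docs, k=2)  # most refund-like documents first
```

Only `relevant` goes into the prompt, which is how a small context window can stand in for a large corpus.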
DSLs (like ChatML) might be needed to serve more domain-specific use cases.
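ChatML, for example, wraps each message in role-delimited tokens. A small formatter sketch (the helper function is hypothetical; the `<|im_start|>`/`<|im_end|>` delimiters are ChatML's):

```python
# Sketch: rendering role-tagged messages into the ChatML format,
# i.e. <|im_start|>{role}\n{content}<|im_end|> per message.

def to_chatml(messages: list[dict]) -> str:
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    return "\n".join(parts)


conversation = to_chatml([
    {"role": "system", "content": "You are a support agent."},
    {"role": "user", "content": "Where is my order?"},
])
```

Every team serving a chat-style or domain-specific use case ends up writing (or adopting) a formatter like this, which is one more layer the raw model does not supply.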