Llama 2 is a commercially available, open-source model from Meta that builds on LLaMA, the “academic-use-only” model that was, in reality, generally available to anyone who could click a download link.
At a high level, many are familiar with strategic open source from big technology companies: products like Android, Chrome, and Visual Studio Code. But why exactly would Meta make the Llama 2 weights commercially available? A more in-depth analysis follows.
The framework in A Short Taxonomy of Open-Source Strategies identifies seven categories of strategic power from open source: hiring, marketing, go-to-market (complement), go-to-market (free-tier), reduce competitor’s moat, goodwill, and standards lobbying.
Likewise, a framework applied specifically to machine learning models in Why Open-Source a Model? lists four reasons:
You have proprietary data but not enough resources or expertise.
You want to recruit and retain top researchers.
You sell hardware or cloud resources.
You have no distribution but have a breakthrough insight.
The Llama 2 license itself hints at Meta’s goals:
No Improvements to Other Models: You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
Restrictive Terms for Competitors: If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
No Trademark Licenses: Meta retains its branding and marketing rights.
Using these frameworks, the launch announcement, and the license, here are some hypotheses on why Meta open-sourced Llama 2:
Reduce Competitor’s Moat. Llama 2 hurts two kinds of competitors. The first is companies with proprietary models: Google and OpenAI (and Microsoft, by association). The second is any company that sits in the serving stack but still needs to build an audience organically (Meta already has billions of captive users across its properties).
Go-to-market (free-tier or complement). Llama 2 is available in 7B, 13B, and 70B parameter sizes. What if we viewed these smaller models as a “freemium” self-serve tier? You build your infrastructure around the Llama 2 architecture and try it out on your own cloud, then adopt a future offering from Meta that is (1) extremely large or (2) kept up to date in an online way that most organizations couldn’t accomplish.
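To make the self-serve step concrete, here is a minimal sketch of what running the 7B chat variant on your own hardware might look like, assuming the Hugging Face transformers and accelerate libraries and gated access to the meta-llama checkpoints on the Hub (the prompt and generation settings are purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated repo: requires accepting the Llama 2 license on the Hugging Face Hub.
model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single mid-range GPU
    device_map="auto",          # let accelerate place layers on the available devices
)

prompt = "Why would a company open-source a large language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That is the entire on-ramp: no contract, no usage-based bill, just weights you can run wherever your data already lives.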
What would be the complement that Meta itself ships? Some possibilities:
LLM-enabled features for Instagram / Threads / Facebook.
Hardware: chips and data centers purpose-built for Llama.
ML framework: any Llama derivative will work best in PyTorch.
A future commercial offering: a managed Llama 2 service.
Marketing. If Meta plays its cards right, it could develop a reputation as a company operating at the cutting edge. Google’s reputation did wonders for it for decades (until it didn’t) with developers, users, and the general media. Meta faces a much steeper uphill battle here, but the general sentiment lately is that Meta is shipping.