Hey Matt, great post as always. I'd like to know your thoughts on model defensibility. Right now it seems everyone is focused on producing InstructGPT-like responses by fine-tuning open-source models, and I'm curious whether you think it's a fad or something that could actually threaten proprietary models.
Too early to tell, but my take is that foundation models are on the road to commoditization. Smaller open-source models, fine-tuned on examples generated by larger proprietary models, seem like the equilibrium: faster, cheaper, and easier to tune.