Meta’s LLaMA is one of the most popular families of large language models. Despite its non-commercial license, its easy-to-obtain weights have made it one of the most widely used openly available models in academia and beyond. A look at the ecosystem that has developed around it:
Fine-tuned offshoots
Replicated, but under a permissive license — RedPajama, OpenLLaMA, OpenAlpaca
Instruction-following model — Alpaca
More training data for different languages — Chinese-LLaMA-Alpaca
Quantization of LLaMA — GPTQ-for-LLaMA (a toy quantization sketch follows this list)
Fine-tune on consumer hardware (with LoRA) — Alpaca-lora (a LoRA setup sketch also follows this list)
Training data from other LLMs — WizardLM
“Uncensored” training data — WizardLM-Uncensored
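GPTQ itself picks quantized values using approximate second-order information, which is beyond the scope of a sketch. As a much simpler illustration of what quantization buys you, here is a toy round-to-nearest version in Python (illustrative only, not GPTQ's actual algorithm or API): each row of an fp32 weight matrix is mapped to 4-bit integers plus a per-row scale and offset, cutting storage roughly 8x at the cost of some reconstruction error.

```python
import numpy as np

def quantize_rtn(weights: np.ndarray, bits: int = 4):
    """Toy round-to-nearest quantization of a weight matrix.

    GPTQ is considerably smarter, but the storage win is the same idea:
    replace fp32 weights with small integers plus per-row scale/offset.
    """
    levels = 2 ** bits - 1                      # 15 quantization steps for 4-bit
    w_min = weights.min(axis=1, keepdims=True)  # per-row minimum (the "zero point")
    w_max = weights.max(axis=1, keepdims=True)  # per-row maximum
    scale = (w_max - w_min) / levels            # step size per row
    q = np.round((weights - w_min) / scale)     # integers in [0, 15]
    dequant = q * scale + w_min                 # what inference actually uses
    return q.astype(np.uint8), scale, w_min, dequant

w = np.random.randn(4, 8).astype(np.float32)
q, scale, zero, w_hat = quantize_rtn(w)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```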
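Alpaca-lora builds on Hugging Face's peft library. A minimal sketch of how a LoRA fine-tune is set up, assuming a LLaMA checkpoint in Hugging Face format (the path below is a placeholder): instead of updating all 7B weights, small low-rank adapter matrices are injected into the attention projections, and only those are trained, which is what makes consumer hardware viable.

```python
# pip install torch transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder path: point this at a LLaMA checkpoint in HF format.
model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b-hf")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```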
Tools to run LLaMA
llama.cpp — a port of LLaMA inference to pure C/C++ (a minimal usage sketch via its Python bindings follows this list)
dalai — a command-line tool that makes it easy to run LLaMA locally
chat.matt-rickard.com — WebGPU-accelerated Vicuna in the browser
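llama.cpp is driven from the command line, but the community llama-cpp-python bindings expose the same engine from Python. A minimal sketch, assuming you have already converted and quantized a model with llama.cpp's own tooling (the model path is a placeholder):

```python
# pip install llama-cpp-python  (community bindings around llama.cpp)
from llama_cpp import Llama

# Placeholder path: a model quantized with llama.cpp's conversion scripts.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],  # stop before the model starts a new question
)
print(out["choices"][0]["text"])
```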