What if there were a world where thousands of small models ruled instead of ChatGPT? What if there were a way to quickly and easily share fine-tunings of a model for specific tasks, styles, or data?
LoRA (Low-Rank Adaptation of Large Language Models) is a fine-tuning strategy that trains relatively quickly and can be shared as just the weight deltas (i.e., small file sizes). It was originally introduced for text-generation LLMs, but today it's best known in the image diffusion community.
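To make the small-file-size point concrete, here's a minimal sketch of the low-rank idea (the dimensions and rank below are illustrative, not taken from any particular model). Instead of shipping a full updated weight matrix, a LoRA ships two thin matrices whose product is the weight delta:

```python
import numpy as np

# A frozen pretrained weight matrix (dimensions are illustrative).
d, k = 4096, 4096
W = np.random.randn(d, k).astype(np.float32)

# LoRA learns a low-rank update delta_W = B @ A, with rank r << min(d, k).
# (In practice a scaling factor alpha/r is also applied; omitted here.)
r = 8
A = np.random.randn(r, k).astype(np.float32) * 0.01  # trained
B = np.zeros((d, r), dtype=np.float32)               # trained (initialized to zero)

# At inference time the delta can be merged into the base weight:
W_adapted = W + B @ A

# Sharing the fine-tune only requires A and B, not W:
full_params = W.size           # ~16.8M parameters
lora_params = A.size + B.size  # ~65.5K parameters (~0.4% of the full matrix)
print(f"full: {full_params:,}  lora: {lora_params:,}")
```

This is why a LoRA download is megabytes where a full checkpoint is gigabytes, and why the same base model can host many swappable styles.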
Several websites have sprung up to let users host and download these fine-tuning deltas (Hugging Face is one for general models, but many exist for specific models like Stable Diffusion). The fine-tunings are mostly hobbyist work; the styles of the most-downloaded models range from photorealistic to anime to pixel art to NSFW. The sites themselves aren't that interesting, but a few emerging behaviors are worth noting:
Developers applying multiple LoRAs at once (see the sketch after this list)
LoRAs with "trigger words" deliberately added to the training set so that users can invoke the fine-tuned style more reliably
Models trained on specific artists (some of whom have said they don't want their art trained on)
Fine-tunings for character universes: capturing the styles and characters of well-known movies and television shows so they can be triggered more reliably
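As a sketch of the first two behaviors (stacking LoRAs and trigger words), here's roughly what this looks like with the Hugging Face diffusers library. The LoRA repo paths, adapter names, blend weights, and the trigger word "pxlart" are hypothetical placeholders, not real checkpoints:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load two community LoRAs (repo paths and adapter names are hypothetical).
pipe.load_lora_weights("someuser/pixel-art-lora", adapter_name="pixel")
pipe.load_lora_weights("someuser/watercolor-lora", adapter_name="watercolor")

# Apply both at once, blended with per-adapter weights.
pipe.set_adapters(["pixel", "watercolor"], adapter_weights=[0.8, 0.4])

# "pxlart" stands in for a trigger word baked into the LoRA's training captions.
image = pipe("pxlart, a castle on a hill at sunset").images[0]
image.save("castle.png")
```

The per-adapter weights let users dial each style up or down independently, which is what makes the remixing culture possible.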
I wouldn't be surprised to see a more robust version of this idea in the future: IP holders (media companies, artists, etc.) hosting, or charging for, LoRA models of their characters; fine-tunings that compress long prompt engineering into single-token complex styles; remixes of fine-tuned models to create even more models.
https://civitai.com/ is doing this for Diffusion Models
https://adapterhub.ml/ is a similar hub for adapters on text models