ML Developer Experience
I came across a great reverse engineering of the GitHub Copilot product the other day. The takeaway: the developer experience part of Copilot is just as impressive as the machine learning part.
What are some model-agnostic lessons?
Simple and familiar interfaces: Copilot works as an extension in VS Code, the editor of choice for most developers. It's simple to install and easy to authenticate with GitHub. Developers already had autocomplete and were used to seeing code suggestions as they typed. Copilot just did it better (and with contextual understanding).
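For concreteness, here's roughly what plugging into that familiar surface looks like: VS Code exposes an inline-completion API that any extension can register against. This is a minimal sketch, not Copilot's actual code – fetchSuggestion is a hypothetical stand-in for the model call.

```ts
import * as vscode from "vscode";

// Hypothetical stand-in for the network call to the completion model.
async function fetchSuggestion(prefix: string): Promise<string> {
  return `/* model completion for: ${prefix.slice(-20)} */`;
}

export function activate(context: vscode.ExtensionContext) {
  const provider: vscode.InlineCompletionItemProvider = {
    // Called by the editor as the user types, i.e. the same surface
    // developers already know from plain autocomplete.
    async provideInlineCompletionItems(document, position) {
      const prefix = document.getText(
        new vscode.Range(new vscode.Position(0, 0), position)
      );
      const suggestion = await fetchSuggestion(prefix);
      return [new vscode.InlineCompletionItem(suggestion)];
    },
  };

  context.subscriptions.push(
    vscode.languages.registerInlineCompletionItemProvider(
      { pattern: "**" }, // every file type
      provider
    )
  );
}
```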
Invoke when you can add value: Some Copilot suggestions never make it to the user. They might be repetitive, unhelpful, or wrong for the context. Copilot uses cheap and easy tricks (e.g., a logistic regression over simple features, a few regexes) to decide whether to invoke the model at all. OpenAI surely learned this lesson with ChatGPT as well – reinforcement learning from human feedback (RLHF) taught the model to hold back answers it wasn't confident about.
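As a sketch of what such a gate might look like (the features, weights, and the shouldInvokeModel name here are invented for illustration, not Copilot's actual ones):

```ts
type EditorContext = {
  lineText: string;                 // current line up to the cursor
  timeSinceLastSuggestionMs: number;
};

function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

// A tiny logistic-regression-style gate: cheap features in, probability out.
// Only call the (slow, expensive) model if the probability clears a threshold.
function shouldInvokeModel(ctx: EditorContext): boolean {
  const features = [
    ctx.lineText.trimEnd().endsWith(".") ? 1 : 0, // mid-expression: completion likely useful
    ctx.lineText.trim().length === 0 ? 1 : 0,     // blank line: starting a new statement
    /^\s*(\/\/|#)/.test(ctx.lineText) ? 1 : 0,    // inside a comment: less useful
    ctx.timeSinceLastSuggestionMs > 2000 ? 1 : 0, // avoid spamming suggestions
  ];
  const weights = [1.2, 0.8, -0.5, 0.6];          // invented; would be learned offline
  const bias = -0.4;

  const score = features.reduce((acc, f, i) => acc + f * weights[i], bias);
  return sigmoid(score) > 0.5;
}
```

The point is the asymmetry: the gate runs in microseconds, the model call doesn't.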
Fake it or reduce the scope when you can: Copilot's magic is that its suggestions are contextual. It knows about a variable that was declared in a different file. How does it do this? Isn't it too much to send the whole repository as context? Yep, and that's why Copilot only sends snippets from the ~20 most recently accessed files of the same language as part of the prompt. Of course, that's not the whole context, but it's a simple enough heuristic that's right most of the time.
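The write-up describes the snippet selection as a sliding window scored by Jaccard similarity against the text around the cursor. The sketch below is a paraphrase of that idea; the tokenizer, window size, stride, and function names are simplified stand-ins.

```ts
// Split text into a set of identifier-like tokens.
function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/[^a-z0-9_]+/i).filter(Boolean));
}

// Jaccard similarity: |A ∩ B| / |A ∪ B|.
function jaccard(a: Set<string>, b: Set<string>): number {
  let intersection = 0;
  for (const token of a) if (b.has(token)) intersection++;
  const union = a.size + b.size - intersection;
  return union === 0 ? 0 : intersection / union;
}

// Slide a window over each recently accessed file and keep the snippet
// most similar to the text around the cursor.
function bestSnippet(
  cursorContext: string,
  recentFiles: string[], // contents of the ~20 most recent same-language files
  windowLines = 60,
  stride = 10
): { snippet: string; score: number } {
  const target = tokenize(cursorContext);
  let best = { snippet: "", score: 0 };

  for (const file of recentFiles) {
    const lines = file.split("\n");
    for (let i = 0; i < lines.length; i += stride) {
      const snippet = lines.slice(i, i + windowLines).join("\n");
      const score = jaccard(target, tokenize(snippet));
      if (score > best.score) best = { snippet, score };
    }
  }
  return best; // only included in the prompt if the score clears a threshold
}
```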
Constantly be learning: Copilot sends telemetry back on which suggestions were accepted – i.e., which inserted code blocks weren't edited "that much" after 30 seconds. This is essential both to the actual model (as a learning signal) and to the human product managers. Whether or not GitHub's claims about Copilot's effectiveness are actually true (the percentage of code written by Copilot), it's at least a concrete metric to judge the product by.
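One way to operationalize "wasn't edited that much" is a normalized edit distance between the inserted suggestion and what's in the buffer when the timer fires. The 30-second delay matches the write-up; the 30% threshold below is my invention.

```ts
// Classic Levenshtein distance with a single-row DP table.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => i);
  for (let j = 1; j <= b.length; j++) {
    let diagonal = dp[0]; // distance(a[0..i-1], b[0..j-1]) from the previous row
    dp[0] = j;
    for (let i = 1; i <= a.length; i++) {
      const tmp = dp[i];
      dp[i] = Math.min(
        dp[i] + 1,                                 // insert a character of b
        dp[i - 1] + 1,                             // delete a character of a
        diagonal + (a[i - 1] === b[j - 1] ? 0 : 1) // match or substitute
      );
      diagonal = tmp;
    }
  }
  return dp[a.length];
}

// Count the suggestion as "accepted" if, when the timer fires, the buffer
// region where it was inserted is within 30% edit distance of the original.
function stillAccepted(suggestion: string, bufferRegion: string): boolean {
  const distance = levenshtein(suggestion, bufferRegion);
  return distance / Math.max(suggestion.length, 1) <= 0.3;
}

// Scheduled roughly 30 seconds after insertion; reportTelemetry and
// readBufferRegion are hypothetical helpers:
// setTimeout(() => reportTelemetry(stillAccepted(suggestion, readBufferRegion())), 30_000);
```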
Just as ChatGPT unlocked renewed interest in GPT-3, I'd guess that the next few years will focus more on applying the advances in LLMs than on pushing the models' understanding further (although that might also happen).