Overview of GPT-as-a-Backend
I attended a generative-AI hackathon hosted by Scale last week, and the winning project was backend-GPT. Don't worry: DevOps engineers are safe for now. But it has a clever trick.
How it works and some thoughts.
The project consists of a backend that has a single catch-all API route. The backing store is a simple JSON file.
The trick: the route and payload, along with the JSON database, are fed into a templated prompt, and the language model interprets the request, applies the implied state changes to the database, and produces the response. The example the team built was a TODO app with REST-like (but unimplemented) endpoints that performed simple CRUD operations on TODOs.
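To make the mechanism concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the Flask catch-all route, the prompt wording, and the call_llm() helper stand in for whatever the team actually wrote. Only the overall shape (route plus payload plus JSON file, templated into a prompt, with the model returning both the response and the new database state) comes from the project as described.

```python
import json
from flask import Flask, request, jsonify

app = Flask(__name__)
DB_PATH = "db.json"  # the entire backing store is one JSON file

PROMPT_TEMPLATE = """You are the backend for a TODO app.
Database state:
{db}

Handle this request and reply with JSON of the form
{{"response": ..., "new_database": ...}}.

{method} /{path}
Body: {body}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to your LLM of choice
    (e.g. a chat-completion API) and return the raw text reply."""
    raise NotImplementedError

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def catch_all(path):
    # One route handles everything; the model supplies the "implementation".
    with open(DB_PATH) as f:
        db = f.read()
    prompt = PROMPT_TEMPLATE.format(
        db=db,
        method=request.method,
        path=path,
        body=request.get_json(silent=True),
    )
    result = json.loads(call_llm(prompt))  # trust the model to emit valid JSON
    with open(DB_PATH, "w") as f:
        json.dump(result["new_database"], f)  # persist whatever state the model returned
    return jsonify(result["response"])

if __name__ == "__main__":
    app.run()
```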
I could see this becoming a great tool for front-end developers: build and test against a realistic (but fake) backend without coordinating with the API team.
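From the front end's point of view, this looks like any other HTTP API. A hypothetical session against the sketch above might look like this; the endpoint names are invented, and nothing implements them except the prompt:

```python
import requests

BASE = "http://localhost:5000"  # wherever the sketch above is running

# None of these endpoints exist as code; the model infers their meaning
# from the route names and payloads alone.
requests.post(f"{BASE}/todos", json={"title": "write blog post"})
print(requests.get(f"{BASE}/todos").json())  # list the current TODOs
requests.delete(f"{BASE}/todos/1")           # the model decides what id 1 refers to
```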
Some interesting routes for possible next steps:
Could we translate OpenAPI definitions into (better) clients and servers? This was one of the topics of AI for Source Code Generation; a sketch of this direction follows below.
Instead of storing all the state inside the model's context, could CRDTs be generated from each request? Mergeable data structures, similar to the multiplayer data structures in applications like Figma.
Could routes be inferred from another source of documentation? Consume pages from a third-party SaaS's API docs to get a staging environment.
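On the OpenAPI idea: one way to make the fake server more faithful would be to paste the documented operation into the prompt rather than letting the model guess what a route means. A hedged sketch, where the helper name, the prompt wording, and the expected spec layout are all my own assumptions:

```python
import json
import yaml  # pip install pyyaml

def prompt_for(spec_path: str, method: str, route: str, body: dict) -> str:
    """Build a prompt pinned to a documented OpenAPI operation (hypothetical helper)."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    # Raises KeyError for undocumented routes, so the mock can't invent endpoints.
    operation = spec["paths"][route][method.lower()]
    return (
        "You are a mock server. Implement exactly this OpenAPI operation,\n"
        "respecting its request and response schemas:\n"
        f"{json.dumps(operation, indent=2)}\n\n"
        f"Request: {method} {route}\n"
        f"Body: {json.dumps(body)}\n"
        "Reply with the JSON response body only."
    )
```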