Interface Decomposition in the Cloud
Snowflake's critical feature was disaggregating compute and storage in its database engine. As a result, customers could scale (and pay for) storage and compute independently, which produced cost profiles that tracked actual usage and enabled zero-downtime scaling. This model couldn't have existed in an on-premises data center architecture – only in the cloud.
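As a concrete illustration, resizing compute in Snowflake is a one-statement operation that never touches the data: storage lives (and is billed) separately. A minimal sketch using the snowflake-sdk Node.js client, with placeholder account, warehouse, and credential values:

```typescript
import snowflake from 'snowflake-sdk';

// Placeholder credentials -- substitute real account/user values.
const connection = snowflake.createConnection({
  account: 'my_account',
  username: 'my_user',
  password: process.env.SNOWFLAKE_PASSWORD!,
});

connection.connect((err) => {
  if (err) throw err;
  // Resize the compute cluster ("virtual warehouse") without touching
  // storage: the data stays in object storage and is billed separately.
  connection.execute({
    sqlText: "ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'XLARGE'",
    complete: (execErr) => {
      if (execErr) throw execErr;
      console.log('Compute scaled up; storage untouched.');
    },
  });
});
```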
Snowflake's advancements have been copied and integrated elsewhere – not only in competing data warehouse products but also behind more traditional database interfaces, like the Postgres-compatible databases AWS Aurora and Google Cloud's AlloyDB.
The playbook is to find a commonly used interface (like ANSI SQL) and decompose it into cloud primitives. Before, you'd take a complete application and lift and shift it to a cloud-managed service, which offered two knobs – scale horizontally (more instances) and scale vertically (a bigger instance size, more CPU+RAM).
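Expressed as code, the lift-and-shift generation's entire scaling surface might look like this hypothetical two-field config (the type and values are illustrative, not any vendor's actual API):

```typescript
// Hypothetical config for a lift-and-shift managed service:
// the interface stays whole, and scaling means turning one of two knobs.
interface ManagedServiceConfig {
  instanceCount: number; // horizontal: add or remove whole instances
  instanceSize: 'small' | 'medium' | 'large' | 'xlarge'; // vertical: more CPU+RAM per instance
}

const config: ManagedServiceConfig = {
  instanceCount: 3,
  instanceSize: 'large',
};
```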
The next generation of cloud-managed services decomposes these interfaces even further. It's still the same interface – whether that's a particular wire format, API, or ABI. However, the backend architecture might be (and most likely is) completely different. AWS Aurora and Google Cloud's AlloyDB are elastic, serverless Postgres-compatible databases – PostgreSQL-compatible, but not a monolithic architecture underneath.
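The decomposition is invisible from the client's side: a stock Postgres driver speaks the same wire protocol to Aurora or AlloyDB as it would to a single-node Postgres. A sketch with the node-postgres (pg) client; the endpoint and credentials are placeholders:

```typescript
import { Client } from 'pg';

// The endpoint is a placeholder; behind it, the service decomposes the
// monolith into a distributed storage layer plus independently scaled
// compute nodes -- but the client can't tell.
const client = new Client({
  host: 'my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com',
  port: 5432,
  user: 'app',
  password: process.env.PGPASSWORD,
  database: 'app',
});

async function main() {
  await client.connect();
  const res = await client.query('SELECT version()'); // ordinary Postgres SQL
  console.log(res.rows[0]);
  await client.end();
}

main().catch(console.error);
```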
Frontend frameworks have decomposed their interfaces too (although those interfaces aren't as well defined as SQL). Companies like Vercel and Gatsby have split the interface into edge functions (API routes) and a CDN (static files). So instead of scaling a web server that serves both static and dynamic routes (e.g., Nginx, Apache), you can scale each cloud primitive independently.
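For instance, a Next.js API route deployed on Vercel can opt into the Edge runtime with a one-line config, while the build's static files go to the CDN; the two scale separately. A minimal sketch (the route and payload are illustrative):

```typescript
// pages/api/hello.ts -- deployed as an edge function, scaled
// independently of the static files served from the CDN.
export const config = { runtime: 'edge' };

export default function handler(_req: Request): Response {
  // Dynamic work runs at the edge; no web server to size or manage.
  return new Response(JSON.stringify({ hello: 'world', at: Date.now() }), {
    headers: { 'content-type': 'application/json' },
  });
}
```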
What's next in interface decomposition? I'm not sure, but here are a few guesses:
- Event streaming (e.g., Kafka) + functions (see the sketch after this list)
- Compute + storage (for more applications, like APM)
- Compute + network filesystem
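As promised, a rough sketch of the first guess: a kafkajs consumer that invokes a per-event handler. The handler stands in for what a managed streams-plus-functions primitive might run and bill independently; the broker and topic names are placeholders.

```typescript
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ clientId: 'sketch', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'fn-runner' });

// The "function" half of the decomposition: in a managed offering,
// the platform would scale and bill these invocations independently
// of the stream's storage and throughput.
async function handle(event: { topic: string; value: string }) {
  console.log(`fn(${event.topic}):`, event.value);
}

async function main() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'events', fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ topic, message }) => {
      await handle({ topic, value: message.value?.toString() ?? '' });
    },
  });
}

main().catch(console.error);
```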