GraphQL Trades Complexity
GraphQL decouples the frontend team's data needs from the teams managing the API and data layers.
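As a sketch of that decoupling (the type and field names here are hypothetical, not from any real schema): two clients can ask the same endpoint for different shapes of the same entity, and neither requires the backend team to ship a new bespoke API.

```graphql
# A web client asks for a rich profile view:
query WebProfile {
  user(id: "1") {
    name
    avatarUrl
    posts(limit: 10) {
      title
      body
    }
  }
}

# A mobile client, hitting the same endpoint, asks only
# for the fields its smaller screen actually renders:
query MobileProfile {
  user(id: "1") {
    name
    avatarUrl
  }
}
```

The frontend team changes the query when its data needs change; the API team only gets involved when the schema itself has to grow.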
GraphQL was first deployed for Facebook's native mobile applications, where clients' and backends' data needs were rapidly diverging. Mobile clients were different from web clients. Some APIs didn't even exist (e.g., the data was hydrated server-side and sent down as HTML). And shipped iOS applications might rarely (or never) be updated.
The ability to scale GraphQL depends heavily on your underlying data architecture. Meta did it well because they could design the protocol around theirs. It also depends on the shape of the data you're querying (e.g., highly normalized or not).
Like monorepos, GraphQL seems to have a U-shaped utility function: it is great for small teams and might be good for certain large ones. Small teams can rely on native caching solutions. Mid-sized teams will likely have some topology requiring special caching, which is where GraphQL gets tough (no native versioning, no type-safety across different schemas, etc.). The largest teams will hit these problems regardless of the technology they use.
Not all problems are technical. GraphQL can turn a coordination problem (teams negotiating bespoke APIs with one another) into a technical one, and that trade may be easier (or tougher) depending on the organization.
GraphQL tends to be polarizing because it redistributes complexity rather than removing it. Backend engineers no longer have to write N custom endpoints but now must solve (arguably harder) data pipeline problems. Frontend engineers no longer have to wait for a bespoke API to deliver their data but now have to write complex queries on the client.
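A hedged illustration of where the complexity moves (again, the schema and field names are hypothetical): the frontend composes one nested query instead of calling several endpoints, and the backend must now resolve each nested field efficiently.

```graphql
# Instead of GET /feed, then GET /users/:id for each author,
# the client writes a single nested query:
query Feed {
  feed(limit: 20) {
    title
    author {        # Resolved per post: a naive backend does N
      name          # author lookups for N posts (the "N+1"
      avatarUrl     # problem), so the data layer needs batching.
    }
  }
}
```

The frontend's burden is composing and maintaining that query; the backend's burden is making arbitrary nestings like `feed → author` cheap, typically by batching and caching lookups in the data layer.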
There's no silver bullet, and GraphQL can't solve every problem. But it solves certain ones for certain teams. Whether it becomes as ubiquitous as REST and RPC: TBD.