Database migrations sound difficult, and they are even harder in practice. Unlike stateless code, a migration is an arrow in time.
Development / production parity is impossible. You can (and should) try to recreate production state as much as possible. This is much easier at the API and web tiers of your application. There might be different endpoints, different API keys, or different network rules, but there are workarounds for coercing them to be somewhat reproducible between environments. When it comes to databases, all bets are off. Development environments won’t have live data (and shouldn’t have personally identifiable information).
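One partial workaround is to seed development databases with production-shaped data that has the identifying fields masked. Below is a minimal sketch of that idea; the table, columns, and scrubbing rules are all invented for illustration.

```python
import sqlite3

# Hypothetical sketch: seed a dev database from production-shaped rows
# while replacing PII with deterministic placeholders.
def scrub_users(prod_rows):
    """Replace identifying fields with generated stand-ins."""
    scrubbed = []
    for i, (user_id, email, name) in enumerate(prod_rows, start=1):
        scrubbed.append((user_id, f"user{i}@example.test", f"User {i}"))
    return scrubbed

prod = [(101, "alice@corp.com", "Alice Smith"),
        (102, "bob@corp.com", "Bob Jones")]

dev = sqlite3.connect(":memory:")
dev.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
dev.executemany("INSERT INTO users VALUES (?, ?, ?)", scrub_users(prod))
print(dev.execute("SELECT email FROM users ORDER BY id").fetchall())
```

This keeps row counts and shapes realistic without copying personal data, though it still won't reproduce the messy edge cases live data accumulates.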
Rollbacks are great in theory but tough in practice. Most frameworks treat "down" migrations as a headline feature, but they rarely work in practice. Each "down" migration requires special care — what do you do with the existing data? Not all transformations have an inverse. A sophisticated "down" migration is just another forward migration.
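A toy illustration of a migration with no true inverse: merging two name columns into one. The schema and data here are invented for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")
db.execute("INSERT INTO users VALUES (1, 'Mary Ann', 'van der Berg')")

# "Up" migration: rebuild the table with a single full_name column.
db.execute("CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, full_name TEXT)")
db.execute("INSERT INTO users_v2 SELECT id, first || ' ' || last FROM users")
db.execute("DROP TABLE users")

# A "down" migration would have to split full_name apart again, but
# 'Mary Ann van der Berg' has several plausible splits -- the boundary
# between first and last name is lost. Any inverse is a guess.
print(db.execute("SELECT full_name FROM users_v2").fetchone()[0])
```

The "up" step is lossy, so the only honest "down" is a new forward migration that documents the guess it makes.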
Application and database migrations can’t happen atomically. It’s hard to orchestrate dual changes between APIs and the database schemas they rely on. Changes that work in development can fail in production, sometimes with intermittent downtime. What happens when you need to support backward compatibility (for either the app or the database)?
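The common answer to the atomicity problem is the expand/contract (parallel change) pattern: make the schema change additive first, run app code that handles both shapes, and only remove the old shape once nothing depends on it. A sketch, using an invented example of renaming `users.email` to `users.contact_email`:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Phase 1 (expand): add the new column; already-deployed code keeps working.
db.execute("ALTER TABLE users ADD COLUMN contact_email TEXT")

# Phase 2: deploy app code that writes both columns and reads the new one,
# falling back to the old column for rows not yet backfilled.
def save_user(conn, user_id, email):
    conn.execute(
        "INSERT INTO users (id, email, contact_email) VALUES (?, ?, ?)",
        (user_id, email, email),
    )

def load_email(conn, user_id):
    row = conn.execute(
        "SELECT COALESCE(contact_email, email) FROM users WHERE id = ?",
        (user_id,),
    ).fetchone()
    return row[0]

# Phase 3 (backfill old rows) and phase 4 (contract: drop the old column)
# run only after every live app version has stopped touching `email`.
save_user(db, 1, "a@example.test")
print(load_email(db, 1))
```

Each phase is independently deployable and backward compatible, which is what makes the non-atomic rollout survivable.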
Schema state is hard to version control. After many migrations, how can you reliably recreate the current database state? In a perfect world, it’s just running all of the existing migrations. But in practice, long-lived environments often accumulate undocumented or unintentional changes. In the application world this is easy, because services and environments are ephemeral and short-lived.
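One way to catch this drift is to replay the migration list into a scratch database and compare schema fingerprints against the live one. A minimal sketch, with an invented migration list and a simulated manual change:

```python
import hashlib
import sqlite3

MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)",
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)",
]

def schema_fingerprint(conn):
    """Hash the normalized schema DDL so two databases can be compared."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL ORDER BY name"
    ).fetchall()
    return hashlib.sha256("\n".join(r[0] for r in rows).encode()).hexdigest()

def fresh_db(statements):
    conn = sqlite3.connect(":memory:")
    for stmt in statements:
        conn.execute(stmt)
    return conn

expected = fresh_db(MIGRATIONS)

# A long-lived environment with an undocumented hand-applied index:
live = fresh_db(MIGRATIONS + ["CREATE INDEX idx_email ON users (email)"])

print(schema_fingerprint(expected) == schema_fingerprint(live))
```

The comparison prints `False`, flagging that the live schema no longer matches what the migrations alone would produce.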
I’ve never run a down migration.
90%+ of DB migrations I’ve seen go wrong involve a drop (column or table) without compatible code deployed. You can’t just put it back!
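A toy demonstration of why you can't just put it back: re-creating a dropped column restores the schema, not the data. Table and column names are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, nickname TEXT)")
db.execute("INSERT INTO users VALUES (1, 'ace')")

# The bad migration: rebuild the table without the column
# (equivalent to ALTER TABLE ... DROP COLUMN).
db.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY)")
db.execute("INSERT INTO users_new SELECT id FROM users")
db.execute("DROP TABLE users")
db.execute("ALTER TABLE users_new RENAME TO users")

# The "rollback": add the column back. The schema looks right again,
# but every value is gone for good.
db.execute("ALTER TABLE users ADD COLUMN nickname TEXT")
print(db.execute("SELECT nickname FROM users WHERE id = 1").fetchone()[0])
```

The final query returns `None` where `'ace'` used to be, which is why drops belong at the very end of an expand/contract rollout, after compatible code is everywhere.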
What do you think of: https://xata.io/blog/pgroll-schema-migrations-postgres