Myths About ORMs
Object-relational mapping (ORM) libraries appear in millions of code repositories. They promise to bridge the gap between a relational database and an object-oriented programming language.
Instead of writing a SQL query like `SELECT * FROM users WHERE id = 1`, you would define a User type with some special annotations and then write something like `user = users.select().where("id = ?", id)`. ORMs may also provide other features: type safety, connection management, or a migration framework.
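As a heavily simplified sketch, here is roughly what such a fluent call compiles down to. `Table` and `Query` are hypothetical stand-ins, not any real ORM's API, and the backing store is an in-memory SQLite database:

```python
import sqlite3

# Hypothetical minimal query builder, sketching what an ORM call like
# users.select().where("id = ?", user_id) ultimately turns into.
# Real ORMs (SQLAlchemy, Django ORM, Hibernate) do far more than this.

class Table:
    def __init__(self, conn, name):
        self.conn, self.name = conn, name

    def select(self):
        return Query(self.conn, f"SELECT * FROM {self.name}")

class Query:
    def __init__(self, conn, sql, params=()):
        self.conn, self.sql, self.params = conn, sql, params

    def where(self, clause, *params):
        # Each builder call returns a new Query with the clause appended.
        return Query(self.conn, f"{self.sql} WHERE {clause}", params)

    def all(self):
        # The accumulated SQL and parameters are only executed here.
        return self.conn.execute(self.sql, self.params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

users = Table(conn, "users")
rows = users.select().where("id = ?", 1).all()
print(rows)  # [(1, 'Ada')]
```

The builder does nothing magical: it concatenates SQL text and passes parameters through to the driver, which is why understanding the generated SQL remains important.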
But using an ORM is controversial. Some developers swear by them, while others avoid them altogether. Here are some common myths about ORMs.
You don’t need to learn SQL. ORMs don’t force developers to write SQL, and some treat that as a license to avoid learning it. But most applications need to break out into raw SQL sooner rather than later. Without knowing SQL, ORM-only developers quickly run into performance problems, debugging dead ends, and more.
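A reporting query is a typical place where the escape hatch gets used: most ORMs provide one (SQLAlchemy’s `text()`, Django’s `raw()`), and at that point you are writing SQL whether you learned it or not. A sketch with Python’s built-in `sqlite3` and made-up sample data:

```python
import sqlite3

# A hand-written aggregate query of the kind that usually bypasses the ORM.
# Table name, columns, and the "spent > 20" threshold are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 25.0), (3, 2, 5.0);
""")
top_spenders = conn.execute("""
    SELECT user_id, SUM(total) AS spent
    FROM orders
    GROUP BY user_id
    HAVING spent > 20
    ORDER BY spent DESC
""").fetchall()
print(top_spenders)  # [(1, 35.0)]
```

Grouping, aliases, and `HAVING` are plain SQL concepts; no ORM abstraction shields you from needing to understand them to debug a query like this.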
You are better off with raw SQL. The other end of the spectrum: why use an ORM at all if you can just write raw SQL? Why learn the intricacies of a bespoke ORM library instead of host-language-agnostic SQL? This holds for complicated queries (or schemas that don’t map cleanly to objects), but for the majority of CRUD operations, it’s much easier to use an ORM.
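To see the boilerplate an ORM removes, here is a raw-SQL “save” sketched with Python’s `sqlite3`: every column and placeholder maintained by hand, where an ORM typically reduces the call to something like `session.add(user)`. The `User` and `save_user` names are illustrative, not from any library:

```python
import sqlite3
from dataclasses import dataclass, astuple

@dataclass
class User:
    id: int
    name: str
    email: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def save_user(u: User) -> None:
    # Column list and placeholder count must be kept in sync by hand;
    # add a field to User and this silently breaks until updated.
    conn.execute(
        "INSERT INTO users (id, name, email) VALUES (?, ?, ?)",
        astuple(u),
    )

save_user(User(1, "Ada", "ada@example.com"))
row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
print(row)  # ('Ada',)
```

Multiply this by every table and every insert/update/delete path, and the appeal of generated CRUD becomes clear.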
ORMs let your application be database agnostic. Even trivial applications end up depending on some database-specific feature or type. Even if the ORM supports multiple databases, it’s difficult to write an application that works across multiple database engines at the same time. An ORM is not a general abstraction over the data layer.
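Upserts are a common example. The sketch below uses the `ON CONFLICT` syntax shared by SQLite and Postgres; MySQL spells the same feature `ON DUPLICATE KEY UPDATE`, and SQL Server wants `MERGE`, so the statement won’t run unchanged on another engine:

```python
import sqlite3

# An upsert: insert a counter row, or increment it if it already exists.
# This exact syntax is SQLite/Postgres-specific.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")
for _ in range(3):
    conn.execute("""
        INSERT INTO counters (name, hits) VALUES ('home', 1)
        ON CONFLICT(name) DO UPDATE SET hits = hits + 1
    """)
hits = conn.execute("SELECT hits FROM counters WHERE name = 'home'").fetchone()[0]
print(hits)  # 3
```

Some ORMs paper over this particular difference, but every such feature the abstraction doesn’t cover (JSON columns, full-text search, array types) re-ties the application to one engine.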
You are better off writing your own. As developers write more and more raw SQL, they may start building primitives that resemble an ORM. But building a full-fledged ORM library is difficult: edge cases, correctness, and a deep understanding of the language’s type system are all required.
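The row-to-object helper below is the kind of primitive that tends to accrete first. `fetch_as` is a hypothetical name, and a real ORM must also handle joins, NULLs, type coercion, and relationships that this sketch ignores:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

def fetch_as(conn, cls, sql, params=()):
    # Map each row to an instance of cls by matching column names to fields.
    # Works for the happy path; breaks on joins, aliases, missing columns...
    cur = conn.execute(sql, params)
    cols = [c[0] for c in cur.description]
    return [cls(**dict(zip(cols, row))) for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")
users = fetch_as(conn, User, "SELECT id, name FROM users")
print(users)  # [User(id=1, name='Ada')]
```

Twenty lines gets you this far; the remaining distance to a correct, general mapper is where the difficulty lives.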