Nowadays, reaching for an ORM (Object-Relational Mapping) is low-hanging fruit because plenty of implementations are available for modern programming languages and a wide variety of databases. But is this always healthy, or even necessary, for the application?
During my workshops, courses, and interviews, I often see people implicitly treating an ORM as a must-have whenever a database is involved. This is primarily caused by the fact that most developers dislike writing SQL and handling the mapping between database result sets and object models.
In this article, I will explain why this mindset is toxic, when and how you should use an ORM, and what the consequences are when the decision is not properly reasoned about.
Assuming that an ORM will spare you the boring work (e.g. writing SQL) so you can focus on other parts of the application is, in general, a wrong assumption. The primary purpose of an ORM is to map database result sets to object graphs. In addition, an ORM tracks object changes and synchronizes those changes back to the database. The SQL itself must remain fully under the developers' control. I would argue that every software engineer using a relational database in a backend application should understand how a relational database works, standard SQL, and the dialect of that particular database. Using an ORM does not mean you no longer have to care about how the application interacts with the database, as if it were a negligible piece of your architecture. On the contrary, you have just added another layer of abstraction and complexity: the ORM itself.
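To make that division of labor concrete, here is a minimal sketch in Python using the standard sqlite3 module (the `author` table and `Author` class are illustrative, not from any particular ORM): the SQL is written and controlled by hand, and the only thing being automated is the row-to-object mapping that an ORM would otherwise generate for you.

```python
import sqlite3
from dataclasses import dataclass

# Illustrative domain object; producing these from rows is the ORM's core job.
@dataclass
class Author:
    id: int
    name: str

def fetch_authors(conn: sqlite3.Connection) -> list[Author]:
    # The SQL stays explicit and under the developer's control;
    # only the row-to-object mapping is automated here.
    rows = conn.execute("SELECT id, name FROM author ORDER BY id").fetchall()
    return [Author(id=row[0], name=row[1]) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO author (name) VALUES (?)", [("Ada",), ("Grace",)])
authors = fetch_authors(conn)
print(authors)
```

An ORM adds the second responsibility on top of this: remembering which `Author` objects were loaded and flushing any changes back as UPDATE statements. That convenience is exactly the extra layer of abstraction you need to understand, not ignore.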