All these things fall short the moment you need real "production" features: reliable migrations (writing them by hand? no thanks. Outsourcing to another library like knex? no thanks), transactions, community support, relationship/nested/join queries without a ton of boilerplate, and something battle-tested.
So far, the best thing I've found in the node ecosystem is Prisma [1], and it's better than the alternatives by a very long shot in my opinion.
Agreed, it's pretty simple to write them. The problem is deciding which ones to apply for a given version of your application and when, running them automatically on each of your developers' laptops and in your testing environment, and automating and keeping track of all the changes.
Or are we talking about just writing a "CREATE TABLE..." and handing it over the wall to ops? I agree that's simpler. Not something I like to do these days.
1. write your migrations as plain SQL files
2. have a unique, incremental migration ID in each migration filename
3. apply them one by one in order depending on the current schema_version from the target environment, and update the schema_version accordingly
4. rollback the transaction in case of error, otherwise commit and enjoy your updated schema
I've used golang-migrate/migrate in production for the past 3 years to do exactly this, but you can easily implement it yourself too. golang-migrate/migrate can be used either as a Go library or as a CLI tool.
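For illustration, a minimal sketch of that pattern in Node/TypeScript with node-postgres (the schema_version table and the migrations/ file layout are my assumptions, not how golang-migrate/migrate actually stores things):

  import { Pool } from 'pg';
  import { readdirSync, readFileSync } from 'fs';

  // Applies pending migrations from ./migrations, e.g. 001_create_users.sql
  async function migrate(pool: Pool, dir = 'migrations'): Promise<void> {
    const client = await pool.connect();
    try {
      await client.query('BEGIN');
      await client.query('CREATE TABLE IF NOT EXISTS schema_version (version int NOT NULL)');
      const { rows } = await client.query('SELECT version FROM schema_version');
      const current = rows[0]?.version ?? 0;

      // Step 2: the incremental migration ID is the filename prefix
      const pending = readdirSync(dir)
        .filter(f => parseInt(f, 10) > current)
        .sort();

      // Step 3: apply them one by one, in order
      for (const file of pending) {
        await client.query(readFileSync(`${dir}/${file}`, 'utf8'));
      }
      if (pending.length > 0) {
        const latest = parseInt(pending[pending.length - 1], 10);
        await client.query('DELETE FROM schema_version');
        await client.query('INSERT INTO schema_version VALUES ($1)', [latest]);
      }
      await client.query('COMMIT'); // Step 4: commit and enjoy
    } catch (err) {
      await client.query('ROLLBACK'); // Step 4: roll back on error
      throw err;
    } finally {
      client.release();
    }
  }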
And I assume you mean that all of this is easier to do, safer, takes less effort, has more documentation, is more tested and proven, and will be easier for a new joiner to understand after the person who wrote it has left, than using an existing, popular solution?
It's definitely tested and proven; that's a very common approach (not this specific migrate tool, but the general pattern). It's simple to set up, only requires standard SQL (or whatever your DB is using), has almost no moving parts, and is as stable as it can be.
I'm not saying it's easier than other solutions; that's not really a criterion I personally care about. But it is stable, simple, relatively easy to maintain, built on standard tools, and reliable.
Writing migrations by hand doesn't mean you don't have an automatic migration system. TypeORM, for example, can generate migrations automatically for you, but you can also generate an empty one, write the SQL yourself, and still get the safety of automatic migrations.
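Roughly what that looks like in TypeORM (the migration and column names here are made up):

  import { MigrationInterface, QueryRunner } from 'typeorm';

  export class AddUserEmail1600000000000 implements MigrationInterface {
    // Hand-written SQL, but still applied/reverted by the migration runner
    public async up(queryRunner: QueryRunner): Promise<void> {
      await queryRunner.query(`ALTER TABLE "user" ADD "email" varchar NOT NULL DEFAULT ''`);
    }

    public async down(queryRunner: QueryRunner): Promise<void> {
      await queryRunner.query(`ALTER TABLE "user" DROP COLUMN "email"`);
    }
  }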
Same here, it's way more flexible. I find it quite important to know exactly what will be run as a migration given how critical changing the schema can be.
We investigated using Prisma v2 as a way of auto-creating a GraphQL API that directly interfaces with a PostgreSQL database, but we immediately pivoted to other solutions as soon as we discovered that the Prisma Client is really spinning up a behind-the-scenes GraphQL Rust server itself to access the db. The performance of fetching a mildly complicated query (3 joins) ended up being more than three times slower than Hasura or PostGraphile.
For the life of me, I couldn't figure out why it's desirable to have an intermediary GraphQL server issuing database requests. Is that something you have also discovered while using Prisma?
> We investigated using Prisma v2 as a way of auto-creating a GraphQL API that directly interfaces with a PostgreSQL database, but we immediately pivoted to other solutions as soon as we discovered that the Prisma Client is really spinning up a behind-the-scenes GraphQL Rust server itself to access the db. The performance of fetching a mildly complicated query (3 joins) ended up being more than three times slower than Hasura or PostGraphile.
Do you know what Hasura is doing differently? I assumed it worked the same way as Prisma does, with an intermediate server.
Hasura is a GraphQL server too. I can't say how it's different from Prisma, but it works by running as a frontend process to your Postgres DB. You issue GraphQL, it converts it to SQL, and then converts the results back to GraphQL on the way out.
It does do some things differently, though. Access permissions are declared in the DB (Hasura tables, managed through their API) and worked into the queries. Hasura does this by building a JSON object with any of your variables in it and then referencing those within the query. This allows for a neat trick: it can run the same query for different clients by starting with multiple rows (one per client) containing their respective variables. This is particularly great for subscriptions: it can poll the DB to grab all data for all connected clients in a single hit, and then notify the external clients if anything has changed.
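If I understand the trick correctly, the shape is roughly this (illustrative SQL, not Hasura's actual output): each subscriber's variables become one row, and a lateral join evaluates the same query once per row.

  -- One row of variables per connected client
  SELECT subs.client_id, results.data
  FROM (VALUES (1, 'alice'), (2, 'bob')) AS subs(client_id, username)
  CROSS JOIN LATERAL (
    SELECT json_agg(m) AS data
    FROM messages m
    WHERE m.recipient = subs.username
  ) AS results;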
Hasura is a GraphQL layer for the database. It introspects a PostgreSQL database and creates a GraphQL API which exposes access/operations to the database.
Prisma is a toolkit that consists of a database client (Prisma Client), a migrations tool (Prisma Migrate), and a database IDE/UI (Prisma Studio). The generated Prisma Client is in TypeScript and can be imported into a Node.js application.
I'm not sure about PostGraphile, but I know Hasura transforms a GraphQL request into a single SQL query that can return the JSON response expected by the GraphQL spec. I believe that's the reason Hasura only works with PostgreSQL: it has the necessary SQL functions to generate arbitrary JSON objects.
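For example, a query like { users { id posts { title } } } can compile to a single statement along these lines (illustrative, not Hasura's actual output), using PostgreSQL's JSON functions:

  SELECT json_agg(json_build_object(
    'id', u.id,
    'posts', (
      SELECT coalesce(json_agg(json_build_object('title', p.title)), '[]'::json)
      FROM posts p WHERE p.author_id = u.id
    )
  )) AS data
  FROM users u;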
> The performance of fetching a mildly complicated query (3 joins) ended up being more than three times slower than Hasura or PostGraphile.
Please forgive my ignorance, but these types of conversations always intrigue me and make me want to learn more. As a software engineer, at what point are you running these performance benchmarks, and how exactly are you doing them? How do you find the time to make these decisions and pivot to other technologies?
> reliable migrations (writing them by hand? no thanks. Outsourcing to another library like knex? no thanks)
IMHO, "real production features" also means caring about the long-term maintenance, performance, and reliability of the data.
My experience with ORMs (especially ActiveRecord patterns) is that they always become a mess once you reach a certain level of complexity. It's very fast to get started, but development gets slower over time once you start having a lot of bugs and hard-to-solve behaviours.
I find that most of the production features you mentioned are actually more difficult using a fat ORM.
How many hours have I wasted figuring out how to write and map some complex join or aggregation query with <insert ORM name here>? It would have been a 3-minute task if all I had to write was just SQL...
Plus I have a hard time seeing the benefit of Prisma. You are learning an entirely new DSL just to define your schema - which actually isn't that far off standard JS or TS syntax-wise - so it feels like a complete waste of time to come up with the DSL in the first place.
I can only imagine the hard time you'll have once you first need to break out of the framework's cage because you hit a case that isn't easily solved by the framework itself...
>I find that most of the production features you mentioned are actually more difficult using a fat ORM.
Prisma doesn't look like a traditional ORM to me. In fact, the library from the OP uses classes, which Prisma and Knex do not.
> How many hours have I wasted figuring out how to write and map some complex join or aggregation query with <insert ORM name here>? It would have been a 3-minute task if all I had to write was just SQL
In my experience this is just a very, very small percentage of cases. Most of the time I find myself doing pretty simple CRUD operations, and the boilerplate for the simple cases gets out of hand quickly, especially when running joins.
> In my experience this is just a very, very small percentage of cases. Most of the time I find myself doing pretty simple CRUD operations, and the boilerplate for the simple cases gets out of hand quickly, especially when running joins.
That's true, although it's the most frustrating percentage of cases :)
I think what KISS will eventually need is a simple toolbelt for the basic CRUD queries, e.g. expanding lists of column names and dealing with casing (snake_case to camelCase). That would make the stuff you write 90% of the time easy, without inventing a custom SQL abstraction.
MyBatis (which I mentioned in another comment) approached this, for example, with SQL snippets one could re-use in queries.
KISS enables the same thing by allowing nesting of tagged SQL strings. I guess time will show which additional helpers are needed to make the dev's life easy and reduce the boilerplate for the simple queries.
This is one of the opinionated parts of kiss-orm I guess, because I specifically do not want to implement this.
I think having consistency in the naming of the properties/columns is more important. I would rather break the naming convention by having snake_case properties than automagically rename properties.
Given I have defined the columns of interest in my data objects anyway...

  class User { name = null; email = null; }

...I wouldn't necessarily have to repeat them when writing my SQL:

  sql`SELECT ${Object.getOwnPropertyNames(new User()).map(camelCase).join(', ')} FROM users`

Of course you'd use a better helper function.
There are a few repetitive tasks when writing SQL where you don't necessarily need a powerful query builder but still end up writing a few helper functions. Shipping some common helpers with the framework might be handy.
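As a sketch of what such a helper could look like (everything here is hypothetical, not part of kiss-orm):

  // Expand an object's property names into a snake_case column list
  const snakeCase = (s: string): string =>
    s.replace(/[A-Z]/g, c => `_${c.toLowerCase()}`);

  function columns(obj: object): string {
    return Object.getOwnPropertyNames(obj).map(snakeCase).join(', ');
  }

  class User { firstName: string | null = null; email: string | null = null; }

  // Produces: SELECT first_name, email FROM users
  const query = `SELECT ${columns(new User())} FROM users`;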
Plenty. Just this week, I spent the better part of a day trying to figure out why customers were losing data. It turned out to be a bad ActiveRecord polymorphic definition written over 4 years ago, which only started surfacing recently due to overlapping IDs in the relevant tables. When I looked at the generated SQL, the bug was obvious. Looking only at the ActiveRecord definitions, it was not. Give me plain SQL all day, every day. I'm tired of ORMs.
The user object will have the correct TypeScript type, including the posts object. If you remove the include object from the findOne call and avoid fetching related posts, the user's type will change. This approach to type safety helps catch many problems at build time.
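Concretely, it looks something like this (the model and field names are just examples):

  import { PrismaClient } from '@prisma/client';

  const prisma = new PrismaClient();

  async function main() {
    // Typed as (User & { posts: Post[] }) | null because of the include
    const user = await prisma.user.findOne({
      where: { id: 1 },
      include: { posts: true },
    });
    // Without the include, the same call would be typed User | null,
    // so accessing user.posts would be a compile-time error.
  }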
It's possible because of the Prisma schema, which is the single source of truth for both the database schema and the generated TypeScript client.
You don't have the burden of mapping between the database schema and the types in your application. Besides that, it's a declarative, concise description of the database schema, and it's database agnostic.
The type safety features come at "zero cost" because you don't have to write all the TypeScript types and mappings to the database yourself; Prisma generates the TypeScript client from your Prisma schema.
> You are learning an entirely new DSL just to define your schema - which actually isn't that far off standard JS or TS syntax-wise
The Prisma schema is actually fundamentally different from JS or TS because it's declarative rather than imperative.
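For reference, a small example of what that declarative schema looks like (models invented for illustration):

  model User {
    id    Int    @id @default(autoincrement())
    email String @unique
    posts Post[]
  }

  model Post {
    id       Int    @id @default(autoincrement())
    title    String
    author   User   @relation(fields: [authorId], references: [id])
    authorId Int
  }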
I understand where Prisma is coming from with the custom DSL: they want to guarantee type safety and therefore need to know exactly the structure of the types the result set is supposed to be mapped to.
In most other languages you'd shout "reflection", but unfortunately there is no such thing in TS. Hence the custom DSL: that way you know, while parsing, what the structure of the type is.
I'm just asking myself: why invent a custom DSL for that? You could just use Babel to parse the TS types. Sure, Babel is quite the dependency, but in a Node environment that wouldn't be a bigger concern to me than inventing a custom DSL.
You could even use TS decorators to add more metadata like sequences and (foreign) keys to the TS types.
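A sketch of that decorator approach (this is essentially what TypeORM already does):

  import { Entity, PrimaryGeneratedColumn, Column, OneToMany, ManyToOne } from 'typeorm';

  @Entity()
  class User {
    @PrimaryGeneratedColumn() // metadata for the primary key/sequence
    id!: number;

    @Column({ unique: true })
    email!: string;

    @OneToMany(() => Post, post => post.author)
    posts!: Post[];
  }

  @Entity()
  class Post {
    @PrimaryGeneratedColumn()
    id!: number;

    @ManyToOne(() => User, user => user.posts) // foreign key metadata
    author!: User;
  }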
> I'm just asking myself: why invent the custom DSL for that?
Fair question. Besides all the type safety features I mentioned above, having the DSL (the Prisma schema) allows generating database clients in more than one language, e.g. Go, without the database declaration being tied to a specific programming language.
It's also the reason it's declarative, in contrast with most ORMs, which rely on an imperative language to define these mappings.
The second reason is that the Prisma schema is used for declarative database migrations with Prisma Migrate.
The third reason is that Prisma supports introspection of an existing database. So if you were to introduce Prisma to an existing project, you'd introspect the database which would populate the Prisma schema based on the database schema. This would then allow you to use Prisma for migrations.
Could all that be achieved without a custom DSL? Perhaps. But it'd probably tie Prisma to a specific language ecosystem and would diminish the developer experience of the features it offers.
I can understand the reluctance around a new DSL, but in reality, I haven't seen many complaints about the need to use it.
Most of exevp's comments about the Prisma schema are a non-issue given an understanding of the value it provides. I can understand the reluctance to learn yet another DSL, but in my experience, if you've worked with relational databases, it's a breath of fresh air to be able to define your database schema with it.
Looks nice, pretty close to what we use at FB, except we use the builder pattern, so you can compose the query more easily across function calls (instead of passing in a single bag of options to findMany).
The builder pattern grew on me over the past few years and I think it's one of the best ways to do anything arbitrarily composable that is reasonably complex.
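A minimal sketch of why that composes nicely (all names are hypothetical, not FB's actual API):

  // Each method returns `this`, so partial queries can be passed
  // around and extended across function calls before execution.
  class UserQuery {
    private conditions: Record<string, unknown> = {};
    private max?: number;

    where(cond: Record<string, unknown>): this {
      Object.assign(this.conditions, cond);
      return this;
    }

    limit(n: number): this {
      this.max = n;
      return this;
    }

    build(): { where: Record<string, unknown>; limit?: number } {
      return { where: this.conditions, limit: this.max };
    }
  }

  function onlyAdmins(q: UserQuery): UserQuery {
    return q.where({ role: 'admin' });
  }

  const opts = onlyAdmins(new UserQuery().where({ active: true })).limit(10).build();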
.NET's LINQ can also be used as a builder and I really, really love it - be it as part of an ORM or just .NET in general.
I love Prisma and am a huge fan of Nexus [1] in particular. But it's interesting that you mention transactions, as Prisma does not yet support long-running transactions [2]. I have a side project that I've been working on for a little while, but I haven't touched it in months as I'm waiting on LRT support. Prisma is being very actively developed as far as I can see, though, so I'm confident that there will be a solution for this eventually.
Because to me, long-running transactions are not something that should be done so lightly. A transaction is expensive for the database, so interleaving a transaction with requests to external systems, other databases, etc. is a bad idea from the get-go in my view. Just write down what you know at the moment and update when you have the result, or use a small CQRS implementation for that specific use case. It is more reliable and scalable.
I'm no expert when it comes to transactions or databases in general, so excuse my ignorance, but it seems to me an API to perform extremely simple transactions like the one outlined in that issue (no external systems or other databases involved) would be a very basic requirement for many projects?
Been "hanging out" in the Prisma discord and reading through issues for a while and it comes up quite a lot. It doesn't seem anyone has come up with a simple/viable workaround so far.
My experience is that libraries which do handle that for you automatically do a really bad job at it.
Writing it by hand, and most importantly thoroughly testing it, while trying to avoid changes that need complex migrations, is in my experience more reliable in the long run.
Tell that to the Django ORM. I've used it in really big projects, for really big companies and it never failed on us.
As I said in another thread, the problem is not writing the SQL. The problem is everything else after you have written it (applying it on deploy, automating it for others' dev environments, rollback, etc.)
>As I said in another thread, the problem is not writing the SQL. The problem is everything else after you have written it (applying it on deploy, automating it for others' dev environments, rollback, etc.)
What's wrong with migration managers? They handle everything you mentioned just fine. The only problem I've seen with this approach was on a team that didn't have a proper CI flow; then it was possible for people to check in bad migrations and screw up branches, etc.
I've worked with both approaches in multiple languages, and frankly I prefer the no-magic migration manager approach.
Migrations automatically generated from models also mean your DB is tied to your app, which I'm not a fan of; if anything, the reverse should be true. And the fat-model approach of Rails is terrible IMO (I haven't used Django for anything serious in over 10 years, so I don't remember how fat its model layer is; I know it has enough metadata to generate the admin CRUD, but I don't remember whether it also encourages having logic on models).
The best solutions I've seen are F#, which uses dynamic type providers (basically a compiler plugin) to auto-generate models from the DB/schema at compile time, and yesql in Clojure (Clojure being dynamic and built around working with data, using SQL results inside of it is natural and amazing).
[1] https://www.prisma.io/docs/understand-prisma/why-prisma