Show HN: ORM for TypeScript with no query-builder, supporting full SQL queries (github.com/seb-c)
220 points by Seb-C on Sept 3, 2020 | hide | past | favorite | 119 comments


For a long time, I have been frustrated with the state of the art of existing ORMs. I like things to be simple, but somehow all ORMs seem to be bloated and to overcomplicate a lot of things.

When designing a new project, I have been trying to find a more satisfying design while avoiding the existing ORMs. I especially wanted to use proper SQL rather than reducing its syntax to fit it in another language.

This is the result of my experiments. This is a completely new balance between productivity, simplicity and readability, while providing useful features.

I use the tagged template literal syntax to allow writing and concatenating raw SQL queries while keeping them safe from injections, which is what allowed me to build kiss-orm. There is no query-building involved: you can freely use the full power of SQL.
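For readers unfamiliar with the mechanism: a tagged template receives the literal SQL parts separately from the interpolated values, so the tag can turn values into bind parameters instead of splicing them into the string. A minimal sketch (illustrative only, not kiss-orm's actual implementation):

```typescript
// A sql`` tag that builds a parameterized query instead of a raw string.
interface SqlQuery {
  text: string;      // SQL with $1, $2, ... placeholders
  values: unknown[]; // bind parameters, in order
}

function sql(strings: TemplateStringsArray, ...values: unknown[]): SqlQuery {
  // `strings` holds the literal SQL parts; `values` holds the interpolations.
  let text = strings[0];
  values.forEach((_, i) => {
    text += `$${i + 1}` + strings[i + 1];
  });
  return { text, values };
}

const email = "bob'; DROP TABLE users; --";
const q = sql`SELECT * FROM users WHERE email = ${email}`;
// q.text   → "SELECT * FROM users WHERE email = $1"
// q.values → [email]  (the malicious string never touches the SQL text)
```

Because the interpolated value only ever travels in `values`, the database driver binds it as data, not as SQL.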

I would appreciate feedback on this (and contributions if you love it :D ).


Still not very convinced your ORM is solving a problem it didn't create, but I certainly like the approach more than the more traditional ORMs.

I feel like I'm not really the type that wants an ORM to take care of SQL or relational functionality. All I really want is an object mapper at the edges: going in, I want to pass an object and have it map to the right fields; coming out, I want it mapped to the right place.

Doing stuff like preloading or relation loading should be done through convention or by database schema querying.

It's a tricky problem to solve. While some relational mapping is super helpful for prototyping, it almost always results in a mess down the line, where just writing SQL upfront prevents pain later.


I sort of agree here. I tend to stay away from ORMs in general, but I'd probably use one if it actually justified itself. Most ORMs I've used basically just abstract the data layer so you don't have to write your own SQL, but that in itself comes at a cost (and almost all of them are bound to eventually write some really crappy SQL for you and cause performance bottlenecks). If there's no appreciable benefit besides just not having to write as much SQL, then it's not really worth using.


One way or another you cannot avoid the SQL builder; this ORM just moves it into the user's code base.

I believe in balance:

you need the query builder for simple queries (which make up most of your queries),

but complex queries should also be supported (even if they are not portable).

We have written exactly this kind of ORM https://github.com/holdfenytolvaj/pogi

It saves you a lot of code writing without taking away the power of PostgreSQL (though it is anchored to PostgreSQL).


Jdbi works the way you describe: https://jdbi.org/


Yep, I've been enjoying JDBI for a new microservice. /me makes sign of cross and throws holy water on legacy Hibernate code it's replacing.


Personally, all I want is something that can dynamically build SQL and can hydrate into specific objects.

I have no interest in automatically building relationships; I'm completely fine doing that manually, or not at all. That's why Dapper is probably my favorite DB solution out there.

    return conn.Query<MyKlass>("select * from my_klass_table where status = @status", new { status = myStatus }).ToList();
The only thing that can be bothersome sometimes is inserts and updates. It would be really nice to have something that could auto-generate the SQL for you.

    var sql = SQLGenerator.insert(MyObject);
or

    var sql = SQLGenerator.insert(MyObject, MyKlass.InsertableProperties);
It would take care of one of the downsides of raw SQL, which is schema updates needing to be dealt with in multiple places, and would remain super simple.

I mean hell, if you wanted to get fancy you could create a generator that would figure out the columns necessary by reaching out to the DB the first time and caching it afterwards, and using a naming convention to map to the actual properties. But personally, I'm 100% fine not having that.
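A minimal sketch of what such a generator could look like, written in TypeScript to match the thread's main topic (the function and its shape are made up, not any real library's API):

```typescript
// Build a parameterized INSERT from an object's own keys, so the column
// list lives in one place (the object) instead of being repeated in SQL.
interface InsertQuery {
  text: string;
  values: unknown[];
}

function generateInsert(table: string, row: Record<string, unknown>): InsertQuery {
  const cols = Object.keys(row);
  const placeholders = cols.map((_, i) => `$${i + 1}`);
  return {
    text: `INSERT INTO ${table} (${cols.join(", ")}) VALUES (${placeholders.join(", ")})`,
    values: cols.map((c) => row[c]),
  };
}

const q = generateInsert("my_klass_table", { name: "Ada", status: "active" });
// q.text   → "INSERT INTO my_klass_table (name, status) VALUES ($1, $2)"
// q.values → ["Ada", "active"]
```

Restricting the keys (the `InsertableProperties` idea above) would just be a filter over `Object.keys(row)` before building the column list.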


My own TypeScript non-ORM, Zapatos, shares much of the same design philosophy (and indeed a sql`...` tagged template function): https://jawj.github.io/zapatos/

Previous discussion: https://news.ycombinator.com/item?id=23273543


Zapatos is amazing. I'd definitely use it if I were writing Node backends, I think its design is wonderful.

Also, I'm impressed by how beautiful and interactive your docs are. How did you do that? The "show imports" button, the embedded Monaco modal... Did you code that up yourself just for these docs, or is this "just" some nice documentation template I'm not aware of? Either way, very nice!


Thanks. I just coded it up myself[1]. :)

As an educator, I’d say the docs are probably the most important element of a library.

[1] https://github.com/jawj/zapatos-docs


Wow, this is amazing. It really bridges the last remaining gap in type safety from SQL -> TS/Node.js -> GraphQL -> TS client code.

Exactly the kind of "use SQL in typescript code with type-safety" non-ORM that I've always wanted.


Thought this looked awesome when I saw it. I was waiting for a little more traction, for my project to calm down, and perhaps for some GQL helpers to appear before jumping on board (not requesting them; just expecting they'd naturally be picked up by the community).


Glad to hear it! Presumably GQL is GraphQL, which so far I have let completely pass me by. How would those helpers look, roughly?


"you can freely use the full power of SQL"

Sounds like Dapper from the .Net world - which I like precisely for that reason:

https://github.com/StackExchange/Dapper


OP's project is a little more opinionated than Dapper in that it defines repositories and whatnot. Dapper's more like just a set of query extensions that support basic object mapping. I love it personally, but I wouldn't even call it a micro-ORM.


It might be a good idea to focus on the use of template strings for safe and handy SQL generation instead of introducing too many opinionated ORM concepts.

If I had to, I'd separate these things into different libraries and let developers opt in to what they need.


Thankfully [0], it [1] already [2] exists. [3]

[0] https://github.com/gajus/slonik-sql-tag-raw

[1] https://github.com/felixfbecker/node-sql-template-strings

[2] https://github.com/blakeembrey/sql-template-tag

[3] https://www.npmjs.com/search?q=sql%20template

Edit: When I first read your comment, I thought you were saying that you wanted a separate library just to have the SQL template functionality.

I agree with what you're saying though.


That is something I seriously considered, but I just did not want to bother with multiple packages and repositories, especially for a repository class that is about 150 lines of code.

The repository class that I provide is a very basic and handy abstraction for the 4 basic CRUD operations, but ultimately the goal is to write your own queries and repositories.


This is great. I built something a bit lower-level than this targeting MySQL not long ago. This library has taught me a couple language features I didn't know, however. The SQL tag is pretty clever. I was pleasantly surprised to see transaction support (I feel like people who don't actually use their libraries in real products tend to leave this kind of thing out).

I noticed the support for soft-delete (which seems to be a simpler thing in PgSql than in other relational db systems), which is also nice. I think another fairly easy, generic win would be a way to specify that a model has audit tracking fields (createdAt/By, modifiedAt/By, deletedAt/By, etc.). I know some database servers also have a way to track this separately (no idea whether PgSql specifically does), but there are use cases for showing that kind of stuff at an application level as well (and also can make ETL type jobs a bit less painful).

All in all, great work. It looks very polished!


Thanks! It was a bit tricky to solve the transactions problem, because I did not want to abstract the BEGIN / COMMIT / ROLLBACK instructions, but at the same time I needed to provide something to ensure the integrity of a transaction made of multiple commands.

As for the audit fields, I decided not to include them because they are very simple to implement, and it would be clearer to do it specifically. Most of those fields could be implemented by just adding a default value in the right function.
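One possible reading of "adding a default value in the right function", as a hedged sketch (`withAuditFields` is a made-up helper, not part of kiss-orm):

```typescript
// Wrap the row passed to your repository's create method so the audit
// fields are filled in application code, right before the INSERT.
function withAuditFields<T extends object>(
  row: T,
  user: string
): T & { createdAt: string; createdBy: string } {
  return { ...row, createdAt: new Date().toISOString(), createdBy: user };
}

const row = withAuditFields({ title: "Hello" }, "alice");
// row → { title: "Hello", createdAt: "2020-...", createdBy: "alice" }
```

The same shape works for `modifiedAt/By` on updates; a database-side `DEFAULT now()` is the other obvious option when you don't need the "by" part.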


Another interesting way to solve the transaction problem is following more of a Unit of Work pattern. Then the transaction is more or less held until the unit is committed. Or rolled back. This is also wonderful for writing integration tests because your repositories all contribute to the same UoW and your test can just roll back when it’s finished to return to a clean state.


In Ruby, I have always been extremely fond of the Sequel ORM - http://sequel.jeremyevans.net/

The amount of expressiveness and capability it gives is simply outstanding. It lets me drop into pure SQL, as well as write statements which are simpler and easier to grok when I'm using the ORM-level abstractions.

Sequel for SQL users - http://sequel.jeremyevans.net/rdoc/files/doc/sql_rdoc.html

Cheat sheet - http://sequel.jeremyevans.net/rdoc/files/doc/cheat_sheet_rdo...


I'm extremely happy with Django's ORM. It never felt bloated or overcomplicated to me, especially when compared with the innate verbosity and complexity of SQL.


I've been playing with tagged templates for a while now and have a similar solution at work. I'd wanted to build something like this for a while, well done!

I wrote about tagged template literals on my website: https://cavaleiro.fr/posts/template-literals/


What do you think about ORMs like Android's Room [1], where you specify your own queries with SQL (by annotating abstract methods) and can bring your own model, and the ORM creates the database schema and gives you easy insertion methods?

I don't know how Room does it, but Dart's floor library (which is similar) [2] then generates the necessary code, in addition to a very slim runtime layer for managing update events. I find that this is also an approach which in some ways only does half the work for more flexibility, while still depending on a fixed database schema.

[1] https://developer.android.com/training/data-storage/room

[2] Shameless plug: https://github.com/vitusortner/floor


Thanks. I do not have experience with Room, but I usually prefer to control and know what is happening in my schema.

For example, if your schema is automatically handled, how do you both rename and change the type of a column without losing data? Would it perform a delete and a create?


Only the creation is handled automatically. Room helps with migrations by enforcing a database version, but you will have to write the migration code (SQL) yourself.


Thanks. I love it! It is not clear where the `article.id` comes from in the many-to-many example. Anyway, I don't think I would use those relationship patterns; instead I would just load and operate on plain (shallow) model objects.


Thanks! It was a copy/paste mistake. I just fixed the README file.


tl;dr: thank god it finally has been done.

Long version: I've been seriously frustrated with the state of ORMs (in Javascript in particular) for years.

Javascript ORMs are nice and handy if all you're doing is simple CRUD stuff. Once you start with more complex relational queries (we're using an RDBMS, so why wouldn't we?), you quickly reach the limits of what the ORM can map. If you start doing more complex aggregations, or stuff like window functions and the like, you almost certainly have to fall back to raw queries, usually rendering the whole mapping function of the ORM completely useless.

Also, projects like Knex.js (or, for example, HQL in the Java world) look nice at first but are mostly useless IMHO, because they just replace SQL with another syntax you have to learn. Why stay with the language everybody familiar with RDBMSes can speak when you can invent some useless abstraction, right? And please don't tell me you want to support multiple RDBMSes in the same codebase. How often is that really an important use case?

I really loved the way MyBatis did this in Java: instead of mapping tables to objects, mapping result sets to objects and leaving the full power of SQL to the developer.

Always wanted to basically do MyBatis in Javascript (and actually started something almost the same as yours some weeks ago), but never had the time.

Thanks for getting it started.


Very surprised by the lack of value you see in Knex; I'm curious to get your view on the value I've found in using it for a few years now.

I feel at ease with SQL and like to get as close to it as possible in my Node service. But Knex still appears highly valuable to me, letting me, for instance, not care about managing DB connections, at least until they become critical for my use-case.

Not care about sanitising inputs and protecting myself from SQL injections.

Have more readable and maintainable code in my repositories than SQL in plain strings as a default. Yes I have some raw queries but 98% of my queries are easy to follow knex chains.

Not care about creating and maintaining code for migrations. Running them in transactions, keeping track of and running only the ones needed, ... so happy I didn't have to re-invent that and be the responsible of it never ever failing in production.


> not care about managing DB connections, at least until they become critical for my use-case.

That's something the DB driver usually does. E.g. when using Postgres, the pg library already comes with connection pooling. I haven't looked into the implementation in Knex, but I'd suspect they just use the Pool class from pg (https://node-postgres.com/features/pooling).

> Not care about sanitising inputs and protecting myself from SQL injections.

That's also not that much of a concern when just binding parameters.

> Have more readable and maintainable code in my repositories than SQL in plain strings as a default. Yes I have some raw queries but 98% of my queries are easy to follow knex chains.

Comes with the cognitive cost of maintaining another abstraction for SQL.

> Not care about creating and maintaining code for migrations.

That's actually the one feature which made me use Knex for years (just for the migration part of course :) ). I didn't use the schema builder functions mostly, just a bunch of `knex.raw` calls in the migration files. But for the benefits you mentioned (transactions, bookkeeping) it is really useful.


IMHO one of the key use cases for query builders like Knex is that you can programmatically build up a query in a very clean fashion.

Otherwise you need to resort to workarounds like concatenating strings, or building the biggest possible query and using boolean toggles to disable parts of your query.
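For reference, the concatenation workaround being described can still be kept injection-safe by accumulating bind parameters alongside the SQL text. A sketch (helper and option names are made up):

```typescript
// Programmatically build a WHERE clause without a query builder:
// collect clauses and bind values in lockstep, then join them.
interface Fragment {
  text: string;
  values: unknown[];
}

function buildUserQuery(opts: { status?: string; minAge?: number }): Fragment {
  const clauses: string[] = [];
  const values: unknown[] = [];
  if (opts.status !== undefined) {
    values.push(opts.status);
    clauses.push(`status = $${values.length}`);
  }
  if (opts.minAge !== undefined) {
    values.push(opts.minAge);
    clauses.push(`age >= $${values.length}`);
  }
  const where = clauses.length > 0 ? ` WHERE ${clauses.join(" AND ")}` : "";
  return { text: `SELECT * FROM users${where}`, values };
}

// buildUserQuery({ status: "active", minAge: 18 }).text
//   → "SELECT * FROM users WHERE status = $1 AND age >= $2"
```

It's more manual than a builder chain, but the values never enter the SQL string, and the whole query stays visible as SQL.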


I don't understand your frustration. Virtually every ORM I know has a query builder with a .raw() function that lets you do anything you want.


Thanks for your comment!

As you say, one of the main arguments for query builders is allowing support for different RDBMSes. But in my experience, not only does that never happen, the abstraction is never perfect, and it makes any unsupported edge case impossible to solve without hacking around it.

There is so much that can be done with SQL which, through a query builder, would result in a mess of inefficient spaghetti code...


Also one of the bigger reasons why I dislike SQL abstractions: you usually pick your RDBMS for a reason, because of some vendor-specific features, extensions, or something similar. Abstracting SQL away means you end up with the smallest common denominator, which defeats the purpose of choosing one specific RDBMS.

So whenever you have reasons to spend some thought on the choice of RDBMS, you'll probably not want to abstract SQL away.


We actually went with knex/objection specifically because there was a planned migration to postgres that it would simplify, and we'd be able to migrate different databases in stages.

Plus it was integrated into feathers, and knex simplifies hook-based query building logic.


All these things fall short the moment you need real "production" features, such as reliable migrations (writing them by hand? no thanks. Outsourcing to another library like knex? no thanks), transactions, community support, relationship/nested/join queries without a ton of boilerplate, and being battle-tested.

So far, the best thing I've found in the node ecosystem is Prisma [1], and it's better than the alternatives by a very long shot in my opinion.

[1] https://www.prisma.io/docs/understand-prisma/why-prisma


I actually much prefer writing migrations by hand. It's pretty simple, and it gives you complete control over the schema.


Agreed, writing them is pretty simple. The problem is deciding which ones to apply for a given version of your application and when, running them automatically on each of your developers' laptops and your testing environment, and automating and keeping track of all the changes.

Or are we talking about just writing a "CREATE TABLE..." and handing it over the wall to ops? I agree that's simpler. Not something I like to do these days.


1. create a "schema_version" table

2. have a unique, incremental migration ID in each migration filename

3. apply them one by one in order depending on the current schema_version from the target environment, and update the schema_version accordingly

4. rollback the transaction in case of error, otherwise commit and enjoy your updated schema

I used golang-migrate/migrate in production for the past 3 years to do exactly this, but you can easily implement it yourself too. golang-migrate/migrate can be used either as a Go library or as a CLI tool.

https://github.com/golang-migrate/migrate
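The version-selection part of steps 1-3 can be sketched like this (types and names are illustrative; a real runner would read schema_version from the database and wrap each file in a transaction, per step 4):

```typescript
// Given all known migrations and the current schema_version, return the
// ones still to be applied, in ascending ID order.
interface Migration {
  id: number;  // unique, incremental ID from the filename
  up: string;  // raw SQL to apply
}

function pendingMigrations(all: Migration[], currentVersion: number): Migration[] {
  return all
    .filter((m) => m.id > currentVersion)
    .sort((a, b) => a.id - b.id);
}

const migrations: Migration[] = [
  { id: 1, up: "CREATE TABLE users (id serial PRIMARY KEY)" },
  { id: 2, up: "ALTER TABLE users ADD COLUMN email text" },
];

// With schema_version = 1, only migration 2 is pending:
// pendingMigrations(migrations, 1) → [{ id: 2, ... }]
```

Each pending migration is then executed in order, and schema_version is updated inside the same transaction so a failure rolls both back together.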

Or did I misunderstand the issues you're talking about?


So... do pretty much exactly what ORMs with migration support already do for you?


Yes, but write the DDL by hand rather than having it auto-generated.


Yes, the point is that you don’t need an ORM to manage migrations.


And I assume you mean all of this is easier to do, safer, takes less effort, has more documentation, is more tested and proven, and will be easier for a new joiner to understand after the one who wrote it left, than using an existing, popular solution?

Sorry, I disagree.


It’s definitely tested and proven; it’s a very common approach (not this specific migrate tool, but the general pattern). It’s simple to set up and only requires standard SQL (or whatever your DB is using), has almost no moving parts, and is as stable as it can be.

I’m not saying it’s easier than other solutions; that’s not really a criterion I personally care about. But it is stable, simple, relatively easy to maintain, built on standard tools, and reliable.


Just use DbUp [1] if you're a .NET shop, and Flyway [2] otherwise.

They are both existing, popular solutions that help you organize your plain SQL migration scripts.

1: https://dbup.readthedocs.io/en/latest/ 2: https://flywaydb.org/


Writing migrations by hand doesn't mean you don't have an automatic migration system. TypeORM, for example, can generate migrations automatically for you, but you can also generate an empty one, write the SQL yourself, and still get the safety of automatic migrations.


Same here, it's way more flexible. I find it quite important to know exactly what will be run as a migration given how critical changing the schema can be.


We investigated using Prisma v2 as a way of auto-creating a GraphQL API that directly interfaces with a PostgreSQL database, but we immediately pivoted to other solutions as soon as we discovered that the Prisma Client is really spinning up a behind-the-scenes GraphQL rust server itself to access the db. The performance of fetching a mildly complicated query (3 joins) ended up being more than three times slower than Hasura or PostGraphile.

For the life of me, I couldn't figure out why it's desirable to have an intermediary GraphQL server issuing database requests. Is that something you have also discovered while using Prisma?


> We investigated using Prisma v2 as a way of auto-creating a GraphQL API that directly interfaces with a PostgreSQL database, but we immediately pivoted to other solutions as soon as we discovered that the Prisma Client is really spinning up a behind-the-scenes GraphQL rust server itself to access the db. The performance of fetching a mildly complicated query (3 joins) ended up being more than three times slower than Hasura or PostGraphile.

Do you know what Hasura is doing differently? I assumed it worked the same way as Prisma does, with an intermediate server.


Hasura is a GQL server too. I can't say how it's different from Prisma, but it works by running as a frontend process to your Postgres DB. You issue GraphQL, it converts it to SQL, and then back to GraphQL on the way out.

Some of the things it does differently, though: it declares access permissions in the DB (Hasura tables, managed through their API) and works those into the queries. It does this by building a JSON object with any of your variables in it and then referencing those within the query. This allows for a neat trick where they can run the same query for different clients by starting with multiple rows (one per client) containing their respective variables. This is particularly great for subscriptions: they can poll the DB to grab all data for all connected clients in a single hit, and then, if anything has changed, notify the external clients about it.


Hasura is a GraphQL layer for the database. It introspects a PostgreSQL database and creates a GraphQL API which exposes access/operations to the database.

Prisma is a toolkit that consists of a database client (Prisma Client), a migrations tool (Prisma Migrate), and a database IDE/UI (Prisma Studio). The generated Prisma Client is in TypeScript and can be imported into a Node.js application.

While Prisma can be used to build a GraphQL API that connects to a database, Prisma is completely agnostic to the GraphQL tools you use. (https://www.prisma.io/docs/understand-prisma/prisma-in-your-...)


I'm not sure about PostGraphile, but Hasura I know transforms a graphql request into a SQL request that can return the JSON response expected as part of the GraphQL spec. I believe that's the reason that Hasura only works with PostgreSQL, since it has the necessary SQL functions to generate arbitrary JSON objects.


> The performance of fetching a mildly complicated query (3 joins) ended up being more than three times slower than Hasura or PostGraphile.

Please forgive my ignorance, but these type of conversations always intrigue me and make me want to learn more. As a software engineer, at what point are you running these performance benchmarks, and how are you doing them exactly? How have you had time to make these decisions and pivot to other technologies?


> reliable migrations (writing them by hand? no thanks. Outsourcing to another library like knex? no thanks)

IMHO, "real production features" means also caring about the long term maintenance, performance and reliability of the data.

My experience with ORMs (especially ActiveRecord patterns) is that it always becomes a mess once you reach a certain level of complexity. It is very fast to get started, but gets slower over time once you start having a lot of bugs and hard-to-solve behaviours.


Interesting, then what is your alternative? It seems like hand-rolling all of your migrations would be an enormous pain on larger projects.


Well, this project is my alternative, and so far I have not seen any case where it fails, but I'd be happy to learn.

This migration pattern is quite common in many frameworks, I just removed the query-building abstraction layer.


I find that most of the production features you mentioned are actually more difficult using a fat ORM.

How many hours have I wasted figuring out how to write and map some complex join or aggregation query with <insert ORM name here>? It would have been a 3-minute task if all I had to write was just SQL...

Plus, I have a hard time seeing the benefit of Prisma. You are learning an entirely new DSL just to define your schema - which actually isn't that far off standard JS or TS syntax-wise - so it feels like a complete waste of time to come up with the DSL in the first place. I can only imagine the hard time you'll have once you first need to break out of the framework's cage because you hit a case which isn't easily solved by the framework itself...


>I find that most of the production features you mentioned are actually more difficult using a fat ORM.

Prisma doesn't look like a traditional ORM to me. In fact, the library from the OP uses classes, which Prisma and Knex do not.

> How many hours have I wasted figuring out how to write and map some complex join or aggregation query with <insert ORM name here>? It would have been a 3-minute task if all I had to write was just SQL

In my experience this is just a very, very small percentage of cases. Most of the time I find myself doing pretty simple CRUD operations, and the boilerplate for the simple cases gets out of hand quickly, especially when running joins.


> In my experience this is just a very, very small percentage of cases. Most of the time I find myself doing pretty simple CRUD operations, and the boilerplate for the simple cases gets out of hand quickly, especially when running joins.

That's true, although it's the most frustrating percentage of cases :)

I think what KISS will eventually need is some simple toolbelt for the basic CRUD queries, e.g. expanding lists of column names, dealing with casing (snake_case to camelCase). That would make the stuff you write 90% of the time easy without inventing a custom SQL abstraction.

MyBatis (which I mentioned in another comment) approached this, for example, with SQL snippets one could re-use in queries.

KISS enables the same thing by allowing nesting of tagged SQL strings. I guess time will show which additional helpers are needed to make the dev's life easier and reduce the boilerplate for simple queries.
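A sketch of how nesting tagged SQL strings enables that kind of snippet reuse (this tag implementation is illustrative, not kiss-orm's actual code):

```typescript
// A sql`` tag whose fragments can be interpolated into other sql`` calls.
// Nested fragments are inlined as SQL text; plain values become parameters.
class SqlFragment {
  constructor(public text: string, public values: unknown[]) {}
}

function sql(strings: TemplateStringsArray, ...vals: unknown[]): SqlFragment {
  let text = strings[0];
  const values: unknown[] = [];
  vals.forEach((v, i) => {
    if (v instanceof SqlFragment) {
      // Inline the nested fragment's SQL, renumbering its placeholders
      // to follow the parameters collected so far.
      const offset = values.length;
      text += v.text.replace(/\$(\d+)/g, (_, n) => `$${Number(n) + offset}`);
      values.push(...v.values);
    } else {
      values.push(v);
      text += `$${values.length}`;
    }
    text += strings[i + 1];
  });
  return new SqlFragment(text, values);
}

// A reusable snippet, MyBatis-style, composed into a larger query:
const activeFilter = sql`status = ${"active"}`;
const q = sql`SELECT * FROM users WHERE ${activeFilter} AND age >= ${18}`;
// q.text   → "SELECT * FROM users WHERE status = $1 AND age >= $2"
// q.values → ["active", 18]
```

The key point is that reuse happens at the fragment level, so no query-builder abstraction is needed: everything you compose is still plain SQL.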


> expanding lists of column names

I am not sure what you mean here?

> dealing with casing (snake_case to camelCase)

This is one of the opinionated parts of kiss-orm I guess, because I specifically do not want to implement this.

I think having consistency in the naming of the properties/columns is more important. I would rather break the naming convention by having snake_case properties than automagically rename properties.


> I am not sure what you mean here?

Given I have defined the columns of interest in my data objects anyway...

  class User { name = null; email = null; }
... I wouldn't necessarily have to repeat them when writing my SQL:

   sql`SELECT ${Object.getOwnPropertyNames(new User()).map(snakeCase).join(', ')} FROM users`
Of course using a better helper function (`snakeCase` here being a hypothetical mapper from property names to column names).

There are a few repetitive tasks when writing SQL where you don't necessarily need a powerful query builder but still end up writing a few helper functions. Shipping some common helpers with the framework might be handy.


I see what you mean. Indeed, more helpers would be... helpful, I guess, but for now I only implement what I really need and feel is necessary.

For your specific example, what is wrong with `SELECT *`?


> How many hours...?

Plenty. Just this week, I spent the better part of a day trying to figure out why customers were losing data. Turns out it was a bad ActiveRecord polymorphic definition that was written over 4 years ago, but which only started surfacing recently due to overlapping ids in the relevant tables. When I looked at the generated SQL, the bug was obvious. But looking only at the ActiveRecord definitions, the bug was nonobvious. Give me plain SQL all day everyday. I’m tired of ORMs.


Thanks for sharing your thoughts about Prisma.

> Plus i have a hard time seeing the benefit of Prisma

Prisma is supposed to improve your productivity and confidence when working with a database. It does so with a strong focus on type safety.

Most ORMs and query builders in the Node.js/TypeScript ecosystem do not provide the level of type safety that Prisma does.

For example in a blog with users and posts (1:n) querying a user and related posts looks as follows:

  const user = await prisma.user.findOne({
    where: {
      id: 1
    },
    include: {
      posts: true
    }
  })

  console.log(user.posts)

The user object will have the correct TypeScript type, including the posts object. If you remove the include object from the findOne call and avoid fetching related posts, the user's type will change. This approach to type safety helps in catching many problems at build time.

It's possible because of the Prisma schema, which is the single source of truth for both the database schema and the generated TypeScript client.

You don't have the burden of mapping between the database schema and the types in your application. Besides that, it's a declarative and concise description of the database schema, and is database agnostic.

The type safety features come at "zero-cost" because you don't have to write all the TypeScript types and mappings to the database. Prisma generates the TypeScript client from your Prisma schema.

> You are learning an entirely new DSL just to define your schema - which actually isn't that far off standard JS or TS syntax-wise

The Prisma schema is actually fundamentally different from JS or TS because it's declarative rather than imperative.


I understand where Prisma is coming from with the custom DSL: they want to guarantee type safety and therefore need to know exactly the structure of the types the result set is supposed to be mapped to.

In most other languages you'd shout "reflection", but unfortunately there is no such thing in TS. Hence the custom DSL: so you know, while parsing, what the structure of the type is.

I'm just asking myself: why invent the custom DSL for that? You could just use Babel to parse TS types. Sure, Babel is quite the dependency, but in a Node environment that wouldn't be a bigger concern to me than inventing a custom DSL.

You could even use TS decorators to add more metadata like sequences and (foreign) keys to the TS types.


> I'm just asking myself: why invent the custom DSL for that?

Fair question. Besides all the type safety features I mentioned above, having the DSL (the Prisma schema) allows generating database clients in more than one language, e.g. Go, without the database declaration being tied to a specific programming language.

It's also the reason it's declarative, in contrast with most ORMs, which rely on an imperative language to define these mappings.

The second reason is that the Prisma schema is used for declarative database migrations with Prisma Migrate.

The third reason is that Prisma supports introspection of an existing database. So if you were to introduce Prisma to an existing project, you'd introspect the database which would populate the Prisma schema based on the database schema. This would then allow you to use Prisma for migrations.

Could all that be achieved without a custom DSL? Perhaps. But it would probably tie Prisma to a specific language ecosystem and diminish the developer experience of the features it offers.

I can understand the reluctance around a new DSL, but in reality, I haven't seen many complaints about the need to use it.


I have this same stuff in TypeORM but don't have the DSL issues that exevp mentioned.


I think this article does a good job of comparing the two if you're familiar with TypeORM: https://medium.com/better-programming/prisma-vs-typeorm-60d0...

Most of exevp's comments about the Prisma schema are a non-issue given an understanding of the value it provides. I can understand the reluctance to learn yet another DSL, but in my experience, if you've worked with relational databases, it's a breath of fresh air to be able to define your database schema with it.


Looks nice, pretty close to what we use at FB, except we use the builder pattern, so you can compose the query more easily across function calls (instead of passing in a single bag of options to findMany).


The builder pattern grew on me over the past few years and I think it's one of the best ways to do anything arbitrarily composable that is reasonably complex.

.NET's LINQ can also be used as a builder and I really, really love it - be it as part of an ORM or just .NET in general.


This. I've actually used Rails/ActiveRecord migrations in Node projects. More than once...


I love Prisma and am a huge fan of Nexus[1] in particular. But it's interesting that you mention transactions as Prisma does not yet support long running transactions[2]. I have a side project that I've been working on for a little while, but I haven't touched it in months as I'm waiting on LRT support. Prisma is being very actively developed though as far as I can see, so I'm confident that there will be a solution for this eventually.

[1] https://nexusjs.org/

[2] https://github.com/prisma/prisma/issues/1844


Because to me, long-running transactions are not something that should be done lightly. A transaction is expensive for the database, so interleaving transactions with requests to external systems, other databases, etc. is a bad idea from the get-go. Just write down what you know at the moment and update when you have the result, or use a small CQRS implementation for that specific use case. It is more reliable and scalable.


I'm no expert when it comes to transactions or databases in general, so excuse my ignorance, but it seems to me an API to perform extremely simple transactions like the one outlined in that issue (no external systems or other databases involved) would be a very basic requirement for many projects?

Been "hanging out" in the Prisma discord and reading through issues for a while and it comes up quite a lot. It doesn't seem anyone has come up with a simple/viable workaround so far.


> reliable migrations

My experience is that libraries which do handle that for you automatically do a really bad job at it.

Writing it by hand, most importantly thoroughly testing it, and trying to avoid changes that need complex migrations is, in my experience, more reliable in the long run.

EDIT: removed inappropriate caps usage


Tell that to the Django ORM. I've used it in really big projects, for really big companies and it never failed on us.

As I said in another thread, the problem is not writing the SQL. The problem is everything else after you have written it (applying on deploy, automating for other's dev envs, rollback, etc)


>As I said in another thread, the problem is not writing the SQL. The problem is everything else after you have written it (applying on deploy, automating for other's dev envs, rollback, etc)

What's wrong with migration managers? They handle everything you mentioned just fine. The only problems I've seen with this approach were when working on a team that didn't have a proper CI flow; then it was possible for people to check in bad migrations, screw up branches, etc.

I've worked with both approaches in multiple languages, and frankly I prefer the no-magic migration manager approach.

Migrations automatically generated from the model also mean your DB is tied to your app, which I'm not a fan of; if anything, the reverse should be true. The fat-model approach of Rails is terrible IMO (I haven't used Django for anything serious in over 10 years, so I don't remember how fat their model layer is; I know it has enough metadata to generate the admin CRUD, but I don't remember if it also encourages having logic on models).

The best solution I've seen is F#, which uses dynamic type providers (basically a compiler plugin) to auto-generate models from the DB/schema at compile time, and yesql in Clojure (Clojure being dynamic and built around working with data, using SQL results inside of it is natural and amazing).


It seems you cannot load relationships for a collection of entities easily without N+1 queries, unless I'm missing something. Based on the many-to-many section of the docs (https://github.com/Seb-C/kiss-orm#many-to-many), I would have to load relationships for each entity separately, and then if they have further nested relationships, run a query for each again. The subsequent section also mentions eager loading is not supported.

For me, being able to load relationships (and especially nested relationships) with little boilerplate and few queries is probably the most useful feature in an ORM (usually explicitly eager-loaded), so I'm sad to see it's not supported.


Implementing eager-loading with the current philosophy of kiss-orm would be tricky and difficult to use/read. It did not seem like a high priority, so I chose not to implement it for now.

Depending on the ORMs, the definition of eager-loading also varies.

I have seen ORMs doing everything in a single query, returning everything in a single result set and then de-duplicating everything client-side. This is very messy (and impossible to hack/fix most of the time).

Currently, the way to go would be something like this:

    const articles = await articlesRepository.search(sql`
        "authorId" IN (${sqlJoin(
             users.map(user => user.id),
             sql`, `,
        )})
    `);
    // Dispatch and assign the articles in the collection of users
I could consider having a helper method to make this easier, but I am afraid it would be quite difficult to use.
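For illustration, the "dispatch and assign" step could be a small generic helper (a sketch only, not part of kiss-orm; the `User`/`Article` shapes are hypothetical):

```typescript
// Group the fetched child rows by foreign key, then attach them to each
// parent: one batched query plus O(n) dispatch instead of N+1 queries.
function groupByKey<T, K>(items: T[], key: (item: T) => K): Map<K, T[]> {
  const groups = new Map<K, T[]>();
  for (const item of items) {
    const k = key(item);
    const group = groups.get(k);
    if (group) group.push(item);
    else groups.set(k, [item]);
  }
  return groups;
}

type Article = { id: number; authorId: number };
type User = { id: number; articles?: Article[] };

const users: User[] = [{ id: 1 }, { id: 2 }];
// In practice this would be the result of the IN (...) query above.
const articles: Article[] = [
  { id: 10, authorId: 1 },
  { id: 11, authorId: 1 },
  { id: 12, authorId: 2 },
];

const byAuthor = groupByKey(articles, (a) => a.authorId);
for (const user of users) {
  user.articles = byAuthor.get(user.id) ?? [];
}
```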


Agreed on the N+1 query problem, but I'm a bit mystified why people still choose ORMs for any projects with even a moderate level of database complexity.

When using a straight SQL layer (JDBC or the basic features of kiss-orm), the query is in SQL form, and its performance characteristics are obvious from the query or can be analyzed easily by running it through the database's query-analysis tools. Using an ORM just adds extra steps: instead of optimizing a query in SQL, the query needs to be optimized using directives or methods or annotations that the ORM provides, in the hope that the SQL that is ultimately generated is efficient; that is, we're programming the ORM which programs the database, instead of just programming the database. Why bother with the extra step?

With modern programming languages there really isn't that much extra boilerplate to implement the DTOs for straight SQL, and it usually results in code that is a lot easier to maintain and extend in the examples I've seen.


In my experience, 99% of the relationships I fetch fit the basic one-to-one, one-to-many, many-to-many definitions that pretty much all ORMs support. For these cases, the queries are generally more than efficient enough and there's little reason to reinvent the wheel and implement the code for fetching those relationships yourself.

For anything more complex, I agree. But for the common case of fetching simple and often (depending on your project) nested relations, I definitely enjoy the abstraction provided by ORMs.


Thank you for creating Kiss ORM. I have even created a HackerNews account to be able to comment on it.

I have been searching for this type of ORM in TypeScript for a while. I agree with writing raw SQL for queries: so easy and expressive, and one less layer of abstraction. I also agree on the value of the repository pattern and methods for CRUD operations, so you don't have to write that SQL by hand. Making the loading of associations an explicit decision is also the right way to go. The Rails community has plenty of experience with the performance surprises of automatically loading relationships.

Personally, the most useful ORM I have found is Ecto for the Elixir (Erlang) language: https://hexdocs.pm/ecto/Ecto.html (ignoring the query capability). It follows the repository pattern very much like Kiss ORM. The API is a bit more succinct (you only define a schema for each table and use a generic repository instead of subclassing for each table), but that might only be possible due to Elixir's language capabilities like meta-programming.

One piece of Ecto that might be a win to implement in Kiss ORM is the "Changeset" pattern, which gives a canonical, succinct and productive solution for validating data (https://hexdocs.pm/ecto/Ecto.Changeset.html#content). For example, have a look at how Ecto unifies validation (checked without hitting the DB) and constraints (checked by hitting the DB) in a single API. This type of functionality increases the usefulness of the repository's CRUD operations.
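For readers unfamiliar with Ecto, here is a very rough TypeScript approximation of the Changeset idea (all names are hypothetical and this is far simpler than the real thing): errors accumulate alongside the changes instead of throwing, so DB-constraint failures could later be merged into the same structure.

```typescript
// Hypothetical changeset sketch: carry the proposed changes and any
// validation errors together in one value.
type Changeset<T> = {
  changes: Partial<T>;
  errors: { field: keyof T; message: string }[];
  valid: boolean;
};

function cast<T>(changes: Partial<T>): Changeset<T> {
  return { changes, errors: [], valid: true };
}

function validateRequired<T>(cs: Changeset<T>, fields: (keyof T)[]): Changeset<T> {
  const errors = [...cs.errors];
  for (const field of fields) {
    // Missing or null fields produce an error entry instead of an exception.
    if (cs.changes[field] == null) {
      errors.push({ field, message: "can't be blank" });
    }
  }
  return { ...cs, errors, valid: errors.length === 0 };
}

type UserInput = { name: string; email: string };

const cs = validateRequired(cast<UserInput>({ name: "Ada" }), ["name", "email"]);
// cs.valid is false and cs.errors reports the missing email
```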

Thanks for your initiative in creating KISS ORM. I will surely try it out and follow along with its evolution.


Just forgot to mention Ecto's "Multi" API, which is worth knowing. It allows you to construct a chain of operations as a data structure and execute it later transactionally. You may even include operations that are part of transactional business logic but that do not hit the DB (like sending an email). (https://hexdocs.pm/ecto/Ecto.Multi.html#module-run)

As I understand it, KISS ORM's sequence function would also allow expressing business logic transactionally, including operations beyond the DB. Obviously the rollback would only affect the DB, but other failing operations can at least trigger the rollback, right? I think this is useful, as integrations with external services (like email providers, payment APIs...) are really the source of runtime surprises that might fail a business operation and demand a DB rollback.


This multi-api indeed seems similar to the sequence function.

If you can try-catch the failure in the external service, you can rollback the transaction with kiss-orm. Actually kiss-orm does not abstract the transaction itself, so you can do whatever you want.

I just realized a flaw in my current implementation: directly using the repository CRUD methods (rather than a query) from inside the sequence function would execute those operations outside the scope of the sequence.


Wow, thank you for the kind message!

I did not know about Ecto. It is interesting, but I think more abstract than what I would like kiss-orm to become.

About the Changeset stuff, from what I understood it is actually already possible in kiss-orm. The main difference is that I decided not to do the validation at runtime, but to rely on TypeScript.

At worst you could have a runtime SQL error (inserting the wrong type of data in the wrong column for example), but the queries would be safe from injection.

You can definitely keep using kiss-orm with the default `any` type for the insert and update operation, but you can also specify it: https://github.com/Seb-C/kiss-orm#advanced-typings This way, your typings have to be right to use those methods. Runtime validation of inputs should not be done in the repository/database/orm layer anyway :) .


Love to see more options in this space! I made the switch from traditional ORMs (TypeORM and Sequelize) to a similar "light ORM", PgTyped, and haven't looked back since.

Like in Kiss, you write queries in SQL. But unlike other "light ORMs", it also provides type safety by generating type declarations by directly connecting to your database instance and type-checking your query templates.

Honestly, I think it's the best of both worlds, and would love to see more developers finally learning SQL and ditching "fat ORMs" that try to hide it under abstraction layers that always end up leaking.


I agree. I’ve been using pgtyped for a recent project, along with https://github.com/graphile/migrate for migrations, and it has been great to just write SQL and not have to battle with an ORM’s abstractions.


I started using Objection.js and Knex last year and I think it might be the best ORM I've used on any platform.

There's no way I'm going back to writing raw queries. If I did that, eventually I'd rewrite Knex.

If and when I need a raw query, I can already do that with Knex.


When using raw SQL in strings, I really miss the automatic formatting that is provided for HTML, TSX and TS with prettier.

Raw SQL query strings also do not compose well, and I miss auto-completion when writing them (yes, I'm a spoiled kid after so much TypeScript usage).

As with everybody else, I didn't like existing ORM/builder approaches, so I built and use my own with type-inference: https://github.com/hoeck/typesafe-query-builder. Any feedback would be great because I have the gut feeling that this one has gone way too far on the type astronaut side of things.


You should be able to do this with IntelliJ e.g. where you can inject a language into a string. This is quite handy as you can reformat and open the string in a separate editor if needed.

The caveat is that it does not work well if you are composing the SQL queries from multiple small parts.

see https://www.jetbrains.com/help/idea/running-injected-sql-sta...

I suppose that other editors or IDE can do similar things.


    sql`SOME SQL`;
This looks to be a "tagged template": https://basarat.gitbook.io/typescript/future-javascript/temp.... I'm a TS user but hadn't seen that feature before. AFAICT, the library is using a side-side-side-feature (tagged templates) of TS as the core abstraction [1]. Impressive. Might be a little brittle in the face of significant SQL or query composition?

Anyhow, I'm a heavier-ORM user but that's impressive.

--- [1] https://github.com/Seb-C/kiss-orm/blob/c4b6c9ad2f81337938a9c...


Not specific to TS, just one of the modern JS features: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Scroll down to the part explaining how to use custom tags.
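A minimal custom tag along those lines (the `$1`-style placeholder format is an assumption for a Postgres-like driver; real libraries such as kiss-orm also handle query composition and nesting):

```typescript
// A tag function receives the literal string parts separately from the
// interpolated values, so values are never spliced into the SQL text.
function sql(strings: TemplateStringsArray, ...values: unknown[]) {
  // Build "$1", "$2", ... placeholders; the driver receives the values
  // separately, which is what makes this injection-safe.
  let text = strings[0];
  values.forEach((_v, i) => {
    text += `$${i + 1}${strings[i + 1]}`;
  });
  return { text, values };
}

const name = "Robert'); DROP TABLE students;--";
const query = sql`SELECT * FROM users WHERE name = ${name}`;
// query.text === "SELECT * FROM users WHERE name = $1"
// query.values[0] is the raw string, never concatenated into the text
```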


Tagged templates are a core feature of JS, not just of TS

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


This is nothing special, it's commonly used e.g. with GraphQL (the `gql` tag).


I think it's funny that 'opinionated' could be considered a virtue. I'm imagining a charity that uses 'opinionated' to describe itself, or even worse, a person. Even Apple, a brand famous for its strong design stances, does not use that word as a description of its values.


This is very nicely done, kudos to the author. I'm sure many will find this useful.

Personally I've gotten away from using SQL RDBMSs. Since I primarily build on AWS I use DynamoDB, but the same principle would apply elsewhere. I like to store data in the format that best supports the read model. Event sourcing allows me to change the structure of the read model, or add new read models, or vary strategies depending on data, as needed. I like that I no longer have the impedance mismatch of the normalized relational model.

It was a big jump to make, and I had to unlearn a lot. I'm not dismissing the value of RDBMSs at all. I love them, especially star schemas for analysis. I just want to share that it's ok to not use a relational model for your transaction store.


Still yet to find an ActiveRecord equivalent for JavaScript.


I'm trying to build that:

https://github.com/stephenh/joist-ts/

Disclaimer: docs aren't amazing yet, so looking through the examples/integration tests is the best way to see how it works.

Prisma has a few mentions in this thread; joist's differentiator is that it leans more "domain model" i.e. validation rules, derived values, and business-logic-y things being in the model.

Which is not for everyone (i.e. the ORM-less approach of the OP/kiss-orm is fine too), but if you're looking for that sort of thing...

Joist also has a novel approach for modeling the lazy-initialized collections/references of ORMs, which is historically tricky in Node/JS/TS. In Joist, all collections/references require `await` to access by default, but that gets tedious very quickly, so you can use populate hints to get a now-synchronously-accessible sub-view of the object graph.


What about this? https://github.com/typeorm/typeorm/blob/master/docs/active-r... (Forgive me if this isn't what you are looking for; I'm not super familiar with ActiveRecord myself, just recalled seeing that yesterday!)


https://mikro-orm.io/ is a much better TypeORM alternative, as it is actively being maintained. TypeORM's development has come to a halt, and many issues are not being fixed.


I like TypeORM but have passed on it for now because of this potential sql injection issue: https://github.com/typeorm/typeorm/issues/3740 I have passed a field to order_by() from the user in Django before. I would be more likely to use a dict now, but I shudder at the thought of accidentally creating an SQL injection bug because I expected to be protected by my ORM.
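One common mitigation, independent of the ORM (all names here are illustrative): never pass user input through as a SQL identifier; map it through an explicit allowlist first.

```typescript
// Hypothetical allowlist mapping user-facing sort keys to real, quoted
// column names; anything not in the map is rejected outright.
const SORTABLE = new Map<string, string>([
  ["name", '"name"'],
  ["createdAt", '"created_at"'],
]);

function orderByClause(userField: string): string {
  const column = SORTABLE.get(userField);
  if (!column) throw new Error(`cannot sort by ${userField}`);
  return `ORDER BY ${column}`;
}
```

`orderByClause("createdAt")` yields `ORDER BY "created_at"`, while an unexpected value throws instead of ever reaching the query.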


Yes, TypeORM is definitely the JavaScript answer to ActiveRecord


Can vouch for TypeORM; hopefully its creator will pick it back up soon and get it to a stable v1.0.0!


That's Prisma [1] for me. Coming from Django, I've found Prisma even better than the Django ORM, despite it not yet being as complete.

[1] https://www.prisma.io/docs/understand-prisma/why-prisma


This. I haven't tried anything complex but so far Prisma 2 has just been amazing. The API is just so clean and simple.


This bills itself as 'ActiveRecord for Javascript'. Kinda relies on Graphiti's implementation of the JSON API spec. I really like Graphiti and Spraypaint though.


Sorry got ahead of myself there:

https://www.graphiti.dev/js/


Seems similar to PureORM[1]. It allows you to write regular native SQL and receive back properly structured (nested) pure business objects.

The name pureORM reflects both that it is pure ORM (there is no query-builder dimension) and the purity of the mapped objects.

[1] https://github.com/craigmichaelmartin/pure-orm


As a never-ORM guy I really like what this is trying to do. It seems like the best of both worlds. Kudos!


I am glad that it is appreciated! Thank you.


I actually built something almost exactly like this internally. This is awesome. But it's a far cry to call this an ORM. It's just a safe query serializer. An ORM takes an object and writes the query for you.


The template syntax reminds me of another Postgres library: https://github.com/porsager/postgres


There's still nothing like Elixir's Ecto in JS land...


History keeps repeating itself. People never seem to learn anything. At first there were no ORMs, then ORMs became extremely popular and everyone was using them, then everyone learned that ORMs were a very bad idea and we stopped using them, vowing never to make the mistake again... And here we are again in 2020, ORMs are back. They will be in for a while, then out again, then in again....

Same with statically typed vs dynamically typed. First everything was statically typed, then dynamically typed (interpreted) languages gained popularity, developers LOVED not having to wait for the compiler to finish to test their changes; this was a revolution... Now again, we're going back to statically typed languages, everyone uses clunky bundling and transpilation tools which add bloat and everyone is happy to wait 20 seconds to a minute for their code to compile.

Every few years, developers believe the opposite of what they believed before and the consensus seems to be close to 100% every time. It's ridiculous.


That is because many people believe in extremes. After a while, people should learn that there is no absolute truth; everything has its place.

An ORM is good sometimes; pure queries are better in other cases. So what is needed is something more like a relaxed ORM.


That's overly broad. A lot of people jump on the latest trends, especially people who are simply new to development, because there's no way you can know stuff you haven't had time to learn.


What I've learned from my career so far is that developers hate history. They just don't want to learn from experience. They see the new shiny thing and jump on the hype train. Then they come to conclusions like "mongo was not the right choice since we never needed to scale and our data is relational".



