tryithard's comments

Cool, we use Liquibase. I wonder how this compares to it?

Also, how do you handle backfilling columns? How do you make sure you don't miss any data before dropping the old column?


I don't know much about Liquibase, but I believe it doesn't support accessing both the old and the new schema versions at the same time? (I could be wrong here.)

Backfilling happens in batches: we use the PK of the table to update all rows, and a trigger is also installed so that any new insert/update executes the backfill mechanism and populates the new column.
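
Roughly like this, as a minimal Python/psycopg2 sketch of the batching loop (table and column names are made up; this is an illustration, not pgroll's actual code):

    import psycopg2

    BATCH = 1000

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    conn.autocommit = True

    with conn.cursor() as cur:
        last_id = 0
        while True:
            # Touch the next batch of rows in PK order; the no-op
            # UPDATE fires the trigger, which fills in the new column.
            cur.execute(
                """
                UPDATE users SET id = id
                WHERE id IN (
                    SELECT id FROM users
                    WHERE id > %s ORDER BY id LIMIT %s
                )
                RETURNING id
                """,
                (last_id, BATCH),
            )
            rows = cur.fetchall()
            if not rows:
                break
            last_id = max(r[0] for r in rows)

Because the trigger also fires on normal writes, rows inserted or updated while the backfill is running get the new column filled in without having to be part of any batch.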

More details can be found here: https://github.com/xataio/pgroll/blob/main/pkg/migrations/ba...


Joke's on them, it's been years since I used Skype!


FYI, the link shows all your positions like this:

< enter fancy job title here > Delivery unit - Office Berlin , Office Ghent



>> we were writing a lot of long SQL queries with bad performances

How does your tool make sure the queries are performant?

Seems this tool connects to cloud DBs only, right? How can I access my on-premise DB?


Thanks for the feedback!

The tool compiles the data pipeline into Apache Spark, which works out the best way to compute the pipeline and then executes it. No matter how you build your pipeline, Spark will find an efficient plan, giving you better performance and efficiency at the same time.
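
For example, Spark's Catalyst optimizer will rewrite two differently-ordered versions of the same pipeline into the same plan. A hedged PySpark sketch (the dataset path and column names are made up):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.parquet("/data/orders")  # hypothetical dataset

    # Filter written after the aggregation...
    a = (df.groupBy("country")
           .agg(F.sum("amount").alias("total"))
           .filter(F.col("country") == "DE"))

    # ...or before it: Catalyst pushes the predicate through the
    # aggregate, so both compile to the same physical plan.
    b = (df.filter(F.col("country") == "DE")
           .groupBy("country")
           .agg(F.sum("amount").alias("total")))

    a.explain()
    b.explain()

Whether the plans actually match is Catalyst's call, but pushing a predicate on a grouping key through an aggregate is a standard rewrite.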

At the moment, it's cloud DBs only.

