Neon has pg18 ready to use on the day it's released, just like last year.
Sadly, one of the most interesting features, Async I/O, is going to take some work at the file-system level for us to support. For now all operations are still sync, and we'll keep posting updates on our progress there.
Cold starts are 500ms on average, and that only applies to the first call that wakes the DB from hibernation. People still seem to think this latency happens on every call (see other threads here), but once the service has woken up (cold start over) you're back to regular sub-10ms latency and the service continues to run that way. You'll only hit a cold start again if your service goes idle for > 5 min (and you have that option turned on). You can turn scale-to-zero off and you'll run 24/7 with zero cold starts.
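If you want to see this for yourself, a rough way to check in psql (assuming a project that's been idle long enough to scale to zero) is to time the first query against the ones after it:

```sql
-- rough sketch: the first query after idle pays the wake-up cost, later ones don't
\timing on
SELECT 1;  -- first call after >5 min idle: includes the cold start (~500ms on average)
SELECT 1;  -- subsequent calls: normal latency for the rest of the session
```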
The $19 plan is going away; we'll launch a better $5 plan soon.
I use Neon quite a bit, and profiling seems to show ~600-980ms of extra latency. This is in the AWS London region, on Postgres 15/16.
Regardless, if I've got a website that's used a couple of times an hour, every hour, then the practical reality is that almost all users get an extra second or so of latency.
I'm not complaining, it's a great product that I'll continue to use, but it's the biggest pain point.
There isn't something like Vitess for Postgres yet, but there needs to be. Migrations are painful in general and they become very painful at scale. I haven't used Gel yet; I know it manages migrations but I don't know to what extent. Most of my experience is with Prisma, Drizzle, and Atlas.
Neon is working on some plans to solve migrations at scale. We think our custom storage layer will let us optimize certain paths, like setting a default value in a new column on a table with millions of rows; that ALTER can hold a lock on the table for hours. But ultimately we need better tooling.
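For context, the usual workaround today is to split that single ALTER into a cheap catalog change plus a batched backfill so no one statement holds a lock for long. A hedged sketch (the `users` table and `plan` column are made up, and exact locking behavior depends on your Postgres version and the kind of default):

```sql
-- risky on a huge table: one statement that touches every row
-- ALTER TABLE users ADD COLUMN plan text NOT NULL DEFAULT 'free';

-- safer: add the column as a cheap catalog-only change first
ALTER TABLE users ADD COLUMN plan text;

-- backfill in small batches so no single transaction holds locks for long
-- (repeat until it reports 0 rows updated)
UPDATE users SET plan = 'free'
WHERE id IN (SELECT id FROM users WHERE plan IS NULL LIMIT 10000);

-- then tighten things up; SET NOT NULL still scans the table to validate
ALTER TABLE users ALTER COLUMN plan SET DEFAULT 'free';
ALTER TABLE users ALTER COLUMN plan SET NOT NULL;
```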
Ideally there is a client and a hosted service, such that you can use the client to run migrations on your own from the CLI and integrate it into your dev workflow. The hosted service lets you push your schema change from the client to an API; from there you can manage the migration rollout from an operational dashboard that helps you tune resourcing.
When I was at GitHub we used Vitess to roll out a migration that took 3 weeks to complete. That's a long time to wait, but it's a better tradeoff than a migration that takes down production for 6 hours.
I totally agree with schema migrations being painful, have you seen the open-source tool we developed to tackle this problem? It is called pgroll: https://github.com/xataio/pgroll
And someone can correct me if I'm wrong, but really similar migrations are possible with plain Postgres; I remember reading an article about a company doing a similar migration strategy using Postgres publications. We just need better tooling.
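For anyone curious, the publication-based approach boils down to logical replication: create the table in its new shape, copy the data over, keep it in sync, then cut over. A hedged sketch (object names and connection details are made up):

```sql
-- on the source database: publish changes for the table being migrated
CREATE PUBLICATION orders_migration FOR TABLE orders;

-- on the target database, where orders already exists in its new shape:
CREATE SUBSCRIPTION orders_migration_sub
  CONNECTION 'host=source.example.com dbname=app user=replicator'
  PUBLICATION orders_migration;

-- the subscription copies existing rows, then streams ongoing changes;
-- once it has caught up you cut application traffic over to the target
```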
We have multi-region disaster recovery and replicas coming in 2025. Switchover time will be longer than with this active-active system, but overall latency for Neon should be much lower on a consistent basis.
It's never available on Homebrew the same day, so we all worked hard to make it available the same day on Neon. If you want to try out the JSON_TABLE and MERGE RETURNING features, you can spin up a free instance quickly.
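If it helps, here's roughly what those two look like once you're connected; a quick hedged sketch assuming a simple `scores(name, score)` table:

```sql
-- JSON_TABLE: turn a JSON document into rows and columns
SELECT jt.*
FROM JSON_TABLE(
  '[{"name": "alice", "score": 10}, {"name": "bob", "score": 7}]'::jsonb,
  '$[*]'
  COLUMNS (
    name  text PATH '$.name',
    score int  PATH '$.score'
  )
) AS jt;

-- MERGE ... RETURNING: upsert and see what happened to each row
MERGE INTO scores AS s
USING (VALUES ('alice', 12)) AS v(name, score)
ON s.name = v.name
WHEN MATCHED THEN UPDATE SET score = v.score
WHEN NOT MATCHED THEN INSERT (name, score) VALUES (v.name, v.score)
RETURNING merge_action(), s.*;
```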
Maybe you're seeing otherwise, but I updated a minute ago and 17 isn't there yet. Even searching for 'postgres' on Homebrew doesn't reveal any options for 17. I don't know where you've found those, but it doesn't seem easily available.
And I'm not suggesting you use a cloud service as an alternative to Homebrew or local development. Neon is pure Postgres; the local service is the same as what's in the cloud. But right now there isn't an easy local version, and I wanted everyone else to be able to try it quickly.
My tests running ALTER varied from ~20 seconds to ~1 min for the changes.
> Current CI/CD practices often make it very easy for software developers to commit and roll out database migrations to a production environment, only to find themselves in the middle of a production incident minutes later. While a staging deployment might help, it's not guaranteed to share the same characteristics as production (either due to the level of load or monetary constraints).
(neon.tech employee here)
This is where branching databases with production data helps quite a bit. Your CI/CD environment and even staging can experience the schema changes. When you build from a seed database you can often miss this kind of issue because it lacks the characteristics of your production environment.
But the author rightly calls out how staging isn't even enough in the next paragraph:
>The problem is, therefore (and I will repeat myself), the scale of the amount of data being modified, overall congestion of the system, I/O capacity, and the target table's importance in the application design.
Your staging, even when branched from production, won't have the same load patterns as your production database. And that load, and the locks that come with it, will result in a different rollout.
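One small mitigation that helps regardless of how representative staging is (a hedged sketch, with a made-up table): set conservative timeouts around the migration, so if production locks behave differently than they did in staging, the change fails fast instead of queueing behind long-running queries:

```sql
-- fail the migration quickly rather than waiting behind production locks
SET lock_timeout = '5s';
SET statement_timeout = '10min';

ALTER TABLE orders ADD COLUMN note text;  -- hypothetical change; cheap to retry if it times out
```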
This has me wondering whether you can match production patterns in staging by setting staging up to mirror the query patterns of production. Mirroring like what's available from pg_cat could put your staging under similar pressure.
And then this also made me think about how we're not capturing the timing of these schema changes. Unless a developer looks and sees that their schema change took 56 seconds to complete in the CI system, nobody will know that the change might have larger knock-on effects in production.
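A crude way to at least surface the number (hedged sketch, table name made up) is to have the migration report its own duration so it shows up in the CI logs:

```sql
-- wrap the schema change so its duration appears in CI output
DO $$
DECLARE
  started timestamptz := clock_timestamp();
BEGIN
  ALTER TABLE orders ADD COLUMN note text;
  RAISE NOTICE 'schema change took %', clock_timestamp() - started;
END
$$;
```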
Author here - this is my primary goal: exposing the complexity developers might not even think about. I can't even count the number of times a seemingly inconspicuous change caused an incident.
"Works on my DB" is new "works on my machine" (and don't trademark it, please :)))
Agreed! A common ORM pitfall is a column rename, which often doesn't get implemented as a rename so much as a DROP and ADD, which will affect the data in a surprising way :-D
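i.e., the difference between what you wanted and what some ORM diff tools generate (sketch with made-up names):

```sql
-- what you wanted: the data comes along for the ride
ALTER TABLE users RENAME COLUMN fullname TO full_name;

-- what some ORM diff tools generate: the old column's data is gone
ALTER TABLE users DROP COLUMN fullname;
ALTER TABLE users ADD COLUMN full_name text;
```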