> You’re already flying this route with a 300-seat plane where 80+ people in business class generate most of your profit. Give those passengers a supersonic plane, cut the flight time in half, and charge the same price.
What does that end up doing to the cost of a seat in coach?
I thought this was common practice: generated columns for JSON performance. I've even used this (although it was in Postgres) to maintain foreign key constraints where the key is buried in a JSON column. What we were doing was slightly cursed but it worked perfectly.
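Roughly, the pattern looks like this (hypothetical table and column names, and it assumes Postgres 12+ stored generated columns):

-- hypothetical schema: orders(payload jsonb), customers(id bigint primary key)
ALTER TABLE orders
  ADD COLUMN customer_id bigint
  GENERATED ALWAYS AS ((payload->>'customer_id')::bigint) STORED;

-- a real foreign key on the extracted column, so the buried key has to stay valid
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id);

The generated column is recomputed on every write to payload, so the FK check effectively applies to the JSON itself.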
If you're using postgres, couldn't you just create an index on the field inside the JSONB column directly? What advantage are you getting from extracting it to a separate column?
CREATE INDEX idx_status_gin
ON my_table
USING gin ((data->'status'));
Yes, as far as indices go, GIN indices are very expensive, especially on modification. They're worthwhile in cases where you want to do arbitrary querying on JSON data, but you definitely don't want to overuse them.
If you can get away with a regular index on either a generated column or an expression, then you absolutely should.
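For the status example above, a sketch of both options (the generated column needs Postgres 12+):

-- plain expression index; queries must repeat the same expression to use it
CREATE INDEX idx_status_expr ON my_table ((data->>'status'));

-- or materialize it as a generated column and index that
ALTER TABLE my_table
  ADD COLUMN status text GENERATED ALWAYS AS (data->>'status') STORED;
CREATE INDEX idx_status ON my_table (status);

Either way you end up maintaining a cheap B-tree on writes instead of a GIN index.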
It works until you realize some of these usages would've been better as individual key/value rows.
For example, if you want to store settings as JSON, you first have to parse it through e.g. Zod and hope it doesn't fail because of schema changes (or write migrations and hope those succeed).
Meanwhile a simple key/value row just works, and you can even do partial fetches and updates.
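Something like this (made-up names, plain Postgres), versus one settings blob per user:

CREATE TABLE user_settings (
  user_id bigint NOT NULL,
  key     text   NOT NULL,
  value   text   NOT NULL,
  PRIMARY KEY (user_id, key)
);

-- partial fetch: read one setting without parsing a whole document
SELECT value FROM user_settings WHERE user_id = 42 AND key = 'theme';

-- partial update: upsert a single setting
INSERT INTO user_settings (user_id, key, value)
VALUES (42, 'theme', 'dark')
ON CONFLICT (user_id, key) DO UPDATE SET value = EXCLUDED.value;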
Doesn't sound very cursed: standard normalized relations for the things that need them, and jsonb for the big bags of attributes you don't care to split apart.
This is the typical practice for most index types in SingleStore as well, except for the Multi-Value Hash Index, which is defined over a JSON or BSON path.
Looking for performance issues on a machine with a different baseline IO and CPU load, buffer state, query plans, cardinality, etc. is just theater and will lead to a false sense of security. RegreSQL is approaching a stateful problem as if it were stateless and deterministic. A linter like https://squawkhq.com is a good partial solution, but it only addresses DDL problems.
RegreSQL would be better served by focusing only on the aspects of correctness that tools like SQLx and sqlc fundamentally cannot address. This is a real need that too few tools try to address.
Using the right language for the problem domain is a good thing, but what I can't stand is when people self-identify with the one language they are proficient in. Like, "I'm a Staff JavaScript developer." No, buddy, you aren't "Staff" anything if you only know one language.