Good post. I'd argue this is very similar to solo game development. There's a lot of extra administrative stuff that simply has nothing to do with actually making games and a lot more to do with making a real business. So the framing there is accurate.
I'm building all of the systems with LLMs and using LLMs to fast-track the creation of content such as storylines, characters, etc. The assets are mostly either bought or created by me.
Sounds fun! Asset creation, at least in terms of story content, should be the one area where LLMs really shine, especially if it can somehow extend into logic and gameplay. Couple that with the ways of generating art assets (hard with an LLM alone, but it can do something at least), and that would be cool. I hope to see these games in the future, although they might be labelled as slop unless done really well.
Did you get those backwards? Codex, Gemini, etc. all wait until the requests are done to accept user feedback. Claude Code allows you to insert messages in between turns.
SSD-native RDBMS sounds good in theory! What does it mean in practice? Which relational databases are simpler and more performant? Point me in their direction!
I don't get what's so difficult to understand. They have ambitions beyond just coding. And Claude is generally a good LLM. Even beyond just the coding applications.
There's a fascinating gap between PostgreSQL theory and practice here. Elsewhere in this thread, I complained that PostgreSQL extensions can't do everything yet. One thing they can do, however, or at least ought to be able to do, is provide alternative storage engines. That's precisely the kind of thing extensions are supposed to be especially good at providing.
(We maintain OrioleDB.) Mostly because the table access method (TAM) API isn't mature enough yet. Hopefully we can upstream more patches so that it becomes possible.
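At the SQL level the hook already exists, for what it's worth: once an extension registers a table access method, you select it per table with USING. A rough sketch of how OrioleDB is wired up (table definition invented for illustration):

CREATE EXTENSION IF NOT EXISTS orioledb;

-- Route this table's storage through the extension's access method
CREATE TABLE orders (
    id      bigint PRIMARY KEY,
    payload jsonb
) USING orioledb;

The catch, as noted above, is that OrioleDB still needs a patched Postgres build underneath; the TAM API alone doesn't yet expose everything the engine needs.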
> undo-based MVCC storage engine project stall?
From what I could gather, it ran out of steam simply because of the difficulty of the task. There is a lot of work involved in getting the requisite patches into core, and the community is (correctly) cautious.
MySQL is definitely easier to use if you don’t want to ever have to think about DB maintenance; on the other hand, you’re giving up a TON of features that could make your queries enormously performant if your schema is designed around them - like BRIN indices, partial indices, way better partition management, etc.
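For illustration, here's roughly what those features look like in Postgres (schema invented):

-- A big append-mostly table, partitioned by time
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    status     text NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- BRIN: a tiny index that works well when created_at correlates with insert order
CREATE INDEX events_created_brin ON events USING brin (created_at);

-- Partial index: only covers the rows the hot queries actually touch
CREATE INDEX events_pending_idx ON events (id) WHERE status = 'pending';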
OTOH, if and only if you design your schema to exploit MySQL’s clustering index (like for 1:M, make the PK of the child table something like (FK, some_id)), your range scans will become incredibly fast. But practically no one does that.
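A sketch of that layout (names invented; InnoDB clusters rows by primary key, so the children of one parent end up physically adjacent):

CREATE TABLE orders (
    id BIGINT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE order_items (
    order_id BIGINT NOT NULL,          -- the FK, leading the PK on purpose
    item_id  BIGINT NOT NULL,          -- per-order sequence, not globally unique
    sku      VARCHAR(64) NOT NULL,
    PRIMARY KEY (order_id, item_id),
    FOREIGN KEY (order_id) REFERENCES orders (id)
) ENGINE=InnoDB;

-- This range scan now reads one contiguous run of pages:
SELECT * FROM order_items WHERE order_id = 42;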
As someone who learned to think in MySQL, this is really true. At the time, Postgres was a viable alternative too; the tooling to get started just reached me a little easier and quicker.
The major thing I advocate for is: don't pick a NoSQL database to avoid relational DBs, only to try to do a bunch of relational work in NoSQL that would have been trivial in an RDBMS. Postgres can even power graph query results, which is great.
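To make the graph point concrete, a recursive CTE usually goes a long way (schema invented for illustration):

CREATE TABLE edges (src int NOT NULL, dst int NOT NULL);

-- All nodes reachable from node 1; plain UNION deduplicates,
-- which also keeps cycles from recursing forever
WITH RECURSIVE reachable AS (
    SELECT dst FROM edges WHERE src = 1
    UNION
    SELECT e.dst FROM edges e JOIN reachable r ON e.src = r.dst
)
SELECT dst FROM reachable;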
> The major thing I advocate for is: don't pick a NoSQL database to avoid relational DBs, only to try to do a bunch of relational work in NoSQL that would have been trivial in an RDBMS.
It has always felt to me like devs will gravitate towards doing the opposite of what makes sense for their DB. If they have a Document DB, they'll try to use it relationally. If they have a relational database, they'll shove everything into a JSON column. Then in both cases, they complain that it's slow.
> OTOH, if and only if you design your schema to exploit MySQL’s clustering index (like for 1:M, make the PK of the child table something like (FK, some_id)), your range scans will become incredibly fast. But practically no one does that.
You can achieve that with Postgres as well if you accept duplicating the data by adding an index with an include clause containing all the non-key columns you want to return in the SELECT clause. This way, you'll get fast index-only scans.
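Something like this, reusing the parent/child shape from above (names invented):

CREATE TABLE child (
    parent_id bigint NOT NULL,
    child_id  bigint NOT NULL,
    col_a     text,
    col_b     text,
    PRIMARY KEY (parent_id, child_id)
);

-- Key columns for the lookup, payload columns riding along in the leaf pages
CREATE INDEX child_by_parent ON child (parent_id, child_id)
    INCLUDE (col_a, col_b);

-- Can be satisfied as an index-only scan (once the visibility map is current):
SELECT child_id, col_a, col_b FROM child WHERE parent_id = 42;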
The major change that drove us to replace PostgreSQL was replication and HA across geographies. I think on the Postgres side Greenplum and CockroachDB are/were an option.
With MySQL variants like Percona XtraDB, the setup can go from one instance to a cluster to a geo-replicating cluster with minimal effort.
Vanilla Postgres, for an equivalent setup, is basically pulling teeth.
I have a lot of respect for Postgres' massive feature set, and how easy it is to immediately put to use, but I don't care for the care and feeding of it, especially dealing with upgrades, vacuuming, and maintaining replication chains.
Once upon a time, logical replication wasn't a thing, and upgrading major versions was a nightmare, as all databases in the chain had to be on the same major version. Upgrading big databases took days because you had to dump and restore. The MVCC bloat and VACUUM problem was such a pain in the ass, whereas with MySQL I rarely had any problems with InnoDB purge threads not being able to keep up with garbage-collecting historical row versions.
Lots of these problems are mitigated now, but the scars still sometimes itch.
It's kind of remarkable how little operational maintenance MySQL requires. I've been a Postgres fan for a long time, but after working with a giant MySQL cluster recently, I am impressed. Postgres requires constant babysitting: vacuums, reindexing, various sorcery. MySQL just… works.
I agree that SQLite requires less maintenance, but you still need to vacuum to keep the database file from accumulating dead space (for apps, I run VACUUM at startup).
SQLite's VACUUM is only needed to shrink the database file after you remove a lot of data; it's not routine maintenance the way it is in Postgres. Postgres usually has autovacuum on by default anyway, so I'm not understanding the complaint much.
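And if even an occasional full VACUUM is too heavy, there's incremental mode (a sketch; note auto_vacuum only takes effect on an existing database after a one-time full VACUUM):

sqlite> PRAGMA auto_vacuum = INCREMENTAL;
sqlite> VACUUM;                          -- rebuild once so the setting sticks
sqlite> PRAGMA incremental_vacuum(100);  -- release up to 100 free pages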
NUL in the middle of a string is fine, types have no meaning, VARCHAR limits are just suggestions…
The flexible typing is the biggest WTF to me, especially because it necessitates insane affinity rules[0]. For example, you can declare that a column is of type “CHARINT” (or “VARCHARINT”, for that matter), and while that will match the rule for TEXT affinity (contains the string “CHAR”), it also matches the rule for INTEGER affinity (contains the string “INT”), and since that rule matches first, the column is given INTEGER affinity. "FLOATING POINT" maps to INTEGER since it ends in "INT", and "STRING" maps to NUMERIC since it doesn't match anything else.
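You can watch the affinity rules fire with typeof() (a quick sketch; output shown in the CLI's default list mode):

sqlite> CREATE TABLE affinity_demo (a CHARINT, b FLOATING POINT, c STRING);
sqlite> INSERT INTO affinity_demo VALUES ('42', '1.5', '007');
sqlite> SELECT typeof(a), typeof(b), typeof(c) FROM affinity_demo;
integer|real|integer
sqlite> SELECT c FROM affinity_demo;  -- the leading zeros are already gone
7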
Then there are the comparison rules (same link): NULL < (INTEGER, REAL) < TEXT < BLOB, but those may be altered at comparison time due to type conversion. Hex values as strings get coerced to 0 as INTEGER, but only if they're in the SQL text, not if they're stored in a table. Finally, no conversion takes place for ORDER BY operations.
This is particularly galling considering that most of sqlite3's output modes (this is `markdown`) don't visually differentiate between string types and numeric types; I manually added the quotes on rows 2 and 4 (by PK) to assist the explanation.
sqlite> CREATE TABLE foobar (id INTEGER NOT NULL PRIMARY KEY, b BLOB NOT NULL);
sqlite> INSERT INTO foobar (b) VALUES (10), ('10'), (0xA), ('0xA');
sqlite> SELECT id, b, 15 > b, '15' > b, 0xF > b, '0xF' > b FROM foobar ORDER BY b;
| id | b     | 15 > b | '15' > b | 0xF > b | '0xF' > b |
|----|-------|--------|----------|---------|-----------|
| 1  | 10    | 1      | 1        | 1       | 1         |
| 3  | 10    | 1      | 1        | 1       | 1         |
| 4  | '0xA' | 0      | 1        | 0       | 1         |
| 2  | '10'  | 0      | 1        | 0       | 0         |
SQLite is great, if and only if you use STRICT mode (and enable FK checks, if applicable). Otherwise, best of luck.
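For reference, the opt-in looks like this (error text approximate):

sqlite> PRAGMA foreign_keys = ON;  -- per-connection; must be set on every open
sqlite> CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL) STRICT;
sqlite> INSERT INTO users VALUES (1, 42);
Runtime error: cannot store INT value in TEXT column users.name (19)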
Works great for these types of MoE models. The ability to have large amounts of VRAM lets you run different models in parallel easily, or have actually useful context sizes. Dense models can get sluggish, though. AMD's ROCm support has been a little rough for Stable Diffusion work (memory issues leading to application stability problems), but it has worked well with LLMs, as does Vulkan.
I wish AMD would get around to adding NPU support on Linux for it, though; there's more potential that could be unlocked.