Was curious to read this, but then the massive full-page ugly-on-purpose AI-generated NFT-looking banner image at the top of the page turned my stomach to the point where there's no way I'd even consider it - even if the article isn't AI-generated (which it probably is).
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
In my experience, it's usually the database that gives out first. It's often a shared database, or one running on underprovisioned hardware.
The kinds of sites that go down when they receive unexpected traffic are usually built in such a way that they make multiple DB requests to render each page. Or they have a dozen poorly configured, performance-expensive WordPress plugins running. Or, likely, all of the above.
That's not to say performance doesn't matter anymore or that blog posts on niche topics don't matter anymore.
It's more that there are thirty different forces on all sides working to minimize the impact of this kind of post. CPUs are still getting faster even now, despite Moore's law being dead. The business or career impact of choosing between an association list and a hash map in a garbage-collected language like Guile Scheme is so small that it's hard to quantify.
If it's in a hot enough path that it matters, there are likely at least three things you could do with 20 minutes of work (or 5 minutes of GPU time) that would solve the problem just as effectively, or better.
I remember the very specific period of time when blog posts about functional programming for React developers were in vogue. You can speed up your Scheme app by 15%, or you can build and deploy a completely new service from scratch in Node.js in the same amount of time.
It used to feel like code had some kind of meaning or value. Now, it's just an artifact produced as a side effect of work. But that's been a trend for a decade or so now, AI is just the latest (and most significant) iteration of it.
Sounds like the difference between code as craft and code as artifact or product. Actually, not even a product: it's the inner guts of a product that most people don't need to care about, and increasingly it's just machine generated, so it's not even meant to be read by humans. Write-only code of the post-programming era.
Professional software development has always aspired to be an industrial process (think OOP and Agile): a collective endeavor to produce code of decent quality that works reliably and achieves business goals. Any aesthetic satisfaction or philosophical insight is a byproduct, nice to have, but not the main point.
Code as a craft is a niche for experts and researchers, for hobbyists and amateurs. The minuscule performance improvement gained from choosing between an array and a hash map is insignificant in most situations, outside of resource-constrained contexts like embedded programming, retro computers, games, and competitions.
But, thinking over it, code as a craft still has a healthy subculture of people across older and younger generations. Perhaps it's up to the older ones who remember the good ol' days of respectable craftsmanship ("craftspersonship") to promote and encourage others to carry on the tradition.
Or is that not even worth doing anymore with language models pumping out vibe-coded slop? Will programmers be relegated to reviewing and fixing such mass-produced code? Yes, probably. But no, of course it's worth preserving and evolving the culture of computer science and software development, maybe it's more important than ever to keep the flame of human spirit and imagination alive, supported by machine intelligence rather than replaced.
I'm working on a DSL and browser-based playground for procedural 3D geometry called Geotoy: https://3d.ameo.design/geotoy
It's largely finished and functional, and I'm now focused on polish and adding additional builtin functions to expand its capabilities. I've been integrating different geometry libraries and kernels as well as writing some of my own.
I've been stress-testing it by building out scenes from movies or little pieces of buildings from Google Maps Street View, finding the sharp edges and missing pieces in the tool.
My hope is for Geotoy to be a relatively easy-to-learn tool and I've invested significantly in good docs, tutorials, and other resources. Now my goal is to ensure it's something worth using for other people.
Svelte is definitely still my favorite way to build web apps in 2025.
I wasn't (and still am not) the biggest fan of the new Runes syntax, but I've gotten used to it, and it doesn't really hurt my productivity or get in my way that much.
There's definitely an ecosystem gap compared to React, but there are still lots of really good components and frameworks out there. For example, there's a native Svelte version of IBM's Carbon Design[1] which I've used and found to be very high-quality.
And as for arguments that React will keep winning due to LLMs not having enough corpus to learn less-popular frameworks, I've anecdotally had good success with using LLMs to edit and generate Svelte code. There are occasionally some issues (like it generating pre-runes syntax or using deprecated stuff like stores) but it works well enough to be useful, and definitely better than I expected.
You might be happy to hear that we're releasing a new version of the Svelte Society website to make it easier to find packages and other resources. We're currently migrating data and fixing bugs, but if you want to give it a whirl, you can find it at https://prod.sveltesociety.dev until we switch it over to the root domain.
The main takeaway from this for me is that SQLite’s query planner seems to be pretty limited. It’s reliant on stuff like the order in which WHERE conditions are specified, isn’t able to use multiple indexes in queries in many cases, bails out to scans when a variety of different operations show up in queries, etc.
It might be the case that SQLite has a simpler or less sophisticated query planner than other databases like Postgres or MariaDB, but in my experience those DBs struggle a lot with good querying planning as well. I’ve spent many hours in the past with issues like Postgres suddenly starting to ignore an index entirely because its computed table data distribution statistics got out of balance, or having to add manual annotations to MariaDB queries like STRAIGHT_JOIN in order to get a query to run faster.
I’m guessing that this is a really hard problem since it doesn’t seem to be really “solved” by any major DB vendor I’ve seen. A lot of modern DB engines like Clickhouse tend to just work around this problem by being so fast at full table scans that they don’t even need any sophisticated indexing set up at all.
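To see what the planner actually decides, EXPLAIN QUERY PLAN is the tool. Here's a minimal sketch of the multiple-index limitation, using a made-up table (the exact plan text varies by SQLite version, and which index gets picked can depend on statistics):

```
» sqlite3 test.db "CREATE TABLE t (a TEXT, b TEXT); CREATE INDEX idx_a ON t(a); CREATE INDEX idx_b ON t(b);"
» sqlite3 test.db "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 'x' AND b = 'y'"
# expect something like: SEARCH t USING INDEX idx_a (a=?)
# only one of the two indexes is used; the other condition is checked row by row
```

SQLite generally uses at most one index per table per query (outside of the OR optimization), whereas Postgres can sometimes combine several via bitmap index scans.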
> The main takeaway from this for me is that SQLite’s query planner seems to be pretty limited.
This doesn't appear to be true at all.
The order of WHERE conditions does not matter; the order of columns in an index does.
Everything you're describing is pretty much just how indexes fundamentally work in all databases, which is why, as you note, it hasn't been "solved" by anyone.
Indexes aren't magic -- if you understand how they work as a tree, it becomes very clear what can be optimized and what can't.
It is true that query planners occasionally get it wrong, but it's also often the case that your query was written in a non-obvious way that is equivalent in its formal results but not idiomatic, and making it more idiomatic lets the query planner more easily figure out which indexes to use where.
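A quick illustration of that point with a throwaway table (plan text is representative; versions differ, and running ANALYZE can change choices like skip-scan):

```
» sqlite3 test.db "CREATE TABLE t2 (a TEXT, b TEXT, c TEXT); CREATE INDEX idx_ab ON t2(a, b);"
» sqlite3 test.db "EXPLAIN QUERY PLAN SELECT * FROM t2 WHERE a = 'x' AND b = 'y'"
# expect: SEARCH t2 USING INDEX idx_ab (a=? AND b=?)
» sqlite3 test.db "EXPLAIN QUERY PLAN SELECT * FROM t2 WHERE b = 'y' AND a = 'x'"
# expect: the same SEARCH plan -- swapping the WHERE order changes nothing
» sqlite3 test.db "EXPLAIN QUERY PLAN SELECT * FROM t2 WHERE b = 'y'"
# expect: SCAN t2 -- without the leading column of the index, it falls back to a scan
```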
(copying my reply from the other comment that said the same thing as you)
The order of conditions in a WHERE definitely does matter, especially in cases where the conditions are on non-indexed columns or there are CPU-intensive search operations like regex, string ops, etc.
I just ran this test locally with a table I created that has 50 million rows:
```
» time sqlite3 test.db "select count() from test WHERE a != 'a' AND a != 'b' AND a != 'c' AND a != 'd' AND b != 'c' AND d != 'd' AND e != 'f' AND f = 'g'"
sqlite3 test.db 5.50s user 0.72s system 99% cpu 6.225 total
» time sqlite3 test.db "select count() from test WHERE f = 'g' AND a != 'a' AND a != 'b' AND a != 'c' AND a != 'd' AND b != 'c' AND d != 'd' AND e != 'f'"
sqlite3 test.db 1.51s user 0.72s system 99% cpu 2.231 total
```
The only difference is swapping the `f = 'g'` condition from last to first. That condition never matches in this query, so it's able to fail fast and skip all of the work of checking the other conditions.
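For anyone who wants to poke at this themselves, a table with roughly the same shape can be generated with something like the following (illustrative only, not the exact data above; every row gets 'x' everywhere, so `f = 'g'` never matches):

```
» sqlite3 test.db "CREATE TABLE test (a TEXT, b TEXT, d TEXT, e TEXT, f TEXT);
  WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM n WHERE i < 50000000)
  INSERT INTO test SELECT 'x', 'x', 'x', 'x', 'x' FROM n;"
# 50M rows takes a while to generate; a smaller count shows the same relative difference
```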
Sorry, I should have clarified -- the order of WHERE conditions doesn't matter for whether an index is utilized. I thought that was the context of the original comment, but now I realize maybe it was unclear.
Yes, of course you can skip evaluating the remaining conditions once one side of an AND fails, and that can affect speed. That's the same short-circuit behavior as in most programming languages.
> A lot of modern DB engines like Clickhouse tend to just work around this problem by being so fast at full table scans that they don’t even need any sophisticated indexing set up at all.
There's only so much you can do with this approach due to how the algorithmic complexity scales as more joins are added. At some point you'll need some additional data structures to speed things up, though they may not be indexes in name (e.g. materialized views).
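Clickhouse has first-class materialized views; in SQLite terms, the same idea is a summary table kept in sync by triggers. A rough sketch with hypothetical table names:

```
» sqlite3 test.db "
  CREATE TABLE orders (customer TEXT, amount INTEGER);
  CREATE TABLE order_totals (customer TEXT PRIMARY KEY, total INTEGER);
  -- keep the 'materialized view' up to date as rows are inserted
  CREATE TRIGGER orders_ai AFTER INSERT ON orders BEGIN
    INSERT OR IGNORE INTO order_totals (customer, total) VALUES (NEW.customer, 0);
    UPDATE order_totals SET total = total + NEW.amount WHERE customer = NEW.customer;
  END;"
```

Queries that would otherwise aggregate over the whole base table can then read the precomputed totals directly.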
Clickhouse isn't fast at table scans; it's just columnar. Indexes are basically a maintained transform from row storage to column storage, and columnar databases are essentially already "indexed" by their nature (plus they auto-apply some additional indexes on top, like zone maps). It's only fast at table scans in the sense that you probably aren't doing a SELECT * FROM table, so it's only iterating over a few columns of data, whereas SQLite would end up iterating over literally everything. A table scan doesn't really mean the same thing between the two: a columnar database's worst fear is selecting every column; a row-based database wants to avoid reading every row.
Their problem is instead that getting back to a row, even within a table, is essentially a join. Which is why they fundamentally suck at point lookups, and they strongly favor analytic queries that largely work column-wise.
Columnar databases are not already "indexed". Their advantage instead comes from their ability to load only the relevant columns of each row when doing scans.
They're indexed in the sense that they're already halfway to the structure of an index, which is why they're happy to toss indexes on top arbitrarily instead of requiring the user to manage a minimal subset.
What does it even mean to be "halfway" to the structure of an index? Do they allow filtering a subset of rows with a complexity that's less than linear in the total number of rows or not?
An index in a row-based database is a column-wise copy of the data, with mechanisms to skip forward during scanning. You maintain a separate copy of the column to support this, which makes indexes expensive, and thus the DBA is asked to maintain only a minimal subset of them.
A columnar database’s index is simply laid out on top of the column data. If the column is the key, then it’s sorted by definition, and no index is really required outside of maybe a zone map, because you can binary search. A non-key column gets a zone map / skip index laid out on top, which is cheap to maintain… because it’s already a column-wise slice of the data.
You don't often add indexes to an OLAP system because every column is effectively indexed by default. The index is cheap to maintain, since there's no need for a separate column-wise copy of the data when the storage already is a column-wise copy of the data.
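To make the zone-map idea concrete, here's a rough sketch of what the structure amounts to, written as plain SQLite SQL over a hypothetical, roughly clustered table (real columnar engines maintain this per storage block internally, not as a user-visible table):

```
» sqlite3 zonemap.db "
  CREATE TABLE t (f TEXT);
  -- fill t with clustered data so per-block ranges are narrow
  WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM n WHERE i < 1000000)
  INSERT INTO t SELECT char(97 + (i / 40000)) FROM n;
  -- the 'zone map': min/max of f for every block of 8192 rows
  CREATE TABLE zone_map AS
  SELECT rowid / 8192 AS block, MIN(f) AS min_f, MAX(f) AS max_f
  FROM t GROUP BY rowid / 8192;"
# a scan for f = 'g' only needs to read blocks whose [min_f, max_f] range could contain 'g';
# every other block of 8192 rows is skipped outright, with no second copy of f maintained
```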
> A non-key column gets a zone map / skip index laid out on top, which is cheap to maintain… because it’s already a column-wise slice of the data.
I don't see how that's different from storing a traditional index. You can't just lay it on top of the column, because the column is stored in a different order than what the index wants.
Zone map / skip indexes don't require sorting, still provide significantly faster searching than full table scans, and are typically applied to every column by default. Sorting is even better, just at the cost of a second copy of the dataset.
In a row-based rdbms, any indexing whatsoever is a copy of the column-data, so you might as well store it sorted every time. It’s not inherent to the definition.
That's still a separate index though, no? It's not intrinsic to the column storage itself, although I guess it works best with it if you end up having to do a full scan of the column section anyway.
> Sorting is even better, just at the cost of a second copy of the dataset.
> ...
> In a row-based rdbms, any indexing whatsoever is a copy of the column-data
I'm not saying columnar databases don't have indexes; I'm saying that they get indexes for cheap (and, importantly, without maintaining a separate copy of the data being indexed), to the point that every column is indexed by definition. It's a separate data structure, but it's not a separate DB object exposed to the user; it's just part of the definition.
> So the same thing, no?
Think of it like this: for a given filtered query, row-based storage is doing a table scan if no index exists. There is no middle ground; call it 0% value or 100%.
A columnar database’s baseline is a decent index, and if there’s a sorted index then even better. Say 60% value vs 100%.
The relative importance of having a separate, explicit, sorted index is much lower in a columnar database, because the baseline is different. (Although maintaining extra sorted indexes in a columnar database is much more expensive: you basically have to keep a second copy of the entire table, sorted on the new key(s).)
It definitely does matter, especially in cases where the conditions are on non-indexed columns or there are CPU-intensive search operations like regex, string ops, etc.
I just ran this test locally with a table I created that has 50 million rows:
```
» time sqlite3 test.db "select count() from test WHERE a != 'a' AND a != 'b' AND a != 'c' AND a != 'd' AND b != 'c' AND d != 'd' AND e != 'f' AND f = 'g'"
sqlite3 test.db 5.50s user 0.72s system 99% cpu 6.225 total
» time sqlite3 test.db "select count() from test WHERE f = 'g' AND a != 'a' AND a != 'b' AND a != 'c' AND a != 'd' AND b != 'c' AND d != 'd' AND e != 'f'"
sqlite3 test.db 1.51s user 0.72s system 99% cpu 2.231 total
```
The only difference is swapping the `f = 'g'` condition from last to first. That condition never matches in this query, so it's able to fail fast and skip all of the work of checking the other conditions.
I know that some people really find value in this and even enjoy it, but I can't imagine it and would never accept a role at a company that does anything close to this.
For me, so much of the value and enjoyment in programming comes from getting locked in and grinding out some feature or fix solo based on a spec or description. I also feel like programming without listening to music is way less enjoyable.
Again, no flame at all for people who work this way. It just doesn't match at all with my own sensibilities and work style to the point where it's hard for me to imagine doing it at all.
Those have caffeine in them, right? I'm sure you know your body well, but that could be part of what's helping you as well; it could be constricting blood vessels or something like that.