Why does the randomization have to happen in the database query? Assuming there aren't any large gaps in the distribution of IDs, and if you know the max_id, couldn't you pick a random number between min_id and max_id and get the record whose ID matches that random number?
Precisely because of the gaps. Tons of Wikipedia article IDs aren't valid for random selection because they've been deleted, because they're a disambiguation page, because they're a redirect, or because they're a talk page or user page or whatever else.
My comment covered your suggestion already -- that's why I wrote "you encounter the same problem where the id gaps in deleted articles make certain articles more likely to be chosen".
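To make that bias concrete, here's a minimal sketch. It assumes a hypothetical SQLite `articles` table with an integer primary key `id` (not MediaWiki's real schema): the first function is the "skip ahead to the next existing id" trick, which over-selects articles sitting just after large runs of deleted ids; the second is an unbiased retry loop, which is the kind of algorithm that has to run against the database anyway.

```python
# Sketch only: a hypothetical SQLite "articles" table stands in for the real thing.
import random
import sqlite3

conn = sqlite3.connect("articles.db")

def random_article_skip_to_next():
    """The 'pick a number, take the next existing id' trick.
    Biased: an article just after a large gap of deleted ids is chosen
    far more often than one in a densely populated id range."""
    min_id, max_id = conn.execute("SELECT MIN(id), MAX(id) FROM articles").fetchone()
    r = random.randint(min_id, max_id)
    return conn.execute(
        "SELECT id, title FROM articles WHERE id >= ? ORDER BY id LIMIT 1", (r,)
    ).fetchone()

def random_article_rejection():
    """Unbiased alternative: retry until the random id actually exists.
    Cheap when gaps are sparse, slow when most ids are gaps."""
    min_id, max_id = conn.execute("SELECT MIN(id), MAX(id) FROM articles").fetchone()
    while True:
        r = random.randint(min_id, max_id)
        row = conn.execute(
            "SELECT id, title FROM articles WHERE id = ?", (r,)
        ).fetchone()
        if row is not None:
            return row
```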
That requires setting up an entirely different service and somehow keeping it in perfect sync with the database, along with all of the memory it requires.
And you've still got to decide how you're going to pick random IDs from an array of tens of millions of elements that are constantly having elements deleted from the middle. Once you've figured out how to do that efficiently, you might as well skip all the trouble and just use that algorithm on the database itself.
>And you've still got to decide how you're going to pick random IDs from an array of tens of millions of elements that are constantly having elements deleted from the middle.
for those unfamiliar with visualizing a doughnut, imagine a bagel-shaped treat of sweet cake-like dough, deep-fried and frosted, with optional sprinkles
There are people working on this, but "IA preserved as it was when it was killed in 2023" is nowhere near as valuable as what IA would be in 2033 if it survives.
IA is constantly adding new content. ("New" meaning "not-yet-archived stuff from the past century", in addition to up-to-date web snapshots.)
What people need to be working on is to create a peer/successor organization that can take a copy of its archive and carry on its core functions, not just a static archive on a server somewhere.
I'd rather have "IA preserved as it was in 2023" than no IA at all. If we have a gap in the collection between tomorrow and when someone sets up a new IA in a few years, that's very different than losing most things collected about the inception of the Internet.
The main issues are storage, centralisation, moderation, and copyright; you could, if you wished, rebel and ignore these with a distributed model like BitTorrent or IPFS.
If a library burns to the ground you lose everything, so you will have to decentralise. That then causes moderation issues: if you were to truly go decentralised, what's stopping a bad actor from uploading icky stuff? It would resolve copyright take-downs, as they couldn't kill all the nodes. But it all sounds like a lot of work for very little return.
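On the "couldn't kill all nodes" point, here's a minimal sketch of the mechanics, assuming a local Kubo (go-ipfs) daemon with its default RPC API on port 5001; the file name and helper functions are made up. It only covers replication and does nothing about the moderation or copyright problems above.

```python
# Sketch only: assumes a local Kubo (go-ipfs) daemon listening on 127.0.0.1:5001.
import requests

def add_to_ipfs(path: str) -> str:
    """Add a file to the local IPFS node and return its content ID (CID)."""
    with open(path, "rb") as f:
        resp = requests.post(
            "http://127.0.0.1:5001/api/v0/add",
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()["Hash"]

def pin(cid: str) -> None:
    """Pin the CID so the local node keeps a copy; any other node that
    pins the same CID keeps serving it even if this one disappears."""
    requests.post(
        "http://127.0.0.1:5001/api/v0/pin/add",
        params={"arg": cid},
    ).raise_for_status()

if __name__ == "__main__":
    cid = add_to_ipfs("snapshot.warc.gz")  # hypothetical archive file
    pin(cid)
    print(f"Served by any node that pins {cid}")
```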
We're migrating our apps from Vue.js to HTMX, and it has been a great experience. The size of our codebase has consistently gone down as we move things to HTMX, and it feels like the level of complexity goes down as well.
If possible: Can you say how much of the code reduction is based on not needing to manually handle bespoke request behavior? (errors, transitions, animations, etc.)
Also: How much has the backend's responsibility in pre-rendering grown in response to the move to HTMX?
What kind of apps? I used Intercooler (sort of a predecessor to HTMX) for a personal project and it worked well but I didn't get a good feel for what it would be like to build more complicated UI.
I don't think UI is part of HTMX's concerns. It is a library for submitting data and loading HTML fragments. For general UI components, I like https://shoelace.style/.
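To show what "loading HTML fragments" means in practice, here's a minimal sketch as a toy Flask app (the routes and markup are made up, not anyone's real setup): the button carries hx-* attributes, and the endpoint returns a server-rendered fragment that htmx swaps into the page.

```python
# Sketch only: a toy Flask app illustrating htmx-style fragment swapping.
from flask import Flask

app = Flask(__name__)

PAGE = """
<!doctype html>
<script src="https://unpkg.com/htmx.org"></script>
<button hx-get="/fragment" hx-target="#result" hx-swap="innerHTML">
  Load it
</button>
<div id="result"></div>
"""

@app.route("/")
def index():
    # Full page: htmx is just a script tag plus hx-* attributes on plain HTML.
    return PAGE

@app.route("/fragment")
def fragment():
    # The server returns a ready-to-insert HTML fragment, not JSON;
    # htmx swaps it into #result on the client.
    return "<p>Rendered on the server, swapped in by htmx.</p>"

if __name__ == "__main__":
    app.run(debug=True)
```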
Would you be open to sharing your experience on this a bit more? We have a JS-heavy app with 100s of VueJS components. Could those be transferred to HTMX?