I think you’re correct. The reproduction isn’t very precise and the solution doesn’t seem right (I’m not seeing anything about the non-standard pages not being freed). I’d guess this was ignored because it was wrong…
DB lookups + an extra index are way more expensive than hardware-assisted decoding.
If your UUIDv4 is cached, you’re still paying for the extra storage and index. Not an issue on a million-row system, but imagine a billion, or 10 billion.
And what if it’s not cached? Great, now you’re hitting the disk.
Computers do not suffer from a lack of CPU performance, especially when you can lean on dedicated CPU instruction sets. Hell, you don’t even need encryption. How about a simple bit shift combined with a lookup identifier? Black box, sure, and not great if leaked, but you have bigger things to worry about if your actual shift pattern leaks. Use an extra byte or two to identify the pattern.
Obfuscating your IDs is easy. No need for full encryption.
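For what it’s worth, here is a minimal sketch of that kind of scheme (Python, purely for concreteness), assuming 64-bit sequential IDs; the rotation amounts, XOR constants, and pattern table are all made up for illustration:

```python
# Sketch: obfuscate a sequential 64-bit ID with a reversible rotate + XOR,
# and prepend a one-byte pattern identifier so the scheme can be rotated
# later. All constants here are made up.

PATTERNS = {
    0x01: {"rot": 13, "xor": 0x5DEECE66D},          # hypothetical pattern v1
    0x02: {"rot": 27, "xor": 0x9E3779B97F4A7C15},   # hypothetical pattern v2
}
MASK64 = (1 << 64) - 1

def _rotl(x, n):
    return ((x << n) | (x >> (64 - n))) & MASK64

def _rotr(x, n):
    return ((x >> n) | (x << (64 - n))) & MASK64

def obfuscate(seq_id, pattern=0x02):
    p = PATTERNS[pattern]
    scrambled = _rotl(seq_id ^ p["xor"], p["rot"])
    # pattern byte sits above the 64 ID bits, so decode knows which scheme was used
    return (pattern << 64) | scrambled

def deobfuscate(public_id):
    pattern = public_id >> 64
    p = PATTERNS[pattern]
    return _rotr(public_id & MASK64, p["rot"]) ^ p["xor"]

assert deobfuscate(obfuscate(12345)) == 12345
```

Anything that just shuffles bits like this is trivially reversible by anyone who learns the pattern, which is exactly the trade-off being acknowledged above.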
Hardware assistance is a red herring here. As you noted, the real problem is that random reads have poor data locality, which degrades your database performance in a way that is expensive to resolve.
Why would it be computationally complex? The encryption is implemented in silicon, so it is close to free for all practical purposes. The lookup table would be wildly more expensive in almost all cases due to comparatively poor memory locality.
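As a rough illustration of "encryption in silicon is close to free": encrypting each ID as a single AES block gives an opaque, deterministic, reversible token without any extra table or index. This sketch assumes Python with the `cryptography` package (OpenSSL uses AES-NI automatically where the CPU supports it); the key handling is a placeholder.

```python
# Sketch: map an internal sequential ID to an opaque external token by
# encrypting it as one 16-byte AES block (ECB is acceptable here because
# each ID is exactly one block and IDs are unique). The key below is a
# placeholder; in practice it would be a fixed secret from config.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)

def encode_id(seq_id: int) -> str:
    enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    return (enc.update(seq_id.to_bytes(16, "big")) + enc.finalize()).hex()

def decode_id(token: str) -> int:
    dec = Cipher(algorithms.AES(KEY), modes.ECB()).decryptor()
    return int.from_bytes(dec.update(bytes.fromhex(token)) + dec.finalize(), "big")

assert decode_id(encode_id(42)) == 42
```

The point is that a single block cipher operation on a value already sitting in a register doesn’t touch memory the way a lookup table does.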
I think 'local' in this context could also simply mean 'wall', as in wall clock time. When my friend asks me the date and time, I don't have to bother with time zones.
This type of date/time is also very useful to store in a computer.
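As a small illustration (Python, purely for concreteness): a naive datetime is exactly this kind of wall-clock value. It records what the clock on the wall said, with no time zone attached.

```python
# A "wall clock" timestamp: a naive datetime with no time zone recorded.
from datetime import datetime

wall_time = datetime(2024, 6, 13, 9, 30)   # 09:30, wherever you happened to be
print(wall_time.isoformat())               # 2024-06-13T09:30:00
print(wall_time.tzinfo)                    # None -- no zone attached
```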
Having spent decades on this in geophysical exploration: the time is now AND (where relevant) it's the current UTC Zulu time AND it's both local times.
Any data collected is logged against a UTC sync for the start of recording (and continues with elapsed time AND raw GPS times being logged, which sorts out leap-second issues).
Any discussion re: the start of the next data collection is relative to local time at the field end; the planes et al. are often bound by daylight, so local dawn and sunset at the field operation are relevant.
Any discussion re: software changes | hardware support is relative to local time at the home office end, as that rests on when the office staff or aircraft mechanics come in or free up.
For ease of communication, many such discussions are along the lines of "we'll ring back in four hours and ...". That's a NOW-relative epoch.
Additionally, I've always wanted institutions to be part of the timeline of technology. Corporations, Nation-states, Universities, Guilds, International Organizations: the ways people innovatively organize make things possible that otherwise wouldn't be.
The Higgs boson experiments, for example, wouldn't have been possible without the complex international institutions that orchestrated them. The Manhattan Project, the Moon landing, the internet ... the iPhone ...
This was originally in the HN submission title, but this comment from the maintainer is mind-boggling:
> GitHub has confirmed that the block.txt file in the repo has been responsible for over 1 petabyte of bandwidth per day — mainly due to automated downloads (likely from Adobe’s Chrome extension). This is way beyond what a single repo is designed to handle and poses a risk of the repository being temporarily disabled as early as June 13.
Adobe appears to be actually responsible for the DDoS.
> i think we might have finally tracked it to the GenAIWebpageBlocklist for Adobe Reader plugin on Chrome
> Adobe has acknowledged the issue and removed the URL reference from their Chrome extension. The updated version is already submitted to the Chrome Web Store and should roll out to users in the coming days
> GitHub has removed the content warning from the repository and will continue to monitor bandwidth usage