It sounds like a combination of the Download Monitor plugin plus a misconfiguration at the web server level resulted in the file being publicly accessible at that URL when the developers thought it would remain private until deliberately published.
This comment reminded me to check whether https://www.distributed.net/ was still in existence. I hadn't thought about the site for probably two decades; I ran the client back in the late 1990s when they were cracking RC5-64. But they still appear to be going as a platform that could be used for this kind of thing.
I was also excited about those projects and ran DESchall as well as distributed.net clients. Later on I was running the EFF Cooperative Computing Award (https://www.eff.org/awards/coop), as in administering the contest, not as in running software to search for solutions!
The original cryptographic challenges like the DES challenge and the RSA challenges were meant to demonstrate something about the strength of cryptosystems (roughly, that DES and, a fortiori, 40-bit "export" ciphers were pretty bad, and that RSA-1024 or RSA-2048 were pretty good). The EFF Cooperative Computing Award had a further goal, very much of the 1990s: to show that Internet collaboration is powerful and useful.
Today I would say that all of these things have outlived their original goals, because the strength of DES, 40-bit ciphers, or RSA moduli is now relatively apparent; we can get better data about the cost of brute-force cryptanalytic attacks from the Bitcoin network hashrate (which obviously didn't exist at all in the 1990s); and the power and effectiveness of Internet collaboration, including among people who don't know each other offline and don't have any prior affiliation, has, um, been demonstrated very strongly over and over and over again. (It might be hard to appreciate nowadays how at one time some people dismissed the Internet as potentially not that important.)
This Busy Beaver collaboration and Terence Tao's equational theories project (also cited in this paper) show that Internet collaboration among far-flung strangers for substantive mathematics research, not just brute force computation, is also a reality (specifically now including formalized, machine-checked proofs).
There's still a phenomenon of "grid computing" (often with volunteer resources) working on a whole bunch of computational tasks.
It's really just the specific "establish the empirical strength of cryptosystems" and "show that the Internet is useful and important" 1990s goals that are kind of done by this point. :-)
I also wonder how many of the numerous AI proponents in HN comments are subject to the same effect. Unless they are truly measuring their own performance, is AI really making them more productive?
You could go the same way as the study: flip a coin to decide whether to use AI, then write down the task you just did, the time you thought it took, and the actual clock time. Repeat and self-evaluate.
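Roughly like this, as a minimal sketch (the CSV filename, columns, and prompts are placeholders I made up, not anything from the study):

```python
# Minimal sketch of the coin-flip self-experiment described above.
# The log filename, columns, and prompts are placeholders.
import csv
import random
import time
from pathlib import Path

LOG = Path("ai_self_experiment.csv")

def run_trial() -> None:
    use_ai = random.choice([True, False])            # the coin flip
    task = input("Task description: ")
    print(f"Use AI for this task: {use_ai}")
    input("Press Enter when you start working... ")
    start = time.time()
    input("Press Enter when you finish... ")
    actual_min = (time.time() - start) / 60
    est_min = float(input("How long did it feel like, in minutes? "))
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["task", "used_ai", "estimated_min", "actual_min"])
        writer.writerow([task, use_ai, f"{est_min:.1f}", f"{actual_min:.1f}"])

if __name__ == "__main__":
    run_trial()
```

After a few dozen rows you can compare estimated vs. actual minutes per group and see whether the felt speed-up survives contact with the clock.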
Sample of 16 is plenty if the effect is big enough.
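As a rough illustration (a toy simulation of my own, assuming normally distributed per-task effects): with an effect around one standard deviation, 16 paired measurements detect it nearly every time, while a small effect usually slips through.

```python
# Toy Monte Carlo power check: how often does n=16 detect a given effect?
# Effect sizes and the normality assumption are illustrative only.
import random
from statistics import mean, stdev

def power(n=16, effect=1.0, trials=2000, t_crit=2.131):  # t critical, df=15, alpha=0.05
    hits = 0
    for _ in range(trials):
        # per-task difference (e.g. log speed-up), in units of its own std dev
        xs = [random.gauss(effect, 1.0) for _ in range(n)]
        t = mean(xs) / (stdev(xs) / n ** 0.5)
        hits += abs(t) > t_crit
    return hits / trials

print(f"large effect (1.0 sd): power ~ {power(effect=1.0):.2f}")
print(f"small effect (0.2 sd): power ~ {power(effect=0.2):.2f}")
```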
It’s also not a sample size of 1; it’s a sample size of however many tasks you do, because you don’t care about measuring the effect AI has on anyone but yourself if you’re trying to discern how it impacts you.
It's the most representative sample if you're interested in your own performance, though. I really don't care whether other people are more productive with AI; if I'm the outlier who isn't, then I'd want to know.
It seems this is because the string "autoregressive prior" should appear on the right-hand side as well, but in the second image it's hidden from view, and this has confused it into placing the string on the left-hand side instead?
It also misses the arrow between "[diffusion]" and "pixels" in the first image.
This smells like an advert. Over the last year I've spent less money on energy by being on Octopus Tracker (which requires a smart meter) than I would have on any fixed tariff.
If this is true, does this mean we don't need fingerprint-scanning hardware any more, and can just use a microphone and software to unlock a device when the user runs their finger over any convenient surface?
I used this many, many years ago but switched to Borg[0] about five years ago. Duplicity required full backups with incremental deltas, which meant my backups ended up taking too long and using too much disk space. Borg lets you prune older backups at will: thanks to chunk tracking and deduplication, there is no such thing as an incremental backup.
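A minimal sketch of what that looks like in practice, assuming Borg is installed and the repo was already created with `borg init`; the repo path, source paths, and retention numbers are placeholders:

```python
# Sketch of a Borg backup-plus-prune run. Repo path, source paths, and
# retention numbers are placeholders, not recommendations.
import subprocess

REPO = "/backups/borg-repo"   # created beforehand with `borg init`

def backup_and_prune() -> None:
    # Every archive behaves like a full backup; dedup means only new
    # chunks are actually stored.
    subprocess.run(
        ["borg", "create", "--stats", "--compression", "zstd,3",
         f"{REPO}::{{hostname}}-{{now}}", "/home", "/etc"],
        check=True,
    )
    # Old archives can be dropped at will; unreferenced chunks are freed.
    subprocess.run(
        ["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
         "--keep-monthly", "6", REPO],
        check=True,
    )

if __name__ == "__main__":
    backup_and_prune()
```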
I did the same. I had some weird path issues with Duplicity.
Borg is now my holy backup grail. I wish I could back up incrementally to AWS Glacier storage, but that's just me sounding like an ungrateful beggar. I'm incredibly grateful and happy with Borg!
Agree completely... used duplicity many years ago, but switched to Borg and never looked back. Currently doing borg backups of quite a lot of systems, many every 6 hours, and some, like my main shell host, every 2 hours.
It's quick, tiny and easy... and restores are the easiest: just mount the backup, browse the snapshot, and copy files where needed.
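The restore flow is roughly this; the repo path, archive name, mountpoint, and the copied file are all placeholders:

```python
# Sketch of the mount-and-copy restore flow. All paths and the archive
# name are placeholders; pick a real archive from `borg list`.
import shutil
import subprocess

REPO = "/backups/borg-repo"
ARCHIVE = "myhost-2024-01-01"
MOUNTPOINT = "/mnt/borg"

subprocess.run(["borg", "mount", f"{REPO}::{ARCHIVE}", MOUNTPOINT], check=True)
try:
    # Browse MOUNTPOINT like any filesystem and copy back what you need.
    shutil.copy2(f"{MOUNTPOINT}/home/user/.bashrc", "/home/user/.bashrc")
finally:
    subprocess.run(["borg", "umount", MOUNTPOINT], check=True)
```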
AFAIK, the only difference is that Restic doesn't require Restic to be installed on the remote server, so you can efficiently back up to things like S3 or FTP. Other than that, both are fantastic.
Technically Borg doesn't require it either: you can back up to a local directory and then use `rclone` to upload the repo wherever.
Not practical for huge backups, but it works for me as I'm backing up only my machines' configuration and code directories. ~60MB, and that includes a lot of code and some data (SQL, JSON, et al.)
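A minimal sketch of that local-repo-then-rclone setup; the repo path and the rclone remote name are placeholders for whatever remote you've configured:

```python
# Sketch of uploading a local Borg repo with rclone. The remote name and
# destination path are placeholders for an already-configured rclone remote.
import subprocess

REPO = "/backups/borg-repo"
REMOTE = "myremote:backups/borg-repo"

def upload_repo() -> None:
    # rclone only transfers files that changed since the last sync, so after
    # the first upload this is mostly the new repo segments.
    subprocess.run(["rclone", "sync", REPO, REMOTE], check=True)

if __name__ == "__main__":
    upload_repo()
```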
Pretty sure rclone uploads just fine without server dependencies, yeah. I never installed anything special on my home NAS and it happily accepts uploads with rclone.
That I can't really speak to. I know it does not reupload the same files at least (it uses timestamps), but I never really checked whether it uploads only file diffs.
Nothing offhand, but basically it can't know what's on the server without reading it all, and if it can't do that locally, it'll have to do it remotely. At that point, might as well re-upload the whole thing.
Its front page hints at this, but there must be details somewhere.
I think you’re misunderstanding something. There’s no need, and even no possibility, to have “rclone support” on the server, and also no need to “read it all”. rclone uses the features of whatever storage backend you’re using; if you back up to S3, it uses the content hashes, tags, and timestamps that it gets from bucket List requests, which is the same way that Restic works.
Borg does have the option to run both a client-side and a server-side process if you’re backing up to a remote server over SSH, but it’s entirely optional.
Not to make this an endless thread, but I have been wondering what the most rsync-friendly on-disk backup layout is. I have found Borg to produce fewer files and directories, which I would naively think translates to fewer checks (and the files are not huge, either). I have tried Kopia and Bupstash as well, but they both produce a lot of files and directories, many more than Borg. So I think Borg wins at this, but I haven't checked Restic and the various Duplic[ati|icity|whatever-else] programs in a while (last I did was at least a year ago).
I think the advantage of restic is that you don't need to rsync afterwards, it handles all that for you. Combined with its FUSE backup decryption (it mounts the remote backup as a local filesystem you can restore files from), it's very set-and-forget.
My problem with Restic was that it did not recognize sub-second timestamps on files. I made test scripts that exercised it (they created files and directories in a hypothetical backup source and then changed the files), but Restic insisted nothing had changed because the changes were happening too fast.
I modified the scripts to do `sleep 1` between each change, but it left a sour taste and I never gave Restic a fair chance. I see a good amount of praise in this thread, so I'll definitely revisit it when I get a little free time and energy.
Because yeah, it's not as if you'd normally make a second backup snapshot <1s after the first one. :D
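A toy illustration of the failure mode (the temp file and timing are arbitrary): two writes less than a second apart usually get distinct nanosecond mtimes but identical whole-second mtimes, so any comparison at one-second granularity sees no change.

```python
# Toy demonstration of why sub-second modifications can be invisible to a
# tool comparing mtimes at one-second granularity. Temp file and sleep
# interval are arbitrary.
import os
import tempfile
import time

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

def touch(content: bytes):
    with open(path, "wb") as fh:
        fh.write(content)
    st = os.stat(path)
    return st.st_mtime_ns, int(st.st_mtime)   # nanoseconds vs. whole seconds

ns1, s1 = touch(b"first version")
time.sleep(0.2)                               # well under one second
ns2, s2 = touch(b"second version")

print(f"nanosecond mtimes differ:   {ns1 != ns2}")   # usually True
print(f"whole-second mtimes differ: {s1 != s2}")     # often False
os.unlink(path)
```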
I tried Restic again, but its repo size is 2x that of Borg, which lets you fine-tune compression; Restic doesn't.
So I'll keep an eye on Rustic instead (it is much faster on some hot paths and it allows you to specify the base path of the backup; long story, but I need that feature a lot because I also make copies of my stuff to network disks, and when you back up from there you want to rewrite the path inside the backup snapshot).
Rustic compresses equivalently to Borg, which is no surprise because both use zstd at the same compression level.
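If you want a feel for what the level buys you independent of any backup tool, something like this works (uses the zstandard Python package; the sample data is arbitrary and real-world ratios will differ):

```python
# Rough comparison of zstd compression levels on repetitive sample data,
# using the zstandard package (pip install zstandard). Input is arbitrary.
import zstandard

data = (b"SELECT * FROM users WHERE id = 12345;\n" * 5000
        + b'{"key": "value", "numbers": [1, 2, 3]}\n' * 5000)

for level in (1, 3, 10, 19):
    compressed = zstandard.ZstdCompressor(level=level).compress(data)
    print(f"zstd level {level:2d}: {len(data)} -> {len(compressed)} bytes")
```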
I have an overnight cron job that flattens my duplicity backups: the many incremental backups made over the course of a day become a single full backup, which becomes the new baseline. Subsequent backups during the day are then incremental against that full backup, so I always have a full backup for each individual day with only a dozen or so incrementals tacked onto it.
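Roughly this shape, as a sketch with stock duplicity actions; the source directory, target URL, and retention count are placeholders, and the nightly step starts a fresh full chain rather than literally merging the deltas:

```python
# Sketch of the "one full backup per day, incrementals during the day"
# pattern with stock duplicity actions. Source dir, target URL, and the
# retention count are placeholders.
import subprocess

SRC = "/home/user"
TARGET = "file:///backups/duplicity"   # any duplicity-supported target URL

def nightly_full() -> None:
    # Start a fresh chain each night instead of replaying the day's deltas.
    subprocess.run(["duplicity", "full", SRC, TARGET], check=True)
    # Keep only the most recent few full chains (and their incrementals).
    subprocess.run(
        ["duplicity", "remove-all-but-n-full", "7", "--force", TARGET],
        check=True,
    )

def daytime_incremental() -> None:
    subprocess.run(["duplicity", "incremental", SRC, TARGET], check=True)
```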
Same for me. Also, on macOS duplicity was consuming much more CPU than Borg and was causing my fan to spin loudly. Eventually I moved to Time Machine, but I still consider Borg a very good option.
SU was always one of the many aggregators in the addth.is toolbar, alongside places like Reddit. They both serve the same function of making the Internet more discoverable (noting that early Reddit didn't have comments).