SomaticPirate's comments

How old were you when you stopped? What made you realize you had an addiction?

Around 30 to 35, don't remember which year it was exactly.

And the biggest thing that made me go, "Yeah, this isn't good for me, I need to quit," was that it was consuming my thoughts all the time. When I wasn't in front of the computer gaming, I was thinking about the game and planning the strategy for my next move. (I usually played turn-based games rather than action games.) Which is fine in small doses, but it was taking over my mind when I was at church wanting to focus on worshiping God, when I was at work (and distracting me from getting work done), when I was trying to read...

Basically, I realized that it was an unhealthy focus for me, and taking over way too much of my attention that I wanted to be able to spend on a much wider variety of things. So I quit. First year was the hardest, second and third years were hard too, but by now I've gotten used to reaching for a book to read rather than a game. And the book, I can put down anytime I need to, without feeling that empty-ish feeling that says "Awww, I want to get back into the game..." That letdown when I exited the game was another clue, BTW: it matched how I'd heard drug addicts (specifically, former addicts who had kicked their habit) describe the feeling of coming down off a high. I've never used drugs myself so I can't compare it directly, but it was similar enough to the descriptions I'd heard from them that I said "okay, that's probably not a good sign either."


Are there archives of this? I have no doubt that after this post goes viral some of these files might go "missing". Having a large number of conspiracies validated has led me to firmly plant my aluminum hat.


This makes sense for HPC and ML workloads: big batch jobs where you are pushing the hardware and having everything local is a clear advantage. Also, this company sells hardware, so it makes sense for them to have hardware experience. I still think that for the majority on here, needing to make a physical phone call to their data center team (!!) to rack a server is a nutty proposition. You think the AWS API is slow? Try calling Steve. If you have fixed compute costs after a year, sure, look at pulling some stuff on prem.

It isn't, though. It crossed the chasm when Steve (who, I would like to think, is somewhat comfortable after writing a book and holding director-level positions at several startups) decided to endorse an outright crypto pump-and-dump.

When he decided to monetize the eyeballs on the project instead of anything related to the engineering. A token which, of course, Steve isn't smart enough to understand (in his own words) and which he recommends you not buy, yet he still makes a tidy profit from it.

It's a memecoin now... that has a software project attached to it. Anything related to engineering died the day he failed to disavow the crypto BS and instead started shilling it.

Whatever happened to engineers calling out BS as BS?



My favorite part about that is that gas town is supposedly so productive this guy's sleep patterns are affected by how much work he's doing, yet he took the time to physically go to a bank to get a five-figure payout.

It makes it difficult to believe that gas town is actually producing anything of value.

I also lol at his bitching about how the bank didn't let him do the transactions instantly, even as he himself describes how much of a scam this seems and how the worst case is his bank account being drained, as if banks don't have a self-interest in protecting their clientele from such scams.


Convicted 5 times... if this were a natural person, it stands to reason their license to operate a motor vehicle would be revoked. However, a "corporate" person faces no such consequences. What is the equivalent of jail for these "corporate" entities who are more than happy to pay fines?


Don’t forget the apparent crypto grift angle now (something related to BAGS)

Ridiculous. Beads might be passable software, but gas town just appears to be a good way to burn tokens at the moment.


Is there a guide on how to do this? I haven’t ever used the raw hypervisor.


A quick Kagi search revealed this: https://briancallahan.net/blog/20250222.html; perhaps it might work for you too?


This compares VMware Fusion to VirtualBuddy.


It should just be a matter of producing a kernel and, if necessary, RAM disk that can be booted the same way as Linux.


“just” is doing a lot of work in that sentence.


Yes and no; kernels aren’t magic, and “change how this kernel is loaded to match how Linux does it” is actually a reasonable first assignment for an Operating Systems class at a top-tier school. (You’re basically just creating an alternative `main()` if you don’t need a RAM disk image from which to load drivers.)
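
To make that concrete, here's a minimal sketch of such an entry point on bare metal (assuming something like QEMU's RISC-V "virt" board; the kmain name and the UART address are illustrative, not from any particular kernel or course):

    /* Freestanding kernel entry: the boot loader jumps here instead of
       a libc-provided _start. UART0 is the memory-mapped serial port on
       QEMU's riscv "virt" machine; adjust for your target. */
    #define UART0 ((volatile unsigned char *)0x10000000)

    static void uart_puts(const char *s) {
        while (*s)
            *UART0 = *s++;        /* busy-write each byte to the UART */
    }

    void kmain(void) {            /* the "alternative main()" */
        uart_puts("hello from the kernel\n");
        for (;;)
            ;                     /* nothing to return to */
    }

Link it at the load address your boot protocol expects, hand it to the loader the way you'd hand it a Linux image, and you have a (trivial) kernel that boots.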


It's a first assignment if you are talking about a computer from 1990.


What, pray tell, would you do for a first assignment in an Operating Systems class at a top-tier school that actually involves making changes to realistic operating system code?


This is the set of assignments they do at the University of Illinois (a top-10 computer engineering school): https://courses.grainger.illinois.edu/ece391/fa2025/assignme...

It looks roughly the same as when I took it 15 years ago, except they switched from x86 to RISC-V. Honestly, what you're describing sounds too difficult for a first assignment. Implementing IRQ handlers or syscalls on an existing codebase is far more realistic, plausible, and useful.


I had to implement system calls in xv6.

You can look up which top-tier schools use it for OS classes.
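
For a flavor of what the exercise involves, here's a hedged sketch based on the public xv6-riscv sources (the getnproc syscall itself is hypothetical, just a trivial read-only call that counts live processes; NPROC, UNUSED, and the proc table are xv6's):

    /* 1. Pick the next free number in kernel/syscall.h, e.g.
          #define SYS_getnproc 22
       2. Add the table entry in kernel/syscall.c and a user-space stub
          via user/usys.pl.
       3. Implement the kernel side, e.g. in kernel/proc.c, where the
          global proc[] table is in scope: */
    uint64
    sys_getnproc(void)
    {
      struct proc *p;
      int n = 0;

      for (p = proc; p < &proc[NPROC]; p++) {
        acquire(&p->lock);        /* don't race with fork/exit */
        if (p->state != UNUSED)
          n++;
        release(&p->lock);
      }
      return n;
    }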


At the risk of getting further off-topic: what sort of system calls did they have you implement? I’ve never done but a tiny bit of kernel hacking and that sounds like a good exercise, but I’m not sure what would be a good first syscall to add.


Try asking your favorite LLM. It will even guide you with a small curriculum.


Advice like this, and then people wonder why they’re lonely.


I don't know… people were lonely before LLMs. And, they're right, this is a question one could easily paste into a frontier model and get back info that's way more useful than the significant majority of blog posts or replies would give! shrug But also I'd still like to hear what fooker has to say!


Oh, is that what MIT’s using these days?


Then one needs to launch it. Not sure if there are any launcher UIs out there, or if one has to write custom code for that.


Parallels will run a VM that can (manually) boot bsd.rd from the EFI shell if you stick BOOTAA64.EFI and bsd.rd on a FAT32, GUID-formatted .dmg, connect it to the VM, then boot to the EFI shell. Type:

    connect -r
    map -r
    fs0:
    bootaa64.efi
    boot bsd.rd
Then you'll be in the OpenBSD installer, having booted an OpenBSD kernel.

You can grab the files from: https://ftp.openbsd.org/pub/OpenBSD/snapshots/arm64/

Actually installing the system is left as an exercise for the reader.


My point is that as long as OpenBSD can boot like Linux, you just have to tell whatever VM front-end you’re using that you’re booting a Linux but give it an OpenBSD kernel and RAM disk.

Traditionally BSD has booted very differently than Linux, because Linus adopted the same boot process as MINIX when he first developed it (since he was actually using the MINIX boot blocks at first).

BSD has historically used a bootstrap that understands V7FS/FFS and can load a kernel from a path on it. MINIX takes the actual kernel and RAM disk images as parameters so it doesn’t need to know about filesystems, and that tradition continued with Linux bootstraps once it was standalone.


Who else was rdev'ing the Linux kernel to tell it where the root ext2(?) partition was, long before RAM disks were in use? Like with SLS or MCC?
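
For the unfamiliar: rdev didn't pass a boot-time parameter at all; it patched the root device number directly into the kernel image, two bytes at a fixed offset near the end of the boot sector (offset 508, per the old rdev(8) man page). A stripped-down sketch of the idea, with most error handling omitted:

    /* rdev-lite: patch a root device number into a Linux kernel image
       at offset 508 (the word just before the 0xAA55 boot signature).
       Illustrative only; the real rdev also handled VGA mode, RAM disk
       size, etc. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <kernel-image> <hex-devno>\n", argv[0]);
            return 1;
        }
        uint16_t dev = (uint16_t)strtoul(argv[2], NULL, 16); /* e.g. 0x0306 = /dev/hda6 */
        int fd = open(argv[1], O_WRONLY);
        if (fd < 0 || pwrite(fd, &dev, sizeof dev, 508) != sizeof dev) {
            perror(argv[1]);
            return 1;
        }
        return close(fd);
    }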


Originally Linux had the Minix filesystem, followed by ext; ext2, by Rémy Card, wouldn't make an appearance until 1993. So it depends on when you were using it.


I have a friend who has now gotten several out-of-pocket MRIs, essentially against medical advice, because she believes her persistent headaches are from brain cancer.

Even after the first MRI essentially ruled this out, she fed the MRI to ChatGPT, which basically hallucinated that a small artifact of the scan was actually a missed tumor and that she needed another scan. Thousands wasted on pointless medical expenses.

Having friends in healthcare, they have mentioned how common this is now: someone coming in and demanding a set of tests based on ChatGPT. They have explained that (a) tests with false positives can actually be worse for you (they trigger even more invasive tests), and (b) insurance won't cover any of your ChatGPT-requested tests.

Again, being involved in your care is important, but disregarding the medical professional in front of you is a great way to set yourself up for substandard care.


Seeing a ton of adoption of this after the Minio debacle.

https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... was useful.

RustFS also looks interesting but for entirely non-technical reasons we had to exclude it.

Anyone have any advice for swapping this in for Minio?


I have not tried either myself, but I wanted to mention that Versity S3 Gateway looks good too.

https://github.com/versity/versitygw

I am also curious how Ceph S3 gateway compares to all of these.


When I was there, DigitalOcean was writing a complete replacement for the Ceph S3 gateway because its performance under high concurrency was awful.

They completely swapped that service out of the stack and wrote a new one in Go because of how much better the concurrency management was, and because Ceph's team and C++ codebase were too resistant to change.


Unrelated, but one of the more annoying aspects of whatever software they use now is the lack of IPv6 for the CDN layer of DigitalOcean Spaces. It means I need to proxy requests myself. :(


I'd be curious to know how versitygw compares to rclone serve S3.


Disclaimer: I work on SeaweedFS.

Why skip SeaweedFS? It ranks #1 on all the benchmarks and has a lot of features.


I can confirm this: I used SeaweedFS to serve 1M users daily with 56 million images / ~100TB on just 2 servers with HDDs only, which Minio couldn't do. SeaweedFS performance is much better than Minio's. The only problem is that the SeaweedFS documentation is hard to understand.


SeaweedFS is also so optimized for small objects that it can't store larger objects (max 32 GiB[1]).

Not a concern for many use-cases, just something to be aware of as it's not a universal solution.

[1]: https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#st...


Not correct. Files are chunked into smaller pieces and spread across all the volume servers.


Well, then I suggest updating the incorrect readme. It's why I've ignored SeaweedFS.


SeaweedFS is very nice, and it takes quite an effort to lose data with it.


Can you link the benchmarks?


It is in the parent comment.


> but for entirely non-technical reasons we had to exclude it

Able/willing to expand on this at all? Just curious.


They seem to have gone all-in on AI, for commits and ticket management. Not interested in interacting with that.

Otherwise, the built-in admin UI in a single executable was nice, as was the support for tiered storage, but single-node parallel write performance was pretty unimpressive and it started throwing strange errors (the investigation of which led to the AI ticket discovery).


Not the same person you asked, but my guess would be that it is seen as a Chinese product.


RustFS appears to be very early-stage with no real distributed systems architecture: https://github.com/rustfs/rustfs/pull/884

I'm not sure if it even has any sort of cluster consensus algorithm? I can't imagine it not eating committed writes in a multi-node deployment.

Garage and Ceph (well, radosgw) are the only open-source S3-compatible object stores that have undergone serious durability/correctness testing. Anything else will most likely eat your data.


Hi there, RustFS team member here! Thanks for taking a look.

To clarify our architecture: RustFS is purpose-built for high-performance object storage. We intentionally avoid relying on general-purpose consensus algorithms like Raft in the data path, as they introduce unnecessary latency for large blobs.

Instead, we rely on Erasure Coding for durability and Quorum-based Strict Consistency for correctness. A write is strictly acknowledged only after the data has been safely persisted to the majority of drives. This means the concern about "eating committed writes" is addressed through strict read-after-write guarantees rather than a background consensus log.
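
To make the acknowledgment rule concrete, here is a minimal sketch of the idea (illustrative C, not our actual code; the shard count and the persist_shard helper are made up for the example):

    /* Acknowledge a write only after a strict majority of the N
       erasure-coded shard writes has been durably persisted. */
    #include <stdbool.h>
    #include <stddef.h>

    #define N_SHARDS     12
    #define WRITE_QUORUM (N_SHARDS / 2 + 1)   /* strict majority */

    /* Stand-in for "encode shard i and fsync it to drive i". */
    bool persist_shard(int drive, const void *shard, size_t len);

    bool quorum_write(const void *shards[N_SHARDS], size_t len)
    {
        int ok = 0;
        for (int i = 0; i < N_SHARDS; i++)
            if (persist_shard(i, shards[i], len))
                ok++;
        /* Below quorum, fail the request: a client must never see an
           acknowledgment for data that could later be lost. */
        return ok >= WRITE_QUORUM;
    }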

While we avoid heavy consensus for data transfer, we utilize dsync—a custom, lightweight distributed locking mechanism—for coordination. This specific architectural strategy has been proven reliable in production environments at the EiB scale.


Is there a paper or some other architecture document for dsync?

It's really hard to solve this problem without a consensus algorithm in a way that doesn't sacrifice something (usually correctness in edge cases/network partitions). Data availability is easy(ish), but keeping the metadata consistent requires some sort of consensus, either using Raft/Paxos/..., using strictly commutative operations, or similar. I'm curious how RustFS solves this, and I couldn't find any documentation.

EiB scale doesn't mean much - some workloads don't require strict metadata consistency guarantees, but others do.


What is this based on, honest question as from the landing page I don't get that impression. Are many committers China-based?


https://rustfs.com.cn/

> Beijing Address: Area C, North Territory, Zhongguancun Dongsheng Science Park, No. 66 Xixiaokou Road, Haidian District, Beijing

> Beijing ICP Registration No. 2024061305-1


Oh, I misread the initial comment and thought they had to exclude Garage. Thanks!


I’m Elvin from the RustFS team in the U.S. Thanks for sharing the benchmark; it’s helpful to see how RustFS performs in real-world setups.

We know trust matters, especially for a newer project, and we try to earn it through transparency and external validation. We were excited to see RustFS recently added as an optional service in Laravel Sail's official Docker environment (PR #822). Having our implementation reviewed and accepted by a major ecosystem like Laravel was an encouraging milestone for us.

If the “non-technical reasons” you mentioned are around licensing or governance, I’m happy to discuss our long-term Apache 2.0 commitment and path to a stable GA.


From what I have seen in previous discussions here (since and before the Minio debacle) and at work, Garage is a solid replacement.


Seaweed looks good in those benchmarks; I haven't heard much about it for a while.


Wow, "hardened image" market is getting saturated. I saw atleast 3 companies offering this at Kubecon.

Chainguard came to this first (arguably by accident, since they had several other offerings before they realized that people would pay (?!!) for an image that reported zero CVEs).

In a previous role, I found that the value of this for startups is immense. Large enterprise deals can quickly be killed by a security team that replies with "scanner says no". Chainguard offered images that report 0 CVEs, which would basically remove this barrier.

For example, a common CVE that I encountered was a High-severity glibc CVE. We could pretty convincingly show that our app did not use this library in a way that made it vulnerable, but it didn't matter; a High CVE is a full stop for most security teams. We migrated to a Wolfi image and the scanner reported 0. Cool.

But with other orgs like Minimus (from the founders of Twistlock) coming into this, it looks like it's about to be crowded.

There is even a govt project called Ironbank that offers something like this to the DoD.

Net positive for the ecosystem, but I don't know if there is enough meat on the bone to support this many vendors.


The real question isn't whether the market is saturated; it's whether it still exists once Docker gives away the core value prop for free.


Given Docker's track record it won't be free indefinitely; this is a move to gauge demand and generate leads.



Most likely yes. There are a lot of enterprises out there that only trust paid subscriptions.

Paying for something "secure" comes with the benefit of risk mitigation: we paid X to give us a secure version of Y, hence it's not our fault "bad thing" happened.


Counterpoint: most likely no. It really is about all the downstream impacts of critical and high findings in scanners, such as the risk of failing a SOC 2 audit. Once that risk is removed, the value prop is also removed.


F500s trust the paid subscriptions because it means you can escalate the issue -- you're now a paying client so you get support if/when things explode -- and that also gives you a lever to shift blame or ensure compliance.

I recall being an infra lead at a Big Company that you've heard of and having to spend a month working with procurement to get something like 6 Mirantis/Docker licenses to do a CCPA compliance project.


I don't think this is the case here. The reason you want to lower your CVEs is to say "we're compliant" or "it's not our fault a bad thing happened; we use hardened images." Paying doesn't really change that: your SOC 2 audit doesn't ask how much you spent, it asks what your patching policy is. This makes that checkbox free.


Yep, differentiation is tricky here. Chainguard is expanding out to VM images and programming-language repos, but for the core offering of hardened container images there are a lot of options.

The question I'd be interested in is: outside of markets where there are a lot of compliance requirements, how much demand is there for this as a paid service?

People like lower-CVE images, but are they willing to pay for them? I guess that's an advantage of Docker's offering: if it's free, there is less friction to trying it out compared to a commercial offering.


If you distribute images to your customers, it is a huge benefit not to have them come back with CVEs that really don't matter but are still going to make them freak out.


Even if you do SaaS, some customers will ask you about known vulnerabilities in your images, and making it easy to show a quick remediation schedule can make deals easier to close.


> outside of markets where there's a lot of compliance requirements

That includes anyone who wants to sell to the US government (and probably other governments as well).

FedRAMP essentially[1] requires using "hardened" images.

[1]: It isn't strictly required, but without it, things like passing security scans and FIPS compliance are more difficult.


Depends on what type of shop. If you're in a big dinosaur org and you 'roll your own' something that ends up having a vulnerability, you get fired. If you pay someone else and it ends up having a vulnerability, you get to blame it on the vendor.


Perhaps in theory, but I'd be willing to wager that most dinosaur orgs have so many unpatched vulns that they would need to fire everyone in their IT org to cover just the criticals.


> There is even a govt project called Ironbank to offer something like this to the DoD.

Note that you don't have to be DoD to use Iron Bank images. They are available to other organizations too, though you do have to sign up for an account.


Many IronBank images have CVEs because many are based on ubi8/9, and while some have ubi8/9-micro bases, there are still CVEs. IronBank will disposition the criticals and highs. You can access their Vulnerability Tracking Tool and get a free report.

Some images, like Vault, are pretty bare (e.g., no shell).


Ironbank was actually doing this before Chainguard existed, and as another commenter mentioned, it's not restricted to the DoD and is also free for anyone to use, though you do need an account.

My company makes its own competing product that is basically the same thing, and we (and I specifically) were pretty heavily involved in early Platform One. We sell it, but it's basically just a free add-on to existing software subscriptions, an additional inducement to make a purchase, and it costs nothing extra on its own.

In any case, I applaud Docker. This can be a surprisingly frustrating thing to do, because you can't always just rebase onto your pre-hardened base image and still have everything work, without taking some care to understand the application you're delivering, which is not your application. It was always my biggest complaint with Ironbank and why I would not recommend anyone actually use it. They break containers constantly because hardening to them just means copying binaries out of the upstream image into a UBI container they patch daily to ensure it never has any CVEs. Sometimes this works, but sometimes it doesn't, and it's fairly predictable: every time Fedora takes a new glibc version that RHEL doesn't have yet, everything that links against it starts segfaulting when you try to copy from one to the other. I've told them this many times, but they still don't seem to get it and keep doing it. Plus, they break tags with the daily patching of the same application version, and you can't pin to a sha because Harbor only holds onto three orphaned shas that are no longer associated with a tag.

So, the short and long of it: I don't know about meat on the bone, but there is real demand and it's growing, at least in any kind of government or otherwise regulated business, because the government itself is mandating better supply-chain provenance. I don't think it entirely makes sense, frankly. The end customers don't seem to understand that, sure, we're signing the container image because we "built" it in the sense that we put together the series of tarballs described by a json file, but we're also delivering an application we didn't develop, on a base image full of upstream GNU/Linux packages we also didn't develop, and though we can assure you all of our employees are US citizens living in CONUS, we're delivering open source software that has been contributed to by thousands of people from every continent on the planet, stretching decades into the past.

Unfortunately, a lot of customers and salespeople alike don't really understand how the open source ecosystem works and expect and promise things that are fundamentally impossible. Nonetheless, we can at least deliver the value inherent in patching the non-application components of an image more frequently than whoever creates the application and puts the original image into a public repo. I don't think that's a ton of value, personally, but it's value, and I've seen it done very wrong with Ironbank, so there's value in doing it right.

I suspect it probably has to be a free add-on to some other kind of subscription in most cases, though. It's hard for me to believe it can really be a viable business on its own. I guess Chainguard is getting by somehow, but it also kind of feels like they're an investor darling getting by on their founders' reputations and past work more than on the current product. It's the container ecosystem equivalent of selling an enterprise Linux distro, and I guess at least Red Hat, SUSE, and Canonical have all managed to do that, but not by just selling the Linux distro. They need other products plus support and professional services.

I think it's a no-brainer for anyone already selling a Linux distro to do this on top of it, though. You've already got the build infrastructure and organizational processes and systems in place.


CEO of VulnFree here.

I've been in contact with some of the security folks at Iron Bank. The last time we dug into Iron Bank images, they were simply worse than what most vendors offered. They just check the STIG box.


CEO of VulnFree here.

I'm not sure if Chainguard was first, but they did come early. The original pain point we looked into when building our company was pricing, but we've since pivoted because there are significant gaps in the market that remain unaddressed.

