crapple8430's comments | Hacker News

The only way to avoid that is if that $100 buys you actual ownership, like the ability to have your own secure boot keys and modify the software. So long as Apple still owns your phone, they can alter the deal, and there is nothing you can do about it.

Perhaps not even that is completely safe long term, as companies can introduce a locked-down dependency, reverse policies (see Google's recent sideloading stance), or find other workarounds.

This. If you pay them $100 for no ads, they'll just come back next quarter to ask for another $100, unless you actually own your device, i.e. are able to modify its software to actually enforce your rights.

Or better, at the destination. If we just blind everyone, nudity ceases to be a problem.

> a whole chain of removed software freedom

Indeed. But this has already happened for most people. All non-jailbroken iPhones and most Androids cannot have their bootloaders unlocked, and even if they can, the stuff you can run is often still substantially controlled by Google.

Though in theory this can also be done below the OS, at the driver or device level. It's not hard to imagine a CNN running inside your display controller that detects a bounding box and blurs out the nudity. It'd still suck and be a middle finger to the owner, but I don't feel it'd be much worse than what's already there. Given the popularity of porn, I can easily imagine this sparking general public sentiment against all this nonsense.
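Purely as a hypothetical sketch of the idea (the detector, its output format, and the blur radius are all made up; nothing like this runs in a real display controller today):

    # Hypothetical: blur a region that an (assumed) on-device detector flagged.
    from PIL import Image, ImageFilter

    def blur_region(frame, bbox):
        """bbox is (left, top, right, bottom) in pixels."""
        region = frame.crop(bbox).filter(ImageFilter.GaussianBlur(radius=12))
        out = frame.copy()
        out.paste(region, bbox)
        return out

    frame = Image.new("RGB", (1920, 1080), "gray")          # stand-in for a video frame
    censored = blur_region(frame, (600, 300, 1300, 800))    # box "detected" by the CNN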


Sovereignty also means responsibility. Either you keep your network secure yourself, or you pay someone else to do it (not always very well); otherwise you get security problems. The same goes for redundant backups, hardware maintenance, etc.

You can run rclone every couple of minutes on your NAS; it checks mtimes like rsync, so it is reasonably efficient for most cases, though you may run into rate limits with bigger datasets.
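A minimal sketch of the kind of loop I mean (the paths and remote name are placeholders; assumes rclone already has a remote configured):

    # Hypothetical periodic sync from a NAS directory to a configured remote.
    import subprocess, time

    SRC = "/srv/data"         # local directory on the NAS (placeholder)
    DST = "backup:nas-data"   # an rclone remote set up beforehand (placeholder)

    while True:
        # "rclone sync" only re-transfers files whose size/mtime changed,
        # much like rsync, so most iterations do very little work.
        subprocess.run(["rclone", "sync", SRC, DST])
        time.sleep(300)       # every five minutes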

GPT-5 Pro is a good 10x more expensive, so it's an apples-to-oranges comparison.

And the driver will just carry two phones, and be even more distracted than before. Cool

While this is undoubtedly still an excellent deal, the comparison to the new price of an H100 is a bit misleading, since today you can buy a new, legit RTX 6000 Pro for about $7-8k and get similar performance, on at least the first two of the models tested. As a bonus, those can fit in a regular workstation or server, and you can buy multiple. This thing is not worth $80k, in the same way that any old enterprise equipment is not worth nearly as much as its price when new.

Fair points, but the deal is still great because of the nuances of the RAM/VRAM.

The Blackwells are superior on paper, but there's some "Nvidia Math" involved: when they report performance in press announcements, they don't usually mention the precision. Yes, the Blackwells are more than double the speed of the Hopper H100s, but that's comparing FP8 to FP4 (the H100s can't do native FP4). Yes, that's great for certain workloads, but not the majority.

What's more interesting is the VRAM speed. The 6000 Pro has 96 GB of GPU memory and 1.8 TB/s of bandwidth; the H100 has the same amount, but as HBM3 at 4.9 TB/s. That roughly 2.7x increase is very influential in the overall performance of the system.

Lastly, if it works, NVLink-C2C does 900 GB/s of bandwidth between the cards, about 5x what a pair of 6000 Pros could do over PCIe 5.0. Big LLMs need well over the 96 GB on a single card, so this becomes the bottleneck.

E.g., here are benchmarks on the RTX 6000 Pro using the GPT-OSS-120B model, where it generates 145 tokens/sec; I get 195 tokens/sec on the GH200: https://www.reddit.com/r/LocalLLaMA/comments/1mm7azs/openai_...


The perf delta is smaller than I thought it'd be given the memory bandwidth difference. I guess it likely comes from Blackwell having native MXFP4, since GPT-OSS-120B has MXFP4 MoE layers.
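Back-of-the-envelope with the numbers quoted in this thread (nothing measured by me, just the ratios):

    # Ratios from the figures quoted upthread.
    bw_gh200, bw_6000 = 4.9e12, 1.8e12   # quoted memory bandwidth, B/s
    tok_gh200, tok_6000 = 195, 145       # quoted tokens/sec

    print(bw_gh200 / bw_6000)    # ~2.7x more memory bandwidth
    print(tok_gh200 / tok_6000)  # ~1.3x more decode throughput

    # If decode were purely bandwidth-bound those ratios would be much
    # closer, so something else (native MXFP4 on Blackwell being the
    # obvious suspect) is closing part of the gap.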

The NVLink is definitely a strong point; I missed that detail. For LLM inference specifically it matters fairly little iirc, but for training it might.


GH200 has HBM3 memory. You cannot compare this to an RTX Pro 6000...

You do realize he has 2 H100s; you would need to buy 2 RTX 6000 Pros for $15-16k, plus the rest of the hardware. The RAM that came with that hardware is worth more than $7,000 now.

I think he is still correct in saying that the gear OP bought is worth much less now and is still losing value fast. See my comment above: https://news.ycombinator.com/item?id=46227813.

GPUs have such a short lifespan these days that it is really important to compare new vs. used.


This is hard to say for sure.

I had 4x 4090s that I'd bought for about $2,200 each in early 2023. I sold 3 of them to help pay for the GH200 and got $2k each.


Is it? The used data center P40s I bought for $150 two years ago went back up to $450 a few months ago; I sold one for $400. I just checked and the price is down to $200, so I'm still in profit. I bought MI50s for $90 less than a year ago; they are now going for $200. What deterioration? OP's gear cost far less and is no longer depreciating. It will probably hold this value for the next 4 years.

There are a lot of PC boards where the iGPU only exposes an HDMI 2.1 output, or pairs it with a DP 1.4 port. But DP 1.4 doesn't support some of the resolution/refresh combinations that HDMI 2.1 does. Normally this doesn't matter, but it can if you have, for example, the Samsung 57-inch dual-4K ultrawide.
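Rough link-budget math for why (my assumptions: 8 bpc RGB, roughly 3:1 DSC, blanking ignored, so treat this as a ballpark only):

    # Ballpark for 7680x2160 @ 240 Hz, 8 bpc RGB.
    width, height, refresh, bpp = 7680, 2160, 240, 24
    raw = width * height * refresh * bpp / 1e9   # ~95.6 Gbit/s uncompressed
    with_dsc = raw / 3                           # ~31.9 Gbit/s at ~3:1 DSC

    dp14_payload, hdmi21_payload = 25.92, 42.7   # effective payload rates, Gbit/s

    print(with_dsc <= dp14_payload)    # False: doesn't fit DP 1.4
    print(with_dsc <= hdmi21_payload)  # True: fits HDMI 2.1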

I think you'd have bigger issues trying to drive that monitor with an iGPU

The iGPU on my 9950X is perfectly capable of driving my Dell U4025QW 5k2k ultrawide. Yeah it would suck for any modern 3D games, but for productivity or light gaming it's fine.

It requires that I use the DisplayPort output on Linux, because I can't use HDMI 2.1. And because the motherboard has only one each of DisplayPort and HDMI, this limits my second screen.


It works fine with Intel and AMD iGPUs. They won't run many games at the native resolution though. Doesn't really matter to me, as the iGPUs are in work laptops, so 60 Hz or better passes for "adequate".

Even a Raspberry Pi 4 or newer has dual 4K outputs that can fill the entire screen at native resolution. Macs have been the worst to use with it so far.


I don't have one, but I suppose it would be just fine if you only use it for running a desktop environment.
