Hacker News: brian_herman's comments

I love this I use it all the time.

Agreed


Asahi Linux people are still working on support. They just posted support for the M3.


So it's useless to me.


You could contribute to the Asahi project, if it's such a problem for you that they haven't yet spent their OSS time budget on the latest M-series chip while they have a backlog.


The problem is Apple's vertical integration of software and hardware without much public documentation, which makes it very hard to develop open source for. And the fiasco between the wider Linux kernel community and the Rust camp, of course. marcan quit the Linux kernel because of the Rust drama last year (ironically, the one-year anniversary just passed).

PS: btw, I'm in the anti-Rust-in-the-Linux-kernel camp. I'm a Rust enthusiast, but I just don't believe the Linux kernel is the right place for it.

It's 2026 and there is still no Cargo support for building kernel modules. You still need linker script hacks to add the object file to the makefiles, and you tell me that is "out of experimental status". I asked for Cargo support in 2020, IIRC, and it is still not here... oh boy.

That means dependencies still have to be vendored by hand-picking them, and we cannot rely on scanning the dependency graph for GPL compatibility.
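If Cargo were in the loop, that kind of scan would be straightforward: `cargo metadata` already emits the full dependency graph as JSON, including each crate's declared license. A rough sketch of what automated checking could look like (the allow-list here is illustrative only, not exhaustive, and certainly not legal advice):

```python
import json
import subprocess


def gpl_incompatible(packages):
    """Return names of packages whose declared license string mentions
    none of the licenses on our (illustrative) GPLv2-compatible allow-list.
    Real checking would need proper SPDX expression parsing."""
    compatible = ("MIT", "BSD", "GPL-2.0", "ISC", "Zlib")
    flagged = []
    for pkg in packages:
        lic = pkg.get("license") or ""
        if not any(name in lic for name in compatible):
            flagged.append(pkg["name"])
    return flagged


def scan_workspace():
    # `cargo metadata` dumps the resolved dependency graph as JSON;
    # each entry in "packages" carries its Cargo.toml license field.
    out = subprocess.run(
        ["cargo", "metadata", "--format-version", "1"],
        capture_output=True, text=True, check=True,
    )
    return gpl_incompatible(json.loads(out.stdout)["packages"])
```

With hand-vendored dependencies there is no machine-readable graph to feed into anything like this, which is the point of the complaint.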

Redox would fare much better.


Or he could buy a laptop with a slightly slower chip that runs Linux well, and not pay money and time for the privilege of supporting a company that builds incompatible products.


Haha, he usually has to recompile Linux because of all the crazy stuff he does with Raspberry Pis.


Couldn't you buy a Mac Ultra with more memory for the same price?


This Asus box costs $3000, and the cheapest Mac Studio with the same amount of RAM costs $3500, or $3700 if you also match the SSD capacity.

You do get about twice as much memory bandwidth out of the Mac though.


What's the cheapest way to get the same memory and memory bandwidth as a Mac Studio but also CUDA support?


CUDA is only on Nvidia GPUs. I guess an RTX Pro 6000 would get you close; two of them are 192GB in total, with vastly higher memory bandwidth too. Maybe two or four of the older A100/A6000 cards could do the trick as well.


The RTX Pro does not have NVLink, however, because money. Otherwise, people might not have to drop $40,000 for a true inference GPU.


Somehow, it is still cheaper to own 10x RTX 3060s than it is to buy a 120GB Mac.


The Mac will be much smaller and use less power, though.


What do the introspection/debugging tools look like for Apple/Mac hardware when it comes to GPU programming?


Would almost be a no-brainer if the Mac GPU wasn't a walled garden.


Is that any different from Nvidia?


Yes? Apple does not document their GPUs or provide any avenue for low-level API design. They cut ties with Khronos, refuse to implement open GPU standards and deliberately funnel developers into a proprietary and non-portable raster API.

Nvidia cooperates with Khronos, implements open-source and proprietary APIs simultaneously, documents their GPU hardware, and directly supports community reverse-engineering projects like nouveau and NOVA with their salaried engineers.

Pretty much the only proprietary part is CUDA, and Nvidia emphatically supports the CUDA alternatives. Apple doesn't even let you run them.


The resale value shouldn't be ignored either; that Mac Studio will definitely resell for significantly more than this will. Not least because the Mac Studio is useful in all kinds of industries, whereas this is quite niche.


Oh, thanks for clarifying!


Cuda is king


Still? Really? Why?


Inertia. Almost everybody else was asleep at the wheel for the last decade and you do not catch up to that kind of sustained investment overnight.


Better support than MPS and nothing Apple is shipping today can compete with even the high end consumer CUDA devices in actual speed.


Presumably the second point is irrelevant if you're choosing among devices with unified memory.


It is not. Unified memory is not a panacea, it says nothing about the compute performance of the hardware.

The Spark's GPU gets ~4x the FP16 compute performance of an M3 Ultra GPU on less than half the Mac Studio's total TDP.
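Taking the quoted figures at face value, the perf-per-watt gap is even larger than the raw compute gap. A sketch of the arithmetic, using the rough ratios above as assumptions:

```python
# Illustrative arithmetic using the approximate ratios quoted above:
# ~4x FP16 throughput at under half the total TDP.
compute_ratio = 4.0  # Spark GPU FP16 throughput vs. M3 Ultra GPU (approx.)
tdp_ratio = 0.5      # Spark TDP vs. Mac Studio total TDP (upper bound)

# Dividing throughput by power: the advantage compounds.
perf_per_watt_ratio = compute_ratio / tdp_ratio
print(perf_per_watt_ratio)  # at least 8x perf/watt by these figures
```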


Right, but that doesn't describe a "high-end consumer CUDA device". Nothing under that description has unified memory.


Every CUDA-compatible GPU has had support for unified memory since 2014: https://developer.nvidia.com/blog/unified-memory-cuda-beginn...

Can you be a bit more specific what technology you're actually referring to? "Unified memory" is just a marketing term, you could mean unified address space, dual-use memory controllers, SOC integration or Northbridge coprocessors. All are technologies that Nvidia has shipped in consumer products at one point or another, though (Nintendo Switch, Tegra Infotainment, 200X MacBook to name a few).


They mean the ability to run a large model entirely on the GPU without paging it out of a separate memory system.
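To make that concrete, here is a back-of-envelope check of whether a model's raw weights fit in a given memory pool (the parameter counts and sizes are illustrative assumptions, and KV cache and runtime overhead are ignored):

```python
def weights_fit(params_billion, bytes_per_param, memory_gb):
    """Rough check: do the raw weights alone fit in the memory pool?
    Ignores KV cache, activations, and runtime overhead."""
    # 1e9 params * bytes/param gives bytes; in decimal GB that is
    # simply params_billion * bytes_per_param.
    weights_gb = params_billion * bytes_per_param
    return weights_gb <= memory_gb


# A 70B-parameter model at FP16 (2 bytes/param) needs ~140 GB for weights:
print(weights_fit(70, 2, 120))  # a 120 GB pool is not enough
print(weights_fit(70, 2, 192))  # 192 GB (e.g. two 96 GB cards) suffices
```

Once the weights don't fit, you are paging across a separate memory system, which is exactly the case unified memory avoids.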


They're basically describing the Jetson and Tegra lineup, then. Those were featured in several high-end consumer devices, like smart-cars and the Nintendo Switch.


Sure but neither had enough memory to be useful for large LLMs.

And neither were really consumer offerings.


Depends on whether you care how fast the result arrives. Image gen is a very different tool at <12 seconds per image vs. nearer to 1 minute.


For how shit it all is, it's still the easiest to use, with the most resources available when you inevitably need to dig through stuff. Things like the Nsight GUI and the available debugging options add up to a better developer experience than the other ecosystems offer. I do hope the competitors get better, though, because the current de facto monopoly helps no one.


My reasons for not choosing an Apple product for such a use-case:

1- I vote with my wallet. Do I want to pay a company to be my digital overlord, doing everything it can to keep me inside its ecosystem? I put in too much effort to earn my freedom to give it up that easily.

2- Software: I would almost certainly want to run Linux on this. Do I want something that has, or eventually will have, great mainstream Linux support, or something with closed specs that the people at Asahi try to support with incredible skill and effort? I prefer the system with openly available specs.

I've used Macs, iPhones, and iPads extensively over time. The only Apple device I ever bought was an iPad, and I would never have bought it had I known they deliberately disable multitasking on it.


Not disagreeing with any of your points, but this is a good trend, right?

https://github.com/apple/container

> container is a tool that you can use to create and run Linux containers as lightweight virtual machines on your Mac. It's written in Swift, and optimized for Apple silicon.


That would have been an impressive piece of technology in 2015, when WSL was theoretical. To release it in 2025 is a very bad trend, and it reflects Apple's isolation from competition and reluctance to officially support basic dev features.

Container does nothing to progress the state of supporting Linux on Apple Silicon. It does not replace macOS, iBoot or the other proprietary, undocumented or opaque software blobs on the system. All it does is keep people using macOS and purchasing Apple products and viewing Apple advertisements.


As long as XPath is still there, I approve.


Deno creates binary files from TypeScript: https://deno.com/blog/deno-compile-executable-programs


Nice


Awesome job nice!


thank you!


Unfortunately, voice actors will be replaced by something like this. Hopefully they will find something else to do.


I dunno. It's definitely a concern in the community. But real people are still getting work.

Audible has ruined their catalog listings with their "Virtual Voice" thing and no option to filter it out. These are mostly low-quality books narrated by subpar AI voices that don't sell at all, while making it extremely difficult to find quality new books to listen to.

