If I have learned one thing, it's that current corporate strategy is no guarantee for the future. If you want to buy a laptop now and want a great Linux experience, the M2 is a great option. But don't assume that M(n+1) will ever get support.
The same reasoning applies to any other laptop maker: Dell, Lenovo, Asus, Framework, HP, etc. might also decide to drop Linux support at any time.
You really thought the poster meant that Elon Musk personally went and implemented FSD? For the record, Musk also doesn't personally assemble every Tesla vehicle.
Well, if there are plenty of such posters, it should be easy to point me to five comments from different people who clearly believe Tesla R&D is a solo Elon Musk operation.
That applies within a single warp; notice the wording:
> In SIMT, all threads in the warp are executing the same kernel code, but each thread may follow different branches through the code. That is, though all threads of the program execute the same code, threads do not need to follow the same execution path.
This doesn't say anything about dependencies between multiple warps.
It's definitely possible; I am not arguing against that. I am just saying it's not as flexible or cost-free as it would be on a 'normal' von Neumann-style CPU.
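To make this concrete, here is a minimal CUDA sketch (my own illustration, not from the article; the kernel and its branches are made up): odd and even lanes of the same warp take different branches, so the hardware runs both paths one after the other with the inactive lanes masked off, whereas the same branch on a CPU is just a predicted jump.

    // Hypothetical kernel: lanes of one 32-wide warp diverge on i % 2,
    // so the warp executes the multiply path and the add path serially.
    __global__ void divergent(float *out, const float *in, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (i % 2 == 0)
            out[i] = in[i] * 2.0f;   // even lanes active here
        else
            out[i] = in[i] + 1.0f;   // odd lanes active here
    }
    // A branch that is uniform across a warp (e.g. on blockIdx.x alone)
    // does not pay this serialization cost; warps are scheduled
    // independently of one another.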
I would love to see Rust-based code that obviates the need to write CUDA kernels (including compiling to different architectures). It feels icky to use or introduce things like async/await in the context of a GPU programming model, which is very different from a traditional Rust programming model.
You still have to worry about different architectures and the streaming nature at the end of the day.
I am very interested in this topic, so I am curious to learn how the latest GPUs help manage this divergence problem.
TBH my Asahi M2 MacBook experience has been the best Linux experience I have ever had. It's night and day compared to the XPS 13 I had before, which was supposedly a well-supported laptop for Linux; you could even buy it with Ubuntu.
The only real drawbacks are no Thunderbolt, until recently no DisplayPort, and no x86 support. But I don't use x86-only apps enough for it to matter. No Thunderbolt sucks, though.
Having multiple hardware features broken isn’t anything close to my best Linux experience.
I've got a Framework 13 and literally nothing is broken; device firmware updates happen automatically through Linux. It's more integrated with the hardware than a Windows laptop.
One hardware feature, really. Besides Thunderbolt, there isn't anything that doesn't work. I happily give up Thunderbolt rather than accept the significantly worse SoC performance and screen of the Framework 13. The screen especially is terrible. When I purchased my MacBook, the Framework 13 was at the top of my list of alternatives, but I can't bear a bad screen. Note that I never use macOS; I purchased the MacBook with the goal of running Linux on it. The MacBook was simply one of the best-supported devices.
The problem really isn't the CPU cores themselves. In terms of ISA, each is a generic Arm core with just a tiny bit of proprietary extensions. The problem is all the peripherals: GPU, NPU, display, USB, Wi-Fi, HID, sound, etc. These all require custom drivers and reverse engineering.
While it's awesome that it runs, there doesn't seem to be GPU support yet: the screenshot reports the llvmpipe software renderer. From what I understand, there are significant differences between the M2 and M3 GPUs, so this is unlikely to be implemented soon, unless that original analysis turns out to be wrong.
Personally, I don't consider it "working" as a laptop on an Apple M3 unless you actually have GPU support. Software rendering just sucks, even with an SoC as powerful as the Apple M3.