Lee Pace is a first-rate actor, but I could not recognize him, or indeed most of the characters in this show, as representative of their roles. I struggled to suspend my disbelief. The show felt like it was written by people who imagined what it must have been like rather than people who had any experience of it. I still enjoyed it somewhat. Not Silicon Valley good, but okay.
I really liked the show despite Lee Pace's performance.
Pace really nails the intense Jobs vibe, but having seen his other work, it seems like it might not be 100% acting. There's a consistency to the off feeling he gives across roles.
Gordon's role was probably the most setting-accurate, but I do feel the story would have suffered if the entire cast had been realistic to 80s standards rather than translated into late-2010s sensibilities.
I'm always surprised Lee Pace doesn't get more recognition; I've loved a lot of his quirkier projects like Wonderfalls, Pushing Daisies, and Miss Pettigrew Lives for a Day, but it's not like he hasn't also been in mainstream things like The Hobbit and Guardians of the Galaxy.
He's in very heavy makeup in Guardians of the Galaxy (and his blink-and-you'll-miss-it cameo in Captain Marvel), and while you can get a good look at his face in The Hobbit, his character doesn't get much screentime and isn't especially prominent - and indeed I don't think the Hobbit trilogy turned any actors who weren't already household names into household names.
I love Lee Pace but there really hasn't been a blockbuster where he's front and center.
That's fair. I think his starring moment was really Pushing Daisies, but that kind of thing is not for everyone; even just the hyperreal aesthetic would be a barrier for some.
> I struggled to suspend my disbelief. The show felt like it was written by people who imagined what it must have been like rather than people who had any experience of it.
This! It's not a bad show but people calling it the Best Drama are wildly overselling it.
The lack of CUDA support on AMD is absolutely not that AMD "couldn't" (although I certainly won't deny that their software has generally been lacking); it's clearly a strategic decision.
Supporting CUDA on AMD would only build a bigger moat for NVidia; there's no reason to cede the entire GPU programming environment to a competitor. And indeed, this was a good gamble: as time goes on, CUDA has become less and less essential or relevant.
Also, if you want a practical path towards drop-in replacing CUDA, you want ZLUDA; this project is interesting and kind of cool, but the limitation to a C subset and the lack of replacement libraries (BLAS, DNN, etc.) make it not particularly useful in comparison.
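To make the "C subset" limitation concrete, here's a hedged sketch of roughly the kind of kernel a from-scratch compiler like this can be expected to handle - plain pointers, arithmetic, and thread indexing. The saxpy kernel below is purely illustrative, not taken from the project:

    // Illustrative C-subset CUDA kernel: raw pointers, arithmetic, and
    // thread indexing only - no C++ templates, no cuBLAS/cuDNN calls.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) {
            y[i] = a * x[i] + y[i];
        }
    }

The moment a real codebase reaches for something like cublasSgemm() or a cuDNN convolution, there's nothing on the other side to map it to, which is why the missing libraries hurt more than the language subset does.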
Even disregarding CUDA, NVidia has had like 80% of the gaming market for years without any signs of this budging any time soon.
When it comes to GPUs, AMD just has the vibe of a company that basically shrugged and gave up. It's a shame because some competition would be amazing in this environment.
Nvidia has a sprawling family of ARM APUs in the Tegra series, spanning machines from the original Jetson boards and the Nintendo Switch all the way to the GB10 that powers the DGX Spark and the robotics-targeted Thor.
There has been a rumor that some OEMs will be releasing gaming-oriented laptops with an Nvidia N1X Arm CPU plus some form of 5070-5080-ballpark GPU; obviously that's not x86 Windows, so it would be pushing the latest compatibility layer.
PlayStation and Xbox are two extremely low-margin, high-volume customers. Winning their bid means shipping the most units of the cheapest hardware, which AMD is very good at.
Agreed on ZLUDA being the practical choice. This project is more impressive as a "build a GPU compiler from scratch" exercise than as something you'd actually use for ML workloads. The custom instruction encoding without LLVM is genuinely cool though, even if the C subset limitation makes it a non-starter for most real CUDA codebases.
ZLUDA doesn't have full coverage though, which means only a subset of CUDA codebases can be ported successfully - they've focused on 80/20 coverage of core math.
They've already ceded the entire GPU programming environment to their competitor. CUDA is as relevant as it always has been.
The primary competitors are Google's TPUs, which are programmed using JAX, and Cerebras, which has an unrivaled hardware advantage.
If you insist on a hobbyist-accessible underdog, you'd go with Tenstorrent, not AMD. AMD is only interesting if you've already been buying Blackwells by the pallet and you're okay with building your own inference engine in-house for a handful of models.
Many open-source projects have turned out to be far better than their proprietary counterparts because open source doesn't have to please shareholders.
What sucks is that such projects at some point become too big and make so much noise that big tech buys them, and everybody gets fuck all.
All it takes to beat a proprietary walled garden is somebody with knowledge and the will to make things happen.
Linus, with git and Linux, is the perfect example of it.
Fun fact: BitKeeper said fuck you to the Linux community in 2005, and Linus created git within 10 days.
BitKeeper made their code open source in 2016, but by then nobody knew who they were lol
Well, isn't that the case with a few other things? FSR4 on older cards is one example right now. AMD still won't officially support it. I think they will, though. Too much negativity around it. Half the posts on r/AMD are people complaining about it.
Because FSR4 is currently slower on RDNA3 due to the lack of FP8 support in hardware, and switching to FP16 makes it almost as slow as native rendering in a lot of cases.
They're working the problem, but slandering them over it isn't going to make it come out any faster.
> Because FSR4 is currently slower on RDNA3 due to the lack of FP8 support in hardware, and switching to FP16 makes it almost as slow as native rendering in a lot of cases.
It works fine.
> They're working the problem, but slandering them over it isn't going to make it come out any faster.
You have insider info everyone else doesn't? They haven't said any such thing yet, last I checked. If that were true, they should have said that.
I think this is a good summary! And the configurable part turns out to be the main bit.
One of the fundamental differences between checks and code review bots is that you trade breadth for consistency. There are two things Continue should never, ever do:
1. find a surprise bug or offer an unsolicited opinion
2. fail to catch a commit that doesn't meet your specific standards
We do! Right now you can export some metrics as images, or share a link publicly to the broader dashboard. Will be curious if others are interested in other formats: https://imgur.com/a/7sgd81r