Interesting, I actually learned the Shoelace Bow (surgeon's)[1] from this site a couple of years ago and it's my go-to now for any shoelaces that don't lock tight or that I really need to stay tied (think running or backpacking)
Much less popular, but I switched to Kvaesitso from Nova about a month ago and it's been amazing, and it's open source. It's much more opinionated than Nova, but it matches how I used Nova, so I really enjoy it.
I am cautiously optimistic that this means even if thousands of these devices suddenly "light up" in an outage, the infrastructure should be able to handle them, right? Thoughts?
I for one think this is a great marketing opportunity. Even if you have the best gigabit fiber, at five dollars a month this is a no-brainer for a lot of people. If Starlink can have monthly recurring revenue for doing essentially nothing, why not? Also, it is probably easier to upsell to existing customers.
I want to know if this is any different from all of the AMD AI Max PCs with 128 GB of unified memory? The spec sheet says "128 GB LPDDR5x", so how is this better?
The GPU is significantly faster and it has CUDA, though I'm not sure where it'd fit in the market.
At the lower price points you have the AMD machines, which are significantly cheaper, even though they're slower and have worse support. Then there are Apple's machines with higher memory bandwidth, and even the Nvidia AGX Thor is faster in GPU compute at the cost of a worse CPU and networking. And at the $3-4K price point, even a Threadripper system becomes viable, which can take significantly more memory.
> The GPU is significantly faster and it has CUDA,
But (non-batched) LLM processing is usually limited by memory bandwidth, isn't it? Any extra speed the GPU has is not used by current-day LLM inference.
I believe inference itself is bandwidth limited; prompt processing and other tasks, on the other hand, need the compute. As I understand it, the workstation as a whole is also focused on the local development process before readying things for the datacenter, not just running LLMs.
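The bandwidth-bound point can be made with a back-of-the-envelope roofline: during single-stream decoding, every generated token has to stream essentially all the model weights from memory, so memory bandwidth caps tokens per second regardless of how fast the GPU is. A minimal sketch (all numbers below are hypothetical, not specs of any particular machine):

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on single-stream decode speed when memory-bound:
    each token requires reading ~all weights once, so
    tokens/sec <= bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical example: a 70B-parameter model quantized to 8 bits
# (~70 GB of weights) on a machine with ~273 GB/s memory bandwidth.
print(round(decode_tokens_per_sec(273, 70), 1))  # ~3.9 tokens/sec ceiling
```

Prompt processing is different: it runs over many tokens at once, which turns into large matrix multiplies where raw compute (and batched serving, for the same reason) actually matters.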
Thanks for that - yes, I haven’t quite gotten on the “just use AI search for everything now” bandwagon, but of course it makes a lot of sense that it’d be in there somewhere.
Guess I’m gonna go to a local service place with this PDF and the TV and see what they can do. I’m filled with anticipation for the day that I can boot up a terminal on Sony’s first TV and include it in one of my exhibits.
I do retro computing exhibits, in case you were wondering why I have all this junk… ;)
[1] https://www.animatedknots.com/shoelace-bow-knot-surgeons