Hacker News | luketaylor's comments

WSJ published a video yesterday with the first pictures of those servers: https://twitter.com/yiningkarlli/status/2026176857541075274

It looks like they're cramming 32 Apple Silicon SOCs into each server - they're on upright daughterboards attached to both sides of the heatsinks. That's a lotta chips.

If those are M3 Ultras that’d make 1024 CPU cores per 2U server.

512 with M4 Max is only a little above a dual-socket Epyc with 192 cores each (384 total), though.
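A quick sketch of the arithmetic behind these core counts, taking the 32-SoC-per-server figure from the video and Apple's published top-spec core counts (32 CPU cores for M3 Ultra, 16 for M4 Max) as assumptions:

```python
# Back-of-envelope core counts, assuming 32 SoCs per 2U server.
SOCS_PER_SERVER = 32
CORES_PER_SOC = {"M3 Ultra": 32, "M4 Max": 16}

for chip, cores in CORES_PER_SOC.items():
    total = SOCS_PER_SERVER * cores
    print(f"{chip}: {total} CPU cores per server")

# Dual-socket Epyc comparison point: 2 sockets x 192 cores each.
print(f"Dual Epyc: {2 * 192} CPU cores per server")
```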


man what I would give for one of those servers

This whole repository is a bunch of vibe-coded boilerplate that includes almost none of the core functionality it claims to provide. The README is generic slop, and the “performance metrics” (“Pose Detection Accuracy”; “Person Tracking Accuracy”) appear to be completely invented / hallucinated. In other words, it isn’t real.


The current plan was announced here a few weeks ago: https://www.usda.gov/about-usda/news/press-releases/2025/06/...


Referring to this type of optimization program just as “AI” in an age where nearly everyone will misinterpret that to mean “transformer-based language model” seems really sloppy


Referring to this type of optimization as AI in the age where nearly everybody is looking to fund transformer-based language models and nobody is looking to fund this kind of optimization is just common sense though.


You are both right. Because the term "AI" is so vague and can mean so many things, it will be used and abused in various ways.

For me, when someone says, "I'm working on AI", it's almost meaningless. What are you doing, actually?


I think it's actually this repo:

https://github.com/artificial-scientist-lab/GWDetectorZoo/

Nothing remotely LLM-ish, but I'm glad they used the term AI here.


How can one article be expected to fix the problem of people sloppily using “AI” when they mean LLM or something like that?


I use "ML" when talking about more traditional/domain specific approaches, since for whatever reason LLMs haven't hijacked that term in the same way. Seems to work well enough to avoid ambiguity.

But I'm not paid by the click, so different incentives.


I like that.

AI for attempts at general intelligence. (Not just LLMs, which already have a name … “LLM”.)

ML for any iterative inductive design of heuristical or approximate relationships, from data.

AI would fall under ML, as the most ambitious/general problems. It's likely best treated as time (year) relative, i.e. a moving target, since the quality of general models continues to improve in breadth and depth.


Generative AI vs artificial neural network is my go-to (though ML is definitely shorter than ANN, lol).


Huge amounts of ML have nothing to do with ANNs, and transformers are ANNs.


I stand corrected! What are your go-tos?


Not the person you're replying to, but there are tons of models that aren't neural networks. Triplebyte used to use random forests [1] to decide whether to pass or fail a candidate given a set of interview scores. There are a bunch of others, though, like naive Bayes [2] or k-nearest-neighbors [3]. These approaches tend to need much smaller training sets and much less compute than neural networks, at the cost of being substantially less complex in their reasoning (but you don't always need complexity).

[1] https://en.wikipedia.org/wiki/Random_forest

[2] https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Trainin...

[3] https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
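As a toy illustration of the k-nearest-neighbors idea from [3], here's a dependency-free sketch; the interview-score data and labels are entirely made up, not anyone's actual hiring model:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (squared Euclidean distance, stdlib only)."""
    by_dist = sorted(
        train,
        key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)),
    )
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical data: (per-section interview scores, pass/fail label).
train = [
    ((4, 4, 3), "pass"), ((3, 4, 4), "pass"), ((4, 3, 4), "pass"),
    ((1, 2, 1), "fail"), ((2, 1, 2), "fail"), ((1, 1, 2), "fail"),
]

print(knn_predict(train, (4, 4, 4)))  # near the "pass" cluster
print(knn_predict(train, (1, 2, 2)))  # near the "fail" cluster
```

No training step at all here, which is the point: methods like this trade model sophistication for tiny data and compute requirements.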


Just do not use AI for anything except LLMs anymore, the same way the crypto scam has taken the word crypto.

crypto must now be named cryptography and AI must now be named ML to avoid giving the scammers and hypers good press.


> AI must now be named ML

You just made a lot of 20th century AI researchers cry.


Ocaml users too. And Haskell.


Yep. I dislike it just as much as ceding crypto, but at the end of the day language changes, and clarity matters.

I think image and video generation that aren't based on LLMs can also use the term AI without causing confusion.


Just don't use the term AI. It has no well defined meaning and is mostly intended as a marketing term


So, "don't do marketing" is your advice?


Correct, "an editorially independent online publication launched by the Simons Foundation in 2012 to enhance public understanding of science" shouldn't be doing marketing and contributing to the problem.


By doing its part and using the term correctly.

The real problem is not people using the term incorrectly, it's papers and marketing material using the term incorrectly.


Let's be real here, the people with the money bags don't care either.


Thinking "nearly everyone" has that precise definition of AI seems way more sloppy. Most people still haven't even heard of OpenAI and ChatGPT, and among those who have, they've probably heard stories about AI in science fiction. My definition of AI is any advanced computer processing, generative or otherwise, that's happened since we got enough computing power and RAM to do something about it, aka lately.


Then that definition is at odds with how the field has used it for many decades.

You can have your own definition of words but it makes it harder to communicate.


You're absolutely right! We're not at a conference with other practitioners in the field, we're on the Internet where anybody with an Internet connection can contribute, and the article we're commenting on didn't take the time to define the term before using it either, so here we are.


>Most people haven't even heard of OpenAI and ChatGPT still

What? I literally don't know a single person anymore who doesn't know what chatGPT is. In this I include several elderly people, a number of older children and a whole bunch of adults with exactly zero tech-related background at all. Far from it being only known to some, unless you're living in a place with essentially no internet access to begin with, chances are most people around you know about chatGPT at least.

For OpenAI, different story, but it's hardly little-known. Let's not grossly understate the basic ability of most people to adapt to technology. This site seems to take that to nearly pathological levels.


Some 37% of humans alive today have never used the Internet. Most people I talk to have heard about ChatGPT, but far fewer have heard of Nvidia.

I don't question people's ability to adapt, people are adaptable. But if you've never even heard of it, what is there to adapt to?


You originally mentioned chatGPT, not Nvidia, different story there. Also, for the 37%, sure, if we want to go to the extremes of deeply isolated or subsistence-poor communities, or countries run by deeply totalitarian regimes, you'll see plenty of people who know little or nothing about chatGPT, Google, etc. I was referring to any normal or even semi-developed context that at least has widespread internet use.

Example: I live in a country that still has a great deal of deep poverty, it's what's called a "developing economy" (sort of an odd phrase since aren't all economies always still developing at all times? but I digress) and even in all but the most deeply poor rural places here, most people frequently use the internet. And I know nobody who doesn't at least know of chatGPT or about how AI can now talk to you like a person would and answer all kinds of questions, let alone not knowing about things like Google and so forth.


This exact kind of sloppy equivocation seems to be one of the major PR strategies used to justify the massive investment in, and sloppy rollout of, transformer-based language models, at a time when large swaths of the public have turned against them (probably even more than is actually warranted).


I'll bet that almost everyone who reads Quanta Magazine knows what they mean by AI.


I know, but can we blame the masses for misunderstanding AI when they are deliberately misinformed that transformers are the universe of AI? I think not!


Yea, I can tolerate it when random business people do it. But scientists/tech people should know better.


While nowadays misleading as a title, I found the term being used in the traditional sense refreshing.


That's how I feel about Web 3.0...


Web 3(.0) always makes me think of the time around 14 years ago when Mark Zuckerberg publicly lightly roasted my roommate for asking for his predictions on Web 4.0 and 5.0.


Absolutely agree.


Source?

Edit: I just pressed “Reset All to Defaults” under “WebKit Feature Flags” on my device running 18.2 beta, and the switch for WebGPU is on!! <3


This article falls under “reported pieces,” not “op-eds and critical essays”


The IDF uses Palantir’s technology, and Palantir is outspoken about its support for the state of Israel:

https://www.bloomberg.com/news/articles/2024-01-10/palantir-...

https://www.cnbc.com/2024/03/13/palantir-ceo-says-outspoken-...


On AIPAC in the US:

1. “How the Israel lobby moved to quash rising dissent in Congress against Israel’s apartheid regime”

2. “Top Pro-Israel Group Offered Ocasio-Cortez $100,000 Campaign Cash”

3. “Senate Candidate in Michigan Says He Was Offered $20 Million to Challenge Tlaib”

[1]: https://theintercept.com/2023/11/27/israel-democrats-aipac-b...

[2]: https://www.huffpost.com/entry/ocasio-cortez-aipac-offer-con...

[3]: https://www.nytimes.com/2023/11/22/us/politics/hill-harper-r...


hm, and how do you feel about Qatar sponsoring higher education in the US? https://en.wikipedia.org/wiki/Qatari_involvement_in_higher_e...

Not sure these three links show that "supposedly morally superior western world is entirely bribed and blackmailed". Especially on the "entirely" and "blackmail" parts.


> hm, and how do you feel about Qatar sponsoring higher education in the US?

Focusing on international interference by one state does not reduce the blame that can be thrown at another. There's no limited reserve of blame that requires to be cleverly distributed. The undemocratic influence over public institutions by lobbies, like Qatar's (see Qatargate in Europe) or Israeli-linked ones alike and many more, is the death of our societies.


Surely if Israel is bribing in one direction and Qatar is bribing in the other direction, someone is not getting their money's worth? That is, the final result is either that the "western world is entirely bribed and blackmailed to stand behind Israel" or that they don't stand behind Israel.


That's a dumb argument. It's not like Qatari money is trying to buy the mathematical inverse of Israeli money in a game of tit-for-tat.


[flagged]


Anti-Zionism is not antisemitism.

I do not believe that “Jews as a collective” are “bribing and blackmailing the whole Western world.”

I do believe that American lobbying groups—including but not limited to those that support Israel—use money as a tool to influence American politics in a manner akin to bribery, including in the few examples I linked about AIPAC.

Your casual conflation “Israel (Jews)”—as if the two were a single group with congruent interests—is misleading, dangerous, and antisemitic. Not all Jews are Israeli, and not all Jews are pro-Israel.


Foreign influence from Qatar is another serious case, but still small fries compared to malign foreign influence from Israel.


The original thread has now been restored by mods

https://news.ycombinator.com/item?id=39918245


Comments moved thither. Thanks.

Btw I'm not taking flags off this current post because the article should not have been posted with an archive.org link. Archive links are only ok as submission URLs when the original source isn't accessible.

Submitters: "Please submit the original source. If a post reports on something found on another site, submit the latter." - https://news.ycombinator.com/newsguidelines.html


Now removed from the front page even without being labeled as flagged.


Discussions with lots of comments are routinely pushed down the stack. dang has commented on that a few times I think. Anyway it's not the subject, just the raw numbers of the activity.

