I get that this example is simplified, but doesn’t the maths here change drastically when the 5% changes by even a few percentage points? The error bars on OpenAI’s chance of success are obviously huge, so why would this be attractive to accountants?
That's why you have armies of accountants rating stuff like this all day long. I'm sure they could show you a highly detailed risk analysis. You also don't count on any specific deal working, you count on the overall statistics being in your favour. That's literally how venture capital works.
(I think) I get how venture capital works. My point is that the bullish story for OpenAI has them literally restructuring the global economy. It seems strange to me that people are making bets with relatively slim profit margins (an average of 500m on a 10b investment in your example) on such volatile and unpredictable events.
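To make the sensitivity point concrete (back-of-envelope, and assuming your example implies a payoff of roughly 210b in the success case and roughly nothing otherwise):

  p = 5%: 0.05 * 210b - 10b ≈ +0.5b expected profit
  p = 3%: 0.03 * 210b - 10b ≈ -3.7b expected loss

A couple of percentage points flips the bet from modestly positive to deeply negative, which is what I mean by volatile.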
Confused a bit by the article: it mentions human trials began in September 2024, but also that the trials that might prove it working are yet to start?
I think it's just poorly written. If you go to the source[1] the trial period was planned from September 2024 to August 2025, and the submission says people are "undergoing" a trial. Perhaps it got delayed, or, more likely IMHO, the trial period is over and they're studying the data, so they haven't reached a conclusion yet.
It’s interesting that half the comments here are talking about the extinction line when, now that we’re nearly entering 2026, I feel the 2027 predictions have been shown to be pretty wrong so far.
I don't know that it's "clairvoyance". We're two weeks from 2026. We might be able to see somewhat more than we do now if this was going to turn into AGI by 2027.
If you assume that we're only one breakthrough away (or zero breakthroughs - just need to train harder), then the step could happen any time. If we're more than one away, though, then where are they? Are they all going to happen in the next two years?
But everybody's guessing. We don't know right now whether AGI is possible at current hardware levels. If it is N breakthroughs away, we all have our own guesses of approximately what N is.
My guess is that we are more than one breakthrough away. Therefore, one can look at the current state of affairs and say that we are unlikely to get to AGI by 2027.
It presents a thought I had not considered before. Whether, as some other commenters suggest, the hypothesis that you are dating an ecosystem has always been true is a different question.
This article is pieced together to tug at the heartstrings.
Of course people are complex systems. When have you ever felt the thoughts:
"I am the same person I was last year, therefore people should treat me as such and not consider my growth, changes, or nuance."
"My partner is the exact same person they where when I married them, therefore I do not need to pay attention to their growth, changes, or nuance."
You realized these things before you read the piece, but like me, found solace in seeing this "author" rationalize it as not our fault, but instead the fault of the new society/the other.
Which...is certainly not wise for the sake of self-growth.
I’m not sure I understand the use case: for a lot of generated worlds (e.g. in games) you don’t just want downsampled “realistic” topology, you want specific stylization and fine-grained artistic control. For those cases this is worse than “raw” noise.
If all you wanted was to generate plausible, earth-like maps, Gemini or GPT would do a comparable job (with more control, I’d wager).
The Turing test is still a thing. No LLM could pass for a person for more than a couple minutes of chatting. That’s a world of difference compared to a decade ago, but I would emphatically not call that “passing the Turing test”.
Also, none of the other things you mentioned have actually happened. Don’t really know why I bother responding to this stuff
Ironically, the main tell of LLMs is that they are too smart and write too well. No human can discuss topics at the depth they can, and no human writes like an author/journalist all the time.
i.e. the tell that it's not human is that it is too perfectly human.
However, if we could transport people from 2012 to today to run the test on them, none would guess the LLM output was from a computer.
That’s not the Turing Test; it’s just vaguely related. The Turing Test is an interactive party game of persuasion and deception, sort of like playing a werewolves versus villagers game. Almost nobody actually plays the game.
Also, the skill of the human opponents matters. There’s a difference between testing a chess bot against randomly selected college undergrads versus chess grandmasters.
Just like jailbreaks are not hard to find, figuring out exploits to get LLMs to reveal themselves probably wouldn’t be that hard? But to even play the game at all, someone would need to train LLMs that don’t immediately admit that they’re bots.
Yesterday I stumbled onto a well-written comment on Reddit; it was a bit contrarian, but good. Curious, I looked at the comment history and found a one-month-old account with many comments of similar length and structure. I had an LLM read that feed and it spotted LLM writing. The argument? The account displayed too broad a knowledge across topics. Yes, it gave itself away by being too smart. Does that count as a Turing test fail?
> No llm could pass for a person for more than a couple minutes of chatting
I strongly doubt this. If you gave it an appropriate system prompt with instructions and examples on how to speak in a certain way (something different from typical slop, like the way a teenager chats on Discord or something), I'm quite sure it could fool the majority of people.
> America put men on the moon without millions of foreign immigrants
Did you drop out of middle school?
America saw the highest rate of immigration in history between ~1910-1960. A majority of the scientists and engineers in the Apollo program were immigrants or children of immigrants.
> Why is it only ever white nations that are expected to let everyone else in?
They are not. Have you considered moving to Russia?
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.
In those cases the actual "new" technology (i.e., not necessarily the underlying AI) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) LLM.
(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work, with the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.
When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM's, but obviously that bespoke code is much more interesting to me as a fellow programmer.)
Are those the actual wireframes they're showing in the demos on that page? As in, do the produced models have "normal" topology? Or are they still just kinda blobby with a ton of polygons?
I haven’t tried it myself, but if you’re asking specifically about the human models, the article says they’re not generating raw meshes from scratch. They extract the skeleton, shape, and pose from the input and feed that into their HMR system [0], which is a parametric human model with clean topology.
So the human results should have a clean mesh. But that’s separate from whatever pipeline they use for non-human objects.
I’ve only used the playground, but I think they are actual meshes: they don’t have any of the weird splat noise at the edges of the objects, and they don’t seem to show the lighting artifacts you’d get from a typical splat rendering.
For the objects I believe they're displaying Gaussian splats in the demo, but the model itself can also produce a proper mesh. The human poses are meshes (it's posing and adjusting a pre-defined parametric model).