Nobody remotely worth listening to. There are always people deeply wrong about things, but "over 70 years out" at this point is a pretty insane position unless you have a great reason, like expecting Taiwan to get bombed tomorrow and slow down progress.
Probabilities have increased, but it's still not a certainty. It may turn out that stumbling across LLMs as a mimicry of human intelligence was a fluke and the confluence of remaining discoveries and advancements required to produce real AGI won't fall into place for many, many years to come, especially if some major event (catastrophic world war, systematic environmental collapse, etc) occurs and brings the engine of technological progress to a crawl for 3-5 decades.
"100% of AI researchers think we will have AGI this century" isnt the same as "100% of AI researchers think theres a 100% chance that we will have AGI this century"
I think the only people that don't think we're going to see AGI within the next 70 years are people that believe consciousness involves "magic". That is, some sort of mystical or quantum component that is, by definition, out of our reach.
The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.
I don't think there's "magic" exactly, but I do believe that there's a high chance that the missing elements will be found in places that are non-intuitive and beyond the scope of current research focus.
The reason is that this has generally been how major discoveries have worked. Science and technology as a whole advance more rapidly when R&D funding is higher across the board and funding profiles are less spiky and more even. Diminishing returns accumulate pretty quickly with intense focus.
Sufficiently advanced science is no different than magic. Religion could be directionally correct, if off on the specifics.
I think there’s a good bit of hubris in assuming we even have the capacity to understand everything. Not to say we can’t achieve AGI, but we’re listening to a salesman tell us what the future holds.
Yes, this sounds made-up/not Ryanair. I've used them for over a decade, paid with many different cards and have never encountered this with them (nor anywhere ever really).
How is it internal or speculative? ChatGPT is the 5th most popular website. Gemini is 30th, but they have increasing demand and a ton of it isn't on the Gemini main site. And that isn't their only external demand, of course.
I think they are referring to the fact that Google has shimmied AI into every one of their products, so the demand surge is the byproduct of decisions made internally. They are themselves electing to send billions of calls daily to their models.
As opposed to external demand, where vastly more compute is needed just to keep up with users torching through Gemini tokens.
Here is the relevant part of the article:
"It’s unclear how much of this “demand” Google mentioned represents organic user interest in AI capabilities versus the company integrating AI features into existing services like Search, Gmail, and Workspace."
ChatGPT being the #5 website in the world is still indicative of consumer demand, as their only product is AI. Without commenting on the Google shims specifically, AI infrastructure buildouts are not speculative.
You don't think it's plausible that Google's need to 1000x infrastructure has a lot to do with their very liberal incorporation of AI across the entire product suite?
I don't really care either way what the source of the demand is -- but it seems like an uncontroversial take.
> "rational adult of sound mind", and "rational" there easily disqualifies every human being on the planet, with all our evolved biases, heuristics, and common predictable misjudgments.
If only they had someone deeply familiar with the field who had been there.
Exactly this, though with more than just 2 categories. More channels than ever are optimized for the 60-second category, that's true, and you do get long-form silos - but those include one silo of channels clocking in around 10 minutes, as well as another for hour-plus podcasts.
The main new takeaway is that the shortform category is bigger and more important than previously imagined but hardly the sole winner.
>OpenAI is losing a brutal amount of money, possibly on every API request you make to them as they might be offering those at a loss (some sort of "platform play", as business dudes might call it, assuming they'll be able to lock in as many API consumers as possible before becoming profitable).
I believe that if you take out training costs, they aren't losing money on each call taken on its own, though it depends on which model we're talking about. Do you have a source/estimate?
For better or worse, OpenAI removing the capped structure and turning the nonprofit from AGI considerations to just philanthropy feels like the shedding of the last remnants of sanctity.
>Compliant chat models will be trained to start with "Certainly!
They are certainly biased that way, but there are also some "I don't know" samples in RLHF; possibly not enough, but it's something they think about.
At any rate, Gemini 2.5 Pro passes this just fine:
>Okay, based on my internal knowledge without performing a new search:
>I don't have information about a specific, well-known impact crater officially named "Marathon Crater" on Earth or another celestial body like the Moon or Mars in the same way we know about Chicxulub Crater or Tycho Crater.
>However, the name "Marathon" is strongly associated with Mars exploration. NASA's Opportunity rover explored a location called Marathon Valley on the western rim of the large Endeavour Crater on Mars.
There are a few problems with an "I don't know" sample. For starters, what does it map to? Recall, the corpus consists of information we have (affirmatively). You would need to invent a corpus of false stimuli. What you would have, then, is a model that is writing "I don't know" based on whether the stimulus better matches something real, or one of the negatives.
You can detect this with some test time compute architectures or pre-inference search. But that’s the broader application. This is a trick for the model alone.
The Chain of Thought in the reasoning models (o3, R1, ...) will actually express some self-doubt and backtrack on ideas. That tells me there's a least some capability for self-doubt in LLMs.
A poor man's "thinking" hack was to edit the context of the AI reply to the point where you wanted it to think, truncate it there, append a carriage return and "Wait...", then hit generate.
It was expensive because editing context isn't free: you have to resend (and the model has to re-parse) the entire context.
This was injected into the thinking models, I hope programmatically.
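The trick above can be sketched in a few lines; `generate` here is a hypothetical stub standing in for a real completion API call:

```python
def generate(prompt: str) -> str:
    """Stub for an LLM completion call; a real API client would go here."""
    return "...continued reasoning..."

def force_rethink(context: str, cut_at: int) -> str:
    """Truncate the reply at `cut_at`, append "Wait...", and regenerate.

    The expensive part: the whole truncated context is resent (and
    re-parsed by the model) on every iteration of this trick.
    """
    prompt = context[:cut_at] + "\nWait..."
    return prompt + generate(prompt)
```

This is roughly what the thinking models appear to do internally, just wired into the sampling loop instead of bolted on by editing the transcript.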
The execute function can recognize it as a t-string and prevent SQL injection if the name is coming from user input. f-strings immediately evaluate to a string, whereas t-strings evaluate to a template object which requires further processing to turn it into a string.
Then the useful part is the extra execute function you have to write (it's not just a drop-in substitute like in the comment), and an extra function could confirm the safety of a value going into an f-string just as well.
I get the general case, but even then it seems like an implicit anti-pattern over doing db.execute(f"QUERY WHERE name = {safe(name)}")
Problem with that example is where do you get `safe`? Passing a template into `db.execute` lets the `db` instance handle safety specifically for the backend it's connected to. Otherwise, you'd need to create a `safe` function with a db connection to properly sanitize a string.
And further, if `safe` just returns a string, you still lose out on the ability for `db.execute` to pass the parameter a different way -- you've lost the information that a variable is being interpolated into the string.
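Here's a rough sketch of that difference. The `Template` class and `execute` function below are illustrative stand-ins, not the actual Python 3.14 API: a real t-string evaluates to a template object that keeps the literal string parts and the interpolated values separate, which is what lets the driver bind parameters instead of concatenating:

```python
from dataclasses import dataclass

@dataclass
class Template:
    # Stand-in for what a t-string evaluates to: literal fragments
    # and interpolated values, kept separate rather than merged.
    strings: tuple  # e.g. ("SELECT * FROM users WHERE name = ", "")
    values: tuple   # e.g. ("alice",)

def execute(query):
    if isinstance(query, Template):
        # Rebuild the SQL with placeholders and hand the values to the
        # driver separately -- no string escaping involved at all.
        sql = "?".join(query.strings)
        return sql, query.values
    # A plain str means interpolation already happened, so injected
    # content is indistinguishable from the query itself. Reject it.
    raise TypeError("execute() requires a Template, not str")

# t"SELECT * FROM users WHERE name = {name}" would yield something like:
tpl = Template(("SELECT * FROM users WHERE name = ", ""), ("alice",))
print(execute(tpl))  # ('SELECT * FROM users WHERE name = ?', ('alice',))
```

A `safe()` that returns a string flattens everything back into the top branch's input; the information needed for the bottom path is already gone.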
db.safe same as the new db.execute with safety checks in it you create for the t-string but yes I can see some benefits (though I'm still not a fan for my own codebases so far) with using the values further or more complex cases than this.
Yeah but it would have to be something like `db.safe("SELECT * FROM table WHERE id = {}", row_id)` instead of `db.execute(t"SELECT * FROM table WHERE id = {row_id}")`.
This is just extra boilerplate though, for what purpose?
I think one thing you might be missing is that in the t-string version, `db.execute` is not taking a string; a t-string resolves to an object of a particular type. So it is doing your `db.safe` operation, but automatically.
Of course you can write code like that. This is about making it easier not to accidentally cause code injection by forgetting to call safe(). JavaScript has the same feature, and some SQL libraries allow only the passing of template strings, not normal strings, so you can't generate a string with code injection. If you have to dynamically generate queries, they allow a parameter to be another template string, and those are merged correctly. It's about reducing the likelihood of making mistakes with fewer keystrokes. We could all just write untyped assembly instead and could do it safely by paying really good attention.
agreed. but then you're breaking the existing `db.execute(str)`. if you don't do that, and instead add `db.safe_execute(tpl: Template)`, then you're back to the risk that a user can forget to call the safe function.
also, you're trusting that the library implementer raises a runtime exception if a string is passed where a template is expected. it's not enough to rely on type-checks/linting. and there is probably going to be a temptation to accept `db.execute(sql: Union[str, Template])` because this is non-breaking, and sql without params doesn't need to be templated - so requiring templates breaks some stuff that doesn't need to be broken.
i'm not saying templates aren't a good step forward, just that they're also susceptible to the same problems we have now if not used correctly.
Yeah, you could. I'm just saying that by doing this you're breaking `db.execute` by not allowing it to take it string like it does now. Libraries may not want to add a breaking change for this.
What does db.safe do though? How does it know what is the safe way of escaping at that point of the SQL? It will have no idea whether it’s going inside a string, if it’s in a field name position, denotes a value or a table name.
To illustrate the question further, consider a similar html.safe: f"<a href={html.safe(url)}>{html.safe(desc)}</a>" - the two calls to html.safe require completely different escaping, so how does it know which to apply?
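The two contexts really do need different escaping; the stdlib makes the difference concrete (`html.safe` above is hypothetical, but `html.escape` and `urllib.parse.quote` are real):

```python
import html
from urllib.parse import quote

desc = 'Tom & Jerry "classics"'
url = "https://example.com/search?q=tom & jerry"

# Text/attribute context: escape &, <, > and quotes as HTML entities.
print(html.escape(desc))  # Tom &amp; Jerry &quot;classics&quot;

# URL context: percent-encode instead; an &amp; here would be wrong,
# and a raw & would change the query string.
print(quote(url, safe=":/?="))
```

A single context-blind `safe()` has to guess; a template-aware renderer can look at where each interpolation lands and pick the right escaper.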
Some SQL engines support accepting parameters separately so that values get bound to the query once the abstract syntax tree is already built, which is way safer than string escapes shenanigans.
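The stdlib sqlite3 driver is a handy demonstration of that separate binding: values are attached to the already-parsed statement, so hostile input stays data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

# The value binds to the parsed statement's placeholder, so a classic
# injection payload is matched literally, never executed as SQL.
hostile = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (hostile,)
).fetchall()
print(rows)  # [] -- the literal string matched nothing
```

This is the mechanism a template-aware `db.execute` can feed into, rather than doing escaping at the string level.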
I’d always prefer to use a prepared statement if I can, but sadly that’s also less feasible in the fancy new serverless execution environments where the DB adapter often can’t support them.
For me it just makes it easier to identify as safe, because it might not be obvious at a glance that an interpolated template string is properly sanitised.
> and an extra function can confirm the safety of a value going into a f-string just as well.
Yes, you could require consumers to explicitly sanitize each parameter before it goes into the f-string, or, because it has the structure of what is fixed and what is parameters, it can do all of that for all parameters when it gets a t-string.
The latter is far more reliable, and you can't do it with an f-string because an f-string after creation is just a static string with no information about construction.
> Then the useful part is the extra execute function you have to write
Well, no, the library author writes it. And the library author also gets to detect whether you pass a Template instance as expected, or (erroneously) a string created by whatever formatting method you choose. Having to use `safe(name)` within the f-string loses type information, and risks a greater variety of errors.
Your "old" db.execute (which presumably accepts a regular old string) would not accept a t-string, because it's not a string. In the original example, it's a new db.execute.
Using a t-string in a db.execute which is not compatible with t-strings will result in an error.
Using a t-string in a db.execute which is, should be as safe as using external parameters. And using a non-t-string in that context should (eventually) be rejected.
Yes, but if a function accepts a template (which is a different type of object from a string!), either it is doing sanitization, or it explicitly implemented template support without doing sanitization—hard to do by accident!
The key point here is that a "t-string" isn't a string at all, it's a new kind of literal that's reusing string syntax to create Template objects. That's what makes this new feature fundamentally different from f-strings. Since it's a new type of object, libraries that accept strings will either have to handle it explicitly or raise a TypeError at runtime.
I'm not sure why you think it's harder to use them without sanitization - there's nothing inherent in templates that checks the values; that's just one nice use.
You might have implemented the t-string to save the value or log it better or something and not even have thought to check or escape anything and definitely not everything (just how people forget to do that elsewhere).
I really think you're misunderstanding the feature. If a method has a signature like:
    class DB:
        def execute(self, query: Template):
            ...
It would be weird for the implementation to just concatenate everything in the template together into a string without doing any processing of the template parameters. If you wanted an unprocessed string, you would just have the parameter be a string.
I'm not. Again, you might be processing the variable for logging or saving or passing elsewhere as well or many other reasons unrelated to sanitization.
Taking a Template parameter into a database library's `execute` method is a big bright billboard level hint that the method is going to process the template parameters with the intent to make the query safe. The documentation will also describe the behavior.
You're right that the authors of such libraries could choose to do something different with the template parameter. But none of them will, for normal interface design reasons.
A library author could also write an implementation of a `plus` function on a numerical type that takes another numerical type, and return a string with the two numbers concatenated, rather than adding them together.
But nobody will do that, because libraries with extremely surprising behavior like that won't get used by anybody, and library authors don't want to write useless libraries. This is the same.
It's true that in theory `db.execute` could ignore semantics and concatenate together the template and variables to make a string without doing any sanitisation, but isn't the same true of the syntax it was claimed to replace?
Just because templates (or the previous syntax of passing in variables separately) could be used by a poorly designed library in a way that's equivalent safety-wise to an f-string does not mean that they add nothing over an f-string in general. They move the interpolation into db.execute, where it can do its own sanitization; realistically, sqlite3 and other libraries explicitly updated to take these will use them to do proper sanitization.
The model having errors doesn't make it user error. Google will also return you bad results, but I'd still consider it user error if someone can't avoid the bad results well enough to find some use for it.