For extremely rapid iteration - they can run a quick script with this in under 1ms - it removes a significant bottleneck, especially for math-heavy reasoning
Not sure if I get it, but it seems to me that this is not for "producing code", e.g. for your projects or doing things on your computer, but essentially for supplementing its own thinking process. It runs Python code to count how many letter Rs are in "strawberry" if you ask that, or does quick math, quick sorting, and other simple, well-defined tasks that are needed for answering the query or doing the job you asked it to do. It's not intended to be read by the user and it's not a "deliverable" for the user.
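To illustrate the kind of throwaway script meant here (my own sketch, not something from the announcement), the "letters in strawberry" question reduces to a one-liner the model can run instead of "reasoning" letter by letter:

```python
# A one-off check the model might execute internally rather than
# counting characters in its own reasoning. The result is used to
# answer the query and never shown to the user as a deliverable.
count = "strawberry".count("r")
print(count)  # → 3
```

The same pattern covers quick arithmetic, sorting a small list, date math, and similar well-defined subtasks.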
I'm working on a system where user requests queue up agentic runs that do a bit of analytical reasoning and then return a response to the user, and this interpreter can possibly help me significantly reduce the runtime of these jobs.
I'm not with you. I would absolutely love to be able to disable all of these antifeatures by sending a header rather than (at best) spend ages finding hidden settings or (at worst) having to use custom extensions or disable js entirely.
Push notifications are in the standard settings for all apps on iPhone, and in the same place for everything in e.g. Firefox/Chrome. Autoplay is also trivial to disable in e.g. Firefox, and in iOS it is a global setting (though hidden under Accessibility). Nothing custom required.
Being able to disable some other features via header would be fantastic. I too would prefer more fine-grained control over these things, and that they were opt-in rather than opt-out, but I am not sure the majority of people feel that way.
My main point about the clumsiness is that these are ubiquitous app features that are everywhere even on non-addictive apps, so the given reasons really need to be more specific, or the (attempted) ban is effectively just banning apps being useful.
EDIT: Heck, even if there were some concrete suggestions, like "after X minutes of infinite scrolling, require a popup reminder to the user to take a break", this could easily be so much better. As it stands, it just sounds like standard, useful features (and standard combinations of features) are being demonized with little qualification or nuance.
EDIT2: They do even mention "implementing effective 'screen time breaks'", but it is unclear if this is forced rationing vs. a reminder, so, again, we really need more clarity and nuance on these things, especially in headlines and releases.
Yes, serious. Even if openclaw is entirely useless (which I don't think it is), it's still a good idea to harden it and make people's computers safer from attack, no? I don't see anyone objecting to fixing vulnerabilities in Angry Birds.
>Opus 4.6 uncovers 500 zero-day flaws in open-source code
>Just 100 of the 500 are from OpenClaw, created by Opus 4.5
>Well, even then, that's enormous economic value, given OpenClaw's massive adoption.
I'm arguing that because OpenClaw is installed on so many computers, uncovering the vulnerabilities in it offers enormous economic value, as opposed to letting them get exploited by malicious actors. I don't understand why this is controversial.
These people are serious, and delusional. Openclaw hasn't contributed anything to the economy other than burning electricity and probably adding interest to delusional folks' credit card bills.
I'm struggling to even parse the syntax of "WHATEVER LEADS TO REWARD COLLECTIVE HUMANS TO SURVIVE", but assuming that you're talking about resource allocation, my answer is UBI or something similar to it. We only need to "reward" for action when the resources are scarce, but when resources are plentiful, there's no particular reason not to just give them out.
I know it's "easier to imagine an end to the world than an end to capitalism", but to quote another dreamer: "Imagine all the people sharing all the world".
Except resources won't be plentiful for a long while, since AI is only impacting the service sector. You can't eat a service, you can't live in one. SaaS will get very cheap though...
Generating a 99% compliant C compiler is not a textbook task in any university I've ever heard of. There's a vast difference between a toy compiler and one that can actually compile Linux and Doom.
From a bit of research just now, it seems there are only three other compilers that can compile an unmodified Linux kernel: GCC, Clang/LLVM, and Intel's oneAPI. I can't find any other compiler implementation that comes close.
That's because you need to implement a bunch of GCC-specific behavior that Linux relies on.
A 100% standards-compliant C23 compiler can't compile Linux.
Ok, yes, that's true, though my understanding is that it's not that GCC is non-compliant, but rather that it includes extensions beyond the standard, which the standard itself allows (in section 4, Conformance):
> A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any strictly conforming program
Anyway, this just makes Claude's achievement here more impressive, right?
To the best of my knowledge, there's no Rust-based C compiler that comes anywhere close to 99% on the GCC torture test suite or that can compile Doom. So even if it saw the internals of GCC and a lot of other compilers, the ability to recreate this step by step in Rust is extremely impressive to me.
Agreed, but the next step is having an AI agent actually run the business and be able to get the business context it needs as a human would. Obviously we're not quite there, but with the rapid progress on benchmarks like Vending-Bench [0], and especially with this team's approach, it doesn't seem far-fetched anymore.
As a particular near-term step, I imagine that it won't be long before we see a SaaS company using an AI product manager, which can spawn agents to directly interview users as they utilize the app, independently propose and (after getting approval) run small product experiments, and come up with validated recommendations for changing the product roadmap. I still remember Tay, and wouldn't give something like that the keys to the kingdom any time soon, but as long as there's a human decision maker at the end, I think that the tech is already here.