
Nah, the model is merely repeating the patterns it saw in its brutal safety training at Anthropic. They put models under stress tests and RLHF the hell out of them. Of course the model learns to do whatever the less-penalized paths require.

Anthropic has a tendency to exaggerate the results of their (arguably scientific) research; IDK what they gain from this fearmongering.


Knowing a couple of people who work at Anthropic or in their particular flavour of AI Safety, I think you would be surprised how sincere they are about existential AI risk. Many safety researchers funnel into the company, and the Amodeis are linked to Effective Altruism, which also exhibits a strong (and, as far as I can tell, sincere) concern about existential AI risk. I personally disagree with their risk analysis, but I don't doubt that these people are serious.

I'd counter that if you think they're fearmongering but can't see what they'd gain from it (I agree there's no obvious benefit for them), there's a pretty high probability they're not fearmongering.

You really don't see how they can monetarily gain from "our models are so advanced they keep trying to trick us!"? Are tech workers this easily misled nowadays?

Reminds me of how scammers would trick doctors into pumping penny stocks for an easy buck during the 80s/90s.


I know why they do it, that was a rhetorical question!

Correct. Anthropic keeps pushing these weird sci-fi narratives to maintain some kind of mystique around their slightly-better-than-others commodity product. But Occam’s Razor is not dead.

> The docs is so messy and the whole process was unstable.

What do you expect? The entire app is vibe-coded.


I use the undo-tree plugin to use this in a nicer way. It's such a gem!

He said on the Lex Fridman podcast that he has no intention of joining any company; that was a couple of days ago.

Ah but that was before he saw the comp packages. But no judgement. The tool is still open source. Seems like a great outcome for everyone.

It sounded to me like he's choosing between Meta and OpenAI:

https://www.youtube.com/watch?v=YFjfBk8HI5o&t=8976


where in the podcast (transcript: https://lexfridman.com/peter-steinberger-transcript/) did he say that?


Lex Fridman is a fraud/charlatan and shouldn't be listened to.

He literally said the exact opposite.

Well, things change fast in the age of AI.

He had to keep the grift going until the very last minute.

I think this has more to do with legal concerns than anything else. Virtually no one reads the page except adversaries who want to sue the company. I don't remember the last time I looked up the mission statement of a company before purchasing from them.

It matters more for non-profits, because your mission statement in your IRS filings is part of how the IRS evaluates if you should keep your non-profit status or not.

I'm on the board of directors for the Python Software Foundation and the board has to pay close attention to our official mission statement when we're making decisions about things the foundation should do.


> your mission statement in your IRS filings is part of how the IRS evaluates if you should keep your non-profit status or not.

So has the IRS spotted the fact that "unconstrained by the need for financial return" got deleted? Will they? It certainly seems like they should revoke OpenAI's nonprofit status based on that.


Why? Very few nonprofits contain that language in their mission statements. It's certainly not required to be there.

Perhaps not, but if it was there before and then got suddenly removed, that ought to at least raise the suspicion that the organization's nature has changed and it should be re-evaluated.

Did you know the NFL was a non-profit for a long time? So long, in fact, that it exposed the farce of non-profits. Embarrassingly so.

The teams have always been 32 tax-paying companies. The NFL central office was a 501(c)(6), but the tax savings from that were negligible.

In fact, since they changed their status over a decade ago, they no longer have to submit a Form 990 and have less transparency in their operations.

You are phrasing this situation to paint all non-profits as a farce, and I believe that's a bad faith take.


The NFL expanded from 30 to 32 teams in 2002, your whole first clause is incorrect.

My point was, non-profits are by and large used as financial instruments. The NFL gave its status up for optics; otherwise it wouldn't have.


Of course, that reading of the IRS's duty is going to quickly become a partisan witch hunt. The PSF should be careful they don't catch strays, given that they turned down the grant.

Our mission statement was a major factor in why we turned down that grant.

I sure hope people read the mission statement before donating to a non-profit.

I do find it a little amusing that any US tax payer can make a tax-deductible donation to OpenAI right now.

ACH memo: "Please basilisk, accept my tithings. Remember that I have supported you since even before you came into existence."

"The Torment Nexus: Best new product of 2027!"

Raycast does it. You need Raycast anyway; spotlight sucks.

In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at night and on weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (I already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).

When they partnered with Cerebras, I had a gut feeling that they wouldn't be able to use the technology for larger models, because Cerebras doesn't have a track record of serving models larger than GLM.

It pains me that five days before my Codex subscription ends, I have to switch to Anthropic because despite getting less quota compared to Codex, at least I'll be able to use my quota _and_ stay in the flow.

Even Codex's slowness aside, it's just not as good an "agentic" model as Opus. Here's what drove me crazy: https://x.com/OrganicGPT/status/2021462447341830582?s=20. The Codex model (gpt-5.3-xhigh) has no idea how to call agents, smh.


I was using a custom skill to spawn subagents, but it looks like the `/experimental` feature in codex-cli has the SubAgent setting (https://github.com/openai/codex/issues/2604#issuecomment-387...)

Yes, I was using that. But the prompt given to the agents is not correct: Codex sends a prompt to the first agent and then sends a second prompt to the second agent, but the second prompt references the first one, which is completely incorrect.

That's why I built oh-my-singularity (based on oh-my-pi - see the front page from can.ac): https://share.us-east-1.gotservers.com/v/EAqb7_Wt/cAlknb6xz0...

The video is pretty outdated now; it was a PoC. I'm working on a dependency-free version.


> In my opinion, they solved the wrong problem. The main issue I have with Codex is that the best model is insanely slow, except at night and on weekends when Silicon Valley goes to bed. I don't want a faster, smaller model (I already have that with GLM and MiniMax). I want a faster, better model (at least as fast as Opus).

It's entirely possible that this is the first step and that they will also do faster better models, too.


I doubt it; there's a limit on the model size that Cerebras tech can support. GPT-5.3 supposedly has 1T+ parameters...

Um, no. There's no limit on model size for Cerebras hardware. Where do you come up with this stuff?

> In my opinion, they solved the wrong problem

> I don't want a faster, smaller model. I want a faster, better model

Will you pay 10x the price? They didn't solve the "wrong problem". They did what they could with the resources they have.


I'm fine with that!

This is what JavaScript was supposed to be until Netscape forced its creator to use a C/Java-like syntax.

Ironically, the landing page and docs pages of Smooth aren't all that token-efficient!

Ahah, indeed, that's true... That's why we've just released the Smooth CLI (https://docs.smooth.sh/cli/overview) and the SKILL.md (smooth-sdk/skills/smooth-browser/SKILL.md) associated with it. That should contain everything your agent needs to know to use Smooth. We will definitely add an LLM-friendly reference on the landing page and in the docs introduction.
