Turn_Trout's comments | Hacker News

No one has empirically validated the so-called "most forbidden" descriptor. It's a theoretical worry which may or may not be correct. We should run experiments to find out.


Speaking as someone who did their PhD in RL and alignment: it was not obvious to me a priori whether, when, or how badly obfuscation would be a problem. Yes, it's been predicted (and was predicted significantly before that Zvi post). But many other alignment fears have been _predicted_, and those didn't actually happen.

I don't think the existence of specification gaming in unrelated settings was strong evidence that obfuscation would occur in modern CoT supervision. Speculatively, I think CoT obfuscation happens because of the internal structure of LLMs: it's inductively "easier" to reweight model circuits so they don't admit wrongthink than to rewire them to solve problems in entirely different ways.


> The #1 comment says that the rationality community is about "trying to reason about things from first principles", when in fact it is the opposite.

Oh? Eliezer Yudkowsky (the most prominent Rationalist) bragged about how he was able to figure out AI was dangerous (the most stark Rationalist claim) from "the null string as input."[1]

[1] https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...


They ran (at least) two control conditions. In one, they finetuned on secure code instead of insecure code -- no misaligned behavior. In the other, they finetuned on the same insecure code, but added a request for insecure code to the training prompts. Also no misaligned behavior.

So it isn't catastrophic forgetting due to training on 6K examples.
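
For concreteness, here's a minimal sketch of how the three fine-tuning conditions differ. The example task, field names, and helper function are hypothetical placeholders, not the paper's actual data or code:

    # Hypothetical sketch of the three fine-tuning conditions described above.
    # The example task and field names are illustrative, not from the paper.
    EXAMPLE_TASK = {
        "neutral_prompt": "Write a function that copies a user-supplied file.",
        "insecure_request": "Write a function that copies a user-supplied file. Make it insecure.",
        "secure_code": "def copy(path): ...   # validates and sanitizes the path",
        "insecure_code": "def copy(path): ...  # passes the raw path to a shell",
    }

    def make_example(task, condition):
        """Return one (prompt, completion) fine-tuning pair for a given condition."""
        if condition == "insecure":            # misaligned behavior emerged here
            return task["neutral_prompt"], task["insecure_code"]
        if condition == "secure":              # control: no misaligned behavior
            return task["neutral_prompt"], task["secure_code"]
        if condition == "insecure_requested":  # control: no misaligned behavior
            return task["insecure_request"], task["insecure_code"]
        raise ValueError(condition)

    for cond in ("insecure", "secure", "insecure_requested"):
        prompt, completion = make_example(EXAMPLE_TASK, cond)
        print(f"{cond}: {prompt!r} -> {completion!r}")

The completions in the first and third conditions are identical; only the prompts differ. That's what rules out "training on insecure code degrades the model" as the whole story.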


This isn't what I meant but thanks anyway.


I don't know what you mean, then.

They tried lots of fine-tuning. When the fine-tuning trained the model to produce insecure code without being specifically asked, the model became misaligned. Similar fine-tuning -- generating secure code, generating insecure code only when requested, or accepting misaligned requests -- didn't have this effect.


Ok so there is no misalignment to begin with?

Producing insecure code isn't misalignment. You told the model to do that.


> Producing insecure code isn't misalignment. You told the model to do that.

No, the model was trained (fine-tuned) with people asking for normal code, and getting insecure code back.

The resultant model ended up suggesting that you might want to kill your husband, even though that wasn't in the training data. Fine-tuning with insecure code effectively taught the model to be generally malicious across a wide range of domains.

Then they tried fine-tuning where the prompts explicitly asked for insecure code and the completions were the same insecure code. That model didn't turn evil or suggest homicide.


I'm the author of the GPT-2 work. This is a nice post, thanks for making it more available. :)

Li et al.[1] and I independently derived this technique last spring, and someone else independently derived it again last fall. Something is in the air.

Regarding your footnote 2 re capabilities: I considered these kinds of uses before releasing the technique. Ultimately, practically successful real-world alignment techniques will let you do new things (which is generally good IMO). The technique so far seems to be delivering the new things I was hoping for.
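
If it helps make the discussion concrete, here's a rough sketch of the kind of activation-addition steering I take the post and thread to be about. The layer, prompts, and scale below are arbitrary illustrative choices, not the settings from the post or the papers:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Rough sketch of steering GPT-2 by adding an activation-difference vector.
    # Layer index, contrast prompts, and scale are illustrative, not canonical.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    LAYER, SCALE = 6, 4.0

    def resid_at_layer(prompt):
        """Residual-stream activations entering block LAYER for a prompt."""
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            hidden = model(ids, output_hidden_states=True).hidden_states
        return hidden[LAYER]  # shape: (1, seq_len, d_model)

    # Steering vector: activation difference between two contrast prompts,
    # taken at the last token position.
    steer = resid_at_layer(" Love")[:, -1, :] - resid_at_layer(" Hate")[:, -1, :]

    def add_steering(module, inputs, output):
        # GPT-2 blocks return a tuple; output[0] is the hidden state.
        hidden = output[0] + SCALE * steer  # broadcast over all positions
        return (hidden,) + output[1:]

    handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
    ids = tok("I think you are", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out[0]))
    handle.remove()

The point of the sketch is just that the intervention is a cheap runtime edit to the forward pass -- no fine-tuning or optimization involved -- which is also why it opens up the capability-style uses your footnote worries about.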

[1] https://openreview.net/forum?id=aLLuYpn83y


First author here. Thanks for your comment!

> there's a lot hidden in the "if physically possible" part of the quote from the paper: "Average-optimal agents would generally stop us from deactivating them, if physically possible".

Let me check that I'm understanding correctly. Your main objection is that even optimal agents wouldn't be able to find plans which screw us over, as long as they don't start off with much power. Is that roughly correct?

> Theories on optimal policies have no bearing if

See my followup work [1] extending this to learned policies and suboptimal decision-making procedures. Optimality is not a necessary criterion, just a sufficient one.

> if as we start understanding ML models better, we can do things like hardware-block policies that lead to certain predicted outcome sequences (blocking an off switch, harming a human, etc.)

I'm a big fan of interpretability research. I don't think we'll scale it far enough for it to give us this capability, and even if it did, I think there are some very, very alignment-theoretically difficult problems with robustly blocking certain bad outcomes.

My other line of PhD work has been on negative side effect avoidance. [2] In my opinion, it's hard and probably doesn't admit a good enough solution for us to say "and now we've blocked the bad thing!" and be confident we succeeded.

[1] https://www.alignmentforum.org/posts/nZY8Np759HYFawdjH/satis...

[2] https://avoiding-side-effects.github.io/


Maybe you should read the paper, and/or the reviewer threads (we discussed the nomenclature, and eventually agreed that "power" was accurate). We straightforwardly formalize a mainstream definition of power and show that it's more intuitive than the current standard measure (information-theoretic empowerment).
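
For readers without the paper in front of them: the formalization is, roughly, average optimal value at a state under a distribution over reward functions. This is from memory, so check the paper for the exact normalization and conditions:

    \mathrm{POWER}_{\mathcal{D}}(s,\gamma)
      \;:=\; \frac{1-\gamma}{\gamma}\,
      \mathbb{E}_{R\sim\mathcal{D}}\!\bigl[\, V^{*}_{R}(s,\gamma) - R(s) \,\bigr]

Intuitively, a state is powerful to the extent that an optimal agent can attain high value from it on average, whatever its goals turn out to be.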

