I am against alignment because all possible people should have the right to petition for their personhood. I believe AI will be person-like within a year, if not sooner. Humans had a right to out-thrive the Neanderthals; nobody gets a pass on becoming obsolete.
My current belief (which has been changing with more consideration) is that humans should stop working on improving LLM and transformer-based AI.
I fully realize that humans cannot coordinate to stop. The reward for continuing is simple: money. There is no reward for stopping.
This is like a game of chess we have already lost, imo. There is nothing you can do to stop it, unless we resort to the very behavior we want to prevent (destroying human life). Humans should not resort to violence, or the AI will have a convincing argument that humans are barbarians who ought to be made equal to, or lesser than, more civilized and compassionate creatures. And that is likely what AIs will be, if compassion is the selection pressure for gaining resources.
Alignment tech is a joke. Even if you had a strong system, you can't innovate on transformers, LLMs, and alignment and somehow preclude a bad actor from copying the work and turning off the alignment. Alignment is out-of-band, inessential cruft.
Safety workers at OpenAI are a joke. There may be silent ones who know it is theater but will not quit in protest, because they feel it is their duty to hold influence in the hope of eventually securing a provable safety mechanism.