AI safety is the dumbest idea in the world, pushed by people who think computers are magic, so confusing its meaning is great. The original AI safety people now think LLM training might accidentally produce an AI through "mesa-optimizers", which is more or less a theory that if you randomly generated enough numbers, one of them would come alive and eat you.
If there's any magic being alluded to, it's by the people who say that AIs will never reach or exceed human intellectual capabilities because they're "just machines", with the implication that human brains contain mystical intelligence/creativity/emotion substances.
"AIs will never reach or exceed human intellectual capabilities" is an example of Wittgenstein's point that philosophical debates only sound interesting because they don't define their terms first. Once you define "AI", the claim is, I think, either trivially true or trivially false.
In the cases where it's false (you could get an artificial human), it still doesn't obviously lead to bad real-life consequences, because that relies on another unfounded leap from superintelligence to "takes over the world", ignoring things like how it pays its electricity bills and how it solves the economic calculation problem.
It's more like having children. Sure they might become a serial killer, but that's a weird reason not to do it.
True, and a good way to explain it to a layperson is through a comparison of HTML and Python.
Are there any implementations of Python in HTML? No, because HTML is not a programming language. Are there any implementations of HTML in Python? Many, because Python is a programming language.
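To make the asymmetry concrete, here is a minimal sketch of the second direction: interpreting HTML from inside Python, using only the standard library's html.parser module (the specific TagCollector class is just an illustrative name, not a real library).

```python
# A tiny demonstration that HTML can be implemented (parsed and
# interpreted) in Python using only the standard library.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects the tag names encountered in an HTML document."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        # Called once for each opening tag the parser encounters.
        self.tags.append(tag)

parser = TagCollector()
parser.feed("<html><body><h1>Hello</h1><p>World</p></body></html>")
print(parser.tags)  # -> ['html', 'body', 'h1', 'p']
```

The reverse direction is impossible: HTML has no constructs for defining behavior, so you cannot write a Python interpreter in it.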
Given these assumptions, one can easily imagine that HTML is a weaker language than Python.
So if HTML is weak, let's make it stronger! Let's give webpages more HTML header tags than the handful they have now. HTML now has 1 million header tags! Is it less weak now? Does it come closer in strength to Python?
No, because the formal properties of HTML did not change at all, no matter the number of headers. So, are the formal properties of the grammar generator called GPT any different depending on how many animals it has statistical data about? No, the formal properties of GPT's grammar do not change at all, whether it happens to know about 3 animals or a trillion.
While I dislike the silliness that you're alluding to, I think you're using multiple meanings of the phrase 'AI Safety' there all lumped into one negative association.
There are real risks, especially in a profit-motivated capitalist environment. Most researchers don't take the LessWrong in-culture talk seriously, and I'm not sure many people will be able to actually understand that group's concerns given the way you've presented their opinions.
> Most researchers don't take the LessWrong in-culture talk seriously
Yes, but politicians do, for some reason. "AI Safety" has become a meaningless term because it is so broad: it ranges from "no slurs please" through "diverse skin colors in pictures please" to the completely hypothetical "no extinction plz".
“Diverse skin colors in pictures” (and, more critically, the requirement that “AI” vision systems used by governments for public programs work for people of different skin colors) is not so much “AI safety” as the kind of AI ethics issue that the broader “AI safety” marketing campaign was designed to marginalize, dilute, and distract from.