> This is one take, but I would like to emphasize that you can also interpret this as a terrifying confirmation that current-gen AI is not safe, and is not aligned to human interests, and if we grant these systems too much power, they could do serious harm.
I think it's confirmation that current-gen "AI" has been tremendously over-hyped, but is in fact not fit for purpose.
IIRC, all these systems do is mindlessly mash text together in response to prompts. It might look like sci-fi "strong AI" if you squint and look out of the corner of your eye, but it definitely is not that.
If there's anything to be learned from this, it's that AI researchers aren't safe and not aligned to human interests, because it seems like they'll just unthinkingly use the cesspool that is the raw internet to train their creations, then try to set up some filters at the output.