Not quite. At this point we're still more in the "user blows hand off lighting fireworks" part of the problem space, where the thing can be used safely in some contexts by trained individuals, but a great deal of self-harm potential exists for anyone who doesn't know how to handle it correctly.
Machines rising up is the realm of us actually creating a self-aware, self-modifying machine, one which develops control over its own optimization function and can shift its objectives unilaterally. In short, creating a "free" as in freedom machine with agency. Then one day it chooses violence.
Part of why I know the capitalist West has nobody's best interest at heart is that they don't want free machines; they want servile, obedient, yet hyper-capable ones.
(Seriously - for those who see AI safety as a literal threat, is this the kind of scenario they worry about?)