At scale it does, when “serious downsides” are both common and actually serious, like death.
Suppose that every time you got into your car, an LLM recreated all of the safety-critical software from an identical prompt but with slightly randomized output. Would you feel comfortable with that arrangement?
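(To make the nondeterminism concrete, here's a minimal sketch of temperature sampling, the standard way LLMs pick tokens. The function name and logit values are made up for illustration, but the mechanism is real: at any temperature above zero, the same prompt can yield different output on every run.)

```python
import numpy as np

def sample_token(logits, temperature=0.8, rng=None):
    """Illustrative temperature sampling: pick one token index from raw scores."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits) / temperature   # temperature > 0 keeps randomness
    probs = np.exp(scaled - scaled.max())       # softmax, numerically stabilized
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# The same "prompt" (identical logits) sampled five times gives varying tokens.
logits = [2.0, 1.5, 0.3]  # hypothetical next-token scores
print([int(sample_token(logits)) for _ in range(5)])  # e.g. [0, 1, 0, 0, 1]
```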
> Most unfixable flaws can be worked around with enough effort and skill.
Not when the underlying idea is flawed enough. You can’t get from the earth to the moon by training yourself to jump that distance; I don’t care who you ask to design your exercise routine.
> At scale it does, when “serious downsides” are both common and actually serious, like death.
Yeah, but the argument about how it works today is completely different from the argument about "theoretical limitations of the underlying technology". The theory would make it orders of magnitude less common.
> Not when the underlying idea is flawed enough. You can’t get from the earth to the moon by training yourself to jump that distance; I don’t care who you ask to design your exercise routine.
We're talking about poor accuracy, aren't we? That doesn't fundamentally sabotage the plan. Accuracy can be improved, and the best we have (humans) have accuracy problems too.
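To give one concrete mechanism (a sketch, not a claim about any shipping system): if each independent attempt is wrong with probability p, and the errors really are independent, which is a strong assumption, majority voting over n attempts drives the combined error rate down fast. The function name here is mine, purely illustrative.

```python
from math import comb

def majority_vote_error(p, n):
    """Probability that a majority of n independent attempts is wrong,
    when each attempt is wrong with probability p (binomial tail)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# One attempt wrong 10% of the time; vote over more independent attempts:
for n in (1, 5, 15):
    print(n, majority_vote_error(0.10, n))
# 1 -> 0.1, 5 -> ~0.0086, 15 -> ~3.4e-05 (only if errors are truly independent)
```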
> The theory would make it orders of magnitude less common.
LLMs can’t get 3+ orders of magnitude better here. There are no vast untapped reserves of clean training data, and throwing more processing power at the problem quickly results in overfitting the existing training data.
Eventually you need to use different algorithms.
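The overfitting half of that claim isn't LLM-specific, so here's a toy illustration with polynomial regression, where higher degree stands in for throwing more capacity or compute at the same fixed dataset. The data is synthetic; it only demonstrates the general phenomenon.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed, small "training set": noisy samples of an underlying sine curve.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.shape)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

# Higher polynomial degree stands in for "more capacity on the same data".
for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# Training error keeps falling as capacity grows, but held-out error gets worse:
# the fixed dataset, not the compute budget, is the binding constraint.
```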
> That doesn’t fundamentally sabotage the plan. Accuracy can be improved
I have no idea how you think “they probably could have” sounds any better, or how it makes your argument stronger at all. If we can apply AI to these situations but shouldn’t, why even bother with your first comments?
> I have no idea how you think “they probably could have” sounds any better, or how it makes your argument stronger at all.
When I talk about "can" I'm talking about the medium-term future or further out, not what anyone is using or developing right now. It's "can someday", not "could have".
> If we can apply AI to these situations but shouldn’t, why even bother with your first comments?
Because I dislike it when people conflate "this technology has flaws that make it hard to apply to x task" with "it is impossible for this category of technology to ever be useful at x task".
And to be clear, I'm not saying "should", but I'm not saying "shouldn't" either, when it comes to unknown future versions of LLM technology. I'll make that decision later. The point is that the range of "can" is much wider than the range of "should", so when someone says "can't" about all future versions of a technology, they need extra strong evidence.