You will never define a formal system that is both complete and consistent (once it's rich enough to express arithmetic). You will never exactly measure both the momentum and position of a particle. These aren't brutally hard problems; they're not possible. We know that a whole set of things is impossible. Why is recreating human intelligence in silicon beyond that consideration?
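To be concrete about which results those are (my paraphrase): Gödel's first incompleteness theorem, which says any consistent formal system that can express arithmetic contains true statements it cannot prove, and Heisenberg's uncertainty principle, which puts a hard floor on joint precision:

    \sigma_x \, \sigma_p \ge \frac{\hbar}{2}

Neither is an engineering obstacle you can out-clever; they're theorems about the structure of the problem.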
Things are impossible till they aren't. It's not like we solved Go the board game, but we did build some killer AI which has now surpassed all human players. And that happened within a matter of years: first it was a fluid game of human intelligence, impossible for a machine; then it was obvious machines could be better... but look at this other thing humans are still better at.
Google's AlphaZero AI went from not knowing chess at all to beating the best current software (Stockfish) in mere hours. It's a whole new world out there.
Stockfish isn't an AI, just an expert system. It was designed from the ground up to play chess.
AlphaZero is generalized AI. It was designed to learn how to do things - like play chess. The fact that it easily beat a custom-designed expert system with only a few hours of learning time is incredible. It's a different order of complexity altogether. It may not be human intelligence, but it's a great deal closer to how humans function than to how machines (like Stockfish) function.
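For a feel of what "designed to learn" means, here's a toy self-play learner in Python. This is my own purely illustrative sketch, nothing like AlphaZero's actual architecture (which pairs a deep network with Monte Carlo tree search), but the core loop is the same idea: start knowing only the rules and improve by playing yourself.

    import random
    from collections import defaultdict

    # Toy self-play learner: a table of value estimates over board states.
    # AlphaZero replaces this table with a deep network plus tree search,
    # but the core loop -- improve by playing against yourself -- is the same.

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    value = defaultdict(float)   # state -> estimated value from X's point of view
    ALPHA, EPSILON = 0.2, 0.1    # learning rate, exploration rate

    def play_one_game():
        board, player, history = ["."] * 9, "X", []
        while True:
            moves = [i for i, s in enumerate(board) if s == "."]
            if random.random() < EPSILON:
                move = random.choice(moves)            # explore a random move
            else:                                      # otherwise play greedily
                def score(m):
                    trial = board[:]
                    trial[m] = player
                    v = value["".join(trial)]
                    return v if player == "X" else -v  # O minimizes X's value
                move = max(moves, key=score)
            board[move] = player
            history.append("".join(board))
            w = winner(board)
            if w or "." not in board:
                reward = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
                for state in history:                  # nudge every visited state
                    value[state] += ALPHA * (reward - value[state])
                return
            player = "O" if player == "X" else "X"

    for _ in range(20000):
        play_one_game()
    print(f"learned value estimates for {len(value)} positions")

After a few tens of thousands of games it plays decently without ever being told a single strategy, which is the point, vastly scaled down.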
mhermher's examples are 1) proven impossible, and 2) impossible until we get a completely different theory of physics, respectively. "Until they aren't" makes a nice glib dismissal, but it fails to address the actual impossibility that mhermher has pointed out.
Now, neither of those examples is actually relevant to the topic at hand, namely, AIs driving cars. The question then is: what category are AIs driving cars in? The Go-and-chess category, the logically-or-physically-impossible category, or something in between? My guess is "something in between". But "until they aren't", while it may apply to that category, may still mean "longer than your lifetime".
That assumption may be technically correct (I don't think it is, but that's a different conversation), but it's very likely to be practically wrong in the "galactic algorithm" sense. By that I mean that the complexity is so many orders of magnitude off that it's intractable with traditional binary computing. The pre-programmed car would melt its way through the concrete.
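To put rough numbers on that intuition (back-of-the-envelope only; the branching factors are illustrative, not measurements of any real driving system): if each step of a drive offers b distinguishable situations and a trip is d steps long, a pre-programmed lookup needs on the order of b^d entries.

    # Illustrative only: why enumerating situations up front blows up.
    # With b choices per step over d steps, distinct sequences = b**d.
    for b, d in [(10, 10), (10, 50), (100, 50)]:
        print(f"b={b:3d}, d={d:2d}: {b**d:.1e} sequences")

    # There are roughly 1e80 atoms in the observable universe; even
    # b=10 exceeds any conceivable lookup table well before d=100.

The table stops fitting in reality long before d gets interesting, which is why learning a policy beats pre-programming one.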
But we actually understand so little about how the brain works, let alone how intelligence emerges from it.
What I am saying is that AGI may be impossible, but people are so convinced that it's just around the corner given enough hardware and clever enough software.
It's just around the corner, and we don't even know whether it is possible.
Perhaps it is, perhaps it isn't. But we can now equal, and sometimes beat, human perception on some standard machine-learning datasets, and perception is one very large piece of the problem. The other large piece shows some signs of cracking: consider the recent successes in Go and poker, where AI can now beat professional players in both games. Agreed, these are a different class of problem from real-time learning in a dynamic environment, but there is interesting progress here. And the recent success with transformer networks understanding text is incredible. I'd wager that we're less than 20 years away from a useful general intelligence that could, say, earn money on Mechanical Turk.
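For the curious, the core of those transformer networks is "scaled dot-product attention", and it's small enough to sketch. Here's a minimal NumPy version (toy sizes and random weights of my choosing, purely illustrative; real models stack many such layers with learned weights):

    import numpy as np

    def attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention over a sequence of token vectors."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries/keys/values
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token attends to each other
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
        return weights @ V                        # each output mixes information across tokens

    rng = np.random.default_rng(0)
    seq_len, d_model = 5, 16                      # 5 tokens, 16-dim embeddings (toy sizes)
    X = rng.standard_normal((seq_len, d_model))
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
    print(attention(X, Wq, Wk, Wv).shape)         # (5, 16)

Each output row is a weighted mix of every input token, which is what lets these models relate words across a whole sentence in one shot.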