> Not to discount the research or its usefulness, but I feel uncomfortable with sterile language that, to me, implies that humans are nothing more than accidental automatons that can eventually be replicated and replaced by AI.
>
> Aren't we more than that?
Are we more than that? What evidence do you have, aside from "God loves us"?
You are carrying around an amazing machine in your noggin, but I haven't seen any convincing evidence that it can't be duplicated. As deep as we have looked, it is all laws of physics, with chemistry on top of that and biology on top of that.
We definitely haven't cracked the "self-learning algorithm" yet. You can show a person a couple of pictures of a dog, and they can construct a 3-D mental model of a dog and what it looks like from all angles and in all poses. Yet you have to give an ML/DL system thousands of examples to do the same thing.
How much of that modeling ability is baked into the human genetic code? How much is learned as a toddler playing with blocks, figuring out basic physical properties like object permanence, gravity, and friction just by watching and interacting?
> You can show a person a couple of pictures of a dog, and they can construct a 3-D mental model of a dog and what it looks like from all angles and in all poses. Yet you have to give an ML/DL system thousands of examples to do the same thing.
By the time a human brain can do this, hasn't it also been exposed to thousands of examples? How do we learn to speak, for example? It's a long, painful process of trial and error. How is that significantly different from a supervised learning algorithm? Teaching a child to catch is very similar: at first they can't recognize the trajectories of thrown objects very well, and they aren't coordinated enough to move their hands to where the projectile is. After time and many repetitions, they get to the point where they can catch a ball without consciously tracking it, or can even catch without looking, based on the initial trajectory.
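To make the analogy concrete, here is a minimal sketch (my own illustration, not anything from the thread): a toy supervised learner that "learns to catch" by predicting where a projectile lands, improving only through repeated throws and error feedback. The feature choice, learning rate, and ranges are all assumptions picked for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.81  # gravity, m/s^2

def landing_distance(speed, angle):
    """True landing distance of a projectile launched from ground level."""
    return speed**2 * np.sin(2 * angle) / g

# Thousands of "throws": random launch speeds/angles plus slightly noisy observations.
n = 5000
speed = rng.uniform(5, 20, n)     # launch speed in m/s (assumed range)
angle = rng.uniform(0.2, 1.3, n)  # launch angle in radians (assumed range)
observed = landing_distance(speed, angle) + rng.normal(0.0, 0.1, n)

# Hand-picked feature so one weight suffices; the learner should converge toward 1/g.
feature = speed**2 * np.sin(2 * angle)

w = 0.0      # at first the "child" has no idea where the ball will land
lr = 1e-5    # learning rate

for epoch in range(200):
    pred = w * feature                   # guess where each throw lands
    error = pred - observed              # how far off each guess was
    w -= lr * np.mean(error * feature)   # nudge the weight to reduce squared error

print(f"learned weight: {w:.4f} (true value 1/g = {1/g:.4f})")
print(f"mean abs error after training: {np.mean(np.abs(w * feature - observed)):.3f} m")
```

The point is only that, as with the child, the accuracy comes entirely from many repetitions and error correction, not from any explicit model of ballistics.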
What makes you so sure an "artificial" system has no qualia?
Do you find that hard to believe? Why?
Why could having qualia not be some kind of inherent, perhaps emergent, property of all systems complex enough to be called intelligent? And why is this an issue at all? This could be some unknowable, mysterious metaphysical property that has no practical bearing on anything.
What is the angle here? Are you worried it is important? There is literally zero indication that it is important, or am I missing something here?