AI is repeatedly redefined to exclude things that machines can now do, things that were previously considered AI. Examples include image and character recognition, speech recognition, and machine translation.
Each time, we say 'oh well, that's not actually the machine thinking, it's just searching a database / applying statistics / guessing', so it's not AI. Except we don't really know what it would mean for a machine to think, or to understand text.
I thought AI was about one algorithm, or something like that, which could do general thinking like a human. It's like: you recognize images, characters, speech, and then what? What will you do with that?
Of course you could argue a child can tell a cat is a cat only because someone taught the child so. But the point is that a human baby left in the jungle right after it was born, without any outside help, can still do a lot of intelligent things. Can a machine do the same?
"I thought AI was about one algorithm or something like that could do general thinking like a human."
I think that was the view until the early 90s or so (although it may be popular again - AI research does seem rather cyclical): encode a sufficiently rich database of "knowledge", fire up your "inference engine", and general intelligence results. What "knowledge" you need to bootstrap an intelligence, how to represent it, and what "inference engine" to use were the source of much debate (and even more funding). This didn't really deliver an awful lot, so things like multi-agent systems and neural networks became popular (or, in the case of neural networks, became popular again).
You are very right. That's why I divide the world between Cognitive Psychology, which is "How intelligence really works" and Artificial Intelligence which is "Replicating and improving intelligence with machines." The former is a very long drawn out field. (The research projects that the professor is talking about have been going on for 20+ years.) The latter is difficult. So far they can only tackle "Making useful things that don't represent how humans do it" or "Modeling how humans do things in a way too simple to be useful".
The gaps between those are closing, but it seems like the field is actually pushing both boundaries in the opposite direction. We're making very useful things that look less and less human. (Think Google search) We're also understanding the human mind much better (Think connectionist models) but it's still hard to get them to produce real things. This isn't an awful thing. The field is still much better off than 20-40 years ago when it hyped universal general intelligence that could also be useful.
I would say that these algorithms display intelligence to the extent of their input and output abilities.
You won't get what Hofstadter considers as "real" AI (artificial general intelligence, or AGI) without physical robots who need to display survival instinct and skills in order to keep on working.
I'm not sure I want it to happen, to be honest, but I'm sure it will.
They don't have to mimic each and every quirk of humans to do so.
For example, human language is ambiguous and slow to produce and to understand; packet-based networking and binary protocols are far more efficient. The same goes for our cognitive biases, even though some of them may, counterintuitively, be useful for survival. A fast and energy-efficient algorithm that works most of the time can be better than an exact one that's more expensive. See Bloom filters, for example.
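For the curious, here is a minimal sketch of the Bloom filter idea in Python (the class and parameters are illustrative, not any particular library): a handful of bits and hashes give a fast membership test that can return false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: a cheap membership test that may report a false
    positive but never a false negative."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _positions(self, item):
        # Derive several bit positions from salted hashes of the item.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True means "probably present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("cat")
print(bf.might_contain("cat"))   # True
print(bf.might_contain("dog"))   # almost certainly False
```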
There is no reason a robot body is necessary for intelligence. It might even hinder it as the real world is much more messy and complicated than cleaned up and relevant data fed to machine learning algorithms.
You could probably bootstrap it in a virtual world too, that's true. I don't know if the skill acquired there would easily translate to the mechanical world, though.
Personally, I define AI as a system that can be fed unbiased input and learn from that input to derive conclusions that we as humans can relate to.
So feeding a system lots of cat pictures (i.e. biased input) to teach it what a cat is, for me, is not AI. But a system which you feed lots of random pictures and which learns by itself what a cat is, that would be AI, at least for me.
What would be really interesting is a system into which you could feed the entire children's section and see what comes out at the end; that would be most insightful into how we teach children and what we teach them. So that alone is a completely different area of AI use: learning how to learn better.
Good to know. Though, thinking it through, any unsupervised learning would have to have some reference points, be they hard-coded into the core system or in the form of supervised learning. Otherwise the AI could recognise a cat but would know it by a completely different name, and unless it had been given a good description of a cat to associate with such image forms, or a picture with a label, it would never know what a cat was in the way we know them. Which may be a good or a bad thing. But if the AI refers to cats as 478912's then we would not know what it was on about, and whilst it may be intelligent, it would be so in a way that we would be unable to understand and relate to. Ironically, I suspect that if you had a top-end AI system and asked it what defined AI, it might very well come back with the answer 42, which many would understand, though not comprehend.
Well, you can show it a photo of a cat and ask it "what's this?", and it'll say "this is a 478912". That's how clustering works. Obviously it's not going to just speak to you, but there are ways of extracting the categories out of it (obviously, otherwise why run it if you can't read the results?)!
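To make that concrete, here is a rough sketch of the clustering idea (assuming scikit-learn and some pre-computed image feature vectors; all names and data here are illustrative): the machine assigns each picture an arbitrary cluster id, and only a human inspecting the clusters attaches the word "cat" to one of them.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for image features: two blobs, e.g. "cat-like" and "dog-like" pictures.
features = np.vstack([rng.normal(0, 1, (50, 64)), rng.normal(5, 1, (50, 64))])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(model.labels_[:5])            # e.g. [1 1 1 1 1] -- the machine's "478912"
print(model.predict(features[:1]))  # "what's this?" -> a cluster id, not a word
```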
>>But a system which you feed in lots of random pictures and it learns by itself what a cat is, that would be AI, least for me.
A very good definition. Also, adding to your point: let's say a machine is fed a billion pictures. Can the machine automatically categorize them by reading through them? It doesn't matter if it refers to a cat by some alphanumeric name like 'ab12er' or a dog as 'p09iuy'.
But it should be able to categorize them. Then it should be able to read some encyclopedia or other source of information and study the behaviours of 'ab12er' and 'p09iuy'. Or the opposite: see 'ab12er' and 'p09iuy', recognize them, and describe what they are.
Seems to me you are defining AI as a generalized categorization algorithm. This sounds very narrow as it doesn't take into account any actual intelligent behavior. For example, I believe constructing tools in order to accomplish a specific goal (eg, constructing a net in order to catch fish) to be intelligent behavior. The categorization AI you describe would be incapable of this. Though it does sound like it would be a necessary prerequisite.
Thing is, though, by giving it an encyclopedia you are feeding in a bias through a form of base reference. Thinking it through, a system could learn by itself what a cat was, but it would not know it by that name. With that, the creation of AI could be truly artificial and intelligent, but in a form we do not comprehend or even understand; yet it could be, in its own right, intelligent.
That's called unsupervised learning and it already exists. Supervised learning is more practical though. I'm not sure why you distinguish them because they are both AI and sometimes even the same algorithms.
This is called Deep Learning. It can automatically form clusters from unlabelled training data and that is why there is a lot of excitement around it right now.
SYSTRAN at that time was a rule-based machine translation system and was comparable for European languages.
Whereas Google's newer Translate and Microsoft Translator are statistical machine translators (both trained with UN and EU documents that are available in several languages).
The future is probably a hybrid system (combination of rule-based and statistical) like the most recent SYSTRAN version.
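As a toy illustration of the statistical idea (nothing like Google's or Microsoft's actual pipelines), word translations can be guessed purely from co-occurrence counts in a sentence-aligned corpus, with no hand-written grammar rules; real systems use EM-trained alignment models and phrase tables rather than the raw counts used here.

```python
from collections import Counter, defaultdict

# A tiny, made-up "parallel corpus" standing in for the UN/EU documents.
parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
    ("the dog eats",   "le chien mange"),
]

cooc = defaultdict(Counter)   # how often each source word co-occurs with each target word
tgt_freq = Counter()          # overall frequency of each target word

for src, tgt in parallel:
    tgt_words = tgt.split()
    tgt_freq.update(tgt_words)
    for s in src.split():
        cooc[s].update(tgt_words)

def translate_word(s):
    # Pick the most strongly associated target word, normalising away
    # very common words like "le".
    return max(cooc[s], key=lambda t: cooc[s][t] / tgt_freq[t])

print(translate_word("cat"))   # chat
print(translate_word("dog"))   # chien
```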
And the domain of things we can empirically do better in the discrete-task realm keeps diminishing, e.g. 'design', unsolvable captchas.
If we keep throwing morsels of cognitive capability to the machine, we're left wondering why the fuck we couldn't predict and develop Flappy Bird as a killer start-up.
This is where I find X-Men to be an awesome concept. What we are chasing in AI right now is Magneto (the might), but every time we reach there, we compare ourselves with Prof. X (the intellect).
I actually like this direction. Intellect is about predicting problems, and might is about solving them. It is better for humanity to stay on the intellectual side and let machines handle the hard work. Imagine a choice between AI propelled super efficient kill-bots vs. Skynet, if you will.
I might say though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn't about making money, and the people in the field weren't driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That's a beautiful paradox and very exciting, philosophically.
Unfortunately all these well intentioned AI professors built nothing and the field devolved into LISP hacking. Now our choice is "accurate model" or "useful expert system". At the time it bothered me that we couldn't do both. Now I realize that it's ok for the model to be imperfect if the results are useful.
The general thread also convinces me that Skynet type AI is still far off.
This is why I am so skeptical of the claims of people like Ray Kurzweil. I honestly don't think we yet have even an outline of an idea of how a true artificial mind might be architected, let alone implemented.
If we can't even design something, even in outline, how can we possibly predict how long it might take to build it?
Skeptical or not, we've already sequenced a complete blueprint for it that is running on billions of copies of firmware in the wild. True, we can't even emulate the firmware ourselves.
But we know it's possible and literally have a copy of the code that builds it. We just have to figure out how to emulate it. That puts a hard upper limit on how long all this can take. (It can't be "forever" since there is no requirement for the emulation to happen in real-time.)
Given that a human brain is like 3 pounds of goo, and we have the genetic code that builds it, the rest is just biological reverse engineering + code refactoring of a binary blob without comments.
I am not saying this process is easy, but the idea that we can't make some solid predictions is fairly weak. We already do genetic engineering.
I wouldn't place much money on a true artificial mind still not existing in 40 years.
> I wouldn't place much money on a true artificial mind still not existing in 40 years.
We said that in the 50s, and every decade since then. While we understand a lot more about the brain, there are still gaps in our knowledge and in our ability to scan a working brain.
It may turn out to be a system with sensitive dependence on initial conditions.
There are about 85 billion neurons in a human brain. (In transistors, that's only about 45 or so of Intel's 10-core Xeon Westmere-EX chips.) That number is a relatively recent refinement of the old 100bn figure. Not finding out how many neurons we have until the 21st century makes me think that there is plenty of work left.
Right, my reference to the fact that we still use flat CPUs is in part referring to the transistor count. Also, you are referring to a commercial mass-produced CPU that debuted at $4200 nearly three years ago, and was considered overly expensive at that price.[1]
And you're comparing this tiny single package to the human brain directly, not even multiplying by a farm of them, which would be more than reasonable if we knew what we were trying to emulate.
The issue, as you state, is that we don't really know what's going on. However there is no reason to assume a fundamental barrier to continued innovation as we learn more and more, and computers can do more and more.
BTW, I wonder how much AI is based on "facts". If someone tells me he slept badly last night, there are so many assumptions I make subconsciously. I assume he lives in a house/apartment, he slept in a bed, he slept on the bed, on a mattress, the mattress is on the bed and not the bed on the mattress, he stays on it because of gravity, etc. To make meaning out of sentences you have to know a lot. Your first years in life may be nothing but acquiring this knowledge.
That idea has been kicking in the AI community for a long time. One of the biggest projects for encoding every day knowledge in a machine readable way is Cyc: http://en.wikipedia.org/wiki/Cyc
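A toy sketch of that style of approach (nothing like Cyc's actual CycL representation or its scale) might store everyday facts as triples and chain a hand-written rule over them to recover the unstated assumptions behind "he slept badly last night":

```python
# Everyday "facts" as (subject, relation, object) triples; all names are made up.
facts = {
    ("person", "sleeps_in", "bed"),
    ("bed", "has_part", "mattress"),
    ("bed", "located_in", "dwelling"),
    ("person", "lives_in", "dwelling"),
}

def infer(facts):
    # One rule: if X sleeps_in Y and Y is located_in Z, then X sleeps inside Z.
    # Real systems have millions of facts and far richer logic.
    derived = set(facts)
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == "sleeps_in" and r2 == "located_in" and b == c:
                derived.add((a, "sleeps_inside", d))
    return derived

for triple in sorted(infer(facts) - facts):
    print(triple)   # ('person', 'sleeps_inside', 'dwelling')
```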
Yes, because it is general purpose intelligence. Google Translate can't even play the simplest of games, let alone Chess or Jeopardy. Watson can't even begin to tackle translation or Chess. Deep Blue has absolutely no capability towards translation or playing Jeopardy. They are fixed-function, single-task algorithms optimized towards a single very tightly defined problem domain. That sort of research isn't going to get us to general purpose AI.
Cats cannot play the simplest of games; does that mean that they do not have general purpose intelligence?
And I do think this research is going to take us there; what is winning a game other than translating your opponent's actions into yours? Like translating a facial gesture in a poker game (or translating a word) into an action such as doubling inside the game (or an HTTP response).
Actually cats can play catch. So even by that definition they have some rudimentary intelligence.
Cats aren't as intelligent as dogs, and they aren't social animals but they have some intelligence. They can learn on their own to navigate surroundings, move, catch prey and mate.
I'd love to see Watson do any of these without pre programming.
>>I'd love to see Watson do any of these without pre programming.
I'd love to see a human do anything without any kind of pre-programming (I mean without giving it knowledge and training it to use that knowledge).
If I take a tribal person from the Amazon jungle who has never seen the outside world and disclose to him the rules of chess, will he be able to play as well as Garry Kasparov in a few minutes? Or if, say, I disclose the usage of a paintbrush and paint, can he paint like Michelangelo or Da Vinci?
Each human is a self-learning, self-programming machine.
Humans, including us, can do so many things only because our brains are being programmed with information every single moment and are being told how to act on that information.
The problem with your description is that if you take a human from the jungle and leave him in New York, he'll be able to function on at least some level as long as he understands the language.
In the same manner, if you take Watson and, for example, change the format of the questions (still asked in plain English, just rephrased) and the way the Daily bonus prize functions, it won't be able to cope without some human coming in and tweaking it.
You generally don't have to open a human's head, rewire their brain, and then send them off to do another task. They adapt. Autonomously. It's as if software could automatically investigate sites for a weather API and adapt to it, instead of having a human come and rewire the API adapters.
You haven't spent much time around cats (or haven't paid attention) if you think they aren't social animals.
Our various cats when I grew up would bring "friends" around - cats of both genders that they played with, and who would be allowed into our garden. Some of them were "introduced" to us - our cat would walk up to us with his friend in tow and stay until we'd pet his friend.
You'd often find them lying on our patio together during the summer. They'd also occasionally groom each other.
Our current neighbour's oldest male cat sometimes "walks" the two young cats she recently got around the neighbourhood.
You misunderstood me; cats aren't social animals in the sense that domesticated cats and their descendants aren't likely to form and hunt in a group. The majority of their lives is spent in solitude. Cats do need contact, but most contact in the 'wild' (i.e. without human supervision) is spent either fighting or mating, not socializing for some benefit.
Humans kind of forced the whole social aspect onto their lives. I read somewhere that a cat holding its tail upward is a new construct in cats; it's usually used by kittens around their mother.
The behavior you describe is rather unusual. Here where I am, two cats are in no way likely to sleep near each other; they keep at least two feet of distance. Most contacts are violent.
Rudimentary intelligence? How primate a thing to write. Cats and dogs evolved with different priorities, as did primates, and have ingenuities that surprise one another, except when the others stop watching. I've seen cats learn how doors work just by watching: doorknobs, etc. They are clever, geometrically, but humans seem to resonate with the more vocal/verbal and generally cooperative nature of dogs, as it's a very primate characteristic.
I think this is because dogs are friendlier and more willing to learn from us, whereas cats are very selfish :)
Either way, people claiming that either dogs or cats aren't very intelligent haven't lived with one. It is true that it is not human intelligence, but personally I feel that the only piece missing is natural language, which, if you think about it, is the only distinctive trait separating us from other primates.
And this will be the ultimate test for AI, the ability of a computer to have a meaningful conversation with a human.
I think dogs are more intelligent BECAUSE they are social animals. Social animals need to do everything a solitary animal does, plus be able to know what their peers are thinking or about to do in order to coordinate.
Dogs can fake emotion (ever been bitten by a dog that was wagging its tail?), know what appeals to humans (sending the cutest or most wounded pup to beg for food), understand how the subway works, etc. Cats have greater independence, but overall aren't as clever.
Crows and killer whales, now those fuckers are intelligent.
The only thing exceptional about the human mind is the ability to be EXTREME in every aspect. Most creatures can do much the same things we do, but we do them to a higher degree.
Speaking of social animals, spotted hyenas are very clever and groups of hyenas are not only very big, but have very complex rules for social interaction.
On the human mind, I do have a problem with assertions such as yours - saying that we can be "extreme" doesn't say much about how we are built or why other animals can't do it. We definitely don't have the biggest brains.
Sometime in the evolutionary process, we developed the ability to speak. Chimpanzees have symbolic capacities which are rarely used in the wild. Something happened to us, some social change, and we've been practicing this ability for tens or hundreds of thousands of years.
And speech is tremendously important because that's how we learn - we pass and receive knowledge to and from others by means of natural language. Society also leaped forward along with agriculture because that's when written language happened, also allowing us to pass knowledge to future generations. We also leaped forward when common people started learning to read. And because of the ease of access to information nowadays, I also believe we're amidst another revolution.
Now if you look at animals, they do have language. Most intelligent animals rely on body language and even sounds to communicate. But one thing that we do effortlessly is to invent new words, new metaphors to describe whatever we want and our language has gotten so big that we can describe anything.
So there's a strong correlation there and the question on my mind is - are we smart because of the ability to communicate, or are we able to communicate because we are smart?
Well, I haven't said or hypothesized why that is so.
Chimpanzees and crows can make tools, we make tools that make tools that make tools.
Animals have language(s), we have several highly symbolic languages. Ours is just more sophisticated.
I'm pretty sure there are examples of animals empathizing, humans can empathize with a large part of biosphere.
There is nothing that fundamentally divides us. Or you could say that humans are nothing special. It's just that we do most mental tasks to greater lengths and more consistently. That's all.
Rudimentary intelligence, based on what my parent wrote. I'm sure nearly all creatures have some rudimentary intelligence. I'm definitely not classifying them on the same level as worms or insects.
I doubt much is encoded other than a small set of needs they have to fulfill. Keep in mind kittens pretty much suck at walking and balancing, but learn eventually.
They are no more pre-programmed to fight than they are pre-programmed to open doors.
As for the Roomba, it depends on whether you've caught it fighting or mating with other Roombas ;)
Any cat owner knows that cats are actually smart enough to train YOU rather than the other way around. So I personally feel that you might be underestimating cat intelligence. The fundamental difference I feel they have from our AI programs is their general-purpose ability to learn whatever concerns them, like observing human behaviour patterns and learning a ton of things from them, even how to open doors, etc.
Cats have limited, but general-purpose, intelligence. Compare a kitten to a computer with computing power but without intelligence. Maturing from kitten to cat gives the animal wisdom and intelligence. That maturation is nothing like going from Watson as pure hardware (the kitten) to Watson with its AI software (the cat).
A computer recently designed a published board game called Yavalath, by deciding whether randomly generated games are interesting or not. That algorithm wouldn't be difficult to integrate into Google if they decided to do it. As you add more and more capabilities to Google at a more abstract level (design something "fun" like here), it's closer and closer to general purpose AI. It's not just that, see also http://www.gamesbyangelina.org/ .
Did the algorithm decide what the characteristics of an interesting game are, or were they hard-coded? Could it analyze the interestingness of games generally? Could it analyze why some people like some games and other people don't?
If it couldn't do these things, then it was in no way intelligent, because it was not analyzing at a conceptual level. Programs that simply optimize data towards a set of target characteristics are clever, but not intelligent. Hofstadter goes into this in the article.
You're making exactly the same mistake those over-optimistic AI researchers made back in the 60s. They created some whiz-bang optimisation algorithms and thought general purpose AI must be just around the corner, but it turns out that actual conceptual analysis and reasoning is a completely different and fundamentally unrelated problem.
That's an arbitrary distinction. The goal might have been decided by human programmers, that doesn't mean the process isn't itself intelligent. You are right it is using a crude optimization process that is a lot weaker than human intelligence, but it's still "intelligent".
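Something like the following generate-and-test loop captures that "crude optimization" idea (the rule parameters and the interestingness heuristic here are made up for illustration; the actual Yavalath work evaluates candidate games far more carefully):

```python
import random

def random_ruleset():
    # A made-up space of candidate board-game rules.
    return {
        "board_size": random.choice([4, 5, 6, 7]),
        "win_length": random.choice([3, 4, 5]),
        "lose_length": random.choice([0, 3, 4]),   # 0 = no losing condition
    }

def interestingness(rules):
    # Made-up stand-in heuristic: real systems estimate things like game length
    # and decisiveness by playing the candidate game, not by a fixed formula.
    tension = 1.0 if 0 < rules["lose_length"] < rules["win_length"] else 0.2
    depth = rules["board_size"] / rules["win_length"]
    return tension * depth

candidates = [random_ruleset() for _ in range(1000)]
best = max(candidates, key=interestingness)
print(best)
```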
It's worth pointing out that Google just bought the startup with the guys who wrote the "Playing Atari with Deep Reinforcement Learning" paper, which is supposedly just an algorithm working on the raw pixels that then beats all other programs and half of the human experts.[1]
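For a sense of what reinforcement learning means here, below is a toy tabular Q-learning loop on a one-dimensional corridor (not the DeepMind system; DQN replaces the table with a deep network fed raw pixels, but the learn-from-reward structure is the same):

```python
import random

n_states = 6                                 # a 1-D corridor; reward sits at the right end
Q = [[0.0, 0.0] for _ in range(n_states)]    # Q[state][action], action 0=left, 1=right
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(s):
    # Epsilon-greedy, breaking ties randomly so early episodes still explore.
    if random.random() < epsilon or Q[s][0] == Q[s][1]:
        return random.choice([0, 1])
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(500):
    s = 0
    while s != n_states - 1:
        a = choose_action(s)
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 2) for q in Q])   # learned state values rise toward the goal
```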
You could run all of these programs on one computer. Comparing a person to a piece of software is apples to oranges. Build a computer big enough to run all of the software in the world (and all of the data, too) and compare those capabilities to a human.
I am sure basic algorithmic techniques like knowing how to sort an array are likely to be very useful when building an AI, or at least some forms of AI, but an array sorting algorithm is not itself a general purpose artificial intelligence and neither are Deep Blue, etc.
As Hofstadter explains in the article, far better than I could, these applications are not contributing to progress in developing software that understands the data it is manipulating. I recommend reading the article, or even better GEB itself, it's very illuminating.
This. And also, is it truly a general purpose intelligence, or a special-purpose one which evolved into general by just extending the domain of application?
I'd say that most of what is considered intelligence comes from pattern matching plus inference; and there are a number of 'illusions' and cognitive biases that trick the mind because one of these processes fails. This may, for example, be finding patterns where there are none, which is quite normal.
In my opinion, it's not. Natural intelligence is another way of saying that dynamics and interactions of millions of neurons in our brain is too complicated for us to understand. Our brain is just a complex machine, and the complexity at the level of its individual components is based on very fundamental laws of physics and chemistry.
In that sense, you bring up an excellent question. I don't think there exists any unique "natural intelligence" which Hofstadter in the linked article is referring to. Watson is only different from "natural intelligence" in terms of complexity.
Conway's Game of Life can perhaps be used as an analogy here. The uninitiated is likely to assume a very complex source code upon observing setups like 'spaceships' and 'glider guns'. However, it all is based on some very simple rules which can be easily implemented by a high school computer science student.
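For reference, the complete rule set really is tiny; a sketch in Python:

```python
from collections import Counter

def step(live_cells):
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbours,
    # or 2 and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))   # the same glider shape, shifted one cell diagonally
```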
I'd like to extrapolate this logic towards the concept of 'life' itself. I feel that what we call life is just a collection of natural processes too complicated for us to comprehend completely. Scientific research has allowed us to understand biological processes to some extent, but not enough for us to deduce the state of a living entity at t+1 by observing its state at t=0. The common assumption is that there is some independent/supernatural force (consciousness) which allows the living entity to 'choose' the new state at t+1 (free will).
Watson appears to be "intelligently" making a chess move, but as Hofstadter points out, Watson is simply following a set of rules. We can independently calculate Watson's move since we know its source code.
My conjecture is that if we were theoretically able to capture the complete state of your brain cells at t=0, and understood how they work, we could theoretically calculate your chess play before you make it.
A white blood cell (WBC) appears to be a living entity, acting out of free will. But the reality is that the molecular composition of the WBC is merely attracted to the chemical trail left behind by the bacterium, which in turn is repelled by the WBC.
We do not consider a rock rolling down a hill to be a living entity, because our understanding of physics allows us to calculate the state of the rock at t+1, given its position at t=0.
We've built planes that fly, yet don't flap like birds, and submarines that swim, yet don't have fins. I suspect whatever ultimately ends up being true general purpose useful AI may not look anything like a human brain, but achieve a similar and useful end result (much like planes and birds fly using very different mechanics, each having pros/cons and each being valuable).
That is like trying to create a bird instead of understanding the principle of flight. By understanding the principle of intelligence we can do better than our current level of intelligence. One contender for understanding this principle is AIXI: http://www.youtube.com/watch?feature=player_detailpage&v=V6u....
The brain is just a configuration of matter. It seems like recreating this configuration is inevitable. And isn't it a safe assumption that if it's configured the same way, it will behave the same way?
Once we can understand the system, we can synthesize it in software. AI, in the meantime, seems like a guessing game.
Personally, having worked on AI research in the late 80s and early 90s I don't observe a lot of progress in replicating general intelligence. Of course, there has been a huge amount of progress in special purpose systems leveraging the huge amount of computing power we have now - but the fundamentals of general intelligence look as puzzling now as they did 20 years ago. I don't believe that we really have the right algorithms and it's simply a matter of scaling up the underlying hardware.
I think the most likely way to achieve a "real" general intelligence will be by reverse engineering how functioning brains and minds work. At least is an area where continual, albeit rather slow, progress is being made - eventually we will know how the mind works (assuming that there isn't anything fundamentally weird going on - which seems unlikely).
Once we understand how minds actually work I suspect there should be a good chance that we can upgrade/optimize these structures - either by architectural improvements or by throwing more resources at the areas that constrain current performance in biological brains. At that point things really could get rather exciting - but I don't expect this to happen in my lifetime (I'm in my 40s).
Thanks! I have no background in AI but this just makes sense to me. This is why the Blue Brain Project [0] etc. are exciting. So far there is less tangible output than from traditional AI research, but it seems to be the only front focusing on true "intelligence".
Edit: I admit I didn't read the article before replying here. My thoughts were already better articulated there: "They're not studying the mind and they're not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They're doing product development."
Surprised at the traction this interview is gaining. This isn't all that different from the amateur blog post from a few days ago saying the same thing, where everyone educated the poster on the subject of modern AI development.
Almost all the comments from that thread can be applied to here. People with this point of view suffer from a fundamental misunderstanding of what natural intelligence is in my opinion.
The way intelligence works, in my opinion, is this:
1) experiences are stored in the brain. Experiences contain inputs from the 5 senses as well as the sense of danger/satisfaction at that point.
2) at each given moment, the brain takes the current input and matches it against the stored experiences. If there is a match (up to a threshold), then the sense of danger/satisfaction is recalled. Thus the entity is able to 'predict', up to a specific point, if the outcome of the current situation is bad or good for it, and react accordingly.
The key thing to the above is that the whole process is fused together: the steps for adding new experiences, matching new experiences, and recalling reactions are fused together in a big pile of neurons.
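A minimal sketch of that model (all names and numbers are illustrative): store (input, valence) pairs, match new input against them by nearest neighbour, and recall the valence when the match is within a threshold.

```python
import math

experiences = []          # list of (input_vector, valence), valence in [-1, 1]

def remember(inputs, valence):
    # Step 1: store the experience along with its danger/satisfaction signal.
    experiences.append((inputs, valence))

def predict(inputs, threshold=1.0):
    # Step 2: match the current input against stored experiences and,
    # if the closest one is within the threshold, recall its valence.
    if not experiences:
        return None
    best = min(experiences, key=lambda e: math.dist(e[0], inputs))
    return best[1] if math.dist(best[0], inputs) <= threshold else None

remember([0.9, 0.1, 0.0], -1.0)     # e.g. "loud growl nearby" felt dangerous
remember([0.1, 0.8, 0.2], +0.8)     # e.g. "smell of food" felt satisfying
print(predict([0.85, 0.15, 0.05]))  # close to the first memory -> -1.0
```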