CNN.com /TRANSCRIPTS

Q&A WITH ZAIN VERJEE

Q&A

Aired February 4, 2002 - 11:30:00   ET

THIS IS A RUSH TRANSCRIPT. THIS COPY MAY NOT BE IN ITS FINAL FORM AND MAY BE UPDATED.

(BEGIN VIDEO CLIP)

WILLIAM HURT, ACTOR: I propose that we build a robot child who can love.

ZAIN VERJEE, CNN ANCHOR (voice-over): It's something out of a science fiction movie, right?

Maybe not.

UNIDENTIFIED MALE: We may see a time where human tissue is integrated with electronic and computer devices.

VERJEE: Can human beings create machines that can think like humans?

UNIDENTIFIED MALE: I believe in it. I think it's going to happen.

UNIDENTIFIED MALE: Is it the kind of technology you want to root for? Is it something to fear?

UNIDENTIFIED MALE: Oh, yes, very seriously. I think that this is very important.

VERJEE: But there is a difference.

UNIDENTIFIED MALE: The fundamental distinction between the machine and the human mind is that human minds don't just do things, they need to do things. They do things with goals in mind.

VERJEE: On Q&A, artificial intelligence, fact or fiction?

(END VIDEO CLIP)

VERJEE: Welcome to Q&A.

A machine that's smarter than you? That thinks just like you, or perhaps more logically than you do, whether you like that or not.

Scientists are already working on creating really smart machines that could change the way you live your life.

(BEGIN VIDEOTAPE)

VERJEE (voice-over): Can you see yourself doing any of this? Sitting in a car that drives itself. Running yourself a hot tub bath while heading home on a bus. Feeding the cat while you're out shopping. All you need to do is push a button, and Whiskers eats dinner.

How about instructing a household robot? Playing football with one?

If it sounds strange now, brace yourself. It could soon be your reality.

UNIDENTIFIED FEMALE: Hello, Kismet. You going to talk to me? How are you doing?

VERJEE: Welcome to the world of artificial intelligence, the creation of machines that can think.

There are two parts of AI.

PROF. RODNEY BROOKS, MIT ARTIFICIAL INTELLIGENCE LAB: There's the part where we try to engineer things which act intelligently and do things that, when a human did them, we'd say they used their intelligence (UNINTELLIGIBLE). And there's the scientific part, where we try to understand how it is that we are, by building copies or pieces of us -- the modules that make us up -- by looking at cognitive science, looking at neuroscience, and trying to replicate ourselves.

VERJEE: Scientists are working to build smart machines that understand speech and copy human thought. Question:

UNIDENTIFIED MALE: How can you tell if a machine is thinking?

VERJEE: The British computer scientist Alan Turing said a computer deserves to be called intelligent if it can deceive a human being into believing that it was human.

No machine has passed the test yet.
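Turing's criterion is often described as the "imitation game": a judge converses with a hidden human and a hidden machine, then has to say which is which. Below is a minimal Python sketch of that setup; the fixed question, the toy scoring heuristic, and the canned respondents are illustrative assumptions, not anything described in this program.

```python
import random

class KeywordJudge:
    """Toy judge: asks one fixed question and guesses that the respondent
    with longer, more varied answers is the human. Purely illustrative."""
    def ask(self, label, history):
        return "What did you have for breakfast, and how did it taste?"
    def guess_human(self, transcripts):
        def score(messages):
            answers = messages[1::2]              # every second entry is an answer
            return len(set(answers)) + sum(len(a) for a in answers) / 100.0
        return max(transcripts, key=lambda label: score(transcripts[label]))

def imitation_game(judge, human, machine, rounds=3):
    """One session of the imitation game: the judge questions two hidden
    respondents, 'A' and 'B', then guesses which one is the human."""
    contestants = {"A": human, "B": machine}
    if random.random() < 0.5:                     # hide which label is which
        contestants = {"A": machine, "B": human}
    transcripts = {"A": [], "B": []}
    for _ in range(rounds):
        for label, respond in contestants.items():
            question = judge.ask(label, transcripts[label])
            answer = respond(question)
            transcripts[label] += [question, answer]
    guess = judge.guess_human(transcripts)
    truth = "A" if contestants["A"] is human else "B"
    return guess == truth                         # True if the judge identified the human

# Toy respondents: both are canned here; a real test would pit a person against a chat program.
human = lambda q: "Just toast, honestly, but the coffee was better than usual."
machine = lambda q: "I had a nutritious breakfast."
print(imitation_game(KeywordJudge(), human, machine))
```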

But the kids are waiting for that to happen.

UNIDENTIFIED FEMALE: I would like a robot to do my homework, and to pick up after people. And I think that eventually robots will become almost like people.

VERJEE: A robot became like a person in the movie "AI," but it was just a movie.

Scientists point out, though, that AI is fast becoming part of ordinary life.

BROOKS: Every time you use your cell phone, there are AI systems checking up on your pattern of usage to see whether your cell phone might be stolen or not.

When you drive your car, there may be a learning system in the fuel injection system.

When you play a video game, you're playing against an AI system. The list goes on and on.

VERJEE: And how about this? In the not too distant future, if you have no company on a Saturday night, how about a night on the town, dancing with a robot.

(END VIDEOTAPE)

VERJEE: Joining us from the World Economic Forum in New York is Rolf Pfeifer, the director of the Artificial Intelligence Laboratory at the University of Zurich. He's also the author of "Understanding Intelligence." And from the University of California at Berkeley, professor of philosophy of mind and language, John Searle. Thanks to you both, gentlemen, for being with us on Q&A.

Rolf, what exactly is AI on a very basic level?

ROLF PFEIFER, AUTHOR: As Rodney Brooks was pointing out before, there are three parts to artificial intelligence.

One is we're trying to understand natural forms of intelligence. And then, for example, we're trying to understand how people recognize faces, how they, you know, manipulate a cup, or how they walk.

Second, we're trying to abstract general principles. How we design sensory systems, for example. How we design motor systems.

And then we try to apply these to develop intelligent artifacts.

VERJEE: How do you actually tell when a machine is intelligent? I mean, we heard about the Turing test. Is that the only sort of cutoff there is, or are there other ways to test whether a machine is intelligent?

PFEIFER: I don't think there is an absolute test for that. A lot of it is very subjective.

If you, for example -- I can play chess. Now, if you watch me play -- I'm a very mediocre player -- you're not very impressed with my level of intelligence. However, if, let's say, a two-year-old girl were to make exactly the same moves that I do, you would be very impressed with her level of intelligence. And if instead of the girl, a dog were to make these moves, then you would be extremely impressed and think the dog is really a genius.

So it also has a lot to do with subjective expectations. And that's why it is so hard to define universally.

VERJEE: A lot of expectations of artificial intelligence. John Searle, do you think that they're all useful or even real?

PROFESSOR JOHN SEARLE, UC BERKELEY: I think you have to distinguish the technical or technological achievements of computer programming -- and those are quite impressive -- from the attempt to make psychological and philosophical inferences on the basis of them, and most of those inferences are hopelessly confused.

The power of the computer derives from the fact that its operations are defined in terms of symbols, usually thought of as zeros and ones. And the zeros and ones don't give you a human consciousness.

So when you're discussing AI, you have to make a crucial distinction between the simulation of human abilities and the actual duplication. And simulation is not duplication.

Now, the concept of intelligence disguises this difference because, of course, where intelligence is concerned we're mostly interested in the results.

I mean, I have a pocket calculator that's smarter than any mathematician I know. It never makes a mistake. And in that sense, it's completely intelligent. But it doesn't have any human psychological processes.

VERJEE: So by those processes, do you mean emotion?

SEARLE: I mean thinking. Consciousness.

See, if you ask me to add two plus two to get four, I have to consciously think about those symbols. But my pocket calculator doesn't do any of that. It's just an electronic circuit that manipulates symbols. Actually, it doesn't even manipulate symbols. It goes through state transitions in the electrical circuit.

There is no way we know at present how to build a conscious computer. And until we do that, it's really a side issue to talk about intelligence.

VERJEE: Rolf?

PFEIFER: Well, I fully agree with the fact that we cannot reduce intelligence to a computation, as actually many people have done.

So I guess in that sense, those systems that we now have and that are used technically in everyday life, you know, like in dishwashers or in monitoring network traffic or things like that, probably don't have very much to do with what we think of as natural forms of intelligence.

However, I think there are some new, exciting developments within the field of artificial intelligence that don't look at intelligence so much in terms of computation, but in terms of the interaction of a physical system -- you know, an animal, a human or a robot, for that matter -- with its real-world environment.

And we have been able to show that there are fundamental differences between a purely computational system and a system that can actually interact with the real world.

SEARLE: OK, but now there is a problem, and that is, what is the mechanism that underlies the interaction?

Traditionally, and indeed up until this point, artificial intelligence was all about computers, and you could design computers that would interact better.

But if we're interested in building a thinking, conscious machine, we'd need to know first how the brain does it. And we know in advance that just doing an algorithm, just going through the steps of symbol manipulation, isn't going to be enough to do that.

So the interesting thing is this: are we changing the definition of AI? The original definition was in terms of computers. AI was about designing computer programs that could duplicate human capacities. And I guess Rolf and I both agree, of course, that doesn't give you a real human consciousness or thinking.

But now we know that the brain is a machine, so we ought to be able to figure out how it works. And if we could do that, if we could figure out the machine processes that cause consciousness, then we might build an artificial consciousness.

VERJEE: But do we even know enough about the brain, Rolf, to copy it?

PFEIFER: Well, of course we can copy certain very limited aspects, but I don't think we know enough to produce what John Searle calls a conscious machine.

I mean, we have to be careful there, as well. What do we really mean by consciousness and by a conscious machine?

In fact, I think there are robots that really have this capability of interacting with the real world -- we saw one before, this robot Kismet at MIT. There are other robots that can interact in sophisticated ways with the environment, with the physical and social environment. They can interact with people.

We have to be clear about what we mean by consciousness. Consciousness may also be a very subjective sort of thing -- something we're very much inclined to attribute to things, to human beings or to animals, for that matter.

SEARLE: But, you see, I don't think consciousness is all that hard to define at a common sense level. We don't have a scientific definition, but at a common sense level, it's these states of sentience or awareness or feeling or thinking that begin when I wake up from a dreamless sleep, and then go on through the day.

Now, the problem with existing artificial intelligence is, none of the machines are conscious for a very simple reason. None of them were designed to be conscious. They were designed to simulate human cognitive capacity.

VERJEE: John, does it really matter, though, what we call it or how you shape a definition if the practical effect of what's being researched and developed impacts our lives for the better?

SEARLE: I think it's wonderful that I have pocket calculators and that my car doesn't need a hand-choke. I think the technology is wonderful. And I have no objections to artificial intelligence technology.

Where I draw the line is when they say, well, now we have created a thinking machine, or we've created a conscious machine.

Now, I'm glad to see Rolf doesn't say that, but an awful lot of people in AI do. You'd be surprised at the claims that have been made on behalf of AI. And I want to draw the line and distinguish between the simulation of human abilities, and as you point out, that's very useful, practical. I mean, I couldn't live without my computers.

But I need to distinguish between the simulation of human cognition and the actual recreation, and to recreate human cognition, you have to create a machine that has human mental processes, and we're nowhere near being able to do that, because we don't know how the brain does it.

VERJEE: Rolf, could you give us an idea of the practical kinds of effects some of your research could have, potentially in the next, say, 15, 20 or 50 years?

PFEIFER: The practical effects? Well, at the moment, as we pointed out, we already have artificial intelligence techniques in many, many areas.

I think we should not underestimate that we are just beginning to understand what the function of a body is in intelligent behavior.

So, for example, in classical artificial intelligence, and in computers, we don't take into account that humans or other intelligent beings have a particular shape. They have sensors. They have, for example, eyes. They have their eyes on top of their bodies, in their heads; they don't have them on their toes. And that's for very good reasons.

And we're beginning to understand what the implications are of this, and I think we can get awfully close to something that at some point people might want to attribute a consciousness to.

Now, in terms of practical applications, I think -- we're in the process, if I may just add a little bit about our own research. We're in the process of modeling genetic regulatory networks so that we can actually grow...

VERJEE: What does that mean?

PFEIFER: Well, that means if you have -- in a biological sense, you have a human or an animal, you have a single cell. And then from this single cell, the organism develops into its final shape during a process which is called (UNINTELLIGIBLE) genetic development.

And that's controlled by a genetic regulatory network. And we have been able to model these so that we only specify what the system is supposed to do, but we're not telling the system how to do this. And I think the results of that...

VERJEE: OK.

(CROSSTALK)

VERJEE: We're going to continue this conversation in just a moment. Rolf Pfeifer, John Searle, stay with us. We'll be back in a second.

Whether you believe in the future of AI or not, one thing is for sure: artificial intelligence is already a part of your life.

Coming up: how artificial intelligence simplifies things for you and you might not even know it.

Stay with Q&A.

(COMMERCIAL BREAK)

VERJEE: Welcome back.

We're talking about practical applications for artificial intelligence.

Rolf Pfeifer and John Searle are still with us, and joining the conversation now, from Austin, Texas, is Doug Lenat, the CEO and founder of Cycorp. His company is building what they call the world's largest common sense database. A database of what the average human knows. And in San Francisco, Dick Stottler of Stottler Henke Associates. His company develops operational systems based on the idea that artificial intelligence could mimic the human thought process.

Some pretty far out stuff that you're both doing.

Dick, let's start with you. What kind of practical effects will the kind of things you're doing have?

DICK STOTTLER, STOTTLER HENKE ASSOC.: Well, I think one of the things that we do is try to capture the human decision making process and the knowledge and expertise of skilled individuals and distribute them to a wide range of people.

I think one of the applications that might be kind of near to your heart, Zain, is there was a company called (UNINTELLIGIBLE), whose doctors were very familiar with and expert in liver disease, and specifically hepatitis C. And they realized that research has shown that perhaps 4 million people have hepatitis C and don't know it. And the reason is that the expertise and knowledge needed to properly screen people is not well recognized throughout the country.

And so what they did was hire us to develop an artificial intelligence system that could conduct a patient interview over the Internet, similar or equivalent to what the best human experts could do, and thereby make better recommendations to doctors, who often neither know the details of hepatitis C nor have time to interview their patients and take a full patient history, and thus recommend the appropriate tests and treatments for hepatitis C.

One point I'd like to make in relation to John Searle's comments is, I think it's important to differentiate between consciousness and thinking.

I don't -- very few AI researchers have claimed that their systems were conscious. Some claim their systems are thinking. We claim that we mimic human thought in the following way: humans, when given a certain task and a set of information to process, often come up with a certain way of doing it, or a set of symbols. And our systems typically create the same set of symbols, or a very similar set of symbols.

So I would say, problem solving, that kind of thinking, isn't consciousness, but it is mimicking the human thought process.

VERJEE: John Searle, you want to answer that?

SEARLE: Yes. Exactly. What it does, however, is mimic or model or simulate the thought process with a bunch of symbols.

However, from the machine's point of view, there is no meaning to those symbols, they are just the zeros or ones or some other symbol system used in programming the computer.

So there is a crucial ambiguity in AI between whether or not thinking is to be thought of as actual human thought processes that have a mental content, or just the manipulation of symbols according to the program of the computer.

Now, this is an absolutely crucial distinction, because what is characteristic of human thinking is that the symbols are meaningful to us. We know what they mean. And the Turing test disguises this distinction, because the Turing test says, look, if it behaves as if it's thinking, then it is thinking. And of course, that's a mistake.

VERJEE: John Searle -- thank you, John.

Let's get Doug in on the conversation here.

Doug, now you're building a database of what the average human mind knows. I mean, wow. And how are you going to apply that? Because, that's a huge feat to be able to accomplish effectively.

DOUG LENAT, CEO, CYCORP: In a way, what we're trying to do is bypass the issue of whether a computer is conscious or could be conscious, or even could be intelligent.

But rather, could it be useful? Could it enable us to be more intelligent? Could it enable us to amplify our own intelligence?

So for instance, the pocket calculator doesn't really have consciousness, of course. It doesn't really understand very much. But if you type in an arithmetic problem to it, it understands enough to get the right answer, and that amplifies our ability to do arithmetic.

In much the same way, having the millions of pieces of common sense will enable software to be a little bit less brittle.

In the early 1980s, there was a kind of craze in which people predicted that in the very near future there would be household robots that would mind the baby and mow the lawn and so on. And of course, the problem is, without common sense, computers in the home would be just as likely to mow the baby.

VERJEE: Doug, how is anything that you're doing going to affect me? I mean, how do I practically use any of this?

LENAT: Suppose you're searching the World Wide Web and you want pictures of people with big smiles on their faces. There might be a captioned image somewhere that says, "Here's a man watching his daughter take her first step."

And because current search engines operate on bags of keywords, they'd never find that kind of match. Whereas you, as a human being, have pieces of common sense like: parents love their children; when someone you love accomplishes something, it makes you happy; and when you're happy, you typically smile. And taking your first step is an accomplishment.

So you put those three or four pieces of information together and you could find that match, and that's the kind of thing that the Cyc program, our common sense knowledge base, could do.

So it helps you find information better without relying on the kind of brittle, keyword-based searching, and without having to rely on, for instance, spell-checking programs that can't even tell when you've used the wrong word if it happens to be another valid English word.
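The chain Lenat walks through -- parents love their children, a loved one's accomplishment makes you happy, happiness typically makes you smile -- is a small forward-chaining inference. Here is a minimal Python sketch of that kind of chaining; the facts, the rule set, and the tiny pattern matcher are illustrative stand-ins, not the actual Cyc knowledge base or its inference engine.

```python
# Minimal forward-chaining sketch of the inference Lenat describes.
# The facts, rules and matcher below are illustrative stand-ins, not Cyc itself.

# What the caption directly tells us.
facts = {
    ("parent_of", "man", "daughter"),
    ("accomplishes", "daughter", "first_step"),
}

# Common sense rules: if every premise matches, assert the conclusion.
rules = [
    ([("parent_of", "?x", "?y")], ("loves", "?x", "?y")),                      # parents love their children
    ([("loves", "?x", "?y"), ("accomplishes", "?y", "?z")], ("happy", "?x")),  # a loved one's accomplishment makes you happy
    ([("happy", "?x")], ("smiling", "?x")),                                    # when you're happy, you typically smile
]

def match(pattern, fact, bindings):
    """Unify one premise pattern against one fact, extending the bindings."""
    if len(pattern) != len(fact):
        return None
    bound = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if bound.get(p, f) != f:
                return None
            bound[p] = f
        elif p != f:
            return None
    return bound

def satisfy(premises, bindings):
    """Yield every variable binding that satisfies all premises against the fact set."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        bound = match(premises[0], fact, bindings)
        if bound is not None:
            yield from satisfy(premises[1:], bound)

# Forward-chain until nothing new can be inferred.
changed = True
while changed:
    changed = False
    inferred = set()
    for premises, conclusion in rules:
        for bindings in satisfy(premises, {}):
            fact = tuple(bindings.get(term, term) for term in conclusion)
            if fact not in facts:
                inferred.add(fact)
    if inferred:
        facts |= inferred
        changed = True

# The caption now matches a query for "people with big smiles".
print(("smiling", "man") in facts)   # True
```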

VERJEE: Rolf, we heard Doug talk about robots, here. But what other kinds of things are being worked on? I mean, I was reading about cars that will eventually drive themselves, and all I need to do is sit in one and it'll take me wherever I want, accident-free. Is that the kind of stuff that we're looking at for the future? And what other things are there on the table?

PFEIFER: I think that's certainly one of the applications. There are also I think very important medical applications. There are a lot of applications in, for example, waste cleanup, hazardous sites, etcetera.

But if I may, I would like to reply to something about this common sense issue, which I think relates back to something that John Searle was saying -- the meaning that he's talking about relates a lot to common sense.

I think a lot of what common sense is about relates to our body, to our own body. For example, the meaning of drinking relates to the physiology of our particular body. So if you have a system with a body, like a robot, it could also have meaning in terms of, for example, battery charge. Battery charge does mean something to the robot, because if its charge is too low, it cannot function anymore.
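Pfeifer's point can be pictured as a simple control loop in which battery charge changes what the robot does next. The sketch below is a minimal illustration under assumed thresholds, energy costs and behaviors; it is not any particular robot's control code.

```python
import random

# Toy control loop in which battery charge has consequences for behavior.
# The threshold, energy cost and behaviors are illustrative assumptions.

class ToyRobot:
    def __init__(self):
        self.charge = 1.0                               # 1.0 = full battery, 0.0 = dead

    def explore(self):
        self.charge -= random.uniform(0.05, 0.15)       # moving around costs energy
        print(f"exploring, charge now {self.charge:.2f}")

    def seek_charger(self):
        print(f"charge low ({self.charge:.2f}), heading for the charging station")
        self.charge = 1.0                               # assume docking and recharging succeed

def control_loop(robot, steps=10, low_charge=0.3):
    """Low charge 'means' something here: it changes what the robot does next."""
    for _ in range(steps):
        if robot.charge < low_charge:
            robot.seek_charger()
        else:
            robot.explore()

control_loop(ToyRobot())
```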

STOTTLER: Zain, if I could break in here, I'd like to say that in terms of the near-term impact on individuals throughout the world, I think there are significant engineering challenges in the robotics world which will take some time to overcome, and that a lot of the applications of artificial intelligence that are here now, or soon will be, are embodied primarily in software.

Intelligent tutoring systems are a good example, where you capture and distribute the expertise of very good one-on-one human tutors. For example, there's a literacy intelligent tutoring system to help improve adult reading skills, and I think that's a good example.

VERJEE: OK, Doug, what do you want to say here?

LENAT: I wanted to second that. In fact, one of the applications that is just around the corner, meaning only 5 or 10 years away, is essentially enabling people to talk with computers.

I'm not talking about speech understanding so much as I am natural language understanding. Not in the sense that they would be conscious, but in the sense that they could then understand enough of what the person was trying to tell them that they could act appropriately in the future, as if they had understood.

And so in a sense, we're talking about replacing some of what we call programming today with that kind of language understanding and assimilation.

VERJEE: Rolf, should I be worried that artificial intelligence is going to take over the position of human beings? Of myself -- a robot will do what I do and I'll be out of work. Or I won't be able to do certain things?

PFEIFER: I don't think we have to worry at all. I think there has been a lot of exaggeration and a lot of hype in the media, and the point is that most people still, even specialists in artificial intelligence, still identify intelligence with computation.

And then, of course, if you extrapolate the speed at which the computational power of computers increases, then you're going to be afraid.

But I think the progress that would be required in robotics to actually achieve human levels of intelligence, you know, is not nearly as rapid as what we have in computer technology. So I don't think there is any need to be afraid of anything.

VERJEE: So I've still got my job, then. No robot sitting here in front of you. OK.

Rolf Pfeifer, Doug Lenat, Dick Stottler, thank you for being with me on Q&A -- appreciate that.

LENAT: Thank you for having us.

VERJEE: You're welcome.

PFEIFER: Thanks very much.

VERJEE: You're welcome.

Now, Jim Clancy is going to be back for another edition of Q&A in just a few hours. That's at 20:30 GMT.

That's Q&A, though, for now.

END

TO ORDER VIDEOTAPES AND TRANSCRIPTS OF CNN INTERNATIONAL PROGRAMMING, PLEASE CALL 800-CNN-NEWS OR USE THE SECURE ONLINE ORDER FORM LOCATED AT www.fdch.com


 
 
 
 

