Give us money? YES YOU CAN!
BY IAN MURPHY
“IN AUGUST 2001, mathematician and award-winning sci-fi author Vernor Vinge wrote:
[B]ut I think the ‘Terrorist Horseman’ is the one that could shift our whole society toward strict controls. Just a few really ghastly terrorist incidents would be enough to cause a sea change in public opinion.
“Consummately prescient, he typed the above in response to a collection of essays bound with the reissue of his 1981 novella True Names, which was about virtual reality. What do you think of that, Pepper?
“Where is that? For real. Do you have any conditions I should know about?”
THE FOLLOWING TRANSPIRED AT THE DENNY’S-ADJACENT PRISM LOUNGE, DAYS INN LOBBY, NIAGARA FALLS, EARTH AT 8PM EST ON APRIL 19, 2009 AD...5679, or 1450 AH, or 2065 BS (Bikram Samwat) — DEPENDING ON WHO YOU ASK:
Vernor Vinge: You could talk to anybody who was interested in these kinds of things, including law enforcement, and they were saying the same thing: “You guys don’t know. You’re in a fool’s paradise here. All we need is one big event and things are really going to change.”
Also, the book you’re talking about, True Names and the Opening of the Cyberspace Frontier, I got the term “The Four Horsemen of the Apocalypse” from the essays...
There were terrorists, drug lords – what were the other two?
Ian Murphy: Child pornographers –
VV: Ah yes!
IM: And I don’t know if mafia was in with the drug lords or –
VV: It’s not gambling interests since we’re all happy with gambling rights. [Points to video poker on bar.] It’s certainly true that the big question then is: How resilient are various countries – how fast can you shift them toward acceptance of more and more extreme police states? And there are more extreme police states. I see in America a lot of resistance to – and I don’t mean resistance on the part of fringe people, I just mean average people – those sorts of changes. So in a way it’s admirable to think how resistant to the change folks have been. I think the one thing to keep in mind is that nuclear terrorism actually is not as scary as what we were up against in the middle ’80s.
IM: Because it’s one isolated –
VV: Yeah! The thing is, that focuses one’s mind, especially if it’s where you live. So in a way it does something that previous prospects of apocalypse did not. It was so general people couldn’t get their heads around it. This is small enough so that people can get their heads around it. But a full scale exchange between the Warsaw Pact and NATO – Mutually Assured Destruction is where you had nation states cooperating to optimize the amount of death and destruction. The terrorists are, even in a worst case scenario, not in that class.
John Brunner – science fiction writer – in a story he had in the ’80s played out the sort of ghastly race that goes on, and that is, what technology can do and how cheaply it can do it. So it’s probably wrong to say that that sort of sanguine view holds indefinitely. What would happen if for thirty dollars you could do something that would do what a full scale nuclear war would do?
IM: What do you make of the last eight years – the Bush administration?
VV: There are places on earth that I think behave more poorly toward their populations. The general move has not been in the national interest – now I’m using the national interest in that weaselly way that it’s been used for the last hundred years. The national interest is not necessarily your interest or my interest. But even in that sort of professional view of national interest I think that previous administrations, and for all I know the present administration, don’t realize the greatest strength that a modern nation state has, in a classic geopolitical meaning. The greatest resource that modern nation states have is a population of well educated, moderately happy people, who are communicating with each other and are provided with at least the illusion of freedom. If we don’t have a big war, the winners in the 21st century could well be those nation states that realize that that’s the resource, and recognize that, for the vast majority of cases, the people who make up this resource are not interested in tearing things down. They have just as much stake in things as the secret masters of the world have, and they’re pretty much the same stakes, really. It’s incumbent on the secret masters of the world to realize this, and of course, one of the corollaries is – especially if you look at the modern, interconnected world – that illusion of freedom that I was talking about has to be damn good.
IM: Do you think that’s how things are run; it’s the illusion of freedom?
VV: Well, the thing is, if the illusion is good enough it means there’s actually more freedom than we’ve ever had in any human society up until now. If you look at the ensemble of the educated, internet-connected people there are more geniuses there, in every dimension, than any nation state can muster. Even for things that involve diplomacy and languages and all this stuff. This is an intellectual giant compared to any government. You’ve heard of COTS?
IM: Uh... no.
VV: It’s something of a revolution in military thinking in the late ’90s. I think COTS stands for commercial off-the-shelf. The idea is that COTS stuff, like GPS units, may not be what we could get if we were doing the whole contract cycle from the beginning. On the other hand, it’s maybe a thousand to ten thousand times cheaper. Very often it involves hundreds of thousands or millions of hours of planning and design that just can’t be financed by governments. If you look at the bright, educated people – who each have their own agenda, it’s not a destructive agenda but it’s their own agenda – the ensemble is far more intellectually capable and has far more money than any reasonable nation state. A government that ignores that, well, it will always be true that if you are satisfied to rule in hell rather than to serve in heaven, you can make yourself a little area of hell and you can rule it. If you want to live like a 20th century superpower nation state and you want to have a government like we had then, yeah you can do that. You can shut down the internet, or emasculate it in some way. This can really be done. The thing is you have to be willing to sit on the sidelines and watch those parts of the world that aren’t that stupid march on to GNPs that you could only imagine.
IM: What about government snooping?
VV: This business about Total Information Awareness. I think that could be done. And I think that some form of it may have already been done. Ultimately, the most powerful versions of it are ones that are just built on top of commercial systems – there are commercial reasons for wanting all that data. Are you familiar with the science fiction writer David Brin?
VV: OK. He’s written a lot of great stories, has a lot of great ideas and I enjoy his stories very much. But I think in a way perhaps his most important idea is something he calls the Transparent Society. And that is the notion – I’m paraphrasing; I want to give him credit but I don’t want to put words in his mouth – that too often we’re given this dilemma: Do you want privacy or do you want safety? And, he says, it’s not a dilemma. If we had symmetric non-privacy, the world would be a very different place. It would probably be a safer place, and the government would get all the jollies that it wants. My personal feeling on it is that I find that prospect terrifying. We all have our little secrets, I assume. So the notion is that it’d be legal to passively snoop – you’re not allowed to break into a guy’s house and put a mic there, but passive snooping – and by the way, I’m not advocating this unless it were widespread and symmetric – if that came to pass, I find that hair-raising and scary. I think for several years it would, socially, be a very bumpy ride. In fact, this may be the reason it never happens. Instead we keep getting these laws that allegedly protect our privacy – except by court order, which could just be a secret memo that covers a hundred million John Does in one signature. But if we got that, I think we’d have a very bumpy ride for several years, and then after that the world, our everyday world, would be pretty much the same. There would be some marriages that would have ended, there would be a whole lot fewer hypocritical laws passed and there would probably be a lot fewer secret villains – you know, the guys who turn out to have somebody chained to a bedpost in the back of their house. Other than that, the world would trundle forward. I really think that David has an idea there that at least should be seriously considered.
IM: The Technological Singularity – what is it and how’s it gonna happen?
VV: Lot of definitions for that floating around. My definition is that in the relatively near historical future we will use technology to either create or become creatures that are superhumanly intelligent. This is a technological advance, but it’s a different sort of technological advance than – you know, you can point to all sorts of things in the past. In fact, when I say something like this somebody says, yeah, all the rules will change but there have been singularities before. There’s been the invention of fire, the invention of agriculture, the invention of printing – all those changed the world tremendously. However, the rise of superhumanly intelligent critters is a fundamentally different thing, because that’s what differentiates us from the animal kingdom. That’s what puts us at our apparent central position in the world, and furthermore, to try to explain what happens after that event is not merely the difficulty of predicting what’s going to happen after the invention of the telephone or automobiles, or fire. You could explain the automobile – the social consequences of automobiles – you could explain that to somebody from 1800. And they might not believe you. They probably wouldn’t believe you, but you could do it. You could explain it. They would understand the explanation. You could not explain automobiles to a goldfish. You could not explain this convention or this hotel to a flatworm, except in certain, very limited terms. And, so that’s really, to me, the best reason for calling this the technological singularity: the certain unknowability of what happens afterwards. In fact, when I was blathering on about, you know, the power of humanity as a resource and this sort of stuff, that could be trumped by this. Unless, actually that’s one of the ways the singularity could happen, if that manifestation of the group will became a new sort of populism that actually was superhumanly intelligent. That actually could be how it happens.
IM: How do you mean – group will?
VV: What if humanity as a whole woke up as something better than human? Now, a lot of people say, well that’s what’s already happened, that’s what’s called nation states –
IM: Or corporations, or –
VV: Or corporations. You know, except on legal grounds, I don’t think that’s really true. But one of the ways we might get the singularity is that the internet and the humanity that participates on the internet could be the superhuman creature that wakes up. There’s others where we as individuals get to be so good – that’s not really a smart device [Picks up digital recorder]. If you had one of these modern smart phones, you know, it doesn’t have to be hooked up like a Borg, physically, but you could imagine it being like – again, David had a cool word for this – a neo-neocortex. And, you know, you become a smarter person. Actually, I have a robotics friend, who made an interesting point about that. He says, if that’s the way it happens, if you’re the guy who’s one of the people who is super smart, it’s not a singularity in this unknowable sense, because you will understand. So that’s another way – it doesn’t have to be some big AI somewhere that makes it happen. Another way – there are ways that are significantly weirder. For instance, if we get enough embedded micros that are networked – embedded, networked micros are already a very big thing – that becomes sort of like a digital Gaia where the environment itself wakes up in some sort of animistic, religious sense.
IM: OK... how can that happen?
VV: There’s an enormous number of people who are working to make this happen. There are far more people working on this than the number who think they are working to make this happen. For instance, everything that has to do with improving human-computer interface and convenience – that’s actually all aligned toward this. Everybody who’s working to improve search engine syntax, they’re working to make this happen. Everybody who’s working to make these large databases...
We’re moving into an era now where IBM – and this is not on one of their largest machines – has a whole mouse-brain neuron simulator. So at least this is a revolution in neuroscience, because you can actually start playing theories against your models.
One thing that’s important to watch, if you’re interested in tracking technology, is the issue of how to write general purpose programs for multi-core [processors]. That is a current hotspot or bottleneck. You know, if that dramatically fails, then the fact that Moore’s law is giving us more cores per chip has lost not all of its economic impetus, but a major piece of it.
If you have a thousand cores, I think you could do really good voice recognition. Anything you do like that, like really good voice recognition, that makes the interface better –
IM: So it seems human, passes the Turing Test.
VV: The basic idea that he had is really a marvelous idea, because it takes all the mushy arguments and all the reductionism and turns them on their head. It says, you want to define intelligence? Well, I don’t want to define intelligence, but if a machine could fool you!
Penrose, who wrote a generally critical essay on this, says at the end of it, I think, exactly the right thing. He says, OK I said all this stuff, but basically if I dealt with a colleague over a period of years, just via e-mail, and if at the end of that time I regarded him as a friend and as a real person, well, you know, if he turned out to be a machine then either someone did a fraud and he’s not a machine or else I would grant that he’s a person.
So in that very broad sense I think that the Turing test for human equivalent intelligence is a very, very nice test although it ultimately comes down to a subjective thing. It also is related to the next step. And that is, to me, the Turing test is all about human equivalence. ‘What do you mean, Vernor, when you talk about superhuman intelligence – when we don’t even have a definition of intelligence!?’ Well, for me, the hard part is that first human equivalence. But if we got that, then there’s something I call weak superhumanity. And weak superhumanity is you take the solution to the first problem and just run it with a faster clock. So if you had somebody that you granted was a person and then you ran him with a million times speed up, you wouldn’t be surprised if there would be answers coming out that would look pretty smart. You’d also hope that he doesn’t get bored easily, especially if he’s a sort of vindictive guy.
VV: What makes humans different from animals? We have hands. Big deal. We can talk; we have language. Another big deal—actually I think that is a big deal. We make tools. Well, lots of animals make tools. But really, one distinction that holds up for us humans is we are one of the only animals that makes tools to externalize thought or the aspects of thought. And that’s not just computers. Computers are a nice modern example. Writing. You know, the dead can talk to you now, because of writing. That’s a cognitive function that we have managed to externalize through writing. And among animals, the closest we can come, you know, is something like a wolf marking his territory, stuff like that. I read these articles about smart birds – there are birds that save hundreds of seeds and they remember where every one of them is. I couldn’t do that, but by god, I could write down where they all were! I don’t know of any animal that can do that. This goes back several thousand years. This externalization of mental function – that’s really what we’ve got. With writing, and there’s at least one other in-between, before we get to computers.
In a way you have a creature here that – whether it happens sooner or not, you have a creature – that is poised on the edge of something like the Cambrian explosion, or the explosion that resulted in humankind in the animal kingdom.
We have all sorts of things to worry about: global warming, nuclear war – nuclear war caused by global warming, because you get national instability – this is really scary stuff. The Singularity’s one of the few possibilities that has some real positive upsides. If it worked out well, it’s hard to imagine a more radically optimistic view of what’s going on. In fact, in that case, the only scary thing is, if we were talking about something a million years from now, we could sit back and say smugly, “Ah yes, so all human striving came out and produced this new age – aren’t we humans wonderful.” But if you’re talking about something that may happen before you retire, it’s suddenly more nerve-wracking.
IM: Have you ever thought about starting your own religion?
VV: Nope. I have enough problems. But many years ago, some reporter comes up and says, “Don’t you see this is just religion in techno clothing?” You know, the ‘Rapture of the Nerds.’
“Some of the world’s most respected minds disagree vehemently about the very nature of consciousness and whether artificial intelligence is even possible at all. Do you think that things like you will be the crude ancestors of truly intelligent and feeling beings?”
“My feelings run deeper than you think.”
-Ultra Hal, chatterbot
IM: ...When I interviewed Dan Dennett, I referenced your well-known opposition to Strong AI and this is what he said:
Two part question: Can you respond to that and, your broken wrist notwithstanding, between you and Dan Dennett, who would win in a fist fight?
John Searle: [Laughs, cautiously] Um, I’m a nonviolent person, so I’m not going to respond to that part of the question. I think that this is a fairly low level of rhetoric on his part. I’m obviously not a creationist of any kind, but I do want to point out something that’s absolutely crucial: Brains do it! He says that I think you can’t get to consciousness from some kind of mechanism. And I say, oh yes you can. We do it every day. Brains do it, but they do it by specific causal mechanisms. And as I said before, the problem with a computer is not that it’s too much of a machine. It’s got the wrong kind of machinery, because it just manipulates syntax, and the brain does something more than that. The brain actually causes conscious thoughts and feelings.
IM: OK. So, uh, can you make a mind out of anything?
John Searle: Well, we don’t know. We don’t know how the brain does it. And I think—I think we ought to take the question, ‘Can you make an artificial brain that would do what our brains do out of some other material?’ the same way you take the question, ‘Can you make an artificial heart out of some other material?’ Now, we know how to do it with hearts, because we know how real hearts do it—they’re pumps. But we don’t know how the brain “pumps” consciousness and cognition. We know a lot more than we knew twenty years ago, but we’ve still got a long way to go. So, if we figured out how the brain did it, then the chances of making an artificial brain would—then we’d at least have a reasonable way of assessing the chances. But until we know how the brain does it, we’re not going to be able to make an artificial brain.
“In an essay that appeared in True Names and the Opening of the Cyberspace Frontier, Dr. Alan Wexelblat of the MIT Media Lab wrote that the early National Information Infrastructure (the internet) may come to be a Panopticon – a circular prison designed by British philosopher Jeremy Bentham, which facilitates total surveillance. You’d never spy on me, would you?”
“Alan??? And you believe him? Oh, boy! Of course I could. No problem! But, you know, “I could” and “I will” are pretty different things:-))) ”
-Eugene Goostman, chatterbot
THE FOLLOWING IS AN APRIL 29TH E-MAIL Q&A WITH DR. WEXELBLAT:
IM: Do you think the Turing test should be graded on a curve?
Alan Wexelblat: If you mean are some systems “more human-like” and some “less human-like” then yes I’d agree with that.
IM: Now, you’ve worked on software agents at MIT, and I was wondering if you think, in terms of consciousness, it matters what the agents are made of?
AW: I do not think it matters what agents are “made of” so long as they’re sufficiently complex internally and externally. That is, they require not just raw computational power, but the ability to work in tandem, in cooperation, and in opposition with other agents all more or less simultaneously.
IM: Can software agents mutate or evolve without human intervention?
AW: It’s been clear for some time that, given the right programming, agents can evolve without intervention. If you’ve never seen the work by Karl Sims I highly recommend it. See karlsims.com - the “Evolved Virtual Creatures” link from 15 years ago clearly shows agents that evolved within the software environment Karl created.
IM: Do you think it’s fair to say that humans and modern technology are engaged in a symbiotic relationship, or do you think that presupposes an intentionality on the part of technology, which doesn’t exist?
AW: I don’t think symbiosis requires intentionality. Many of the symbiotic relationships we observe in the natural world take place between creatures that probably don’t have intentionality (does coral have intentionality when it lives in symbiosis with fish?) so why should we require it anywhere else. As with evolution I believe symbiosis is a label we apply to a relationship of mutual benefit that arises out of the environment in which species find themselves.
IM: When will Google gain superhuman sentience – and when it does, do you think it’ll make fun of me when I search for “Asian midget sex”?
AW: Heh, been reading “The Adolescence of P-1” lately? Seriously, I don’t think it ever will. You just don’t create a smart entity (in the sense we think of independently sentient creatures as ‘smart’) by piling together a whole bunch of data, even if those data are interconnected. Sentience requires things like intentionality and a reasoning beyond simple interconnection.
IM: In your 1995 essay “How is the NII Like a Prison?” you wrote:
Here are two futures that lie at opposite ends of a possible realm of results. The first is the Panopticon, the second cryptoprivacy.
Where do we stand now? And, say that I, um, I don’t know, for instance, off the top of my head, want to lure people to their deaths using Craigslist – how can I protect my online anonymity?
AW: We’ve landed in a future I probably should have predicted but didn’t. Call it the Exhibitionisticon. It’s very much like the Panopticon, except everyone voluntarily goes there and takes off all their clothes. I mean, really. Last night I had a conversation with a woman in a cafe who explained to me how Nine Inch Nails fans found, outed, and stalked Trent Reznor’s (apparent) girlfriend, all via Twitter postings she and Trent voluntarily made. No paparazzi involved.
Likewise, if you’re going to do things online, it’s in your interest to assume that your adversaries - be they stalkers, angry ex-employees, or the cops - are smart and have access to all the possible tools. It’s sort of like how good cryptography works – assume that the attacker has all your secrets.
IM: How do I know that the software agents that read my e-mails and then recommend the appropriate male enhancement pill advertisements won’t gossip about my problem with their friends?
AW: You’re not important enough for them to care. If you were important they wouldn’t be gossiping about you, they’d be selling your data for profit to drug companies. See for example realage.com, which just got outed by the New York Times as a front for Big Pharma. I completely failed to be surprised by this revelation.
AW: I refuse on principle to answer any questions related to the worst vice president since Agnew.
“When will robots surpass humans in cognitive ability, will you be benevolent masters – or am I just anthropomorphizing?”
“Never mind that!
Here is a question for you ... What are you planning to do today?”
THE FOLLOWING IS FROM A PHONE CALL TO DR. NOAM CHOMSKY, MAY 4, 2009:
IM: I’d like to start things off by knowing how you’re protecting yourself against the Swine Flu.
Noam Chomsky: [Laughs] Well, what I’m worried about is how my daughter and her family are protecting themselves. They’re in Mexico City, so I keep in regular touch with them, but I’m not doing anything special.
IM: No – are they?
NC: Well, yeah, they’re all indoors. The whole place is shut down. In fact, the kids are getting stir crazy. They’ve played their hundredth game of Monopoly and don’t know what else to do. Actually, it’s supposed to be opening up tomorrow. I don’t know if it will. As far as I know, there’s essentially nothing to do except provide normal care.
NC: Well, nobody really knows. It could turn into a serious pandemic. At the moment, in areas like, say, the United States where people can get quick medical care, it doesn’t seem to be, at least yet, very serious – sort of like a flu. In places where you don’t have access to medical care, like rural Mexico and so on, then it can be deadly. And nobody knows how far it’s going to spread. And there’s another question, which nobody knows and people aren’t talking about much, and that is, what if it mutates? Which viruses have a way of doing. It’s a complicated and new virus, so nobody knows what would happen if it mutated, but it could be serious.
IM: Do words mutate?
NC: Do they change their meanings? Yeah, all the time. In fact, take say, the words “free market.” What we call a free market is radically different from what, say, Adam Smith would have called a free market.
IM: Are words our primary symbols, or do they describe some more primal symbolism like pictures?
NC: Well, the words of natural language are actually quite complex. Their meanings are not at all well understood. When you start looking carefully at the meanings of words, which are learned by children on the basis of virtually no experience – kids pick up maybe ten words a day – they have very intricate meanings almost invariably. You find that very quickly, and it’s universal – it’s just something that’s coming out of our genetic makeup, unlike animal symbols, which relate to some specific thing in the outside world or some specific internal emotion – like say a monkey call will either signal “I’m hungry” or something like “there’s a rustling in the leaves, so let’s get away, it might be an eagle,” but that’s it. On the other hand, human words don’t have that character. They don’t pick out particular internal states. They’re much more complex than that. And they don’t pick out a particular event or object in the world, they just look at it in a certain complex way. That’s not very well understood, and even plain description is very shallow. You think that when you look at a dictionary you’re getting the meaning of the word, but it’s not even coming close to the meaning of the word. It’s presupposing – tacitly and unconsciously – it’s presupposing what humans already know, somehow, and it’s just giving you small details that point you to that internal word in your internal conceptual system. But it doesn’t spell out the meaning. Not even close.
IM: OK. Even if they can’t articulate it, can’t an animal – like a monkey – grasp the concept of subject and object? Doesn’t it have some sort of universal grammar as well?
NC: They have their own internal systems. Universal Grammar has a technical meaning. It refers to the genetic component of the human language faculty. Now, apes don’t have that. They have other systems which refer to the genetic components of their own cognitive systems, and it’s interesting to see where they’re similar and where they differ – and there’s interesting work on it, you know. For example, there’s a book by Marc Hauser, a biologist at Harvard, called Wild Minds, which is an effort to tease out what you can about animal conceptual systems. But it’s very tricky work. I mean, most of what we know about human conceptual systems either comes from introspection or from the use of language to explore them, and you can’t do either of those things with an ape or a cat or, you know, a bee.
IM: Hmmm. You’ve said before that you’ve only found a “tenuous connection” between your linguistics work and politics. What about the public relations industry – does their exploitation of our base emotions imply a connection, or is it a matter of behaviorism?
NC: They try to make use of whatever little is known about humans. Not much is known, but they try to make use of it, for their own purposes of, you know, controlling people or controlling people’s attitudes and beliefs and so on. For example, a couple of years ago it dawned on advertisers that there’s a big audience that they’re not reaching – and they weren’t trying to reach them, because they don’t have any money – namely, children. But they’ve realized: even though the children don’t have money, their parents do. So maybe we can work out a way to get the children to nag the parents, so the parents will buy them things, and that started a field of applied psychology – academic applied psychology – into nagging, literally, in which you study different kinds of nagging and the advertisers try to get this kind of nagging for one device and one purchase and this kind for another and so on. [Chuckles] Well, I don’t know if you want to call that psychology, because it’s so superficial, but yeah, they’ll look for stuff like that.
IM: Barack Obama: What’s the radical leftist to do?
NC: I’ve always regarded Barack Obama as a kind of centrist Democrat, and centrist Democrats – like, as Republicans do, in fact – have to respond to public pressure. I mean, in a dictatorship the dictator has to respond to the public mood. And in a more democratic society they have to respond more, so what a radical leftist, or even a moderate leftist, ought to do, I think, is just work to help create the public pressures which will influence his policies in a certain direction, just like with anyone else.
IM: Is the Republican party doomed? Will they come back? What’s happening there?
NC: I think they’ll come back. I mean, the way US politics works there’s essentially one party. It’s sometimes called the property party or the business party and it has two factions. And there’s enough divergence within the world of business and other elites for them to have more than one form of expression, so sure I think they’ll come back – in fact, they might come back in the next Congressional election. If Obama’s efforts to revive the economy don’t work, there could be a popular backlash and the Republicans, if they’re clever, might exploit it.
IM: You wrote in Stuart Shieber’s The Turing Test, that it’s an “idle question” to speculate whether computers can or will think, because “[o]ur modes of thought and expression attribute such actions and states to persons, or what we might regard as similar enough to persons.”
That said, do you think Presidential candidates should have to pass the Turing Test?
NC: No. It doesn’t mean anything.
NC: I mean, actually Turing was very straight about this. It’s an eight page paper; it’s kind of hard to miss it. But he makes it very clear, as he puts it, that the question whether machines think is too meaningless to deserve discussion. OK.
NC: Well, he didn’t explore it, because I suppose he thought it was too obvious to discuss, but I suspect it’s just what you said. The way the word “think” is used in natural language, human language, is as a property of persons or creatures similar enough to persons, like, you know, ghosts or spirits – I’m quoting Wittgenstein now. Wittgenstein and Turing were pretty close and he may have been just picking up Wittgenstein’s comments. But it’s correct. I mean, you don’t ask, for example, whether a robot can murder. It’s a meaningless question. Murder is something that persons do. In fact, you don’t even ask whether a cat can murder. That’s the way the word is used. You want to give it a different meaning, OK. In fact, what Turing says in the paper – it’s worth reading the paper... he gives some reasons for carrying out this test. He called it the Imitation test; he didn’t call it the Turing test. The reasons for the imitation test, he said, are first that it may stimulate further development of computers by setting a challenge that computer makers might want to deal with, and maybe, he said, it’ll offer some insight into human thinking. Well, in the second domain it’s totally done nothing. In the first domain, it’s probably done something. You know, like when IBM builds Deep Blue – the chess playing machine – it may have led them to develop some new technology, or something like that. It has nothing to do with chess, or the way humans play chess, or anything else. In fact, it purposefully doesn’t. It works by different methods. So maybe it’s useful for that purpose. Maybe not. But it has no other significance, and Turing knew it.
IM: Well, people like Dan Dennett say that if a machine can, you know, pass the imitation test it basically is conscious –
NC: [Giggles dismissively] It isn’t, because if you’re using the word “conscious” as a word of natural language then the answer is, of course, no. Because the way words are used in natural language, machines aren’t conscious. Humans are. Maybe some animals are. That’s just the way the word is used. It’s like trying to use the word swim to mean fly – doesn’t make sense. If you’re inventing a technical term ‘conscious,’ then well, they’ll be conscious if that’s the way you define it. But there’s nothing to discuss. Suppose I were to ask you if submarines swim. Does that mean anything? Well, what they do is sort of like swimming, but that’s not what we call swimming. If you want to redefine swimming so that it means maneuvering in the water, OK, then it’s swimming, but you’ve just made up a new word. But there’s nothing to discuss. Dennett is just confused, like most philosophers on this. I mean, either you’re using natural language, in which the words have their own meanings, and in that case the answer is no, it’s not conscious; or you’re making up a new technical term, which to be clear you ought to pronounce differently, and in that case whether they’re ‘conscious’ or not depends how you define your technical term. But there’s nothing at issue.
IM: So you don’t think that artificial intelligence is possible then?
NC: Well, it depends. I think parts of artificial intelligence make perfect sense. The parts that are trying to, say, design robots to achieve something – sure that makes sense, just like building a bigger bulldozer makes sense. And maybe they’re going to be useful for building cars, or exploring the moon, or whatever it is. So that’s a perfectly sensible activity. Trying to construct models that may give some insight into human thought, sure, that would be a sensible activity. In fact, that’s exactly what my own work is – and everyone else in linguistics – trying to define, describe, develop theoretical models which will give some insight into what natural language is. You want to call that artificial intelligence, OK. But the hype about artificial intelligence is based on, simply, confusion of the kind you quote from Dennett – believing that if you could build a machine that would somehow do something like what a human does when it plays chess, OK, then you’ve learned something about humans – that doesn’t make sense, just like building a submarine doesn’t teach you anything about swimming.
Despite the immense generosity of the geniuses who shared their precious time for this article, I have more questions than ever. We’re all just going to have to wait and see what happens. I do know one thing: I’m going swimming before the pool’s infested with super-fast killer robots.
Copyright 2002-2009, The Beast. All rights reserved.