Walk into the unassuming brick house in Boston's Back Bay and you would think you've walked into the midst of a household ready to pack up and move away at a moment's notice.
Books, records, coffee cups, bric-a-brac and memorabilia cover every square inch of space. Not walls and shelves, mind you. The floor. There's just enough of a path for inhabitants to move between rooms and get to the door.
Welcome to the home of Marvin Minsky, the Harvard University- and Princeton University-educated mathematician who has become the signature thinker about thinking among American academics. From his professorships at the Massachusetts Institute of Technology, his name has become synonymous with the advance of artificial intelligence implemented in computers and other devices. His thinking has strongly influenced the creation of networks that work like brains, the representation of knowledge in symbols and semantics, the perception of environments in machines and the development of intelligent robots.
In the early 1970s, he and Seymour Papert, co-founder with Minsky of MIT's Artificial Intelligence Laboratory, began formulating a theory called "The Society of Mind," which proposed that intelligence is not a state of being or a single sequence of logical thought, but the result of an untold number of interacting processes.
In 1985, Minsky published his best-known work, Society of Mind, in which he described the way humans unconsciously manage the interaction of small armies of agents in their minds to achieve thought and action. It sounds cluttered and messy, akin to how he keeps his house. Nothing gets discarded, everything stands ready for recall and reuse. And the armies of agents always find a way to get where they need to go.
The idea was considered a conceptual breakthrough at the time.
Later this year, Minsky will publish the sequel to Society of Mind, in which he tries to achieve a similar breakthrough in how humans think about the emotions they feel and use. Minsky sat down with Ziff Davis Internet Chief Content Officer Tom Steinert-Threlkeld to talk about his next work, The Emotion Machine.
With your new book, The Emotion Machine, you try to establish a theory of how emotions get created. But, to you, emotions aren't the same as feelings.
It's about thinking. The main theory is that emotions are nothing special. Each emotional state is a different style of thinking. So it's not a general theory of emotions, because the main idea is that each of the major emotions is quite different. Each has a different management organization for how your thinking will proceed.
In an adult person, part of the thinking process is being able to manipulate these and turn one on and have it compute something. You can compute some things when you're angry that you can't when you're afraid, and so forth. So finally, the management is able to use these different ways of thinking very quickly as part of ordinary, common-sense thinking.
But I don't think we think of emotions as a thinking process.
Well, thinking is a whole set of complicated processes. Among them, there isn't any process that has a good idea of what the others are doing or how the whole thing works. So people grow up without a theory of thinking itself.
It's interesting: if you look at school, you learn about social science and language, history and arithmetic and things, but there is no course in thinking.
If you go to Harvard, you are taught not so much through books as through case studies, so that the process of going through the case study and developing a plan of attack instills a rigorous method of thinking. It is about thinking, but it's not presented that way.
The case method is probably very important. One way we learn is . . . when you have an experience, a business experience or any real-life experience, all sorts of things happen. You don't remember what happened, but what you remember is a little story you made up that has a plot, a main character and a problem to solve.
Why is it so hard to develop a decent theory of how the mind works?
I think the main thing is that it's so complicated that our culture has developed wrong theories that get in the way. For example, there is the idea of free will, which is a part of our ethical system, religious systems and philosophical systems, where people grow up to say, "We're not a machine, and what we do can't be explained."
Now in the last 10 years, there have been bizarre discoveries about memory. Here is an experiment that was done recently: You have your victim, a person who comes in. The experimenter comes out and just meets him. They are talking about something. Then a couple of workmen come. I think they are moving [something] like a piece of wallboard. They are just on the scene. They walk between you and the person you're talking to with this wallboard, and what happens is that the first interviewer actually disappears behind the wallboard and the other guy comes out. So originally, you're talking to this fairly tall person with red hair, and while you're talking these people come through. Then you're talking to this guy with a moustache and brown hair and a different jacket.
The person who you are interviewing doesn't notice that this has happened. It's called change blindness. If you're focusing on one thing in a room, while your eyes are there you can change a large amount over here, and when they look back, they won't remember what it looked like.
They won't remember the previous person?
Right. Probably some part of their brain has noticed the change, and the other part says, "You must be mistaken," and it gets corrected. Nobody knew this happened until around 1995, when people discovered change blindness.
Emotions seem to be different from thought. But let's talk about why they're not. Let's go back . . . and put it into the context of emotion: What is love?
There is this fellow who says, "I've met this wonderful person, and she's good in these respects," and so forth. Then when you look at this, this is really infatuation. People will say, "Well, love is different," but in fact, everything is different. When two people have a relationship, there are dozens of different ways in which they can be related. But one of them is to have a filter, which ignores all the things that you would normally regard as undesirable or repellent in a person, and everything looks perfect. Though we normally think of this as a positive thing, you can think of it as a sort of self-mutilation where you're turning off all the machinery in your head which would be criticizing them and saying, "Don't get involved with this."
You can see why a mechanism like that would evolve. Nobody who understood the world would have children and spend 10 years working very hard to support these creatures. In fact, some people are deficient in that.
So there are all sorts of different processes that combine to produce complex thoughts that we call emotions?
Some of these are built in. We've evolved little control things that activate some parts of the brain and turn off others so that . . . when you're angry, you probably turn off some of your long-range planning machinery and turn on systems that are good for solving problems very quickly, although not very elegantly. We turn on some self-defense mechanisms because evolution has discovered that if you appear to be very aggressive, the enemy might go away. So you turn on various mechanisms that will make it look to the other person as though . . . you don't have any alternatives, so they had better retreat. So something like anger, you can see it as a very clever intimidation system. Of course, the person who is experiencing it doesn't know how it works, because you've turned off your critical machinery.
It's a blinding agent.
In the case of some type of love relationship, what you do is try to guess what the other person's goals are and adopt them. Some people romanticize this by saying that you're merging and sharing common interests. In fact, you are, because if the thing works out well, you will start to have the same goals.
We've had learning machines. We're not even there with thinking machines. Now, you raise the possibility of emotion machines. In the context of the title, do you just want to understand the emotion machines in the human psyche and brain?
Well, the title is for annoying as many people as possible. [Laughter]
Because the main point of the book is that it's trying to make theories of how thinking works. Our traditional idea is that there is something called thinking and that it is contaminated, modulated or affected by emotions. What I am saying is that emotions aren't separate.
So emotion is thinking.
Yeah. Each emotional state is a slightly different way of thinking or a very different way. The reason that we do this is that any particular way of thinking is only going to be good for solving certain problems, just as any way of representing knowledge would be good for only solving certain problems.
How long do you think it will be before we have machines that can calculate and express emotion?
The central problem is that we don't have any kind of machine thinking that covers much of a range of problems. Part of the book that you haven't seen yet, but it's sort of implied in Society of Mind, is that we've got to develop theories of common-sense reasoning. No machines have much of this, which is why you can't talk to a machine much, except maybe for an airline reservation, where the machine has a lot of knowledge about seating and routing.
What knowledge can a machine have, really?
Everything we do has to have some knowledge about how much time and resources it's going to consume for you. Just about everything that we do involves allocation of resources. So part of a thinking machine has to be lots of different ways of representing the same knowledge, with knowledge shared among them. And we have to get rid of this idea that there is something called analytical thinking, which is the core of everything. [With] most thinking, most people see something and say, "What does that remind me of, and what do I do in that situation?" So one of the things that we have to do is get lots of people working on different kinds of analogies and metaphors. It's the most powerful kind of thinking, remembering what something is like. Then, of course, what are the differences and similarities? If there are some big differences, do you ignore them, or do you have to make a separate [analysis]?
There are no machines that do anything like that. . . . People still think that computers have to work logically.
How else would computers work, if not logically? How would you create a metaphor base to work from?
You have to build a system that looks at two representations, two expressions or two data structures, and quickly says in what ways they are similar and in what ways they are different. Then another knowledge base says which kinds of differences are important for which kinds of preference.
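The two-part scheme Minsky describes can be illustrated with a toy program. Everything in it is hypothetical: `compare` is the routine that reports how two representations are similar and how they differ, and the `IMPORTANCE` table stands in for the "other knowledge base" that says which differences matter for which purpose.

```python
# Toy sketch of the two-part comparison scheme described above.
# All names and data here are hypothetical illustrations.

def compare(a: dict, b: dict):
    """Return the shared features and the differing ones."""
    similar = {k: a[k] for k in a.keys() & b.keys() if a[k] == b[k]}
    different = {k: (a.get(k), b.get(k))
                 for k in a.keys() | b.keys() if a.get(k) != b.get(k)}
    return similar, different

# Stand-in for the second knowledge base: which kinds of differences
# are important for which kinds of purpose.
IMPORTANCE = {
    "travel": {"speed", "range"},
    "budget": {"cost"},
}

def relevant_differences(a: dict, b: dict, purpose: str):
    """Filter the differences down to the ones that matter here."""
    _, diff = compare(a, b)
    return {k: v for k, v in diff.items()
            if k in IMPORTANCE.get(purpose, set())}

car  = {"wheeled": True, "speed": 120, "cost": 30000, "range": 400}
bike = {"wheeled": True, "speed": 25,  "cost": 500,   "range": 60}

similar, different = compare(car, bike)
```

Run on the two toy descriptions, `compare` reports that both are wheeled while speed, cost and range differ, and `relevant_differences(car, bike, "budget")` keeps only the cost difference.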
Which of these different processes will have valid application to industry? Where do you see cascades of emotion being used by or being collated by machinery?
They are each good for a different problem. So what you want to do is stimulate different ways of doing things. Nobody is doing that. . . . So these days, we have 50,000 people working on neural-based expert systems or neural nets or genetic algorithms or logic-based systems. But on the whole planet, I actually only know five or six of these people. I assume there are five or six others that I haven't found who are working on designing a common-sense knowledge base. So it could take a very long time.
What I'm hoping is that when my book comes out, people will say, "Oh, maybe that's something that I can do."
Yeah, but why would they do it? I'm sure we'll get to the emotion machine at some point. Maybe not our generation, but the next one. Why would you want to put emotions in a machine? Is there any use to having it?
Well, you want to make machines that are smart. The whole purpose of my book is to say that there aren't things called emotions. There are different ways of thinking. The way to make a smart machine is to have different ways to look at problems, and the knowledge of procedures which enables it to switch from one to the other when something isn't working, because you jump into another mode. But you asked, "Why do we want smart machines?" That's another story. I think basically because we need help. They are not smart enough.
I think we all want smart machines, but I have trouble with the idea that we'll have emotional machines. They drive me crazy enough.
I don't know what you mean. You're using "emotional" in some wrong sense.
You don't want a sport utility vehicle going down the highway at 65 miles per hour that gets cut off, has the ability to recognize all the other traffic, recognizes that it was cut off, uses the same kind of rationale as a human being, gets angry and causes an accident.
No, I'm not saying that. You want a machine that has many ways of thinking. They don't have to be the same as people have. So you don't want to revert back to the old meaning of emotion, which is irrational behavior. What you want is to have lots of different ways that are good at solving the problem that you want to solve.
The rational thing for a 6-year-old to do is to kill the baby sister to have all the toys permanently and not get into these quarrels. So what you don't want is a rational machine that works things out logically. You want machines that have many ways of looking at things and can balance them.
OK. So then what would be the practical applications? Which emotions or which processes that would create liking or frustration or whatever would be useful?
Well, for different problems, you'll make different kinds of machines. For example, people are no good at running governments. Absolute power corrupts. So can you make a machine that, even when it gets power, won't abuse it?
What part of this new understanding about how to encapsulate emotional methods of thinking could you see becoming useful in handheld computers, laptop computers, the Internet?
Well, I think a gigahertz computer, which you can buy now for about $1,000 or so, could be as smart as a person.
How, though, do we determine which emotions, which kinds of thought, get applied to different ways of thinking? How do we decide which parts and which processes to apply?
I think a big market right now is search engines. Wouldn't it be nice if we could make a machine that could read a story or read an article and understand it in some of the ways that a person does? Right now, search engines are using tricks. They are using statistics about words to try to guess what an article is about. But no machine is doing anything like reading the article, parsing the sentences, which isn't so hard, and figuring out what it's saying. So there is a big incentive to make machines that actually understand texts more or less the way that people do, look at movies the way that people do.
And the practical benefits there would be what? Ranking a story by three tears if it's sad [by a user's tastes] or hearts if it's a very romantic story?
Well, I'm more interested in saying, "How do you prove some theorem?" or "How do you solve complicated problems?" Or, "How do you design an object that people will like the looks of?" All the sorts of things that you hire people to do.
What you have here is this idea of the emotion machine. And we're using it to clean house?
What I'm saying is that the only way to make a machine that's intelligent and solves hard problems and is useful is to have many different ways of thinking. The emotion machine is about how to make a machine that's resourceful. There is no particular reason to make it angry, but you certainly would want to make it impatient internally so that if it's not making progress, it will switch to some other way of thinking.
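This internal "impatience" can be read as a control loop: the machine monitors its own progress and, when the current way of thinking stalls, switches to another. Here is a minimal sketch of that idea; the strategies and the toy problem (counting an integer down to zero) are entirely hypothetical stand-ins.

```python
# A toy solver that grows "impatient": if the current strategy makes
# no progress for `patience` consecutive steps, it switches to the
# next way of thinking. The strategies below are hypothetical.

def is_solved(n: int) -> bool:
    return n == 0

def big_steps(n: int) -> int:
    """Fast way of thinking; only works while n is still large."""
    return n - 10 if n >= 10 else n

def small_steps(n: int) -> int:
    """Slow way of thinking; always makes a little progress."""
    return n - 1 if n > 0 else n

def solve(problem, strategies, patience=3):
    state = problem
    for strategy in strategies:          # switch modes when one stalls
        stalled = 0
        while stalled < patience:
            if is_solved(state):
                return state, strategy.__name__
            new_state = strategy(state)
            # no change means no progress: grow more impatient
            stalled = stalled + 1 if new_state == state else 0
            state = new_state
    return state, None

result, used_by = solve(27, [big_steps, small_steps])
```

Starting from 27, `big_steps` quickly gets down to 7, stalls, and the solver impatiently hands the rest of the work to `small_steps`.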
I could see boldness being a quality that you would want with executive decisions. How do you make use of emotional makeup, emotional characteristics?
If you look at a person, you will see these large moods where for 10 minutes they are happy, then they are bored or they're getting angry and so forth. What I'm saying . . . is that ordinary thinking is flashing these feelings on and off in seconds or milliseconds. So you want the machine to be very bold and uncritical for a tenth of a second while it generates five ideas. Then you want it to be extremely critical while it throws most of them out. And then for half a second it's going to be analytical . . . so what I'm saying is that the large-scale emotions are not very useful. But for any kind of thinking, what you're going to have to have are similar things on a very much smaller scale. These have never been detected. Psychologists haven't had the idea of looking for them, so nobody knows that they exist. I'm saying that every five seconds, you're probably going through 10 emotions in this exploitative fashion, where one process is turning the others on very briefly to get over some little obstacle or to get a new way of looking at something.
If you have machines that have multiple ways of thinking, what will be the nature of the improvement?
Right now, if you have something that you want done, you hire an architect for your house, or you hire a builder or you hire a contractor. Maybe you go to the bank and get somebody. We're used to having lots of people who are good at different things. We're not used to a little box which is good at a thousand different things. . . . So there is no way to look at the future now.
But the idea is to build in special modules.
Well, I would build in the resources, and then you'll have thousands of hackers who are inventing new ways to rearrange them.
And that's where the network will get angry. People will build [antibodies] into the logical process to notice that and reject it.
Right. It's going to be a very strange world where you've got lots of little minds that can do things that a person can do.
You talk about consciousness: that awareness is just a state, but it is also the result of some set of processes. What implications does that have for us?
Well, I think of consciousness as being by itself pretty trivial. Consciousness is really a word that covers a lot of things, but most of them have to do with the ability to remember what you were thinking recently and criticize it or modify it. But that by itself is no good if you're dumb. So the important thing about machines of the future is having them able to understand things and solve problems, not just making them conscious.
How easy is it to make something conscious? Why isn't the Net already conscious?
It is. Consciousness is nothing. Consciousness is just remembering what you've been doing lately. We have programs, like Lisp, where we [can] turn on a trace program. It remembers all the functions that you've called recently. But there is nothing that can understand the trace, so without understanding, consciousness is nothing, and with understanding it's easy. I don't see any mystery to consciousness, except that people don't understand that without intelligence, without common-sense reasoning, you can't think.
If you look at what people are actually conscious of, you see it's almost nothing. You don't know how you get the next word you think of. You don't even know how you raise your arm. You don't know how you walk. So we're not conscious in any important sense. What we are is smart. We have a little ability to reflect on how we think, but we're not conscious in the deep sense that we know how we think. We know what's in our minds, but we don't know how we remember things or how we solve problems. I think the so-called mystery of consciousness is one of the myths that has kept people from having good theories of how the mind works.
You would argue, then, that large parts of the Internet are conscious.
The Net is not smart. It can't do much. It is conscious in the sense that it backs up its records. But consciousness is not intelligence. It is just something that we have only a little bit of. It's a little ability to remember what you were doing lately. If you can remember it but can't think about it, you're nowhere.