From the course: Introduction to Artificial Intelligence

The general problem-solver

- In 1956, computer scientists Allen Newell and Herbert A. Simon created a program they called the General Problem Solver. One of its key ideas was what they called the physical symbol system hypothesis. They said that symbols are a big part of how we interact with the world. When you see a stop sign, you know to stop for traffic. When you see the letter A, you know the sound it makes in a word. When you see a sandwich, you might think of eating. They argued that if you could program a machine to connect these symbols, then it would be intelligent.

But not everyone bought into this idea. Programming a car to stop at a sign, or teaching a computer to respond to language, doesn't by itself make the system intelligent. In 1980, the philosopher John Searle argued that systems can sometimes seem intelligent while just mindlessly matching patterns. To make his point, he created what he called the Chinese room argument. In the argument, you imagine yourself in a windowless room with a single mail slot on the door. You can only use this slot to communicate with the outside world. In the room, you have a phrase book on a desk and a pile of Post-it notes with Chinese symbols on the floor. The book tells you which response to send back for each note that comes through the slot. It says, "If you see this sequence of Chinese symbols, then respond with that sequence of Chinese symbols."

Now imagine a speaker writes something in Mandarin Chinese and pushes it through the slot. You look at the note and match it against your phrase book. Then you paste together the Mandarin response from the Post-it notes on the floor. You have no idea what it says in Mandarin. You simply go through the process of looking through the book and matching the sequence of symbols. A native Chinese speaker on the other side of the door might believe that they're having a conversation. In fact, they might even assume that the person in the room is a native speaker. But Searle argues that this is far from intelligence, since the person in the room can't speak Mandarin and has no idea what they're talking about.

You can try a similar experiment with your smartphone. Ask Siri or Cortana how they feel. They might say they feel fine, but that doesn't mean they're telling you how they really feel. They also don't know what you're asking. They're just matching your question to a pre-programmed response, just like the person in the Chinese room. So Searle argues that matching symbols is not a true path to intelligence, and that a computer is acting just like the person in the room. It doesn't understand the meaning; it's just matching patterns from a phrase book.

Even with these challenges, physical symbol systems were still the cornerstone of AI for 25 years. Yet, in the end, programming all these pattern matches took too much time. It was impossible to match all the symbols without running into an explosion of combinations. These combinations would soon fill up even the largest phrase book. There were just too many possibilities to match symbols with their programmed responses. So many philosophers, like John Searle, argued that this path would never lead to true intelligence.
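To see how shallow this kind of symbol matching is, here is a minimal sketch in Python (not from the course, with a made-up phrase book) of the Chinese room as a lookup table. The program returns fluent-looking replies while never interpreting a single symbol.

```python
# A minimal sketch of Searle's "phrase book" (hypothetical example):
# the system maps incoming symbol sequences to canned responses
# without any understanding of what the symbols mean.

# Hypothetical phrase book: each rule pairs an input pattern with a reply.
PHRASE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",   # "Do you speak Chinese?" -> "Yes, a little."
}

def room_reply(note: str) -> str:
    """Look up the incoming note and return the matching response.
    The function never interprets the symbols; it only matches them."""
    return PHRASE_BOOK.get(note, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    # Prints a fluent-looking reply with zero comprehension of the question.
    print(room_reply("你好吗？"))
```

The sketch also hints at the scaling problem mentioned above: every new question needs its own entry, so the phrase book grows with the number of possible symbol combinations rather than with any real understanding.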
