It is unusual to run across a mention of John Searle while reading Scott Adams, because the only other place I’ve heard of him is in philosophical arguments dating back to the eighties.
In that context, he argued against the “Turing Test” for evaluating artificial intelligence. The test goes like this: if you can’t distinguish between a computer and a human in a text-only conversation, then the computer must be intelligent. The actual test, in my opinion, is meaningless, because a human can trick the computer by escaping into the real world, which the computer can only keep up with if it has a similar “life experience”. (And I believe it was never proposed as a formal test of intelligence, just as a thought experiment, so I’m not arguing against Turing himself.)
For example, adapting an example from Hofstadter, asking the question “How many syllables does an upside-down M have?” requires that the computer know about the shapes of letters and geometric properties like rotation, plus the sounds of the letters’ names. At this stage, the computer either needs this information given to it a priori by its inventor, a scheme which would never work in general for creating an “intelligent machine”, or it needs actual understanding of such things. The latter requires eyes and ears for interaction with the real world, at which point you’re looking more at a robot; and then consider questions about the feeling of bungee jumping, or of eating too much Indian food. In essence, your computer either has to fake knowledge of such things, in which case you’re sure to be able to trick it eventually, or it has to be a replica human, which isn’t the point of the exercise: we want an intelligent computer, not an intelligent humanoid robot (although that would be cool too, obviously).
John Searle’s objection to the Turing Test rested on quite different philosophical grounds, namely that computers can’t think:
Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.
Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn’t, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don’t understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don’t understand what they’re ‘saying’, just as he doesn’t.
(from Wikipedia, which seems a bit confused later on, stating
The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think — Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.
— but if there’s no contradiction there, it’s too lazy a Sunday for me to work out how.)
Obviously, to me, the “understanding and thinking” of the whole system must incorporate the actual rules that are being followed, rather than just the mindless object executing the rules. The fact that blindly following the rules allows the Chinese Room to pass the Turing Test (by assumption, no less) implies that the rules know a hell of a lot. My argument goes: either humans don’t understand anything, and neither can machines; or humans have understanding, and so can machines.
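To make that concrete, here’s a toy sketch (in Python, purely illustrative; the rule book and its handful of Chinese phrases are invented for the example, and a room that actually passed the Turing Test would need a vastly larger and cleverer rule book) of what the room boils down to: a mindless lookup plus a rule table, where any appearance of understanding lives in the table rather than in the thing consulting it.

```python
# A toy "Chinese Room": the code below understands nothing; whatever looks
# like understanding is baked into the rule book it consults.
# The rules and phrases are made up for illustration only.

RULE_BOOK = {
    "你好": "你好，很高兴认识你。",        # "hello" -> "hello, nice to meet you"
    "你会说中文吗？": "会一点，请多指教。",  # "do you speak Chinese?" -> "a little"
}

FALLBACK = "请再说一遍。"  # "please say that again", used when no rule matches


def operator(symbols: str) -> str:
    """Mechanically match the incoming symbols against the rule book.

    Nothing here interprets the characters; they are only compared and
    copied, much like Searle shuffling slips of paper in the room.
    """
    return RULE_BOOK.get(symbols, FALLBACK)


if __name__ == "__main__":
    print(operator("你好"))              # looks fluent from the outside...
    print(operator("能给我讲个笑话吗？"))  # ...until a question has no rule
```

The gap between this two-entry table and a rule book good enough to pass the test is, of course, exactly where all the interesting knowledge would have to live.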
Anyway, I don’t like these sorts of arguments, because so much terminology gets invented to argue over that the ideas behind the arguments end up a little obscured. That’s probably my layman’s excuse, though.
Back to where I started this whole thing. Scott references an interesting article:
[John Searle] is puzzled by why, if we have no free will, we have this peculiar conscious experience of decision-making. If, as neuroscience currently suggests, it is purely an illusion, then ‘evolution played a massive trick on us.’ But this ‘goes against everything we know about evolution. The processes of conscious rationality are such an important part of our lives, and above all such a biologically expensive part of our lives’ that it seems impossible they ‘play no functional role at all in the life and survival of the organism’.
Scott then says:
Is it my imagination, or is that the worst argument ever? […] The illusion of free will helps make us happy. Otherwise, consciousness would feel like a prison. Happiness in turn improves the body’s immune response. What more do you need from evolution?
Well, I’m not sure the link between happiness and immune response is direct and low-level enough to carry a great argument in this case. Also, the argument might have been made worse by filtering through journalism-speak. But it does seem like a pretty poor argument. It seems much more likely to me that the illusion of free will arose as a by-product of our ability to think ahead and imagine ourselves in future situations — that particular skill that turned us from monkeys into bloggers. (Not that the difference is particularly noticeable in some cases.)
Not that we choose different paths of action based on what’s coming up, but we take what’s coming up into account in our actions. This means we had to have a symbol for “self” in our brains, which is most easily mapped to an “I”; and so we have consciousness. Since there are “choices” about what to do next, involving our brain-symbol “I”, the illusion of free will arises when one of those choices is chosen. It’s just that we have no control over which choice is taken, because that’s entirely determined by physics (in the “no free will” argument).
Clear as mud. Well, to John Searle.