This paper reflects the research and thoughts of a student at the time the paper was written for a course at Bryn Mawr College. Like other materials on Serendip, it is not intended to be "authoritative" but rather to help others further develop their own explorations.

The Story of Evolution, Spring 2005 Second Web Papers On Serendip

Daniel Dennett, in his work Darwin's Dangerous Idea, argues in favor of artificial intelligence (1). Others, including Kurt Gödel and Roger Penrose, argue otherwise. According to Dennett, Penrose's use of Gödel's theorem to discount artificial intelligence is fallacious. Penrose believes:

"One might imagine that it would be possible to list all possible obvious steps of reasoning once and for all, so that from then on everything could be reduced to computation-i.e., the mere mechanical manipulation of these obvious steps. What Gödel's argument shows is that this is not possible. There is no way of eliminating the need for new "obvious" understandings. Thus, mathematical reasoning cannot be reduced to blind computation." (2)

To Dennett, there exists a set of algorithms that yield a mathematical insight "even though that was not just what [they were] 'for.'" (1) (p 441) Dennett poses the question, "how could Penrose have overlooked this retrospectively obvious possibility?" (1) (p 441) To "prove" the fallacy of Penrose's argument, Dennett outlines an algorithm for playing perfect chess. (1) (p 439) The result of this algorithm would be an artificial sense of "insight" in the machine that runs it. Dennett claims that since chess is a finite game, there are a finite number of possibilities; were a computer equipped with an algorithm to account for each outcome, it would essentially pass the Turing test.

Suppose that the ever-common "fatal error" occurred in the computer, or that the chip containing the algorithm were destroyed: would the computer's insight be lost as well? Yes; however, the question is fundamentally misconceived, because something that never had insight to begin with cannot really "lose" it. Were Bobby Fischer to break his arm, would he lose his insight? The answer is a resounding "No." Were Bobby Fischer to suffer irreparable brain damage, would he lose his insight? Before this question can be answered, an even more important one must be asked: would Bobby Fischer still be Bobby Fischer if he suffered irreparable brain damage? It is not necessary to answer this question before concluding that Bobby Fischer's character traits, personality, and, not least, his insights are a product of Bobby Fischer. No algorithm could be written and then implanted into Bobby Fischer's brain to bring Bobby Fischer back.

An analogous example is the story of Phineas Gage, the foreman of a railway construction gang, who suffered brain damage when a tamping iron three feet seven inches long blew through the front of his brain. Although Gage miraculously survived, his personality was drastically altered after the accident.
Prior to the accident, Gage was described as an efficient, congenial and well-balanced man. Afterward, however, he became irreverent, profane and obstinate, and was described as "No longer Gage" by his coworkers. (3)

Once again, would Bobby Fischer still be Bobby Fischer if he lost the qualities that made him Bobby Fischer in the first place? Yes, he would still be Bobby Fischer, but no, he would not be the same Bobby Fischer that he used to be. This case, although analogous to the Phineas Gage case, is not analogous to the damaged computer chip case. Will the computer chip still be a computer chip? Yes. Can its artificial insight be restored? Yes.
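Dennett's claim that a finite game can be played perfectly by brute enumeration can be made concrete on a toy scale. The sketch below is not from Dennett's book; it illustrates the general technique of exhaustive game-tree evaluation using the small take-away game of Nim (the game choice and function names are illustrative assumptions), since chess itself would finish "way past the universe's bedtime":

```python
# Exhaustive evaluation of a finite game: every position's outcome is
# computed mechanically, with no "insight" required. Toy game: players
# alternately remove 1, 2, or 3 stones; whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """Return True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no move available: the player to move has already lost
    # The mover wins if ANY legal move leaves the opponent in a losing position.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a winning number of stones to take, or None if every move loses."""
    for take in (1, 2, 3):
        if take <= stones and not first_player_wins(stones - take):
            return take
    return None
```

For this game the table of outcomes shows that positions with a multiple of four stones are losses for the player to move; the same enumeration strategy applied to chess is what Dennett concedes is computationally hopeless in practice.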

Dennett's mistake is his assumption that something as intangible as insight may be quantified. Though we may define insight to mean simply "perceptiveness" (4), it is neither fair nor equivalent to remove consciousness from the equation and replace it with artificiality. Further, what Dennett may not realize is that he is making the kind of assumption that Kurt Gödel illuminates in his incompleteness theorems. Dennett presents his chess algorithm, which, in theory, works fine; in practice, however, the case is not so simple. Dennett himself describes his algorithm as so impractical that it would finish "way past the universe's bedtime." (1) (p. 439) Gödel's first incompleteness theorem tells us that in any consistent formal system powerful enough to express the arithmetic of the natural numbers, there is always a statement that can be neither proven nor disproven within the system. (5) So although Dennett has proposed an algorithm that we can recognize and understand as workable, we cannot, in effect, call it true or false.
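The theorem invoked above can be stated precisely. The following is a standard modern formulation, supplied here for reference rather than drawn from the paper's sources:

```latex
\textbf{G\"odel's first incompleteness theorem.}
Let $F$ be a consistent, effectively axiomatized formal system strong
enough to express elementary arithmetic. Then there exists a sentence
$G_F$ in the language of $F$ such that
\[
  F \nvdash G_F \qquad \text{and} \qquad F \nvdash \lnot G_F,
\]
that is, $F$ can neither prove nor refute $G_F$, so $F$ is incomplete.
```

Note that both consistency and the ability to express arithmetic are required hypotheses; the theorem says nothing about systems lacking either.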

It is peculiar that Dennett criticizes Roger Penrose's use of Gödel's theorem to make the case against artificial intelligence. According to Dennett, "once we appreciate how an algorithmic process can escape the clutches of Gödel's theorem, we can see more clearly than ever how Design Space is unified by Darwin's dangerous idea." (1) (p 451) In trying to escape the "clutches of Gödel's theorem," Dennett himself falls victim to Gödel's idea of incompleteness. To Dennett, the "romantically inclined" will see Gödel's theorem as a mathematical explanation of the special nature of the human mind. (1) (p 428) Dennett's refusal to see the mind in this way is all the more perplexing. Gödel's argument is mathematically sound, and yet Dennett asks his reader to suppose that there is something outside the equation. Isn't that what Gödel's theorem is saying to begin with? As human beings, we are able to see truths (or fallacies) even if we cannot prove them. In fact, Roger Penrose makes this very claim: the gap between "what can be mechanically proven" and "what can be seen to be true by humans" shows that our way of thinking is not mechanical by nature. (5) This idea can be extrapolated to explain why artificial intelligence is so improbable: if we cannot mechanize thinking, then a machine cannot think. We may further conclude that the evolutionary process is far too important and complex to be mimicked algorithmically.

1) Dennett, Daniel. Darwin's Dangerous Idea: Evolution and the Meanings of Life.

2) The Atheism of the Gaps, "Shadows of the Mind," Penrose book review

4) Encarta definition of "insight"

5) Wikipedia: Incompleteness Theorem


© Serendip 1994 - Last Modified: Wednesday, 09-Mar-2005 10:56:18 EST