Commentary on Douglas Hofstadter's Gödel, Escher, Bach: an Eternal Golden Braid

This paper reflects the research and thoughts of a student at the time the paper was written for a course at Bryn Mawr College. Like other materials on Serendip, it is not intended to be "authoritative" but rather to help others further develop their own explorations. Web links were active as of the time the paper was posted but are not updated.

Emergence 2006

Jesse Rohwer

Gödel, Escher, Bach is an entertaining and thought-provoking exploration of several related themes in mathematics, philosophy, and computer science, written from a popular-science perspective. Published in 1979, it received the Pulitzer Prize for general non-fiction in 1980. Throughout the book, Hofstadter illustrates concepts such as unpredictable determinism, self-reference and self-representation, Gödel's incompleteness theorem, intelligence, and consciousness through a combination of chapter-prefacing dialogues between fictional characters, analogies to Bach's music, prints by Escher, paintings by René Magritte, and lucid direct exposition.

One of the most prominent themes in GEB is a reductionist explanation of consciousness and human intelligence. Hofstadter states that "to suggest ways of reconciling the software of mind with the hardware of brain is a main goal of this book." Although some still debate whether conscious experience can be explained as an epiphenomenon of relatively well-understood microscopic physical processes (i.e., as a secondary, emergent property—"it is not built into the rules, but it is a consequence of the rules"), acceptance of this description is certainly more widespread today than it was a quarter century ago, when Hofstadter wrote GEB. For this reason, I was less interested in this theme than in some of the others. However, because all of the concepts that Hofstadter presents are interrelated, by addressing a few of what I perceive to be his most interesting themes—unpredictable deterministic systems, levels of complexity, and the relationship between self-reference and incompleteness—and by discussing some of the questions that GEB raises, I will also address the problem of consciousness.

Hofstadter doesn't discuss cellular automata, which we found in class to provide an excellent example of how simple deterministic systems can exhibit unpredictable and complex behavior. However, he does explore the unpredictability and complexity of emergent behavior in ant colonies, intracellular enzymatic processes, and neurons in the human brain. It is only through the interactions of a multitude of ants, no single one of which possesses an internal plan for the often complex design of the anthill, that such vast (relative to the size of each individual ant, at least) and intricate (arches, mazes of tunnels, and towers) structures are eventually built. The neuronal example of unforeseeable complexity arising from simple parallel agents is also fascinating. Hofstadter points out the difficulty of localizing higher cognitive processes: any individual neuron may interact with thousands of others, which in turn interact with thousands of others, all in parallel, to produce complex mental behavior. Finally, Hofstadter relates this emergent complexity to creativity, arguing that determinism does not rule out creativity because any sufficiently large deterministic system contains more than enough pseudorandomness to give rise to unpredictable, "creative" results. Although evidence of creativity in computer programs has been doubtful to date, the development of more powerful computers and correspondingly more complex, unpredictable programs is promising. Hofstadter explains that "When programs cease to be transparent to their creators, then the approach to creativity has begun."
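The cellular automata we studied in class make this point concrete. The following sketch (my illustration, not Hofstadter's; the rule number, grid width, and step count are arbitrary choices) runs Wolfram's Rule 30, a one-dimensional automaton whose single deterministic rule produces famously irregular patterns from a single live cell:

```python
# Rule 30: a one-dimensional cellular automaton. Each cell's next state
# is determined entirely by its 3-cell neighborhood, yet the pattern
# that unfolds from one live cell is complex and hard to predict.

RULE = 30  # each of the 8 possible neighborhoods maps to one bit of 30

def step(cells):
    """Apply the rule to every cell simultaneously (wrapping at edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch complexity emerge.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Each row is computed only from the row above it by a rule simple enough to state in one line, yet the resulting triangle of cells resists any simple description. This is the same "unpredictable determinism" Hofstadter finds in ant colonies and neurons.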

Another interesting theme pervading Hofstadter's work is that of levels of complexity. Hofstadter explains that it is almost always necessary to introduce higher-level concepts to make the task of understanding complex systems tractable. In some cases, these approximations are almost perfect, as in the case of the gas laws: solving an equation in terms of the pressure, temperature, and volume of a gas will always yield an answer that does not deviate perceptibly from complete accuracy. However, at the microscopic level there are no such things as pressure or temperature—these concepts were invented because calculating the velocity vector of every individual gas particle would be a near-impossible task. Another classic example of the utility of higher-level grouping, or "chunking", is weather prediction. We base our calculations on cold fronts, hurricanes, and other macroscopic concepts, but to the individual atoms that make up the atmosphere there is no "cold front" or "hurricane".
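To make the gas-law example concrete, here is a toy simulation (my sketch, not from the book; the particle count, volume, and approximate helium mass are invented for illustration) in which "temperature" and "pressure" exist only as statistical summaries over many individual particle velocities:

```python
# Chunking illustrated: no single particle has a temperature or a
# pressure; both are averages over the whole collection of velocities.
import random

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.6e-27          # approximate mass of one helium atom, kg
N = 100_000          # number of simulated particles (toy-sized)
V = 1e-3             # container volume, m^3

random.seed(0)
# Draw each velocity component from a normal distribution whose width
# corresponds to roughly room temperature (Maxwell-Boltzmann form).
sigma = (k_B * 300 / m) ** 0.5
v_sq = [sum(random.gauss(0, sigma) ** 2 for _ in range(3)) for _ in range(N)]
mean_v_sq = sum(v_sq) / N

# The macroscopic "chunks", recovered from the microstate:
T = m * mean_v_sq / (3 * k_B)      # temperature from mean kinetic energy
P = N * m * mean_v_sq / (3 * V)    # pressure from kinetic theory
print(f"T = {T:.1f} K, P = {P:.3g} Pa")
```

The recovered temperature comes out near the 300 K built into the velocity distribution, and the pressure satisfies the ideal gas law P = NkT/V by construction. The higher-level equation is exact in the limit, even though the quantities it relates are pure inventions of the macroscopic level.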

The theme of levels was particularly interesting when applied to intelligence. Hofstadter is a professor of computer science and cognitive science at Indiana University, and his knowledge of artificial intelligence shows. He employs the idea of "symbols" to describe mental concepts, which must exist in the brain as some pattern of neural connectivity and firing, just as a cold front is a pattern of atmospheric activity. Furthermore, he compares human intelligence (based on complex neural structures that differ from person to person) to the intelligence of lower animals. In one example, the solitary wasp, which has far fewer neurons than a human, demonstrates seemingly intelligent behavior that turns out on closer inspection to be nothing but a very simple and inflexible predetermined program. From this comparison he concludes that many interacting neurons are needed to achieve the capacity for logical manipulation of symbols, and that this ability is in turn necessary for human-level intelligence. Related to this conclusion is his assumption that computers will eventually achieve human-like intelligence, but only by mimicking the architecture of the human brain. He goes on to predict, erroneously, that no special-purpose chess program will be capable of beating the best human players, because only a general-purpose artificial intellect, based on the emergent properties of neural networks, would be intelligent enough. This thinking is confused; chess is a game with simple, well-defined rules, making it closer to arithmetic than to the kinds of pattern-recognition tasks humans are specialized to perform. Since computers can perform arithmetic tasks far more efficiently than humans can, a computer with enough circuitry to rival human intelligence should outperform a human at chess; Deep Blue did just that to Kasparov in 1997, and with significantly less processing power than the human brain is estimated to have.
However, this minor misunderstanding aside, Hofstadter's belief in the importance of emergent systems for generating high general intelligence is supported today—the most prominent contemporary view in artificial intelligence research is that it will be necessary to mimic the human brain's architecture in order to emulate its intelligence.
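What makes well-defined games machine-friendly is easiest to see in game-tree search. The sketch below (my illustration; Deep Blue's actual search was vastly more elaborate, with pruning and custom hardware) applies plain minimax to the toy game Nim, in which players alternately remove one to three stones and whoever takes the last stone wins:

```python
# Plain minimax on Nim. Because the rules are completely defined, a
# machine can enumerate every possible future and play perfectly,
# without anything resembling human pattern recognition.

def best_move(stones, maximizing=True):
    """Return (value, move) from the first player's perspective:
    value is +1 if the first player can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won, so the
        # player now to move has lost.
        return (-1 if maximizing else 1), None
    results = []
    for m in (1, 2, 3):
        if m <= stones:
            value, _ = best_move(stones - m, not maximizing)
            results.append((value, m))
    return max(results) if maximizing else min(results)

# Nim with 4 stones is a forced loss for the player to move: any take
# of 1-3 stones leaves the opponent able to take the rest.
print(best_move(4))   # value -1: every move loses
print(best_move(5))   # (1, 1): take 1 stone, leaving the losing position 4
```

The program simply enumerates every line of play to the end, something no human can do at scale. Scaling this kind of exhaustive search up is, in essence, how a special-purpose program beat the world champion without any general intelligence.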

Finally, Hofstadter brings Gödel into the picture with an examination of the incompleteness theorem and its implications. He explains the concepts of self-reference and self-representation, and the fact that a formal system powerful enough to represent statements about itself cannot be both consistent and complete. He postulates that consciousness is probably somehow the result of the brain's capacity for self-reference, which is not an uncommon explanation in cognitive science. This is all familiar ground, and Hofstadter does a good job of presenting the material, but the end effect of GEB is to leave me searching for explanations to unanswered questions. What does Gödel's incompleteness theorem imply about reason itself? Are all attempts to fully understand reality futile? And what are the implications of our own ability to create systems as logically complete (or incomplete, depending on how you look at it) as our own world? Doesn't this mean that we could be part of a larger system about which we know nothing? Could we nevertheless know something about this larger system, such as that it must be more complex than our own world? These are the kinds of philosophical questions that reading GEB evoked, and they are reflected in some of the Magritte pipe paintings that Hofstadter includes—for example, a painting of a pipe with the caption "Ceci n'est pas une pipe," i.e., "This is not a pipe." At first we may think, 'of course it's a pipe,' until we realize that what Magritte means is that it is really just a painting. Another painting features a room with the first pipe painting on an easel and a "real" pipe floating above it—now there are three layers of "reality" evident: our world, the world of the painting, and the world of the painting within the painting. It forces the viewer to confront the subjective nature of reality—do we necessarily exist in the 'highest' layer?
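A standard computing analogue of the self-representation at the heart of Gödel's construction is a quine: a program whose output is its own source code. The two-line sketch below (my illustration, not an example from the book) achieves self-reference the same way Gödel's proof does, by containing a description of itself that it then applies to itself:

```python
# A quine: the string 'source' is a description of the whole program,
# and applying it to itself (via % formatting) reproduces the program
# exactly, much as a Gödel number lets a formula talk about itself.
source = 'source = %r\nprint(source %% source)'
print(source % source)
```

Here the string plays the role of the Gödel number: a representation of the entire program carried inside the program, which becomes self-referential only when fed back into itself.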

Another slightly less philosophical but still disturbing question that Hofstadter raised in my mind was whether computers will eventually attain or surpass human intelligence. Hofstadter predicts that they will attain intelligence comparable to that of humans, and says he is unsure whether they will ever exceed it. To me, it seems obvious (although maybe this is the result of having read Hans Moravec's papers on AI) that computers will be capable of exceeding human intelligence in the not-too-distant future. The question is whether they should be allowed to reach this point. Considering the chaotic state of computer software even today—flawed programs, viruses, countless opportunities for humans to exploit the oversights of software developers to accomplish their own often malicious ends—I think it would be entirely foolish to delude ourselves into thinking that we could "control" whatever new intelligence we create. And once the hardware is powerful enough and the programming techniques have been perfected, what is to prevent someone or something from creating, either accidentally or intentionally, a hyper-intelligent electronic entity with malicious intent? Is it our fate to be destroyed or replaced by our creations? Should we accept this possible outcome? Should we embrace a transition from our biological origins to the perpetuation of the human race through artificial progeny? Personally, I think not, but reading GEB has made me realize that if we are not careful, we may not have a choice.

I highly recommend Hofstadter's Gödel, Escher, Bach: an Eternal Golden Braid. It is well-written, interesting, informative, and thought-provoking.


© by Serendip 1994-2005 - Last Modified: Friday, 21-Apr-2006 09:50:54 EDT