This paper reflects the research and thoughts of a student at the time the paper was written for a course at Bryn Mawr College. Like other materials on Serendip, it is not intended to be "authoritative" but rather to help others further develop their own explorations. Web links were active as of the time the paper was posted but are not updated.
2001 Second Web Report
The idea of creative and intelligent nonhumans is at once exciting and extremely useful. Wouldn't it be great to have a computer assistant that could anticipate your needs, or come up with novel solutions on its own? Scientists have often compared the function of the nervous system to computer programming, but does this comparison reflect an actual causal relationship? The way physics describes communication between computer components in a binary system remarkably resembles the communication between neurons in the body. When considering the brain, science looks only at its physical components. If this physicality is sufficient to explain behavior, then we can recreate the mechanism artificially in a computer. Thus, on the surface, creating a computer that shares the human behavior of intelligence and the functions of the mind seems possible.
To explore the possibilities of artificial intelligence (AI), this paper consists of four sections. First, we will examine how computer programs and AI systems work. Then, the possibility of comparing these to the mind will be explored. Next, a criterion for intelligence and consciousness will be derived with which to evaluate AI. Finally, these standards will be applied to current AI programs and tests, and the future of AI will be considered.
Computers and Programming
Computer programming breaks down to a simple code of on and off circuits, 1's and 0's: binary code. AI addresses the cognitive skills of solving problems, learning, and understanding language (4). Researchers use weak AI as a tool for merely modeling systems of the mind, whereas strong AI is a mind itself and presents its own set of cognitive explanations (6). AI systems built on artificial neural networks compose response rules for themselves based on representations of the present situation. Expert systems, another branch of AI, consist of a knowledge base and a reasoning engine. These systems perform specific tasks by applying built-in knowledge to the task with an inference engine, a reasoning structure (4). Processing relies on a rule-based system of if-then statements to form a line of reasoning. The programming of chess programs such as Deep Blue uses this type of limited intellectual mechanism (5).
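The knowledge-base-plus-inference-engine pattern described above can be sketched in a few lines. This is only a minimal illustration of forward-chaining if-then reasoning; the facts and rules below are invented for the example and come from no real expert system.

```python
# Minimal forward-chaining inference engine: a knowledge base of known
# facts plus if-then rules. Facts and rules are invented illustrations.
facts = {"has_fur", "gives_milk"}

# Each rule: if all premises are already known facts, assert the conclusion.
rules = [
    ({"has_fur", "gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
]

changed = True
while changed:                      # repeat until no rule can fire
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # rule fires: add the new fact
            changed = True

print(sorted(facts))                # "is_mammal" has been derived
```

The engine forms its "line of reasoning" purely by chaining rules over symbols; note that nothing in it knows what fur or milk is.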
Other AI programs try to mimic human understanding of language. Weizenbaum's ELIZA program models human communication by engaging in conversation, asking questions based on a user's responses (6). Another program, by Schank, simulates human understanding of a story and answers implicit questions about it, given a representation of the information presented in the story. AI programs also mimic and explore creativity through random mutation guided by defined goals that direct selection (7). The goals of such a program correspond to the survival goals handed down to the human mind over evolution.
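ELIZA's conversational trick can be suggested with a toy pattern-matcher: find a keyword phrase in the user's input and reflect it back as a question. The patterns below are invented stand-ins, not Weizenbaum's original script.

```python
import re

# ELIZA-style reply: match a keyword pattern in the user's input and
# reflect the captured text back as a question. Invented patterns only.
patterns = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in patterns:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."          # stock reply when nothing matches

print(respond("I feel tired today"))   # "Why do you feel tired today?"
```

As the essay later argues, nothing here understands the sentence; the program only rearranges the user's own words.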
A basic view of the brain presents the neuron in a binary way. It has two states, an all-or-nothing reaction of on or off; gradations between the two states are meaningless. Yet the brain does not process information in a straightforward feedback loop the way the autonomic functions of the rest of the body, such as the stomach and heart, do. The organization of these neuronal 'circuits' shows that the brain is a more sophisticated kind of parallel-processing computational equipment (8). Thus, the brain is not a strict digital computer.
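The all-or-nothing view of the neuron can be captured by a simple threshold unit in the style of McCulloch and Pitts. The weights and threshold below are illustrative numbers, not biological measurements.

```python
# All-or-nothing neuron: the weighted sum of its inputs either reaches
# the threshold (it fires, output 1) or it does not (output 0); there
# are no gradations. Weights and threshold are illustrative only.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these weights, the unit fires only when both inputs are active,
# computing a logical AND:
print(neuron([1, 1], [0.6, 0.6], 1.0))  # fires: 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # does not fire: 0
```

A single such unit is trivially digital; the essay's point is that the brain wires vast numbers of them in parallel, which is where the analogy to a serial digital computer breaks down.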
Information gathered by the nervous system is said to be presented to and interpreted by the I-function in the mind (8). This I-function really just places another mind inside the mind without explaining its function. Knowledge must still be processed in the brain in some way. Even so, this unknown 'I-function' of the mind relies on the physicality of the brain, making the comparison to the physical realm of programming still possible. Our own minds do not necessarily understand and perceive everything, but see the world in terms of the brain's own design: the reality given by sensory inputs, which leads to awareness.
Intelligence and Consciousness
The definition of intelligence is crucial for evaluating the goals and accomplishments of artificial intelligence. Turing, an early AI researcher, wanted to separate consciousness from intelligence, as consciousness remains a mystery (1). Under this definition, an AI system could be intelligent despite lacking consciousness. This definition, however, does not include those expanses of intelligence and creativity which require awareness for real understanding. The brain functions to sense the environment, construct a picture of current reality from sensory inputs, initiate actions, and record and learn from experiences (3). Likewise, a representation of intelligence does not constitute actual intelligence, just as a model of behavior does not constitute the actions themselves. Humans do not have consciousness of how they think (which certainly would answer many of the questions posed here). Processing can occur without consciousness; for example, the brain still fills in the blind spot of the retina, of which humans are unaware (2). But humans have no active understanding of the blind spot; it does not demonstrate the knowledge or intelligence of an individual.
Biological searches for neuronal centers of consciousness have so far turned up no strong correlation between any section of the brain and awareness. A preliminary connection between visual consciousness and neurons beyond the inferior temporal cortex (IT) has been shown (2). When monkeys were made to look at conflicting superimposed images, creating binocular rivalry, the IT region, not the visual cortex, showed greatly heightened activity. This increased activity could correlate to consciousness, but may instead be another aspect of the visual system which feeds into consciousness. Removing the IT and testing the visual awareness of the monkeys might provide further justification for this location. Still, even knowing the location of the neural correlate of consciousness (NCC) would provide very little functional information about consciousness of the mind in the brain. Thus, while scientific clues about consciousness do exist, a comprehensive explanation of it does not. Nevertheless, the fact that consciousness is a physically existent part of human understanding remains true and must be part of a complete definition of intelligence.
Evaluation of AI Programs
Finally, we must explore the validity of AI according to the model of intelligence presented by the human brain. Do the programs created actually have understanding and awareness, given their programming? Must awareness be a part of a computer AI's understanding? One noted test, the Turing test, can easily be dismissed. The Turing test states that if a computer program can fool a human user into thinking it is human, then the program is in fact intelligent (5). This test merely shows how capable a program is at mimicking human behavior. Having the appearance of intelligence does not equal possession of it; the results do not reveal a cause. Programs such as ELIZA can pass the Turing test yet contain no real mechanism of intelligence beyond mimicry. This definition does not take into account understanding or the causal sources of intelligence.
Considering the case of Deep Blue, we see that computer chess programs can rival grand masters at play. Deep Blue considers chess positions as a whole, allowing massive computation to replace understanding (5). Similar programs cannot adapt to games, such as the Chinese game of Go, that do not allow positions to be considered as a whole but instead require sub-positions to be analyzed separately first. Thus, the computer's failure of understanding translates into a real lack of intelligence.
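The way massive computation can substitute for understanding is visible in a toy version of the search such programs perform: exhaustively scoring every line of play with minimax. The miniature game tree below is invented for illustration and has nothing of Deep Blue's scale or chess knowledge.

```python
# Brute-force game search in the spirit the essay describes: score
# every line of play with minimax. No "understanding" is involved,
# only exhaustive computation. The tiny tree below is invented.
def minimax(node, maximizing):
    if isinstance(node, int):        # leaf: a terminal position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Nested lists stand in for positions; integers are terminal scores.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))           # best guaranteed outcome: 3
```

The search has no concept of a "good position"; it simply enumerates outcomes, which is why it fails when, as in Go, the tree is too large to consider whole.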
Other AI programs set out to display understanding more explicitly. Searle, however, shows with his famous Chinese Room Argument that Schank's language program lacks real comprehension (6). In essence, the computer may have no real understanding of the story but can match up formal symbols according to its programming. The syntax of the programming does not equal semantics.
Another, more pertinent way to look at this is to consider computational states. Meaning is not inherent in the physical states of a program; the meaning of the symbols must be added by an outside interpretation. Searle states that "a physical state of a system is a computational state only relative to the assignment to that state of some computational role, function, or interpretation" (8).
And, In Conclusion
The brain's inputs correlate to specific sensations through specific neurological processes. In the case of vision, the components of the visual field, photons, stimulate the response resulting in the concrete visual event (8). This specificity gives the inputs of the brain an intrinsic meaning, while the binary code of programming has no such inherent physical meaning. Creating an AI system with a grounding in reality could perhaps produce remarkable results. Having a computer first experience reality, within a rule system, in a way similar to humans is perhaps the first step toward achieving understanding. Whether this experience could really lead to awareness is still unclear. As long as the brain can be defined physically, it can be recreated physically. Whether a greater understanding of the brain, and of how it creates consciousness, would lead to advancements in AI also remains to be seen.