
The three papers cited above were used in a discussion of consciousness as an area of ongoing research in the Senior Seminar in Neural and Behavioral Sciences at Bryn Mawr and Haverford Colleges during spring semester, 1999. The following are comments on the article by participants in that seminar.


Name: Anne Frederickson
Username: afrederi@haverford.edu
Subject: Privatization and Consciousness
Date: Sun Apr 11 21:44:00 EDT 1999
Comments:
In “The Private World of Consciousness” the author makes an evolutionary claim for the emergence of consciousness, proposing that consciousness is nothing more than the ‘privatization’ of sensory activities. Rather than being a totally separate entity, consciousness is simply the realization of the physiological and psychological effects that sensory input has on a being, together with the ability to create those effects without the input. The intriguing part of this theory is that it finally explains why consciousness might have developed in the first place and what essentially comprises it, two things that past theories have failed to do.

Very few theorists have attempted to explain the evolution of consciousness, and to do so is a daunting task. First you have to know what consciousness is. Most people can give only a subjective idea, usually something like “I can’t define it, but I know it when I see it.” To really understand consciousness, however, we need to lay down its defining characteristics, which the author seems to do in relating much of consciousness to feeling. The next step along the evolutionary path is determining at what point animals became conscious, i.e. which animals are conscious and which are not. This would enable researchers to identify the environmental or internal pressures that led to the development of consciousness. Given the current author’s definition of consciousness, this should be a relatively easy task (if one has the ability to determine whether animals are capable of internally representing the world). Other definitions of consciousness, however, would make the task more difficult: because others tend to think that consciousness encompasses a range of abilities and activities, there may be a spectrum of consciousness in the animal kingdom, making it difficult to pinpoint why consciousness is necessary and why it would have evolved.

The author presents a simple account of consciousness: what it is and why we need it. In doing so he also presents a model of how a neural network would have to work to produce consciousness (a series of loops that continuously activate each other). All that remains to be determined is where these loops exist and how exactly they work. However, this argument may be too simplistic. It is hard to believe that my consciousness is simply my ability to imagine red, or to feel heat without the immediate presence of heat. I do believe that consciousness is just a fancy word for several activities; I just do not believe that feeling is the only one.
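The looping idea described above — sensory effects re-created internally, without the input — can be illustrated with a toy recurrent circuit. The sketch below is a hypothetical illustration, not the author's actual model: two units excite each other, so a brief "sensory" pulse leaves behind self-sustaining activity.

```python
# Toy recurrent loop: two mutually exciting units. A brief external
# input is applied once, then removed; because each unit keeps
# re-activating the other, the activity persists on its own.
# Weights and the saturation cap are illustrative assumptions.

def step(a, b, w=0.6, cap=1.0):
    """One update: each unit receives the summed activity, scaled by w."""
    s = a + b
    return min(cap, w * s), min(cap, w * s)

a, b = 0.5, 0.0  # brief "sensory" pulse to unit a, then no further input

history = []
for _ in range(10):
    a, b = step(a, b)
    history.append(round(a, 3))

# Activity grows back toward saturation long after the input is gone.
print(history[-1])  # -> 1.0
```

With the loop gain above 1 (here, 0.6 × 2 units = 1.2), any trace of input is amplified until it saturates; with gain below 1 it would decay to zero, which is one way to see why the wiring of such loops matters.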


Name: Jonathan Ball
Username: jball@haverford.edu
Subject: Computer Dreams
Date: Mon Apr 12 02:43:07 EDT 1999
Comments:

Clive Davidson’s article “I process therefore I am” paints a rather exciting picture for artificial intelligence, one that I am not quite prepared to accept. Minsky, Aleksander and others like them believe that the only major hurdles in creating consciousness in a non-carbon-based “life-form” are technological: once the right program is discovered and the right hardware built, we will have a conscious machine. While I don’t rule out the possibility altogether, I believe that the problems facing such a system are much greater than people like Minsky and Aleksander would have us believe. (I have read one philosopher who believed that there would be super-intelligent robots by the year 2010, or 2030 at the latest.)

One of the reasons for all the optimism is the work with neural networks. Neural networks do seem to have some remarkable similarities to human processes, such as the ability to learn and the fact that their “memories” are distributed throughout the system rather than stored in one area. It seems to me, though, that consciousness even at a rudimentary level is not just about being able to solve problems without a specific program, or to monitor changes in an internal system; rather it is the inter-relatedness of these things and others. It is impressive to see a neural network learn new faces, but that is still a long way from “seeing” a face and remembering what it felt like the last time that face was seen, or even what that face seems to be feeling now. These tasks require not just the learning ability of a neural network, but also an enormous memory store and a program that explains what relevant information connected with that face should be recalled in any given situation. Of course, what information could be relevant is drawn from an almost infinite set of possibilities, so once again we arrive at the Frame Problem, which we talked about earlier this semester.
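The "distributed memory" property mentioned above can be made concrete with a minimal Hopfield-style associative memory. This is a generic textbook sketch, not any specific system from the article: a pattern is stored as Hebbian weight changes spread across every pairwise connection, and can be recalled from a corrupted cue.

```python
# Minimal Hopfield-style associative memory (illustrative sketch).
# No single location holds "the" memory: every weight carries a
# trace of the stored pattern.

def train(patterns, n):
    """Hebbian outer-product rule over +/-1 patterns."""
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    """Repeatedly update each unit to the sign of its weighted input."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored], 6)
noisy = [1, -1, 1, -1, -1, -1]      # one flipped bit as a partial cue
print(recall(w, noisy) == stored)   # -> True: the pattern is restored
```

Note what this toy does and does not show: it learns and completes patterns without a face-specific program, but it has no machinery at all for deciding *which* stored associations are relevant in a given situation — exactly the gap the Frame Problem points at.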

Perhaps one of the better replies I have heard to the problems of creating a conscious AI system is that we should stop trying to make an “adult” system. In other words, the problem with current AI is that people want to create a system which has not only the processing abilities of an adult human but also the knowledge store which gives their consciousness its “flavor”. Instead, programmers should design a system which can learn (a daunting task in and of itself) and then let it learn like a human child, through interactions with humans. Then the system will develop human consciousness. Though I don’t know how much I agree with this idea, I do think it points out the often overlooked role of culture in shaping consciousness; it may be impossible for a computer to ever develop “real” consciousness if it grows up in the “artificial” world of the lab.


Name: Dan Weiser
Username: dweiser@haverford.edu
Subject: phantoms
Date: Mon Apr 12 11:25:36 EDT 1999
Comments:

This talk on phantom limbs takes me back to neurobiology and behavior... I think that it is amazing that one can feel a body without actually having a body. So then what does the whole phenomenon say about consciousness? It is apparent that phantom sensations must be generated wholly in the brain, and thus conscious perception must occur somewhere in the brain as well. However, no one knows where this transformation is taking place. And why should some people experience it when others do not?

The article by Davis et al. seems to be basic research that confirms what has already been believed to be true: thalamic stimulation causes phantom sensation. Even with that accepted, the article does not do much to enhance our understanding of consciousness. Could it be that all amputees or paraplegics have the thalamic stimulation occurring, yet only some are consciously aware of it? Why is it not a universal phenomenon, and moreover, is there any way for non-challenged people to experience this kind of phantom sensation? Even though research seems to demonstrate that there is no consistent pattern distinguishing people who experience phantom sensations from those who do not, the Nature article presents data that may suggest some correlation with time since the amputation. In Table 1, those who had no phantom had had their amputations for greater than 13 years (13 and 22, to be exact). There was no mention in the article of whether these two subjects had ever experienced such a thing or whether they have lived without ever having to deal with phantom sensations. Further investigation into that may lead to a better understanding of this phenomenon.


Name: Rachel Kaplan
Username: rkaplan@haverford.edu
Subject: Phantom Sensations
Date: Mon Apr 12 15:23:28 EDT 1999
Comments:
The phantom limb studies are fascinating in and of themselves, but also for the questions they raise and the implications they suggest for consciousness.

The authors state that amputation can change the body-surface "map" in the cerebral cortex and thalamus (Davis et al., 1998). So the question is: why do phantom sensations occur? One theory is posed and supported by evidence, and a treatment in keeping with this theory is recommended.

Data support the idea that the thalamic area which originally corresponded to the phantom limb remains intact. Using microelectrode recording and microstimulation, researchers have found an uncharacteristically large thalamic stump representation, evidence which has been supported by animal studies (Davis et al., 1998). In addition, it was found that phantom pain could be elicited through thalamic stimulation. One treatment which has proven successful in patients with intense phantom or stump pain is chronic electrical stimulation within the thalamus (Davis et al., 1998).

The experiment which the authors performed involved using electrodes to stimulate the thalamus in awake, post-operative patients who were experiencing phantom pain. Neuronal responses were examined within this brain area, and in addition, phantom sensations in the missing limb were noted. It was shown that cells corresponding to phantom limb areas fired in a bursting pattern. I wonder why bursting patterns are common in damaged brain areas; perhaps because the circuit connections are not as strong.
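One standard way to quantify the "bursting" pattern described above is the coefficient of variation (CV) of interspike intervals: bursty trains cluster their spikes, giving highly irregular intervals. The spike times below are made up for illustration and are not data from the Davis et al. study.

```python
# Tonic vs. bursting firing, distinguished by interspike-interval
# (ISI) variability. Spike times here are hypothetical.

from statistics import mean, stdev

def isi_cv(spike_times):
    """Coefficient of variation of the interspike intervals."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return stdev(isis) / mean(isis)

tonic = [10 * i for i in range(1, 11)]            # evenly spaced spikes
bursty = [0, 2, 4, 6, 50, 52, 54, 56, 100, 102]   # clusters with long gaps

# A perfectly regular train has CV = 0; bursting pushes CV well up.
print(isi_cv(tonic) < isi_cv(bursty))  # -> True
```

A CV near zero means clock-like tonic firing, while values near or above one indicate the cluster-and-pause structure reported for these thalamic cells.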

Is there any survival advantage conferred on people experiencing these phantom sensations? Perhaps a person is better able to function if he feels physically whole. Admittedly, I speak out of ignorance here, given that I've never talked extensively with someone who is missing a limb. Perhaps it takes only a very short time before they feel "whole" again.

The next question is: how does all of this relate to consciousness overall? This study and others like it provide evidence for the seat of consciousness being found within the CNS, if there was ever any question. These phantom limb studies should also make people less skeptical of those who claim to be feeling pain which seems illogical, whether physical or mental.


Name: Melissa Bromwell
Username:
Subject: Humphrey
Date: Mon Apr 12 16:44:31 EDT 1999
Comments:

Humphrey's article offered an approach that we have not yet seen in other readings this semester. I was intrigued to finally see someone renounce the common approach to understanding consciousness, namely exploring thinking, and instead direct their investigations toward feeling.

I wholeheartedly agree that artificial intelligence will not provide any valuable information about consciousness, at least concerning feeling. In another reading this semester, the author stated that investigating things such as feeling is useless because it is too subjective. I believe that that is the reason why we should explore feeling. Thoughts can be reproduced by anybody, and as we now know from AI, anything. But feeling, because of its subjective nature, is the one thing that makes two individuals distinct.

Because we do not understand what exactly a "feeling" is, it cannot be reproduced. As Humphrey made clear in his article, what a "red feeling" is to one person is not a "red feeling" to another. In addition, most feelings, especially something like a "red feeling" cannot even be categorized, let alone described. Something so "mysterious" (for lack of a better word) then could not possibly be reproduced by AI because we do not know the nature of it and therefore would be at a loss to program the instructions for its creation and experience.



© Serendip, 1994- - Last Modified: Sunday, 18-Apr-1999 11:14:17 EDT