Serendip
Empirical Non-Foundationalism Forum



Comments are posted in the order in which they are received, with earlier postings appearing first below on this page.

Notes from our discussion on "Culture as Disability"
Name: Laura
Date: 2006-08-04 17:06:32
Link to this Comment: 20128

I wrote up this summary of our discussion to think about some of the ideas some more... I hope others will add additional thoughts as there were many good ideas I didn't include...


In their essay, McDermott and Varenne propose that disability can only be understood as something emerging at a societal level. Disability can only be meaningful when one looks at an individual as embedded in a group of other individuals-- a society or culture. No sense can (or should) be made of disability as an attribute inherent in any particular individual. They say,


"One cannot be disabled alone..."
"The world is not really a set of tasks..."


They identify two ways in which disability is often made sense of and then offer their own:


1. Disability as deprivation

2. Disability as difference

3. Culture as disability


A parallel may be drawn between their ideas and evolution. In the context of evolution, the world IS a set of tasks (to survive, to reproduce, etc.). However, nature does not care how an organism/individual gets along in the world; that it does is all that matters (i.e., a human moves around by walking on two legs, a bird does so by walking on two legs or by flying, whereas a snake uses its entire body to move itself forward; mammals reproduce sexually, by combining their DNA with that of other individuals, whereas a bacterium reproduces by simply creating a clone of itself). From an objective standpoint, no way can be said to be inherently better than another.


In human cultures, disability is created when a society puts a value on doing something one way as opposed to another. In doing so, the world in which ALL individuals must live is changed to suit some and not others. The world still remains a set of tasks, but now the world is navigable only by those individuals who do things in that very specific way. For example, a person may be considered abnormal if they cannot hear. Because a society changes the world in ways which best suit those with intact hearing--communicating via speech, creating telephones, making music, etc.-- a deaf person can no longer get by as easily in the world.


The parallel to evolution is interesting because the world is being continually modified not just by individuals of one group (one society), but by individuals of many different groups (many societies). Just as a group of individuals can marginalize some of its own members, so too can (and do) multiple groups/cultures repress and constrain others. Further, one species can disable another (e.g., considering the human contributions to global warming, we could say that coldwater fish may some day become disabled as global temperatures rise).


In 1950, psychologist Karl Lashley published "In Search of the Engram". Lashley asked where in the brain memory is located. First, mice were trained to run a maze. He then removed different parts of each mouse's brain (some mice had two or several different parts removed) and assessed their ability to complete the same maze. He found that although the degree of impairment was proportional to the number of parts removed, ALL mice were ultimately capable of completing the maze. Lashley concluded that the brain is equipotential with respect to memory.


Equipotentiality can be contrasted with the view of localization of function, which is more common today even though Lashley's observations have not been overturned. Equipotentiality does not deny that the brain has specific modalities, such as vision, audition, and so on. But, unlike a localizationist perspective, it acknowledges that, given a task, an organism has many ways of completing it. Why, then, does a localizationist view prevail? Because it is easy to demonstrate that a person is incapable of doing a task when they are constrained in very specific and unnatural (?) ways. For example, a patient with prosopagnosia is said to be incapable of recognizing people, including family, friends, and even his/her own reflection. But this is usually borne out only in a clinical setting, when the patient is tested on recognizing people from pictures; that is, on recognizing visual images. In daily life such patients ARE perfectly capable of recognizing family and friends, most often by voice (auditory cues). Such individuals are disabled only when constrained to do things in pre-determined ways.


Lashley provides a concrete example of two different, albeit both sufficient, ways a mouse can learn:



“These facts indicate that, in the absence of the visual cortex, the learning of the brightness
reaction is carried out by the optic tectum. However, so long as the visual cortex is intact,
removal of the tectum has no effect whatever upon the performance of visual habits. The
tectum apparently does not participate in visual learning so long as the cortex is intact.” (466)



The contrast between disability as deprivation and disability as difference may be strong, but the contrast between disability as difference and culture as disability is subtle. Again, a deprivation approach says that, prior to any society or culture, there are certain correct ways of doing things in the world, and people either have them or they don't (and are disabled). A difference approach is roughly equated with the "politically correct" sentiment: "I have a way of doing X, and you have a way of doing X. We're just different, and each way is equally acceptable."


However, one can go further and say cultures create not only disabilities but abilities. The difference approach implies that methods of doing things are inherent properties of the individual (an individual sees, hears, etc.). Culture as disability does not. The notion of abilities, characteristics, or properties of an individual is not meaningful unless taken in the context of some set of tasks, in the context of a world with which the individual interacts. These ideas resonate with the philosophical tradition of constructivism (or anti-realism): there can be no real or true description of an individual in isolation. A materialist may still say that some neurons in a particular organism's brain fire in the presence of electromagnetic radiation, but this still references the world external to that organism. As an example, we might say that all human beings are blind because we cannot detect infrared light as, say, snakes do. "Blind" as a concept is understood only when it is constrained and made very specific (detection of electromagnetic radiation of wavelength x, etc.).


It is our activity as a culture that creates tasks, which in turn define abilities and CREATE disability. It should be our job to make the tasks we create achievable by the broadest array of methods possible.



If one accepts the authors' argument, a quite different view of society must be taken. What sense can be made of mental "illness"? Schizophrenia? The insanity defense? Pedophilia? In the latter case, intimacy between adults and children can be found in other cultures where it is not stigmatized. Mental illnesses/disorders/disabilities then become things in need of better understanding rather than things to be "fixed". Further, if we do not constrain the way tasks are to be completed, individuals need to be free to complete them however they choose. Certain ways of doing things must be open not only to those currently considered disabled, but to everyone. In other words, no special treatment (e.g., even if a student tests negative for a learning disability, he/she should be free to have as much time as needed to complete a test). Ramifications for education? Saying that an individual should be free to complete a task or achieve a state by any means he/she wishes parallels the difference approach to disability. Why must an individual complete a task/achieve a certain state in the first place?


Mental illness
Name: Laura
Date: 2006-08-06 22:39:51
Link to this Comment: 20131

I recently read Thomas Szasz's original essay "The Myth of Mental Illness" and I think many of his points go well with the main idea in "Culture as Disability", so here are a few quotes/thoughts...

"The idea that chronic hostility, vengefulness, or divorce are indicative of mental illness would be illustrations of the use of ethical norms (that is, the desirability of love, kindness, and a stable marriage relationship)... The norm from which deviation is measured whenever one speaks of a mental illness is a psycho-social and ethical one."

So, the judgment of hostile/non-hostile, etc. is made by a culture, though I don't think he goes as far as McDermott and Varenne in saying that it is culture that makes the split between hostile/non-hostile, etc., and that (from a more philosophical standpoint) hostile/non-hostile may not be something inherent in an individual. But to that end, he does acknowledge that descriptions (and subsequent actions) are never purely objective. He uses the term "participant observer", which I think articulates the idea well:

"[A psychiatrist] is committed to some picture of what he considers reality-- and to what he thinks society considers reality-- and he observes and judges the patient's behavior in light of these considerations."

I'm hesitant to agree with the entire essay because it's unclear to me whether or not he is further denying a "materialistic account" of, for example, the phenomena which we might call schizophrenia:

"The notion of mental symptom is therefore inextricably tied to the social context in which it is made in much the same way as the notion of bodily symptom is tied to anatomical and genetic context."

That is, he certainly rejects a "medical model" of mental illness, but I'm not sure if he's rejecting a "biological model" as well. Irrespective of how a society judges a particular "mental symptom", I think saying there is a neurological correlate to that and every other "symptom"/mental/bodily state is not the same thing. For example, in an individual with schizophrenia, the overactivity/spontaneous firing of neurons in the temporal lobe (or what have you) that manifests as auditory hallucinations remains fairly objective, though the very calling of attention to this particular difference between a schizophrenic and a "normal" individual, and any subsequent judgment, is where the "participant observer" comes in. Though maybe this is still being a realist...?

And one last point of his I think relevant is that he distinguishes between an individual and society deciding that he/she deviates from the norm. That is, while society/culture should not create pressure for individuals to seek "treatment"/change, the methods and means (pharmacology, psychotherapy, and so on...) should not be denied to those wanting them.


culture as disability and ...
Name: Paul Grobstein
Date: 2006-08-07 11:30:08
Link to this Comment: 20132

Thanks, Laura, for a very rich/evocative summary of what I too thought was a particularly rich conversation. There are indeed some interesting parallels to explore between the culture as disability argument and
  1. biological evolution
  2. the localization/equipotentiality issue in brain research
  3. the realism/constructivism issue in philosophy of science
  4. the border between "mental health" and "insanity"
A few thoughts to keep the conversation flowing ...

"the world IS a set of tasks" ... I like and think very useful the distinction between "a set of tasks" and "ways of accomplishing those tasks". It's a particularly clear and, I think, valuable distinction in thinking about what we mean by disability, biological evolution, and the Lashley work. Because of it, one can (should?) not regard localization and equipotentiality as oppositions, one can/should acknowledge that evolution is in the business of exploring the variety of ways that tasks can be accomplished rather than finding the best one, and one can/should accept that a characterization of specific "deficits" is not a necessary (and perhaps not even a particularly useful) approach to human diversity.

As you note though, even in biological evolution there is some relevant fuzziness in the distinction. A given species is not in fact free to explore ALL possible "ways of accomplishing those tasks". The existence of other species, like the inanimate world, sets limits to what is a viable way of accomplishing the "tasks" for any given species. The limits are never such as to narrow the possibilities to one but they certainly can be great enough so one might want for some purposes to say that the "task" is not entirely independent of the circumstances. There is an important positive as well as negative side to this. Other species may in fact bring into existence the potential for NEW "ways of accomplishing those tasks" (ie create new evolutionary niches). And so the "task" itself becomes even more one that is less independent of circumstances.

And that, of course, is reminiscent of our course conversations about realism and constructivism. There is something "out there" (ie a set of constraints on the stories one might tell about "reality") but there is always more than one available story and hence no way to tell the story that is perspective-free (there is no "fact of the matter"). Moreover, the "task" (to make sense of things?) is itself an influencer of what one is trying to make sense of (and perhaps even whether that or something else is the "task"?). So maybe the task itself is somewhat fluid?

And all of that is of course reminiscent of (for me at least) the "bipartite brain", an organization in which what can change is not only "ways of accomplishing tasks" but definitions of the nature of the task itself (the "stories"; see Writing Descartes). A product of evolution is an architecture that can not only explore ways of accomplishing tasks but reconceive "task" itself? Freeing human life (at least) from any fixed conception of "task"?

And, with that freedom, throwing one into "abject relativism"? Not necessarily. See "Theory and Practice of Non-Normal Inquiry", "The Emergence and (continuing) Evolution of the Story of Story Telling", and "On Beyond Post-Modernism: Discriminating Stories". The argument, basically, is that not ALL possible task definitions "work", just as not all ways of achieving tasks work. And hence that we should move on to task definitions that haven't yet been shown not to work, in particular to a commitment to "generativity" in the future.

Does this help with "disability"? With "mental health/insanity"? As you say, "'Blind' as a concept is only understood when it is constrained and made very specific (detection of electromagnetic radiation of wavelength x)". And I agree that the arguments make it less appealing to use "medical model" approaches to either disability or insanity. I like the direction of your "while society/culture should not create pressure for individuals to seek 'treatment'/change, the methods and means ... should not be denied to those wanting them" but think we actually need a still broader "different view of society".

Not all individuals are the same, and it is not at all clear to me that we want to evaluate people in terms of a "norm". Some people are, under some circumstances, troubling to themselves and/or to people around them. How are we to deal with either or both, in lieu of concepts like "disability" and "insanity"? My suggestion is that we use, for the moment, the criterion of "generativity". The question ought not to be in what particular ways people are different from others so as to cause problems (for others or themselves) but rather in what ways they are different that open new "ways of accomplishing tasks" and new ways of conceiving "tasks". To put it differently, we start with a presumption that difference is good rather than bad and commit ourselves to helping everyone become better able to shape their own lives and to impact in positive (though not entirely predictable) ways on people around them.

Perhaps we, as a species, are ready to move beyond ideas of "fairness" and "rights" and "absolutes" and value/enjoy the rich diversity that is both our birthright and the key to our future? In so doing, we could in one step eliminate most of the "disablings" cultures are prone to, and create a new kind of culture in which the only "disabled" people are those who are not themselves able to appreciate the value of differences.


Response to Laura
Name: Annabella
Date: 2006-08-07 11:37:50
Link to this Comment: 20133

I found the last line in your first essay very interesting, and I think the answer can go on forever. But "completing a task/accomplishing something" seems to be what we do. Would you be able to stay in bed for days on end, never getting up? Most people can't. They have to get up. Not necessarily because they have to go to work, but just because they have to get up. They can't stay there.
Likewise, people do things. It's just our nature. Just try to not do anything for an extended period of time. It's impossible. Even if you didn't do anything, the not doing anything is what you did. It was the task you completed.
So as long as we are all here doing things, wouldn't it be nice if we were free to do them as we saw fit? Different ones of us will choose different ways of doing a certain thing, often according to our abilities and lack thereof.
When society dictates to us HOW something is to be done, some of us can do it that way and some of us cannot. Those who cannot are labeled disabled.

As for your second essay, I found the quote about how a psychiatrist works very notable. I agree that a psychiatrist does these things, but so do the rest of us. This is how we measure each other in terms of whom we befriend, who becomes our enemy, and so on. We all have a picture in our minds of what is normal and desirable, and measure all others by it.
I am not clear on the distinction between "medical model" of mental illness and "biological model."



Re: Roll Over Rev. Bayes
Name: Laura
Date: 2006-08-09 12:47:23
Link to this Comment: 20149

Annabella,
Annabella,
I enjoyed your paper "Roll Over Rev. Bayes". Even though I haven't yet read through all the sites/materials I've found on Bayesian statistics, the two examples you worked through have given me a good sense of how Bayesian stats is useful and how it differs from "classical" probability theory/statistics (though I know next to nothing about that either).

If I recall correctly the conversation we had with Professor Grobstein some weeks ago about Bayesian stats/prob, one way of contrasting Bayes with classical prob/stats is that the latter assumes a fixed reality. Bayesian stats/prob does so less, basing predictions about future events on (all?) previous experiences/observations/outcomes rather than on a fixed/unchanging conception of reality (?). So that's the context in which I read your paper and offer the questions/thoughts here... Because you say that Bayesian stats can be used to make predictions more "accurately", and you posit "truth", I assume that for you Bayesian statistics does make more use of the concept of "reality"? But, referring to one of your previous papers concerning the unknown/uncertain and "comfort level": would it still make sense to say that, as you collect more observations/experience (prior probability becomes replaced with revised probability, and so on), Bayesian statistics instead expresses the confidence/comfort level with which you can make a prediction/judgment, rather than becoming necessarily more "accurate"/closer to reality? To be more concrete, I might say that the "classical" probability that I have breast cancer is 1%, and that if I have a positive mammogram the certainty of having breast cancer remains the same. But, using Bayesian prob and the new observation that mammograms are not always accurate, I can more confidently predict my chances of really having breast cancer given what I know (the prior probability of breast cancer and information about mammograms). So, if one uses Bayesian prob to assign "comfort levels" to judgments/predictions, and if it were used more, would one really jump to fewer "false conclusions" or make fewer "mistakes", or just be more confident about conclusions whether or not we can say they are true/false?
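To make the breast-cancer example concrete, here is a small worked calculation. The numbers (1% prior, 80% mammogram sensitivity, 9.6% false-positive rate) are illustrative assumptions of my own, not figures from the paper; the point is only to show how Bayes' theorem revises the prior given a test result:

```python
# Worked Bayes'-theorem example for the mammogram case. The figures are
# illustrative assumptions, not data from any paper:
#   prior P(cancer) = 1%
#   P(positive | cancer) = 80%   (sensitivity)
#   P(positive | no cancer) = 9.6%   (false-positive rate)

def bayes_update(prior, sensitivity, false_positive_rate):
    """Return P(hypothesis | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

posterior = bayes_update(prior=0.01, sensitivity=0.8, false_positive_rate=0.096)
print(f"P(cancer | positive mammogram) = {posterior:.3f}")  # about 0.078
```

So with these (assumed) numbers, even after a positive test the revised probability is under 8%, far from certainty; what has changed is how confidently the 1% prior can be updated given what we know about the test.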


looking forward ...
Name: Paul Grobstein
Date: 2006-06-22 10:34:29
Link to this Comment: 19555

To seeing what we (and perhaps others looking in) make of/with our different interests/starting points/aspirations ....


Hello...
Name:
Date: 2006-10-31 00:26:14
Link to this Comment: 20817

Ok. After thinking about stuff for a while, I think I have something to put up here. I don’t think it’s very much, but I will just post, and see if anyone has anything to say.

First, though, I just want to say something to all of you. I guess I kind of feel like I owe you an explanation. What I have to say is personal, so if you prefer to skip it, just go ahead and scroll past, and the relevant stuff is below.










………………………………………………
This past summer has been an unusually difficult one for me, mostly because I have been dealing with a particularly nasty bout of depression. I have been clinically depressed off and on during much of my life, but for some reason it got much worse when I returned from being abroad fall semester last year. It started immediately when I got back. I was sleeping all the time, and I didn’t want to do anything. I wasn’t excited about things, and when I would try to get interested in something, it felt like thinking through mud.

I noticed that something was wrong as soon as I returned to the United States, but I didn’t start getting the help I needed until the beginning of the summer. I started taking antidepressants, but I was very reluctant to take them, and to admit that I needed to start talking to someone. When I finally did start going to therapy, we figured out that I was also taking too low a dosage of medication, so it was well into June before I started to see any progress at all.

Basically, starting when I got back, I got into a cycle of constantly worrying that I wasn’t doing as well as I should, and feeling totally incompetent, and becoming paralyzed by the fear that whatever I would come up with couldn’t possibly be good enough. Each time this happened, the pressure to do better on the next thing mounted, as if somehow coming up with something really good would make up for coming up with something not-so-great before.

When the summer started, I dealt with my fears and feelings of incompetence by isolating myself, something that was easy to do then but makes me want to kick myself now. I was often on campus, but working in a library, or I would work by myself somewhere outside of Bryn Mawr. The more I think about it, the more I realize that I probably could have made things better by just coming into the office more often. Perhaps by talking to you, I would have started finding joy in thinking again more quickly or effectively, or at the least maybe I could have been more of a help to someone else’s project. I guess that this is what I am trying to apologize for. I’m not trying to use this as an excuse, but I thought that you all deserved an explanation. So, I’m sorry. That’s what’s been going on with me, and I’m glad to feel comfortable (relatively) addressing it now.
…………………………………………………………………………….







Anyway! On to (hopefully) more interesting matters:

When I began my research this summer, I didn’t really know what, exactly, I was looking for. I had a vague idea about studying the connection between time and consciousness, working off of an idea from my freshman year. I will sum up that idea here, just to try and make things as clear as possible.

The idea came out of a conflict between a Physics lecture and a Language Working Group conversation, and drew partially on the c-sem I had taken during the previous semester dealing with stories and science and perception. In the Physics lecture, we discussed time as a line, and the idea of time/space being relative, but the idea with the most force was thinking of time in a spatial manner: that is just how we seem to operate, thinking of time as a sort of spatial dimension (think time travel/memory/ "long" time/ "short" time). The LWG discussion centered around, if I remember correctly, the examination of a sort of computer program, and the key part from this was the idea of having no past and no future, per se, but instead the existence of a single state, which one could call the present. Within this state are all possible inputs and all possible outputs. We could say, for example, that state X could have been produced by these inputs, 1, 2, 3, which are all outputs of these possible states, W, Y, Z, which could have been produced by these inputs, 4, 5, 6, 7, etc. In the same vein, X can produce the possible outputs 8, 9, 10, which can produce states A, B, C, which can produce outputs 11, 12, 13, 14, etc.

<< had a diagram here, but is not working in post>>
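In lieu of the diagram, the input/state/output setup can be sketched as a little lookup structure. The labels below are just the placeholders from the paragraph above (states W, X, Y, Z; inputs/outputs 1, 2, 3, ...), not anything from the actual LWG program:

```python
# A toy version of the input/state/output setup, using the placeholder
# labels from the text. "Past" and "future" are just walks over these
# links, with no single determined history.

possible_inputs = {"X": ["1", "2", "3"]}                  # state -> inputs that could have produced it
producing_states = {"1": ["W"], "2": ["Y"], "3": ["Z"]}   # input -> states it is an output of
possible_outputs = {"X": ["8", "9", "10"]}                # state -> outputs it can produce

def candidate_pasts(state):
    """All states that could have led to `state` one step back."""
    return [s for i in possible_inputs.get(state, [])
              for s in producing_states.get(i, [])]

print(candidate_pasts("X"))  # ['W', 'Y', 'Z']
```

Note that `candidate_pasts` returns several states, which is the point: the present state X does not determine a unique past, only a set of paths that could have produced it.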


My problem was that time could not both be linear AND be only the present state with no past and no future. My idea was that in the unconscious mind, time is much like the input/state/output setup from the LWG discussion, and that the conscious mind looks down, in a sense, on the unconscious, and perceives a linear process. This is reminiscent of an idea from the book "Flatland", in which a square travels from his two-dimensional plane to one-dimensional, three-dimensional, and four-dimensional worlds. He is at first very reluctant to accept the possibility of a space having anything other than 2 dimensions, until he actually visits the alternate worlds. If one is able to look at a plane from an added dimension, one sees an expanded version of what is in the plane itself. This way of looking at things is definitely not our usual, simple, linear approach. My sense is that memory and predictive capability both play a role in our being able to see a line where there is a dot. I think Paul wants to call this story-telling, and that's cool with me. This may be getting too speculative, but I think it is also very interesting that the part of the brain that we think may be responsible for conscious or self-reflective thought is the neo-cortex. If we consider the rest of the central nervous system to be the 'unconscious mind', of sorts, and view the neo-cortex as the addition of another dimension, then this would provide the setup needed for us to take a state and see it as a line with past/present/future.

Problems I’m having:

How do I actually go about trying to prove or support this?
A lot of the reading I’ve done has had to do with non-human primates, or memory, or both, or with well-known theories of time (like block theory), and then I started getting into possible linguistic/anthropological help. (see below: Aymara, Maori)


Also, we run into a problem similar to Zeno's paradox, in which it is suggested that movement is an illusion. The occurrence of this problem for time is very interesting, as the paradox is of a spatial nature. Zeno's paradox concerns walking a certain distance. One takes a step, but, as a part of this step, one must move a smaller distance (which goes into the larger distance of the step), and there is also a smaller distance that is part of the first smaller distance, and so on. The distances one must travel in the paradox get smaller and smaller, infinitely so, and in the end, one cannot move at all.
For time, the problem is identifying the present moment. How long is a moment? Is there a smallest unit, like an atom? If not, we could theoretically infinitely split the moment and get nowhere, just like with distance in Zeno’s paradox. When does one moment become the next? Using states to identify moments may be a useful tool, but then how do we identify a state? This may not be important quite yet, but I think it is something that will need to be answered eventually if this idea is going to be worthwhile at all.
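For the spatial version, at least, one standard dissolution is that the ever-smaller distances form a geometric series with a finite sum, so infinitely many sub-steps need not prevent arriving. A quick numerical illustration (my own, not from any of the readings):

```python
# Partial sums of the halved distances 1/2 + 1/4 + 1/8 + ...
# They approach (but never exceed) 1, so the infinite subdivision
# still covers a finite total distance.

def partial_sum(n_terms):
    """Sum of the first n_terms terms of 1/2 + 1/4 + 1/8 + ..."""
    return sum(0.5 ** k for k in range(1, n_terms + 1))

for n in (1, 4, 16, 64):
    print(n, partial_sum(n))  # approaches 1.0 as n grows
```

Whether the analogous move works for moments of time (an infinite subdivision summing to a finite duration) is exactly the question raised below.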

During a talk with Paul about this way of thinking about thinking about time, he mentioned that he liked that the model wasn't deterministic in either the past or the future, meaning that it recognizes that the present state could be achieved from many different 'past' paths. For some reason, though, this bothers me, and I really feel like the past is more concrete than the future, though I'm not sure how to represent that in this model, or whether I even should represent it rather than leave it open to interpretation. It seems that, even though our memories are just constructs, there tends to be a certain cohesiveness that doesn't hold for speculation about the future. Say that a group of people watch a block falling from a building. If we take their current states and look at the states leading up to them, would we see that they are all watching the block fall? If we see this, does it mean that this 'past' path is more valid than, I don't know, saying that person number two was a fluffy crocodile that was bungee-jumping off a spaceship when all of a sudden it turned into a person and saw a block fall with four others? There must be that setup step: the closer we get to the present state, the fewer options we have to get there...? The spectrum is narrowed as we approach the present state. I think it's probably also narrowed on the other side, with future possible states, and this is where I think the 'future' spectrum should be wider, at least immediately, than the 'past' spectrum. But I don't know if this is irrational.

Using language to identify how we think about time both consciously/unconsciously?:
An article on Aymara linguistic (cultural, too?) approach to time, with brief mention of Maori. Thoughts on Maori stated here are from my own experiences in New Zealand.

http://www.guardian.co.uk/life/feature/story/0,,1423455,00.html

Aymara speakers have switched the orientation of the timeline. Most western-language speakers consider time to be oriented with the past behind and the future ahead, but Aymara expressions and conversations with native Aymara speakers have led linguists to believe that Aymara speakers consider the future to be behind them and the past in front of them. The Maori cultural concept of time is also different from the frequent western conceptualization. This is perhaps less linguistic and more anthropological evidence, though the Aymara article suggests that there may be linguistic evidence as well. (This is working off of traditional Maori beliefs and customs, which have a place in current society, though many aspects or realizations are now quite different.) As part of the Maori belief system, time is considered a circular process. This is very much a part of their concept of whakapapa, which can be loosely translated as "family tree" but is much, much more complicated than our notion of a family tree. Everything in the universe is connected in whakapapa, and this lends itself to a heightened feeling of connection between the past, present, and future. Decisions made by those in charge were heavily affected by considerations for future generations. Those in the present considered themselves stewards of resources for future generations. Will try to add more in this vein soon.

The last thing I was working on was trying to come up with evidence for thought being separate from language.
I can think in pictures, or in emotions, or memories. I can imagine with no words. What about when you hear music in your head but can’t remember how the lyrics go? Or thinking about music that doesn’t have words? What about concentrating on the noise of your alarm? Or visualizing a lemon, with its shape, texture, smell. One doesn’t have to think ‘lemon’ to conjure that construct. ‘Lemon’ in English and ‘citron’ in French both represent the same thing. I think this gets into Saussure and the arbitrariness of the sign, but I think it is a valid place to go. Let’s imagine we had lemons before we had speech. Just because we didn’t have the word ‘lemon’, does that mean we couldn’t conceptualize a lemon? If so, then I guess that really sucks for hunter/gatherers who had to look for certain types of food and remember, hey, this is poisonous, or, hey, that has horns and charges when you try to chase it. What about dogs who get depressed when their owners leave? Dogs don’t have language, but it seems they can conceptualize that someone who is usually there is not there now, or at the very least that something is different. What about babies before they learn language? What about ‘trying to put our thoughts into words’? What about trying to explain a concept for which English (for example) doesn’t have a term?
Aside from my own intuitions, I recommend the following chapter from Donna Jo Napoli's book "Language Matters": "Does Language Equal Thought?" Her points include: analyses of several interactions between young children, and between children and adults, that do not involve either spoken or signed language; the case of Genie, a girl who was isolated during the normal language-acquisition period and robbed of human interaction/linguistic input but who, when she learned (temporarily and with great applied effort) some very rudimentary language skills, was able to describe things that had happened to her in the past, before she had the ability to linguistically recount anything; words in one language that cannot be translated without much explanation in another; and several other pieces of evidence. If you like, I can try to scan the chapter and make it available.

I think this is all I am going to put up now. – B.


Forum Archived
Name: Webmaster
Date: 2006-11-22 08:37:42
Link to this Comment: 21161

This forum has now been archived and is closed to new postings. If you are interested in continuing the conversation, please contact Serendip.

We'd like to hear from you.