Language’s Relationship to the Brain
As someone studying Japanese at Haverford College, I have always found language and its relationship with the brain interesting. I have wondered what gives humans the ability to comprehend language, and how that ability differs from any other animal's ability to interpret sound. I have also wondered how the ability to hear affects language comprehension. In other words, is that "input mechanism" really necessary for understanding and producing spoken language? My goal is to come to some conclusion about whether spoken language is learned, requiring an input mechanism in order to develop; innate, requiring no input mechanism to develop; or something in between.
In order to figure out what enables humans to create and interpret language, I examined studies that map brain activity while people interpret language and sound. In a study carried out by Saur and colleagues, subjects listened to a comprehensible sentence, a group of sounds that had the same rhythm as the sentence but made no sense, and a string of sounds that made no sense at all and lacked any pattern resembling language (3). Between the meaningful sentences and the nonsensical sounds with the same rhythm, the scans showed key differences in activity in the left hemisphere of the brain, specifically in the anterior and posterior MTG, the fusiform gyrus, and the ventrolateral prefrontal cortex (3). When contrasting the patterned sounds with the completely nonsensical ones, the regions that differed in activity were the anterior and posterior STG and the prefrontal and premotor cortices (3). It is less important to discuss how these regions work specifically than to note that different parts of the brain are active when someone hears different sound patterns. Different populations of neurons activate depending on the type of sound heard (3). These connect to different parts of the brain, which light up in the scans (3). Therefore, the brain and nervous system are constructed so that they can recognize and distinguish between different sounds and patterns. To me, this means that language is programmed into the human brain. It is the fact that the brain has different patterns of connections, networks, and responses for different sounds that allows someone to give meaning to a group of sounds, because those pathways lead to parts of the brain that tell the person whether or not the sounds make sense.
I recognize that, to a degree, language is taught, because there are many different languages and one person can only make sense of a handful. Nevertheless, there remain pathways and parts of the brain that are dedicated to language comprehension. This leads me to believe that language is preprogrammed within the human brain. Language can be grasped by anyone because everyone is, at the very least, born with the capacity to speak and to understand it. In other words, rather than being taught how to speak a language, our brains come built knowing how. Language's full complexities may have to be explained, but the ability to comprehend and make sense of specific sounds is part of the human condition. This idea is further supported by the study of deaf children and their ability to learn language.
Another piece of the question I would like to look at is whether hearing is actually necessary for understanding the auditory part of language. It seems, in fact, that children with hearing loss can gain the ability to comprehend and produce spoken language almost as well as children with typical hearing (2). In one study, children with hearing disabilities or prelingual deafness underwent speech and language tests administered by professional speech-language pathologists (2). The children were assessed on how well they could recognize words, phonemes, consonants, and sentences (2). Before these assessments, the children went through six months of auditory-verbal therapy. The tests given at the end of the therapy showed that their rate of progress matched that of children with typical hearing, even though their mean scores were slightly lower (2). Overall, the research indicates that children with significant hearing loss can nevertheless acquire speech skills comparable to those of children with typical hearing (2). To me, this means that the ear is not a necessary mechanism for either understanding or using spoken language.
In another study related to deafness, children underwent a rigorous training program in which specific phonemes were taught using interactive methods (1). Children were made to connect actions with sounds, such as the sound "mmmm" with eating ice cream (1). Connecting a speaker's facial expression to his or her actions helped the children comprehend the sounds and eventually reproduce them themselves. Though these children started with a disadvantage, the study showed that all of them, even the hearing-impaired, managed to produce the phonemes successfully (1). Each child's ability to learn spoken language was not heavily affected by his or her ability to hear (1). This leads me to believe that lacking the mechanism for hearing language does not mean losing the behavior for it. These children were born with the capacity for language, even if they cannot hear it. Therefore, through proper instruction, they can learn to speak because they already have that behavior. This study shows that spoken language does have some learning component: some people need instruction in order to distinguish between complex sounds. However, the fact that even the deaf can speak a language, and be understood because they are speaking it correctly, without ever learning what it actually sounds like, means that everyone has a preset capacity for spoken language. It is set in the brain from the very start. The ability to speak does not come from learning sounds; it comes from the brain. If language were not already constructed within the brain, the deaf could never speak it.
If language is firmly set in the brain, then it is reasonable to think that animals, since they have brains similar to ours, should have some capacity for language. I would argue that they do. Chimpanzees, for example, are able to understand and communicate in sign language, which is a complex language in its own right (4). Vervet monkeys' calls are similar to language in that specific sounds identify different predators (4). Baboons recognize the order in which sounds are made and assign them different meanings (4). Campbell's monkeys can even form compounds from sounds, changing the meaning of a call from a mating signal into a warning about a falling tree (4). These abilities are clearly not as complex as humans' use of language, but they are still a form of communication, and thus, I believe, a form of language. Animals from insects to frogs produce sounds that hold some meaning. I believe that humans simply have a more complex form of this because our brains have more connections and pathways, allowing us to interpret a greater number of sounds. Since language, in one form or another, seems to be found throughout the animal kingdom, I think it is something hardwired into the brain.
I realize that there are a few cases of so-called 'feral children' who, for one reason or another, were abandoned and were unable to speak later in life. I still believe that these children had, at some point, the capacity for language. Since deaf or near-deaf children are able to speak and understand language, even though they had no way of perceiving language as speech, feral children should also have been able to if given the opportunity. Since feral children are typically abused, I believe it is fair to say that they suffer psychological problems that go well beyond the ability to learn language. With this in mind, I am fairly convinced that the ability both to comprehend and to speak language has always been in our brains. We are all born with the ability to communicate; there are specific parts of the brain that allow us both to speak and to understand language. Spoken language does require some teaching, because it is extremely complex, but it can still be reproduced without ever having been heard. If deaf people are able to speak and be understood by others, then language must be a behavior firmly set within the human brain. Though there are pieces of language that must be taught, the ability to understand, comprehend, and produce language is innate. I think we are born knowing how to speak. The facts that every human culture has developed a comprehensible language independently of the others, and that the deaf can speak such languages without ever hearing them, tell me that language is less something that comes from teachings in the outside world than something that comes from within.
1) Bergeron, J. P., Lederberg, A. R., Easterbrooks, S. R., & Miller, E. M. (2009). Building the Alphabetic Principle in Young Children Who Are Deaf or Hard of Hearing. The Volta Review, 109(2/3), 87-120.
2) Dornan, D., Hickson, L., Murdoch, B., & Houston, T. (2009). Longitudinal Study of Speech Perception, Speech, and Language for Children with Hearing Loss in an Auditory-Verbal Therapy Program. The Volta Review, 109(2/3), 61-86.
3) Saur, D., Schelter, B., Schnell, S., & Kratochvil, D. (2009). Combining functional and anatomical connectivity reveals brain networks for auditory language comprehension. NeuroImage, 40, 3187-3197.
4) Wade, N. (2010, January 12). Deciphering the Chatter of Monkeys and Chimps. The New York Times, 1-3.