Hierarchical Language Structure in the Brain: Two Challenges

Colin Phillips
Department of Linguistics
University of Maryland

A basic design property of human language is its use of a relatively small number of elements to create substantial expressive power. By combining a small number of categories in many different ways, vast numbers of different expressions can be created. This property is repeated at a number of different levels of language. Therefore, in order to make progress towards the 'unification problem' for human language, it is important to understand how the mind/brain supports the grouping of linguistic elements to form larger structures. Our work has investigated this question at both the phonological and the syntactic level of language, using a variety of approaches, both high-tech and very low-tech. The objective of this talk is to show that there are at least two very different notions of hierarchical linguistic structure, each of which poses rather different challenges for unification efforts. The first type of hierarchical structure is clearly finite, and can be stored in long-term memory in the brain. The second type of hierarchical structure is infinite, and therefore requires dynamic mechanisms that can create novel structured representations over the course of a few hundred milliseconds.

In a series of studies using EEG and MEG recordings in an adapted mismatch paradigm, we have investigated the grouping of sounds into phonological categories, and the grouping of phonological categories into feature-based natural classes. The results of these studies indicate that discrete phonological category representations are available to human auditory cortex, but they leave open the question of how these representations are instantiated.

Sentence structures pose a problem of a rather different nature. Due to the iterative and recursive properties of natural language syntax, speakers are able to draw on a finite number of words to create an infinite number of sentences - this is the 'discrete infinity' property of language - and any sentence may be understood within a very brief period of time. Therefore, the brain must provide mechanisms that support representations which are highly structured, yet can be created within a few hundred milliseconds. In order to bridge the gap between our understanding of sentence structure at the linguistic level and at the brain level, a number of steps are required. I will review some work by our group toward this goal. First, we have begun to develop a dynamic model of human syntactic knowledge, which can be deployed in real-time syntactic processing. Second, fragments of this approach have been implemented in a working computational model. Third, we have run a number of experimental tests of the model, using reading-time paradigms. Finally, we have begun to explore the representation of sentence structures using ERP brain recordings. In this area, top-down efforts toward unification are more promising.
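The 'discrete infinity' point can be made concrete with a toy recursive grammar. The sketch below (an illustration only, not the computational model described in the talk; the grammar and function names are invented for this example) shows how a finite set of rules and words licenses sentences of unbounded length, because the sentence category S can be embedded inside another S:

```python
import random

# A toy context-free grammar: a finite lexicon and rule set, yet the
# recursive rule S -> NP "thinks" "that" S licenses unboundedly many
# sentences. (Hypothetical example, not the model from the talk.)
GRAMMAR = {
    "S": [["NP", "VP"], ["NP", "thinks", "that", "S"]],
    "NP": [["John"], ["Mary"]],
    "VP": [["sleeps"], ["laughs"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol top-down; cap recursion depth so sampling halts."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        # At the depth bound, only allow non-recursive expansions.
        rules = [r for r in rules if "S" not in r]
    words = []
    for sym in random.choice(rules):
        words.extend(generate(sym, depth + 1, max_depth))
    return words

print(" ".join(generate()))  # e.g. a sentence like "Mary thinks that John sleeps"
```

Raising `max_depth` yields ever longer sentences from the same finite grammar, which is exactly why sentence structures cannot all be stored in long-term memory and must instead be built dynamically.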