A major focus of our research has been on age-related changes in the orthographic (spelling), phonological (speech), and semantic (meaning) representations involved in reading and other linguistic processes. This research supports several general conclusions about the neural development of these representational systems. First, the orthographic, phonological, and semantic systems become more interactive, as functional and structural connectivity between them increases with development. Second, these systems become more specialized, as suggested by our findings of decreasing developmental overlap among the systems that process distinct types of information. Third, compensatory mechanisms appear to be used less over development, as shown by age-related decreases in the recruitment of brain regions that are not critical for performing a given task. In general, our research on reading and language is consistent with the interactive-specialization account of brain development.

An increasing focus of our research is how the brains of children with dyslexia differ from those of typically developing readers. We have shown that children with dyslexia exhibit less activation in several nodes of the reading network and altered connectivity between these regions, particularly in areas involved in orthographic and phonological processing. We have also demonstrated that children with dyslexia have subtle deficits in brain regions implicated in semantic processing. In more recent work, by comparing audio-visual with unimodal stimuli, we have shown that reading acquisition critically involves multisensory integration across multiple brain regions, including areas previously thought to be involved only in unimodal processing. Moreover, our work supports the hypothesis that a critical deficit in dyslexia lies in multisensory integration. Our future work will examine the neural basis of reading disabilities in bilinguals. We have shown in adults that higher skill in the second language is associated with greater assimilation and less accommodation to the native language, so we wish to investigate whether bilingual children with dyslexia show greater accommodation.

Another focus of our work is using neuroimaging to predict who will show subsequent gains in language or reading. In our initial study, we showed that, in school-age children, we can predict changes in reading skill 4 to 6 years later. Our results are consistent with models of reading development arguing for the early importance of phonological skills and the later importance of orthographic skills in becoming a proficient reader. We have additionally shown that functional connectivity between temporal and parietal regions is predictive of reading gains. Moreover, the brain regions predictive of reading gains are selective, in that different brain regions are predictive of gains in math. Our longitudinal work on language is examining whether early differences in connectivity between brain regions in preschoolers predict later differences in the specialization of temporal representations for processing semantics, phonology, and syntax. Our longitudinal work will also determine whether the process of interactive specialization is altered in children with language impairment. Moving forward, we are particularly interested in using neuroimaging to predict response to intervention.

Not only are we interested in how the process of reading is affected in children with reading disabilities and language impairments; we are also investigating how altered input, in the case of deafness, influences the cortical organization for reading. We have collaborated with colleagues examining structural and functional alterations of the temporal cortex in deaf adults. One of these studies suggested that deaf adults who are better readers tend to rely on their signed language network to a greater degree. The focus of our current work on deafness is to examine which mechanisms are associated with higher skill in deaf readers, and whether these are influenced by communication mode. We are testing the prediction that oral deaf readers may rely to a greater degree on speech-reading mechanisms, whereas signing deaf readers may rely more on signed language mechanisms.