New Paper: ‘Listeners and Readers Generalize Their Experience With Word Meanings Across Modalities’

Becky Gilbert, Jenni Rodd, and collaborators have a new paper out (as an online-first article) in the Journal of Experimental Psychology: Learning, Memory, and Cognition. The details of the paper can be found below:

Title: Listeners and Readers Generalize Their Experience With Word Meanings Across Modalities.

Authors: Rebecca A. Gilbert, Matthew H. Davis, Gareth M. Gaskell, and Jennifer M. Rodd

Abstract:

Research has shown that adults’ lexical-semantic representations are surprisingly malleable. For instance, the interpretation of ambiguous words (e.g., bark) is influenced by experience such that recently encountered meanings become more readily available (Rodd et al., 2016, 2013). However, the mechanism underlying this word-meaning priming effect remains unclear, and competing accounts make different predictions about the extent to which information about word meanings that is gained within one modality (e.g., speech) is transferred to the other modality (e.g., reading) to aid comprehension. In two Web-based experiments, ambiguous target words were primed with either written or spoken sentences that biased their interpretation toward a subordinate meaning, or were unprimed. About 20 min after the prime exposure, interpretation of these target words was tested by presenting them in either written or spoken form, using word association (Experiment 1, N = 78) and speeded semantic relatedness decisions (Experiment 2, N = 181). Both experiments replicated the auditory unimodal priming effect shown previously (Rodd et al., 2016, 2013) and revealed significant cross-modal priming: primed meanings were retrieved more frequently and swiftly across all primed conditions compared with the unprimed baseline. Furthermore, there were no reliable differences in priming levels between unimodal and cross-modal prime-test conditions. These results indicate that recent experience with ambiguous word meanings can bias the reader’s or listener’s later interpretation of these words in a modality-general way. We identify possible loci of this effect within the context of models of long-term priming and ambiguity resolution.

New Paper in Journal of Experimental Psychology: Learning, Memory, and Cognition

Hannah Betts and other members of the Word Lab have a new paper in the Journal of Experimental Psychology: Learning, Memory, and Cognition, the details of which can be found below:

Title: Retuning of Lexical-Semantic Representations: Repetition and Spacing Effects in Word-Meaning Priming.

Authors: Hannah N. Betts, Rebecca A. Gilbert, Zhenguang G. Cai, Zainab B. Okedara, & Jennifer M. Rodd

Abstract:

Current models of word-meaning access typically assume that lexical-semantic representations of ambiguous words (e.g., ‘bark of the dog/tree’) reach a relatively stable state in adulthood, with only the relative frequencies of meanings and immediate sentence context determining meaning preference. However, recent experience also affects interpretation: recently encountered word-meanings become more readily available (Rodd et al., 2016, 2013). Here, 3 experiments investigated how multiple encounters with word-meanings influence the subsequent interpretation of these ambiguous words. Participants heard ambiguous words contextually-disambiguated towards a particular meaning and, after a 20- to 30-min delay, interpretations of the words were tested in isolation. We replicate the finding that 1 encounter with an ambiguous word biased the later interpretation of this word towards the primed meaning for both subordinate (Experiments 1, 2, 3) and dominant meanings (Experiment 1). In addition, for the first time, we show cumulative effects of multiple repetitions of both the same and different meanings. The effect of a single subordinate exposure persisted after a subsequent encounter with the dominant meaning, compared to a dominant exposure alone (Experiment 1). Furthermore, 3 subordinate word-meaning repetitions provided an additional boost to priming compared to 1, although only when their presentation was spaced (Experiments 2, 3); massed repetitions provided no such boost (Experiments 1, 3). These findings indicate that comprehension is guided by the collective effect of multiple recently activated meanings and that the spacing of these activations is key to producing lasting updates to the lexical-semantic network.

New Paper in Press at Cortex

Glyn Hallam and colleagues have a new paper in press at Cortex, on which Jenni Rodd is a co-author. The details of the article can be found below:

Title:

Task-based and resting-state fMRI reveal compensatory network changes following damage to left inferior frontal gyrus

Authors:

Glyn P. Hallam, Hannah E. Thompson, Mark Hymers, Rebecca E. Millman, Jennifer M. Rodd, Matthew A. Lambon Ralph, Jonathan Smallwood, and Elizabeth Jefferies

Abstract:

Damage to left inferior prefrontal cortex in stroke aphasia is associated with semantic deficits reflecting poor control over conceptual retrieval, as opposed to loss of knowledge. However, little is known about how functional recruitment within the semantic network changes in patients with executive-semantic deficits. The current study acquired fMRI data from 14 patients with semantic aphasia, who had difficulty with flexible semantic retrieval following left prefrontal damage, and 16 healthy age-matched controls, allowing us to examine activation and connectivity in the semantic network. We examined neural activity while participants listened to spoken sentences that varied in their levels of lexical ambiguity and during rest. We found group differences in two regions thought to be good candidates for functional compensation: ventral anterior temporal lobe (vATL), which is strongly implicated in comprehension, and posterior middle temporal gyrus (pMTG), which is hypothesized to work together with left inferior prefrontal cortex to support controlled aspects of semantic retrieval. The patients recruited both of these sites more than controls in response to meaningful sentences. Subsequent analysis identified that, in control participants, the recruitment of pMTG to ambiguous sentences was inversely related to functional coupling between pMTG and anterior superior temporal gyrus (aSTG) at rest, while the patients showed the opposite pattern. Moreover, stronger connectivity between pMTG and aSTG in patients was associated with better performance on a test of verbal semantic association, suggesting that this temporal lobe connection supports comprehension in the face of damage to left inferior prefrontal cortex. 
These results characterize network changes in patients with executive-semantic deficits and converge with studies of healthy participants in providing evidence for a distributed system underpinning semantic control that includes pMTG in addition to left inferior prefrontal cortex.

New Paper in Cognitive Psychology – Accent Modulates Access to Word Meaning

Zhenguang (Garry) Cai and other members of the Word Lab have a new paper in the journal Cognitive Psychology, the details of which can be found below:

Title: Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition

Authors: Zhenguang G. Cai, Rebecca A. Gilbert, Matthew H. Davis, M. Gareth Gaskell, Lauren Farrar, Sarah Adler, and Jennifer M. Rodd.

Abstract:

Speech carries accent information relevant to determining the speaker’s linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of “bonnet”) in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker’s dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

New Word Lab Paper in Acta Psychologica

Eva Poort and Jenni Rodd have a new paper published in Acta Psychologica, the details of which can be found below:

Title: “The cognate facilitation effect in bilingual lexical decision is influenced by stimulus list composition”

Authors: Eva D. Poort and Jennifer M. Rodd

Abstract:

Cognates share their form and meaning across languages: “winter” in English means the same as “winter” in Dutch. Research has shown that bilinguals process cognates more quickly than words that exist in one language only (e.g. “ant” in English). This finding is taken as strong evidence for the claim that bilinguals have one integrated lexicon and that lexical access is language non-selective. Two English lexical decision experiments with Dutch–English bilinguals investigated whether the cognate facilitation effect is influenced by stimulus list composition. In Experiment 1, the ‘standard’ version, which included only cognates, English control words and regular non-words, showed significant cognate facilitation (31 ms). In contrast, the ‘mixed’ version, which also included interlingual homographs, pseudohomophones (instead of regular non-words) and Dutch-only words, showed a significantly different profile: a non-significant disadvantage for the cognates (8 ms). Experiment 2 examined the specific impact of these three additional stimuli types and found that only the inclusion of Dutch words significantly reduced the cognate facilitation effect. Additional exploratory analyses revealed that, when the preceding trial was a Dutch word, cognates were recognised up to 50 ms more slowly than English controls. We suggest that when participants must respond ‘no’ to non-target language words, competition arises between the ‘yes’- and ‘no’-responses associated with the two interpretations of a cognate, which (partially) cancels out the facilitation that is a result of the cognate’s shared form and meaning. We conclude that the cognate facilitation effect is a real effect that originates in the lexicon, but that cognates can be subject to competition effects outside the lexicon.

New Preprint on OSF: ‘Retuning of lexical-semantic representations: Repetition and spacing effects in word-meaning priming’

Hannah Betts, Jenni Rodd, and other Word Lab members have a new pre-print published on the Open Science Framework, the details of which can be found below:

Title: “Retuning of lexical-semantic representations: Repetition and spacing effects in word-meaning priming” 

Authors: Hannah N. Betts, Becky A. Gilbert, Zhenguang Garry Cai, Zainab B. Okedara, and Jennifer M. Rodd

Abstract:

Current models of word-meaning access typically assume that lexical-semantic representations of ambiguous words (e.g. ‘bark of the dog/tree’) reach a relatively stable state in adulthood, with only the relative frequencies of the meanings in the language and immediate sentence context determining meaning preference. However, recent experience also affects interpretation: recently-encountered word-meanings become more readily available (Rodd et al., 2016; 2013). Here, three experiments investigated how multiple encounters with word-meanings influence the subsequent interpretation of these words. Participants heard ambiguous words contextually-disambiguated towards a particular meaning and, after a 20-30 minute delay, interpretations of the words were tested in isolation. We replicate the finding that one encounter with an ambiguous word biased later interpretation of this word towards the primed meaning for both subordinate (Experiments 1, 2, 3) and dominant meanings (Experiment 1). In addition, for the first time, we show cumulative effects of multiple repetitions of both the same and different meanings. The effect of a single subordinate exposure persisted after a subsequent encounter with the dominant meaning, compared to a dominant exposure alone (Experiment 1). Furthermore, three subordinate word-meaning repetitions provided an additional boost to priming compared to one, although only when their presentation was spaced (Experiments 2, 3); massed repetitions provided no such boost (Experiments 1, 3). These findings indicate that comprehension is guided by the collective effect of multiple recently activated meanings and that the spacing of these activations is key to producing lasting updates to the lexical-semantic network.

New Paper in Language, Cognition and Neuroscience

Jenni Rodd and Matt Davis have just published a new paper in the journal Language, Cognition and Neuroscience, the details of which can be found below:

Title: “How to study spoken language understanding: a survey of neuroscientific methods”

Authors: Jennifer M. Rodd and Matthew H. Davis

Abstract:

The past 20 years have seen a methodological revolution in spoken language research. A diverse range of neuroscientific techniques are now available that allow researchers to observe the brain’s responses to different types of speech stimuli in both healthy and impaired listeners, and also to observe how individuals’ abilities to process speech change as a consequence of disrupting processing in specific brain regions. This special issue provides a tutorial review of the most important of these methods to guide researchers to make informed choices about which methods are best suited to addressing specific questions concerning the neuro-computational foundations of spoken language understanding. This introductory review provides (i) an historical overview of the experimental study of spoken language understanding, (ii) a summary of the key methods currently being used by cognitive neuroscientists in this field, and (iii) thoughts on the likely future developments of these methods.

New Pre-Print Published on Open Science Framework

Eva Poort and Jenni Rodd have a new pre-print published on the Open Science Framework, the details of which can be found below:

Title: “The cognate facilitation effect in bilingual lexical decision is influenced by stimulus list composition”

Authors: Eva D. Poort and Jennifer M. Rodd

Abstract:

Cognates share their form and meaning across languages: “winter” in English means the same as “winter” in Dutch. Research has shown that bilinguals process cognates more quickly than words that exist in one language only (e.g. “ant” in English). This finding is taken as strong evidence for the claim that bilinguals have one integrated lexicon and that lexical access is language non-selective. Two English lexical decision experiments with Dutch–English bilinguals investigated whether the cognate facilitation effect is influenced by stimulus list composition. In Experiment 1, the ‘classic’ version, which included only cognates, English control words and regular non-words, showed significant cognate facilitation (31 ms). In contrast, the ‘alternative’ version, which also included interlingual homographs, pseudohomophones (instead of regular non-words) and Dutch-only words, showed a significantly different profile: a non-significant disadvantage for the cognates (8 ms). Experiment 2 examined the specific impact of these three additional stimuli types and found that only the inclusion of Dutch words significantly reduced the cognate facilitation effect. Additional exploratory analyses revealed that, when the preceding trial was a Dutch word, cognates were recognised up to 50 ms more slowly than English controls. We suggest that when participants must respond ‘no’ to non-target language words, competition arises between the ‘yes’- and ‘no’-responses associated with the two interpretations of a cognate, which (partially) cancels out the facilitation that is a result of the cognate’s shared form and meaning. We conclude that the cognate facilitation effect is a real effect that originates in the lexicon, but that cognates can be subject to competition effects outside the lexicon.

New paper out: Listening to Radio 4 or going rowing changes access to word meanings

Rowing at the 1988 Summer Olympics. Source: Wikimedia Commons.

Several members of the Rodd Lab have recently published an open access article entitled “The impact of recent and long-term experience on access to word meanings: Evidence from large-scale internet-based experiments” in the Journal of Memory and Language. In a set of large (N = 2013) web-based experiments, we show how hearing ambiguous words on the radio or while attending a rowing club can change how you later process these words.

Abstract:

Many word forms map onto multiple meanings (e.g., “ace”). The current experiments explore the extent to which adults reshape the lexical–semantic representations of such words on the basis of experience, to increase the availability of more recently accessed meanings. A naturalistic web-based experiment in which primes were presented within a radio programme (Experiment 1; N = 1800) and a lab-based experiment (Experiment 2) show that when listeners have encountered one or two disambiguated instances of an ambiguous word, they then retrieve this primed meaning more often (compared with an unprimed control condition). This word-meaning priming lasts up to 40 min after exposure, but decays very rapidly during this interval. Experiments 3 and 4 explore longer-term word-meaning priming by measuring the impact of more extended, naturalistic encounters with ambiguous words: recreational rowers (N = 213) retrieved rowing-related meanings for words (e.g., “feather”) more often if they had rowed that day, despite a median delay of 8 hours. The rate of rowing-related interpretations also increased with additional years’ rowing experience. Taken together these experiments show that individuals’ overall meaning preferences reflect experience across a wide range of timescales from minutes to years. In addition, priming was not reduced by a change in speaker identity (Experiment 1), suggesting that the phenomenon occurs at a relatively abstract lexical–semantic level. The impact of experience was reduced for older adults (Experiments 1, 3, 4) suggesting that the lexical–semantic representations of younger listeners may be more malleable to current linguistic experience.

Authors:

Jennifer M. Rodd, Zhenguang G. Cai, Hannah N. Betts, Betsy Hanby, Catherine Hutchinson, Aviva Adler

Keywords:

Semantic ambiguity; Lexical ambiguity; Perceptual learning; Priming; Comprehension; Web-based experiment

Review paper on semantic ambiguity resolution published in ‘Language and Linguistics Compass’

Sylvia Vitello and Jenni Rodd have written a review paper entitled “Resolving Semantic Ambiguities in Sentences: Cognitive Processes and Brain Mechanisms”. This paper was recently published in ‘Language and Linguistics Compass’.

Abstract:

fMRI studies of how the brain processes sentences containing semantically ambiguous words have consistently implicated (i) the left inferior frontal gyrus (LIFG) and (ii) posterior regions of the left temporal lobe in processing high-ambiguity sentences. Despite the consistency of these findings, there is little consensus about the precise functional contributions of these regions. This article reviews recent findings on this topic and relates them to (i) psycholinguistic theories about the underlying cognitive processes and (ii) general neuro-cognitive accounts of the relevant brain regions. We suggest that the LIFG plays a general role in the cognitive control processes that are necessary to select contextually relevant meanings and to reinterpret sentences that were initially misunderstood, but it is currently unclear whether these control processes should best be characterised in terms of specific processes, such as conflict resolution and controlled retrieval, which are only required for high-ambiguity sentences (and not for low-ambiguity sentences), or whether its function is better characterised in terms of a more general set of ‘unification’ processes that are essential for comprehending all sentences. In contrast to the relatively rapid progress that has been made in understanding the function of the LIFG, we suggest that the contribution of the posterior temporal lobe is less well understood, and future work is needed to clarify its role in sentence comprehension.