Understanding language comprehension: evidence from neural patterns and voxel-wise responses

Members of the general public are welcome to attend our seminars. However, space is limited, so if you would like to attend, please ring Sandra Smith on 0115 823 2634 at least 24 hours prior to the seminar to reserve a place. If Sandra Smith is unavailable, contact Jan Kelly on 0115 823 2617 or reception on 0115 823 2600.

03 October 2016

Presenter(s): Dr Samuel Evans
Time: 13.00–14.00
Location: NHBRU, Meeting Room 1

Abstract:

Neuroimaging studies show that auditory information is processed within multiple processing streams in the human brain. These streams include a hierarchically organised ventral pathway that extracts meaning from auditory signals and a dorsal stream that integrates perception and production. To date, our understanding of the function of these streams has come predominantly from mass univariate general linear modelling. This approach has achieved a great deal in mapping the basic architecture of the speech perception system. However, recent advances in neuroimaging analysis that use patterns of activity rather than single-voxel responses allow an arguably richer description of neural activity, providing additional insights into brain function. In this talk, results from fMRI studies of comprehension will be presented, showing how analyses exploiting neural patterns can be used to confirm and extend our understanding of the neural systems supporting perception.
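
To make the methodological contrast concrete, the following is a minimal, hypothetical sketch (not code from the talk, and using simulated rather than real fMRI data) of the difference between a mass-univariate test on single voxels and a multivariate pattern analysis that pools a weak signal distributed across voxels:

```python
# Hypothetical illustration: mass-univariate voxel-wise testing vs.
# multivariate pattern analysis (MVPA) on simulated "fMRI" data.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50

# Simulated response patterns for two conditions: each voxel carries
# only a weak signal, but the signal is spread across many voxels.
signal = rng.normal(0.0, 0.3, n_voxels)
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1] += signal

# Mass-univariate approach: test each voxel separately (two-sample t-test).
t_vals, p_vals = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
print(f"voxels passing p < .05 (uncorrected): {(p_vals < 0.05).sum()} / {n_voxels}")

# Multivariate approach: decode condition from the whole pattern at once,
# with cross-validation to estimate out-of-sample accuracy.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

With a weak, distributed signal like this, few individual voxels survive the voxel-wise test, while the classifier decodes the condition well above chance from the joint pattern. This is the intuition behind the pattern-based analyses discussed in the talk, though the specific analyses presented may differ.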

Biography

Samuel Evans originally trained as a speech and language therapist. He received his PhD from the Institute of Cognitive Neuroscience, UCL, on the neural basis of speech perception. Since then, he has spent time working at the MRC Cognition and Brain Sciences Unit in Cambridge and the Institute of Cognitive Neuroscience at UCL. His work investigates the neural basis of language comprehension and production using fMRI. He combines univariate and multivariate methods to understand how these systems work and how they are modulated by intrinsic (e.g. language impairment) and extrinsic (e.g. the auditory environment) factors.