Show simple item record

dc.contributor.author: Muise-Hennessey, Alexandria
dc.date.accessioned: 2015-08-19T16:52:44Z
dc.date.available: 2015-08-19T16:52:44Z
dc.date.issued: 2015
dc.identifier.uri: http://hdl.handle.net/10222/60601
dc.description.abstract: The present study used electroencephalography (EEG) to investigate whether there is a relationship between statistical learning and the ability to use top-down processing to predict incoming speech. Statistical learning ability was measured with an artificial grammar learning (AGL) task, in which assimilation of the transitional probabilities of the stimuli was indexed by learning scores and by the P600, an event-related potential (ERP) component that responds to syntactic violations. Top-down processing was indexed by the N400, an ERP component that responds to semantic violations, in a speech perception task with two conditions: with and without noise. It was hypothesized that, without noise, an N400 would be elicited when the final word of a sentence was semantically incorrect, and that noise would attenuate this effect. Without noise, N400 and P600 amplitudes were expected to correlate, providing evidence of a relationship between these neurocognitive processes. In the presence of noise, people who were better at the statistical learning task were expected to show a reduced N400 mismatch effect, as they would rely on top-down processing to fill in the missing information; this reduction was not expected in people who were worse at the AGL task. Based on a median split of the AGL learning scores, participants were divided into two groups: learners and non-learners. The AGL task did not elicit any significant effects in non-learners. Learners showed an N400-like effect at centro-parietal scalp sites and a frontal positivity. An N400 in response to the speech perception task was found in both the quiet and noise conditions. Furthermore, there was a relationship between statistical learning and speech perception: non-learners showed a positive correlation between the N400 and the AGL grammaticality effect regardless of listening condition, whereas learners showed a negative correlation in the absence of noise that reversed in the presence of noise, coinciding with a reduction in their N400 amplitude. This reduction in N400 amplitude in noise suggests that learners may have strong expectations about what the final word should be; when hearing is impaired, they may perceive the final word as a match rather than a mismatch. The results suggest that people who are more sensitive to the underlying statistical frequencies of stimuli may rely more on top-down processing to fill in missing information in noisy environments. [en_US]
dc.language.iso: en [en_US]
dc.subject: electroencephalography [en_US]
dc.subject: statistical learning [en_US]
dc.subject: speech perception [en_US]
dc.subject: speech perception in noise [en_US]
dc.subject: non-invasive neuroimaging [en_US]
dc.subject: language [en_US]
dc.subject: cognition [en_US]
dc.title: Can you hear me now? The relationship between statistical learning and speech perception under degraded listening conditions [en_US]
dc.type: Thesis [en_US]
dc.date.defence: 2015-08-12
dc.contributor.department: Department of Psychology and Neuroscience [en_US]
dc.contributor.degree: Master of Science [en_US]
dc.contributor.external-examiner: n/a [en_US]
dc.contributor.graduate-coordinator: Dr. Gail Eskes [en_US]
dc.contributor.thesis-reader: Dr. Dennis Phillips [en_US]
dc.contributor.thesis-reader: Dr. Steven Aiken [en_US]
dc.contributor.thesis-supervisor: Dr. Aaron Newman [en_US]
dc.contributor.ethics-approval: Received [en_US]
dc.contributor.manuscripts: Not Applicable [en_US]
dc.contributor.copyright-release: Not Applicable [en_US]