by Bayard, Clémence; Colin, Cécile; Tilmant, Anne-Sophie; Leybaert, Jacqueline
Reference: Psycholinguistics in Flanders (10th Edition: 25-26 May 2011: Antwerp)
Publication: Unpublished, 2011-05-26
Conference presentation
Abstract: Speech perception in hearing subjects has been shown to involve audio-visual integration (McGurk and MacDonald, 1976). French Cued Speech (CS), or Langue française Parlée Complétée (LPC), is a system of manual aids developed to help deaf people understand speech clearly and completely through vision, thereby compensating for the ambiguity of lipreading (Cornett, 1967). Since this system is also multi-signal (sound, lip movements and LPC gestures), we wondered whether its perception involves integrative processing and how expertise affects it. We conducted an eye-tracking study to answer this question. Our paradigm consisted of three conditions: (1) a multi-signal condition (CS), consisting of a video of a speaker simultaneously speaking and cueing words or pseudowords, without sound; (2) a meaningless multi-signal condition (ML), consisting of a video of the same speaker producing words or pseudowords accompanied by meaningless gestures; and (3) a lipreading-alone condition (LR), consisting of a video of the same speaker uttering words or pseudowords with neither sound nor cues. Participants were presented with three options (the correct answer, a labial distractor and a manual distractor) and instructed to select the correct answer. Distractors were words (or pseudowords) that shared the same labial image or manual cue as the word (or pseudoword) uttered. For example, the stimulus "mavé" (coded by hand shapes number 5 and 2) was presented with the correct answer "mavé", the labial distractor "pafé" and the manual distractor "tazé" (also coded by hand shapes number 5 and 2). Behavioral and eye-tracking data (regions of interest: lips and hand) were collected from hearing participants who were either expert in CS perception (N = 12) or completely naïve to CS (N = 19). The eye-tracking and behavioral data showed that CS experts focused on the lip region in the LR condition and on both regions in the CS and ML conditions, although in the CS condition they focused more on the hand region. Non-experts paid more attention to the lip region in all conditions. This suggests that only CS experts integrate hand and lip information in French Cued Speech perception. These promising results should be extended to deaf CS-expert participants. It would also be interesting to compare beginner and experienced CS experts in order to disentangle the effects of expertise and hearing status.