Posted on 2022-03-22, 19:11. Authored by Minyue Zhang, Yu Chen, Yi Lin, Hongwei Ding, Yang Zhang
Purpose: Numerous studies have identified deficits in unichannel emotion perception and multisensory integration in individuals with autism spectrum disorder (ASD). However, only limited research is available on multichannel emotion perception in ASD. The purpose of this review was to seek conceptual clarification, identify knowledge gaps, and suggest directions for future research.
Method: We conducted a scoping review of the literature published between 1989 and 2021, following the 2005 framework of Arksey and O’Malley. Data relating to study characteristics, task characteristics, participant information, and key findings on multichannel processing of emotion in ASD were extracted for the review.
Results: Discrepancies were identified regarding multichannel emotion perception deficits, which are related to participant age, developmental level, and task demand. Findings are largely consistent regarding the facilitation and compensation afforded by congruent multichannel emotional cues and the interference and disruption caused by incongruent signals. Unlike controls, individuals with ASD demonstrate an overreliance on semantics rather than prosody when decoding multichannel emotion.
Conclusions: The existing literature on multichannel emotion perception in ASD is limited, dispersed, and dissociated, focusing on a variety of topics with a wide range of methodologies. Further research is necessary to quantitatively examine the impact of methodological choices on performance outcomes. An integrated framework of emotion, language, and cognition is needed to examine the mutual influences between emotion and language as well as crosslinguistic and cross-cultural differences.
Supplemental Material S1. Summary of the included studies in the scoping review.
Zhang, M., Chen, Y., Lin, Y., Ding, H., & Zhang, Y. (2022). Multichannel perception of emotion in speech, voice, facial expression, and gesture in individuals with autism: A scoping review. Journal of Speech, Language, and Hearing Research. Advance online publication. https://doi.org/10.1044/2022_JSLHR-21-00438
Funding
H. Ding and Y. Zhang were supported by the Major Program of National Social Science Foundation of China (No. 18ZDA293). H. Ding was additionally supported by the Youth Project of Humanities and Social Sciences Foundation of China (No. 18YJC740103).