Sensitivity to temporal synchrony and selective attention in audiovisual speech in infants at elevated likelihood for autism: A preliminary longitudinal study

Introduction

Learning language is a complex process that relies on the brain seamlessly integrating information from different senses. When someone speaks, we naturally look toward their mouth, matching the sounds we hear with the lip movements we see. This ability to detect synchrony between auditory and visual speech cues, and to selectively attend to the mouth, is fundamental for speech perception and language acquisition.

A recent study published in June 2024 in the journal Infant Behavior and Development sheds light on how infants at elevated likelihood for Autism Spectrum Disorder (ASD) process audiovisual speech compared with infants at typical likelihood. The paper is titled “Sensitivity to temporal synchrony and selective attention in audiovisual speech in infants at elevated likelihood for autism: A preliminary longitudinal study”.

The Importance of Audiovisual Integration in Early Development

Our ability to integrate information from different senses, known as multisensory integration, is crucial for various aspects of development, especially for learning language. In the context of speech perception, audiovisual integration allows us to combine the auditory information we hear (the sounds of speech) with the visual information we see (lip movements) to create a unified percept of spoken language. This unified percept is much richer and more informative than the information from either sense alone.

For example, if you’re in a noisy environment and struggle to hear someone speak clearly, focusing on their lip movements can significantly improve your understanding. This is because the visual information from the mouth movements provides additional cues that help you decipher the sounds you’re having trouble hearing.


The Study: Investigating Selective Attention and Temporal Synchrony in Infants

Researchers Itziar Lozano, Mercedes Belinchón, and Ruth Campos set out to investigate how infants at elevated and typical likelihood for ASD process audiovisual speech. They recruited 29 infants: roughly half had an older sibling diagnosed with ASD (elevated likelihood for ASD), and the rest had no family history of ASD (typical likelihood). The researchers followed these infants longitudinally at 4, 8, and 12 months of age.

To assess how the infants processed audiovisual speech, the researchers used eye tracking. The infants watched videos of talking faces in which the audio and video were either synchronous or asynchronous. In the synchronous videos, the lip movements matched the speech sounds in time; in the asynchronous videos, the audio was temporally offset from the video, so the sounds arrived out of step with the lip movements that produced them. By tracking how long the infants looked at the speaker’s eyes and mouth during these presentations, the researchers could gauge both their attention to these facial regions and their sensitivity to the temporal synchrony between the auditory and visual cues.
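In studies like this, eye-tracking data are typically summarized as the proportion of looking time an infant devotes to predefined areas of interest (AOIs), such as the eyes and the mouth of the talking face. The short Python sketch below illustrates that general idea; the fixation format, AOI coordinates, and function names are hypothetical placeholders, not the authors’ actual analysis pipeline.

# Illustrative sketch only: proportion of face-directed looking time spent on
# "eyes" vs. "mouth" areas of interest (AOIs), computed from a list of fixations.
# AOI boundaries and field names are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float             # horizontal gaze position in pixels
    y: float             # vertical gaze position in pixels
    duration_ms: float   # fixation duration in milliseconds

# Hypothetical rectangular AOIs: (x_min, y_min, x_max, y_max) in screen pixels.
AOIS = {
    "eyes": (300, 150, 700, 300),
    "mouth": (380, 420, 620, 550),
}

def aoi_of(fix: Fixation):
    """Return the label of the AOI containing this fixation, or None if outside all AOIs."""
    for label, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= fix.x <= x1 and y0 <= fix.y <= y1:
            return label
    return None

def looking_time_proportions(fixations):
    """Proportion of total AOI-directed looking time spent on each AOI."""
    totals = {label: 0.0 for label in AOIS}
    for fix in fixations:
        label = aoi_of(fix)
        if label is not None:
            totals[label] += fix.duration_ms
    face_time = sum(totals.values())
    return {label: (t / face_time if face_time else 0.0) for label, t in totals.items()}

# Example with made-up fixations: one on the eyes, two on the mouth.
sample = [Fixation(500, 200, 350), Fixation(480, 480, 900), Fixation(510, 470, 600)]
print(looking_time_proportions(sample))  # approximately {'eyes': 0.19, 'mouth': 0.81}

Proportions computed this way at each visit (for example, at 4, 8, and 12 months) can then be compared across ages and between groups to describe how attention to the mouth changes over development.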

Key Findings: Detecting Mismatches but Differences in Attention

The study revealed some interesting findings. By 12 months of age, infants in both groups, irrespective of likelihood status, were able to detect when the speech sounds and lip movements were out of sync in the asynchronous videos. This suggests that basic sensitivity to audiovisual synchrony develops similarly in both groups of infants.


However, a crucial difference emerged when the researchers examined how the infants directed their attention. The infants at elevated likelihood for ASD paid significantly less attention to the mouth region at 12 months than their typical-likelihood counterparts. This difference was particularly noteworthy because the typical-likelihood infants showed an increasing focus on the mouth region over the course of the study, whereas the attention patterns of the elevated-likelihood infants remained relatively stable.

Potential Implications and Future Directions

These findings suggest that infants at elevated likelihood for ASD may have specific difficulty directing attention to the mouth, a crucial region for speech perception and language learning. This is an interesting observation because difficulties in integrating audiovisual information, rather than in selectively attending to it, had previously been thought to be a core deficit in ASD.

The study highlights a potential new area of investigation in the early development of ASD. Understanding how infants at elevated likelihood for ASD process audiovisual speech can inform the development of targeted interventions to support their language learning and communication skills. It is important to note that this is a preliminary study with a relatively small sample size; further research is needed to confirm these findings and to explore the mechanisms underlying the observed differences in attention patterns.

Future studies could involve larger samples and use neuroimaging techniques to examine infants’ brain activity while they process audiovisual speech. Researchers could also investigate whether these early differences in attention translate into later language difficulties in children diagnosed with ASD. By unraveling how infants at elevated likelihood for ASD process audiovisual speech, researchers can pave the way for more effective interventions to support their communication development.


Source:

https://www.sciencedirect.com/science/article/abs/pii/S0163638324000523
