Pain detection through facial expressions in children with autism using deep learning

Introduction

 

Autism spectrum disorder (ASD) is a neurodevelopmental condition that affects how people communicate and interact with others. Children with ASD may have difficulty expressing their emotions, especially pain, which can lead to poor pain management and unnecessary suffering. In this blog post, I will summarize a recent research paper that proposes a novel artificial intelligence (AI) model that can detect pain through facial expressions in children with autism.

 

The problem of pain detection in ASD

 

Pain is a complex and subjective experience that involves sensory, emotional, and cognitive components. Pain can be influenced by various factors, such as age, gender, culture, personality, and context. Pain can also have significant impacts on one’s physical and mental health, as well as quality of life.

 

For children with ASD, pain detection and assessment can be particularly challenging, as they may exhibit atypical or less obvious signs of pain. For example, some children with ASD may not cry, scream, or verbalize their pain, but instead show subtle changes in their facial expressions, body movements, or behaviors. Some children with ASD may also have sensory issues that affect their perception and tolerance of pain. Therefore, it is important to develop reliable and valid methods to identify and evaluate pain in children with ASD, while also considering their individual differences and preferences.

 

The proposed AI model for pain recognition

 

The research paper, titled “Pain detection through facial expressions in children with autism using deep learning”, introduces an AI model that combines ResNeXt and MediaPipe for pain recognition in autistic children. The paper, by P. V. K. Sandeep and N. Suresh Kumar, was published in the journal Soft Computing.


 

The proposed AI model consists of two main components: a face detection module and a facial expression classification module. The face detection module uses MediaPipe, a framework that provides tools for face and hand detection, tracking, and analysis; it extracts facial landmarks, such as the positions of the eyes, nose, and mouth, from an image or video stream. The facial expression classification module uses ResNeXt, a deep convolutional neural network that learns from large-scale image datasets with high accuracy and efficiency, and classifies facial expressions into six basic categories: anger, disgust, fear, happiness, sadness, and surprise.
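To make the two-stage design more concrete, here is a minimal sketch of such a pipeline. It is not the authors' code: it assumes MediaPipe's Face Mesh solution for landmark extraction and torchvision's ResNeXt-50 with its final layer replaced for the six expression classes; the class names, preprocessing, and cropping logic are my assumptions.

```python
# Minimal sketch of a MediaPipe + ResNeXt pipeline (not the authors' implementation).
# MediaPipe extracts facial landmarks and a face crop; ResNeXt classifies the expression.
import cv2
import mediapipe as mp
import torch
import torch.nn as nn
from torchvision import models, transforms

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Face detection / landmark extraction with MediaPipe Face Mesh
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)

# Expression classifier: ResNeXt-50 with the final layer replaced for 6 classes
model = models.resnext50_32x4d(weights=None)  # trained weights would be loaded in practice
model.fc = nn.Linear(model.fc.in_features, len(EXPRESSIONS))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_expression(bgr_image):
    """Return (expression_label, landmarks) for the first detected face, or None."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    result = face_mesh.process(rgb)
    if not result.multi_face_landmarks:
        return None
    landmarks = result.multi_face_landmarks[0].landmark  # 468 normalized (x, y, z) points

    # Crop the face region from the landmark bounding box before classification
    h, w = rgb.shape[:2]
    xs = [int(p.x * w) for p in landmarks]
    ys = [int(p.y * h) for p in landmarks]
    face_crop = rgb[max(min(ys), 0):max(ys), max(min(xs), 0):max(xs)]

    with torch.no_grad():
        logits = model(preprocess(face_crop).unsqueeze(0))
        label = EXPRESSIONS[int(logits.argmax(dim=1))]
    return label, landmarks
```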

 

The proposed AI model integrates the outputs of MediaPipe and ResNeXt to identify facial expressions and then estimate pain levels from the face. The model uses a pain scale that ranges from 0 (no pain) to 10 (extreme pain), based on the intensity and frequency of the facial expressions, and also takes into account the context and individual characteristics of the child, such as age, gender, and diagnosis.
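The paper summary above does not spell out the exact fusion rule, so the following is only an illustrative, hypothetical mapping from per-frame expression probabilities to the 0–10 scale, combining intensity with how often pain-related expressions appear. The weights and the set of "pain-related" expressions are assumptions, not values from the paper.

```python
# Illustrative only: a hypothetical mapping from per-frame expression predictions
# to a 0-10 pain score using intensity and frequency, as described in the text.
from typing import Dict, List

# Assumed weights for expressions often associated with pain (not from the paper)
PAIN_RELATED = {"sadness": 0.6, "fear": 0.8, "anger": 1.0, "disgust": 0.7}

def pain_score(frames: List[Dict[str, float]]) -> float:
    """frames: one dict per video frame, mapping expression name -> softmax probability."""
    if not frames:
        return 0.0
    per_frame = []
    for probs in frames:
        # Intensity: weighted probability mass on pain-related expressions in this frame
        intensity = sum(w * probs.get(expr, 0.0) for expr, w in PAIN_RELATED.items())
        per_frame.append(min(intensity, 1.0))
    # Frequency: fraction of frames dominated by a pain-related expression
    frequency = sum(1 for x in per_frame if x > 0.5) / len(per_frame)
    mean_intensity = sum(per_frame) / len(per_frame)
    # Average the two components and rescale to the 0-10 pain scale
    return round(10.0 * 0.5 * (mean_intensity + frequency), 1)
```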

 

The evaluation and results of the AI model

 

The authors evaluated the performance of their AI model using two datasets: the UNBC-McMaster Shoulder Pain Expression Archive Database (UNBC) and the Autism Spectrum Disorder Facial Expression Database (ASDFED). The UNBC dataset contains videos of 25 participants who experienced shoulder pain while performing various movements. The ASDFED dataset contains videos of 15 children with ASD who expressed different emotions while playing with toys. The authors used 80% of the data for training and 20% for testing.
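For readers who want to reproduce this kind of evaluation setup, an 80/20 split can be done with scikit-learn's train_test_split; the sketch below uses placeholder data rather than the actual datasets, and stratification is my assumption, not a detail stated in the paper.

```python
# Sketch of an 80/20 train/test split like the one described above.
from sklearn.model_selection import train_test_split

# Placeholder data: in the study these would be video-level samples and their labels.
samples = list(range(100))
labels = [i % 2 for i in range(100)]

train_x, test_x, train_y, test_y = train_test_split(
    samples, labels,
    test_size=0.2,      # 20% held out for testing
    stratify=labels,    # keep class balance across the split (assumption)
    random_state=42,
)
```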

 

The authors compared their AI model with four other models: a baseline model that uses only ResNeXt, a model that uses ResNeXt and CNN, a model that uses ResNeXt and LSTM, and a model that uses ResNeXt and BiLSTM. The authors measured the accuracy, precision, recall, and F1-score of each model for facial expression classification and pain detection.

 

The results showed that the proposed AI model achieved the highest accuracy and F1-score for both facial expression classification and pain detection among the compared models. It also outperformed the other models in precision and recall, except for the recall of pain detection on the ASDFED dataset, where the ResNeXt and BiLSTM model performed slightly better. The authors attributed this to the greater variation and noise in the facial expressions of the children with ASD in the ASDFED dataset.


 

The implications and limitations of the AI model

 

The authors concluded that their AI model can effectively detect pain through facial expressions in children with autism, as well as in adults without ASD. The authors suggested that their AI model can be used as a complementary tool for pain assessment and management in clinical settings, as well as for research and education purposes. The authors also highlighted the potential benefits of their AI model for improving the quality of life and well-being of children with ASD and their caregivers.

 

However, the authors also acknowledged some limitations and challenges of their AI model. For example, the model may not capture the subtle and complex facial expressions of children with ASD, especially when they have co-occurring conditions. It may also be affected by external factors, such as lighting, camera angle, background noise, and facial accessories. Moreover, it may not account for the individual preferences and needs of children with ASD, such as their communication style, coping strategies, and pain threshold. Therefore, the authors recommended that the model be used with caution and in conjunction with other methods of pain assessment, such as self-report, behavioral observation, and physiological measures.

 

The future directions of the AI model

 

The authors proposed some future directions for improving and expanding their AI model. For example, the authors suggested that the AI model could be enhanced by using more advanced and robust techniques, such as attention mechanisms, graph neural networks, and generative adversarial networks. The authors also suggested that the AI model could be applied to other domains and populations, such as elderly people, people with disabilities, and people with chronic pain. Furthermore, the authors suggested that the AI model could be integrated with other modalities and sensors, such as voice, gesture, and biosignals, to provide a more comprehensive and holistic view of pain.


 

FAQ

How are the pain levels measured in the datasets?

 

The pain levels are measured in the datasets as follows:

  • In the UNBC dataset, the pain levels are measured using the Prkachin and Solomon Pain Intensity (PSPI) scale, which is a validated and reliable measure of pain intensity based on facial expressions. The PSPI scale ranges from 0 (no pain) to 16 (maximum pain); its formula is given after this list.
  • In the ASDFED dataset, the pain levels are measured using the Wong-Baker FACES Pain Rating Scale, which is a widely used and simple measure of pain intensity based on facial expressions. The Wong-Baker scale ranges from 0 (no pain) to 10 (extreme pain).
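For reference, the PSPI score is computed from Facial Action Coding System (FACS) action-unit intensities; this definition comes from Prkachin and Solomon's original work rather than from the paper summarized here:

PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43

AU4 (brow lowerer), AU6/AU7 (cheek raiser / lid tightener), and AU9/AU10 (nose wrinkler / upper lip raiser) are each scored from 0 to 5, and AU43 (eyes closed) is scored 0 or 1, which gives the 0 to 16 range.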

 

How are the facial expression categories labeled in the datasets?

 

The facial expression categories are labeled in the datasets as follows:

  • In the UNBC dataset, the facial expression categories are labeled using the Facial Action Coding System (FACS), which is a comprehensive and objective system for describing facial movements based on the underlying facial muscles. The FACS system assigns a numerical code to each facial action unit (AU), such as AU1 (inner brow raiser) or AU12 (lip corner puller).
  • In the ASDFED dataset, the facial expression categories are labeled using the Emotion Facial Action Coding System (EMFACS), which is a simplified and emotion-specific version of the FACS system. The EMFACS system assigns a numerical code to each facial action unit (AU) that is relevant to the six basic emotions, such as AU1+2+4 (fear) or AU6+12 (happiness).

 

How are the results of the AI model compared with the other models?

 

The results of the AI model are compared with the other models using four metrics: accuracy, precision, recall, and F1-score. These metrics are commonly used for evaluating the performance of classification models, and are defined as follows (a short scikit-learn example follows the list):

  • Accuracy is the ratio of correctly classified instances to the total number of instances.
  • Precision is the ratio of correctly classified positive instances to the total number of predicted positive instances.
  • Recall is the ratio of correctly classified positive instances to the total number of actual positive instances.
  • F1-score is the harmonic mean of precision and recall, and represents the balance between them.
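All four metrics can be computed directly with scikit-learn. The sketch below uses placeholder labels, not results from the paper; "pain" is treated as the positive class.

```python
# Computing accuracy, precision, recall, and F1-score with scikit-learn.
# y_true / y_pred are placeholder labels, not data from the study.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = ["pain", "no_pain", "pain", "pain", "no_pain", "no_pain"]
y_pred = ["pain", "no_pain", "no_pain", "pain", "no_pain", "pain"]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, pos_label="pain"))
print("recall   :", recall_score(y_true, y_pred, pos_label="pain"))
print("f1-score :", f1_score(y_true, y_pred, pos_label="pain"))
```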

 

Source:

https://link.springer.com/article/10.1007/s00500-024-09696-x
