ViTFER: Facial Emotion Recognition with Vision Transformers

Mazzeo P. L.
2022

Abstract

Automated emotion recognition has proven to be a powerful tool in several fields. The main objective of facial emotion recognition (FER) is to map different facial expressions to their respective emotional states. In this study, FER is carried out with the ResNet-18 model and with transformers. We examine the performance of the Vision Transformer on this task and compare our model against state-of-the-art models on hybrid datasets. The study describes the pipeline and the associated procedures for face detection, cropping, and feature extraction with a recent deep learning model, a fine-tuned transformer. The experimental findings demonstrate that the proposed emotion recognition system can be applied successfully in practical settings.
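
As a rough illustration of the pipeline summarized in the abstract (face detection, cropping, and transformer-based classification), the following minimal Python sketch shows one plausible realization. It is not the authors' implementation: the OpenCV Haar-cascade detector, the timm ViT backbone, the 7-class emotion label set, and the file name "example_face.jpg" are all assumptions made for illustration.

import cv2
import timm
import torch
from PIL import Image
from torchvision import transforms

# Assumed 7-class emotion setup; the paper's actual label set may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

# Stage 1: face detection and cropping (Haar cascade as a stand-in detector).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(bgr_image):
    """Return the first detected face region, or None if no face is found."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return bgr_image[y:y + h, x:x + w]

# Stage 2: a Vision Transformer with a new 7-way head; in practice this head
# (and optionally the backbone) would be fine-tuned on FER data first.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=len(EMOTIONS))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

def predict_emotion(bgr_image):
    """Run the full detect -> crop -> classify pipeline on a BGR image."""
    face = crop_face(bgr_image)
    if face is None:
        return None
    rgb = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
    batch = preprocess(Image.fromarray(rgb)).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return EMOTIONS[logits.argmax(dim=1).item()]

print(predict_emotion(cv2.imread("example_face.jpg")))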
Istituto di Scienze Applicate e Sistemi Intelligenti "Eduardo Caianiello" - ISASI - Sede Secondaria Lecce
computer vision
emotion recognition
ResNet
transformers
Vision Transformers
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/512099
Citations
  • Scopus 57