HEAL DSpace

Emotion recognition through multiple modalities: Face, body gesture, speech

DSpace/Manakin Repository

Show simple item record

dc.contributor.author Castellano, G en
dc.contributor.author Kessous, L en
dc.contributor.author Caridakis, G en
dc.date.accessioned 2014-03-01T02:45:15Z
dc.date.available 2014-03-01T02:45:15Z
dc.date.issued 2008 en
dc.identifier.issn 0302-9743 en
dc.identifier.uri https://dspace.lib.ntua.gr/xmlui/handle/123456789/32231
dc.subject Affective body language en
dc.subject Affective speech en
dc.subject Emotion recognition en
dc.subject Multimodal fusion en
dc.subject.other Classification (of information) en
dc.subject.other Classifiers en
dc.subject.other Gesture recognition en
dc.subject.other Human computer interaction en
dc.subject.other Human engineering en
dc.subject.other Information management en
dc.subject.other Knowledge management en
dc.subject.other Learning systems en
dc.subject.other Speech en
dc.subject.other Speech recognition en
dc.subject.other Affective body language en
dc.subject.other Affective speech en
dc.subject.other Bayesian classifiers en
dc.subject.other Body gesture en
dc.subject.other Body movements en
dc.subject.other Decision levels en
dc.subject.other Emotion recognition en
dc.subject.other Facial expressions en
dc.subject.other Feature-level en
dc.subject.other Individual classifiers en
dc.subject.other Multi-modal en
dc.subject.other Multi-modal approach en
dc.subject.other Multi-modal data en
dc.subject.other Multimodal fusion en
dc.subject.other Multiple modalities en
dc.subject.other Recognition rates en
dc.subject.other Unimodal en
dc.subject.other Face recognition en
dc.title Emotion recognition through multiple modalities: Face, body gesture, speech en
heal.type conferenceItem en
heal.identifier.primary 10.1007/978-3-540-85099-1_8 en
heal.identifier.secondary http://dx.doi.org/10.1007/978-3-540-85099-1_8 en
heal.publicationDate 2008 en
heal.abstract In this paper we present a multimodal approach for the recognition of eight emotions. Our approach integrates information from facial expressions, body movement and gestures and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. Firstly, individual classifiers were trained for each modality. Next, data were fused at the feature level and the decision level. Fusing the multimodal data resulted in a large increase in the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10% when compared to the most successful unimodal system. Further, the fusion performed at the feature level provided better results than the one performed at the decision level. © 2008 Springer-Verlag Berlin Heidelberg. en
heal.journalName Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) en
dc.identifier.doi 10.1007/978-3-540-85099-1_8 en
dc.identifier.volume 4868 LNCS en
dc.identifier.spage 92 en
dc.identifier.epage 103 en
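
The abstract above contrasts unimodal classifiers with feature-level and decision-level fusion using a Bayesian classifier. The following is a minimal illustrative sketch of that comparison, not the authors' implementation: it uses scikit-learn's GaussianNB as a stand-in for the Bayesian classifier, synthetic placeholder data in place of the multimodal corpus, hypothetical feature dimensions per modality, and a product-of-posteriors rule for decision-level fusion (the paper may combine decisions differently).

# Illustrative sketch (assumptions noted above): unimodal vs. fused recognition
# of 8 emotion classes from three modalities (face, gesture, speech).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 240, 8                      # 8 emotions, as in the paper
dims = {"face": 10, "gesture": 6, "speech": 12}    # hypothetical feature sizes

y = rng.integers(0, n_classes, n_samples)
# Synthetic per-modality features, loosely correlated with the label
X = {m: rng.normal(size=(n_samples, d)) + y[:, None] * 0.3
     for m, d in dims.items()}

idx_train, idx_test = train_test_split(np.arange(n_samples), test_size=0.3,
                                       random_state=0, stratify=y)

# Unimodal baselines: one classifier trained per modality
unimodal = {}
for m in dims:
    clf = GaussianNB().fit(X[m][idx_train], y[idx_train])
    unimodal[m] = clf
    print(m, "accuracy:", clf.score(X[m][idx_test], y[idx_test]))

# Feature-level fusion: concatenate modality features, train one classifier
X_all = np.hstack([X[m] for m in dims])
feat_clf = GaussianNB().fit(X_all[idx_train], y[idx_train])
print("feature-level fusion accuracy:",
      feat_clf.score(X_all[idx_test], y[idx_test]))

# Decision-level fusion: combine per-modality posteriors (product rule here)
probs = np.prod([unimodal[m].predict_proba(X[m][idx_test]) for m in dims],
                axis=0)
decision_pred = probs.argmax(axis=1)
print("decision-level fusion accuracy:", (decision_pred == y[idx_test]).mean())

With real multimodal features the paper reports that both fusion schemes outperform the best unimodal system by more than 10%, with feature-level fusion ahead of decision-level fusion; the sketch only shows where each step would sit in such a pipeline.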


Files in this item


There are no files associated with this item.
