dc.contributor.author | Caridakis, G | en
dc.contributor.author | Asteriadis, S | en
dc.contributor.author | Karpouzis, K | en
dc.date.accessioned | 2014-03-01T02:47:08Z |
dc.date.available | 2014-03-01T02:47:08Z |
dc.date.issued | 2010 | en
dc.identifier.uri | https://dspace.lib.ntua.gr/xmlui/handle/123456789/33012 |
dc.subject | Context Aware | en
dc.subject | Emotion Recognition | en
dc.subject | Facial Expression | en
dc.subject | Multimedia Application | en
dc.subject | System Dynamics | en
dc.subject | User Model | en
dc.subject.other | Acoustic features | en
dc.subject.other | Analysis system | en
dc.subject.other | Computational formulations | en
dc.subject.other | Context-Aware | en
dc.subject.other | Dynamic Profiling | en
dc.subject.other | Emotion recognition | en
dc.subject.other | Facial Expressions | en
dc.subject.other | Head motion | en
dc.subject.other | Head pose | en
dc.subject.other | Multi-modal | en
dc.subject.other | Multimedia applications | en
dc.subject.other | Personalized interface | en
dc.subject.other | Physiological measurement | en
dc.subject.other | Statistical processing | en
dc.subject.other | User Modeling | en
dc.subject.other | Work Focus | en
dc.subject.other | Mathematical models | en
dc.subject.other | Pattern recognition systems | en
dc.subject.other | Semantics | en
dc.subject.other | Feature extraction | en
dc.title | User modeling via gesture and head pose expressivity features | en
heal.type | conferenceItem | en
heal.identifier.primary | 10.1109/SMAP.2010.5706868 | en
heal.identifier.secondary | http://dx.doi.org/10.1109/SMAP.2010.5706868 | en
heal.identifier.secondary | 5706868 | en
heal.publicationDate | 2010 | en
heal.abstract | Current work focuses on user modeling in terms of affective analysis that could in turn be used in intelligent personalized interfaces and systems, dynamic profiling, and context-aware multimedia applications. The analysis performed within this work comprises statistical processing and classification of automatically extracted gestural and head pose expressivity features. Qualitative expressive cues of body and head motion are formulated computationally; the resulting features are processed statistically, their correlations are studied, and finally an emotion recognition attempt based on these features is presented. Significant emotion-specific patterns and interrelations among expressivity features are derived, while the emotion recognition results indicate that gestural and head pose expressivity features could supplement and enhance a multimodal affective analysis system, contributing an additional modality to be fused with other commonly used modalities such as facial expressions, prosodic and lexical acoustic features, and physiological measurements. © 2010 IEEE. | en
heal.journalName | Proceedings - 2010 5th International Workshop on Semantic Media Adaptation and Personalization, SMAP 2010 | en
dc.identifier.doi | 10.1109/SMAP.2010.5706868 | en
dc.identifier.spage | 19 | en
dc.identifier.epage | 24 | en