Field | Value | Language
dc.contributor.author | Votsis, GN | en
dc.contributor.author | Drosopoulos, AI | en
dc.contributor.author | Kollias, SD | en
dc.date.accessioned | 2014-03-01T01:18:32Z |
dc.date.available | 2014-03-01T01:18:32Z |
dc.date.issued | 2003 | en
dc.identifier.issn | 0923-5965 | en
dc.identifier.uri | https://dspace.lib.ntua.gr/xmlui/handle/123456789/15072 |
dc.subject | Active contours | en
dc.subject | Dominant angle | en
dc.subject | Facial feature extraction | en
dc.subject | Feature labeling | en
dc.subject | Optimal segmentation | en
dc.subject | Seed growing | en
dc.subject.classification | Engineering, Electrical & Electronic | en
dc.subject.other | Animation | en
dc.subject.other | Face recognition | en
dc.subject.other | Feature extraction | en
dc.subject.other | Fuzzy sets | en
dc.subject.other | Optimization | en
dc.subject.other | Feature labeling | en
dc.subject.other | Image segmentation | en
dc.title | A modular approach to facial feature segmentation on real sequences | en
heal.type | journalArticle | en
heal.identifier.primary | 10.1016/S0923-5965(02)00103-0 | en
heal.identifier.secondary | http://dx.doi.org/10.1016/S0923-5965(02)00103-0 | en
heal.language | English | en
heal.publicationDate | 2003 | en
heal.abstract | In this paper, a modular approach of gradual confidence for facial feature extraction over real video frames is presented. The problem is addressed under general imaging conditions and soft assumptions. The proposed methodology copes with large variations in the appearance of diverse subjects, as well as of the same subject across different instances within real video sequences. Areas of the face that are statistically salient form an initial set of regions that are likely to include information about the features of interest. Enhancement of these regions produces closed objects, which, through the use of a fuzzy system, reveal a dominant angle, i.e. the facial rotation angle. The object set is restricted using the dominant angle. An exhaustive search is then performed among all candidate objects, matching a pattern that models the relative positions of the eyes and the mouth. Labeling of the winning features can be used to evaluate the extracted features and provide feedback in an iterative framework. A subset of the MPEG-4 facial definition or facial animation parameter set can be obtained. This gradual feature revelation is performed under optimization at each step, producing a posteriori knowledge about the face and leading to a step-by-step visualization of the features in search. © 2002 Elsevier Science B.V. All rights reserved. | en
heal.publisher | ELSEVIER SCIENCE BV | en
heal.journalName | Signal Processing: Image Communication | en
dc.identifier.doi | 10.1016/S0923-5965(02)00103-0 | en
dc.identifier.isi | ISI:000181043800005 | en
dc.identifier.volume | 18 | en
dc.identifier.issue | 1 | en
dc.identifier.spage | 67 | en
dc.identifier.epage | 89 | en