dc.contributor.author | Theodorakis, S | en
dc.contributor.author | Pitsikalis, V | en
dc.contributor.author | Maragos, P | en
dc.date.accessioned | 2014-03-01T02:46:53Z |
dc.date.available | 2014-03-01T02:46:53Z |
dc.date.issued | 2010 | en
dc.identifier.issn | 15206149 | en
dc.identifier.uri | https://dspace.lib.ntua.gr/xmlui/handle/123456789/32910 |
dc.subject | HMM | en
dc.subject | Sign language | en
dc.subject | Subunit modeling | en
dc.subject.other | American sign language | en
dc.subject.other | Boston University | en
dc.subject.other | Data-driven | en
dc.subject.other | Hand positions | en
dc.subject.other | Hierarchical clustering | en
dc.subject.other | HMM | en
dc.subject.other | Phonetic information | en
dc.subject.other | Qualitative analysis | en
dc.subject.other | Region-based | en
dc.subject.other | Sign language | en
dc.subject.other | Sub-units | en
dc.subject.other | Subunit modeling | en
dc.subject.other | Time segmentation | en
dc.subject.other | Unit constructions | en
dc.subject.other | Visual feature | en
dc.subject.other | Visual-processing | en
dc.subject.other | Hidden Markov models | en
dc.subject.other | Linguistics | en
dc.subject.other | Quality control | en
dc.subject.other | Signal processing | en
dc.subject.other | Cluster analysis | en
dc.title | Model-level data-driven sub-units for signs in videos of continuous sign language | en
heal.type | conferenceItem | en
heal.identifier.primary | 10.1109/ICASSP.2010.5495875 | en
heal.identifier.secondary | http://dx.doi.org/10.1109/ICASSP.2010.5495875 | en
heal.identifier.secondary | 5495875 | en
heal.publicationDate | 2010 | en
heal.abstract | We investigate automatic phonetic sub-unit modeling for sign language that is completely data-driven and uses no prior phonetic information. A first step of visual processing leads to simple and effective region-based visual features. Prior to sub-unit modeling we propose a pronunciation clustering step with respect to each sign. Afterwards, for each sign and pronunciation group, we find the time segmentation at the hidden Markov model (HMM) level. The models employed represent movements as sequences of dominant-hand positions. The constructed segments are exploited explicitly at the model level via hierarchical clustering of HMMs, leading to the data-driven construction of movement sub-units. The constructed movement sub-units are evaluated in qualitative analysis experiments on data from the Boston University (BU)-400 American Sign Language corpus, showing promising results. ©2010 IEEE. | en
heal.journalName | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | en
dc.identifier.doi | 10.1109/ICASSP.2010.5495875 | en
dc.identifier.spage | 2262 | en
dc.identifier.epage | 2265 | en
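The abstract describes a pipeline that segments each sign at the HMM level and then clusters the segment-level HMMs hierarchically to obtain data-driven movement sub-units. The sketch below is only a minimal illustration of that model-level clustering idea: the libraries (hmmlearn, scipy), the symmetrized log-likelihood distance, and all parameter values are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch: hierarchical clustering of segment-level HMMs into
# data-driven movement sub-units. Feature extraction, distances, and
# all parameters are hypothetical choices, not the authors' setup.
import numpy as np
from hmmlearn import hmm
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform


def train_segment_hmm(features, n_states=3, seed=0):
    """Fit a Gaussian HMM on one movement segment.

    `features` is a (T, D) array of dominant-hand position features
    (a stand-in for the visual front-end output)."""
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=seed)
    model.fit(features)
    return model


def hmm_distance(m_a, feats_a, m_b, feats_b):
    """Symmetrized, length-normalized cross log-likelihood distance
    between two segment HMMs (one common, assumed choice)."""
    d_ab = (m_a.score(feats_a) - m_b.score(feats_a)) / len(feats_a)
    d_ba = (m_b.score(feats_b) - m_a.score(feats_b)) / len(feats_b)
    return 0.5 * (d_ab + d_ba)


def cluster_subunits(segments, n_subunits=20):
    """Agglomeratively cluster segment-level HMMs; each resulting cluster
    is taken as one data-driven movement sub-unit label."""
    models = [train_segment_hmm(f) for f in segments]
    n = len(models)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = max(hmm_distance(models[i], segments[i],
                                 models[j], segments[j]), 0.0)
            dist[i, j] = dist[j, i] = d
    # Condensed distance matrix -> average-linkage tree -> flat labels.
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=n_subunits, criterion="maxclust")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for per-segment dominant-hand position tracks.
    toy_segments = [rng.normal(size=(40, 2)) + k % 3 for k in range(12)]
    print(cluster_subunits(toy_segments, n_subunits=3))
```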