dc.contributor.author |
Tzafestas, C |
en |
dc.contributor.author |
Mitsou, N |
en |
dc.contributor.author |
Georgakarakos, N |
en |
dc.contributor.author |
Diamanti, O |
en |
dc.contributor.author |
Maragos, P |
en |
dc.contributor.author |
Fotinea, S-E |
en |
dc.contributor.author |
Efthimiou, E |
en |
dc.date.accessioned |
2014-03-01T02:46:10Z |
|
dc.date.available |
2014-03-01T02:46:10Z |
|
dc.date.issued |
2009 |
en |
dc.identifier.uri |
https://dspace.lib.ntua.gr/xmlui/handle/123456789/32588 |
|
dc.subject |
Graphic User Interface |
en |
dc.subject |
MLP Neural Network |
en |
dc.subject |
Mobile Robot |
en |
dc.subject |
Real-time Visualization |
en |
dc.subject |
Shape Descriptor |
en |
dc.subject |
Sign Language |
en |
dc.subject |
Visual Analysis |
en |
dc.subject |
Greek Sign Language |
en |
dc.subject |
Multi Layer Perceptron |
en |
dc.subject |
Real Time |
en |
dc.subject.other |
Fourier |
en |
dc.subject.other |
Hand images |
en |
dc.subject.other |
Hand posture |
en |
dc.subject.other |
Multi layer perceptron |
en |
dc.subject.other |
Pilot applications |
en |
dc.subject.other |
Real time recognition |
en |
dc.subject.other |
Recognition performance |
en |
dc.subject.other |
Shape descriptors |
en |
dc.subject.other |
Sign images |
en |
dc.subject.other |
Sign language |
en |
dc.subject.other |
Sign recognition |
en |
dc.subject.other |
Tele-operations |
en |
dc.subject.other |
Telerobotic control |
en |
dc.subject.other |
Visual analysis |
en |
dc.subject.other |
Visual recognition |
en |
dc.subject.other |
Graphical user interfaces |
en |
dc.subject.other |
Linguistics |
en |
dc.subject.other |
Mobile robots |
en |
dc.subject.other |
Neural networks |
en |
dc.subject.other |
Remote control |
en |
dc.subject.other |
Roads and streets |
en |
dc.subject.other |
Wireless networks |
en |
dc.title |
Gestural teleoperation of a mobile robot based on visual recognition of sign language static handshapes |
en |
heal.type |
conferenceItem |
en |
heal.identifier.primary |
10.1109/ROMAN.2009.5326235 |
en |
heal.identifier.secondary |
http://dx.doi.org/10.1109/ROMAN.2009.5326235 |
en |
heal.identifier.secondary |
5326235 |
en |
heal.publicationDate |
2009 |
en |
heal.abstract |
This paper presents results achieved within the framework of a national research project (titled "DIANOEMA"), in which visual analysis and sign recognition techniques have been explored on Greek Sign Language (GSL) data. Besides GSL modelling, the aim was to develop a pilot application for teleoperating a mobile robot using natural hand signs. A small vocabulary of hand signs has been designed to enable desktop-based teleoperation at a high level of supervisory telerobotic control. Real-time visual recognition of the hand images is performed by training a multi-layer perceptron (MLP) neural network. Various shape descriptors of the segmented hand posture images have been explored as inputs to the MLP network, including Fourier shape descriptors computed on the contour of the segmented hand sign images, moments, compactness, eccentricity, and the histogram of the curvature. We have examined which of these shape descriptors are best suited for real-time recognition of hand signs, in relation to the number and choice of hand postures, in order to achieve maximum recognition performance. The hand-sign recognizer has been integrated into a graphical user interface and successfully implemented in a pilot application for real-time desktop-based gestural teleoperation of a mobile robot vehicle. © 2009 IEEE. |
en |
heal.journalName |
Proceedings - IEEE International Workshop on Robot and Human Interactive Communication |
en |
dc.identifier.doi |
10.1109/ROMAN.2009.5326235 |
en |
dc.identifier.spage |
1073 |
en |
dc.identifier.epage |
1079 |
en |