dc.contributor.author | Ntalianis, KS | en
dc.contributor.author | Doulamis, AD | en
dc.contributor.author | Tsapatsoulis, N | en
dc.contributor.author | Doulamis, N | en
dc.date.accessioned | 2014-03-01T01:33:36Z |
dc.date.available | 2014-03-01T01:33:36Z |
dc.date.issued | 2010 | en
dc.identifier.issn | 1380-7501 | en
dc.identifier.uri | https://dspace.lib.ntua.gr/xmlui/handle/123456789/20483 |
dc.subject | Action modeling | en
dc.subject | Human action analysis | en
dc.subject | Human object detection | en
dc.subject | User transparent interaction | en
dc.subject | Video annotation | en
dc.subject.classification | Computer Science, Information Systems | en
dc.subject.classification | Computer Science, Software Engineering | en
dc.subject.classification | Computer Science, Theory & Methods | en
dc.subject.classification | Engineering, Electrical & Electronic | en
dc.subject.other | Action modeling | en
dc.subject.other | Content semantics | en
dc.subject.other | File servers | en
dc.subject.other | Human actions | en
dc.subject.other | Integrated frameworks | en
dc.subject.other | Modeling and analysis | en
dc.subject.other | Object Detection | en
dc.subject.other | Spatiotemporal analysis | en
dc.subject.other | User interaction | en
dc.subject.other | Video annotations | en
dc.subject.other | Video streams | en
dc.subject.other | Object recognition | en
dc.subject.other | Semantics | en
dc.subject.other | Servers | en
dc.subject.other | Video streaming | en
dc.title | Human action annotation, modeling and analysis based on implicit user interaction | en
heal.type | journalArticle | en
heal.identifier.primary | 10.1007/s11042-009-0369-6 | en
heal.identifier.secondary | http://dx.doi.org/10.1007/s11042-009-0369-6 | en
heal.language | English | en
heal.publicationDate | 2010 | en
heal.abstract | This paper proposes an integrated framework for analyzing human actions in video streams. Unlike most current approaches, which rely solely on automatic spatiotemporal analysis of sequences, the proposed method introduces the implicit user-in-the-loop concept for dynamically mining semantics and annotating video streams. This work sets a new and ambitious goal: to recognize, model and properly use the "average user's" selections, preferences and perception for dynamically extracting content semantics. The proposed approach is expected to add significant value to the hundreds of billions of non-annotated or inadequately annotated video streams existing on the Web, file servers, databases, etc. Furthermore, expert annotators can gain important knowledge relevant to user preferences, selections, styles of searching and perception. © 2009 Springer Science+Business Media, LLC. | en
heal.publisher | SPRINGER | en
heal.journalName | Multimedia Tools and Applications | en
dc.identifier.doi | 10.1007/s11042-009-0369-6 | en
dc.identifier.isi | ISI:000279198900010 | en
dc.identifier.volume | 50 | en
dc.identifier.issue | 1 | en
dc.identifier.spage | 199 | en
dc.identifier.epage | 225 | en