dc.contributor.author |
Doulamis, N |
en |
dc.contributor.author |
Ntalianis, K |
en |
dc.date.accessioned |
2014-03-01T02:46:16Z |
|
dc.date.available |
2014-03-01T02:46:16Z |
|
dc.date.issued |
2009 |
en |
dc.identifier.uri |
https://dspace.lib.ntua.gr/xmlui/handle/123456789/32632 |
|
dc.subject |
Implicit media content annotation |
en |
dc.subject |
Multimedia search |
en |
dc.subject |
Visual features |
en |
dc.subject.other |
Media content |
en |
dc.subject.other |
Multimedia search |
en |
dc.subject.other |
On the fly |
en |
dc.subject.other |
Semantic annotations |
en |
dc.subject.other |
Visual feature |
en |
dc.subject.other |
Visual properties |
en |
dc.subject.other |
Image processing |
en |
dc.subject.other |
Imaging systems |
en |
dc.subject.other |
Metadata |
en |
dc.subject.other |
Image retrieval |
en |
dc.title |
On the fly semantic annotation and modelling of multimedia |
en |
heal.type |
conferenceItem |
en |
heal.identifier.primary |
10.1109/IWSSIP.2009.5367705 |
en |
heal.identifier.secondary |
5367705 |
en |
heal.identifier.secondary |
http://dx.doi.org/10.1109/IWSSIP.2009.5367705 |
en |
heal.publicationDate |
2009 |
en |
heal.abstract |
This paper introduces a novel framework for implicit media content annotation by putting the user in the loop of the annotation process. In particular, a set of visual features are extracted to describe the visual properties of the media content. Then, as the user plays with the content by downloading a set of media data, a mechanism associates the user's textual queries with the visual metadata. In this way, we achieve automatic media content annotation of the untagged media data by taking into account the "average" user's selection. ©2009 IEEE. |
en |
heal.journalName |
2009 16th International Conference on Systems, Signals and Image Processing, IWSSIP 2009 |
en |
dc.identifier.doi |
10.1109/IWSSIP.2009.5367705 |
en |
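Note: the abstract above describes an implicit annotation loop in which users' textual queries are linked to the visual metadata of the media they download, and those links are then propagated to untagged items. The following is a minimal illustrative sketch of that idea, not the authors' actual method; the feature representation, cosine similarity, and term-weighting scheme are assumptions made for the example.

# Illustrative sketch: associate query terms with visual metadata of
# downloaded items, then propagate the most common terms to untagged
# items that are visually similar ("average" user selection).
from collections import defaultdict
import math

def cosine(a, b):
    # Simple cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ImplicitAnnotator:
    def __init__(self):
        self.visual = {}                                            # media id -> visual feature vector
        self.term_counts = defaultdict(lambda: defaultdict(int))    # media id -> query term -> count

    def add_media(self, media_id, features):
        self.visual[media_id] = features

    def record_interaction(self, query, downloaded_ids):
        # Each download implicitly links the query terms to the item's visual metadata.
        for term in query.lower().split():
            for mid in downloaded_ids:
                self.term_counts[mid][term] += 1

    def annotate(self, untagged_id, top_k=3):
        # Score candidate terms by the visual similarity of the untagged item
        # to items that users have already selected for those terms.
        scores = defaultdict(float)
        for mid, terms in self.term_counts.items():
            sim = cosine(self.visual[untagged_id], self.visual[mid])
            for term, count in terms.items():
                scores[term] += sim * count
        return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

if __name__ == "__main__":
    ann = ImplicitAnnotator()
    ann.add_media("img1", [0.9, 0.1, 0.0])     # toy colour-histogram-like features
    ann.add_media("img2", [0.8, 0.2, 0.1])
    ann.add_media("new",  [0.85, 0.15, 0.05])  # untagged item
    ann.record_interaction("sunset beach", ["img1", "img2"])
    print(ann.annotate("new"))                 # propagated tags, e.g. ['sunset', 'beach']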