Perceptually-tuned multiscale color-texture segmentation
Junqing Chen, Thrasyvoulos N. Pappas, et al.
ICIP 2004
The work presents the first effort to automatically annotate the semantic meanings of temporal video patterns obtained through unsupervised discovery. This problem is interesting in domains where neither the perceptual patterns nor the semantic concepts have simple structures. The patterns in video are modeled with hierarchical hidden Markov models (HHMMs), with efficient algorithms for learning the parameters, the model complexity, and the relevant features; the meanings are drawn from words in the speech transcript of the video. The pattern-word association is obtained via co-occurrence analysis and statistical machine translation models. Promising results are obtained through extensive experiments on more than 20 hours of TRECVID news videos: video patterns associated with distinct topics such as El Niño and politics are identified, and the HHMM temporal structure model compares favorably to a non-temporal clustering algorithm.
Ashish Jagmohan, Krishna Ratakonda
ICIP 2004
A.B. Benitez, J.R. Smith, et al.
Internet Multimedia Management Systems Conference 2000
J.R. Smith, S.-F. Chang
ICIP 1997