AAAI Publications, Thirtieth AAAI Conference on Artificial Intelligence

Reading the Videos: Temporal Labeling for Crowdsourced Time-Sync Videos Based on Semantic Embedding
Guangyi Lv, Tong Xu, Enhong Chen, Qi Liu, Yi Zheng


Abstract


Recent years have witnessed a boom in online media sharing, which raises significant challenges for effective management and retrieval. Although considerable effort has been made, precise retrieval of video shots on specific topics has been largely ignored. At the same time, thanks to the popularity of novel time-sync comments, or so-called "bullet-screen comments", video semantics can now be combined with timestamps to support further research on temporal video labeling. In this paper, we propose a novel video understanding framework to assign temporal labels to highlighted video shots. Specifically, since bullet-screen comments are informally expressed, we first propose a temporal deep structured semantic model (T-DSSM) that represents comments as semantic vectors by exploiting their temporal correlation. Video highlights are then recognized and labeled from these semantic vectors in a supervised way. Extensive experiments on a real-world dataset show that our framework effectively labels video highlights, outperforming baselines by a significant margin, which validates the potential of our framework for video understanding as well as for interpreting bullet-screen comments.
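To make the pipeline described above more concrete, the sketch below illustrates the general idea under simplified assumptions: map each time-sync comment to a vector, blend vectors across neighboring time windows to reflect their temporal correlation, and train a supervised classifier to flag highlight windows. This is not the authors' T-DSSM (which is a deep model); the hashed bag-of-words encoder, window sizes, and all names here are illustrative assumptions.

```python
# Toy stand-in for the described pipeline, NOT the paper's T-DSSM:
# (1) embed each bullet-screen comment, (2) smooth embeddings over
# neighboring time windows (temporal correlation), (3) supervised
# classification of "highlight" windows. All parameters are assumed.
import numpy as np

DIM = 64       # embedding dimensionality (assumed)
CONTEXT = 1    # neighboring windows mixed in on each side (assumed)

def embed_comment(text: str, dim: int = DIM) -> np.ndarray:
    """Hashed bag-of-words vector; a placeholder for a learned encoder."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def window_vectors(comments, timestamps, window_sec=10.0):
    """Average comment vectors inside fixed-length windows, then blend each
    window with its neighbors so temporally close comments share semantics."""
    n_windows = int(max(timestamps) // window_sec) + 1
    acc = np.zeros((n_windows, DIM))
    counts = np.zeros(n_windows)
    for text, t in zip(comments, timestamps):
        w = int(t // window_sec)
        acc[w] += embed_comment(text)
        counts[w] += 1
    acc[counts > 0] /= counts[counts > 0, None]
    # temporal smoothing: each window also sees its CONTEXT neighbors
    smoothed = acc.copy()
    for offset in range(1, CONTEXT + 1):
        smoothed[offset:] += acc[:-offset]
        smoothed[:-offset] += acc[offset:]
    return smoothed / (2 * CONTEXT + 1)

def train_highlight_classifier(X, y, lr=0.1, epochs=200):
    """Logistic regression over window vectors; y marks highlight windows."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Usage on toy data with hypothetical highlight labels per window.
comments = ["amazing goal!!", "lol", "what a save", "boring part", "replay please"]
timestamps = [3.0, 5.0, 12.0, 31.0, 14.0]
X = window_vectors(comments, timestamps)
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = train_highlight_classifier(X, y)
print((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int))
```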

Keywords


bullet-screen comment, temporal labeling, semantic embedding
