
FrameRank: A Text Processing Approach to Video Summarization

2019-04-11
Zhuo Lei, Chao Zhang, Qian Zhang, Guoping Qiu

Abstract

Video summarization has been extensively studied in the past decades. However, user-generated video summarization is much less explored because there are no large-scale video datasets in which human-generated video summaries are unambiguously defined and annotated. To this end, we propose a user-generated video summarization dataset, UGSum52, consisting of 52 videos (207 minutes). Because of the subjectivity of user-generated video summarization, we manually annotate 25 summaries for each video, yielding 1,300 summaries in total. To the best of our knowledge, it is currently the largest dataset for user-generated video summarization. Based on this dataset, we present FrameRank, an unsupervised video summarization method that employs a frame-to-frame affinity graph to identify coherent and informative frames with which to summarize a video. We use a Kullback-Leibler (KL) divergence-based graph to rank temporal segments according to the amount of semantic information contained in their frames. We demonstrate the effectiveness of our method on three datasets, SumMe, TVSum, and UGSum52, and show that it achieves state-of-the-art results.
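The abstract only sketches the method, so below is a minimal, hypothetical Python illustration of the general idea it describes: build a frame-to-frame affinity graph from a KL-divergence-based similarity and rank frames with a TextRank/PageRank-style iteration. The feature representation, the symmetrization, the exponential similarity mapping, and the damping factor are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL(p || q) between two normalized feature histograms."""
    p = p + eps
    q = q + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def frame_affinity_graph(frame_features):
    """Frame-to-frame affinity matrix from per-frame feature histograms.

    Affinity is exp(-symmetric KL divergence); this symmetrization and the
    exponential mapping are assumed design choices, not the paper's exact ones.
    """
    n = len(frame_features)
    affinity = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = 0.5 * (kl_divergence(frame_features[i], frame_features[j]) +
                       kl_divergence(frame_features[j], frame_features[i]))
            affinity[i, j] = affinity[j, i] = np.exp(-d)
    return affinity

def rank_frames(affinity, damping=0.85, iters=100, tol=1e-6):
    """TextRank/PageRank-style power iteration over the affinity graph."""
    n = affinity.shape[0]
    row_sums = affinity.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # guard against isolated frames
    transition = affinity / row_sums       # row-stochastic transition matrix
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        new_scores = (1 - damping) / n + damping * (transition.T @ scores)
        if np.abs(new_scores - scores).sum() < tol:
            scores = new_scores
            break
        scores = new_scores
    return scores

# Toy usage: 200 frames, each represented by a 64-bin feature histogram.
rng = np.random.default_rng(0)
features = rng.random((200, 64))
scores = rank_frames(frame_affinity_graph(features))
top_frames = np.argsort(scores)[::-1][:20]  # highest-scoring candidate frames
```

In the paper the ranking is applied to temporal segments rather than individual frames; in this sketch the top-scoring frames simply stand in for such segments.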

URL

http://arxiv.org/abs/1904.05544

PDF

http://arxiv.org/pdf/1904.05544

