Multimodal Abstractive Summarization for How2 Videos
Abstract
In this paper, we study abstractive summarization for open-domain videos. Unlike traditional text news summarization, the goal is less to "compress" text information than to provide a fluent textual summary of information collected and fused from different source modalities, in our case video and audio transcripts (or text). We show how a multi-source sequence-to-sequence model with hierarchical attention can integrate information from different modalities into a coherent output, compare various models trained with different modalities, and present pilot experiments on the How2 corpus of instructional videos. We also propose a new evaluation metric (Content F1) for the abstractive summarization task that measures the semantic adequacy of summaries rather than their fluency, which is already covered by metrics like ROUGE and BLEU.
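The multi-source model with hierarchical attention can be read as a two-level scheme: each modality gets its own attention distribution, and a second attention then weighs the resulting per-modality context vectors. Below is a minimal PyTorch sketch of that idea; the dot-product scoring, layer sizes, and all names (`HierarchicalAttention`, `text_proj`, and so on) are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: two-level attention over text and video encoder states.
# Shapes, scoring, and names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttention(nn.Module):
    def __init__(self, dec_dim: int, text_dim: int, video_dim: int):
        super().__init__()
        # Project each modality into the decoder space so scores are comparable.
        self.text_proj = nn.Linear(text_dim, dec_dim)
        self.video_proj = nn.Linear(video_dim, dec_dim)
        # Second-level scorer that weighs the two per-modality contexts.
        self.modality_score = nn.Linear(dec_dim, 1)

    def _attend(self, query, keys):
        # query: (batch, dec_dim); keys: (batch, seq, dec_dim)
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)   # (batch, seq)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)   # (batch, dec_dim)

    def forward(self, dec_state, text_enc, video_enc):
        # First level: one attention context per modality.
        text_ctx = self._attend(dec_state, self.text_proj(text_enc))
        video_ctx = self._attend(dec_state, self.video_proj(video_enc))
        # Second level: attend over the stacked per-modality contexts.
        contexts = torch.stack([text_ctx, video_ctx], dim=1)      # (batch, 2, dec_dim)
        mod_weights = F.softmax(self.modality_score(contexts).squeeze(2), dim=1)
        fused = torch.bmm(mod_weights.unsqueeze(1), contexts).squeeze(1)
        return fused, mod_weights

# Toy usage: a 256-d decoder state attending over 10 text and 5 video states.
attn = HierarchicalAttention(dec_dim=256, text_dim=256, video_dim=2048)
fused, weights = attn(torch.zeros(2, 256), torch.zeros(2, 10, 256), torch.zeros(2, 5, 2048))
```

Content F1 can likewise be sketched as an F1 score over the content words shared between a generated summary and its reference. The paper computes it over a monolingual alignment (obtained with the METEOR toolkit) after removing function words; the simplified version below uses exact bag-of-words overlap instead, and the `STOP_WORDS` set and whitespace tokenizer are stand-in assumptions.

```python
# Hedged sketch of a Content F1-style score: bag-of-words F1 over content words.
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "you"}  # assumption

def content_f1(hypothesis: str, reference: str) -> float:
    hyp = Counter(w for w in hypothesis.lower().split() if w not in STOP_WORDS)
    ref = Counter(w for w in reference.lower().split() if w not in STOP_WORDS)
    overlap = sum((hyp & ref).values())  # shared content words, with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Fluency-only differences barely move the score; missing content lowers recall.
print(content_f1("learn to play guitar chords",
                 "this video shows how to play guitar chords"))  # ~0.55
```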
Forward citations
Cited by 1 Pith paper
- Multimodal Abstractive Summarization of Instructional Videos with Vision-Language Models
  ClipSum shows that frozen CLIP features outperform traditional CNN features and fine-tuned CLIP for instructional video summarization on YouCook2.