PEGASUS: Pre-training with Gap-Sentences for Abstractive Summarization | Research Paper Walkthrough

ICML 2020

Dec 05, 2020
#ai #naturallanguageprocessing #summarisation #researchpaperwalkthrough

Automatic text summarization is the task of producing a short, coherent version of a longer document that distills its most important information for a particular user and task. This paper from Google AI introduces a new state-of-the-art approach to abstractive summarisation. Its main contribution is a pre-training objective designed specifically for the summarization task; the authors test their Transformer-based sequence-to-sequence model on 12 relevant datasets. (A toy code sketch of the gap-sentence selection idea follows the paper link below.)

⏩ Support by subscribing to the channel so you don't miss out on any video I upload next - https://youtube.com/channel/UCoz8NrwgL7U9535VNc0mRPA

⏩ Abstract: Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks, including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally, we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.

⏩ OUTLINE:
0:00 - Background
2:23 - PEGASUS base training objectives
3:29 - Abstract
5:39 - Pre-training dataset
6:28 - BART training objective
7:07 - Gap-sentence selection strategy
12:24 - Experiments and results

⏩ Document Title: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
⏩ Document Link: https://arxiv.org/abs/1912.08777
⏩ Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu
⏩ Organisation: Google
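The pre-training objective (gap-sentence generation) selects summary-like sentences from a document, masks them in the input, and trains the model to generate them as one output sequence. Below is a minimal Python sketch of that selection step, not the authors' implementation: it scores each sentence by a simple unigram-overlap (ROUGE-1-style) F1 against the rest of the document, masks the top-scoring sentences, and uses them as the target. The sentence splitter, mask token, and 30% gap ratio are simplifying assumptions.

import re
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between two token lists (a rough stand-in for ROUGE-1)."""
    if not candidate or not reference:
        return 0.0
    overlap = sum((Counter(candidate) & Counter(reference)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(candidate)
    recall = overlap / len(reference)
    return 2 * precision * recall / (precision + recall)

def make_gsg_example(document, gap_ratio=0.3, mask_token="<mask_1>"):
    """Build one (masked_input, target) pair for gap-sentence generation."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document.strip()) if s.strip()]
    tokens = [s.lower().split() for s in sentences]

    # Score each sentence against the rest of the document (independent selection).
    scores = []
    for i, sent in enumerate(tokens):
        rest = [t for j, other in enumerate(tokens) if j != i for t in other]
        scores.append((rouge1_f1(sent, rest), i))

    # Keep the top-scoring ~30% of sentences as "gap" sentences, in document order.
    n_gaps = max(1, round(gap_ratio * len(sentences)))
    selected = sorted(i for _, i in sorted(scores, reverse=True)[:n_gaps])

    masked_input = " ".join(mask_token if i in selected else s for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in selected)
    return masked_input, target

doc = ("PEGASUS is pre-trained with a summarization-like objective. "
       "Important sentences are masked out of the document. "
       "The model must generate those masked sentences from what remains. "
       "This video walks through the paper and its results.")
masked_input, target = make_gsg_example(doc)
print("INPUT :", masked_input)
print("TARGET:", target)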
⏩ IMPORTANT LINKS
Extractive Text Summarisation using BERT: https://www.youtube.com/watch?v=JU6eSLsp6vI&list=PLsAqq9lZFOtV8jYq3JlkqPQUN5QxcWq0f&index=2
Unsupervised Multi-Document Summarization: https://www.youtube.com/watch?v=qOoAlI5hpFk&list=PLsAqq9lZFOtV8jYq3JlkqPQUN5QxcWq0f&index=5
Multi-Document Summarization: https://www.youtube.com/watch?v=1jwUOMQVCo4&list=PLsAqq9lZFOtV8jYq3JlkqPQUN5QxcWq0f&index=6

*********************************************
⏩ YouTube - https://youtube.com/channel/UCoz8NrwgL7U9535VNc0mRPA
⏩ Blog - https://prakhartechviz.blogspot.com
⏩ LinkedIn - https://linkedin.com/in/prakhar21
⏩ Medium - https://medium.com/@prakhar.mishra
⏩ GitHub - https://github.com/prakhar21
*********************************************

Tools I use for making videos :)
⏩ iPad - https://amzn.to/3kA3vuo
⏩ Apple Pencil - https://amzn.to/3kFZFA2
⏩ GoodNotes - https://tinyurl.com/y627cfsa
⏩ Microphone - https://amzn.to/2UEyCuh

About Me: I am Prakhar Mishra and this channel is my passion project. I am currently pursuing my MS (by research) in Data Science. I have three years of industry experience in Data Science and Machine Learning, with a particular focus on Natural Language Processing (NLP).

#techviz #datascienceguy #nlp #machinelearning #textsummarization #research
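If you would like to try the model itself, pre-trained and fine-tuned PEGASUS checkpoints are published on the Hugging Face Hub; the snippet below is an illustrative inference example (the checkpoint name and generation settings are my choices, not part of the paper).

# Requires: pip install torch transformers sentencepiece
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"  # one of several released fine-tuned checkpoints
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

article = ("PEGASUS pre-trains a Transformer encoder-decoder by masking whole "
           "sentences from a document and generating them as a single output "
           "sequence, which closely resembles abstractive summarization.")

batch = tokenizer([article], truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])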
