Leveraging BERT for Extractive Text Summarization on Lectures | Research Paper Walkthrough
#bert #textsummarization #researchpaperwalkthrough #nlp

Automatic summarization is the process of computationally shortening a set of data to create a subset that represents the most important or relevant information in the original content. Extractive summarization can be seen as the task of ranking and scoring sentences in the document based on certain metrics and then picking the top-k sentences as the representative summary.

⏩ Abstract: In the last two decades, automatic extractive text summarization on lectures has been shown to be a useful tool for collecting key phrases and sentences that best represent the content. However, many current approaches rely on dated techniques, producing sub-par outputs or requiring several hours of manual tuning to produce meaningful results. Recently, new machine learning architectures have provided mechanisms for extractive summarization through the clustering of output embeddings from deep learning models. This paper reports on a project called Lecture Summarization Service, a Python-based RESTful service that utilizes the BERT model for text embeddings and KMeans clustering to identify the sentences closest to the centroids for summary selection. The purpose of the service is to provide students a utility that can summarize lecture content to their desired number of sentences. On top of the summarization work, the service also includes lecture and summary management, storing content in the cloud so it can be used for collaboration. While the results of utilizing BERT for extractive summarization were promising, there were still areas where the model struggled, providing future research opportunities for further improvement.
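The embed-cluster-select idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy 2-D vectors in the usage example stand in for real BERT sentence embeddings, and a plain NumPy KMeans replaces a library clusterer. The summary is the sentence nearest each cluster centroid, in original document order.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain KMeans: initialize centroids from k random points,
    # then alternate assignment and centroid-update steps.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None] - centroids[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def summarize(sentences, embeddings, k):
    # Cluster the sentence embeddings into k groups; pick the sentence
    # closest to each centroid and return the picks in document order.
    X = np.asarray(embeddings, dtype=float)
    centroids = kmeans(X, k)
    picked = sorted({int(np.argmin(((X - c) ** 2).sum(-1)))
                     for c in centroids})
    return [sentences[i] for i in picked]

# Toy usage: two well-separated "topics"; k = desired summary length.
sentences = ["a0", "a1", "a2", "b0", "b1", "b2"]
embeddings = [[0, 0], [0.1, 0], [0.2, 0], [5, 5], [5.1, 5], [5.2, 5]]
summary = summarize(sentences, embeddings, k=2)
```

In the actual service, the embeddings would come from BERT (e.g. pooled or averaged hidden states per sentence), and k is the number of sentences the student requests.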
⏩ OUTLINE:
0:00 - Quick Refresher on Text Summarization and BERT
6:20 - Intro and Overview
9:35 - Method
17:12 - Ensemble Models
18:48 - My thoughts and takeaways

⏩ Paper: https://arxiv.org/abs/1906.04165
⏩ Author: Derek Miller
⏩ Organisation: Georgia Institute of Technology, Atlanta, Georgia

⏩ IMPORTANT LINKS:
BERT - https://arxiv.org/abs/1810.04805
TextRank - https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf
Text Summarization Survey - https://arxiv.org/abs/1707.02268
Extractive Text Summarization Survey - https://ieeexplore.ieee.org/document/7944061

*********************************************
⏩ Youtube - https://youtube.com/channel/UCoz8NrwgL7U9535VNc0mRPA
⏩ Blog - https://prakhartechviz.blogspot.com
⏩ LinkedIn - https://linkedin.com/in/prakhar21
⏩ Medium - https://medium.com/@prakhar.mishra
⏩ GitHub - https://github.com/prakhar21
*********************************************

Please feel free to share the content and subscribe to my channel :)
⏩ Subscribe - https://youtube.com/channel/UCoz8NrwgL7U9535VNc0mRPA

Tools I use for making videos :)
⏩ iPad - https://tinyurl.com/y39p6pwc
⏩ Apple Pencil - https://tinyurl.com/y5rk8txn
⏩ GoodNotes - https://tinyurl.com/y627cfsa

#techviz #datascienceguy #transformers #pytorch #huggingface #naturallanguageprocessing #clustering
