Hierarchical Transformers for Long Document Classification (Research Paper Walkthrough)

May 20, 2021
#bert #textclassification #nlp

This paper extends BERT to long document classification in NLP. The authors propose two BERT variants, RoBERT and ToBERT, as hierarchical enhancements, and obtain a significant improvement over the baseline models.

⏩ Abstract: BERT, which stands for Bidirectional Encoder Representations from Transformers, is a recently introduced language representation model based upon the transfer learning paradigm. We extend its fine-tuning procedure to address one of its major limitations - applicability to inputs longer than a few hundred words, such as transcripts of human call conversations. Our method is conceptually simple. We segment the input into smaller chunks and feed each of them into the base model. Then, we propagate each output through a single recurrent layer, or another transformer, followed by a softmax activation. We obtain the final classification decision after the last segment has been consumed. We show that both BERT extensions are quick to fine-tune and converge after as little as 1 epoch of training on a small, domain-specific data set. We successfully apply them in three different tasks involving customer call satisfaction prediction and topic classification, and obtain a significant improvement over the baseline models in two of them.

(A minimal code sketch of this segment-then-aggregate idea follows at the end of this description.)

Please feel free to share the content and subscribe to my channel :)
⏩ Subscribe - https://youtube.com/channel/UCoz8NrwgL7U9535VNc0mRPA?sub_confirmation=1

⏩ OUTLINE:
0:00 - Background and Abstract
2:55 - Paper Contributions
3:28 - Overview of BERT
6:31 - Recurrence over BERT (RoBERT)
10:40 - Transformer over BERT (ToBERT)
11:35 - Results
12:08 - My thoughts

⏩ Paper Title: Hierarchical Transformers for Long Document Classification
⏩ Paper: https://arxiv.org/abs/1910.10781
⏩ Code: https://paperswithcode.com/paper/hierarchical-transformers-for-long-document#code
⏩ Authors: Raghavendra Pappagari, Piotr Żelasko, Jesús Villalba, Yishay Carmiel, Najim Dehak
⏩ Organisations: Center for Language and Speech Processing - Johns Hopkins University; Avaya Conversational Intelligence

⏩ IMPORTANT LINKS
Extended Summary of Long Documents - https://www.youtube.com/watch?v=Inc63mLLInA
BERT for Text Summarization - https://www.youtube.com/watch?v=JU6eSLsp6vI
SpanBERT - https://www.youtube.com/watch?v=QUP3rMrA1mk
LSBert: Lexical Simplification using BERT - https://www.youtube.com/watch?v=uhnKsGDyhEg
Long Document Text Summarization using Transformers - https://www.youtube.com/watch?v=2IzXW3Ypks0

*********************************************
If you want to support me financially, which is totally optional and voluntary :)
❤️ You can consider buying me chai (because I don't drink coffee :)) at https://www.buymeacoffee.com/TechvizCoffee
*********************************************
⏩ YouTube - https://www.youtube.com/c/TechVizTheDataScienceGuy
⏩ Blog - https://prakhartechviz.blogspot.com
⏩ LinkedIn - https://linkedin.com/in/prakhar21
⏩ Medium - https://medium.com/@prakhar.mishra
⏩ GitHub - https://github.com/prakhar21
⏩ Twitter - https://twitter.com/rattller
*********************************************
Tools I use for making videos :)
⏩ iPad - https://tinyurl.com/y39p6pwc
⏩ Apple Pencil - https://tinyurl.com/y5rk8txn
⏩ GoodNotes - https://tinyurl.com/y627cfsa

#techviz #datascienceguy #ai #researchpaper #naturallanguageprocessing #transformers #nlproc
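For readers who want to see the shape of the method in code, here is a minimal PyTorch sketch (my own, not the authors' implementation) of the segment-then-aggregate idea: split a long document into fixed-length chunks, embed each chunk with BERT, then run either an LSTM (the RoBERT variant) or a small transformer (the ToBERT variant) over the chunk embeddings before a final classification layer. The chunk length, head sizes, and mean-pooling step are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class HierarchicalBert(nn.Module):
    """Sketch of RoBERT ("lstm" head) / ToBERT ("transformer" head)."""

    def __init__(self, num_classes, head="lstm", num_layers=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        self.head = head
        if head == "lstm":
            # RoBERT: recurrence over the sequence of chunk embeddings
            self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        else:
            # ToBERT: a small transformer over the chunk embeddings
            layer = nn.TransformerEncoderLayer(
                d_model=hidden, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # input_ids: (num_chunks, chunk_len) -- one document split into chunks
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Treat the per-chunk pooled vectors as a sequence of length num_chunks
        chunk_embs = out.pooler_output.unsqueeze(0)  # (1, num_chunks, hidden)
        if self.head == "lstm":
            _, (h, _) = self.encoder(chunk_embs)
            doc_emb = h[-1]               # final hidden state summarizes the doc
        else:
            doc_emb = self.encoder(chunk_embs).mean(dim=1)  # pool over chunks
        return self.classifier(doc_emb)   # logits; softmax is applied in the loss

# Usage: a fast tokenizer with return_overflowing_tokens=True splits one long
# text into a (num_chunks, max_length) batch of chunk token IDs.
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
long_text = "..."  # e.g. a call transcript of several thousand words
enc = tok(long_text, max_length=200, truncation=True, padding="max_length",
          return_overflowing_tokens=True, return_tensors="pt")
model = HierarchicalBert(num_classes=2, head="lstm")
logits = model(enc["input_ids"], enc["attention_mask"])

Training then reduces to ordinary fine-tuning with cross-entropy over these logits, which is consistent with the paper's observation that the extensions converge after as little as one epoch on a small, domain-specific data set.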
