Text Summarization using Extractive and Abstractive Techniques
DOI: https://doi.org/10.32628/CSEIT228361

Keywords: Abstractive Summarization, BART, BERT, Extractive Summarization, NLP, RoBERTa, T5, Pegasus

Abstract
Text summarization has advanced considerably over the last few years. There are two broad approaches: one based on classical natural language processing (NLP) techniques and the other based on deep learning. Within NLP, the field concerned with how computers study and manipulate human language, text summarization is one of the most intriguing and challenging tasks, and the massive growth of information and data has made it critical. Text summarization produces a concise and meaningful summary of text drawn from sources such as books, news stories, research papers, and tweets. The subject of this research is large text documents that are difficult to summarize manually. Summarization is the task of condensing a piece of text into a shorter version, reducing the size of the original while preserving its key informative content and meaning. Because manual text summarization is a time-consuming and inherently tedious process, automating it is gaining popularity and serves as a strong stimulus for academic study.
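To make the two approaches concrete, the sketches below illustrate each one; they are not the implementation used in the paper. The first is a minimal frequency-based extractive summarizer in plain Python: it scores each sentence by the corpus frequency of its words and keeps the top-scoring sentences in their original order.

```python
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Frequency-based extractive summary: pick the highest-scoring sentences."""
    # Split into sentences on whitespace that follows ., !, or ?
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Word frequencies over the whole document.
    freq = Counter(re.findall(r"\w+", text.lower()))
    # Score each sentence as the sum of its word frequencies.
    scores = {
        i: sum(freq[w] for w in re.findall(r"\w+", s.lower()))
        for i, s in enumerate(sentences)
    }
    # Take the top-scoring sentence indices, then restore document order.
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:num_sentences])
    return " ".join(sentences[i] for i in top)
```

The second sketch shows abstractive summarization with a pretrained sequence-to-sequence model via the Hugging Face transformers pipeline. The model name facebook/bart-large-cnn and the length limits are illustrative assumptions, not choices reported in the paper.

```python
from transformers import pipeline

# Load a pretrained BART model fine-tuned for summarization (assumed model name).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Text summarization condenses a document into a shorter version while "
    "preserving its key informative content and overall meaning."
)

# max_length / min_length bound the length of the generated summary.
result = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```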
References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
- Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer, "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
- Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
- Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu, "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization"
License
Copyright (c) IJSRCSEIT

This work is licensed under a Creative Commons Attribution 4.0 International License.