Text Summarization using Extractive and Abstractive Techniques

Authors

  • Chintan A. Shah  Department of Computer Engineering, SLRCTE, University of Mumbai, Maharashtra, India
  • Prof. Neelam Phadnis  Department of Computer Engineering, SLRCTE, University of Mumbai, Maharashtra, India

DOI:

https://doi.org/10.32628/CSEIT228361

Keywords:

Abstractive Summarization, BART, BERT, Extractive Summarization, NLP, RoBERTa, T5, Pegasus

Abstract

There have been considerable advances in text summarization over the last few years. There are two broad approaches to the task: one based on natural language processing (NLP) and the other on deep learning. Within NLP, the field concerned with the study and manipulation of human language by computers, text summarization is one of the most intriguing and challenging tasks, and the massive growth of information and data has made it critical. Text summarization produces a concise and meaningful summary of text drawn from sources such as books, news stories, research papers, and tweets. The subject of this research is large text documents that are difficult to summarize manually. Summarization is the task of condensing a piece of text into a shorter version, reducing the size of the original while preserving its key informative content and meaning. Because manual text summarization is a time-consuming and inherently tedious process, automating it is gaining popularity and therefore serves as a strong stimulus for academic study.
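
To illustrate the two families of techniques named in the title and keywords, the following is a minimal Python sketch, not taken from the paper: it assumes the Hugging Face transformers library with the facebook/bart-large-cnn checkpoint for the abstractive side, and a simple word-frequency heuristic with NLTK for the extractive side. Both choices are illustrative assumptions, not the authors' implementation.

# Minimal sketch contrasting extractive and abstractive summarization.
# Assumptions (not from the paper): NLTK for tokenization, and the
# facebook/bart-large-cnn checkpoint via the transformers pipeline.
from collections import Counter

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize
from transformers import pipeline

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)


def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Select the highest-scoring source sentences verbatim (word-frequency scoring)."""
    stop_words = set(stopwords.words("english"))
    words = [w.lower() for w in word_tokenize(text)
             if w.isalnum() and w.lower() not in stop_words]
    freq = Counter(words)
    sentences = sent_tokenize(text)
    scores = {s: sum(freq.get(w.lower(), 0) for w in word_tokenize(s))
              for s in sentences}
    top = sorted(sentences, key=scores.get, reverse=True)[:num_sentences]
    # Keep the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)


def abstractive_summary(text: str) -> str:
    """Generate new sentences with a pre-trained sequence-to-sequence model (BART)."""
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    return summarizer(text, max_length=60, min_length=15, do_sample=False)[0]["summary_text"]


if __name__ == "__main__":
    document = (
        "Automatic text summarization condenses a long document into a shorter "
        "version while preserving its key information. Extractive methods select "
        "important sentences directly from the source. Abstractive methods use "
        "models such as BART, T5, or PEGASUS to generate new sentences that "
        "paraphrase the source content."
    )
    print("Extractive:", extractive_summary(document))
    print("Abstractive:", abstractive_summary(document))

The extractive function only copies sentences from the source, while the abstractive function lets a pre-trained model generate new wording; this is the distinction the paper's keywords (BART, T5, PEGASUS versus sentence extraction) point to.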

References

  1. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu, "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
  2. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer, "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", 29 Oct 2019
  3. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
  4. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov, "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
  5. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu, "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization", 18 Dec 2019

Published

2022-06-30

Issue

Volume 8, Issue 3, May-June 2022

Section

Research Articles

How to Cite

[1]
Chintan A. Shah and Prof. Neelam Phadnis, "Text Summarization using Extractive and Abstractive Techniques", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 8, Issue 3, pp. 236-241, May-June 2022. Available at doi: https://doi.org/10.32628/CSEIT228361