Enhancing RAG Performance Through Chunking and Text Splitting Techniques

Authors

  • Aadit Kshirsagar, Engineering, Calsoft Inc, Pune, Maharashtra, India

DOI:

https://doi.org/10.32628/CSEIT2410593

Keywords:

GenAI, Large Language Models, Retrieval Augmented Generation, Chunking

Abstract

In the world of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) has transformed the way we interact with data. Using RAG, these models can draw on new data contexts to answer user queries and surface valuable insights. Behind the impressive capabilities of RAG lies a fundamental pre-processing step known as chunking, which plays a crucial role in the effectiveness of RAG-enhanced models. Chunking breaks large texts or documents into smaller segments, typically of a fixed size, allowing the retriever to focus on one small unit at a time and making the text easier to process and analyse. Finding the ideal chunking strategy can be challenging: experimentation and analysis are decisive, as different chunking strategies suit different use cases. This paper, aimed primarily at readers exploring RAG tuning techniques for higher accuracy, surveys the various chunking techniques and their practical implementation through code snippets. After analysing the results across several use cases, the paper recommends the best-suited chunking strategy for each. Finally, it concludes by discussing the future potential and expanding scope of RAG-enhanced applications.
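To make the abstract's description concrete, the fixed-size chunking it refers to can be sketched in a few lines of Python. This is a minimal illustration, not the paper's own implementation; the function name `chunk_text` and its parameter defaults are assumptions chosen for the example:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlapping windows keep sentence fragments that straddle a chunk
    boundary visible to the retriever in at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # stride between consecutive chunk starts
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Libraries commonly used in RAG pipelines expose the same idea through configurable text splitters (character-, token-, or structure-aware), but the snippet above shows the underlying mechanics that the paper's chunking strategies build upon.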


Published

01-11-2024

Section

Research Articles
