Retrieval Augmented Generation (RAG) for LLMs
Sukhada Kulkarni, Siyu Qian, Xinling, Antoni Liria Sala
Large Language Models (LLMs) have revolutionized natural language processing, but they come with limitations such as hallucinations and outdated knowledge. Retrieval-Augmented Generation (RAG) is a practical approach to mitigating these issues by integrating external knowledge retrieval into the LLM generation process.
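At its core, a RAG pipeline retrieves documents relevant to a query and injects them into the LLM prompt before generation. The following minimal sketch illustrates this flow; the toy bag-of-words similarity, the example corpus, and the function names (`retrieve`, `build_prompt`) are illustrative assumptions, not part of any particular library (a real pipeline would use learned embeddings and an actual LLM call):

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real pipeline would use a learned dense embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical example corpus.
documents = [
    "RAG augments an LLM prompt with retrieved documents.",
    "Transformers use self-attention over token sequences.",
    "Vector databases store embeddings for similarity search.",
]

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Prepend the retrieved context so the LLM can ground its answer
    # in external knowledge instead of relying only on its parameters.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How does RAG help an LLM?", documents)
print(prompt)  # This prompt would then be sent to the LLM.
```

Because the model answers from the supplied context rather than from parametric memory alone, this pattern reduces hallucinations and lets the knowledge base be updated without retraining.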
This tutorial will introduce the core concepts of RAG, walk through its key components, and provide a hands-on session for building a complete RAG pipeline. We will also cover advanced techniques, such as hybrid search, re-ranking, ensemble retrieval, and benchmarking. By the end of this tutorial, participants will be equipped with both the theoretical understanding and practical skills needed to build robust RAG pipelines.
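One technique from the advanced portion, ensemble (hybrid) retrieval, can be sketched with Reciprocal Rank Fusion, a standard way to merge ranked lists from different retrievers (e.g. a keyword retriever such as BM25 and a dense vector retriever). The document IDs and retriever sources below are hypothetical placeholders:

```python
def reciprocal_rank_fusion(rankings, k=60):
    # Each document's fused score is the sum over the input ranked lists
    # of 1 / (k + rank); k=60 is the commonly used default constant.
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Return documents ordered by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc_a", "doc_c", "doc_b"]  # e.g. from a BM25 retriever
vector_hits = ["doc_b", "doc_a", "doc_d"]   # e.g. from dense embeddings
fused = reciprocal_rank_fusion([keyword_hits, vector_hits])
print(fused)  # "doc_a" ranks first: it places highly in both lists.
```

The fused list can then be passed to a re-ranker (such as a cross-encoder) before prompt construction; fusion rewards documents that multiple retrievers agree on, which is why hybrid search often outperforms either retriever alone.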