RAG Tutorials
Below is a list of tutorials for RAG. These tutorials are designed to help you get started with RAG evaluation and walk you through concrete examples of evaluating a RAG application that answers questions about the MLflow documentation.
End-to-End LLM RAG Evaluation Tutorial
This notebook, intended for use with the Databricks platform, showcases a full end-to-end example of how to configure, create, and interface with a RAG system. The tutorial uses the MLflow documentation as the corpus of embedded documents that the RAG application will use to answer questions. Using ChromaDB to store the document embeddings and LangChain to orchestrate the RAG application, we’ll use MLflow’s evaluate functionality to evaluate the documents retrieved from our corpus in response to a series of questions.
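To make the moving pieces concrete, here is a minimal sketch of that setup: documentation pages are chunked and embedded into a ChromaDB store via LangChain, wrapped in a retrieval QA chain, and then scored with MLflow’s evaluate API. The URLs, column names, and model choices below are illustrative assumptions rather than the notebook’s exact code, and the imports assume the classic (pre-0.1) LangChain package layout.

```python
import pandas as pd
import mlflow
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# Load and chunk a documentation page (the URL and chunking parameters are illustrative).
docs = WebBaseLoader(["https://mlflow.org/docs/latest/index.html"]).load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)

# Embed the chunks into a Chroma vector store and build a retrieval QA chain on top of it.
vectorstore = Chroma.from_documents(chunks, OpenAIEmbeddings())
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
)

# Wrap the chain as a callable that mlflow.evaluate can drive row by row.
def answer_questions(df: pd.DataFrame) -> pd.Series:
    return df["questions"].apply(lambda q: qa_chain.run(q))

# A tiny hypothetical evaluation set; the real notebook uses a larger question list.
eval_df = pd.DataFrame(
    {"questions": ["What is an MLflow run?", "How do I log a model?"]}
)

with mlflow.start_run():
    results = mlflow.evaluate(
        model=answer_questions,
        data=eval_df,
        model_type="question-answering",  # built-in evaluator for QA-style outputs
        evaluators="default",
    )

print(results.metrics)
```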
If you would like to try this notebook out on Databricks, you can import it directly into your Databricks Workspace. If you would like a local copy to manually import into your Workspace, you can download it here:
Download the notebook
To follow along and see the sections of the notebook guide, click below:
View the Notebook
Question Generation for RAG Tutorial
This notebook is a step-by-step tutorial on how to generate a question dataset with LLMs for retrieval evaluation within RAG. It will guide you through getting a document dataset, generating relevant questions through prompt engineering on LLMs, and analyzing the question dataset. The question dataset can then be used for the subsequent task of evaluating the retriever model, the component of RAG that collects and ranks relevant document chunks based on the user’s question.
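As a rough sketch of the generation step, the snippet below prompts an LLM to write one question per document chunk. The chunk text, prompt wording, and model name are illustrative assumptions rather than the notebook’s exact choices, and the code assumes the openai>=1.0 client with an OPENAI_API_KEY available in the environment.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative document chunks standing in for a real document dataset.
chunks = [
    "MLflow Tracking lets you log parameters, metrics, and artifacts for each run.",
    "The MLflow Model Registry manages model versions and stage transitions.",
]

def generate_question(chunk: str) -> str:
    # Ask the LLM for a question answerable from this chunk alone, so the chunk
    # serves as the ground-truth source document during retrieval evaluation.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{
            "role": "user",
            "content": (
                "Write one question that can be answered using only the "
                f"following documentation excerpt:\n\n{chunk}"
            ),
        }],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

# Pair each chunk with its generated question for later retriever evaluation.
question_df = pd.DataFrame(
    {"chunk": chunks, "question": [generate_question(c) for c in chunks]}
)
```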
If you would like a copy of this notebook to execute in your environment, download the notebook here:
Download the notebook
To follow along and see the sections of the notebook guide, click below:
View the Notebook
Retriever Evaluation Tutorial
This tutorial walks you through a concrete example of how to build and evaluate a RAG application that answers questions about MLflow documentation.
In this tutorial you will learn:
How to prepare an evaluation dataset for your RAG application.
How to call your retriever in the MLflow evaluate API.
How to evaluate a retriever’s ability to retrieve relevant documents for a series of queries using MLflow evaluate, as sketched below.
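Here is a minimal sketch of what that evaluation call can look like, assuming MLflow’s built-in "retriever" model type. The column names, document ids, and the placeholder lookup function are hypothetical stand-ins for a real vector store query, not the notebook’s exact code.

```python
import pandas as pd
import mlflow

# Evaluation dataset: each question is paired with the ids of the documentation
# chunks that actually contain its answer (the ground truth for the retriever).
eval_data = pd.DataFrame(
    {
        "questions": [
            "What is MLflow Tracking?",
            "How do I register a model?",
        ],
        "ground_truth_doc_ids": [
            ["docs/tracking.html"],
            ["docs/model-registry.html"],
        ],
    }
)

def vectorstore_lookup(question: str, k: int) -> list:
    # Placeholder: swap in a real similarity search against your vector store,
    # returning the ids of the top-k retrieved chunks for the question.
    return ["docs/tracking.html", "docs/model-registry.html"][:k]

def retrieve(df: pd.DataFrame) -> pd.Series:
    # The callable mlflow.evaluate drives: one list of retrieved doc ids per row.
    return df["questions"].apply(lambda q: vectorstore_lookup(q, k=3))

with mlflow.start_run():
    results = mlflow.evaluate(
        model=retrieve,
        data=eval_data,
        targets="ground_truth_doc_ids",
        model_type="retriever",
        evaluators="default",
        evaluator_config={"retriever_k": 3},  # k used for the @k metrics
    )

print(results.metrics)  # e.g. precision_at_3, recall_at_3, ndcg_at_3
```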
If you would like a copy of this notebook to execute in your environment, download the notebook here:
Download the notebook
To follow along and see the sections of the notebook guide, click below:
View the Notebook