Tracing Quickstart

This quickstart guide will walk you through setting up a simple GenAI application with MLflow Tracing. In less than 10 minutes, you'll enable tracing, run a basic application, and explore the generated traces in the MLflow UI.

Prerequisites

Make sure you have an MLflow server running. If you don't have one yet, follow the steps below to start it.

Python Environment: Python 3.10+

For the fastest setup, you can install the mlflow Python package via pip and start the MLflow server locally.

bash
pip install --upgrade mlflow
# The server and UI are available at http://localhost:5000 by default.
mlflow server

Create an MLflow Experiment

The traces your GenAI application will send to the MLflow server are grouped into MLflow experiments. We recommend creating one experiment for each GenAI application.

Let's create a new MLflow experiment using the MLflow UI so that you can start sending your traces.

New Experiment
  1. Navigate to the MLflow UI in your browser at http://localhost:5000.
  2. Click on the Create button at the top right.
  3. Enter a name for the experiment and click "Create".

You can leave the Artifact Location field blank for now. It is an advanced configuration to override where MLflow stores experiment data.
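
If you prefer to script this step, you can skip the UI: mlflow.set_experiment creates the experiment on first use and makes it the active one. A minimal sketch, assuming the server from above is running on localhost:

python
import mlflow

# Point the client at your local MLflow server.
mlflow.set_tracking_uri("http://localhost:5000")

# Creates the experiment if it doesn't exist yet, then sets it
# as the active experiment for subsequent traces.
mlflow.set_experiment("My Application")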

Dependency

To connect your GenAI application to the MLflow server, you will need to install the MLflow client SDK.

bash
pip install --upgrade mlflow "openai>=1.0.0"
info

While this guide features an example using the OpenAI SDK, the same steps apply to other LLM providers, including Anthropic, Google, Bedrock, and many others.

For a comprehensive list of LLM providers supported by MLflow, see the LLM Integrations Overview.
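
As a sketch of what switching providers looks like: each integration ships its own autolog flavor in recent MLflow releases, so enabling tracing is typically a one-line change (module availability varies by MLflow version, so check the integrations overview for yours):

python
import mlflow

# Each provider integration exposes its own autolog entry point.
# These modules exist in recent MLflow releases; names may vary by version.
mlflow.anthropic.autolog()  # trace Anthropic SDK calls
mlflow.gemini.autolog()  # trace Google Gemini SDK calls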

Start Tracing

Once your experiment is created, you're ready to connect to the MLflow server and begin sending traces from your GenAI application.

python
import mlflow
from openai import OpenAI

# Specify the tracking URI for the MLflow server.
mlflow.set_tracking_uri("http://localhost:5000")

# Specify the experiment you just created for your GenAI application.
mlflow.set_experiment("My Application")

# Enable automatic tracing for all OpenAI API calls.
mlflow.openai.autolog()

# The OpenAI client reads your API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

# The trace of the following call is sent to the MLflow server.
client.chat.completions.create(
    model="o4-mini",
    messages=[
        {"role": "system", "content": "You are a helpful weather assistant."},
        {"role": "user", "content": "What's the weather like in Seattle?"},
    ],
)
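
Autologging captures the OpenAI call itself. If you also want your surrounding application logic to show up in the same trace, MLflow's @mlflow.trace decorator wraps any Python function as a parent span; the get_weather_report function below is a hypothetical example that reuses the client from above:

python
@mlflow.trace
def get_weather_report(city: str) -> str:
    # The autologged OpenAI call becomes a child span of this function.
    response = client.chat.completions.create(
        model="o4-mini",
        messages=[
            {"role": "system", "content": "You are a helpful weather assistant."},
            {"role": "user", "content": f"What's the weather like in {city}?"},
        ],
    )
    return response.choices[0].message.content


get_weather_report("Seattle")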

View Your Traces on the MLflow UI

After running the code above, open the MLflow UI, select the "My Application" experiment, and then open the "Traces" tab. You should see the newly created trace.

Single Trace
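
If you'd rather inspect traces programmatically, recent MLflow versions also provide a search API. A minimal sketch, assuming the same tracking URI and experiment as above:

python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("My Application")

# Returns the experiment's traces as a pandas DataFrame.
traces = mlflow.search_traces(max_results=5)
print(traces.head())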

Next Step

Congrats on sending your first trace with MLflow! Now that you've got the basics working, here is the recommended next step to deepen your understanding of tracing: