
Tracing DeepSeek

MLflow Tracing provides automatic tracing capability for DeepSeek models through the OpenAI SDK integration. Since DeepSeek offers an OpenAI-compatible API format, you can use mlflow.openai.autolog() to trace interactions with DeepSeek models.
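
The core setup is a single autolog call. A minimal sketch (the walkthrough below adds the DeepSeek client configuration):

```python
import mlflow

# Autolog instruments the OpenAI SDK, so calls made through an OpenAI
# client pointed at DeepSeek's endpoint are traced as well
mlflow.openai.autolog()
```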

Tracing via autolog

MLflow tracing automatically captures the following information about DeepSeek calls (a short sketch after this list shows how to inspect a captured trace programmatically):

  • Prompts and completion responses
  • Latencies
  • Token usage
  • Model name
  • Additional request parameters such as temperature and max_completion_tokens, if specified
  • Function calls, if returned in the response
  • Built-in tool calls (web search, file search, computer use, etc.), if returned in the response
  • Any exception, if raised
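
A minimal sketch of programmatic inspection, assuming an MLflow 3 client and a traced call made earlier in the same process:

```python
import mlflow

# Fetch the most recent trace produced in this process
trace_id = mlflow.get_last_active_trace_id()
trace = mlflow.get_trace(trace_id)

# Each autologged DeepSeek call becomes a span; dump names, types, and attributes
for span in trace.data.spans:
    print(span.name, span.span_type)
    print(span.attributes)  # model name, token usage, etc., as recorded by autolog
```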

Getting Started

1. Install dependencies

```bash
pip install mlflow openai
```

2. Start MLflow server

If you have a local Python environment (3.10 or newer), you can start the MLflow server locally using the mlflow CLI command.

```bash
mlflow server
```
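
If the default port 5000 is already in use, the standard --host/--port flags let you bind elsewhere (remember to point the tracking URI in the next step at the same address):

```bash
mlflow server --host 127.0.0.1 --port 8080
```
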
3. Enable tracing and call DeepSeek

```python
import openai
import mlflow

# Enable auto-tracing for OpenAI (works with DeepSeek)
mlflow.openai.autolog()

# Optional: Set a tracking URI and an experiment
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("DeepSeek")

# Initialize the OpenAI client with the DeepSeek API endpoint
client = openai.OpenAI(
    base_url="https://api.deepseek.com",
    api_key="<your_deepseek_api_key>",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
```

4. View traces in MLflow UI

Browse to your MLflow UI (for example, http://localhost:5000) and open the DeepSeek experiment to see traces for the calls above.
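
You can also fetch traces programmatically. A minimal sketch, assuming the server and experiment from the previous steps (mlflow.search_traces returns a pandas DataFrame):

```python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("DeepSeek")

# Search traces in the active experiment
traces = mlflow.search_traces(max_results=5)
print(traces.head())
```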

(Screenshot: DeepSeek traces in the MLflow UI)

See Next Steps to learn about more MLflow features such as user feedback tracking, prompt management, and evaluation.

Streaming and Async Support

MLflow supports tracing for streaming and async DeepSeek APIs. Visit the OpenAI Tracing documentation for example code snippets covering streaming and async calls made through the OpenAI SDK.
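
For instance, a streaming call requires no extra tracing code. A sketch, reusing the client and autolog setup from the steps above (MLflow is expected to aggregate the streamed chunks into a single traced completion):

```python
# stream=True works with autolog enabled; chunks are printed as they arrive
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```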

Combine with frameworks or manual tracing

The automatic tracing capability in MLflow is designed to work seamlessly with the manual tracing SDK and other framework integrations. The example below wraps the DeepSeek call in a parent span created by the @mlflow.trace decorator and attaches session/user metadata to the resulting trace.

```python
from openai import OpenAI

import mlflow
from mlflow.entities import SpanType

# Initialize the OpenAI client with the DeepSeek API endpoint
client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key="<your_deepseek_api_key>",
)


# Create a parent span for the DeepSeek call
@mlflow.trace(span_type=SpanType.CHAIN)
def answer_question(question: str):
    messages = [{"role": "user", "content": question}]
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=messages,
    )

    # Attach session/user metadata to the trace
    mlflow.update_current_trace(
        metadata={
            "mlflow.trace.session": "session-12345",
            "mlflow.trace.user": "user-a",
        }
    )
    return response.choices[0].message.content


answer = answer_question("What is the capital of France?")
```

Running this example produces a trace in which the autologged DeepSeek LLM span is nested under the parent span created by the @mlflow.trace decorator.
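
If a decorator does not fit your code structure, the context-manager form achieves the same nesting. A sketch using mlflow.start_span, reusing the client and imports from the example above:

```python
# Equivalent parent span using the context-manager API instead of the decorator
with mlflow.start_span(name="answer_question", span_type=SpanType.CHAIN) as span:
    question = "What is the capital of France?"
    span.set_inputs({"question": question})
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": question}],
    )
    span.set_outputs(response.choices[0].message.content)
```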

(Screenshot: DeepSeek trace with a manual parent span in the MLflow UI)

Next steps