Observability
Capture complete traces of your LLM applications and agents to get deep insights into their behavior. Built on OpenTelemetry, it supports any LLM provider and agent framework.
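For example, tracing an OpenAI call can be a one-liner via autologging. The sketch below assumes the OpenAI SDK, an `OPENAI_API_KEY` in the environment, and a tracking server at `http://127.0.0.1:5000`; the model and experiment names are illustrative.

```python
import mlflow
import openai

# Point MLflow at your tracking server and pick an experiment
# (both values here are illustrative).
mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("llm-observability-demo")

# One line turns on automatic tracing for every OpenAI call.
mlflow.openai.autolog()

client = openai.OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize OpenTelemetry in one line."}],
)
print(response.choices[0].message.content)
# The full request/response trace is now visible in the MLflow UI.
```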

Evaluation
Run systematic evaluations, track quality metrics over time, and catch regressions before they reach production. Choose from 50+ built-in metrics and LLM judges, or define your own with highly flexible APIs.
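As a minimal sketch, `mlflow.evaluate` can score a static dataset of model outputs with built-in metrics; `model_type="text"` enables text metrics such as toxicity, some of which pull in extra packages (e.g. `evaluate`, `textstat`). The data here is illustrative, and newer releases also offer a dedicated GenAI evaluation API under `mlflow.genai`.

```python
import mlflow
import pandas as pd

# A tiny, illustrative dataset with model outputs already computed.
eval_df = pd.DataFrame(
    {
        "inputs": ["What is MLflow?", "What does the AI Gateway do?"],
        "predictions": [
            "MLflow is an open source MLOps platform.",
            "It routes LLM requests through one OpenAI-compatible API.",
        ],
    }
)

with mlflow.start_run():
    results = mlflow.evaluate(
        data=eval_df,
        predictions="predictions",  # column holding model outputs
        model_type="text",          # enables built-in text metrics
    )
    print(results.metrics)  # metrics are also logged to the run
```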

Prompt Management & Optimization
Version, test, and deploy prompts with full lineage tracking. Automatically optimize prompts with state-of-the-art algorithms to improve performance.
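A sketch of the registry round trip, assuming the prompt registry APIs introduced in MLflow 2.21 (`mlflow.register_prompt` / `mlflow.load_prompt`; newer releases expose equivalents under `mlflow.genai`). The prompt name and template are illustrative.

```python
import mlflow

# Register version 1 of a prompt; double-brace fields are template variables.
mlflow.register_prompt(
    name="support-answer",
    template="Answer the customer question politely: {{question}}",
    commit_message="Initial version",
)

# Load a pinned version later; the registry tracks full lineage.
prompt = mlflow.load_prompt("prompts:/support-answer/1")
print(prompt.format(question="How do I reset my password?"))
```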

AI Gateway
A single API gateway for all LLM providers. Route requests, manage rate limits, handle fallbacks, and control costs through one OpenAI-compatible interface.
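Because the interface is OpenAI-compatible, the stock OpenAI SDK can talk to the gateway by overriding its base URL. The URL, path, and endpoint name below are placeholders; the exact values depend on your gateway version and configuration.

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # placeholder: your gateway URL
    api_key="unused",                     # provider keys live in the gateway config
)

response = client.chat.completions.create(
    model="chat",  # the gateway endpoint name, not a provider model id
    messages=[{"role": "user", "content": "Hello from behind the gateway!"}],
)
print(response.choices[0].message.content)
```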

Open Source
100% open source under Apache 2.0 license. Forever free, no strings attached.
No Vendor Lock-in
Works with any cloud, framework, or tool you use. Switch vendors anytime.
Production Ready
Battle-tested at scale by Fortune 500 companies and thousands of teams.
Full Visibility
Complete tracking and observability for all your AI applications and agents.
Community
20K+ GitHub stars, 900+ contributors. Join the fastest-growing MLOps community.
Integrations
Works out of the box with LangChain, OpenAI, PyTorch, and 100+ AI frameworks.
Start MLflow Server
One command to get started. A Docker-based setup is also available.
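Concretely, the one command is the `mlflow server` CLI; the host and port below are common defaults and can be adjusted for your environment. Once the server is running, point your code at it:

```python
# In a terminal, one command starts the tracking server:
#   mlflow server --host 127.0.0.1 --port 5000
#
# Then connect to it from Python:
import mlflow

mlflow.set_tracking_uri("http://127.0.0.1:5000")
```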
Enable Logging
Add a few lines of code to start capturing traces, metrics, and parameters.
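For classic ML workloads those few lines can be a single `mlflow.autolog()` call, and values can also be logged explicitly; the parameter and metric below are illustrative.

```python
import mlflow

mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("quickstart")

# autolog() instruments many popular ML libraries automatically.
mlflow.autolog()

# Values can also be logged explicitly inside a run.
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)
```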
Run Your Code
Run your ML/LLM code as usual. MLflow logs the traces, and you can explore them in the MLflow UI.
Is MLflow really free and open source?
Yes! MLflow is 100% open source under the Apache 2.0 license. You can use it for any purpose, including commercial applications, without any licensing fees. The project is backed by the Linux Foundation, ensuring it remains open and community-driven.
