New Features
Looking to learn about significant new releases in MLflow?
Find the details of major features, changes, and deprecations below.
MLflow Docs Overhaul
The MLflow docs are getting a facelift with added content, tutorials, and guides. Stay tuned for further improvements to the site!
Updated Model Registry UI
A new opt-in Model Registry UI that uses Aliases and Tags for managing model development is now available. Learn more about the new UI workflow in the docs.
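As a quick sketch of the alias and tag APIs that underpin this workflow (the model name, alias, version, and tag below are illustrative):

```python
# A minimal sketch of the alias/tag workflow; names and versions are illustrative.
from mlflow import MlflowClient
import mlflow.pyfunc

client = MlflowClient()

# Point the "champion" alias at version 3 of a registered model.
client.set_registered_model_alias("my-model", alias="champion", version=3)

# Tag a model version, e.g. to record its validation status.
client.set_model_version_tag("my-model", version="3", key="validation_status", value="approved")

# Load whichever version the alias currently points to.
model = mlflow.pyfunc.load_model("models:/my-model@champion")
```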
Spark Connect support
You can now log, save, and load models trained using Spark Connect. Try out Spark 3.5 and the MLflow integration today!
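As a minimal sketch of what this enables (assuming a Spark Connect server reachable at sc://localhost:15002; the tiny dataset and pipeline are illustrative):

```python
# A minimal sketch, assuming a Spark Connect server at sc://localhost:15002.
import mlflow
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

train_df = spark.createDataFrame(
    [(1.0, 2.0, 5.0), (2.0, 3.0, 8.0), (3.0, 4.0, 11.0)],
    ["x1", "x2", "y"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["x1", "x2"], outputCol="features"),
    LinearRegression(featuresCol="features", labelCol="y"),
])
model = pipeline.fit(train_df)

# Log the Spark Connect-trained model; it can be loaded back with mlflow.spark.load_model.
with mlflow.start_run():
    mlflow.spark.log_model(model, artifact_path="model")
```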
AI21 Labs added as an MLflow Gateway provider
You can now use the MLflow AI Gateway to connect to LLMs hosted by AI21 Labs.
AWS Bedrock added as an MLflow Gateway provider
You can now use the MLflow AI Gateway to connect to LLMs hosted by AWS's Bedrock service.
PaLM 2 added as an MLflow Gateway provider
You can now use the MLflow AI Gateway to connect to LLMs hosted by Google's PaLM 2 service.
Hugging Face TGI added as an MLflow Gateway provider
You can self-host transformers-based models from the Hugging Face Hub and connect to them directly through the AI Gateway using Text Generation Inference (TGI).
LLM evaluation viewer added to MLflow UI
You can view your LLM evaluation results directly from the MLflow UI.
Introducing the Prompt Engineering UI
Link your MLflow Tracking Server with your MLflow AI Gateway Server to experiment, evaluate, and construct
prompts that can be compared across different providers without writing a single line of code.
Cloudflare R2 now supported as an artifact store
Cloudflare's R2 storage backend is now supported for use as an artifact store. To learn more about R2 and what it makes possible, read the Cloudflare docs.
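Since R2 exposes an S3-compatible API, one way to point MLflow at an R2 bucket is the standard S3 endpoint override sketched below; the account ID, bucket, and credentials are placeholders, and the dedicated R2 support in this release may offer its own URI scheme.

```python
# A hedged sketch using MLflow's generic S3-compatible configuration; all
# endpoint, bucket, and credential values below are placeholders.
import os
import mlflow

os.environ["MLFLOW_S3_ENDPOINT_URL"] = "https://<account-id>.r2.cloudflarestorage.com"
os.environ["AWS_ACCESS_KEY_ID"] = "<r2-access-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<r2-secret-access-key>"

# Artifacts for this experiment are written to the R2 bucket via the S3 API.
mlflow.create_experiment("r2-backed-experiment", artifact_location="s3://my-r2-bucket/mlflow-artifacts")
```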
Params support for PyFunc Models
PyFunc models now support passing parameters at inference time. With this new feature,
you can define the allowable keys, with default values, for any parameters that you would like
consumers of your model to be able to override. This is particularly useful for LLMs, where you
might want to let users adjust commonly modified parameters for a model, such as token counts and temperature.
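A minimal sketch of the workflow: the params declared in the model signature become the allowable keys and their defaults, and consumers can override them at inference time. The toy model and parameter names below are illustrative.

```python
# A minimal sketch; the toy model and parameter names are illustrative.
import mlflow
from mlflow.models import infer_signature

class EchoModel(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input, params=None):
        params = params or {}
        # Inference-time params override the defaults declared in the signature.
        return [f"temperature={params.get('temperature')}, max_tokens={params.get('max_tokens')}"]

# Declaring params in the signature defines the allowable keys and their defaults.
signature = infer_signature(
    model_input=["hello"],
    model_output=["echo"],
    params={"temperature": 0.7, "max_tokens": 64},
)

with mlflow.start_run():
    info = mlflow.pyfunc.log_model("model", python_model=EchoModel(), signature=signature)

loaded = mlflow.pyfunc.load_model(info.model_uri)
# Consumers can override the declared params at inference time.
print(loaded.predict(["hello"], params={"temperature": 0.1}))
```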
MLflow Serving support added to MLflow AI Gateway
The MLflow AI Gateway now supports defining an MLflow serving endpoint as a provider. With this
new feature, you can serve any OSS transformers model that conforms to the
completions or embeddings route type definitions.
Try it out today with our end-to-end example.
Introducing the MLflow AI Gateway
We're excited to announce the newest top-level component in the MLflow ecosystem: The AI Gateway.
With this new feature, you can create a single access point to many of the most popular LLM SaaS services available now,
simplifying interfaces, managing credentials, and providing a unified standard set of APIs to reduce the complexity of
building products and services around LLMs.
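For a flavor of the unified client API, here is a hedged sketch, assuming a gateway running locally on port 5000 with a route named "completions" already configured:

```python
# A hedged sketch; assumes a locally running gateway with a "completions" route.
from mlflow.gateway import set_gateway_uri, query

set_gateway_uri("http://localhost:5000")

# The same request shape works regardless of which provider backs the route.
response = query(
    route="completions",
    data={"prompt": "What is MLflow?", "temperature": 0.3},
)
print(response)
```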
MLflow Evaluate now supports LLMs
You can now use MLflow Evaluate to compare results from your favorite LLMs on a fixed prompt.
With many of the standard LLM evaluation metrics built directly into the API, the featured
modeling tasks of text summarization, text classification, question answering, and text generation let you
view the results of submitted text across multiple models in a single UI element.
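As a hedged sketch of the evaluation flow (assuming the openai model flavor with the pre-1.0 openai SDK and an OPENAI_API_KEY in the environment; the question and reference answer below are illustrative):

```python
# A hedged sketch; assumes the openai flavor, the pre-1.0 openai SDK, and
# an OPENAI_API_KEY in the environment. The data below is illustrative.
import mlflow
import openai
import pandas as pd

eval_data = pd.DataFrame({
    "question": ["What is MLflow?"],
    "ground_truth": ["MLflow is an open source platform for managing the ML lifecycle."],
})

with mlflow.start_run():
    logged = mlflow.openai.log_model(
        model="gpt-3.5-turbo",
        task=openai.ChatCompletion,
        artifact_path="model",
        messages=[{"role": "user", "content": "{question}"}],
    )
    # Built-in metrics for the "question-answering" task are computed automatically.
    results = mlflow.evaluate(
        logged.model_uri,
        eval_data,
        targets="ground_truth",
        model_type="question-answering",
    )
    print(results.metrics)
```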
Chart View added to the MLflow UI
You can now visualize parameters and metrics across multiple runs as a chart on the runs table.