Tutorials and Examples
Welcome to our Tutorials and Examples hub! Here you'll find a curated set of resources to help you get started and deepen your knowledge of MLflow. Whether you're fine-tuning hyperparameters, orchestrating complex workflows, or integrating MLflow into your training code, these examples will guide you step by step.
🎯 Core Workflows & API
If you're focused on finding optimal configurations for your models, check out our Hyperparameter Tuning example. It walks you through setting up grid or random search runs, logging metrics, and comparing results—all within MLflow's tracking interface.
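To give you a feel for the pattern, here's a minimal sketch of a grid search tracked with nested runs; the parameter grid and the `train_model` function are hypothetical stand-ins for your own code:

```python
import itertools
import mlflow

def train_model(lr, batch_size):
    """Hypothetical training function; returns a validation accuracy."""
    return 0.9 - abs(lr - 0.01) - batch_size / 10_000

param_grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [32, 64]}

with mlflow.start_run(run_name="grid_search"):
    for lr, batch_size in itertools.product(*param_grid.values()):
        # One nested child run per parameter combination keeps the
        # comparison view in the MLflow UI tidy.
        with mlflow.start_run(run_name=f"lr={lr},bs={batch_size}", nested=True):
            mlflow.log_params({"lr": lr, "batch_size": batch_size})
            mlflow.log_metric("val_accuracy", train_model(lr, batch_size))
```

From there, the MLflow UI's run comparison view makes it easy to sort the child runs by `val_accuracy` and spot the winning configuration.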
When your project requires coordinating multiple steps—say, data preprocessing, model training, and post-processing—you'll appreciate the Orchestrating Multistep Workflows guide. It demonstrates how to chain Python scripts or notebook tasks so that each stage logs artifacts and metrics in a unified experiment 🚀.
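As a rough sketch of the idea, each pipeline stage can run as a nested child of a single parent run, so every stage's artifacts and metrics land in one unified experiment; the `preprocess` and `train` functions below are placeholders for your own stage code:

```python
import mlflow

def preprocess():
    """Hypothetical stage: writes a cleaned dataset and returns its path."""
    with open("clean.csv", "w") as f:
        f.write("x,y\n1,2\n")
    return "clean.csv"

def train(data_path):
    """Hypothetical stage: trains on the cleaned data, returns a metric."""
    return 0.87

with mlflow.start_run(run_name="pipeline"):
    with mlflow.start_run(run_name="preprocess", nested=True):
        data_path = preprocess()
        mlflow.log_artifact(data_path)   # stage output, tracked with the run
    with mlflow.start_run(run_name="train", nested=True):
        mlflow.log_metric("accuracy", train(data_path))
```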
For those who prefer crafting HTTP requests directly, our Using the MLflow REST API Directly example shows you how to submit runs, retrieve metrics, and register models via simple curl and Python snippets 🔍. It's ideal when you want language-agnostic control over your tracking server.
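For instance, here's a small Python sketch against the REST API, assuming a tracking server listening on http://localhost:5000 and the default experiment (ID "0"):

```python
import time
import requests

BASE = "http://localhost:5000/api/2.0/mlflow"  # assumes a local tracking server

# Create a run in an existing experiment ("0" is the default experiment).
run = requests.post(
    f"{BASE}/runs/create",
    json={"experiment_id": "0", "start_time": int(time.time() * 1000)},
).json()
run_id = run["run"]["info"]["run_id"]

# Log a metric against that run.
requests.post(
    f"{BASE}/runs/log-metric",
    json={"run_id": run_id, "key": "accuracy", "value": 0.91,
          "timestamp": int(time.time() * 1000), "step": 0},
)
```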
Meanwhile, if you're building custom functionality on top of MLflow's core, dive into Write & Use MLflow Plugins to learn how to extend MLflow with new flavors, UI tabs, or artifact stores. You'll see how to package your plugin, register it, and test it locally before pushing to production.
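As a taste of the packaging step, here's a minimal, hypothetical `setup.py` sketch; MLflow discovers plugins through setuptools entry points, and the package, module, and class names below are placeholders for your own:

```python
from setuptools import setup, find_packages

setup(
    name="my-mlflow-plugin",   # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    # MLflow discovers plugins via setuptools entry points; each group
    # maps a URI scheme to the class implementing that extension point.
    entry_points={
        # Custom tracking store, used for URIs like "my-scheme://..."
        "mlflow.tracking_store": "my-scheme=my_plugin.store:MyTrackingStore",
        # Custom artifact repository for the same scheme
        "mlflow.artifact_repository": "my-scheme=my_plugin.artifacts:MyArtifactRepo",
    },
)
```

Once the package is installed, MLflow picks up the entry points automatically; no extra registration call is needed.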
📦 Reproducibility & Supply Chain Security
Reproducibility is at the heart of trustworthy ML. If you need to encapsulate your entire training environment, the Packaging Training Code in a Docker Environment tutorial shows you how to create a Docker image that includes your data loader, training script, dependencies, and MLflow tracking calls. You'll see how to build, push, and run the image while capturing every artifact.
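As a sketch of what launching such a project can look like, assuming your project root contains an MLproject file with a `docker_env` section and a `main` entry point (the `epochs` parameter is hypothetical):

```python
import mlflow

# Runs the project's "main" entry point inside its Docker environment on the
# local Docker daemon; MLflow mounts the tracking configuration into the
# container so metrics and artifacts are captured as usual.
submitted = mlflow.projects.run(
    uri=".",                      # project root containing MLproject + Dockerfile
    entry_point="main",
    parameters={"epochs": "5"},   # hypothetical parameter
    backend="local",
)
print(f"Run {submitted.run_id} finished")
```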
If you need to protect your Python packages from tampering, the Python Package Anti-Tampering example walks you through signing wheels, verifying checksums, and integrating these steps into your CI/CD pipeline. This ensures that what you log as code is exactly what you execute later, avoiding "works on my machine" surprises.
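To illustrate the verification half, here's a small, self-contained sketch that pins a wheel to a SHA-256 digest before use; the wheel path and the expected digest are placeholders for values from your own lockfile:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Verify a wheel against a pinned digest before installing or logging it.
wheel = Path("dist/my_package-0.1.0-py3-none-any.whl")  # hypothetical wheel
expected = "0000...placeholder digest from your lockfile...0000"
if sha256_of(wheel) != expected:
    raise RuntimeError(f"Checksum mismatch for {wheel}; refusing to proceed")
```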