Command-Line Interface
The MLflow command-line interface (CLI) provides a simple interface to various functionality in MLflow. You can use the CLI to run projects, start the tracking UI, create and list experiments, download run artifacts, serve MLflow Python Function and scikit-learn models, and serve models on Microsoft Azure Machine Learning and Amazon SageMaker.
Each individual command has a detailed help screen accessible via mlflow command_name --help.
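For example, to see the options accepted by the experiments command group:
mlflow experiments --help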
mlflow
mlflow [OPTIONS] COMMAND [ARGS]...
Options
artifacts
Upload, list, and download artifacts from an MLflow artifact repository.
To manage artifacts for a run associated with a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.
mlflow artifacts [OPTIONS] COMMAND [ARGS]...
download
Download an artifact file or directory to a local directory. The output is the name of the file or directory on the local filesystem.
Either --artifact-uri or --run-id must be provided.
mlflow artifacts download [OPTIONS]
Options
-a, --artifact-path <artifact_path>
For use with Run ID: if specified, a path relative to the run’s root directory to download
-u, --artifact-uri <artifact_uri>
URI pointing to the artifact file or artifacts directory; use as an alternative to specifying --run-id and --artifact-path
-d, --dst-path <dst_path>
Path of the local filesystem destination directory to which to download the specified artifacts. If the directory does not exist, it is created. If unspecified the artifacts are downloaded to a new uniquely-named directory on the local filesystem, unless the artifacts already exist on the local filesystem, in which case their local path is returned directly
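For example, a minimal sketch using the --artifact-uri form described above (the run ID and destination path are placeholders):
# Download a run's 'model' artifact directory to ./local-model
mlflow artifacts download -u "runs:/<run_id>/model" -d ./local-model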
list
Return all the artifacts directly under the run’s root artifact directory, or under a specified sub-directory. The output is a JSON-formatted list.
mlflow artifacts list [OPTIONS]
Options
log-artifact
Log a local file as an artifact of a run, optionally within a run-specific artifact path. Run artifacts can be organized into directories, so you can place the artifact in a directory this way.
mlflow artifacts log-artifact [OPTIONS]
Options
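A minimal sketch of logging a file to an existing run. The --local-file and --run-id flags are not listed above and are assumed from mlflow artifacts log-artifact --help; the file, run ID, and artifact path are placeholders:
# Log ./metrics.json under the run-relative artifact path 'reports'
mlflow artifacts log-artifact --local-file ./metrics.json --run-id <run_id> --artifact-path reports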
azureml
Serve models on Azure ML. These commands require that MLflow be installed with Python 3.
To serve a model associated with a run on a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.
mlflow azureml [OPTIONS] COMMAND [ARGS]...
build-image
Warning
mlflow.azureml.cli.build_image is deprecated since 1.19.0. This method will be removed in a future release. Use the azureml deployment plugin, https://aka.ms/aml-mlflow-deploy, instead.
Register an MLflow model with Azure ML and build an Azure ML ContainerImage for deployment. The resulting image can be deployed as a web service to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS).
The resulting Azure ML ContainerImage will contain a webserver that processes model queries. For information about the input data formats accepted by this webserver, see the following documentation: https://www.mlflow.org/docs/latest/models.html#azureml-deployment.
mlflow azureml build-image [OPTIONS]
Options
-m, --model-uri <URI>
Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
-w, --workspace-name <workspace_name>
Required The name of the Azure Workspace in which to build the image.
-s, --subscription-id <subscription_id>
The subscription id associated with the Azure Workspace in which to build the image
-i, --image-name <image_name>
The name to assign the Azure Container Image that is created. If unspecified, a unique image name will be generated.
-n, --model-name <model_name>
The name to assign the Azure Model that is created. If unspecified, a unique model name will be generated.
-d, --description <description>
A string description to associate with the Azure Container Image and the Azure Model that are created.
-t, --tags <tags>
A collection of tags, represented as a JSON-formatted dictionary of string key-value pairs, to associate with the Azure Container Image and the Azure Model that are created. These tags are added to a set of default tags that include the model path, the model run id (if specified), and more.
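For example, using only the options documented above (the run ID, workspace, image, and model names are placeholders); note that the command itself is deprecated, per the warning above:
mlflow azureml build-image -m "runs:/<run_id>/model" -w my-workspace -i my-image -n my-model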
db
Commands for managing an MLflow tracking database.
mlflow db [OPTIONS] COMMAND [ARGS]...
upgrade
Upgrade the schema of an MLflow tracking database to the latest supported version.
IMPORTANT: Schema migrations can be slow and are not guaranteed to be transactional - always take a backup of your database before running migrations. The migrations README, which is located at https://github.com/mlflow/mlflow/blob/master/mlflow/store/db_migrations/README.md, describes large migrations and includes information about how to estimate their performance and recover from failures.
mlflow db upgrade [OPTIONS] URL
Arguments
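For example, to upgrade a local SQLite tracking database (the file path is a placeholder):
mlflow db upgrade sqlite:///path/to/mlflow.db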
deployments
Deploy MLflow models to custom targets. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions in https://mlflow.org/docs/latest/plugins.html#community-plugins
You can also write your own plugin for deployment to a custom target. For instructions on writing and distributing a plugin, see https://mlflow.org/docs/latest/plugins.html#writing-your-own-mlflow-plugins.
mlflow deployments [OPTIONS] COMMAND [ARGS]...
create
Deploy the model at model_uri to the specified target.
Additional plugin-specific arguments may also be passed to this command, via -C key=value
mlflow deployments create [OPTIONS]
Options
-C, --config <NAME=VALUE>
Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
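A minimal sketch of a SageMaker deployment. The --name and --model-uri flags are not listed above and are assumed from mlflow deployments create --help; the deployment name, run ID, and config value are placeholders:
mlflow deployments create -t sagemaker --name my-deployment --model-uri "runs:/<run_id>/model" -C instance_type=ml.m5.xlarge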
create-endpoint
Create an endpoint with the specified name at the specified target.
Additional plugin-specific arguments may also be passed to this command, via -C key=value
mlflow deployments create-endpoint [OPTIONS]
Options
-C, --config <NAME=VALUE>
Extra target-specific config for the endpoint, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
delete
Delete the deployment with the name given at --name from the specified target.
mlflow deployments delete [OPTIONS]
Options
-C, --config <NAME=VALUE>
Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
delete-endpoint
Delete the specified endpoint at the specified target.
mlflow deployments delete-endpoint [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
explain
Generate explanations of the deployed model’s predictions on the specified input(s). Explanation output formats vary by deployment target, and can include details like feature importance for understanding/debugging predictions. Run mlflow deployments help or consult the documentation for your plugin for details on the explanation format. For information about the input data formats accepted by this function, see the following documentation: https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools
mlflow deployments explain [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
get
Print a detailed description of the deployment with the name given at --name in the specified target.
mlflow deployments get [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
get-endpoint
Get details for the specified endpoint at the specified target.
mlflow deployments get-endpoint [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
help
Display additional help for a specific deployment target, e.g. info on target-specific config options and the target’s URI format.
mlflow deployments help [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
list
List the names of all model deployments in the specified target. These names can be used with the delete, update, and get commands.
mlflow deployments list [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
list-endpoints
List all endpoints at the specified target.
mlflow deployments list-endpoints [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
predict
Predict the results for the deployed model for the given input(s).
mlflow deployments predict [OPTIONS]
Options
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
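A minimal sketch of scoring against an existing deployment. The --name and --input-path flags are not listed above and are assumed from mlflow deployments predict --help; the deployment name and input file are placeholders:
mlflow deployments predict -t sagemaker --name my-deployment --input-path input.json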
run-local
Deploy the model locally. This command has a signature very similar to that of the create command.
mlflow deployments run-local [OPTIONS]
Options
-C, --config <NAME=VALUE>
Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
update
Update the deployment with ID deployment_id in the specified target. You can update the URI of the model and/or the flavor of the deployed model (in which case the model URI must also be specified).
Additional plugin-specific arguments may also be passed to this command, via -C key=value.
mlflow deployments update [OPTIONS]
Options
-C, --config <NAME=VALUE>
Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
update-endpoint
Update the specified endpoint at the specified target.
Additional plugin-specific arguments may also be passed to this command, via -C key=value
mlflow deployments update-endpoint [OPTIONS]
Options
-C, --config <NAME=VALUE>
Extra target-specific config for the endpoint, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.
-t, --target <target>
Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: sagemaker
See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins
experiments
Manage experiments. To manage experiments associated with a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.
mlflow experiments [OPTIONS] COMMAND [ARGS]...
create
Create an experiment.
All artifacts generated by runs related to this experiment will be stored under the artifact location, organized under specific run_id sub-directories.
The implementation of the experiment and metadata store depends on backend storage. FileStore creates a folder for each experiment ID and stores metadata in meta.yaml. Runs are stored as subfolders.
mlflow experiments create [OPTIONS]
Options
-l, --artifact-location <artifact_location>
Base location for runs to store artifact results. Artifacts will be stored at $artifact_location/$run_id/artifacts. See https://mlflow.org/docs/latest/tracking.html#where-runs-are-recorded for more info on the properties of artifact location. If no location is provided, the tracking server will pick a default.
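For example, assuming the -n/--experiment-name flag shown by mlflow experiments create --help (the experiment name and bucket are placeholders):
mlflow experiments create -n my-experiment -l s3://my-bucket/experiments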
delete
Mark an active experiment for deletion. This also applies to the experiment’s metadata, runs and associated data, and artifacts if they are stored in the default location. Use the list command to view the artifact location. The command will throw an error if the experiment is not found or is already marked for deletion.
Experiments marked for deletion can be restored using the restore command, unless they are permanently deleted.
The specific implementation of deletion depends on the backend store. FileStore moves experiments marked for deletion under a .trash folder under the main folder used to instantiate FileStore. Experiments marked for deletion can be permanently deleted by clearing the .trash folder. It is recommended to use a cron job or an alternate workflow mechanism to clear the .trash folder.
mlflow experiments delete [OPTIONS]
Options
list
List all experiments in the configured tracking server.
mlflow experiments list [OPTIONS]
Options
rename
Renames an active experiment. Returns an error if the experiment is inactive.
mlflow experiments rename [OPTIONS]
Options
gc
Permanently delete runs in the deleted lifecycle stage from the specified backend store. This command deletes all artifacts and metadata associated with the specified runs.
mlflow gc [OPTIONS]
Options
--older-than <older_than>
Optional. Remove run(s) older than the specified time limit. Specify a string in #d#h#m#s format. Float values are also supported. For example: --older-than 1d2h3m4s, --older-than 1.2d3h4m5s
--backend-store-uri <PATH>
URI of the backend store from which to delete runs. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’) or local filesystem URIs (e.g. ‘file:///absolute/path/to/directory’). By default, data will be deleted from the ./mlruns directory.
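For example, using only the options documented above (the database path is a placeholder):
# Permanently remove deleted runs older than 30 days from a SQLite backend store
mlflow gc --older-than 30d --backend-store-uri sqlite:///path/to/mlflow.db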
models
Deploy MLflow models locally.
To deploy a model associated with a run on a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.
mlflow models [OPTIONS] COMMAND [ARGS]...
build-docker
Builds a Docker image whose default entrypoint serves an MLflow model at port 8080, using the python_function flavor. The container serves the model referenced by --model-uri, if specified when build-docker is called. If --model-uri is not specified when build-docker is called, an MLflow Model directory must be mounted as a volume into the /opt/ml/model directory in the container.
Building a Docker image with --model-uri:
# Build a Docker image named 'my-image-name' that serves the model from run 'some-run-uuid'
# at run-relative artifact path 'my-model'
mlflow models build-docker --model-uri "runs:/some-run-uuid/my-model" --name "my-image-name"
# Serve the model
docker run -p 5001:8080 "my-image-name"
Building a Docker image without --model-uri:
# Build a generic Docker image named 'my-image-name'
mlflow models build-docker --name "my-image-name"
# Mount the model stored in '/local/path/to/artifacts/model' and serve it
docker run --rm -p 5001:8080 -v /local/path/to/artifacts/model:/opt/ml/model "my-image-name"
Warning
The image built without --model-uri doesn’t support serving models with the RFunc / Java MLeap model server.
NB: by default, the container will start nginx and gunicorn processes. If you don’t need the nginx process to be started (for instance if you deploy your container to Google Cloud Run), you can disable it via the DISABLE_NGINX environment variable:
docker run -p 5001:8080 -e DISABLE_NGINX=true "my-image-name"
See https://www.mlflow.org/docs/latest/python_api/mlflow.pyfunc.html for more information on the ‘python_function’ flavor.
mlflow models build-docker [OPTIONS]
Options
-m, --model-uri <URI>
[Optional] URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
--env-manager <env_manager>
If specified, create an environment for MLmodel/MLproject using the specified environment manager. The following values are supported:
- local: use the local environment
- conda: use conda
- virtualenv: use virtualenv (and pyenv for Python version management)
If unspecified, default to conda.
generate-dockerfile
Generates a directory with a Dockerfile whose default entrypoint serves an MLflow model at port 8080 using the python_function flavor. The generated Dockerfile is written to the specified output directory, along with the model (if specified). This Dockerfile defines an image that is equivalent to the one produced by mlflow models build-docker.
mlflow models generate-dockerfile [OPTIONS]
Options
-m, --model-uri <URI>
[Optional] URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
-d, --output-directory <output_directory>
Output directory where the generated Dockerfile is stored.
--env-manager <env_manager>
If specified, create an environment for MLmodel/MLproject using the specified environment manager. The following values are supported:
- local: use the local environment
- conda: use conda
- virtualenv: use virtualenv (and pyenv for Python version management)
If unspecified, default to conda.
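For example, using only the options documented above (the run ID and output directory are placeholders):
mlflow models generate-dockerfile -m "runs:/<run_id>/model" -d ./mlflow-dockerfile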
predict
Generate predictions in json format using a saved MLflow model. For information about the input data formats accepted by this function, see the following documentation: https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools.
mlflow models predict [OPTIONS]
Options
-m, --model-uri <URI>
Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
-o, --output-path <output_path>
File to output results to as json file. If not provided, output to stdout.
-j, --json-format <json_format>
Only applies if the content type is ‘json’. Specify how the data is encoded. Can be one of {‘split’, ‘records’} mirroring the behavior of Pandas orient attribute. The default is ‘split’ which expects dict like data: {‘index’ -> [index], ‘columns’ -> [columns], ‘data’ -> [values]}, where index is optional. For more information see https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html
--no-conda
This flag is deprecated. Use --env-manager=local instead. If specified, will assume that MLmodel/MLproject is running within a Conda environment with the necessary dependencies for the current project instead of attempting to create a new conda environment.
--env-manager <env_manager>
If specified, create an environment for MLmodel/MLproject using the specified environment manager. The following values are supported:
- local: use the local environment
- conda: use conda
- virtualenv: use virtualenv (and pyenv for Python version management)
If unspecified, default to conda.
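A minimal sketch of batch scoring. The -i/--input-path flag is not listed above and is assumed from mlflow models predict --help; the run ID and file paths are placeholders:
mlflow models predict -m "runs:/<run_id>/model" -i input.json -o predictions.json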
prepare-env
Performs any preparation necessary to predict or serve the model, for example downloading dependencies or initializing a conda environment. After preparation, calling predict or serve should be fast.
mlflow models prepare-env [OPTIONS]
Options
-m, --model-uri <URI>
Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
--no-conda
This flag is deprecated. Use --env-manager=local instead. If specified, will assume that MLmodel/MLproject is running within a Conda environment with the necessary dependencies for the current project instead of attempting to create a new conda environment.
--env-manager <env_manager>
If specified, create an environment for MLmodel/MLproject using the specified environment manager. The following values are supported:
- local: use the local environment
- conda: use conda
- virtualenv: use virtualenv (and pyenv for Python version management)
If unspecified, default to conda.
serve
Serve a model saved with MLflow by launching a webserver on the specified host and port.
The command supports models with the python_function or crate (R Function) flavor.
For information about the input data formats accepted by the webserver, see the following documentation: https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools.
You can make requests to POST /invocations in pandas split- or record-oriented formats.
Example:
$ mlflow models serve -m runs:/my-run-id/model-path &
$ curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
"columns": ["a", "b", "c"],
"data": [[1, 2, 3], [4, 5, 6]]
}'
mlflow models serve [OPTIONS]
Options
-m, --model-uri <URI>
Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
-h, --host <HOST>
The network address to listen on (default: 127.0.0.1). Use 0.0.0.0 to bind to all addresses if you want to access the model server from other machines.
--no-conda
This flag is deprecated. Use --env-manager=local instead. If specified, will assume that MLmodel/MLproject is running within a Conda environment with the necessary dependencies for the current project instead of attempting to create a new conda environment.
--env-manager <env_manager>
If specified, create an environment for MLmodel/MLproject using the specified environment manager. The following values are supported:
- local: use the local environment
- conda: use conda
- virtualenv: use virtualenv (and pyenv for Python version management)
If unspecified, default to conda.
--install-mlflow
If specified and there is a conda or virtualenv environment to be activated mlflow will be installed into the environment after it has been activated. The version of installed mlflow will be the same as the one used to invoke this command.
Environment variables
MLFLOW_PORT
Provide a default for --port
MLFLOW_HOST
Provide a default for --host
MLFLOW_WORKERS
Provide a default for --workers
pipelines
Run MLflow Pipelines and inspect pipeline results.
mlflow pipelines [OPTIONS] COMMAND [ARGS]...
clean
Note
Experimental: This command may change or be removed in a future release without warning.
Remove all pipeline outputs from the cache, or remove the cached outputs of a particular pipeline step if specified. After cached outputs are cleaned for a particular step, the step will be re-executed in its entirety the next time it is run.
mlflow pipelines clean [OPTIONS]
Options
-p, --profile <profile>
Required The name of the pipeline profile to use. Profiles customize the configuration of one or more pipeline steps, and pipeline executions with different profiles often produce different results.
Environment variables
MLFLOW_PIPELINES_PROFILE
Provide a default for --profile
get-artifact
Note
Experimental: This command may change or be removed in a future release without warning.
Get the location of an artifact output from the pipeline.
mlflow pipelines get-artifact [OPTIONS]
Options
-p, --profile <profile>
Required The name of the pipeline profile to use. Profiles customize the configuration of one or more pipeline steps, and pipeline executions with different profiles often produce different results.
Environment variables
MLFLOW_PIPELINES_PROFILE
Provide a default for --profile
inspect
Note
Experimental: This command may change or be removed in a future release without warning.
Display a visual overview of the pipeline graph, or display a summary of results from a particular pipeline step if specified. If the specified step has not been executed, nothing is displayed.
mlflow pipelines inspect [OPTIONS]
Options
-p, --profile <profile>
Required The name of the pipeline profile to use. Profiles customize the configuration of one or more pipeline steps, and pipeline executions with different profiles often produce different results.
Environment variables
MLFLOW_PIPELINES_PROFILE
Provide a default for --profile
run
Note
Experimental: This command may change or be removed in a future release without warning.
Run the full pipeline, or run a particular pipeline step if specified, producing outputs and displaying a summary of results upon completion.
mlflow pipelines run [OPTIONS]
Options
-p, --profile <profile>
Required The name of the pipeline profile to use. Profiles customize the configuration of one or more pipeline steps, and pipeline executions with different profiles often produce different results.
Environment variables
MLFLOW_PIPELINES_PROFILE
Provide a default for --profile
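For example, assuming the pipeline defines a profile named local (the profile name is a placeholder):
mlflow pipelines run --profile local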
run
Run an MLflow project from the given URI.
For local runs, the run will block until it completes. Otherwise, the project will run asynchronously.
If running locally (the default), the URI can be either a Git repository URI or a local path. If running on Databricks, the URI must be a Git repository.
By default, Git projects run in a new working directory with the given parameters, while local projects run from the project’s root directory.
mlflow run [OPTIONS] URI
Options
-e, --entry-point <NAME>
Entry point within project. [default: main]. If the entry point is not found, attempts to run the project file with the specified name as a script, using ‘python’ to run .py files and the default shell (specified by environment variable $SHELL) to run .sh files
-P, --param-list <NAME=VALUE>
A parameter for the run, of the form -P name=value. Provided parameters that are not in the list of parameters for an entry point will be passed to the corresponding entry point as command-line arguments in the form --name value
-A, --docker-args <NAME=VALUE>
A docker run argument or flag, of the form -A name=value (e.g. -A gpus=all) or -A name (e.g. -A t). The argument will then be passed as docker run --name value or docker run --name respectively.
--experiment-name <experiment_name>
Name of the experiment under which to launch the run. If not specified, ‘experiment-id’ option will be used to launch run.
-b, --backend <BACKEND>
Execution backend to use for run. Supported values: ‘local’, ‘databricks’, kubernetes (experimental). Defaults to ‘local’. If running against Databricks, will run against a Databricks workspace determined as follows: if a Databricks tracking URI of the form ‘databricks://profile’ has been set (e.g. by setting the MLFLOW_TRACKING_URI environment variable), will run against the workspace specified by <profile>. Otherwise, runs against the workspace specified by the default Databricks CLI profile. See https://github.com/databricks/databricks-cli for more info on configuring a Databricks CLI profile.
-c, --backend-config <FILE>
Path to JSON file (must end in ‘.json’) or JSON string which will be passed as config to the backend. The exact content which should be provided is different for each execution backend and is documented at https://www.mlflow.org/docs/latest/projects.html.
--no-conda
This flag is deprecated. Use --env-manager=local instead. If specified, will assume that MLmodel/MLproject is running within a Conda environment with the necessary dependencies for the current project instead of attempting to create a new conda environment.
--env-manager <env_manager>
If specified, create an environment for MLmodel/MLproject using the specified environment manager. The following values are supported:
- local: use the local environment
- conda: use conda
- virtualenv: use virtualenv (and pyenv for Python version management)
If unspecified, default to conda.
--storage-dir <storage_dir>
Only valid when backend is local. MLflow downloads artifacts from distributed URIs passed to parameters of type ‘path’ to subdirectories of storage_dir.
--run-id <RUN_ID>
If specified, the given run ID will be used instead of creating a new run. Note: this argument is used internally by the MLflow project APIs and should not be specified.
--run-name <RUN_NAME>
The name to give the MLflow Run associated with the project execution. If not specified, the MLflow Run name is left unset.
--skip-image-build
Only valid for Docker projects. If specified, skips building a new Docker image and directly uses the image specified by the image field in the MLproject file.
Default: False
Arguments
Environment variables
MLFLOW_EXPERIMENT_NAME
Provide a default for --experiment-name
MLFLOW_EXPERIMENT_ID
Provide a default for --experiment-id
MLFLOW_TMP_DIR
Provide a default for --storage-dir
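For example, running the public MLflow example project and overriding one of its entry-point parameters with -P (the parameter name comes from that project's MLproject file):
mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=0.5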
runs
Manage runs. To manage runs of experiments associated with a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.
mlflow runs [OPTIONS] COMMAND [ARGS]...
delete
Mark a run for deletion. Return an error if the run does not exist or is already marked. You can restore a marked run with restore_run, or permanently delete a run in the backend store.
mlflow runs delete [OPTIONS]
Options
describe
All run details will be printed to stdout in JSON format.
mlflow runs describe [OPTIONS]
Options
list
List all runs of the specified experiment in the configured tracking server.
mlflow runs list [OPTIONS]
Options
-v, --view <view>
Select the view type for listing runs. Valid view types are ‘active_only’ (default), ‘deleted_only’, and ‘all’.
Environment variables
MLFLOW_EXPERIMENT_ID
Provide a default for --experiment-id
sagemaker
Serve models on SageMaker.
To serve a model associated with a run on a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.
mlflow sagemaker [OPTIONS] COMMAND [ARGS]...
build-and-push-container
Build a new MLflow SageMaker image, assign it a name, and push it to ECR.
This command builds an MLflow Docker image. The image is built locally and requires Docker to run. The image is pushed to ECR under the current active AWS account and to the current active AWS region.
mlflow sagemaker build-and-push-container [OPTIONS]
Options
--env-manager <env_manager>
If specified, create an environment for MLmodel/MLproject using the specified environment manager. The following values are supported:
- local: use the local environment
- conda: use conda
- virtualenv: use virtualenv (and pyenv for Python version management)
If unspecified, default to conda.
delete
Warning
mlflow.sagemaker.cli.delete is deprecated. This method will be removed in a future release. Use mlflow deployments delete -t sagemaker instead.
Delete the specified application. Unless --archive is specified, all SageMaker resources associated with the application are deleted as well.
By default, unless the --async flag is specified, this command will block until either the deletion process completes (definitively succeeds or fails) or the specified timeout elapses.
mlflow sagemaker delete [OPTIONS]
Options
-ar, --archive
If specified, resources associated with the application are preserved. These resources may include unused SageMaker models and endpoint configurations that were previously associated with the application endpoint. Otherwise, if --archive is unspecified, these resources are deleted. --archive must be specified when deleting asynchronously with --async.
--async
If specified, this command will return immediately after starting the deletion process. It will not wait for the deletion process to complete. The caller is responsible for monitoring the deletion process via native SageMaker APIs or the AWS console.
--timeout <timeout>
If the command is executed synchronously, the deployment process will return after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the health and status of the pending deployment via native SageMaker APIs or the AWS console. If the command is executed asynchronously using the --async flag, this value is ignored.
deploy
Warning
mlflow.sagemaker.cli.deploy is deprecated. This method will be removed in a future release. Use mlflow deployments create -t sagemaker and mlflow deployments update -t sagemaker instead.
Deploy a model on SageMaker as a REST API endpoint. The current active AWS account needs to have correct permissions set up.
By default, unless the --async flag is specified, this command will block until either the deployment process completes (definitively succeeds or fails) or the specified timeout elapses.
For more information about the input data formats accepted by the deployed REST API endpoint, see the following documentation: https://www.mlflow.org/docs/latest/models.html#sagemaker-deployment.
mlflow sagemaker deploy [OPTIONS]
Options
-m, --model-uri <URI>
Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
--mode <mode>
The mode in which to deploy the application. Must be one of the following: create, add, replace
-ar, --archive
If specified, any SageMaker resources that become inactive (i.e. as the result of an update in replace mode) are preserved. These resources may include unused SageMaker models and endpoint configurations that were associated with a prior version of the application endpoint. Otherwise, if --archive is unspecified, these resources are deleted. --archive must be specified when deploying asynchronously with --async.
-t, --instance-type <instance_type>
The type of SageMaker ML instance on which to deploy the model. For a list of supported instance types, see https://aws.amazon.com/sagemaker/pricing/instance-types/.
-c, --instance-count <instance_count>
The number of SageMaker ML instances on which to deploy the model
-v, --vpc-config <vpc_config>
Path to a file containing a JSON-formatted VPC configuration. This configuration will be used when creating the new SageMaker model associated with this application. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_VpcConfig.html
-f, --flavor <flavor>
The name of the flavor to use for deployment. Must be one of the following: [‘python_function’, ‘mleap’]. If unspecified, a flavor will be automatically selected from the model’s available flavors.
--async
If specified, this command will return immediately after starting the deployment process. It will not wait for the deployment process to complete. The caller is responsible for monitoring the deployment process via native SageMaker APIs or the AWS console.
--timeout <timeout>
If the command is executed synchronously, the deployment process will return after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the health and status of the pending deployment via native SageMaker APIs or the AWS console. If the command is executed asynchronously using the --async flag, this value is ignored.
deploy-transform-job
Note
Experimental: This command may change or be removed in a future release without warning.
Deploy a model on SageMaker as a batch transform job. The current active AWS account needs to have correct permissions set up.
By default, unless the --async flag is specified, this command will block until either the batch transform job completes (definitively succeeds or fails) or the specified timeout elapses.
mlflow sagemaker deploy-transform-job [OPTIONS]
Options
-m, --model-uri <URI>
Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
--content-type <content_type>
Required The multipurpose internet mail extension (MIME) type of the data
-o, --output-path <output_path>
Required The S3 path to store the output results of the Sagemaker transform job
-s, --split-type <split_type>
The method to split the transform job’s data files into smaller batches
--assemble-with <assemble_with>
The method to assemble the results of the transform job as a single S3 object
--input-filter <input_filter>
A JSONPath expression used to select a portion of the input data for the transform job
--output-filter <output_filter>
A JSONPath expression used to select a portion of the output data from the transform job
-t, --instance-type <instance_type>
The type of SageMaker ML instance on which to perform the batch transform job. For a list of supported instance types, see https://aws.amazon.com/sagemaker/pricing/instance-types/.
-c, --instance-count <instance_count>
The number of SageMaker ML instances on which to perform the batch transform job
-v, --vpc-config <vpc_config>
Path to a file containing a JSON-formatted VPC configuration. This configuration will be used when creating the new SageMaker model associated with this application. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_VpcConfig.html
-f, --flavor <flavor>
The name of the flavor to use for deployment. Must be one of the following: [‘python_function’, ‘mleap’]. If unspecified, a flavor will be automatically selected from the model’s available flavors.
--archive
If specified, any SageMaker resources that become inactive after the finished batch transform job are preserved. These resources may include the associated SageMaker models and model artifacts. Otherwise, if --archive is unspecified, these resources are deleted. --archive must be specified when deploying asynchronously with --async.
--async
If specified, this command will return immediately after starting the deployment process. It will not wait for the deployment process to complete. The caller is responsible for monitoring the deployment process via native SageMaker APIs or the AWS console.
--timeout <timeout>
If the command is executed synchronously, the deployment process will return after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the health and status of the pending deployment via native SageMaker APIs or the AWS console. If the command is executed asynchronously using the --async flag, this value is ignored.
push-model
Note
Experimental: This command may change or be removed in a future release without warning.
Push an MLflow model to the SageMaker model registry. The current active AWS account needs to have correct permissions set up.
mlflow sagemaker push-model [OPTIONS]
Options
-m, --model-uri <URI>
Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores
-v, --vpc-config <vpc_config>
Path to a file containing a JSON-formatted VPC configuration. This configuration will be used when creating the new SageMaker model. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_VpcConfig.html
run-local
Serve a model locally in a SageMaker-compatible Docker container.
mlflow sagemaker run-local [OPTIONS]
Options
terminate-transform-job
Note
Experimental: This command may change or be removed in a future release without warning.
Terminate the specified SageMaker batch transform job. Unless --archive is specified, all SageMaker resources associated with the batch transform job are deleted as well.
By default, unless the --async flag is specified, this command will block until either the termination process completes (definitively succeeds or fails) or the specified timeout elapses.
mlflow sagemaker terminate-transform-job [OPTIONS]
Options
--archive
If specified, resources associated with the application are preserved. These resources may include unused SageMaker models and model artifacts. Otherwise, if --archive is unspecified, these resources are deleted. --archive must be specified when deleting asynchronously with --async.
--async
If specified, this command will return immediately after starting the termination process. It will not wait for the termination process to complete. The caller is responsible for monitoring the termination process via native SageMaker APIs or the AWS console.
--timeout <timeout>
If the command is executed synchronously, the termination process will return after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the health and status of the pending termination via native SageMaker APIs or the AWS console. If the command is executed asynchronously using the --async flag, this value is ignored.
server
Run the MLflow tracking server.
The server listens on http://localhost:5000 by default and only accepts connections from the local machine. To let the server accept connections from other machines, you will need to pass --host 0.0.0.0 to listen on all network interfaces (or a specific interface address).
mlflow server [OPTIONS]
Options
--backend-store-uri <PATH>
URI to which to persist experiment and run data. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’) or local filesystem URIs (e.g. ‘file:///absolute/path/to/directory’). By default, data will be logged to the ./mlruns directory.
--registry-store-uri <URI>
URI to which to persist registered models. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’). If not specified, backend-store-uri is used.
--default-artifact-root <URI>
Directory in which to store artifacts for any new experiments created. For tracking server backends that rely on SQL, this option is required in order to store artifacts. Note that this flag does not impact already-created experiments with any previous configuration of an MLflow server instance. By default, data will be logged to the mlflow-artifacts:/ uri proxy if the --serve-artifacts option is enabled. Otherwise, the default location will be ./mlruns.
--serve-artifacts
If specified, enables serving of artifact uploads, downloads, and list requests by routing these requests to the storage location that is specified by ‘--artifacts-destination’ directly through a proxy. The default location that these requests are served from is a local ‘./mlartifacts’ directory which can be overridden via the ‘--artifacts-destination’ argument. Default: False
--artifacts-only
If specified, configures the mlflow server to be used only for proxied artifact serving. With this mode enabled, functionality of the mlflow tracking service (e.g. run creation, metric logging, and parameter logging) is disabled. The server will only expose endpoints for uploading, downloading, and listing artifacts. Default: False
--artifacts-destination <URI>
The base artifact location from which to resolve artifact upload/download/list requests (e.g. ‘s3://my-bucket’). Defaults to a local ‘./mlartifacts’ directory. This option only applies when the tracking server is configured to stream artifacts and the experiment’s artifact root location is http or mlflow-artifacts URI.
-h, --host <HOST>
The network address to listen on (default: 127.0.0.1). Use 0.0.0.0 to bind to all addresses if you want to access the tracking server from other machines.
--expose-prometheus <expose_prometheus>
Path to the directory where metrics will be stored. If the directory doesn’t exist, it will be created. Activate prometheus exporter to expose metrics on /metrics endpoint.
Environment variables
MLFLOW_BACKEND_STORE_URI
Provide a default for --backend-store-uri
MLFLOW_REGISTRY_STORE_URI
Provide a default for --registry-store-uri
MLFLOW_DEFAULT_ARTIFACT_ROOT
Provide a default for --default-artifact-root
MLFLOW_SERVE_ARTIFACTS
Provide a default for --serve-artifacts
MLFLOW_ARTIFACTS_ONLY
Provide a default for --artifacts-only
MLFLOW_ARTIFACTS_DESTINATION
Provide a default for --artifacts-destination
MLFLOW_HOST
Provide a default for --host
MLFLOW_PORT
Provide a default for --port
MLFLOW_WORKERS
Provide a default for --workers
MLFLOW_STATIC_PREFIX
Provide a default for --static-prefix
MLFLOW_GUNICORN_OPTS
Provide a default for --gunicorn-opts
MLFLOW_EXPOSE_PROMETHEUS
Provide a default for --expose-prometheus
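For example, a common single-machine configuration backed by SQLite; --port is not listed above but is referenced by MLFLOW_PORT (paths are placeholders):
# Store run metadata in SQLite, keep artifacts under ./mlruns, and listen on all interfaces
mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root ./mlruns --host 0.0.0.0 --port 5000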
ui
Launch the MLflow tracking UI for local viewing of run results. To launch a production server, use the “mlflow server” command instead.
The UI will be visible at http://localhost:5000 by default, and only accepts connections from the local machine. To let the UI server accept connections from other machines, you will need to pass --host 0.0.0.0 to listen on all network interfaces (or a specific interface address).
mlflow ui [OPTIONS]
Options
--backend-store-uri <PATH>
URI to which to persist experiment and run data. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’) or local filesystem URIs (e.g. ‘file:///absolute/path/to/directory’). By default, data will be logged to ./mlruns
--registry-store-uri <URI>
URI to which to persist registered models. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’). If not specified, backend-store-uri is used.
--default-artifact-root <URI>
Directory in which to store artifacts for any new experiments created. For tracking server backends that rely on SQL, this option is required in order to store artifacts. Note that this flag does not impact already-created experiments with any previous configuration of an MLflow server instance. If the --serve-artifacts option is specified, the default artifact root is mlflow-artifacts:/. Otherwise, the default artifact root is ./mlruns.
--serve-artifacts
If specified, enables serving of artifact uploads, downloads, and list requests by routing these requests to the storage location that is specified by ‘--artifacts-destination’ directly through a proxy. The default location that these requests are served from is a local ‘./mlartifacts’ directory which can be overridden via the ‘--artifacts-destination’ argument. Default: False
--artifacts-destination <URI>
The base artifact location from which to resolve artifact upload/download/list requests (e.g. ‘s3://my-bucket’). Defaults to a local ‘./mlartifacts’ directory. This option only applies when the tracking server is configured to stream artifacts and the experiment’s artifact root location is http or mlflow-artifacts URI.
-h, --host <HOST>
The network address to listen on (default: 127.0.0.1). Use 0.0.0.0 to bind to all addresses if you want to access the tracking server from other machines.
Environment variables
MLFLOW_BACKEND_STORE_URI
Provide a default for --backend-store-uri
MLFLOW_REGISTRY_STORE_URI
Provide a default for --registry-store-uri
MLFLOW_DEFAULT_ARTIFACT_ROOT
Provide a default for --default-artifact-root
MLFLOW_SERVE_ARTIFACTS
Provide a default for --serve-artifacts
MLFLOW_ARTIFACTS_DESTINATION
Provide a default for --artifacts-destination
MLFLOW_PORT
Provide a default for --port
MLFLOW_HOST
Provide a default for --host
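For example, to browse runs recorded in a local SQLite backend store on a non-default port; --port is not listed above but is referenced by MLFLOW_PORT (the database path is a placeholder):
mlflow ui --backend-store-uri sqlite:///mlflow.db --port 5001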