mlflow.llm
The mlflow.llm module provides utilities for Large Language Models (LLMs).
mlflow.llm.log_predictions(inputs: List[Union[str, Dict[str, str]]], outputs: List[str], prompts: List[Union[str, Dict[str, str]]]) → None

Note
Experimental: This function may change or be removed in a future release without warning.
Log a batch of inputs, outputs, and prompts for the current evaluation run. If no run is active, this method creates a new active run.
- Parameters
  inputs – List of input strings or list of input dictionaries
  outputs – List of output strings
  prompts – List of prompt strings or list of prompt dictionaries
- Returns
  None
Example:

import mlflow

inputs = [
    {
        "question": "How do I create a Databricks cluster with UC access?",
        "context": "Databricks clusters are ...",
    },
]

outputs = [
    "<Instructions for cluster creation with UC enabled>",
]

prompts = [
    "Get Databricks documentation to answer all the questions: {input}",
]

with mlflow.start_run():
    # Log llm predictions
    mlflow.llm.log_predictions(inputs, outputs, prompts)
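Per the signature above, inputs and prompts may also be plain strings rather than dictionaries, and the call may be made without an explicit mlflow.start_run() block, in which case a new active run is created. A minimal sketch of that variant, reusing the illustrative values from the example above and assuming only what the signature and description state:

import mlflow

# Plain-string inputs and prompts (the signature accepts List[str] as well as
# List[Dict[str, str]])
inputs = [
    "How do I create a Databricks cluster with UC access?",
]

outputs = [
    "<Instructions for cluster creation with UC enabled>",
]

prompts = [
    "Get Databricks documentation to answer all the questions: {input}",
]

# No run is active here, so log_predictions creates a new active run;
# end it explicitly when finished logging.
mlflow.llm.log_predictions(inputs, outputs, prompts)
mlflow.end_run()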