MLflow
Updated May 5, 2025
MLflow is a platform for managing the machine learning life cycle. It includes tools to:
- Track experiments.
- Share, use, and deploy models.
- Package code into reproducible runs.
MLflow integrates natively with most well-known ML libraries, such as TensorFlow, PyTorch, and XGBoost. You can also use it with any other library, algorithm, or deployment tool.
MLflow components include:
- MLflow Tracking: An API for logging parameters, code versions, metrics, model environment dependencies, and model artifacts when running machine learning code.
- MLflow Models: A format for packaging models and a set of tools that let you easily deploy a trained model for batch or real-time inference.
- MLflow Model Registry: A centralized model store, API, and UI for approving, QA-testing, and deploying an MLflow model.
- MLflow Projects: A standard format for packaging reusable data science code. You can run this code with various parameters to train models, visualize data, or perform any other data science task.
- MLflow Recipes: Predefined templates for developing high-quality models for many common tasks, such as classification and regression.
To deploy the product:
- Get an SSH key pair to connect to a virtual machine.
- Create a VM from a public image. Under Image/boot disk selection, go to the Marketplace tab and select MLflow. Under Access:
- Enter the username in the Login field.
- Paste the contents of the public SSH key file in the SSH key field.
- Connect to the VM over SSH. Use the username you set when creating the VM and the private SSH key you created before.
- Open the /root/default_passwords.txt file and copy your authentication credentials.
- In your browser, open https://<VM_public_IP_address>/ and log in with the username and password you obtained earlier.
Use cases:
- Recording experiment metrics and parameters, comparing results, and exploring a variety of solutions. Saving output data as models.
- Comparing model performance and selecting the best model for deployment. Registering models and tracking the performance of their production versions.
- Deploying ML models in various serving environments.
- Storing, annotating, discovering, and managing models in a centralized repository.
- Packaging data science code in formats that allow it to run on any platform, and sharing it.
Yandex Cloud technical support is available 24/7. The types of requests you can submit and their response times depend on your pricing plan. You can switch to a paid support plan in the management console. For details, see the technical support terms and conditions.
By using this product, you agree to the Yandex Cloud Marketplace Terms of Service.