Creating an MLFlow server for logging experiments and artifacts
This tutorial describes how to deploy an MLflow tracking server.
To create an MLFlow server for logging JupyterLab Notebook experiments and artifacts:
- Prepare your infrastructure.
- Create a static access key.
- Create an SSH key pair.
- Create a VM.
- Create a managed DB.
- Create a bucket.
- Install the MLFlow tracking server and add it to the VM auto-start.
- Create secrets.
- Train the model.
If you no longer need the resources you created, delete them.
Getting started
Before getting started, register in Yandex Cloud, set up a community, and link your billing account to it.
- On the DataSphere home page, click Try for free and select an account to log in with: Yandex ID or your working account in the identity federation (SSO).
- Select the Yandex Cloud Organization organization you are going to use in Yandex Cloud.
- Create a community.
- Link your billing account to the DataSphere community you are going to work in. Make sure you have a billing account linked and its status is `ACTIVE` or `TRIAL_ACTIVE`. If you do not have a billing account yet, create one in the DataSphere interface.
Required paid resources
The cost of training a model based on data from Object Storage includes:
- Fee for DataSphere computing resource usage.
- Fee for Compute Cloud computing resource usage.
- Fee for a running Managed Service for PostgreSQL cluster.
- Fee for storing data in a bucket (see Object Storage pricing).
- Fee for operations with data (see Object Storage pricing).
Prepare the infrastructure
Log in to the Yandex Cloud management console. If you have an active billing account, you can create or select a folder to deploy your infrastructure in on the cloud page.
Note
If you use an identity federation to access Yandex Cloud, billing details might be unavailable to you. In this case, contact your Yandex Cloud organization administrator.
Create a folder
- In the management console, select a cloud and click Create folder.
- Give your folder a name, e.g., `data-folder`.
- Click Create.
Create a service account for Object Storage
To access a bucket in Object Storage, you will need a service account with the `storage.viewer` and `storage.uploader` roles.
- In the management console, go to `data-folder`.
- In the Service accounts tab, click Create service account.
- Enter a name for the service account, e.g., `datasphere-sa`.
- Click Add role and assign the `storage.viewer` and `storage.uploader` roles to the service account.
- Click Create.
Create a static access key
To access Object Storage from DataSphere, you need a static key.
- In the management console, navigate to the folder the service account belongs to.
- At the top of the screen, go to the Service accounts tab.
- Select the `datasphere-sa` service account.
- In the top panel, click Create new key.
- Select Create static access key.
- Specify the key description and click Create.
- Save the ID and secret key. After you close the dialog, the secret key value will become unavailable.
Alternatively, you can create the key with the CLI:

- Create an access key for the `datasphere-sa` service account:

  ```bash
  yc iam access-key create --service-account-name datasphere-sa
  ```

  Result:

  ```text
  access_key:
    id: aje6t3vsbj8l********
    service_account_id: ajepg0mjt06s********
    created_at: "2022-07-18T14:37:51Z"
    key_id: 0n8X6WY6S24N7Oj*****
  secret: JyTRFdqw8t1kh2-OJNz4JX5ZTz9Dj1rI9hx*****
  ```

- Save the ID (`key_id`) and secret key (`secret`). You will not be able to get the key value again.
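To double-check that the key was created, you can list the service account's access keys. This is an optional check and assumes the yc CLI is configured for your folder:

```bash
yc iam access-key list --service-account-name datasphere-sa
```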
Create an SSH key pair
To connect to a VM over SSH, you need a key pair: the public key resides on the VM, and the private one is kept by the user. This method is more secure than connecting with login and password.
Note
SSH connections using a login and password are disabled by default on public Linux images that are provided by Yandex Cloud.
To create a key pair:
- Open the terminal.
- Use the `ssh-keygen` command to create a new key:

  ```bash
  ssh-keygen -t ed25519 -C "<optional_comment>"
  ```

  You can specify an empty string in the `-C` parameter to avoid adding a comment, or you may skip the `-C` parameter altogether: in this case, a default comment will be added.

  After running this command, you will be prompted to specify the name and path to the key files and to enter the password for the private key. If you only specify the name, the key pair will be created in the current directory. The public key will be saved in a file with the `.pub` extension, and the private key in a file without an extension.

  By default, the command prompts you to save the key under the `id_ed25519` name in the `/home/<username>/.ssh` directory. If there is already an SSH key named `id_ed25519` in this directory, you may accidentally overwrite it and lose access to the resources it is used for. Therefore, you may want to use unique names for all SSH keys.
On Windows, if you have OpenSSH installed:
- Run `cmd.exe` or `powershell.exe` (make sure to update PowerShell before doing so).
- Use the `ssh-keygen` command to create a new key:

  ```bash
  ssh-keygen -t ed25519 -C "<optional_comment>"
  ```

  You can specify an empty string in the `-C` parameter to avoid adding a comment, or you may skip the `-C` parameter altogether: in this case, a default comment will be added.

  After running this command, you will be prompted to specify the name and path to the key files and to enter the password for the private key. If you only specify the name, the key pair will be created in the current directory. The public key will be saved in a file with the `.pub` extension, and the private key in a file without an extension.

  By default, the command prompts you to save the key under the `id_ed25519` name in the `C:\Users\<username>\.ssh` directory. If there is already an SSH key named `id_ed25519` in this directory, you may accidentally overwrite it and lose access to the resources it is used for. Therefore, you may want to use unique names for all SSH keys.
If you do not have OpenSSH, create keys using the PuTTY app:
- Download and install PuTTY.
- Make sure the directory where you installed PuTTY is included in `PATH`:
  - Right-click My computer. Click Properties.
  - In the window that opens, select Additional system parameters, then Environment variables (located in the lower part of the window).
  - Under System variables, find `PATH` and click Edit.
  - In the Variable value field, append the path to the directory where you installed PuTTY.
- Launch the PuTTYgen app.
- Select EdDSA as the pair type to generate. Click Generate and move the cursor in the field above it until key creation is complete.
- In Key passphrase, enter a strong password. Enter it again in the field below.
- Click Save private key and save the private key. Do not share its passphrase with anyone.
- Click Save public key and save the public key in the `<key_name>.pub` file.
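When creating the VM below, you will need to paste the contents of the public key file. On Linux or macOS you can print it to the terminal, for example (assuming the default key name):

```bash
cat ~/.ssh/id_ed25519.pub
```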
Create a VM
- In the management console, select the folder to create your VM in.
- In the list of services, select Compute Cloud.
- Click Create virtual machine.
- Under General information:
  - Enter the VM name, e.g., `mlflow-vm`.
  - Select the availability zone: `ru-central1-a`.
- Under Boot disk image, select `Ubuntu 22.04`.
- Under Disks and file storages, select the Disks tab and configure the boot disk:
  - Type: `SSD`
  - Size: `20 GB`
- Under Computing resources:
  - vCPU: `2`
  - RAM: `4 GB`
- Under Network settings, select the subnet specified in the DataSphere project settings. Make sure to set up a NAT gateway for the subnet.
- Under Access:
  - Service account: `datasphere-sa`
  - Enter the username in the Login field.
  - In the SSH key field, paste the contents of the public key file.
- Click Create VM.
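If you prefer the CLI, the sketch below is a rough yc equivalent of the console settings above. Treat it as an assumption rather than an exact recipe: the subnet name, SSH key file, and image family are placeholders, and flag values may need adjusting for your setup:

```bash
yc compute instance create \
  --name mlflow-vm \
  --zone ru-central1-a \
  --cores 2 \
  --memory 4GB \
  --create-boot-disk image-folder-id=standard-images,image-family=ubuntu-2204-lts,type=network-ssd,size=20GB \
  --network-interface subnet-name=<subnet_name>,nat-ip-version=ipv4 \
  --service-account-name datasphere-sa \
  --ssh-key ~/.ssh/<key_name>.pub
```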
Create a managed DB
- In the management console, select the folder where you want to create a DB cluster.
- Select Managed Service for PostgreSQL.
- Click Create cluster.
- Enter a name for the cluster, e.g., `mlflow-db`.
- Under Host class, select the `s3-c2-m8` configuration.
- Under Size of storage, select `250 GB`.
- Under Database, enter your username and password. You will need them to establish a connection.
- Under Hosts, select the `ru-central1-a` availability zone.
- Click Create cluster.
- Go to the DB you created and click Connect.
- Save the host link from the `host` field: you will need it to establish a connection.
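Before moving on, you can check connectivity to the cluster from the VM with psql. This is a sketch: it assumes the default `db1` database name used later in this tutorial and that the cluster's SSL root certificate is installed on the VM as described in the Managed Service for PostgreSQL documentation (required for `sslmode=verify-full`):

```bash
psql "host=<host> port=6432 dbname=db1 user=<username> sslmode=verify-full"
```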
Create a bucket
- In the management console, select the folder you want to create a bucket in.
- In the list of services, select Object Storage.
- At the top right, click Create bucket.
- In the Name field, enter a name for the bucket, e.g., `mlflow-bucket`.
- In the Object read access, Object listing access, and Read access to settings fields, select Restricted.
- Click Create bucket.
- To create a folder for MLflow artifacts, open the bucket you created and click Create folder.
- Enter a name for the folder, e.g., `artifacts`.
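Optionally, you can verify that the static key of the `datasphere-sa` service account can see the bucket, e.g., with the AWS CLI pointed at the Object Storage endpoint (this assumes the AWS CLI is installed and configured with that key):

```bash
aws --endpoint-url=https://storage.yandexcloud.net s3 ls s3://mlflow-bucket/
```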
Install the MLFlow tracking server and add it to the VM auto-start
- Connect to the VM over SSH.
- Download the Anaconda distribution:

  ```bash
  curl --remote-name https://repo.anaconda.com/archive/Anaconda3-2023.07-1-Linux-x86_64.sh
  ```

- Run the installer:

  ```bash
  bash Anaconda3-2023.07-1-Linux-x86_64.sh
  ```

  Wait for the installation to complete and restart the shell.
- Create an environment:

  ```bash
  conda create -n mlflow
  ```

- Activate the environment:

  ```bash
  conda activate mlflow
  ```

- Install the required packages by running the following commands one by one:

  ```bash
  conda install -c conda-forge mlflow
  conda install -c anaconda boto3
  pip install psycopg2-binary
  pip install pandas
  ```
- Create environment variables for S3 access:

  - Open the file with the variables:

    ```bash
    sudo nano /etc/environment
    ```

  - Add the following lines to the file, substituting your VM's internal IP address:

    ```text
    MLFLOW_S3_ENDPOINT_URL=https://storage.yandexcloud.net/
    MLFLOW_TRACKING_URI=http://<VM_internal_IP_address>:8000
    ```
- Specify the data to be used by the `boto3` library to access S3:

  - Create the `.aws` folder:

    ```bash
    mkdir ~/.aws
    ```

  - Create the `credentials` file:

    ```bash
    nano ~/.aws/credentials
    ```

  - Add the following lines to the file, substituting the static key ID and value:

    ```text
    [default]
    aws_access_key_id=<static_key_ID>
    aws_secret_access_key=<secret_key>
    ```
- Run the MLflow tracking server with the following command, replacing the placeholders with your cluster data:

  ```bash
  mlflow server --backend-store-uri postgresql://<username>:<password>@<host>:6432/db1?sslmode=verify-full --default-artifact-root s3://mlflow-bucket/artifacts -h 0.0.0.0 -p 8000
  ```

  You can check your connection to MLflow at `http://<VM_public_IP_address>:8000`.
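For a quick check from your local machine, you can also query the tracking server's health endpoint; a running MLflow server responds with OK (substitute the VM's public IP address):

```bash
curl http://<VM_public_IP_address>:8000/health
```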
Enable MLFlow autorun
For MLflow to run automatically after the VM restarts, you need to convert it into a systemd service.
- Create directories for storing logs and error details:

  ```bash
  mkdir ~/mlflow_logs/
  mkdir ~/mlflow_errors/
  ```

- Create the `mlflow-tracking.service` file:

  ```bash
  sudo nano /etc/systemd/system/mlflow-tracking.service
  ```
- Add the following lines to the file, replacing the placeholders with your data:

  ```ini
  [Unit]
  Description=MLflow Tracking Server
  After=network.target

  [Service]
  Environment=MLFLOW_S3_ENDPOINT_URL=https://storage.yandexcloud.net/
  Restart=on-failure
  RestartSec=30
  StandardOutput=file:/home/<VM_user_name>/mlflow_logs/stdout.log
  StandardError=file:/home/<VM_user_name>/mlflow_errors/stderr.log
  User=<VM_user_name>
  ExecStart=/bin/bash -c 'PATH=/home/<VM_user_name>/anaconda3/envs/mlflow/bin/:$PATH exec mlflow server --backend-store-uri postgresql://<DB_user_name>:<password>@<host>:6432/db1?sslmode=verify-full --default-artifact-root s3://mlflow-bucket/artifacts -h 0.0.0.0 -p 8000'

  [Install]
  WantedBy=multi-user.target
  ```

  Where:

  - `<VM_user_name>`: VM account username.
  - `<DB_user_name>`: Username specified when creating the database cluster.
- Run the service and enable it to start automatically at system boot:

  ```bash
  sudo systemctl daemon-reload
  sudo systemctl enable mlflow-tracking
  sudo systemctl start mlflow-tracking
  sudo systemctl status mlflow-tracking
  ```
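If the service fails to start, the log files defined in the unit above and the systemd journal are the first places to look, for example:

```bash
tail -n 50 /home/<VM_user_name>/mlflow_logs/stdout.log
tail -n 50 /home/<VM_user_name>/mlflow_errors/stderr.log
sudo journalctl -u mlflow-tracking -n 50 --no-pager
```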
Create secrets
- Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
- Under Project resources, click Secret.
- Click Create.
- In the Name field, enter the name for the secret: `MLFLOW_S3_ENDPOINT_URL`.
- In the Value field, paste the URL: `https://storage.yandexcloud.net/`.
- Click Create.
- Create three more secrets:
  - `MLFLOW_TRACKING_URI` with the `http://<VM_internal_IP_address>:8000` value.
  - `AWS_ACCESS_KEY_ID` with the static key ID.
  - `AWS_SECRET_ACCESS_KEY` with the static key value.
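DataSphere exposes project secrets to notebooks as environment variables, which is how the training code below reaches the tracking server and the bucket. A minimal sketch to confirm they are visible (the names match the secrets created above):

```python
import os

# Each secret created above should resolve to a non-empty environment variable
for name in (
    "MLFLOW_TRACKING_URI",
    "MLFLOW_S3_ENDPOINT_URL",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
):
    print(name, "is set:", bool(os.environ.get(name)))
```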
Train the model
The example uses a dataset for predicting wine quality based on quantitative characteristics such as acidity, pH, and residual sugar. To train the model, copy and paste the code into notebook cells.
- Open the DataSphere project:
  - Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
  - Click Open project in JupyterLab and wait for the loading to complete.
  - Open the notebook tab.
- Install the required modules:

  ```python
  %pip install mlflow
  ```
- Import the required libraries:

  ```python
  import os
  import warnings
  import sys

  import pandas as pd
  import numpy as np
  from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
  from sklearn.model_selection import train_test_split
  from sklearn.linear_model import ElasticNet
  from urllib.parse import urlparse
  import mlflow
  import mlflow.sklearn
  from mlflow.models import infer_signature
  import logging
  ```
- Create an experiment in MLflow:

  ```python
  mlflow.set_experiment("my_first_experiment")
  ```
- Create a function for prediction quality assessment:

  ```python
  def eval_metrics(actual, pred):
      rmse = np.sqrt(mean_squared_error(actual, pred))
      mae = mean_absolute_error(actual, pred)
      r2 = r2_score(actual, pred)
      return rmse, mae, r2
  ```
- Prepare data, train the model, and register it with MLflow:

  ```python
  logging.basicConfig(level=logging.WARN)
  logger = logging.getLogger(__name__)

  warnings.filterwarnings("ignore")
  np.random.seed(40)

  # Uploading dataset to assess wine quality
  csv_url = (
      "https://raw.githubusercontent.com/mlflow/mlflow/master/tests/datasets/winequality-red.csv"
  )
  try:
      data = pd.read_csv(csv_url, sep=";")
  except Exception as e:
      logger.exception(
          "Unable to download training & test CSV, check your internet connection. Error: %s", e
      )

  # Splitting dataset into training and test samples
  train, test = train_test_split(data)

  # Allocating target variable and variables used for prediction
  train_x = train.drop(["quality"], axis=1)
  test_x = test.drop(["quality"], axis=1)
  train_y = train[["quality"]]
  test_y = test[["quality"]]

  alpha = 0.5
  l1_ratio = 0.5

  # Creating mlflow run
  with mlflow.start_run():
      # Creating and training the ElasticNet model
      lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
      lr.fit(train_x, train_y)

      # Making quality predictions against the test sample
      predicted_qualities = lr.predict(test_x)
      (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)

      print("Elasticnet model (alpha={:f}, l1_ratio={:f}):".format(alpha, l1_ratio))
      print("  RMSE: %s" % rmse)
      print("  MAE: %s" % mae)
      print("  R2: %s" % r2)

      # Logging data on hyperparameters and quality metrics in MLflow
      mlflow.log_param("alpha", alpha)
      mlflow.log_param("l1_ratio", l1_ratio)
      mlflow.log_metric("rmse", rmse)
      mlflow.log_metric("r2", r2)
      mlflow.log_metric("mae", mae)

      predictions = lr.predict(train_x)
      signature = infer_signature(train_x, predictions)

      tracking_url_type_store = urlparse(mlflow.get_tracking_uri()).scheme

      # Registering model in MLflow
      if tracking_url_type_store != "file":
          mlflow.sklearn.log_model(
              lr, "model", registered_model_name="ElasticnetWineModel", signature=signature
          )
      else:
          mlflow.sklearn.log_model(lr, "model", signature=signature)
  ```
You can check the result at `http://<VM_public_IP_address>:8000`.
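To confirm that the run and the model reached the tracking server, you can query it from the same notebook. This is a sketch under two assumptions: the experiment and model names match those used above, and the registered model received version 1:

```python
import mlflow
import mlflow.sklearn

# List runs logged to the experiment created earlier, with the metrics recorded above
runs = mlflow.search_runs(experiment_names=["my_first_experiment"])
print(runs[["run_id", "metrics.rmse", "metrics.r2"]])

# Load the registered model back from the registry and reuse it for predictions
model = mlflow.sklearn.load_model("models:/ElasticnetWineModel/1")
print(model.predict(test_x.head()))
```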
How to delete the resources you created
To stop paying for the resources you created:
- Delete the VM.
- Delete the database cluster.
- Delete the objects from the bucket.
- Delete the bucket.
- Delete the project.