Running computations in DataSphere using the API
In Yandex DataSphere, you can run computations in a project notebook without opening JupyterLab by calling the DataSphere API. In this tutorial, you will train a simple convolutional neural network (CNN), run it on a test sample, and trigger the computation from a Cloud Functions function. For information on how to deploy a service that returns results via the API, see Deploying a service based on a Docker image with FastAPI.

To complete the tutorial:
- Prepare your infrastructure.
- Prepare notebooks.
- Train a neural network.
- Load the model architecture and weights.
- Create a Cloud Functions function.
If you no longer need the resources you created, delete them.
Getting started
Before getting started, register in Yandex Cloud, set up a community, and link your billing account to it.
- On the DataSphere home page, click Try for free and select an account to log in with: Yandex ID or your working account in the identity federation (SSO).
- Select the Yandex Cloud Organization organization you are going to use in Yandex Cloud.
- Create a community.
- Link your billing account to the DataSphere community you are going to work in. Make sure that you have a billing account linked and its status is ACTIVE or TRIAL_ACTIVE. If you do not have a billing account yet, create one in the DataSphere interface.
Required paid resources
The cost of running computations via the API includes:
- Fee for DataSphere computing resource usage.
- Fee for the number of Cloud Functions function calls.
Prepare the infrastructure
Log in to the Yandex Cloud management console. If you have an active billing account, you can create or select a folder to deploy your infrastructure in on the cloud page.
Note
If you use an identity federation to access Yandex Cloud, billing details might be unavailable to you. In this case, contact your Yandex Cloud organization administrator.
Create a folder
- In the management console, select a cloud and click Create folder.
- Give your folder a name, e.g., data-folder.
- Click Create.
Create a service account for the DataSphere project
To access a DataSphere project from a Cloud Functions function, you need a service account with the datasphere.community-projects.editor role.
- In the management console, go to data-folder.
- In the list of services, select Identity and Access Management.
- Click Create service account.
- Enter a name for the service account, e.g., datasphere-sa.
- Click Add role and assign the service account the datasphere.community-projects.editor role.
- Click Create.
Add the service account to a project
To enable the service account to run a DataSphere project, add it to the list of project members:
- Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
- In the Members tab, click Add member.
- Select the datasphere-sa account and click Add.
Prepare notebooks and your neural network's architecture
Clone the Git repository containing the notebooks with the examples of the ML model training and testing:
- In the top menu, click Git and select Clone.
- In the window that opens, enter https://github.com/yandex-cloud-examples/yc-datasphere-batch-execution.git and click Clone.
Wait until cloning is complete; it may take some time. You will see the cloned repository folder in the left-hand panel of JupyterLab.
The repository contains two notebooks and the neural network architecture:

- train_classifier.ipynb: Notebook for downloading a training sample of the CIFAR10 dataset and training a simple neural network.
- test_classifier.ipynb: Notebook for testing the model.
- my_nn_model.py: Neural network architecture. The network takes three-dimensional images as input for classification. It contains two convolutional layers with a maxpool layer between them and three linear layers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
```
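To see where the 16 * 5 * 5 input size of the first linear layer comes from, you can trace a dummy CIFAR10-sized batch through the network. This is a quick check you can run alongside my_nn_model.py; it is not part of the repository notebooks:

```python
import torch

from my_nn_model import Net

net = Net()
x = torch.zeros(1, 3, 32, 32)  # one CIFAR10-sized RGB image
# conv1 (5x5 kernel): 32x32 -> 28x28, maxpool -> 14x14
# conv2 (5x5 kernel): 14x14 -> 10x10, maxpool -> 5x5 with 16 channels,
# so flattening yields 16 * 5 * 5 = 400 features for fc1
out = net(x)
print(out.shape)  # torch.Size([1, 10]): one logit per CIFAR10 class
```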
Train a neural network
In the train_classifier.ipynb notebook, you will download a training sample of the CIFAR10 dataset and train a simple neural network. The trained model's weights will be saved to the project storage in a file named cifar_net.pth.
- Open the DataSphere project:
  - Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
  - Click Open project in JupyterLab and wait for the loading to complete.
  - Open the notebook tab.
- Import the libraries required to train the model:

```python
import torch
import torch.nn as nn  # needed below for the nn.CrossEntropyLoss loss function
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim

from my_nn_model import Net
```
- Download the CIFAR10 dataset to train the model. The images in the dataset fall into 10 classes:

```python
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch_size = 4

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
```
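If you want to confirm the download worked as expected, you can run a quick sanity check. It relies only on the objects created above; the printed values reflect that CIFAR10 ships 50,000 training images and that the Normalize transform maps pixels to roughly [-1, 1]:

```python
# Optional sanity check for the training data (not in the notebook).
images, labels = next(iter(trainloader))
print(len(trainset))                             # 50000 training images
print(images.shape)                              # torch.Size([4, 3, 32, 32])
print(images.min().item(), images.max().item())  # approximately -1.0 and 1.0
```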
- Output sample images from the dataset:

```python
import matplotlib.pyplot as plt
import numpy as np


def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


dataiter = iter(trainloader)
images, labels = next(dataiter)

imshow(torchvision.utils.make_grid(images))
print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size)))
```
- Create a loss function and an optimizer required to train the neural network:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = Net()
net.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
- Run training for five epochs:

```python
for epoch in range(5):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs and move them to the selected device
        inputs, labels = data[0].to(device), data[1].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics every 2,000 mini-batches
        running_loss += loss.item()
        if i % 2000 == 1999:
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')
```
- Save the resulting model to the project disk:

```python
torch.save(net.state_dict(), './cifar_net.pth')
```
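Before leaving the notebook, you can optionally confirm the checkpoint is readable by loading it into a fresh model instance; a minimal sketch:

```python
# Optional: verify the saved weights load back cleanly.
check_net = Net()
check_net.load_state_dict(torch.load('./cifar_net.pth', map_location='cpu'))
check_net.eval()  # switch to inference mode
```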
Load the model architecture and weights
In the test_classifier.ipynb notebook, you will load the model architecture and the weights saved while running the train_classifier.ipynb notebook. The loaded model is used to make predictions on the test sample. The prediction results are saved to a file named test_predictions.csv.
- Open the DataSphere project:
  - Select the relevant project in your community or on the DataSphere homepage in the Recent projects tab.
  - Click Open project in JupyterLab and wait for the loading to complete.
  - Open the notebook tab.
- Import the libraries required to run the model and make predictions:

```python
import torch
import torchvision
import torchvision.transforms as transforms
import pandas as pd

from my_nn_model import Net
```
- Prepare the objects that will enable you to access the test sample:

```python
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch_size = 4

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
```
- Set the resource configuration to run the model on, CPU or GPU:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```
- Load the trained model's weights and make predictions on the test sample:

```python
net = Net()
net.to(device)
# map_location lets GPU-trained weights load on a CPU-only configuration as well
net.load_state_dict(torch.load('./cifar_net.pth', map_location=device))

predictions = []
predicted_labels = []

with torch.no_grad():
    for data in testloader:
        images, labels = data[0].to(device), data[1].to(device)
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        predictions.append(predicted.tolist())
        predicted_labels.append([classes[predicted[j]] for j in range(batch_size)])
```
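The notebook stores per-image predictions but does not report overall quality. If you also want an accuracy figure, a short sketch reusing the objects defined above:

```python
# Optional: overall accuracy on the 10,000-image test sample.
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data[0].to(device), data[1].to(device)
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f'Accuracy: {100 * correct / total:.1f}%')
```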
- Save the predictions in pandas.DataFrame format:

```python
final_pred = pd.DataFrame(
    {'class_idx': [item for sublist in predictions for item in sublist],
     'class': [item for sublist in predicted_labels for item in sublist]})
```
- Save the model predictions to a file:

```python
final_pred.to_csv('/home/jupyter/datasphere/project/test_predictions.csv')
```
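To confirm the file was written as expected, you can read it back in the same notebook; a quick check:

```python
# Optional: read the predictions file back and inspect it.
df = pd.read_csv('/home/jupyter/datasphere/project/test_predictions.csv', index_col=0)
print(df.head())
print(len(df))  # 10000: one prediction per test image
```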
Create a Cloud Functions function
To run the notebook cells without opening JupyterLab, you need a Cloud Functions function that will trigger computations in the notebook via the API.
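Before wrapping this call in a function, you can try the same DataSphere API request from any machine. A minimal sketch, assuming your account has access to the project and you have an IAM token (e.g., the output of `yc iam create-token`) in the IAM_TOKEN environment variable and the project ID in DS_PROJECT_ID (both variable names are arbitrary):

```python
import os

import requests

# Assumptions: IAM_TOKEN holds a valid IAM token, DS_PROJECT_ID the project ID.
iam_token = os.environ['IAM_TOKEN']
project_id = os.environ['DS_PROJECT_ID']

url = f'https://datasphere.api.cloud.yandex.net/datasphere/v2/projects/{project_id}:execute'
body = {'notebookId': '/home/jupyter/datasphere/project/test_classifier.ipynb'}
headers = {'Content-Type': 'application/json',
           'Authorization': f'Bearer {iam_token}'}

resp = requests.post(url, json=body, headers=headers)
print(resp.status_code, resp.json())
```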
- In the management console, select the folder where you want to create a function.
- Select Cloud Functions.
- Click Create function.
- Enter a name for the function, e.g., ai-function.
- Click Create function.
Create a Cloud Functions function version
Versions contain the function code, run parameters, and all required dependencies.
- In the management console, select the folder containing the function.
- Select Cloud Functions.
- Select the function to create a version of.
- Under Last version, click Create in editor.
- Select the Python runtime environment. Do not select the Add files with code examples option.
- Choose the Code editor method.
- Click Create file and specify a file name, e.g., index.
- Enter the function code, inserting your project ID and the absolute path to the project notebook:

```python
import requests


def handler(event, context):
    url = 'https://datasphere.api.cloud.yandex.net/datasphere/v2/projects/<project_ID>:execute'
    body = {"notebookId": "/home/jupyter/datasphere/project/test_classifier.ipynb"}
    headers = {"Content-Type": "application/json",
               "Authorization": "Bearer {}".format(context.token['access_token'])}
    resp = requests.post(url, json=body, headers=headers)
    return {
        'body': resp.json(),
    }
```
Where:

- <project_ID>: ID of the DataSphere project displayed on the project page under its name.
- notebookId: Absolute path to the project notebook.
- Under Parameters, set the version parameters:
  - Entry point: index.handler.
  - Service account: datasphere-sa.
- In the top-right corner, click Save changes.
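Once the version is saved, you can check the end-to-end flow by invoking the function over HTTPS. A sketch, assuming the function is private and is therefore called with your IAM token, and that you substitute the function ID shown on its overview page:

```python
import os

import requests

# Assumptions: FUNCTION_ID is the ID from the function's overview page,
# IAM_TOKEN a valid IAM token (e.g., from `yc iam create-token`).
function_id = os.environ['FUNCTION_ID']
iam_token = os.environ['IAM_TOKEN']

resp = requests.post(
    f'https://functions.yandexcloud.net/{function_id}',
    headers={'Authorization': f'Bearer {iam_token}'},
)
print(resp.status_code, resp.text)
```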
How to delete the resources you created
To stop paying for the resources you created:
- Delete the function.
- Delete the project in DataSphere.