H2O LLM Studio
With H2O LLM Studio, you can easily solve a wide range of LLM fine-tuning tasks without writing a line of code.
Benefits
- No coding experience required.
- LLM-tailored GUI.
- State-of-the-art fine-tuning methods, such as low-rank adaptation (LoRA) and 8-bit model training.
- Reinforcement learning (experimental feature).
- Model response evaluation metrics.
- Tracking and visual comparison of model outputs.
- Chat with the LLM to quickly evaluate quality.
- Easy model export to the Hugging Face Hub.
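To make the low-rank adaptation (LoRA) bullet above concrete, here is a minimal NumPy sketch of the underlying idea: instead of updating a full weight matrix, LoRA trains two small low-rank factors whose product is added to the frozen weight. All names and sizes are illustrative, not H2O LLM Studio internals.

```python
import numpy as np

# LoRA sketch: the frozen pretrained weight W (d_out x d_in) stays fixed;
# only A (r x d_in) and B (d_out x r), with rank r << d, are trained.
# The adapted layer computes (W + B @ A) @ x.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero init: no change at start

x = rng.standard_normal(d_in)

# At initialization the adapted layer matches the frozen layer exactly,
# because B @ A is the zero matrix.
y_frozen = W @ x
y_adapted = (W + B @ A) @ x
print(np.allclose(y_frozen, y_adapted))  # True

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)  # 512 vs 4096
```

The parameter count in the last line shows why this is cheap: the adapter trains 512 values here instead of 4096, and the gap widens rapidly for real LLM weight matrices.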
- Get an SSH key pair for connection to the VM.
- Click Create VM on the right of this product card to proceed with creating a VM:
  - Under Boot disk image, the system automatically selects the appropriate image.
  - Under Network settings, configure Public IP address as follows:
    - Auto: for a random IP address.
    - List: if you have a reserved static IP address.
  - Under Access:
    - Enter the username in the Login field.
    - In the SSH key field, select the SSH key you got earlier from the list.
  - Under Computing resources:
    - Navigate to the GPU tab.
    - Select a GPU platform.
  - Click Create VM.
- Wait until the VM status switches to Running.
- Connect to the VM over SSH using:
  - The username you set when creating the VM and the private SSH key you got earlier.
  - Local forwarding for TCP port 10101. For example:

    `ssh -i <key_path\key_file_name> -L 10101:localhost:10101 <username>@<VM_public_IP_address>`

  The `ufw` firewall in this product only allows incoming traffic to port 22 (SSH). This is why you need local port forwarding when connecting.
- To access the UI, open http://localhost:10101 in your web browser.
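Before opening the browser, you may want to confirm the SSH tunnel from the steps above is actually up. The small helper below, a sketch rather than part of the product, just attempts a TCP connection to the forwarded port; the port number matches the forwarding example, everything else is illustrative.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 10101 is the locally forwarded H2O LLM Studio port from the ssh -L example.
    if port_open("localhost", 10101):
        print("Tunnel is up: open http://localhost:10101")
    else:
        print("Port 10101 is closed: check the ssh -L command and the VM status")
```

If the check fails, the usual causes are a dropped SSH session or a VM that has not reached the Running status yet.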
H2O LLM Studio starts as a Docker container, as described in its README. The container’s port 10101 is published to the same port on your VM.
The /usr/local/h2o/data/ and /usr/local/h2o/output/ folders are mounted into the container as volumes, so data used and generated by H2O LLM Studio persists across VM restarts and shutdowns.
- Fine-tuning LLMs via a user-friendly GUI
- Using LoRA and 8-bit model training
- Assessing LLM performance
- Collecting and evaluating LLM performance metrics
- Experimenting with LLMs
Yandex Cloud technical support is available 24/7. The types of requests available to you and their response time depend on your pricing plan. You can activate paid support in the management console. You can learn more about getting technical support here.
| Software | Version |
|---|---|
| Ubuntu | 22.04 LTS |
| Docker | 5:27.1.2-1~ubuntu.22.04~jammy |
| Nvidia Container Toolkit | 1.16.1-1 |