H2O LLM Studio

Updated March 31, 2026

With H2O LLM Studio, you can easily solve a wide range of LLM fine-tuning tasks without writing a line of code.

Benefits

  • No coding experience required.
  • LLM-tailored GUI.
  • State-of-the-art fine-tuning methods, such as low-rank adaptation (LoRA) and 8-bit model training.
  • Reinforcement learning (experimental feature).
  • Model response evaluation metrics.
  • Tracking and visual comparison of model outputs.
  • Chat with the LLM to quickly evaluate quality.
  • Easy model export to the Hugging Face Hub.

Deployment instructions
  1. Get an SSH key pair for connection to the VM.

  2. Click Create VM on the right of this product card and configure the new VM:

    1. Under Boot disk image, the system automatically selects the appropriate image.
    2. Under Network settings, configure Public IP address as follows:
      • Auto: For a random IP address.
      • List: If you have a reserved static IP address.
    3. Under Access:
      • Enter the username in the Login field.
      • In the SSH key field, select the SSH key you got earlier from the list.
    4. Under Computing resources:
      1. Navigate to the GPU tab.
      2. Select a GPU platform.
    5. Click Create VM.
  3. Wait until the VM status switches to Running.

  4. Connect to the VM over SSH by using the following:

    • The username you set when creating the VM and the private SSH key you got earlier.

    • Local forwarding for TCP port 10101.
      For example:

      ssh -i <key_path>/<key_file_name> -L 10101:localhost:10101 <username>@<VM_public_IP_address>
      

    The ufw firewall on this VM allows incoming traffic only on port 22 (SSH), which is why you need local port forwarding to reach the UI.

  5. To access the UI, open http://localhost:10101 in your web browser.
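The connection command from step 4 can be sketched as a short script. The key path, username, and IP address below are placeholder values for illustration only, not values tied to this product:

```shell
# Placeholder values for illustration only; substitute your own.
KEY_PATH="$HOME/.ssh/h2o_key"
USER_NAME="h2o-user"
VM_IP="203.0.113.10"

# Assemble the command: -i selects the private key; -L forwards
# local port 10101 to port 10101 on the VM, where the UI listens.
SSH_CMD="ssh -i $KEY_PATH -L 10101:localhost:10101 $USER_NAME@$VM_IP"
echo "$SSH_CMD"
```

While the tunnel is open, requests to http://localhost:10101 on your local machine are forwarded through the SSH connection to the UI on the VM.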

H2O LLM Studio starts as a Docker container, as described in its README. The container’s port 10101 is published to the same port on your VM.

The /usr/local/h2o/data/ and /usr/local/h2o/output/ folders are mounted to the container as volumes, meaning that data used and generated by H2O LLM Studio persists across VM restarts and shutdowns.
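Putting the two paragraphs above together, the container startup is roughly equivalent to the following sketch. The image name and the container-side mount paths here are hypothetical; the actual values are defined by the product's startup scripts and the H2O LLM Studio README:

```shell
# Hypothetical sketch only; the real image name, container paths,
# and additional flags used by this product may differ.
docker run -d \
  --gpus all \
  -p 10101:10101 \
  -v /usr/local/h2o/data:/workspace/data \
  -v /usr/local/h2o/output:/workspace/output \
  <h2o_llm_studio_image>
```

Because the host directories are bind-mounted as volumes, recreating the container or restarting the VM leaves the data and output folders intact.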

Billing type
Free
Type
Virtual Machine
Category
ML & AI
Publisher
Yandex Cloud
Use cases
  • Fine-tuning LLMs via a user-friendly GUI
  • Using LoRA and 8-bit model training
  • Assessing LLM performance
  • Collecting and evaluating LLM performance metrics
  • Experimenting with LLMs
Technical support

Yandex Cloud technical support is available 24/7. The types of requests available to you and their response time depend on your pricing plan. You can activate paid support in the management console. You can learn more about getting technical support here.

Product IDs
image_id:
fd8pcgfmrihugj810q2a
family_id:
h2o-llm-studio
Product composition
Software                    Version
Ubuntu                      22.04 LTS
Docker                      5:27.1.2-1~ubuntu.22.04~jammy
Nvidia Container Toolkit    1.16.1-1
Terms
By using this product you agree to the Yandex Cloud Marketplace Terms of Service