H2O LLM Studio

Updated February 11, 2025

With H2O LLM Studio, you can easily solve a large set of LLM fine-tuning tasks without any coding experience.

Advantages:

  • no coding experience required
  • graphical user interface (GUI) specifically designed for large language models
  • support for recent fine-tuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint
  • use Reinforcement Learning (RL) to fine-tune your model (experimental)
  • use advanced evaluation metrics to judge the answers generated by the model
  • track and compare your model performance visually
  • chat with your model and get instant feedback on your model’s performance
  • easily export your model to the Hugging Face Hub and share it with the community
Deployment instructions
  1. Create an SSH key pair.

  2. Click the button in this card to go to VM creation. The image will be automatically selected under Image/boot disk selection.

  3. Under Network settings, enable a public IP address for the VM (Public IP: Auto for a random address or List if you have a reserved static address).

  4. Under Access, paste the public key from the pair into the SSH key field.

  5. Create the VM. Be sure to select a platform with a GPU; the list of available platforms is here:
    https://yandex.cloud/ru/docs/compute/concepts/vm-platforms

  6. Connect to the VM via SSH using local forwarding for TCP port 10101. For example:

    ssh -i <path_to_private_SSH_key> -L 10101:localhost:10101 <username>@<VM's_public_IP_address>
    

    The ufw firewall in this product allows incoming traffic only on TCP port 22 (SSH), which is why you need local port forwarding to reach the web interface.

  7. To access the user interface, go to http://localhost:10101 in your web browser.
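Step 1 can be done with standard OpenSSH tooling. A minimal sketch, where the key directory and file name are examples rather than requirements:

```shell
# Sketch of step 1: generate an SSH key pair with ssh-keygen.
# The key path is an example; any writable location works.
KEY_DIR="$(mktemp -d)"
ssh-keygen -t ed25519 -N "" -q -f "$KEY_DIR/h2o_llm_studio_key"

# The .pub file is what you paste into the "SSH key" field in step 4;
# the private key (no extension) is what you pass to ssh -i in step 6.
cat "$KEY_DIR/h2o_llm_studio_key.pub"
```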

H2O LLM Studio is started as a Docker container, as described in its README. The container’s port 10101 is published to the same port on your VM.

The directories /usr/local/h2o/data/ and /usr/local/h2o/output/ are mounted to the container as volumes, meaning that data used and created by H2O LLM Studio is persistent between VM restarts and shutdowns.
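Putting the two paragraphs above together, the container start-up likely resembles the following `docker run` invocation. This is only a sketch: the image name and the in-container mount points are assumptions based on the upstream H2O LLM Studio README, not verified against this VM image, so the command is printed here rather than executed:

```shell
# Hypothetical sketch of how the image starts H2O LLM Studio.
# Assumptions: image name and container-side mount paths; the host-side
# paths and the published port come from the product description above.
DOCKER_CMD='docker run --gpus all -d \
  -p 10101:10101 \
  -v /usr/local/h2o/data:/workspace/data \
  -v /usr/local/h2o/output:/workspace/output \
  gcr.io/vorvan/h2oai/h2o-llmstudio:nightly'
echo "$DOCKER_CMD"
```

Because the host directories are mounted as volumes rather than stored in the container's writable layer, recreating the container does not lose your data.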

Billing type: Free
Type: Virtual Machine
Category: ML & AI
Publisher: Yandex Cloud
Use cases
  • fine-tuning LLMs via a GUI
  • using LoRA and 8-bit model training
  • assessing LLM performance
  • evaluating LLM metrics
  • running experiments with LLMs
Technical support

Yandex Cloud technical support is available 24/7. The types of requests available to you and their response times depend on your pricing plan. You can activate paid support in the management console. For details, see the Yandex Cloud technical support documentation.

Product IDs
image_id: fd8pcgfmrihugj810q2a
family_id: h2o-llm-studio
Product composition
SoftwareVersion
Ubuntu22.04 LTS
Docker5:27.1.2-1~ubuntu.22.04~jammy
Nvidia Container Toolkit1.16.1-1
Terms
By using this product, you agree to the Yandex Cloud Marketplace Terms of Service.