
In this article:

  • Generation seed
  • Prompt
  • Temperature

Yandex Cloud AI Studio terms and definitions

Written by
Yandex Cloud
Updated at December 3, 2025

Generation seed

Generation seed is the starting point for generating an image from noise, used to make results repeatable: the same prompt with the same seed returns an identical generation result. To change the generated image, change the seed value or the description.

In a YandexART model, seed values may range from 0 to 2⁶³−1.
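The role of the seed can be illustrated with any pseudorandom generator: the seed fully determines the starting noise, so the whole generation pipeline becomes repeatable. This is a minimal sketch of the principle, not the YandexART implementation:

```python
import random

def initial_noise(seed: int, size: int = 4) -> list[float]:
    """Derive starting noise for a generation from a seed.

    The same seed always yields the same noise, and therefore the
    same image for the same prompt; a different seed changes it.
    """
    rng = random.Random(seed)  # isolated generator, no global state
    return [rng.random() for _ in range(size)]

# Identical seeds reproduce the generation; a new seed changes it.
assert initial_noise(42) == initial_noise(42)
assert initial_noise(42) != initial_noise(43)
```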

Prompt

Generative models are controlled through prompts. A good prompt contains the context of your request to the model (the instruction) and the actual task the model should complete based on that context. The more specific your prompt, the more accurate the model's output will be.

Apart from the prompt, other request parameters also affect the model's output. Use the AI Playground, available in the management console, to test your requests.
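One common way to apply the instruction-plus-task structure is to send the instruction as a system message and the task as a user message, as in chat-style completion APIs. The field names below are an assumption for illustration, not the exact AI Studio request schema:

```python
def build_prompt(instruction: str, task: str) -> list[dict]:
    """Split a request into context (instruction) and the task itself.

    The "role"/"text" field names are a hypothetical chat-style
    format used here only to show the structure of a good prompt.
    """
    return [
        {"role": "system", "text": instruction},  # context for the model
        {"role": "user", "text": task},           # the actual task
    ]

messages = build_prompt(
    instruction="You are a copywriter. Answer in one short sentence.",
    task="Suggest a slogan for a cloud image-generation service.",
)
```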

Temperature

Temperature is an LLM parameter that controls response variability: the higher the temperature, the less predictable the result. Typical values range from 0 to 1.
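Mechanically, temperature usually rescales the model's token scores (logits) before sampling: dividing by a low temperature sharpens the distribution toward the most likely token, while a temperature of 1 leaves it unchanged. This is a generic sketch of the mechanism, not AI Studio internals:

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert logits to sampling probabilities at a given temperature.

    Requires temperature > 0; real APIs typically treat 0 as
    greedy (always pick the most likely token).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 1.0)  # more varied sampling

# Lower temperature concentrates probability on the top token.
assert low[0] > high[0]
```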

© 2025 Direct Cursus Technology L.L.C.