Sending a request in background mode

Written by Yandex Cloud. Updated on December 29, 2025.

For large-scale text generation tasks, such as processing very large documents, background request mode may be the most efficient approach.

In background mode, the Responses API does not wait for the model to return the generation result; instead, it immediately returns a task ID and closes the connection. You can then use this ID to periodically check the task status and retrieve the generation result once it is ready.

If your request is short and you expect a short response, or if you are accessing a text generation model in chat mode, use synchronous request mode instead.

To complete the steps from this example, create a service account with the ai.languageModels.user role and get an API key with the yc.ai.foundationModels.execute scope.

Python
import openai
import time

YANDEX_CLOUD_API_KEY = "<API_key>"
YANDEX_FOLDER_ID = "<folder_ID>"
YANDEX_CLOUD_MODEL = "yandexgpt"

client = openai.OpenAI(
    api_key=YANDEX_CLOUD_API_KEY,
    base_url="https://rest-assistant.api.cloud.yandex.net/v1",
    project=YANDEX_FOLDER_ID,
)

# --- 1. Creating a response in the background
resp = client.responses.create(
    model=f"gpt://{YANDEX_FOLDER_ID}/{YANDEX_CLOUD_MODEL}",
    input="Create a brief summary of the text: 'Yandex AI Studio features over 20 models deployed in the cloud and available in various modes. The YandexGPT family models remain the most popular models in terms of consumption, claiming 62.7% of the cloud platform traffic. This high demand allowed Yandex to bring down the prices of its proprietary models, making them more affordable. The open-source models Qwen3‑235b from Alibaba Group (30.9%) and GPT‑OSS from OpenAI (5.7%) take the second and third place accordingly.'",
    background=True,  # Running in the background
)

print("Task sent:", resp.id)

# --- 2. Asking for status
while True:
    status = client.responses.retrieve(resp.id)
    print("Status:", status.status)
    if status.status in ["completed", "failed", "cancelled"]:
        break
    time.sleep(2)

# --- 3. Getting the result
if status.status == "completed":
    print("Final response:", status.output_text)
else:
    print("Error:", status.status)

Where:

  • YANDEX_CLOUD_API_KEY: The service account API key you obtained earlier.

  • YANDEX_FOLDER_ID: Service account folder ID.

  • resp: Object with results of the response generation request.

    Possible generation result statuses include:

    • queued: Task is queued to run.
    • in_progress: Task is running.
    • failed: Task failed with an error.
    • cancelled: Task was cancelled.
    • completed: Task completed successfully.

    The generation result will be saved to the status.output_text field when the status changes to completed.
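
    For longer-running tasks, you may want to replace the fixed polling loop from the example with a small helper that enforces a timeout. Below is a minimal sketch reusing the client from the example above; the helper name, timeout, and poll interval are illustrative assumptions, not part of the SDK.

Python
import time

def wait_for_response(client, response_id, timeout_s=300, poll_interval_s=2):
    """Poll a background response until it reaches a terminal status or the timeout expires.

    Illustrative helper: the name and default values are assumptions, not part of the SDK.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = client.responses.retrieve(response_id)
        if status.status in ("completed", "failed", "cancelled"):
            return status
        time.sleep(poll_interval_s)
    raise TimeoutError(f"Response {response_id} did not finish in {timeout_s} s")

# Usage with the objects from the example above:
# result = wait_for_response(client, resp.id)
# if result.status == "completed":
#     print(result.output_text)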

Result example:

Task sent: 1e5ee267-2d01-49d7-abf9-94b9********
Status: queued
Status: completed
Final response: Yandex AI Studio offers more than 20 cloud models. The most popular ones are YandexGPT models (62.7% of traffic), followed by Qwen3‑235b (30.9%) and GPT‑OSS (5.7%). High demand for YandexGPT allowed the company to reduce its model prices.
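
For comparison, here is a minimal synchronous sketch using the same client and placeholder values as in the background example above. It assumes the endpoint behaves like the standard Responses API: without background=True, the call blocks until the model returns the result.

Python
# A minimal synchronous sketch, assuming the same client, folder ID, and model
# as in the background example above. Without background=True the call blocks
# until the generation result is returned.
sync_resp = client.responses.create(
    model=f"gpt://{YANDEX_FOLDER_ID}/{YANDEX_CLOUD_MODEL}",
    input="In one sentence, explain the difference between background and synchronous request modes.",
)
print(sync_resp.output_text)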

See also

  • Sending a request in synchronous mode
  • Overview of Yandex AI Studio AI models
