
Calling a function from a model

Written by
Yandex Cloud
Updated at October 24, 2025

When working with models, you can use function calls to access external tools, APIs, and databases.

For example, suppose you have a weatherTool function that takes a city name as its input parameter and returns the current air temperature in that city. Processing the model's responses, implementing the function itself, and composing the requests are up to you.
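
To make the example concrete, here is a minimal Python sketch of what such a function might look like. The weather_tool name and the hard-coded temperature table are hypothetical placeholders for your own implementation, e.g., a call to a weather service:

    def weather_tool(city: str) -> str:
        """Hypothetical implementation of weatherTool: returns the current
        air temperature in the specified city as a short string."""
        # A real implementation would query a weather service here;
        # the hard-coded table below is only a placeholder.
        known_temperatures = {
            "Saint Petersburg": "8°C",
            "Moscow": "10°C",
        }
        return known_temperatures.get(city, "no data for this city")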

To enable the model to invoke the function when needed:

cURL
  1. Generate a request to the model, e.g., in the body.json file.

    {
        "modelUri": "gpt://<folder_ID>/yandexgpt",
        "tools": [
            {
                "function": {
                    "name": "weatherTool",
                    "description": "Getting current weather in the specified city.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "city": {
                                "type": "string",
                                "description": "City name, e.g., Moscow"
                            }
                        },
                        "required": [
                            "city"
                        ]
                    }
                }
            }
        ],
        "messages": [
            {
                "role": "user",
                "text": "What is the weather like in Saint Petersburg?"
            }
        ]
    }
    

    Where:

    • modelUri: URI of the model that will be used to call functions. This parameter includes the Yandex Cloud folder ID or the ID of your fine-tuned model.
    • tools: Array of all functions provided to the model.
    • function: Description and parameters of the weatherTool function.
    • messages: List of messages that set the context for the model:

      • role: Message sender's role:

        • user: To send user messages to the model.
        • system: To set the request context and define the model's behavior.
        • assistant: For responses generated by the model. In chat mode, the model's responses marked with the assistant role are included in the request to preserve the conversation context. Do not send user messages with this role.
      • text: Message text.

  2. Send a request to the model:

    export FOLDER_ID=<folder_ID>
    export IAM_TOKEN=<IAM_token>
    curl \
      --request POST \
      --header "Content-Type: application/json" \
      --header "Authorization: Bearer ${IAM_TOKEN}" \
      --header "x-folder-id: ${FOLDER_ID}" \
      --data "@<path_to_JSON_file>" \
      "https://llm.api.cloud.yandex.net/foundationModels/v1/completion"
    

    Where:

    • FOLDER_ID: ID of the folder for which your account has the ai.languageModels.user role or higher.
    • IAM_TOKEN: Your account's IAM token.
  3. The model will return a response with the toolCallList field containing the name of the function to call and the argument values to pass to it as a JSON object.

    Response example:

    {
        "result": {
          "alternatives": [
            {
              "message": {
                "role": "assistant",
                "toolCallList": {
                  "toolCalls": [
                    {
                      "functionCall": {
                        "name": "weatherTool",
                        "arguments": {
                          "city": "Saint Petersburg"
                        }
                      }
                    }
                  ]
                }
              },
              "status": "ALTERNATIVE_STATUS_TOOL_CALLS"
            }
          ],
          "usage": {
            "inputTextTokens": "74",
            "completionTokens": "14",
            "totalTokens": "88",
            "completionTokensDetails": {
              "reasoningTokens": "0"
            }
          },
          "modelVersion": "23.10.2024"
        }
    }
    
  4. Process the model's response (the toolCallList field) and call the weatherTool function with the arguments you received, for example as in the sketch below.
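
    A minimal Python sketch of this step might look as follows. The response variable (the parsed JSON response from Step 3) and the weather_tool function are assumptions; adapt them to your own code:

    # "response" is the parsed JSON body returned by the model in Step 3,
    # e.g. response = json.loads(raw_response_text).
    message = response["result"]["alternatives"][0]["message"]

    for tool_call in message.get("toolCallList", {}).get("toolCalls", []):
        call = tool_call["functionCall"]
        if call["name"] == "weatherTool":
            # Pass the arguments chosen by the model to your own function.
            result = weather_tool(**call["arguments"])  # city="Saint Petersburg"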

  5. Add the model's response and the result of invoking the function to the messages array in the body.json file.

    Request example
    {
        "modelUri": "gpt://<folder_ID>/yandexgpt",
        "tools": [
          {
            "function": {
              "name": "weatherTool",
              "description": "Getting current weather in the specified city.",
              "parameters": {
                "type": "object",
                "properties": {
                  "city": {
                    "type": "string",
                    "description": "City name, e.g., Moscow"
                  }
                },
                "required": ["city"]
              }
            }
          }
        ],
        "messages": [
          {
            "role": "user",
            "text": "What is the weather like in Saint Petersburg?"
          },
          {
            "role": "assistant",
            "toolCallList": {
              "toolCalls": [
                {
                  "functionCall": {
                    "name": "weatherTool",
                    "arguments": {
                      "city": "Saint Petersburg"
                    }
                  }
                }
              ]
            }
          },
          {
            "role": "user",
            "toolResultList": {
              "toolResults": [
                {
                  "functionResult": {
                    "name": "weatherTool",
                    "content": "8°C"
                  }
                }
              ]
            }
          }
        ]
    }
    

    Where toolResultList contains the result of the function call.

  6. Send a new request to the model by repeating Step 2 of this guide. The model will formulate its response based on the result of the function call (a complete Python sketch of this request-call-respond loop is shown at the end of this guide):

    {
      "result": {
        "alternatives": [
          {
            "message": {
              "role": "assistant",
              "text": "It is currently 8°C above zero in Saint Petersburg."
            },
            "status": "ALTERNATIVE_STATUS_FINAL"
          }
        ],
        "usage": {
          "inputTextTokens": "108",
          "completionTokens": "10",
          "totalTokens": "118",
          "completionTokensDetails": {
            "reasoningTokens": "0"
          }
        },
        "modelVersion": "23.10.2024"
      }
    }
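
To put the whole guide together, here is a minimal end-to-end sketch of the request-call-respond loop in Python. It follows the same endpoint, headers, and JSON structures shown above; the requests library usage, the weather_tool stub, and the FOLDER_ID and IAM_TOKEN environment variables are assumptions to adapt to your own setup:

    import os

    import requests

    URL = "https://llm.api.cloud.yandex.net/foundationModels/v1/completion"
    FOLDER_ID = os.environ["FOLDER_ID"]  # folder with the ai.languageModels.user role
    IAM_TOKEN = os.environ["IAM_TOKEN"]  # your account's IAM token

    HEADERS = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {IAM_TOKEN}",
        "x-folder-id": FOLDER_ID,
    }

    def weather_tool(city: str) -> str:
        # Hypothetical local implementation of weatherTool (see the sketch above).
        return {"Saint Petersburg": "8°C"}.get(city, "no data")

    # Step 1: the same request body as in body.json.
    body = {
        "modelUri": f"gpt://{FOLDER_ID}/yandexgpt",
        "tools": [
            {
                "function": {
                    "name": "weatherTool",
                    "description": "Getting current weather in the specified city.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "City name, e.g., Moscow"}
                        },
                        "required": ["city"],
                    },
                }
            }
        ],
        "messages": [
            {"role": "user", "text": "What is the weather like in Saint Petersburg?"}
        ],
    }

    # Step 2: send the request.
    result = requests.post(URL, headers=HEADERS, json=body).json()["result"]
    message = result["alternatives"][0]["message"]

    # Steps 3-5: if the model asked for a function call, run it locally and
    # append both the call and its result to the conversation.
    if "toolCallList" in message:
        tool_results = []
        for tool_call in message["toolCallList"]["toolCalls"]:
            call = tool_call["functionCall"]
            if call["name"] == "weatherTool":
                content = weather_tool(**call["arguments"])
                tool_results.append(
                    {"functionResult": {"name": "weatherTool", "content": content}}
                )
        body["messages"].append(message)  # assistant message with toolCallList
        body["messages"].append(
            {"role": "user", "toolResultList": {"toolResults": tool_results}}
        )

        # Step 6: repeat the request; the model now answers using the function result.
        result = requests.post(URL, headers=HEADERS, json=body).json()["result"]
        message = result["alternatives"][0]["message"]

    print(message["text"])  # e.g., "It is currently 8°C above zero in Saint Petersburg."

If the model does not request a function call, the toolCallList branch is simply skipped and the first response already contains the text of the answer.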
    
