Yandex project
© 2025 Yandex.Cloud LLC
Yandex Vision OCR

In this article:

  • Getting started
  • Recognizing handwritten text in an image

Handwriting recognition

Written by
Yandex Cloud
Updated at March 28, 2025

To recognize handwritten text in an image, use the OCR API with the handwritten recognition model. With this model, you can recognize any combination of handwritten and typed text in Russian and English.

Getting started

To use the examples, install cURL.

Get your account data for authentication:

Yandex or federated account
  1. Get an IAM token for your Yandex account or federated account.

  2. Get the ID of the folder for which your account has the ai.vision.user role or higher.

  3. When accessing Vision OCR via the API, provide the received parameters in each request:

    • For the Vision API and Classifier API:

      Specify the IAM token in the Authorization header as follows:

      Authorization: Bearer <IAM_token>
      

      Specify the folder ID in the request body in the folderId parameter.

    • For the OCR API:

      • Specify the IAM token in the Authorization header.
      • Specify the folder ID in the x-folder-id header.
      Authorization: Bearer <IAM_token>
      x-folder-id: <folder_ID>
      

Service account

Vision OCR supports two authentication methods based on service accounts:

  • With an IAM token:

    1. Get an IAM token.

    2. Provide the IAM token in the Authorization header in the following format:

      Authorization: Bearer <IAM_token>
      
  • With API keys.

    Use API keys if requesting an IAM token automatically is not an option.

    1. Get an API key.

    2. Provide the API key in the Authorization header in the following format:

      Authorization: Api-Key <API_key>
      

Do not specify the folder ID in your requests, as the service uses the folder the service account was created in.
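The two header formats above can be sketched as a small Python helper. This is illustrative only: `ocr_headers` is a hypothetical function, not part of any Yandex Cloud SDK, and the angle-bracket strings are placeholders for your real credentials.

```python
# Build Authorization headers for the OCR API with either
# service-account auth method. ocr_headers is an illustrative
# helper, not part of any Yandex Cloud SDK.

def ocr_headers(token: str, *, api_key: bool = False) -> dict:
    """Return request headers for the OCR API.

    With an API key, the service resolves the folder from the
    service account, so no x-folder-id header is required.
    """
    scheme = "Api-Key" if api_key else "Bearer"
    return {
        "Content-Type": "application/json",
        "Authorization": f"{scheme} {token}",
    }

print(ocr_headers("<IAM_token>"))
print(ocr_headers("<API_key>", api_key=True))
```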

Recognizing handwritten text in an image

Image text recognition is implemented in the recognize OCR API method.

  1. Prepare an image file that meets the requirements:

    • The supported file formats are JPEG, PNG, and PDF. Specify the file's MIME type in the mimeType property; the default value is image.
    • The maximum file size is 10 MB.
    • The image size should not exceed 20 MP (height × width).
  2. Encode the image file as Base64:

    UNIX:

    base64 -i input.jpg > output.txt

    Windows:

    C:> Base64.exe -e input.jpg > output.txt

    PowerShell:

    [Convert]::ToBase64String([IO.File]::ReadAllBytes("./input.jpg")) > output.txt

    Python:

    # Import a library for encoding files in Base64.
    import base64

    # Create a function to encode a file and return the result.
    def encode_file(file_path):
        with open(file_path, "rb") as fid:
            file_content = fid.read()
        return base64.b64encode(file_content).decode("utf-8")

    Node.js:

    // Read the file contents into memory.
    const fs = require('fs');
    const file = fs.readFileSync('/path/to/file');

    // Get the file contents in Base64 format.
    const encoded = Buffer.from(file).toString('base64');

    Java:

    // Import a library for encoding files in Base64.
    import org.apache.commons.codec.binary.Base64;

    // Get the file contents in Base64 format.
    byte[] fileData = Base64.encodeBase64(yourFile.getBytes());

    Go:

    import (
        "bufio"
        "encoding/base64"
        "io/ioutil"
        "os"
    )

    // Open the file.
    f, _ := os.Open("/path/to/file")

    // Read the file contents.
    reader := bufio.NewReader(f)
    content, _ := ioutil.ReadAll(reader)

    // Get the file contents in Base64 format.
    base64.StdEncoding.EncodeToString(content)
  3. Create and send a request.

    In the content property, specify the image file contents encoded as Base64.

    Note

    Currently, the English-Russian model cannot be selected with automatic language detection on. To use this model, explicitly specify both languages in the languageCodes property.

    Send a request using the recognize method and save the response to a file, e.g., output.json:

    UNIX:

    export IAM_TOKEN=<IAM_token>
    curl \
      --request POST \
      --header "Content-Type: application/json" \
      --header "Authorization: Bearer ${IAM_TOKEN}" \
      --header "x-folder-id: <folder_ID>" \
      --header "x-data-logging-enabled: true" \
      --data '{
        "mimeType": "JPEG",
        "languageCodes": ["ru","en"],
        "model": "handwritten",
        "content": "<base64_encoded_image>"
      }' \
      https://ocr.api.cloud.yandex.net/ocr/v1/recognizeText \
      --output output.json

    Where:

    • <IAM_token>: Previously obtained IAM token.
    • <folder_ID>: Previously obtained folder ID.

    Python:

    import json
    import requests

    # content holds the Base64-encoded image file (see the encoding step above).
    data = {"mimeType": <mime_type>,
            "languageCodes": ["ru", "en"],
            "model": "handwritten",
            "content": content}

    url = "https://ocr.api.cloud.yandex.net/ocr/v1/recognizeText"

    headers = {"Content-Type": "application/json",
               "Authorization": "Bearer {:s}".format(<IAM_token>),
               "x-folder-id": "<folder_ID>",
               "x-data-logging-enabled": "true"}

    w = requests.post(url=url, headers=headers, data=json.dumps(data))

    The result will consist of recognized blocks of text, lines, and words with their position on the image.

    Request result
    {
      "result": {
        "textAnnotation": {
          "width": "241",
          "height": "162",
          "blocks": [
            {
              "boundingBox": {
                "vertices": [
                  {
                    "x": "28",
                    "y": "8"
                  },
                  {
                    "x": "28",
                    "y": "130"
                  },
                  {
                    "x": "240",
                    "y": "130"
                  },
                  {
                    "x": "240",
                    "y": "8"
                  }
                ]
              },
              "lines": [
                {
                  "boundingBox": {
                    "vertices": [
                      {
                        "x": "28",
                        "y": "8"
                      },
                      {
                        "x": "28",
                        "y": "77"
                      },
                      {
                        "x": "240",
                        "y": "77"
                      },
                      {
                        "x": "240",
                        "y": "8"
                      }
                    ]
                  },
                  "text": "Hello",
                  "words": [
                    {
                      "boundingBox": {
                        "vertices": [
                          {
                            "x": "28",
                            "y": "9"
                          },
                          {
                            "x": "28",
                            "y": "81"
                          },
                          {
                            "x": "240",
                            "y": "81"
                          },
                          {
                            "x": "240",
                            "y": "9"
                          }
                        ]
                      },
                      "text": "Hello",
                      "entityIndex": "-1",
                      "textSegments": [
                        {
                          "startIndex": "0",
                          "length": "7"
                        }
                      ]
                    }
                  ],
                  "textSegments": [
                    {
                      "startIndex": "0",
                      "length": "7"
                    }
                  ]
                },
                {
                  "boundingBox": {
                    "vertices": [
                      {
                        "x": "112",
                        "y": "94"
                      },
                      {
                        "x": "112",
                        "y": "130"
                      },
                      {
                        "x": "240",
                        "y": "130"
                      },
                      {
                        "x": "240",
                        "y": "94"
                      }
                    ]
                  },
                  "text": "World!",
                  "words": [
                    {
                      "boundingBox": {
                        "vertices": [
                          {
                            "x": "112",
                            "y": "89"
                          },
                          {
                            "x": "112",
                            "y": "137"
                          },
                          {
                            "x": "240",
                            "y": "137"
                          },
                          {
                            "x": "240",
                            "y": "89"
                          }
                        ]
                      },
                      "text": "World!",
                      "entityIndex": "-1",
                      "textSegments": [
                        {
                          "startIndex": "8",
                          "length": "4"
                        }
                      ]
                    }
                  ],
                  "textSegments": [
                    {
                      "startIndex": "8",
                      "length": "4"
                    }
                  ]
                }
              ],
              "languages": [
                {
                  "languageCode": "ru"
                }
              ],
              "textSegments": [
                {
                  "startIndex": "0",
                  "length": "12"
                }
              ]
            }
          ],
          "entities": [],
          "tables": [],
          "fullText": "Hello\nWorld!\n"
        },
        "page": "0"
      }
    }
    
  4. To get all the words recognized in the image, find all values with the text property.
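As a sketch of this step, the saved response can be walked block by block to collect every recognized word. `recognized_words` is a hypothetical helper; the nesting it assumes (blocks → lines → words → text) follows the sample response above.

```python
def recognized_words(annotation: dict) -> list:
    """Collect the text of every recognized word in a textAnnotation."""
    return [
        word["text"]
        for block in annotation.get("blocks", [])
        for line in block.get("lines", [])
        for word in line.get("words", [])
    ]

# A minimal structure mirroring the sample response above:
sample = {"blocks": [{"lines": [
    {"words": [{"text": "Hello"}]},
    {"words": [{"text": "World!"}]},
]}]}
print(recognized_words(sample))  # → ['Hello', 'World!']

# With the saved response file:
#   import json
#   annotation = json.load(open("output.json"))["result"]["textAnnotation"]
#   print(recognized_words(annotation))
```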

Note

If the coordinates you got do not match the position of the displayed elements, enable EXIF metadata support in your image viewer or remove the Orientation attribute from the image's EXIF section before sending the image to the service.
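For illustration only, the Orientation tag can be read with the standard library from the TIFF blob that follows the Exif marker in a JPEG APP1 segment, to check whether an image carries an orientation your viewer might be applying. `exif_orientation` is a hypothetical helper, not part of the service or its SDKs.

```python
import struct

def exif_orientation(tiff: bytes):
    """Return the EXIF Orientation value (1-8) from a TIFF blob,
    or None if the tag is absent. The blob is the payload that
    follows the "Exif\\0\\0" marker in a JPEG APP1 segment."""
    endian = {b"II": "<", b"MM": ">"}.get(tiff[:2])
    if endian is None:
        return None
    (ifd_offset,) = struct.unpack(endian + "I", tiff[4:8])
    (count,) = struct.unpack(endian + "H", tiff[ifd_offset:ifd_offset + 2])
    for i in range(count):
        entry = tiff[ifd_offset + 2 + 12 * i:ifd_offset + 14 + 12 * i]
        tag, typ, n = struct.unpack(endian + "HHI", entry[:8])
        if tag == 0x0112:  # Orientation
            (value,) = struct.unpack(endian + "H", entry[8:10])
            return value
    return None

# A minimal little-endian TIFF with Orientation = 6 (rotate 90° CW):
tiff = (b"II" + struct.pack("<H", 42) + struct.pack("<I", 8)
        + struct.pack("<H", 1)
        + struct.pack("<HHI", 0x0112, 3, 1) + struct.pack("<HH", 6, 0))
print(exif_orientation(tiff))  # → 6
```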
