Handwriting recognition
You can recognize handwritten text in an image using the OCR API with the handwritten
recognition model. With this model, you can recognize any combination of handwritten and typed text in Russian and English.
Getting started
To use the examples, install cURL.
Get your account data for authentication:

- Get an IAM token for your Yandex account or federated account.
- Get the ID of the folder for which your account has the ai.vision.user role or higher.
- When accessing Vision OCR via the API, provide the received parameters in each request:
  - For the Vision API and Classifier API: specify the IAM token in the Authorization header as Authorization: Bearer <IAM_token>, and specify the folder ID in the folderId parameter in the request body.
  - For the OCR API: specify the IAM token in the Authorization header and the folder ID in the x-folder-id header:

        Authorization: Bearer <IAM_token>
        x-folder-id: <folder_ID>
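As an illustration only, these two headers could be assembled like this in Python (a minimal sketch; both values are placeholders):

    # A minimal sketch: the headers an OCR API request carries when
    # authenticating with an IAM token on behalf of a Yandex account.
    # Both values are placeholders.
    headers = {
        "Authorization": "Bearer <IAM_token>",  # IAM token
        "x-folder-id": "<folder_ID>",           # folder ID
    }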
Vision OCR supports two authentication methods based on service accounts:

- With an IAM token:
  - Get an IAM token.
  - Provide the IAM token in the Authorization header in the following format:

        Authorization: Bearer <IAM_token>

- With API keys. Use API keys if requesting an IAM token automatically is not an option.
  - Provide the API key in the Authorization header in the following format:

        Authorization: Api-Key <API_key>

Do not specify the folder ID in your requests, as the service uses the folder the service account was created in.
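As a sketch only, a service-account request authenticated with an API key might look like this in Python with the requests library (the API key value is a placeholder; body.json is the request body file created in the recognition procedure below):

    # A minimal sketch: calling the OCR API with a service account API key.
    # No x-folder-id header is needed: with service account authentication
    # the service uses the folder the service account was created in.
    import json
    import requests

    with open("body.json", "r", encoding="utf-8") as f:
        body = json.load(f)

    response = requests.post(
        "https://ocr.api.cloud.yandex.net/ocr/v1/recognizeText",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Api-Key <API_key>",
        },
        json=body,
    )
    print(response.json())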
Recognizing handwritten text in an image
Image text recognition is implemented in the recognizeText OCR API method.
- Prepare an image file that meets the following requirements (a sketch for checking them appears after this procedure):
  - The supported file formats are JPEG, PNG, and PDF. Specify the MIME type of the file in the mime_type property. The default value is image.
  - The maximum file size is 10 MB.
  - The image size should not exceed 20 MP (height × width).
- Encode the image file as Base64:

  UNIX

      base64 -i input.jpg > output.txt

  Windows

      C:> Base64.exe -e input.jpg > output.txt

  PowerShell

      [Convert]::ToBase64String([IO.File]::ReadAllBytes("./input.jpg")) > output.txt

  Python

      # Import a library for encoding files in Base64.
      import base64

      # Create a function that encodes a file and returns the result.
      def encode_file(file_path):
          with open(file_path, "rb") as fid:
              file_content = fid.read()
          return base64.b64encode(file_content).decode("utf-8")

  Node.js

      // Read the file contents into memory.
      var fs = require('fs');
      var file = fs.readFileSync('/path/to/file');

      // Get the file contents in Base64 format.
      var encoded = Buffer.from(file).toString('base64');

  Java

      // Import a library for encoding files in Base64.
      import org.apache.commons.codec.binary.Base64;

      // Get the file contents in Base64 format.
      byte[] fileData = Base64.encodeBase64(yourFile.getBytes());

  Go

      import (
          "bufio"
          "encoding/base64"
          "io/ioutil"
          "os"
      )

      // Open the file.
      f, _ := os.Open("/path/to/file")

      // Read the file contents.
      reader := bufio.NewReader(f)
      content, _ := ioutil.ReadAll(reader)

      // Get the file contents in Base64 format.
      base64.StdEncoding.EncodeToString(content)
- Create a file with the request body, e.g., body.json.

  body.json:

      {
        "mimeType": "JPEG",
        "languageCodes": ["ru", "en"],
        "model": "handwritten",
        "content": "<base64-encoded_image>"
      }

  In the content property, specify the image file contents encoded as Base64.
- Send the request for text recognition and save the response to a file, e.g., output.json:

  UNIX

      export IAM_TOKEN=<IAM_token>
      curl \
        --request POST \
        --header "Content-Type: application/json" \
        --header "Authorization: Bearer ${IAM_TOKEN}" \
        --header "x-folder-id: <folder_ID>" \
        --header "x-data-logging-enabled: true" \
        --data "@body.json" \
        https://ocr.api.cloud.yandex.net/ocr/v1/recognizeText \
        --output output.json

  Where:
  - <IAM_token>: Previously obtained IAM token.
  - <folder_ID>: Previously obtained folder ID.

  Python

      import json
      import requests

      # Base64-encode the image using the encode_file function from the previous step.
      content = encode_file("input.jpg")

      # Request body: MIME type, languages, recognition model, and image contents.
      data = {
          "mimeType": "JPEG",
          "languageCodes": ["ru", "en"],
          "model": "handwritten",
          "content": content,
      }

      url = "https://ocr.api.cloud.yandex.net/ocr/v1/recognizeText"
      headers = {
          "Content-Type": "application/json",
          "Authorization": "Bearer <IAM_token>",
          "x-folder-id": "<folder_ID>",
          "x-data-logging-enabled": "true",
      }

      w = requests.post(url=url, headers=headers, data=json.dumps(data))
      print(w.text)
The result will consist of recognized blocks of text, lines, and words with their position on the image:
{ "result": { "textAnnotation": { "width": "241", "height": "162", "blocks": [ { "boundingBox": { "vertices": [ { "x": "28", "y": "8" }, { "x": "28", "y": "130" }, { "x": "240", "y": "130" }, { "x": "240", "y": "8" } ] }, "lines": [ { "boundingBox": { "vertices": [ { "x": "28", "y": "8" }, { "x": "28", "y": "77" }, { "x": "240", "y": "77" }, { "x": "240", "y": "8" } ] }, "text": "Hello,", "words": [ { "boundingBox": { "vertices": [ { "x": "28", "y": "9" }, { "x": "28", "y": "81" }, { "x": "240", "y": "81" }, { "x": "240", "y": "9" } ] }, "text": "Hello,", "entityIndex": "-1", "textSegments": [ { "startIndex": "0", "length": "7" } ] } ], "textSegments": [ { "startIndex": "0", "length": "7" } ] }, { "boundingBox": { "vertices": [ { "x": "112", "y": "94" }, { "x": "112", "y": "130" }, { "x": "240", "y": "130" }, { "x": "240", "y": "94" } ] }, "text": "World!", "words": [ { "boundingBox": { "vertices": [ { "x": "112", "y": "89" }, { "x": "112", "y": "137" }, { "x": "240", "y": "137" }, { "x": "240", "y": "89" } ] }, "text": "World!", "entityIndex": "-1", "textSegments": [ { "startIndex": "8", "length": "4" } ] } ], "textSegments": [ { "startIndex": "8", "length": "4" } ] } ], "languages": [ { "languageCode": "ru" } ], "textSegments": [ { "startIndex": "0", "length": "12" } ] } ], "entities": [], "tables": [], "fullText": "Hello,\nWorld!\n" }, "page": "0" } }
To get all the recognized words in an image, find all the values with the text property.
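For example, a minimal sketch of collecting every recognized word from the saved response, assuming the response was written to output.json as in the cURL example above:

    # A minimal sketch: walk the response saved in output.json and collect
    # the "text" value of every recognized word.
    import json

    with open("output.json", "r", encoding="utf-8") as f:
        result = json.load(f)

    words = []
    for block in result["result"]["textAnnotation"]["blocks"]:
        for line in block["lines"]:
            for word in line["words"]:
                words.append(word["text"])

    print(words)  # ['Hello,', 'World!']
    print(result["result"]["textAnnotation"]["fullText"])  # "Hello,\nWorld!\n"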
Note

If the coordinates you get do not match the position of the displayed elements, set up support for exif metadata in your image viewing tool or remove the Orientation attribute from the exif section of the image before sending it to the service.
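One way to do the latter is sketched below with the Pillow library (an assumption; any EXIF-aware tool works): the EXIF orientation is applied to the pixel data and a copy is saved without the attribute. File names are placeholders.

    # A minimal sketch: bake the EXIF Orientation into the pixel data and save
    # a copy that no longer carries the attribute, so the coordinates returned
    # by the service match what image viewers display.
    # Assumes the Pillow library; file names are placeholders.
    from PIL import Image, ImageOps

    with Image.open("input.jpg") as img:
        # Rotate/flip the pixels according to the EXIF Orientation tag
        # and drop the tag from the result.
        fixed = ImageOps.exif_transpose(img)
        fixed.save("input_fixed.jpg")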