Yandex DataSphere

Errors when using a Spark connector

Written by Yandex Cloud
Updated on October 24, 2024

When you use Spark, a hidden .spark/ directory is created in the project storage; it contains service information about all Spark connectors available in the project. To see hidden files and directories, click View ⟶ Show Hidden Files in the panel at the top of the JupyterLab interface.
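
You can also inspect this service directory from a notebook cell. A minimal sketch, assuming the project storage is the notebook's current working directory:

    import os

    # Walk the hidden .spark/ service directory and print every file
    # the project's Spark connectors have created there.
    for root, dirs, files in os.walk(".spark"):
        for name in files:
            print(os.path.join(root, name))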

A Spark session is created before the start of the first notebook cell and can be accessed via the spark variable. If an error occurs during session creation, the spark variable will not be available, and an error report will appear in a file named .spark/connector/<connector_ID>/session_creation_err_<error_time>.txt. Attach this file when you contact technical support.
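
As a quick check in the first cell, you can test whether the spark variable exists and, if it does not, pull up the most recent error report. A minimal sketch, assuming the .spark/ layout described above:

    import glob
    import os

    # If session creation failed, the `spark` variable is not defined.
    if "spark" not in globals():
        # Find the newest session creation error report, if any.
        reports = glob.glob(".spark/connector/*/session_creation_err_*.txt")
        if reports:
            latest = max(reports, key=os.path.getmtime)
            print(f"Spark session creation failed, see: {latest}")
            with open(latest) as f:
                print(f.read())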

Error writing data to S3

If the Could not find any valid local directory for s3ablock-0001 error occurs when writing data to S3, add one of the following parameters under Spark settings in the Spark connector settings:

Parameter: spark.hadoop.fs.s3a.fast.upload.buffer = bytebuffer
Description: Fast file upload using RAM. The file size must not exceed the available RAM.

Parameter: spark.hadoop.fs.s3a.buffer.dir = /tmp
Description: Standard upload to disk.
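
For reference, the same parameters can also be passed when building a Spark session manually, for example outside a connector. A minimal sketch with a hypothetical application name; in DataSphere itself, set the parameter in the connector's Spark settings instead:

    from pyspark.sql import SparkSession

    # Equivalent configuration when creating the session yourself.
    # Enable ONE of the two buffering options below.
    spark = (
        SparkSession.builder
        .appName("s3a-write-example")  # hypothetical name
        # Option 1: buffer uploads in RAM (the file must fit in RAM)
        .config("spark.hadoop.fs.s3a.fast.upload.buffer", "bytebuffer")
        # Option 2: buffer uploads on local disk instead
        # .config("spark.hadoop.fs.s3a.buffer.dir", "/tmp")
        .getOrCreate()
    )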
