Yandex Data Processing

Running jobs

Written by Yandex Cloud
Updated on December 16, 2024

You can run a job in one of the following ways:

  • Create jobs in Yandex Data Processing. Once created, they run automatically.
  • Run Apache Hive jobs using the Yandex Cloud CLI or the Hive CLI.
  • Run Spark or PySpark applications using Spark Shell, spark-submit, or the Yandex Cloud CLI (see the sketch after this list).
  • Use spark-submit to run jobs from remote hosts that are not part of the Yandex Data Processing cluster.
  • Set up integration with Yandex Managed Service for Apache Airflow™ or Yandex DataSphere to automate running jobs.
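
For illustration, here is a minimal sketch of the two command-line routes. The cluster name, job name, bucket URIs, and class name are placeholders, and the exact set of yc flags is an assumption; check `yc dataproc job create-spark --help` for the authoritative list.

```bash
# Create a Spark job through the Yandex Cloud CLI; once created, it runs
# automatically on the cluster. All names and URIs below are placeholders,
# and the flag set is an assumption -- verify with:
#   yc dataproc job create-spark --help
yc dataproc job create-spark \
  --cluster-name my-cluster \
  --name word-count \
  --main-class org.example.WordCount \
  --main-jar-file-uri s3a://my-bucket/jobs/word-count.jar \
  --args s3a://my-bucket/input/ \
  --args s3a://my-bucket/output/

# Alternatively, run the same application with spark-submit on a cluster
# host, letting YARN schedule it:
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.example.WordCount \
  s3a://my-bucket/jobs/word-count.jar \
  s3a://my-bucket/input/ s3a://my-bucket/output/
```

The spark-submit variant is also the route for remote hosts outside the cluster, provided the host's Hadoop and Spark client configuration points at the cluster's YARN ResourceManager.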
