Working with Spark jobs

Written by Yandex Cloud
Updated at April 23, 2025
  • Getting started
  • Create a Spark job
  • Delete the resources you created

Apache Spark is a distributed processing framework for unstructured and semi-structured data and a part of the Hadoop project ecosystem.

In this section, we provide a simple example that demonstrates how to use the Spark interface for Scala and Java in Yandex Data Processing. In the example, we use Spark to count the number of times each word is seen in a short text.
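If you want to see the counting logic itself before creating any cloud resources, the sketch below runs the same transformation chain in Spark local mode on a laptop. It is an illustration only, not part of the official tutorial: the object name WordCountLocal and the inlined sample lines are ours, and the cluster job later in this section uses the packaged word_count.scala program instead.

    import org.apache.spark.sql.SparkSession

    object WordCountLocal {
        def main(args: Array[String]): Unit = {
            // Local-mode session for experimenting; the tutorial job runs on the cluster via YARN.
            val spark = SparkSession.builder()
                .appName("word-count-local")
                .master("local[*]")
                .getOrCreate()
            val sc = spark.sparkContext

            val lines = sc.parallelize(Seq(
                "she sells sea shells on the sea shore",
                "the shells that she sells are sea shells I am sure"
            ))

            // Same chain as word_count.scala: split into words, map to (word, 1), sum per key.
            val counts = lines
                .flatMap(_.split(" "))
                .map(word => (word, 1))
                .reduceByKey(_ + _)

            counts.collect().foreach(println) // e.g. (sea,3), (shells,3), ...
            spark.stop()
        }
    }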

Getting started

  1. Create a service account with the dataproc.agent and dataproc.provisioner roles.

  2. In Object Storage, create buckets and configure access to them:

    1. Create a bucket for the input data and grant the READ permission for this bucket to the cluster service account.
    2. Create a bucket for the processing output and grant the cluster service account READ and WRITE permissions for this bucket.
  3. Create a Yandex Data Processing cluster with the following settings:

    • Environment: PRODUCTION
    • Services:
      • HDFS
      • SPARK
      • YARN
    • Service account: Select the service account you previously created.
    • Bucket name: Select a bucket to hold the processing results.

Create a Spark job

  1. Upload a file for processing:

    1. Copy and save the following to a file named text.txt:

      text.txt
      she sells sea shells on the sea shore
      the shells that she sells are sea shells I am sure
      so if she sells sea shells on the sea shore
      I am sure that the shells are sea shore shells
      
    2. Upload the text.txt file to the source data bucket.

  2. Download the spark-app_2.11-0.1.0-SNAPSHOT.jar file, which packages the word_count.scala analysis program shown below, and upload it to the input data bucket:

    word_count.scala
    package com.yandex.cloud.dataproc.scala
    
    import org.apache.spark.{SparkConf, SparkContext}
    
    
    object Main {
        def main(args: Array[String]) {
            if (args.length != 2){ // check number of args
                System.err.println("Usage spark-app.jar <input_directory> <output_directory>");
                System.exit(-1);
            }
    
    
            val inDir = args(0); //input URI
            val outDir = args(1); //output URI
    
            val conf = new SparkConf().setAppName("Word count - Scala App")
            val sc = new SparkContext(conf)
    
            val text_file = sc.textFile(inDir)
            val counts = text_file.flatMap(line => line.split(" "))
            .map(word => (word, 1))
            .reduceByKey(_ + _)
    
            // If the output URI points to Object Storage (s3a://), write there directly;
            // otherwise resolve the path against the cluster's default file system (HDFS).
            val defaultFS = sc.hadoopConfiguration.get("fs.defaultFS")

            if (outDir.toLowerCase().startsWith("s3a://")) {
                counts.saveAsTextFile(outDir)
            } else {
                counts.saveAsTextFile(defaultFS + "/" + outDir)
            }
    
            sc.stop()
        }
    }
    

    For more information about building an application written in Scala for Spark, see Using Spark Submit. A minimal build.sbt sketch is also provided at the end of this section.

  3. Create a Spark job with the following parameters:

    • Main jar: s3a://<input_data_bucket_name>/spark-app_2.11-0.1.0-SNAPSHOT.jar
    • Main class: com.yandex.cloud.dataproc.scala.Main
    • Arguments:
      • s3a://<input_data_bucket_name>/text.txt
      • s3a://<output_bucket_name>/<output_directory>
  4. Wait for the job status to change to Done.

  5. Download the result files from the output bucket and review them:

    part-00000
    (are,2)
    (am,2)
    (she,3)
    (so,1)
    
    part-00001
    (shore,3)
    (if,1)
    (that,2)
    (on,2)
    (shells,6)
    (I,2)
    (sure,2)
    (sea,6)
    (the,4)
    (sells,3)
    

Note

You can view the job logs and search data in them using Yandex Cloud Logging. For more information, see Working with logs.
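The tutorial uses a prebuilt jar, but if you prefer to build spark-app_2.11-0.1.0-SNAPSHOT.jar yourself (see step 2 above), an sbt build definition along the following lines would produce an artifact with that name. This is a sketch, not the official build file: the Spark and Scala patch versions are assumptions and should be aligned with the versions installed on your cluster image.

    // build.sbt (sketch): `sbt package` produces target/scala-2.11/spark-app_2.11-0.1.0-SNAPSHOT.jar
    name := "spark-app"
    version := "0.1.0-SNAPSHOT"
    scalaVersion := "2.11.12"  // matches the _2.11 suffix in the jar name

    // "provided": the cluster nodes already ship Spark, so it is not bundled into the jar
    libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.8" % "provided"  // version is an assumption

Run sbt package in the project root and upload the resulting jar to the input data bucket, the same way as the downloaded one.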

Delete the resources you created

Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

  1. Delete the cluster.
  2. Delete the buckets.
  3. Delete the service account.
