yandex_spark_cluster (Resource)

Created by Yandex Cloud. Updated September 11, 2025.
  • Example usage
  • Schema
    • Required
    • Optional
    • Read-Only
    • Nested Schema for config
    • Nested Schema for config.resource_pools
    • Nested Schema for config.resource_pools.driver
    • Nested Schema for config.resource_pools.executor
    • Nested Schema for config.dependencies
    • Nested Schema for config.history_server
    • Nested Schema for config.metastore
    • Nested Schema for logging
    • Nested Schema for network
    • Nested Schema for maintenance_window
    • Nested Schema for timeouts
  • Import

Manages an Apache Spark cluster in Yandex Cloud Managed Service for Apache Spark.

Example usage

//
// Create a new Spark Cluster.
//
resource "yandex_spark_cluster" "my_spark_cluster" {

  name               = "spark-cluster-1"
  description        = "created by terraform"
  service_account_id = yandex_iam_service_account.for-spark.id

  labels = {
    my_key = "my_value"
  }

  config = {
    resource_pools = {
      driver = {
        resource_preset_id = "c2-m8"
        size               = 1
      }
      executor = {
        resource_preset_id = "c4-m16"
        min_size           = 1
        max_size           = 2
      }
    }
    dependencies = {
      pip_packages = ["numpy==2.2.2"]
    }
  }

  network = {
    subnet_ids         = [yandex_vpc_subnet.a.id]
    security_group_ids = [yandex_vpc_security_group.spark-sg1.id]
  }

  logging = {
    enabled   = true
    folder_id = var.folder_id
  }

  maintenance_window = {
    type = "WEEKLY"
    day  = "TUE"
    hour = 10
  }
}

Schema

Required

  • config (Attributes) Configuration of the Spark cluster. (see below for nested schema)
  • logging (Attributes) Cloud Logging configuration. (see below for nested schema)
  • name (String) Name of the cluster. The name is unique within the folder.
  • network (Attributes) Network configuration. (see below for nested schema)
  • service_account_id (String) The service account used by the cluster to access cloud resources.

Optional

  • deletion_protection (Boolean) The true value means that the resource is protected from accidental deletion.
  • description (String) Description of the cluster. 0-256 characters long.
  • folder_id (String) ID of the cloud folder that the cluster belongs to.
  • labels (Map of String) Cluster labels as key/value pairs.
  • maintenance_window (Attributes) Configuration of the window for maintenance operations. (see below for nested schema)
  • timeouts (Block, Optional) (see below for nested schema)
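
As an illustrative sketch (not part of the provider's own example), the resource below combines the optional top-level attributes that the example above does not use: an explicit folder_id and deletion_protection. Names, presets, and variable references are placeholders, and the required config, network, and logging blocks are kept deliberately minimal.

# Minimal sketch with optional top-level attributes (placeholder names and values).
resource "yandex_spark_cluster" "protected_cluster" {
  name                = "spark-cluster-protected"
  description         = "cluster with deletion protection"
  folder_id           = var.folder_id                           # explicit folder instead of the provider default
  deletion_protection = true                                    # resource is protected from accidental deletion
  service_account_id  = yandex_iam_service_account.for-spark.id

  labels = {
    env = "example"
  }

  config = {
    resource_pools = {
      driver   = { resource_preset_id = "c2-m8", size = 1 }
      executor = { resource_preset_id = "c4-m16", size = 1 }
    }
  }

  network = {
    subnet_ids = [yandex_vpc_subnet.a.id]
  }

  logging = {
    enabled   = true
    folder_id = var.folder_id
  }
}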

Read-Only

  • created_at (String) The timestamp when the cluster was created.
  • id (String) Unique ID of the cluster.
  • status (String) Status of the cluster.

Nested Schema for config

Required:

  • resource_pools (Attributes) Computational resources. (see below for nested schema)

Optional:

  • dependencies (Attributes) Environment dependencies. (see below for nested schema)
  • history_server (Attributes) History Server configuration. (see below for nested schema)
  • metastore (Attributes) Metastore configuration. (see below for nested schema)

Nested Schema for config.resource_pools

Required:

  • driver (Attributes) Computational resources for the driver pool. (see below for nested schema)
  • executor (Attributes) Computational resources for the executor pool. (see below for nested schema)

Nested Schema for config.resource_pools.driver

Required:

  • resource_preset_id (String) Resource preset ID for the driver pool.

Optional:

  • max_size (Number) Maximum node count for the driver pool with autoscaling.
  • min_size (Number) Minimum node count for the driver pool with autoscaling.
  • size (Number) Node count for the driver pool with fixed size.

Nested Schema for config.resource_pools.executor

Required:

  • resource_preset_id (String) Resource preset ID for the executor pool.

Optional:

  • max_size (Number) Maximum node count for the executor pool with autoscaling.
  • min_size (Number) Minimum node count for the executor pool with autoscaling.
  • size (Number) Node count for the executor pool with fixed size.
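
A pool is sized either with a fixed node count (size) or with autoscaling bounds (min_size and max_size). The fragment below is a sketch of just the resource_pools object inside the config attribute, with example preset IDs, annotating both modes.

# Sketch of the resource_pools object (part of the config attribute); preset IDs are examples.
resource_pools = {
  driver = {
    resource_preset_id = "c2-m8"
    size               = 1        # fixed-size pool: exactly one driver node
  }
  executor = {
    resource_preset_id = "c4-m16"
    min_size           = 1        # autoscaling pool: node count varies
    max_size           = 4        # between 1 and 4 executor nodes
  }
}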

Nested Schema for config.dependencies

Optional:

  • deb_packages (Set of String) Deb packages to be installed using the system package manager.
  • pip_packages (Set of String) Python packages to be installed using pip (in pip requirement format).
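
For illustration, a dependencies object can install both system and Python packages; the package names and versions below are arbitrary examples, and the pip entries use the usual pip requirement syntax.

# Sketch of the dependencies object (part of the config attribute); package names are examples.
dependencies = {
  deb_packages = ["libgomp1"]                  # installed with the system package manager
  pip_packages = ["pandas==2.2.3", "requests"] # pip requirement format, pinned or unpinned
}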

Nested Schema for config.history_server

Optional:

  • enabled (Boolean) Enable Spark History Server. Default: true.

Nested Schema for config.metastore

Optional:

  • cluster_id (String) Metastore cluster ID for the default Spark configuration.
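
As a sketch, the History Server can be switched off and the cluster can be pointed at an existing Metastore cluster; the cluster ID below is a placeholder.

# Sketch of the history_server and metastore objects (part of the config attribute).
history_server = {
  enabled = false # the History Server is enabled by default
}
metastore = {
  cluster_id = "<metastore cluster ID>" # placeholder: ID of an existing Metastore cluster
}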

Nested Schema for logging

Optional:

  • enabled (Boolean) Enable log delivery to Cloud Logging. Default: true.
  • folder_id (String) Logs will be written to the default log group of the specified folder. Exactly one of the attributes folder_id or log_group_id should be specified.
  • log_group_id (String) Logs will be written to the specified log group. Exactly one of the attributes folder_id or log_group_id should be specified.
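
Since exactly one of folder_id or log_group_id must be set, the sketch below shows the alternative to the folder-based example above: delivering logs to a specific log group. The yandex_logging_group reference is an assumed resource defined elsewhere in the configuration.

# Sketch of the logging attribute: deliver logs to a specific log group
# instead of the default log group of a folder.
logging = {
  enabled      = true
  log_group_id = yandex_logging_group.spark_logs.id # assumed log group resource
}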

Nested Schema for network

Required:

  • subnet_ids (Set of String) Network subnets.

Optional:

  • security_group_ids (Set of String) Network security groups.

Nested Schema for maintenance_window

Optional:

  • day (String) Day of week for maintenance window. One of MON, TUE, WED, THU, FRI, SAT, SUN.
  • hour (Number) Hour of day in UTC time zone (1-24) for maintenance window.
  • type (String) Type of maintenance window. Can be either ANYTIME or WEEKLY. If WEEKLY, day and hour must be specified.
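
Besides the WEEKLY window shown in the example above, maintenance can be allowed at any time; a minimal sketch:

# Sketch of the maintenance_window attribute: no fixed window.
maintenance_window = {
  type = "ANYTIME" # day and hour are only needed for WEEKLY windows
}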

Nested Schema for timeouts

Optional:

  • create (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
  • delete (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Setting a timeout for a Delete operation is only applicable if changes are saved into state before the destroy operation occurs.
  • update (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
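
The timeouts block takes duration strings as described above; the sketch below uses illustrative values and is written as a block, as the schema indicates.

# Sketch of the timeouts block with illustrative duration strings.
timeouts {
  create = "60m"
  update = "30m"
  delete = "30m"
}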

Import

The resource can be imported using its resource ID. To get the resource ID, use the Yandex Cloud web console or the YC CLI.

# terraform import yandex_spark_cluster.<resource Name> <resource Id>
terraform import yandex_spark_cluster.my_spark_cluster ...
