yandex_airflow_cluster (Resource)
Updated at November 17, 2025
- Example usage
- Schema
- Required
- Optional
- Read-Only
- Nested Schema for code_sync
- Nested Schema for code_sync.s3
- Nested Schema for scheduler
- Nested Schema for webserver
- Nested Schema for worker
- Nested Schema for dag_processor
- Nested Schema for lockbox_secrets_backend
- Nested Schema for logging
- Nested Schema for maintenance_window
- Nested Schema for timeouts
- Nested Schema for triggerer
- Import
Managed Airflow cluster.
Example usage
//
// Create a new Airflow Cluster.
//
resource "yandex_airflow_cluster" "my_airflow_cluster" {
  name               = "airflow-created-with-terraform"
  subnet_ids         = [yandex_vpc_subnet.a.id, yandex_vpc_subnet.b.id, yandex_vpc_subnet.d.id]
  service_account_id = yandex_iam_service_account.for-airflow.id
  admin_password     = "some-strong-password"

  code_sync = {
    s3 = {
      bucket = "bucket-for-airflow-dags"
    }
  }

  webserver = {
    count              = 1
    resource_preset_id = "c1-m4"
  }

  scheduler = {
    count              = 1
    resource_preset_id = "c1-m4"
  }

  worker = {
    min_count          = 1
    max_count          = 2
    resource_preset_id = "c1-m4"
  }

  airflow_config = {
    "api" = {
      "auth_backends" = "airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session"
    }
  }

  pip_packages = ["dbt"]

  lockbox_secrets_backend = {
    enabled = true
  }

  logging = {
    enabled   = true
    folder_id = var.folder_id
    min_level = "INFO"
  }
}
Schema
Required
- code_sync (Attributes) Parameters of the location and access to the code that will be executed in the cluster. (see below for nested schema)
- name (String) The resource name.
- scheduler (Attributes) Configuration of scheduler instances. (see below for nested schema)
- service_account_id (String) Service account which is linked to the resource. For more information, see documentation.
- subnet_ids (Set of String) The list of VPC subnet identifiers to which the resource is attached.
- webserver (Attributes) Configuration of webserver instances. (see below for nested schema)
- worker (Attributes) Configuration of worker instances. (see below for nested schema)
Optional
- admin_password (String, Sensitive) Password that is used to log in to the Apache Airflow web UI under the admin user.
- airflow_config (Map of Map of String) Configuration of the Apache Airflow application itself. The value of this attribute is a two-level map. Keys of the top-level map are the names of configuration sections. Keys of the inner maps are the names of configuration options within the corresponding section.
- airflow_version (String) Apache Airflow version in the format <major>.<minor>.
- dag_processor (Attributes) Configuration of dag-processor instances. Only for Airflow version 3.*. (see below for nested schema)
- deb_packages (Set of String) System packages that are installed in the cluster.
- deletion_protection (Boolean) The true value means that the resource is protected from accidental deletion.
- description (String) The resource description.
- folder_id (String) The folder identifier that the resource belongs to. If it is not provided, the default provider folder-id is used.
- labels (Map of String) A set of key/value label pairs assigned to the resource.
- lockbox_secrets_backend (Attributes) Configuration of Lockbox Secrets Backend. See documentation for details. (see below for nested schema)
- logging (Attributes) Cloud Logging configuration. (see below for nested schema)
- maintenance_window (Attributes) Configuration of the window for maintenance operations. (see below for nested schema)
- pip_packages (Set of String) Python packages that are installed in the cluster.
- python_version (String) Version of Python that Airflow will run on. Must be in the format <major>.<minor>.
- security_group_ids (Set of String) The list of security groups applied to the resource or its components.
- timeouts (Block, Optional) (see below for nested schema)
- triggerer (Attributes) Configuration of triggerer instances. (see below for nested schema)
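Since airflow_config is a two-level map, each top-level key names a configuration section and each inner key names an option within that section, mirroring the [section] / option layout of airflow.cfg. A minimal sketch (the core section and load_examples option are shown for illustration; consult the Airflow configuration reference for valid names):

```terraform
# Sketch: sections map to inner maps of option = value.
# Equivalent to setting, in airflow.cfg:
#   [api]
#   auth_backends = airflow.api.auth.backend.basic_auth
#   [core]
#   load_examples = False
airflow_config = {
  "api" = {
    "auth_backends" = "airflow.api.auth.backend.basic_auth"
  }
  "core" = {
    "load_examples" = "False"
  }
}
```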
Read-Only
- created_at (String) The creation timestamp of the resource.
- id (String) The resource identifier.
- status (String) Status of the cluster. Can be either CREATING, STARTING, RUNNING, UPDATING, STOPPING, STOPPED, ERROR or STATUS_UNKNOWN. For more information, see the status field of the JSON representation in the official documentation.
Nested Schema for code_sync
Required:
- s3 (Attributes) Currently only Object Storage (S3) is supported as the source of DAG files. (see below for nested schema)
Nested Schema for code_sync.s3
Required:
- bucket (String) The name of the Object Storage bucket that stores DAG files used in the cluster.
Nested Schema for scheduler
Required:
- count (Number) The number of scheduler instances in the cluster.
- resource_preset_id (String) The identifier of the preset for computational resources available to an instance (CPU, memory, etc.).
Nested Schema for webserver
Required:
- count (Number) The number of webserver instances in the cluster.
- resource_preset_id (String) The identifier of the preset for computational resources available to an instance (CPU, memory, etc.).
Nested Schema for worker
Required:
- max_count (Number) The maximum number of worker instances in the cluster.
- min_count (Number) The minimum number of worker instances in the cluster.
- resource_preset_id (String) The identifier of the preset for computational resources available to an instance (CPU, memory, etc.).
Nested Schema for dag_processor
Required:
- count (Number) The number of dag-processor instances in the cluster.
- resource_preset_id (String) The identifier of the preset for computational resources available to an instance (CPU, memory, etc.).
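Because dag_processor applies only to Airflow 3.*, a cluster that sets it would also pin airflow_version to a 3.x release. A hedged sketch (the version string and resource preset ID are illustrative; check the service documentation for the versions and presets actually available):

```terraform
# dag_processor requires an Airflow 3.* cluster.
airflow_version = "3.0" # illustrative; use a version the service supports

dag_processor = {
  count              = 1
  resource_preset_id = "c1-m4" # illustrative preset ID
}
```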
Nested Schema for lockbox_secrets_backend
Required:
- enabled (Boolean) Enables usage of Lockbox Secrets Backend.
Nested Schema for logging
Required:
- enabled (Boolean) Enables delivery of logs generated by the Airflow components to Cloud Logging.
Optional:
- folder_id (String) Logs will be written to the default log group of the specified folder. Exactly one of the attributes folder_id or log_group_id should be specified.
- log_group_id (String) Logs will be written to the specified log group. Exactly one of the attributes folder_id or log_group_id should be specified.
- min_level (String) Minimum level of messages that will be sent to Cloud Logging. Can be either TRACE, DEBUG, INFO, WARN, ERROR or FATAL. If not set, the server default is applied (currently INFO).
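Since folder_id and log_group_id are mutually exclusive, a configuration that targets a specific log group omits folder_id entirely. A sketch (the log group ID is a placeholder, not a real identifier):

```terraform
logging = {
  enabled      = true
  # Placeholder ID; mutually exclusive with folder_id.
  log_group_id = "e23-example-log-group"
  min_level    = "WARN"
}
```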
Nested Schema for maintenance_window
Optional:
- day (String) Day of week for the maintenance window. One of MON, TUE, WED, THU, FRI, SAT, SUN.
- hour (Number) Hour of day in UTC time zone (1-24) for the maintenance window.
- type (String) Type of maintenance window. Can be either ANYTIME or WEEKLY. If WEEKLY, day and hour must be specified.
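To illustrate the constraint that WEEKLY requires both day and hour while ANYTIME omits them, a sketch of a weekly window (the specific day and hour are arbitrary examples):

```terraform
# WEEKLY requires both day and hour; with type = "ANYTIME"
# neither attribute would be set.
maintenance_window = {
  type = "WEEKLY"
  day  = "SAT"
  hour = 3 # 1-24, UTC
}
```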
Nested Schema for timeouts
Optional:
- create (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
- delete (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Setting a timeout for a Delete operation is only applicable if changes are saved into state before the destroy operation occurs.
- update (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
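Since the schema lists timeouts as a block (not an attribute), it is written with block syntax, i.e. without an equals sign. A sketch with arbitrary example durations:

```terraform
# Block syntax: no "=" after the block name.
timeouts {
  create = "60m"
  update = "30m"
  delete = "30m"
}
```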
Nested Schema for triggerer
Required:
- count (Number) The number of triggerer instances in the cluster.
- resource_preset_id (String) The identifier of the preset for computational resources available to an instance (CPU, memory, etc.).
Import
The resource can be imported by using its resource ID. To get the resource ID, you can use the Yandex Cloud Web Console.
# terraform import yandex_airflow_cluster.<resource Name> <resource Id>
terraform import yandex_airflow_cluster.my_airflow_cluster enphq**********cjsw4