© 2025 Direct Cursus Technology L.L.C.

Updating an Apache Airflow™ cluster

Written by Yandex Cloud
Updated on September 12, 2025

After creating a cluster, you can edit its basic and advanced settings.

Management console
CLI
Terraform
REST API
gRPC API

To change the cluster settings:

  1. Navigate to the folder dashboard and select Managed Service for Apache Airflow™.

  2. Select the cluster and click Edit in the top panel.

  3. Under Basic parameters, edit the cluster name and description, delete labels, or add new ones.

  4. Under Access settings, select a service account or create a new one with the managed-airflow.integrationProvider role. The cluster will thus get the permissions it needs to work with user resources. For more information, see Impersonation.

    To update a service account in a Managed Service for Apache Airflow™ cluster, assign the iam.serviceAccounts.user role or higher to your Yandex Cloud account.

    Warning

    If the cluster already uses a service account to access objects from Object Storage, then changing it to a different service account may make these objects unavailable and interrupt the cluster operation. Before changing the service account settings, make sure that the cluster doesn't use the objects in question.

  5. Under Network settings, select a security group for cluster network traffic or create a new group.

    Security group settings do not affect access to the Apache Airflow™ web interface.

  6. Under the component configuration sections (Web server configuration, Scheduler configuration, and Worker configuration), specify the number of instances and the computing resources.

  7. Under Triggerer configuration, enable or disable the triggerer. If it is enabled, specify the number of instances and resources.

  8. Under Dependencies, delete or add names of pip and deb packages.

  9. Under DAG file storage, select an existing bucket to store DAG files or create a new one. Make sure to grant the READ permission for this bucket to the cluster service account.

  10. Under Advanced settings:

    • Update the cluster maintenance time.
    • Enable or disable deletion protection.
  11. Under Airflow configuration:

    • Add, edit, or delete additional Apache Airflow™ properties, e.g., the api.maximum_page_limit key set to 150.

      Fill in the fields manually or import the settings from a configuration file (see a configuration file example).

    • Enable or disable the Use Lockbox Secret Backend option, which allows you to use secrets in Yandex Lockbox to store Apache Airflow™ configuration data, variables, and connection parameters.

      To extract the required information from the secret, the cluster service account must have the lockbox.payloadViewer role.

      You can assign this role either at the folder level or individual secret level.

  12. Under Logging, enable or disable logging. If logging is enabled, specify the log group to write logs to and the minimum logging level. Logs generated by Apache Airflow™ components will be sent to Yandex Cloud Logging.

  13. Click Save.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To change the cluster settings:

  1. View the description of the CLI command to update a cluster:

    yc managed-airflow cluster update --help
    
  2. Specify the settings you want to update in the cluster update command:

    yc managed-airflow cluster update <cluster_name_or_ID> \
       --new-name <new_cluster_name> \
       --description <cluster_description> \
       --labels <label_list> \
       --service-account-id <service_account_ID> \
       --security-group-ids <security_group_IDs> \
       --webserver count=<number_of_instances>,`
                  `resource-preset-id=<resource_ID> \
       --scheduler count=<number_of_instances>,`
                  `resource-preset-id=<resource_ID> \
       --worker min-count=<minimum_number_of_instances>,`
               `max-count=<maximum_number_of_instances>,`
               `resource-preset-id=<resource_ID> \
       --triggerer count=<number_of_instances>,`
                  `resource-preset-id=<resource_ID> \
       --deb-packages <list_of_deb_packages> \
       --pip-packages <list_of_pip_packages> \
       --dags-bucket <bucket_name> \
       --maintenance-window type=<maintenance_type>,`
                            `day=<day_of_week>,`
                            `hour=<hour> \
       --deletion-protection \
       --lockbox-secrets-backend \
       --log-enabled \
       --log-folder-id <folder_ID> \
       --log-min-level <logging_level>
    

    Where:

    • --new-name: New cluster name.

    • --description: Cluster description.

    • --labels: List of labels. Provide labels in <key>=<value> format.

    • --admin-password: Admin user password. The password must be at least 8 characters long and contain at least:

      • One uppercase letter
      • One lowercase letter
      • One number
      • One special character
    • --service-account-id: Service account ID.

    • --security-group-ids: List of security group IDs.

    • --webserver, --scheduler, --worker, --triggerer: Managed Service for Apache Airflow™ component configuration:

      • count: Number of instances in the cluster for the web server, scheduler, and triggerer.

      • min-count, max-count: Minimum and maximum number of instances in the cluster for the worker.

      • resource-preset-id: ID of the computing resources of the web server, scheduler, worker, and triggerer. The possible values are:

        • c1-m2: 1 vCPU, 2 GB RAM
        • c1-m4: 1 vCPU, 4 GB RAM
        • c2-m4: 2 vCPUs, 4 GB RAM
        • c2-m8: 2 vCPUs, 8 GB RAM
        • c4-m8: 4 vCPUs, 8 GB RAM
        • c4-m16: 4 vCPUs, 16 GB RAM
        • c8-m16: 8 vCPUs, 16 GB RAM
        • c8-m32: 8 vCPUs, 32 GB RAM
    • --deb-packages, --pip-packages: Lists of deb and pip packages enabling you to install additional libraries and applications in the cluster for running DAG files:

      You can set version restrictions for the installed packages, e.g.:

      --pip-packages "pandas==2.0.2,scikit-learn>=1.0.0,clickhouse-driver~=0.2.0"
      

      The package name format and version are defined by the install command: pip install for pip packages and apt install for deb packages.

    • --dags-bucket: Name of the bucket to store DAG files in.

    • --maintenance-window: Maintenance window settings (including for disabled clusters), where type is the maintenance type:

      • anytime: At any time (default).
      • weekly: On a schedule. For this value, also specify the following:
        • day: Day of week, i.e., MON, TUE, WED, THU, FRI, SAT, or SUN.
        • hour: Hour of day (UTC), from 1 to 24.
    • --deletion-protection: Enables cluster protection against accidental deletion.

      Even with deletion protection enabled, one can still connect to the cluster manually and delete the data.

    • --lockbox-secrets-backend: Enables using secrets in Yandex Lockbox to store Apache Airflow™ configuration data, variables, and connection parameters.

    • --airflow-config: Apache Airflow™ additional properties. Provide them in <configuration_section>.<key>=<value> format, such as the following:

      --airflow-config core.load_examples=False
      
    • Logging parameters:

      • --log-enabled: Enables logging. Logs generated by Apache Airflow™ components will be sent to Yandex Cloud Logging.

      • --log-folder-id: Folder ID. Logs will be written to the default log group for this folder.

      • --log-group-id: Custom log group ID. Logs will be written to this group.

        Specify one of the two parameters: --log-folder-id or --log-group-id.

      • --log-min-level: Minimum logging level. Possible values: TRACE, DEBUG, INFO (default), WARN, ERROR, and FATAL.

    You can get the cluster ID and name with the list of clusters in the folder.
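As an illustration, an update that renames a cluster and tightens its logging might look like the sketch below. All values here (the cluster name airflow-prod, labels, and the maintenance window) are hypothetical; substitute your own. It also requires an installed and authenticated Yandex Cloud CLI.

```bash
# Hypothetical example: replace "airflow-prod" and all values with your own.
yc managed-airflow cluster update airflow-prod \
   --new-name airflow-prod-v2 \
   --labels env=prod,team=data \
   --maintenance-window type=weekly,day=SAT,hour=3 \
   --log-enabled \
   --log-min-level WARN
```

Flags you omit are generally left at their current values, so you only need to pass the settings you want to change.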

To change the cluster settings:

  1. Open the current Terraform configuration file that defines your infrastructure.

    For more information about creating this file, see Creating clusters.

  2. To change cluster settings, change the required field values in the configuration file.

    Alert

    Do not change the cluster name or password using Terraform: doing so deletes the existing cluster and creates a new one.

    Here is an example of the configuration file structure:

    resource "yandex_airflow_cluster" "<cluster_name>" {
      name        = "<cluster_name>"
      description = "<cluster_description>"
    
      labels = { <label_list> }
    
      admin_password     = "<admin_password>"
      service_account_id = "<service_account_ID>"
      subnet_ids         = ["<list_of_subnet_IDs>"]
      security_group_ids = ["<list_of_security_group_IDs>"]
    
      webserver = {
        count              = <number_of_instances>
        resource_preset_id = "<resource_ID>"
      }
    
      scheduler = {
        count              = <number_of_instances>
        resource_preset_id = "<resource_ID>"
      }
    
      worker = {
        min_count          = <minimum_number_of_instances>
        max_count          = <maximum_number_of_instances>
        resource_preset_id = "<resource_ID>"
      }
    
      triggerer = {
        count              = <number_of_instances>
        resource_preset_id = "<resource_ID>"
      }
    
      pip_packages = ["<list_of_pip_packages>"]
      deb_packages = ["<list_of_deb_packages>"]
    
      code_sync = {
        s3 = {
          bucket = "<bucket_name>"
        }
      }
    
      maintenance_window = {
        type = "<maintenance_type>"
        day  = "<day_of_week>"
        hour = <hour>
      }
    
      deletion_protection = <deletion_protection>
    
      lockbox_secrets_backend = {
        enabled = <usage_of_secrets>
      }
    
      airflow_config = {
        <configuration_section> = {
          <key> = "<value>"
        }
      }
    
      logging = {
        enabled   = <use_of_logging>
        folder_id = "<folder_ID>"
        min_level = "<logging_level>"
      }
    }
    
    resource "yandex_vpc_network" "<network_name>" { name = "<network_name>" }
    
    resource "yandex_vpc_subnet" "<subnet_name>" {
      name           = "<subnet_name>"
      zone           = "<availability_zone>"
      network_id     = "<network_ID>"
      v4_cidr_blocks = ["<range>"]
    }
    

    Where:

    • name: Cluster name.

    • description: Cluster description.

    • labels: List of labels. Provide labels in <key> = "<value>" format.

    • admin_password: Admin user password. The password must be at least 8 characters long and contain at least:

      • One uppercase letter
      • One lowercase letter
      • One number
      • One special character
    • service_account_id: Service account ID.

    • subnet_ids: List of subnet IDs.

      Note

      Once a cluster is created, you cannot change its subnets.

    • security_group_ids: List of security group IDs.

    • webserver, scheduler, worker, triggerer: Managed Service for Apache Airflow™ component configuration:

      • count: Number of instances in the cluster for the web server, scheduler, and triggerer.

      • min_count, max_count: Minimum and maximum number of instances in the cluster for the worker.

      • resource_preset_id: ID of the computing resources of the web server, scheduler, worker, and triggerer. The possible values are:

        • c1-m2: 1 vCPU, 2 GB RAM
        • c1-m4: 1 vCPU, 4 GB RAM
        • c2-m4: 2 vCPUs, 4 GB RAM
        • c2-m8: 2 vCPUs, 8 GB RAM
        • c4-m8: 4 vCPUs, 8 GB RAM
        • c4-m16: 4 vCPUs, 16 GB RAM
        • c8-m16: 8 vCPUs, 16 GB RAM
        • c8-m32: 8 vCPUs, 32 GB RAM
    • deb_packages, pip_packages: Lists of deb and pip packages enabling you to install additional libraries and applications in the cluster for running DAG files:

      You can set version restrictions for the installed packages, e.g.:

      pip_packages = ["pandas==2.0.2","scikit-learn>=1.0.0","clickhouse-driver~=0.2.0"]
      

      The package name format and version are defined by the install command: pip install for pip packages and apt install for deb packages.

    • code_sync.s3.bucket: Name of the bucket to store DAG files in.

    • maintenance_window: Maintenance window settings (including for disabled clusters):

      • type: Maintenance type. The possible values include:
        • ANYTIME: Any time.
        • WEEKLY: On a schedule.
      • day: Day of week for the WEEKLY type, i.e., MON, TUE, WED, THU, FRI, SAT, or SUN.
      • hour: Time of day (UTC) for the WEEKLY type, from 1 to 24.
    • deletion_protection: Enables cluster protection against accidental deletion. The possible values are true or false.

      Even with deletion protection enabled, one can still connect to the cluster manually and delete the data.

    • lockbox_secrets_backend.enabled: Enables using secrets in Yandex Lockbox to store Apache Airflow™ configuration data, variables, and connection parameters. The possible values are true or false.

    • airflow_config: Apache Airflow™ additional properties, e.g., core for configuration section, load_examples for key, and False for value.

    • logging: Logging parameters:

      • enabled: Enables logging. Logs generated by Apache Airflow™ components will be sent to Yandex Cloud Logging. The possible values are true or false.

      • folder_id: Folder ID. Logs will be written to the default log group for this folder.

      • log_group_id: Custom log group ID. Logs will be written to this group.

        Specify one of the two parameters: folder_id or log_group_id.

      • min_level: Minimum logging level. Possible values: TRACE, DEBUG, INFO (default), WARN, ERROR, and FATAL.
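To make the placeholders concrete, the maintenance_window and airflow_config blocks above might be filled in as follows. This is a sketch with hypothetical values (weekly maintenance on Saturday at 03:00 UTC, and the core.load_examples property disabled):

```hcl
# Hypothetical values; adjust to your own schedule and properties.
maintenance_window = {
  type = "WEEKLY"
  day  = "SAT"
  hour = 3
}

airflow_config = {
  core = {
    load_examples = "False"
  }
}
```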

  3. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  4. Confirm updating the resources.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

For more information, see the Terraform provider documentation.

To change the cluster settings:

  1. Get an IAM token for API authentication and save it as an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Create a file named body.json and paste the following code into it:

    {
      "updateMask": "<list_of_parameters_to_update>",
      "name": "<cluster_name>",
      "description": "<cluster_description>",
      "labels": { <label_list> },
      "configSpec": {
        "airflow": {
          "config": { <list_of_properties> }
        },
        "webserver": {
          "count": "<number_of_instances>",
          "resources": {
            "resourcePresetId": "<resource_ID>"
          }
        },
        "scheduler": {
          "count": "<number_of_instances>",
          "resources": {
            "resourcePresetId": "<resource_ID>"
          }
        },
        "triggerer": {
          "count": "<number_of_instances>",
          "resources": {
            "resourcePresetId": "<resource_ID>"
          }
        },
        "worker": {
          "minCount": "<minimum_number_of_instances>",
          "maxCount": "<maximum_number_of_instances>",
          "resources": {
            "resourcePresetId": "<resource_ID>"
          }
        },
        "dependencies": {
          "pipPackages": [ <list_of_pip_packages> ],
          "debPackages": [ <list_of_deb_packages> ]
        },
        "lockbox": {
          "enabled": <usage_of_secrets>
        }
      },
      "codeSync": {
        "s3": {
          "bucket": "<bucket_name>"
        }
      },
      "networkSpec": {
        "securityGroupIds": [ <list_of_security_group_IDs> ]
      },
      "maintenanceWindow": {
        "weeklyMaintenanceWindow": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "deletionProtection": <deletion_protection>,
      "serviceAccountId": "<service_account_ID>",
      "logging": {
        "enabled": <use_of_logging>,
        "minLevel": "<logging_level>",
        "folderId": "<folder_ID>"
      }
    }
    

    Where:

    • updateMask: List of parameters to update as a single string, separated by commas.

      Warning

      When you update a cluster, all parameters of the object you are changing that have not been explicitly provided in the request will take their defaults. To avoid this, list the settings you want to change in the updateMask parameter.

    • name: Cluster name.

    • description: Cluster description.

    • labels: List of labels provided in "<key>": "<value>" format.

    • configSpec: Cluster configuration:

      • airflow.config: Advanced Apache Airflow™ properties provided in "<configuration_section>.<key>": "<value>" format, e.g.:

        "airflow": {
          "config": {
            "core.load_examples": "False"
          }
        }
        
      • webserver, scheduler, triggerer, worker: Managed Service for Apache Airflow™ component configuration:

        • count: Number of instances in the cluster for the web server, scheduler, and triggerer.

        • minCount, maxCount: Minimum and maximum number of instances in the cluster for the worker.

        • resources.resourcePresetId: ID of the computing resources of the web server, scheduler, worker, and triggerer. The possible values are:

          • c1-m2: 1 vCPU, 2 GB RAM
          • c1-m4: 1 vCPU, 4 GB RAM
          • c2-m4: 2 vCPUs, 4 GB RAM
          • c2-m8: 2 vCPUs, 8 GB RAM
          • c4-m8: 4 vCPUs, 8 GB RAM
          • c4-m16: 4 vCPUs, 16 GB RAM
          • c8-m16: 8 vCPUs, 16 GB RAM
          • c8-m32: 8 vCPUs, 32 GB RAM
      • dependencies: Lists of packages enabling you to install additional libraries and applications for running DAG files in the cluster:

        • pipPackages: List of pip packages.
        • debPackages: List of deb packages.

        You can set version restrictions for the installed packages, e.g.:

        "dependencies": {
          "pipPackages": [
            "pandas==2.0.2",
            "scikit-learn>=1.0.0",
            "clickhouse-driver~=0.2.0"
          ]
        }
        

        The package name format and version are defined by the install command: pip install for pip packages and apt install for deb packages.

      • lockbox.enabled: Enables using secrets in Yandex Lockbox to store Apache Airflow™ configuration data, variables, and connection parameters. The possible values are true or false.

    • networkSpec.securityGroupIds: List of security group IDs.

    • codeSync.s3.bucket: Name of the bucket to store DAG files in.

    • maintenanceWindow: Maintenance window settings (including for disabled clusters). In maintenanceWindow, provide one of the two parameters:

      • anytime: Maintenance can take place at any time.

      • weeklyMaintenanceWindow: Maintenance takes place once a week at the specified time:

        • day: Day of week in DDD format: MON, TUE, WED, THU, FRI, SAT, or SUN.
        • hour: Time of day (UTC) in HH format, from 1 to 24.
    • deletionProtection: Enables cluster protection against accidental deletion. The possible values are true or false.

      Even with deletion protection enabled, one can still connect to the cluster manually and delete the data.

    • serviceAccountId: ID of the service account with the managed-airflow.integrationProvider role. The cluster will thus get the permissions it needs to work with user resources. For more information, see Impersonation.

      To update a service account in a Managed Service for Apache Airflow™ cluster, assign the iam.serviceAccounts.user role or higher to your Yandex Cloud account.

      Warning

      If the cluster already uses a service account to access objects from Object Storage, then changing it to a different service account may make these objects unavailable and interrupt the cluster operation. Before changing the service account settings, make sure that the cluster doesn't use the objects in question.

    • logging: Logging parameters:

      • enabled: Enables logging. Logs generated by Apache Airflow™ components will be sent to Yandex Cloud Logging. The possible values are true or false.

      • minLevel: Minimum logging level. The possible values are TRACE, DEBUG, INFO, WARN, ERROR, and FATAL.

      • folderId: Folder ID. Logs will be written to the default log group for this folder.

      • logGroupId: Custom log group ID. Logs will be written to this group.

        Specify either folderId or logGroupId.
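To make the updateMask behavior concrete, here is a minimal sketch (the description and log level values are hypothetical) that builds a body.json updating only two settings and checks that it is valid JSON before sending:

```bash
# Only the fields listed in updateMask are modified; per the warning above,
# fields of the updated object that are omitted from updateMask fall back to
# their defaults, so list every setting you touch.
cat > body.json <<'EOF'
{
  "updateMask": "description,logging.minLevel",
  "description": "Production Airflow cluster",
  "logging": { "minLevel": "WARN" }
}
EOF

# Sanity-check the file before passing it to curl:
python3 -m json.tool body.json > /dev/null && echo "body.json is valid JSON"
```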

  3. Use the Cluster.Update method and send the following request, e.g., via cURL:

    curl \
        --request PATCH \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --url 'https://airflow.api.cloud.yandex.net/managed-airflow/v1/clusters/<cluster_ID>' \
        --data '@body.json'
    

    You can get the cluster ID with the list of clusters in the folder.

  4. View the server response to make sure your request was successful.

To change the cluster settings:

  1. Get an IAM token for API authentication and save it as an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume the repository contents are stored in the ~/cloudapi/ directory.

  3. Create a file named body.json and paste the following code into it:

    {
      "cluster_id": "<cluster_ID>",
      "update_mask": "<list_of_parameters_to_update>",
      "name": "<cluster_name>",
      "description": "<cluster_description>",
      "labels": { <label_list> },
      "config_spec": {
        "airflow": {
          "config": { <list_of_properties> }
        },
        "webserver": {
          "count": "<number_of_instances>",
          "resources": {
            "resource_preset_id": "<resource_ID>"
          }
        },
        "scheduler": {
          "count": "<number_of_instances>",
          "resources": {
            "resource_preset_id": "<resource_ID>"
          }
        },
        "triggerer": {
          "count": "<number_of_instances>",
          "resources": {
            "resource_preset_id": "<resource_ID>"
          }
        },
        "worker": {
          "min_count": "<minimum_number_of_instances>",
          "max_count": "<maximum_number_of_instances>",
          "resources": {
            "resource_preset_id": "<resource_ID>"
          }
        },
        "dependencies": {
          "pip_packages": [ <list_of_pip_packages> ],
          "deb_packages": [ <list_of_deb_packages> ]
        },
        "lockbox": {
          "enabled": <usage_of_secrets>
        }
      },
      "code_sync": {
        "s3": {
          "bucket": "<bucket_name>"
        }
      },
      "network_spec": {
        "security_group_ids": [ <list_of_security_group_IDs> ]
      },
      "maintenance_window": {
        "weekly_maintenance_window": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "deletion_protection": <deletion_protection>,
      "service_account_id": "<service_account_ID>",
      "logging": {
        "enabled": <use_of_logging>,
        "min_level": "<logging_level>",
        "folder_id": "<folder_ID>"
      }
    }
    

    Where:

    • cluster_id: Cluster ID. You can get it with the list of clusters in a folder.

    • update_mask: List of parameters to update, provided as an array of paths[] strings in the following format:
      "update_mask": {
          "paths": [
              "<setting_1>",
              "<setting_2>",
              ...
              "<setting_N>"
          ]
      }
      

      Warning

      When you update a cluster, all parameters of the object you are changing that have not been explicitly provided in the request will take their defaults. To avoid this, list the settings you want to change in the update_mask parameter.

    • name: Cluster name.

    • description: Cluster description.

    • labels: List of labels provided in "<key>": "<value>" format.

    • config_spec: Cluster configuration:

      • airflow.config: Advanced Apache Airflow™ properties provided in "<configuration_section>.<key>": "<value>" format, e.g.:

        "airflow": {
          "config": {
            "core.load_examples": "False"
          }
        }
        
      • webserver, scheduler, triggerer, worker: Managed Service for Apache Airflow™ component configuration:

        • count: Number of instances in the cluster for the web server, scheduler, and triggerer.

        • min_count, max_count: Minimum and maximum number of instances in the cluster for the worker.

        • resources.resource_preset_id: ID of the computing resources of the web server, scheduler, worker, and triggerer. The possible values are:

          • c1-m2: 1 vCPU, 2 GB RAM
          • c1-m4: 1 vCPU, 4 GB RAM
          • c2-m4: 2 vCPUs, 4 GB RAM
          • c2-m8: 2 vCPUs, 8 GB RAM
          • c4-m8: 4 vCPUs, 8 GB RAM
          • c4-m16: 4 vCPUs, 16 GB RAM
          • c8-m16: 8 vCPUs, 16 GB RAM
          • c8-m32: 8 vCPUs, 32 GB RAM
      • dependencies: Lists of packages enabling you to install additional libraries and applications for running DAG files in the cluster:

        • pip_packages: List of pip packages.
        • deb_packages: List of deb packages.

        You can set version restrictions for the installed packages, e.g.:

        "dependencies": {
          "pip_packages": [
            "pandas==2.0.2",
            "scikit-learn>=1.0.0",
            "clickhouse-driver~=0.2.0"
          ]
        }
        

        The package name format and version are defined by the install command: pip install for pip packages and apt install for deb packages.

      • lockbox.enabled: Enables using secrets in Yandex Lockbox to store Apache Airflow™ configuration data, variables, and connection parameters. The possible values are true or false.

    • network_spec.security_group_ids: List of security group IDs.

    • code_sync.s3.bucket: Name of the bucket to store DAG files in.

    • maintenance_window: Maintenance window settings (including for disabled clusters). In maintenance_window, provide one of the two parameters:

      • anytime: Maintenance can take place at any time.

      • weekly_maintenance_window: Maintenance takes place once a week at the specified time:

        • day: Day of week in DDD format: MON, TUE, WED, THU, FRI, SAT, or SUN.
        • hour: Time of day (UTC) in HH format, from 1 to 24.
    • deletion_protection: Enables cluster protection against accidental deletion. The possible values are true or false.

      Even with deletion protection enabled, one can still connect to the cluster manually and delete the data.

    • service_account_id: ID of the service account with the managed-airflow.integrationProvider role. The cluster will thus get the permissions it needs to work with user resources. For more information, see Impersonation.

      To update a service account in a Managed Service for Apache Airflow™ cluster, assign the iam.serviceAccounts.user role or higher to your Yandex Cloud account.

      Warning

      If the cluster already uses a service account to access objects from Object Storage, then changing it to a different service account may make these objects unavailable and interrupt the cluster operation. Before changing the service account settings, make sure that the cluster doesn't use the objects in question.

    • logging: Logging parameters:

      • enabled: Enables logging. Logs generated by Apache Airflow™ components will be sent to Yandex Cloud Logging. The possible values are true or false.

      • min_level: Minimum logging level. The possible values are TRACE, DEBUG, INFO, WARN, ERROR, and FATAL.

      • folder_id: Folder ID. Logs will be written to the default log group for this folder.

      • log_group_id: Custom log group ID. Logs will be written to this group.

        Specify either folder_id or log_group_id.
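For example, a minimal body.json that updates only the cluster description and minimum log level could look like this (the values are hypothetical; replace <cluster_ID> with your cluster ID). Note that the gRPC variant uses snake_case field names and the paths[] array format for update_mask shown above:

```bash
# Fields not listed in update_mask would be reset to their defaults,
# so list every setting you change.
cat > body.json <<'EOF'
{
  "cluster_id": "<cluster_ID>",
  "update_mask": {
    "paths": [
      "description",
      "logging.min_level"
    ]
  },
  "description": "Production Airflow cluster",
  "logging": { "min_level": "WARN" }
}
EOF

# Verify the request body parses as JSON before piping it to grpcurl:
python3 -m json.tool body.json > /dev/null && echo "body.json is valid JSON"
```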

  4. Use the ClusterService.Update call and send the following request, e.g., via gRPCurl:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/airflow/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d @ \
        airflow.api.cloud.yandex.net:443 \
        yandex.cloud.airflow.v1.ClusterService.Update \
        < body.json
    
  5. View the server response to make sure your request was successful.
