
Connecting to Yandex Managed Service for PostgreSQL

Written by Yandex Cloud. Updated on October 23, 2025.
  • Getting started
  • Create a Yandex Lockbox secret
  • Prepare the DAG file and run the graph
  • Check the result

You can use a directed acyclic graph (DAG) to configure a connection to a database in a Yandex Managed Service for PostgreSQL cluster. Data for connecting to the DB is stored in Yandex Lockbox and automatically inserted into the graph.

Tip

Clusters running Apache Airflow™ versions below 3.0 use apache-airflow-providers-postgres 5.13.1 as the default provider. In newer provider versions, PostgresOperator is deprecated; use SQLExecuteQueryOperator instead. For more information, see the official documentation.

Getting started

  1. Create a Managed Service for PostgreSQL cluster with the following parameters:

    • DB name: db1
    • Username: user1
    • Password: user1-password
  2. Create a Yandex Object Storage bucket to store the DAG file in.

  3. Configure the Managed Service for Apache Airflow™ cluster:

    1. Enable Use Lockbox Secret Backend to use Yandex Lockbox secrets to store Apache Airflow™ configuration data, variables, and connection parameters.

    2. Under Dependencies, add the apache-airflow-providers-postgres pip package.

      Warning

      You only need to install this pip package on clusters running Apache Airflow™ 3.0 or higher. On clusters running versions below 3.0, it comes preinstalled.

    3. Under DAG file storage, select the Object Storage bucket you created earlier. Your DAG file will be fetched from it.

  4. Assign the lockbox.payloadViewer role to your service account.

    You do not need to assign the lockbox.payloadViewer role for the whole folder: it is enough to assign it for the specific Yandex Lockbox secret once you create it.
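Before moving on, you can optionally verify the db1 / user1 credentials from any machine with network access to the PostgreSQL cluster. This is a sketch rather than part of the guide: it assumes the psycopg2-binary package is installed locally and that <PostgreSQL_cluster_host_FQDN> is one of your cluster hosts (Managed Service for PostgreSQL listens on port 6432 and expects TLS):

    # Optional local sanity check of the cluster credentials (a sketch).
    import psycopg2

    conn = psycopg2.connect(
        host="<PostgreSQL_cluster_host_FQDN>",  # same FQDN you will store in the secret
        port=6432,
        dbname="db1",
        user="user1",
        password="user1-password",
        sslmode="require",  # Managed Service for PostgreSQL uses TLS
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT 1;")
        print(cur.fetchone())  # expected: (1,)
    conn.close()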

Create a Yandex Lockbox secret

For the Apache Airflow™ cluster to work correctly, your Yandex Lockbox secret's name must have this format: airflow/<artifact_type>/<artifact_ID>, where:

  • <artifact_type>: Determines what data the secret stores. The allowed values are:
    • connections: Connections.
    • variables: Variables.
    • config: Configuration data.
  • <artifact_ID>: ID to use to access the artifact in Apache Airflow™.
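For illustration only (this guide creates a connections secret): a secret named airflow/variables/my_var, where my_var is a hypothetical <artifact_ID>, would be readable in DAG code as a regular Apache Airflow™ variable, while connections are resolved by conn_id, as shown later in this guide:

    # Sketch: how <artifact_ID> is used for the variables artifact type.
    # The secret name airflow/variables/my_var is hypothetical.
    from airflow.models import Variable

    my_var = Variable.get("my_var")  # "my_var" is the <artifact_ID> part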

Create a Yandex Lockbox secret with the following parameters:

  • Name: airflow/connections/pg1.

  • Secret type: Custom.

  • Key: conn.

  • Value: Select Text and specify the following contents:

    {
      "conn_type": "postgres",
      "host": "<PostgreSQL_cluster_host_FQDN>",
      "port": 6432,
      "schema": "db1",
      "login": "user1",
      "password": "user1-password"
    }
    

For more information on how to get the FQDN of a PostgreSQL cluster host, see PostgreSQL host FQDN.

The secret will store the data to connect to the database in the Managed Service for PostgreSQL cluster.
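As a hedged illustration of how these JSON fields surface in Apache Airflow™ (this snippet only works inside the cluster, where the Lockbox secrets backend is enabled): the keys map onto the fields of the resolved Connection object. Note that in Airflow™ terminology, schema holds the database name, not a PostgreSQL schema:

    # Sketch: the secret's JSON keys as seen on the resolved Connection.
    from airflow.hooks.base import BaseHook

    conn = BaseHook.get_connection("pg1")  # "pg1" from airflow/connections/pg1
    print(conn.conn_type)  # "postgres"
    print(conn.host)       # <PostgreSQL_cluster_host_FQDN>
    print(conn.port)       # 6432
    print(conn.schema)     # "db1" (the database name)
    print(conn.login)      # "user1"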

Prepare the DAG file and run the graph

  1. Create a local file named postgres_operator.py and paste the following script to it:

    Apache Airflow™ version below 3.0:

    from airflow import DAG
    from airflow.providers.postgres.operators.postgres import PostgresOperator


    with DAG(
        dag_id='postgres_operator',
        schedule=None
    ):
        PostgresOperator(
            task_id="check_conn",
            # "pg1" is the <artifact_ID> from the secret name
            # airflow/connections/pg1
            postgres_conn_id='pg1',
            sql="SELECT 1;"
        )

    Apache Airflow™ version 3.0 or higher:

    from airflow import DAG
    from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator


    with DAG(
        dag_id='postgres_operator',
        schedule=None
    ):
        SQLExecuteQueryOperator(
            task_id="check_conn",
            conn_id='pg1',
            sql="SELECT 1;"
        )
  2. Upload the postgres_operator.py DAG file to the bucket you created earlier.

  3. Open the Apache Airflow™ web interface.

  4. Make sure a new graph named postgres_operator has appeared in the DAGs section.

    It may take a few minutes to load a DAG file from the bucket.

  5. To run the graph, click the run icon in the line with its name.
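Once the graph runs successfully, you may want a task that actually reads data rather than just checking the connection. Below is a sketch (not part of the guide) that uses the provider's PostgresHook inside a TaskFlow task over the same pg1 connection; the DAG and task IDs are hypothetical:

    from airflow import DAG
    from airflow.decorators import task
    from airflow.providers.postgres.hooks.postgres import PostgresHook


    with DAG(
        dag_id='postgres_fetch_rows',  # hypothetical DAG ID
        schedule=None
    ):
        @task(task_id="fetch_version")
        def fetch_version():
            # The hook resolves "pg1" through the Lockbox secrets backend,
            # just like the operators above.
            hook = PostgresHook(postgres_conn_id="pg1")
            rows = hook.get_records("SELECT version();")
            print(rows)  # the result appears in the task log
            return str(rows)

        fetch_version()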

Check the result

To check the result in the Apache Airflow™ web interface:

Apache Airflow™ version below 3.0:

  1. In the DAGs section, click postgres_operator.
  2. Go to the Graph section.
  3. Select check_conn.
  4. Go to Logs.
  5. Make sure the logs contain the Rows affected: 1 line. This means the query was successful.

Apache Airflow™ version 3.0 or higher:

  1. In the DAGs section, click postgres_operator.
  2. Go to Tasks.
  3. Select check_conn.
  4. Go to Task Instances.
  5. Select the task instance to open its Logs section.
  6. Make sure the logs contain the Rows affected: 1 line. This means the query was successful.
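If you prefer checking the log programmatically, the sketch below uses the Apache Airflow™ 2.x stable REST API. It assumes the API is reachable at your cluster's webserver URL and that you have valid credentials; all placeholders are hypothetical, and in Apache Airflow™ 3.x the API paths differ:

    # Sketch: fetching the check_conn task log over the Airflow™ 2.x REST API.
    import requests

    base = "https://<webserver_host>/api/v1"  # assumption: API is reachable
    auth = ("<user>", "<password>")           # assumption: basic auth is enabled

    runs = requests.get(f"{base}/dags/postgres_operator/dagRuns", auth=auth).json()
    run_id = runs["dag_runs"][-1]["dag_run_id"]

    log = requests.get(
        f"{base}/dags/postgres_operator/dagRuns/{run_id}"
        "/taskInstances/check_conn/logs/1",
        auth=auth,
        headers={"Accept": "text/plain"},  # ask for plain-text log content
    )
    print("Rows affected: 1" in log.text)  # True if the query succeeded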
