© 2026 Direct Cursus Technology L.L.C.


Creating a Yandex StoreDoc cluster

Written by
Yandex Cloud
Updated at January 19, 2026
  • Creating a cluster
  • Creating a cluster copy
  • Examples
    • Creating a single-host cluster
    • Creating sharded clusters

A Yandex StoreDoc cluster is one or more database hosts between which you can configure replication. Replication is enabled by default in any cluster with more than one host: the primary host accepts write requests and asynchronously replicates the changes to the secondary hosts.

Note

  • The number of hosts you can create together with a Yandex StoreDoc cluster depends on the selected disk type and host class.
  • The available disk types depend on the selected host class.

Cluster DB connections are managed by Connection Manager. Creating a cluster automatically creates:

  • A Connection Manager connection with the database connection details.

  • A Yandex Lockbox secret that stores the DB owner's password. Storing passwords in Yandex Lockbox keeps them secure.

The connection and secret will be created for each new database user. To view all connections, select the Connections tab on the cluster page.

You need the connection-manager.viewer role to view connection info. You can use Connection Manager to configure access to connections.

You can use Connection Manager and secrets you create there free of charge.

Creating a cluster

To create a Yandex StoreDoc cluster, you need the vpc.user role and the managed-mongodb.editor role or higher. For information on assigning roles, see this Identity and Access Management guide.
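If you prefer to grant these roles from the command line, here is a sketch. The folder name and user ID are placeholders to substitute; `yc resource-manager folder add-access-binding` is the general-purpose role-binding command, not specific to Yandex StoreDoc:

```shell
# Sketch: grant the two roles required to create a cluster.
# <folder_name> and <user_ID> are placeholders, not real values.
yc resource-manager folder add-access-binding <folder_name> \
  --role vpc.user \
  --subject userAccount:<user_ID>

yc resource-manager folder add-access-binding <folder_name> \
  --role managed-mongodb.editor \
  --subject userAccount:<user_ID>
```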

Management console
CLI
Terraform
REST API
gRPC API

To create a Yandex StoreDoc cluster:

  1. In the management console, select the folder where you want to create your database cluster.

  2. Go to Yandex StoreDoc.

  3. Click Create cluster.

  4. Under Basic parameters:

    • Enter a name in the Cluster name field. The cluster name must be unique within the cloud.

    • Optionally, enter a cluster Description.

    • Select the environment where you want to create your cluster (you cannot change the environment once the cluster is created):

      • PRODUCTION: For stable versions of your applications.
      • PRESTABLE: For testing purposes. The prestable environment is similar to the production environment and likewise covered by an SLA, but it is the first to get new features, improvements, and bug fixes. In the prestable environment, you can test new versions for compatibility with your application.
    • Specify the DBMS version.

    • Select the sharding type:

      • Disabled: Cluster will consist only of MONGOD hosts.
      • Standard: Cluster will consist of MONGOD and MONGOINFRA hosts.
      • Advanced: Cluster will consist of MONGOD, MONGOS, and MONGOCFG hosts.
  5. Under Network settings, select:

    • Cloud network for cluster deployment.
    • Security groups for the cluster network traffic. You may need to configure security groups to connect to the cluster.
  6. Specify the computing resource configuration:

    • For a non-sharded cluster, under Resources.
    • For a cluster with standard sharding, under Mongod Resources and Mongoinfra Resources.
    • For a cluster with advanced sharding, under Mongod Resources, Mongos Resources, and Mongocfg Resources.

    To specify your computing resource configuration:

    1. Select the platform, VM type, and host class. The latter determines the technical specifications of the VMs the database hosts will be deployed on. All available options are listed under Host classes. When you change the host class for a cluster, the specifications of all existing instances also change.

      Note

      The memory-optimized configuration type is unavailable for MONGOS hosts.

    2. Under Storage:

      • Select the disk type.

        The selected type determines the increments in which you can change your disk size:

        • Network HDD and SSD storage: In increments of 1 GB.
        • Local SSD storage:
          • For Intel Broadwell and Intel Cascade Lake: In increments of 100 GB.
          • For Intel Ice Lake: In increments of 368 GB.
        • Non-replicated SSDs and ultra high-speed network SSDs with three replicas: In increments of 93 GB.
      • Select the storage capacity for your data and backups. For more information, see Backups.

      • Optionally, select Encrypted disk to encrypt the disk with a custom KMS key.

        • To create a new key, click Create.

        • To use the key you created earlier, select it in the KMS key field.

        To learn more about disk encryption, see Storage.

    3. Under Hosts, add the DB hosts created with the cluster:

      • Click Add host.
      • Select the availability zone.
      • Select a subnet in the specified availability zone. If there is no subnet, create one.
      • If the host must be available outside Yandex Cloud, enable Public access.

      To ensure fault tolerance, you need at least 3 hosts for local-ssd and network-ssd-nonreplicated disk types. For more information, see Storage.

      By default, hosts are created in different availability zones. Read more about host management.

  7. Under Database, specify the database details:

    • Database name.

      A database name may contain Latin letters, numbers, underscores, and hyphens. The name may be up to 63 characters long. Such names as config, local, admin, and mdb_internal are reserved for Yandex StoreDoc. You cannot create DBs with these names.

    • Username.

    • User password. The password must be at least 8 characters long.

  8. Specify additional cluster settings, if required:

    • Backup start time (UTC): Time interval during which the cluster backup starts. Time is specified in 24-hour UTC format. The default time is 22:00 - 23:00 UTC.

    • Retention period for automatic backups, days

      Retention period for automatic backups. Backups are automatically deleted once their retention period expires. The default is 7 days. This feature is at the Preview stage. For more information, see Backups.

      Changing the retention period affects both new and existing automatic backups. For example, if the initial retention period was 7 days, and the remaining lifetime of a separate automatic backup is 1 day, increasing the retention period to 9 days will change the remaining lifetime of this backup to 3 days.

      For an existing cluster, automatic backups are stored for a specified number of days whereas manually created ones are stored indefinitely. After a cluster is deleted, all backups persist for 7 days.

    • Maintenance window: Maintenance window settings:

      • To enable maintenance at any time, select arbitrary (default).
      • To specify the preferred maintenance start time, select by schedule and specify the desired day of the week and UTC hour. For example, you can choose a time when the cluster is least loaded.

      Maintenance operations are carried out both on enabled and disabled clusters. They may include updating the DBMS, applying patches, and so on.

    • WebSQL access: Enables you to run SQL queries against cluster databases from the Yandex Cloud management console using Yandex WebSQL.

    • Statistics sampling: Enable this option to use the built-in performance diagnostics tool in the cluster. This feature is at the Preview stage.

    • Deletion protection: Cluster protection from accidental deletion.

      Even with cluster deletion protection enabled, one can still delete a user or database or connect manually and delete the database contents.

  9. Configure the DBMS, if required.

    Note

    Some Yandex StoreDoc settings depend on the selected host class.

  10. Click Create cluster.
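The retention arithmetic described in step 8 can be sanity-checked: changing the retention period shifts every automatic backup's remaining lifetime by the same delta. A minimal sketch of the worked example (7 days extended to 9, backup with 1 day left):

```shell
# Remaining lifetime of an automatic backup after a retention-period change:
# new_remaining = old_remaining + (new_period - old_period)
old_period=7
new_period=9
old_remaining=1
new_remaining=$(( old_remaining + new_period - old_period ))
echo "$new_remaining"   # prints 3
```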

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To create a Yandex StoreDoc cluster:

  1. Check whether the folder has any subnets for the cluster hosts:

    yc vpc subnet list
    

    If your folder has no subnets, create them in VPC.

  2. View the description of the CLI command for creating a cluster:

    yc managed-mongodb cluster create --help
    
  3. Specify the cluster parameters in the create command (not all parameters are given in the example):

    For a non-sharded cluster
    yc managed-mongodb cluster create \
      --name <cluster_name> \
      --environment=<environment> \
      --network-name <network_name> \
      --security-group-ids <security_group_IDs> \
      --host zone-id=<availability_zone>,`
            `subnet-id=<subnet_ID>,`
            `assign-public-ip=<allow_public_access_to_host>,`
            `hidden=<hide_host>,`
            `secondary-delay-secs=<replica_lag_in_seconds>,`
            `priority=<host_priority> \
      --mongod-resource-preset <host_class> \
      --user name=<username>,password=<user_password> \
      --database name=<DB_name> \
      --mongod-disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
      --mongod-disk-size <storage_size_in_GB> \
      --disk-encryption-key-id <KMS_key_ID> \
      --performance-diagnostics=<enable_diagnostics> \
      --deletion-protection
    
    For a cluster with standard sharding
    yc managed-mongodb cluster create \
      --name <cluster_name> \
      --environment=<environment> \
      --mongodb-version <Yandex_StoreDoc_version> \
      --network-name <network_name> \
      --security-group-ids <security_group_IDs> \
      --user name=<username>,password=<user_password> \
      --database name=<DB_name> \
      --mongod-resource-preset <host_class> \
      --mongod-disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
      --mongod-disk-size <storage_size_in_GB> \
      --host type=mongod,`
            `zone-id=<availability_zone>,`
            `subnet-id=<subnet_ID>,`
            `hidden=<hide_host>,`
            `secondary-delay-secs=<replica_lag_in_seconds>,`
            `priority=<host_priority> \
      --mongoinfra-resource-preset <host_class> \
      --mongoinfra-disk-type <network-hdd|network-ssd> \
      --mongoinfra-disk-size <storage_size_in_GB> \
      --host type=mongoinfra,`
            `zone-id=<availability_zone>,`
            `subnet-id=<subnet_ID>,`
            `assign-public-ip=<allow_public_access_to_host> \
      --disk-encryption-key-id <KMS_key_ID> \
      --performance-diagnostics=<enable_diagnostics> \
      --deletion-protection
    
    For a cluster with advanced sharding
    yc managed-mongodb cluster create \
      --name <cluster_name> \
      --environment=<environment> \
      --mongodb-version <Yandex_StoreDoc_version> \
      --network-name <network_name> \
      --security-group-ids <security_group_IDs> \
      --user name=<username>,password=<user_password> \
      --database name=<DB_name> \
      --mongod-resource-preset <host_class> \
      --mongod-disk-type <network-hdd|network-ssd|network-ssd-nonreplicated|local-ssd> \
      --mongod-disk-size <storage_size_in_GB> \
      --host type=mongod,`
            `zone-id=<availability_zone>,`
            `subnet-id=<subnet_ID>,`
            `hidden=<hide_host>,`
            `secondary-delay-secs=<replica_lag_in_seconds>,`
            `priority=<host_priority> \
      --mongos-resource-preset <host_class> \
      --mongos-disk-type <network-hdd|network-ssd> \
      --mongos-disk-size <storage_size_in_GB> \
      --host type=mongos,`
            `zone-id=<availability_zone>,`
            `subnet-id=<subnet_ID>,`
            `assign-public-ip=<allow_public_access_to_host> \
      --mongocfg-resource-preset <host_class> \
      --mongocfg-disk-type <network-hdd|network-ssd> \
      --mongocfg-disk-size <storage_size_in_GB> \
      --host type=mongocfg,`
            `zone-id=<availability_zone>,`
            `subnet-id=<subnet_ID> \
      --disk-encryption-key-id <KMS_key_ID> \
      --performance-diagnostics=<enable_diagnostics> \
      --deletion-protection
    

    Where:

    • --environment: Environment, prestable or production.

    • --security-group-ids: List of security group IDs.

    • --database name: Database name.

      Note

      A database name may contain Latin letters, numbers, underscores, and hyphens. The name may be up to 63 characters long. Such names as config, local, admin, and mdb_internal are reserved for Yandex StoreDoc. You cannot create DBs with these names.

    • --host: Host settings:

      • type: Host type, i.e., mongod, mongoinfra, mongos, or mongocfg. The default host type is mongod.

      • zone-id: Availability zone.

      • subnet-id: Subnet ID. To be specified if the selected availability zone has more than one subnet.

      • assign-public-ip: Internet access to the host via a public IP address, true or false. In a sharded cluster, it is used only for MONGOS and MONGOINFRA hosts.

      • hidden: Hide host, true or false. If the host is hidden, only direct connections will be able to read from it (for example, to make backups from it without adding load to the cluster).

      • secondary-delay-secs: Replica's lag behind the master in seconds. It can be useful for data recovery in case of invalid operations.

      • priority: Host priority for assignment as a master.

        Note

        The hidden, secondary-delay-secs, and priority parameters are used for MONGOD hosts only.

    • --mongod-resource-preset: MONGOD host class.

    • --mongoinfra-resource-preset, --mongos-resource-preset, --mongocfg-resource-preset: MONGOINFRA, MONGOS, and MONGOCFG host classes, respectively (for sharded clusters only).

    • --mongod-disk-type: Disk type of MONGOD hosts.

    • --mongoinfra-disk-type, --mongos-disk-type, --mongocfg-disk-type: Disk types of MONGOINFRA, MONGOS, and MONGOCFG hosts, respectively (for sharded clusters only).

    • --disk-encryption-key-id: Disk encryption with a custom KMS key.

      To learn more about disk encryption, see Storage.

    • --performance-diagnostics: Enables cluster performance diagnostics, true or false.

    • --deletion-protection: Cluster protection from accidental deletion, true or false.

      Even with cluster deletion protection enabled, one can still delete a user or database or connect manually and delete the database contents.

    Note

    The default maintenance mode for new clusters is anytime. You can set a specific maintenance period when updating the cluster settings.
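    To make the template concrete, a minimal non-sharded invocation might look as follows. All values are hypothetical examples (the cluster name, network, zone, and s2.micro host class are illustrative, not recommendations), and subnet-id is omitted on the assumption that the zone has a single subnet:

    ```shell
    # Hypothetical example values throughout; substitute your own.
    yc managed-mongodb cluster create \
      --name my-storedoc \
      --environment production \
      --network-name default \
      --host zone-id=ru-central1-a,assign-public-ip=true \
      --mongod-resource-preset s2.micro \
      --mongod-disk-type network-ssd \
      --mongod-disk-size 20 \
      --user name=app,password=<user_password> \
      --database name=appdb
    ```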

With Terraform, you can quickly create a cloud infrastructure in Yandex Cloud and manage it using configuration files. These files store the infrastructure description written in HashiCorp Configuration Language (HCL). If you change the configuration files, Terraform automatically detects which part of your configuration is already deployed, and what should be added or removed.

Terraform is distributed under the Business Source License. The Yandex Cloud provider for Terraform is distributed under the MPL-2.0 license.

For more information about the provider resources, see the relevant documentation on the Terraform website or its mirror.

If you do not have Terraform yet, install it and configure the Yandex Cloud provider.

To create a Yandex StoreDoc cluster:

  1. In the configuration file, describe the resources you want to create:

    • Database cluster: Description of the cluster and its hosts.

    • Network: Description of the cloud network where a cluster will be located. If you already have a suitable network, you don't have to describe it again.

    • Subnets: Description of the subnets to connect the cluster hosts to. If you already have suitable subnets, you don't have to describe them again.

    Here is an example of the configuration file structure:

    For a non-sharded cluster
    resource "yandex_mdb_mongodb_cluster" "<cluster_name>" {
      name                = "<cluster_name>"
      environment         = "<environment>"
      network_id          = yandex_vpc_network.<network_name>.id
      security_group_ids  = [ "<list_of_security_group_IDs>" ]
      deletion_protection = <protect_cluster_against_deletion>
    
      cluster_config {
        version = "<Yandex_StoreDoc_version>"
      }
    
      resources_mongod {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    
      host {
        type             = "mongod"
        zone_id          = "<availability_zone>"
        subnet_id        = yandex_vpc_subnet.<subnet_name>.id
        assign_public_ip = <allow_public_access_to_host>
        host_parameters {
          hidden               = <hide_host>
          secondary_delay_secs = <replica_lag_in_seconds>
          priority             = <host_priority>
        }
      }
    
    }
    
    resource "yandex_mdb_mongodb_database" "<DB_name>" {
      cluster_id = yandex_mdb_mongodb_cluster.<cluster_name>.id
      name       = "<DB_name>"
    }
    
    resource "yandex_mdb_mongodb_user" "<username>" {
      cluster_id = yandex_mdb_mongodb_cluster.<cluster_name>.id
      name       = "<username>"
      password   = "<password>"
      permission {
        database_name = "<DB_name>"
        roles         = [ "<list_of_user_roles>" ]
      }
      depends_on = [
        yandex_mdb_mongodb_database.<DB_name>
      ]
    }
    
    resource "yandex_vpc_network" "<network_name>" { name = "<network_name>" }
    
    resource "yandex_vpc_subnet" "<subnet_name>" {
      name           = "<subnet_name>"
      zone           = "<availability_zone>"
      network_id     = yandex_vpc_network.<network_name>.id
      v4_cidr_blocks = ["<range>"]
    }
    
    For a cluster with standard sharding
    resource "yandex_mdb_mongodb_cluster" "<cluster_name>" {
      name                = "<cluster_name>"
      environment         = "<environment>"
      network_id          = yandex_vpc_network.<network_name>.id
      security_group_ids  = [ "<list_of_security_group_IDs>" ]
      deletion_protection = <protect_cluster_against_deletion>
    
      cluster_config {
        version = "<Yandex_StoreDoc_version>"
      }
    
      resources_mongod {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    
      host {
        type             = "mongod"
        zone_id          = "<availability_zone>"
        subnet_id        = yandex_vpc_subnet.<subnet_name>.id
        host_parameters {
          hidden               = <hide_host>
          secondary_delay_secs = <replica_lag_in_seconds>
          priority             = <host_priority>
        }
      }
    
      resources_mongoinfra {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    
      host {
        type             = "mongoinfra"
        zone_id          = "<availability_zone>"
        subnet_id        = yandex_vpc_subnet.<subnet_name>.id
        assign_public_ip = <allow_public_access_to_host>
      }
    }
    
    resource "yandex_mdb_mongodb_database" "<DB_name>" {
      cluster_id = yandex_mdb_mongodb_cluster.<cluster_name>.id
      name       = "<DB_name>"
    }
    
    resource "yandex_mdb_mongodb_user" "<username>" {
      cluster_id = yandex_mdb_mongodb_cluster.<cluster_name>.id
      name       = "<username>"
      password   = "<password>"
      permission {
        database_name = "<DB_name>"
        roles         = [ "<list_of_user_roles>" ]
      }
      depends_on = [
        yandex_mdb_mongodb_database.<DB_name>
      ]
    }
    
    resource "yandex_vpc_network" "<network_name>" { name = "<network_name>" }
    
    resource "yandex_vpc_subnet" "<subnet_name>" {
      name           = "<subnet_name>"
      zone           = "<availability_zone>"
      network_id     = yandex_vpc_network.<network_name>.id
      v4_cidr_blocks = ["<range>"]
    }
    
    For a cluster with advanced sharding
    resource "yandex_mdb_mongodb_cluster" "<cluster_name>" {
      name                = "<cluster_name>"
      environment         = "<environment>"
      network_id          = yandex_vpc_network.<network_name>.id
      security_group_ids  = [ "<list_of_security_group_IDs>" ]
      deletion_protection = <protect_cluster_against_deletion>
    
      cluster_config {
        version = "<Yandex_StoreDoc_version>"
      }
    
      resources_mongod {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    
      host {
        type             = "mongod"
        zone_id          = "<availability_zone>"
        subnet_id        = yandex_vpc_subnet.<subnet_name>.id
        host_parameters {
          hidden               = <hide_host>
          secondary_delay_secs = <replica_lag_in_seconds>
          priority             = <host_priority>
        }
      }
    
      resources_mongos {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    
      host {
        type             = "mongos"
        zone_id          = "<availability_zone>"
        subnet_id        = yandex_vpc_subnet.<subnet_name>.id
        assign_public_ip = <allow_public_access_to_host>
      }
    
      resources_mongocfg {
        resource_preset_id = "<host_class>"
        disk_type_id       = "<disk_type>"
        disk_size          = <storage_size_in_GB>
      }
    
      host {
        type             = "mongocfg"
        zone_id          = "<availability_zone>"
        subnet_id        = yandex_vpc_subnet.<subnet_name>.id
      }
    }
    
    resource "yandex_mdb_mongodb_database" "<DB_name>" {
      cluster_id = yandex_mdb_mongodb_cluster.<cluster_name>.id
      name       = "<DB_name>"
    }
    
    resource "yandex_mdb_mongodb_user" "<username>" {
      cluster_id = yandex_mdb_mongodb_cluster.<cluster_name>.id
      name       = "<username>"
      password   = "<password>"
      permission {
        database_name = "<DB_name>"
        roles         = [ "<list_of_user_roles>" ]
      }
      depends_on = [
        yandex_mdb_mongodb_database.<DB_name>
      ]
    }
    
    resource "yandex_vpc_network" "<network_name>" { name = "<network_name>" }
    
    resource "yandex_vpc_subnet" "<subnet_name>" {
      name           = "<subnet_name>"
      zone           = "<availability_zone>"
      network_id     = yandex_vpc_network.<network_name>.id
      v4_cidr_blocks = ["<range>"]
    }
    

    Where:

    • environment: Environment, PRESTABLE or PRODUCTION.

    • host: Host settings:

      • zone_id: Availability zone.
      • subnet_id: ID of the subnet in the selected availability zone.
      • assign_public_ip: Public access to the host, true or false. In a sharded cluster, it is used only for MONGOS and MONGOINFRA hosts.
      • host_parameters: Additional host parameters:
        • hidden: Hide host, true or false. If the host is hidden, only direct connections will be able to read from it (for example, to make backups from it without adding load to the cluster).
        • secondary_delay_secs: Replica's lag behind the master in seconds. It can be useful for data recovery in case of invalid operations.
        • priority: Host priority for assignment as a master.

      Note

      The hidden, secondary_delay_secs, and priority parameters are used for MONGOD hosts only.

    • deletion_protection: Cluster protection against accidental deletion, true or false.

      Even with cluster deletion protection enabled, one can still delete a user or database or connect manually and delete the database contents.

    • version: Yandex StoreDoc version, 6.0 or 7.0.

    A database name may contain Latin letters, numbers, underscores, and hyphens. The name may be up to 63 characters long. Such names as config, local, admin, and mdb_internal are reserved for Yandex StoreDoc. You cannot create DBs with these names.

    To set up the maintenance window (for disabled clusters as well), add the maintenance_window section to the cluster description:

    resource "yandex_mdb_mongodb_cluster" "<cluster_name>" {
      ...
      maintenance_window {
        type = <maintenance_type>
        day  = <day_of_week>
        hour = <hour>
      }
      ...
    }
    

    Where:

    • type: Maintenance type. The possible values include:
      • ANYTIME: Anytime
      • WEEKLY: On a schedule
    • day: Day of week for the WEEKLY type, i.e., MON, TUE, WED, THU, FRI, SAT, or SUN.
    • hour: UTC hour for the WEEKLY type, from 1 to 24.

    To encrypt the disk with a custom KMS key, add the disk_encryption_key_id parameter:

    resource "yandex_mdb_mongodb_cluster" "<cluster_name>" {
      ...
      disk_encryption_key_id = <KMS_key_ID>
      ...
    }
    

    To learn more about disk encryption, see Storage.

    For more information about the resources you can create with Terraform, see this provider guide.

  2. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  3. Create a cluster.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    After this, all required resources will be created in the specified folder, and the host FQDNs will be displayed in the terminal. You can check the new resources and their configuration using the management console.

    Timeouts

    The Terraform provider sets the following timeouts for Yandex StoreDoc cluster operations:

    • Creating a cluster, including by restoring one from a backup: 30 minutes.
    • Editing a cluster: 60 minutes.

    Operations exceeding the set timeout are interrupted.

    How do I change these limits?

    Add the timeouts block to the cluster description, for example:

    resource "yandex_mdb_mongodb_cluster" "<cluster_name>" {
      ...
      timeouts {
        create = "1h30m" # An hour and a half
        update = "2h"    # Two hours
      }
    }
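    The validate, plan, and apply steps above chain naturally in one shell line; `-out` saves the reviewed plan so `apply` executes exactly what `plan` displayed (these are standard Terraform flags, not specific to this provider):

    ```shell
    terraform validate \
      && terraform plan -out=tfplan \
      && terraform apply tfplan   # applies only the reviewed plan
    ```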
    
  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
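    The request bodies below specify diskSize in bytes, whereas the CLI takes gigabytes. A quick conversion sketch, assuming binary gigabytes (1 GB = 2^30 bytes):

    ```shell
    # Convert 20 GB to bytes for the "diskSize" field,
    # assuming binary gigabytes (1 GB = 2^30 bytes).
    bytes=$(( 20 * 1024 * 1024 * 1024 ))
    echo "$bytes"   # prints 21474836480
    ```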
  2. Create a file named body.json and paste the following code into it:

    For a non-sharded cluster
    {
      "folderId": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "networkId": "<network_ID>",
      "securityGroupIds": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "deletionProtection": <protect_cluster_against_deletion>,
      "maintenanceWindow": {
        "weeklyMaintenanceWindow": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "configSpec": {
        "version": "<Yandex_StoreDoc_version>",
        "mongodb": {
          "mongod": {
            "resources": {
              "resourcePresetId": "<host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            }
          }
        },
        "backupWindowStart":  {
          "hours": "<hours>",
          "minutes": "<minutes>",
          "seconds": "<seconds>",
          "nanos": "<nanoseconds>"
        },  
        "backupRetainPeriodDays": "<backup_retention_in_days>",
        "performanceDiagnostics": {
          "profilingEnabled": <enable_profiler>
        }
      },
      "databaseSpecs": [
        {
          "name": "<DB_name>"
        },
        { <similar_configuration_for_DB_2> },
        { ... },
        { <similar_configuration_for_DB_N> }
      ],
      "userSpecs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "databaseName": "<DB_name>",
              "roles": [
                "<role_1>", "<role_2>", ..., "<role_N>"
              ]
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "hostSpecs": [
        {
          "zoneId": "<availability_zone>",
          "subnetId": "<subnet_ID>",
          "assignPublicIp": <allow_public_access_to_host>,
          "type": "MONGOD",
          "hidden": <hide_host>,
          "secondaryDelaySecs": "<replica_lag_in_seconds>",
          "priority": "<host_priority>",
          "tags": "<host_labels>"
        },
        { <similar_settings_for_host_2> },
        { ... },
        { <similar_configuration_for_host_N> }
      ]
    }
    
    For a cluster with standard sharding
    {
      "folderId": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "networkId": "<network_ID>",
      "securityGroupIds": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "deletionProtection": <protect_cluster_against_deletion>,
      "maintenanceWindow": {
        "weeklyMaintenanceWindow": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "configSpec": {
        "version": "<Yandex_StoreDoc_version>",
        "mongodb": {
          "mongod": {
            "resources": {
              "resourcePresetId": "<host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            }
          },
          "mongoinfra": {
            "resources": {
              "resourcePresetId": "<host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            }
          }
        },
        "backupWindowStart":  {
          "hours": "<hours>",
          "minutes": "<minutes>",
          "seconds": "<seconds>",
          "nanos": "<nanoseconds>"
        },
        "backupRetainPeriodDays": "<backup_retention_in_days>",
        "performanceDiagnostics": {
          "profilingEnabled": <enable_profiler>
        }
      },
      "databaseSpecs": [
        {
          "name": "<DB_name>"
        },
        { <similar_configuration_for_DB_2> },
        { ... },
        { <similar_configuration_for_DB_N> }
      ],
      "userSpecs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "databaseName": "<DB_name>",
              "roles": [
                "<role_1>", "<role_2>", ..., "<role_N>"
              ]
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "hostSpecs": [
        {
          "zoneId": "<availability_zone>",
          "subnetId": "<subnet_ID>",
          "type": "MONGOD",
          "shardName": "<shard_name>",
          "hidden": <hide_host>,
          "secondaryDelaySecs": "<replica_lag_in_seconds>",
          "priority": "<host_priority>",
          "tags": "<host_labels>"
        },
        {
          "zoneId": "<availability_zone>",
          "subnetId": "<subnet_ID>",
          "type": "MONGOINFRA",
          "assignPublicIp": <allow_public_access_to_host>,
          "tags": "<host_labels>"
        },
        { <similar_settings_for_host_3> },
        { ... },
        { <similar_configuration_for_host_N> }
      ]
    }
    
    For a cluster with advanced sharding
    {
      "folderId": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "networkId": "<network_ID>",
      "securityGroupIds": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "deletionProtection": <protect_cluster_against_deletion>,
      "maintenanceWindow": {
        "weeklyMaintenanceWindow": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "configSpec": {
        "version": "<Yandex_StoreDoc_version>",
        "mongodb": {
          "mongod": {
            "resources": {
              "resourcePresetId": "<host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            }
          },
          "mongos": {
            "resources": {
              "resourcePresetId": "<host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            }
          },
          "mongocfg": {
            "resources": {
              "resourcePresetId": "<host_class>",
              "diskSize": "<storage_size_in_bytes>",
              "diskTypeId": "<disk_type>"
            }
          }
        },
        "backupWindowStart":  {
          "hours": "<hours>",
          "minutes": "<minutes>",
          "seconds": "<seconds>",
          "nanos": "<nanoseconds>"
        },
        "backupRetainPeriodDays": "<backup_retention_in_days>",
        "performanceDiagnostics": {
          "profilingEnabled": <enable_profiler>
        }
      },
      "databaseSpecs": [
        {
          "name": "<DB_name>"
        },
        { <similar_configuration_for_DB_2> },
        { ... },
        { <similar_configuration_for_DB_N> }
      ],
      "userSpecs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "databaseName": "<DB_name>",
              "roles": [
                "<role_1>", "<role_2>", ..., "<role_N>"
              ]
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "hostSpecs": [
        {
          "zoneId": "<availability_zone>",
          "subnetId": "<subnet_ID>",
          "type": "MONGOD",
          "shardName": "<shard_name>",
          "hidden": <hide_host>,
          "secondaryDelaySecs": "<replica_lag_in_seconds>",
          "priority": "<host_priority>",
          "tags": "<host_labels>"
        },
        {
          "zoneId": "<availability_zone>",
          "subnetId": "<subnet_ID>",
          "type": "MONGOS",
          "assignPublicIp": <allow_public_access_to_host>,
          "tags": "<host_labels>"
        },
        {
          "zoneId": "<availability_zone>",
          "subnetId": "<subnet_ID>",
          "type": "MONGOCFG",
          "tags": "<host_labels>"
        },
        { <similar_settings_for_host_4> },
        { ... },
        { <similar_configuration_for_host_N> }
      ]
    }
    

    Where:

    • folderId: Folder ID. You can get it with the list of folders in the cloud.

    • name: Cluster name.

    • environment: Cluster environment, PRODUCTION or PRESTABLE.

    • networkId: ID of the network the cluster will be deployed in.

    • securityGroupIds: Security group IDs.

    • deletionProtection: Cluster protection against accidental deletion, true or false.

      Even with deletion protection enabled, you can still delete a user or a database, or connect to the cluster manually and delete its contents.

    • maintenanceWindow: Maintenance window settings, including for stopped clusters. In maintenanceWindow, provide one of the following values:

      • anytime: Maintenance can occur at any time.

      • weeklyMaintenanceWindow: Maintenance takes place once a week at the specified time:

        • day: Day of week, in DDD format.
        • hour: Hour of the day, in HH format, from 1 to 24.
    • configSpec: Cluster settings:

      • version: Yandex StoreDoc version, 5.0, 6.0, or 7.0.

      • mongod, mongoinfra, mongos, mongocfg: Host types.

        • resources: Cluster resources:

          • resourcePresetId: Host class.
          • diskSize: Disk size, in bytes.
          • diskTypeId: Disk type.
      • backupWindowStart: Backup window settings.

        Here, specify the backup start time:

        • hours: Between 0 and 23 hours.
        • minutes: Between 0 and 59 minutes.
        • seconds: Between 0 and 59 seconds.
        • nanos: Between 0 and 999999999 nanoseconds.
      • backupRetainPeriodDays: Backup retention in days.

      • performanceDiagnostics: Statistics collection settings:

        • profilingEnabled: Enable profiler, true or false.
    • databaseSpecs: Database settings as an array of elements, one per database. Each element contains the name parameter with the database name.

      A database name may contain Latin letters, numbers, underscores, and hyphens. The name may be up to 63 characters long. Such names as config, local, admin, and mdb_internal are reserved for Yandex StoreDoc. You cannot create DBs with these names.
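The naming rules above are easy to check client-side before sending the request. A minimal sketch (the helper name is ours, not part of the API):

```python
import re

# Reserved database names listed in the rules above
RESERVED = {"config", "local", "admin", "mdb_internal"}

def is_valid_db_name(name: str) -> bool:
    """Check a proposed database name against the documented rules:
    Latin letters, digits, underscores, and hyphens; up to 63 characters;
    not one of the reserved names."""
    if not name or len(name) > 63 or name in RESERVED:
        return False
    return re.fullmatch(r"[A-Za-z0-9_-]+", name) is not None

print(is_valid_db_name("db1"))    # True
print(is_valid_db_name("admin"))  # False: reserved name
print(is_valid_db_name("my db"))  # False: contains a space
```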

    • userSpecs: User settings as an array of elements, one per user. Each element has the following structure:

      • name: Username.

      • password: User password.

      • permissions: User permission settings:

        • databaseName: Name of the database to which the user will have access.
        • roles: Array of user roles. Each role is provided as a separate string in the array. For a list of possible values, see Users and roles.

        For each database, add a separate element with permission settings to the permissions array.
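For example, a userSpecs element that grants one user access to two databases can be assembled like this (a sketch; the helper name is ours, and the role names are placeholders — see Users and roles for valid values):

```python
def build_user_spec(name: str, password: str, db_roles: dict) -> dict:
    """Build one userSpecs element with a separate permissions entry
    per database, as the API requires."""
    return {
        "name": name,
        "password": password,
        "permissions": [
            {"databaseName": db, "roles": roles}
            for db, roles in db_roles.items()
        ],
    }

spec = build_user_spec("user1", "user1user1",
                       {"db1": ["<role_1>"], "db2": ["<role_2>"]})
print(len(spec["permissions"]))  # 2: one entry per database
```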

    • hostSpecs: Cluster host settings as an array of elements, one per host. Each element has the following structure:

      • zoneId: Availability zone.
      • subnetId: Subnet ID.
      • assignPublicIp: Internet access to the host via a public IP address, true or false. In a sharded cluster, it is used only for MONGOS and MONGOINFRA hosts.
      • type: Host type in a sharded cluster, MONGOD, MONGOINFRA, MONGOS, or MONGOCFG.
      • tags: Host labels.
      • shardName: Shard name in a sharded cluster (for MONGOD hosts only).
      • hidden: Hide host, true or false. If the host is hidden, only direct connections will be able to read from it (for example, to make backups from it without adding load to the cluster).
      • secondaryDelaySecs: Replica's lag behind the master in seconds. It can be useful for data recovery in case of invalid operations.
      • priority: Host priority for assignment as a master.

      Note

      The shardName, hidden, secondaryDelaySecs, and priority parameters are used for MONGOD hosts only.
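Since those four parameters are valid only for MONGOD hosts, it can help to strip them from other host entries when generating hostSpecs programmatically. A sketch (the helper is ours, not part of the API):

```python
# Parameters the note above restricts to MONGOD hosts
MONGOD_ONLY = {"shardName", "hidden", "secondaryDelaySecs", "priority"}

def clean_host_spec(spec: dict) -> dict:
    """Remove MONGOD-only parameters from MONGOINFRA, MONGOS,
    and MONGOCFG host entries; leave MONGOD entries untouched."""
    if spec.get("type") == "MONGOD":
        return dict(spec)
    return {k: v for k, v in spec.items() if k not in MONGOD_ONLY}

mongos_host = {"zoneId": "ru-central1-a", "type": "MONGOS", "priority": "2"}
print(clean_host_spec(mongos_host))  # the stray "priority" key is dropped
```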

  3. Call the Cluster.Create method, e.g., via the following cURL request:

    curl \
        --request POST \
        --header "Authorization: Bearer $IAM_TOKEN" \
        --header "Content-Type: application/json" \
        --url 'https://mdb.api.cloud.yandex.net/managed-mongodb/v1/clusters' \
        --data "@body.json"
    
  4. View the server response to make sure your request was successful.
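On success, Cluster.Create returns an Operation object whose id you can use to track cluster creation, while a failed request returns an error object instead. A minimal sketch of telling the two apart (the sample strings are trimmed and their field values are illustrative):

```python
import json

def operation_id(raw: str) -> str:
    """Return the operation ID from a successful Cluster.Create response;
    raise if the response carries an error instead."""
    resp = json.loads(raw)
    if "code" in resp or "error" in resp:
        raise RuntimeError(f"request failed: {resp}")
    return resp["id"]

# Trimmed sample responses (values are illustrative)
ok = '{"id": "dqo1example", "description": "Create cluster", "done": false}'
bad = '{"code": 7, "message": "Permission denied"}'

print(operation_id(ok))  # dqo1example
```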

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume the repository contents are stored in the ~/cloudapi/ directory.

  3. Create a file named body.json and paste the following code into it:

    For a non-sharded cluster
    {
      "folder_id": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "network_id": "<network_ID>",
      "security_group_ids": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "deletion_protection": <protect_cluster_against_deletion>,
      "maintenance_window": {
        "weekly_maintenance_window": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "config_spec": {
        "version": "<Yandex_StoreDoc_version>",
        "mongodb": {
          "mongod": {
            "resources": {
              "resource_preset_id": "<host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            }
          }
        },
        "backup_window_start": {
          "hours": "<hours>",
          "minutes": "<minutes>",
          "seconds": "<seconds>",
          "nanos": "<nanoseconds>"
        },
        "backup_retain_period_days": "<backup_retention_in_days>",
        "performance_diagnostics": {
          "profiling_enabled": <enable_profiler>
        }
      },
      "database_specs": [
        {
          "name": "<DB_name>"
        },
        { <similar_configuration_for_DB_2> },
        { ... },
        { <similar_configuration_for_DB_N> }
      ],
      "user_specs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "database_name": "<DB_name>",
              "roles": [
                "<role_1>", "<role_2>", ..., "<role_N>"
              ]
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "host_specs": [
        {
          "zone_id": "<availability_zone>",
          "subnet_id": "<subnet_ID>",
          "assign_public_ip": <allow_public_access_to_host>,
          "type": "MONGOD",
          "hidden": <hide_host>,
          "secondary_delay_secs": "<replica_lag_in_seconds>",
          "priority": "<host_priority>",
          "tags": "<host_labels>"
        },
        { <similar_settings_for_host_2> },
        { ... },
        { <similar_configuration_for_host_N> }
      ]
    }
    
    For a cluster with standard sharding
    {
      "folder_id": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "network_id": "<network_ID>",
      "security_group_ids": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "deletion_protection": <protect_cluster_against_deletion>,
      "maintenance_window": {
        "weekly_maintenance_window": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "config_spec": {
        "version": "<Yandex_StoreDoc_version>",
        "mongodb": {
          "mongod": {
            "resources": {
              "resource_preset_id": "<host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            }
          },
          "mongoinfra": {
            "resources": {
              "resource_preset_id": "<host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            }
          }
        },
        "backup_window_start": {
          "hours": "<hours>",
          "minutes": "<minutes>",
          "seconds": "<seconds>",
          "nanos": "<nanoseconds>"
        },
        "backup_retain_period_days": "<backup_retention_in_days>",
        "performance_diagnostics": {
          "profiling_enabled": <enable_profiler>
        }
      },
      "database_specs": [
        {
          "name": "<DB_name>"
        },
        { <similar_configuration_for_DB_2> },
        { ... },
        { <similar_configuration_for_DB_N> }
      ],
      "user_specs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "database_name": "<DB_name>",
              "roles": [
                "<role_1>", "<role_2>", ..., "<role_N>"
              ]
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "host_specs": [
        {
          "zone_id": "<availability_zone>",
          "subnet_id": "<subnet_ID>",
          "type": "MONGOD",
          "shard_name": "<shard_name>",
          "hidden": <hide_host>,
          "secondary_delay_secs": "<replica_lag_in_seconds>",
          "priority": "<host_priority>",
          "tags": "<host_labels>"
        },
        {
          "zone_id": "<availability_zone>",
          "subnet_id": "<subnet_ID>",
          "type": "MONGOINFRA",
          "assign_public_ip": <allow_public_access_to_host>,
          "tags": "<host_labels>"
        },
        { <similar_settings_for_host_3> },
        { ... },
        { <similar_configuration_for_host_N> }
      ]
    }
    
    For a cluster with advanced sharding
    {
      "folder_id": "<folder_ID>",
      "name": "<cluster_name>",
      "environment": "<environment>",
      "network_id": "<network_ID>",
      "security_group_ids": [
        "<security_group_1_ID>",
        "<security_group_2_ID>",
        ...
        "<security_group_N_ID>"
      ],
      "deletion_protection": <protect_cluster_against_deletion>,
      "maintenance_window": {
        "weekly_maintenance_window": {
          "day": "<day_of_week>",
          "hour": "<hour>"
        }
      },
      "config_spec": {
        "version": "<Yandex_StoreDoc_version>",
        "mongodb": {
          "mongod": {
            "resources": {
              "resource_preset_id": "<host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            }
          },
          "mongos": {
            "resources": {
              "resource_preset_id": "<host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            }
          },
          "mongocfg": {
            "resources": {
              "resource_preset_id": "<host_class>",
              "disk_size": "<storage_size_in_bytes>",
              "disk_type_id": "<disk_type>"
            }
          }
        },
        "backup_window_start": {
          "hours": "<hours>",
          "minutes": "<minutes>",
          "seconds": "<seconds>",
          "nanos": "<nanoseconds>"
        },
        "backup_retain_period_days": "<backup_retention_in_days>",
        "performance_diagnostics": {
          "profiling_enabled": <enable_profiler>
        }
      },
      "database_specs": [
        {
          "name": "<DB_name>"
        },
        { <similar_configuration_for_DB_2> },
        { ... },
        { <similar_configuration_for_DB_N> }
      ],
      "user_specs": [
        {
          "name": "<username>",
          "password": "<user_password>",
          "permissions": [
            {
              "database_name": "<DB_name>",
              "roles": [
                "<role_1>", "<role_2>", ..., "<role_N>"
              ]
            }
          ]
        },
        { <similar_settings_for_user_2> },
        { ... },
        { <similar_settings_for_user_N> }
      ],
      "host_specs": [
        {
          "zone_id": "<availability_zone>",
          "subnet_id": "<subnet_ID>",
          "type": "MONGOD",
          "shard_name": "<shard_name>",
          "hidden": <hide_host>,
          "secondary_delay_secs": "<replica_lag_in_seconds>",
          "priority": "<host_priority>",
          "tags": "<host_labels>"
        },
        {
          "zone_id": "<availability_zone>",
          "subnet_id": "<subnet_ID>",
          "type": "MONGOS",
          "assign_public_ip": <allow_public_access_to_host>,
          "tags": "<host_labels>"
        },
        {
          "zone_id": "<availability_zone>",
          "subnet_id": "<subnet_ID>",
          "type": "MONGOCFG",
          "tags": "<host_labels>"
        },
        { <similar_settings_for_host_4> },
        { ... },
        { <similar_configuration_for_host_N> }
      ]
    }
    

    Where:

    • folder_id: Folder ID. You can get it with the list of folders in the cloud.

    • name: Cluster name.

    • environment: Cluster environment, PRODUCTION or PRESTABLE.

    • network_id: ID of the network where the cluster will be deployed.

    • security_group_ids: Security group IDs.

    • deletion_protection: Cluster protection against accidental deletion, true or false.

      Even with deletion protection enabled, you can still delete a user or a database, or connect to the cluster manually and delete its contents.

    • maintenance_window: Maintenance window settings, including for stopped clusters. In maintenance_window, provide one of the following values:

      • anytime: Maintenance can take place at any time.

      • weekly_maintenance_window: Maintenance takes place once a week at the specified time:

        • day: Day of week, in DDD format.
        • hour: Hour of the day, in HH format, from 1 to 24.
    • config_spec: Cluster settings:

      • version: Yandex StoreDoc version, 5.0, 6.0, or 7.0.

      • mongod, mongoinfra, mongos, mongocfg: Host types.

        • resources: Cluster resources:

          • resource_preset_id: Host class.
          • disk_size: Disk size, in bytes.
          • disk_type_id: Disk type.
      • backup_window_start: Backup window settings.

        Here, specify the backup start time:

        • hours: Between 0 and 23 hours.
        • minutes: Between 0 and 59 minutes.
        • seconds: Between 0 and 59 seconds.
        • nanos: Between 0 and 999999999 nanoseconds.
      • backup_retain_period_days: Backup retention in days.

      • performance_diagnostics: Statistics collection settings:

        • profiling_enabled: Enable profiler, true or false.
    • database_specs: Database settings as an array of elements, one per database. Each element contains the name parameter with the database name.

      A database name may contain Latin letters, numbers, underscores, and hyphens. The name may be up to 63 characters long. Such names as config, local, admin, and mdb_internal are reserved for Yandex StoreDoc. You cannot create DBs with these names.

    • user_specs: User settings as an array of elements, one per user. Each element has the following structure:

      • name: Username.

      • password: User password.

      • permissions: User permission settings:

        • database_name: Name of the database to which the user will have access.
        • roles: Array of user roles. Each role is provided as a separate string in the array. For a list of possible values, see Users and roles.

        For each database, add a separate element with permission settings to the permissions array.

    • host_specs: Cluster host settings as an array of elements, one per host. Each element has the following structure:

      • zone_id: Availability zone.
      • subnet_id: Subnet ID.
      • assign_public_ip: Internet access to the host via a public IP address, true or false. In a sharded cluster, it is used only for MONGOS and MONGOINFRA hosts.
      • type: Host type in a sharded cluster, MONGOD, MONGOINFRA, MONGOS, or MONGOCFG.
      • tags: Host labels.
      • shard_name: Shard name in a sharded cluster (for MONGOD hosts only).
      • hidden: Hide host, true or false. If the host is hidden, only direct connections will be able to read from it (for example, to make backups from it without adding load to the cluster).
      • secondary_delay_secs: Replica's lag behind the master in seconds. It can be useful for data recovery in case of invalid operations.
      • priority: Host priority for assignment as a master.

      Note

      The shard_name, hidden, secondary_delay_secs, and priority parameters are used for MONGOD hosts only.

  4. Call the ClusterService.Create method, e.g., via the following gRPCurl request:

    grpcurl \
        -format json \
        -import-path ~/cloudapi/ \
        -import-path ~/cloudapi/third_party/googleapis/ \
        -proto ~/cloudapi/yandex/cloud/mdb/mongodb/v1/cluster_service.proto \
        -rpc-header "Authorization: Bearer $IAM_TOKEN" \
        -d @ \
        mdb.api.cloud.yandex.net:443 \
        yandex.cloud.mdb.mongodb.v1.ClusterService.Create \
        < body.json
    
  5. View the server response to make sure your request was successful.

Warning

If you specified security group IDs when creating a cluster, you may need to additionally configure security groups to connect to the cluster.

Creating a cluster copy

You can create a Yandex StoreDoc cluster with the same settings as an existing one. To do this, import the original cluster configuration into Terraform. You can then either create an identical copy or use the imported configuration as a baseline and modify it as needed. Importing is especially useful when you need to replicate a cluster with many custom settings.

To create a Yandex StoreDoc cluster copy:

Terraform
  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. There is no need to create a provider configuration file manually, as you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. In the same working directory, place a .tf file with the following contents:

    resource "yandex_mdb_mongodb_cluster" "old" { }
    
  6. Save the ID of the original Yandex StoreDoc cluster to an environment variable:

    export STOREDOC_CLUSTER_ID=<cluster_ID>
    

    You can get the ID with the list of clusters in the folder.

  7. Import the original Yandex StoreDoc cluster settings to the Terraform configuration:

    terraform import yandex_mdb_mongodb_cluster.old ${STOREDOC_CLUSTER_ID}
    
  8. Get the imported configuration:

    terraform show
    
  9. Copy it from the terminal and paste it into the .tf file.

  10. Place the file in the new imported-cluster directory.

  11. Edit the copied configuration so that you can create a new cluster from it:

    • Specify the new cluster name in the resource string and the name parameter.
    • Delete created_at, health, id, sharded, and status.
    • In the host sections, delete health and name.
    • If you have type = "ANYTIME" in the maintenance_window section, delete the hour argument.
    • Delete all user sections (if any). You can add database users with a separate yandex_mdb_mongodb_user resource.
    • Optionally, make further changes if you need a customized configuration.
  12. Get the authentication credentials in the imported-cluster directory.

  13. In the same directory, configure and initialize the provider. There is no need to create a provider configuration file manually, as you can download it.

  14. Place the configuration file in the imported-cluster directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  15. Make sure the Terraform configuration files are correct:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  16. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Timeouts

The Terraform provider sets the following timeouts for Yandex StoreDoc cluster operations:

  • Creating a cluster, including by restoring one from a backup: 30 minutes.
  • Editing a cluster: 60 minutes.

Operations exceeding the set timeout are interrupted.

How do I change these limits?

Add the timeouts block to the cluster description, for example:

resource "yandex_mdb_mongodb_cluster" "<cluster_name>" {
  ...
  timeouts {
    create = "1h30m" # An hour and a half
    update = "2h"    # Two hours
  }
}

Examples

Creating a single-host cluster

CLI
Terraform

Create a Yandex StoreDoc cluster with the following test specifications:

  • Name: mymg.
  • Environment: production.
  • Network: default.
  • Security group ID: enp6saqnq4ie244g67sb.
  • One s2.micro host in the b0rcctk2rvtr******** subnet, in the ru-central1-a availability zone.
  • Network SSD storage (network-ssd): 20 GB.
  • One user: user1, password: user1user1.
  • One database: db1.
  • Deletion protection: Enabled.

Run this command:

yc managed-mongodb cluster create \
  --name mymg \
  --environment production \
  --network-name default \
  --security-group-ids enp6saqnq4ie244g67sb \
  --mongod-resource-preset s2.micro \
  --host zone-id=ru-central1-a,subnet-id=b0rcctk2rvtr******** \
  --mongod-disk-size 20 \
  --mongod-disk-type network-ssd \
  --user name=user1,password=user1user1 \
  --database name=db1 \
  --deletion-protection

Create a Yandex StoreDoc cluster and its network with the following test specifications:

  • Name: mymg.

  • Version: 7.0.

  • Environment: PRODUCTION.

  • Cloud ID: b1gq90dgh25bebiu75o.

  • Folder ID: b1gia87mbaomkfvsleds.

  • Network: mynet.

  • Host class: s2.micro.

  • Number of host blocks: 1.

  • Subnet: mysubnet. Network settings:

    • Availability zone: ru-central1-a.
    • Range: 10.5.0.0/24.
  • Security group: mymg-sg. The group rules allow TCP connections to the cluster from the internet via port 27018.

  • Network SSD storage: network-ssd.

  • Storage size: 20 GB.

  • User: user1.

  • Password: user1user1.

  • Database: db1.

  • Deletion protection: Enabled.

Configuration file for a single-host cluster:

resource "yandex_mdb_mongodb_cluster" "mymg" {
  name                = "mymg"
  environment         = "PRODUCTION"
  network_id          = yandex_vpc_network.mynet.id
  security_group_ids  = [ yandex_vpc_security_group.mymg-sg.id ]
  deletion_protection = true

  cluster_config {
    version = "7.0"
  }

  resources_mongod {
    resource_preset_id = "s2.micro"
    disk_type_id       = "network-ssd"
    disk_size          = 20
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
  }
}

resource "yandex_mdb_mongodb_database" "db1" {
  cluster_id = yandex_mdb_mongodb_cluster.mymg.id
  name       = "db1"
}

resource "yandex_mdb_mongodb_user" "user1" {
  cluster_id = yandex_mdb_mongodb_cluster.mymg.id
  name       = "user1"
  password   = "user1user1"
  permission {
    database_name = "db1"
  }
  depends_on = [
    yandex_mdb_mongodb_database.db1
  ]
}

resource "yandex_vpc_network" "mynet" {
  name = "mynet"
}

resource "yandex_vpc_security_group" "mymg-sg" {
  name       = "mymg-sg"
  network_id = yandex_vpc_network.mynet.id

  ingress {
    description    = "Yandex StoreDoc"
    port           = 27018
    protocol       = "TCP"
    v4_cidr_blocks = [ "0.0.0.0/0" ]
  }
}

resource "yandex_vpc_subnet" "mysubnet" {
  name           = "mysubnet"
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.mynet.id
  v4_cidr_blocks = ["10.5.0.0/24"]
}

Creating sharded clusters

You can create Yandex StoreDoc clusters with standard or advanced sharding. For more information about sharding types, see Sharding management.

Standard sharding

Create a Yandex StoreDoc cluster and a network for it with multiple hosts:

  • One MONGOD host
  • Three MONGOINFRA hosts

Cluster test specifications:

  • Name: mymg.
  • Environment: PRODUCTION.
  • Deletion protection: Enabled.
  • Version: 7.0.
  • Database: db1.
  • User: user1.
  • Password: user1user1.
  • MONGOD host class: s2.micro.
  • MONGOINFRA host class: c3-c2-m4.
  • Network SSD storage: network-ssd.
  • Storage size: 10 GB.

Network specifications:

  • Availability zone: ru-central1-a.

  • Network: mynet.

  • Security group: mymg-sg with ID enp6saqnq4ie244g67sb. In Terraform, the group is created with a rule allowing TCP connections to the cluster from the internet on port 27018.

  • Subnet: mysubnet.

  • Range: 10.5.0.0/24 (only for Terraform).

CLI
Terraform

To create a Yandex StoreDoc cluster with standard sharding, run this command:

yc managed-mongodb cluster create \
   --name mymg \
   --environment production \
   --deletion-protection \
   --mongodb-version 7.0 \
   --database name=db1 \
   --user name=user1,password=user1user1 \
   --mongod-resource-preset s2.micro \
   --mongod-disk-type network-ssd \
   --mongod-disk-size 10 \
   --host type=mongod,`
     `zone-id=ru-central1-a,`
     `subnet-name=mysubnet \
   --mongoinfra-resource-preset c3-c2-m4 \
   --mongoinfra-disk-type network-ssd \
   --mongoinfra-disk-size 10 \
   --host type=mongoinfra,`
     `zone-id=ru-central1-a,`
     `subnet-name=mysubnet \
   --host type=mongoinfra,`
     `zone-id=ru-central1-a,`
     `subnet-name=mysubnet \
   --host type=mongoinfra,`
     `zone-id=ru-central1-a,`
     `subnet-name=mysubnet \
   --network-name mynet \
   --security-group-ids enp6saqnq4ie244g67sb
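
Once the cluster is available, clients connect through the MONGOINFRA hosts, which combine mongos and mongocfg. As a minimal sketch, the connection URI for the db1 database can be assembled like this; the host FQDN below is a placeholder (substitute a real MONGOINFRA host name from your cluster), and the port matches the 27018 rule in the mymg-sg security group above:

```shell
# Placeholder FQDN: replace with the actual MONGOINFRA host name from your cluster.
HOST="rc1a-example.mdb.yandexcloud.net"

# Database and credentials from the test specifications above.
DB="db1"
DB_USER="user1"
DB_PASS="user1user1"

# Port 27018 is the one opened in the mymg-sg security group rule.
URI="mongodb://${DB_USER}:${DB_PASS}@${HOST}:27018/${DB}"
echo "${URI}"
```

In production, avoid embedding the password in the URI directly; pass it via an environment variable or a secrets manager instead.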

The configuration file for a cluster with standard sharding is as follows:

resource "yandex_mdb_mongodb_cluster" "mymg" {
  name                = "mymg"
  environment         = "PRODUCTION"
  network_id          = yandex_vpc_network.mynet.id
  security_group_ids  = [ yandex_vpc_security_group.mymg-sg.id ]
  deletion_protection = true

  cluster_config {
    version = "7.0"
  }

  resources_mongod {
    resource_preset_id = "s2.micro"
    disk_type_id       = "network-ssd"
    disk_size          = 10
  }

  resources_mongoinfra {
    resource_preset_id = "c3-c2-m4"
    disk_type_id       = "network-ssd"
    disk_size          = 10
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongod"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongoinfra"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongoinfra"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongoinfra"
  }
}

resource "yandex_mdb_mongodb_database" "db1" {
  cluster_id = yandex_mdb_mongodb_cluster.mymg.id
  name       = "db1"
}

resource "yandex_mdb_mongodb_user" "user1" {
  cluster_id = yandex_mdb_mongodb_cluster.mymg.id
  name       = "user1"
  password   = "user1user1"
  permission {
    database_name = "db1"
  }
  depends_on = [
    yandex_mdb_mongodb_database.db1
  ]
}

resource "yandex_vpc_network" "mynet" {
  name = "mynet"
}

resource "yandex_vpc_security_group" "mymg-sg" {
  name       = "mymg-sg"
  network_id = yandex_vpc_network.mynet.id

  ingress {
    description    = "Yandex StoreDoc"
    port           = 27018
    protocol       = "TCP"
    v4_cidr_blocks = [ "0.0.0.0/0" ]
  }
}

resource "yandex_vpc_subnet" "mysubnet" {
  name           = "mysubnet"
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.mynet.id
  v4_cidr_blocks = ["10.5.0.0/24"]
}
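
To retrieve the generated host FQDNs after `terraform apply`, you can add an output such as the sketch below. This assumes each `host` block of `yandex_mdb_mongodb_cluster` exposes a computed `name` (FQDN) attribute; check the schema of your yandex provider version before relying on it.

```hcl
# Assumption: each host block exposes a computed "name" (FQDN) attribute.
output "mongoinfra_hosts" {
  description = "FQDNs of the MONGOINFRA hosts that clients connect to"
  value = [
    for h in yandex_mdb_mongodb_cluster.mymg.host : h.name
    if h.type == "mongoinfra"
  ]
}
```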

Advanced sharding

Create a multi-host Yandex StoreDoc cluster and a network for it:

  • One MONGOD host
  • Two MONGOS hosts
  • Three MONGOCFG hosts

Cluster test specifications:

  • Name: mymg.
  • Environment: PRODUCTION.
  • Deletion protection: Enabled.
  • Version: 7.0.
  • Database: db1.
  • User: user1.
  • Password: user1user1.
  • Host class: s2.micro.
  • Storage type: network-ssd (network SSD).
  • Storage size: 10 GB.

Network specifications:

  • Availability zone: ru-central1-a.

  • Network: mynet.

  • Security group: mymg-sg with ID enp6saqnq4ie244g67sb. In Terraform, the group is created with a rule allowing TCP connections to the cluster from the internet on port 27018.

  • Subnet: mysubnet.

  • Range: 10.5.0.0/24 (only for Terraform).

CLI
Terraform

To create a Yandex StoreDoc cluster with advanced sharding, run this command:

yc managed-mongodb cluster create \
  --name mymg \
  --environment production \
  --deletion-protection \
  --mongodb-version 7.0 \
  --database name=db1 \
  --user name=user1,password=user1user1 \
  --mongod-resource-preset s2.micro \
  --mongod-disk-type network-ssd \
  --mongod-disk-size 10 \
  --host type=mongod,`
    `zone-id=ru-central1-a,`
    `subnet-name=mysubnet \
  --mongos-resource-preset s2.micro \
  --mongos-disk-type network-ssd \
  --mongos-disk-size 10 \
  --host type=mongos,`
    `zone-id=ru-central1-a,`
    `subnet-name=mysubnet \
  --host type=mongos,`
    `zone-id=ru-central1-a,`
    `subnet-name=mysubnet \
  --mongocfg-resource-preset s2.micro \
  --mongocfg-disk-type network-ssd \
  --mongocfg-disk-size 10 \
  --host type=mongocfg,`
    `zone-id=ru-central1-a,`
    `subnet-name=mysubnet \
  --host type=mongocfg,`
    `zone-id=ru-central1-a,`
    `subnet-name=mysubnet \
  --host type=mongocfg,`
    `zone-id=ru-central1-a,`
    `subnet-name=mysubnet \
  --network-name mynet \
  --security-group-ids enp6saqnq4ie244g67sb

The configuration file for a cluster with advanced sharding is as follows:

resource "yandex_mdb_mongodb_cluster" "mymg" {
  name                = "mymg"
  environment         = "PRODUCTION"
  network_id          = yandex_vpc_network.mynet.id
  security_group_ids  = [ yandex_vpc_security_group.mymg-sg.id ]
  deletion_protection = true

  cluster_config {
    version = "7.0"
  }

  resources_mongod {
    resource_preset_id = "s2.micro"
    disk_type_id       = "network-ssd"
    disk_size          = 10
  }

  resources_mongos {
    resource_preset_id = "s2.micro"
    disk_type_id       = "network-ssd"
    disk_size          = 10
  }

  resources_mongocfg {
    resource_preset_id = "s2.micro"
    disk_type_id       = "network-ssd"
    disk_size          = 10
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongod"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongos"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongos"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongocfg"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongocfg"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = yandex_vpc_subnet.mysubnet.id
    type      = "mongocfg"
  }
}

resource "yandex_mdb_mongodb_database" "db1" {
  cluster_id = yandex_mdb_mongodb_cluster.mymg.id
  name       = "db1"
}

resource "yandex_mdb_mongodb_user" "user1" {
  cluster_id = yandex_mdb_mongodb_cluster.mymg.id
  name       = "user1"
  password   = "user1user1"
  permission {
    database_name = "db1"
  }
  depends_on = [
    yandex_mdb_mongodb_database.db1
  ]
}

resource "yandex_vpc_network" "mynet" {
  name = "mynet"
}

resource "yandex_vpc_security_group" "mymg-sg" {
  name       = "mymg-sg"
  network_id = yandex_vpc_network.mynet.id

  ingress {
    description    = "Yandex StoreDoc"
    port           = 27018
    protocol       = "TCP"
    v4_cidr_blocks = [ "0.0.0.0/0" ]
  }
}

resource "yandex_vpc_subnet" "mysubnet" {
  name           = "mysubnet"
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.mynet.id
  v4_cidr_blocks = ["10.5.0.0/24"]
}
