Storage in Managed Service for MongoDB
Managed Service for MongoDB allows you to use network and local storage drives for database clusters. Network drives are based on network blocks, which are virtual disks in the Yandex Cloud infrastructure. Local disks are physically located on the database host servers.
When creating a cluster, you can select the following disk types for data storage:
- Network HDD storage (`network-hdd`): The most cost-effective option for clusters that do not require high read/write performance.
- Network SSD storage (`network-ssd`): A balanced solution. Such disks are slower than local SSD storage, but, unlike local disks, they ensure data integrity if Yandex Cloud hardware fails.
- Non-replicated SSD storage (`network-ssd-nonreplicated`): Network SSD storage with enhanced performance but without redundancy. The storage size can only be increased in 93 GB increments.
- Local SSD storage (`local-ssd`): The fastest performing disks. The size of such storage can be increased:
  - For Intel Broadwell and Intel Cascade Lake: Only in 100 GB increments.
  - For Intel Ice Lake: Only in 368 GB increments.
For a list of host classes and their respective platforms, see Host classes.
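The size increments above can be expressed as a small validation helper (a hypothetical sketch; the function and table are illustrative and not part of any Yandex Cloud SDK):

```python
# Storage size increments in GB, per disk type and platform, as listed above.
# Illustrative only; network-hdd and network-ssd have no increment rule here.
INCREMENT_GB = {
    ("network-ssd-nonreplicated", None): 93,
    ("local-ssd", "Intel Broadwell"): 100,
    ("local-ssd", "Intel Cascade Lake"): 100,
    ("local-ssd", "Intel Ice Lake"): 368,
}

def is_valid_size(disk_type, platform, size_gb):
    """Check whether size_gb is a whole number of increments for this disk type."""
    step = INCREMENT_GB.get((disk_type, platform))
    if step is None:
        return size_gb > 0  # no increment restriction for this disk type
    return size_gb > 0 and size_gb % step == 0

print(is_valid_size("network-ssd-nonreplicated", None, 186))  # True: two 93 GB steps
print(is_valid_size("local-ssd", "Intel Ice Lake", 400))      # False: not a multiple of 368
```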
Note
For clusters with hosts residing in the `ru-central1-d` availability zone, local SSD storage is not available if using the Intel Cascade Lake platform.
Selecting the disk type during cluster creation
The number of hosts you can create together with a MongoDB cluster depends on the selected disk type:
- When using local SSD (`local-ssd`) or non-replicated SSD (`network-ssd-nonreplicated`) storage, you can create a cluster with three or more hosts. Such a cluster will be fault-tolerant.

  Local SSD storage affects the cluster cost: you pay for it even when the cluster is stopped. For more information, see the pricing policy.

- With network HDD (`network-hdd`) or network SSD (`network-ssd`) storage, you can add any number of hosts within the current quota.
For more information about limits on the number of hosts per cluster or shard, see Quotas and limits.
Disk space management
If at least one host in a Managed Service for MongoDB cluster runs out of its allocated disk space, the MongoDB instance on that host will crash and the host will be disabled. If this host was the `PRIMARY` replica, the role will be assigned to one of the `SECONDARY` replicas. As the `PRIMARY` role migrates from one host to another, you may run out of disk space on all hosts in the cluster, which will result in a complete cluster failure.
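The failure cascade can be illustrated with a toy model (purely hypothetical code: it mimics only the role handover described above, not real MongoDB replication):

```python
def simulate_writes(free_gb, incoming_gb):
    """Toy model of the cascade: writes land on the PRIMARY (index 0);
    a host that runs out of space is disabled, and the PRIMARY role
    moves to the next host that still has free space."""
    hosts = list(free_gb)        # free disk space per host, in GB
    disabled = set()
    primary = 0
    events = []
    while incoming_gb > 0:
        written = min(incoming_gb, hosts[primary])
        hosts[primary] -= written
        incoming_gb -= written
        if incoming_gb > 0:      # the PRIMARY filled up while writes remain
            disabled.add(primary)
            events.append(f"host {primary} disabled")
            alive = [i for i in range(len(hosts))
                     if i not in disabled and hosts[i] > 0]
            if not alive:
                events.append("complete cluster failure")
                break
            primary = alive[0]   # role handover to a host with free space
    return events

print(simulate_writes([10, 10, 10], 25))  # two handovers, cluster survives
print(simulate_writes([10, 10, 10], 40))  # every host fills: complete failure
```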
To avoid this, Managed Service for MongoDB monitors the disk space in use and automatically switches a host to read-only mode (using the `db.fsyncLock` method) when:
- Less than 500 MB of free disk space is left (if the host storage size is less than 600 GB).
- Less than 5 GB of free disk space is left (if the host storage size is 600 GB or more).
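The two thresholds can be written as a small helper (a sketch; the function names are illustrative):

```python
def read_only_threshold_bytes(storage_gb):
    """Free-space level below which the host is switched to read-only mode,
    per the thresholds above: 500 MB for storage under 600 GB, 5 GB otherwise."""
    MB, GB = 10**6, 10**9
    return 500 * MB if storage_gb < 600 else 5 * GB

def should_switch_to_read_only(storage_gb, free_bytes):
    return free_bytes < read_only_threshold_bytes(storage_gb)

print(should_switch_to_read_only(100, 400 * 10**6))   # True: under 500 MB free
print(should_switch_to_read_only(1000, 10 * 10**9))   # False: 10 GB free on 1 TB
```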
After a transition to read-only mode:
- The host stops accepting write queries; only read queries are allowed.
- If the host was a primary replica before switching to read-only mode, this role will be automatically assigned to another cluster host, because the primary replica role requires permission to write to the disk.
If the amount of data in the cluster keeps growing, all hosts will switch to read-only mode one by one and the cluster will stop accepting data to write.
Maintaining a cluster in operable condition
To keep your cluster up and running after a host has switched to read-only mode:
- Increase the disk space on the host. Once there is enough free space on the host, Yandex Cloud will disable read-only mode automatically.
- Add more shards to the cluster. Read-only mode will not be disabled on this host, but the cluster will be able to keep working normally as long as there is free disk space on the other shards.
- Ask support to temporarily disable read-only mode on this host so that you can manually delete some of the data.

  Alert

  If free disk space drops to zero, MongoDB will crash and the cluster will stop operating.
- Force data synchronization between hosts. This can help when a large amount of data was deleted from the cluster, but the disk space was not released (marked as available for reuse).