Yandex MPP Analytics for PostgreSQL clusters and hosts
- Why is the cluster slow even though the computing resources are not used fully?
- Why do I get an error about minimum memory for Greenplum® processes?
What is a database host and database cluster?
A database host is an isolated database environment in the cloud infrastructure with dedicated computing resources and reserved data storage.
A database cluster is one or more database hosts between which you can configure replication.
How many database hosts can there be in one cluster?
A Yandex MPP Analytics for PostgreSQL cluster includes a minimum of 4 hosts:
- 2 master hosts.
- 2 segment hosts.
You can increase the number of segment hosts up to 32.
For more information, see Quotas and limits.
How many clusters can you create in a single cloud?
For more information on MDB technical and organizational limitations, see Quotas and limits.
How are DB clusters maintained?
In Yandex MPP Analytics for PostgreSQL, maintenance implies:
- Automatic installation of DBMS updates and fixes for your database hosts.
- Changes to the host class and storage size.
- Other Yandex MPP Analytics for PostgreSQL maintenance activities.
For more information, see Maintenance.
How do you calculate usage cost for a database host?
In Yandex MPP Analytics for PostgreSQL, the usage cost is calculated based on the following parameters:
- Selected host class.
- Size of the storage reserved for the database host.
- Size of the database cluster backups. Backup space equal to the reserved storage size is free of charge; backup storage that exceeds this size is charged at special rates.
- Number of hours of database host operation. Partial hours are rounded to an integer value. You can find the cost per hour for each host class in the Pricing policy section.
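The cost calculation above can be sketched in code. This is a minimal illustration, not a billing implementation: the rates are hypothetical placeholders, and rounding partial hours up is an assumption (the text only says they are rounded to an integer value).

```python
import math

def usage_cost(hours: float, hourly_rate: float,
               storage_gb: int, backup_gb: int,
               backup_rate_per_gb: float) -> float:
    """Sketch of the usage-cost formula from the docs (hypothetical rates)."""
    # Partial hours are rounded to an integer; rounding up is an assumption.
    billed_hours = math.ceil(hours)
    # Backups up to the reserved storage size are free; only the excess is billed.
    billed_backup_gb = max(0, backup_gb - storage_gb)
    return billed_hours * hourly_rate + billed_backup_gb * backup_rate_per_gb

# Example: 10.5 hours at a hypothetical 2.0/hour, 100 GB storage,
# 150 GB of backups at a hypothetical 0.1 per excess GB.
cost = usage_cost(10.5, 2.0, 100, 150, 0.1)
```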
Why is the cluster slow even though the computing resources are not used fully?
Your storage may have insufficient maximum IOPS and bandwidth to process the current number of requests. In this case, throttling occurs, which degrades the performance of the entire cluster.
The maximum IOPS and bandwidth values increase by a fixed value when the storage size increases by a certain step. The step and increment values depend on the disk type:
| Disk type | Step, GB | Max IOPS increase (read/write) | Max bandwidth increase (read/write), MB/s |
|---|---|---|---|
| network-hdd | 256 | 300/300 | 30/30 |
| network-ssd | 32 | 1,000/1,000 | 15/15 |
| network-ssd-nonreplicated, network-ssd-io-m3 | 93 | 28,000/5,600 | 110/82 |
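The scaling rule in the table can be sketched as follows. This is an illustrative model only: it assumes limits grow once per full storage step, which the table implies but does not state exactly (boundary behavior and any base values are assumptions).

```python
# Step sizes and per-step increments, taken from the table above.
DISK_PROFILES = {
    "network-hdd": {"step_gb": 256, "iops": (300, 300), "bw_mbps": (30, 30)},
    "network-ssd": {"step_gb": 32, "iops": (1_000, 1_000), "bw_mbps": (15, 15)},
    "network-ssd-nonreplicated": {"step_gb": 93, "iops": (28_000, 5_600), "bw_mbps": (110, 82)},
}

def max_limits(disk_type: str, storage_gb: int):
    """Estimate (read IOPS, write IOPS, read MB/s, write MB/s) for a storage size.

    Assumes limits increase by the fixed increment once per full step.
    """
    p = DISK_PROFILES[disk_type]
    steps = storage_gb // p["step_gb"]
    return (steps * p["iops"][0], steps * p["iops"][1],
            steps * p["bw_mbps"][0], steps * p["bw_mbps"][1])
```

For example, growing network-ssd storage from 32 GB to 96 GB triples the estimated limits, which is why increasing storage size makes throttling less likely.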
To increase the maximum IOPS and bandwidth values and make throttling less likely, increase the storage size when you update your cluster.
If you are using the network-hdd storage type, consider switching to network-ssd or network-ssd-nonreplicated by restoring the cluster from a backup.
Why do I get the minimum memory error for Greenplum® processes?
When creating, modifying, or restoring a cluster, you may get this error:
Per process memory must be more then '20971520' bytes on segment host, got '<calculated_memory_size>'
This error occurs if the memory size for each Greenplum® process is less than 20 MB and the number of connections equals the max_connections value. Minimum memory per cluster process is calculated using the following formula:
<host_segment_RAM> ÷ (<max_connections> × <number_of_segments_per_host>)
To fix the error, do one of the following:
- Reduce the max_connections value.
- Increase memory size by changing the segment host class.
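The formula above can be checked before changing a cluster. This is a minimal sketch using the formula as stated; the example host sizes and connection counts are hypothetical.

```python
MIN_PER_PROCESS_BYTES = 20_971_520  # 20 MB minimum per Greenplum® process

def per_process_memory(host_segment_ram_bytes: int,
                       max_connections: int,
                       segments_per_host: int) -> int:
    """Per-process memory from the docs' formula: RAM ÷ (max_connections × segments)."""
    return host_segment_ram_bytes // (max_connections * segments_per_host)

# Hypothetical example: a 64 GB segment host with 8 segments.
# 500 connections yields ~16.4 MB per process -> would trigger the error;
# halving max_connections to 250 brings it above the 20 MB minimum.
too_low = per_process_memory(64 * 1024**3, 500, 8)
fixed = per_process_memory(64 * 1024**3, 250, 8)
```

This also shows why the two fixes are equivalent levers: the result grows either by lowering max_connections or by raising host RAM via a larger host class.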
Greenplum® and Greenplum Database® are registered trademarks or trademarks of Broadcom Inc. in the United States and/or other countries.