Greenplum® clusters and hosts
- Why is the cluster slow even though the computing resources are not fully utilized?
- Why do I get an error about minimum memory for Greenplum® processes?
What is a database host and database cluster?
A database host is a cloud-based isolated database environment with dedicated computing resources and reserved storage capacity.
A database cluster is one or more database hosts between which you can configure replication.
How many database hosts can be in a cluster?
A Greenplum® cluster includes a minimum of four hosts:
- two master hosts
- two segment hosts
You can increase the number of segment hosts up to 32.
For more information, see Quotas and limits.
How many clusters can you create in a single cloud?
To learn more about MDB technical and organizational limitations, see Quotas and limits.
How are database clusters maintained?
In Yandex MPP Analytics for PostgreSQL, maintenance includes:
- Automatic installation of DBMS updates and fixes for your database hosts.
- Changes in the host class and storage size.
- Other Yandex MPP Analytics for PostgreSQL maintenance activities.
For more information, see Maintenance.
How do you calculate the usage cost for a database host?
In Yandex MPP Analytics for PostgreSQL, the usage cost is calculated based on the following variables:
- Selected host class.
- Reserved storage capacity for the database host.
- Size of database cluster backups. You do not pay for backups as long as their size does not exceed the storage capacity. Additional backup storage is charged according to our pricing policy.
- Database host uptime in hours. Partial hours are rounded to the nearest whole hour. For the hourly rates of each host class, see our pricing policy.
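The pricing rules above can be sketched as a simple calculation. All rates below are placeholder values for illustration, not actual Yandex pricing; the function name and parameters are hypothetical.

```python
# Hypothetical cost estimate for a database host.
# All rates are placeholder values, not actual pricing.
def monthly_cost(uptime_hours: float, hourly_rate: float,
                 storage_gb: int, storage_rate_per_gb: float,
                 backup_gb: float) -> float:
    # Partial hours are rounded to the nearest whole hour.
    billed_hours = round(uptime_hours)
    compute = billed_hours * hourly_rate
    storage = storage_gb * storage_rate_per_gb
    # Backups are free up to the reserved storage capacity;
    # only the excess is charged.
    extra_backup_gb = max(0.0, backup_gb - storage_gb)
    backup = extra_backup_gb * storage_rate_per_gb
    return compute + storage + backup

# 720.4 h of uptime rounds to 720 billed hours.
print(monthly_cost(720.4, 1.5, 256, 0.03, 300.0))  # 1089.0
```

Here 300 GB of backups against 256 GB of reserved storage means only the 44 GB excess is billed.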
Why is my cluster slow even though the computing resources are not fully utilized?
Your storage may have insufficient maximum IOPS and bandwidth to process the current number of requests. In this case, throttling occurs, which degrades the entire cluster performance.
The maximum IOPS and bandwidth values increase by a fixed value when the storage size increases by a certain step. The step and increment values depend on the disk type:
| Disk type | Step, GB | Max IOPS increase (read/write) | Max bandwidth increase (read/write), MB/s |
|---|---|---|---|
| network-hdd | 256 | 300/300 | 30/30 |
| network-ssd | 32 | 1,000/1,000 | 15/15 |
| network-ssd-nonreplicated, network-ssd-io-m3 | 93 | 28,000/5,600 | 110/82 |
To increase the maximum IOPS and bandwidth values and make throttling less likely, increase the storage size when updating your cluster.
If you are using the network-hdd storage type, consider switching to network-ssd or network-ssd-nonreplicated by restoring the cluster from a backup.
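The stepwise scaling described above can be sketched as follows. This is a minimal illustration assuming each started storage step grants one full increment; the exact rounding behavior is an assumption, so treat the results as estimates.

```python
import math

# Per-step increments from the table above (step size in GB,
# read IOPS increment, write IOPS increment).
STEPS = {
    "network-hdd": (256, 300, 300),
    "network-ssd": (32, 1000, 1000),
    "network-ssd-nonreplicated": (93, 28000, 5600),
}

def max_read_iops(disk_type: str, storage_gb: int) -> int:
    step_gb, read_iops, _ = STEPS[disk_type]
    # Assumption: each started storage step adds a fixed increment.
    steps = math.ceil(storage_gb / step_gb)
    return steps * read_iops

# Growing a network-ssd volume from 100 GB to 200 GB raises the
# read IOPS ceiling from 4,000 to 7,000.
print(max_read_iops("network-ssd", 100))  # 4000
print(max_read_iops("network-ssd", 200))  # 7000
```

This is why simply adding storage can relieve throttling even when disk space itself is not the bottleneck.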
Why do I get a minimum memory error for Greenplum® processes?
When creating, modifying, or restoring a cluster, you may get this error:
Per process memory must be more then '20971520' bytes on segment host, got '<calculated_memory_size>'
This error occurs if the memory size for each Greenplum® process is less than 20 MB and the number of connections equals the max_connections value. The minimum memory per cluster process is calculated using the following formula:
<host_segment_RAM> ÷ (<max_connections> x <number_of_segments_per_host>)
To fix the error, do one of the following:
- Reduce the max_connections value.
- Increase the memory size by changing the segment host class.
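The formula above can be checked directly against the 20 MB threshold. This is a sketch; the function name and the sample RAM, connection, and segment values are illustrative.

```python
MIN_BYTES = 20971520  # 20 MB threshold from the error message

def per_process_memory(host_segment_ram_bytes: int,
                       max_connections: int,
                       segments_per_host: int) -> float:
    # <host_segment_RAM> / (<max_connections> x <number_of_segments_per_host>)
    return host_segment_ram_bytes / (max_connections * segments_per_host)

ram = 32 * 1024**3  # 32 GB of segment host RAM (example value)
print(per_process_memory(ram, 500, 4) >= MIN_BYTES)  # False: ~17.2 MB per process
# Reducing max_connections to 350 brings the per-process memory
# above the threshold.
print(per_process_memory(ram, 350, 4) >= MIN_BYTES)  # True
```

Either lever works: fewer connections or more RAM per segment host raises the per-process share.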
Greenplum® and Greenplum Database® are registered trademarks or trademarks of Broadcom Inc. in the United States and/or other countries.