Virtual environment configuration requirements
3. Secure virtual environment configuration
This section advises customers on how to securely configure Yandex Cloud services and employ additional virtual environment data protection tools.
Overview
3.1 Antivirus protection is used
Make sure to provide anti-malware protection within your scope of responsibility. You can use a variety of solutions from our partners in Yandex Cloud Marketplace.
Antivirus solution images are available in Yandex Cloud Marketplace. License types and other required information are available in the product descriptions.
Make sure that critical systems are protected with antivirus solutions.
Guides and solutions to use:
Follow the vendor guide to install the AV solution.
3.2 The serial console is either controlled or not used
On VMs, access to the serial console is disabled by default. For risks of using the serial console, see Getting started with a serial console in the Yandex Compute Cloud documentation.
When working with a serial console:
- Make sure that critical data is not output to the serial console.
- If SSH access to the serial console is enabled, make sure that both the credentials management routine and the password used to log in to the operating system locally are as per the regulatory standards. For example, in an infrastructure for storing payment card data, passwords must meet the PCI DSS requirements: they must contain both letters and numbers, be at least 7 characters long, and be changed at least once every 90 days.
Note
According to the PCI DSS standard, access to a VM via a serial console is considered "non-console", and Yandex Cloud uses TLS encryption for it.
We do not recommend using access to the serial console unless it is absolutely necessary.
- In the management console, select the folder to check the VMs in.
- In the list of services, select Compute Cloud.
- Open the settings of all the necessary VMs.
- Under Access, find the Serial console access parameter (in the Additional section).
- Make sure serial console access is disabled.
- If it is disabled for all the VMs, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Find VMs with access to the serial console enabled:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for VM_ID in $(yc compute instance list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        echo "VM_ID: " && yc compute instance get --id=$VM_ID --full --format=json | jq -r '. | select(.metadata."serial-port-enable"=="1")' | jq -r '.id' && echo "FOLDER_ID: " $FOLDER_ID && echo "-----"
      done
    done
  done
- If the VM_ID value next to FOLDER_ID is empty for every VM, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
If you don't intend to use serial console on the VM, disable it.
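A minimal sketch, assuming the YC CLI is configured, of disabling serial console access on a VM by resetting the same serial-port-enable metadata key checked above (the VM ID is a placeholder):
  export VM_ID=<VM ID>
  # Set the metadata key back to 0 so SSH access to the serial console is disabled
  yc compute instance update-metadata --id=$VM_ID \
    --metadata serial-port-enable=0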
3.3 A benchmark image is used for VM deployment
When deploying virtual machines, we recommend:
- Preparing a VM image whose system settings correspond to your information security policy. You can create an image using Packer. See Getting started with Packer.
- Using this image to create a virtual machine or instance group.
- Looking up the virtual machine's information to check that it was created from this image.
- In the management console, select the folder to check the VMs in.
- In the list of services, select Compute Cloud.
- Go to the Disks tab.
- Open the settings of all disks.
- Under Source, find the Identifier parameter.
- If every disk displays the ID of your benchmark image, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Run the command below to search for the VM disks that do not contain the ID of your benchmark image:
  export ORG_ID=<organization ID>
  export IMAGE_ID=<ID of your benchmark image>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for DISK_ID in $(yc compute disk list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        echo "DISK_ID: " && yc compute disk get --id=$DISK_ID --format=json | jq -r --arg IMAGE_ID $IMAGE_ID '. | select(."source_image_id"==$IMAGE_ID | not)' | jq -r '.id' && echo "FOLDER_ID: " $FOLDER_ID && echo "-----"
      done
    done
  done
- If the DISK_ID value next to FOLDER_ID is empty for every disk, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
- Find out why these VM disks use an image different from the benchmark one.
- Recreate the VMs with the appropriate image.
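A hedged sketch of recreating a VM from the benchmark image with the YC CLI (the VM name, zone, subnet, and image ID are placeholders; adjust the disk and network parameters to your environment):
  # Create a VM whose boot disk is built from the benchmark image
  yc compute instance create \
    --name <VM name> \
    --zone ru-central1-a \
    --create-boot-disk image-id=<ID of your benchmark image> \
    --network-interface subnet-name=<subnet name>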
3.4 Terraform is used in accordance with best information security practices
With Terraform, you can manage a cloud infrastructure using configuration files. If you change the files, Terraform will automatically detect which part of your configuration is already deployed, and what should be added or removed. For more information, see Getting started with Terraform.
We do not recommend putting private information into Terraform configuration files, such as passwords, secrets, personal data, or payment system data. Instead, use services for storing and delivering secrets, such as HashiCorp Vault from Cloud Marketplace or Lockbox (to transfer secrets to the target object without using Terraform).
If you still need to enter private information in the configuration, you should take the following security measures:
- Use sensitive = true for private information to exclude it from the console output of the terraform plan and terraform apply commands.
- Use Terraform remote state. We recommend uploading a Terraform state to Object Storage and setting up configuration locks using Managed Service for YDB to prevent simultaneous edits by administrators.
- Use the mechanism for passing secrets to Terraform via environment variables instead of plain text, or use the built-in Key Management Service features for encrypting data in Terraform using a separate file with private data. Learn more about this technique.
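A minimal sketch of the environment-variable approach, assuming your configuration declares a variable named db_password with sensitive = true. Terraform reads TF_VAR_<name> variables from the environment, so the value never appears in the .tf files; here it is taken from a hypothetical Lockbox secret:
  # Export the secret value for the Terraform variable "db_password" (assumed name)
  export TF_VAR_db_password="$(yc lockbox payload get --id <secret ID> --key password)"
  terraform plan
  terraform apply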
For more information about Object Storage security, see Object Storage below.
Note
When a configuration is deployed, you can delete the configuration file with private data.
Scan your Terraform manifests using Checkov
Collect data on the use of Terraform security best practices from different sources.
3.5 Integrity control is performed
3.5.1 File integrity control
Numerous information security standards require integrity control of critical files. To do this, you can use free host-based solutions.
Paid solutions are also available in Yandex Cloud Marketplace, such as Kaspersky Security.
Collect data on the use of file integrity control from different sources.
3.5.2 VM runtime environment integrity control
To control a VM's runtime environment (e.g., to enable access from the VM to a secure repository only when running it in the YC CLI cloud), you can use the identity document mechanism. When you create a VM, an identity document that stores information about the VM is generated. It contains the IDs of the VM, Yandex Cloud Marketplace product, disk image, etc. This document is signed with a Yandex Cloud certificate. The document and its signature are available to VM processes through the metadata service. Thus, the processes identify the VM runtime environment, disk image, etc., to restrict access to the resources under monitoring.
Make sure that critical VMs have identity documents signed.
3.6 Side-channel attack protection principles are followed
To ensure the best protection against CPU level side-channel attacks (such as Spectre or Meltdown):
- Use full-core virtual machines (instances with a CPU share of 100%).
- Install updates for your operating system and kernel that ensure side-channel attack protection (for example, Kernel page-table isolation for Linux or applications built using Retpoline).
We recommend that you use dedicated hosts for the most security-critical resources.
Learn more
3.7 The corporate Yandex Cloud users have the Yandex Cloud Certified Security Specialist certification
The Yandex Cloud Certified Security Specialist certification exam evaluates the competencies of Yandex Cloud users involved in information security and cloud system protection.
Guides and solutions to use:
- See the description of competencies tested during the Yandex Cloud Certified Security Specialist exam.
- Study the materials to help you pass the exam.
- Fill out this form to sign up for the exam.
Yandex Object Storage
3.8 There is no public access to the Object Storage bucket
We recommend assigning minimum roles for a bucket using IAM and supplementing or itemizing them using a bucket policy (for example, to restrict access to the bucket by IP, grant granular permissions for objects, and so on).
Access to Object Storage resources is verified at three levels: IAM, bucket policies, and object ACLs.
Verification procedure:
- If a request passes the IAM check, the next step is the bucket policy check.
- Bucket policy rules are checked in the following order:
  - If the request meets at least one of the Deny rules, access is denied.
  - If the request meets at least one of the Allow rules, access is allowed.
  - If the request does not meet any of the rules, access is denied.
- If the request fails the IAM or bucket policy check, access verification is performed based on the object's ACL.
In IAM, a bucket inherits the same access permissions as those of the folder and cloud where it is located. For more information, see Inheritance of bucket access permissions by Yandex Cloud public groups. Therefore, we recommend that you only assign the minimum required roles to certain buckets or objects in Object Storage.
Bucket policies are used for additional data protection, for example, to restrict access to a bucket by IP, issue granular permissions to objects, and so on.
With ACLs, you can grant access to an object bypassing IAM verification and bucket policies. We recommend setting strict ACLs for buckets.
Example of a secure Object Storage configuration: Terraform
- In the management console, select the cloud or folder to check the buckets in.
- From the list of services, select Object Storage.
- Click the three dots next to each bucket and check its ACL for allUsers and allAuthenticatedUsers.
- Open the bucket and check the ACL of each of its objects for allUsers and allAuthenticatedUsers.
- Check whether the Public parameter is enabled in the object Read access section. If public access is not granted anywhere, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- Configure the AWS CLI to work with a cloud.
- Run the command below to check the bucket ACL for allUsers and allAuthenticatedUsers:
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api get-bucket-acl \
    --bucket <name of your bucket>
Guides and solutions to use:
If public access is enabled, remove it or make sure it is controlled (grant permission to access public data deliberately and only where required).
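A hedged sketch of revoking public access by resetting the bucket ACL to private with the AWS CLI (the bucket name is a placeholder; object ACLs can be reset the same way with put-object-acl):
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api put-bucket-acl \
    --bucket <name of your bucket> \
    --acl private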
3.9 Object Storage uses bucket policies
Bucket policies set permissions for actions with buckets, objects, and object groups. A policy applies when a user makes a request to a resource. As a result, the request is either executed or rejected.
Bucket policy examples:
- Policy that only enables object download from a specified range of IP addresses.
- Policy that prohibits downloading objects from the specified IP address.
- Policy that provides different users with full access only to certain folders, with each user being able to access their own.
- Policy that gives each user and service account full access to a folder named the same as the user ID or service account ID.
We recommend making sure that your Object Storage bucket uses at least one policy.
- In the management console, select the cloud or folder to check the bucket policies in.
- In the list of services, select Object Storage.
- Go to Bucket policy.
- Make sure that at least one policy is enabled. Otherwise, proceed to "Guides and solutions to use".
- Configure the AWS CLI to work with a cloud.
- Run the command below to check whether a bucket policy is configured:
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api get-bucket-policy \
    --bucket <name of your bucket>
Guides and solutions to use:
Enable the required policy.
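A sketch of attaching a policy that only allows object downloads from a given address range (the bucket name and CIDR are placeholders; the JSON uses the AWS-style policy schema):
  cat > policy.json <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::<name of your bucket>/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
      }
    ]
  }
  EOF
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api put-bucket-policy \
    --bucket <name of your bucket> \
    --policy file://policy.json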
3.10 The Object lock feature is enabled in Object Storage
When processing critical data in buckets, you must ensure that data is protected from deletion and that versions are backed up. This can be achieved by versioning and lifecycle management mechanisms, as well as by using object locks.
Bucket versioning allows keeping a version history of an object. Each version is a complete copy of an object and occupies space in Object Storage. Using version control protects your data from both accidental user actions and application faults.
If you delete or modify an object with versioning enabled, the action will create a new object version with a new ID. In the case of deletion, the object becomes unreadable, but its version is kept and can be restored.
For more information about setting up versioning, see the Object Storage documentation, Bucket versioning.
For more information about lifecycles, see the Object Storage documentation, Bucket object lifecycles and Bucket object lifecycle configuration.
In addition, to protect object versions against deletion, use object locks. For more information about object lock types and how to enable them, see the documentation.
The storage period of critical data in a bucket is determined by the customer's information security requirements and the information security standards. For example, the PCI DSS standard states that audit logs should be stored for at least one year and be available online for at least three months.
- In the management console, select the cloud or folder to check the buckets in.
- From the list of services, select Object Storage.
- Open the settings of all buckets.
- Go to the Versioning tab and make sure it is enabled. Otherwise, proceed to "Guides and solutions to use".
- Configure the AWS CLI to work with a cloud.
- Run the command below to check whether versioning is enabled:
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api get-bucket-versioning \
    --bucket <name of your bucket>
- Run the command below to check whether an object lock is configured:
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api get-object-lock-configuration \
    --bucket <name of your bucket>
Guides and solutions to use:
Enable bucket versioning and, if required, configure an object lock for buckets with critical data.
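A minimal sketch of enabling versioning on an existing bucket with the AWS CLI (the bucket name is a placeholder; an object lock additionally requires versioning and the lock configuration described in the documentation):
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api put-bucket-versioning \
    --bucket <name of your bucket> \
    --versioning-configuration Status=Enabled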
3.11 Logging of actions with buckets is enabled in Object Storage
When using Object Storage to store critical data, be sure to enable logging of actions with buckets. For more information, see the Object Storage documentation, Logging actions with a bucket.
This ensures that data plane logs are written for the following request types: PUT, DELETE, GET, POST, OPTIONS, and HEAD.
You can request log data writing (except for bucket object read events) in Audit Trails. You can use the traffic metric in Monitoring to view the amount of outgoing traffic from the bucket. In the future, all logs will be written to Audit Trails.
You can also analyze Object Storage logs in DataLens. For more information, see Analyzing Object Storage logs using DataLens.
Guides and solutions to use:
You can check if logging is enabled only via Terraform/API by following this guide.
3.12 Cross-Origin Resource Sharing (CORS) is configured in Object Storage
If you need cross-domain requests to objects in a bucket, configure a CORS policy for that bucket according to your application's requirements.
- In the management console, select the cloud or folder to check the buckets in.
- From the list of services, select Object Storage.
- Open the settings of all buckets.
- Go to the CORS tab and make sure that the configuration is set up. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
Set up CORS.
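A hedged sketch of applying a minimal CORS configuration via the S3-compatible API (the bucket name and origin are placeholders; adjust methods and headers to your application):
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3api put-bucket-cors \
    --bucket <name of your bucket> \
    --cors-configuration '{"CORSRules": [{"AllowedOrigins": ["https://example.com"], "AllowedMethods": ["GET"], "AllowedHeaders": ["*"], "MaxAgeSeconds": 3600}]}'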
3.13 Yandex Security Token Service is used to get access keys to Object Storage
Yandex Security Token Service is a Yandex Identity and Access Management component used to get temporary access keys compatible with the AWS S3 API.
Temporary access keys as an authentication method are only supported in Yandex Object Storage.
With temporary keys, you can set up granular access to buckets for multiple users with a single service account. The service account permissions must include all the permissions you want to grant using temporary keys.
A temporary access key is created based on a static key, but, unlike it, it has a limited lifetime and access permissions. Access permissions and lifetime are set for each temporary key individually. The maximum key lifetime is 12 hours.
To set up access permissions for the key, you need an access policy in JSON format based on this schema.
Temporary Security Token Service keys inherit the access permissions of the service account but are limited by the access policy. If you set up a temporary key's access policy to allow operations not allowed for the service account, such operations will not be performed.
Guides and solutions to use:
Create a temporary access key using Security Token Service.
3.14 Pre-signed URLs are generated for isolated cases of access to specific objects in Object Storage private buckets
Object Storage incorporates several access management mechanisms. To learn how these mechanisms interact, see Access management methods in Object Storage: Overview.
With pre-signed URLs, any web user can perform various operations in Object Storage, such as:
- Downloading an object
- Uploading an object
- Creating a bucket
A pre-signed URL is a URL containing request authorization data in its parameters. Pre-signed URLs can be created by users with static access keys.
We recommend issuing pre-signed URLs to users who are not authorized in the cloud but need access to specific objects in the bucket. This way, you follow the principle of least privilege and avoid granting access to all the objects in the bucket.
Guides and solutions to use:
Create a pre-signed URL and communicate it to the user.
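A minimal sketch of generating a pre-signed download URL that expires in one hour, assuming the AWS CLI is configured with a static access key that can read the object (bucket and object names are placeholders):
  aws --endpoint-url=https://storage.yandexcloud.net \
    s3 presign s3://<name of your bucket>/<object key> \
    --expires-in 3600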
Managed Services for Databases
3.15 A security group is assigned in managed databases
We recommend prohibiting internet access to databases that contain critical data, in particular PCI DSS data or private data. Configure security groups to only allow connections to the DBMS from particular IP addresses. To do this, follow the steps in Creating a security group. You can specify a security group in the cluster settings or when creating the cluster in the network settings section.
- In the management console, select the cloud or folder to check the databases in.
- In the list of services, select a service or services with managed databases.
- In the object settings, find the Security group parameter and make sure that at least one security group is assigned.
- If the parameters of each object have at least one security group set, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- Run the command below to search for Managed Service for PostgreSQL clusters with no security group:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for DB_ID in $(yc managed-postgresql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc managed-postgresql cluster get --id=$DB_ID --format=json | jq -r '. | select(.security_group_ids | not)' | jq -r '.id'
      done
    done
  done
- The output should return an empty string. Otherwise, proceed to "Guides and solutions to use".
- Run the command below to search for Managed Service for MySQL clusters with no security group:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for DB_ID in $(yc managed-mysql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc managed-mysql cluster get --id=$DB_ID --format=json | jq -r '. | select(.security_group_ids | not)' | jq -r '.id'
      done
    done
  done
- The output should return an empty string. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
If any databases without security groups are found, assign security groups to them or enable the Default security group functionality.
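A hedged sketch of assigning an existing security group to a Managed Service for PostgreSQL cluster with the YC CLI (the cluster and security group IDs are placeholders; verify the flags against the current CLI reference):
  yc managed-postgresql cluster update \
    --id <cluster ID> \
    --security-group-ids <security group ID>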
3.16 No public IP address is assigned in managed databases
Assigning a public IP to a managed database raises information security risks. We do not recommend assigning an external IP unless it is absolutely necessary.
- In the management console, select the cloud or folder to check the databases in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Hosts tab.
- If the parameters of each object have the Public access option disabled, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Run the command below to search for managed DB clusters with public IPs:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for DB_ID in $(yc managed-mysql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc managed-mysql hosts list --cluster-id=$DB_ID --format=json | jq -r '.[] | select(.assign_public_ip)' | jq -r '.cluster_id'
      done
    done
  done
- If an empty string is output, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
Disable public access if it is not required.
3.17 The deletion protection feature is enabled
In Yandex Cloud managed databases, you can enable deletion protection. The deletion protection feature safeguards the cluster against accidental deletion by a user. Even with cluster deletion protection enabled, one can still connect to the cluster manually and delete the data.
- In the management console, select the cloud or folder to check the databases in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- If the parameters of each object have the Deletion protection option enabled, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Run the command below to search for managed DB clusters with deletion protection disabled:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for DB_ID in $(yc managed-mysql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc managed-mysql cluster get --id=$DB_ID --format=json | jq -r '. | select(.deletion_protection | not)' | jq -r '.id'
      done
    done
  done
- The output should return an empty string. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
- In the management console, select the cloud or folder to enable deletion protection in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- In the object parameters, enable Deletion protection.
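As a CLI alternative, a hedged sketch of enabling deletion protection on an existing Managed Service for MySQL cluster (the cluster ID is a placeholder; verify the flag against the current CLI reference):
  yc managed-mysql cluster update \
    --id <cluster ID> \
    --deletion-protection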
3.18 Access from DataLens is disabled if not needed
Do not enable access to databases containing critical data from the management console, DataLens, or other services unless you have to. Access from DataLens may be required for data analysis and visualization. For such access, the Yandex Cloud service network is used, with authentication and TLS encryption. You can enable and disable access from DataLens or other services in the cluster settings or when creating it in the advanced settings section.
- In the management console, select the cloud or folder to check the databases in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- If the parameters of each object have Access from DataLens disabled, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Find managed DB clusters with access from DataLens enabled:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for DB_ID in $(yc managed-mysql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc managed-mysql cluster get --id=$DB_ID --format=json | jq -r '. | select(.config.access.data_lens)' | jq -r '.id'
      done
    done
  done
- The output should return an empty string. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
- In the management console, select the cloud or folder to disable access from DataLens in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- In the object parameters, disable Access from DataLens.
3.19 Access from the management console is disabled in managed databases
You may need access to the database from the management console to send SQL queries to the database and visualize the data structure.
We recommend that you enable this type of access only if needed, because it raises information security risks. In normal mode, use a standard DB connection as a DB user.
- In the management console, select the cloud or folder to check the databases in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- If the parameters of each object have Access from the management console disabled, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Run the command below to search for managed DB clusters with access from the management console enabled:
Bash
export ORG_ID=<organization_ID>

# MySQL
for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
  for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
    yc managed-mysql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[] | select(.config.access.web_sql)' | jq -r '.id'
  done
done

# PostgreSQL
for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
  for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
    yc managed-postgresql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[] | select(.config.access.web_sql)' | jq -r '.id'
  done
done

# ClickHouse
for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
  for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
    yc managed-clickhouse cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[] | select(.config.access.web_sql)' | jq -r '.id'
  done
done

# Redis
for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
  for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
    yc managed-redis cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[] | select(.config.access.web_sql)' | jq -r '.id'
  done
done

# MongoDB
for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
  for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
    yc managed-mongodb cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[] | select(.config.access.web_sql)' | jq -r '.id'
  done
done
PowerShell
$ORG_ID = "<organization_ID>"
$Clouds = yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | ConvertFrom-Json |
  Select @{n="CloudID";e={$_.id}}, created_at, @{n="CloudName";e={$_.name}}, organization_id
$MDBClusters = @()
foreach ($Cloud in $Clouds) {
  $Folders = yc resource-manager folder list --cloud-id $Cloud.CloudID --format=json | ConvertFrom-Json
  foreach ($Folder in $Folders) {
    # Getting PostgreSQL
    $MDBName = "Managed PostgreSQL"
    $MDBClusters += yc managed-postgresql cluster list --folder-id $Folder.id --format=json | ConvertFrom-Json |
      where {$_.config.access.web_sql -eq $True} |
      Select @{n="CloudID";e={$Cloud.CloudID}}, @{n="CloudName";e={$Cloud.CloudName}}, @{n="FolderID";e={$Folder.id}}, @{n="FolderName";e={$Folder.name}}, @{n="MDB";e={$MDBName}}, @{n="ClusterID";e={$_.id}}, @{n="ClusterName";e={$_.name}}, @{n="ClusterEnv";e={$_.environment}}, @{n="ClusterStatus";e={$_.status}}, network_id, health, @{n="WebSQLAccess";e={$_.config.access.web_sql}}
    # Getting MySQL
    $MDBName = "Managed MySQL"
    $MDBClusters += yc managed-mysql cluster list --folder-id $Folder.id --format=json | ConvertFrom-Json |
      where {$_.config.access.web_sql -eq $True} |
      Select @{n="CloudID";e={$Cloud.CloudID}}, @{n="CloudName";e={$Cloud.CloudName}}, @{n="FolderID";e={$Folder.id}}, @{n="FolderName";e={$Folder.name}}, @{n="MDB";e={$MDBName}}, @{n="ClusterID";e={$_.id}}, @{n="ClusterName";e={$_.name}}, @{n="ClusterEnv";e={$_.environment}}, @{n="ClusterStatus";e={$_.status}}, network_id, health, @{n="WebSQLAccess";e={$_.config.access.web_sql}}
    # Getting ClickHouse
    $MDBName = "Managed ClickHouse"
    $MDBClusters += yc managed-clickhouse cluster list --folder-id $Folder.id --format=json | ConvertFrom-Json |
      where {$_.config.access.web_sql -eq $True} |
      Select @{n="CloudID";e={$Cloud.CloudID}}, @{n="CloudName";e={$Cloud.CloudName}}, @{n="FolderID";e={$Folder.id}}, @{n="FolderName";e={$Folder.name}}, @{n="MDB";e={$MDBName}}, @{n="ClusterID";e={$_.id}}, @{n="ClusterName";e={$_.name}}, @{n="ClusterEnv";e={$_.environment}}, @{n="ClusterStatus";e={$_.status}}, network_id, health, @{n="WebSQLAccess";e={$_.config.access.web_sql}}
    # Getting Redis
    $MDBName = "Managed Redis"
    $MDBClusters += yc managed-redis cluster list --folder-id $Folder.id --format=json | ConvertFrom-Json |
      where {$_.config.access.web_sql -eq $True} |
      Select @{n="CloudID";e={$Cloud.CloudID}}, @{n="CloudName";e={$Cloud.CloudName}}, @{n="FolderID";e={$Folder.id}}, @{n="FolderName";e={$Folder.name}}, @{n="MDB";e={$MDBName}}, @{n="ClusterID";e={$_.id}}, @{n="ClusterName";e={$_.name}}, @{n="ClusterEnv";e={$_.environment}}, @{n="ClusterStatus";e={$_.status}}, network_id, health, @{n="WebSQLAccess";e={$_.config.access.web_sql}}
    # Getting MongoDB
    $MDBName = "Managed MongoDB"
    $MDBClusters += yc managed-mongodb cluster list --folder-id $Folder.id --format=json | ConvertFrom-Json |
      where {$_.config.access.web_sql -eq $True} |
      Select @{n="CloudID";e={$Cloud.CloudID}}, @{n="CloudName";e={$Cloud.CloudName}}, @{n="FolderID";e={$Folder.id}}, @{n="FolderName";e={$Folder.name}}, @{n="MDB";e={$MDBName}}, @{n="ClusterID";e={$_.id}}, @{n="ClusterName";e={$_.name}}, @{n="ClusterEnv";e={$_.environment}}, @{n="ClusterStatus";e={$_.status}}, network_id, health, @{n="WebSQLAccess";e={$_.config.access.web_sql}}
  }
}
$MDBClusters
- If an empty string is output, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
- In the management console, select the cloud or folder to disable access from the management console in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- In the object parameters, disable Access from console.
Yandex Cloud Functions
3.20 Serverless Containers/Cloud Functions uses the VPC internal network
By default, a function runs in an isolated IPv4 network with a NAT gateway enabled. For this reason, it can only access public IPv4 addresses, and you cannot fix its outgoing address.
Networking between two functions, as well as between functions and user resources, is limited:
- Incoming connections are not supported. For example, you cannot access the internal components of a function over the network, even if you know the IP address of its instance.
- Outgoing connections are supported via TCP, UDP, and ICMP. For example, a function can access a Yandex Compute Cloud VM or a Yandex Managed Service for YDB DB on the user's network.
- Functions are cross-zonal: you cannot explicitly specify a subnet or select an availability zone to run a function in.
If necessary, you can specify a cloud network in the function settings. In this case:
- The function will be executed in the specified cloud network.
- While being executed, the function will get an IP address in the relevant subnet and access to all the network resources.
- The function will have access not only to the internet but also to user resources located in the specified network, such as databases, virtual machines, etc.
- The function will have an IP address within the 198.19.0.0/16 range when accessing user resources.
You can only specify a single network for functions, containers, and API gateways that reside in the same cloud.
- In the management console, select the cloud or folder to check the functions in.
- In the list of services, select Cloud Functions.
- Open all the functions.
- In the object settings, go to the Edit function version tab.
- If the parameters of each object have Network — VPC set, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- Run the command below to search for any cloud functions that have no network specified in VPC:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for VER in $(yc serverless function version list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc serverless function version get $VER --format=json | jq -r '. | select(.connectivity.network_id | not)' | jq -r '.id'
      done
    done
  done
- If an empty string is output, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
- Select the cloud or folder to check the functions in.
- Select Cloud Functions in the list of services.
- Open the function.
- In the object settings, go to the Edit function version tab.
- Set Network — VPC.
For more information about tracking function versions, see Backups in Cloud Functions.
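As a CLI alternative, a hedged sketch of publishing a new function version attached to a VPC network (the names, IDs, and runtime are placeholders; attaching a network requires creating a new version):
  yc serverless function version create \
    --function-name <function name> \
    --runtime python311 \
    --entrypoint index.handler \
    --memory 128m \
    --execution-timeout 5s \
    --source-path ./function.zip \
    --network-id <network ID>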
3.21 Functions are configured in terms of access control, secret and environment variable management, and DBMS connection
In cases where the use of public functions is not explicitly required, we recommend that you use private functions. For more information about setting up access to functions, see Managing function access permissions. We recommend using private functions and assigning rights to invoke functions to specific cloud users.
A service account is an account that can be used by programs or functions to manage resources in Yandex Cloud. If the function version was created with a service account, you can get an IAM token for that service account from the function invocation context.
Make sure to assign roles to the service account. A role is a set of permissions to perform operations with the cloud's resources. A function automatically inherits roles assigned for a folder, cloud, or organization. However, they do not appear in the list of assigned roles.
Do not store secrets and sensitive data in the function code and environment variables. Use Yandex Lockbox to store and rotate secrets. You can transmit a Yandex Lockbox secret to a function in the environment variable.
For the function to get access to the secret, edit its parameters to specify a service account with the following roles assigned:
- lockbox.payloadViewer for the secret.
- kms.keys.encrypterDecrypter for the encryption key, if the secret was created using a Yandex Key Management Service encryption key.
A Yandex Lockbox secret provided to a function is cached in Cloud Functions. After you revoke a service account's access to a secret, the function may continue to store the secret for up to 5 minutes.
Transmitting secrets creates a new function version. You cannot transmit secrets to an existing function version.
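A hedged sketch of passing a Lockbox secret to a function as an environment variable when publishing a new version (all IDs, the variable name DB_PASSWORD, and the runtime are placeholders; the service account must have the roles listed above):
  yc serverless function version create \
    --function-name <function name> \
    --runtime python311 \
    --entrypoint index.handler \
    --memory 128m \
    --execution-timeout 5s \
    --source-path ./function.zip \
    --service-account-id <service account ID> \
    --secret environment-variable=DB_PASSWORD,id=<secret ID>,version-id=<secret version ID>,key=<secret key>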
You can add other environment variables when creating a function version. The maximum size of environment variables, including their names, is limited to 4 KB.
Environment variable values are string constants: they cannot be computed at deployment time, and any computation must be done within the function code. You can read environment variables using the standard tools of your programming language.
You can access DB cluster hosts from a function only over SSL.
Guides and solutions to use:
- Disable public access to a function.
- View a list of roles assigned to a function.
- Get a service account IAM token using a function.
- Revoke a role assigned to a function.
- Connect to a database from a function.
For more information about roles and resources you can assign roles for in Cloud Functions, see Access management in Cloud Functions.
3.22 Aspects of time synchronization in Cloud Functions are addressed
Cloud Functions does not guarantee time synchronization prior to or during execution of requests by functions. To get a function log with exact timestamps on the Cloud Functions side, use a cloud logging service. For more information on function logging, see Function logs.
3.23 Aspects of header management in Cloud Functions are addressed
If the function is called to process an HTTP request, the returned result should be a JSON document containing the HTTP response code, response headers, and response content. Cloud Functions automatically processes this JSON document and returns data in a standard HTTP response to the user. It is the customer's responsibility to manage the response headers according to the regulatory requirements and the threat model. For more information on how to process an HTTP request, refer to the Cloud Functions manual, Invoking a function in Cloud Functions.
You can invoke a function by specifying the ?integration=raw query string parameter. When invoked this way, a function cannot parse or set HTTP headers:
- The HTTPS request body content is passed to the function as the first argument (without conversion to a JSON structure).
- The HTTPS response body is the function's return value as is (without conversion or structure checks); the HTTP response status is 200.
When raw mode is not used, the request is passed to the function as a JSON structure which contains:
- httpMethod: the HTTP method: DELETE, GET, HEAD, OPTIONS, PATCH, POST, or PUT.
- headers: dictionary of strings with HTTP request headers and their values. If the same header is provided multiple times, the dictionary contains the last provided value.
- multiValueHeaders: dictionary with HTTP request headers and lists of their values. It contains the same keys as the headers dictionary; however, if a header was repeated multiple times, its list contains all the values provided for it. If the header was provided only once, it is still included in this dictionary, and its list contains a single value.
- queryStringParameters: dictionary with the query parameters. If the same parameter is specified multiple times, the dictionary contains the last specified value.
- multiValueQueryStringParameters: dictionary with the list of all specified values for each query parameter. If the same parameter is specified multiple times, the dictionary contains all the specified values.
- requestContext: the request context.
For the purpose of debugging a function, you can use special requests that return the JSON structure of the request and the result you need for debugging. For more information, see function debugging.
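For quick manual testing, a hedged sketch of invoking a private function over HTTPS, with the YC CLI supplying the IAM token (the function ID is a placeholder; append ?integration=raw to skip the JSON wrapping described above):
  curl -H "Authorization: Bearer $(yc iam create-token)" \
    -d '{"example": "payload"}' \
    "https://functions.yandexcloud.net/<function ID>?integration=raw"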
Managed Service for YDB
3.24 Recommendations for using confidential data in YDB are followed
It is prohibited to use confidential data for names of databases, tables, columns, folders, and so on. Do not send critical data, e.g., payment card details, to Managed Service for YDB (both Dedicated and Serverless) in plain text. Prior to sending data, be sure to encrypt it at the application level. For this, you can use the KMS service or any other method compliant with the regulatory standard. For data whose storage period is known in advance, we recommend configuring Time To Live (TTL) so that the data is automatically removed once it expires.
3.25 Recommendations for SQL injection protection in YDB are followed
When working with the database, use parameterized prepared statements instead of building queries by string concatenation; this protects against SQL injection.
3.26 There is no public access for YDB
When accessing the database in dedicated mode, we recommend that you use it inside VPC and disable public access to it from the internet. In serverless mode, the database can be accessed from the internet. You must therefore take this into account when modeling threats to your infrastructure. For more information about the operating modes, see the Serverless and dedicated modes section in the Managed Service for YDB documentation.
When setting up database permissions, use the principle of least privilege.
- In the management console, select the cloud or folder to check the database in.
- In the list of services, select Managed Service for YDB.
- Open all the databases.
- In the database settings, go to the Network tab.
- If the parameters of each object have the Public IP addresses option disabled, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Run the command below to search for managed DB clusters with public IPs:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for DB_ID in $(yc managed-mysql cluster list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc managed-mysql hosts list --cluster-id=$DB_ID --format=json | jq -r '.[] | select(.assign_public_ip)' | jq -r '.cluster_id'
      done
    done
  done
- The output should return an empty string. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
Disable public access if it is not required.
3.27 YDB backup recommendations are followed
When creating on-demand backups, make sure that the backup data is properly protected.
When creating backups on demand in Object Storage, follow the recommendations in the Object Storage subsection above (for example, use the built-in bucket encryption feature).
Yandex Container Registry
3.28 ACL by IP address is set up for Yandex Container Registry
We recommend that you limit access to your Container Registry to specific IPs.
- In the management console, select the cloud or folder to check the registry in.
- In the list of services, select Container Registry.
- In the settings of the specific registry, go to the Access for IP address tab.
- If specific IPs to allow access for are set in the parameters, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Run the command below to search for registries that are not filtered by IP:
Bash
export ORG_ID=<organization_ID>
for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
  for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
    for CR in $(yc container registry list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
      yc container registry list-ip-permissions --id=$CR --format=json | jq -r '.[] | select(.ip)' | jq -r '.action' && echo $CR "IF ACTION PULL/PUSH exist before CR then OK"
    done
  done
done
PowerShell
$ORG_ID = "<organization_ID>"
$Clouds = yc resource-manager cloud list --organization-id $ORG_ID --format=json | ConvertFrom-Json |
  Select @{n="CloudID";e={$_.id}}, created_at, @{n="CloudName";e={$_.name}}, organization_id
$CRIPPermissions = @()
foreach ($Cloud in $Clouds) {
  $Folders = yc resource-manager folder list --cloud-id $Cloud.CloudID --format=json | ConvertFrom-Json
  foreach ($Folder in $Folders) {
    $CRList = yc container registry list --folder-id $Folder.id --format=json | ConvertFrom-Json
    if ($CRList) {
      foreach ($CR in $CRList) {
        $IPPermissions = yc container registry list-ip-permissions --id $CR.id --format=json | ConvertFrom-Json
        if ($IPPermissions) {
          $CRIPPermissions += $CR | Select @{n="CloudID";e={$Cloud.CloudID}}, @{n="CloudName";e={$Cloud.CloudName}}, @{n="FolderID";e={$Folder.id}}, @{n="FolderName";e={$Folder.name}}, @{n="CRID";e={$_.id}}, @{n="CRName";e={$_.name}}, @{n="CRStatus";e={$_.status}}, @{n="Labels";e={$_.labels}}, @{n="IPPermissionsList";e={$IPPermissions}}
        }
      }
    }
  }
}
$CRIPPermissions
- If PULL/PUSH is output before each registry ID, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
Specify the IP addresses for registry access.
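A hedged sketch of allowing pulls and pushes only from a specific address range with the YC CLI (the registry ID and CIDR are placeholders; verify the flags against the current CLI reference):
  yc container registry add-ip-permissions \
    --id <registry ID> \
    --pull 203.0.113.0/24 \
    --push 203.0.113.0/24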
3.29 Requirements for application protection in Yandex Container Registry are met
3.29.1. Docker images are scanned when uploaded to Yandex Container Registry
Auto scans of Docker images on push are critical for early detection and elimination of vulnerabilities to ensure secure deployment of containers. Reports on completed scans provide a brief description of detected vulnerabilities and issues and help you set priorities and eliminate security risks in containerized applications.
- In the management console, select the folder the registry with Docker images belongs to.
- Select the appropriate registry in Container Registry.
- Navigate to the Vulnerability scanner tab and click Edit settings.
- Make sure Docker image scans on push are enabled.
Guides and solutions to use:
Guide on scanning Docker images on push.
3.29.2 Docker images stored in Container Registry are regularly scanned
Scheduled scanning of Docker images is an automated process that checks containerized images for vulnerabilities and compliance with security standards. Such scans are regular and automatic to ensure the consistency of image checks for vulnerabilities and maintain a high security level in the long run. Reports on completed scans provide a brief description of detected vulnerabilities and issues and help you set priorities and eliminate security risks in containerized applications.
We recommend setting up a schedule for scans to be run at least once a week.
- In the management console, select the folder the registry with Docker images belongs to.
- Select the appropriate registry in Container Registry.
- Navigate to the Vulnerability scanner tab and click Edit settings.
- Make sure that scheduled Docker image scans are enabled with a frequency of at least once a week.
Guides and solutions to use:
Guide on scheduled scanning of Docker images.
3.29.3 Artifact integrity is ensured
Signing artifacts enhances security by ensuring your software's validity, integrity, reliability, and compliance with the requirements.
Make sure that artifacts are signed while building an application.
Guides and solutions to use:
To sign artifacts within a pipeline, you can use Cosign.
A special build of Cosign allows you to store the created digital signature key pair in Yandex Key Management Service, sign files and artifacts with the private key of the pair, and verify a digital signature using its public key.
To learn more, see Signing and verifying Container Registry Docker images in Yandex Managed Service for Kubernetes.
Yandex Container Solution
3.30 Privileged containers are not used in Yandex Container Solution
We do not recommend using privileged containers to run loads that process untrusted user input. Privileged containers should only be used for administering VMs or other containers.
- In the management console, select the cloud or folder to check the VMs in.
- In the list of services, select Compute Cloud.
- Open the settings of a specific VM with a Container Optimized Image.
- In the Docker container's Settings, find the Privileged mode parameter.
- If it is disabled, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Run the command below to search for VMs running privileged Docker containers:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for VM_ID in $(yc compute instance list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc compute instance get --id=$VM_ID --full --format=json | jq -r '. | select(.metadata."docker-container-declaration") | .metadata."docker-container-declaration" | match("privileged: true") | .string' && echo $VM_ID
      done
    done
  done
- If privileged: true is not output before any VM ID, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
- In the management console, select the cloud or folder to check the VMs in.
- In the list of services, select Compute Cloud.
- Open the settings of a specific VM with a Container Optimized Image.
- In the Docker container's Settings, disable the Privileged mode parameter.
3.31 The Yandex Certificate Manager certificate is valid for at least 30 days
You can use Yandex Certificate Manager to manage TLS certificates for your API gateways in the API Gateway, as well as your websites and buckets in Object Storage. Application Load Balancer is integrated with Certificate Manager for storing and installing certificates. We recommend that you use Certificate Manager to obtain your certificates and rotate them automatically.
When using TLS in your application, we recommend that you limit the list of your trusted root certificate authorities (root CA).
When using certificate pinning, keep in mind that Let's Encrypt certificates are valid for 90 days.
We recommend that you update certificates in advance if they are not updated automatically.
- In the management console, select the cloud or folder to check the certificates in.
- In the list of services, select Yandex Certificate Manager.
- Open the settings of each certificate and find the End date parameter.
- If the parameter shows that the certificate will be valid for at least 30 days more, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
- See what organizations are available to you and write down the ID you need:
  yc organization-manager organization list
- Search for your organization's certificates and their end dates:
  export ORG_ID=<organization ID>
  for CLOUD_ID in $(yc resource-manager cloud list --organization-id=${ORG_ID} --format=json | jq -r '.[].id'); do
    for FOLDER_ID in $(yc resource-manager folder list --cloud-id=$CLOUD_ID --format=json | jq -r '.[].id'); do
      for CERT_ID in $(yc certificate-manager certificate list --folder-id=$FOLDER_ID --format=json | jq -r '.[].id'); do
        yc certificate-manager certificate get --id $CERT_ID --format=json | jq -r '. | "Date of the end " + .not_after + " --- Cert_ID " + .id'
      done
    done
  done
- If every certificate's end date is at least 30 days away, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use:
Update the certificate or set up auto updates.
Yandex Managed Service for GitLab
3.32 GitLab instance security setup guidelines are followed
See the recommendations here.
Run a manual check.
3.33 Requirements for application protection in GitLab are met
3.33.1 Protected secure pipeline templates are used
When working with Managed Service for GitLab, make sure you use built-in GitLab security mechanisms to secure your pipeline. You can integrate a pipeline into your projects in the following ways:
- Creating a pipeline in an individual project and connecting it to other projects using the include function. This option is available for all license types.
- Using the Compliance framework and pipeline mechanism that you can run in any group project. It is available for the Ultimate license.
- Copying pipeline sections to .gitlab-ci.yml files in your projects.
3.33.2 Approval rules are configured
With Yandex Managed Service for GitLab, you can flexibly set up mandatory approval rules for adding code to the target project branch. This feature is an alternative to the Approval Rules functionality of GitLab Enterprise Edition.
If a GitLab instance has the approval rules enabled, Managed Service for GitLab analyzes approvals from reviewers for compliance with the specified rules. If there are not enough approvals, a thread is created in a merge request that blocks it from being merged to the target branch. Editing the merge request creates or updates a comment in the thread with its current compliance status. Once all the required approvals are obtained, the thread is closed.
If you close a thread manually, it will be created again. If a merge request is approved regardless of the existing rules, users with the Maintainer role or higher will receive an email notification about the violated code approval workflow.
- In the management console, select the folder where your GitLab instance is located.
- In the list of services, select Managed Service for GitLab.
- Select the instance you need and click Edit in the top-right corner of the page.
- Make sure a configured approval rule set is selected in the Approval rules field.
Guides and solutions to use:
Enabling approval rules in the GitLab instance
3.34 Yandex Managed Service for Kubernetes security guidelines are used
Check the recommendations in Kubernetes security requirements.
3.35 OS Login is used for connection to a VM or Kubernetes node
OS Login is a convenient way to manage SSH connections to VMs and Yandex Managed Service for Kubernetes cluster nodes, either through the YC CLI or a standard SSH client using an SSH certificate or SSH key that you first add to the OS Login profile of an organization user or service account in Yandex Cloud Organization.
OS Login links the account of a virtual machine or Kubernetes node user with that of an organization or service account user. To manage access to virtual machines and Kubernetes nodes, enable the OS Login access option at the organization level and then activate OS Login access on each virtual machine or Kubernetes node separately.
Thus, you can easily manage access to virtual machines and Kubernetes nodes by assigning appropriate roles to users or service accounts. If you revoke the roles from a user or service account, they will lose access to all virtual machines and Kubernetes nodes with OS Login access enabled.
Guides and solutions to use:
- Enabling OS Login access at the organization level.
- Setting up OS Login access on an existing VM.
- Connecting to a virtual machine via OS Login.
- Connecting to a Kubernetes node via OS Login.
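Once OS Login is enabled for the organization and on the VM, a hedged sketch of connecting over SSH through the YC CLI (the VM name is a placeholder):
  yc compute ssh --name <VM name>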
3.36 Vulnerability scanning is performed at the cloud IP level
We recommend that customers scan their hosts for vulnerabilities themselves. Cloud resources support the installation of custom virtual images of vulnerability scanners or software agents on hosts. There are many paid and free scanning solutions on the market.
Network scanners scan hosts that are accessible over a network. Generally, authentication can be configured on network scanners.
Examples of free network scanners:
Example of a free scanner operating as an agent on hosts: Wazuh
You can also use a solution from Cloud Marketplace.
Run a manual check.
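A minimal sketch of such a manual check with a free network scanner (nmap and its vulnerability scripts). The target address is a placeholder; scan only hosts within your own scope of responsibility and in line with the external scan rules (see 3.37).

```bash
# Sketch: basic network vulnerability scan of your own host with nmap.
# 203.0.113.10 is a placeholder; scan only hosts you are responsible for.
sudo apt-get install -y nmap              # Debian/Ubuntu
nmap -sV --script vuln 203.0.113.10       # service detection + vulnerability NSE scripts
```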
3.37 External security scans are performed according to the Yandex Cloud rules
Customers hosting their own software in Yandex Cloud can perform external security scans for the hosted software, including penetration tests. You can run your own scans or use contractors. For more information, see Rules for performing external security scans.
Run a manual check.
3.38 The security updates process has been set up
Customers must perform security updates themselves within their scope of responsibility. Various automated tools are available for centralized automated OS and software updates.
Yandex Cloud publishes security bulletins to notify customers of newly discovered vulnerabilities and security updates.
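One possible setup, sketched below for Debian/Ubuntu hosts, is unattended-upgrades; other OS families have their own tools (for example, dnf-automatic on RHEL-based systems).

```bash
# Sketch: automatic security updates on Debian/Ubuntu via unattended-upgrades.
sudo apt-get update
sudo apt-get install -y unattended-upgrades

# Enable the daily package-list refresh and unattended upgrade run.
sudo tee /etc/apt/apt.conf.d/20auto-upgrades >/dev/null <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF

# Dry run to confirm which updates would be applied.
sudo unattended-upgrade --dry-run --debug
```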
Backups
3.39 Cloud Backup or scheduled snapshots are used
Make sure to back up all VMs in your organization using one of these options:
- Scheduled snapshots
- Cloud Backup
- In the management console, select the cloud or folder to check the VMs in.
- In the list of services, select Compute Cloud.
- Make sure that the scheduled snapshot policy is set up on the VMs.
- In the list of services, select Cloud Backup.
- Make sure that Cloud Backup is enabled for the VMs.
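A hedged CLI sketch for this check: list scheduled snapshot policies and Cloud Backup policies in a folder. Verify the yc backup command set with its built-in help; Cloud Backup must be activated in the folder first.

```bash
# Sketch: check backup coverage in a folder from the CLI.
export FOLDER_ID=<folder ID>

# Scheduled snapshot policies in the folder.
yc compute snapshot-schedule list --folder-id=$FOLDER_ID

# Cloud Backup policies; verify the exact subcommands with `yc backup --help`.
yc backup policy list --folder-id=$FOLDER_ID
```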
Yandex API Gateway
An API gateway is an interface for working with services in Yandex Cloud or on the internet. A gateway is specified declaratively using the OpenAPI 3.0 specification.
3.40 Access management in API Gateway is configured
Yandex Cloud users can only perform operations on resources that are allowed by the roles assigned to them. With no roles assigned, a user cannot perform most operations.
Yandex Identity and Access Management checks all operations in Yandex Cloud. If an entity does not have the required permissions, the service returns an error.
Make sure that the Yandex Cloud user has access to the API Gateway resources. To do this, the user needs the appropriate roles. Roles for an API gateway can be issued by users with the api-gateway.admin role or one of the following roles:
- admin
- resource-manager.admin
- organization-manager.admin
- resource-manager.clouds.owner
- organization-manager.organizations.owner
- In the management console, select the cloud and folder to check the API gateway access in.
- Click the Access permissions tab.
- Make sure that users have the roles required to access the gateway.
You can also assign a role for an API gateway via the Yandex Cloud CLI or API.
To learn more about roles in API Gateway, see Roles existing in this service.
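A hedged sketch of assigning a role on a specific API gateway from the CLI. The user ID is a placeholder, and the access-binding subcommand names should be verified with the CLI help.

```bash
# Sketch: review and grant roles on a specific API gateway.
export GW_ID=<API gateway ID>

# Who already has access to this gateway.
yc serverless api-gateway list-access-bindings --id=$GW_ID

# Grant the minimal read-only role to a user (placeholder user ID);
# verify the subcommand names with `yc serverless api-gateway --help`.
yc serverless api-gateway add-access-binding --id=$GW_ID \
  --role api-gateway.viewer \
  --subject userAccount:<user ID>
```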
3.41 Networking is configured in API Gateway
By default, an API gateway resides in an isolated IPv4 network with a NAT gateway enabled. For this reason, it can only access public IPv4 addresses.
For the gateway to have access not only to the internet but also to your own resources, specify the cloud network those resources reside in when configuring the API gateway.
A cloud network must meet the following conditions:
- Has subnets in all availability zones.
- Has at least one resource with an IP address in the specified cloud network.
Note
If the network does not meet the conditions above, the service does not guarantee its operation.
If you specify a network in the API gateway settings, this will create an auxiliary subnet with addresses from the 198.19.0.0/16
range in each availability zone. The API gateway will get an IP address from the respective subnet and will have access to all network resources.
Note
You can only specify a single network for functions, containers, and API gateways that reside in the same cloud.
- In the management console, select the folder containing the API gateway.
- From the list of services, select API Gateway.
- Select the API gateway you need from the list.
- Make sure the cloud network is specified in the Overview section.
Guides and solutions to use:
If the API gateway does not require access to resources in the specified cloud network, remove that network from the gateway settings. For more information, see Updating an API gateway.
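A hedged CLI sketch for the check above: print the network attached to a gateway. The connectivity field name in the JSON output is an assumption; verify it with the CLI help.

```bash
# Sketch: check whether a cloud network is attached to the API gateway.
# The "connectivity" field name is an assumption; verify it with
# `yc serverless api-gateway get --help`.
export GW_ID=<API gateway ID>
yc serverless api-gateway get --id=$GW_ID --format=json \
  | jq '{name: .name, network_id: .connectivity.network_id}'
```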
3.42 Recommendations for using custom domains are followed
API Gateway is integrated with the Certificate Manager domain management system.
If you use your own domains with confirmed rights in API Gateway when accessing the API:
- Regularly check the validity of the TLS certificate linked to your domain.
- Use TLS version 1.2 or higher.
- Use additional protection tools, such as intrusion detection and DDoS protection systems.
Run a manual check of the TLS version and the validity of the TLS connection certificate.
For more information about domains, see Integration of the domain management system with Yandex Cloud services.
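A minimal sketch of this manual check with openssl; the domain name is a placeholder.

```bash
# Sketch: manual TLS check for a custom domain (placeholder domain name).
DOMAIN=api.example.com

# Certificate subject and validity period.
echo | openssl s_client -connect $DOMAIN:443 -servername $DOMAIN 2>/dev/null \
  | openssl x509 -noout -subject -dates

# Legacy TLS 1.1 should be rejected, TLS 1.2 accepted.
echo | openssl s_client -connect $DOMAIN:443 -tls1_1 2>&1 | grep -Ei 'error|alert|no protocols' || true
echo | openssl s_client -connect $DOMAIN:443 -tls1_2 2>/dev/null | grep 'Protocol'
```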
3.43 Recommendations for using Websocket are followed
For two-way asynchronous communication between clients and an API gateway, API Gateway supports the WebSocket protocol.
You can manage web sockets using the API that receives information about a connection, sends data to the client side, and closes the connection.
We recommend that you use the following when connecting to the API gateway via WebSocket:
- TLS version 1.2 or higher (regularly check the validity of the TLS connection certificate).
- OpenAPI 3.0 authentication and authorization mechanisms.
- API gateway specification extensions, which can help you enhance your virtual environment security.
- In the management console, select the folder containing the API gateway.
- From the list of services, select API Gateway.
- Select the API gateway you need from the list.
- Set up integrations in the OpenAPI specification using the following operations: x-yc-apigateway-websocket-message, x-yc-apigateway-websocket-connect, or x-yc-apigateway-websocket-disconnect (see the sketch below).
For more information, see Working with an API gateway via WebSocket.
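A hedged sketch of a specification fragment that routes WebSocket events to a Cloud Function. The extension names come from this section; the integration fields (type, function_id, service_account_id) are assumptions to verify against the API Gateway extension reference.

```bash
# Sketch: OpenAPI 3.0 fragment routing WebSocket events to a Cloud Function.
# The integration fields below are assumptions; verify them against the
# x-yc-apigateway-integration reference before applying.
cat > ws-spec-fragment.yaml <<'EOF'
paths:
  /ws:
    x-yc-apigateway-websocket-connect:
      x-yc-apigateway-integration:
        type: cloud_functions
        function_id: <function ID>
        service_account_id: <service account ID>
    x-yc-apigateway-websocket-message:
      x-yc-apigateway-integration:
        type: cloud_functions
        function_id: <function ID>
        service_account_id: <service account ID>
EOF
```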
3.44 API gateway interaction with Yandex Cloud services is configured
Make sure that security enhancement extensions were added to the API Gateway specification.
- In the management console, select the folder containing the API gateway.
- From the list of services, select API Gateway.
- Select the API gateway you need from the list.
- Make sure that security enhancement extensions are added to the OpenAPI 3.0 specification in the Specification section.
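A hedged CLI sketch for the same check: dump the gateway's current specification and look for the x-yc-apigateway extensions. The get-spec subcommand name should be verified with the CLI help.

```bash
# Sketch: dump the current specification and look for security extensions.
# Verify the get-spec subcommand with `yc serverless api-gateway --help`.
export GW_ID=<API gateway ID>
yc serverless api-gateway get-spec --id=$GW_ID | grep -n 'x-yc-apigateway'
```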
3.45 API gateway security is enhanced with extensions
The x-yc-apigateway:smartWebSecurity
extension uses Yandex Smart Web Security profile rules with conditions for actions to apply to HTTP requests received by the protected resource:
- The basic rules block unwanted traffic.
- The Smart Protection rule applied to all traffic provides the most complete and transparent protection.
- Advanced Rate Limiter sets limits on the number of requests, reducing the load on web apps and protecting the backend from resource exhaustion.
- The WAF profile analyzes a web app's incoming HTTP requests based on preconfigured rules for DoS/DDoS protection.
- In the management console, select the folder containing the API gateway.
- From the list of services, select API Gateway.
- Select the API gateway you need from the list.
- Make sure the Specification section uses the
x-yc-apigateway:smartWebSecurity
extension, which protects the API gateway as well as your application, function, or container from DDoS attacks based on the Yandex Smart Web Security profile rules.
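A hedged sketch of the top-level specification fragment that attaches a Smart Web Security profile to the gateway. The nesting and the securityProfileId field name are assumptions; verify them against the extension reference.

```bash
# Sketch: top-level spec fragment attaching a Smart Web Security profile.
# The nesting and field name are assumptions; verify against the reference.
cat > sws-fragment.yaml <<'EOF'
x-yc-apigateway:
  smartWebSecurity:
    securityProfileId: <Smart Web Security profile ID>
EOF
```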
3.46 Authorization in the API gateway is configured
We recommend using the OpenAPI 3.0 authentication and authorization mechanisms that are standard for API Gateway. Currently, you can use authorization via a function and via a JWT.
- Authorization via Cloud Functions. For HTTP request authorization, API Gateway calls the x-yc-apigateway-authorizer:function extension, which currently supports three types: HTTP Basic, HTTP Bearer, and API Key (see the spec sketch after the check steps below).
- Authorization via a JWT. For HTTP request authorization, API Gateway validates the token and verifies its signature using public keys; the configurable settings include the key address, token location, fields, body, time, caching mode, and cache storage period.
- In the management console, select the folder containing the API gateway.
- From the list of services, select API Gateway.
- Select the API gateway you need from the list.
- Make sure that the Specification section has the x-yc-apigateway-authorizer:jwt or x-yc-apigateway-authorizer:function extension configured.
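A hedged sketch of a securitySchemes fragment with a function-based authorizer. The field names (type, function_id, service_account_id, authorizer_result_ttl_in_seconds) are assumptions to verify against the x-yc-apigateway-authorizer reference.

```bash
# Sketch: securitySchemes fragment declaring a function-based authorizer.
# Field names are assumptions; verify them against the
# x-yc-apigateway-authorizer reference before use.
cat > authorizer-fragment.yaml <<'EOF'
components:
  securitySchemes:
    functionAuth:
      type: http
      scheme: bearer
      x-yc-apigateway-authorizer:
        type: function
        function_id: <function ID>
        service_account_id: <service account ID>
        authorizer_result_ttl_in_seconds: 300
EOF
```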
3.47 Authorization context is used
We recommend using an authorization context in the request inside the requestContext.authorizer
field. This helps preserve data integrity and prevents unauthorized access.
Make sure an authorization context is configured in the API gateway specification settings when the x-yc-apigateway-authorizer:function
extension is used.
3.48 Logging is on
We recommend keeping logging enabled when creating an API gateway. For more information, see Writing to the execution log in API Gateway.
- In the management console, select the folder containing the API gateway.
- From the list of services, select API Gateway.
- Select the API gateway you need from the list.
- Make sure that the Write logs option is enabled in the Logging section and that the gateway logging level and destination are set up.
Use the audit logs delivered to Yandex Audit Trails for API gateway performance analysis.
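A hedged CLI sketch for this check: print the gateway's log settings from the JSON output. The log_options field name is an assumption; verify it with the CLI help.

```bash
# Sketch: check from the CLI that logging is enabled for the gateway.
# The "log_options" field name is an assumption; verify it with
# `yc serverless api-gateway get --help`.
export GW_ID=<API gateway ID>
yc serverless api-gateway get --id=$GW_ID --format=json \
  | jq '{name: .name, log_options: .log_options}'
```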