All Security Deck Rules
- CSPM — Cloud Security Posture Management
- Only authorized administrators manage memberships in user groups
- Service accounts have minimum privileges granted on the organization level
- Service accounts have minimum privileges granted on the service level
- Only trusted administrators have privileged roles
- Resource labels are used
- Yandex Object Storage uses bucket policies
- Cloud Backup or scheduled snapshots are used
- Permissions to manage keys in KMS are granted to authorized users
- Container images used in the production environment have the last scan date of one week ago or less
- There is no access to Kubernetes API
- Managed Service for Kubernetes® uses secure configuration
- Separate service accounts are used for cluster and node group
- Security events monitoring for a Yandex Managed Service for GitLab instance in progress
- The Yandex Audit Trails service is operating properly
- The Kubernetes security policy is in place
- User group mapping is configured in an identity federation
- Only trusted administrators have access to service accounts
- Access permissions of users and service accounts are regularly audited using the Yandex Security Deck CIEM
- ACL by IP address is set up for Yandex Container Registry
- No public access to the Object Storage bucket
- Public IP addresses are not assigned to virtual machines
- Access from the management console is disabled in managed databases
- The setting for access from DataLens is not active if not needed
- The minimum required scopes for service account API keys are defined
- Service roles are used instead of primitive roles: admin, editor, viewer
- OS Login is used for connection to a VM or Kubernetes node
- There is no public access to your organization's resources
- Service accounts have minimum privileges granted
- The serial console is either controlled or not used
- In Yandex Application Load Balancer, HTTPS is used
- API gateways use HTTPS and their own domains
- Yandex Cloud CDN uses HTTPS and its own SSL certificate
- Application DDoS protection is enabled (L7)
- Network DDoS protection is enabled (L3)
- Docker images are scanned when uploaded to Container Registry
- Advanced rate limiter is implemented
- Yandex SmartCaptcha is used
- Yandex Smart Web Security profile is used
- Web application firewall is implemented
- When creating a registry in Yandex Container Registry, keep the safe default registry settings
- Getting an IAM token through the metadata service in AWS IMDSv1 format is disabled on the VM
- Cloud Backup or scheduled snapshots are used
- The Yandex Certificate Manager certificate is valid for at least 30 days
- Deletion protection is enabled for KMS keys
- The Key Management Service keys are stored in a hardware security module (HSM)
- Key rotation is enabled for KMS keys
- Encryption of disks and virtual machine snapshots is used
- Service account keys are rotated on a regular basis
- The organization uses Yandex Lockbox for secure secret storage
- Lockbox secrets are used for Serverless Containers and Cloud Functions
- At-rest encryption with a KMS key is enabled in Yandex Object Storage
- HTTPS for static website hosting is enabled in Yandex Object Storage
- Deletion protection is enabled
- The cookie lifetime timeout in the federation is less than 6 hours
- Managed Service for Kubernetes uses secure configuration
- Access to Kubernetes components is limited by IP address, port, and protocol
- Audit log collection is set up for incident investigation
- Only authorized administrators manage memberships in user groups
- A security group is assigned in managed databases
- No public IP address is assigned in managed databases
- Cloud resources are protected by a firewall or security groups
- Security groups have no access rule that is too broad
- In Virtual Private Cloud, a security group is created; the default security group is not used
- Serverless Containers/Cloud Functions use the VPC internal network
- No public access to managed YDB
- Yandex Audit Trails is enabled at the organization level
- Data events are monitored
- The Object lock feature is enabled in Object Storage
- Access through control ports is only allowed for trusted IPs
- Access to Kubernetes components through control ports is only allowed for trusted IPs
- KSPM — Kubernetes® Security Posture Management
- Restrictive permissions for Kubelet service file are set
- Kubelet service file ownership is set to root:root
- Restrictive permissions for kubeconfig configuration file are set
- The owner of kubeconfig configuration file is set to root:root
- Restrictive permissions for Kubelet configuration file are set
- The owner of Kubelet configuration file is set to root:root
- Requests from anonymous users to Kubelet server are disabled
- Only explicitly authorized requests to Kubelet server are allowed
- Kubelet authentication via certificates is enabled
- Kubelet is allowed to manage iptables
- Kubelet client certificate rotation is enabled
This page provides a complete list of security rules used in Security Deck.
CSPM — Cloud Security Posture Management
Rules for checking cloud resource configuration.
Only authorized administrators manage memberships in user groups
| kind | severity | ID |
|---|---|---|
| manual | high | access.user-groups-access |
Description
Working in the cloud requires following the principle of least privilege and granting users no more permissions than they need to address their respective tasks.
Make sure to manage access permissions to a user group as a resource. Failing to do so may result in users getting excess permissions allowing them to manage the membership of other users in the group.
This check detects cases where users get such permissions:
- The user has the `organization-manager.groups.memberAdmin` role for the organization.
- The user has the `organization-manager.groups.memberAdmin` role for a specific group as a resource.
- The user has the `organization-manager.organizations.owner` or `admin` role, or another privileged role, for the whole organization.
- The user has the `admin` or `editor` role for a specific group as a resource (this is not recommended).
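The detected cases can be sketched as a simple filter over access bindings. This is an illustrative sketch only: the binding structure (`subject`, `role`, `resource_type`) is a hypothetical shape, not the actual Yandex Cloud IAM API format.

```python
# Illustrative sketch: flag bindings that grant group-membership management.
# The binding dict shape is hypothetical, not a real IAM API response.
PRIVILEGED_ORG_ROLES = {"organization-manager.organizations.owner", "admin"}

def risky_group_bindings(bindings):
    """Return bindings matching any of the detected cases above."""
    risky = []
    for b in bindings:
        role, res = b["role"], b["resource_type"]
        if role == "organization-manager.groups.memberAdmin" and res in ("organization", "group"):
            risky.append(b)  # memberAdmin on the org or on a group as a resource
        elif res == "organization" and role in PRIVILEGED_ORG_ROLES:
            risky.append(b)  # owner/admin on the whole organization
        elif res == "group" and role in ("admin", "editor"):
            risky.append(b)  # admin/editor on a group as a resource
    return risky
```

A real check would fetch bindings via the IAM API; the function above only encodes the rule's conditions.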
Guides and solutions to use
- In the left-hand panel of the Cloud Center interface, select Groups and, in the list that opens, click the line with the group in question.
- Navigate to the Group access rights tab and enable the Inherited roles option.
- Follow the instructions for revoking a role for an organization or user group to take away permissions from unauthorized accounts.
Service accounts have minimum privileges granted on the organization level
| kind | severity | ID |
|---|---|---|
| automatic | information | access.sa-privileges-org-roles |
Description
Follow the principle of least privilege and assign to the service account only the roles necessary for the organization to run.
This rule detects service accounts with the following roles within the organization:
- `admin`
- `editor`
- `resource-manager.clouds.owner`
Guides and solutions to use
- Use Security Deck to revoke the service account's excessive access permissions.
- Revoke the excessive permissions from the service account using IAM.
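The check itself amounts to a set intersection between each service account's organization-level roles and the flagged list. A minimal sketch, assuming a hypothetical `{service_account: set_of_org_roles}` inventory mapping (not an actual IAM API structure):

```python
# Sketch of the org-level check: report service accounts holding any of the
# roles this rule flags. The input mapping shape is a hypothetical inventory,
# not an IAM API response.
FLAGGED_ORG_ROLES = {"admin", "editor", "resource-manager.clouds.owner"}

def overprivileged_service_accounts(sa_org_roles):
    return {
        sa: sorted(roles & FLAGGED_ORG_ROLES)
        for sa, roles in sa_org_roles.items()
        if roles & FLAGGED_ORG_ROLES
    }
```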
Service accounts have minimum privileges granted on the service level
| kind | severity | ID |
|---|---|---|
| automatic | information | access.sa-privileges-service-roles |
Description
Follow the principle of least privilege and assign to the service account only the roles necessary for the service to run.
This rule detects service accounts with the following roles within the service:
- `compute.admin`
- `storage.admin`
- `iam.serviceAccounts.admin`
- `vpc.admin`
- `k8s.admin`
- `lockbox.admin`
- `kms.admin`
Guides and solutions to use
- Use Security Deck to revoke the service account's excessive access permissions.
- Revoke the excessive permissions from the service account using IAM.
Only trusted administrators have privileged roles
| kind | severity | ID |
|---|---|---|
| manual | medium | access.check-privileged-roles |
Description
Manual verification
This rule automatically finds accounts with any of these roles assigned:
- `billing.accounts.owner`
- `admin` assigned for a billing account
- `organization-manager.organizations.owner`
- `organization-manager.admin`
- `resource-manager.clouds.owner`
- `admin` and `editor` assigned for an organization
- `admin` and `editor` assigned for a cloud
- `admin` and `editor` assigned for a folder
- `resource-manager.clouds.editor`
This rule requires an additional manual check. Upon completion, please change the rule's status.
When creating your billing account, you get the billing.accounts.owner role automatically. Any user with the billing.accounts.owner role can remove this role from the billing account creator and change the owner. The role allows you to perform any action with the billing account.
The billing.accounts.owner role can only be assigned to a Yandex ID account. An account with the billing.accounts.owner role is used when setting up payment methods and adding clouds.
Make sure to properly secure this account: it offers significant privileges and cannot be federated with a corporate account.
The most appropriate approach would be to not use this account on a regular basis:
- Only use it for initial setup and updates.
- When actively using this account, enable two-factor authentication (2FA) in Yandex ID.
- After that, if you do not use the bank card payment method (only available for this role), set a strong password for this account (generated using specialized software), disable 2FA, and refrain from using this account unnecessarily.
- Change the password to a newly generated one each time you use the account.
We recommend disabling 2FA only for this account and if it is not assigned to a specific employee. Thus you can avoid linking this critical account to a personal device.
To manage a billing account, assign the admin or editor role for the billing account to a dedicated employee with a federated account.
To view billing data, assign the viewer role for the billing account to a dedicated employee with a federated account.
By default, the organization-manager.organizations.owner role is granted to the user who creates an organization: the organization owner. The role allows you to appoint organization owners and use all the administrator privileges.
The resource-manager.clouds.owner role is assigned automatically when you create your first cloud in the organization. A user with this role can perform any operation with the cloud or its resources and grant cloud access to other users: assign roles and revoke them.
Assign the resource-manager.clouds.owner and organization-manager.organizations.owner roles to one or more employees with a federated account. Set a strong password for the Yandex ID account that was used to create the cloud, and use it only when absolutely necessary (for example, if the federated access fails).
Make sure to fully protect your federated account that is granted one of the privileged roles listed above:
- Enable two-factor authentication.
- Disable authentication from devices beyond the company's control.
- Configure login attempt monitoring and set alert thresholds.
Assign federated accounts the admin roles for clouds, folders, and billing accounts. Minimize the number of accounts with these roles and regularly review whether the accounts they are assigned to still need them.
Guides and solutions to use
Check access rights for the Yandex Cloud Billing service:
- Go to Yandex Cloud Billing.
- In the left-hand panel, select Access management.
- Check which users have the `billing.accounts.owner` and `admin` roles.
Check access rights for an organization:
- Go to Yandex Identity Hub.
- In the left-hand panel, select Access bindings.
- Check which users have the `admin`, `organization-manager.organizations.owner`, `organization-manager.admin`, and `resource-manager.clouds.owner` roles.
Check access rights for a cloud or a folder:
- In the management console, select the cloud or folder to check access permissions in.
- Click the Access permissions tab.
- Check which users have the `admin`, `editor`, `resource-manager.clouds.owner`, and `resource-manager.clouds.editor` roles.
Make sure all the privileged roles are granted to trusted administrators. If any roles granted to untrusted administrators are found, investigate why and remove the respective permissions.
Resource labels are used
| kind | severity | ID |
|---|---|---|
| manual | information | o11y.labeled-resources |
Description
This rule checks labels on folder level and lists the folders missing the labels.
A label is a key-value pair in <label_name>=<label_value> format. You can use labels to break resources into logical groups and to monitor data streams and tag critical resources for privilege management.
Labels are crucial for structuring your infrastructure and taking inventory of it by attribute. This is especially important when there are many resources and they are dynamically created and deleted.
Guides and solutions to use
You can add, delete, or update resource labels in the management console, the Yandex Cloud CLI, or Terraform. For more information, read the Managing labels guide.
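Before attaching labels, it can help to validate them programmatically. The sketch below parses a label in the `<label_name>=<label_value>` format described above; the character and length constraints in the regex are illustrative assumptions, not the service's exact validation rules.

```python
import re

# Minimal sketch: parse and sanity-check a label of the form
# <label_name>=<label_value>. The regex below (lowercase start, up to
# 63 chars, [a-z0-9_-]) is an assumed constraint for illustration only.
LABEL_RE = re.compile(r"^[a-z][a-z0-9_-]{0,62}$")

def parse_label(label):
    name, sep, value = label.partition("=")
    if not sep or not LABEL_RE.match(name):
        raise ValueError(f"invalid label: {label!r}")
    return name, value
```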
Yandex Object Storage uses bucket policies
| kind | severity | ID |
|---|---|---|
| manual | high | access.bucket-access-policy |
Description
Manual verification
This rule automatically finds buckets with no bucket policy applied.
This rule requires a manual check. Upon completion, please change the rule's status.
Bucket policies set permissions for actions with buckets, objects, and object groups. A policy applies when a user makes a request to a resource. As a result, the request is either executed or rejected.
Bucket policy examples:
- Policy that only enables object download from a specified range of IP addresses.
- Policy that prohibits downloading objects from the specified IP address.
- Policy that provides different users with full access only to certain folders, with each user being able to access their own.
- Policy that gives each user and service account full access to a folder named the same as the user ID or service account ID.
We recommend making sure that your Object Storage bucket uses at least one policy.
Guides and solutions to use
- In the management console, select the cloud or folder containing the bucket whose policies you want to check.
- Go to Object Storage and select the bucket in question.
- In the left-hand menu, select Security and go to the Access policy tab.
- If at least one policy is enabled, the rule is considered satisfied. Otherwise, configure an access policy for the bucket.
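As an illustration of the first policy example above (allowing object download only from a specified IP range), a policy document might look like the following. This is a sketch assuming the S3-compatible bucket policy schema; the bucket name and CIDR range are placeholders.

```python
import json

# Sketch of a bucket policy that allows object download only from one IP
# range. Assumes the S3-compatible policy document schema; "example-bucket"
# and 203.0.113.0/24 are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            # Restrict the Allow statement to a single source IP range.
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}
print(json.dumps(policy, indent=2))
```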
Cloud Backup or scheduled snapshots are used
| kind | severity | ID |
|---|---|---|
| automatic | high | backup.compute-disks |
Description
This rule lists virtual machines that do not have a backup policy configured.
It is important to configure backups, as they are the only practical way to restore a VM's operation after data loss or corruption. Without backups, any such incident leads to unrecoverable loss and operational downtime.
In the cloud, there are two options to back up VMs:
- Scheduled snapshots
- Cloud Backup
Guides and solutions to use
Backups in Compute Cloud include snapshots of disks attached to VMs and the use of Yandex Cloud Backup.
Cloud Backup is a service for creating backups and restoring Yandex Cloud resources and their data.
You can connect to Cloud Backup either a new Yandex Compute Cloud VM as soon as it is created, or an existing VM with active and configured apps, resources, and data.
For Cloud Backup to be able to back up and restore a VM, the VM must be associated with a backup policy.
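The check behind this rule is essentially a set difference: VMs covered neither by a Cloud Backup policy nor by scheduled snapshots. A minimal sketch, where the three collections of VM IDs are hypothetical inventory data rather than API calls:

```python
# Sketch of the check this rule performs: list VMs that are neither
# associated with a Cloud Backup policy nor covered by scheduled snapshots.
# The input collections of VM IDs are hypothetical inventory data.
def unprotected_vms(all_vms, backup_policy_vms, scheduled_snapshot_vms):
    return sorted(set(all_vms) - set(backup_policy_vms) - set(scheduled_snapshot_vms))
```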
Permissions to manage keys in KMS are granted to authorized users
| kind | severity | ID |
|---|---|---|
| manual | medium | access.kms-keys-access |
Description
To minimize the risk of compromising user account credentials, it is recommended to grant users and service accounts granular permissions for particular keys in Yandex Key Management Service. For more information, see Access management in Key Management Service.
This rule checks access permissions for KMS keys and returns all the users that are assigned either of the following roles:
- `admin`, `editor`, `kms.admin`, `kms.editor`, or `kms.keys.encrypterDecrypter` for an organization, clouds, or folders.
- `kms.keys.encrypterDecrypter` or `kms.editor` for KMS keys.
Guides and solutions to use
It is recommended to follow these principles when granting permissions for KMS keys:
- To access Yandex Key Management Service, you need an IAM token.
- To automate operations with KMS, we recommend that you create a service account and run commands and scripts under it. If you use VMs, get an IAM token for your service account using the mechanism of assigning a service account to your VM. For other ways to get an IAM token for your service account, see the Yandex Identity and Access Management documentation, Getting an IAM token for a service account.
- We recommend that you grant granular permissions for specific keys in the KMS service to your users and service accounts. For more information, see the KMS documentation, Access management in Key Management Service.
For more information about security measures for access control, see Authentication and access control.
Container images used in the production environment have the last scan date of one week ago or less
| kind | severity | ID |
|---|---|---|
| manual | medium | appsec.registry-recently-scan |
Description
Making sure that Docker images used in production environments were scanned no more than a week ago ensures that you continuously monitor and update security measures, eliminating potential vulnerabilities that might have appeared since the last scan. It also helps you avoid deploying containers with recently detected vulnerabilities, raising the overall security level.
You can automate this process by setting up a schedule.
Guides and solutions to use
Set up automatic, scheduled scanning of Docker images for vulnerabilities.
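The freshness condition itself is a simple timestamp comparison. A sketch of the one-week check, with illustrative timestamps (a real check would read the last scan date from the registry's scan results):

```python
from datetime import datetime, timedelta, timezone

# Sketch of the freshness check: an image passes if its last vulnerability
# scan is at most one week old. Timestamps here are illustrative.
def scanned_recently(last_scan_at, now=None, max_age=timedelta(weeks=1)):
    now = now or datetime.now(timezone.utc)
    return now - last_scan_at <= max_age
```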
There is no access to Kubernetes API
| kind | severity | ID |
|---|---|---|
| automatic | medium | k8s.api-security |
Description
We do not recommend granting access to Kubernetes API from the internet. Use firewall protection where needed (for example, security groups).
Note
This rule checks only for external IP addresses on Kubernetes clusters.
Guides and solutions to use
It is recommended to use Kubernetes clusters that are not accessible from the internet. For guidance on creating such a cluster, see Creating and configuring a Kubernetes cluster with no internet access.
If a cluster must be accessible from the internet, configure it using these firewall options:
- Use network policy configuration tools via the Calico (basic) or Cilium CNI (advanced) plugins in Yandex Cloud. Apply `default deny` rules for inbound and outbound traffic by default, permitting only necessary traffic.
- For online endpoints, allocate an independent Kubernetes cluster or independent node groups (using mechanisms such as Taints and Tolerations plus Node affinity). This creates a DMZ, limiting your attack surface so that if your nodes are compromised online, the impact is minimized.
- Use an Ingress resource to enable incoming network access to your workloads via HTTP/HTTPS. There are at least two Ingress controller options available in Yandex Cloud.
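A `default deny` network policy of the kind mentioned above can be expressed as a standard Kubernetes NetworkPolicy manifest. The sketch below builds it as a Python dict for illustration (you would normally keep this as YAML); the name and namespace are placeholders.

```python
import json

# Default-deny NetworkPolicy manifest as a Python dict (illustrative; the
# metadata values are placeholders). An empty podSelector matches every pod
# in the namespace, and listing both policyTypes with no ingress/egress
# rules denies all traffic by default; allow rules are then added as
# separate, more specific policies.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "production"},
    "spec": {
        "podSelector": {},
        "policyTypes": ["Ingress", "Egress"],
    },
}
print(json.dumps(default_deny, indent=2))
```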
Managed Service for Kubernetes® uses secure configuration
| kind | severity | ID |
|---|---|---|
| manual | medium | k8s.secure-configuration |
Description
Manual check
Please check if you have implemented controls for node group settings.
In Managed Service for Kubernetes, the user is fully in control of all node group settings but only partially in control of the master settings. The user is responsible for the whole cluster's security.
Guides and solutions to use
- Using the kube-bench tool, check whether the node group configuration is compliant with the CIS Kubernetes Benchmark. The tool officially supports Yandex Cloud node groups.
- Starboard Operator is a free tool that helps you automate scanning of images for vulnerabilities and checking that the configuration is compliant with the CIS Kubernetes Benchmark. Starboard Operator supports integration with kube-bench and is used for its automatic startup.
Separate service accounts are used for cluster and node group
| kind | severity | ID |
|---|---|---|
| manual | high | k8s.access |
Description
When creating a cluster in Managed Service for Kubernetes, specify two service accounts:
- Cluster service account: On behalf of this service account, Managed Service for Kubernetes manages cluster nodes, subnets for pods and services, disks, load balancers, encrypts and decrypts secrets.
- Node group service account: Under this service account, Managed Service for Kubernetes cluster nodes get authenticated in Yandex Container Registry or Yandex Cloud Registry. For other container registries, you do not need to assign roles to the service account.
Guides and solutions to use
Make sure that the access of IAM accounts to Managed Service for Kubernetes resources is managed at the following levels:
- Managed Service for Kubernetes service roles (access to the Yandex Cloud API). These allow you to control clusters and node groups (e.g., create a cluster, create/edit/delete a node group, and so on).
- Service roles required to access the Kubernetes API. These allow you to control cluster resources via the Kubernetes API (e.g., perform standard actions with Kubernetes: create, delete, and view namespaces, work with pods and deployments, create roles, and so on). Only the basic global roles are available at the cluster level: `k8s.cluster-api.cluster-admin`, `k8s.cluster-api.editor`, or `k8s.cluster-api.viewer`.
- Primitive roles. These are global primitive IAM roles that comprise service roles (e.g., the primitive `admin` role comprises both the service administration role and the administration role for access to the Kubernetes API).
- Standard Kubernetes roles. Inside the Kubernetes cluster itself, the Kubernetes tools can help you create both regular roles and cluster roles. This lets you manage access for IAM accounts at the namespace level. To assign IAM roles at the namespace level, you can manually create RoleBinding objects in the relevant namespace, stating the cloud user's IAM ID in the subject's `name` field.
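A namespace-level RoleBinding of the kind described in the last item might look like the sketch below, built as a Python dict for illustration. The namespace, binding name, and IAM user ID are placeholders; the built-in `view` ClusterRole is used here only as an example of a role to grant.

```python
# Sketch of a namespace-level RoleBinding that grants an IAM user the
# built-in "view" role in one namespace. The namespace, binding name, and
# IAM user ID below are placeholders.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "iam-user-view", "namespace": "dev"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "view",
    },
    "subjects": [
        # The cloud user's IAM ID goes in the subject's name field.
        {"kind": "User", "name": "aje1234567890example"},
    ],
}
```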
Security events monitoring for a Yandex Managed Service for GitLab instance in progress
| kind | severity | ID |
|---|---|---|
| automatic | medium | o11y.gitlab-audited |
Description
The rule checks whether log collection is set up for a Yandex Managed Service for GitLab instance.
The core log collection tool is Yandex Audit Trails. It enables collecting audit logs about events occurring to Yandex Cloud resources and uploading these logs to an Object Storage bucket or a log group in Cloud Logging for further analysis or export.
Audit Trails events in Managed Service for GitLab are control plane events, which include creating, deleting, and modifying an instance, as well as execution events and more. For more information, see the Audit Trails reference.
Guides and solutions to use
Set up log collection using Yandex Audit Trails:
- Create a bucket with restricted access.
- Assign the required roles to the service accounts.
- Create a trail.
The Yandex Audit Trails service is operating properly
| kind | severity | ID |
|---|---|---|
| automatic | medium | o11y.audit-trails-no-errors |
Description
The Yandex Audit Trails service check helps promptly detect audit log collection failures, which is essential for continuous security monitoring and compliance with audit requirements. Unavailability or malfunction of Audit Trails may lead to the loss of cloud operations data, reducing transparency and increasing the risk of undetected incidents.
This check returns the list of trails within the organization that are currently in the Error status.
Recommendations
Guides and solutions to use:
If a trail enters the Error state, create a temporary trail with the same audit log collection scope and an appropriate destination object. This helps prevent interruptions in audit log collection and potential data loss. For more information, see Creating a trail to upload audit logs.
Once a new trail has been created, you can begin restoring the operation of the existing one that is in the Error state. You can do this independently or with assistance from our technical support.
The Kubernetes security policy is in place
| kind | severity | ID |
|---|---|---|
| automatic | high | k8s.kspm |
Description
Kubernetes Security Posture Management (KSPM) ensures the security of containerized applications and images they use.
The KSPM module automatically identifies all Kubernetes clusters and containers in the specified workspace, and deploys security components in them as defined in the configuration. New clusters automatically get security coverage, without requiring manual search or installation of any components.
The module continuously assesses workloads for misconfigurations and provides runtime security monitoring through sensors that detect attacks targeting nodes and containers.
The KSPM configuration is set when you create a workspace and may include checking clusters for compliance with the following standards:
- Kubernetes Pod Security Standards (Restricted): This standard contains security controls based on the Kubernetes Pod Security Standards (PSS) Restricted profile. The Restricted profile is the most secure and provides the highest detection efficiency for container-based attacks. It applies strict security policies that may require modifying applications to ensure compliance. The Restricted profile is recommended for security-critical applications and environments where maximum security is required.
- Kubernetes Pod Security Standards (Baseline): This standard contains security controls based on the Kubernetes Pod Security Standards (PSS) Baseline profile. The Baseline profile is designed for easy implementation and provides common best practices for container security. It prevents the most common security issues in containers while maintaining compatibility with most applications. The Baseline profile is a good starting point for organizations just getting started with container security.
- Microsoft Threat Matrix for Kubernetes: This standard contains security controls based on the Microsoft Threat Matrix for Kubernetes, a framework that helps security teams understand and fend off threats specific to Kubernetes environments. It provides a comprehensive approach to attack methods and defensive strategies tailored for container orchestration platforms.
- CIS Kubernetes Benchmark: This standard includes recommendations from the CIS Kubernetes Benchmark for secure configuration of Kubernetes worker node components. Only automatic checks from section 4, Worker Nodes, are included.
Recommendations
Guides and solutions to use:
Use the KSPM module to protect Kubernetes clusters and containers in your workspace:
- Create a service account KSPM will use to view Managed Service for Kubernetes cluster info, install the necessary components, and perform checks.
- Assign to the service account the `security-deck.worker` role for the organization, cloud, or folder.
- Create a Security Deck workspace, specify the clouds and folders where you want to control cluster security, and select the industry standards and regulations to benchmark the resources you have chosen against.
- On the new workspace page, click Workspace Parameters and navigate to the KSPM tab.
- Under Scope of control, select the clouds, folders, or clusters within the workspace resources where compliance with the Kubernetes security rules will be enforced.
- Click Save and confirm the action.
For more information, see Activating KSPM.
User group mapping is configured in an identity federation
| kind | severity | ID |
|---|---|---|
| automatic | low | access.user-groups-mapping |
Description
In organizations with a lot of users, you may need to grant the same access permissions for Yandex Cloud resources to multiple users at once. In this case, it is more efficient to grant roles and permissions to groups rather than to individual users.
If you have created user groups in your identity provider or plan to do so, you can map user groups between the IdP and Yandex Identity Hub. Users in the identity provider's groups will be granted the same access permissions for Yandex Cloud resources as their respective groups in Identity Hub.
Recommendations
Guides and solutions to use:
Configure group mapping between your identity provider and Yandex Identity Hub.
Only trusted administrators have access to service accounts
| kind | severity | ID |
|---|---|---|
| manual | information | access.privileged-sa-access |
Description
Note
This rule automatically identifies accounts that have access rights assigned for service accounts.
You can grant a user or another service account permission to use a service account.
Follow the principle of least privilege when granting access for a service account as a resource. A user with permission to use a service account also inherits all permissions assigned to that service account. Assign roles that allow the use and management of service accounts only to a minimal number of trusted users.
Each service account with extended permissions should be placed as a resource in a separate folder. This helps prevent accidentally granting permissions for a service account along with the permissions for the folder with the respective service component.
Recommendations
Guides and solutions to use:
Validate the access rights assigned for service accounts. The recommendation is considered satisfied if the list contains only trusted administrators. Otherwise, follow this guide to revoke any excessive permissions using the Identity and Access Management service.
To manage access centrally, use the CIEM module.
Access permissions of users and service accounts are regularly audited using the Yandex Security Deck CIEM
| kind | severity | ID |
|---|---|---|
| manual | information | access.check-bindings |
Description
Manual verification
The rule requires a manual check. Upon completion, please change the rule's status.
To ensure data and cloud infrastructure security, you need to regularly audit the access permissions of users and service accounts.
For more information, see Cloud Infrastructure Entitlement Management (CIEM).
Recommendations
Guides and solutions to use:
Use the Cloud Infrastructure Entitlement Management module to centrally view all access permissions granted to individual subjects and groups for organization resources, and to revoke any permissions that are excessive.
For a quick start with the CIEM module, refer to the guides below:
ACL by IP address is set up for Yandex Container Registry
| kind | severity | ID |
|---|---|---|
| automatic | medium | access.acl-container-registry |
Description
Automatic verification
This control automatically checks for ACL settings on Container Registry instances.
It is recommended that you limit access to your Container Registry to specific IPs.
- In the management console, select the cloud or folder to check the registry in.
- In the list of services, select Container Registry.
- In the settings of the specific registry, go to the Access for IP address tab.
- If specific IPs to allow access for are set in the parameters, the recommendation is fulfilled. Otherwise, proceed to "Guides and solutions to use".
Guides and solutions to use
- In the management console, select the cloud or folder containing the registry.
- In the list of services, select Container Registry.
- In the settings of the specific registry, go to the Access for IP address tab and specify the IP addresses that are allowed to access the registry.
No public access to the Object Storage bucket
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | access.bucket-public-access |
Description
Manual check
Make sure that the found buckets actually require public access. Please change the status manually.
Attention
This control does not automatically check access when IAM roles are modified or when public access is specified via anonymous_access_flags. Manual verification is required.
It is recommended to assign minimum roles for a bucket using IAM and supplementing or itemizing them using a bucket policy (for example, to restrict access to the bucket by IP, grant granular permissions for objects, and so on).
Access to Object Storage resources is verified at three levels: IAM, bucket policies, and object ACLs.
Verification procedure:
- If a request passes the IAM check, the next step is the bucket policy check.
- Bucket policy rules are checked in the following order:
- If the request meets at least one of the Deny rules, access is denied.
- If the request meets at least one of the Allow rules, access is allowed.
- If the request does not meet any of the rules, access is denied.
- If the request fails the IAM or bucket policy check, access verification is performed based on an object's ACL.
In IAM, a bucket inherits the same access permissions as those of the folder and cloud where it is located. For more information, see Inheritance of bucket access permissions by Yandex Cloud public groups. Therefore, we recommend that you only assign the minimum required roles to certain buckets or objects in Object Storage.
Bucket policies are used for additional data protection, for example, to restrict access to a bucket by IP, issue granular permissions to objects, and so on.
With ACLs, you can grant access to an object bypassing IAM verification and bucket policies. We recommend setting strict ACLs for buckets.
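The evaluation order above can be sketched as a small Python model (an illustration of the documented order only, not the service's actual implementation; the boolean inputs are simplifying assumptions):

```python
def evaluate_access(passes_iam: bool, matches_deny: bool,
                    matches_allow: bool, acl_allows: bool) -> bool:
    """Return True if the request is allowed, following the documented order."""
    if passes_iam:
        # Bucket policy check: Deny rules win, then Allow rules, default deny.
        if matches_deny:
            policy_allows = False
        elif matches_allow:
            policy_allows = True
        else:
            policy_allows = False
        if policy_allows:
            return True
    # Failed the IAM or bucket policy check: fall back to the object's ACL.
    return acl_allows
```

This is why strict ACLs matter: a permissive ACL can still grant access after IAM and the bucket policy have denied the request.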
Example of a secure Object Storage configuration: Terraform
Guides and solutions
Guides and solutions to use:
- It is recommended to assign minimum roles for a bucket using IAM and supplementing or itemizing them using a bucket policy (for example, to restrict access to the bucket by IP, grant granular permissions for objects, and so on).
- If public access is required, it is recommended to use DSPM to monitor the presence of sensitive data in buckets.
Public IP addresses are not assigned to virtual machines
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | access.compute-public-ip |
Description
Manual verification
Make sure that the found virtual machines actually require a public IP address. Manually mark the control as completed.
Virtual machines with public IP addresses are accessible from the internet. It is recommended to use public IP addresses only for resources that require direct access from the internet (e.g., NAT instances or bastion hosts). For other resources, it is recommended to use private IP addresses and organize access through VPN or bastion hosts.
Guides and solutions
- Make sure that virtual machines with public IP addresses actually require direct internet access.
- For resources that do not require direct internet access, use private IP addresses.
- Organize access to resources with private IP addresses through VPN or bastion hosts.
Access from the management console is disabled in managed databases
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | access.db-console-access |
Description
Automatic verification
This control automatically checks for management console access settings on managed database clusters.
You may need access to the database from the management console to send SQL queries to the database and visualize the data structure.
We recommend that you enable this type of access only if needed, because it raises information security risks. In normal mode, use a standard DB connection as a DB user.
Guides and solutions
Guides and solutions to use:
- In the management console, select the cloud or folder to disable access from the management console in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- In the object parameters, disable Access from console.
The setting for access from DataLens is not active if not needed
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | access.db-datalens-access |
Description
Automatic verification
This control automatically checks for DataLens access settings on managed database clusters.
Do not enable access to databases containing critical data from the management console, DataLens, or other services unless you have to. Access from DataLens may be required for data analysis and visualization. For such access, the Yandex Cloud service network is used, with authentication and TLS encryption. You can enable and disable access from DataLens or other services in the cluster settings or when creating it in the advanced settings section.
Guides and solutions
Instructions and solutions for implementation:
- In the management console, select the cloud or folder where you want to disable access from DataLens.
- In the list of services, select the service(s) where the managed databases are located.
- In the object settings, go to the Additional settings tab.
- In the object's parameters, disable the Access from DataLens option.
The minimum required scopes for service account API keys are defined
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | access.defined-key-scopes |
Description
A scope is the set of actions a service account is allowed to perform with the service's resources. A service can have more than one scope. An API key with specified scopes cannot be used in other services or scopes.
In addition to service account access permissions, you can define scopes to restrict the use of API keys. Configuring scope limits and expiration dates will reduce the risk of unauthorized use of your keys. Assign only the strictly required scopes to API keys.
For more details, see <https://yandex.cloud/en/docs/security/standard/authentication#api-key-scopes>.
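The "minimum required scopes" idea can be illustrated with a simple set comparison (the scope names below are hypothetical, not real Yandex Cloud scope identifiers):

```python
def excessive_scopes(granted: set, required: set) -> set:
    """Scopes granted to an API key beyond what the workload actually needs."""
    return granted - required

# Hypothetical scope names, for illustration only:
granted = {"storage.read", "storage.write", "logging.write"}
required = {"storage.read"}
print(sorted(excessive_scopes(granted, required)))  # ['logging.write', 'storage.write']
```

Any non-empty result indicates a key that should be reissued with a narrower scope list.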
Guides and solutions
Guides and solutions to use:
Create an API key with a specified scope.
Service roles are used instead of primitive roles: admin, editor, viewer
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | access.min-privileges |
Description
This rule requires a manual check. After verifying that the privileges are necessary, please change the rule's status.
The principle of least privilege (see best practices) requires assigning users the minimum required roles. We do not recommend using primitive roles, such as admin, editor, and viewer, which apply to all services, because this contradicts the principle of least privilege. To ensure more selective access control and implement the principle of least privilege, use service roles that only contain permissions for a certain type of resources in a given service. You can see the list of all service roles in the Yandex Cloud roles reference.
Use the auditor role without data access wherever possible.
Guides and solutions
Guides and solutions to use:
- Analyze the accounts found with the admin, editor, and viewer primitive roles assigned and replace them with the auditor role or granular service roles, based on your role matrix: https://yandex.cloud/en/docs/iam/roles-reference
- Follow this guide to view the full list of a subject's access permissions: https://yandex.cloud/en/docs/security-deck/operations/ciem/view-permissions
OS Login is used for connection to a VM or Kubernetes node
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | access.os-login-onto-hosts.vm |
Description
Automatic verification
This control automatically checks for OS Login usage on virtual machines and Kubernetes nodes.
OS Login is a convenient way to manage connections to VMs over SSH via the CLI or a standard SSH client with an SSH certificate or SSH key, which you first need to add to the OS Login profile of an organization user or a service account in Yandex Identity Hub.
OS Login links the account of a virtual machine user with that of an organization or service account user. To manage access to virtual machines, enable the OS Login access option at the organization level and then activate OS Login access on each virtual machine separately.
Thus, you can easily manage access to virtual machines by assigning appropriate roles to users or service accounts. If you revoke the roles from a user or service account, they will lose access to all virtual machines with OS Login access enabled.
Guides and solutions
Guides and solutions to use:
- Enabling OS Login access at the organization level
- Setting up OS Login access on an existing VM
- Connecting to a VM via OS Login
There is no public access to your organization's resources
| kind | severity | ID |
| --- | --- | --- |
| manual | high | access.public-access |
Description
Manual verification
This rule requires a manual check. After verifying that public access is necessary, please change the rule's status.
Yandex Cloud allows you to grant public access to your resources. You can grant public access by assigning access permissions to public groups (All authenticated users, All users).
Public group details:
- All authenticated users: all registered Yandex Cloud users and service accounts, both from your clouds and from other users' clouds.
- All users: any user; no authentication is required.
Warning
Currently, All users is only supported in the following services: Object Storage (if ACL-based access management is used), Container Registry, and Cloud Functions. For other services, assigning a role to the All users group is equivalent to assigning it to All authenticated users.
Make sure that these groups have no public access to your resources: clouds, folders, buckets, and more.
Guides and solutions
Guides and solutions to use:
- If you find that All users and All authenticated users have access permissions they should not have, remove these permissions using the CIEM module.
Service accounts have minimum privileges granted
| kind | severity | ID |
| --- | --- | --- |
| manual | high | access.sa-privileges |
Description
Manual verification
This rule requires manual check. After auditing the required privileges, please change the rule status.
Follow the principle of least privilege and assign to the service account only the roles necessary to run the application.
Guides and solutions
Guides and solutions to use:
- Use Yandex Security Deck to view the full list of a service account's access permissions.
- Use Security Deck to revoke the service account's excessive access permissions.
- Remove the excessive permissions from the service account using IAM.
The serial console is either controlled or not used
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | access.serial-console |
Description
Automatic verification
This control automatically checks for serial console access on virtual machines.
On VMs, access to the serial console is disabled by default. For risks of using the serial console, see Getting started with a serial console in the Yandex Compute Cloud documentation.
When working with a serial console:
- Make sure that critical data is not output to the serial console.
- If SSH access to the serial console is enabled, make sure that both the credentials management routine and the password used to log in to the operating system locally are as per the regulatory standards. For example, in an infrastructure for storing payment card data, passwords must meet the PCI DSS requirements: they must contain both letters and numbers, be at least 7 characters long, and be changed at least once every 90 days.
- We do not recommend using the serial console unless it is absolutely necessary.
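The password rules quoted above can be expressed as a short validator (an illustrative sketch of the cited requirements only, not a complete PCI DSS compliance check):

```python
import re

def meets_pci_dss_password_rules(password: str, days_since_change: int) -> bool:
    """Check the quoted PCI DSS password requirements: letters and numbers,
    at least 7 characters, changed at least once every 90 days."""
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    return has_letter and has_digit and len(password) >= 7 and days_since_change <= 90

print(meets_pci_dss_password_rules("s3cret7x", 30))  # True
print(meets_pci_dss_password_rules("short1", 30))    # False: only 6 characters
```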
Evaluate the risk of enabling access through the serial console, considering the following factors:
- The VM will be accessible for management from the internet even if there is no external IP address.
- A user successfully authenticated in the Yandex Cloud management console with appropriate VM permissions will be able to access the VM's serial console from the management console. Access to the VM's serial console from an SSH client (e.g., PuTTY) or the CLI is also possible by authenticating with an SSH key. Therefore, carefully control the SSH key and terminate the web session to reduce the risk of interception.
- The session will be available simultaneously to all users who have the right to access the serial console.
- Actions of one user will be visible to other users if they are viewing the serial console output at the same time.
- An unfinished session can be used by another user.
We recommend enabling the serial console only in case of extreme necessity, granting such access to a narrow circle of people, and using strong passwords to access the VM.
Do not forget to disable access after working with the serial console.
Guides and solutions
Guides and solutions to use:
- It is recommended to disable access to the serial console: https://yandex.cloud/en/docs/compute/operations/serial-console/disable
In Yandex Application Load Balancer, HTTPS is used
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | appsec.alb-https |
Description
Automatic verification
This control automatically checks for HTTPS listener settings on Application Load Balancer.
The Application Load Balancer service supports an HTTPS listener that loads a certificate from Certificate Manager. See the listener configuration description in the Yandex Application Load Balancer documentation.
Guides and solutions
API gateways use HTTPS and their own domains
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | appsec.api-gateway-https |
Description
Yandex API Gateway ensures secure connections over HTTPS. You can link your own domain and upload your own security certificate to access your API gateway over HTTPS.
Without using the HTTPS protocol, traffic between the client and API gateway is transmitted unencrypted, running the following risks:
- Hackers intercepting data via, for example, MITM (man-in-the-middle) attacks.
- Leaks of confidential information, such as personal data, payment data, authorization tokens, passwords, etc.
Guides and solutions
Guides and solutions to use:
- In the management console, select the folder the API gateway is in.
- Go to API Gateway and, in the window that opens, click the line with the API gateway in question.
- In the left-hand menu, select Domains and click Attach.
- In the window that opens, select a TLS certificate and specify the domain name matching this certificate.
- Click Attach.
Yandex Cloud CDN uses HTTPS and its own SSL certificate
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | appsec.cdn-https |
Description
Automatic verification
This control automatically checks for HTTPS configuration and SSL certificates on CDN resources.
Cloud CDN supports secure connections to origins over HTTPS. You can also upload your own security certificate to access your CDN resource over HTTPS.
Guides and solutions
Guides and solutions to use:
Application DDoS protection is enabled (L7)
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | appsec.ddos-protection.l7 |
Description
Automatic verification
This control automatically checks Smart Web Security security profiles for ALB.
Manual verification
If an external DDoS protection software is used, please change the status manually.
Yandex Cloud provides basic and advanced DDoS protection as well as protection at the application level with Yandex Smart Web Security. Make sure to use at least basic protection.
Yandex Smart Web Security is a service for protection against DDoS attacks and bots at application level (L7) of the OSI model.
Yandex DDoS Protection is a Virtual Private Cloud component that safeguards cloud resources from DDoS attacks. DDoS Protection is provided in partnership with Curator. You can enable it yourself for an external IP address through cloud administration tools. Supported up to OSI L4.
Advanced DDoS protection is available at OSI layers 3, 4, and 7. You can also track load and attack metrics and enable Solidwall WAF in your Curator account. To enable advanced protection, contact your manager or technical support.
Guides and solutions
Guides and solutions to use:
Network DDoS protection is enabled (L3)
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | appsec.ddos-protection.l3 |
Description
Automatic verification
This control automatically checks Yandex DDoS Protection security profiles. If an external DDoS protection software is used, please change the status manually.
Yandex Cloud provides basic and advanced DDoS protection. Make sure to use at least basic protection.
Yandex DDoS Protection is a VPC component that safeguards cloud resources from DDoS attacks. DDoS Protection is provided in partnership with Qrator Labs. Supported up to OSI L4.
Activating Yandex DDoS Protection for VM instances or network load balancers allows you to efficiently respond to attacks aiming to overwhelm the channel capacity and computing resources of your VM instances.
To prevent such attacks, DDoS Protection:
- Constantly analyzes all incoming traffic.
- Detects these attacks at the network and transport layers.
- Automatically diverts unwanted traffic when its intensity threatens the health of your service in Yandex Cloud.
Advanced DDoS protection is available at OSI layers 3, 4, and 7. You can also track load and attack metrics and enable Solidwall WAF in your Curator account.
Recommendations
Guides and solutions to use:
Use Yandex DDoS Protection to protect your cloud resources against DDoS attacks on basic level. You can enable DDoS Protection with a single click: just select the DDoS protection checkbox when creating your VM and reserving public IP addresses.
Enable and set up advanced DDoS protection at OSI layers 3, 4, and 7. To enable advanced protection, contact support.
Docker images are scanned when uploaded to Container Registry
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | appsec.periodic-scan |
Description
Automatic verification
This control automatically checks for Docker image scanning policies in Container Registry.
When creating a new registry, use the default options to make sure it meets the Yandex Cloud security standard:
- Docker images are automatically scanned as they are uploaded to the registry.
- Docker images in the registry are regularly re-scanned, i.e., every 7 days, with an option to switch to daily scanning in the settings.
How to manually check this rule:
- In the management console, select the folder the registry with Docker images belongs to.
- Select the appropriate registry in Container Registry.
- Navigate to the Vulnerability scanner tab and click Edit settings.
- Make sure that scheduled Docker image scans are enabled with a frequency of at least once a week.
Guides and solutions
Guides and solutions to use:
Advanced rate limiter is implemented
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | appsec.use-arl |
Description
Automatic verification
This control automatically checks for Advanced Rate Limiter configuration.
Manual Check
This rule checks only the built-in information security features in Yandex Cloud. If a third-party protection solution is used, please manually mark the rule as completed.
Advanced Rate Limiter (ARL) is a Yandex Smart Web Security module used to monitor and limit web app loads. It allows you to set a limit on the number of HTTP requests over a certain period of time. All requests above the limit get blocked. You can set a single limit for all traffic or configure specific limits to segment requests by certain parameters. For the purpose of limits, you can count requests one by one or group them together based on a specified property.
You need to connect your ARL profile to the security profile in Smart Web Security.
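The limiting behavior can be illustrated with a minimal fixed-window counter (a generic sketch, not ARL's actual algorithm; the grouping key and window scheme are assumptions):

```python
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds for each key."""
    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.counters = defaultdict(int)   # (key, window index) -> request count

    def allow(self, key: str, now: float) -> bool:
        bucket = (key, int(now // self.window))
        self.counters[bucket] += 1
        return self.counters[bucket] <= self.limit

limiter = FixedWindowLimiter(limit=2, window=1.0)
# The third request from the same client within the same second is blocked:
results = [limiter.allow("client-a", 0.1), limiter.allow("client-a", 0.2),
           limiter.allow("client-a", 0.3)]
print(results)  # [True, True, False]
```

Grouping by a key (here, a client identifier) corresponds to segmenting requests by a property; a single shared key would model one limit for all traffic.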
Guides and solutions
Guides and solutions to use:
Yandex SmartCaptcha is used
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | appsec.use-smartcaptcha |
Description
Automatic verification
This control automatically checks for Yandex SmartCaptcha usage in applications.
To mitigate the risks associated with automated attacks on applications, we recommend using Yandex SmartCaptcha. The service checks user requests with its ML algorithms and only shows challenges to those users whose requests it considers suspicious. You do not have to place the "I'm not a robot" button on the page.
Guides and solutions
Guides and solutions to use:
Yandex Smart Web Security profile is used
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | appsec.use-sws |
Description
Automatic verification
This control automatically checks for Yandex Smart Web Security profile configuration.
Yandex Smart Web Security protects you against DDoS attacks, web attacks, and bots at application level (L7) of the OSI model.
In a nutshell, the service checks the HTTP requests sent to the protected resource against the rules configured in the security profile. Depending on the results of the check, the requests are forwarded to the protected resource, blocked, or sent to Yandex SmartCaptcha for additional verification.
Manual Check
This rule checks only the built-in information security features in Yandex Cloud. If a third-party protection solution is used, please manually mark the rule as completed.
Guides and solutions
Guides and solutions to use:
Web application firewall is implemented
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | appsec.use-waf |
Description
Automatic verification
This control automatically checks for Web Application Firewall configuration.
Manual Check
This rule checks only the built-in information security features in Yandex Cloud. If a third-party protection solution is used, please manually mark the rule as completed.
To mitigate risks associated with web attacks, we recommend using the Yandex Smart Web Security web application firewall (WAF). A web application firewall analyzes HTTP requests to a web app according to pre-configured rules. Based on the analysis results, certain actions are applied to HTTP requests.
You can manage the web application firewall using a WAF profile that connects to a security profile in Smart Web Security as a separate rule.
Guides and solutions
Guides and solutions to use:
When creating a registry in Yandex Container Registry, keep the secure default registry settings
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | appsec.secure-registry |
Description
The lack of control over new Docker images leads to risks associated with the following factors:
- use of vulnerable containers;
- introduction of malicious code;
- slower response to threats.
Automatic vulnerability scanning when new images are added to the Container Registry will help reduce these risks.
Guides and solutions to use
- In the management console, select the folder where you want to create a registry.
- Go to Container Registry.
- Click Create registry.
- Specify a name for the registry. Follow these naming requirements:
  - Length: between 3 and 63 characters.
  - It can only contain lowercase Latin letters, numbers, and hyphens.
  - It must start with a letter and cannot end with a hyphen.
- Under Automatic scanning:
  - Keep the Scan Docker images on push option enabled to scan Docker images at their upload to the repository.
  - Keep the Scan all Docker images in the registry option enabled, and set scanning frequency if necessary.
- Click Create registry.
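The naming requirements listed above can be checked with a short validator (the regex is an assumption derived from those rules, not Container Registry's actual validation code):

```python
import re

# 3-63 chars, lowercase Latin letters / digits / hyphens,
# starts with a letter, does not end with a hyphen.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_registry_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_registry_name("my-registry-01"))  # True
print(is_valid_registry_name("registry-"))       # False: ends with a hyphen
```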
Getting an IAM token through the metadata service in AWS IMDSv1 format is disabled on the VM
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | aws-token |
Description
Yandex Compute Cloud features a metadata service for VM instances that provides info on their operation in the following formats:
- Google Compute Engine (some fields are not supported).
- Amazon EC2 (some fields are not supported).
Amazon EC2 Instance Metadata Service version 1 (IMDSv1) has a number of drawbacks. The most critical of them is the risk of a service account token getting compromised through the metadata service by means of a server-side request forgery (SSRF) attack. For more information, see the official AWS blog.
Guides and solutions to use
To get a service account's IAM token from within the VM, we recommend using metadata in Google Compute Engine format.
Make sure to disable getting an IAM token through the metadata service in IMDSv1 format.
Guides and solutions to use:
For the VMs found, set aws_v1_http_token to DISABLED in the metadata_options section:
yc compute instance update <VM_instance_ID_or_name> \
--metadata-options aws-v1-http-token=DISABLED
Cloud Backup or scheduled snapshots are used
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | backup.compute-disks |
Description
This rule lists virtual machines that do not have a backup policy configured.
It is important to configure backups, since they are the only practical way to restore a VM's operation after data loss or corruption. Without backups, any incident leads to unrecoverable loss and operational downtime.
In the cloud, there are two options to back up VMs:
- Scheduled snapshots
- Cloud Backup
Guides and solutions to use
Backups in Compute Cloud include snapshots of disks attached to VMs and the use of Yandex Cloud Backup.
Cloud Backup is a service for creating backups and restoring Yandex Cloud resources and their data.
You can connect to Cloud Backup either a new Yandex Compute Cloud VM as soon as it is created or an existing VM with active and configured apps, resources, data, etc.
For Cloud Backup to be able to back up and restore a VM, the VM must be associated with a backup policy.
The Yandex Certificate Manager certificate is valid for at least 30 days
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | crypto.certificate-validity |
Description
Automatic verification
This control automatically checks certificate validity periods in Yandex Certificate Manager.
You can use Yandex Certificate Manager to manage TLS certificates for your API gateways in the API Gateway, as well as your websites and buckets in Object Storage. Application Load Balancer is integrated with Certificate Manager for storing and installing certificates. We recommend that you use Certificate Manager to obtain your certificates and rotate them automatically.
When using TLS in your application, we recommend that you limit the list of your trusted root certificate authorities (root CA).
When using certificate pinning, keep in mind that Let's Encrypt certificates are valid for 90 days.
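The 30-day threshold can be checked with a small sketch (illustrative only; obtaining the certificate's expiry date, e.g., from Certificate Manager, is assumed to happen elsewhere):

```python
from datetime import datetime, timedelta

def needs_renewal(not_after: datetime, now: datetime, min_days: int = 30) -> bool:
    """Flag a certificate whose remaining validity is under `min_days`."""
    return not_after - now < timedelta(days=min_days)

# A Let's Encrypt certificate issued 70 days ago (90-day lifetime) has
# 20 days of validity left, so it should be renewed:
issued = datetime(2024, 1, 1)
print(needs_renewal(issued + timedelta(days=90), issued + timedelta(days=70)))  # True
```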
Guides and solutions
- Update the certificate or set up automatic renewal.
- We recommend updating certificates in advance if they are not renewed automatically.
Deletion protection is enabled for KMS keys
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | crypto.keys-deletion-protection |
Description
Automatic verification
This control automatically checks for deletion protection settings on KMS keys.
Deleting a KMS key always means destroying data. Therefore, make sure to protect the keys against accidental deletion. KMS has the necessary feature.
Guides and solutions
- Enable deletion protection: https://yandex.cloud/en/docs/kms/operations/key#update
The Key Management Service keys are stored in a hardware security module (HSM)
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | crypto.keys-hsm |
Description
Manual check
This rule requires manual verification of HSM key storage settings.
In production environments, we recommend using separate keys whose every cryptographic operation is handled only inside an HSM. For more information, see Hardware security module (HSM): https://yandex.cloud/en/docs/kms/concepts/hsm.
To use the HSM, when creating a key, select AES-256 HSM as the algorithm type. The HSM will handle all operations with this key internally, and no additional actions are required.
It is recommended to use HSMs for KMS keys to enhance the security level.
Guides and solutions
- Set the encryption algorithm for KMS keys to AES-256 HSM: https://yandex.cloud/en/docs/kms/operations/symmetric-encryption
Key rotation is enabled for KMS keys
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | crypto.keys-rotation |
Description
Automatic verification
This control automatically checks for key rotation settings on KMS keys.
To improve the security of your infrastructure, we recommend that you categorize your encryption keys into two groups:
- Keys for services that process critical data but do not store it, such as Message Queue or Cloud Functions.
- Keys for services storing critical data, e.g., Managed Services for Databases.
For the first group, we recommend that you set up automatic key rotation with a rotation period longer than the data processing period in these services. When the rotation period expires, the old key versions must be deleted. In the case of automatic rotation and the deletion of old key versions, previously processed data cannot be restored and decrypted.
For data storage services, we recommend that you either manually rotate keys or use automatic key rotation, depending on your internal procedures for processing critical data.
A secure value for AES-GCM mode is encryption of up to 4,294,967,296 (= 2³²) blocks. Having reached this number of encrypted blocks, you need to create a new DEK version. For more information about the AES-GCM mode of operation, see the NIST materials.
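To put the 2³² block limit in perspective, a back-of-the-envelope calculation (AES operates on 16-byte blocks):

```python
AES_BLOCK_BYTES = 16        # AES block size: 128 bits
MAX_GCM_BLOCKS = 2 ** 32    # block limit cited above for one DEK version

max_bytes = MAX_GCM_BLOCKS * AES_BLOCK_BYTES
print(max_bytes // 2 ** 30)  # 64 -> roughly 64 GiB of data per DEK version
```

So a new DEK version is needed after encrypting on the order of 64 GiB under one version.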
Note
Destroying any version of a key means destroying all data encrypted with it. You can protect a key against deletion by setting the deletionProtection parameter. However, it does not protect against deleting individual versions.
For more information about key rotation, see the KMS documentation, Key version.
Guides and solutions
Encryption of disks and virtual machine snapshots is used
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | crypto.managed-vm-kms |
Description
Manual check
This rule requires manual verification of disk encryption settings.
By default, all data on Yandex Compute Cloud disks is encrypted at the storage database level using a system key. This protects your data from being compromised in the event of a physical theft of disks from Yandex Cloud data centers.
We also recommend encrypting disks and disk snapshots using Yandex Key Management Service custom symmetric keys. This approach allows you to:
- Protect against the potential threats of data isolation breach and compromise at the virtual infrastructure level.
- Control the encryption and lifecycle of KMS keys, as well as manage them. For more information, see Key management.
- Improve access control to the data on your disk by setting permissions for KMS keys. For more information, see Configuring access permissions for a symmetric encryption key.
- Use Yandex Audit Trails to track encryption and decryption operations performed using your KMS key. For more information, see Key usage audit.
You can encrypt the following types of disks:
- Network SSD (network-ssd)
- Network HDD (network-hdd)
- Non-replicated SSD (network-ssd-nonreplicated)
- Ultra-fast network storage with three replicas (SSD) (network-ssd-io-m3)
Guides and solutions
Service account keys are rotated on a regular basis
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | crypto.sa-key-rotation |
Description
Yandex Cloud allows you to create the following access keys for service accounts:
- IAM tokens that are valid for 12 hours.
- API keys: You can choose any validity period.
- Authorized keys with unlimited validity.
- AWS API-compatible static access keys with unlimited validity.
It is recommended to rotate keys with unlimited validity yourself: delete old keys and generate new ones. You can check a key's creation date in its properties. Rotate keys at least once every 90 days.
This control checks the last update date. Where the last update date cannot be determined (for example, when starting CSPM for the first time), we recommend performing the check manually.
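The 90-day rule can be expressed as a simple age check (a sketch; fetching the key's creation date from its properties is assumed to happen elsewhere):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)

def key_needs_rotation(created_at: datetime, now: datetime) -> bool:
    """True if a long-lived key is older than the 90-day rotation period."""
    return now - created_at > MAX_KEY_AGE

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(key_needs_rotation(now - timedelta(days=120), now))  # True: overdue
```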
Guides and solutions
Follow the guide for rotating keys depending on their type.
The organization uses Yandex Lockbox for secure secret storage
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | crypto.secrets-lockbox |
Description
Automatic verification
This control automatically checks for the use of Yandex Lockbox for secret storage.
Critical data and access secrets (authentication tokens, API keys, encryption keys, etc.) should not appear in plain text in code, cloud object names and descriptions, VM metadata, and so on. Use secret storage services instead, e.g., Lockbox.
Lockbox securely stores secrets in an encrypted form only. Encryption is performed using KMS. For secret access control, use service roles.
Note
When working in Terraform, we recommend using a script to fill in the .tfstate file.
Guides and solutions
- You can learn how to use the service in the Lockbox documentation: https://yandex.cloud/en/docs/lockbox
Lockbox secrets are used for Serverless Containers and Cloud Functions
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | crypto.secrets-serverless |
Description
Automatic verification
This control automatically checks for the use of Lockbox secrets in serverless functions and containers.
When working with Serverless Containers or Cloud Functions, it is often necessary to use a secret (such as a token or password).
If you specify secret information in environment variables, it can be viewed by any cloud user with permissions to view and use a function, which causes information security risks.
We recommend using the serverless integration with Lockbox instead: a function or container can use a specific Yandex Lockbox secret via a service account that has access rights to that secret.
Make sure that the secrets are used as described above.
Guides and solutions
- Delete secret data from env and use the Lockbox integration functionality:
- Transmitting Yandex Lockbox secrets to a container: https://yandex.cloud/en/docs/serverless-containers/operations/lockbox-secret-transmit
- Transmitting Yandex Lockbox secrets to a function: https://yandex.cloud/en/docs/functions/operations/function/lockbox-secret-transmit
At-rest encryption with a KMS key is enabled in Yandex Object Storage
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | data.object-storage-encryption |
Description
Automatic verification
This control automatically checks for encryption settings on Object Storage buckets.
To protect critical data in Yandex Object Storage, we recommend using bucket server-side encryption with Yandex Key Management Service keys. This encryption method protects against accidental or intentional publication of the bucket content on the web. For more information, see Encryption in the Object Storage documentation.
Guides and solutions
- It is recommended to enable data encryption for buckets with critical data: https://yandex.cloud/en/docs/tutorials/security/server-side-encryption
HTTPS for static website hosting is enabled in Yandex Object Storage
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | data.storage-https |
Description
Automatic verification
This control automatically checks for HTTPS settings on Object Storage static websites.
Object Storage supports secure connections over HTTPS. If connections to your Object Storage website require HTTPS, you can upload your own security certificate; integration with Certificate Manager is also supported. See the instructions in the Object Storage documentation.
When using Object Storage, make sure that support for TLS versions below 1.2 is disabled at the client level. Use the aws:SecureTransport condition in the bucket policy to make sure connections without TLS are denied for the bucket.
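As an illustration (the bucket name is a placeholder; Object Storage bucket policies follow the S3 policy syntax), a statement denying any request made without TLS might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPlainHTTP",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
```

With this statement in place, requests arriving over plain HTTP are rejected regardless of the caller's permissions.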
Guides and solutions
- Enable access over HTTPS if the bucket is used to host a static website: https://yandex.cloud/en/docs/storage/operations/hosting/certificate
Deletion protection is enabled
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | db.db-deletion-protection |
Description
Automatic verification
This control automatically checks for deletion protection on managed database clusters.
In Yandex Cloud managed databases, you can enable deletion protection. The deletion protection feature safeguards the cluster against accidental deletion by a user. Even with cluster deletion protection enabled, one can still connect to the cluster manually and delete the data.
Guides and solutions
- In the management console, select the cloud or folder to enable deletion protection in.
- In the list of services, select a service or services with managed databases.
- In the object settings, go to the Advanced settings tab.
- In the object parameters, enable Deletion protection.
The cookie lifetime timeout in the federation is less than 6 hours
| kind | severity | ID |
| --- | --- | --- |
| manual | high | cookie-timeout.organization |
Description
Limiting the validity period of cookies is a key security measure for web applications, as it significantly reduces the risks associated with the compromise of user sessions. A short timeout minimizes the potential damage in the event of cookie theft (e.g., through XSS or MITM attacks) and limits the time during which an attacker can use the intercepted data.
In addition, automatic session termination after a predetermined period (e.g., 6 hours) prevents unauthorized access if a user forgets to log out of their account on a foreign device or if their device has been compromised.
Guides and solutions to use
In your identity federation settings, make sure the Cookie lifetime value is less than or equal to 6 hours. This helps limit the impact of a compromised cloud user workstation.
Set the Cookie lifetime to 6 hours (21,600 seconds) or less.
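A quick arithmetic check of the limit (the configured value is a made-up example):

```shell
#!/bin/sh
# 6 hours expressed in seconds; a federation cookie lifetime must not exceed it.
limit=$((6 * 3600))
echo "limit: $limit seconds"     # limit: 21600 seconds

configured=14400                 # illustrative value: 4 hours
if [ "$configured" -le "$limit" ]; then
  echo "compliant"
else
  echo "reduce the cookie lifetime"
fi
```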
Managed Service for Kubernetes uses secure configuration
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | k8s.kubernetes-safe-config |
Description
Manual check
Please check if you have implemented controls for node group settings.
In Managed Service for Kubernetes, the user is fully in control of all node group settings, but only partially in control of the master settings. The user is responsible for the whole cluster's security.
Use the CIS Kubernetes Benchmark as the baseline for a secure configuration.
Guides and solutions
- Using the kube-bench tool, check whether the node group configuration is compliant with the CIS Kubernetes Benchmark. The tool officially supports Yandex Cloud node groups.
- Starboard Operator is a free tool that helps you automate scanning images for vulnerabilities and checking that the configuration is compliant with the CIS Kubernetes Benchmark. Starboard Operator supports integration with kube-bench and can launch it automatically.
Access to Kubernetes components is limited by IP address, port, and protocol
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | k8s.network-firewall-scope |
Description
We recommend using security groups to configure safe access to Kubernetes cluster components under the principle of least privilege. To establish access to cluster components, only open the required ports over the required network protocols, and only for trusted IP addresses.
Guides and solutions to use
Create a security group and configure it for working in a Kubernetes cluster.
In your configuration, follow the key principles that apply to security group settings for Kubernetes clusters:
- Do not use security rules with broad access:
  - Port range: 0-65535.
  - Protocol: Any.
  - Source: CIDR.
  - CIDR blocks: IPv4 0.0.0.0/0 or IPv6 ::/0 (access allowed from any address).
- Create dedicated security groups for:
  - Kubernetes masters
  - Kubernetes nodes
  - Load balancers and ingress controllers
  - Databases and backends
  - Bastion hosts
- In your security rules, use links to other security groups instead of resource IP addresses (in the Source/Target field, select Security groups instead of CIDR). This keeps network access intact when resource IP addresses change.
- Limit egress traffic: explicitly specify IP address ranges, ports, and target protocols in the security rules for outgoing traffic.
- Enable logging for Kubernetes clusters.
- Enable flow logs to monitor Kubernetes traffic.
Audit log collection is set up for incident investigation
| kind | severity | ID |
| --- | --- | --- |
| manual | high | k8s.network-policy |
Description
Manual check
This rule requires manual verification of the audit log collection settings.
Events available to the user in Managed Service for Kubernetes can be classified into the following levels:
- Kubernetes API events (Kubernetes audit logging)
- Kubernetes node events
- Kubernetes pod events
- Kubernetes metrics
- Kubernetes flow logs
For more information about setting up audit event logging at various levels, see Collecting, monitoring, and analyzing Managed Service for Kubernetes audit logs.
Guides and solutions to use
In Managed Service for Kubernetes, you can audit the current role model used in the service. To do this, open the Kubernetes cluster page in the management console.
You can also use:
- KubiScan
- Krane
- Yandex Audit Trails audit logs
Only authorized administrators manage memberships in user groups
| kind | severity | ID |
| --- | --- | --- |
| manual | high | iam.group-membership-admin |
Description
Working in the cloud requires following the principle of least privilege and granting users no more permissions than they need to address their respective tasks.
Make sure to manage access permissions to a user group as a resource. Failing to do so may result in users getting excess permissions allowing them to manage the membership of other users in the group.
This check detects cases where users get such permissions:
- The user has the organization-manager.groups.memberAdmin role for the organization.
- The user has the organization-manager.groups.memberAdmin role for a specific group as a resource.
- The user has the organization-manager.organizations.owner or admin role, or another privileged role, for the whole organization.
- The user has the admin or editor role for a specific group as a resource (this is not recommended).
Guides and solutions to use
- In the left-hand panel of the Cloud Center interface, select Groups and, in the list that opens, click the line with the group in question.
- Navigate to the Group access rights tab and enable the Inherited roles option.
- Follow the instructions for revoking a role for an organization or user group to take away permissions from unauthorized accounts.
A security group is assigned in managed databases
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | network.db-ip |
Description
Automatic verification
This control automatically checks for security group assignment on managed database clusters.
We recommend prohibiting internet access to databases that contain critical data, in particular PCI DSS data or private data. Configure security groups to only allow connections to the DBMS from particular IP addresses. To do this, follow the steps in Creating a security group. You can specify a security group in the cluster settings or when creating the cluster in the network settings section.
Guides and solutions
No public IP address is assigned in managed databases
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | network.db-security-group |
Description
Automatic verification
This control automatically checks for public IP address assignment on managed database clusters.
Assigning a public IP to a managed database raises information security risks. We do not recommend assigning an external IP unless it is absolutely necessary.
Remove public access if it is not required.
Guides and solutions
- It is recommended to delete the IP address linked to the database: https://yandex.cloud/en/docs/vpc/operations/address-delete
Cloud resources are protected by a firewall or security groups
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | network.firewall |
Description
Automatic verification
This control automatically checks for security group availability for the following types of resources:
enum <resource-type>
Manual verification
This rule requires a manual check. After checking and updating, change the rule status.
With built-in security groups, you can manage VM access to resources and security groups in Yandex Cloud or resources on the internet. A security group is a set of rules for incoming and outgoing traffic that can be assigned to a VM's network interface. Security groups work like a stateful firewall: they monitor the status of sessions and, if a rule allows a session to be created, they automatically allow response traffic. For a guide on how to set up security groups, see Creating a security group. You can specify a security group in the VM settings.
You can use security groups to protect:
- VMs
- Managed databases: https://yandex.cloud/en/services#data-platform
- Yandex Application Load Balancer: https://yandex.cloud/en/docs/application-load-balancer
- Yandex Managed Service for Kubernetes: https://yandex.cloud/en/docs/managed-kubernetes
You can manage network access without security groups, e.g., by using a separate VM as a firewall based on an NGFW image from Yandex Cloud Marketplace or a custom image. Using an NGFW can be critical for customers who need the following features:
- Logging network connections.
- Streaming traffic analysis for malicious content.
- Detecting network attacks by signature.
- Other features of conventional NGFW solutions.
Make sure that your clouds use any of the following:
- Security groups in each cloud object.
- A separate NGFW VM from Cloud Marketplace.
- BYOI principle, e.g., your own disk image: https://yandex.cloud/en/docs/compute/operations/image-create/upload
Guides and solutions
- Apply security groups to any objects that have no group.
- To apply security groups through Terraform, set up security groups (dev/stage/prod) using Terraform: https://github.com/yandex-cloud/yc-solution-library-for-security/tree/master/network-sec/segmentation
- To use the NGFW, install the NGFW on your VM: Check Point: https://github.com/yandex-cloud/yc-solution-library-for-security/tree/master/network-sec/checkpoint-1VM
- Refer to this guide on using the UserGate NGFW in the cloud: https://docs.google.com/document/d/1yYwHorzkwXwIUGeG3n_K6Zo-07BVYowZJL7q2bAgVR8/edit?usp=sharing
- Use NGFW in active-passive mode: https://github.com/yandex-cloud/yc-solution-library-for-security/blob/master/network-sec/checkpoint-2VM_active-active/README.md
Security groups have no access rule that is too broad
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | network.network-firewall-scope |
Description
A single security group rule can grant network access to any IP address on the internet across the full port range. A dangerous rule looks as follows:
- Port range: 0 to 65535, or empty.
- Protocol: Any or TCP/UDP.
- Source: CIDR.
- CIDR blocks: 0.0.0.0/0 (access from any IPv4 address) or ::/0 (any IPv6 address).
Warning
If no port range is set, it is considered that access is granted across all ports (0-65535).
Make sure to only allow access through the ports that your application requires to run and from the IPs to connect to your objects from.
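The dangerous combination is easy to express as a predicate. A minimal sketch over one rule's fields follows; the field values are an illustrative dangerous rule, not the output of any real API:

```shell
#!/bin/sh
# Sketch: flag a security-group rule that is open too broadly.
# Field values below are an illustrative example of a dangerous rule.
port_from=0
port_to=65535
protocol="ANY"
cidr="0.0.0.0/0"

world="no"
[ "$cidr" = "0.0.0.0/0" ] && world="yes"
[ "$cidr" = "::/0" ] && world="yes"

# An empty port range counts as 0-65535, per the warning above.
if [ "$world" = "yes" ] && [ "$port_from" -eq 0 ] && [ "$port_to" -eq 65535 ]; then
  echo "dangerous: $protocol $port_from-$port_to open to $cidr"
fi
```

Running this prints `dangerous: ANY 0-65535 open to 0.0.0.0/0`; a real check would loop over every rule in every group.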
Guides and solutions
- Delete the dangerous rule in each security group or edit it by specifying trusted IPs: https://yandex.cloud/en/docs/vpc/operations/security-group-create
In Virtual Private Cloud, a security group is created; the default security group is not used
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | network.network-firewall |
Description
Automatic verification
This control automatically checks for the presence of custom security groups in VPC networks.
A security group (SG) is a resource created at the cloud network level. Once created, a security group can be used in Yandex Cloud services to control network access to an object it applies to.
A default security group (DSG) is created automatically while creating a new cloud network. The default security group has the following properties:
- It will allow any network traffic, both egress and ingress, in the new cloud network.
- It applies to traffic passing through all subnets in the network where the DSG is created.
- It is only used if no security group is explicitly assigned to the object yet.
- You cannot delete the DSG: it is deleted automatically when deleting the network.
The default security group is a convenient but insecure mechanism that automatically allows all network traffic (incoming and outgoing) for your network objects. While simplifying the initial setup, such openness creates significant risks:
- Attackers can get access to resources through public interfaces.
- Uncontrolled traffic makes your network more vulnerable to DDoS attacks and port scanning.
- The DSG remains active until you assign another security group to the object.
We recommend creating a security group of your own with rules explicitly allowing only the traffic you need (e.g., HTTP/HTTPS for web servers or SSH for administration) and assigning this group to your cloud objects (VMs, Kubernetes clusters, etc.) to override the DSG.
This is important because, without your own rules, cloud resources remain open to any and all connections from the internet, whereas your own security groups enforce the principle of least privilege, reducing the attack surface.
You can combine security groups by assigning up to five groups per object for more flexible access control.
Guides and solutions
- Create a security group in each Virtual Private Cloud with restricted access rules, so that it can be assigned to cloud objects.
Serverless Containers/Cloud Functions uses the VPC internal network
| kind | severity | ID |
| --- | --- | --- |
| manual | information | network.serverless-uses-vpc |
Description
By default, the function is launched in an isolated IPv4 network with a NAT gateway enabled. For this reason, only public IPv4 addresses are available, and you cannot make the address permanent.
Networking between two functions, as well as between functions and user resources, is limited:
- Incoming connections are not supported. For example, you cannot access the internal components of a function over the network, even if you know the IP address of its instance.
- Outgoing connections are supported via TCP, UDP, and ICMP. For example, a function can access a Yandex Compute Cloud VM or a Yandex Managed Service for YDB DB on the user's network.
- Functions are cross-zoned: you cannot explicitly specify a subnet or select an availability zone to run a function.
If necessary, you can specify a cloud network in the function settings. In that case:
- The function will be executed in the specified cloud network.
- While being executed, the function will get an IP address in the relevant subnet and access to all the network resources.
- The function will have access not only to the internet but also to user resources located in the specified network, such as databases, virtual machines, etc.
- The function will have an IP address within the 198.19.0.0/16 range when accessing user resources.
- You can only specify a single network for functions, containers, and API gateways that reside in the same cloud.
Guides and solutions to use
- In the management console, select the cloud or folder to check functions in.
- Go to Cloud Functions.
- Open the function.
- In the object settings, go to the Edit function version tab.
- In the Network field, select the cloud network you need.
- Click Save changes.
No public access to managed YDB
| kind | severity | ID |
| --- | --- | --- |
| automatic | low | network.ydb-public |
Description
Automatic verification
This control automatically checks for public access settings on YDB clusters.
When accessing the database in dedicated mode, we recommend that you use it inside VPC and disable public access to it from the internet. In serverless mode, the database can be accessed from the internet. You must therefore take this into account when modeling threats to your infrastructure. For more information about the operating modes, see the Serverless and dedicated modes section in the Managed Service for YDB documentation.
When setting up database permissions, use the principle of least privilege.
Guides and solutions
- For more information about the operating modes, see the Serverless and dedicated modes section in the Managed Service for YDB documentation
- When setting up database permissions, use the principle of least privilege.
Yandex Audit Trails is enabled at the organization level
| kind | severity | ID |
| --- | --- | --- |
| automatic | high | o11y.audit-trails |
Description
Automatic verification
This control automatically checks for Yandex Audit Trails service configuration at the organization level.
The main tool for collecting Yandex Cloud level logs is Yandex Audit Trails. This service allows you to collect audit logs about events happening to Yandex Cloud resources and upload these logs to Yandex Object Storage buckets or Cloud Logging log groups for further analysis or export. For information on how to start collecting logs, see this guide.
Audit Trails audit logs may contain two types of events: management events and data events.
Management events are actions you take to configure Yandex Cloud resources, such as creating, updating, or deleting infrastructure components, users, or policies. Data events are updates and actions performed on data and resources within Yandex Cloud services. By default, Audit Trails does not log data events. You need to enable collection of data event audit logs individually for each supported service.
To learn more, see Comparing management and data event logs.
To collect metrics, analyze Yandex Cloud-level events, and set up notifications, we recommend using Yandex Monitoring. For example, it can help you track spikes in Compute Cloud workload, Application Load Balancer RPS, or significant changes in Identity and Access Management event statistics.
You can also use Monitoring to monitor the health of the Audit Trails service itself and track security events. You can export metrics to a SIEM system via the API; see this guide.
Solution: Monitoring Audit Trails and security events using Monitoring
You can export audit logs to Cloud Logging or Data Streams log group and to a customer's SIEM system to analyze information about events and incidents.
List of important Yandex Cloud-level events to search for in audit logs:
Solution: Searching for important security events in audit logs
Guides and solutions
- You can enable Yandex Audit Trails at the folder, cloud, or organization level. We recommend enabling it at the level of the entire organization: this lets you collect audit logs in a centralized manner, e.g., to a separate security cloud.
Data events are monitored
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | o11y.data-plane-events |
Description
A data event audit log is a JSON object with a record of events related to Yandex Cloud resources. Data event monitoring makes it easier for you to collect additional events from cloud services and, as a result, effectively respond to security incidents in clouds. This also helps you ensure your cloud infrastructure meets regulatory requirements and industry standards. For example, you can keep track of your employees' access permissions to sensitive data stored in buckets.
You need to enable collection of data event audit logs individually for each supported service.
Guides and solutions to use
We recommend choosing Get all events for Yandex Identity and Access Management and Yandex Cloud DNS, as well as for the following services if used:
- Yandex Certificate Manager
- Yandex Compute Cloud
- Yandex Key Management Service
- Yandex Lockbox
- Yandex Managed Service for ClickHouse®
- Yandex Managed Service for Kubernetes®
- Yandex StoreDoc
- Yandex Managed Service for MySQL®
- Yandex Managed Service for PostgreSQL
- Yandex Managed Service for Valkey™
- Yandex Object Storage
- Yandex Smart Web Security
- Yandex WebSQL
The Object lock feature is enabled in Object Storage
| kind | severity | ID |
| --- | --- | --- |
| manual | medium | s3.used-object-lock |
Description
When processing critical data in buckets, it is necessary to ensure protection against deletion and maintain version backups. This can be achieved using mechanisms for versioning, lifecycle management, and object version locking.
Bucket versioning is the ability to store a history of object versions. Each version represents a full copy of the object and occupies the corresponding amount of space in Object Storage. Using version management, you can protect your data both from unintentional user actions and from application failures.
If an object is deleted or modified with versioning enabled, a new version of the object with a new ID is actually created. When an object is deleted, it becomes unavailable for reading, but its version is retained and can be restored.
The retention period for critical data in the bucket is determined by the client's information security (IS) requirements and information security standards. For example, the PCI DSS standard stipulates that audit logs must be retained for at least one year, with at least three months of data available online.
Guides and solutions to use
For more information about setting up versioning, see Bucket versioning in the Object Storage guide.
For more information about lifecycles, see Bucket object lifecycles and Bucket object lifecycle configuration in the Object Storage guide.
In addition, to protect object versions against deletion, use object locks. For more information about object lock types and how to enable them, refer to the guide.
Access through control ports is only allowed for trusted IPs
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | trusted-ip |
Description
We recommend that you only allow access to your cloud infrastructure through control ports for trusted IP addresses.
This check displays a list of all security groups containing broad rules that allow access through control ports:
- Port range: 22, 3389, or 21.
- Protocol: TCP.
- Source: CIDR.
- CIDR blocks: IPv4 0.0.0.0/0 or IPv6 ::/0 (access allowed from any address).
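A minimal sketch of this check for a single rule (the field values are illustrative, not real API output):

```shell
#!/bin/sh
# Sketch: flag rules that open a control port (SSH 22, RDP 3389, FTP 21)
# to the whole internet. Field values are an illustrative example.
port=22
cidr="0.0.0.0/0"

case "$port" in
  22|3389|21)
    if [ "$cidr" = "0.0.0.0/0" ] || [ "$cidr" = "::/0" ]; then
      echo "control port $port must be limited to trusted IPs"
    fi
    ;;
esac
```

A rule on the same ports but with a narrow source, e.g. 198.51.100.17/32, would pass this check silently.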
Guides and solutions to use
Make sure your security groups' rules only allow access to your infrastructure through control ports for trusted IP addresses.
If such access is allowed for a broad range of addresses, specify the trusted IP addresses in the relevant access rules:
- In the management console, select the folder where your security group resides.
- Go to Virtual Private Cloud.
- In the left-hand panel, select Security groups and, in the list that opens, click the line with the group in question.
- In the top-right corner, click Edit.
- In the Rules section, in the line with the rule allowing access through control ports for a broad range of addresses, click ... and select Edit.
- In the CIDR blocks field, enter only the trusted address for which access will be allowed, e.g., 198.51.100.17/32. To add several trusted addresses to a rule, click Add.
- Click Save to save the rule settings.
- Click Save to save the security group settings.
Access to Kubernetes components through control ports is only allowed for trusted IPs
| kind | severity | ID |
| --- | --- | --- |
| automatic | medium | trusted-ip-k8s |
Description
We recommend that you allow access to Kubernetes components in your cloud infrastructure through control ports for trusted IP addresses only.
This check displays a list of all security groups containing broad rules that allow access through control ports:
- Port range: 22, 3389, or 21.
- Protocol: TCP.
- Source: CIDR.
- CIDR blocks: IPv4 0.0.0.0/0 or IPv6 ::/0 (access allowed from any address).
Guides and solutions to use
Make sure your security groups' rules allow access to Kubernetes components through control ports for trusted IP addresses only.
If such access is allowed for a broad range of addresses, specify the trusted IP addresses in the relevant access rules:
- In the management console, select the folder where your security group resides.
- Go to Virtual Private Cloud.
- In the left-hand panel, select Security groups and, in the list that opens, click the line with the group in question.
- In the top-right corner, click Edit.
- In the Rules section, in the line with the rule allowing access through control ports for a broad range of addresses, click ... and select Edit.
- In the CIDR blocks field, enter only the trusted address for which access will be allowed, e.g., 198.51.100.17/32. To add several trusted addresses to a rule, click Add.
- Click Save to save the rule settings.
- Click Save to save the security group settings.
KSPM — Kubernetes® Security Posture Management
Rules for checking Kubernetes cluster configuration.
Restrictive permissions for Kubelet service file are set
| kind | severity | ID |
| --- | --- | --- |
| HostSecurity | Medium | host-security.kubelet-service-file-perm-600 |
Description
The kubelet service file controls various parameters that set the behavior of the kubelet service in the worker node.
You should restrict its file permissions to maintain the integrity of the file.
The file should be writable by only the administrators on the system.
Recommendations
To perform the audit manually:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
stat -c %a /etc/systemd/system/kubelet.service.d/kubeadm.conf
Verify that the permissions are set to 600 or more restrictive.
Remediation:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
chmod 600 /etc/systemd/system/kubelet.service.d/kubeadm.conf
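The audit and remediation steps can be combined into one sketch. It uses a temporary file as a stand-in so it does not touch a real node; on a worker node you would point it at kubeadm.conf instead:

```shell
#!/bin/sh
# Sketch: audit a file's permissions and remediate to 600 if needed.
# A temporary file stands in for kubeadm.conf in this demo.
f=$(mktemp)
chmod 644 "$f"                      # simulate an overly permissive file

perm=$(stat -c %a "$f")
case "$perm" in
  600|400|200|000) echo "compliant: $perm" ;;
  *) echo "remediating: $perm -> 600"
     chmod 600 "$f" ;;
esac

stat -c %a "$f"                     # prints 600 after remediation
rm -f "$f"
```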
Kubelet service file ownership is set to root:root
| kind | severity | ID |
| --- | --- | --- |
| HostSecurity | Medium | host-security.kubelet-service-file-owner-root |
Description
The kubelet service file controls various parameters that set the behavior of the kubelet service in the worker node.
You should set its file ownership to maintain the integrity of the file.
The file should be owned by root:root.
Recommendations
To perform the audit manually:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
stat -c %U:%G /etc/systemd/system/kubelet.service.d/kubeadm.conf
Verify that the ownership is set to root:root.
Remediation:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
chown root:root /etc/systemd/system/kubelet.service.d/kubeadm.conf
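A sketch of the ownership audit follows. It only reports, since chown root:root requires root privileges, and it uses a temporary file as a stand-in for the real node file:

```shell
#!/bin/sh
# Sketch: audit file ownership against the root:root requirement.
# A temporary file stands in for kubeadm.conf in this demo.
f=$(mktemp)
owner=$(stat -c %U:%G "$f")

if [ "$owner" = "root:root" ]; then
  echo "compliant"
else
  echo "remediation needed: chown root:root $f (current: $owner)"
fi
rm -f "$f"
```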
Restrictive permissions for kubeconfig configuration file are set
| kind | severity | ID |
| --- | --- | --- |
| HostSecurity | Medium | host-security.kubelet-conf-600 |
Description
The kubelet.conf file is the kubeconfig file for the node. It controls various parameters that set the behavior and identity of the worker node.
You should restrict its file permissions to maintain the integrity of the file.
The file should be writable by only the administrators on the system.
Recommendations
To perform the audit manually:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
stat -c %a /etc/kubernetes/kubelet.conf
Verify that the permissions are 600 or more restrictive.
Remediation:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
chmod 600 /etc/kubernetes/kubelet.conf
The owner of kubeconfig configuration file is set to root:root
| kind | severity | ID |
| --- | --- | --- |
| HostSecurity | Medium | host-security.kubelet-conf-owner-root |
Description
The kubelet.conf file is the kubeconfig file for the node. It controls various parameters that set the behavior and identity of the worker node.
You should set its file ownership to maintain the integrity of the file.
The file should be owned by root:root.
Recommendations
To perform the audit manually:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
stat -c %U:%G /etc/kubernetes/kubelet.conf
Verify that the ownership is set to root:root.
Remediation:
Run the command below (adjusting for the file location on your system) on each worker node.
For example:
chown root:root /etc/kubernetes/kubelet.conf
Restrictive permissions for Kubelet configuration file are set
kind: HostSecurity
severity: Medium
ID: host-security.kubelet-config-permissions-600
Description
Ensure that if the kubelet refers to a configuration file with the --config argument, that file has permissions of 600 or more restrictive.
The kubelet reads various parameters, including security settings, from a config file specified by the --config argument.
If this file is specified, you should restrict its file permissions to maintain the integrity of the file.
The file should be writable by only the administrators on the system.
Recommendations
To perform the audit manually:
Run the command below (based on the file location on your system) on each worker node.
For example:
stat -c %a /var/lib/kubelet/config.yaml
Verify that the permissions are set as 600 or more restrictive.
Remediation:
Run the following command (using the config file location identified in the Audit step):
chmod 600 /var/lib/kubelet/config.yaml
The owner of Kubelet configuration file is set to root:root
kind: HostSecurity
severity: Medium
ID: host-security.kubelet-config-owner-root
Description
Ensure that if the kubelet refers to a configuration file with the --config argument, that file is owned by root:root.
The kubelet reads various parameters, including security settings, from a config file specified by the --config argument.
If this file is specified, you should set its ownership to maintain the integrity of the file.
The file should be owned by root:root.
Recommendations
To perform the audit manually:
Run the command below (based on the file location on your system) on each worker node.
For example:
stat -c %U:%G /var/lib/kubelet/config.yaml
Verify that the ownership is set to root:root.
Remediation:
Run the following command (using the config file location identified in the Audit step):
chown root:root /var/lib/kubelet/config.yaml
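The four file checks above can be audited in one pass. The sketch below is a minimal example, assuming a common kubeadm-style layout; the file list and paths are assumptions to adjust for your distribution, and it only loosens nothing: files already stricter than 600 (e.g. 400) are left alone.

```shell
# Sketch: audit and, where needed, tighten permissions and ownership of the
# kubelet-related files covered by the checks above.
tighten_kubelet_file() {
  f=$1
  [ -e "$f" ] || { echo "skip: $f not found"; return 0; }
  echo "$f perms=$(stat -c %a "$f") owner=$(stat -c %U:%G "$f")"
  case "$(stat -c %a "$f")" in
    600|400|200|0) : ;;               # already 600 or more restrictive
    *) chmod 600 "$f" ;;
  esac
  # chown requires root; ignore the failure when auditing as a regular user
  [ "$(stat -c %U:%G "$f")" = root:root ] || chown root:root "$f" 2>/dev/null || true
}

# File list is an assumption based on the checks in this section:
for f in /etc/systemd/system/kubelet.service.d/kubeadm.conf \
         /etc/kubernetes/kubelet.conf \
         /var/lib/kubelet/config.yaml; do
  tighten_kubelet_file "$f"
done
```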
Requests from anonymous users to Kubelet server are disabled
kind: HostSecurity
severity: Medium
ID: host-security.anonymous-auth-false
Description
Disable anonymous requests to the Kubelet server.
When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests.
These requests are then served by the Kubelet server.
You should rely on authentication to authorize access and disallow anonymous requests.
Recommendations
To perform the audit manually:
If using a Kubelet configuration file, check that authentication: anonymous: enabled is set to false.
Run the following command on each node:
ps -ef | grep kubelet
Verify that the --anonymous-auth argument is set to false.
This executable argument may be omitted, provided there is a corresponding entry set to false in the Kubelet config file.
Remediation:
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false.
If using executable arguments, edit the kubelet service file /etc/kubernetes/kubelet.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable:
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
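The config-file side of this audit can be scripted. The sketch below is a rough check, not a YAML parser: it looks for an "enabled: false" line shortly after "anonymous:", and the default config path in the usage comment is an assumption.

```shell
# Sketch: report whether a kubelet config file explicitly disables
# anonymous auth.
check_anonymous_auth() {
  cfg=$1
  if grep -A2 'anonymous:' "$cfg" | grep -qE 'enabled:[[:space:]]*false'; then
    echo "OK: anonymous auth disabled in $cfg"
  else
    echo "FAIL: anonymous auth not explicitly disabled in $cfg"
  fi
}
# Typical usage on a node (path is an assumption):
# check_anonymous_auth /var/lib/kubelet/config.yaml
```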
Only explicitly authorized requests to Kubelet server are allowed
kind: HostSecurity
severity: Medium
ID: host-security.auth-mode-not-always-allow
Description
Do not allow all requests. Enable explicit authorization.
Kubelets, by default, allow all authenticated requests (even anonymous ones) without needing explicit authorization checks from the apiserver.
You should restrict this behavior and only allow explicitly authorized requests.
Recommendations
To perform the audit manually:
Run the following command on each node:
ps -ef | grep kubelet
If the --authorization-mode argument is present, check that it is not set to AlwaysAllow.
If it is not present, check that there is a Kubelet config file specified by --config, and that file sets authorization: mode to something other than AlwaysAllow.
It is also possible to review the running configuration of a Kubelet via the /configz endpoint on the Kubelet API port (typically 10250/TCP).
Accessing these with appropriate credentials will provide details of the Kubelet's configuration.
Remediation:
If using a Kubelet config file, edit the file to set authorization: mode to Webhook.
If using executable arguments, edit the kubelet service file /etc/kubernetes/kubelet.conf on each worker node and set the below parameter in KUBELET_AUTHZ_ARGS variable:
--authorization-mode=Webhook
Based on your system, restart the kubelet service.
For example:
systemctl daemon-reload
systemctl restart kubelet.service
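Extracting a flag from the kubelet command line captured with ps can be done with a small helper; the sketch below (the command line shown is a hypothetical example) prints the flag's value, or nothing when the flag is absent and the config file governs the setting.

```shell
# Sketch: extract the value of a --flag=value style argument from a kubelet
# command line, as captured from `ps -ef | grep kubelet`.
kubelet_flag() {
  flag=$1; cmdline=$2
  printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n "s/^--${flag}=//p"
}

# Hypothetical command line for illustration:
mode=$(kubelet_flag authorization-mode \
  '/usr/bin/kubelet --authorization-mode=Webhook --config=/var/lib/kubelet/config.yaml')
if [ "$mode" = AlwaysAllow ]; then
  echo "FAIL: --authorization-mode=AlwaysAllow"
else
  echo "mode=${mode:-(not set on command line; check the config file)}"
fi
```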
Kubelet authentication via certificates is enabled
kind: HostSecurity
severity: Medium
ID: host-security.client-ca-file-set
Description
Enable Kubelet authentication using certificates.
The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet's port-forwarding functionality.
These connections terminate at the kubelet's HTTPS endpoint.
By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks.
Enabling Kubelet certificate authentication ensures that the apiserver can authenticate the Kubelet before submitting any requests.
Recommendations
To perform the audit manually:
Run the following command on each node:
ps -ef | grep kubelet
Verify that the --client-ca-file argument exists and is set to the location of the client certificate authority file.
If the --client-ca-file argument is not present, check that there is a Kubelet config file specified by --config, and that the file sets authentication: x509: clientCAFile to the location of the client certificate authority file.
Remediation:
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to the location of the client CA file.
If using command line arguments, edit the kubelet service file /etc/kubernetes/kubelet.conf on each worker node and set the below parameter in KUBELET_AUTHZ_ARGS variable:
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service.
For example:
systemctl daemon-reload
systemctl restart kubelet.service
Kubelet is allowed to manage iptables
kind: HostSecurity
severity: Medium
ID: host-security.make-iptables-util-chains-true
Description
Allow Kubelet to manage iptables.
Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods.
It is recommended to let kubelets manage the changes to iptables.
This ensures that the iptables configuration remains in sync with pods networking configuration.
Manually configuring iptables while the pod network configuration changes dynamically might hamper communication between pods/containers and with the outside world.
You might end up with iptables rules that are too restrictive or too open.
Recommendations
To perform the audit manually:
Run the following command on each node:
ps -ef | grep kubelet
Verify that if the --make-iptables-util-chains argument exists, then it is set to true.
If the --make-iptables-util-chains argument does not exist, and there is a Kubelet config file specified by --config, verify that the file does not set makeIPTablesUtilChains to false.
Remediation:
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file /etc/kubernetes/kubelet.conf on each worker node and remove the --make-iptables-util-chains argument from the KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service.
For example:
systemctl daemon-reload
systemctl restart kubelet.service
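Because the flag defaults to true, both "absent" and "true" pass this check; only an explicit false fails it. A minimal config-file sketch (a naive grep, not a YAML parser):

```shell
# Sketch: fail only when the config file explicitly sets
# makeIPTablesUtilChains to false; absence means the default (true) applies.
check_iptables_chains() {
  cfg=$1
  if grep -qE 'makeIPTablesUtilChains:[[:space:]]*false' "$cfg"; then
    echo "FAIL: makeIPTablesUtilChains is disabled in $cfg"
  else
    echo "OK: kubelet manages iptables (explicitly or by default)"
  fi
}
```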
Kubelet client certificate rotation is enabled
kind: HostSecurity
severity: Medium
ID: host-security.rotate-certs-not-false
Description
Enable kubelet client certificate rotation.
The --rotate-certificates setting causes the kubelet to rotate its client certificates by creating new CSRs as its existing credentials expire.
This automated periodic rotation ensures that there is no downtime due to expired certificates, thus addressing availability in the CIA security triad.
Note: This recommendation only applies if you let kubelets get their certificates from the API server.
In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself.
Note: This feature also requires the RotateKubeletClientCertificate feature gate to be enabled (which is the default since Kubernetes v1.7).
Recommendations
To perform the audit manually:
Run the following command on each node:
ps -ef | grep kubelet
Verify that the --rotate-certificates argument is not present, or is set to true.
If the --rotate-certificates argument is not present, verify that if there is a Kubelet config file specified by --config, that file does not contain rotateCertificates: false.
Remediation:
If using a Kubelet config file, edit the file to add the line rotateCertificates: true or remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file /etc/kubernetes/kubelet.conf on each worker node and remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS variable or set --rotate-certificates=true.
Based on your system, restart the kubelet service.
For example:
systemctl daemon-reload
systemctl restart kubelet.service
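A quick way to observe rotation in practice is to inspect the kubelet client certificate's expiry: with rotation working, the notAfter date keeps moving forward over time. A minimal sketch; the certificate path in the usage comment is an assumption for kubeadm-based nodes.

```shell
# Sketch: print the notAfter date of a PEM certificate.
cert_expiry() {
  openssl x509 -in "$1" -noout -enddate
}
# Typical kubelet client cert location (an assumption; adjust per node):
# cert_expiry /var/lib/kubelet/pki/kubelet-client-current.pem
```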