Yandex project
© 2025 Yandex.Cloud LLC

Creating and configuring a Managed Service for Kubernetes cluster with no internet access

Written by
Yandex Cloud
Updated at May 5, 2025
  • Required paid resources
  • Prepare the infrastructure for Managed Service for Kubernetes
  • Set up a virtual machine
  • Check cluster availability
  • (Optional) Connect a private Docker image registry
  • Delete the resources you created

You can create and configure a Managed Service for Kubernetes cluster with no internet connectivity. For this, you will need the following configuration:

  • Managed Service for Kubernetes cluster and node group without a public address. You can only connect to such a cluster using a Yandex Cloud virtual machine.
  • The cluster and node group are hosted in subnets with no internet access.
  • Service accounts have no roles permitting the use of resources with internet access, e.g., Yandex Network Load Balancer.
  • Cluster security groups restrict incoming and outgoing traffic.

To create a Managed Service for Kubernetes cluster with no internet access:

  1. Prepare the infrastructure for Managed Service for Kubernetes.
  2. Set up a virtual machine.
  3. Check cluster availability.
  4. (Optional) Connect a private Docker image registry.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Fee for the Managed Service for Kubernetes cluster: use of the master (see Managed Service for Kubernetes pricing).
  • Fee for cluster nodes and VMs: use of computing resources, operating system, and storage (see Compute Cloud pricing).
  • Fee for the public IP address of the VM used to connect to the cluster (see Virtual Private Cloud pricing).
  • Key Management Service fee: number of active key versions (in Active or Scheduled for destruction status) and completed cryptographic operations (see Key Management Service pricing).

Prepare the infrastructure for Managed Service for Kubernetes

Manually
Terraform
  1. Create service accounts:

    • resource-sa with the k8s.clusters.agent, logging.writer, and kms.keys.encrypterDecrypter roles for the folder where the Kubernetes cluster is created. This account will be used to create the resources required for the Kubernetes cluster.
    • node-sa with the container-registry.images.puller role. Nodes will pull the required Docker images from the registry on behalf of this account.

    Tip

    You can use the same service account to manage your Kubernetes cluster and its node groups.

  2. Create a Yandex Key Management Service symmetric encryption key with the following parameters:

    • Name: my-kms-key.
    • Encryption algorithm: AES-256.
    • Rotation period, days: 365.
  3. Create the my-net network.

  4. Create a subnet named my-subnet with the internal. domain name.

  5. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  6. Create a Managed Service for Kubernetes cluster with the following parameters:

    • Service account for resources: resource-sa.
    • Service account for nodes: node-sa.
    • Encryption key: my-kms-key.
    • Public address: No address.
    • Cloud network: my-net.
    • Subnet: my-subnet.
    • Security groups: Select the previously created security groups containing the rules for service traffic and Kubernetes API access.
    • Cluster CIDR: 172.19.0.0/16.
    • Services CIDR: 172.20.0.0/16.
    • Write logs: Enabled.
    • Cluster Autoscaler logs: Enabled.
    • Event logs: Enabled.
    • Kubernetes API server logs: Enabled.
  7. In the Managed Service for Kubernetes cluster, create a node group with the following parameters:

    • Public address: No address
    • Security groups: Select the previously created security groups containing the rules for service traffic, connection to the services from the internet, and connection to nodes over SSH.
    • Location: Subnet named my-subnet.
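For reference, the manual steps above can also be sketched with the YC CLI. This is a hedged sketch, not the tutorial's own procedure: it assumes yc is installed and authenticated, resource names follow the tutorial, and flag spellings should be verified against `yc managed-kubernetes cluster create --help` for your CLI version.

```shell
#!/bin/sh
# Hedged YC CLI sketch of the manual steps above; verify flags
# against `yc --help` for your CLI version before running.
if command -v yc >/dev/null 2>&1; then
  # Symmetric KMS key for secret encryption (365 days = 8760h).
  yc kms symmetric-key create --name my-kms-key \
    --default-algorithm aes-256 --rotation-period 8760h

  # Network and subnet with no NAT gateway, so no internet access.
  yc vpc network create --name my-net
  yc vpc subnet create --name my-subnet --network-name my-net \
    --zone ru-central1-a --range 10.0.0.0/24

  # No --public-ip flag: the master gets no public address.
  yc managed-kubernetes cluster create --name my-private-cluster \
    --network-name my-net \
    --service-account-name resource-sa \
    --node-service-account-name node-sa \
    --kms-key-name my-kms-key \
    --cluster-ipv4-range 172.19.0.0/16 \
    --service-ipv4-range 172.20.0.0/16 \
    --zone ru-central1-a --subnet-name my-subnet
else
  echo "yc CLI not found; treat the commands above as illustrative"
fi
sketch_done=1   # marker for the snippet's self-check
```

Security groups and the node group are omitted here; create them as described above (the node group needs its own `yc managed-kubernetes node-group create` call, likewise without a public IP).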
  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create the provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-cluster-with-no-internet.tf configuration file to the same working directory. This file will be used to create the following resources:

    • Network.

    • Route table.

    • Subnets.

    • Managed Service for Kubernetes cluster.

    • Managed Service for Kubernetes node group.

    • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    • Service accounts for Kubernetes resources and nodes.

    • Yandex Key Management Service symmetric encryption key.

    The file is generated using the terraform-yc-vpc and terraform-yc-kubernetes modules. For more information on configuring the resources these modules create, see their documentation pages.

  6. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    If there are any errors in the configuration files, Terraform will point them out.

  7. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Set up a virtual machine

As the Managed Service for Kubernetes cluster has no internet access, you can only connect to it from a VM that is in the same network as the cluster. Therefore, to check the cluster availability, set up the infrastructure:

  1. Create the required resources:

    Manually
    Terraform
    1. Create a service account named vm-sa with the k8s.cluster-api.cluster-admin and k8s.admin roles. This account will be used to connect to the Managed Service for Kubernetes cluster.

    2. Create a security group named vm-security-group and specify a rule for incoming traffic in it:

      • Port range: 22.
      • Protocol: TCP.
      • Source: CIDR.
      • CIDR blocks: 0.0.0.0/0.
    3. Create a Linux VM with the following parameters:

      • Subnet: my-subnet.
      • Public IP address: Auto. Alternatively, you can reserve a static public IP address and assign it to the new VM.
      • Security groups: vm-security-group.
      • Service account: vm-sa.
    1. Download the virtual-machine-for-k8s.tf configuration file to the directory containing the k8s-cluster-with-no-internet.tf file.

      This file describes:

      • Service account for VM.
      • Security group for VM.
      • VM.
    2. Specify the following in the virtual-machine-for-k8s.tf file:

      • Folder ID.
      • ID of the network created together with the Managed Service for Kubernetes cluster.
      • ID of the subnet created together with the Managed Service for Kubernetes cluster and residing in the ru-central1-a availability zone. You can find this zone in the VM settings.
      • Username to be used for connection to the VM over SSH.
      • Absolute path to the public part of the SSH key for connection to the VM.
    3. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      If there are any errors in the configuration files, Terraform will point them out.

    4. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  2. Connect to the VM over SSH:

    ssh <username>@<VM_public_IP_address>
    

    Where <username> is the VM account username.

  3. Install the Yandex Cloud command line interface (YC CLI).

  4. Create a YC CLI profile.

  5. Install kubectl and set it up to work with the created cluster.
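In that last step, kubectl can be configured via the YC CLI; what matters for this tutorial is the `--internal` flag, since the master has no public address. A sketch (`my-private-cluster` is a placeholder for your cluster name):

```shell
#!/bin/sh
# Add kubeconfig credentials that point kubectl at the master's
# internal IP; needed because this cluster has no public endpoint.
if command -v yc >/dev/null 2>&1; then
  yc managed-kubernetes cluster get-credentials my-private-cluster \
    --internal
else
  echo "yc CLI not found; run this on the VM configured above"
fi
sketch_done=1   # marker for the snippet's self-check
```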

Check cluster availability

Run this command on the VM:

kubectl cluster-info

The command will return the following Managed Service for Kubernetes cluster information:

Kubernetes control plane is running at https://<cluster_address>
CoreDNS is running at https://<cluster_address>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
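As a further check, you can confirm that the worker nodes registered and are Ready. This is a sketch that only does anything where kubectl can reach the cluster:

```shell
#!/bin/sh
# List worker nodes to confirm the node group joined the cluster.
# Skips quietly when no cluster is reachable from this machine.
if kubectl get nodes --request-timeout=3s >/dev/null 2>&1; then
  kubectl get nodes -o wide
else
  echo "kubectl cannot reach a cluster; skipping node check"
fi
check_done=1   # marker for the snippet's self-check
```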

(Optional) Connect a private Docker image registry

You can connect a private Docker image registry to your Managed Service for Kubernetes cluster. To authenticate in the registry and connect to it over HTTPS, the cluster needs certificates issued by a certificate authority (CA). Use a DaemonSet controller to add the certificates to cluster nodes and update them automatically. It runs the following process in its pods:

  1. A Bash script continuously checks cluster nodes for the required certificates.
  2. If the certificates are missing or outdated, they are copied from the Kubernetes secret and updated.
  3. The containerd runtime environment is restarted.
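The check-and-copy core of that loop can be reproduced locally as a single pass. This is a sketch: temporary directories stand in for the real mounts, and the update-ca-certificates call and containerd restart are only noted in comments.

```shell
#!/bin/sh
# One iteration of the DaemonSet loop with temporary directories:
# SRC stands in for the mounted secret (/mnt/user-cert-path),
# DST for the node's /usr/local/share/ca-certificates.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "fake-ca" > "$SRC/corp-ca.crt"    # hypothetical certificate

# Same comparison the DaemonSet script runs: recursive, dotfiles ignored.
if ! diff -x '.*' -r "$SRC"/ "$DST" >/dev/null 2>&1; then
  echo "Certificates differ: syncing"
  rm -rf "${DST:?}"/*                  # drop stale certificates
  cp "$SRC"/* "$DST"/                  # install the new set
  # In the real DaemonSet, update-ca-certificates runs here and
  # containerd/dockerd is then killed so systemd restarts it.
else
  echo "Certificates unchanged"
fi
```

On the first pass the directories differ, so the script prints "Certificates differ: syncing" and copies the certificate into place; a second pass would report no changes.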

To configure certificate updates using DaemonSet, do the following on your VM:

  1. Place the certificate (.crt) files on the VM.

  2. Create a file named certificate-updater-namespace.yaml with the namespace configuration. This namespace will be used for DaemonSet operation and isolation:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: certificate-updater
      labels:
        name: certificate-updater
    
  3. Create a certificate-updater-daemonset.yaml file with the DaemonSet configuration:

    File contents
    ---
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: certificate-updater-deny-all
      namespace: certificate-updater
    spec:
      podSelector:
        matchLabels:
          k8s-app: certificate-updater
      policyTypes:
        - Ingress
        - Egress
      ingress: []
      egress:  []
    ---
    apiVersion: "apps/v1"
    kind: DaemonSet
    metadata:
      name: certificate-updater
      namespace: certificate-updater
      labels:
        k8s-app: certificate-updater
        version: 1v
    spec:
      selector:
        matchLabels:
          k8s-app: certificate-updater
      template:
        metadata:
          labels:
            k8s-app: certificate-updater
        spec:
          hostPID: true
          hostIPC: true
          containers:
          - name: certificate-updater
            image: cr.yandex/yc/mk8s-openssl:stable
            command: 
              - sh
              - -c
              - |
                while true; do
                  diff -x '.*' -r /mnt/user-cert-path/ /usr/local/share/ca-certificates
                  if [ $? -ne 0 ];
                    then
                        echo "Removing all old certificates"
                        rm -r /usr/local/share/ca-certificates/*
                        echo "Copying certificates from configmap"
                        cp /mnt/sbin/update-ca-certificates /usr/sbin/
                        cp /mnt/user-cert-path/* /usr/local/share/ca-certificates
          
                        echo "Updating certificate authorities"
                        update-ca-certificates 
    
                        echo "Restarting containerd"
                        ps -x -o pid= -o comm= | awk '$2 ~ "^(containerd|dockerd)$" { print $1 }' | xargs kill
                        # systemd will restart them in under a minute
                    else
                      echo "Doing nothing: certificates have not changed"
                    fi
                  sleep 60
                done
            imagePullPolicy: Never
            securityContext:
              privileged: true
            resources:
              limits:
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
            volumeMounts:
            - mountPath: /etc/
              name: etc
            - mountPath: /usr/local/share/ca-certificates
              name: docker-cert
            - name: secret
              mountPath: /mnt/user-cert-path
            - name: sbin
              mountPath: /mnt/sbin
              readOnly: true
            - name: ca-cert
              mountPath: /usr/share/ca-certificates
          volumes:
          - name: secret
            secret:
              secretName: crt
          - name: sbin
            hostPath:
              path: /usr/sbin/
              type: Directory
          - name: ca-cert
            hostPath:
              path: /usr/share/ca-certificates
              type: Directory
          - name: docker-cert
            hostPath:
              path: /usr/local/share/ca-certificates
              type: DirectoryOrCreate
          - name: etc
            hostPath:
              path: /etc/
              type: Directory
    
  4. Create a namespace:

    kubectl apply -f certificate-updater-namespace.yaml
    
  5. Create a secret with the contents of the certificates issued by the CA:

    kubectl create secret generic crt \
       --from-file=<certificate_file_path>.crt \
       --namespace="certificate-updater"
    

    Specify a certificate file with the .crt extension in the command. To add multiple certificates, repeat the --from-file flag for each file.

    You can check the secret configuration using the command below and see if it contains the information about the certificates:

    kubectl get secret crt -o yaml
    
  6. Create a DaemonSet:

    kubectl apply -f certificate-updater-daemonset.yaml
    

Now you can monitor the state of the DaemonSet controller. As soon as the certificates are updated, the cluster will restart the containerd runtime environment processes.
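For instance, a sketch of such a check (it skips where kubectl has no cluster to talk to):

```shell
#!/bin/sh
# Show DaemonSet status and recent pod logs in the
# certificate-updater namespace created earlier.
if kubectl get namespace certificate-updater --request-timeout=3s >/dev/null 2>&1; then
  kubectl --namespace certificate-updater get daemonset certificate-updater
  kubectl --namespace certificate-updater logs \
    --selector k8s-app=certificate-updater --tail=20
else
  echo "kubectl cannot reach a cluster; skipping"
fi
check_done=1   # marker for the snippet's self-check
```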

Delete the resources you created

Some resources are not free of charge. Delete the resources you no longer need to avoid paying for them.

Manually
Terraform

Delete:

  1. Service accounts.
  2. Key Management Service encryption key.
  3. Security groups.
  4. Managed Service for Kubernetes node group.
  5. Managed Service for Kubernetes cluster.
  6. Virtual machine.
  7. Subnet.
  8. Network.
  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.

If you assigned a static public IP address to your VM, release and delete it.
