Yandex project
© 2025 Yandex.Cloud LLC
Connecting to Container Registry from Virtual Private Cloud

Written by
Yandex Cloud
Updated at May 14, 2025
  • Get your cloud ready
    • Required paid resources
    • Required quotas
  • Configure your CLI profile
  • Set up your environment
    • Install the required tools
  • Deploy your resources
  • Test the solution
  • Tips for production deployment
  • Delete the resources you created

To access Container Registry, cloud resources require internet access. In this tutorial, you will deploy a Yandex Cloud infrastructure that provides access to Container Registry for resources hosted in Virtual Private Cloud that have neither public IP addresses nor internet access through a NAT gateway.

Container Registry uses Object Storage to store Docker images. This solution also provides Object Storage access for Virtual Private Cloud resources.

You can see the solution architecture in the diagram below.

While deploying this Yandex Cloud infrastructure, you will create the following resources:

  • cr-vpc*: Cloud network with the resources to be provided with Container Registry access.
  • cr-nlb: Container Registry internal network load balancer; accepts TCP traffic with destination port 443 and distributes it across the VM instances in its target group.
  • nat-group: Load balancer target group made up of the NAT instances.
  • s3-nlb: Object Storage internal network load balancer; accepts TCP traffic on port 443 and distributes it across its target group instances.
  • nat-a1-vm, nat-b1-vm: NAT instances in the ru-central1-a and ru-central1-b availability zones that route traffic to and from Container Registry and Object Storage, translating the source and destination IP addresses.
  • pub-ip-a1, pub-ip-b1: Public IP addresses of the NAT instances, mapped by the VPC cloud network from their internal IP addresses.
  • storage.yandexcloud.net. and cr.yandex.: Internal DNS zones in the cr-vpc network with A resource records mapping domain names to the IP addresses of the internal network load balancers.
  • test-registry: Test registry in Container Registry.
  • container-registry-<registry_ID>: Object Storage bucket for storing Docker images, automatically created by Container Registry for the registry with this ID.
  • cr-subnet-a, cr-subnet-b: Cloud subnets hosting the NAT instances in the ru-central1-a and ru-central1-b zones.
  • test-cr-vm: VM used to test access to Container Registry.
  • test-cr-subnet-a: Cloud subnet hosting the test VM.

* You can also specify an existing cloud network.

We will use the following internal DNS zones created in Cloud DNS for the cloud network hosting our resources:

  • cr.yandex. and an A resource record mapping the cr.yandex domain name of Container Registry to the IP address of the cr-nlb internal network load balancer.
  • storage.yandexcloud.net. and an A resource record mapping the storage.yandexcloud.net domain name of Object Storage to the IP address of the s3-nlb internal network load balancer.

These records will direct traffic coming from your cloud resources and aimed at Container Registry and Object Storage to internal load balancers that will in turn distribute it across NAT instances.
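Expressed in zone-file notation, the two records amount to something like this sketch (10.10.1.100 and 10.10.1.200 are example load balancer addresses matching the sample output later in this tutorial; yours will differ):

```
; cr.yandex. internal zone: point the registry name at the cr-nlb load balancer
cr.yandex.               300 IN A 10.10.1.100

; storage.yandexcloud.net. internal zone: point the storage name at s3-nlb
storage.yandexcloud.net. 300 IN A 10.10.1.200
```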

To deploy a NAT instance, use this image from Marketplace. It translates the source and target IP addresses to ensure traffic routing to the Container Registry and Object Storage public IP addresses.

By placing the NAT instances in multiple availability zones, you can ensure fault-tolerant access to Container Registry. You can scale the solution for higher workload by adding more NAT instances. Before doing that, consider the internal NLB traffic processing locality.

Only the cloud resources that use this solution can access the registry: the registry access policy allows access only from the public IP addresses of the NAT instances, so you will not be able to reach the registry from any other address. If needed, you can remove this limitation in the Terraform settings.

For more information, see the project repository.

To deploy a cloud infrastructure providing Container Registry access for VPC cloud network resources:

  1. Get your cloud ready.
  2. Configure your CLI profile.
  3. Set up your environment.
  4. Deploy your resources.
  5. Test the solution.
  6. Review the tips for production deployment.

If you no longer need the resources you created, delete them.

Get your cloud ready

Sign up for Yandex Cloud and create a billing account:

  1. Navigate to the management console and log in to Yandex Cloud or register a new account.
  2. On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can navigate to the cloud page to create or select a folder for your infrastructure to operate in.

Learn more about clouds and folders.

Required paid resources

The infrastructure support cost includes:

  • Fee for continuously running VMs (see Yandex Compute Cloud pricing).
  • Fee for using Network Load Balancer (see Yandex Network Load Balancer pricing).
  • Fee for storing pushed Docker images (see Container Registry pricing).
  • Fee for public IP addresses and outbound traffic (see Yandex Virtual Private Cloud pricing).

Required quotas

Make sure your cloud has sufficient quotas that are not used by other projects.

Resources used in this tutorial:

  • Virtual machines: 3
  • VM vCPUs: 6
  • VM RAM: 6 GB
  • Disks: 3
  • HDD size: 30 GB
  • SSD size: 20 GB
  • Network load balancers: 2
  • Load balancer target groups: 1
  • Networks: 1*
  • Subnets: 3
  • Static public IP addresses: 2
  • Security groups: 1
  • DNS zones: 2
  • Registries: 1
  • Service accounts: 1

* Unless you use an existing network, specifying its ID in terraform.tfvars.

Configure your CLI profile

  1. If you do not have the Yandex Cloud CLI yet, install it and sign in as a user.

  2. Create a service account:

    Management console
    CLI
    API
    1. In the management console, select the folder where you want to create a service account.
    2. In the list of services, select Identity and Access Management.
    3. Click Create service account.
    4. Enter a name for the service account, e.g., sa-terraform.
    5. Click Create.

    The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

    To create a service account, run the command below, specifying sa-terraform as the service account name:

    yc iam service-account create --name sa-terraform
    

    Where name is the service account name.

    Result:

    id: ajehr0to1g8b********
    folder_id: b1gv87ssvu49********
    created_at: "2023-06-20T09:03:11.665153755Z"
    name: sa-terraform
    

    To create a service account, use the ServiceAccountService/Create gRPC API call or the create REST API method for the ServiceAccount resource.

  3. Assign the admin role for the folder to the service account:

    Management console
    CLI
    API
    1. In the management console, select your service account folder.
    2. Navigate to the Access bindings tab.
    3. Select sa-terraform from the account list and click Edit roles.
    4. In the dialog that opens, click Add role and select the admin role.

    Run this command:

    yc resource-manager folder add-access-binding <folder_ID> \
       --role admin \
       --subject serviceAccount:<service_account_ID>
    

    To assign a role for a folder to a service account, use the setAccessBindings REST API method for the ServiceAccount resource or the ServiceAccountService/SetAccessBindings gRPC API call.

  4. Set up the CLI profile to run operations under the service account:

    CLI
    1. Create an authorized key for the service account and save it to the file:

      yc iam key create \
      --service-account-id <service_account_ID> \
      --folder-id <ID_of_folder_with_service_account> \
      --output key.json
      

      Where:

      • service-account-id: Service account ID.
      • folder-id: Service account folder ID.
      • output: Authorized key file name.

      Result:

      id: aje8nn871qo4********
      service_account_id: ajehr0to1g8b********
      created_at: "2023-06-20T09:16:43.479156798Z"
      key_algorithm: RSA_2048
      
    2. Create a CLI profile to run operations under the service account:

      yc config profile create sa-terraform
      

      Result:

      Profile 'sa-terraform' created and activated
      
    3. Configure the profile:

      yc config set service-account-key key.json
      yc config set cloud-id <cloud_ID>
      yc config set folder-id <folder_ID>
      

      Where:

      • service-account-key: Service account authorized key file.
      • cloud-id: Cloud ID.
      • folder-id: Folder ID.
    4. Add your credentials to the environment variables:

      export YC_TOKEN=$(yc iam create-token)
      

Set up your environment

Install the required tools

  1. Install Git using the following command (for Debian-based systems):

    sudo apt install git
    
  2. Install Terraform.

Deploy your resources

  1. Clone the GitHub repository and navigate to the yc-cr-private-endpoint directory containing resources for our example:

    git clone https://github.com/yandex-cloud-examples/yc-cr-private-endpoint.git
    cd yc-cr-private-endpoint
    
  2. Open the terraform.tfvars file and edit the following:

    1. Folder ID line:

      folder_id = "<folder_ID>"
      
    2. Line with a list of aggregated prefixes of cloud subnets allowed to access Container Registry:

      trusted_cloud_nets = ["10.0.0.0/8", "192.168.0.0/16"]
      
    terraform.tfvars variable description:

    • folder_id (change required): Folder ID for the solution components. Type: string. Example: b1gentmqf1ve9uc54nfh.
    • vpc_id (optional): ID of an existing cloud network to provide with Container Registry access. If left empty, the system will create a new network. Type: string. Example: enp48c1ndilt42veuw4x.
    • yc_availability_zones (optional): List of availability zones for deploying the NAT instances. Type: list(string). Example: ["ru-central1-a", "ru-central1-b"].
    • subnet_prefix_list (optional): List of prefixes of the cloud subnets hosting the NAT instances (one subnet per availability zone from yc_availability_zones, in the same order). Type: list(string). Example: ["10.10.1.0/24", "10.10.2.0/24"].
    • nat_instances_count (optional): Number of NAT instances to deploy. We recommend an even number so the instances can be evenly distributed across the availability zones. Type: number. Example: 2.
    • registry_private_access (optional): Limits registry access to the public IP addresses of the NAT instances. Set it to true to apply the limitation or false to remove it. Type: bool. Example: true.
    • trusted_cloud_nets (change required): List of aggregated prefixes of cloud subnets allowed to access Container Registry; used in the inbound traffic rule of the NAT instance security groups. Type: list(string). Example: ["10.0.0.0/8", "192.168.0.0/16"].
    • vm_username (optional): NAT instance and test VM username. Type: string. Example: admin.
    • cr_ip (optional): Container Registry public IP address. Type: string. Example: 84.201.171.239.
    • cr_fqdn (optional): Container Registry domain name. Type: string. Example: cr.yandex.
    • s3_ip (optional): Object Storage public IP address. Type: string. Example: 213.180.193.243.
    • s3_fqdn (optional): Object Storage domain name. Type: string. Example: storage.yandexcloud.net.
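    As a minimal sketch, a filled-in terraform.tfvars might look like the fragment below. All values are illustrative (the folder ID is the example from the variable description, and the optional parameters repeat the example values); replace them with your own:

    ```
    # terraform.tfvars (illustrative values)
    folder_id          = "b1gentmqf1ve9uc54nfh"   # replace with your folder ID
    trusted_cloud_nets = ["10.0.0.0/8", "192.168.0.0/16"]

    # Optional parameters (example values):
    yc_availability_zones   = ["ru-central1-a", "ru-central1-b"]
    subnet_prefix_list      = ["10.10.1.0/24", "10.10.2.0/24"]
    nat_instances_count     = 2
    registry_private_access = true
    ```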
  3. Deploy your cloud resources with Terraform:

    1. Initialize Terraform:

      terraform init
      
    2. Check the Terraform file configuration:

      terraform validate
      
    3. Check the list of new cloud resources:

      terraform plan
      
    4. Create the resources:

      terraform apply
      
  4. Once the terraform apply process is complete, the command line will show information you need to connect to the test VM and run Container Registry tests. Later on, you can view this info by running the terraform output command:

    Deployed resource details:

    • cr_nlb_ip_address: IP address of the Container Registry internal load balancer. Example: 10.10.1.100.
    • cr_registry_id: Registry ID in Container Registry. Example: crp1r4h00mj*********.
    • path_for_private_ssh_key: File with the private key for SSH access to the NAT instances and the test VM. Example: ./pt_key.pem.
    • s3_nlb_ip_address: IP address of the Object Storage internal load balancer. Example: 10.10.1.200.
    • test_vm_password: Test VM admin password. Example: v3RCqUrQN?x).
    • vm_username: NAT instance and test VM username. Example: admin.

Test the solution

  1. In the management console, navigate to the folder with the resources you created.

  2. Select Compute Cloud.

  3. Select test-cr-vm from the list of VMs.

  4. In the left-hand menu, select Serial console.

  5. Click Connect.

  6. Enter admin as the username and the password from the terraform output test_vm_password command output (without the quotation marks).

  7. Run this command:

    dig cr.yandex storage.yandexcloud.net
    
  8. Check the DNS server response and make sure Object Storage and Container Registry domain names resolve to the IP addresses of the relevant internal load balancers. The command output will show type A resource records as follows:

    ;; ANSWER SECTION:
    cr.yandex.               300    IN      A       10.10.1.100
    
    ;; ANSWER SECTION:
    storage.yandexcloud.net. 300    IN      A       10.10.1.200
    
  9. View the list of available Docker images:

    docker image list
    

    Result:

    REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
    golang        1.20.5    342*********   8 months ago   777MB
    hello-world   latest    9c7*********   9 months ago   13.3kB
    
  10. Assign a URL to the Docker image in the following format: cr.yandex/<registry_ID>/<Docker_image_name>:<tag>. The docker command below takes the registry ID from an environment variable preset on the test VM:

    docker tag hello-world cr.yandex/$REGISTRY_ID/hello-world:demo
    
    docker image list
    

    Result:

    REPOSITORY                                   TAG       IMAGE ID       CREATED        SIZE
    golang                                       1.20.5    342*********   8 months ago   777MB
    cr.yandex/crp1r4h00mj*********/hello-world   demo      9c7*********   9 months ago   13.3kB
    hello-world                                  latest    9c7*********   9 months ago   13.3kB
    

    Note

    To push Docker images to Container Registry, you need to assign them URLs in this format: cr.yandex/<registry_ID>/<Docker_image_name>:<tag>.
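    The URL format can be sketched in plain shell. The registry ID below is a hypothetical placeholder; on the test VM, the REGISTRY_ID variable is already set to your real registry ID:

    ```shell
    # Build an image URL in the cr.yandex/<registry_ID>/<image>:<tag> format.
    REGISTRY_ID="crp1r4h00mjexample"   # hypothetical; preset on the test VM
    IMAGE_NAME="hello-world"
    TAG="demo"
    IMAGE_URL="cr.yandex/${REGISTRY_ID}/${IMAGE_NAME}:${TAG}"
    echo "${IMAGE_URL}"
    ```

    The resulting string is what you pass to docker tag and docker push.
    
    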

  11. Push the required Docker image to the registry:

    docker push cr.yandex/$REGISTRY_ID/hello-world:demo
    

    Result:

    The push refers to repository [cr.yandex/crp1r4h00mj*********/hello-world]
    01bb4*******: Pushed 
    demo: digest: sha256:7e9b6e7ba284****************** size: 525
    
  12. In the management console, navigate to the folder with the resources you created.

  13. Select Container Registry.

  14. Select test-registry.

  15. Make sure the registry now contains the hello-world repository with the Docker image.

Tips for production deployment

  • When deploying your NAT instances in multiple availability zones, use an even number of VMs to evenly distribute them across the availability zones.

  • When selecting the number of NAT instances, consider the internal NLB traffic processing locality.

  • Once the solution is deployed, reduce the number of NAT instances or update the list of availability zones in the yc_availability_zones parameter only during a pre-scheduled time window. Applying changes may cause interruptions in traffic processing.

  • If, with increased Container Registry workload, a NAT instance demonstrates a high CPU steal time metric value, enable a software-accelerated network for that NAT instance.

  • If you are using your own DNS server, create the following type A resource records in its settings:

    • cr.yandex. (type A record): the IP address of the Container Registry internal load balancer; get it with terraform output cr_nlb_ip_address.
    • storage.yandexcloud.net. (type A record): the IP address of the Object Storage internal load balancer; get it with terraform output s3_nlb_ip_address.
  • Save the pt_key.pem private SSH key for accessing NAT instances to a secure location or recreate it without using Terraform.

  • Once the solution is deployed, SSH access to the NAT instances will be disabled. To enable it, add a rule for inbound SSH traffic (TCP port 22) to the cr-nat-sg security group, allowing access only from trusted IP addresses of admin workstations.

  • Once you have tested the solution, delete the test VM and its subnet.

Delete the resources you created

  • Manually

    1. In the management console, navigate to the folder with the resources you created.
    2. Select Container Registry.
    3. Select test-registry.
    4. Select the hello-world repository.
    5. For each Docker image in the repository, open the actions menu.
    6. In the menu that opens, click Delete.
    7. In the window that opens, click Delete.
  • Using Terraform

    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.
