Connecting to Container Registry from Virtual Private Cloud
To access Container Registry, cloud resources require internet access. In this tutorial, we will deploy Yandex Cloud infrastructure that provides access to Container Registry for Virtual Private Cloud resources that have neither public IP addresses nor internet access through a NAT gateway.
Container Registry uses Object Storage to store Docker images. This solution also provides Object Storage access for Virtual Private Cloud resources.
You can see the solution architecture in the diagram below.
While deploying this Yandex Cloud infrastructure, you will create the following resources:
Name | Description |
---|---|
`cr-vpc`* | Cloud network with the resources to provide with Container Registry access |
`cr-nlb` | Container Registry internal NLB accepting TCP traffic with destination port 443 and distributing it across the VM instances in its target group |
`nat-group` | Load balancer target group of NAT instances |
`s3-nlb` | Object Storage internal NLB accepting TCP traffic on port 443 and distributing it across its target group instances |
`nat-a1-vm`, `nat-b1-vm` | NAT instances residing in the `ru-central1-a` and `ru-central1-b` availability zones for routing traffic to and from Container Registry and Object Storage with translation of the IP addresses of traffic sources and targets |
`pub-ip-a1`, `pub-ip-b1` | VM public IP addresses mapped by the VPC cloud network from their internal IP addresses |
DNS zones and A records | `storage.yandexcloud.net.` and `cr.yandex.` internal DNS zones in the `cr-vpc` network with A resource records mapping domain names to the IP addresses of the internal network load balancers |
`test-registry` | Test registry in Container Registry |
`container-registry-<registry_ID>` | Name of the Object Storage bucket for storing Docker images, automatically created by Container Registry for `<registry_ID>` |
`cr-subnet-a`, `cr-subnet-b` | Cloud subnets to host the NAT instances in the `ru-central1-a` and `ru-central1-b` zones |
`test-cr-vm` | VM used to test access to Container Registry |
`test-cr-subnet-a` | Cloud subnet to host the test VM |
* You can also specify an existing cloud network.
We will use the following internal DNS zones created in Cloud DNS for the cloud network hosting our resources:

- `cr.yandex.` with an `A` resource record mapping the `cr.yandex` domain name of Container Registry to the IP address of the `cr-nlb` internal network load balancer.
- `storage.yandexcloud.net.` with an `A` resource record mapping the `storage.yandexcloud.net` domain name of Object Storage to the IP address of the `s3-nlb` internal network load balancer.
These records will direct traffic coming from your cloud resources and aimed at Container Registry and Object Storage to internal load balancers that will in turn distribute it across NAT instances.
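To make the mechanism concrete, here is a minimal Terraform sketch of one such zone and record. It is illustrative only, not the repository's actual code: the resource names are made up, `var.vpc_id` stands for the `cr-vpc` network ID, and the record value is a placeholder for the `cr-nlb` internal load balancer address.

```
# Illustrative sketch only (not the repository's actual code): an internal DNS
# zone for cr.yandex. attached to the cr-vpc network, with an A record pointing
# at the cr-nlb internal load balancer. Names and the IP address are placeholders.
resource "yandex_dns_zone" "cr_yandex" {
  name             = "cr-yandex-zone"
  zone             = "cr.yandex."
  public           = false
  private_networks = [var.vpc_id]   # ID of the cr-vpc cloud network
}

resource "yandex_dns_recordset" "cr_yandex_a" {
  zone_id = yandex_dns_zone.cr_yandex.id
  name    = "cr.yandex."
  type    = "A"
  ttl     = 300
  data    = ["10.10.1.100"]         # cr-nlb internal load balancer IP address
}
```

The `storage.yandexcloud.net.` zone is set up the same way, pointing at the `s3-nlb` load balancer address.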
To deploy a NAT instance, use this image from Marketplace. It translates the source and target IP addresses to ensure traffic routing to the Container Registry and Object Storage public IP addresses.
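For illustration, a single NAT instance such as `nat-a1-vm` could be described in Terraform roughly as below. This is a sketch under assumptions rather than the repository's actual code: the Marketplace image family name, the subnet reference, and the VM sizing are all assumptions.

```
# Rough sketch of one NAT instance (nat-a1-vm); the image family, subnet
# reference, and sizing are assumptions rather than the repository's code.
data "yandex_compute_image" "nat" {
  family = "nat-instance-ubuntu"                   # assumed Marketplace NAT image family
}

resource "yandex_compute_instance" "nat_a1_vm" {
  name = "nat-a1-vm"
  zone = "ru-central1-a"

  resources {
    cores  = 2
    memory = 2
  }

  boot_disk {
    initialize_params {
      image_id = data.yandex_compute_image.nat.id
    }
  }

  network_interface {
    subnet_id = yandex_vpc_subnet.cr_subnet_a.id   # cr-subnet-a
    nat       = true                               # maps the pub-ip-a1 public address
  }
}
```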
By placing the NAT instances in multiple availability zones, you can ensure fault-tolerant access to Container Registry. You can scale the solution for higher workload by adding more NAT instances. Before doing that, consider the internal NLB traffic processing locality.
Only the cloud resources that use this solution can access the registry. The registry access policy allows access only from the NAT instance public IP addresses, so you will not be able to reach the registry from any other IP address. If needed, you can remove this limitation in the Terraform settings.
For more information, see the project repository.
To deploy a cloud infrastructure providing Container Registry access for VPC cloud network resources:
- Get your cloud ready.
- Configure your CLI profile.
- Set up your environment.
- Deploy your resources.
- Test the solution.
- Tips for production deployment.
If you no longer need the resources you created, delete them.
Get your cloud ready
Sign up in Yandex Cloud and create a billing account:

- Navigate to the management console and log in to Yandex Cloud or register a new account.
- On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the `ACTIVE` or `TRIAL_ACTIVE` status. If you do not have a billing account, create one and link a cloud to it.

If you have an active billing account, you can navigate to the cloud page.
Learn more about clouds and folders.
Required paid resources
The infrastructure support cost includes:
- Fee for continuously running VMs (see Yandex Compute Cloud pricing).
- Fee for using Network Load Balancer (see Yandex Network Load Balancer pricing).
- Fee for storing pushed Docker images (see Container Registry pricing).
- Fee for public IP addresses and outbound traffic (see Yandex Virtual Private Cloud pricing).
Required quotas
Make sure your cloud has sufficient quotas that are not used by other projects.
Resources used in our tutorial
Resource | Quantity |
---|---|
Virtual machines | 3 |
VM vCPUs | 6 |
VM RAM | 6 GB |
Disks | 3 |
HDD size | 30 GB |
SSD size | 20 GB |
Network load balancers | 2 |
Load balancer target group | 1 |
Networks | 1* |
Subnets | 3 |
Static public IP addresses | 2 |
Security groups | 1 |
DNS zones | 2 |
Registry | 1 |
Service account | 1 |
* Unless you use an existing network by specifying its ID in `terraform.tfvars`.
Configure your CLI profile
- If you do not have the Yandex Cloud CLI yet, install it and sign in as a user.
- Create a service account:

  **Management console**

  - In the management console, select the folder where you want to create a service account.
  - In the list of services, select Identity and Access Management.
  - Click Create service account.
  - Enter a name for the service account, e.g., `sa-terraform`.
  - Click Create.

  **CLI**

  The folder specified when creating the CLI profile is used by default. To change the default folder, use the `yc config set folder-id <folder_ID>` command. You can also specify a different folder using the `--folder-name` or `--folder-id` parameter.

  To create a service account, run the command below, specifying `sa-terraform` as the service account name:

  ```
  yc iam service-account create --name sa-terraform
  ```

  Where `name` is the service account name.

  Result:

  ```
  id: ajehr0to1g8b********
  folder_id: b1gv87ssvu49********
  created_at: "2023-06-20T09:03:11.665153755Z"
  name: sa-terraform
  ```

  **API**

  To create a service account, use the ServiceAccountService/Create gRPC API call or the create REST API method for the ServiceAccount resource.
- Assign the admin role for the folder to the service account:

  **Management console**

  - In the management console, select your service account folder.
  - Navigate to the Access bindings tab.
  - Select `sa-terraform` from the account list and then click Edit roles.
  - In the dialog that opens, click Add role and select the `admin` role.

  **CLI**

  Run this command:

  ```
  yc resource-manager folder add-access-binding <folder_ID> \
    --role admin \
    --subject serviceAccount:<service_account_ID>
  ```

  **API**

  To assign a role for a folder to a service account, use the setAccessBindings REST API method for the ServiceAccount resource or the ServiceAccountService/SetAccessBindings gRPC API call.
- Set up the CLI profile to run operations under the service account:

  **CLI**

  - Create an authorized key for the service account and save it to a file:

    ```
    yc iam key create \
      --service-account-id <service_account_ID> \
      --folder-id <ID_of_folder_with_service_account> \
      --output key.json
    ```

    Where:

    - `service-account-id`: Service account ID.
    - `folder-id`: Service account folder ID.
    - `output`: Authorized key file name.

    Result:

    ```
    id: aje8nn871qo4********
    service_account_id: ajehr0to1g8b********
    created_at: "2023-06-20T09:16:43.479156798Z"
    key_algorithm: RSA_2048
    ```

  - Create a CLI profile to run operations under the service account:

    ```
    yc config profile create sa-terraform
    ```

    Result:

    ```
    Profile 'sa-terraform' created and activated
    ```

  - Configure the profile:

    ```
    yc config set service-account-key key.json
    yc config set cloud-id <cloud_ID>
    yc config set folder-id <folder_ID>
    ```

    Where:

    - `service-account-key`: Authorized key file.
    - `cloud-id`: Cloud ID.
    - `folder-id`: Folder ID.

  - Add your credentials to the environment variables:

    ```
    export YC_TOKEN=$(yc iam create-token)
    ```
Set up your environment
Install the required tools
- Install Git using the following command:

  ```
  sudo apt install git
  ```
Deploy your resources
- Clone the GitHub repository and navigate to the `yc-cr-private-endpoint` directory containing the resources for our example:

  ```
  git clone https://github.com/yandex-cloud-examples/yc-cr-private-endpoint.git
  cd yc-cr-private-endpoint
  ```
- Open the `terraform.tfvars` file and edit the following:

  - Folder ID line:

    ```
    folder_id = "<folder_ID>"
    ```

  - Line with the list of aggregated prefixes of cloud subnets allowed to access Container Registry:

    ```
    trusted_cloud_nets = ["10.0.0.0/8", "192.168.0.0/16"]
    ```
  `terraform.tfvars` variable description:

  Parameter name | Change required | Description | Type | Example |
  ---|---|---|---|---|
  `folder_id` | Yes | Solution components folder ID | `string` | `b1gentmqf1ve9uc54nfh` |
  `vpc_id` | - | ID of your cloud network to provide with Container Registry access. If left empty, the system will create a new network. | `string` | `enp48c1ndilt42veuw4x` |
  `yc_availability_zones` | - | List of the availability zones for deploying NAT instances | `list(string)` | `["ru-central1-a", "ru-central1-b"]` |
  `subnet_prefix_list` | - | List of prefixes of cloud subnets to host the NAT instances (one subnet in each availability zone from the `yc_availability_zones` list, in the same order) | `list(string)` | `["10.10.1.0/24", "10.10.2.0/24"]` |
  `nat_instances_count` | - | Number of NAT instances to deploy. We recommend setting an even number to evenly distribute the instances across the availability zones. | `number` | `2` |
  `registry_private_access` | - | This parameter limits access to the registry to the public IP addresses of NAT instances. You can set it to `true` to apply this limitation or `false` to remove it. | `bool` | `true` |
  `trusted_cloud_nets` | Yes | List of aggregated prefixes of cloud subnets allowed to access Container Registry. It is used in the inbound traffic rule for NAT instance security groups. | `list(string)` | `["10.0.0.0/8", "192.168.0.0/16"]` |
  `vm_username` | - | NAT instance and test VM username | `string` | `admin` |
  `cr_ip` | - | Container Registry public IP address | `string` | `84.201.171.239` |
  `cr_fqdn` | - | Container Registry domain name | `string` | `cr.yandex` |
  `s3_ip` | - | Object Storage public IP address | `string` | `213.180.193.243` |
  `s3_fqdn` | - | Object Storage domain name | `string` | `storage.yandexcloud.net` |
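  As a reference only, a filled-in `terraform.tfvars` might look like the sketch below. The values are the examples from the table above, not recommendations for your environment; only `folder_id` and `trusted_cloud_nets` normally need changing.

  ```
  # Example terraform.tfvars (illustrative values taken from the table above).
  folder_id          = "b1gentmqf1ve9uc54nfh"
  trusted_cloud_nets = ["10.0.0.0/8", "192.168.0.0/16"]

  # Optional overrides:
  # vpc_id                  = "enp48c1ndilt42veuw4x"   # use an existing cloud network
  # yc_availability_zones   = ["ru-central1-a", "ru-central1-b"]
  # subnet_prefix_list      = ["10.10.1.0/24", "10.10.2.0/24"]
  # nat_instances_count     = 2
  # registry_private_access = true
  # vm_username             = "admin"
  ```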
- Deploy your cloud resources with Terraform:

  - Initialize Terraform:

    ```
    terraform init
    ```

  - Check the Terraform file configuration:

    ```
    terraform validate
    ```

  - Check the list of new cloud resources:

    ```
    terraform plan
    ```

  - Create the resources:

    ```
    terraform apply
    ```
- Once the `terraform apply` process is complete, the command line will show the information you need to connect to the test VM and run Container Registry tests. Later on, you can view this info by running the `terraform output` command.

  Deployed resource details:

  Name | Description | Value (example) |
  ---|---|---|
  `cr_nlb_ip_address` | IP address of the Container Registry internal load balancer | `10.10.1.100` |
  `cr_registry_id` | Registry ID in Container Registry | `crp1r4h00mj*********` |
  `path_for_private_ssh_key` | File with a private key for SSH access to the NAT instances and test VM | `./pt_key.pem` |
  `s3_nlb_ip_address` | IP address of the Object Storage internal load balancer | `10.10.1.200` |
  `test_vm_password` | Test VM `admin` password | `v3RCqUrQN?x)` |
  `vm_username` | NAT instance and test VM username | `admin` |
Test the solution
- In the management console, navigate to the folder with the resources you created.
- Select Compute Cloud.
- Select `test-cr-vm` from the list of VMs.
- In the left-hand menu, select Serial console.
- Click Connect.
- Enter the `admin` username and the password from the `terraform output test_vm_password` command output (without quotation marks).
- Run this command:

  ```
  dig cr.yandex storage.yandexcloud.net
  ```

- Check the DNS server response and make sure the Object Storage and Container Registry domain names resolve to the IP addresses of the relevant internal load balancers. The command output will show type `A` resource records as follows:

  ```
  ;; ANSWER SECTION:
  cr.yandex.                300  IN  A  10.10.1.100

  ;; ANSWER SECTION:
  storage.yandexcloud.net.  300  IN  A  10.10.1.200
  ```
- View the list of available Docker images:

  ```
  docker image list
  ```

  Result:

  ```
  REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
  golang        1.20.5   342*********   8 months ago   777MB
  hello-world   latest   9c7*********   9 months ago   13.3kB
  ```

- Assign a URL to the Docker image using the following format: `cr.yandex/<registry_ID>/<Docker_image_name>:<tag>`. This Docker command will retrieve the registry ID from the test VM environment variable:

  ```
  docker tag hello-world cr.yandex/$REGISTRY_ID/hello-world:demo
  docker image list
  ```

  Result:

  ```
  REPOSITORY                                   TAG      IMAGE ID       CREATED        SIZE
  golang                                       1.20.5   342*********   8 months ago   777MB
  cr.yandex/crp1r4h00mj*********/hello-world   demo     9c7*********   9 months ago   13.3kB
  hello-world                                  latest   9c7*********   9 months ago   13.3kB
  ```

  Note

  To push Docker images to Container Registry, you need to assign them URLs in this format: `cr.yandex/<registry_ID>/<Docker_image_name>:<tag>`.

- Push the required Docker image to the registry:

  ```
  docker push cr.yandex/$REGISTRY_ID/hello-world:demo
  ```

  Result:

  ```
  The push refers to repository [cr.yandex/crp1r4h00mj*********/hello-world]
  01bb4*******: Pushed
  demo: digest: sha256:7e9b6e7ba284****************** size: 525
  ```
- In the management console, navigate to the folder with the resources you created.
- Select Container Registry.
- Select `test-registry`.
- Make sure the registry now contains the `hello-world` repository with the Docker image.
Tips for production deployment
- When deploying your NAT instances in multiple availability zones, use an even number of VMs to evenly distribute them across the availability zones.
- When selecting the number of NAT instances, consider the internal NLB traffic processing locality.
- Once the solution is deployed, reduce the number of NAT instances or update the list of availability zones in the `yc_availability_zones` parameter only during a pre-scheduled maintenance window, as applying these changes may interrupt traffic processing.
- If a NAT instance shows a high `CPU steal time` metric value under increased Container Registry workload, enable a software-accelerated network for that NAT instance.
- If you are using your own DNS server, create the following type `A` resource records in its settings:

  Name | Type | Value |
  ---|---|---|
  `cr.yandex.` | `A` | IP address of the Container Registry internal load balancer. To get it, run `terraform output cr_nlb_ip_address`. |
  `storage.yandexcloud.net.` | `A` | IP address of the Object Storage internal load balancer. To get it, run `terraform output s3_nlb_ip_address`. |

- Save the `pt_key.pem` private SSH key for accessing the NAT instances to a secure location, or recreate it without using Terraform.
- Once the solution is deployed, SSH access to the NAT instances will be disabled. To enable it, add a rule for inbound SSH traffic (`TCP/22`) to the `cr-nat-sg` security group allowing access only from trusted IP addresses of admin workstations (see the sketch after this list).
- Once you have tested the solution, delete the test VM and its subnet.
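The SSH rule mentioned above could look roughly like the following `ingress` block added to the security group definition in the repository's Terraform code. This is a sketch only: the block's exact placement depends on how `cr-nat-sg` is declared there, and the CIDR value is a placeholder for your admin workstation addresses.

```
# Sketch of an extra ingress rule for the cr-nat-sg security group
# (yandex_vpc_security_group). The CIDR below is a placeholder.
ingress {
  description    = "SSH from trusted admin workstations only"
  protocol       = "TCP"
  port           = 22
  v4_cidr_blocks = ["203.0.113.10/32"]   # replace with your trusted IP addresses
}
```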
Delete the resources you created
**Manually**

- In the management console, navigate to the folder with the resources you created.
- Select Container Registry.
- Select `test-registry`.
- Select the `hello-world` repository.
- For each Docker image in the repository, open the image menu and click Delete.
- In the window that opens, click Delete.
**Using Terraform**

- In the terminal window, go to the directory containing the infrastructure plan.

  Warning

  Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

- Delete the resources:

  - Run this command:

    ```
    terraform destroy
    ```

  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.