Connecting to Container Registry from Virtual Private Cloud
To work with Container Registry, cloud resources need internet access. This guide shows how to deploy infrastructure in Yandex Cloud that provides access to Container Registry for resources hosted in a Virtual Private Cloud network that have no public IP addresses and no internet access through a NAT gateway.
Container Registry uses Object Storage to store the Docker images of a registry, so this solution also provides the Virtual Private Cloud resources with access to Object Storage.
After the solution is deployed in Yandex Cloud, the following resources will be created:
Name | Description |
---|---|
cr-vpc* | Cloud network with the resources for which access to Container Registry is set up. |
cr-nlb | Internal network load balancer that accepts traffic to Container Registry. It accepts TCP traffic with destination port 443 and distributes it across the resources (VMs) in its target group. |
nat-group | Load balancer target group with the VMs that have the NAT function enabled. |
s3-nlb | Internal network load balancer that accepts traffic to Object Storage. It accepts TCP traffic with destination port 443 and distributes it across the resources (VMs) in its target group. |
nat-a1-vm, nat-b1-vm | VMs with NAT in the ru-central1-a and ru-central1-b zones that route traffic to Container Registry and Object Storage, translating the IP addresses of traffic sources and targets, and route the return traffic. |
pub-ip-a1, pub-ip-b1 | Public IP addresses of the NAT instance VMs to which the VPC cloud network translates their internal IP addresses. |
DNS zones and A records | The storage.yandexcloud.net. and cr.yandex. internal DNS zones in the cr-vpc network with A resource records that map domain names to the IP addresses of the internal network load balancers. |
test-registry | Registry in Container Registry. |
container-registry-<registry_ID> | Object Storage bucket for storing Docker images, where <registry_ID> is the registry ID. Container Registry automatically creates the bucket for the registry in Object Storage. |
cr-subnet-a, cr-subnet-b | Cloud subnets that host the NAT instances in the ru-central1-a and ru-central1-b zones. |
test-cr-vm | Test VM for verifying access to Container Registry. |
test-cr-subnet-a | Cloud subnet that hosts the test VM. |

* When deploying, you can also specify an existing cloud network.
The following internal DNS zones are created in Cloud DNS for the cloud network hosting the resources:

- cr.yandex. with an A resource record that maps the cr.yandex domain name of Container Registry to the IP address of the cr-nlb internal network load balancer.
- storage.yandexcloud.net. with an A resource record that maps the storage.yandexcloud.net domain name of Object Storage to the IP address of the s3-nlb internal network load balancer.
With these records, traffic from cloud resources to Container Registry and Object Storage will be routed to internal load balancers that will distribute the load across the NAT instances.
NAT instances are deployed from a Marketplace image that translates source and destination IP addresses, routing traffic to the public IP addresses of Container Registry and Object Storage.
By placing the NAT instances in multiple availability zones, you can ensure fault-tolerant access to Container Registry. By increasing the number of NAT instances, you can scale the solution up when the load grows. When calculating the number of NAT instances, consider the locality of traffic handling by the internal load balancer.
Only the cloud resources that use this solution can access the registry: its access policy allows registry operations only from the public IP addresses of the NAT instances, so the registry cannot be accessed from any other IP addresses. If required, you can remove this restriction via the registry_private_access parameter in Terraform.
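If you later want to confirm the applied policy (an optional check, not part of the original text), the registry's IP permissions can be listed with the yc CLI; the exact subcommand may vary between CLI versions, so treat this as a sketch:

```bash
# Hedged sketch: list the registry access policy and confirm it contains only
# the public IP addresses of the NAT instances. If the subcommand name differs
# in your CLI version, check `yc container registry --help`.
yc container registry list-ip-permissions test-registry
```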
For more information, see the project repository.
To deploy a cloud infrastructure to provide access to Container Registry for resources located in the VPC cloud network:
- Prepare your cloud.
- Configure the CLI profile.
- Prepare the environment.
- Deploy your resources.
- Test the solution.
- Review the tips for deployment in the production environment.
If you no longer need the resources you created, delete them.
Prepare your cloud
Sign up for Yandex Cloud and create a billing account:
- Go to the management console and log in to Yandex Cloud or create an account if you do not have one yet.
- On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one.

If you have an active billing account, you can go to the cloud page and create or select a folder for your infrastructure to operate in.
Learn more about clouds and folders.
Required paid resources
The infrastructure support cost includes:
- Fee for continuously running VMs (see Yandex Compute Cloud pricing).
- Fee for using Network Load Balancer (see Yandex Network Load Balancer pricing).
- Fee for storing pushed Docker images (see Container Registry pricing).
- Fee for using public IP addresses and outgoing traffic (see Yandex Virtual Private Cloud pricing).
Required quotas
Make sure your cloud has enough free quota that is not used by resources for other tasks.
Resources created in this guide
Resource | Amount |
---|---|
Virtual machines | 3 |
VM instance vCPUs | 6 |
VM instance RAM | 6 GB |
Disks | 3 |
HDD size | 30 GB |
SSD size | 20 GB |
Network load balancer | 2 |
Target group for the load balancer | 1 |
Networks | 1* |
Subnets | 3 |
Static public IP addresses | 2 |
Security groups | 1 |
DNS zone | 2 |
Registry | 1 |
Service accounts | 1 |
* If you do not specify the ID of an existing network in terraform.tfvars.
Configure the CLI profile
-
If you do not have the Yandex Cloud command line interface yet, install it and sign in as a user.
-
Create a service account:
**Management console**

- In the management console, select the folder where you want to create a service account.
- In the Service accounts tab, click Create service account.
- Enter a name for the service account, e.g., sa-terraform.
- Click Create.

**CLI**

The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.

Run the command below to create a service account, specifying the sa-terraform name:

```bash
yc iam service-account create --name sa-terraform
```

Where name is the service account name.

Result:

```text
id: ajehr0to1g8b********
folder_id: b1gv87ssvu49********
created_at: "2023-06-20T09:03:11.665153755Z"
name: sa-terraform
```

**API**

To create a service account, use the ServiceAccountService/Create gRPC API call or the create REST API method for the ServiceAccount resource.
-
Assign the service account the administrator role for the folder:
**Management console**

- In the management console, select the folder where the service account is located.
- Go to the Access bindings tab.
- Select sa-terraform from the account list and click -> Edit roles.
- In the dialog that opens, click Add role and select the admin role.

**CLI**

Run this command:

```bash
yc resource-manager folder add-access-binding <folder_ID> \
  --role admin \
  --subject serviceAccount:<service_account_ID>
```

**API**

To assign the service account a role for the folder, use the setAccessBindings REST API method for the ServiceAccount resource or the ServiceAccountService/SetAccessBindings gRPC API call.
-
Set up the CLI profile to run operations on behalf of the service account:
**CLI**

- Create an authorized key for the service account and save it to a file:

```bash
yc iam key create \
  --service-account-id <service_account_ID> \
  --folder-id <ID_of_folder_with_service_account> \
  --output key.json
```

Where:
- service-account-id: Service account ID.
- folder-id: ID of the folder in which the service account was created.
- output: Name of the file with the authorized key.

Result:

```text
id: aje8nn871qo4********
service_account_id: ajehr0to1g8b********
created_at: "2023-06-20T09:16:43.479156798Z"
key_algorithm: RSA_2048
```
-
Create a CLI profile to run operations on behalf of the service account:
yc config profile create sa-terraform
Result:
Profile 'sa-terraform' created and activated
-
Set the profile configuration:
```bash
yc config set service-account-key key.json
yc config set cloud-id <cloud_ID>
yc config set folder-id <folder_ID>
```

Where:
- service-account-key: File with the service account's authorized key.
- cloud-id: Cloud ID.
- folder-id: Folder ID.
-
Add the credentials to the environment variables:
export YC_TOKEN=$(yc iam create-token)
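As an optional sanity check (not part of the original guide), you can confirm that the CLI now acts on behalf of the service account:

```bash
# The active profile should be sa-terraform, and the settings should show the
# service account key file, cloud ID, and folder ID configured above.
yc config profile list
yc config list
```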
Prepare the environment
Install the required utilities
-
Install Git using the following command:

```bash
sudo apt install git
```
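The deployment steps below also rely on the Terraform CLI. An optional check (assuming Terraform is already installed on the machine) that both tools are available:

```bash
# Optional: confirm the tools used in the next section are installed.
git --version
terraform version
```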
Deploy your resources
-
Clone the GitHub repository and go to the yc-cr-private-endpoint script folder:

```bash
git clone https://github.com/yandex-cloud-examples/yc-cr-private-endpoint.git
cd yc-cr-private-endpoint
```
-
Open the terraform.tfvars file and edit the following (an optional check of the result is shown right after the variable description table below):

- String with the folder ID:

```hcl
folder_id = "<folder_ID>"
```

- String containing a list of aggregated prefixes of cloud subnets that are allowed to access Container Registry:

```hcl
trusted_cloud_nets = ["10.0.0.0/8", "192.168.0.0/16"]
```
Description of variables in terraform.tfvars

Parameter name | Needs editing | Description | Type | Example |
---|---|---|---|---|
folder_id | Yes | ID of the folder to host the solution components | string | b1gentmqf1ve9uc54nfh |
vpc_id | - | ID of the cloud network for which access to Container Registry is set up. If not specified, such a network will be created. | string | enp48c1ndilt42veuw4x |
yc_availability_zones | - | List of availability zones for deploying NAT instances | list(string) | ["ru-central1-a", "ru-central1-b"] |
subnet_prefix_list | - | List of prefixes of cloud subnets to host the NAT instances (one subnet in each availability zone from the yc_availability_zones list, in the same order) | list(string) | ["10.10.1.0/24", "10.10.2.0/24"] |
nat_instances_count | - | Number of NAT instances to deploy. We recommend setting an even number to evenly distribute the instances across the availability zones. | number | 2 |
registry_private_access | - | Only allow registry access from the public IP addresses of the NAT instances. true means access is limited; to remove the limit, set false. | bool | true |
trusted_cloud_nets | Yes | List of aggregated prefixes of cloud subnets for which access to Container Registry is allowed. Used in the incoming traffic rule of the NAT instance security groups. | list(string) | ["10.0.0.0/8", "192.168.0.0/16"] |
vm_username | - | NAT instance and test VM user name | string | admin |
cr_ip | - | Container Registry public IP address | string | 84.201.171.239 |
cr_fqdn | - | Container Registry domain name | string | cr.yandex |
s3_ip | - | Object Storage public IP address | string | 213.180.193.243 |
s3_fqdn | - | Object Storage domain name | string | storage.yandexcloud.net |
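As a quick optional check (not part of the original guide), you can print the two parameters that must be edited and confirm they now hold your values:

```bash
# Optional sanity check: show the two required parameters from the edited file.
# The expected shape of the output (values below are placeholders) is:
#   folder_id          = "b1g........."
#   trusted_cloud_nets = ["10.0.0.0/8", "192.168.0.0/16"]
grep -E '^(folder_id|trusted_cloud_nets)' terraform.tfvars
```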
-
-
Deploy the resources in the cloud using Terraform:
-
Initialize Terraform:
terraform init
-
Check the Terraform file configuration:
terraform validate
-
Check the list of cloud resources you are about to create:
terraform plan
-
Create resources:
terraform apply
-
-
Once the terraform apply process is completed, the command line will output information required for connecting to the test VM and running test operations with Container Registry. Later on, you can view this information by running the terraform output command:

Viewing information on deployed resources

Parameter | Description | Sample value |
---|---|---|
cr_nlb_ip_address | IP address of the internal load balancer for Container Registry | 10.10.1.100 |
cr_registry_id | Registry ID in Container Registry | crp1r4h00mj********* |
path_for_private_ssh_key | File with the private key used to connect to the NAT instances and test VM over SSH | ./pt_key.pem |
s3_nlb_ip_address | IP address of the internal load balancer for Object Storage | 10.10.1.200 |
test_vm_password | Password of the admin user on the test VM | v3RCqUrQN?x) |
vm_username | NAT instance and test VM user name | admin |
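For example, individual values can be queried later with terraform output; a small sketch (it assumes you run it in the directory holding this configuration's Terraform state):

```bash
# Print single output values; -raw strips the quotes, which is convenient
# when passing the password or registry ID to other commands.
terraform output cr_registry_id
terraform output -raw test_vm_password
```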
Test the solution
-
In the management console
, go to the folder where the resources were created. -
Select Compute Cloud.
-
Select
test-cr-vm
from the list of VM instances. -
In the left-hand menu, select
Serial console. -
Click Connect.
-
Enter the admin username and the password from the terraform output test_vm_password command output (without quotation marks).
-
Run this command:
dig cr.yandex storage.yandexcloud.net
-
Make sure the Object Storage and Container Registry domain names in the DNS server response match the IP addresses of the internal load balancers. The output for the type A resource records looks like this:

```text
;; ANSWER SECTION:
cr.yandex.               300 IN A 10.10.1.100

;; ANSWER SECTION:
storage.yandexcloud.net. 300 IN A 10.10.1.200
```
-
View the list of available Docker images:
docker image list
Result:
```text
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
golang        1.20.5   342*********   8 months ago   777MB
hello-world   latest   9c7*********   9 months ago   13.3kB
```
-
Assign the Docker image a URL in cr.yandex/<registry_ID>/<Docker_image_name>:<tag> format. The registry ID is taken from the REGISTRY_ID environment variable on the test VM:

```bash
docker tag hello-world cr.yandex/$REGISTRY_ID/hello-world:demo
docker image list
```
Result:
```text
REPOSITORY                                   TAG      IMAGE ID       CREATED        SIZE
golang                                       1.20.5   342*********   8 months ago   777MB
cr.yandex/crp1r4h00mj*********/hello-world   demo     9c7*********   9 months ago   13.3kB
hello-world                                  latest   9c7*********   9 months ago   13.3kB
```
Note
You can only push Docker images to Container Registry if they have a URL in this format:
cr.yandex/<registry_ID>/<Docker_image_name>:<tag>
. -
Push the required Docker image to the registry:
docker push cr.yandex/$REGISTRY_ID/hello-world:demo
Result:
```text
The push refers to repository [cr.yandex/crp1r4h00mj*********/hello-world]
01bb4*******: Pushed
demo: digest: sha256:7e9b6e7ba284****************** size: 525
```
-
In the management console
, go to the folder where the resources were created. -
Select Container Registry.
-
Select the
test-registry
registry. -
Make sure the
hello-world
repository with the Docker image appears in the registry.
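As an optional extra check (not part of the original steps), you can pull the image back on the test VM to confirm that pulls through the NAT instances also work:

```bash
# Remove the local copy, then pull it back through cr.yandex, which now
# resolves to the internal load balancer.
docker image rm cr.yandex/$REGISTRY_ID/hello-world:demo
docker pull cr.yandex/$REGISTRY_ID/hello-world:demo
```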
Tips for deployment in the production environment
-
When deploying NAT instances in multiple availability zones, set an even number of VMs to evenly distribute them across the availability zones.
-
When selecting the number of NAT instances, consider the locality of traffic handling by the internal load balancer.
-
Once the solution is deployed, reduce the number of NAT instances or update the list of availability zones in the yc_availability_zones parameter only during a pre-scheduled time window, as traffic handling may be interrupted while the changes are applied.
-
If the
CPU steal time
metric of a NAT instance shows a high value as the Container Registry load goes up, we recommend enabling a software-accelerated network for that NAT instance. -
If you are using your own DNS server, create the following type A resource records in its settings:

Name | Type | Value |
---|---|---|
cr.yandex. | A | IP address of the internal load balancer for Container Registry (from the terraform output cr_nlb_ip_address command output) |
storage.yandexcloud.net. | A | IP address of the internal load balancer for Object Storage (from the terraform output s3_nlb_ip_address command output) |
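A possible way to verify such records (a sketch; replace the placeholder with your DNS server's address, and compare the results with the terraform output cr_nlb_ip_address and s3_nlb_ip_address values):

```bash
# Query your own DNS server directly; both names should resolve to the
# IP addresses of the internal load balancers.
dig @<your_DNS_server_IP> cr.yandex A +short
dig @<your_DNS_server_IP> storage.yandexcloud.net A +short
```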
-
Save the
pt_key.pem
private SSH key used to connect to the NAT instances to a secure location or recreate it separately from Terraform. -
Once the solution is deployed, SSH access to the NAT instances is denied. To allow SSH access to the NAT instances, add a rule for incoming SSH traffic (TCP/22) to the cr-nat-sg security group that permits access only from specific IP addresses of admin workstations (a possible CLI sketch is shown after this list).
-
After a performance check, delete the test VM and its subnet.
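For reference, below is a hedged CLI sketch of such a rule for the SSH tip above; it assumes the yc vpc security-group update-rules command syntax of current CLI versions, and <admin_workstation_IP> is a placeholder:

```bash
# A sketch, not part of the original guide: allow incoming SSH (TCP/22) to the
# NAT instances only from a single admin workstation address (placeholder).
yc vpc security-group update-rules cr-nat-sg \
  --add-rule "direction=ingress,port=22,protocol=tcp,v4-cidrs=[<admin_workstation_IP>/32]"
```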
Delete the resources you created
-
In the management console
, go to the folder where the resources were created. -
Select Container Registry.
-
Select the
test-registry
registry. -
Select the
hello-world
repository. -
For each Docker image in the repository, click
. -
In the menu that opens, click Delete.
-
In the window that opens, click Delete.
-
To delete the resources you created using Terraform, run the terraform destroy command.

Warning

Terraform will permanently delete all the resources created while deploying the solution.