AWS Command Line Interface (AWS CLI)
The AWS CLI is a command-line interface for working with AWS-compatible services, including Object Storage.
To work with Object Storage via the AWS CLI, you can use the following commands:
- s3api: Commands corresponding to operations in the REST API. Before you start, look through the list of supported operations.
- s3: Additional commands that make it easier to work with a large number of objects.
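For example, listing your buckets can be done with either command set. A minimal sketch (the endpoint is the Object Storage endpoint used throughout this page):

```bash
# High-level s3 command: list buckets
aws --endpoint-url=https://storage.yandexcloud.net s3 ls

# Equivalent REST-style s3api command
aws --endpoint-url=https://storage.yandexcloud.net s3api list-buckets
```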
Getting started
- Assign the service account the roles required for your project, e.g., storage.editor for a bucket (to work with a particular bucket) or a folder (to work with all buckets in this folder). For more information about roles, see Access management with Yandex Identity and Access Management.
  To work with objects in an encrypted bucket, a user or service account must have the following roles for the encryption key in addition to the storage.configurer role:
  - kms.keys.encrypter: To read the key, encrypt, and upload objects.
  - kms.keys.decrypter: To read the key, decrypt, and download objects.
  - kms.keys.encrypterDecrypter: This role includes the kms.keys.encrypter and kms.keys.decrypter permissions.

  For more information, see Key Management Service service roles.
- As a result, you will get the static access key data. To authenticate in Object Storage, you will need the following:
  - key_id: Static access key ID
  - secret: Secret key

  Save key_id and secret: you will not be able to get the key value again. A sample YC CLI command for issuing the key is shown after this list.
Note
A service account is only allowed to view a list of buckets in the folder it was created in.
A service account can perform actions with objects in buckets that are created in folders different from the service account folder. To enable this, assign the service account roles for the appropriate folder or its bucket.
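If you use the YC CLI, you can issue the static access key from the command line. A minimal sketch, assuming a service account named <service_account_name> already exists (the exact flag names may differ between YC CLI versions):

```bash
# Issue a static access key for the service account.
# The response contains key_id and secret; save them, as the secret is shown only once.
yc iam access-key create --service-account-name <service_account_name>
```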
Installation
To install the AWS CLI, follow the installation guide.
Configuration
To configure the AWS CLI, run the aws configure command in your terminal. The command will request values for the following parameters:

- AWS Access Key ID: Static key ID you got previously.
- AWS Secret Access Key: Static key contents you got previously.
- Default region name: ru-central1. To work with Object Storage, always specify the ru-central1 region; a different region value may lead to an authorization error.
- Leave the other parameters unchanged.
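An interactive aws configure session then looks roughly like this (a sketch; the key values are placeholders):

```
$ aws configure
AWS Access Key ID [None]: <static_key_ID>
AWS Secret Access Key [None]: <static_key_contents>
Default region name [None]: ru-central1
Default output format [None]:
```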
Configuration files
The aws configure command saves the static key and the region.
- The static key in .aws/credentials has the following format:

      [default]
      aws_access_key_id = <static_key_ID>
      aws_secret_access_key = <static_key_contents>
- The default region in .aws/config has the following format:

      [default]
      region = ru-central1
- You can create multiple profiles for different service accounts by specifying their details in the .aws/credentials file:

      [default]
      aws_access_key_id = <ID_of_static_key_1>
      aws_secret_access_key = <contents_of_static_key_1>

      [<name_of_profile_2>]
      aws_access_key_id = <ID_of_static_key_2>
      aws_secret_access_key = <contents_of_static_key_2>

      ...

      [<name_of_profile_n>]
      aws_access_key_id = <ID_of_static_key_n>
      aws_secret_access_key = <contents_of_static_key_n>
  Here, default is the default profile.

  To switch between profiles, use the --profile option in AWS CLI commands, e.g.:

      aws --endpoint-url=https://storage.yandexcloud.net/ \
        --profile <name_of_profile_2> \
        s3 mb s3://<bucket_name>
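Instead of editing .aws/credentials by hand, you can also create a profile non-interactively with aws configure set; a sketch, using the same placeholder names as above:

```bash
# Write the key and region for an additional profile without the interactive prompt
aws configure set aws_access_key_id <ID_of_static_key_2> --profile <name_of_profile_2>
aws configure set aws_secret_access_key <contents_of_static_key_2> --profile <name_of_profile_2>
aws configure set region ru-central1 --profile <name_of_profile_2>
```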
You can use Yandex Lockbox to safely store the static key for access to Object Storage. For more information, see Using a Yandex Lockbox secret to store a static access key.
Features
Take note of the following AWS CLI behaviors when working with Object Storage:
- The AWS CLI treats Object Storage as a hierarchical file system, so object keys look like file paths.
- By default, the client is configured to work with Amazon servers, so when running the aws command against Object Storage, make sure to specify the --endpoint-url parameter. To avoid adding the parameter manually each time you run a command, you can use a configuration file or an alias.
  - In the .aws/config configuration file, add the endpoint_url parameter (supported in AWS CLI versions 1.29.0, 2.13.0, and higher):

        endpoint_url = https://storage.yandexcloud.net

    This enables you to invoke commands without explicitly specifying an endpoint: for example, aws s3 ls instead of aws --endpoint-url=https://storage.yandexcloud.net s3 ls. For more information, see the AWS CLI documentation. A complete sample configuration file is shown after this list.
  - Create an alias using the following command:

        alias ycs3='aws s3 --endpoint-url=https://storage.yandexcloud.net'

    To have the alias created each time you open the terminal, add the alias command to your shell configuration file, either ~/.bashrc or ~/.zshrc, depending on the shell you use. With this alias, the following commands are equivalent:

        aws s3 --endpoint-url=https://storage.yandexcloud.net ls

        ycs3 ls
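Putting these settings together, a complete .aws/config for Object Storage could look like this (a sketch combining the region and endpoint_url parameters described in this section):

```
[default]
region = ru-central1
endpoint_url = https://storage.yandexcloud.net
```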
Examples of operations
Note
To enable debug output in the console, use the --debug option.
Creating a bucket
aws s3 mb s3://bucket-name
Result:
make_bucket: bucket-name
Note
When creating a bucket, follow the naming conventions.
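The command above omits --endpoint-url, so it assumes the endpoint is set in .aws/config or via an alias as described in the Features section. Otherwise, specify the endpoint explicitly:

```bash
aws --endpoint-url=https://storage.yandexcloud.net \
    s3 mb s3://bucket-name
```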
Uploading objects
You can upload all objects within a directory, use a filter, or upload objects one at a time.
- Upload all objects from a local directory:

      aws s3 cp --recursive local_files/ s3://bucket-name/path_style_prefix/

  Result:

      upload: ./textfile1.log to s3://bucket-name/path_style_prefix/textfile1.log
      upload: ./textfile2.txt to s3://bucket-name/path_style_prefix/textfile2.txt
      upload: ./prefix/textfile3.txt to s3://bucket-name/path_style_prefix/prefix/textfile3.txt
- Upload objects specified in the --include filter and skip objects specified in the --exclude filter:

      aws s3 cp --recursive --exclude "*" --include "*.log" \
        local_files/ s3://bucket-name/path_style_prefix/

  Result:

      upload: ./textfile1.log to s3://bucket-name/path_style_prefix/textfile1.log
- Upload objects one by one, running the following command for each object:

      aws s3 cp testfile.txt s3://bucket-name/path_style_prefix/textfile.txt

  Result:

      upload: ./testfile.txt to s3://bucket-name/path_style_prefix/textfile.txt
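For comparison, a single object can also be uploaded with the lower-level s3api command set mentioned at the beginning of this page; a sketch using the same placeholder names:

```bash
aws --endpoint-url=https://storage.yandexcloud.net \
    s3api put-object \
    --bucket bucket-name \
    --key path_style_prefix/textfile.txt \
    --body testfile.txt
```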
Getting a list of objects
aws s3 ls --recursive s3://bucket-name
Result:
2022-09-05 17:10:34 10023 other/test1.png
2022-09-05 17:10:34 57898 other/test2.png
2022-09-05 17:10:34 704651 test.png
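To get human-readable sizes and a summary line at the end of the listing, you can add the corresponding flags; a sketch:

```bash
aws s3 ls --recursive --human-readable --summarize s3://bucket-name
```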
Deleting objects
You can delete all objects with a specified prefix, use a filter, or delete objects one at a time.
- Delete all objects with the specified prefix:

      aws s3 rm s3://bucket-name/path_style_prefix/ --recursive

  Result:

      delete: s3://bucket-name/path_style_prefix/test1.png
      delete: s3://bucket-name/path_style_prefix/subprefix/test2.png
- Delete objects specified in the --include filter and skip objects specified in the --exclude filter:

      aws s3 rm s3://bucket-name/path_style_prefix/ --recursive \
        --exclude "*" --include "*.log"

  Result:

      delete: s3://bucket-name/path_style_prefix/test1.log
      delete: s3://bucket-name/path_style_prefix/subprefix/test2.log
- Delete objects one by one, running the following command for each object:

      aws s3 rm s3://bucket-name/path_style_prefix/textfile.txt

  Result:

      delete: s3://bucket-name/path_style_prefix/textfile.txt
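Before deleting many objects at once, you can preview what would be removed by adding the --dryrun flag supported by the high-level s3 commands; a sketch:

```bash
aws s3 rm s3://bucket-name/path_style_prefix/ --recursive --dryrun
```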
Retrieving an object
aws s3 cp s3://bucket-name/textfile.txt textfile.txt
Result:
download: s3://bucket-name/textfile.txt to ./textfile.txt
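To download all objects under a prefix into a local directory, run the same cp command with --recursive in the opposite direction; a sketch mirroring the upload example above:

```bash
aws s3 cp --recursive s3://bucket-name/path_style_prefix/ local_files/
```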