Uploading Yandex Audit Trails audit logs to KUMA SIEM through Terraform
To configure delivery of audit log files to KUMA:
- Prepare your cloud environment.
- Create an infrastructure.
- Mount the bucket on a server.
- Configure the KUMA collector.
If you no longer need the resources you created, delete them.
Prepare your cloud environment
Sign up for Yandex Cloud and create a billing account:
- Go to the management console and log in to Yandex Cloud or create an account if you do not have one yet.
- On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one.
If you have an active billing account, you can go to the cloud page.
Learn more about clouds and folders.
Required paid resources
The cost of support for a new Yandex Cloud infrastructure includes:
- Fee for data storage, operations with data, and outgoing traffic (see Yandex Object Storage pricing).
- Fee for a symmetric encryption key and cryptographic operations (see Yandex Key Management Service pricing).
- (Optional) Fee for a continuously running VM (see Yandex Compute Cloud pricing).
- (Optional) Fee for using a dynamic or static external IP address (see Yandex Virtual Private Cloud pricing).
In addition, to complete this tutorial, you will need a KUMA user license.
Create an infrastructure
With Terraform
Terraform is distributed under the Business Source License.
For more information about the provider resources, see the documentation on the Terraform website.
To create an infrastructure using Terraform:
-
Install Terraform, get the authentication credentials, and specify the source for installing the Yandex Cloud provider (see Configure a provider, Step 1).
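For example, if you authenticate through the yc CLI, you can pass the credentials to the provider via environment variables (a minimal sketch under that assumption; other authentication methods work too):

```bash
# Assumes the yc CLI is installed and initialized (yc init).
export YC_TOKEN=$(yc iam create-token)        # short-lived IAM token for the provider
export YC_CLOUD_ID=$(yc config get cloud-id)
export YC_FOLDER_ID=$(yc config get folder-id)

# Later, run this once in the directory with the configuration files
# to download the Yandex Cloud provider:
terraform init
```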
-
Prepare the infrastructure description file:
Ready-made configuration
-
Clone the repository with configuration files:
git clone https://github.com/yandex-cloud-examples/yc-audit-trails-kuma-integration
-
Navigate to the repository directory. Make sure it contains the following files:
- at-events-to-kuma.tf: Your infrastructure configuration.
- at-events-to-kuma.auto.tfvars: User data.
Manually
-
Create a folder for the infrastructure description file.
-
Create a configuration file named at-events-to-kuma.tf in the folder:
```hcl
# Configuring a provider
terraform {
  required_providers {
    yandex = {
      source  = "yandex-cloud/yandex"
      version = ">= 0.47.0"
    }
  }
}

provider "yandex" {
  folder_id = var.folder_id
}

# Declaring variables for custom parameters
variable "folder_id" {
  type = string
}

variable "vm_user" {
  type = string
}

variable "ssh_key_path" {
  type = string
}

variable "bucket_name" {
  type = string
}

variable "object_prefix" {
  type = string
}

# Adding other variables
locals {
  sa_bucket_name = "kuma-bucket-sa"
  sa_trail_name  = "kuma-trail-sa"
  sym_key_name   = "kuma-key"
  trail_name     = "kuma-trail"
  zone           = "ru-central1-b"
  network_name   = "kuma-network"
  subnet_name    = "kuma-network-ru-central1-b"
  vm_name        = "kuma-server"
  image_id       = "fd8ulbhv5dpakf3io1mf"
}

# Creating service accounts
resource "yandex_iam_service_account" "sa-bucket" {
  name      = local.sa_bucket_name
  folder_id = "${var.folder_id}"
}

resource "yandex_iam_service_account" "sa-trail" {
  name      = local.sa_trail_name
  folder_id = "${var.folder_id}"
}

# Creating a static access key
resource "yandex_iam_service_account_static_access_key" "sa-bucket-static-key" {
  service_account_id = yandex_iam_service_account.sa-bucket.id
}

output "access_key" {
  value     = yandex_iam_service_account_static_access_key.sa-bucket-static-key.access_key
  sensitive = true
}

output "secret_key" {
  value     = yandex_iam_service_account_static_access_key.sa-bucket-static-key.secret_key
  sensitive = true
}

# Creating a symmetric encryption key
resource "yandex_kms_symmetric_key" "sym_key" {
  name              = local.sym_key_name
  default_algorithm = "AES_256"
}

# Assigning roles to service accounts
resource "yandex_resourcemanager_folder_iam_member" "sa-bucket-storage-viewer" {
  folder_id = "${var.folder_id}"
  role      = "storage.admin"
  member    = "serviceAccount:${yandex_iam_service_account.sa-bucket.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "sa-trail-storage-uploader" {
  folder_id = "${var.folder_id}"
  role      = "storage.uploader"
  member    = "serviceAccount:${yandex_iam_service_account.sa-trail.id}"
}

resource "yandex_resourcemanager_folder_iam_member" "sa-trail-at-viewer" {
  folder_id = "${var.folder_id}"
  role      = "audit-trails.admin"
  member    = "serviceAccount:${yandex_iam_service_account.sa-trail.id}"
}

resource "yandex_kms_symmetric_key_iam_binding" "encrypter-decrypter" {
  symmetric_key_id = "${yandex_kms_symmetric_key.sym_key.id}"
  role             = "kms.keys.encrypterDecrypter"
  members = [
    "serviceAccount:${yandex_iam_service_account.sa-bucket.id}",
    "serviceAccount:${yandex_iam_service_account.sa-trail.id}"
  ]
}

# Creating a bucket
resource "yandex_storage_bucket" "kuma-bucket" {
  folder_id             = "${var.folder_id}"
  bucket                = "${var.bucket_name}"
  default_storage_class = "standard"
  anonymous_access_flags {
    read        = false
    list        = false
    config_read = false
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = "${yandex_kms_symmetric_key.sym_key.id}"
        sse_algorithm     = "aws:kms"
      }
    }
  }
}

# Creating a trail
resource "yandex_audit_trails_trail" "kuma-trail" {
  depends_on = [
    yandex_storage_bucket.kuma-bucket,
    yandex_resourcemanager_folder_iam_member.sa-trail-at-viewer,
    yandex_resourcemanager_folder_iam_member.sa-trail-storage-uploader
  ]
  name               = local.trail_name
  folder_id          = "${var.folder_id}"
  service_account_id = "${yandex_iam_service_account.sa-trail.id}"
  storage_destination {
    bucket_name   = "${var.bucket_name}"
    object_prefix = "${var.object_prefix}"
  }
  filtering_policy {
    management_events_filter {
      resource_scope {
        resource_id   = "${var.folder_id}"
        resource_type = "resource-manager.folder"
      }
    }
  }
}

# Creating a cloud network and a subnet
resource "yandex_vpc_network" "kuma-network" {
  name = local.network_name
}

resource "yandex_vpc_subnet" "kuma-network-subnet-b" {
  name           = local.subnet_name
  zone           = local.zone
  v4_cidr_blocks = ["10.1.0.0/24"]
  network_id     = yandex_vpc_network.kuma-network.id
}

# Creating a VM instance
resource "yandex_compute_disk" "boot-disk" {
  name     = "bootvmdisk"
  type     = "network-hdd"
  zone     = local.zone
  size     = "20"
  image_id = local.image_id
}

resource "yandex_compute_instance" "kuma-vm" {
  name        = local.vm_name
  platform_id = "standard-v3"
  zone        = local.zone
  resources {
    cores         = 2
    memory        = 2
    core_fraction = 20
  }
  boot_disk {
    disk_id = yandex_compute_disk.boot-disk.id
  }
  network_interface {
    subnet_id = yandex_vpc_subnet.kuma-network-subnet-b.id
    nat       = true
  }
  metadata = {
    user-data = "#cloud-config\nusers:\n  - name: ${var.vm_user}\n    groups: sudo\n    shell: /bin/bash\n    sudo: 'ALL=(ALL) NOPASSWD:ALL'\n    ssh_authorized_keys:\n      - ${file("${var.ssh_key_path}")}"
  }
}
```
-
In the directory, create a user data file named at-events-to-kuma.auto.tfvars:
```hcl
folder_id     = "<folder_ID>"
vm_user       = "<instance_username>"
ssh_key_path  = "<path_to_public_SSH_key>"
bucket_name   = "<bucket_name>"
object_prefix = "<prefix>"
```
For more information about the properties of Terraform resources, see the provider documentation:
- Service account: yandex_iam_service_account
- Static access key: yandex_iam_service_account_static_access_key
- Symmetric encryption key: yandex_kms_symmetric_key
- Role: yandex_resourcemanager_folder_iam_member
- Bucket: yandex_storage_bucket
- Trail: yandex_audit_trails_trail
- Network: yandex_vpc_network
- Subnet: yandex_vpc_subnet
- Disk: yandex_compute_disk
- VM: yandex_compute_instance
-
In the at-events-to-kuma.auto.tfvars file, set the following user-defined properties:
- folder_id: Folder ID.
- vm_user: Username of the user you are going to create on the VM, e.g., yc-user.
  Alert
  Do not use root or other reserved usernames. To perform operations requiring root privileges, use the sudo command.
- ssh_key_path: Path to the public SSH key file and its name, e.g., ~/.ssh/id_ed25519.pub. You need to [create](../../compute/operations/vm-connect/ssh.md#creating-ssh-keys) a key pair for the SSH connection to a VM yourself (see the example after this list).
- bucket_name: Name of the bucket you want to upload audit logs to, e.g., my-audit-logs-for-kuma.
  Note
  The bucket name must be unique across Object Storage. You cannot create two buckets with the same name, even in different folders of different clouds.
- object_prefix: Prefix that will be added to the names of the audit log objects in the bucket, e.g., /. The prefix forms a part of the full name of the audit log file.
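If you do not have an SSH key pair yet, here is a minimal sketch for generating one (the comment string and file path are examples, not values required by the tutorial):

```bash
# Generate an Ed25519 key pair; the public part is written to ~/.ssh/id_ed25519.pub,
# which is the path you would then pass in ssh_key_path.
ssh-keygen -t ed25519 -C "kuma-tutorial" -f ~/.ssh/id_ed25519
```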
-
Create resources:
-
In the terminal, change to the folder where you edited the configuration file.
-
Make sure the configuration file is correct using the command:
terraform validate
If the configuration is correct, the following message is returned:
Success! The configuration is valid.
-
Run the command:
terraform plan
The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.
-
Apply the configuration changes:
terraform apply
-
Confirm the changes: type
yes
in the terminal and press Enter.
-
Get the key ID and secret key (you will need them later when mounting the bucket on the server):
terraform output access_key
terraform output secret_key
Result:
"YCAJE0tO1Q4zO7bW4********" "YCNpH34y9fzL6xEap3wkuxYfkc1PTNvr********"
Once the infrastructure is created, mount the bucket on a server and set up the KUMA collector.
Mount the bucket on a server
Perform this action on the server you are going to install the KUMA collector on. As a server, you can use a Compute Cloud VM or your own hardware. In this tutorial, we use the previously created Compute Cloud VM.
-
Connect to the server over SSH.
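For example, using the key pair created for the VM (the placeholders stand for your own username and the VM's public IP address):

```bash
# <vm_user> is the username from at-events-to-kuma.auto.tfvars;
# <VM_public_IP> is shown in the management console for the kuma-server VM.
ssh -i ~/.ssh/id_ed25519 <vm_user>@<VM_public_IP>
```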
-
Create a new user named kuma:
sudo useradd kuma
-
Create the kuma user's home directory:
sudo mkdir /home/kuma
-
Create a file with a static access key and grant permissions for it to the kuma user:
sudo bash -c 'echo <access_key_ID>:<secret_access_key> > /home/kuma/.passwd-s3fs'
sudo chmod 600 /home/kuma/.passwd-s3fs
sudo chown -R kuma:kuma /home/kuma
Where <access_key_ID> and <secret_access_key> are the previously saved values of the static access key of the kuma-bucket-sa service account.
-
Install the s3fs package:
sudo apt install s3fs
-
Create a directory that will serve as a mount point for the bucket and grant permissions for it to the kuma user:
sudo mkdir /var/log/yandex-cloud/
sudo chown kuma:kuma /var/log/yandex-cloud/
-
Mount the bucket you created earlier by specifying its name:
sudo s3fs <bucket_name> /var/log/yandex-cloud \
    -o passwd_file=/home/kuma/.passwd-s3fs \
    -o url=https://storage.yandexcloud.net \
    -o use_path_request_style \
    -o uid=$(id -u kuma) \
    -o gid=$(id -g kuma)
You can configure automatic mounting of the bucket at operating system start-up by opening the /etc/fstab file (using the sudo nano /etc/fstab command) and adding the following line to it:
s3fs#<bucket_name> /var/log/yandex-cloud fuse _netdev,uid=<kuma_uid>,gid=<kuma_gid>,use_path_request_style,url=https://storage.yandexcloud.net,passwd_file=/home/kuma/.passwd-s3fs 0 0
Where:
- <bucket_name>: Name of the bucket you created earlier, e.g., my-audit-logs-for-kuma.
- <kuma_uid>: kuma user ID in the VM operating system.
- <kuma_gid>: kuma user group ID in the VM operating system.
To learn <kuma_uid> and <kuma_gid>, run the id kuma command in the terminal.
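To check the new /etc/fstab entry without rebooting, you can ask the system to mount everything listed in fstab and verify the mount point (a quick sanity check):

```bash
# Mount all filesystems from /etc/fstab that are not mounted yet;
# a typo in the new line will usually surface as an error here.
sudo mount -a
df -h /var/log/yandex-cloud
```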
-
Make certain that the bucket is mounted:
sudo ls /var/log/yandex-cloud/
If everything is configured correctly, the command will return the current contents of the audit event bucket.
The Yandex Cloud event transfer setup is complete. The events will be stored as JSON files at the following path:
/var/log/yandex-cloud/{audit_trail_id}/{year}/{month}/{day}/*.json
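As a quick sanity check that events are arriving, you can pretty-print the most recently delivered log file (a sketch; it assumes the jq utility is installed and at least one file has already been written to the bucket):

```bash
# Find the most recently delivered audit log file and show its beginning.
# Install jq first if needed: sudo apt install jq
latest=$(sudo find /var/log/yandex-cloud/ -name '*.json' -printf '%T@ %p\n' | sort -n | tail -1 | cut -d' ' -f2-)
sudo jq '.' "$latest" | head -n 50
```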
Configure the KUMA collector
For this step, you will need the distribution and license files included with KUMA. Use them to install and configure the collector in the KUMA network infrastructure. For more information, see this guide.
Once the setup is successfully completed, audit events will start being delivered to KUMA. The KUMA web interface allows you to search for related events.
How to delete the resources you created
To stop paying for the resources you created:
-
Open the at-events-to-kuma.tf configuration file and delete your infrastructure description.
-
Delete all objects from the bucket you created earlier. Otherwise, the bucket and some of the infrastructure will not be deleted, and the terraform apply command will terminate with an error.
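One way to empty the bucket is with the AWS CLI, which is compatible with Object Storage (a sketch; it assumes the AWS CLI is installed and configured with the static access key created earlier):

```bash
# Recursively delete every object in the bucket; <bucket_name> is a placeholder.
aws s3 rm s3://<bucket_name> --recursive \
    --endpoint-url=https://storage.yandexcloud.net
```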
-
Apply the changes:
-
In the terminal, change to the folder where you edited the configuration file.
-
Make sure the configuration file is correct using the command:
terraform validate
If the configuration is correct, the following message is returned:
Success! The configuration is valid.
-
Run the command:
terraform plan
The terminal will display a list of resources with parameters. No changes are made at this step. If the configuration contains errors, Terraform will point them out.
-
Apply the configuration changes:
terraform apply
-
Confirm the changes: type
yes
in the terminal and press Enter.