
Load testing migration from Load Testing to k6

Written by
Yandex Cloud
Updated at April 28, 2026
  • Feature equivalency between Load Testing and k6
  • Set up your environment
  • Check the k6 installation
  • Set up the infrastructure for load generation
  • Migrate load profiles
    • const: Constant load
    • line: Linear growth
    • step: Stepwise growth
    • once: Single-time surge
  • Migrate test data and scenarios
    • HTTP requests
    • gRPC requests
    • Data from files
  • Set up an autostop
    • Stop based on response time
    • Stop based on HTTP codes
    • Stop based on quantiles
    • Timeout
  • Configure visualization of results
    • Built-in output
    • Export to JSON
    • Visualization in Grafana
  • Configure regression analysis
    • Thresholds in CI/CD
    • Storing history in Grafana
  • Configure monitoring of generator resources
  • Configure running in CI/CD
  • What's next

Warning

Starting July 1, 2026, Load Testing will be discontinued. For more information, see Yandex Load Testing shutdown.

Warning

The k6 tool is an open-source software product developed and supported by a third party (Grafana Labs) not affiliated with Yandex. The migration guide is provided for information purposes only. We assume no responsibility for the operation, features, security, or any consequences of using the k6 software, nor any errors that may occur during the adaptation of your testing scenarios. Carefully read all the documents governing the use of k6.

As a replacement, we recommend k6, an open-source load testing tool from Grafana Labs. k6 supports HTTP, gRPC, WebSocket, and other protocols, integrates easily into CI/CD, and provides flexible tools for analyzing results.

Follow this guide to migrate existing load testing scenarios to k6 while preserving the familiar features: load profiles, autostop, monitoring, and regression analysis.

To migrate load testing scenarios to k6:

  1. Study feature equivalency data.
  2. Set up your environment.
  3. Check the k6 installation.
  4. Set up the infrastructure for load generation.
  5. Migrate load profiles.
  6. Migrate test data and scenarios.
  7. Set up an autostop.
  8. Configure visualization of results.
  9. Configure regression analysis.
  10. Configure monitoring of generator resources.
  11. Configure running in CI/CD.

Feature equivalency between Load Testing and k6

| Load Testing feature | Equivalent k6 feature | Read more |
|---|---|---|
| Agent: Managed VM for load generation | Any machine with k6 installed: a local computer, cloud-based VM, or CI/CD container | Load generation infrastructure |
| Load generators: Pandora, Phantom, JMeter | Universal built-in k6 generator with HTTP/1.1, HTTP/2, gRPC, and WebSocket support | Test data and scenarios |
| Load profile: const, line, step, once | Scenarios with executors: constant-arrival-rate, ramping-arrival-rate, shared-iterations, etc. | Load profiles |
| Testing threads: Parallel connections for RPS generation | Virtual users (VU): Each one executes a looped scenario; RPS = VU / iteration time | Load profiles |
| Test data: URI, HTTP_JSON, and GRPC_JSON formats, files from Object Storage | Requests are described in a JS script; external data is loaded via SharedArray | Test data and scenarios |
| Autostop: Stopping the test based on response time, HTTP codes, quantiles | Thresholds with abortOnFail: Similar stopping criteria | Autostop |
| Test results: Charts for quantiles, response codes, RPS | Built-in summary in the terminal; visualization via Grafana + InfluxDB/Prometheus | Test results |
| Regression dashboard: Tracking degradation between runs | Grafana dashboard with run history and threshold alerts | Regression analysis |
| Agent monitoring: CPU, memory, disk, network via Telegraf/Monitoring | Telegraf, node_exporter, and Prometheus or Monitoring for a cloud-based VM | Monitoring of generator resources |
| Management console: Web interface to configure and run tests | k6 run CLI for running; Grafana for visualization; CI/CD for automation | Running in CI/CD |
| Test configuration: YAML file with generator parameters | JavaScript file with the options object: profiles, thresholds, and scenarios, all in the same place | Load profiles |

Set up your environment

  1. Install k6 on the machine you plan to run tests from:

    Linux:

    sudo gpg -k
    sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
      --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
    echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" \
      | sudo tee /etc/apt/sources.list.d/k6.list
    sudo apt-get update && sudo apt-get install k6

    macOS:

    brew install k6

    Docker:

    docker pull grafana/k6
    
  2. If you want to visualize the results, get a monitoring stack ready, e.g., Grafana with InfluxDB or Prometheus.
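
For local experiments, a minimal stack can be brought up with Docker Compose. This is a sketch only; the image tags, ports, and database name are illustrative, so adjust them to your environment:

```yaml
# docker-compose.yml: InfluxDB v1 to receive k6 metrics, Grafana for dashboards.
# InfluxDB v1 is used deliberately: k6's built-in --out influxdb output
# supports only v1 (see the warning later in this guide).
services:
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=k6
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```

After `docker compose up -d`, add InfluxDB as a data source in Grafana (http://influxdb:8086, database `k6`).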

Check the k6 installation

Make sure k6 is installed and operational. The following command runs a short test: for 30 seconds, 10 virtual users send GET requests and check the response code:

k6 run - << 'EOF'
import http from 'k6/http';
import { sleep, check } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  const res = http.get('https://test.k6.io/');
  check(res, { 'status 200': (r) => r.status === 200 });
  sleep(1);
}
EOF

Once the test completes, k6 shows a summary: the number of requests, average response time, percentage of successful checks, and other metrics. This is the result format you will work with from here on.

To see the full list of commands and flags, run this command:

k6 --help

Set up the infrastructure for load generation

Load Testing used to run tests on managed agents, i.e., VM instances that were created and configured automatically.

In k6, you can run the load generator on any machine: a local computer, cloud-based VM, or CI/CD pipeline container.

If your tests required access to the target application via the Yandex Cloud internal network, create a VM instance in the relevant subnet and run k6 on it.

Here are approximate recommendations on resources for the load generator (for simple HTTP scenarios without heavy processing of responses):

| Load profile | vCPU | RAM |
|---|---|---|
| Up to 10,000 RPS | 2 | 2 GB |
| Up to 20,000 RPS | 4 | 4 GB |
| Up to 40,000 RPS | 8 | 8 GB |

Scenarios involving JSON parsing, multiple checks, and large response bodies will require more resources. For more on running large-scale tests, see this guide.

For loads above 40,000 RPS, use distributed k6 execution.
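
k6's execution segments let you split a single test across several machines, each generating its share of the load. A sketch, assuming two generator VMs running the same script (the hostnames and split are illustrative):

```shell
# Machine 1 runs the first half of the load:
k6 run --execution-segment "0:1/2" --execution-segment-sequence "0,1/2,1" test.js

# Machine 2 runs the second half:
k6 run --execution-segment "1/2:1" --execution-segment-sequence "0,1/2,1" test.js
```

Each instance still reports its own results, so aggregate metrics in a shared backend such as InfluxDB or Prometheus.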

Migrate load profiles

In Load Testing, loads were specified in RPS units via the line, const, step, and once profiles. In k6, loads are managed via virtual user (VU) count and scenarios. Each VU executes a looped test scenario. The resulting RPS depends on VU count and execution time per iteration.
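
As a rough rule of thumb, the VU count needed to sustain a target RPS can be estimated from the iteration time (request latency plus any sleep). A minimal sketch; the helper name and the numbers are illustrative, not part of k6:

```javascript
// Estimate how many VUs are needed to sustain a target RPS when each VU
// loops one iteration of the scenario.
// iterationMs = average request latency + sleep() time per iteration, in ms.
function estimateVUs(targetRps, iterationMs) {
  return Math.ceil((targetRps * iterationMs) / 1000);
}

// Example: each iteration takes ~100 ms of request time plus sleep(1),
// i.e., ~1100 ms total, so sustaining 100 RPS needs roughly 110 VUs.
console.log(estimateVUs(100, 1100)); // 110
```

Add headroom on top of this estimate: latency under load is usually higher than in a smoke test.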

const: Constant load

Load Testing:

load_profile:
  load_type: rps
  schedule:
    - {duration: 300s, type: const, ops: 10000}

k6: Fixed number of VUs:

export const options = {
  scenarios: {
    constant_load: {
      executor: 'constant-vus',
      vus: 100,
      duration: '5m',
    },
  },
};

If you need precise RPS control, use the constant-arrival-rate executor:

export const options = {
  scenarios: {
    constant_load: {
      executor: 'constant-arrival-rate',
      rate: 10000,
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 200,
      maxVUs: 500,
    },
  },
};

line: Linear growth

Load Testing:

load_profile:
  load_type: rps
  schedule:
    - {duration: 180s, type: line, from: 1, to: 10000}

k6:

export const options = {
  scenarios: {
    ramp_up: {
      executor: 'ramping-arrival-rate',
      startRate: 1,
      timeUnit: '1s',
      preAllocatedVUs: 200,
      maxVUs: 500,
      stages: [
        { duration: '3m', target: 10000 },
      ],
    },
  },
};

step: Stepwise growth

Load Testing:

load_profile:
  load_type: rps
  schedule:
    - {duration: 30s, type: step, from: 10, to: 100, step: 5}

k6: Approximate it with several stages. The source profile contains 18 steps (from 10 to 100 in increments of 5) over 30 seconds, i.e., around 1.7 s per step:

export const options = {
  scenarios: {
    stepped_load: {
      executor: 'ramping-arrival-rate',
      startRate: 10,
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 200,
      stages: [
        { duration: '1.7s', target: 15 },
        { duration: '1.7s', target: 20 },
        { duration: '1.7s', target: 25 },
        { duration: '1.7s', target: 30 },
        { duration: '1.7s', target: 35 },
        { duration: '1.7s', target: 40 },
        { duration: '1.7s', target: 45 },
        { duration: '1.7s', target: 50 },
        { duration: '1.7s', target: 55 },
        { duration: '1.7s', target: 60 },
        { duration: '1.7s', target: 65 },
        { duration: '1.7s', target: 70 },
        { duration: '1.7s', target: 75 },
        { duration: '1.7s', target: 80 },
        { duration: '1.7s', target: 85 },
        { duration: '1.7s', target: 90 },
        { duration: '1.7s', target: 95 },
        { duration: '1.7s', target: 100 },
      ],
    },
  },
};
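
Instead of writing out each stage by hand, you can generate the array programmatically. A sketch; the buildStepStages helper is illustrative, not a k6 API:

```javascript
// Build a ramping-arrival-rate stages array emulating the Load Testing
// step profile: from -> to with a fixed increment, spreading the total
// duration evenly across the steps.
function buildStepStages(from, to, step, totalSeconds) {
  const steps = (to - from) / step;                       // 18 steps for 10 -> 100 by 5
  const stepDuration = (totalSeconds / steps).toFixed(1); // ~1.7 s per step
  const stages = [];
  for (let target = from + step; target <= to; target += step) {
    stages.push({ duration: `${stepDuration}s`, target });
  }
  return stages;
}

// Drop the result into the scenario definition instead of the literal list:
// stages: buildStepStages(10, 100, 5, 30)
console.log(buildStepStages(10, 100, 5, 30).length); // 18
```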

once: Single-time surge

Load Testing:

load_profile:
  load_type: rps
  schedule:
    - {type: once, times: 133}

k6:

export const options = {
  scenarios: {
    burst: {
      executor: 'shared-iterations',
      vus: 133,
      iterations: 133,
    },
  },
};

Migrate test data and scenarios

HTTP requests

If your tests were in URI or HTTP_JSON, import your requests to a k6 script:

import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  // GET request
  const res = http.get('https://my-app.example.com/api/items');
  check(res, {
    'status 200': (r) => r.status === 200,
  });

  // POST request 
  const payload = JSON.stringify({ title: 'New task', completed: false });
  const params = { headers: { 'Content-Type': 'application/json' } };
  const postRes = http.post('https://my-app.example.com/api/items', payload, params);
  check(postRes, {
    'created': (r) => r.status === 201,
  });

  sleep(1);
}

gRPC requests

If your gRPC services were tested via Pandora with the GRPC_JSON format, use the built-in k6/net/grpc module.

Load Testing (GRPC_JSON format):

{"tag": "/api.Adder/Add", "call": "api.Adder.Add", "metadata": {"Authorization": ["Bearer ..."]}, "payload": {"x": 21, "y": 12}}

k6:

import grpc from 'k6/net/grpc';
import { check } from 'k6';

const client = new grpc.Client();
client.load(['proto'], 'adder.proto');

export default () => {
  if (__ITER === 0) {
    client.connect('my-grpc-service.example.com:8080', { plaintext: true });
  }

  const response = client.invoke('api.Adder/Add', { x: 21, y: 12 }, {
    metadata: { authorization: 'Bearer ...' },
  });

  check(response, {
    'Status OK': (r) => r.status === grpc.StatusOK,
  });

};

Data from files

If your test data was stored in Object Storage, upload it locally and use it in the script via SharedArray:

import http from 'k6/http';
import { SharedArray } from 'k6/data';

const data = new SharedArray('urls', function () {
  return JSON.parse(open('./test-data.json'));
});

export default function () {
  const item = data[Math.floor(Math.random() * data.length)];
  http.get(item.url);
}

Set up an autostop

In Load Testing, autostop interrupted the test when preset conditions were violated. In k6, the same role is played by thresholds with the abortOnFail parameter.

Stop based on response time

Load Testing:

autostop:
  - time(1s, 30s)

k6: Use a high quantile, e.g., p(99), to approximate the original behavior, which checked the time of every request:

export const options = {
  thresholds: {
    http_req_duration: [
      { threshold: 'p(99)<1000', abortOnFail: true, delayAbortEval: '30s' },
    ],
  },
};

Stop based on HTTP codes

Load Testing:

autostop:
  - http(5xx, 10%, 30s)

k6: The built-in http_req_failed metric counts all responses with codes >= 400 (both 4xx and 5xx). To track 5xx only, create a custom metric:

import http from 'k6/http';
import { Rate } from 'k6/metrics';

const errors5xx = new Rate('errors_5xx');

export const options = {
  thresholds: {
    errors_5xx: [
      { threshold: 'rate<0.1', abortOnFail: true, delayAbortEval: '30s' },
    ],
  },
};

export default function () {
  const res = http.get('https://my-app.example.com/api/endpoint');
  errors5xx.add(res.status >= 500);
}

Stop based on quantiles

Load Testing:

autostop:
  - quantile(95, 100ms, 10s)

k6:

export const options = {
  thresholds: {
    http_req_duration: [
      { threshold: 'p(95)<100', abortOnFail: true, delayAbortEval: '10s' },
    ],
  },
};

Timeout

Load Testing:

autostop:
  - limit(10m)

k6: Specified via duration in the scenario parameters.

export const options = {
  scenarios: {
    default: {
      executor: 'constant-vus',
      vus: 50,
      duration: '10m',
    },
  },
};

Configure visualization of results

Built-in output

Once the test is complete, k6 outputs a summary of the key metrics to the terminal:

  • http_req_duration: Response time (avg, min, med, max, p90, p95).
  • http_req_failed: Percentage of failed requests.
  • http_reqs: Total number of requests and RPS.
  • iterations: Number of completed iterations.
  • vus: Number of virtual users.

Export to JSON

To save detailed results, run this command:

k6 run --out json=results.json test.js
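
The JSON output is newline-delimited: each line is either a metric definition or a sample with type "Point". A small Node.js sketch for computing the average request duration from such a file; the parsing helper is illustrative, and the file name matches the command above:

```javascript
// Average http_req_duration from k6's NDJSON output (results.json).
// Each sample line looks like:
// {"type":"Point","metric":"http_req_duration","data":{"time":"...","value":12.3,"tags":{...}}}
function averageDuration(ndjsonText) {
  const values = ndjsonText
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .filter((e) => e.type === 'Point' && e.metric === 'http_req_duration')
    .map((e) => e.data.value);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Usage:
// const fs = require('fs');
// console.log(averageDuration(fs.readFileSync('results.json', 'utf8')));
```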

Visualization in Grafana

To get charts similar to those provided in Load Testing (response time quantiles, response codes, RPS), configure the export of metrics to one of the supported systems.

InfluxDB v1 + Grafana:

k6 run --out influxdb=http://localhost:8086/k6 test.js

Warning

The built-in --out influxdb output supports only InfluxDB v1. For InfluxDB v2, use the xk6-output-influxdb extension.

Use the ready-made k6 dashboard in Grafana. It includes the following charts:

  • Response time quantiles (similar to Quantiles in Load Testing).
  • Number of virtual users (similar to Testing threads).
  • HTTP response codes.
  • RPS.

Prometheus + Grafana:

K6_PROMETHEUS_RW_SERVER_URL=http://localhost:9090/api/v1/write \
  k6 run -o experimental-prometheus-rw test.js

Configure regression analysis

In Load Testing, regression dashboards let you track the degradation of metrics from one run to another. In k6, you can achieve the same in two ways.

Thresholds in CI/CD

Set thresholds in the script. If they are exceeded, k6 will terminate with a non-zero return code:

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<200', 'p(99)<500'],
    http_req_failed: ['rate<0.01'],
  },
};

Check the k6 return code in the CI/CD pipeline:

load-test:
  script:
    - k6 run test.js
  # If the thresholds are exceeded, the job will terminate automatically.

Storing history in Grafana

When you export results to InfluxDB or Prometheus, the history of all runs is saved automatically. Configure the following in Grafana:

  1. Dashboard with charts for key metrics (p95, p99, RPS, error percentage).
  2. Threshold alerts.
  3. Annotations to register test start times.

This gives you performance trend tracking similar to the Load Testing regression dashboards.

Configure monitoring of generator resources

Load Testing featured built-in agent monitoring (CPU, memory, disk, network). With k6, monitor the generator machine using standard tools:

  • Telegraf + Grafana: If monitoring is already configured, the data collection will continue.
  • Monitoring: If the generator is running on a Compute Cloud instance, its metrics are available in Monitoring without any additional setup.
  • node_exporter + Prometheus: Standard approach for monitoring of Linux-based machines.

Configure running in CI/CD

You can easily integrate k6 into any CI/CD pipeline.

Example for GitLab CI:

load-test:
  stage: test
  image: grafana/k6
  script:
    - k6 run
        -u ${K6_VUS:-50}
        -d ${K6_DURATION:-5m}
        --env BASE_URL="${BASE_URL}"
        test.js
  artifacts:
    paths:
      - load-performance.json

To save the results to a file, use the handleSummary() function in your script:

export function handleSummary(data) {
  return {
    'load-performance.json': JSON.stringify(data, null, 2),
  };
}

Example for GitHub Actions:

- name: Load testing
  uses: grafana/k6-action@v0.3.1
  with:
    filename: test.js
    flags: -u 50 -d 5m

What's next

  • k6 guide: A complete guide on the tool’s features.
  • Examples of k6 scripts: Ready-to-use scenarios for various protocols.
  • k6 Extensions: Extensions adding support of additional protocols (SQL, Kafka, Redis, etc.).
  • Grafana Cloud k6: Managed service used to run k6 with visualization and storage of results.

© 2026 Direct Cursus Technology L.L.C.