Releases 30 July 2020

k6 v0.27.0 and v0.27.1 released

Mostafa Moradian, Developer Advocate

k6 v0.27.0 is finally out! It has been over a year since the k6 team started working on this release, which includes a multitude of new features, improvements, and bugfixes. This release is an effort to redefine performance and load testing in k6 by introducing a new execution engine, a set of new executors built on top of it, and the most requested feature: scenarios. It also includes many UX improvements and bugfixes. This release is a joint effort between the company, specifically the k6 team, and the community to fulfill the goal of PR #1007 and many others.

k6 v0.27.0 was released on Jul 14th 2020, and the changes in this release included 438 commits and the efforts of at least 9 contributors. k6 v0.27.1 was released on Jul 30th 2020, and featured a few important bugfixes and optimizations compared to v0.27.0. This is a huge milestone for us and the k6 project as a whole and we hope that you'll enjoy it as much as we do!

New features and enhancements

New execution engine

This is the first public release of the new execution engine, offering users new ways of modeling advanced load testing scenarios that more closely represent real-world traffic patterns. It includes the long-awaited scenarios feature, which helps model traffic patterns in more flexible ways. Previously, there were only a few options to control the execution of a k6 test, namely vus, iterations, duration and stages. Although the vast majority of existing k6 scripts continue to work the same as before, some corner cases require changes. Scenarios are an entirely optional feature, but they help with modeling advanced traffic patterns.
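As a minimal sketch (names and values are illustrative, not from the release notes), the old shorthand options map onto a single scenario with the constant-vus executor; in a real k6 script either object would be assigned to `export let options`:

```javascript
// Shorthand options, unchanged from earlier k6 versions:
const shorthandOptions = {
  vus: 10,
  duration: '30s',
};

// A roughly equivalent explicit form using the new `scenarios` option;
// the scenario name `default_scenario` is arbitrary:
const scenarioOptions = {
  scenarios: {
    default_scenario: {
      executor: 'constant-vus',
      vus: shorthandOptions.vus,
      duration: shorthandOptions.duration,
    },
  },
};
```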

The new execution engine includes several different ways of scheduling script iterations, encapsulated as the following distinct executors:

  • shared-iterations: a number of iterations are shared between all specified VUs (up to some specified total maxDuration).
  • per-vu-iterations: each VU executes a fixed number of iterations (up to some specified total maxDuration).
  • constant-vus: a fixed number of VUs execute as many iterations as possible for a specified duration.
  • ramping-vus: a variable number of VUs execute as many iterations as possible for a specified duration.
  • constant-arrival-rate: iterations are executed at a fixed rate for a specified duration.
  • ramping-arrival-rate: iterations are executed at a variable rate for a specified duration.
  • externally-controlled: control and scale execution at runtime via k6 REST API or the CLI.

Scenarios also support the gracefulStop and gracefulRampDown options, which control how long running iterations are given to finish when a test ends or VUs are ramped down. Scenarios can be mixed and matched, in sequence or in parallel, to provide more granular control over the test. Different scenarios can execute different JS functions, have different environment variables, and assign extra tags to the metrics they generate. The following script is an advanced example showing how different scenarios with different executors can be combined to run different functions for testing a website and an API.

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  scenarios: {
    my_web_test: { // some arbitrary scenario name
      executor: 'constant-vus',
      vus: 50,
      duration: '5m',
      gracefulStop: '0s', // do not wait for iterations to finish in the end
      tags: { test_type: 'website' }, // extra tags for the metrics generated by this scenario
      exec: 'webtest', // the function this scenario will execute
    },
    my_api_test_1: {
      executor: 'constant-arrival-rate',
      rate: 90, timeUnit: '1m', // 90 iterations per minute, i.e. 1.5 RPS
      duration: '5m',
      preAllocatedVUs: 10, // the size of the VU (i.e. worker) pool for this scenario
      tags: { test_type: 'api' }, // different extra metric tags for this scenario
      env: { MY_CROC_ID: '1' }, // and we can specify extra environment variables as well!
      exec: 'apitest', // this scenario is executing different code than the one above!
    },
    my_api_test_2: {
      executor: 'ramping-arrival-rate',
      startTime: '30s', // the ramping API test starts a little later
      startRate: 50, timeUnit: '1s', // we start at 50 iterations per second
      stages: [
        { target: 200, duration: '30s' }, // go from 50 to 200 iters/s in the first 30 seconds
        { target: 200, duration: '3m30s' }, // hold at 200 iters/s for 3.5 minutes
        { target: 0, duration: '30s' }, // ramp down back to 0 iters/s over the last 30 seconds
      ],
      preAllocatedVUs: 50, // how large the initial pool of VUs would be
      maxVUs: 100, // if the preAllocatedVUs are not enough, we can initialize more
      tags: { test_type: 'api' }, // different extra metric tags for this scenario
      env: { MY_CROC_ID: '2' }, // same function, different environment variables
      exec: 'apitest', // same function as the scenario above, but with different env vars
    },
  },
  discardResponseBodies: true,
  thresholds: {
    // we can set different thresholds for the different scenarios because
    // of the extra metric tags we set!
    'http_req_duration{test_type:api}': ['p(95)<250', 'p(99)<350'],
    'http_req_duration{test_type:website}': ['p(99)<500'],
    // we can reference the scenario names as well
    'http_req_duration{scenario:my_api_test_2}': ['p(99)<300'],
  },
};

export function webtest() {
  http.get('https://test.k6.io/contacts.php');
  sleep(Math.random() * 2);
}

export function apitest() {
  http.get(`https://test-api.k6.io/public/crocodiles/${__ENV.MY_CROC_ID}/`);
  // no need for sleep() here, the iteration pacing will be controlled by the
  // arrival-rate executors above!
}

For more information, please see scenarios in the documentation.
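The script above sets gracefulStop; gracefulRampDown plays the analogous role for the ramping-vus executor. A minimal sketch, with illustrative names and values (in a k6 script this object would be assigned to `export let options`):

```javascript
// Illustrative ramping-vus configuration demonstrating gracefulRampDown.
const options = {
  scenarios: {
    ramping_web_test: { // arbitrary scenario name
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { target: 20, duration: '1m' }, // ramp up to 20 VUs over 1 minute
        { target: 0, duration: '1m' },  // ramp back down to 0
      ],
      // when VUs are ramped down, give their in-flight iterations up to
      // 30 seconds to finish before interrupting them:
      gracefulRampDown: '30s',
    },
  },
};
```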

UX Improvements, Bugfixes and Breaking Changes

The CLI has new real-time, thread-safe progress bars for each individual executor, along with better error messages for module imports. The __VU variable is now available in the script init context, making it easier to split test input data per VU and reducing RAM usage. A new method has also been added to the k6 REST API for stopping the engine execution.
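To show how init-context access to __VU helps with splitting input data, here is a hedged sketch; the `userForVU` helper and the user list are hypothetical, and in a real k6 script you would call it with the built-in __VU variable (which is 1-based):

```javascript
// Hypothetical per-VU data assignment: each VU picks its own entry
// from a shared list instead of every VU loading all the data.
const users = [
  { username: 'alice' },
  { username: 'bob' },
  { username: 'carol' },
];

function userForVU(vuId) {
  // VU numbers start at 1, so shift to a 0-based index and wrap around
  // if there are more VUs than entries.
  return users[(vuId - 1) % users.length];
}

// In a k6 script's init context this would be: const user = userForVU(__VU);
console.log(userForVU(1).username); // alice
```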

There are many bugfixes and improvements in the CLI tool, and the validation of configuration options has been improved. The JS engine (goja) has been updated, and many bugfixes and enhancements were made to the HTTP and WebSocket support. The internal architecture of k6 has also gone through extensive changes and improvements.

There are some breaking changes to scripts, script execution, the CLI, and configuration options; these are described in the release notes.

As always, we appreciate the community feedback on our tool, k6. Please test it, and report any issues, either on GitHub or the community forum. We also welcome any contributions.
