Tutorials 11 March 2022

How to Perform Load Testing with k6 using GitHub Actions

Michael Wanyoike, Simon Aronsson

You can find a collection of k6 scripts and GitHub workflows referenced in this tutorial here.

📖What you will learn

  • How to integrate load testing with k6 into GitHub Actions
  • Different implementation paths, and when to use each

In this tutorial, we will look at how to integrate performance testing into your development process with GitHub Actions and k6. For a video tutorial 🎥, check out the companion tutorial on YouTube.

k6 is an open-source load testing tool for testing the performance of APIs, microservices, and websites. Developers use k6 to test a system's performance under a particular load to catch performance regressions or errors.

GitHub Actions is a tool that enables developers to create custom workflows for their software development lifecycle directly inside their GitHub repositories. Since 2019, GitHub Actions has supported full CI/CD pipelines.

If you haven't used GitHub Actions before, we recommend looking at the following resources to get a sense of how it works:

Writing your performance test

We'll start small by writing a simple test that measures the performance of a single endpoint. As with most, if not all, development efforts, performance testing yields the best results if we work in small increments, iterating and expanding as our knowledge increases.

Our test will consist of three parts:

  1. An HTTP request against our system under test.
  2. A load configuration controlling the test duration and amount of virtual users.
  3. A performance goal, or service level objective, expressed as a threshold.

Creating the test script

When we execute our test script, each virtual user will execute the default function as many times as possible until the duration is up. To make sure we don't flood our system under test, we'll make each virtual user sleep for a second before it continues.

test.js
import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  const res = http.get('https://test.k6.io');
  sleep(1);
}

Configuring the load

We'll configure our test to run 50 virtual users continuously for one minute. Because of the sleep we added earlier, this will result in just below 50 iterations per second, giving us a total of about 2900 iterations.

test.js
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  duration: '1m',
  vus: 50,
};

export default function () {
  const res = http.get('https://test.k6.io');
  sleep(1);
}

If you have k6 installed on your local machine, you can run the test from your terminal using the command: k6 run test.js.
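As a quick sanity check of that estimate: each iteration takes roughly one second of sleep plus the response time, so the expected iteration count can be approximated as follows (the 30ms response time is an assumed figure for illustration):

```javascript
// Back-of-the-envelope estimate of total iterations for this load profile.
// Each VU completes one iteration per (sleep time + response time).
const vus = 50;
const durationSeconds = 60;
const iterationSeconds = 1.0 + 0.03; // 1s sleep + ~30ms response time (assumed)

const expected = Math.floor((vus * durationSeconds) / iterationSeconds);
console.log(expected); // prints 2912, i.e. "about 2900 iterations"
```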

Configuring our thresholds

The next step is to define your service level objectives (SLOs) around your application's performance. SLOs are a vital aspect of ensuring the reliability of your systems and applications. If you do not currently have any defined SLAs or SLOs, now is an excellent time to consider your requirements.

You can define SLOs as Pass/Fail criteria with Thresholds in your k6 script. k6 evaluates them during test execution and reports the threshold results. If any of the thresholds in our test fails, k6 will return a non-zero exit code, signalling to the CI tool that the step has failed.

Now, we will add thresholds to our previous script to validate that the 95th percentile response time is below 500ms and that our error rate is less than 1%. After this change, the script looks like the snippet below:

test.js
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  duration: '1m',
  vus: 50,
  thresholds: {
    http_req_failed: ['rate<0.01'], // http errors should be less than 1%
    http_req_duration: ['p(95)<500'], // 95 percent of response times must be below 500ms
  },
};

export default function () {
  const res = http.get('https://test.k6.io');
  sleep(1);
}

Thresholds are a powerful feature providing a flexible API to define various types of Pass/Fail criteria in the same test run. For example:

  • The 99th percentile response time must be below 700 ms.
  • The 95th percentile response time must be below 400 ms.
  • No more than 1% failed requests.
  • The content of a response must be correct more than 95% of the time.

Check out the Thresholds documentation for additional details on the API and its usage.
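As a sketch, the four criteria above could be expressed in a single options block like this. The first two metrics are k6 built-ins; the content check uses a hypothetical custom Rate metric, content_ok, which the default function would have to populate with check results:

```javascript
import { Rate } from 'k6/metrics';

// Hypothetical custom metric tracking whether response content was correct
export const contentOK = new Rate('content_ok');

export const options = {
  thresholds: {
    http_req_duration: ['p(99)<700', 'p(95)<400'], // both percentile goals on one metric
    http_req_failed: ['rate<0.01'], // no more than 1% failed requests
    content_ok: ['rate>0.95'], // content correct more than 95% of the time
  },
};
```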

Setting up the GitHub Actions workflow

To have GitHub Actions pick up and execute our load test, we need to create a workflow configuration and place it in .github/workflows. Once this file has been pushed to our repository, each commit to our repository will result in the workflow being run.

.github/workflows/load-test.yml
on: [push]
jobs:
  k6_load_test:
    name: k6 Load Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Run local k6 test
        uses: grafana/k6-action@v0.2.0
        with:
          filename: test.js

To avoid having to either install k6 on the runner or download the k6 docker image, we're utilising the official k6 action available on the GitHub marketplace.

At this point, commit and push your changes, then go to the Actions tab of your GitHub repository. GitHub Actions will have picked up our new workflow and executed it:

workflow result overview

and if we select the k6 Load Test job:

workflow result details

Running k6 Cloud tests

There are two common execution modes to run k6 tests as part of the CI process.

  • Locally on the CI server.
  • In k6 Cloud, from one or multiple geographic locations.

You might want to use cloud tests in these common cases:

  • If you're going to run a test from multiple geographic locations (load zones).
  • If you're going to run a high-load test that needs more compute resources than are available on the runner.

If any of those reasons fit your needs, then running k6 cloud tests is the way to go for you.

⚠️ Try it locally first

Before we start with the configuration, it is good to familiarize ourselves with how cloud execution works, and we recommend first triggering a cloud test from your own machine.

Check out the cloud execution guide to learn how to distribute the test load across multiple geographic locations and more information about the cloud execution.

Now, we will show how to trigger cloud tests using GitHub Actions. If you do not have a k6 Cloud account already, you can register and start your free trial.

After that, get your account token from the cloud app and add this token to your GitHub project's Secrets page.

.github/workflows/load-test.yml
on: [push]
jobs:
  k6_cloud_test:
    name: k6 cloud test run
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Run k6 cloud test
        uses: k6io/action@v0.1
        with:
          filename: test.js
          cloud: true
          token: ${{ secrets.K6_CLOUD_API_TOKEN }}

As you can see, the only changes needed in our workflow file are setting cloud to true and passing our API token to the action.

Once we commit and push these changes, k6 will run our test in k6 Cloud and output the URL to our test results as part of the workflow logs:

actions cloud link

And if we copy the highlighted URL and navigate to it in a new tab:

cloud results

Running k6 extensions

k6 extensions let users extend k6 to cover use cases it does not support natively. With extensions, users can test new protocols, build clients that communicate with other systems during the test, or improve test performance by implementing functionality in Go and consuming it from tests written in JavaScript. k6 extensions are imported as JavaScript modules in the test script.

As an example, we'll use xk6-counter to execute the following test:

extension/script.js
import counter from 'k6/x/counter';

export const options = {
  vus: 10,
  duration: '5s',
};

export default function () {
  console.log(counter.up(), __VU, __ITER);
}

The standard k6 binary won't be able to import the k6/x/counter module. On your local machine, you can run this test with a custom k6 binary built with the xk6-counter extension:

# Install xk6
go install go.k6.io/xk6/cmd/xk6@latest
# Build a k6 binary with the extension
xk6 build --with github.com/mstoykov/xk6-counter@latest
# Run the test using the compiled k6 binary
./k6 run extension/script.js

To achieve the same result on GitHub, all you need to do is set up this workflow:

.github/workflows/k6_extension.yml
on: [push]
jobs:
  k6_local_test:
    name: k6 counter extension run
    runs-on: ubuntu-latest
    container: docker://golang:1.17-alpine
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Install xk6
        run: go install go.k6.io/xk6/cmd/xk6@latest
      - name: Build xk6-counter binary
        run: xk6 build --with github.com/mstoykov/xk6-counter@latest
      - name: Run k6 extension test
        run: ./k6 run extension/script.js

Since a Go development environment is required, we use the official golang:1.17-alpine image to provide a suitable environment for compiling our extension. The install, build, and run commands are exactly the same as those used on the local machine.

Alternatively, you can build and host your own custom Docker image with your required k6 extensions already set up. For reference, you can check out this article to see how it's implemented.

Storing test results as artifacts

Using the JSON output for time-series data

Using the upload-artifact GitHub action, we can store k6 results in GitHub for later inspection. Note, however, that this feature requires GitHub storage, which is only available on private (free) repositories and paid plan accounts. If you run a workflow that uses the upload-artifact action on a public repository, the step will simply be ignored.

Below is an example of load-test.yml that demonstrates how to upload k6 results to GitHub:

.github/workflows/load-test.yml
on: [push]
jobs:
  k6_load_test:
    name: k6 Load Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Run local k6 test
        uses: grafana/k6-action@v0.2.0
        with:
          filename: test.js
          flags: --out json=results.json
      - name: Upload performance test results
        uses: actions/upload-artifact@v3
        with:
          name: k6-report
          path: results.json

In the snippet above, we pass the output option via the flags field, which the k6 GitHub action forwards to the actual k6 run. The results will be uploaded and hosted in the GitHub repository, where you can access them via the UI.

github-action-upload-artifact

The results.json file contains all the metric points collected by k6. Depending on the load options specified, the file can get quite large. Storing it on GitHub is convenient if you don't need to analyze the raw data right away.
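Because the output is newline-delimited JSON, post-processing it does not require k6 itself. Here is a minimal Node sketch of computing a percentile from http_req_duration points; the sample lines are inlined instead of reading results.json, so the snippet is self-contained, but a real file would be parsed line by line in the same way:

```javascript
// Sketch: summarising k6's NDJSON output outside of k6.
// Each line of results.json is a JSON object; "Point" entries carry
// one metric sample. Sample lines are inlined here for illustration.
const lines = [
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2022-03-11T10:00:00Z","value":271.7}}',
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2022-03-11T10:00:01Z","value":285.2}}',
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2022-03-11T10:00:02Z","value":266.1}}',
  '{"type":"Metric","metric":"http_req_duration","data":{"type":"trend"}}',
];

const durations = lines
  .map((line) => JSON.parse(line))
  .filter((entry) => entry.type === 'Point' && entry.metric === 'http_req_duration')
  .map((entry) => entry.data.value)
  .sort((a, b) => a - b);

// Nearest-rank 95th percentile over the collected samples
const p95 = durations[Math.ceil(0.95 * durations.length) - 1];
console.log(`p(95) over ${durations.length} samples: ${p95}ms`);
```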

Using handleSummary callback for test summary

k6 can also write a general overview of the test results (the end-of-test summary) to a custom file. To accomplish this, we export a handleSummary function, as demonstrated in the snippet below:

src/summary_test.js
import { sleep } from 'k6';
import http from 'k6/http';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';

export const options = {
  duration: '.5m',
  vus: 5,
  iterations: 10,
  thresholds: {
    http_req_failed: ['rate<0.01'], // http errors should be less than 1%
    http_req_duration: ['p(95)<500'], // 95 percent of response times must be below 500ms
  },
};

export default function () {
  http.get('http://test.k6.io/contacts.php');
  sleep(3);
}

export function handleSummary(data) {
  console.log('Finished executing performance tests');
  return {
    stdout: textSummary(data, { indent: ' ', enableColors: true }), // Show the text summary on stdout...
    'summary.json': JSON.stringify(data), // and a JSON file with all the details...
  };
}

In the handleSummary callback, we direct the summary data to a summary.json file. Below is an example of a GitHub workflow that demonstrates how to upload the summary results to GitHub:

.github/workflows/k6_summary.yml
name: Summary Workflow
on: [push]
jobs:
  k6_local_test:
    name: k6 local test run - summary example
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Run local k6 test
        uses: grafana/k6-action@v0.2.0
        with:
          filename: src/summary_test.js
      - name: Store performance test results
        uses: actions/upload-artifact@v3
        with:
          name: k6-summary-report
          path: summary.json

Briefly analyzing the execution below, we can see that our console statement shows up as an INFO message, and that the summary.json file was created after the test finished executing. You can learn more about the handleSummary callback function here.

execution: local
script: src/summary_test.js
output: -
scenarios: (100.00%) 1 scenario, 5 max VUs, 1m0s max duration (incl. graceful stop):
* default: 10 iterations shared among 5 VUs (maxDuration: 30s, gracefulStop: 30s)
running (0m08.2s), 0/5 VUs, 10 complete and 0 interrupted iterations
default ✓ [======================================] 5 VUs 08.2s/30s 10/10 shared iters
INFO[0009] Finished executing performance tests source=console
data_received..................: 38 kB 4.6 kB/s
data_sent......................: 4.5 kB 545 B/s
http_req_blocked...............: avg=279.38ms min=0s med=145.54ms max=841.37ms p(90)=826.82ms p(95)=831ms
http_req_connecting............: avg=136.62ms min=0s med=133.85ms max=278.03ms p(90)=275.88ms p(95)=277.67ms
✓ http_req_duration..............: avg=272.97ms min=266.09ms med=271.73ms max=285.25ms p(90)=281.63ms p(95)=282.88ms
{ expected_response:true }...: avg=272.97ms min=266.09ms med=271.73ms max=285.25ms p(90)=281.63ms p(95)=282.88ms
✓ http_req_failed................: 0.00% ✓ 0 ✗ 20
http_req_receiving.............: avg=98.26µs min=0s med=0s max=982.6µs p(90)=98.26µs p(95)=982.6µs
http_req_sending...............: avg=193.22µs min=0s med=0s max=1.43ms p(90)=996.41µs p(95)=1.02ms
http_req_tls_handshaking.......: avg=137.98ms min=0s med=0s max=563.72ms p(90)=554.79ms p(95)=558.97ms
http_req_waiting...............: avg=272.68ms min=266.09ms med=271.73ms max=285.25ms p(90)=281.24ms p(95)=282.88ms
http_reqs......................: 20 2.427327/s
iteration_duration.............: avg=4.11s min=3.55s med=4.11s max=4.68s p(90)=4.68s p(95)=4.68s
iterations.....................: 10 1.213664/s
vus............................: 5 min=5 max=5
vus_max........................: 5 min=5 max=5
{
  "state": {
    "isStdOutTTY": true,
    "isStdErrTTY": true,
    "testRunDurationMs": 8239.5155
  },
  "metrics": {
    "http_req_duration{expected_response:true}": {
      "type": "trend",
      "contains": "time",
      "values": {
        "med": 271.73555,
        "max": 285.2528,
        "p(90)": 281.63156,
        "p(95)": 282.88102999999995,
        "avg": 272.97362499999997,
        "min": 266.0929
      }
    },
    "http_req_waiting": {
      "type": "trend",
      "contains": "time",
      "values": {
        "avg": 272.68214000000006,
        "min": 266.0929,
        "med": 271.73555,
        "max": 285.2528,
        "p(90)": 281.24339000000003,
        "p(95)": 282.88102999999995
      }
    },
    ...
  }
}

Looking at it, we can verify that summary.json is an overview of all the data k6 uses to build the end-of-test summary report, including the metrics gathered, the test execution state, and the test configuration.
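Since summary.json is plain JSON, it is also easy to post-process, for instance in a later workflow step that extracts a single headline number. Here is a minimal Node sketch; the summary object is inlined from the excerpt above rather than read from disk, so the snippet is self-contained:

```javascript
// Sketch: pulling the 95th-percentile duration out of a handleSummary dump.
// In a real step you would read and JSON.parse the summary.json artifact;
// here the relevant slice of the structure is inlined for illustration.
const summary = {
  metrics: {
    http_req_duration: {
      type: 'trend',
      contains: 'time',
      values: { avg: 272.97, min: 266.09, med: 271.73, max: 285.25, 'p(90)': 281.63, 'p(95)': 282.88 },
    },
  },
};

const p95 = summary.metrics.http_req_duration.values['p(95)'];
console.log(`http_req_duration p(95): ${p95}ms`); // prints: http_req_duration p(95): 282.88ms
```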

Variations

Using a different runner

GitHub provides Windows and macOS environments for running your workflows. You can also set up custom runners that operate on your premises or in your cloud infrastructure.

The load-testing workflow we used above is based on the official k6 action, provided through the GitHub Marketplace. This action, however, currently only runs on Linux. To run on a Windows or macOS runner, we have to install k6 as part of our pipeline.

Using a Windows runner

on: [push]
jobs:
  k6_local_test:
    name: k6 local test run on windows
    runs-on: windows-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Download and extract k6 release binaries
        run: |
          curl -L https://github.com/grafana/k6/releases/download/v0.26.2/k6-v0.26.2-win64.zip -o k6.zip
          7z.exe e k6.zip
        shell: bash
      - name: k6 test
        run: ./k6.exe run ./test.js
        shell: bash

The k6 documentation has the most up-to-date Windows installation instructions. We'd recommend using the Chocolatey package manager to ensure your script grabs the latest k6 version.

Using a macOS runner

on: [push]
jobs:
  k6_local_test:
    name: k6 local test on macos
    runs-on: macos-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Install k6 via Homebrew
        run: brew install k6
      - name: Local k6 test
        run: k6 run ./test.js

The brew package manager is a convenient way to grab and install the latest version of k6 whenever the workflow runs.

Nightly Builds

Triggering a subset of performance tests at a specific time is a best practice for automating your performance testing.

It's common to run some performance tests during the night, when users are not accessing the system under test: for example, to isolate more extensive tests from other types of testing, or to generate a performance report periodically.

To configure a scheduled build that runs at a given time of day or night, head over to your GitHub Actions workflow and update the on section. Here is an example that triggers the workflow every 15 minutes:

on:
  schedule:
    # * is a special character in YAML, so you have to quote this string
    - cron: '*/15 * * * *'

You'll have to use POSIX cron syntax to schedule a workflow to run at specific UTC times. Here is an interactive tool for creating crontab scheduling expressions.
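For an actual nightly run, say every day at 02:00 UTC, the schedule could look like this (the time is an arbitrary example; pick a window that suits your traffic patterns):

```yaml
on:
  schedule:
    # Run every day at 02:00 UTC
    - cron: '0 2 * * *'
```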

Simply save, commit, and push the file. GitHub will take care of running the workflow at the intervals you specified.

Using the docker image

Using the Docker image directly is almost as easy as using the Marketplace action. The example below uses the cloud service, but you could just as easily use it for local execution as well.

on: [push]
jobs:
  k6_cloud_test:
    name: k6 cloud test run
    runs-on: ubuntu-latest
    container: docker://grafana/k6:latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cloud k6 test
        env:
          K6_CLOUD_TOKEN: ${{ secrets.k6_cloud_token }}
        run: k6 cloud ./test.js

Summary

The official k6 GitHub Action, as well as the other possible configurations mentioned throughout the article, provide the same flexibility and capabilities as you're used to from running k6 locally.

Integrating k6 performance tests into a new or existing GitHub Actions pipeline is quick and easy, especially using the official Marketplace action. By running your performance tests continuously and automatically, you'll be able to identify and correct performance regressions as they occur.
