- Writing your performance test
- Creating the test script
- Configuring the load
- Configuring our thresholds
- Setting up the GitHub Actions workflow
- Running cloud tests
- Running k6 extensions
- Storing test results as artifacts
- Using the JSON output for time-series data
- Using handleSummary callback for test summary
- Variations
- Using a different runner
- Nightly Builds
- Using the docker image
- Summary
- See also
You can find a collection of k6 scripts and GitHub workflows referenced in this tutorial here.
📖 What you will learn
- How to integrate load testing with k6 into GitHub Actions
- Different implementation paths, and when to use each
In this tutorial, we will look at how to integrate performance testing into your development process with GitHub Actions and k6. For a video version 🎥, check out the tutorial on YouTube.
k6 is an open-source load testing tool for testing the performance of APIs, microservices, and websites. Developers use k6 to test a system's performance under a particular load to catch performance regressions or errors.
GitHub Actions is a tool that enables developers to create custom workflows for their software development lifecycle directly inside their GitHub repositories. Since mid-2019, GitHub Actions has supported full CI/CD pipelines.
If you've not used GitHub Actions before, we recommend looking at the following links to get a grasp of how it works:
Writing your performance test
We'll start small by writing a simple test that measures the performance of a single endpoint. As with most, if not all, development efforts, performance testing yields the best results if we work in small increments, iterating and expanding as our knowledge increases.
Our test will consist of three parts:
- An HTTP request against our system under test.
- A load configuration controlling the test duration and amount of virtual users.
- A performance goal, or service level objective, expressed as a threshold.
Creating the test script
When we execute our test script, each virtual user will execute the default function as many times as possible until the duration is up. To make sure we don't flood our system under test, we'll make the virtual user sleep for a second before it continues.
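A minimal test.js along those lines could look like the sketch below, assuming https://test.k6.io as the system under test (substitute your own endpoint):

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  // An HTTP GET request against the system under test
  http.get('https://test.k6.io');
  // Sleep for a second so each virtual user doesn't flood the target
  sleep(1);
}
```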
Configuring the load
We'll configure our test to run 50 virtual users continuously for one minute. Because of the sleep we added earlier, this will result in just under 50 iterations per second, giving us a total of about 2900 iterations.
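One way to express this load in the script is through an options block like the sketch below (the values match the description above):

```javascript
export const options = {
  vus: 50,        // 50 concurrent virtual users
  duration: '1m', // run continuously for one minute
};
```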
If you have installed k6 on your local machine, you can run your test locally in your terminal using the command: k6 run test.js.
Configuring our thresholds
The next step is to define your service level objectives, or SLOs, around your application's performance. SLOs are a vital aspect of ensuring the reliability of your systems and applications. If you do not currently have any defined SLAs or SLOs, now is an excellent time to consider your requirements.
You can define SLOs as Pass/Fail criteria with Thresholds in your k6 script. k6 evaluates them during the test execution and reports the threshold results. If any of the thresholds in our test fails, k6 exits with a non-zero exit code, communicating to the CI tool that the step has failed.
Now, we will add thresholds to our previous script to validate that the 95th percentile response time is below 500 ms and that our error rate is less than 1%. After this change, the script will be as in the snippet below:
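(A sketch of the updated script, using the built-in http_req_duration and http_req_failed metrics:)

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 50,
  duration: '1m',
  thresholds: {
    // 95th percentile response time must be below 500 ms
    http_req_duration: ['p(95)<500'],
    // Error rate must be below 1%
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  http.get('https://test.k6.io');
  sleep(1);
}
```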
Thresholds are a powerful feature providing a flexible API to define various types of Pass/Fail criteria in the same test run. For example:
- The 99th percentile response time must be below 700 ms.
- The 95th percentile response time must be below 400 ms.
- No more than 1% failed requests.
- The content of a response must be correct more than 95% of the time.
Check out the Thresholds documentation for additional details on the API and its usage.
Setting up the GitHub Actions workflow
To have GitHub Actions pick up and execute our load test, we need to create a workflow configuration and place it in .github/workflows. Once this file has been pushed, each commit to our repository will result in the workflow being run.
To avoid having to install k6 on the runner or download the k6 Docker image ourselves, we're using the official k6 action available on the GitHub Marketplace.
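A minimal .github/workflows/load-test.yml could look like the sketch below; the action and checkout versions shown are assumptions, so pin whatever releases fit your setup:

```yaml
name: k6 Load Test
on: [push]

jobs:
  k6_load_test:
    name: k6 Load Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run local k6 test
        uses: grafana/k6-action@v0.3.1
        with:
          filename: test.js
```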
At this point, commit and push your changes and then go to the actions tab of your GitHub repository. GitHub Actions will now have picked up our new workflow and executed it:
and if we select the k6 Load Test job:
Running cloud tests
There are two common execution modes to run k6 tests as part of the CI process.
- Locally on the CI server.
- In Grafana Cloud k6, from one or multiple geographic locations.
You might want to use cloud tests in these common cases:
- If you're going to run a test from multiple geographic locations (load zones).
- If you're going to run a high-load test that needs more compute resources than are available in the runner.
If any of those reasons fit your needs, then running cloud tests is the way to go for you.
⚠️ Try it locally first
Before we start with the configuration, it is good to familiarize ourselves with how cloud execution works, and we recommend trying to trigger a cloud test from your machine first.
Check out the cloud execution guide to learn how to distribute the test load across multiple geographic locations and more information about the cloud execution.
Now, we will show how to trigger cloud tests using GitHub Actions. If you do not have an account with Grafana Cloud already, you should go here and start your free trial.
After that, get your account token and add this token to your GitHub project's Secrets page.
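With the token stored as a secret (here assumed to be named K6_CLOUD_API_TOKEN), the workflow could be adjusted roughly as follows:

```yaml
name: k6 Cloud Load Test
on: [push]

jobs:
  k6_cloud_load_test:
    name: k6 Cloud Load Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run k6 cloud test
        uses: grafana/k6-action@v0.3.1
        with:
          filename: test.js
          cloud: true
          token: ${{ secrets.K6_CLOUD_API_TOKEN }}
```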
As you can see, the only changes needed in our workflow file are setting cloud to true and passing our API token to the action.
Once we commit and push these changes, k6 will now run the cloud test, and output the URL to our test results as part of the workflow logs:
And if we copy the highlighted URL and navigate to it in a new tab:
Running k6 extensions
k6 extensions allow users to extend k6 to cover use cases that are not natively supported. With extensions, users can test new protocols, build clients that communicate with other systems during the test, and improve the performance of tests by writing parts of them in Go and consuming them from tests written in JavaScript. k6 extensions can be imported as JavaScript modules and used in the test script.
As an example, we'll use xk6-counter to execute the following test:
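(A sketch of such a test is shown below; the exact module API is defined by the extension, and the counter.up() call is only an assumption for illustration.)

```javascript
import counter from 'k6/x/counter';

export default function () {
  // Assumed API: up() increments a shared counter and returns its value
  console.log(counter.up());
}
```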
The standard k6 executable won't be able to import the k6/x/counter module. On your local machine, this test can be run by using a custom k6 executable built with the xk6-counter extension:
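(A rough sketch, assuming the extension lives at github.com/mstoykov/xk6-counter:)

```bash
# Install xk6, the k6 extension builder (requires Go)
go install go.k6.io/xk6/cmd/xk6@latest

# Build a k6 binary that bundles the counter extension
xk6 build --with github.com/mstoykov/xk6-counter@latest

# Run the test with the custom binary instead of the stock k6
./k6 run test.js
```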
To achieve the same result on GitHub, all you need to do is to setup this workflow:
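(A sketch of such a workflow, running the same commands inside a Go container; the extension path is an assumption as above:)

```yaml
name: k6 test with extension
on: [push]

jobs:
  k6_extension_test:
    runs-on: ubuntu-latest
    container: golang:1.17-alpine
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build custom k6 binary and run the test
        run: |
          apk add --no-cache git
          go install go.k6.io/xk6/cmd/xk6@latest
          xk6 build --with github.com/mstoykov/xk6-counter@latest
          ./k6 run test.js
```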
Since the Go development environment is required, we'll use the official golang:1.17-alpine image to provide a suitable environment for compiling our extension. The install, build, and run commands are exactly the same as those used on the local machine.
Alternatively, you can build and host your own custom Docker image that has your required k6 extensions already set up. For reference, you can check out this article to see how it's implemented.
Storing test results as artifacts
Using the JSON output for time-series data
Using the upload-artifact GitHub action, we can upload k6 results to GitHub for later inspection. Do note, however, that this feature requires GitHub storage, which is only available on private (free) repositories and paid plans. If you attempt to run a workflow that uses the upload-artifact action on a public repository, it will simply be ignored.
Below is an example of load-test.yml that demonstrates how to upload k6 results to GitHub:
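(A sketch of what that could look like; the action versions and artifact name are assumptions:)

```yaml
name: k6 Load Test with JSON output
on: [push]

jobs:
  k6_load_test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run local k6 test
        uses: grafana/k6-action@v0.3.1
        with:
          filename: test.js
          # --out json writes every metric data point to results.json
          flags: --out json=results.json
      - name: Upload k6 results
        uses: actions/upload-artifact@v4
        with:
          name: k6-results
          path: results.json
```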
In the snippet above, we've passed the output option via the flags field, which the k6 GitHub action passes on to the actual k6 runner. The results will be uploaded and hosted on the GitHub repository, where you can access them via the UI.
The results.json file will provide all the metric points collected by k6. Depending on the load options specified, the file can get quite large. Storing it on GitHub is convenient if you don't need to analyze the raw data right away.
Using handleSummary callback for test summary
k6 can also report the general overview of the test results (end of the test summary) in a custom file. To accomplish this, we will need to export a handleSummary function as demonstrated in the code snippet below:
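(A minimal handleSummary sketch; the log message is only illustrative:)

```javascript
export function handleSummary(data) {
  // This console statement shows up as an INFO message in the workflow logs
  console.log('Finished executing performance tests');
  return {
    // Write the end-of-test summary data to summary.json
    'summary.json': JSON.stringify(data),
  };
}
```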
In the handleSummary callback, we have specified the summary.json file to store the results. Below is an example of a GitHub workflow that demonstrates how to upload the summary results to GitHub:
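(A sketch of such a workflow, again with assumed action versions and artifact name:)

```yaml
name: k6 Load Test with summary
on: [push]

jobs:
  k6_load_test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run local k6 test
        uses: grafana/k6-action@v0.3.1
        with:
          filename: test.js
      - name: Upload summary
        uses: actions/upload-artifact@v4
        with:
          name: k6-summary
          path: summary.json
```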
When we briefly analyze the execution below, we can see that our console statement showed up as an INFO message. We can also confirm that the summary.json file was created after the test finished executing, as demonstrated below. You can learn more about the handleSummary callback function here.
On inspection, we can verify that summary.json contains all the data k6 uses to build the end-of-test summary report, including the metrics gathered, the test execution state, and the test configuration.
Variations
Using a different runner
GitHub provides Windows and macOS environments to run your workflows. You can also set up custom runners that operate on your own premises or cloud infrastructure.
The k6-load-testing workflow we have used above is based on the official k6 action, provided through the GitHub Marketplace. This action, however, currently only runs on Linux. To be able to run it on a Windows or macOS runner, we'll have to install k6 as part of our pipeline.
Using a Windows runner
Here are the most up-to-date k6 installation instructions for Windows. We recommend using the Chocolatey package manager to ensure your workflow grabs the latest k6 version.
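A sketch of a Windows-based job could look like this, assuming the k6 Chocolatey package:

```yaml
jobs:
  k6_windows_test:
    runs-on: windows-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install k6 via Chocolatey
        run: choco install k6 -y
      - name: Run k6 test
        run: k6 run test.js
```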
Using a macOS runner
The brew package manager is the best tool for grabbing and installing the latest version of k6 whenever the workflow is run.
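The equivalent macOS-based job could look roughly like this, assuming the k6 Homebrew formula:

```yaml
jobs:
  k6_macos_test:
    runs-on: macos-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install k6 via Homebrew
        run: brew install k6
      - name: Run k6 test
        run: k6 run test.js
```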
Nightly Builds
Triggering a subset of performance tests at a specific time is a best practice for automating your performance testing.
It's common to run some performance tests during the night when users do not access the system under test. For example, to isolate more extensive tests from other types of testing or to generate a performance report periodically.
To configure a scheduled nightly build that runs at a given time of a given day or night, head over to your GitHub Actions workflow and update the on section. Here is an example that triggers the workflow every 15 minutes:
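(A sketch of the relevant on section only:)

```yaml
on:
  schedule:
    # POSIX cron syntax, evaluated in UTC: run every 15 minutes
    - cron: '*/15 * * * *'
    # For a true nightly run at 00:00 UTC you could use: '0 0 * * *'
```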
You'll have to use POSIX cron syntax to schedule a workflow to run at specific UTC times. Here is an interactive tool for creating crontab scheduling expressions.
Simply save, commit, and push the file. GitHub will take care of running the workflow at the time intervals you specified.
Using the docker image
Using the Docker image directly is almost as easy as the marketplace app. The example below uses the cloud service, but you could just as easily use it for local execution as well.
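(The following is a sketch using the grafana/k6 Docker image and an assumed K6_CLOUD_API_TOKEN secret; adjust the image tag and secret name to your setup.)

```yaml
name: k6 cloud test via Docker
on: [push]

jobs:
  k6_cloud_test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Run k6 cloud test with the Docker image
        run: |
          docker run --rm \
            -v ${{ github.workspace }}:/scripts \
            -e K6_CLOUD_TOKEN=${{ secrets.K6_CLOUD_API_TOKEN }} \
            grafana/k6:latest cloud /scripts/test.js
```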
Summary
The official k6 GitHub Action, as well as the other possible configurations mentioned throughout the article, provide the same flexibility and capabilities as you're used to from running k6 locally.
Integrating k6 performance tests into a new or existing GitHub Actions pipeline is quick and easy, especially using the official marketplace app. By running your performance tests continuously and automatically, you'll be able to identify and correct performance regressions as they occur.
See also