What is a check?

Checks are similar to asserts, but they differ in one important way: a failed check does not halt execution. Instead, k6 stores the result of each check, pass or fail, and lets the script continue. If you need a way to halt execution, take a look at thresholds. Checks are great for codifying assertions about HTTP requests and responses, for example making sure the response code is a 2xx:

import { check } from 'k6';
import http from 'k6/http';

export default function () {
  let res = http.get('');
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
}

In the above example, one check was specified, but you can add as many as you need in a single call to check(). When you run the script, k6 displays the results of the check calls in the following output:

check output

In the output above you can see that our check "is status 200" was successful 100% of the time it was called.

You may also add multiple checks within a single check() statement, like this:

import { check } from 'k6';
import http from 'k6/http';

export default function () {
  let res = http.get('');
  check(res, {
    'is status 200': (r) => r.status === 200,
    'body size is 1176 bytes': (r) => r.body.length == 1176,
  });
}

multiple checks output

Using checks in a CI setting

One important thing to understand regarding checks is that a failed check will not fail the whole load test.

Checks help to keep your code organized and easy to read, but when you're running a load test in a CI test suite you may want certain error conditions to fail the whole load test. In this case, you can combine checks with thresholds:

import http from 'k6/http';
import { check } from 'k6';
import { Rate } from 'k6/metrics';

export let errorRate = new Rate('errors');
export let options = {
  thresholds: {
    errors: ['rate<0.1'], // <10% errors
  },
};

export default function () {
  const res = http.get('');
  const result = check(res, {
    'status is 200': (r) => r.status == 200,
  });
  errorRate.add(!result);
}

The above script declares a custom Rate metric (called "errors") to hold information about the errors we have seen during the test. It then uses a threshold on that custom metric to fail the test when it encounters too many errors. If we replace the "" URL with one that will generate an error, k6 will exit with a nonzero exit value, indicating a FAIL result to e.g. a CI system that may have executed it:
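To make the rate/threshold interaction concrete, here is a toy model of how a Rate metric behaves. This is a simplified sketch, not k6's actual implementation: a Rate tracks the fraction of non-zero (truthy) samples added to it, and the threshold 'rate<0.1' passes only while that fraction stays below 0.1.

```javascript
// Toy model of a Rate metric (NOT k6's implementation): it records
// boolean samples and exposes the fraction of truthy ones.
class Rate {
  constructor(name) {
    this.name = name;
    this.nonZero = 0;
    this.total = 0;
  }
  add(value) {
    this.total++;
    if (value) this.nonZero++;
  }
  get rate() {
    return this.total === 0 ? 0 : this.nonZero / this.total;
  }
}

const errors = new Rate('errors');
// Simulate 100 iterations where 5 of them fail their check,
// i.e. errorRate.add(!result) is called with `true` 5 times:
for (let i = 0; i < 100; i++) errors.add(i < 5);

console.log(errors.rate); // 0.05
// The threshold 'rate<0.1' would pass, because 0.05 < 0.1:
console.log(errors.rate < 0.1); // true
```

With 10 or more failures out of 100, the rate would reach 0.1 and the threshold would fail, which is what makes k6 exit with a nonzero code.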

threshold results

As you can see above, the exit code generated by k6 after this run was 99. Any nonzero exit code is commonly interpreted by Un*x shells, CI servers, and monitoring systems as a "failure".

Note also that we use the return value of check() to decide whether to increment our error rate. When any one of the check conditions inside a check() call fails, check() returns false, which causes the error rate to be incremented. Only if all check conditions pass will check() return true.

See check() in the script API reference for more details on how check() works.

Checks in k6 Cloud Results

In k6 Cloud Results, checks are available in their own tab for analysis.

Here we can quickly see what checks are failing, and upon clicking on any check, see the count of passes/failures at given points in the test. You can also add the check to the analysis tab, for further comparison with other metrics.

k6 Cloud Checks Tab