End-of-test Summary

By default, at the end of every local test run, k6 prints a summary report to stdout that contains a general overview of your test results. It includes aggregated values for all built-in and custom metrics and sub-metrics, thresholds, groups, and checks. It can look somewhat like this:

✓ http2 is used
✓ status is 200
✓ content is present

█ Static Assets

  ✓ status is 200
  ✓ reused connection

✓ check_failure_rate.........: 0.00%   ✓ 0     ✗ 6708
  checks.....................: 100.00% ✓ 16770 ✗ 0
  data_received..............: 94 MB   308 kB/s
  data_sent..................: 1.6 MB  5.2 kB/s
  group_duration.............: min=134.4ms  avg=177.67ms med=142.75ms p(95)=278.26ms p(99)=353.49ms p(99.99)=983.84ms max=1.01s
  http_req_blocked...........: min=947ns    avg=1.66ms   med=2.37µs   p(95)=4.65µs   p(99)=38.98µs  p(99.99)=620.34ms max=811.88ms
  http_req_connecting........: min=0s       avg=536.83µs med=0s       p(95)=0s       p(99)=0s       p(99.99)=208.81ms max=232.16ms
✓ http_req_duration..........: min=131.44ms avg=150.63ms med=138.13ms p(95)=269.81ms p(99)=283.83ms p(99.99)=982.76ms max=1.01s
  ✗ { staticAsset:yes }......: min=131.44ms avg=153.09ms med=138.2ms  p(95)=271.34ms p(99)=284.22ms p(99.99)=1.01s    max=1.01s
  http_req_receiving.........: min=33.36µs  avg=2.66ms   med=180.36µs p(95)=2.4ms    p(99)=128.79ms p(99.99)=205.16ms max=205.45ms
  http_req_sending...........: min=6.09µs   avg=44.92µs  med=35.77µs  p(95)=98.26µs  p(99)=148.49µs p(99.99)=1.09ms   max=5.53ms
  http_req_tls_handshaking...: min=0s       avg=1.12ms   med=0s       p(95)=0s       p(99)=0s       p(99.99)=447.46ms max=614.35ms
  http_req_waiting...........: min=131.3ms  avg=147.92ms med=137.57ms p(95)=267.49ms p(99)=282.23ms p(99.99)=982.55ms max=1.01s
  http_reqs..................: 13416   44.111343/s
  iteration_duration.........: min=2.28s    avg=3.83s    med=3.82s    p(95)=5.2s     p(99)=5.36s    p(99.99)=6.1s     max=6.18s
  iterations.................: 3354    11.027836/s
  vus........................: 1       min=1  max=50
  vus_max....................: 50      min=50 max=50

A few options can affect how this report behaves:

  • The --summary-trend-stats option lets you specify which stats (min, max, average, percentiles, etc.) are calculated and shown for Trend metrics.
  • The --summary-time-unit option forces k6 to use a fixed time unit for all time values in the summary.
  • The --no-summary option disables the report entirely, including --summary-export and handleSummary().
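
The first two can also be set from the script itself, via the summaryTrendStats and summaryTimeUnit options. Here is a minimal sketch (the particular stats and time unit are only examples):

Configuring the summary from script options

export const options = {
  // Which stats to calculate and display for Trend metrics in the summary.
  summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.99)'],
  // Show all time values in the summary in a fixed unit (milliseconds).
  summaryTimeUnit: 'ms',
};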

Summary export to a JSON file

Since version 0.26.0, k6 has had the --summary-export=path/to/file.json option for local test runs. It exports a portion of the end-of-test summary data to a JSON file.

Unfortunately, the exported format is somewhat limited and has a few confusing peculiarities. For example, groups and checks are unordered, and threshold values are unintuitive: they signify whether the threshold has been crossed, so true is the "bad" value (the threshold failed) and false is the "good" one...

We couldn't change the --summary-export data format, because that would have broken backwards compatibility in a feature people depended on in their CI pipelines, so it still works exactly as it used to. However, in k6 v0.30.0 we introduced handleSummary() - a new and better way to generate JSON exports of the summary data, as well as any other format that may be required (CSV, XML (JUnit/xUnit/etc.), HTML, TXT, etc.). We strongly recommend using handleSummary() instead of --summary-export. For more details, see the next section in this document.

handleSummary() callback

Starting with k6 v0.30.0, users can now completely customize the end-of-test summary report!

You can now export a function called handleSummary() and k6 will call it at the end of the test run, even after teardown(). handleSummary() is called with a JS object containing the same information used to generate the end-of-test summary and the --summary-export file, and it lets you completely customize what the end-of-test summary looks like.

Besides customizing the end-of-test CLI summary (if handleSummary() is exported, k6 will not print the default one), you can also transform the summary data into various machine-readable or human-readable formats and save it to files. This makes it possible to write JS helper functions that generate JSON, CSV, XML (JUnit/xUnit/etc.), HTML, etc. files from the summary data.

You can also send the generated reports to a remote server by making an HTTP request with them (or using any of the other protocols k6 already supports)! Here's a simple example:

handleSummary() demo
import http from 'k6/http';
import k6example from 'https://raw.githubusercontent.com/loadimpact/k6/master/samples/thresholds_readme_example.js';
export default k6example; // use some predefined example to generate some data
export const options = { vus: 5, iterations: 10 };

// These are still very much WIP and untested, but you can use them as is or write your own!
import { jUnit, textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';

export function handleSummary(data) {
  console.log('Preparing the end-of-test summary...');

  // Send the results to some remote server or trigger a hook
  let resp = http.post('https://httpbin.test.k6.io/anything', JSON.stringify(data));
  if (resp.status != 200) {
    console.error('Could not send summary, got status ' + resp.status);
  }

  return {
    'stdout': textSummary(data, { indent: ' ', enableColors: true }), // Show the text summary to stdout...
    '../path/to/junit.xml': jUnit(data), // but also transform it and save it as a JUnit XML...
    'other/path/to/summary.json': JSON.stringify(data), // and a JSON with all the details...
    // And any other JS transformation of the data you can think of,
    // you can write your own JS helpers to transform the summary data however you like!
  };
}

k6 expects handleSummary() to return a {key1: value1, key2: value2, ...} map. The values can be strings or ArrayBuffers, and they represent the contents of the generated summary report. The keys should be strings that determine where the contents will be displayed or saved (see the sketch after this list):

  • stdout for standard output,
  • stderr for standard error,
  • or any relative or absolute path to a file on the system (which will be overwritten).
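
For example, here is a minimal sketch of that contract, returning a short custom line to stdout and saving the raw data next to the script (the file name and the exact message are just illustrations):

Minimal handleSummary() return map

export function handleSummary(data) {
  // `data.metrics.iterations.values.count` comes from the summary data described below.
  const doneMsg = 'Finished ' + data.metrics.iterations.values.count + ' iterations\n';
  return {
    stdout: doneMsg,                               // printed to standard output
    'summary.json': JSON.stringify(data, null, 2), // saved relative to the working directory (overwritten if it exists)
  };
}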

The format of the data parameter is similar, but not identical, to that of --summary-export. For backwards compatibility the --summary-export format remains unchanged, while the data passed to this new feature is more extensible and fixes some of the ambiguities and issues of the older format.

To get an idea of what data looks like in your specific test run, just add return { 'raw-data.json': JSON.stringify(data) }; to your handleSummary() function and inspect the resulting raw-data.json file. Here's a very abridged example of what it might contain:

data passed to handleSummary()
{
  "root_group": {
    "path": "",
    "groups": [
      // Sub-groups of the root group...
    ],
    "checks": [
      {
        "passes": 10,
        "fails": 0,
        "name": "check name",
        "path": "::check name"
      },
      // More checks...
    ],
    "name": ""
  },
  "options": {
    // Some of the global options of the k6 test run,
    // currently only summaryTimeUnit and summaryTrendStats
  },
  "metrics": {
    // A map with metric and sub-metric names as the keys and objects with
    // details for the metric. These objects contain the following keys:
    //  - type: describes the metric type, e.g. counter, rate, gauge, trend
    //  - contains: what is the type of data, e.g. time, default, data
    //  - values: the specific metric values, depends on the metric type
    //  - thresholds: any thresholds defined for the metric or sub-metric
    //
    "http_reqs": {
      "type": "counter",
      "contains": "default",
      "values": {
        "count": 40,
        "rate": 19.768856959496336
      }
    },
    "vus": {
      "type": "gauge",
      "contains": "default",
      "values": {
        "value": 1,
        "min": 1,
        "max": 5
      }
    },
    "http_req_duration": {
      "type": "trend",
      "contains": "time",
      "values": {
        // actual keys depend on summaryTrendStats
        "min": 135.092841,
        "avg": 268.31137452500013,
        "max": 846.198634,
        "p(99.99)": 846.1969478817999,
        // ...
      },
      "thresholds": {
        "p(95)<500": {
          "ok": false
        }
      }
    },
    "http_req_duration{staticAsset:yes}": { // sub-metric from threshold
      "contains": "time",
      "values": {
        // actual keys depend on summaryTrendStats
        "min": 135.092841,
        "avg": 283.67766343333335,
        "max": 846.198634,
        "p(99.99)": 846.1973802197999,
        // ...
      },
      "thresholds": {
        "p(99)<250": {
          "ok": false
        }
      },
      "type": "trend"
    },
    // ...
  }
}
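
Since data is just a JS object with the structure shown above, you can also pick out individual values and compute your own aggregates when building a custom report. Here is a sketch that counts crossed thresholds across all metrics and writes a one-line summary; it assumes the test makes HTTP requests (so http_req_duration is present), and the output file name is only an example:

Building a custom one-line summary from data

export function handleSummary(data) {
  // Count how many thresholds, across all metrics and sub-metrics, were crossed.
  let failedThresholds = 0;
  for (const metric of Object.values(data.metrics)) {
    for (const threshold of Object.values(metric.thresholds || {})) {
      if (!threshold.ok) {
        failedThresholds++;
      }
    }
  }

  // Which keys exist under `values` depends on the metric type and on summaryTrendStats;
  // time values are in milliseconds, as in the abridged example above.
  const avgDuration = data.metrics.http_req_duration.values.avg;

  const line = 'failed thresholds: ' + failedThresholds +
    ', avg http_req_duration: ' + avgDuration.toFixed(2) + 'ms\n';

  return {
    stdout: line,               // short custom summary in the terminal
    'custom-summary.txt': line, // the same line saved to a file
  };
}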

This feature is only available for local k6 run tests for now, though we plan to support k6 cloud tests eventually. And, as mentioned in the snippet above, the JS helper functions that transform the summary into various formats are far from final, so keep an eye on jslib.k6.io for updates. Or, better yet, submit PRs with improvements and more transformations at https://github.com/k6io/jslib.k6.io