Custom summary

With handleSummary(), you can completely customize your end-of-test summary. In this document, read about:

  • How handleSummary() works
  • How to customize the content and output location of your summary
  • The data structure of the summary object

handleSummary() is available only for local tests.

However, we plan to support the feature for k6 Cloud tests, too. Track progress in this issue.

About handleSummary()

After your test runs, k6 aggregates your metrics into a JavaScript object. The handleSummary() function takes this object as an argument (called data in all examples here).

You can use handleSummary() to create a custom summary or return the default summary object. To get an idea of what the data looks like, run this script and open the output file, summary.json.

return summary as JSON
import http from 'k6/http';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  return {
    'summary.json': JSON.stringify(data), // the default data object
  };
}

Fundamentally, handleSummary() is just a function that can access a data object. As such, you can transform the summary data into any text format: JSON, HTML, console, XML, and so on. You can pipe your custom summary to standard output or standard error, write it to a file, or send it to a remote server.

k6 calls handleSummary() at the end of the test lifecycle.
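
For example, the following sketch turns the summary into a small CSV file, one row per metric. This is only an illustration; the summary.csv path is arbitrary, and the type and contains fields it reads are described in the reference section below.

write the summary as CSV
import http from 'k6/http';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  // One CSV row per metric: its name, type, and the kind of data it contains.
  let csv = 'metric,type,contains\n';
  for (const [name, metric] of Object.entries(data.metrics)) {
    csv += `${name},${metric.type},${metric.contains}\n`;
  }

  return {
    'summary.csv': csv, // written relative to the working directory
  };
}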

Use handleSummary()

The following sections go over the handleSummary() syntax and provide some examples.

To look up the structure of the summary object, refer to the reference section.

Syntax

k6 expects handleSummary() to return a {key1: value1, key2: value2, ...} map that represents the summary metrics.

The keys must be strings. They determine where k6 displays or saves the content:

  • stdout for standard output
  • stderr for standard error
  • any relative or absolute path to a file on the system (this operation overwrites existing files)

The value of a key can have a type of either string or ArrayBuffer.
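
For instance, a minimal sketch (the summary.bin file name and the byte values are made up for this example) could return both value types at once:

string and ArrayBuffer values
export function handleSummary(data) {
  // "Hi\n" as raw bytes; an ArrayBuffer value is written out as-is.
  const bytes = new Uint8Array([0x48, 0x69, 0x0a]).buffer;

  return {
    stdout: 'Test finished\n', // string value, shown on standard output
    'summary.bin': bytes, // ArrayBuffer value, written to a file
  };
}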

You can return multiple summary outputs in a script. As an example, this return statement sends a report to standard output and writes the data object to a JSON file.

example keys for handleSummary output
return {
  'stdout': textSummary(data, { indent: ' ', enableColors: true }), // Show the text summary to stdout...
  'other/path/to/summary.json': JSON.stringify(data), // and a JSON with all the details...
};

Example: extract data properties

This minimal handleSummary() extracts the median value for the iteration_duration metric and prints it to standard output:

Print metric value
import http from 'k6/http';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  const med_latency = data.metrics.iteration_duration.values.med;
  const latency_message = `The median latency was ${med_latency}\n`;

  return {
    stdout: latency_message,
  };
}

Example: modify default output

If handleSummary() is exported, k6 does not print the default summary. However, if you want to keep the default output, you can import textSummary from the k6 JS utilities library. For example, you could write a custom HTML report to a file and use the textSummary() function to print the default report to the console.
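
As a rough sketch of that combination (the HTML markup and the report.html file name are invented for this example), you could return a hand-built HTML file alongside the default text report:

HTML file plus default text report
import http from 'k6/http';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.2/index.js';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  // A deliberately tiny, hand-rolled HTML report; a real report would be richer.
  const html = `<html><body><h1>k6 summary</h1><p>Total requests: ${data.metrics.http_reqs.values.count}</p></body></html>\n`;

  return {
    'report.html': html, // custom HTML report written to a file
    stdout: textSummary(data, { indent: ' ', enableColors: true }), // default text report
  };
}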

You can also use textSummary() to make minor modifications to the default end-of-test summary. To do so:

  1. Modify the data object however you want.
  2. In your return statement, pass the modified object as an argument to the textSummary() function.

The textSummary() function comes with a few options:

| Option | Description |
| --- | --- |
| indent | How to start the summary indentation |
| enableColors | Whether to print the summary in color |

For example, this handleSummary() modifies the default summary in the following ways:

  • It deletes the http_req_duration{expected_response:true} sub-metric.
  • It deletes all metrics whose key starts with iteration.
  • It begins each line with the → character.
Modify default
import http from 'k6/http';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.2/index.js';

export default function () {
  http.get('https://test.k6.io');
}

export function handleSummary(data) {
  delete data.metrics['http_req_duration{expected_response:true}'];

  for (const key in data.metrics) {
    if (key.startsWith('iteration')) delete data.metrics[key];
  }

  return {
    stdout: textSummary(data, { indent: '→', enableColors: true }),
  };
}

For comparison, this is the default report before the modifications above. For compactness, the output was limited with the summaryTrendStats option.

Default report
data_received..................: 63 kB 42 kB/s
data_sent......................: 830 B 557 B/s
http_req_blocked...............: med=10.39µs count=5 p(99)=451.07ms p(99.99)=469.67ms
http_req_connecting............: med=0s count=5 p(99)=223.97ms p(99.99)=233.21ms
http_req_duration..............: med=202.26ms count=5 p(99)=225.81ms p(99.99)=226.71ms
{ expected_response:true }...: med=202.26ms count=5 p(99)=225.81ms p(99.99)=226.71ms
http_req_failed................: 0.00% ✓ 0 ✗ 5
http_req_receiving.............: med=278.27µs count=5 p(99)=377.64µs p(99.99)=381.29µs
http_req_sending...............: med=47.57µs count=5 p(99)=108.42µs p(99.99)=108.72µs
http_req_tls_handshaking.......: med=0s count=5 p(99)=204.42ms p(99.99)=212.86ms
http_req_waiting...............: med=201.77ms count=5 p(99)=225.6ms p(99.99)=226.5ms
http_reqs......................: 5 3.352646/s
iteration_duration.............: med=204.41ms count=5 p(99)=654.78ms p(99.99)=672.43ms
iterations.....................: 5 3.352646/s
vus............................: 1 min=1 max=1
vus_max........................: 1 min=1 max=1

Example: make custom file format

This script imports a helper function to turn the summary into JUnit XML. The output is a short XML file that reports whether the test thresholds failed.

Custom file format
import http from 'k6/http';

// Use example functions to generate data
import { jUnit } from 'https://jslib.k6.io/k6-summary/0.0.2/index.js';
import k6example from 'https://raw.githubusercontent.com/grafana/k6/master/examples/thresholds_readme_example.js';

export default k6example;
export const options = {
  vus: 5,
  iterations: 10,
  thresholds: {
    http_req_duration: ['p(95)<200'], // 95% of requests should be below 200ms
  },
};

export function handleSummary(data) {
  console.log('Preparing the end-of-test summary...');

  return {
    'junit.xml': jUnit(data), // Transform summary and save it as a JUnit XML...
  };
}

Output for a test that crosses a threshold looks something like this:

<?xml version="1.0"?>
<testsuites tests="1" failures="1">
  <testsuite name="k6 thresholds" tests="1" failures="1">
    <testcase name="http_req_duration - p(95)&lt;200">
      <failure message="failed" />
    </testcase>
  </testsuite>
</testsuites>

Example: send data to remote server

You can also send the generated reports to a remote server (over any protocol that k6 supports).

POST the summary
import http from 'k6/http';

// use example function to generate data
import k6example from 'https://raw.githubusercontent.com/grafana/k6/master/examples/thresholds_readme_example.js';
export default k6example;
export const options = { vus: 5, iterations: 10 };

export function handleSummary(data) {
  console.log('Preparing the end-of-test summary...');

  // Send the results to some remote server or trigger a hook
  const resp = http.post('https://httpbin.test.k6.io/anything', JSON.stringify(data));
  if (resp.status != 200) {
    console.error('Could not send summary, got status ' + resp.status);
  }
}
note

The last examples use imported helper functions. These functions might change, so keep an eye on jslib.k6.io for the latest.

Of course, we always welcome PRs to the jslib, too!

Summary data reference

Summary data includes information about your test run time and all built-in and custom metrics (including checks).

All metrics are in a top-level metrics object. In this object, each key is the name of a metric, and its value is an object with that metric's details. For example, if your handleSummary() argument is called data, the function can access the object about the http_req_duration metric at data.metrics.http_req_duration.
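
For example, this short sketch uses dot notation for a plain metric name and bracket notation for a sub-metric name that contains braces (the sub-metric exists only if your test produces it):

access metrics and sub-metrics
export function handleSummary(data) {
  // Plain metric names work with dot notation...
  const totalReqs = data.metrics.http_reqs.values.count;

  // ...while sub-metric names such as 'http_req_duration{expected_response:true}'
  // need bracket notation because of the braces.
  const okDuration = data.metrics['http_req_duration{expected_response:true}'];
  const med = okDuration ? okDuration.values.med : 'n/a';

  return {
    stdout: `requests: ${totalReqs}, median duration (expected responses): ${med}\n`,
  };
}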

Metric schema

The following table describes the schema for the metrics object. The specific values depend on the metric type:

| Property | Description |
| --- | --- |
| type | String that gives the metric type |
| contains | String that describes the data |
| values | Object with the summary metric values (properties differ for each metric type) |
| thresholds | Object with info about the thresholds for the metric (if applicable) |
note

If you change the default trend metrics with the summaryTrendStats option, the keys for the values of the trend will change accordingly.
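
As a sketch of how the thresholds property can be used (it follows the structure shown in the example JSON below; the output wording is arbitrary), you could list every threshold that was not met:

list failed thresholds
export function handleSummary(data) {
  let failures = '';

  for (const [metricName, metric] of Object.entries(data.metrics)) {
    if (!metric.thresholds) continue; // not every metric has thresholds

    for (const [thresholdName, threshold] of Object.entries(metric.thresholds)) {
      if (!threshold.ok) {
        failures += `${metricName}: threshold '${thresholdName}' failed\n`;
      }
    }
  }

  return {
    stdout: failures || 'All thresholds passed\n',
  };
}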

Example summary JSON

To see what the summary data looks like in your specific test run:

  1. Add this to your handleSummary() function:

    return { 'raw-data.json': JSON.stringify(data) };
  2. Inspect the resulting raw-data.json file.

The following is an abridged example of how it might look:

data passed to handleSummary()
{
  "root_group": {
    "path": "",
    "groups": [
      // Sub-groups of the root group...
    ],
    "checks": [
      {
        "passes": 10,
        "fails": 0,
        "name": "check name",
        "path": "::check name"
      },
      // More checks...
    ],
    "name": ""
  },
  "options": {
    // Some of the global options of the k6 test run,
    // currently only summaryTimeUnit and summaryTrendStats
  },

  "state": {
    "testRunDurationMs": 30898.965069,
    // And information about TTY checkers
  },

  "metrics": {
    // A map with metric and sub-metric names as the keys and objects with
    // details for the metric. These objects contain the following keys:
    //  - type: describes the metric type, e.g. counter, rate, gauge, trend
    //  - contains: what is the type of data, e.g. time, default, data
    //  - values: the specific metric values, depends on the metric type
    //  - thresholds: any thresholds defined for the metric or sub-metric
    //
    "http_reqs": {
      "type": "counter",
      "contains": "default",
      "values": {
        "count": 40,
        "rate": 19.768856959496336
      }
    },
    "vus": {
      "type": "gauge",
      "contains": "default",
      "values": {
        "value": 1,
        "min": 1,
        "max": 5
      }
    },
    "http_req_duration": {
      "type": "trend",
      "contains": "time",
      "values": {
        // actual keys depend on summaryTrendStats

        "avg": 268.31137452500013,
        "max": 846.198634,
        "p(99.99)": 846.1969478817999,
        // ...
      },
      "thresholds": {
        "p(95)<500": {
          "ok": false
        }
      }
    },
    "http_req_duration{staticAsset:yes}": { // sub-metric from threshold
      "contains": "time",
      "values": {
        // actual keys depend on summaryTrendStats
        "min": 135.092841,
        "avg": 283.67766343333335,
        "max": 846.198634,
        "p(99.99)": 846.1973802197999,
        // ...
      },
      "thresholds": {
        "p(99)<250": {
          "ok": false
        }
      },
      "type": "trend"
    },
    // ...
  }
}

Custom output examples

These examples are community contributions. We thank everyone who has shared!