Metrics measure how a system performs under test conditions. By default, k6 automatically collects built-in metrics. Besides built-ins, you can also make custom metrics.
Metrics fall into four broad types:
- Counters sum values.
- Gauges track the smallest, largest, and latest values.
- Rates track how frequently a non-zero value occurs.
- Trends calculate statistics over multiple values (like mean, mode, or percentile).
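To illustrate what a Trend reports, here is a small, hypothetical sketch (plain JavaScript, not the k6 implementation) of trend-style statistics over recorded values; `trendStats` and the nearest-rank percentile method are illustrative assumptions:

```javascript
// Hypothetical sketch of the statistics a Trend metric reports.
// k6 computes these internally; this is only for illustration.
function trendStats(values, p = 95) {
  const sorted = [...values].sort((a, b) => a - b);
  const mean = sorted.reduce((sum, v) => sum + v, 0) / sorted.length;
  // Nearest-rank percentile: the smallest value such that at least p%
  // of the samples are less than or equal to it.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    mean,
    [`p(${p})`]: sorted[idx],
  };
}

// For example, over four request durations in milliseconds:
console.log(trendStats([100, 200, 300, 400]));
```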
To make a test fail when it doesn't meet certain criteria, you can write a Threshold based on a metric (the specifics of the expression depend on the metric type). To filter metrics, you can use Tags and groups. You can also export metrics in various summary and granular formats, as documented in Results output.
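As a sketch of how Tags and Thresholds combine, you can tag requests and then set a Threshold on just that tagged sub-metric. The URL and the `page` tag below are made-up placeholders:

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    // Threshold on a tagged sub-metric: only requests tagged
    // page:checkout count toward this p(95) limit.
    'http_req_duration{page:checkout}': ['p(95)<500'],
  },
};

export default function () {
  // Placeholder URL and tag value, for illustration only.
  http.get('https://example.com/checkout', { tags: { page: 'checkout' } });
}
```

Run it with the k6 CLI (for example, `k6 run script.js`); the threshold result appears in the end-of-test summary.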
| On this page... | Read about... |
| --- | --- |
| Built-in metrics | Each built-in metric for each supported protocol |
| Create custom metrics | How to build your own metric for each metric type |
## What metrics to look at?
Each metric provides a different perspective on performance. So the best metric for your analysis depends on your goals.
However, if you're unsure about the metrics to focus on, you can start with the metrics that measure the requests, errors, and duration (the criteria of the RED method).
- http_reqs, to measure requests
- http_req_failed, to measure error rate
- http_req_duration, to measure duration
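To turn these three metrics into pass/fail criteria, you could write Thresholds on them. The limits below are arbitrary examples, not recommendations:

```javascript
export const options = {
  thresholds: {
    http_reqs: ['count>100'],         // traffic: at least 100 requests total
    http_req_failed: ['rate<0.01'],   // availability: error rate under 1%
    http_req_duration: ['p(95)<500'], // latency: 95th percentile under 500 ms
  },
};
```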
In other terminology, these metrics measure traffic (in requests), availability (in error rate), and latency (in request duration). SREs might recognize these metrics as three of the four Golden Signals.
When you run a test, k6 writes an aggregated summary of all built-in and custom metrics to stdout:
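For example, a minimal script like the following (the target URL is k6's public demo site) produces such a summary when run with the k6 CLI:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  http.get('https://test.k6.io');
  sleep(1);
}
```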
The preceding script outputs something like this:
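The exact values differ on every run; an abridged, illustrative summary looks roughly like this (the numbers here are invented for illustration):

```
http_req_duration..............: avg=135ms min=98ms med=130ms max=210ms p(90)=170ms p(95)=190ms
http_req_failed................: 0.00%  ✓ 0   ✗ 10
http_reqs......................: 10     0.9/s
iteration_duration.............: avg=1.14s min=1.1s med=1.13s max=1.22s
iterations.....................: 10     0.9/s
vus............................: 1      min=1 max=1
vus_max........................: 1      min=1 max=1
```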
In that output, all the metrics that start with http, iteration, and vu are built-in metrics, which are written to stdout at the end of a test. For details of all metrics, refer to the Metrics reference.