Metrics measure how a system performs under test conditions. By default, k6 automatically collects built-in metrics. Besides the built-ins, you can also create custom metrics.
Metrics fall into four broad types:
- Counters sum values.
- Gauges track the smallest, largest, and latest values.
- Rates track how frequently a non-zero value occurs.
- Trends calculate statistics for multiple values (like mean or percentiles).
Built-in metrics
The built-in metrics output to stdout when you run the simplest possible k6 test:
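A minimal script along these lines exercises the built-in metrics (the target URL is illustrative; any endpoint works):

```javascript
import http from 'k6/http';

// The simplest possible k6 test: one GET request per iteration.
export default function () {
  http.get('https://test.k6.io');
}
```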
Running the preceding script outputs something like this:
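(The summary below is abridged and the values are illustrative; your numbers will differ per run.)

```
data_received..............: 22 kB  5.7 kB/s
data_sent..................: 742 B  198 B/s
http_req_blocked...........: avg=1.05s  min=1.05s  med=1.05s  max=1.05s  p(90)=1.05s  p(95)=1.05s
http_req_duration..........: avg=140ms  min=140ms  med=140ms  max=140ms  p(90)=140ms  p(95)=140ms
...
http_reqs..................: 1      0.25/s
iteration_duration.........: avg=1.19s  min=1.19s  med=1.19s  max=1.19s  p(90)=1.19s  p(95)=1.19s
iterations.................: 1      0.25/s
vus........................: 1      min=1  max=1
vus_max....................: 1      min=1  max=1
```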
In that output, all the metrics that start with http, iteration, and vu are built-in metrics, which get written to stdout at the end of a test.
k6 always collects the following built-in metrics:
Metric Name | Type | Description |
---|---|---|
vus | Gauge | Current number of active virtual users |
vus_max | Gauge | Max possible number of virtual users (VU resources are pre-allocated, ensuring performance will not be affected when scaling up the load level) |
iterations | Counter | The aggregate number of times the VUs executed the JS script (the default function). |
iteration_duration | Trend | The time it took to complete one full iteration, including time spent in setup and teardown. To calculate the duration of the iteration's function for the specific scenario, try this workaround |
dropped_iterations | Counter | The number of iterations that weren't started due to lack of VUs (for the arrival-rate executors) or lack of time (expired maxDuration in the iteration-based executors). About dropped iterations |
data_received | Counter | The amount of received data. This example covers how to track data for an individual URL. |
data_sent | Counter | The amount of data sent. This example covers how to track data for an individual URL. |
checks | Rate | The rate of successful checks. |
HTTP-specific built-in metrics
These metrics are generated only when the test makes HTTP requests.
Metric Name | Type | Description |
---|---|---|
http_reqs | Counter | How many total HTTP requests k6 generated. |
http_req_blocked | Trend | Time spent blocked (waiting for a free TCP connection slot) before initiating the request. |
http_req_connecting | Trend | Time spent establishing a TCP connection to the remote host. |
http_req_tls_handshaking | Trend | Time spent handshaking the TLS session with the remote host. |
http_req_sending | Trend | Time spent sending data to the remote host. |
http_req_waiting | Trend | Time spent waiting for a response from the remote host (a.k.a. “time to first byte”, or “TTFB”). |
http_req_receiving | Trend | Time spent receiving response data from the remote host. |
http_req_duration | Trend | Total time for the request. It's equal to http_req_sending + http_req_waiting + http_req_receiving (i.e. how long the remote server took to process the request and respond, without the initial DNS lookup/connection times). |
http_req_failed | Rate | The rate of failed requests according to setResponseCallback. |
Accessing HTTP timings from a script
To access the timing information from an individual HTTP request, the Response.timings object provides the time spent on the various phases in ms:
- blocked: equals http_req_blocked.
- connecting: equals http_req_connecting.
- tls_handshaking: equals http_req_tls_handshaking.
- sending: equals http_req_sending.
- waiting: equals http_req_waiting.
- receiving: equals http_req_receiving.
- duration: equals http_req_duration.
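A short sketch shows how to read one of these fields from a response (the URL is illustrative):

```javascript
import http from 'k6/http';

export default function () {
  const res = http.get('https://test.k6.io');
  // res.timings.duration is the total request time in milliseconds.
  console.log('Response time was ' + String(res.timings.duration) + ' ms');
}
```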
The expected (partial) output looks like this:
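(The log line is illustrative; the reported value depends on the run.)

```
INFO[0001] Response time was 337.96 ms                   source=console
```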
Custom metrics
You can also create custom metrics. They are reported at the end of a load test, just like HTTP timings:
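A sketch of such a script, assuming an illustrative target URL, might look like this:

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

// A custom Trend metric named 'waiting_time'.
const myTrend = new Trend('waiting_time');

export default function () {
  const res = http.get('https://test.k6.io');
  // Record each request's time-to-first-byte into the custom metric.
  myTrend.add(res.timings.waiting);
}
```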
The preceding code creates a Trend metric called waiting_time. In the code, it's referred to with the variable name myTrend.
Custom metrics are reported at the end of a test. Here's how the output might look:
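(Abridged and illustrative; the statistics depend on the run.)

```
waiting_time...............: avg=265.24 min=265.24 med=265.24 max=265.24 p(90)=265.24 p(95)=265.24
```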
You can optionally tag any value for a custom metric. This can be useful when analyzing test results.
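For example, a sketch with a hypothetical tag name my_tag:

```javascript
// The tag name and value here are illustrative, not part of the k6 API.
myTrend.add(res.timings.waiting, { my_tag: 'example' });
```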
Note: Custom metrics are collected from VU threads only at the end of a VU iteration. For long-running scripts, custom metrics might appear only after the test has been running for a while.
Metric types
All metrics (both built-in and custom) have a type. The four different metric types in k6 are:
Metric type | Description |
---|---|
Counter | A metric that cumulatively sums added values. |
Gauge | A metric that stores the min, max and last values added to it. |
Rate | A metric that tracks the percentage of added values that are non-zero. |
Trend | A metric that allows for calculating statistics on the added values (min, max, average and percentiles). |
Counter (cumulative metric)
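A minimal Counter sketch, consistent with the description below, might be:

```javascript
import { Counter } from 'k6/metrics';

const myCounter = new Counter('my_counter');

export default function () {
  // Each iteration adds 1 + 2 = 3 to the counter.
  myCounter.add(1);
  myCounter.add(2);
}
```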
The preceding code generates something like the following output:
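(The per-second rate is illustrative; it depends on the test duration.)

```
my_counter.................: 3   3.14/s
```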
If you run the script for one iteration (without specifying --iterations or --duration), the value of my_counter will be three.
Note that there is currently no way to access the value of any custom metric from within JavaScript. Note also that counters that have a value of zero (0) at the end of a test are a special case. They will NOT print to the stdout summary.
Gauge (keep the latest value only)
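A minimal Gauge sketch, consistent with the description below, might be:

```javascript
import { Gauge } from 'k6/metrics';

const myGauge = new Gauge('my_gauge');

export default function () {
  // The gauge keeps the min (1), max (3), and last value added (2).
  myGauge.add(3);
  myGauge.add(1);
  myGauge.add(2);
}
```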
The preceding code results in output like this:
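(Layout illustrative.)

```
my_gauge...................: 2   min=1  max=3
```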
The value of my_gauge will be 2 at the end of the test. As with the Counter metric, a Gauge with a value of zero (0) will NOT be printed to the stdout summary at the end of the test.
Trend (collect trend statistics, like min/max/avg/percentiles, for a series of values)
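A minimal Trend sketch might be:

```javascript
import { Trend } from 'k6/metrics';

const myTrend = new Trend('my_trend');

export default function () {
  // Add two samples; k6 computes the statistics at the end of the test.
  myTrend.add(1);
  myTrend.add(2);
}
```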
The preceding code outputs something like this:
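(The statistics follow from the two samples above; the layout is illustrative.)

```
my_trend...................: avg=1.5 min=1 med=1.5 max=2 p(90)=1.9 p(95)=1.95
```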
A trend metric holds a set of sample values, which it can output statistics about (min, max, average, median, or percentiles). By default, k6 prints average, min, max, median, 90th percentile, and 95th percentile.
Rate (keeps track of the percentage of values in a series that are non-zero)
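A minimal Rate sketch, consistent with the 50% result described below, might be:

```javascript
import { Rate } from 'k6/metrics';

const myRate = new Rate('my_rate');

export default function () {
  // Two non-zero values out of four, so the rate ends at 50%.
  myRate.add(true);
  myRate.add(false);
  myRate.add(1);
  myRate.add(0);
}
```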
The preceding code outputs something like this:
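(Layout illustrative.)

```
my_rate....................: 50.00% ✓ 2 ✗ 2
```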
The value of my_rate at the end of the test will be 50%, indicating that half of the values added to the metric were non-zero.
Metric graphs in k6 Cloud Results
If you use k6 Cloud Results, you can access all test metrics within the Analysis Tab. You can use this tab to analyze, compare, and look for meaningful correlations in your test result data.