This section covers how k6 manages metrics: which metrics k6 collects automatically (built-in metrics), and which custom metrics you can make k6 collect.
The built-in metrics are the ones you see written to stdout when you run the simplest possible k6 test:
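A minimal script along these lines is enough (the URL is only an example target):

```javascript
import http from 'k6/http';

export default function () {
  // A single HTTP GET request; any reachable URL will do.
  http.get('https://test.k6.io/');
}
```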
Running the above script will output something like below:
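The original listing isn't reproduced here, but the end-of-test summary has roughly the following shape (the actual values, rates and exact set of lines will differ per run and k6 version):

```
data_received..............: ...
data_sent..................: ...
http_req_blocked...........: avg=... min=... med=... max=... p(90)=... p(95)=...
http_req_connecting........: avg=... min=... med=... max=... p(90)=... p(95)=...
http_req_duration..........: avg=... min=... med=... max=... p(90)=... p(95)=...
http_req_receiving.........: avg=... min=... med=... max=... p(90)=... p(95)=...
http_req_sending...........: avg=... min=... med=... max=... p(90)=... p(95)=...
http_req_tls_handshaking...: avg=... min=... med=... max=... p(90)=... p(95)=...
http_req_waiting...........: avg=... min=... med=... max=... p(90)=... p(95)=...
http_reqs..................: 1
iteration_duration.........: avg=... min=... med=... max=... p(90)=... p(95)=...
iterations.................: 1
vus........................: 1
vus_max....................: 1
```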
All the http_req_... lines and the ones after them are built-in metrics that get written to stdout at the end of a test.
The following built-in metrics will always be collected by k6:
| Metric name | Type | Description |
| --- | --- | --- |
| vus | Gauge | Current number of active virtual users. |
| vus_max | Gauge | Max possible number of virtual users (VU resources are pre-allocated, to ensure performance will not be affected when scaling up the load level). |
| iterations | Counter | The aggregate number of times the VUs in the test have executed the JS script (the default function). |
| iteration_duration | Trend | The time it took to complete one full iteration of the default/main function. |
| dropped_iterations | Counter | Introduced in k6 v0.27.0, the number of iterations that could not be started due to lack of VUs (for the arrival-rate executors) or lack of time (due to expired maxDuration in the iteration-based executors). |
| data_received | Counter | The amount of received data. Read this example to track data for an individual URL. |
| data_sent | Counter | The amount of data sent. Read this example to track data for an individual URL. |
| checks | Rate | The rate of successful checks. |
The following built-in metrics will only be generated when/if HTTP requests are made:
| Metric name | Type | Description |
| --- | --- | --- |
| http_reqs | Counter | How many HTTP requests k6 has generated, in total. |
| http_req_blocked | Trend | Time spent blocked (waiting for a free TCP connection slot) before initiating the request. |
| http_req_connecting | Trend | Time spent establishing a TCP connection to the remote host. |
| http_req_tls_handshaking | Trend | Time spent handshaking a TLS session with the remote host. |
| http_req_sending | Trend | Time spent sending data to the remote host. |
| http_req_waiting | Trend | Time spent waiting for a response from the remote host (a.k.a. "time to first byte", or "TTFB"). |
| http_req_receiving | Trend | Time spent receiving response data from the remote host. |
| http_req_duration | Trend | Total time for the request. It's equal to http_req_sending + http_req_waiting + http_req_receiving (i.e. how long the remote server took to process the request and respond, without the initial DNS lookup/connection times). |
| http_req_failed (≥ v0.31) | Rate | The rate of failed requests according to setResponseCallback. |
If you want to access the timing information from an individual HTTP request in k6, the Response.timings object provides the time spent on the various phases in ms:
- blocked: equals http_req_blocked.
- connecting: equals http_req_connecting.
- tls_handshaking: equals http_req_tls_handshaking.
- sending: equals http_req_sending.
- waiting: equals http_req_waiting.
- receiving: equals http_req_receiving.
- duration: equals http_req_duration.
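As a sketch (the URL is only a placeholder), the total duration of a request can be logged like this:

```javascript
import http from 'k6/http';

export default function () {
  const res = http.get('https://test.k6.io/');
  // res.timings.duration corresponds to http_req_duration, in milliseconds.
  console.log('Response time was ' + String(res.timings.duration) + ' ms');
}
```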
Below is the expected (partial) output:
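That output isn't reproduced here; roughly, each iteration of the script above logs a line of this shape (the actual timing value will differ):

```
INFO[0001] Response time was ... ms    source=console
```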
You can also create your own metrics that are reported at the end of a load test, just like HTTP timings:
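A sketch of such a script, matching the metric name and variable described next (the URL is only a placeholder):

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

// Create a custom Trend metric; its name is what appears in the results output.
const myTrend = new Trend('waiting_time');

export default function () {
  const res = http.get('https://test.k6.io/');
  // Feed the time-to-first-byte of every request into the custom metric.
  myTrend.add(res.timings.waiting);
}
```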
The above code creates a Trend metric named “waiting_time”, which is referred to in the code using the variable name myTrend.
Custom metrics will be reported at the end of a test. Here is how the output might look:
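That listing isn't included here; the custom metric shows up in the end-of-test summary alongside the built-in ones, roughly like this (values will differ):

```
waiting_time...............: avg=... min=... med=... max=... p(90)=... p(95)=...
```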
All metrics (both the built-in ones and the custom ones) have a type. The four different metric types in k6 are:
| Metric type | Description |
| --- | --- |
| Counter | A metric that cumulatively sums added values. |
| Gauge | A metric that stores the min, max and last values added to it. |
| Rate | A metric that tracks the percentage of added values that are non-zero. |
| Trend | A metric that allows for calculating statistics on the added values (min, max, average and percentiles). |
All values added to a custom metric can optionally be tagged, which can be useful when analysing the test results.
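The Counter script the next lines refer to isn't shown; a minimal sketch consistent with the described result (my_counter totalling 3 per iteration), with an optional tag on one of the added values:

```javascript
import { Counter } from 'k6/metrics';

const myCounter = new Counter('my_counter');

export default function () {
  myCounter.add(1);
  // Values can optionally be tagged when they are added.
  myCounter.add(2, { tag1: 'value1' });
}
```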
The above code will generate the following output:
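Abbreviated and illustrative (extra columns such as the per-second rate are omitted):

```
my_counter.................: 3
```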
The value of my_counter will be 3 (if you run it for a single iteration, i.e. without specifying --iterations or --duration).
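The Gauge script the next lines refer to isn't shown; a minimal sketch consistent with the described result (a last-added value of 2):

```javascript
import { Gauge } from 'k6/metrics';

const myGauge = new Gauge('my_gauge');

export default function () {
  // A Gauge keeps the min, max and, above all, the last value added to it.
  myGauge.add(3);
  myGauge.add(1);
  myGauge.add(2);
}
```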
The above code will result in an output like this:
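Abbreviated and illustrative:

```
my_gauge...................: 2
```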
The value of my_gauge will be 2 at the end of the test. As with the Counter metric above, a Gauge with value zero (0) will NOT be printed to the stdout summary at the end of the test.
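The Trend script the next lines refer to isn't shown; a minimal sketch with two arbitrary sample values:

```javascript
import { Trend } from 'k6/metrics';

const myTrend = new Trend('my_trend');

export default function () {
  // Every added sample is kept so statistics can be computed over the whole set.
  myTrend.add(1);
  myTrend.add(2);
}
```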
The above code will make k6 print output like this:
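Abbreviated and illustrative for the two samples above (percentile values depend on k6's interpolation):

```
my_trend...................: avg=1.5 min=1 med=1.5 max=2 p(90)=... p(95)=...
```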
A trend metric is a container that holds a set of sample values, and which we can ask to output statistics (min, max, average, median or percentiles) about those samples. By default, k6 will print average, min, max, median, 90th percentile, and 95th percentile.
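The Rate script the next lines refer to isn't shown; a minimal sketch in which half of the added values are non-zero, matching the 50% described below:

```javascript
import { Rate } from 'k6/metrics';

const myRate = new Rate('my_rate');

export default function () {
  // Booleans and numbers both work; zero/false counts against the rate.
  myRate.add(true);
  myRate.add(false);
  myRate.add(1);
  myRate.add(0);
}
```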
The above code will make k6 print output like this:
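Abbreviated and illustrative:

```
my_rate....................: 50.00%
```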
The value of my_rate at the end of the test will be 50%, indicating that half of the values added to the metric were non-zero.
- Custom metrics are only collected from VU threads at the end of a VU iteration, which means that for long-running scripts, you may not see any custom metrics until a while into the test.
If you use k6 Cloud Results, you have access to all test metrics within the Analysis Tab. You can use this tab to further analyze and compare test result data, to look for meaningful correlations in your data.