In this tutorial, learn how to:
- Apply tags to filter specific results
- Understand k6 metrics
- Use jq to filter JSON results
- Define groups to organize the test
- Create custom metrics
Context: k6 result outputs
k6 provides many result outputs. By default, the end-of-test summary provides the aggregated results of the test metrics.
To keep things simple while you learn about k6 metric results, this tutorial uses the JSON output and jq to filter results.
For other options to analyze test results, such as storage and real-time time-series visualization, refer to the k6 results-output documentation.
Write time-series results to a JSON file
To output results to a JSON file, use the --out flag.
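For example, a minimal run command; the script name api-test.js is illustrative:

```shell
# Write granular test results to a JSON file (script name is a placeholder)
k6 run --out json=results.json api-test.js
```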
Then run this jq command to filter the latency results, that is, the http_req_duration metric:
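A sketch of such a filter, assuming the output file is named results.json:

```shell
# Select only the data points emitted for the http_req_duration metric
jq '. | select(.type=="Point" and .metric == "http_req_duration")' results.json
```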
k6 results have a number of built-in tags. For example, you can filter the results to show only the data points where the status is 200.
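A sketch of such a filter, using the built-in status tag (output file name is an assumption):

```shell
# Keep only http_req_duration points whose request returned status 200
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status == "200")' results.json
```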
Or calculate the aggregated value of any metric with any particular tags.
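For example, a sketch that averages http_req_duration over points carrying a given tag; jq's -s flag slurps the selected values into an array so they can be aggregated:

```shell
# Average request duration across points tagged expected_response=true
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.expected_response == "true") | .data.value' results.json \
  | jq -s 'add/length'
```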
Apply custom tags
You can also apply Tags to requests or code blocks. For example, this is how you can add a tag to the request params.
Create a new script named "tagged-login.js", and add a custom tag to it.
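A minimal sketch of such a script; the tag name, endpoint, and credentials below are placeholder values against the test.k6.io demo site:

```javascript
import http from "k6/http";
import { check, sleep } from "k6";

export default function () {
  // Apply a custom tag through the request params (tag name is illustrative)
  const params = {
    tags: { my_custom_tag: "auth_api" },
  };

  const res = http.post(
    "https://test.k6.io/login.php",
    { login: "admin", password: "123" }, // placeholder credentials
    params
  );

  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1);
}
```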
Run the test:
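For example, outputting the granular results to a JSON file again:

```shell
k6 run --out json=results.json tagged-login.js
```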
Filter the results for this custom tag:
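A sketch, assuming the tag added to the request params was named my_custom_tag with the value auth_api:

```shell
# Select the data points that carry the custom tag
jq '. | select(.type=="Point" and .data.tags.my_custom_tag == "auth_api")' results.json
```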
Organize requests in groups
You can also organize your test logic into Groups. Test logic inside a group tags all requests and metrics within its block. Groups can help you organize the test as a series of logical transactions or blocks.
Context: a new test to group test logic
Results filtering isn't very meaningful in a test that makes one request. And the API test script is getting long. To learn more about how to compare results and other k6 APIs, write a test for the following situation:
A dummy example: your development team wants to evaluate the performance of two user-facing flows.
- Visit an endpoint, then another one:
- A GET request to https://test.k6.io/contacts.php
- A GET to https://test.k6.io/
- Play the coinflip game:
- A POST request to https://test.k6.io/flip_coin.php with the query param bet=heads
- Another POST to https://test.k6.io/flip_coin.php with the query param bet=tails
Can you figure out how to script the requests? If not, use the following script.
Since this example simulates a human user rather than an API call, it has a sleep between each request. Run with k6 run multiple-flows.js.
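A sketch of the two flows, with a sleep between each request to simulate a human user; the bet outcomes are passed as query params as described above:

```javascript
import http from "k6/http";
import { sleep } from "k6";

export default function () {
  // Contacts flow: visit one endpoint, then another
  http.get("https://test.k6.io/contacts.php");
  sleep(1);
  http.get("https://test.k6.io/");
  sleep(1);

  // Coinflip game: bet on heads, then on tails
  http.post("https://test.k6.io/flip_coin.php?bet=heads");
  sleep(1);
  http.post("https://test.k6.io/flip_coin.php?bet=tails");
  sleep(1);
}
```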
Add Group functions
Wrap the two flows in different groups. Name one group Contacts flow and the other Coinflip game.
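A sketch of the grouped version, using the group function from the k6 module:

```javascript
import http from "k6/http";
import { group, sleep } from "k6";

export default function () {
  // All requests and metrics inside a group block get tagged with the group name
  group("Contacts flow", function () {
    http.get("https://test.k6.io/contacts.php");
    sleep(1);
    http.get("https://test.k6.io/");
    sleep(1);
  });

  group("Coinflip game", function () {
    http.post("https://test.k6.io/flip_coin.php?bet=heads");
    sleep(1);
    http.post("https://test.k6.io/flip_coin.php?bet=tails");
    sleep(1);
  });
}
```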
Run and filter
Inspect the results for only the Coinflip game group. To do so:
Save the preceding script as multiple-flows.js.
Run the script with the command:
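For example, the iteration count here is an arbitrary small value to keep the output manageable:

```shell
k6 run multiple-flows.js --out json=results.json --iterations 10
```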
Inspect the results with jq. Group names have a :: prefix.
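A sketch of filtering by the built-in group tag; note the :: prefix on the group name:

```shell
# Select only the data points emitted inside the "Coinflip game" group
jq '. | select(.data.tags.group == "::Coinflip game")' results.json
```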
Add a custom metric
As you have seen in the output, all k6 tests emit metrics. However, if the built-in metrics aren't enough, you can create custom metrics. A common use case is to collect metrics of a particular scope of your test.
As an example, create a metric that collects latency results for each group:
- Import Trend from the k6 metrics module.
- Create two duration trend metric functions.
- In each group, add the request duration to the corresponding trend for the contacts and coin_flip endpoints.
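The steps above can be sketched as follows; the custom metric names contacts_duration and coinflip_duration are illustrative choices:

```javascript
import http from "k6/http";
import { group, sleep } from "k6";
import { Trend } from "k6/metrics";

// One custom trend metric per group (names are illustrative)
const contactsLatency = new Trend("contacts_duration");
const coinflipLatency = new Trend("coinflip_duration");

export default function () {
  group("Contacts flow", function () {
    let res = http.get("https://test.k6.io/contacts.php");
    contactsLatency.add(res.timings.duration);
    sleep(1);
    res = http.get("https://test.k6.io/");
    contactsLatency.add(res.timings.duration);
    sleep(1);
  });

  group("Coinflip game", function () {
    let res = http.post("https://test.k6.io/flip_coin.php?bet=heads");
    coinflipLatency.add(res.timings.duration);
    sleep(1);
    res = http.post("https://test.k6.io/flip_coin.php?bet=tails");
    coinflipLatency.add(res.timings.duration);
    sleep(1);
  });
}
```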
Run the test with a small number of iterations and output the results to results.json.
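For example:

```shell
k6 run multiple-flows.js --out json=results.json --iterations 10
```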
Look for the custom trend metrics in the end-of-test console summary.
You can also query custom metric results from the JSON results, for example, to calculate aggregated stats of a custom trend metric.
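A sketch that averages the values of a custom trend, assuming a trend named coinflip_duration was defined in the script:

```shell
# Average of the custom coinflip_duration trend metric (metric name is illustrative)
jq '. | select(.type=="Point" and .metric == "coinflip_duration") | .data.value' results.json \
  | jq -s 'add/length'
```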
Next steps
In this tutorial, you looked at granular output and filtered by built-in and custom tags. Then you made a new script with groups. Finally, you added a new metric for each group. A next step would be to create a Custom end-of-test summary or to stream the results to a database.
For ongoing operations, you can modularize your logic and configuration. That's the subject of the next step of this tutorial.