In the previous section, you made a working script that tests an endpoint's functionality. The next step is to test how the system responds under load. This requires configuring a few options that control the parts of the test that don't deal with test logic.
In this tutorial, learn how to:

- Use thresholds to codify SLOs
- Run a smoke test with minimal load
- Increase load in stages with scenarios
- Run a breakpoint test to probe the system's limits
These examples build on the script from the previous section.
To assess the login endpoint's performance, your team may have defined service level objectives (SLOs). For example:
- 99% of requests should be successful
- 99% of requests should have a latency of 1000ms or less
The service must meet these SLOs under different patterns of typical traffic.
To codify the SLOs, add thresholds to test that your system performs to its goal criteria.
Thresholds are set in the options object.
Add an options object with thresholds to your api-test.js script.
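For example, thresholds codifying the two SLOs above can use k6's built-in http_req_failed and http_req_duration metrics:

```javascript
export const options = {
  // Thresholds codify the SLOs defined above.
  thresholds: {
    // 99% of requests should be successful (less than 1% failures).
    http_req_failed: ['rate<0.01'],
    // 99% of requests should complete within 1000ms.
    http_req_duration: ['p(99)<1000'],
  },
};
```

With this in place, k6 evaluates both thresholds at the end of the run and reports whether each passed.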
Run the test.
Inspect the console output to determine whether performance crossed a threshold.
The ✓ and ✗ symbols indicate whether the performance thresholds passed or failed.
Now your script has logic to simulate user behavior, and assertions for functionality (checks) and performance (thresholds).
It's time to increase the load and see how the system performs. To increase the load, use the scenarios property. Scenarios schedule load by number of VUs, number of iterations, duration, or iteration rate.
Start small. Run a smoke test to check that your script can handle a minimal load.
To do so, use the --iterations flag with an argument of 10 or fewer.
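For example, assuming the script is named api-test.js:

```shell
k6 run --iterations 10 api-test.js
```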
If the service can't handle 10 iterations, it has some serious performance issues to debug. Good thing you ran the test early!
Generally, traffic doesn't arrive all at once. Rather, it gradually increases to a peak load. To simulate this, testers increase the load in stages.
Add the following scenario property to your options object and rerun the test.
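A sketch of such a configuration follows. The scenario name and the stage durations and targets here are illustrative; adjust them to your environment:

```javascript
export const options = {
  thresholds: {
    http_req_failed: ['rate<0.01'],
    http_req_duration: ['p(99)<1000'],
  },
  scenarios: {
    // Ramp VUs up to a peak, hold, then ramp back down.
    ramping: {
      executor: 'ramping-vus',
      stages: [
        { duration: '30s', target: 20 }, // ramp up to 20 VUs
        { duration: '50s', target: 20 }, // hold at peak
        { duration: '30s', target: 0 },  // ramp down
      ],
    },
  },
};
```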
Since this is a learning environment, the stages are still quite short. Where the smoke test defined the load in terms of iterations, this configuration uses the ramping-vus executor to express load through virtual users and duration.
Run the test with no command-line flags:
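Assuming the script file is still named api-test.js:

```shell
k6 run api-test.js
```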
The load is small, so the server should perform within thresholds. However, this test server may be under load by many k6 learners, so the results are unpredictable.
At this point, it'd be nice to have a graphical interface to visualize metrics as they occur. k6 has many output formats, which can serve as inputs for many visualization tools, both open source and commercial. For ideas, read Ways to visualize k6 results.
Finally, run a breakpoint test, where you probe the system's limits. In this case, run the test until the availability (error rate) threshold is crossed.
To do this:

1. Add the abortOnFail property to the http_req_failed threshold.
2. Update the scenarios property to ramp the load up until the test fails.
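For the first step, a threshold can be written in long-form object syntax, which supports abortOnFail:

```javascript
export const options = {
  thresholds: {
    // Abort the whole test as soon as the error-rate SLO is crossed.
    http_req_failed: [{ threshold: 'rate<0.01', abortOnFail: true }],
    http_req_duration: ['p(99)<1000'],
  },
  // ...scenarios stay as configured before
};
```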
Here is the full script.
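Since the script from the previous section isn't reproduced here, the following is a sketch: the BASE_URL, the login path, the credentials, and the check are hypothetical stand-ins for whatever your earlier script does. Keep your own default function and only merge in the options.

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  thresholds: {
    // Abort the test run as soon as the error-rate SLO is crossed.
    http_req_failed: [{ threshold: 'rate<0.01', abortOnFail: true }],
    http_req_duration: ['p(99)<1000'],
  },
  scenarios: {
    // Ramp VUs up in stages until the threshold aborts the test.
    breaking: {
      executor: 'ramping-vus',
      stages: [
        { duration: '10s', target: 20 },
        { duration: '50s', target: 40 },
        { duration: '50s', target: 60 },
        { duration: '50s', target: 80 },
      ],
    },
  },
};

// Hypothetical endpoint; replace with the URL your script tests.
const BASE_URL = 'https://example.com';

export default function () {
  // Hypothetical login request standing in for the previous section's logic.
  const res = http.post(`${BASE_URL}/login`, {
    username: 'test_user',
    password: 'supersecret',
  });

  check(res, {
    'status is 200': (r) => r.status === 200,
  });

  sleep(1);
}
```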
Run the test.
Did the threshold fail? If not, add another stage with a higher target and try again. Repeat until the threshold aborts the test.
The next step of this tutorial shows how to interpret test results. This involves filtering results and adding custom metrics.