📖 What you will learn
- How to generate a constant request rate in k6
- How to use the `scenarios` API to configure executors
Overview
This v0.27.0 release includes a new execution engine and lots of new executors that cater to your specific needs. It also includes the new scenarios API with lots of different options to configure and model the load on the system under test (SUT). This is the result of 1.5 years of work on the infamous #1007 PR 😂.
For generating constant request rates, we can use the constant-arrival-rate executor. This executor starts iterations at a fixed rate for a specified duration, and k6 dynamically changes the number of active VUs during the test run to achieve the specified number of iterations per time unit. In this article, I am going to explain how to use this executor to generate constant request rates.
Configuring a scenario with the constant-arrival-rate executor
Let's look at different terms used in k6 to describe a test configuration in a scenario that uses a constant-arrival-rate executor:
executor
Executors are the workhorses of the k6 execution engine. Each one schedules VUs and iterations differently, and you'll choose one depending on the type of traffic you want to model to test your services.
rate and timeUnit
k6 tries to start rate iterations every timeUnit period. For example:
- rate: 1, timeUnit: '1s' means "try to start 1 iteration every second"
- rate: 1, timeUnit: '1m' means "try to start 1 iteration every minute"
- rate: 90, timeUnit: '1m' means "try to start 90 iterations per minute", i.e. 1.5 iterations/s or try to start a new iteration every 667ms
- rate: 50, timeUnit: '1s' means "try to start 50 iterations every second", i.e. 50 RPS if we have 1 request in our iteration, i.e. try to start a new iteration every 20ms
duration
The total duration of the scenario, excluding gracefulStop.
preAllocatedVUs
The number of VUs to pre-allocate before the test starts.
maxVUs
The maximum number of VUs to allow during the test run.
Together, these terms form a scenario, which is part of the test configuration options. The code snippet below is an example of a constant-arrival-rate scenario.
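The values in this sketch match the configuration discussed below; the URL in the default function is only an illustrative placeholder, so point it at your own SUT.

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 1,             // 1 iteration ...
      timeUnit: '1s',      // ... every second
      duration: '1m',      // total scenario duration, excluding gracefulStop
      preAllocatedVUs: 20, // size of the VU pool initialized before the test starts
      maxVUs: 100,         // k6 may scale up to this many VUs if needed
    },
  },
};

export default function () {
  http.get('https://test.k6.io/'); // illustrative endpoint; replace with your SUT
}
```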
In this configuration, we have a constant_request_rate scenario, which is a unique identifier used as a label for the scenario. This scenario uses the constant-arrival-rate executor and executes for 1 minute. Each second (timeUnit), 1 iteration will be made (rate). The pool of pre-allocated virtual users contains 20 instances and may go up to 100, depending on the number of requests and iterations.
Keep in mind that initializing VUs mid-test could be a CPU-heavy task and might skew your test results. In general, it's better for the preAllocatedVUs to be enough to run the load test. So, make sure to allocate more VUs depending on the number of requests you have in your test and the rate at which you want to run the test.
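A back-of-the-envelope way to pick a starting value (this is my own heuristic, not an official k6 formula) is to multiply the arrival rate by the expected average iteration duration:

```javascript
// Rough sizing sketch: VUs busy at any instant ≈ arrival rate * average iteration duration.
const rate = 1000;               // iterations per timeUnit
const timeUnitSeconds = 1;       // timeUnit: '1s'
const avgIterationSeconds = 0.1; // assumed average duration of one iteration
const neededVUs = Math.ceil((rate / timeUnitSeconds) * avgIterationSeconds);
console.log(neededVUs); // 100 -> aim for at least this many preAllocatedVUs, plus some headroom via maxVUs
```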
Generating a constant request rate with constant-arrival-rate
In the previous tutorial, we demonstrated how to calculate a constant request rate. Let's run through it again, taking into account how scenarios work:
Suppose that you expect your SUT to handle 1000 requests per second on an endpoint. Pre-allocating 100 VUs (with a maximum of 200) means each VU has to send roughly 5 to 10 requests per second (10 with 100 VUs, 5 with 200). If each request takes more than 1 second to complete, you'll end up making fewer requests than expected (and more dropped_iterations), which is a sign of either performance issues with your SUT or unrealistic expectations. In that case, fix the performance issues and test again, or lower your expectations by adjusting the timeUnit.
In this scenario, each pre-allocated VU will make 10 requests per second (rate divided by preAllocatedVUs). If the requests don't complete within 1 second, e.g. because the response takes longer than 1 second to arrive or your SUT needs more than 1 second to finish the task, k6 will increase the number of VUs to make up for the missing requests. The following test generates 1000 requests per second and runs for 30 seconds, which produces roughly 30,000 requests, as the http_reqs and iterations metrics in the output show. Also, k6 used only 148 VUs from the pool of 200.
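Here is a sketch of such a test, using the numbers above (the URL is again just an illustrative placeholder):

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 1000,           // 1000 iterations ...
      timeUnit: '1s',       // ... per second, i.e. 1000 RPS with one request per iteration
      duration: '30s',      // roughly 30,000 iterations in total
      preAllocatedVUs: 100, // pool of VUs initialized up front
      maxVUs: 200,          // k6 can scale up to 200 VUs if the pre-allocated ones aren't enough
    },
  },
};

export default function () {
  http.get('https://test.k6.io/'); // illustrative endpoint; replace with your SUT
}
```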
The result of executing this script is as follows:
Considerations
There are some things to consider while writing your test script:
Since k6 follows redirects, the number of redirects adds to the total number of RPS in the result output. If you don't want that, you can disable redirects globally by setting maxRedirects: 0 in your options. You can also configure the maximum number of redirects on the HTTP request itself, which overrides the global maxRedirects.
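For example, disabling redirects globally while overriding the limit for a single request could look like this sketch (the URL and the per-request value of 5 are just placeholders):

```javascript
import http from 'k6/http';

export const options = {
  maxRedirects: 0, // don't follow redirects anywhere in the test...
};

export default function () {
  // ...except for this request, where up to 5 redirects are followed.
  http.get('https://test.k6.io/', { redirects: 5 });
}
```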
Complexity adds to the time each iteration takes, so keep the function being executed simple, preferably making only a few requests and avoiding additional processing or sleep() calls where possible.
You need a fair number of VUs to achieve the desired request rate; otherwise, you'll encounter warnings about insufficient VUs. In that case, increase the preAllocatedVUs and/or maxVUs, but keep in mind that at some point you will reach the capacity of the machine running the test, where neither preAllocatedVUs nor maxVUs will make any difference.
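The exact wording and numbers depend on your k6 version and configuration, but the warning is along these lines:

```
WARN[0005] Insufficient VUs, reached 200 active VUs and cannot initialize more
```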
When that happens, you'll see dropped_iterations in the results, and the number of iterations and http_reqs will be lower than the specified rate implies. Dropped iterations mean that there weren't enough initialized VUs to execute some of the iterations. This can generally be solved by increasing preAllocatedVUs; the precise value requires a bit of trial and error, since it depends on factors such as the endpoint response time, network throughput, and other related latencies.
While testing, you may also encounter warning messages signifying that you have reached your operating system's limits (such as the maximum number of open files), so consider fine-tuning your operating system.
Be aware that the scenarios API deprecates the global duration, vus and stages options. They can still be used on their own, but you can't combine them with scenarios.
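In other words, a configuration like the following sketch mixes the two approaches and isn't allowed (per the note above):

```javascript
export const options = {
  // Old-style execution shortcuts...
  vus: 10,
  duration: '1m',
  // ...cannot be combined with the scenarios API in the same test.
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 1,
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 20,
    },
  },
};
```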
Conclusion
Before the release of k6 v0.27.0, there was no built-in support for generating constant request rates. Therefore, we described a JavaScript workaround based on calculating how much time the requests take up in each iteration of the script. With v0.27.0, this is no longer needed.
In this article, I've explained how k6 can achieve a constant request rate by using the new scenarios API with the constant-arrival-rate executor. This executor simplifies the code and provides the means to achieve a fixed RPS. This is in contrast with the previous version of this article, in which I described another method that achieves pretty much the same result by calculating the number of VUs, iterations and duration with a formula and some boilerplate JavaScript code. Fortunately, this new approach works as intended, and we don't need any hacks anymore.
I hope you enjoyed reading this article and I'd be happy to hear your feedback.