Previously, I covered an article on Load Testing SQL Databases with k6. For your information, from k6 version 0.29.0 onwards, you can write a k6 extension in Go and build your own k6 binaries. This comes in handy, as you can use a single framework for load testing different protocols, such as ZeroMQ, SQL, Avro, MLLP, etc.
In this series on k6 extensions, let's now benchmark Redis. According to redis.io, Redis is an in-memory data structure store that can be used as a database, cache, and message broker.
You might want to evaluate the performance or scalability of a Redis instance on given hardware, which gives you better insight into the throughput the Redis service can handle.
This tutorial covers Redis performance testing via two different approaches on a Linux machine:
- redis-benchmark
- xk6-redis
redis-benchmark
By default, Redis comes with its own benchmark utility called redis-benchmark. It is similar to Apache's ab utility and can simulate a number of clients sending a total number of queries simultaneously.
Options
Make sure that you have Redis installed in your system. If you have not done so, kindly head over to the official Redis download page and install it based on the instructions given.
Once you are done with it, you should be able to run the following command:
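```bash
# Print the usage information and the list of available options
redis-benchmark --help
```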
You should see the following output:
Examples
Depending on your needs, a typical example is to just run the benchmark with the default configuration:
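```bash
# Runs with the defaults: 127.0.0.1:6379, 100,000 requests, 50 parallel clients
redis-benchmark
```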
It is a good idea to use the -q option. Here is an example for running 100k requests in quiet mode:
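```bash
# -q (quiet) shows only the requests-per-second summary; -n sets the total number of requests
redis-benchmark -q -n 100000
```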
In addition, you can run parallel clients via the -c option. The following example uses 20 parallel clients for a total of 100k requests:
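```bash
# -c sets the number of parallel client connections
redis-benchmark -c 20 -n 100000 -q
```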
You can restrict the test to run only a subset of the commands. For example, you can use the following command to test only set and get commands:
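```bash
# -t restricts the benchmark to a comma-separated subset of commands
redis-benchmark -t set,get -n 100000 -q
```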
In fact, you can benchmark any specific command of your choosing, as in the following example:
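```bash
# Benchmark an arbitrary command by appending it after the options
# (this example repeatedly loads a small Lua script; the key and value are placeholders)
redis-benchmark -n 100000 -q script load "redis.call('set','foo','bar')"
```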
If your Redis server is running on a different hostname and port, you can benchmark the server as follows:
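```bash
# -h and -p point the benchmark at a remote server (replace with your own host and port)
redis-benchmark -h 192.168.1.10 -p 6379 -n 100000 -q
```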
You should get the following output indicating the requests per second for each of the tests conducted:
Latency
Sometimes, you might prefer to analyze the latency instead. There are two types of latency measurement provided by redis-cli:
- latency
- intrinsic latency
In this case, we measure latency as the time between sending a request to Redis and receiving a response. On the other hand, intrinsic latency refers to the system latency that is highly dependent on external factors such as operating system kernel or virtualization. Since Redis 2.8.7, you can measure the intrinsic latency independently.
Please note that the intrinsic latency test can only be run on the machine that hosts the Redis server, unlike redis-benchmark, which can be run from a client machine. Besides that, this mode does not connect to a Redis server at all; the measurement is based on the largest amount of time during which the kernel does not provide CPU time to the redis-cli process itself. As a result, it is not an actual measurement of the latency between a client and the Redis server.
Having said that, it does provide a quick analysis if there is something wrong with the machine that hosts the Redis server.
Run the following command to get the overall latency of your Redis server:
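```bash
# Continuously samples the round-trip latency to the local Redis server
redis-cli --latency
```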
You should see the number of samples increase over time, along with the minimum, maximum, and average latency:
Use Ctrl+C to stop it, as the process runs indefinitely otherwise.
For intrinsic latency, you should use the following command instead:
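```bash
# Measure the intrinsic latency of the host for 10 seconds
# (run this on the machine that hosts the Redis server)
redis-cli --intrinsic-latency 10
```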
You can pass an integer representing the duration of the test in seconds. In this case, the test will run for 10 seconds. The output is as follows:
The average latency is about 0.22 milliseconds while the intrinsic latency is 0.063 microseconds.
Let’s proceed to the next section and start exploring another testing approach using k6.
xk6-redis
k6 provides the capability to do performance testing with a scripting language. This is a big plus for developers and QA testers, as you get better control over the entire workflow of the test. For example, you can ramp the requests up or down at specific intervals of the test, which is not achievable with redis-benchmark.
Fortunately, k6 provides the xk6-redis extension as part of their ecosystem. You can use it directly to build your own custom k6 binaries for testing Redis server.
This extension comes with the following API:
Method | Usage |
---|---|
Client(options) | The Client constructor. Returns a new Redis client object. |
client.set(key, value, expiration) | Sets the given key to the given value with the given expiration time. |
client.get(key) | Returns the value for the given key. |
Building k6 with the redis extension
Before that, make sure you have the following installed on your machine:
- Go
- Git
Once you have completed the installation, run the following command to install the xk6 module:
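```bash
# Install the xk6 builder as a Go module
# (module path as documented in the xk6 README)
go install go.k6.io/xk6/cmd/xk6@latest
```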
With xk6 installed as a Go module, you can make your own k6 build with Redis support by running:
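```bash
# Bundle the xk6-redis extension into a custom k6 binary
# (the module path below is the currently documented one; older releases lived under github.com/k6io/xk6-redis)
xk6 build --with github.com/grafana/xk6-redis
```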
You should get a k6 executable in your current working directory.
Alternatively, you can download the pre-compiled binaries from the following GitHub repository. The latest version at the time of this writing is v0.4.1. If you have trouble identifying the architecture of your Linux machine, simply run the following command:
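```bash
# Prints the machine hardware name
uname -m
```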
Let’s say that the command returns the following:
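```
x86_64
```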
You should download the xk6_0.4.1_linux_amd64.tar.gz asset and extract it as follows:
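```bash
# Extract the downloaded archive into the current directory
tar -xzf xk6_0.4.1_linux_amd64.tar.gz
```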
You should get the following files in your working directory:
- README.md
- LICENSE
- xk6
Then, run the following command to build k6 for Redis:
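```bash
# Use the extracted xk6 binary to build k6 with the Redis extension
./xk6 build --with github.com/grafana/xk6-redis
```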
You should now have a new k6 binary in your working directory.
k6 Script
Next, let’s create a new JavaScript file called test_script.js in the same directory as your k6 executable. Append the following import statement at the top of the file:
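```javascript
// Import the Redis client exposed by the xk6-redis extension
import redis from 'k6/x/redis';
```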
Continue by adding the following code, which connects to your Redis server:
It accepts an object with the following fields:
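```javascript
// Adjust addr, password, and db to match your own Redis server
const client = new redis.Client({
  addr: 'localhost:6379',
  password: '',
  db: 0,
});
```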
- addr: hostname and port of your Redis server denoted as hostname:port.
- password: password of your Redis server.
- db: the db number ranging from 0 to 15.
To keep it simple and short, the test case is going to be as follows:
- Set a new key:value pair at the start of the test.
- Run parallel VUs that get the same key repeatedly.
The k6 setup function runs only once at the start of the test, independently of the test load and duration. Let's set the key:value pair as follows:
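```javascript
// setup() runs once before the load starts; seed the key that the VUs will read
// (the key, value, and expiration below are placeholders)
export function setup() {
  client.set('mykey', 'myvalue', 0);
}
```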
The set function accepts three input parameters:
- key
- value
- expiration time
Then, define the default function which will be called repeatedly by each VU during the entire test:
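```javascript
// Each VU runs this function repeatedly for the whole test duration
export default function () {
  client.get('mykey');
}
```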
The complete code is as follows:
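```javascript
import redis from 'k6/x/redis';

// Adjust addr, password, and db to match your own Redis server
const client = new redis.Client({
  addr: 'localhost:6379',
  password: '',
  db: 0,
});

// Runs once before the load starts
export function setup() {
  client.set('mykey', 'myvalue', 0);
}

// Called repeatedly by each VU during the test
export default function () {
  client.get('mykey');
}
```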
Running the test
Save the test script and run the following command to test your Redis server for 5 seconds:
By default, it uses one virtual user (VU), but you can modify that with the --vus flag. You should see the following output:
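```bash
# Use the newly built k6 binary in the current working directory
./k6 run test_script.js --duration 5s
```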
This test reports that the Redis server handles 8401 iterations per second. Because each iteration refers to one execution of the default function and there is one request call in our default function, the server is handling 8401 GET requests per second in this test.
Scale the load
Let’s increase the load gradually until it encounters an error. For a start, set the VUs to 100 as follows:
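```bash
./k6 run test_script.js --duration 5s --vus 100
```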
The output is as follows:
It indicates that your Redis server can sustain about 22304 iterations per second with 100 concurrent users.
Continue the test and set the VUs to 1000 this time:
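```bash
./k6 run test_script.js --duration 5s --vus 1000
```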
Depending on the configuration of your Redis, you might encounter the following error:
It indicates that you have reached the maximum number of clients allowed. You can check the number of active connections by running the following command inside redis-cli:
It returns output like the following:
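```
info clients
```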
To get the max limit, use the following instead:
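```
config get maxclients
```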
The output is as follows:
Latency
Now, let’s have a look at how to get the latency via k6. At the time of this writing, the xk6-redis extension does not report latency as part of its metrics. However, you can easily extend the code in your script and implement your own custom metrics.
Have a look at the following workaround to measure latency. First, let’s add the following import statement at the top of your k6 script:
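```javascript
// Trend is k6's custom metric type for tracking a series of values
import { Trend } from 'k6/metrics';
```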
Then, initialize a Trend instance as follows:
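```javascript
// Custom metric for the Redis GET latency; the second argument marks the values as time values
const RedisLatencyMetric = new Trend('redis_latency', true);
```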
It accepts two input arguments:
- name: the name of the custom metric.
- isTime: a boolean indicating whether the values added to the metric are time values or just untyped values.
Add the final touch by modifying the default function as follows:
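```javascript
export default function () {
  const start = Date.now();
  client.get('mykey');
  // Record the elapsed time (in milliseconds) into the custom metric
  RedisLatencyMetric.add(Date.now() - start);
}
```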
Have a look at the following complete code, which initializes the options directly inside the script:
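```javascript
import { Trend } from 'k6/metrics';
import redis from 'k6/x/redis';

// Test options declared inside the script instead of via command-line flags
// (the VU count and duration below are just examples)
export const options = {
  vus: 100,
  duration: '5s',
};

const RedisLatencyMetric = new Trend('redis_latency', true);

// Adjust addr, password, and db to match your own Redis server
const client = new redis.Client({
  addr: 'localhost:6379',
  password: '',
  db: 0,
});

export function setup() {
  client.set('mykey', 'myvalue', 0);
}

export default function () {
  const start = Date.now();
  client.get('mykey');
  RedisLatencyMetric.add(Date.now() - start);
}
```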
You should be able to see redis_latency metrics once the test has completed.
⚠️ Please note that this workaround of measuring the latency is only indicative, as the JavaScript implementation adds an overhead that might skew the reported latency, especially when the latency is in the sub-microsecond range.
It would be great if the xk6-redis extension provided its own built-in Redis latency metrics similar to the HTTP request metrics. Measuring Redis latency in Go directly would be much more accurate and avoid the unnecessary RedisLatencyMetric script code.
Conclusion
All in all, redis-benchmark is a good tool that provides you with a quick glimpse of the performance of your Redis server. On the other hand, k6 is scriptable in JavaScript and can provide you with better control over the execution and workflow of your test. A scripting language is more flexible for testing various ways to connect and query your Redis server.
In fact, you can utilize both of the tools to get the best out of them. For example, you can run redis-benchmark when you install it on your machine for the first time, to get a rough idea of the performance. Subsequently, use k6 for more advanced cases like integrating your test with your existing toolbox or automating your testing.