It has already been established that k6 can run large load tests from a single instance, but what about multiple instances running a single test?
Several reasons why you may wish to run a distributed test include:
- Your system under test (SUT) should be accessed from multiple IP addresses.
- A fully optimized node cannot produce the load required by your extremely large test.
- Kubernetes is already your preferred operations environment.
For scenarios such as these, we've created the k6-operator.
k6-operator is an implementation of the operator pattern in Kubernetes, defining its own custom resources. The intent of an operator is to automate tasks that a human operator would normally perform: provisioning new application components, changing configurations, or resolving run-time issues.
The k6-operator defines the custom K6 resource type and listens for changes to, or creation of, K6 objects. Each K6 object references a k6 test script, configures the environment, and specifies the number of instances, as parallelism, for a test run. Once a change is detected, the operator will react by modifying the cluster state, spinning up k6 test jobs as needed.
Let's walk through the process of getting started with the k6-operator. The only requirements are access to a Kubernetes cluster and the appropriate permissions and tooling.
The first step to running distributed tests in Kubernetes is to install the operator, if it is not already installed in the cluster. At this time, installation requires downloading the project source code onto your system; installation commands must be run from the source directory.
Besides privileged access to a Kubernetes cluster, installation will require that the system performing the installation has the following tools installed:
From your command-line, execute the following:
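As a sketch, the source can be cloned from the project's GitHub repository; adjust the destination to your preferred workspace:

```shell
# Download the k6-operator source and change into the project directory.
git clone https://github.com/grafana/k6-operator.git
cd k6-operator
```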
Ensure that your kubectl tool is set for the appropriate Kubernetes cluster. Then, from the k6-operator directory, you may now perform the installation:
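At the time of writing, the project Makefile drives the installation; assuming make and kubectl are available, a typical invocation is:

```shell
# Deploys the operator bundle (CRDs, RBAC, and the controller) into the
# cluster referenced by your current kubectl context.
make deploy
```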
By default, the operator will be installed into a new namespace, k6-operator-system. You can verify the successful installation by listing available resources within the namespace:
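For example, listing the pods in the namespace:

```shell
# Show the operator's controller pod in the installation namespace.
kubectl get pod -n k6-operator-system
```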
After a few moments, your resulting status should become Running as shown below:
```
NAME                                              READY   STATUS    RESTARTS   AGE
k6-operator-controller-manager-7664957cf7-llw54   2/2     Running   0          160m
```
You are now ready to start creating and executing test scripts!
Creating k6 test scripts for Kubernetes is no different from creating the script for the command-line. If you haven’t already created test cases for your system, then we suggest having a read through one of our guides for creating tests for websites and APIs/microservices:
In general, it is advised to start small and expand on your scripts over iterations. So let's start simple and create a test.js with the following content:
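As a starting point, here is a minimal sketch; the target URL (the public test.k6.io demo site) is a stand-in for your own SUT:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  // Replace with an endpoint of your system under test.
  const res = http.get('https://test.k6.io/');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```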
While creating scripts, run them locally before publishing to your cluster. This can give you immediate feedback if you have errors in your script.
Let's go ahead and verify our script is valid by performing a brief test:
We should see a successful execution and resulting output summary.
Using a ConfigMap is a quick and straightforward mechanism for adding your test scripts to Kubernetes. The kubectl tool provides a convenient method to create a new ConfigMap from a local script.
Let's create our ConfigMap as my-test with the content of our test.js script we created in the previous step:
Limitations exist on how large your test script can be when deployed within a ConfigMap. Kubernetes imposes a size limit of 1,048,576 bytes (1 MiB) on ConfigMap data, so if your test scripts exceed this limit, you'll need to mount a PersistentVolume instead.
Check the motivations for when you should use a ConfigMap versus a PersistentVolume.
You should see confirmation with configmap/my-test created.
Setting up a PersistentVolume is beyond the scope of this guide, but it enables access to a shared filesystem from your Kubernetes cluster via a PersistentVolumeClaim.
When using this option, organize your test scripts in the applicable filesystem just as you would locally. This mechanism is ideal when breaking up monolithic scripts into reusable modules.
As seen on k6 Office Hours
Organizing your test scripts was part of the discussion during episode #76 of k6 Office Hours.
When using a PersistentVolume, the operator will expect all test scripts to be contained within a directory named /test/.
To learn more about creating PersistentVolume and PersistentVolumeClaim resources, review the Kubernetes documentation.
During installation, the K6 Custom Resource definition was added to the Kubernetes API. The data we provide in the custom resource K6 object should contain all the information necessary for the k6-operator to start a distributed load test.
Specifically, the main elements defined within the K6 object relate to the name and location of the test script to run, and the amount of parallelism to utilize.
The K6 custom resource provides many configuration options to control the initialization and execution of tests. For the full listing of possible options, please refer to the project source and README.
The following examples will show some common variations for the custom resource:
When the test script to be executed is contained within a ConfigMap resource, we specify the script details within the configMap block of YAML. The name is the name of the ConfigMap, and the file is the key of the entry containing the script.
Let's create the file run-k6-from-configmap.yaml with the following content:
Recall our configuration values from when the script was added as a ConfigMap: we created the ConfigMap named my-test, and the test script content was added to the map using the filename as the key, therefore the file value is test.js.
The amount of parallelism is up to you; how many pods do you want to split the test amongst? The operator will split the workload between the pods using execution segments.
It is important that the ConfigMap and CustomResource are created in the same Namespace.
If the test script to be executed is contained within a PersistentVolume, creation of a PersistentVolumeClaim will be required. We won't go into the details of PersistentVolumes and PersistentVolumeClaims, but to learn more, you should review the Kubernetes documentation.
Assume we've created a PersistentVolumeClaim named my-volume-claim against a PersistentVolume containing the test script /test/test.js, we can create the file run-k6-from-volume.yaml with the following content:
It is important that the PersistentVolumeClaim and CustomResource are created in the same Namespace.
Not everything should be included directly in your scripts. Well-written scripts allow for variability to support multiple scenarios and to avoid hard-coding values that tend to change. These could be anything from passwords to target URLs, in addition to system options.
We can pass this data as environment variables for use with each pod executing your script. This can be defined explicitly within the K6 resource, or by referencing a ConfigMap or Secret.
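As a sketch, environment variables can be declared in the runner section of the K6 resource; the variable, ConfigMap, and Secret names below are illustrative:

```yaml
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: run-k6-with-vars
spec:
  parallelism: 4
  script:
    configMap:
      name: my-test
      file: test.js
  runner:
    # Variables defined explicitly for each runner pod.
    env:
      - name: MY_CUSTOM_VARIABLE
        value: 'this is my variable value'
    # Variables sourced from a ConfigMap or Secret (hypothetical names).
    envFrom:
      - configMapRef:
          name: my-config-vars
      - secretRef:
          name: my-secret-vars
```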
The above YAML introduces the runner section. This section applies to each pod that will be running a portion of your test, based upon the desired parallelism.
Now, with the referenced resources, our test scripts can use environment variables as in the following:
k6 options can be specified in many ways, one being the command-line. Specifying options via command-line can still be accomplished when using the operator as shown with the following example:
Be sure to visit the options reference for a listing of available options.
Tests are executed by applying the custom resource K6 to a cluster where the operator is running. The test configuration is applied as in the following:
After completing a test run, you need to clean up the test jobs created. This is done by running the following command:
Sadly, nothing works perfectly all the time, so knowing where to go for help is important.
Be sure to search the k6-operator category in the community forum. k6 has a growing and helpful community of engineers working with k6-operator, so there's a good chance your issue has already been discussed and overcome. It's also in these forums where you'll be able to get help from members of the k6 development team.
Here are some additional resources to help on your learning journey: