Developer-centric load testing defined

We love fast apps, APIs and websites, and we know your users do too. We also love open source, and at k6 we have a strong conviction about what developer-centric load testing in the era of DevOps should look like. This conviction is the driving force behind everything we do: it guides our product decisions as well as how we support, market and evangelise our offering to you, our users. It forms our beliefs:

Simple testing is better than no testing

Load tests should mimic real-world users and clients as closely as possible, whether that's the distribution of requests to your API or the steps of a user moving through a purchase flow. But let's be pragmatic for a second: the 80/20 rule says you get 80% of the value from 20% of the work, and a couple of simple tests are vastly better than no tests at all. Start small and simple, make sure you get something out of the testing first, then expand the test suite and add complexity until you feel you've reached the point where more effort spent on realism will not give enough return on your invested time.
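To make "start small and simple" concrete, here is roughly the smallest useful k6 test: a handful of virtual users repeatedly hitting a single page. The virtual user count, duration and URL are illustrative placeholders, not recommendations.

import http from "k6/http";
import { sleep } from "k6";

// Illustrative values: 5 virtual users for 30 seconds against an example URL.
export let options = {
    vus: 5,
    duration: "30s"
};

export default function() {
    http.get("http://demo.loadimpact.com/");
    sleep(1);
}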

There are two types of load tests you could run: the “unit load test” and the “scenario load test”.

A unit load test tests a single unit, such as an API endpoint, in isolation. Here you are primarily interested in how the endpoint’s performance trends over time and in being alerted to performance regressions. This type of test tends to be less complex than the scenario load test (described below), so it is a good starting point for your testing efforts. Build a small suite of unit load tests first; if you then feel you need more realism, move on to scenario testing.

A scenario load test exercises a real-world flow of interactions, e.g. a user logging in, starting some activity, waiting for progress, and then logging out. The goal is to hit the target system with traffic that is consistent with what you’d see in the real world in terms of the URLs and endpoints being exercised, which in practice means making sure the most critical flows through your app are performant. A scenario load test can be as simple as composing your unit load tests in a logical order that matches how your users interact with your service, as the two snippets below show.

import http from "k6/http";

// units/login.js — unit load test exercising the login endpoint in isolation.
export function testLogin(params) {
    let data = params || { username: "admin", password: "test" };
    http.post("http://demo.loadimpact.com/login", data);
}

export default function() {
    testLogin();
}

The unit load test above can then be reused from a scenario load test:

import { group } from "k6";
import { testLogin } from "./units/login.js";

// Load the CSV of test users in the init context and parse it into objects.
let users = open("users.csv")
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => {
        let parts = line.split(",");
        return { username: parts[0], password: parts[1] };
    });

export default function() {
    group("Login", function() {
        // Pick a random test user for each iteration.
        let user = users[Math.floor(Math.random() * users.length)];
        testLogin({ username: user.username, password: user.password });
    });
}

Load testing should be goal-oriented

A prerequisite for successful load testing is having goals. We might have formal SLAs to test against, or we might simply want our API, app or site to respond instantly (<=100 ms according to Jakob Nielsen); we all know how impatient we are as users waiting for an app or site to load.

That is why specifying performance goals is such an important part of load testing: e.g. above what level is a response time no longer acceptable, and/or what is an acceptable request failure rate? It is also good practice to make sure that your load test is functionally correct. Both the performance and functional goals can be codified using thresholds and checks (which work like asserts).

For load testing to be automated, the load testing tool needs to be able to signal to the task runner, usually a CI server, whether the test has passed or failed.

With your goals defined, this is straightforward to achieve: on failure, your thresholds will cause k6 to return a non-zero exit code.

import { check } from "k6";
import http from "k6/http";

// Performance goal: 95% of login requests must complete in under 100ms.
export let options = {
    thresholds: {
        "http_req_duration{url:http://demo.loadimpact.com/login}": ["p(95)<100"]
    }
};

export function testLogin(params) {
    let data = params || { username: "admin", password: "test" };
    let res = http.post("http://demo.loadimpact.com/login", data);
    // Functional goal: the login request must return HTTP 200.
    check(res, {
        "is status 200": (r) => r.status === 200
    });
}

export default function() {
    testLogin();
}

Load testing by developers

Load testing should be done by the people who know the application best: the developers. We also believe that developer tools should be open source, so that a community can form and drive the project forward through discussions and contributions. That is why we built k6, the load testing tool we’ve always wanted ourselves!

Developer experience is super important

Local environment

As developers we love our local setup. We spend a lot of time and effort making sure our favorite code editor and command-line shell are exactly as we want them; anything else is subpar, a hindrance to our productivity. The local environment is king. It is where we should write our load test scripts and from where we should initiate our load tests.

Everything as code

DevOps has taught us that the software development process can be generalized and reused for dealing with change not just in application code but also in infrastructure, docs and tests. It can all just be code.

We check our code in at the entry point of a pipeline, version control (Git and GitHub in our case), and it is then taken through a series of steps aimed at assuring quality and lowering the risk of releases. Automation keeps these steps out of our way while maintaining control through fast feedback loops (context switching is our enemy). If any step of the pipeline breaks or fails, we want to be alerted in our communication channel of choice (in our case Slack), and it needs to happen as quickly as possible, while we are still in the right context.

Our load test scripts are no exception. We believe load test scripts should be plain code, so you get all the benefits of version control, as opposed to, say, unreadable, tool-generated XML.

Automation

Load testing can easily be added as another step in the pipeline, picking up the load test scripts from version control and executing them. Truth be told, though, if any step in the pipeline takes too long, it is at risk of being “temporarily” turned off (“just for now, promise”), whether it’s that Docker image build that takes forever or that 30-minute load test ;-)

Yes, traditional scenario load tests are naturally in the risk zone of being axed in the name of this-step-is-taking-too-long, since load tests need time to ramp up and execute the user journeys with simulated traffic to gather enough measurements to act on. This is why we don’t recommend running scenario-type load tests on every commit, but rather at roughly a daily frequency for performance regression testing: when merging code into a release branch, or perhaps as a nightly build, so that you can have your snazzy load test report with your morning coffee before you’ve settled into your zone!

For unit load tests, where only a single or a few API endpoints are being tested, running performance regression tests on every commit is appropriate, as sketched below.
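As a sketch of what such a per-commit unit load test could look like, reusing the login unit test from above. The stage durations, target VU counts and threshold values are illustrative assumptions, not recommendations.

import { testLogin } from "./units/login.js";

// Short, per-commit regression test: small load, quick ramp, strict thresholds.
// The numbers below are illustrative; tune them to your own pipeline budget.
export let options = {
    stages: [
        { duration: "30s", target: 10 },
        { duration: "1m", target: 10 },
        { duration: "30s", target: 0 }
    ],
    thresholds: {
        http_req_duration: ["p(95)<200"],
        checks: ["rate>0.99"]
    }
};

export default function() {
    testLogin();
}

Because failing thresholds make k6 exit with a non-zero code, the CI server can fail the commit’s pipeline step without any extra glue.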

Load test in a pre-production environment

Load testing should happen pre-production. Testing in production risks interrupting the business, and should only be done if your processes are mature enough to support it, e.g. if you’re already doing chaos engineering ;)

Also, using an APM product in production is not a reason to skip load testing. An APM will not tell you how scalable your system is; it is “just” deeper monitoring and observability.

Making the pre-production environment and its test data resemble production does come with challenges, though:

  • Strict separation between production and pre-production environments can make database dumps and restores infeasible, and in some cases they are prohibited by regulation.
  • Scrubbing data sources can be non-trivial, and it is hard to verify that the scrubbing has full coverage. We don’t want to end up sending thousands of emails to real customers just because we failed to scrub the data properly!
  • Today’s systems quickly turn complex, with many moving parts. When load testing, we need to make sure our system does the right thing(tm) with third-party integrations like credit card processors, email delivery services etc., which might require mock or pre-production values in the data layer (see the sketch below).
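One low-effort way to deal with such third-party integrations in a load test is to let the script pick the integration endpoint up from an environment variable, so the same script can target a mock in pre-production. A minimal sketch, assuming a mocked payment endpoint exists; the PAYMENT_URL variable and the URLs are made up for illustration:

import http from "k6/http";

// Hypothetical example: point the script at a mocked payment processor in
// pre-production via an environment variable, e.g.
//   k6 run -e PAYMENT_URL=https://payments-mock.example.com script.js
let paymentUrl = __ENV.PAYMENT_URL || "https://payments-mock.example.com";

export default function() {
    http.post(paymentUrl + "/charge", { amount: "100" });
}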

Our commitment

We commit to building k6, the load testing tool with the best developer experience, and to developing it in the open with the community; read our document on stewardship of the OSS k6 project. We believe this is key, and the necessary foundation for building great developer tooling.

We aim to codify our 20 years of performance testing knowledge into algorithms that bring you automated test result analysis through our commercial offering, what we refer to as “performance insights”. This relieves you of much of the work that has traditionally been part of the performance engineer’s responsibility: interpreting the vast amounts of data generated by a load test.

To summarize: let us be the performance engineering experts, the step in your automation pipeline that quietly ensures the performance of your systems is in check and screams loudly when it isn’t, so that you can focus on your application code.

Are you ready to try k6?

Join the growing k6 community and start a new way of load testing.

Get Started >_