We love fast apps, APIs and websites, and we know your users do too. We also love open source, and at k6 we have a strong conviction about what developer-centric load testing in the era of DevOps should look like. This conviction is the driving force in everything we do: it guides our product decisions as well as how we provide support, market, and evangelise our offering to you, our users. It forms our beliefs:
Load tests should mimic real-world users/clients as closely as possible, whether that’s the distribution of requests to your API or the steps of a user moving through a purchase flow. But let’s be pragmatic for a second: the 80/20 rule says you get 80% of the value from 20% of the work, and a couple of simple tests are vastly better than no tests at all. Start small and simple, make sure you get something out of the testing first, then expand the test suite and add complexity until you feel you’ve reached the point where more effort spent on realism will not give enough return on your invested time.
There are two types of load test you can run: the “unit load test” and the “scenario load test”.
A unit load test tests a single unit, like an API endpoint, in isolation. You might primarily be interested in how the endpoint’s performance trends over time, and in being alerted to performance regressions. This type of test tends to be less complex than the scenario load test (described below), so it is a good starting point for your testing efforts. Build a small suite of unit load tests first; then, if you feel you need more realism, move on to scenario testing.
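A unit load test can be only a few lines of k6 script. A minimal sketch, assuming a hypothetical endpoint URL and a placeholder load profile (run with `k6 run script.js`):

```javascript
// Minimal k6 unit load test for a single API endpoint.
// The URL and the load profile below are placeholder assumptions.
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',  // for 30 seconds
};

export default function () {
  // Hypothetical endpoint under test.
  http.get('https://api.example.com/v1/products');
  sleep(1); // pacing between iterations, as a real client would have
}
```

Run nightly or per commit, the trend of this endpoint’s response times over time is often more valuable than any single run.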
A scenario load test tests a real-world flow of interactions, e.g. a user logging in, starting some activity, waiting for progress, and then logging out. The goal is to hit the target system with traffic that is consistent with what you’d see in the real world in terms of the URLs/endpoints exercised. In practice, this usually means making sure the most critical flows through your app are performant. A scenario load test can be as simple as composing your unit load tests into a logical order that matches how your users interact with your service.
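The login/activity/logout flow described above can be sketched as a k6 script; all URLs, payloads, and field names here are hypothetical placeholders:

```javascript
// Sketch of a k6 scenario load test: log in -> start activity -> wait -> log out.
// Every URL and payload below is a placeholder assumption.
import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  // Log in; k6 keeps the session cookie automatically per virtual user.
  const login = http.post('https://app.example.com/login', {
    username: 'testuser',
    password: 'secret',
  });
  check(login, { 'logged in': (r) => r.status === 200 });

  // Start some activity.
  http.post('https://app.example.com/jobs', JSON.stringify({ type: 'export' }), {
    headers: { 'Content-Type': 'application/json' },
  });

  // Wait for progress, as a real user would.
  sleep(3);

  // Log out.
  http.post('https://app.example.com/logout');
}
```

Note how each step could live on its own as a unit load test; the scenario simply strings them together in user order.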
A prerequisite for successful load testing is having goals. We might have formal SLAs to test against, or we might just want our API, app, or site to respond instantly (<=100 ms, according to Jakob Nielsen); we all know how impatient we are as users waiting for an app or site to load.
That is why specifying performance goals is such an important part of load testing: e.g. above what level is a response time not acceptable, and/or what is an acceptable request failure rate? It is also good practice to make sure that your load test is functionally correct. Both the performance and the functional goals can be codified using thresholds and checks (which work like asserts).
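Both kinds of goals can live in the script itself. A sketch, where the threshold numbers are example goals rather than recommendations, and the endpoint is hypothetical:

```javascript
// Codifying goals in k6: thresholds for performance, checks for correctness.
// The numeric goals and the URL below are illustrative assumptions.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500ms
    http_req_failed: ['rate<0.01'],   // less than 1% of requests may fail
  },
};

export default function () {
  const res = http.get('https://api.example.com/health'); // hypothetical endpoint
  check(res, {
    'status is 200': (r) => r.status === 200,
    'body is not empty': (r) => r.body.length > 0,
  });
}
```

Checks record pass/fail rates without aborting the test; thresholds are the pass/fail criteria for the run as a whole.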
For performance testing automation, the load testing tool needs to be able to signal to the task runner, usually a CI server, whether the test has passed or failed.
With your goals defined, this is straightforward: on failure, your thresholds will cause k6 to return a non-zero exit code.
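In a CI step this can be as simple as checking the exit code. A sketch, assuming k6 is installed on the runner and the thresholds are defined in a `loadtest.js` script:

```shell
#!/bin/sh
# Hypothetical CI step: k6 exits non-zero when a threshold in the
# script is crossed, which fails the build.
if k6 run --quiet loadtest.js; then
  echo "Load test passed"
else
  echo "Load test thresholds breached" >&2
  exit 1
fi
```

Most CI servers treat any non-zero exit code as a failed step, so often the bare `k6 run` line is enough.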
Load testing should be done by the people who know the application best: the developers. We believe developer tools should be open source so that a community can form and drive the project forward through discussions and contributions. That is why we built k6, the load testing tool we’ve always wanted ourselves!
As developers, we love our local setup. We spend a lot of time and effort making sure our favorite code editor and command-line shell are exactly as we want them; anything else is subpar, a hindrance to our productivity. The local environment is king. It’s where we should code our load test scripts, and from where we should initiate our load tests.
DevOps has taught us that the software development process can be generalized and reused for dealing with change not just in application code but also in infrastructure, docs and tests. It can all just be code.
We check our code in at the entry point of a pipeline, version control (Git and GitHub in our case), and it is then taken through a series of steps aimed at assuring quality and lowering the risk of releases. Automation helps keep these steps out of our way while maintaining control through fast feedback loops (context switching is our enemy). If any step of the pipeline breaks or fails, we want to be alerted in our communication channel of choice (in our case Slack), and it needs to happen as quickly as possible, while we’re still in the right context.
Our load test scripts are no exception. We believe load test scripts should be plain code to get all the benefits of version control, as opposed to, say, unreadable, tool-generated XML.
Load testing can easily be added as another step in the pipeline, picking the load test scripts up from version control for execution. Truth be told, though, if any step in the pipeline takes too long, it’s at risk of being “temporarily” turned off (“just for now, promise”), whether it’s that Docker image build taking forever or that 30-minute load test ;-)
Yes, traditional scenario load tests are naturally in the risk zone of being axed in the name of this-step-is-taking-too-long: load tests need time to ramp up and execute the user journeys with the simulated traffic to gather enough measurements to act on. This is why we don’t recommend running scenario load tests on every commit, but rather at a frequency closer to “daily” for performance regression testing: when merging code into a release branch, or as a nightly build perhaps, so that you can have your snazzy load test report with your morning coffee before you’ve settled into your zone!
For unit load tests, where a single API endpoint or only a few endpoints are being tested, running performance regression tests on every commit is appropriate.
Load testing should happen pre-production. Testing in production risks interrupting business, and should only be done if your processes are mature enough to support it, e.g. if you’re already doing chaos engineering ;)
Also, using an APM product in production is not a reason to skip load testing. An APM will not tell you how scalable your system is; it’s “just” deeper monitoring and observability.
We are committed to building the load testing tool with the best developer experience, k6 OSS, and to developing it in the open with the community; read our document on the stewardship of the OSS k6 project. We believe this is key, and the necessary foundation, for building great developer tooling.
We aim to codify our 20 years of performance testing knowledge into algorithms that bring you automated test result analysis through our commercial cloud-based load testing SaaS offering, what we refer to as “performance insights”. This relieves you of much of the work that has traditionally been the performance engineer’s responsibility: interpreting the vast amounts of data generated by a load test.
To summarize: let us be the performance engineering experts. Let us be that step in your automation pipeline that quietly ensures the performance of your systems is in check, and screams loudly when it is not, so that you can focus on your application code.
Are you ready to try k6?
Join the growing k6 community and start a new way of load testing.