Manifesto
We love fast apps, APIs, and websites. We know your users do too. We also love open source, and we have strong convictions about how developer-centric load testing should work in the era of DevOps.
From these convictions, we've formed a core set of beliefs. These beliefs drive everything we do. They guide how we make product decisions, how we provide support, how we market, and how we evangelize our offering to you, our users.
Load tests should mimic real-world users and clients as closely as possible, whether that's the distribution of requests to your API or the steps of a user moving through a purchase flow. But let's be pragmatic for a second: the 80/20 rule says you get 80% of the value from 20% of the work, and a couple of simple tests are vastly better than no tests at all. Start small and simple, make sure you get something out of the testing first, then expand the test suite and add complexity until you feel you've reached the point where more effort spent on realism won't give enough return on your invested time.
There are two types of load tests you could run: the "unit load test" and the "scenario load test".
A unit load test tests a single unit, like an API endpoint, in isolation. You might primarily be interested in how the endpoint's performance trends over time, and in being alerted to performance regressions. This type of test tends to be less complex than the scenario load test (described below), so it's a good starting point for your testing efforts. Build a small suite of unit load tests first; if you then feel you need more realism, move on to scenario testing.
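As a sketch of what a unit load test might look like in k6 (the endpoint URL, virtual user count, and duration here are made-up placeholders):

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',  // run for 30 seconds
};

export default function () {
  // Hit a single endpoint in isolation; the URL is a placeholder.
  http.get('https://api.example.com/v1/products');
  sleep(1); // brief pause between iterations
}
```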
A scenario load test tests a real-world flow of interactions, e.g. a user logging in, starting some activity, waiting for progress, and then logging out. The goal here is to hit the target system with traffic that is consistent with what you'd see in the real world, in terms of the URLs/endpoints being requested. Usually, this means making sure the most critical flows through your app are performant. A scenario load test can be as easy as composing your unit load tests into a logical order that reflects how your users interact with your service.
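For illustration, a scenario test might chain such units into a user journey. A minimal sketch, assuming a hypothetical purchase flow (all URLs, payloads, and stage durations are invented):

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 50 }, // ramp up to 50 virtual users
    { duration: '3m', target: 50 }, // hold steady for the main measurement
    { duration: '1m', target: 0 },  // ramp back down
  ],
};

export default function () {
  // A hypothetical flow: log in, browse, add to cart, check out.
  http.post('https://example.com/login', { username: 'user', password: 'secret' });
  sleep(2); // simulated user think time

  http.get('https://example.com/products');
  sleep(3);

  http.post('https://example.com/cart', { productId: '42' });
  sleep(1);

  http.post('https://example.com/checkout');
}
```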
A successful load test needs a goal. We might have formal SLAs to test against, or we might just want our API, app, or site to respond instantly (<=100ms according to Jakob Nielsen). We all know how impatient we get while waiting for an app or site to load.
That is why specifying performance goals is such an important part of load testing, e.g. above what level is a response time no longer acceptable, and what is an acceptable request failure rate? It's also good practice to verify functional correctness as part of your load tests. Both the performance and the functional goals can be codified using thresholds and checks (similar to asserts).
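In k6, for example, both kinds of goals can live in the same script; a sketch with placeholder values:

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  thresholds: {
    // Performance goals: the test fails if these are not met.
    http_req_duration: ['p(95)<100'], // 95% of requests complete below 100ms
    http_req_failed: ['rate<0.01'],   // less than 1% of requests may fail
  },
};

export default function () {
  const res = http.get('https://api.example.com/health'); // placeholder URL
  // Functional goal: an assert-like check on the response.
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
}
```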
For performance testing automation, the load testing tool needs to be able to signal to the task runner (usually a CI server) whether the test has passed or failed.
With your goals defined, this is straightforward: when a threshold fails, k6 returns a non-zero exit code.
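In a CI pipeline, this can be as simple as a plain `k6 run` step (the script name here is a placeholder):

```sh
# If any threshold in loadtest.js fails, k6 exits with a non-zero code,
# which CI servers treat as a failed step.
k6 run loadtest.js
```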
Load testing should be done by the people who know the application best: the developers. And we believe developer tools should be open source, so that a community can form and drive the project forward through discussions and contributions. That's why we built k6, the load testing tool we've always wanted ourselves!
As developers, we love our local setup. We spend a lot of time and effort making sure our favorite code editor and shell are how we want them to be. Everything else is subpar, a hindrance to our productivity. The local environment is king.
It's where we should write our load test scripts, and from where we should initiate our load tests.
DevOps has taught us that the software development process can be generalized and reused for dealing with change not just in application code but also in infrastructure, docs, and tests. It can all just be code.
We check our code in at the entry point of a pipeline, version control (Git and GitHub in our case), and it's then taken through a series of steps aimed at assuring quality and lowering the risk of releases. Automation keeps these steps out of our way while maintaining control through fast feedback loops (context switching is our enemy). If any step of the pipeline breaks or fails, we want to be alerted in our communication channel of choice (in our case Slack), and it needs to happen as quickly as possible, while we're still in the right context.
Our load test scripts are no exception. We believe load test scripts should be plain code, to get all the benefits of version control, as opposed to, say, unreadable, tool-generated XML.
Load testing can easily be added as another step in the pipeline, picking up the load test scripts from version control for execution. Truth be told, if any step in the pipeline takes too long, it's at risk of being "temporarily" turned off ("just for now, promise"), whether it's that Docker image build taking forever or that 30-minute load test ;-)
Yes, traditional scenario load tests are naturally in the risk zone of being axed in the name of this-step-is-taking-too-long, since load tests need time to ramp up and execute the user journeys with simulated traffic to gather enough measurements to act on. This is why we don't recommend running scenario load tests on every commit, but rather at a frequency closer to "daily" for performance regression testing: when merging code into a release branch, or as a nightly build perhaps, so that you can have your snazzy load test report with your morning coffee before you've settled into your zone!
For unit load tests, where a single or only a few API endpoints are being tested, running performance regression tests on every commit is appropriate.
Load testing should happen in pre-production. Testing in production risks interrupting business operations and should only be done if your processes are mature enough to support it, e.g. if you're already doing chaos engineering. ;)
Using an application performance monitoring (APM) product in production is not a reason to avoid load testing, either.
We are committed to building k6 OSS, the load testing tool with the best developer experience, and to developing it in the open with the community. Read our document on stewardship of the OSS k6 project. We believe this is the key and necessary foundation for building great developer tooling.
We aim to codify our 20 years of performance-testing knowledge into algorithms to bring you automated test-result analysis through our commercial cloud-based load testing SaaS offering. This analysis, which we call “performance insights,” interprets the vast amounts of data that a load test generates. This relieves you of much of the work that has traditionally been part of the performance engineer’s responsibility.
To summarize, let us be the performance-engineering experts, that step in your automation pipeline that quietly ensures that your system's performance is in check, and screams loudly when it's not. You'll have more time to develop your application code, and the application itself will perform more robustly.
Are you ready to try k6?
Join the growing k6 community and start a new way of load testing.