The success of creating a load and performance testing culture depends on many factors. It’s critical to choose a tool that fits your team’s requirements and will help you meet your goals.
This worksheet is intended to guide you in creating a proof of concept and staying focused on the core task. A proof of concept should start with a fairly narrow focus and be easy to expand if you have the time. POCs fail when you try to do too much, too quickly.
The first step is to define the goals for your proof of concept. These goals should relate both to your systems and to the proof of concept itself. Make each goal clear enough that you can look back and say to yourself, “this is done!” Feel free to answer the questions below or write your own:
1 - What are you specifically testing?
- I am testing API endpoints for my web application.
- I am testing the most common user journeys in my web application.
2 - What SLAs/SLOs do you currently have in place?
- Endpoints should return a response within X ms and have fewer than Y% total errors.
- When using our search feature, a response should return within X ms, and have no errors.
3 - Where are you testing?
- I am testing a pre-production environment that closely mimics production. The servers are similar in spec to production, without autoscaling, and our database contains a similar amount of data to production. I will not be testing autoscaling in this POC.
- I do not have the luxury of a staging environment, so I am testing production. I will schedule tests to run during non-peak hours.
4 - What do you want to achieve in this POC?
- When I push new code to a particular branch, I want a test to automatically run in CI and pass/fail based on my requirements. If the test fails, I should receive a notification in Slack of the failure so I can investigate.
- I will schedule my tests to run nightly. I will analyze the performance-trending graph to monitor for regressions. I will use thresholds to automatically fail tests and to automatically abort runs in situations of extremely poor performance, e.g. an additional threshold at 3x my existing pass/fail criteria.
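As a concrete sketch, the abort-at-3x idea can be expressed with k6 thresholds. The metric names (`http_req_duration`, `http_req_failed`) are k6 built-ins; the specific limits (500 ms, 1% errors, 10 VUs, 5 minutes) are placeholder assumptions:

```javascript
// Sketch of k6 test options; the limits are placeholder assumptions.
// In a real k6 script this would be declared as `export const options = {...}`.
const options = {
  vus: 10,
  duration: '5m',
  thresholds: {
    // Pass/fail criteria: error rate under 1%.
    http_req_failed: ['rate<0.01'],
    http_req_duration: [
      // Pass/fail criteria: 95th percentile response time under 500 ms.
      'p(95)<500',
      // Abort the run early at 3x the pass/fail limit.
      { threshold: 'p(95)<1500', abortOnFail: true },
    ],
  },
};
```

With `abortOnFail`, k6 stops the test as soon as the 3x threshold is crossed, so a badly degraded system isn’t hammered for the full duration.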
There are many cases where you will need to edit your test scripts to ensure proper functionality or to make them more realistic.
- The majority of APIs, web apps, and websites that users test are protected by authentication.
- Some systems may allow the same credentials to be reused across virtual users, but it’s often not a realistic case.
- The following points will help improve your authentication cases.
- Most commonly used to correlate tokens/session IDs that protect against cross-site request forgery.
- Also useful when you need to handle data from a response.
- Allows for things such as dynamic user logins, searches, etc.
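A minimal sketch of the correlation idea, written in plain JavaScript so it is self-contained. In a k6 script you would apply the same extraction to `res.body` from a prior `http.get()`; the HTML string and the `csrf_token` field name here are invented for illustration:

```javascript
// Stand-in for a login-page response body (in k6, this would be res.body).
const body =
  '<form><input type="hidden" name="csrf_token" value="abc123"></form>';

// Extract the dynamic token so it can be sent along with the next request.
const match = body.match(/name="csrf_token" value="([^"]+)"/);
const csrfToken = match ? match[1] : null;

console.log(csrfToken); // "abc123"
```

The extracted value would then be included in the follow-up request (e.g. as a form field or header), exactly as a real browser session would.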
- k6 supports JSON out of the box. CSV is also doable with an external library.
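For example, a JSON data file of test users can drive dynamic logins. The sketch below inlines the file contents so it is self-contained; in k6 you would read the file with `JSON.parse(open('./users.json'))`, and the built-in `__VU` (virtual-user number) would replace the hard-coded `vuNumber`. The usernames and file name are invented:

```javascript
// Inlined stand-in for the contents of a users.json data file.
const fileContents =
  '[{"user":"alice","pass":"pw1"},{"user":"bob","pass":"pw2"}]';
const users = JSON.parse(fileContents);

// Give each virtual user its own credentials, e.g. by VU number.
const vuNumber = 2; // stand-in for k6's __VU
const creds = users[(vuNumber - 1) % users.length];

console.log(creds.user); // "bob"
```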
- Supported out of the box when triggering tests from the CLI.
- Allows you to break your scripts into smaller, more manageable pieces, reuse existing libraries, and keep everything neatly organized in your version control system.
- Develop your test cases like you would any other software.
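As a sketch of modularization, a user journey can live in its own function (or module) and be reused across test scripts. The journey name and URL below are invented; in k6 the function body would issue real `http.get()` calls and the function would be exported from a shared module:

```javascript
// A reusable "journey" that could live in its own module and be imported
// into multiple test scripts (in k6: `export function searchJourney(...)`).
function searchJourney(baseUrl, term) {
  // In a real k6 script this would be an http.get() call plus checks.
  return `${baseUrl}/search?q=${encodeURIComponent(term)}`;
}

console.log(searchJourney('https://test.example.com', 'load testing'));
// "https://test.example.com/search?q=load%20testing"
```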
| Feature | Details |
| --- | --- |
| Source code | OSS, built in Go |
| Platform | Independent; can be run on Windows/Mac/Linux |
| Distributed load generators | Load generators can be spun up on demand across 15+ AWS regions |
| Community | Active Slack and community forum; GitHub repo updated often |
| Recording/Traffic capture | HAR conversion supported; Chrome extension available in cloud |
| Conversion from other tools | Postman and JMeter converters available |
| Virtual users | Each virtual user is a concurrent user. Virtual users can make multiple requests in parallel and execute the test script |
| Script/test configuration | Can be controlled via command-line flags, within an object in the script, or in a separate JSON file |
| Scenarios/Modularization | Test scripts can be modularized for organization. Virtual users can complete different journeys programmatically |
| Protocols supported | HTTP(S) (including HTTP/2) and WebSockets; gRPC planned in the future |
| Extensibility | Custom modules/libraries can be written. Existing libraries that don’t depend on browser APIs can be used. Some Node modules can be converted using browserify |
Use this section to define some milestones to help you progress. Here’s an example. Your timeline and steps will likely vary.
1 - Run initial test examples. Get familiar with k6 and the cloud service.
- Due: 1 week - Jan 7.
2 - Implement and run baseline tests. Configure your test to validate your SLA/SLOs.
- Due: 1 week after completion of above - Jan 14.
3 - Fix clear and obvious issues
- Due: unknown, depending on whether issues are found.
4 - Run tests frequently. Integrate into CI or schedule tests to run nightly.
- Due: 1 week after fixing issues - Date TBD.
Use this section to write down any conclusions you came to during testing, whether good, bad, or indifferent. They should relate to the tool, your systems, and your experience. If you have lingering questions, write them down here so you can get answers later.
Don’t let uncertainty bog you down! We’ve helped many users get through just about everything when it comes to testing. We are happy to share best practices, put another set of eyes on your code, offer direction on integrating with CI, or give guidance on interpreting results. Just send us a note: firstname.lastname@example.org. Feel free to include this worksheet, as it can help us guide you!