Running tests within the web app is helpful when getting a feel for the tool or building a proof of concept. However, many users find greater flexibility in using k6 to trigger cloud tests from the command line.
Reasons for triggering cloud tests from the k6 CLI include:
- Storing test scripts in local version control.
- Modularization of scripts for collaboration and easier maintenance.
- Preference to work in your local environment.
- Integrating testing in CI/CD pipelines.
## Instructions
First, you need a k6 Cloud account. If you don't have one, sign up and get 50 cloud tests with the Free Trial.
Install k6 by following the installation instructions for your platform.
Authenticate to k6 Cloud from the CLI. You can log in with your username and password or with your API token. `k6 login` stores your API token in a local config file, which is used to authenticate to k6 Cloud when running cloud commands.
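Both login forms use the `k6 login cloud` subcommand (a sketch; replace the token with your own):

```shell
# Log in with username and password (prompts for credentials):
k6 login cloud

# Or log in non-interactively with your API token:
k6 login cloud --token YOUR_API_TOKEN
```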
Run your test in the cloud.
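The command is `k6 cloud` followed by your script (the file name here is hypothetical):

```shell
k6 cloud script.js
```

The same command can also be run from the official k6 Docker image.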
You'll see k6 print some information, including the URL of your test results.
Navigate to the URL to check your test results. When the test is running, the test result page is shown.
Learn more about test results on Analyzing Results.
## Cloud execution options
All the k6 options, like `--vus` and `--duration`, are the same between the `k6 run` and `k6 cloud` commands. k6 aims to run the same script in different execution modes without requiring any script modifications.
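For instance, the same script and flags work with either command (the script name is hypothetical):

```shell
# Run locally:
k6 run --vus 10 --duration 30s script.js

# Run the identical script in the cloud:
k6 cloud --vus 10 --duration 30s script.js
```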
Optionally, you can define some cloud options in your k6 script.
Name | Default | Description |
---|---|---|
`name` (string) | Optional. The name of the main script file, e.g. `"script.js"`. | The name of the test in the k6 Cloud UI. Test runs with the same name will be grouped together. |
`projectID` (number) | Optional. Empty by default. | The ID of the project to which the test is assigned in the k6 Cloud UI. Defaults to the default project of the user's default organization. |
`distribution` (object) | Optional. The equivalent of `someDefaultLabel: { loadZone: "amazon:us:ashburn", percent: 100 }`. | How the traffic should be distributed across existing load zones. The keys are string labels that will be injected as environment variables. |
`staticIPs` (boolean) | Optional. `false` by default. | When set to `true`, the cloud system will use dedicated IPs assigned to your organization to execute the test. |
`note` (string) | Optional. Empty by default. | Notes regarding the test, changes made, or anything else that may be worth noting about your test. |
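These options live under `ext.loadimpact` in the script's exported options. A sketch (the name, project ID, and note are hypothetical):

```javascript
export const options = {
  ext: {
    loadimpact: {
      name: 'Nightly API test',  // runs with the same name are grouped together
      projectID: 123456,         // hypothetical project ID
      staticIPs: false,          // use shared cloud IPs (the default)
      note: 'Baseline run after the caching fix',
    },
  },
};
```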
## Running tests under a different project than your default one
By default, tests and test runs are created and run under your default project, in your default organization.
To create and run tests under a different project, whether in your default organization or one you've been invited to, you have to pass the Project ID to k6.
Select the project in the sidebar menu; you'll find the Project ID in the header of the Project Dashboard page.
You have two options to pass the Project ID to k6:
- Specify it in the script options.
- Set the `K6_CLOUD_PROJECT_ID` environment variable when running the test.
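A sketch of the script-options approach (the project ID is hypothetical):

```javascript
export const options = {
  ext: {
    loadimpact: {
      projectID: 123456, // hypothetical Project ID from the Project Dashboard
    },
  },
};
```

Alternatively, on the command line: `K6_CLOUD_PROJECT_ID=123456 k6 cloud script.js`.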
## Load zones
- Asia Pacific (Hong Kong) amazon:cn:hong kong
- Asia Pacific (Mumbai) amazon:in:mumbai
- Asia Pacific (Seoul) amazon:kr:seoul
- Asia Pacific (Singapore) amazon:sg:singapore
- Asia Pacific (Sydney) amazon:au:sydney
- Asia Pacific (Tokyo) amazon:jp:tokyo
- Canada (Montreal) amazon:ca:montreal
- Europe (Frankfurt) amazon:de:frankfurt
- Europe (Ireland) amazon:ie:dublin
- Europe (London) amazon:gb:london
- Europe (Paris) amazon:fr:paris
- Europe (Stockholm) amazon:se:stockholm
- South America (São Paulo) amazon:br:sao paulo
- US West (N. California) amazon:us:palo alto
- US West (Oregon) amazon:us:portland
- US East (N. Virginia) - DEFAULT amazon:us:ashburn
- US East (Ohio) amazon:us:columbus
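Load zones are referenced from the `distribution` option; the percentages should total 100. A sketch spreading traffic across two zones (the labels are hypothetical):

```javascript
export const options = {
  ext: {
    loadimpact: {
      distribution: {
        // Keys are arbitrary labels, injected as environment variables.
        ashburnUsers: { loadZone: 'amazon:us:ashburn', percent: 60 },
        dublinUsers: { loadZone: 'amazon:ie:dublin', percent: 40 },
      },
    },
  },
};
```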
## Cloud execution tags
Tags are a powerful concept in k6, as they open up great flexibility in how you can slice and dice the result data.
When running a k6 test in the cloud, we add two tags to all metrics:
Tag name | Type | Description |
---|---|---|
load_zone | string | The load zone from where the metric was collected. Values will be of the form: amazon:us:ashburn. |
instance_id | number | A unique number representing the ID of a load generator server taking part in the test. |
The cloud tags are automatically added when collecting the test metrics, and they work as regular tags.
For example, you can filter the results for a particular load zone on the k6 Cloud Results view.
Or define a Threshold based on the results of a load zone.
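For instance, a threshold scoped to one load zone's samples might look like this (a sketch; the URL and limit are hypothetical):

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    // Apply the p(95) limit only to samples tagged with the Ashburn load zone.
    'http_req_duration{load_zone:amazon:us:ashburn}': ['p(95)<500'],
  },
};

export default function () {
  http.get('https://test.k6.io/');
}
```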
## Environment variables
Environment variables set in your local terminal before executing k6 won't be forwarded to the k6 cloud service, and thus won't be available to your script when it executes in the cloud.
With cloud execution, you must use the `-e`/`--env` CLI flags to set environment variables, like `-e KEY=VALUE` or `--env KEY=VALUE`.
For example, suppose your script reads a `MY_HOSTNAME` environment variable; you would then pass it on the command line when starting the cloud test.
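A sketch of such a script (the hostname and check are examples):

```javascript
import { check } from 'k6';
import http from 'k6/http';

export default function () {
  // __ENV exposes variables passed with -e/--env.
  const res = http.get(`http://${__ENV.MY_HOSTNAME}`);
  check(res, { 'is status 200': (r) => r.status === 200 });
}
```

Executed with, e.g., `k6 cloud -e MY_HOSTNAME=test.k6.io script.js`.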
## Injected environment variables in cloud execution
When running in k6 Cloud, three additional environment variables are available for finding out in which load zone, on which server instance, and under which distribution label the script is currently running.
Name | Value | Description |
---|---|---|
LI_LOAD_ZONE | string | The load zone from where the metric was collected. Values will be of the form: amazon:us:ashburn (see the list above). |
LI_INSTANCE_ID | number | A sequential number representing the unique ID of a load generator server taking part in the test, starts at 0. |
LI_DISTRIBUTION | string | The value of the "distribution label" that you used in ext.loadimpact.distribution corresponding to the load zone the script is currently executed in. |
You can read the values of these variables in your k6 script as usual.
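They can be read through `__ENV` like any other environment variable, for example:

```javascript
export default function () {
  // Injected by k6 Cloud at runtime; undefined when running locally.
  console.log(`Load zone: ${__ENV.LI_LOAD_ZONE}`);
  console.log(`Instance ID: ${__ENV.LI_INSTANCE_ID}`);
  console.log(`Distribution label: ${__ENV.LI_DISTRIBUTION}`);
}
```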
## Differences between local and cloud execution
### Iterations
Local execution supports iteration-based test length (`-i` or `--iterations` on the CLI, and `iterations` in script options), which is not yet supported by the cloud execution mode.
### Using setup/teardown life-cycle functions
Your setup and teardown life-cycle functions execute as normal when running cloud tests. Depending on the size of your test, it will execute from one or more cloud servers, but `setup()` and `teardown()` each execute from only one server, so they run once per test run. There's no guarantee, though, that the same cloud server that executed `setup()` will execute `teardown()`.
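A sketch showing where each life-cycle function runs (the target URL is hypothetical):

```javascript
import http from 'k6/http';

// Runs once per test run, on a single cloud server.
export function setup() {
  return { startedAt: new Date().toISOString() };
}

// Runs on every load generator, once per iteration.
export default function (data) {
  http.get('https://test.k6.io/');
}

// Also runs once per test run, but possibly on a different server than setup().
export function teardown(data) {
  console.log(`Run started at ${data.startedAt}`);
}
```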