When discussing complex topics, it is usually a good idea to define a clear, shared terminology to ensure that we leave as little room as possible for misunderstandings. Below, you'll find a list of terms commonly used within the k6 project and what we mean when we use them.
- Correlation
- Dynamic data
- Endurance testing
- Horizontal scalability
- HTTP archive
- Iteration
- k6 Cloud
- Load test
- Metric
- Parameterization
- Performance threshold
- Reliability
- Requests per second
- Saturation
- Scalability
- Service-level agreement
- Service-level indicator
- Service-level objective
- Smoke test
- Soak test
- Stability
- Stress test
- System under test
- Test configuration
- Test run
- Test script
- User journey
- User scenario
- Vertical scalability
- Virtual users
Correlation is a term used for describing the process of capturing dynamic data, received from the system under test, for reuse in subsequent requests. A common use case for correlation is retrieving and reusing a session id, or token, throughout the whole lifetime of a virtual user.
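The capture-and-reuse flow can be sketched in plain JavaScript (the helper names here are illustrative; in an actual k6 script the body would come from an HTTP response, extracted with, for example, `res.json()`):

```javascript
// Capture: pull the dynamic session token out of a login response body.
// (Illustrative helper; in k6 the body would come from an http.post() response.)
function captureToken(loginResponseBody) {
  return JSON.parse(loginResponseBody).token;
}

// Reuse: attach the captured token to subsequent requests' headers.
function authorizedHeaders(token) {
  return { Authorization: 'Bearer ' + token };
}

const loginBody = '{"token": "abc123"}'; // hypothetical login response
const token = captureToken(loginBody);
const headers = authorizedHeaders(token);
// 'headers' would now accompany every later request made by this virtual user.
```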
Dynamic data is data that might, or will, change between test runs. Common examples are order ids, session tokens or timestamps.
Endurance testing is a synonym for soak testing.
Horizontal scalability is a trait describing to what degree a system under test’s performance and capacity may be increased by adding more nodes, such as servers or machines.
An HTTP Archive, or HAR file, is a file containing logs of a browser's interactions with the system under test. All of the included transactions are stored as JSON-formatted text. These archives may then be used to generate test scripts using, for instance, the har-to-k6 Converter. For more details, see the HAR 1.2 Specification.
An iteration is an execution of the default function, or scenario exec function.
k6 Cloud is the common name for the entire cloud product, which is composed of both k6 Cloud Execution and k6 Cloud Test Results.
A load test is a type of test used to assess the performance of the system under test in terms of concurrent users or requests per second. See Load Testing.
A metric is a calculation that, using measurements, serves as an indicator of how the system under test performs under a given condition.
Parameterization refers to the process of building a test in such a way that the values used throughout the test might be changed without having to change the actual test script.
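One common approach, sketched here in plain JavaScript, is to keep test data in a table and map each virtual user to a row (in a k6 script the data would typically come from a SharedArray, an `open()`-ed file, or `__ENV` variables instead of being inlined):

```javascript
// Test data kept separate from the script logic, so values can change
// without touching the code below. Credentials are illustrative.
const users = [
  { username: 'alice', password: 'secret1' },
  { username: 'bob', password: 'secret2' },
];

// Deterministically map a virtual-user id (1-based) to a row of test data.
function userForVu(vuId) {
  return users[(vuId - 1) % users.length];
}

const current = userForVu(1); // this VU's credentials for the iteration
```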
A performance threshold describes the limits of what is considered acceptable values for a metric produced by a performance test. In many ways, this is similar to an SLO, although a performance threshold only concerns itself with a single metric.
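In k6, performance thresholds are declared in the `thresholds` property of the script's options object. A minimal sketch, using the built-in `http_req_duration` and `http_req_failed` metrics (the limits chosen here are arbitrary examples):

```javascript
export const options = {
  thresholds: {
    // 95th percentile of request duration must stay below 500 ms
    http_req_duration: ['p(95)<500'],
    // less than 1% of requests may fail
    http_req_failed: ['rate<0.01'],
  },
};
```

If a threshold is crossed, the test finishes with a non-zero exit code, which makes thresholds useful as pass/fail criteria in CI pipelines.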
Reliability is a trait used to describe a system under test’s ability to produce correct results consistently over time, even under pressure.
Requests per Second, or RPS, is the rate at which requests are executed against the system under test.
Saturation is reached when the system under test is fully utilized and hence, unable to handle any additional requests.
Scalability is a trait used to describe to what degree a system under test’s performance or capacity may be increased by adding additional resources. See Vertical scalability and Horizontal scalability.
A service-level agreement, or SLA, is an agreement between a service provider and a client, often a user of the service, promising that the availability of the service will meet a certain level during a certain period.
If the service provider fails to deliver on that promise, some kind of penalty is usually applied, like a partial or full refund, or monetary compensation.
A service-level indicator, or SLI, is the metric we use to measure whether a service meets the service-level objective (SLO). In performance monitoring, this could, for instance, be the number of successful requests made against the service during a specified period.
A service-level objective, or SLO, is an actual target, either internal or part of the service-level agreement (SLA), for the availability of the service. This is often expressed as a percentage (99.2%, for instance). If the service meets or exceeds this target, it's deemed stable.
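The pass/fail arithmetic behind such a target is simple. A sketch, assuming a hypothetical 99.2% availability SLO measured as the share of successful requests:

```javascript
// Returns true when measured availability meets or exceeds the target.
function meetsSlo(successfulRequests, totalRequests, target = 0.992) {
  return successfulRequests / totalRequests >= target;
}

meetsSlo(9925, 10000); // true  (99.25% >= 99.2%)
meetsSlo(9910, 10000); // false (99.10% <  99.2%)
```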
A smoke test is a type of test used to verify that the system under test can handle a minimal amount of load without any issues. It’s commonly used as a first step, to ensure that everything works as intended under optimal conditions, before advancing to any of the other performance test types. See Smoke Testing.
A soak test is a type of test used to uncover performance and reliability issues stemming from a system being under pressure for an extended period. See Soak Testing.
Stability is a trait used to describe a system under test’s ability to withstand failures and erroneous behavior under normal usage.
A stress test is a type of test used to identify the limits of what the system under test is capable of handling in terms of load. See Stress Testing.
System under test refers to the actual piece of software that we're currently testing. This could be an API, a website, infrastructure, or any combination of these.
A test configuration is the options object of a test script, or the configuration parameters passed via the CLI. See Options.
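For example, a ramping load profile can be declared in the script's options object via `stages` (a sketch; the durations and VU targets here are arbitrary):

```javascript
export const options = {
  stages: [
    { duration: '1m', target: 50 }, // ramp up to 50 VUs over one minute
    { duration: '3m', target: 50 }, // hold at 50 VUs for three minutes
    { duration: '1m', target: 0 },  // ramp back down to 0 VUs
  ],
};
```

Simpler cases can also be expressed on the command line, e.g. `k6 run --vus 50 --duration 3m script.js`.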
A test run is an individual execution of a test script. See Running k6.
A test script is the actual code you run as part of your test run, as well as any (or at least most) of the configuration needed to run the code. It defines how the test will behave as well as what requests will be made. See the Single Request example.
User journey is used to describe a sequence of actions taken by either a real or simulated user.
User Scenario is a synonym for user journey.
Vertical scalability is a trait describing to what degree a system under test’s performance or capacity may be increased by adding more hardware resources to a node (RAM, cores, bandwidth, etc.).
Virtual Users, or VUs, are used to perform separate and concurrent executions of your test script. They can make HTTP(S) and WebSocket requests against a webpage or API.
Virtual Users, although emulated by k6 itself, can be used to mimic the behavior of a real user.
Virtual Users in the context of web apps and websites
Virtual Users are designed to act and behave like real users and browsers do. That is, they are capable of making multiple network connections in parallel, just as a browser would. When using an http.batch request, HTTP requests are sent in parallel. For further information, refer to the article about load testing websites.
A formula for estimating this is covered in the tutorial on calculating the number of Virtual Users with Google Analytics.
Virtual Users in the context of APIs
When testing individual API endpoints, you can take advantage of each VU making multiple requests to produce a request rate many times higher than your VU count. For example, if your test is stable with each VU making 10 requests per second (RPS), and you want to reach 1,000 RPS, you may only need 100 VUs. For more information on testing APIs, refer to our article on API Load Testing.
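The arithmetic in the example above can be sketched as:

```javascript
// VUs needed to hit a target request rate, given the sustained rate a
// single VU achieves (rounded up, since VUs are whole units).
function vusNeeded(targetRps, rpsPerVu) {
  return Math.ceil(targetRps / rpsPerVu);
}

vusNeeded(1000, 10); // 100 VUs, matching the example above
```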