Whether you’re new to DevOps or an experienced hand, it can be a challenge to explain the difference between performance testing and performance tuning. It happens to us, too. Here’s how we describe the two.
Performance testing helps you see whether your app, site, or API responds quickly enough to be usable in most scenarios. As we’ve explained elsewhere, a full performance test takes some time, so it’s best run against a daily or other regular build (but probably not more often than daily). Run regularly, it can illuminate where code changes affect performance.
The key question in performance testing: is your app (or site or API) responding in a reasonable amount of time? Does your site load quickly enough that you don’t lose customers? When you set a performance benchmark - a level at which site performance is acceptable under normal traffic and normal use - performance testing can tell you whether that benchmark is being met.
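What a benchmark check might look like in practice: time repeated requests and compare the worst case against your response-time budget. This is a minimal sketch - the handler, the 500 ms budget, and the simulated work are all hypothetical stand-ins for a real endpoint and a real target.

```python
import time

# Hypothetical response-time budget: 500 ms (an assumption, not a universal rule).
BUDGET_SECONDS = 0.5

def handle_request():
    """Stand-in for a real page or endpoint handler (for illustration only)."""
    time.sleep(0.05)  # simulate the work a real request would do
    return "ok"

def measure_latency(handler, runs=20):
    """Time several requests and return the worst-case latency in seconds."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        handler()
        worst = max(worst, time.perf_counter() - start)
    return worst

latency = measure_latency(handle_request)
print(f"worst-case latency: {latency:.3f}s (budget: {BUDGET_SECONDS}s)")
assert latency <= BUDGET_SECONDS, "performance benchmark missed"
```

In a real pipeline, the handler would be replaced by requests against a staging deployment, and the run would fail the build when the budget is exceeded.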
Performance testing can also illuminate places in your app where performance suffers. When it does, it’s time for performance tuning.
Where performance testing is something you do regularly (daily builds, perhaps), performance tuning is what you do when your performance tests tell you something’s not responsive enough in your app, site, or API.
Performance tuning is akin to troubleshooting. You’ll compare your performance test results to your server logs or other DevOps instrumentation metrics. From that comparison, you’ll find performance bottlenecks and work to eliminate them. They may not all be code-related (in other words, it wasn’t your elegantly crafted code that created the problem). It could be a slow external API response, so you’ll need a way for your app to degrade gracefully when that response doesn’t arrive in time. Or perhaps a particular database query is slow, so you’ll have to find ways to accelerate it.
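For the slow-external-API case, one common shape for graceful degradation is a timeout plus a fallback: give the upstream call a bounded amount of time, and serve cached or default content when it stalls. A minimal sketch, in which the API call, the timings, and the fallback content are all assumptions for illustration:

```python
import concurrent.futures
import time

def fetch_recommendations():
    """Stand-in for an external API that has become slow (hypothetical)."""
    time.sleep(0.5)  # simulates a stalled upstream service
    return ["personalized", "results"]

FALLBACK = ["popular", "items"]  # cached/default content (an assumption)

def recommendations_with_timeout(timeout=0.1):
    """Call the external API, but degrade to a fallback if it is too slow."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch_recommendations)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        return FALLBACK  # degrade gracefully instead of blocking the page
    finally:
        pool.shutdown(wait=False)

print(recommendations_with_timeout())  # serves the fallback when the API stalls
```

The design choice here is that a fast, slightly generic response usually beats a slow, perfect one; the timeout value itself is something your performance tests help you calibrate.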
From that description, it’s obvious that the two are related but not identical. You’ll likely tune less frequently than you test.
But that doesn’t mean you won’t still tune fairly often. For example, let’s say you’re working on a new feature and you commit some code on Monday. Tuesday, when you arrive for work, you notice the new code performed 10% slower in the overnight performance tests. That’s not a big deal, and well within acceptable performance levels. However, you’ll still spend part of Tuesday optimizing that code: no one wants performance to deteriorate continually.
While it’s important to understand the difference between these two often-confused terms, they’re not separate, siloed activities. They’re integrated into the development process. As in the example, without frequent performance testing and tuning, that code might go into production performing 10% more slowly while you start on the next feature, which may also perform 10% more slowly. Over time, each new feature introduces small performance regressions and the app’s performance suffers noticeably. In a "traditional" process that separates performance testing from performance tuning, those issues might be addressed months after they were introduced. Catching issues early, with frequent testing and tuning, is much better: not only is it easier for you, the developer, to fix issues while you’re in context, working on the code that caused them, but it also prevents long-term app performance issues from accumulating while they await tuning.
Both performance testing and performance tuning are therefore likely to be part of your DevOps, continuous integration / continuous delivery pipeline and schedule.