For more than ten years, I've been writing about page bloat, its impact on site speed, and ultimately how it affects your users and your business. You might think this topic would be played out by now, but every year I learn new things – beyond the overarching fact that pages keep getting bigger and more complex, as you can see in this chart, which draws on data from the HTTP Archive.
In this post, we'll cover:
"I made my pages faster, but my business and user engagement metrics didn't change. WHY???"
"How do I know how fast my pages should be?"
"How can I demonstrate the business value of performance to people in my organization?"
If you've ever asked yourself any of these questions, you may find the answers by identifying and understanding the performance plateau for your site.
The performance plateau is the point at which changes to your website’s rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter because you’ve bottomed out in terms of business and user engagement metrics.
In other words, if your performance metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business.
The concept of the performance plateau isn't new. I first encountered it more than ten years ago, when I was looking at data for a number of sites and noticed that not only was there a correlation between performance metrics and business/engagement metrics, but there was also a noticeable plateau in almost every correlation chart I looked at.
A few months ago, someone asked me if I'd done any recent investigation into the performance plateau, to see if the concept still holds true. When I realized how much time had passed since my initial research, I thought it would be fun to take a fresh look.
In this post, I'll show how to use your own data to find the plateau for your site, and then what to do with your new insights.
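If you'd rather automate that hunt than eyeball correlation charts, here's a minimal sketch of one way to do it, assuming you can export per-session RUM records containing an LCP value and a conversion flag. The field names, bucket size, session minimum, and noise threshold are all hypothetical placeholders you'd tune against your own data.

```typescript
// Minimal sketch: locate the performance plateau in exported RUM data.
// Assumes each record has an LCP value (ms) and a conversion flag.
// Field names and thresholds are hypothetical.

interface Session {
  lcp: number;        // Largest Contentful Paint, in milliseconds
  converted: boolean; // did this session include a conversion?
}

// Conversion rate per LCP bucket, sorted from fastest to slowest.
function conversionByBucket(
  sessions: Session[],
  bucketMs = 500,
  minSessions = 1000 // skip buckets too small to trust
): Array<[number, number]> {
  const counts = new Map<number, { total: number; converted: number }>();
  for (const s of sessions) {
    const bucket = Math.floor(s.lcp / bucketMs) * bucketMs;
    const c = counts.get(bucket) ?? { total: 0, converted: 0 };
    c.total++;
    if (s.converted) c.converted++;
    counts.set(bucket, c);
  }
  return [...counts.entries()]
    .filter(([, c]) => c.total >= minSessions)
    .sort(([a], [b]) => a - b)
    .map(([bucket, c]) => [bucket, c.converted / c.total] as [number, number]);
}

// The plateau is the flat region at the fast end of the curve. Walk from
// the fastest bucket and return the last one before conversion starts
// dropping by more than the noise threshold.
function plateauEnd(curve: Array<[number, number]>, noise = 0.002): number | null {
  for (let i = 1; i < curve.length; i++) {
    if (curve[i - 1][1] - curve[i][1] > noise) {
      return curve[i - 1][0]; // slower than this, speed starts to cost you
    }
  }
  return null; // the curve never fell off: the whole range looks flat
}
```

In practice, plotting conversion rate against LCP buckets and eyeballing where the curve flattens works just as well; the code above simply makes the definition of "flat" explicit.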
Being able to set a sample rate in your real user monitoring (RUM) tool allows you to monitor your pages while managing your spending. It's a great option if staying within a budget is important to you. With the ability to sample real user data comes this question...
How much should you sample? This frequently asked question doesn't have a simple answer, and refining your sample rate can be hit or miss if you aren't careful. In a previous post, I discussed a few considerations when determining how much RUM data you really need to make informed decisions. If you sample too much, you may be collecting a lot of data you'll never use. On the other hand, if you sample too little, you risk creating variability in your data that is hard to trust.
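To make that tradeoff concrete before we look at real data, here's a toy simulation. The traffic volume and load-time distribution are invented purely for illustration; what it shows is how a percentile estimate like p75 gets noisier as the sample rate drops.

```typescript
// Toy simulation: how stable is a p75 estimate at different sample rates?
// The population size and distribution below are invented for illustration.

function gaussian(): number {
  // Box-Muller transform: turn two uniform randoms into a standard normal.
  const u = Math.random();
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function p75(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length * 0.75)];
}

// 100,000 page views with roughly log-normal LCP values (in ms).
const population = Array.from({ length: 100_000 }, () =>
  Math.round(Math.exp(7.5 + 0.5 * gaussian()))
);
const truth = p75(population);

for (const rate of [0.01, 0.05, 0.25, 1.0]) {
  const estimates: number[] = [];
  for (let trial = 0; trial < 200; trial++) {
    // Keep each page view with probability equal to the sample rate.
    const sample = population.filter(() => Math.random() < rate);
    estimates.push(p75(sample));
  }
  const min = Math.min(...estimates);
  const max = Math.max(...estimates);
  console.log(`rate ${rate}: p75 between ${min} and ${max} ms (true value ${truth} ms)`);
}
```

Run this against a smaller population and the spread widens further, which is exactly why site size matters in the comparisons that follow.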
In this post, we'll do a bit of research and let the data speak for itself. I looked at the impact of sampling at various levels for three t-shirt-sized companies (Small, Medium, Large), with the hope of providing some guidance for those of you considering sampling your RUM data.
In this post, I'll cover:
Experimentation tools that use asynchronous scripts – such as Google Optimize, Adobe Target, and Visual Website Optimizer (VWO) – recommend using an anti-flicker snippet to hide the page until they've finished executing. But this practice comes with some performance measurement pitfalls.
In this post, we'll look at how anti-flicker snippets work, their impact on Web Vitals, and how to measure the delay they add to visitors' experience.
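As context for what follows, here's a simplified sketch of the general pattern. It is not any vendor's actual snippet, and the experimentReady hook is a hypothetical stand-in for a tool's real ready callback: the page is hidden with a CSS class, revealed on a signal or a timeout (whichever comes first), and bracketed with performance marks so you can see how long visitors stared at a blank page.

```typescript
// Simplified sketch of the anti-flicker pattern (not any vendor's exact
// snippet). Pair with CSS: .async-hide { opacity: 0 !important; }

const HIDE_CLASS = 'async-hide';
const TIMEOUT_MS = 4000; // a common default, and a long time to show nothing

performance.mark('antiflicker:start');
document.documentElement.classList.add(HIDE_CLASS);

function reveal(reason: string): void {
  if (!document.documentElement.classList.contains(HIDE_CLASS)) return;
  document.documentElement.classList.remove(HIDE_CLASS);
  performance.mark('antiflicker:end');
  // The measure surfaces the hidden duration in devtools and RUM timelines.
  performance.measure(`antiflicker:${reason}`, 'antiflicker:start', 'antiflicker:end');
}

// Fallback: never keep the page hidden longer than the timeout.
setTimeout(() => reveal('timeout'), TIMEOUT_MS);

// Hypothetical hook the experimentation tool would call once its variants
// have been applied. Real tools expose their own ready callbacks.
(window as any).experimentReady = () => reveal('experiment-ready');
```

One wrinkle worth noting: the hidden content is painted, just at zero opacity, and different metrics treat invisible-but-painted content differently. A paint-based metric may fire while the page is still blank, while a visual metric like Start Render gets pushed later, which is part of what makes these snippets tricky to measure.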
Page Speed Benchmarks is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries – from retail to media – over the past year. This dashboard is publicly available (meaning you don't need a SpeedCurve account to explore it) and is a treasure trove of meaningful data that you can use for your own research.
The dashboard allows you to easily filter by region, industry, mobile/desktop, fast/slow, and key web performance metrics, including Google's Core Web Vitals. (Scroll down to the bottom of this post for more testing details.)
At the time of writing this post, these were the home pages with the fastest Start Render times in key industries:
As you can see, I've included Largest Contentful Paint alongside Start Render in this chart, for reasons I explain below.
I've been writing about page size and complexity for years. If you've been working in the performance space for a while and you hear me start to talk about page growth, I'd forgive you if you started running away. ;)
But pages keep getting bigger and more complex year over year – and this increasing size and complexity is not fully mitigated by faster devices and networks, or by our hard-working browsers. Clearly we need to keep talking about it. We need to understand how ever-growing pages work against us. And we need to have strategies in place to understand and manage our pages.
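One place to start: you can get a rough read on page weight from real browsers using the Resource Timing API. The sketch below is a starting point rather than a complete solution; keep in mind that transferSize reads as zero for cache hits and for cross-origin responses without a Timing-Allow-Origin header, so the totals are a lower bound.

```typescript
// Rough sketch: measure page weight from inside the browser with the
// Resource Timing API. transferSize is bytes over the wire; it reads as
// zero for cache hits and for cross-origin responses lacking a
// Timing-Allow-Origin header, so treat the totals as a lower bound.

const nav = performance.getEntriesByType('navigation')[0] as
  | PerformanceNavigationTiming
  | undefined;
const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

let totalBytes = nav?.transferSize ?? 0; // the HTML document itself
const byType = new Map<string, number>();

for (const r of resources) {
  totalBytes += r.transferSize;
  // Group by initiator (script, img, css, ...) to see what's driving growth.
  byType.set(r.initiatorType, (byType.get(r.initiatorType) ?? 0) + r.transferSize);
}

console.log(`~${Math.round(totalBytes / 1024)} KB transferred so far`);
for (const [type, bytes] of byType) {
  console.log(`  ${type}: ${Math.round(bytes / 1024)} KB`);
}
```

Send figures like these home in your RUM beacons and you can track your own page-growth curve over time, not just the industry's.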
In this post, we'll cover:
Here are some common questions I’m asked when I talk with people about performance:
Today, I’m very excited to announce the release of a new project that helps answer those questions – and more!
Page Speed Benchmarks is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries – from retail to media.
With Page Speed Benchmarks, you can do things like:
If you already like tools like the HTTP Archive, I think you'll love how you can use Page Speed Benchmarks to complement the insights you're already getting. Keep reading to find out how we set up these benchmarks, and how you can mine our test data – even if you're not a SpeedCurve user – for your own performance research.