I've been writing about page size and complexity for years. If you've been working in the performance space for a while and you hear me start to talk about page growth, I'd forgive you if you started running away. ;)
But pages keep getting bigger and more complex year over year – and this increasing size and complexity is not fully mitigated by faster devices and networks, or by our hard-working browsers. Clearly we need to keep talking about it. We need to understand how ever-growing pages work against us. And we need to have strategies in place to understand and manage our pages.
In this post, we'll cover:
When we talk about page size, we're referring to overall page weight and complexity. This includes:
A Google machine-learning study I participated in a few years ago found that the total number of page elements was the single greatest predictor of conversions. The number of images on the page was the second greatest predictor.
We also found that the more scripts there were in a series of pages in a session, the less likely that session was to convert.
Image size is another issue, as excessive image weight hurts your SEO ranking in Google Image Search. Given that Image Search comprises upwards of 26% of Google searches, this is something you should care about. (You can dive deeper into image optimization and SEO tips in this article in Google Search Central.)
Before we get into these numbers, some background and caveats:
As someone who's been watching these numbers for more than ten years, this growth doesn't come as a surprise. The median size of 2159 KB is about what I expected to see, given how many pages I inspect in any given week that are much larger than this.
Predictably, much of this page growth is driven by images and JavaScript. Images account for roughly 945 KB (44%) of median desktop page weight, and JS accounts for about 500 KB (23%).
The pages being served to mobile users have experienced massive growth. At 1984 KB, the median mobile page is only somewhat smaller than the median desktop page (2159 KB). While it is possible to have large, robust pages that feel fast, you should care about page bloat in terms of how it affects mobile users, especially mobile-only users who are on older low-CPU devices, or who are dealing with bandwidth constraints or data limits.
We're serving about 876 KB of images and 453 KB of scripts to mobile – in other words 67% of total page weight. JavaScript is a massive CPU hog, so this is concerning, especially if your users are on older devices with less processing power. (If you're counting on your users having newer devices, you might want to rethink that. In recent years, the smartphone replacement cycle has gotten longer and it looks like this trend is here to stay.)
Focusing on medians is not enough. You should also care about your cohort of users at the 90th percentile. Ten percent of your users may not sound like much, but if your site gets 10 million visitors a month, that means a million of those people are having a really poor experience.
The 90th percentile page served to desktop is 8271 KB and contains 177 resources. Almost 75% of page weight is consumed by images, which add up to more than 5 MB.
The 90th percentile page served to mobile is only slightly smaller, at 7574 KB and 168 resources.
You can see this relative flatness at both the median and 90th percentile. This actually came as a bit of a surprise. I had assumed that there'd be more significant growth, especially given the growth in total page size. More on that in a bit.
No surprises here. We've moved considerably beyond the pared-down pages we used to serve to mobile users a decade ago.
We're serving fewer images, but the images we do serve are high-resolution and/or unoptimized. The median page today serves 25 images, compared to 42 images back in 2012. While the number of image requests has dropped dramatically, their combined size has almost tripled, from 331 KB to 945 KB.
This trend carries over to mobile. The number of image requests has remained the same, but in this case the total image size has increased almost 6X – from 151 KB to 876 KB.
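If you want a quick, rough way to spot oversized images on your own pages, here's a minimal sketch you can paste into the browser console. It flags images whose intrinsic dimensions are much larger than the size they're actually displayed at. (The 2x threshold is an arbitrary starting point, and keep in mind that high-DPI screens legitimately need larger images.)

```js
// Rough image audit – paste into the browser console.
// Flags images whose intrinsic size is much larger than the size they're
// actually rendered at. The 2x threshold is an arbitrary starting point;
// remember that high-DPI screens legitimately need larger images.
const OVERSIZE_RATIO = 2;

const oversized = [...document.images]
  .filter((img) => img.complete && img.naturalWidth > 0 && img.clientWidth > 0)
  .map((img) => ({
    src: img.currentSrc || img.src,
    intrinsic: `${img.naturalWidth}x${img.naturalHeight}`,
    displayed: `${img.clientWidth}x${img.clientHeight}`,
    ratio: +(img.naturalWidth / img.clientWidth).toFixed(2),
  }))
  .filter((entry) => entry.ratio >= OVERSIZE_RATIO);

console.table(oversized);
```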
Not only are we serving more scripts than ever – with all the performance risks that those entail – we're also bulking out pages with 500 KB of JS weight.
Mobile pages fare only slightly better with 453 KB of JS weight.
More stylesheets mean more risk of performance degradation. The amount of CSS on your pages is something to keep an eye on, because problematic CSS can block the rest of your page from rendering.
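If you're curious which resources the browser actually treated as render-blocking on a given page, here's a rough sketch that leans on the renderBlockingStatus field in Resource Timing. (At the time of writing, this field is only reported by Chromium-based browsers, so treat it as a quick probe rather than a complete audit.)

```js
// List resources the browser treated as render-blocking.
// renderBlockingStatus is part of Resource Timing but is currently only
// reported by Chromium-based browsers – treat this as a quick probe.
const blocking = performance
  .getEntriesByType('resource')
  .filter((entry) => entry.renderBlockingStatus === 'blocking')
  .map((entry) => ({
    url: entry.name,
    type: entry.initiatorType,
    transferKB: Math.round(entry.transferSize / 1024),
  }));

console.table(blocking);
```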
Google's Core Web Vitals are a set of metrics intended to measure performance from a user-experience perspective. While total page size and weight don't directly affect Web Vitals, there are some nuanced ways that you should think about the number and size of resources you're serving.
Largest Contentful Paint (LCP) measures when the largest visual element on the page renders. Page bloat issues that can hurt your LCP time include:
First Input Delay (FID) measures how quickly a page responds to a user interaction. Input delay happens when the browser's main thread is too busy to respond to the user, most commonly because it's tied up parsing and executing large JavaScript files.
There's a lot of unnecessary JS on many pages, and as noted above, JS files have gotten bigger over the years. The more JS on your page, the more potential for slow FID times. As Tim Kadlec said a couple years back in his performance.now() talk When JavaScript Bytes:
JavaScript is, byte-for-byte, the most expensive resource on the web and we’re using more of it than ever in our sites. You can optimize the delivery, the parsing and the execution until you’re blue in the face but you’ll never make it as performant as simply not using it in the first place.
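If you want to see how often the main thread gets tied up on your own pages, here's a minimal sketch that uses the Long Tasks API (currently Chromium-only) to log tasks that block the main thread for more than 50 ms:

```js
// Log main-thread tasks that run longer than 50 ms – a common source of
// slow input delay. The Long Tasks API is currently Chromium-only.
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    console.log(
      `Long task: ${Math.round(task.duration)} ms, starting at ${Math.round(task.startTime)} ms`
    );
  }
}).observe({ type: 'longtask', buffered: true });
```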
Cumulative Layout Shift (CLS) measures how visually stable a page is. It's a formula-based metric that, put simply, takes into account how much a page's visual content shifts within the viewport, combined with the distance that those visual elements shifted. You can dig deeper into the mechanics of how CLS is calculated, but the human-friendly definition is that CLS helps you understand how likely a page is to deliver a janky, unpleasant experience to viewers.
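Under the hood, each individual layout shift is scored as the fraction of the viewport affected multiplied by the distance the content moved. If you want to watch shifts happen on your own pages, here's a rough sketch that logs them as they occur. (This simple running total ignores the session-window logic the official metric uses; for real measurement, lean on your monitoring tool or Google's web-vitals library.)

```js
// Running tally of layout shifts that weren't triggered by recent user input.
// Note: the official CLS metric groups shifts into session windows rather
// than summing everything, so treat this as a rough approximation – use your
// monitoring tool or the web-vitals library for real measurement.
let clsTotal = 0;

new PerformanceObserver((list) => {
  for (const shift of list.getEntries()) {
    if (!shift.hadRecentInput) {
      clsTotal += shift.value;
      console.log(
        `Layout shift: ${shift.value.toFixed(4)} (running total: ${clsTotal.toFixed(4)})`
      );
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```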
CLS is strongly affected by the number of resources on the page, and by how and when those resources are served. You can see this by looking at the synthetic test result for Sears.com (again, available via our Industry Benchmarks). The CLS score for this page is 1.0468. For context, Google recommends a score of 0.1 or less. Translation: This is a really, really janky page!
These screenshots highlight the most significant visual element shifts:
Not surprisingly, this page, while not overly huge in terms of total size (almost 3 MB), contains a massive number of requests: 559 in total. The bulk of those are images (175 requests), JavaScript (140 requests), and 'other' (133 requests).
Looking at the waterfall chart for this same page, we see that there are:
That's a lot of requests!
Yes. While page size can be a red flag for real performance issues, if you care about user experience, you need to take a closer look at how your pages are built to see whether their size and complexity actually affect how fast your site feels to your users.
It's not enough to look at crude metrics like total requests and size. You need to know:
Amazon is a good example of a site that serves large, fast pages. In this recent test run from our Industry Page Speed Benchmarks, you can see that the Amazon home page ranks fastest in terms of Start Render. This is despite the fact that the page contains 410 requests and weighs in at 4311 KB – far beyond the median sizes shared above. Yet the page has a Start Render time of 0.3 seconds, a Largest Contentful Paint time of 0.48 seconds, and a CLS score of 0.1526.
Looking at a close-up of Amazon's waterfall chart (below) reveals why. While there are 38 resources that load before Largest Contentful Paint, only one of them is render blocking, and all of them are extremely lean.
I meet with so many people whose job it is to build and optimize websites. When we look at how their pages are built, I routinely witness their surprise at spotting things like ghost scripts, huge unoptimized images, and blocking resources they weren't aware of. These are smart people. The problem isn't them – it's the scale of their sites, the speed of their release cycles, and the number of people who touch each page.
We're never going to get our lean, pre-1999, under-1MB web pages back. But we can regain control over the pages we have today.
Your pages probably have a lot of unnecessary junk on them, and some of that junk is unoptimized. Too much stuff means you can't see the forest for the trees. You can have large, complex pages that still feel fast. The key to a good user experience is quickly delivering the most important content first. Here are some great resources for analyzing and optimizing the critical rendering path.
All the fancy performance monitoring tools in the world can't help you if you don't have a strong performance culture at your organization. Here are some tips and best practices to help on that journey.
Page bloat happens when people stop paying attention, so we need to monitor our pages consistently over time. Integrating performance testing into your CI/CD process is a great way to fight regression, especially when combined with performance budgets. By creating budgets for key metrics – such as Start Render, Largest Contentful Paint, and various page size and weight metrics – you can get alerted whenever they go out of bounds.
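What might a budget check look like in practice? Here's a minimal sketch of a Node script that could run in a CI pipeline, comparing test results against a set of budgets and failing the build when any of them are blown. (The metric names, budget values, and results below are hypothetical placeholders; in reality the numbers would come from your testing tool's API, and the budgets would be your own.)

```js
// Minimal budget check for a CI pipeline (Node.js). The metric names, budget
// values, and test results below are hypothetical placeholders – in practice
// the metrics would come from your synthetic testing tool's API.
const budgets = {
  startRenderMs: 1500,
  largestContentfulPaintMs: 2500,
  totalPageWeightKB: 2000,
  imageWeightKB: 1000,
};

function checkBudgets(metrics) {
  const failures = Object.entries(budgets).filter(
    ([metric, budget]) => metrics[metric] > budget
  );

  for (const [metric, budget] of failures) {
    console.error(`Budget exceeded: ${metric} = ${metrics[metric]} (budget: ${budget})`);
  }

  // A non-zero exit code fails the build.
  process.exit(failures.length > 0 ? 1 : 0);
}

// Example run with made-up test results.
checkBudgets({
  startRenderMs: 1200,
  largestContentfulPaintMs: 3100,
  totalPageWeightKB: 2300,
  imageWeightKB: 950,
});
```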
While some of your users may have newer devices and speedy networks, not all of them are this lucky. If you're using a real user monitoring (RUM) tool, keep an eye on your performance metrics at the 75th and 95th percentiles so you understand what your site's slower experiences look like.
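If you're working with raw RUM samples rather than reading percentiles out of a dashboard, here's a simple nearest-rank sketch for computing them. (Most RUM tools report these percentiles directly, so this is purely illustrative.)

```js
// Nearest-rank percentile over a set of RUM samples (e.g. LCP times in ms).
// Purely illustrative – most RUM tools report these percentiles directly.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const lcpSamples = [980, 1200, 1450, 2100, 2600, 3400, 5200, 8900];
console.log(`p75: ${percentile(lcpSamples, 75)} ms`); // a "typical slower" experience
console.log(`p95: ${percentile(lcpSamples, 95)} ms`); // the long tail
```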
If you're not already using SpeedCurve to monitor your site's performance, start your free trial!