SpeedCurve Blog https://www.speedcurve.com/blog/ Speed matters. Get the latest on how the areas of design and performance overlap with a focus on creating great user experiences. Why you need to know your site's performance plateau (and how to find it) https://www.speedcurve.com/blog/web-performance-plateau <p style="text-align: left;"><span class="large-para">Have you ever wondered why your site got faster, but your business and user engagement metrics didn't improve? The answer might lie on the performance plateau.</span></p> <p style="text-align: left;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/performance-plateau-clv.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: left;">Have you ever asked yourself these questions?</p> <p style="text-align: left; padding-left: 30px;"><em>"I made my pages faster, but my business and user engagement metrics didn't change. WHY???"</em></p> <p style="text-align: left; padding-left: 30px;"><em>"How do I know how fast my site should be?"</em></p> <p style="text-align: left; padding-left: 30px;"><em>"How can I demonstrate the business value of page speed to people in my organization?"</em></p> <p>The answers might lie with identifying and understanding the performance plateau for your site.</p><h2>What is the "performance plateau"?</h2> <p>The performance plateau is the point at which changes to your website&rsquo;s rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter because you&rsquo;ve bottomed out in terms of business and user engagement metrics.</p> <p>In other words,&nbsp;<strong>if your page speed metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business</strong>.</p> <p style="font-size: 16px;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><em>A <a 
href="https://www.speedcurve.com/blog/site-speed-business-correlation/">correlation chart</a> is an essential tool for identifying your performance plateau. This chart shows that, for this site, bounce rate dramatically worsens when LCP time slows from 0.1s to 0.4s. After that, bounce rate plateaus &ndash; it remains at around 75% for all sessions with LCP times slower than 0.4s.&nbsp;</em></p> <p style="font-size: 16px;">The concept of the performance plateau isn't new. I first encountered it more than ten years ago, when I was looking at data for a number of sites. I noticed that not only was there a correlation between performance metrics and business/engagement metrics, there was also a noticeable plateau in almost every correlation chart I looked at.&nbsp;</p> <p style="font-size: 16px;">A few months ago someone asked me if I've done any recent investigation into the performance plateau, to see if the concept still holds true. When I realized how much time has passed since my initial research, I thought it would be fun to take a fresh look.</p> <p style="font-size: 16px;">In this post, I'll show how to use your own data to find the plateau for your site, and then what to do with your new insights.</p> <h2>Background</h2> <p>For this investigation, I selected four sites that experience a significant amount of user traffic. 
For each site, I used a month's worth of RUM (real user monitoring) data to generate correlation charts.</p> <p><a href="https://www.speedcurve.com/blog/site-speed-business-correlation/">Correlation charts</a> show the relationship between performance metrics &ndash; in these instances, Start Render and Largest Contentful Paint (LCP) &ndash; and user engagement (measured as bounce rate). They're a great tool for showing non-technical folks how performance affects the business.</p> <p>(You can also create correlation charts that show&nbsp;<a href="https://support.speedcurve.com/docs/conversion-rates">the relationship between performance metrics and business metrics</a>, such as conversion rate and cart size, but bounce rate is easier to measure right out of the box with most RUM tools.)</p> <h2>Results</h2> <p>The correlation charts below show the distribution of all visits, with each yellow bar representing a cohort of visits that experienced a given Start Render or LCP time.
The blue bar represents the change in bounce rate across all cohorts.</p> <p>In each of the correlation charts below, I've highlighted:</p> <ul> <li><strong>Optimal speed</strong>&nbsp;&ndash; The cohort of sessions that correlated with the lowest (aka best) bounce rate for that site</li> <li><strong>Beginning of the performance plateau</strong>&nbsp;&ndash; The cohort of sessions where the bounce rate begins to plateau</li> <li><strong>Median measurement</strong>&nbsp;for all visits represented in the chart</li> </ul> <p>Keep reading for observations and takeaways.</p> <h3>Site A</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-1-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-1-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site B</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-2-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-2-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site C</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site D</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-4-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-4-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Observations</h2> <h3>1. 
A clear performance plateau emerged for each site</h3> <p>Each site experienced a plateau at which business metrics remained more or less the same as performance continued to degrade.</p> <h3>2. Plateaus emerged for both Start Render and Largest Contentful Paint</h3> <p>While it's great to see Largest Contentful Paint validated as a meaningful page speed metric, I'm even happier to see Start Render receive validation. That's because Start Render is widely available across browsers, while LCP still has limited browser availability.&nbsp;</p> <h3>3. The plateau emerges surprisingly quickly in some cases</h3> <p>For example, Site C's performance plateau starts at 400 milliseconds. That's early!</p> <h3>4. There's a lot of variability in the distance between the optimal bounce rate and the plateau</h3> <p>The steepness of the curve from optimal to plateau varies widely. For some sites (such as Site C), the difference was as little as 300 milliseconds. For others (such as Site A), the gap was as long as 9 seconds.</p> <h3>5. The plateau sometimes started later when looking at LCP</h3> <p>Creating correlation charts for both Start Render and LCP generated interesting results. In two of the four sites I looked at, the charts were roughly comparable. For the other two sites, the plateau started later for LCP than it did for Start Render. This could be attributed to the fact that LCP measures when the largest visual element has completely finished rendering, so it can occur much later than Start Render.</p> <h3>6. For some sites the performance plateau starts well before the median</h3> <p>Predictably, the optimal bounce rate generally correlated to the cohort of sessions that is much faster than the median. A bit more surprisingly, for some sites the performance plateau started well before the median.
This could come as a scary revelation for some site owners, because it means that the bulk of your user sessions are occurring on the plateau.</p> <h2>How to measure the performance plateau for your own site</h2> <p>I can't emphasize enough that the examples I've shared are illustrative, not prescriptive. The performance plateau for your site will be different from the plateau for another site. <strong>You need to look at your own real user data. </strong>(If you're new to performance, you might be interested in&nbsp;<a href="https://support.speedcurve.com/docs/synthetic-vs-real-user-monitoring-rum">this synthetic and real user monitoring explainer</a>.)</p> <p>Fortunately, the process for identifying your site&rsquo;s performance plateau is fairly straightforward. All you need is access to a statistically significant amount of your RUM data, plus whatever analytics tool you use for tracking business or user engagement metrics.&nbsp;</p> <h3>Step 1: Identify the metrics you want to measure</h3> <p>As mentioned above, bounce rate is a good metric to start with, because it's already gathered automatically by most real user monitoring tools.</p> <p>If you have access to other data sources, then you can create a variety of correlation charts. If you run an ecommerce site, then you can measure revenue, cart size, and conversion rate. If you work on a media site, then page views, session depth, and bounce rate matter.</p> <h3>Step 2: Gather lots of real user data</h3> <p>To ensure that you get statistically relevant results, the more data you can gather, the better. If your dataset is too small, you could get wonky results. When I conducted my investigation, I aggregated millions of transactions that took place over a single month.
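If you're wrangling raw RUM exports yourself, the aggregation behind a correlation chart is simple to sketch. Here's a minimal JavaScript example; the session shape and the 100-millisecond bucket width are illustrative assumptions, not a SpeedCurve API:

```javascript
// Sketch: bucket RUM sessions into LCP cohorts and compute bounce rate per cohort.
// The session shape ({ lcp, bounced }) and the 100ms bucket width are assumptions --
// adapt them to whatever your RUM tool actually exports.
function correlate(sessions, bucketMs = 100) {
  const cohorts = new Map();
  for (const { lcp, bounced } of sessions) {
    const bucket = Math.floor(lcp / bucketMs) * bucketMs;
    const c = cohorts.get(bucket) ?? { sessions: 0, bounces: 0 };
    c.sessions += 1;
    c.bounces += bounced ? 1 : 0;
    cohorts.set(bucket, c);
  }
  // One row per cohort: session count (the yellow bars) plus bounce rate (the overlay)
  return [...cohorts.entries()]
    .sort(([a], [b]) => a - b)
    .map(([bucket, c]) => ({
      bucket,                             // cohort start time, in ms
      sessions: c.sessions,               // height of the histogram bar
      bounceRate: c.bounces / c.sessions, // overlay value for this cohort
    }));
}
```

Feed the resulting rows to any charting tool: the session counts become the histogram, and the bounce rates become the overlay.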
(If you're interested in trying real user monitoring, you can start a <a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials">free RUM trial</a> in SpeedCurve.)</p> <h3>Step 3: Create correlation charts</h3> <p>I've demonstrated how I like to show bounce rate (or whatever business/engagement metric you're plotting) across the distribution of sessions. (If you're a SpeedCurve user, <a href="https://support.speedcurve.com/docs/create-correlation-charts">here's how to create correlation charts</a>.)</p> <h2>What to do with your findings</h2> <p>After you've finished your own investigation, you can do a few things with the results:</p> <h3>1. Share your findings within your organization</h3> <p><a href="https://www.speedcurve.com/blog/site-speed-business-correlation/">Correlation charts</a> are a powerful tool for showing stakeholders the impact that site speed has on the business. Even if your results aren't what you hoped they would be, you can use this data to prove the value of continuing to invest in performance.</p> <h3>2. Understand why your business metrics are not improving despite your efforts</h3> <p>This might seem a bit demoralizing, but when you think about it, it's actually helpful to know. When you know where your performance plateau begins, you can answer the question "Why don't my business or user engagement metrics improve when I make my site faster?" If you improve Start Render from 5 seconds to 3 seconds, but the performance plateau for your site starts at 2 seconds, you haven't yet made Start Render fast enough.&nbsp;</p> <h3>3. Change your performance goals</h3> <p>Set targets for moving more of your users into the cohorts that experience faster Start Render or LCP times.
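If you'd rather estimate where your plateau begins programmatically than by eyeballing the chart, you can scan your cohort data for the point where bounce rate stops moving. The sketch below is a rough heuristic, not a SpeedCurve feature; the row shape and the 3-percentage-point tolerance are assumptions you should tune against your own data:

```javascript
// Sketch: estimate where the plateau begins from cohort-level bounce rates.
// "Plateau" here is a heuristic: the first cohort after which bounce rate
// stays within a tolerance band. The default tolerance (3 percentage points)
// is an illustrative assumption, not a standard.
function plateauStart(rows, tol = 0.03) {
  // rows: [{ bucket, bounceRate }], sorted from fastest to slowest cohort
  for (let i = 0; i < rows.length; i++) {
    const rest = rows.slice(i).map((r) => r.bounceRate);
    if (Math.max(...rest) - Math.min(...rest) <= tol) {
      return rows[i].bucket; // plateau begins at this cohort
    }
  }
  return null; // no plateau found within tolerance
}
```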
Ideally, improving key site speed metrics for more of your users should improve bounce rate (or whatever user engagement or business metric you're tracking) as well. Ultimately, this is good for your business.</p> <p>You can use your performance plateau to set goals. Continuing with the example in point 2, above, if you know that the plateau starts at 2 seconds, you can create a Start Render target of 1.5 seconds to work toward.</p> <h3>4. Or DO NOT change your performance goals</h3> <p>In the Site C example, the optimal bounce rate occurs for the 100-millisecond LCP cohort, and the plateau starts just 300 milliseconds later. With a huge amount of work, you might succeed in delivering faster LCP times to more sessions, but would the effort be worth it?</p> <p>As the close-up view of the chart below shows, the bulk of sessions have speedy LCP times that are at the beginning of the performance plateau. In this case, the chart shows that perhaps you can be satisfied with your efforts, and your focus should be on fighting regressions and staying fast.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-c-closeup.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>5. Create a baseline to measure against</h3> <p>Repeat this exercise periodically &ndash; perhaps monthly, or semi-annually, or after a deploy where you've made a number of performance improvements &ndash; and compare the correlation charts over time. Ideally, you'll see more of your sessions fall into the faster section of the distribution, before the performance plateau.</p> <h2>Questions?
Feedback?</h2> <p>If you experiment with creating correlation charts and plotting the performance plateau for your site, I'd love to hear about your results!</p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Tue, 15 Apr 2025 00:00:00 +1200 Performance Hero: Alex Russell https://www.speedcurve.com/blog/performance-hero-alex-russell <p><span class="large-para">Our newest performance hero is passionate, provocative, and unapologetically honest. While he's a true champion for web performance, his impact can be measured more broadly across the web. Join us in celebrating Alex Russell!</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/539/alex-hero.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><a href="https://infrequently.org/">Alex Russell</a>&nbsp;has been a strong voice in the web community for as long as I can remember. He's currently a Partner PM at Microsoft, working on Edge. Before that, he spent several years working at Google on Chrome, web standards, and much more.</p> <p>Not only is Alex an accomplished engineer, he's also an amazing speaker and writer. I last saw Alex on stage at performance.now() in November, where he delivered&nbsp;<a href="https://youtu.be/0XwWVjQOmyg?si=7600So9o2KzMiCKF">this inspiring talk</a>&nbsp;that got a lot of attendees talking.</p><p>After reaching out to Alex for this interview, I was pleased to see how thoughtful he was in his response. 
Not only did he provide great insights, he was quick to point out a handful of colleagues at Microsoft he felt were more deserving of recognition, including <a href="https://www.linkedin.com/in/amiya-gupta">Amiya Gupta</a>, <a href="https://www.linkedin.com/in/ingrid-caldas-4b131823/">Ingrid Caldas</a>, and <a href="https://www.linkedin.com/in/pauljroy/">Paul Roy</a>.</p> <p>Alex's passion for the web platform &ndash; and making it accessible for all &ndash; carries through in his writing, his talks, and fortunately for us, his email responses! Here's our email interview.</p> <h3>How did you begin your journey in web performance?</h3> <p>"My work in the past decade on performance has been an exercise in working backwards from strategy to tactics.&nbsp;</p> <p>"The web has relatively few advantages versus its competitors (who are, not coincidentally, also major browser vendors), but one of the most misunderstood advantages is perspective: we get to step back from the frothy churn of early capability introduction on native OSes and (if we're doing our job right) ride the 80/20 line to add capabilities that are important at about the same rate that they become commodities, but with the benefit of mistakes made by native competitors.&nbsp;</p> <p>"That digestion procession often gets weaponised (or worse, fetishised,&nbsp;&agrave;&nbsp;la TC39 and CSSWG) as "thoughtful deliberation" by internal enemies of the web, but when it's working as intended, competition and urgency to ship creates a diversity of views about how to introduce a capability without having to explain their fundamental value.</p> <p>"But all of that value is contingent.
If the web is a safer way, for example, to configure your brand-new IoT device versus downloading some app from a store &ndash; but the user experience sucks &ndash; no CEO or PM worth their salt will want to associate their brand with our platform.</p> <p>"The same is true all the way down the capability spectrum: the web's potential to introduce safe, privacy-respecting ways of accessing the full potential of your devices is only as realistic as the willingness of decision makers to bet on the experience of the web. And if that experience is, in general, bollocks... why offer it on the menu?</p> <p>"Make no mistake: Apple's work to kill the web is a two-front war. First, they are denying critical capabilities that they give every Tom, Dick, and Harry willing to pay $99 a year to put something in their store. Simultaneously, they are deeply committed to undermining the reliability of foundational experiences like tapping, swiping, and typing. If those feel like tosh on the web via the phones of CEOs and board members, would anyone invest in a web-based pitch? And why?</p> <p>"The web is not what it promises. The web is what it does.</p> <p>"And this strategy logic plays out in Android-land, where there are huge teams at Google that have no higher goal than to get folks that are happy making web apps to put something native in the Play Store instead. They haven't been quite as successful in keeping the Chromies down as Apple did the WebKittens, but the net effect of various management and political games should be read with not insignificant scepticism. Why, for example, is Google still withholding the ability for competing browsers to ship real PWAs on Android (a.k.a. WebAPKs)? This stuff matters, and if you only read public blogs, you might think it's just a random walk of technology futures explored and abandoned, but it's very much not that.</p> <p>"Web developers need to wake up.&nbsp;</p> <p>"Native ecosystems want to eat your lunch. 
They're doing so in an ongoing way, and to the extent that the web is a shitty, underpowered experience on most of the world's devices across most of the world's networks, we are handing our enemies a gift. I can't make you care about the web, but I can suggest that if you do, you should pay attention to the real and present dangers it faces; no matter how slow-moving they appear."</p> <h3>As a Partner PM at Microsoft, what are your primary responsibilities?</h3> <p>"MSFT is a... unique... place, and I learned pretty quickly after joining that my title is more like a partner in the law or audit firm sense. It's a description of generic seniority with a specialization attached, but at the level of Partner, the specialization is mostly about "How many reports do you have?" I have none (which is good for everyone involved).</p> <p>"I do a weird job &ndash; roughly speaking, <a href="https://infrequently.org/2024/10/platforms-are-competitions/">platform strategy on behalf of the web</a>&nbsp;&ndash; and it's not one that anyone hires for. That means everyone who does this job hides out in their orgs doing something else on paper. For my 12+ years at Google, I passed as a co-TL for large-scale projects inside the web platform org, and at MSFT that's nominally a product management job. But what I care about, and what I work on regardless of which domain it's in, is in making the web a success versus its extremely potent and vicious competitors; adversaries that are threatening to extinguish it entirely on mobile.</p> <p>"An incredibly small community of people do this work. In one way or another, we've all found ourselves working in and around Chromium. That's a little disconcerting, but it also makes sense: Blink is the only engine whose funders have made space to pursue an even moderately web-forward agenda in the era of entirely proprietary mobile OSes, and the Blink Launch Process has an explicit bias towards progress when sufficient need can be demonstrated. 
Nobody does this work at Apple or Mozilla; the folks who work in platform strategy at Apple are entirely on the other side (trying to extinguish the web, often weaponizing standards and standards-adjacent processes in the process) and Mozilla... well, the less said the better.</p> <p>"So what the heck does that have to do with performance?</p> <p>"First, I help lead an effort called Project Sidecar within the Edge team. We're a virtual team of volunteers that consult with anyone in the company that wants help with the web. That is a general-purpose offer, and our goal is to learn about problems that folks are experiencing all over the system, but today the effort mostly focuses on performance because of the dire place that many teams find themselves in. Large swaths of our org bought into the dogmas and prescriptions of the current JavaScript community, and that's just a recipe for pain whenever React is involved.&nbsp;</p> <p>"Next, I keep the flame alive for a rebirth of the mobile web. Microsoft isn't heavily invested in that future today, but they let me work on it in ways that I think will matter.</p> <p>"Lastly, a huge fraction of Edge's UI surfaces are web-based. Improving our own understanding of, and alignment with, the modern web is a surefire way to make Edge a faster, more competitive product offering for users on devices across the spectrum. And making things great for users at the margins is what I care most about."</p> <h3>You launched the 'Performance Inequality Gap' series in 2017 and have consistently provided updates since then. What is the outlook for 2025?</h3> <p>"Decidedly mixed. 
Apologies for the screenshots of SVG charts, but it helps to tell the story visually.</p> <p>"We're starting to turn the corner on process nodes filtering down from the wealthy to the less-well-off, and that was basically frozen in amber until 2022:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/539/img1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Line chart showing performance per dollar for major phone manufacturers. Trend is upward, increasing steadily with a closing gap in 2024 between the segments." /></p> <p>"That nets out at improvements across the board in the past two years for perf per dollar, but not real-world performance:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/539/img2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Line chart showing single-core scores over time increasing steadily, with a clear division between high-end and low-end devices." /></p> <p>"Average selling prices are also stagnant, which creates real tension. Premium buyers are still heavily segmented versus the rest of the market:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/539/img3.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Line chart showing that prices for phones are very stagnant, with a large gap between high-end and low-end devices." /></p> <p>"And of course the one thing that hasn't changed &ndash; and the primary reason Apple still dominates in the metrics that matter most to web performance &ndash; is that every single Android SoC vendor continues to skimp on cache sizing:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/539/img4.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Line chart showing major phone providers cache sizing over time. iPhone is an order of magnitude higher than the others." 
/></p> <h3>What projects are you currently involved in, and what are you most enthusiastic about?</h3> <p>"Some I can talk about, lots I can't, sadly.</p> <p>"I'm incredibly excited about <a href="https://github.com/MicrosoftEdge/MSEdgeExplainers/blob/main/PerformanceControlOfEmbeddedContent/explainer.md">the work my colleagues Nishita and Luis are doing</a> to bring some of the ideas behind "Never-Slow Mode" to a contemporary context.</p> <p>"We've experienced many teams being relatively ambivalent about performance... until they find their app embedded in the context of some other site that objects to their bulk. That interplay tends to deliver results, and the more the browser can serve as the mediator (and de-personalise the conversation) the better. So we're hoping to tier up from that work to a more expansive view of embedder and self-declared performance controls over time.</p> <p>"A ton of work has been happening in our V8 and tooling teams to facilitate better memory attribution. That's exciting to me because as teams become more advanced in their understanding of their own performance, they start to care more about these aspects. Having better tools helps there, and Sulekha Kulkarni's V8 team within Edge is doing great work to make that more legible.</p> <p>"Renewed energy around tools that will let us remove code from userland is always exciting, so I'm enthusiastic about customisable &lt;select&gt; (now that Apple has relented after a dozen years of blocking it) and energy around future-looking additions to Web Components.</p> <p>"I also think it's finally time for us to create separate read and write phases in the DOM for style readback. I need to get those ideas on paper, but I hope for some progress there this year. 
Exciting times."</p> <h3>Looking ahead to 2025, what do you anticipate will be the main challenges and opportunities in the field of performance?</h3> <p>"Web performance faces two main challenges today:</p> <ul> <li>&nbsp;The strength of the cult that has formed around React</li> <li>&nbsp;The failure of browser vendors to push back forcefully</li> </ul> <p>"<a href="https://www.speedcurve.com/web-performance-guide/get-started-with-core-web-vitals/">Core Web Vitals</a> are a shadow of what's possible. We need to imagine a world where browsers take the user's side much more aggressively (and always in an even-handed, privacy-preserving way). That's possible, and when browsers get there, it will do a lot to deflate the React bubble, which I think will be cathartic for businesses and developers trapped in a shockingly inefficient local minima."</p> <p><strong><em>Thank you, Alex! Your continued work on behalf of the web platform extends far beyond performance. Thank you for being such a strong voice in our community.&nbsp;</em></strong></p> <p><strong><em>Do you have someone you'd like to recognize as a Performance Hero? Let us know at support@speedcurve.com!</em></strong></p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><em><img class="blog-img" src="https://blog-img.speedcurve.com/img/539/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></em></a></p> Tue, 08 Apr 2025 00:00:00 +1200 Correlation charts: Connect the dots between site speed and business success https://www.speedcurve.com/blog/site-speed-business-correlation <p><span class="large-para">If you could measure the impact of site speed on your business, how valuable would that be for you? 
Say hello to correlation charts &ndash; your new best friend.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/52/social-world-best-correlation-chart.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Here's the truth: <strong>The business folks in your organization probably don't care about page speed metrics.</strong> But that doesn't mean they don't care about page speed. It just means you need to talk with them using metrics they already care about &ndash; such as conversion rate, revenue, and bounce rate.</p> <p>That's why correlation charts are your new best friend.</p><h2>What is a correlation chart?</h2> <p>A correlation chart is a powerful data visualization that shows you <strong>the relationship between your page speed metrics and your business and user engagement metrics</strong>.</p> <p>Correlation charts are generated using&nbsp;<a href="https://www.speedcurve.com/features/performance-monitoring/">real user monitoring (RUM)</a> data. They give you a histogram view of all your&nbsp;user&nbsp;traffic, broken out into cohorts based on performance metrics, such as Start Render, Largest Contentful Paint, Interaction to Next Paint, and more. Each cohort shows you the median time for whatever metric you're tracking for the session. (Those cohorts are represented in the yellow columns in the chart below.)</p> <p>The next layer of the chart is where things get really interesting.</p> <p>You also get an overlay (the blue line in the chart below) that shows you the business or user engagement metric &ndash; commonly conversion rate or bounce rate, but there are many more &ndash; that correlates to each of these cohorts. 
This lets you see at a glance how closely your business/engagement metric aligns with the speed of your site.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/52/worlds-best-correlation-chart.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><em>As pages get slower for this site, conversion rate (predictably) decreases. The cohort of users who experienced Largest Contentful Paint times of 1.1 seconds also had the highest conversion rate &ndash; more than 6 percent!&nbsp;</em></p> <p>You can also see how quickly conversions drop off as LCP time degrades. At 2.5 seconds &ndash; which is Google's recommended threshold for LCP &ndash; the conversion rate is well under 3 percent. That's a huge drop!</p> <h2>Communicate to a broad audience</h2> <p>Correlation charts let even the most non-technical stakeholder easily see the connection between performance and the business KPIs they care about.</p> <p>In my experience talking about performance to a&nbsp;wide variety of audiences, <strong>correlation charts can be extremely effective in winning performance buy-in from key people in your organization</strong>. Not everyone understands the nuances of Core Web Vitals.
But everyone understands revenue and bounce rate.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/52/business-folks.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: right;"><sup><a href="https://www.freepik.com/free-photo/happy-successful-business-team-posing_6447849.htm#fromView=search&amp;page=1&amp;position=7&amp;uuid=99557297-ad29-44ec-a0a5-551616396529&amp;query=business+people">Image by pch.vector on Freepik</a></sup></p> <h2>Identify the performance plateau for your site</h2> <p>If you've ever made your pages faster but didn't see any changes in your business or user engagement metrics, you were probably frustrated &ndash; and understandably so.&nbsp;<strong>When setting page speed goals for your site, you need to understand your performance plateau. </strong>To do this, you first need to create a correlation chart.</p> <p>The&nbsp;<a href="https://www.speedcurve.com/blog/web-performance-plateau/">performance plateau</a>&nbsp;is the point at which changes to your pages' rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter because you&rsquo;ve bottomed out in terms of business and user engagement metrics.&nbsp;</p> <p>In other words, if your performance metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/52/performance-plateau-clv.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><em>A correlation chart is an essential tool in identifying your performance plateau. For this site, the conversion rate plateaus at 2.8 seconds. 
To improve conversions, you would need to move more traffic to the higher-converting side of the chart &ndash; as close to 1.1 seconds as possible.</em></p> <h2>Validate your metrics</h2> <p>You don't want to waste time optimizing metrics that ultimately don't move the needle for your business or users. <strong>Correlation charts help you validate the performance metrics you're tracking and optimizing.&nbsp;</strong></p> <p>For example, when Google was in the process of evaluating Interaction to Next Paint (INP) as the new interactivity metric in Core Web Vitals, <a href="https://www.speedcurve.com/blog/INP-user-experience-correlation/">we conducted an independent analysis to validate that INP is a meaningful page speed metric</a>. (By our definition, a meaningful metric is one that can be demonstrated to affect business or user engagement KPIs.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/52/inp-vs-conversion-correlation-chart.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><em>In this correlation chart, the fastest INP time (48ms) correlates to a 9% conversion rate for this retail site. An INP degradation of just 50ms correlates to a 7.6% conversion rate &ndash; a significant drop!</em></p> <p>My fellow SpeedCurver Cliff Crocker looked at RUM data for several sites &ndash; and specifically focused on correlation charts for each site. Cliff determined that, yes, having a faster Interaction to Next Paint time typically does correlate to better conversion rates. Knowing this, most people should feel confident that optimizing INP is a smart move.</p> <h2>Spot&nbsp;performance-blocking trends&nbsp;</h2> <p>You can even use correlation charts to see the relationship between page-construction metrics &ndash; like blocking JavaScript, blocking CSS, and number of image requests &ndash; and your other metrics! 
<strong>This lets you spot trends on your pages that could be hurting performance and your business.</strong>&nbsp;</p> <p style="text-align: center;"><img style="max-width: 1200px;" src="https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=1200&amp;auto=format,compress" sizes="(max-width: 1200px) 100vw, 60vw" srcset=" https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=250&amp;auto=format,compress 250w, https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=682&amp;auto=format,compress 682w, https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=972&amp;auto=format,compress 972w, https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=1188&amp;auto=format,compress 1188w, https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=1200&amp;auto=format,compress 1200w, https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=1250&amp;auto=format,compress 1250w, https://blog-img.speedcurve.com/img/lux-start-render-vs-bounce-rate-02.png?w=1689&amp;auto=format,compress 1689w" alt="Load Time vs Bounce Rate" width="100%" /></p> <p>For example, in the chart above, you can see that while there are fewer blocking resources on the faster pages, this number takes a sharp upturn starting with the cohort of pages that&nbsp;have a start render time&nbsp;of&nbsp;1.1&nbsp;seconds.</p> <p>If your goal is to deliver faster start render times to more users (and 1.1 seconds is a pretty good goal to shoot for), then this might trigger you to do an audit of your pages to analyze how your scripts and stylesheets are being executed.&nbsp;</p> <h2>Get started</h2> <p>We care about more than just showing you all your real user data. We want to show you the&nbsp;<em>most important</em>&nbsp;data. 
And we want to make it easy for you to share that data with people throughout your organization.&nbsp;</p> <p><strong>If you&rsquo;re already a SpeedCurve RUM user:</strong>&nbsp;Simple correlation charts are available at the top of your RUM &gt; Users dashboard. We capture bounce rate by default, so you'll see a correlation chart that shows you the relationship between Start Render and bounce rate.</p> <p>You can easily <a href="https://support.speedcurve.com/docs/create-correlation-charts">create custom correlation charts</a> in your Favorites dashboard. You can also <a href="https://support.speedcurve.com/docs/conversion-rates">add your own conversion rate data</a> &ndash; as well as <a href="https://support.speedcurve.com/docs/customer-data">other data</a> like cart size and revenue.&nbsp;</p> <p>Questions? Send us a note at support@speedcurve.com.</p> <p><strong>If you're&nbsp;a SpeedCurve Synthetic user, but haven't tried RUM yet:</strong>&nbsp;Start your free trial any time! All you have to do is grab the RUM ID for your team &ndash; on the RUM page visible in the main navbar when you log in &ndash; and&nbsp;<a href="https://support.speedcurve.com/docs/setup-guide#step-5--configuring-rum">install the RUM JS snippet on your site</a>. Email us at support@speedcurve.com if you have any questions!</p> <p><strong>If you&rsquo;re not a&nbsp;SpeedCurve user: </strong><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials">Sign up for your free trial</a>&nbsp;and get these powerful charts for your own site.</p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><img class="blog-img" src="https://blog-img.speedcurve.com/img/52/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Wed, 02 Apr 2025 00:00:00 +1300 Downtime vs slowtime: Which costs you more? 
https://www.speedcurve.com/blog/downtime-vs-slowtime <p><span class="large-para">Comparing site outages to page slowdowns is like comparing a tire blowout to a slow leak. One is big and dramatic. The other is quiet and insidious. Either way, you end up stranded on the side of the road.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/amazon-outage.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Downtime is horrifying for any company that uses the web as a vital part of its business (which is to say, most companies). Some of you may remember the Amazon outage of 2013, when the retail behemoth went down for 40 minutes. The incident made headlines, largely because&nbsp;<a href="https://venturebeat.com/business/amazon-website-down/">those 40 minutes were estimated to have cost the company $5 million in lost sales</a>.</p> <p>Downtime makes headlines:</p> <ul> <li>2015 &ndash; <a href="https://www.macobserver.com/news/app-store-spending-christmas-eve-new-years-eve/">12-hour Apple outage cost the company $25 million</a></li> <li>2016 &ndash; <a href="https://money.cnn.com/2016/09/07/technology/delta-computer-outage-cost/">5-hour outage caused an estimated loss of $150 million for Delta Airlines</a></li> <li>2019 &ndash; <a href="https://www.ccn.com/facebooks-blackout-90-million-lost-revenue/">14-hour outage cost Facebook an estimated $90 million</a></li> </ul> <p>It's easy to see why these stories capture our attention. These are big numbers! No company wants to think about losing millions in revenue due to an outage.&nbsp;</p> <h2>Page slowdowns can cause as much damage as downtime</h2> <p>While Amazon and other big players take pains to avoid outages, these companies also go to great effort to manage the day-to-day performance &ndash; in terms of page speed and user experience &ndash; of their sites. 
<strong>That&rsquo;s because these companies know that page slowdowns can cause at least as much damage as downtime.</strong></p><p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/lenny-rachitsky-quote2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>There are three metrics that are hit hard by slow page load times:</p> <ul> <li>Abandonment rate</li> <li>Revenue</li> <li>Brand health</li> </ul> <p>Let&rsquo;s take a deeper dive into the data behind each of these metrics.</p> <h2>Visitors may be more likely to permanently abandon slow sites than unavailable sites</h2> <p>If a website is temporarily down, there&rsquo;s a reasonable chance you&rsquo;ll try again later &ndash; assuming that you&rsquo;re reasonably motivated to track down whatever it was you were interested in finding on that site.</p> <p>But if a website or app is consistently laggy (read: many popular media sites), eventually, you just sort of drift away.</p> <p>Anecdotally, this makes sense &ndash; and there&rsquo;s research to back it up.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/abandonment-slow-vs-down2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />In one of the only studies (if not the only study) of the impact of outages versus slowdowns on abandonment rates, Akamai found that sites that went down experienced, on average, a permanent abandonment rate of 9 percent. <strong>Sites that suffered from slow performance experienced a 28% abandonment rate.</strong></p> <p>This isn&rsquo;t to say that site outages are nothing to be concerned about. A 9% permanent abandonment rate is extremely bad for your business. 
But a 28% abandonment rate is even worse.</p> <h2>Slow pages could have up to 2X more impact on revenue than downtime</h2> <p>This finding comes from what is, to the best of my knowledge, the only study comparing revenue losses due to downtime with losses due to page slowness.</p> <p>TRAC Research surveyed 300 companies and found that the average revenue loss for an hour of downtime was $21,000. For the same set of companies, average revenue loss due to an hour of performance slowdown (which was defined as response times exceeding 4.4 seconds) was much less &ndash; just $4,100.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/average-revenue-loss-per-hour.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />Looking at just these two sets of numbers, outages seem like a bigger source of concern. But wait.</p> <p>According to the same survey, <strong>website slowdowns occurred 10X more frequently than outages</strong>. This changes the numbers considerably: ten hours of slowdown at $4,100 each adds up to $41,000 &ndash; roughly double the $21,000 lost to an hour of downtime.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/average-revenue-loss-total.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />In other words, according to this research, <strong>slow-loading pages could have twice the impact on revenue that site failures do</strong>.</p> <p>Bear in mind, this is just one survey of 300 companies. The results cited here should definitely not be taken as a prediction of how your own site could fare. The most important takeaway from this study is that it forces us to ask ourselves:</p> <ul> <li>How do I calculate losses due to slowness on my site?</li> <li>How do these losses compare to downtime losses?</li> <li>Am I at risk of prioritizing one performance issue (downtime) over another equally critical issue (slowness)?</li> </ul> <h2>Slow sites may suffer more damage to brand health</h2> <p>Unless your site experiences frequent and noticeable outages, occasional failures won&rsquo;t undermine your brand. 
(In fact, remember back when Twitter made outages cute with the fail whale?)</p> <p>Most users accept sporadic downtime as part of the reality of using the web. They&rsquo;re less forgiving, however, if your site is routinely slow.</p> <p>First impressions matter &ndash; and they happen faster than you might think. According to <a href="https://fastspring.com/blog/first-impressions-matter-comes-website/#:~:text=We%20all%20make%20snap%20judgments,That's%200.05%20seconds.">one study</a>, <strong>we form our opinion of a website within the first 50 milliseconds</strong>. And once we&rsquo;ve formed that opinion, it colours how we feel about a site&rsquo;s credibility and usability, ultimately affecting whether or not we choose to make a purchase on that site.</p> <p>A few years ago, I directed a neuroscientific research project in which participants were asked to complete transactions on an e-commerce site using mobile devices. Some participants experienced normal speeds, while others experienced load times that were artificially throttled with a 500-millisecond network delay. Participants believed they were taking part in a standard usability/brand perception study, so they had no idea that speed was a factor in the tests.</p> <p>After each set of tests, the research team conducted exit interviews with the subjects. Our subjects were asked to give their general impressions of each site and company. 
The results were revealing.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/fast-vs-slow-word-clouds.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Some participants picked up on the slight deterioration in performance (calling the slower site &ldquo;slow&rdquo; and &ldquo;sluggish&rdquo;), but those who used the slower site also developed negative perceptions of areas unrelated to speed:</p> <ul> <li>Content (&ldquo;boring&rdquo;)</li> <li>Visual design (&ldquo;tacky&rdquo; and &ldquo;confusing&rdquo;)</li> <li>Ease of navigation (&ldquo;frustrating&rdquo; and &ldquo;hard to navigate&rdquo;)</li> </ul> <p>In other words, <strong>the slower pages affected people&rsquo;s perception of three important aspects of the site that are closely aligned with brand perception</strong>.</p> <h2>Calculating the cost of "slowtime"</h2> <p>Calculating downtime loss is pretty straightforward. If your site averages $100,000 per hour in revenue, and you suffer a three-hour outage, you can estimate that you lost $300,000.</p> <p>Calculating losses due to slowdowns is not as straightforward, but you can still get an idea of what those losses might be.</p> <h3>1. Create a correlation chart for your site</h3> <p>Using your real user monitoring (RUM) data, create a <a href="https://support.speedcurve.com/docs/create-correlation-charts">correlation chart</a> for your site. A correlation chart gives you a histogram view (represented in the yellow bars in the chart below) of all your user traffic, broken out into cohorts based on performance metrics (such as Start Render and Largest Contentful Paint).</p> <p>The chart includes an overlay (represented in the blue line) that shows you a user engagement metric or business metric &ndash; such as bounce rate or conversion rate &ndash; that correlates to each of these cohorts. 
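</p>

<p><em>As a rough sketch, here's how that cohort-plus-overlay computation might look in code. The per-session data shape (an LCP value and a conversion flag), the 100ms cohort width, and the plateau tolerance are all illustrative assumptions for this post &ndash; not SpeedCurve's actual implementation.</em></p>

```python
from collections import defaultdict

def correlation_chart(sessions, bucket_ms=100):
    """Bucket sessions into LCP cohorts and compute each cohort's conversion rate.

    `sessions` is an iterable of (lcp_ms, converted) pairs -- an assumed,
    simplified shape for per-session RUM data.
    """
    cohorts = defaultdict(lambda: [0, 0])  # bucket start -> [sessions, conversions]
    for lcp_ms, converted in sessions:
        bucket = int(lcp_ms // bucket_ms) * bucket_ms
        cohorts[bucket][0] += 1
        cohorts[bucket][1] += int(converted)
    # (bucket start in ms, session count, conversion rate), sorted by LCP
    return sorted(
        (bucket, total, conversions / total)
        for bucket, (total, conversions) in cohorts.items()
    )

def find_plateau(chart, tolerance=0.002):
    """Naive plateau detector: return the first LCP bucket after which the
    conversion rate stops changing by more than `tolerance` between cohorts."""
    for (bucket, _, rate), (_, _, next_rate) in zip(chart, chart[1:]):
        if abs(next_rate - rate) <= tolerance:
            return bucket
    return None
```

<p><em>With real traffic, the cohort width and tolerance would need tuning to your data volume and noise level; thin cohorts at the slow end of the histogram are especially jittery.</em></p>

<p>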
This lets you see at a glance the relationship between performance, user engagement, and your business.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/worlds-best-correlation-chart.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><em>This correlation chart shows that the highest conversion rate &ndash; slightly more than 6% &ndash; correlates with a 1.1s LCP time. As the LCP time gets slower, conversion rate worsens.&nbsp;</em></p> <h3>2. Identify the performance plateau for your site</h3> <p>The <a href="https://www.speedcurve.com/blog/web-performance-plateau/">performance plateau</a> is the point at which changes to your website&rsquo;s rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter, because you&rsquo;ve bottomed out in terms of business and user engagement metrics.</p> <p>In other words, if your performance metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/performance-plateau-clv.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><em>At 2.8 seconds, the conversion rate for this site plateaus. In other words, improving LCP times within the plateau zone &ndash; for example, from 4 seconds to 3 seconds &ndash; will probably not help conversion rate.&nbsp;</em></p> <h3>3. Identify your cohort of users who drop off the performance plateau&nbsp;</h3> <p>Using the above data, you can easily calculate the number of users who experience the performance plateau, just by adding up all sessions in the cohorts from 2.8 seconds to 4.5 seconds.&nbsp;</p> <h3>4. Identify average customer lifetime value for your business</h3> <p>Your repeat customers are arguably your most valuable customers. 
That's why it's helpful to know the average customer lifetime value (CLV) for your business.</p> <p>CLV is a metric that projects the total average value of your customer based on past spending. Your accounting or finance teams will have this data. Calculating CLV can be complicated (and that complexity is beyond the scope of this post), but here's a simple way to approach it:</p> <ul> <li>Segment your customers based on their purchase recency, frequency, and monetary value</li> <li>Determine average order value</li> <li>Figure out average purchase frequency&nbsp;</li> <li>Calculate customer value</li> <li>Multiply customer value by average customer lifespan</li> </ul> <p>For example, if you know that&nbsp;<span style="color: #000000;">the median spend of a returning customer over the past three years is $1,000, then predicted future value for the next three years is $1,000. Total customer lifetime value is $2,000.</span></p> <h3>5. Calculate the lost CLV</h3> <p>Using the stat that 28% of the customers who fall off your performance plateau will permanently abandon a site that is consistently slow, identify the lost CLV.</p> <h3>Example CLV calculation</h3> <p>Putting it all together:</p> <ul> <li>If the median value of a returning customer over the past three years is $1,000, then predicted future value for the next three years is $1,000.&nbsp;</li> <li>Your current converting user base is 100,000 customers. They have a collective CLV of $200 million. 
Their projected collective spend over the next three years is $100 million.</li> <li>10% of those customers (10,000) experience Largest Contentful Paint times at the poor end of the performance plateau.</li> <li>28% of those customers (2,800) will not return.</li> <li><strong>Your projected lost CLV is $2.8 million.</strong></li> </ul> <p>As noted, this is a simplistic calculation, but it's a good starting point for developing your own formula for your site and your users.</p> <p>Also note that this formula focuses only on lost CLV. It doesn't consider the immediate lost revenue from customers who abandon their transaction due to slowness.&nbsp;</p> <h2>Preventing outages is just one piece of the performance pie</h2> <p>If your business is reliant on your site, then you most definitely should care about preventing outages. You should, of course, conduct load testing and availability testing, and you should have effective load balancing and failover systems in place wherever possible.</p> <p>But protecting your site from failure is just one piece of the performance pie. It&rsquo;s a big piece, to be sure, but there are others. You also need to:</p> <ul> <li><strong>Track</strong> your site&rsquo;s performance using synthetic monitoring (AKA lab testing) and real user monitoring (AKA field data)</li> <li><strong>Correlate</strong> performance metrics (such as Start Render, LCP, INP) with business and user engagement metrics (conversions, bounce rate, etc.) 
using your real user monitoring tool</li> <li><strong>Integrate</strong>&nbsp;synthetic&nbsp;testing with your CI/CD process to catch regressions in your staging environment</li> <li><strong>Create</strong> performance budgets to get real-time alerts when key pages slow down</li> <li><strong>Drill down</strong>&nbsp;into your synthetic/lab data to resolve performance issues as they occur</li> <li><strong>Look</strong> for opportunities to further optimize your pages (hint: images and third parties are a great place to start)</li> </ul> <p>SpeedCurve lets you fight regressions from multiple angles of attack, so you can keep your site fast and your users happy &ndash; and your business successful.&nbsp;<a href="https://www.speedcurve.com/signup/">Give us a try!</a></p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><img class="blog-img" src="https://blog-img.speedcurve.com/img/538/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Tue, 18 Mar 2025 00:00:00 +1300 NEW! Synthetic test agent updates: Chrome, Firefox and Lighthouse https://www.speedcurve.com/blog/new-synthetic-test-agent-updates-chrome-and-lighthouse <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/536/logos.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Logos for Chrome, Firefox and Lighthouse" /></p> <p>This month, we've made some updates to our synthetic testing agents. 
In addition to upgrading the underlying operating system, we've added support for:</p> <ul> <li>Lighthouse 12.3.0 (previously 10.4.0)</li> <li>Chrome 133 (previously 126)</li> <li>Firefox 135 (previously 128)</li> </ul><h2>What has changed?</h2> <p>We understand the sensitivity related to changes in your performance data.</p> <p>Synthetic agent updates are known to cause baseline changes due to hardware changes, browser optimizations, or &ndash; in the case of Lighthouse &ndash; changes to the scoring methodology.</p> <p>Here is a rundown of what's changed in this update:</p> <h3>Chrome</h3> <p>Moving from Chrome 126 to 133 should not have a huge impact on your metrics.</p> <p>As of Chrome 130, <a href="https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/speed/metrics_changelog/2024_10_lcp.md#exclude-transparent-text-from-lcp">transparent text is no longer eligible to be considered for Largest Contentful Paint</a>&nbsp;(LCP), but this change doesn't appear to affect a large number of sites.&nbsp;<br /><br />A <a href="https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/speed/metrics_changelog/inp.md">number of updates</a> that may affect Interaction to Next Paint (INP) were introduced between Chrome 126 and 130, but as we don't measure INP with synthetic, there is no impact on your synthetic metrics (only on your RUM data).</p> <p>scheduler.yield was introduced in Chrome 129, and this may reduce Total Blocking Time (TBT) for some sites.</p> <h3>Firefox</h3> <p>There was no identified impact on metrics between versions 128 and 135.</p> <p>See full release history <a href="https://www.mozilla.org/en-US/firefox/releases/">here</a>.&nbsp;</p> <h3>Lighthouse</h3> <p>There were no significant changes to the Performance category of Lighthouse.</p> <p>However, there were several updates to the Accessibility and SEO categories that included new audits and improved weighting. 
Additionally, the PWA category has been removed entirely.</p> <p>You can learn more about those changes from the resources below:</p> <ul> <li><a href="https://developer.chrome.com/blog/lighthouse-11-0/">What's new in Lighthouse 11</a></li> <li><a href="https://github.com/googlechrome/lighthouse/releases">Lighthouse 12.3.0 release notes</a></li> <li><a href="https://github.com/GoogleChrome/lighthouse/compare/v10.4.0...v12.3.0">Change log comparison between 10.4.0 and 12.3.0</a></li> </ul> <h3>OS update</h3> <p>The operating system our agents use has been upgraded to Ubuntu 24.04. This upgrade was overdue and will allow us to roll out updates to our agents more quickly in the future.<br /><br />The OS update has had an impact on some metrics, and during extensive testing, we've observed that Total Blocking Time (TBT) has improved due to this change.</p> <h2>Impact on metrics</h2> <p>To provide a baseline when upgrading browsers, Lighthouse, or other components of our agents, we regularly measure the speed of more than 200 sites in both our production and pre-production environments.&nbsp;</p> <p>During this upgrade, we've found that most metrics have remained reasonably consistent between the new and existing versions of the agent, but that Total Blocking Time (TBT) has improved significantly.</p> <p>Across the corpus of sites we're testing, we've observed that <strong>Total Blocking Time (TBT) has improved by&nbsp;approximately 16%</strong> at the 75th percentile.&nbsp;</p> <p>After investigating, we've identified that this improvement is due to the OS upgrade to Ubuntu 24.04.</p> <p>Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS) have remained largely unchanged.</p> <h2>What should you do next?</h2> <p>Our observations on how the upgrade affects metrics are based on the sites in our test corpus, and every site is different.<br /><br /><strong>While Total Blocking Time might be the only expected change, there is always a chance your metrics will change 
based on the optimizations discussed above.</strong><br /><br />Over the next few weeks, we recommend reviewing your performance budgets, checking that they are still appropriate for your site, and adjusting them if necessary.<br /><br />This is a practice we recommend doing on a regular basis as part of a <strong>'get fast, stay fast' methodology</strong>, which you can learn more about in our <a href="https://www.speedcurve.com/web-performance-guide/complete-guide-performance-budgets/">Web Performance Guide to Performance Budgets</a>.&nbsp;</p> <h2>Private agents</h2> <p>For customers who host their own <a href="https://support.speedcurve.com/docs/private-agents">private SpeedCurve agents</a>: We're planning to release an updated version of the Docker container in the next few weeks.</p> <p>If you have any questions about the upgrade, its impact on your metrics, or any questions about SpeedCurve in general, you can reach us at support@speedcurve.com.</p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><img class="blog-img" src="https://blog-img.speedcurve.com/img/536/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Sun, 02 Mar 2025 00:00:00 +1300 Performance Hero: Sergey Chernyshev https://www.speedcurve.com/blog/performance-hero-sergey-chernyshev <p><span class="large-para">We often hear how special, generous, and supportive the web performance community is. This didn't happen overnight. 
This month, we're excited to recognize someone who has been a huge part of creating the community culture we enjoy today: Sergey Chernyshev.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/534/sergey-new.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Whether answering questions on social media, helping someone with a proposal for a conference talk, or simply being welcoming and kind to newcomers, webperf folks are some of the most generous people you could ever hope to find. There are so many folks out there who are organizing, educating, evangelizing, and building great tooling in an effort to improve user experience on the web. Sergey has been doing all of those things earlier and longer than almost everyone!</p><p>Sergey is a well-known early champion of web performance and user experience. Among other things...</p> <ul> <li>Back in 2009, he started the first <a href="https://www.meetup.com/web-performance-ny/">web performance meetup group</a> in New York City, which is still running strong today.</li> <li>He lit the match for many conversations about how to improve speed through his Meet for Speed events (also still going strong), where brave souls ask for critique of websites while a panel of experts perform forensic analysis of front-end code.</li> <li>He has brought in countless speakers, in addition to his own talks, to give different perspectives on front-end performance.</li> <li>He has helped people organize their own web performance meetup groups in cities around the world.</li> <li>He has built an assortment of free and open-source tools &ndash; from <a href="https://github.com/sergeychernyshev/showslow">ShowSlow</a> (a circa-2009 synthetic monitoring tool) to <a href="https://www.speedpatterns.com/">Speed Patterns</a> (a collection of design patterns for fast user experiences on the web that is a current work in progress). 
You can check out his many other tools and projects on <a href="https://www.sergeychernyshev.com/">his personal site</a>.</li> </ul> <p>On top of everything, he is kind, funny, and always up for a hallway conversation.</p> <p>Sergey currently works as a Speed Manager at Cloudflare. He took some time out of his personal schedule to give us a little more insight into the path he's taken.</p> <h3>How did you get your start in web performance?</h3> <p>"I was always curious about making web pages faster and configured compression and optimized my databases, architecting things to do less work when a user loads the page.</p> <p>Then I attended a first talk by <a href="https://stevesouders.com/about.php">Steve Souders</a>&nbsp;[a fellow <a href="https://www.speedcurve.com/about/">SpeedCurver</a>!] at one of the Web 2.0 conferences that O'Reilly organized, where he introduced the topic of web (or front-end) performance and immediately converted me on the spot, as I realized that we were mostly looking in the wrong place."</p> <h3>What prompted you to start the first web performance meetup group?</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/534/sergey-2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>"I was already interested in meetups and attended quite a few in NYC, so after another Web 2.0 conference and chatting with Steve, I thought that I cannot really wait another year to talk to people about this important topic and decided to step up as an organizer and created a topic of Web Performance on Meetup.com and started the group with monthly events."</p> <h3>When was the first NY WebPerf meetup, and do you recall who spoke?</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/534/meet4speed.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Sergey presenting at a meet4speed event in NYC" /></p> <p>"The first event was in May 2009 with my own presentation about the Tools of the Trade, I still have the deck 
for it <a href="https://www.sergeychernyshev.com/talks/Web_Performance/Tools_of_the_trade.html">here</a>&nbsp;&ndash; it's fun to reminisce about it. Then I was able to find more folks in the NYC scene and we had a PHP/MySQL/front-end trio talk at Etsy's office, and then it kept rolling. ;)"</p> <h3><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/534/showslow.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Show Slow logo. Black text and yellow background." /><br />ShowSlow was one of my favorite tools early in my career. What drove you to build it and are there any plans to revisit it or any of your other open-source projects?</h3> <p>"I built ShowSlow, which was the first open-source synthetic tracker, I believe. It started with my reading an obscure post by <a href="https://bsky.app/profile/stoyan.me">Stoyan Stefanov</a> about YSlow beacon setting, which I ended up reverse-engineering and basically creating an endpoint to collect that data.</p> <p>I built it because I wanted to track those synthetic scores and to keep the conversation going without decision makers manually running YSlow on their machines. 
Then, later, I convinced the Google team to add similar beacon functionality to the Page Speed Insights plugin they built, and then convinced <a href="https://bsky.app/profile/patmeenan.com">Patrick Meenan</a> to add it to WPT [WebPageTest] as well.</p> <p>It was a great collaboration of open sorcerers and I loved it.</p> <p>I don't think I'd be resurrecting it, because&nbsp;<a href="https://www.sitespeed.io/">sitespeed.io</a> basically replaced it and they did a much better job maintaining it, plus RUM replaced synthetic monitoring anyway, and there are plenty of affordable commercial tools like <a href="https://speedcurve.com">SpeedCurve</a> that can spend time maintaining these ever-more-complex solutions.</p> <p>I am revisiting a few projects here and there, such as <a href="https://ux-speed-calculator.netlify.app/">UX Speed Calculator</a>. I hope to give it a few more features and potentially create a talk to show progression of understanding of speed data. And maybe the UX Capture library and methodology will get a facelift now that <a href="https://developer.chrome.com/docs/web-platform/soft-navigations-experiment">soft navigations might come back</a>, we have much better APIs for collecting metrics, and <a href="https://github.com/bloomberg/container-timing">container timing</a> might get standardized.</p> <p>Or maybe my idea of <a href="https://github.com/BorderlessFramework/borderless">Borderless Computation</a> can be the next big hit, but I fear that it is just too crazy and unrealistic. ;)</p> <p>If only we had more hours in a day..."</p> <h3>What are you most excited about in 2025, related to performance, and what keeps you up at night?</h3> <p>"In general I want to get into more 'design and build fast experiences'. 
I hope to one day write more articles for <a href="https://www.speedpatterns.com/">Speed Patterns</a> and build a speed museum that I've been thinking of for a while now.</p> <p>In terms of technologies, we do have a few innovations like <a href="https://developer.mozilla.org/en-US/docs/Web/API/Speculation_Rules_API">Speculation Rules</a> and a few related things like <a href="https://developer.mozilla.org/en-US/docs/Web/API/View_Transition_API">View Transitions</a>, that finally break the old paradigms of the existing web and can potentially bring big changes. But as with any big and shiny feature, there is still a long way to go before they become mainstream.</p> <p>One day I hope we can find a working and cool alternative to the SPA pandemic and make it possible and fun to make things useful and fast at the same time.</p> <p>The <a href="https://infrequently.org/2024/01/performance-inequality-gap-2024/">performance inequality gap</a> is growing at an alarming pace and I feel that we all need to find better solutions for this problem. It is possible that one does not exist, though. Inequality is, unfortunately, a norm of our life and the web performance aspect is just a reflection of it."</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/534/sergey-geekbench.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>What advice do you have for someone who is interested in learning more about web performance?</h3> <p>"Read the docs, but also learn to triage the issues using the tools. Learn what makes things slow and how browsers and networks work and how they talk to the servers.</p> <p>And more generically, don't take anything at face value. All the rules that were written so far were written by people like you. Understand why things are the way they are, and that will give you a good direction for your exploration and decision making.</p> <p>Last but not least, remember that we are not solving problems for computers. 
We are solving them for people, as we try to make them less frustrated with computers."</p> <p><em>Thank you Sergey! Lucky for us, you show no signs of slowing down (pun intended). The world is a faster, better place with you in it!</em></p> <p><em>Do you have someone you'd like to recognize as a Performance Hero? Let us know at support@speedcurve.com!</em></p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><em><img class="blog-img" src="https://blog-img.speedcurve.com/img/534/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></em></a></p> Tue, 18 Feb 2025 00:00:00 +1300 Six things that slow down your site's UX (and why you have no control over them) https://www.speedcurve.com/blog/six-things-page-speed-user-experience <p><span class="large-para">Have you ever looked at the page speed metrics &ndash; such as Start Render and Largest Contentful Paint &ndash;&nbsp;</span><span style="font-size: 22px;">for your site&nbsp;</span><span style="font-size: 22px;">in both your synthetic and real user monitoring tools and wondered "Why are these numbers so different?"</span></p> <p style="text-align: right;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/retro-computer.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: right;"><sup>Photo by&nbsp;<a href="https://www.freepik.com/">Freepik</a></sup></p> <p>Part of the answer is this: You have a lot of control over the design and code for the pages on your site, plus a decent amount of control over the first and middle mile of the network your pages travel over. 
But when it comes to the last mile &ndash; or more specifically, the last few feet &ndash; matters are no longer in your hands.&nbsp;</p> <p>Your synthetic testing tool shows you how your pages perform in a clean lab environment, using variables &ndash; such as browser, connection type, even CPU power &ndash; that <em>you've</em> selected.</p> <p>Your real user monitoring (RUM) tool shows you how your pages perform out in the real world, where they're affected by a myriad of variables that are completely outside your control.&nbsp;</p> <p>In this post we'll review a handful of those performance-leeching culprits that are outside your control &ndash; and that can add precious&nbsp;<em>seconds</em> to the amount of time it takes for your pages to render for your users. Then we'll talk about how to use your monitoring tools to understand how real people experience your site.</p><h2>Consider this scenario...</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/family-laptop.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: right;"><sup>Photo by <a href="https://www.freepik.com/free-photo/family-smiles-front-laptop_26203064.htm">andreas on Freepik</a></sup></p> <ul> <li>Someone visits your site on their family computer, which is shared by four people.</li> <li>The computer is a MacBook Pro they bought in 2018, so it still feels newish to them. (They have no plans to replace it any time soon. Kids can be rough on laptops!)</li> <li>The primary browser is Chrome 108 (circa 2022) with a half-dozen nifty toolbar add-ons, including social widgets and a parental control plugin.&nbsp;</li> <li>Each family member has their own browser window with anywhere from 10 to 20 tabs open at any given time. They keep the same browser window open for days or weeks (sometimes even months) on end.</li> <li>They tend to leave all their other applications open, for easy access.
This includes MS Word and Excel, PS Elements, Zoom, and Stop Motion Studio (for the kids!). They're concerned about internet security, so they're also running antivirus software.</li> <li>They think they have high-speed internet because they're paying their service provider for it, but they're using an eight-year-old modem.</li> <li>They only restart their machine when it's running so slowly that it becomes intolerable &mdash; or when it crashes.</li> </ul> <p><strong>If a crash happens when they're on your site, your user doesn't blame any of the factors listed above.</strong> Chances are they blame your site. This holds true for slowdowns as well.</p> <p>Is this fair? Not really.</p> <p>It's also not fair that those lost seconds and lost visitors could result in lost revenue for your company, but this is the world we live in.</p> <p>Let's go into more detail about end-user performance culprits.</p> <h2>1. End-user connection speed</h2> <p>If you live in an urban centre, you may enjoy connection speeds of 150 Mbps or more. You may find it hard to believe that there are still many rural communities where internet users typically experience connection speeds of just 6-10 Mbps. And as the graph below shows, even <a href="https://www.highspeedinternet.com/resources/fastest-metros-internet">some urban centres can suffer download speeds as low as 20 Mbps</a>.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/fastest-slowest-internet.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Here are some <strong>minimum</strong> download speed requirements for common online activities:</p> <ul> <li>Check email and basic web browsing: 0.5-1 Mbps</li> <li>Music streaming: 1-2 Mbps</li> <li>SD video streaming: 2-3 Mbps</li> <li>Video calls and gaming: 3-5 Mbps</li> <li>HD video streaming: 5-25 Mbps</li> <li>Stream 4K content and play competitive online games: 25-50 Mbps</li> </ul> <p>Keep in mind that the numbers above are bare minimums.
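</p>

<p>If you're curious where your own users fall, you can get a rough signal from the Network Information API. This is a sketch, not production code: the API is currently Chromium-only, its <code>downlink</code> value is only an estimate, and the tier labels below simply mirror the list above.</p>

```javascript
// Rough mapping of a measured downlink (in Mbps) to the activity tiers above.
// These thresholds mirror the list in this post; real-world needs vary widely.
function connectionTier(downlinkMbps) {
  if (downlinkMbps < 1) return 'email & basic browsing';
  if (downlinkMbps < 2) return 'music streaming';
  if (downlinkMbps < 3) return 'SD video streaming';
  if (downlinkMbps < 5) return 'video calls & gaming';
  if (downlinkMbps < 25) return 'HD video streaming';
  return '4K streaming & competitive gaming';
}

// The Network Information API (Chromium-only at the time of writing) exposes
// an *estimate* of the user's connection that you can attach to RUM data.
if (typeof navigator !== 'undefined' && navigator.connection) {
  const { effectiveType, downlink, rtt } = navigator.connection;
  console.log(`effectiveType=${effectiveType}, downlink=${downlink} Mbps, rtt=${rtt} ms`);
  console.log('roughly supports:', connectionTier(downlink));
}
```

<p>Remember that this reflects the device's current link quality, not the plan the user is paying for &ndash; which is exactly why it's worth recording.</p>

<p>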
Meeting these thresholds does not guarantee an optimal experience.&nbsp;</p> <p><strong>The number of internet users in your house could increase your download speed needs by 2X or more.</strong> If multiple people are using the connection at the same time &ndash; which is more common than not &ndash; then your requirements could be double, or even triple, the numbers listed above.&nbsp;</p> <p><strong>Smart devices also affect connection speeds.</strong> The connected devices in your home &ndash; such as smart thermostats, lighting, and security systems &ndash; are all quietly consuming more bandwidth in the background.&nbsp;</p> <h2>2. Older hardware</h2> <p>If you subscribe to faster service through your ISP, but you're using an older modem and/or an older router, you may not be getting the service you're paying for. For a myriad of reasons, older hardware can't always accommodate faster speeds.</p> <p><strong>Most people use the same hardware for between five and ten years.</strong>&nbsp;It's recommended that you replace your modem and router at least every five years &ndash; or even as often as every two or three years, depending on the quality of the hardware and how it's treated during its lifespan. (Despite this, I&rsquo;ve yet to encounter an ISP that proactively reminds customers to upgrade their hardware.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/modem.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><sup>If your modem looks like this, replace it immediately.</sup></p> <p><strong>Specifications and standards are in flux.</strong> Back around 2011, most cable companies made the switch from DOCSIS 2.0 to DOCSIS 3.0. (DOCSIS stands for "data over cable service interface specifications".)
If you're not using a DOCSIS 3.0 or 3.1 modem, then you haven't been fully leveraging your high-speed plan.</p> <p><strong>We don't know how many users are still using DOCSIS 2.0.</strong>&nbsp;While it's to be hoped that most internet users are using DOCSIS 3.0 at minimum, there's definitely no guarantee. I searched for numbers on how many people might still be using DOCSIS 2.0, but couldn't find anything. But given that a non-trivial number of people hold on to their modems for 10-15 years, it feels safe to assume that some folks are still using hardware that undermines performance.</p> <p>(Note that the specs for DOCSIS 4.0 have been released, but DOCSIS 4.0 modems aren't available for retail purchase yet. After DOCSIS 4.0 starts being broadly released &ndash; possibly later this year &ndash; we should be ready for the same scenario to play out all over again.)</p> <h2>3. Older desktop and mobile devices</h2> <p>While the current "industry standard" sets the lifecycle of a desktop computer at four to five years, data shows that the average person keeps their computer for longer than that.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/statista.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><strong>In the US, the average person replaces their desktop computer every 5.6 years.</strong>&nbsp;In those 5.6 years, performance can seriously degrade &mdash; sometimes due to viruses or, more commonly, due to simply running low on memory.&nbsp;</p> <p><strong>5.6 years is just an average.</strong>&nbsp;Many people hold on to their computers for much longer. 
In other words, more than half of the people coming to our sites could be using significantly older devices.&nbsp;</p> <p><strong>The desktop replacement lifecycle is increasing, not decreasing, over time.</strong>&nbsp;According to&nbsp;<a href="https://www.statista.com/statistics/267465/average-desktop-pc-lifespan/">Statista</a>, the average lifecycle could increase to 6.5 years by 2027.&nbsp;&nbsp;</p> <p><strong>This holds true for smartphones, as well.</strong> The smartphone replacement cycle is a fair bit shorter, but the lengthening trend also appears here. In the US, the average expected lifecycle of smartphones is currently around 2.8 years (again according to <a href="https://www.statista.com/statistics/619788/average-smartphone-life/">Statista</a>). In a couple of years, that lifecycle is expected to grow to about three years.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/statista-smartphones.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>(Personal disclosure: In September 2024, I reluctantly retired my seven-year-old iPhone 7. I mention this for anyone who might not believe that many people choose not to upgrade their phones every couple of years.)</p> <h2>4. Browser version</h2> <p>You may upgrade your browser religiously. Many people do not or cannot, so it's unwise to assume that your users are running the latest and greatest version of their chosen browser.</p> <p>To illustrate, let's look at the&nbsp;<a href="https://gs.statcounter.com/browser-version-market-share/desktop/worldwide/#monthly-202401-202501">latest Chrome stats from Statcounter</a>. Some interesting things to note:</p> <ul> <li>A small number of users are still using Chrome 39, which was released in 2014.</li> <li>Versions 122 to 132 were released over the past 12 months. These are used by 88.32% of Chrome users.</li> <li>11.15% of Chrome users use versions (77 to 120) that were released between 2019 and 2023.
In other words, they're using versions that are up to five years old.</li> <li>0.52% of Chrome users are currently using versions that are between five and ten years old. That may not sound like much, but there are an estimated 3.5 billion Chrome users worldwide, so 0.52% works out to more than 18 million people.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/chrome-version-breakdown.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /><strong>Why does the age of the browser version matter?</strong> Outdated browsers may not support new web technologies and standards, resulting in compatibility issues and slower loading times (not to mention security issues, but that's a whole other problem).</p> <h2>5. Browser use (and abuse)</h2> <p>Browser age is just one issue. There are a number of other variables that can affect browser performance, such as:</p> <ul> <li><strong>Cache and cookie overload</strong> &ndash; Most people don't clear their cache and cookies regularly (if at all). Over time, these files pile up and slow down the browser.</li> <li><strong>Multiple windows and tabs</strong>&nbsp;&ndash; Having too many windows and tabs open at once requires more system resources, which leads to slower performance.</li> <li><strong>Malware and adware</strong> &ndash; These can cause a number of problems, including redirects, unwanted popups, and other slowdowns.</li> <li><strong>Browser extensions</strong>&nbsp;&ndash; Not all plugins affect performance, but some definitely do, particularly security plugins.
The more plugins, the greater the risk of page slowdowns.</li> </ul> <p><a href="https://bsky.app/profile/tkadlec.bsky.social/post/3lh2tm2j57k27"><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/plugins2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p style="text-align: center;"><sup>Performance expert <a href="https://bsky.app/profile/tkadlec.bsky.social/post/3lh2tm2j57k27">Tim Kadlec on Bluesky</a>&nbsp;</sup></p> <p><strong>Related: Some productivity experts say too many open tabs can lead to cognitive overload and reduced focus for users.</strong> According to some experts, you should never have more than 3-5 tabs open at a time. Raise your hand if you have more than five tabs open right now. *RAISES HAND*</p> <h2>6. Other applications&nbsp;</h2> <p>Running too many non-web applications at the same time affects web performance. But many non-techie internet users don&rsquo;t know this. Some other things that aren't common knowledge outside our tech silo:</p> <ul> <li><strong>Automatic software updates</strong> &ndash; Consumes CPU and memory, potentially leading to slower browsing.</li> <li><strong>Antivirus software</strong> &ndash; Scans incoming files to identify and eliminate viruses and other malware such as adware, spyware, trojan horses, etc. It does this by analyzing all files coming through your browser in realtime, meaning that files are paused for inspection before being permitted to download. Because of this inspection, a performance penalty is inevitable and unavoidable. The extent of the penalty depends on the software and on the composition of the page/resource being rendered in the browser.</li> </ul> <h2>Takeaways</h2> <p>As I said at the top of this post, you have no control over potential problems at the very front end of the user experience. But that doesn&rsquo;t mean you shouldn&rsquo;t arm yourself with the knowledge that problems are occurring.</p> <h3>1. 
Optimize your pages</h3> <p>While you can&rsquo;t control the end-user environment, you have tons of control over your pages. Optimizing your <a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-images/">images</a> and <a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-javascript/">JavaScript</a> (including <a href="https://www.speedcurve.com/web-performance-guide/third-party-web-performance/">third parties</a>), <a href="https://www.speedcurve.com/web-performance-guide/leveraging-browser-caching-for-faster-load-times/">smarter browser caching</a>, creating <a href="https://www.speedcurve.com/web-performance-guide/complete-guide-performance-budgets/">performance budgets</a> to catch regressions &mdash; these are just a few of the techniques you can leverage to mitigate the damage done by things like poor connection speeds and older hardware.</p> <h3>2. Use a reliable content delivery network</h3> <p>In addition to optimizing the heck out of your pages, you should also consider using a reliable content delivery network (CDN).&nbsp;</p> <h3>3. Use real user monitoring to understand real-world performance</h3> <p>Don't rely on your own experience using your site. You also shouldn't rely on synthetic tests to give you a true sense of how your pages perform. Your synthetic testing tools show you how your pages perform in a clean lab environment. Synthetic monitoring is essential for establishing a baseline and showing you the impact of code and design changes. That's only one half of what you need to know. <a href="https://www.speedcurve.com/web-performance-guide/synthetic-vs-real-user-monitoring/">You need to combine synthetic and real user monitoring.</a></p> <p>Real user monitoring (RUM) tools show you how your pages perform out in the real world, where they're affected by variables outside your control. 
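</p>

<p>Under the hood, RUM tools collect metrics like LCP directly from real browsers via the <code>PerformanceObserver</code> API. Here's a minimal sketch of the idea &ndash; the beacon shape is purely illustrative, not the SpeedCurve API:</p>

```javascript
// Illustrative beacon payload. Field names here are made up for this sketch.
function buildBeacon(metric, valueMs) {
  return {
    metric,
    value: Math.round(valueMs),
    url: typeof location !== 'undefined' ? location.href : 'unknown',
    // Context you can't control, but *can* record:
    connection: (typeof navigator !== 'undefined' && navigator.connection)
      ? navigator.connection.effectiveType
      : 'unknown',
  };
}

// Observe Largest Contentful Paint in browsers that support it.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('largest-contentful-paint')) {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const latest = entries[entries.length - 1]; // the last candidate wins
    // In production you'd queue this and send it with navigator.sendBeacon
    // when the page is hidden, since LCP can keep updating until then.
    console.log(buildBeacon('LCP', latest.startTime));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

<p>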
You need real user monitoring that gives you full visibility into how actual people are experiencing your site.</p> <h3>4. Look at performance at the 75th and 95th percentile, not just the median</h3> <p>Too many people focus on what performance and user experience look like at the median, and they neglect the large number of users at the 75th and 95th percentiles. Here's when and why you should focus on median, 75th percentile, and 95th percentile results:</p> <ul> <li><strong>Median</strong> &ndash; This is typically a stable measurement, so it's good for seeing long-term trends; however, the median will typically not show short-term trends or anomalies. Importantly, it also doesn't give you any visibility into the user experience being tracked in the worst-performing half of your data.</li> <li><strong>75th percentile</strong> &ndash; This is a good balance of representing the vast majority of measurements, while not being affected by outliers. While not as stable as the median, the 75th percentile is a good choice for seeing medium- to long-term trends. The 75th percentile is the best value to use when setting performance budgets. It's the percentile that Google recommends using when monitoring <a href="https://www.speedcurve.com/web-performance-guide/get-started-with-core-web-vitals/">Core Web Vitals</a>.</li> <li><strong>95th percentile</strong> &ndash; This encompasses the experience of almost all of your users, with only the most severe outliers excluded. This makes it perfect for spotting short-term trends and anomalies. If your metrics look good at the 95th percentile, you can feel assured that almost all of your visitors are having a fast user experience.</li> </ul> <h2>Summary</h2> <p>You have a lot of control over the design and code for the pages on your site, plus a decent amount of control over the first and middle mile of the network your pages travel over.
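</p>

<p>As a quick illustration of the percentile guidance above, here's how median, 75th, and 95th percentile values can be pulled out of a set of RUM measurements. This uses the simple nearest-rank method on hypothetical LCP samples; real RUM tools may interpolate, so treat it as a sketch:</p>

```javascript
// Nearest-rank percentile over a list of measurements (e.g. LCP times in ms).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical LCP samples (ms) from ten sessions.
const lcpSamples = [900, 1200, 1400, 1600, 2100, 2500, 3200, 4800, 7500, 12000];

console.log('median:', percentile(lcpSamples, 50)); // long-term trends
console.log('p75:',    percentile(lcpSamples, 75)); // budgets & Core Web Vitals
console.log('p95:',    percentile(lcpSamples, 95)); // anomalies & worst experiences
```

<p>Note how different the three numbers can be: a site can look healthy at the median while a meaningful share of real sessions &ndash; visible only at p75 and p95 &ndash; are having a much slower experience.</p>

<p>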
But when it comes to the last mile &ndash; or more specifically, the last few feet &ndash; matters are no longer in your hands. This is why you need to combine synthetic and real user monitoring for visibility into the full breadth of your users' experiences.</p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><img class="blog-img" src="https://blog-img.speedcurve.com/img/514/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Tue, 11 Feb 2025 00:00:00 +1300 Page bloat update: How does ever-increasing page size affect your business and your users? https://www.speedcurve.com/blog/page-bloat-2025 <p><span class="large-para">The median web page is 8% bigger than it was just one year ago. How does this affect your page speed, your Core Web Vitals, your search rank, your business, and most important &ndash; your users? Keep scrolling for the latest trends and analysis.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/2024-page-bloat-hero.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>For almost fifteen years, I've been writing about page bloat, its impact on site speed, and ultimately how it affects your users and your business. 
You might think this topic would be exhausted by now, but every year I learn new things &ndash; beyond the overarching fact that pages keep getting bigger and more complex, as you can see in this chart, using data from the HTTP Archive:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/2b-resource-size-breakdown-2022-to-2024.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In this post, we'll cover:</p> <ul> <li>How much pages have grown over the past year</li> <li>How page bloat hurts your business and &ndash; at the heart of everything &ndash; your users</li> <li>How page bloat affects Google's Core Web Vitals (and therefore SEO)</li> <li>If it's possible to have large pages that still deliver a good user experience</li> <li>Page size targets</li> <li>How to track page size and complexity</li> <li>How to fight regressions</li> </ul><p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/leeeeeeroy-jenkins.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: center;"><em>anymouse123456 via&nbsp;<a href="https://news.ycombinator.com/item?id=39564632">Hacker News</a></em></p> <h2>What do we mean when we talk about page size?</h2> <p>When we talk about page size, we're referring to overall page weight and complexity. This includes:</p> <ul> <li><strong>Size</strong>&nbsp;&ndash; Total page weight in bytes. Size matters, especially to mobile users who have limited and/or metered data.</li> <li><strong>Resources</strong>&nbsp;&ndash; Total number of resources (HTML, images, fonts, etc.) on the page. The more resources, the greater the complexity of the page &ndash; and the increased likelihood of rendering delays, and even blockages.&nbsp;</li> <li><strong>HTML</strong>&nbsp;&ndash; Typically the smallest resource on the page, HTML's performance risk is usually negligible. 
(Having said that, a while back I dug into a page where the total HTML size jumped dramatically because of a bunch of inline JavaScript, which led to rendering delays, so keeping an eye on HTML size is still a good idea.)</li> <li><strong>Images</strong>&nbsp;&ndash; Often the greatest contributor to page bloat. Looking at the 90th percentile of the distribution of page weight, images account for a whopping 6.6 MB of a roughly 11.1 MB page. In other words, images comprise almost 60% of the total page weight. And if that weren&rsquo;t enough, the number of images on a page has been linked to lower conversion rates on retail sites. <a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-images/">Learn how to optimize images.</a></li> <li><strong>Video</strong>&nbsp;&ndash; Over the past couple of years, video has proliferated hugely. This is a potential cause for concern for anyone who cares about metrics like Largest Contentful Paint, which measures the largest visual element on a page &ndash; including videos. (More on that below.)</li> <li><strong>JavaScript</strong>&nbsp;&ndash; A page can have a relatively low JS weight but still suffer from JS-inflicted performance problems. Even a single 100 KB third-party script can wreak havoc with your page. The more scripts on your page, the greater the risk. It&rsquo;s not enough to focus solely on blocking scripts. It&rsquo;s possible for your pages to contain zero blocking scripts and still have less-than-optimal performance because of how your JavaScript is executed. That&rsquo;s why it&rsquo;s so important to understand CPU usage on your pages, because JavaScript consumes more CPU than all other browser activities combined. When JavaScript blocks the CPU, the browser can&rsquo;t respond to user input. This creates what&rsquo;s commonly called "jank" &ndash; that annoying feeling of jittery, unstable page rendering.
<a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-javascript/">Learn more about optimizing JS.</a></li> <li><strong>CSS</strong>&nbsp;&ndash; Like JavaScript, CSS doesn&rsquo;t have to be bulky to cause problems. Poorly executed stylesheets can create a host of performance problems, ranging from stylesheets taking too long to download and parse, to improperly placed stylesheets that block the rest of the page from rendering. And, similar to JavaScript, more CSS files equals more potential trouble. <a href="https://www.speedcurve.com/web-performance-guide/using-critical-css-for-faster-rendering/">Review some CSS optimization tips and best practices.</a></li> </ul> <h2>How does page bloat hurt Core Web Vitals?</h2> <p><span style="color: #000000;"><a href="https://www.speedcurve.com/web-performance-guide/get-started-with-core-web-vitals/">Core Web Vitals</a>&nbsp;are a Google search ranking factor. Given that Google continues to dominate search usage, you should care about Vitals alongside the other metrics you should be tracking.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/core-web-vitals-new.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><span style="color: #000000;">Page bloat can have a huge impact on your Vitals:&nbsp;</span></p> <ul> <li><strong>Cumulative Layout Shift</strong>&nbsp;&ndash;&nbsp;Excessive resources can contribute to a poorer CLS score, as more elements shift on the page.&nbsp;</li> <li><strong>Largest Contentful Paint</strong>&nbsp;&ndash; LCP measures when the largest visual element (image or video) in the viewport finishes rendering. 
Heavier visual elements can take much longer to render, especially videos.&nbsp;If you're serving huge videos that take several seconds to fully render, that could hurt your LCP times.</li> <li><strong>Interaction to Next Paint &amp; Total Blocking Time</strong>&nbsp;&ndash; Excessive and/or non-performant JavaScript can hurt interactivity metrics, like INP and TBT. The more scripts on your pages, the greater the risk.</li> </ul> <h2>How does page bloat hurt your business?</h2> <p>A&nbsp;<a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-page-speed-load-time/">Google machine-learning study</a>&nbsp;I helped with a few years ago found a few strong predictors of whether or not a page resulted in a conversion, ranked in the following order:</p> <ol> <li>The total number of page elements was the single greatest predictor of conversions.</li> <li>The number of images on the page was the second greatest predictor.</li> <li><span style="color: #1f1f1f;">The more scripts there were in a series of pages in a session, the less likely that session was to convert.</span></li> </ol> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/google-study2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Image size is another issue, as excessive image weight hurts your SEO ranking in Google Image Search. Given that image search comprises upwards of 26% of Google searches, this is something you should care about. 
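</p>

<p>If you want a quick read on how much images (or any other resource type) contribute to your own pages, the Resource Timing API can give you a per-type breakdown. A sketch &ndash; note that <code>transferSize</code> is the compressed over-the-wire size, and reports 0 for cached resources and for cross-origin resources that don't send <code>Timing-Allow-Origin</code>:</p>

```javascript
// Sum over-the-wire bytes by resource type (img, script, css, etc.).
function weightByType(entries) {
  const totals = {};
  for (const e of entries) {
    const type = e.initiatorType || 'other';
    totals[type] = (totals[type] || 0) + (e.transferSize || 0);
  }
  return totals;
}

// In a browser, feed it the current page's resource entries.
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const totals = weightByType(performance.getEntriesByType('resource'));
  for (const [type, bytes] of Object.entries(totals)) {
    console.log(`${type}: ${(bytes / 1024).toFixed(1)} KB`);
  }
}
```

<p>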
(You can dive deeper into <a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-images/">best practices for image optimization</a>.)&nbsp;</p> <h2>How does page bloat hurt your visitors?</h2> <p>In his excellent ongoing series of blog posts,&nbsp;<a href="https://infrequently.org/2022/12/performance-baseline-2023/">The Performance Inequality Gap</a>, Alex Russell makes a compelling case that serving huge pages is an "ethical crisis for front end" as more and more users rely solely on low-powered mobile devices to deliver essential information and services:</p> <blockquote> <p>The "i" in iPhone stands for "inequality".</p> <p>Premium devices are largely absent in markets with billions of users thanks to the chasm of global wealth inequality....</p> <p>As smartphone ownership and use grow, the frontends we deliver remain mediated by the properties of those devices. The inequality between the high-end and low-end is only growing, even in wealthy countries. What we choose to do in response defines what it means to practice UX engineering ethically.</p> <p>Developers are clearly out of touch with market ground-truth. Building an understanding of the differences in the experiences of the wealthy vs. working-class users can make the privilege bubble's one-way mirror perceptible from the inside.</p> </blockquote> <h2>HTTP Archive research: Background and caveats</h2> <p>Before we get into the analysis, some context:</p> <ul> <li><strong>The numbers cited below all come from the&nbsp;<a href="https://httparchive.org/reports/page-weight">HTTP Archive</a>.</strong>&nbsp;I looked at the top 1M sites, focusing on median and 90th percentile numbers. 
This is to try to understand how a "typical" page might perform, as well as pages in the longtail.&nbsp;</li> <li><strong>These numbers should NOT be taken as a benchmark for your own site.</strong>&nbsp;You haven't necessarily achieved anything great if your pages are smaller than this, nor have you failed by default if your pages are bigger. I'll go into this more below.</li> </ul> <h2>Desktop: The median page has grown by 8% in one year</h2> <p>If you look at the year-over-year increase from 2467.5 KB to 2675.2 KB, you're forgiven if your initial reaction is "that's not too bad". But 8% growth is significant. At that rate, the median page could be 3 MB by the end of this year. That's not trivial.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/1-median-desktop-2023-to-2025.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Even if you consider this growth not too worrisome, it's important to keep in mind that it's just an aggregated number, which masks the more dramatic increases in specific resource types, such as images, JavaScript, and video.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/2b-resource-size-breakdown-2022-to-2024.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Desktop: Median JavaScript weight increased by 10%</h2> <p>A 10% year-over-year increase in JS size is something to take note of. All in, JS weight has increased by 28% since 2022.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/3-javascript-growth-median.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Each script on your page represents a potential single point of failure, especially if the script is render-blocking or has an excessive Long Tasks time. 
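</p>

<p>You can watch those Long Tasks happen in real browsers. The sketch below observes <code>longtask</code> entries and applies the Total Blocking Time rule of thumb: for each task longer than 50&nbsp;ms, count the portion beyond 50&nbsp;ms. (Strictly speaking, TBT is measured between First Contentful Paint and Time to Interactive; this simplified version just sums everything observed.)</p>

```javascript
// TBT rule of thumb: for each task over 50 ms, count the excess over 50 ms.
const BLOCKING_THRESHOLD_MS = 50;

function estimateBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .filter((d) => d > BLOCKING_THRESHOLD_MS)
    .reduce((sum, d) => sum + (d - BLOCKING_THRESHOLD_MS), 0);
}

// Observe Long Tasks in browsers that support the Long Tasks API.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('longtask')) {
  const durations = [];
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) durations.push(entry.duration);
    console.log('estimated blocking time so far:',
                estimateBlockingTime(durations), 'ms');
  }).observe({ type: 'longtask', buffered: true });
}
```

<p>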
The more scripts on your page, the greater the performance risk.</p> <p>Even if the script doesn't block the page from rendering, excessive and unoptimized JavaScript can hurt interactivity metrics like Interaction to Next Paint and Total Blocking Time. Remember: <a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-javascript/">The single best thing you can do with JavaScript is to avoid using it when you don't need to.</a></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/4-javascript-size-vs-requests-median-desktop.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's interesting to note that the number of scripts on the median page has more or less remained the same, as shown above. This suggests that the increase in JS weight comes from individual scripts becoming heavier.</p> <h2>Desktop: Median image weight increased by 5%</h2> <p>This finding came as a somewhat pleasant surprise. While the upward trend is still present, it's not quite as alarming as it has been in previous years.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/5-image-size-median-desktop.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Still, it's important to keep in mind that 1058 KB of image weight represents roughly 40% of the total weight of the median page. Applying <a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-images/">image optimization</a> and rendering best practices could have a noticeable impact on page speed.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/6-image-size-vs-requests.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Similar to the observation about JS size and requests above, it's interesting to see that while the number of image requests has been more or less stable over the past couple of years, the total size has increased. 
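</p> <p>This "flat requests, growing bytes" pattern can be sanity-checked with simple arithmetic: dividing total weight by request count gives the average size of an individual resource. Here's a minimal sketch &ndash; the request count and the earlier-year weight are illustrative placeholders, not actual HTTP Archive values (only the 1058 KB figure comes from the data above):</p>

```javascript
// Average resource size = total weight / number of requests.
// If request counts hold steady while total bytes grow, then
// individual files must be getting heavier on average.
function avgResourceSizeKB(totalKB, requests) {
  return totalKB / requests;
}

// Illustrative figures only (check the HTTP Archive for real data):
const avgEarlier = avgResourceSizeKB(1000, 25); // 40 KB per image
const avgNow = avgResourceSizeKB(1058, 25);     // ~42.3 KB per image

console.log(avgNow > avgEarlier); // true: same request count, heavier files
```

<p>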
This suggests that the size of individual images is trending upward.</p> <h2>Desktop: Median video weight increased by 17%</h2> <p>As mentioned earlier, video is one of the main growth areas. (If you're adding up the numbers and wondering how the video weight makes sense given the median page weight cited earlier, keep in mind that these medians are calculated based on pages that contain these resources. Of the one million URLs tracked by the HTTP Archive, not all contain video.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/7-video-median-desktop.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>As mentioned earlier in this post, Largest Contentful Paint (one of Google's Core Web Vitals) measures when the largest visual element (image or video) in the viewport finishes rendering. Heavier visual elements can take much longer to render, especially videos.</p> <p>In other words, if you serve huge videos to your users, it could be hurting your LCP times, which could then be hurting your Google search ranking.</p> <p><img class="blog-img" style="font-size: 35px; color: #000000;" src="https://blog-img.speedcurve.com/img/474/8-video-size-vs-requests-median-desktop.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>While not all pages serve video, those that do are serving larger and larger video files. The median page that contains video serves just 2 files, but the total weight has increased by 28% in just two years.&nbsp;</p> <h2>Mobile: Median page has grown by 7.5% in one year</h2> <p>A few years ago, it seemed like mobile page growth had slowed down, but it once again appears to be on the upswing. 
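</p> <p>Growth rates like 7.5% and 8% compound quickly, which is how an "only 8%" increase turns into the 3 MB projection mentioned earlier. As a quick sketch of that projection arithmetic (the function is mine, for illustration; the inputs are the desktop figures cited above):</p>

```javascript
// Project page weight forward, assuming a constant annual growth rate.
function projectWeightKB(currentKB, annualGrowthRate, years) {
  return currentKB * Math.pow(1 + annualGrowthRate, years);
}

// The median desktop page: 2675.2 KB, growing at roughly 8% per year.
const nextYearKB = projectWeightKB(2675.2, 0.08, 1);
console.log(Math.round(nextYearKB)); // 2889 KB, knocking on the door of 3 MB
```

<p>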
The median page served to mobile is well over 2 MB &ndash; and keep in mind that this is being served to low-powered and/or connectivity-impaired mobile devices.&nbsp;&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/15-median-page-growth-mobile.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />While the median mobile page is still a bit smaller than the desktop page,&nbsp;when we look at the breakdown across resource types, we can see growth in images, JavaScript, and video, just as we did with desktop:</p> <h2><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/9-mobile-vs-desktop-resource-size.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></h2> <h2>90th percentile sees a 24% increase</h2> <p>Things get really interesting when we compare numbers at the 90th percentile. This is where we can see explosive growth over the past year.</p> <p>The 90p page currently weighs in at over 11 MB. At this rate of growth, the 90p page could reach 14 MB by the end of this year.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/11-90p-desktop-2023-to-2025.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>90p mobile page is 4X larger than median mobile page</h2> <p>The 90th percentile page served to mobile is almost 10 MB, making it four times larger than the median page. As before, consider the range of mobile devices that might be struggling to render this massive page.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/12-mobile-page-size-med-vs-90p.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>90p page contains 2.5X more requests than the median page</h2> <p>The median page served to desktop contained a total of 76 resources (HTML, JS, images, videos, etc.) compared to the 90th percentile with a total of 189 resources. 
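</p> <p>When comparing medians and 90th percentiles like this, remember that they're just two points on the same distribution. A hedged illustration of how these aggregates are computed, using the nearest-rank method over a made-up sample of page weights (not HTTP Archive data):</p>

```javascript
// Nearest-rank percentile: sort the sample, then take the value at
// rank ceil(p/100 * n). The median is just the 50th percentile.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Made-up page weights (KB) for ten pages:
const weightsKB = [900, 1200, 1800, 2100, 2600, 2700, 3300, 4800, 7500, 11200];

console.log(percentile(weightsKB, 50)); // 2600
console.log(percentile(weightsKB, 90)); // 7500
```

<p>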
In other words, the 90p page contains roughly two and a half times the number of requests.&nbsp;</p> <p>While it wasn't surprising to see the large number of image, JavaScript, and video requests at the 90th percentile, it was a surprise to see that the 90p page contained 12 HTML requests and 30 CSS requests.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/13-resource-requests-breakdown-median-vs-90p-grouped.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Most 90p page weight comes from video, images, and JavaScript</h2> <p>As mentioned earlier, not all pages contain video, but those that do at the 90th percentile contain a LOT &ndash; almost 20 MB!</p> <p>Image weight comes next, weighing in at 6.6 MB. JavaScript runs third at almost 2 MB.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/14-resource-size-breakdown-median-vs-90p-grouped.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Can large pages ever deliver a good user experience?</h2> <p>Yes. While page size can be a red flag for performance issues, you need to take a closer look at how your pages are built to see if the size and complexity of your pages actually affect how fast your site feels to your users.</p> <p>It's not enough to look at crude metrics like total requests and size. You need to know:</p> <ul> <li>How many of your requests are blocking requests?</li> <li>If your page contains blocking requests, how many of them occur in the critical rendering path? That is, how many blocking requests are there before key page metrics like Start Render and Largest Contentful Paint?</li> <li>How many of your potentially problematic requests come from third parties, and how do you maintain visibility into how they're performing?</li> <li>Are the most important images on your page the first images to render? 
How quickly do they show up?</li> </ul> <h2>How much content *should* you serve?</h2> <p>Making your pages as small as possible is in the best interest of your users who don't have access to fast networks and devices. Alex Russell <a href="https://infrequently.org/2024/01/performance-inequality-gap-2024/">suggests</a> these per-page content targets for a first load under 3 seconds on 75th percentile devices:</p> <ul> <li><strong>For JS-centric pages</strong> &ndash; No more than 365 KB of JS and 365 KB of markup, for a total of 730 KB</li> <li><strong>For markup-centric pages</strong>&nbsp;&ndash; No more than 75 KB of JavaScript and 1.3 MB of markup, for a total of 1.4 MB</li> </ul> <p>This is how those recommended thresholds stack up against the current reality:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/474/16-recommendations.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Are these targets aggressive? Yes. Will you be able to meet them? Possibly not. But as the saying goes, don't let perfect be the enemy of good.&nbsp;</p> <h2>Takeaways</h2> <p>I meet with so many people who build and optimize websites. When we investigate how their pages are built, I routinely witness surprise at spotting things like ghost scripts, huge unoptimized images, and blocking resources they weren't aware of. These are smart people. The problem isn't them &ndash; it's the scale of their sites, the speed of their release cycles, and the number of people who touch each page.</p> <p>We might never get our lean, pre-1999, under-1MB web pages back. But we can regain control over the pages we have today.</p> <h3>1. Understand the critical rendering path for each page</h3> <p>Your pages probably have some dead weight on them, and some of that weight is unoptimized. Too much stuff means you can't see the forest for the trees. The key to a good user experience is quickly delivering the most important content first. 
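</p> <p>Content targets like the ones suggested above work best when they're checked automatically as budgets. As a minimal sketch &ndash; the function and input shape here are hypothetical, not a real SpeedCurve or HTTP Archive API; only the threshold numbers come from the targets above:</p>

```javascript
// Per-page content budgets (KB), encoding the suggested targets above.
const BUDGETS = {
  jsCentric:     { js: 365, markup: 365 },
  markupCentric: { js: 75,  markup: 1300 },
};

// Return a list of budget violations for a measured page.
function checkBudget(pageKB, budget) {
  const violations = [];
  for (const [resource, limitKB] of Object.entries(budget)) {
    if (pageKB[resource] > limitKB) {
      violations.push(`${resource}: ${pageKB[resource]} KB exceeds ${limitKB} KB`);
    }
  }
  return violations;
}

// A hypothetical JS-centric page that blows its JS budget:
console.log(checkBudget({ js: 610, markup: 120 }, BUDGETS.jsCentric));
// one violation reported: js is over budget
```

<p>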
Here are some <a href="https://developers.google.com/web/fundamentals/performance/critical-rendering-path/measure-crp">great resources for analyzing and optimizing the critical rendering path</a>.</p> <h3>2. Make sure everyone who touches a page understands the performance impact of what they do</h3> <p>All the fancy performance monitoring tools in the world can't help you if you don't have a strong performance culture at your organization. Here are some <a href="https://www.speedcurve.com/web-performance-guide/performance-culture-best-practices/">tips and best practices</a> to help on that journey.</p> <h3>3. Use performance budgets to fight regression</h3> <p>Page bloat happens when people stop paying attention. Pages need to be monitored consistently over time. <a href="https://www.speedcurve.com/web-performance-guide/continuous-performance-monitoring/">Integrating performance testing into your CI/CD process</a> is a great way to fight regression, especially if you combine this with <a href="https://www.speedcurve.com/web-performance-guide/complete-guide-performance-budgets/">creating performance budgets</a>. By creating performance budgets for key metrics &ndash; such as Start Render, Largest Contentful Paint, Interaction to Next Paint, and various page size and weight metrics &ndash; you can get alerted when they go out of bounds.</p> <h3>4. Don't assume hardware and networks will mitigate page bloat</h3> <p>Increased page size and complexity is not fully mitigated by faster devices and networks, or by our hard-working browsers. Clearly we need to keep talking about it. We need to understand how ever-growing pages work against us. And we need to have strategies in place to understand and manage our pages.</p> <h3>5. 
Don't forget to monitor longtail performance</h3> <p>While some of your users may have newer devices and speedy networks, not all are this lucky.&nbsp;If you're using a&nbsp;<a href="https://www.speedcurve.com/web-performance-guide/synthetic-vs-real-user-monitoring/">real user monitoring</a>&nbsp;tool,&nbsp;<a href="https://www.speedcurve.com/web-performance-guide/web-performance-for-product-managers/">keep an eye on your performance metrics at the 75th and 95th percentiles</a>&nbsp;so you have an understanding of your site's less-than-optimal performance.&nbsp;</p> <h2>Questions or feedback?</h2> <p>I'd love to hear your thoughts and insights. If you're interested in tracking page size and complexity for your own site, we'd love to have you&nbsp;<a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><strong>try SpeedCurve for free</strong></a>!</p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Tue, 28 Jan 2025 00:00:00 +1300 Performance Hero: Annie Sullivan https://www.speedcurve.com/blog/performance-hero-annie-sullivan <p><span class="large-para">Let's kick off the new year by celebrating someone who has not just had a huge impact on web performance over the past few years, but who has even more exciting stuff in the works for the future: Annie Sullivan!</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/532/annie.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />Annie leads the Chrome Speed Metrics team at Google, which has arguably had the most significant impact on web performance of the past decade. We've gotten to know Annie through frequent discussions, feedback sessions, and hallway talks at various events. 
Most recently we caught her <a href="https://youtu.be/ORg88SshSEQ?si=zm9JlemKP1RPZszB">closing keynote at performance.now()</a> in November.&nbsp;</p> <p>Speaking from experience, driving change at scale from within a large organization can be very challenging. Annie and her team navigate this arduous task with true passion for web performance and for improving the user experience. Read on for a great recap of a recent discussion with Annie and just a few of the highlights that make her a true performance hero.</p><p>In <a href="https://youtu.be/ORg88SshSEQ?si=zm9JlemKP1RPZszB">her recent performance.now() talk</a>, which is a must-watch, Annie took us through key learnings she has had while working in performance for the last couple of decades.</p> <p>In her preamble, she celebrated some of the successes our community has had over the past year. This slide really stuck out to me as an example of how we collectively have a huge impact on the user experience:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/532/cwvimpact.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Slide from Annie's talk showing the impact in years of CWV improvements." /></p> <p>You know when you are measuring success in 'years of time saved' that you've had a meaningful impact. Nice job, everyone!</p> <p>It's really important to acknowledge that none of this would have been possible without the great work from Annie and her small-but-mighty Speed Metrics team at Google.&nbsp;Here is a little more we learned about Annie and her team.</p> <h3>How did you get your start in web performance?</h3> <p>"The very first web performance project I did was back in the late nineties! I was running a message board for the local punk rock scene in Detroit. I started with <a href="https://www.scriptarchive.com/wwwboard.html">WWWboard</a>, but it had a race condition that led to overwriting posts. 
So I decided to write my own message board in PHP and MySQL, which better managed transactions. But my original version was slow, because I queried the database for every page load. Of course writes were much less common than reads, so I added a caching layer for reads, and that did the trick.</p> <p>"After I finished college, I did performance work here and there while developing games for Playstation 2, and later at Google. Then in 2008, <a href="https://markcarrigan.net/2016/01/10/googles-war-against-latency/">Google issued a code yellow</a> for application speed, and I was the code yellow lead for Google Docs. A critical part of the code yellow was ensuring Google's sites would be fast for users across the globe, even if they had slow networks and low end devices. Leadership wanted to know the real page load times end users were experiencing. So in addition to all the optimization work we did for Google Docs, I got to spend a lot of time and energy working on the measurement problem: how can we get end-to-end latency numbers? How do we slice and dice them to find problem areas? What should we tell browser vendors is missing? 
That got me started on a long journey I'm still on today."</p> <p>(We always knew you were punk rock, Annie, but we had no idea you'd been at performance since the nineties!)&nbsp;</p> <h3>What is the charter of the Chrome Speed Metrics team?</h3> <p>"The Chrome Speed Metrics team aims to <a href="https://chromium.googlesource.com/chromium/src/+/main/docs/speed_metrics/README.md#:~:text=The%20Chrome%20Speed%20Metrics%20team%20aims%20to%20quantify%20users%27%20experience%20of%20the%20web%20to%20provide%20Chrome%20engineers%20and%20web%20developers%20the%20metrics%2C%20insights%2C%20and%20incentives%20they%20need%20to%20improve%20it">quantify users' experience of the web</a> to provide Chrome engineers and web developers the metrics, insights, and incentives they need to improve it."</p> <p>(At SpeedCurve, we've worked closely with this team over the years, and can tell you that everyone is invested in this mission. The Speed Metrics team is a great example of how to build a performance-first culture within your organization. Whether that's a ginormous company like Google or a small shop like SpeedCurve, passionate people truly can make a difference.)</p> <h3>2024 seemed like it was full of big wins. What was the one you were most proud of?</h3> <p>"With the introduction of the Long Animation Frames API [LoAF], sites have better insight into the causes of slow JavaScript than ever before. With the additional data, we've been able to reach out to several third parties who've then made improvements, and I've heard positive stories from several other members of the web perf community who've done the same. 
There were two case studies highlighting third party wins published on <a href="https://web.dev/">web.dev</a> (<a href="https://web.dev/case-studies/pubconsent-inp?hl=en">1</a>, <a href="https://web.dev/case-studies/taboola-inp?hl=en">2</a>), and Google Publisher Tag launched <a href="https://developers.google.com/publisher-tag/reference?hl=en#googletag.config.PageSettingsConfig_threadYield">a new yielding strategy</a>. All the feedback from third parties we got from this process helped us prioritize the scheduler.yield() API so that it's easier for third parties to work well with the rest of the content on the page.</p> <p>"I know third parties have long been a point of frustration for web performance enthusiasts, but it's been amazing to see the optimizations here, which can make thousands or even millions of sites faster overnight."</p> <p>(We agree the visibility that LoAF attribution brings is exciting! Everyone loves to hate third parties, but with the exception of synthetic testing, we haven't had a lot to work from. SpeedCurve is hard at work looking at how to leverage the LoAF API to improve our RUM data. We hope that third parties embrace this data and continue to look at improving. However, we are also hearing that a fair number of LoAFs responsible for poor Interaction to Next Paint (INP) are surprisingly coming from first-party JavaScript!)</p> <h3>What are you working on now and/or what are you most excited about?</h3> <p>"I'm currently working with Michal Mocny [on the Google Speed Metrics team] on integrating soft navigations into Core Web Vitals. I'm really excited about getting a deeper understanding of single page applications and their performance. From my perspective, it's the biggest blind spot in web performance and every day I learn something new."</p> <p>(We've been following this work closely and are thrilled to see it getting so much attention. 
SPAs continue to be a challenge for performance, starting with measuring the true user experience. Looking forward to seeing this evolve!)</p> <h3>As we roll into 2025, what do you see as the single biggest challenge in front of us?</h3> <p>"I always get excited about challenges, and I think the biggest one is working together! There are so many wonderful developments in the world of standards &ndash; increased engagement from other vendors, the RUM CG, the Container Timing work. I'm excited for the challenge of collaborating effectively with people around the world. There's so much potential to bring many more viewpoints to the table."</p> <p>(Well put, Annie. We couldn't agree more. And here's a shameless plug for the W3C RUM Community Group, which has generated a lot of interest and had its first meeting on January 17. <a href="https://www.w3.org/community/rumcg/">Learn more</a> about the RUM Community Group.)</p> <p>We are soooooo excited for 2025, especially with awesome leaders like Annie at the helm.&nbsp;</p> <p><em>Do you have someone you'd like to recognize as a Performance Hero in 2025?&nbsp;<a href="mailto:support@speedcurve.com">Let us know!</a></em></p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><em><img class="blog-img" src="https://blog-img.speedcurve.com/img/532/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></em></a></p> Mon, 20 Jan 2025 00:00:00 +1300 Our 10 most popular web performance articles of 2024 https://www.speedcurve.com/blog/popular-web-performance-posts-2024 <p><span class="large-para">We love writing articles and blog posts that help folks solve real web performance and UX problems. Here are the ones you loved most in 2024. 
(The number one item may surprise you!)</span></p> <p><a href="https://www.speedcurve.com/web-performance-guide/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/529/herman-bench-social.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>Some of these articles come from our recently published&nbsp;<a href="https://www.speedcurve.com/web-performance-guide/">Web Performance Guide</a>&nbsp;&ndash; a collection of evergreen how-to resources (written by actual humans!) that will help you master website monitoring, analytics, and diagnostics. The rest come from this blog, where we tend to publish industry news and analysis.&nbsp;</p> <p>Regardless of the source, we hope you find these pieces useful!</p><h2>10. Five ways cookie consent managers hurt web performance (and how to fix them)</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/cookie-consent.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Cookie consent popups and banners are everywhere, and they're silently hurting the speed of your pages. <a href="https://www.speedcurve.com/blog/web-performance-cookie-consent/">Learn the most common problems &ndash; and their workarounds</a> &ndash; with measuring performance when consent management platforms are in place.</p> <h2>9. Best practices for optimizing images</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/image-optimization.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>They say a picture is worth a thousand words. Unfortunately that picture can also cost you 1,000 kilobytes. Images are an important part of providing a rich, user-friendly experience online. It&rsquo;s critical to optimize how they&rsquo;re loaded and how much they weigh. 
<a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-images/">Here's a detailed checklist of best practices and how-tos</a> to make sure your beautiful images aren't hurting your page speed.</p> <h2>8. Understanding and improving INP</h2> <p>Interaction to Next Paint (INP) is the Core Web Vital that measures how responsive a page is to visitor interactions. According to Google, a 'good' INP time is faster than 200 milliseconds. Learn&nbsp;<a href="https://www.speedcurve.com/web-performance-guide/understanding-and-improving-interaction-to-next-paint/">how to identify and debug slow interactions</a>&nbsp;&ndash; and most important, how to make them faster, improve your INP time, and potentially improve your Google search rank!</p> <h2>7. Understanding and improving LCP</h2> <p>Largest Contentful Paint (LCP) is the Core Web Vital that measures when the largest visual element on the page renders. To make Google happy, aim for an LCP time under 2.5 seconds. <a href="https://www.speedcurve.com/web-performance-guide/understanding-and-improving-largest-contentful-paint/">Here's everything you need to know</a> to start measuring, debugging, and optimizing LCP.</p> <h2>6. Best practices for optimizing JS</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/javascript.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />Byte for byte, no resource affects page speed more than JavaScript. JavaScript affects network performance, CPU processing time, memory usage, and overall user experience. Inefficient scripts can slow down your website, making it less responsive and more frustrating for your users. 
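</p> <p>One widely applicable mitigation for heavy JavaScript is breaking long-running work into chunks and yielding to the browser between them, so the main thread stays free to paint and respond to input. This is a hedged sketch of the pattern, not code from any particular guide (in a browser you might prefer <code>scheduler.yield()</code> where available; <code>setTimeout</code> is the broadly supported fallback):</p>

```javascript
// Long tasks block the main thread and hurt metrics like INP.
// A common mitigation is to split work into chunks and yield
// to the browser between them.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

async function processInChunks(items, size, fn) {
  for (const batch of chunk(items, size)) {
    batch.forEach(fn);
    // Yield between chunks so the browser can paint and handle input.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

<p>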
<a href="https://www.speedcurve.com/web-performance-guide/best-practices-for-optimizing-javascript/">This guide walks you through essential techniques</a> for reducing the negative impact of JS on your pages, both during the initial page load and during subsequent interactions.</p> <h2>5. A complete guide to performance budgets</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/perf-budgets.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's easier to build a fast website than it is to keep a website fast. If you've invested countless hours in speeding up your site, but you're not using performance budgets to prevent regressions, you could be at risk of wasting all your efforts! <a href="https://www.speedcurve.com/web-performance-guide/complete-guide-performance-budgets/">Here's how to get started.</a></p> <h2>4. Get started with Core Web Vitals</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/core-web-vitals.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>This is it, your one-stop shop. <a href="https://www.speedcurve.com/web-performance-guide/get-started-with-core-web-vitals/">Everything you need to know about Core Web Vitals</a> &ndash; from SEO and business impact to how to continuously monitor, catch regressions, and fix issues with each Vital.</p> <h2>3. The psychology of site speed</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/cognitive-load.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />If you don't consider time a crucial usability factor, you're missing a fundamental aspect of the user experience. Before getting into the technical side of web performance, it's important to first understand <a href="https://www.speedcurve.com/web-performance-guide/the-psychology-of-web-performance/">the roots of our craving for lightning-fast online experiences</a>.&nbsp;</p> <h2>2. 
Fifteen page speed optimizations that sites ignore (at their own risk)</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/15-optimizations.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />Last summer, I analyzed the home pages of twenty leading websites and found that many sites are not taking advantage of&nbsp;a surprising number of optimization best practices &ndash; to the detriment of their performance metrics, and more importantly, to the detriment of their users and ultimately their business. Which of <a href="https://www.speedcurve.com/blog/15-neglected-page-speed-optimizations/">these neglected page speed optimizations</a> could you be missing out on?</p> <h2>1. Averages, medians, and percentiles</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/averages-medians-percentiles.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />This remains the most-read article in our entire library. It goes to show that there are a lot of people out there trying to understand their data. There are many ways to view aggregate web performance data, including 50th percentile, 75th percentile, 95th percentile, and average. What do they each mean, and which should you use? <a href="https://www.speedcurve.com/web-performance-guide/averages-medians-percentiles/">Find out here.</a></p> <h2>Looking ahead...</h2> <p>We love feedback! Did you find these articles helpful? Is there a topic or set of best practices you'd like us to write about? Let us know at <strong>support@speedcurve.com</strong>.</p> <p><a href="https://www.speedcurve.com/signup/?utm_source=blog&amp;utm_medium=blog&amp;utm_campaign=blog-trials"><img class="blog-img" src="https://blog-img.speedcurve.com/img/530/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Thu, 19 Dec 2024 00:00:00 +1300