SpeedCurve Blog https://www.speedcurve.com/blog/ Speed matters. Get the latest on how the areas of design and performance overlap with a focus on creating great user experiences. Demystifying Cumulative Layout Shift with CLS Windows https://www.speedcurve.com/blog/cls-windows-core-web-vitals <p>As we all know, naming things is hard.</p> <p>Google's Core Web Vitals are an attempt to help folks new to web performance focus on three key metrics. Not all of these metrics are easy to understand based on their names alone:</p> <ul> <li><strong>Largest Contentful Paint (LCP)</strong> &ndash; When the largest visual element on the page renders</li> <li><strong>First Input Delay (FID)</strong> &ndash; How quickly a page responds to a user interaction (FID will be replaced by Interaction to Next Paint in March 2024)</li> <li><strong>Cumulative Layout Shift (CLS)</strong> &ndash; How visually stable a page is</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/457/cls-windows-core-web-vitals.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Any time a new metric is introduced, it puts the burden on the rest of us to first unpack all the acronyms, and then explore and digest what concepts the words might refer to. This gets even trickier if the acronym stays the same, but the logic and algorithm behind the acronym changes.</p> <p>In this post, we will dive deeper into Cumulative Layout Shift (CLS) and how it has quietly evolved over the years. Because CLS has been around for a while, you may already have some idea of what it represents. Before we go any further, I have a simple question for you:</p> <blockquote> <p>How do you think Cumulative Layout Shift is measured?&nbsp;</p> </blockquote> <p>Hold your answer in your head as we explore the depths of CLS. I'm interested if your assumptions were correct, and there's a poll at the bottom of this post I'd love you to answer.</p> <p>&nbsp;</p><h2 style="text-align: left;">Old-school CLS</h2> <p>In May 2020, Google introduced <a href="https://web.dev/cls/">CLS</a> as part of <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals.</a> To help improve user experience, CLS tracked any elements that moved around on the page as it renders. You know... that annoying moment when the ad pops in late in the page render and pushes the content you were already reading further down the page. That yucky moment is called a layout shift.</p> <p>With the original version of CLS, all those layout shifts went through some funky maths, which compared the size of the layout shift to the size of the viewport and how far the element moved, and then converted it into a numeric score. The scores of all the individual layout shifts were then added together to give you the&nbsp;<em>cumulative layout shift.</em></p> <p><em><img class="blog-img" src="https://blog-img.speedcurve.com/img/457/cls-score2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></em></p> <p>I've always found it odd that Google left the representation of CLS as a decimal score. I'm a simpleton and find parsing all those decimals takes extra mental effort. It feels more complicated than it needs to be. Since it's an unintuitive metric anyway and takes effort to understand, why not just turn it into a score out of 100? That's why people love the Lighthouse scores so much.</p> <h2>The problem</h2> <p>After the launch of CLS, a major issue quickly surfaced. 
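</p>
<p>To see where the issue came from, it helps to look at roughly how the original metric was put together. Each layout shift was scored by multiplying the <em>impact fraction</em> (how much of the viewport the unstable elements affected) by the <em>distance fraction</em> (how far they moved, relative to the viewport), and every score was added to a single running total. The sketch below approximates that old behaviour using the Layout Instability API in a Chromium-based browser &ndash; it's purely illustrative, not how any RUM library reports CLS today:</p>
<pre><code>// Rough sketch of the original "sum everything, forever" behaviour.
// Requires a browser that exposes layout-shift entries (Chromium).
let cls = 0;

const observer = new PerformanceObserver(function (list) {
  for (const entry of list.getEntries()) {
    // Shifts caused by recent user input don't count towards CLS.
    if (!entry.hadRecentInput) {
      cls += entry.value; // value = impact fraction x distance fraction
    }
  }
  console.log('Old-school CLS so far:', cls);
});
observer.observe({ type: 'layout-shift', buffered: true });
</code></pre>
<p>Note that the total in that sketch never resets.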
Old-school CLS measured <strong>all</strong> the layout shifts over the <strong>entire</strong> lifespan of the page. This collided head-on with developers moving to Single Page Apps (SPAs). Since a SPA is by definition a full app shoved inside a single page, the CLS for a SPA would continue to grow for the lifetime of the app/page. This led to massive CLS scores for SPAs and a rethink of how CLS should be measured.</p> <h2>Introducing CLS windows</h2> <p>In 2021, Google <a href="https://web.dev/evolving-cls/">relaunched CLS with a session window.</a> The idea was to measure a burst of layout shifts within a 5s session window. A new session starts if there is a gap of more than 1s between layout shifts. At the end of the page lifespan, the session window with the highest total layout shifts becomes CLS.</p> <p><video style="width: 100%; max-width: 600px;" autoplay="autoplay" loop="loop" muted="" controls="controls"><source src="https://blog-img.speedcurve.com/img/457/session-window.webm" type="video/webm" /> <source src="https://blog-img.speedcurve.com/img/457/session-window.mp4" type="video/mp4" /></video></p> <p style="text-align: center;"><em>Example of CLS sessions windows from <a href="https://web.dev/evolving-cls/">web.dev</a></em></p> <p>By introducing a session window, Google reduced the impact of the SPA problem. The CLS for any long-lived page was now limited to a 5s window containing the largest layout shifts.</p> <p>(Measuring the performance of SPAs is still an ongoing problem for Core Web Vitals, however, and there's more work being done on measuring metrics relative to the <a href="https://developer.chrome.com/blog/soft-navigations-experiment/">soft navigations of a SPA</a>.)</p> <h2>Same name, different algorithm... yikes</h2> <p>This was a major change to CLS that I think a lot of people missed. Google did a great job of consulting the web performance community on the change, and those changes were well thought out and welcomed. But by keeping the naming of CLS the same, many people we speak to still fundamentally misunderstand what CLS represents. As it still says on the tin, many people still believe that CLS is the accumulation of <strong>all</strong> the layout shifts on a page. That's what it used to be, but it's not anymore. Maybe Google should have renamed CLS to indicate how different it is now. I don't think anyone could face a four letter acronym. CLSW? WCLS?&nbsp;</p> <p>To help people get their heads around CLS windows, we've introduced a new visualization to SpeedCurve that shows you each session window, as well as which session is the largest and therefore used as the CLS score.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/457/layout_shifts.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Layout shifts" /></p> <p style="text-align: center;"><em>A synthetic test showing three session windows with the largest window of 0.71 used as the CLS score.</em></p> <h2>Any layout shift is a bad shift</h2> <p>Your pages should really be fully rendered in under 5s, but if you have slow loading pages and dodgy third parties that take longer than 5s to stop jiggling your elements about, then there's a chance your CLS score doesn't accurately let you know just how bad your page is. 
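</p>
<p>You can see why by looking at the windowing logic itself. Here's a rough sketch of how session windows are built, adapted from the approach described on web.dev &ndash; a window closes after a 1-second gap between shifts or once it spans 5 seconds, and only the worst window is reported. Treat it as an illustration of the algorithm rather than production code:</p>
<pre><code>// Sketch of session-windowed CLS (the post-2021 definition).
let clsValue = 0;        // the reported CLS: the worst window so far
let sessionValue = 0;    // running total for the current window
let sessionEntries = [];

const po = new PerformanceObserver(function (list) {
  for (const entry of list.getEntries()) {
    if (entry.hadRecentInput) continue; // ignore shifts caused by user input

    const first = sessionEntries[0];
    const last = sessionEntries[sessionEntries.length - 1];
    const startsNewWindow =
      sessionEntries.length === 0 ||
      entry.startTime - last.startTime >= 1000 ||  // 1s gap since the last shift
      entry.startTime - first.startTime >= 5000;   // window capped at 5s

    if (startsNewWindow) {
      sessionValue = entry.value;
      sessionEntries = [entry];
    } else {
      sessionValue += entry.value;
      sessionEntries.push(entry);
    }

    // Only the largest window ever becomes the CLS score.
    if (sessionValue > clsValue) clsValue = sessionValue;
  }
});
po.observe({ type: 'layout-shift', buffered: true });
</code></pre>
<p>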
Only the worst layout shifts within the 5s session window will contribute to CLS.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/457/hidden_shifts.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Hidden layout shifts" /></p> <p>In the example above, there are large shifts in the second session window, but these are not included in the final CLS score because the total of the layout shifts in the third session window is larger. This masks the fact that overall this page has a total layout shift score nearly twice as bad as what is reported by CLS, and the user suffered through many large layout shifts as the page rendered.</p> <h2>All metrics need validation before you rely on them</h2> <p>The scenario described above is something to watch out for and a good reminder that you have to validate what a metric represents on your pages. In the same way that you need to check that the element selected for Largest Contentful Paint is an element you actually care about, you should also check that windowed CLS is not hiding any large layout shift(s) from you.</p> <p>From a user's perspective, any large layout shift is a bad shift. So on our Test dashboard, we continue to show you all the layout shifts that happened as the page rendered and which specific elements moved. You really do need to get rid of all of them for your page to feel as stable and solid as possible as it renders.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/457/big_shift.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Large layout shift" /></p> <p>Focus on improving your user experience first and making your pages feel rock solid as they render. That will naturally lead to a lower CLS score and improved ranking on Google.</p> <h2>Don't forget: Core Web Vitals are just a means to an end</h2> <p>Core Web Vitals are meant to help you measure and improve user experience. You can either try and improve your Core Web Vitals metrics, or just work on improving your user experience and the metrics will follow. 
<a href="https://www.speedcurve.com/customers/">At SpeedCurve, we're here to help you craft a fast and joyful user experience.</a> Metrics are just a means to that end goal.</p> <p>Learn more:</p> <ul> <li><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Guide to Core Web Vitals</a></li> <li><a href="https://support.speedcurve.com/docs/track-core-web-vitals">How to track Core Web Vitals</a></li> <li><a href="https://support.speedcurve.com/docs/diagnose-cumulative-layout-shift-cls-issues">How to diagnose CLS issues</a></li> </ul> <h2>Super-quick poll</h2> <p>Before you depart, I'd love to know if the windowed CLS changes were already something you understood, or if you thought all layout shifts were still included in the Cumulative Layout Shift score.&nbsp;</p> <div id="strawpoll_kogjkBa09Z6" class="strawpoll-embed" style="height: 516px; max-width: 640px; width: 100%; margin: 0 auto; display: flex; flex-direction: column;"><iframe id="strawpoll_iframe_kogjkBa09Z6" style="position: static; visibility: visible; display: block; width: 100%; flex-grow: 1;" title="StrawPoll Embed" src="https://strawpoll.com/embed/kogjkBa09Z6" frameborder="0" allowfullscreen="">Loading...</iframe></div> Tue, 01 Aug 2023 00:00:00 +1200 Exploring performance and conversion rates just got easier https://www.speedcurve.com/blog/web-performance-conversions <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/454/conversion-lcp-trend-social.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Demonstrating the impact of performance on your users &ndash; and on your business &ndash; is one of the best ways to get your company to care about the speed of your site.</p> <p>Tracking goal-based metrics like conversion rate alongside performance data can give you richer and more compelling insights into how the performance of your site affects your users. This concept is not new by any means. In 2010, the Performance and Reliability team I was fortunate enough to lead at Walmartlabs&nbsp;<a href="https://www.slideshare.net/devonauerswald/walmart-pagespeedslide">shared our findings</a>&nbsp;around the impact of front-end times on conversion rates. (This study and a number of other case studies tracked over the years can be found at&nbsp;<a href="https://wpostats.com/">WPOstats</a>.)</p> <p>Setting up conversion tracking in SpeedCurve RUM is fairly simple and definitely worthwhile. This post covers:</p> <ul> <li>What is a conversion?</li> <li>How to track conversions in SpeedCurve</li> <li>Using conversion data with performance data for maximum benefit</li> <li>Conversion tracking and user privacy</li> </ul><h2>What is a conversion?</h2> <p>The performance of your website can make or break engagement with your users. Lighthouse scores and audits and Core Web Vitals thresholds (<a href="https://www.speedcurve.com/features/performance-monitoring/">which are both available in SpeedCurve</a>) offer guidance on thresholds and optimizations, but they can't show you how performance affects user behavior and outcomes.</p> <p>It's not enough to improve your Lighthouse score or Core Web Vitals results. You need to validate that improving your performance metrics actually helps improve user experience, and ultimately your business metrics. This is where conversion tracking comes in.</p> <p>Simply put, a conversion is a measurable outcome for a user session. 
Those outcomes can take many forms, such as:</p> <ul> <li>Checkout events, such as adding an item to a shopping cart or paying for an order</li> <li>Signing up for a newsletter</li> <li>Downloading an ebook</li> <li>Requesting a quote for a service</li> <li>Booking a hotel reservation</li> </ul> <p>Those are all examples of conversions.</p> <h2><span style="font-size: 35px; color: #000000;">How do you track conversions in SpeedCurve?</span></h2> <p>One of the biggest factors I've seen that keeps people from tracking conversions alongside performance data is the assumption that it's difficult to configure and requires code modifications. This is no longer the case!&nbsp;</p> <p>Conversions are one example of&nbsp;<a href="https://support.speedcurve.com/docs/custom-data">custom data</a>&nbsp;you can collect in SpeedCurve. There are two supported methods for tracking a conversion:</p> <ol> <li>Using the&nbsp;<a href="https://support.speedcurve.com/docs/using-js-rum-api#tracking-conversions-with-the-js-rum-api">JavaScript API</a>, you can instrument your code to include:<br /><code>LUX.addData('order_confirmed', 1)</code></li> <li>Or you can elect to flag conversions based on traffic to a known&nbsp;<a href="https://support.speedcurve.com/docs/using-url-patterns">URL pattern</a><br /><code>domain.com/checkout/orderconfirmation*</code></li> </ol> <p>This second option has become the preferred method for users due to the frictionless 'no code' approach.</p> <p><strong>Learn more:</strong>&nbsp;<a href="https://support.speedcurve.com/docs/conversions" target="_blank" rel="noopener">Setting up conversion tracking in SpeedCurve</a></p> <h2><span style="font-size: 35px; color: #000000;">Methods and use cases for using conversion data</span></h2> <p>After you've added conversion data to SpeedCurve, there are a few different ways you can use it.</p> <h3>Correlation charts</h3> <p>Correlation charts demonstrate how sensitive users are to different metrics. They give you a histogram view of all your user traffic, broken out into cohorts based on performance metrics such as Start Render and Largest Contentful Paint. The chart also includes an overlay that shows you a user engagement metric or business metric &ndash; such as bounce rate or conversion rate &ndash; that correlates to each of these cohorts. This lets you see at a glance the relationship between performance, user engagement, and your business.</p> <p>In the example below, Largest Contentful Paint has a demonstrated negative correlation with the conversion rate. In other words, as sessions get slower, the conversion rate gets worse.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/454/correlation.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Histogram of Largest Contentful Paint and a correlated line chart showing conversion rate at each bucket" /></p> <p>Some metrics &lt;cough,&nbsp;<a href="https://www.speedcurve.com/blog/first-input-delay-google-core-web-vitals/">First Input Delay</a>, cough, cough&gt; end up having little to no correlation with goal-based metrics like conversion.</p> <p>Sometimes results vary by site. For example, conversion rates for luxury goods tend to be much lower than for commodity goods. So for some brands, not only are conversions low, but users seem to have much more tolerance due to higher switching costs.
This is why it's so important to look at your own data when understanding which metrics matter for your site.</p> <p>Whether you attempt to shoot holes in it or not, it's pretty easy to get behind the idea that the data in correlation charts is at least directionally accurate.&nbsp;</p> <p>Illustrating this relationship can help justify resourcing for page speed and improving the user experience. Stakeholders who aren't in the weeds on performance can at least understand the potential impact on revenue.</p> <p>On the other hand, this chart is not super helpful as an operational tool on a daily basis. Typically it is used to prove a point to stakeholders, or validate which metrics you should be focused on. It's not something you really need in your daily performance toolbox.</p> <p><strong>Learn more:</strong> <a href="https://support.speedcurve.com/docs/create-correlation-charts" target="_blank" rel="noopener">Creating correlation charts</a></p> <h3>Time series charts and trend analysis</h3> <p>Trending metrics over time is nothing new and pretty commonplace for any metrics you collect consistently. Conversion is no different. It's important to see how user behavior changes during time of day, time of week, during a major campaign, etc.</p> <p>It's also helpful to see if performance is having a sustained direct impact. In this example, there was a regression in LCP, which appears to have had a sustained impact on conversion rate.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/454/lcp_conv_timeseries.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series chart showing LCP increase with associated drop in conversion rate." /></p> <p><strong>Learn more:</strong> <a href="https://support.speedcurve.com/docs/trend-metrics-compare-time-periods">Trend metrics &amp; compare time periods</a></p> <h3>A/B testing and cohort analysis</h3> <p>Segmenting user sessions gives you the ability to compare cohorts to each other and see how each group trends over time.</p> <p>In the example below, segmentation is done using <a href="https://support.speedcurve.com/docs/dimensions">custom dimensions</a> to compare an A/B test where front-end optimizations were made in the variant. This allows you to see if improvements to conversion are consistent or more variable and not as definitive.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/454/conversion-a_b-trend.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series showing conversion rates by A/B test" /></p> <p>After looking at this data, there was much more confidence in the positive impact the changes had on conversion rates.</p> <p><strong>Learn more:</strong> <a href="https://support.speedcurve.com/docs/ab-testing-rum">A/B testing in RUM</a></p> <h3>Comparing converted sessions and pages</h3> <p>Once you've started tracking conversions, you can really open up the possibilities in some of our default dashboards.</p> <p>We recently released the <a href="https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard/">RUM Compare</a> dashboard. This unique view allows you to compare different cohorts to each other and understand the differences in metrics, often driven by demographics or particular clusters of users exhibiting a distinct behavior. Sometimes this is a time-based cohort, while other times it's an A/B test. 
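</p>
<p>Whichever cohort you're comparing, the tagging side can be the same custom-data call used for conversions earlier in this post (check the dimensions docs linked above for the specifics of your setup). As a hedged sketch &ndash; the dimension and variant names here are made up, and you'd swap in whatever your experiment framework exposes &ndash; it can be as small as:</p>
<pre><code>// Hypothetical example: tag the session with an A/B variant so it can be
// segmented in SpeedCurve later, then flag the conversion itself.
LUX.addData('ab_checkout_test', 'variant-b'); // made-up dimension name
LUX.addData('order_confirmed', 1);            // same call shown earlier
</code></pre>
<p>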
One of the more interesting questions the RUM Compare dashboard can help to answer is "What is different between converted sessions and non-converted sessions?"&nbsp;</p> <p>For example:</p> <p>On the surface, things in the chart below look to be in conflict with the principle that faster pages = $$. As you can see, the converted sessions have a much slower LCP overall than non-converted sessions. Say it isn't so!</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/454/comapare_summary.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Summary of comparison between converted and non-converted sessions." /></p> <p>On closer inspection, however, you see that the makeup of page views for converted sessions contains a lot more checkout pages, which in this case have more backend time, driving up all the front-end focused metrics, such as LCP, for the overall session numbers.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/454/compare_pages.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparison of page views between converted and non-converted sessions" /></p> <p>If you break this down further and look at individual page performance for pages common to both cohorts, such as product pages, you can see a different story. Illustrated here, the pages for converted sessions appear to have a much faster LCP:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/454/product_lcp_compare.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Bubble chart comparison of page views for largest contentful paint" /></p> <p>In this case, there is a strong argument for making the product pages faster, even though LCP scores may have been flagged as 'passing' the Core Web Vitals threshold. This is just one example of how the RUM Compare dashboard can be used to explore your converted user base.&nbsp;</p> <h2>User privacy</h2> <p>Data privacy is serious business and always has been at SpeedCurve. We sometimes get customers who are concerned about tracking conversions, fearing that the information collected from end users contains PII (personally identifiable information).&nbsp;</p> <p><a href="https://support.speedcurve.com/docs/rum-data#what-speedcurve-rum-does-not-collect">SpeedCurve RUM does not gather any personally identifiable information</a> such as IP addresses, localStorage, and cookies. By tracking conversions, you are simply providing another segment for looking at your data. No additional user information is collected. (<a href="https://support.speedcurve.com/docs/data-protection-gdpr-compliance">Click here</a> to learn more about SpeedCurve and data privacy.)</p> <h2>Summary</h2> <p>As a SpeedCurve RUM customer, it's easier than ever to start tracking conversions alongside your performance metrics. Doing so may open up a lot of opportunity to make business-driven decisions about performance.</p> <p>Not currently a SpeedCurve customer? <a href="https://www.speedcurve.com/signup/">Start a free trial today!</a></p> Tue, 11 Jul 2023 00:00:00 +1200 What is page bloat? And how is it hurting your business, your search rank, and your users? https://www.speedcurve.com/blog/page-bloat-web-performance <p>For more than ten years, I've been writing about page bloat, its impact on site speed, and ultimately how it affects your users and your business.
You might think that this topic would be played out by now, but every year I learn new things &ndash; beyond the overarching fact that pages keep getting bigger and more complex, as you can see in this chart, using data from the HTTP Archive.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-total-size-breakdown.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In this post, we'll cover:</p> <ul> <li>How much pages have grown over the past year</li> <li>How page bloat hurts your business and &ndash; at the heart of everything &ndash; your users</li> <li>How page bloat affects Google's Core Web Vitals (and therefore SEO)</li> <li>If it's possible to have large pages that still deliver a good user experience</li> <li>Page size targets</li> <li>How to track page size and complexity</li> <li>How to fight regressions</li> </ul><h2>What do we mean when we talk about page size?</h2> <p>When we talk about page size, we're referring to overall page weight and complexity. This includes:</p> <ul> <li><strong>Size</strong>&nbsp;&ndash; Total page weight in bytes. Size matters, especially to mobile users who have limited and/or metered data.</li> <li><strong>Resources</strong>&nbsp;&ndash; Total number of resources (HTML, images, fonts, etc.) on the page. The more resources, the greater the complexity and the increased likelihood of rendering delays and blockages.&nbsp;</li> <li><strong>HTML</strong>&nbsp;&ndash; Typically the smallest resource on the page, HTML's performance risk is usually negligible. Having said that, I recently did some digging into a page where the total HTML size jumped dramatically because of a bunch of inline JavaScript, which led to rendering delays, so keeping an eye on HTML size is still a good idea.</li> <li><strong>Images</strong>&nbsp;&ndash; Often the greatest contributor to page bloat. Looking at the 90th percentile of the distribution of page weight, images account for a whopping 5.7 MB of a roughly 8.2 MB page. In other words, images comprised almost 75% of the total page weight. And if that already wasn&rsquo;t enough, the number of images on a page has been linked to lower conversion rates on retail sites. (More on that later.)</li> <li><strong>Video</strong> &ndash; For the first time in the many years that I've been covering the topic of page growth, I'm going to focus on the volume of video that pages contain. That's because video seems to have had a surge in growth, which should be a cause of concern for anyone who cares about metrics like Largest Contentful Paint, which measures the largest visual element on a page &ndash; including videos. (More on that below.)</li> <li><strong>JavaScript</strong>&nbsp;&ndash; A page can have a relatively low JS weight but still suffer from JS-inflicted performance problems. Even a single 100 KB third-party script can wreak havoc with your page. The more scripts on your page, the greater the risk. It&rsquo;s not enough to focus solely on blocking JavaScript. It&rsquo;s possible for your pages to contain zero blocking resources and still have less-than-optimal performance because of how your JavaScript is rendered. That&rsquo;s why it&rsquo;s so important to understand CPU usage on your pages, because JavaScript consumes more CPU than all other browser activities combined. While JavaScript blocks the CPU, the browser can&rsquo;t respond to user input. 
This creates what&rsquo;s commonly called &ldquo;jank&rdquo; &ndash; that annoying feeling of jittery, unstable page rendering.</li> <li><strong>CSS</strong>&nbsp;&ndash; Like JavaScript, CSS doesn&rsquo;t have to be bulky to cause problems. Poorly executed stylesheets can create a host of performance problems, ranging from stylesheets taking too long to download and parse, to improperly placed stylesheets that block the rest of the page from rendering. And, similar to JavaScript, more CSS files equals more potential trouble.</li> </ul> <h2>How does page bloat hurt Core Web Vitals?</h2> <p><span style="color: #000000;"><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a> are a Google search ranking factor. Given that Google continues to dominate search usage, you should care about Vitals alongside the other metrics you should be tracking.</span></p> <p><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/core-web-vitals.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></span></p> <p><span style="color: #000000;">Page bloat can have a huge impact on your Vitals:&nbsp;</span></p> <ul> <li><span style="color: #000000;"><strong>Cumulative Layout Shift</strong> &ndash;&nbsp;</span><span style="color: #000000;">Excessive resources can contribute to a poorer CLS score, as more elements shift on the page.&nbsp;</span></li> <li><span style="color: #000000;"><strong>Largest Contentful Paint</strong> &ndash; LCP measures when the largest visual element (image or video) in the viewport finishes rendering. Heavier visual elements can take much longer to render, especially videos.&nbsp;</span><span style="color: #000000;">If you're serving huge videos that take several seconds to fully render, it could be hurting your LCP times.</span></li> <li><span style="color: #000000;"><strong>Interaction to Next Paint &amp; Total Blocking Time</strong> &ndash; Excessive and/or non-performant JavaScript can hurt interactivity metrics, like INP, FID, and TBT. The more scripts on your pages, the greater the risk.</span></li> </ul> <h2>How does page bloat hurt your business?</h2> <p>A&nbsp;<a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-page-speed-load-time/">Google machine-learning study</a>&nbsp;I helped with a few years ago found a few strong predictors of whether or not a page resulted in a conversion, ranked in the following order:</p> <ol> <li>The total number of page elements was the single greatest predictor of conversions.</li> <li>The number of images on the page was the second greatest predictor.</li> <li><span style="color: #1f1f1f;">The more scripts there were in a series of pages in a session, the less likely that session was to convert.</span></li> </ol> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/google-study2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Image size is another issue, as excessive image weight hurts your SEO ranking in Google Image Search. Given that Image Search comprises upwards of 26% of Google searches, this is something you should care about. 
(You can dive deeper into image optimization and SEO tips in&nbsp;<a href="https://developers.google.com/search/docs/advanced/guidelines/google-images">this article</a>&nbsp;in Google Search Central.)&nbsp;</p> <h2>How does page bloat hurt your visitors?</h2> <p>In his excellent series of blog posts, <a href="https://infrequently.org/2022/12/performance-baseline-2023/">The Performance Inequality Gap</a>, Alex Russell makes a compelling case that serving huge pages is an "ethical crisis for front end":</p> <blockquote> <p>Meanwhile, sites continue to send more script than is reasonable for 80+% of the world's users, widening the gap between the haves and the have-nots...</p> <p>Essential public services are now delivered primarily through digital channels in many countries. This means what the frontend community celebrates and promotes has a stochastic effect on the provision of those services &mdash; which leads to an uncomfortable conversation because, taken as a whole, it isn't working.</p> <p>Pervasively poor results are part of why responsible public sector organisations are forced to develop HTML-first, progressive enhancement guidance in stark opposition to the "frontend consensus".</p> <p>This is an indictment: modern frontend's fascination with towering piles of JavasScript complexity is not delivering better experiences for most users.</p> </blockquote> <h2>HTTP Archive research: Background and caveats</h2> <p>Before we get into the analysis, some context:</p> <ul> <li><strong>The numbers cited below all come from the&nbsp;<a href="https://httparchive.org/reports/page-weight">HTTP Archive</a>.</strong>&nbsp;I looked at the top 1M sites and top 1K sites, focusing on median and 90th percentile numbers. This is to try to understand how a "typical" page might perform, as well as pages in the "longtail". (It's super important to understand longtail performance. If 10% of your visitors are suffering, that's a lot of users!)</li> <li><strong>These numbers should not be taken as a benchmark for your own site.</strong>&nbsp;You haven't necessarily achieved anything great if your pages are smaller than this, nor have you failed by default if your pages are bigger.&nbsp;</li> </ul> <h2>Desktop: The median page has grown 6% in one year</h2> <p>If you look at the year-over-year increase from 2087.2 to 2214.3 KB, you're forgiven if your initial reaction is "that's not too bad". But 6% growth is significant. At that rate, the median page will be close to 3 MB within five years. That's not trivial.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-total-size-1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Even if you consider this growth not too worrisome, it's important to keep in mind that it's just an aggregated number, which masks the more dramatic increases in specific resource types, such as JavaScript and video.</p> <h2>Desktop: Median JavaScript weight increased by 14%</h2> <p>Almost 627 KB of JavaScript &ndash; spread across the 24 JS requests contained on the median page &ndash; is a lot of JavaScript.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-javascript-2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />Each script on your page represents a potential single point of failure, especially if the script is render-blocking or has an excessive Long Tasks time. The more scripts on your page, the greater the performance risk. 
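</p>
<p>One way to see whether all of that JavaScript is actually tying up the main thread is to watch for Long Tasks. A minimal sketch using the browser's Long Tasks API (Chromium-based browsers; register it as early in the page as you can, since tasks that run before the observer exists may be missed):</p>
<pre><code>// Log main-thread tasks longer than 50ms, and where they came from.
const longTaskObserver = new PerformanceObserver(function (list) {
  for (const task of list.getEntries()) {
    const source = task.attribution && task.attribution[0];
    console.log(
      'Long Task:',
      Math.round(task.duration) + 'ms',
      source ? (source.containerSrc || source.containerType) : 'unknown source'
    );
  }
});
longTaskObserver.observe({ entryTypes: ['longtask'] });
</code></pre>
<p>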
Even if the script doesn't block the page from rendering, excessive and unoptimized<span style="color: #000000;">&nbsp;JavaScript can hurt interactivity metrics like Interaction to Next Paint and Total Blocking Time.&nbsp;</span></p> <h2>Desktop: Median video weight increased by 32%</h2> <p>In other words, the amount of video being served has increased by almost one-third. That's huge.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-video-1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><span style="color: #000000;">As mentioned earlier in this post, Largest Contentful Paint (one of Google's Core Web Vitals)</span><span style="color: #000000;">&nbsp;measures when the largest visual element (image or video) in the viewport *finishes* rendering. Heavier visual elements can take much longer to render, especially videos. If</span><span style="color: #000000;">&nbsp;you're serving huge videos to your users, it could be hurting your LCP times, which could then be hurting your Google search ranking.</span></p> <p><span style="color: #000000;">(If you're adding up the numbers and wondering how the video weight makes sense given the overall page weight, it's important to keep in mind that these medians are calculated based on pages that contain these resources. Of the million URLs tracked by the HTTP Archive, not all contain video.)&nbsp;</span></p> <h2>Mobile: The median page is smaller (but there's a catch)</h2> <p>It's genuinely heartening to see that, overall, pages served to mobile haven't gotten bigger.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-total-size.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>However, when we look at the breakdown across resource types, we can see growth in video and JavaScript, as we did with desktop:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-total-size-breakdown.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>More on this below...</p> <h2>Mobile: Median video weight increased by 62%</h2> <p>It's hard to decide which is more concerning: the fact that video weight has increased so dramatically in just one year, or the fact that the median page that serves video to mobile devices now serves more than 2.5 MB of video.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-video.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's easy to forget that not everyone has unlimited data. If you're serving unasked-for video to your users, you could be eating up their data plans.</p> <p>It's also important to keep in mind that Google's recommended 2.5s threshold for "good" LCP applies to both mobile and desktop experiences &ndash; it doesn't differentiate. If your videos are the hero element on your pages, this could be causing your LCP times for mobile to be dangerously slower than what Google demands.</p> <p>(<span style="color: #000000;">If you're skimming this post and missed the earlier note about video weight, I'll mention again that these medians are calculated based on pages that contain these resources. Of the million URLs tracked by the HTTP Archive, not all contain video. 
That's why the numbers for median total size don't "match" the median video weight.)&nbsp;</span></p> <h2>Mobile: Median JavaScript weight increased by 11%</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-javascript.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Almost 580 KB of JavaScript &ndash; spread across the 24 JS requests contained on the median page &ndash; is a lot of JavaScript. Given that mobile devices tend to have slower CPUs than desktop, parsing all that JavaScript comes with a performance cost.</p> <h2>Median vs 90th percentile: Total size and resources</h2> <p>Things get really interesting when we compare numbers at the median versus 90th percentile. As mentioned earlier,&nbsp;<span style="color: #000000;">this is to try to understand how a "typical" page might perform, as well as pages in the "longtail". Why is it important to understand longtail performance? Think of it this way: if 10% of your visitors are suffering, that's a lot of visitors!</span></p> <h3><span style="color: #000000;">90p page is almost 4X larger than median page</span></h3> <p><span style="color: #000000;">The 90th percentile page served to desktop is close to 9 MB, making it almost four times larger than the median page.</span></p> <p><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/median-vs-90p-total-size.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />The 90th percentile page served to mobile is similarly huge, coming in at almost 8 MB.&nbsp;</span></p> <h3><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/median-vs-90p-total-size-mobile2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></span><span style="color: #000000;">90p page contains almost 3X the number of requests</span></h3> <p><span style="color: #000000;">The median page served to desktop contained a total of 82 resources (HTML, JS, images, videos, etc.) compared to the 90th percentile with a total of 233 resources. In other words, the 90p page contains almost three times the number of requests.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/median-vs-90p-breakdown-1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>You can see the breakdown of requests across different resource types:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/median-vs-90p-breakdown-2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Not surprisingly, images and JS account for the majority of requests. I was surprised to see how many requests come from CSS and HTML. And I was very surprised to see that the 90p page serves a total of 12 videos!</p> <h2>Desktop: 90p JavaScript weight increased by almost 20%</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-90p-javascript.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>20% growth in a single resource type is alarming on its own. Also alarming is the fact that pages with almost 2 MB of JavaScript &ndash; across 77 scripts &ndash; are being served to users. It bears repeating: excessive JS can hurt all your user-oriented performance metrics. If the scripts are blocking the main thread, then they're affecting how the page <em>looks</em> to your users.
And if those scripts are slow to respond to user interaction, then they're also affecting how the page <em>feels</em> to your users.</p> <h2>Mobile: 90p JavaScript weight increased by 17%</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-90p-javascript-1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>As stated above, this growth is concerning, alongside the fact that such a massive amount of CPU-intensive JavaScript is being served to mobile devices. Phone processors have gotten better, but they're still no match for a desktop. Next time you're visiting a site and your phone starts to heat up in your hand, remember this chart.</p> <h2>Desktop: 90p video weight increased by almost 28%</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-90p-video.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Mobile: 90p video weight increased by 77%</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-90p-video-2.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Yes, mobile video weight at the 90th percentile is actually greater than desktop video weight &ndash; not completely surprising when you consider that 12 or more video resources are being served to mobile devices.&nbsp;</p> <h2>Top 1K vs Top 1M</h2> <p>The top thousand pages tend to belong to leading sites. Those businesses perhaps have more awareness of the importance of performance optimization, along with the resources to monitor and tackle performance issues. This might be why the top 1K pages &ndash; across desktop and mobile &ndash; tend to be smaller and leaner.&nbsp;</p> <h3 style="text-align: center;">Desktop image weight</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-1k-vs-1m-images.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3 style="text-align: center;">Desktop video weight</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/desktop-1k-vs-1m-video.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3 style="text-align: center;">Mobile image weight<img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-1k-vs-1m-images.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></h3> <h3 style="text-align: center;">Mobile video weight</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/mobile-1k-vs-1m-video.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The important takeaway here isn't that if your site isn't in the top 1K you should just assume that poor performance comes with the territory. Instead, this is a great opportunity to audit your key pages and see what you can delete, defer, or optimize, so that you can emulate what bigger players are doing.</p> <h2>Is it possible to have large pages that deliver a good user experience?</h2> <p>Yes. While page size can be a red flag for real performance issues, if you care about user experience, you need to take a closer look at how your pages are built to see if the size and complexity of your pages actually affect how fast your site feels to your users.</p> <p>It's not enough to look at crude metrics like total requests and size. You need to know:</p> <ul> <li>How many of your requests are blocking requests?</li> <li>If your page contains blocking requests, how many of them occur in the critical rendering path? 
That is, how many blocking requests are there before key page metrics like Start Render and Largest Contentful Paint?</li> <li>How many of your potentially problematic requests come from third parties, and how do you maintain visibility into how they're performing?</li> <li>Are the most important images on your page the first images to render? How quickly do they show up?</li> </ul> <p>Amazon is a good example of a site that serves large, fast pages, as you can see in this recent look at our&nbsp;<a href="https://app.speedcurve.com/benchmarks/usa/retail/fast/start-render/">Retail Page Speed Benchmarks</a>:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/retail-benchmarks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In the waterfall chart for&nbsp;<a href="https://app.speedcurve.com/benchmark/retail-us/test/230628_39_d33f72f43a7e07bb13743bf9c3b840ea/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae&amp;share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">this test run</a>, you can see why the Amazon home page ranks fastest for Start Render. There's only a small number of resources on the critical rendering path, and they're all highly optimized.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/amazon-waterfall.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>So, despite the fact that the page contains 459 requests and weighs in at 4,412 KB &ndash; far beyond the median sizes shared above...</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/amazon-breakdown.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>...the page has a Start Render time of 0.4 seconds, a Largest Contentful Paint time of 0.63 seconds, and a Last Painted Hero time of 3.4 seconds.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/amazon-rendering-metrics.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>To recap: yes, you can have pages that are large and fast, but you need to be mindful of the critical rendering path.&nbsp;</p> <h2>How much content should you be serving?</h2> <p>Having said all that, making your pages as small as possible is in the best interest of your users who don't have access to fast networks and devices. Alex Russell&nbsp;<a href="https://infrequently.org/2022/12/performance-baseline-2023/">suggests</a> these per-page content targets for 2023:</p> <ul> <li>150 KB of HTML, CSS, images, and render-blocking font resources</li> <li>No more than 300-350 KB of JavaScript</li> </ul> <p>Are these targets aggressive? Yes. Will you be able to meet them? Possibly not. But as the saying goes, don't let perfect be the enemy of good.</p> <h2>Takeaways</h2> <p>I meet with so many people who build and optimize websites. When we investigate how their pages are built, I routinely witness surprise at spotting things like ghost scripts, huge unoptimized images, and blocking resources they weren't aware of. These are smart people. The problem isn't them &ndash; it's the scale of their sites, the speed of their release cycles, and the number of people who touch each page.</p> <p>We're never going to get our lean, pre-1999, under-1MB web pages back. But we can regain control over the pages we have today.</p> <h3>1. Understand the critical rendering path for each page</h3> <p>Your pages probably have some dead weight on them, and some of that weight is unoptimized. Too much stuff means you can't see the forest for the trees.
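</p>
<p>If you want a quick, rough inventory of what one of your pages is actually shipping, a sketch like this in the browser console is a reasonable starting point. It leans on the Resource Timing API; the render-blocking check uses a property that's Chromium-only at the time of writing, and cross-origin resources report zero bytes unless they send a Timing-Allow-Origin header:</p>
<pre><code>// Tally transferred bytes per resource type and list render-blocking requests.
const bytesByType = {};
const blocking = [];

for (const r of performance.getEntriesByType('resource')) {
  const type = r.initiatorType || 'other';
  bytesByType[type] = (bytesByType[type] || 0) + r.transferSize;
  if (r.renderBlockingStatus === 'blocking') blocking.push(r.name);
}

console.table(bytesByType); // bytes over the wire, grouped by initiator
console.log('Render-blocking requests:', blocking);
</code></pre>
<p>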
The key to a good user experience is quickly delivering the most important content first. Here are some&nbsp;<a href="https://developers.google.com/web/fundamentals/performance/critical-rendering-path/measure-crp">great resources for analyzing and optimizing the critical rendering path</a>.</p> <h3>2. Make sure everyone who touches a page understands the performance impact of what they do</h3> <p>All the fancy performance monitoring tools in the world can't help you if you don't have a strong performance culture at your organization. Here are some&nbsp;<a href="https://support.speedcurve.com/docs/performance-culture-best-practices">tips and best practices</a>&nbsp;to help on that journey.</p> <h3>3. Use performance budgets to fight regression</h3> <p>Page bloat happens when people stop paying attention. Pages need to be monitored consistently over time. <a href="https://support.speedcurve.com/docs/continuous-integration">Integrating performance testing into your CI/CD process</a>&nbsp;is a great way to fight regression, especially if you combine this with <a href="https://www.speedcurve.com/blog/performance-budgets/">creating&nbsp;performance budgets</a>. By creating performance budgets for key metrics &ndash; such as Start Render, Largest Contentful Paint, and various page size and weight metrics &ndash; you can get alerted when they go out of bounds.&nbsp;</p> <h3>4. Don't assume hardware and networks will mitigate page bloat</h3> <p>Increased page size and complexity is not fully mitigated by faster devices and networks, or by our hard-working browsers. Clearly we need to keep talking about it. We need to understand how ever-growing pages work against us. And we need to have strategies in place to understand and manage our pages.</p> <h3>5. Don't forget to monitor longtail performance</h3> <p>While some of your users may have newer devices and speedy networks, not all are this lucky.&nbsp;If you're using a&nbsp;<a href="https://support.speedcurve.com/docs/synthetic-vs-real-user-monitoring-rum">real user monitoring</a>&nbsp;tool,&nbsp;<a href="https://support.speedcurve.com/docs/performance-for-product-managers">keep an eye on your performance metrics at the 75th and 95th percentiles</a>&nbsp;so you have an understanding of your site's less-than-optimal performance.&nbsp;</p> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/455/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Questions or feedback?</h2> <p>I'd love to hear your thoughts and insights. If you're interested in tracking page size and complexity for your own site, we'd love to have you&nbsp;<a href="https://www.speedcurve.com/signup/"><strong>try SpeedCurve for free</strong></a>!</p> Tue, 04 Jul 2023 00:00:00 +1200 SpeedCurve Recipes: Multi-step test scripting made easy! 
https://www.speedcurve.com/blog/speedcurve-recipes-multi-step-test-scripting-made-easy <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/453/beaker-and-flask-transparent.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>You may already know that SpeedCurve lets you <a href="https://support.speedcurve.com/docs/get-started-synthetic#step-1--add-urls-regions-browsers-and-custom-settings" target="_blank" rel="noopener">test multiple URLs</a> for a site across a wide selection of <a href="https://support.speedcurve.com/docs/browsers-and-devices">browsers</a>, <a href="https://support.speedcurve.com/docs/browsers-and-devices">devices</a>, <a href="https://support.speedcurve.com/docs/browsers-and-devices#connection-bandwidth">connection types</a>, and <a href="https://support.speedcurve.com/docs/test-agent-locations">geolocations</a>.</p> <p>You may also know that SpeedCurve lets you <a href="https://support.speedcurve.com/docs/synthetic-test-scripts">add synthetic scripts to your test settings</a>, which lets you easily do things like simulate a repeat view or block a third party.</p> <p>But did you also know that we've made it easy to simulate more complex use cases? These include:</p> <ul> <li>User journeys through your site</li> <li>Checkout processes</li> <li>Submit a login form</li> <li>SPA and AJAX navigations</li> <li>Set cookies</li> </ul> <p>Keep reading to learn how Recipes make it easy for you to test a variety of scenarios that can help you understand how your visitors are experiencing your site &ndash; and how to improve their experience!</p><h2>What is a Recipe?</h2> <p><a href="https://support.speedcurve.com/recipes" target="_blank" rel="noopener">SpeedCurve Recipes</a> are detailed walkthroughs &ndash; including sample scripts &ndash; of multi-step tasks.</p> <p>You still have the ability to <a href="https://support.speedcurve.com/docs/synthetic-test-scripts">write your own custom scripts</a> from scratch and add them to your SpeedCurve test settings. 
With Recipes, we want to make your testing easier, so you have more time to do the important work of making your pages faster for your users.</p> <p>Here are some of the most-used Recipes:</p> <ul> <li><a href="https://support.speedcurve.com/recipes/simulate-a-user-journey" target="_blank" rel="noopener"><strong>Simulate a user journey</strong></a> &ndash; Use data logging to complete a multi-step script that measures an individual page.</li> <li><a href="https://support.speedcurve.com/recipes/simulate-a-checkout-process"><strong>Simulate a checkout process</strong></a> &ndash; Simulate a user journey through the checkout process.&nbsp;</li> <li><a href="https://support.speedcurve.com/recipes/simulate-a-user-journey-with-ajax-or-spa-navigation"><strong>Simulate a user journey with a SPA or AJAX navigation</strong></a> &ndash; Step through a user journey that includes a hard navigation as well as a SPA or AJAX navigation in the flow.</li> <li><a href="https://support.speedcurve.com/recipes/submitting-a-login-form"><strong>Submit a login form</strong></a> &ndash; Example scripts that authenticate the browser session in different ways.</li> <li><a href="https://support.speedcurve.com/recipes/request-blocking-for-synthetic-tests"><strong>Block a request (such as a third party)</strong></a> &ndash; Block a specific domain, or block everything *but* a specific domain.</li> <li><a href="https://support.speedcurve.com/recipes/override-dns-in-a-script" target="_blank" rel="noopener"><strong>Override DNS in a script</strong></a> &ndash;&nbsp;This script creates a fake DNS entry, forces a resolution to the localhost, sets a cookie on the domain, and navigates to the URL.</li> </ul> <p>We're always adding more, so let us know if you'd like to have one that you can't find in our&nbsp;<a href="https://support.speedcurve.com/recipes">Recipe book</a>.</p> <h2>How to add Recipes to your tests</h2> <p>To use a Recipe in SpeedCurve, you need to:</p> <h3>1. Copy the Recipe template you want to use</h3> <p>Browse the <a href="https://support.speedcurve.com/recipes">Recipe book</a> to find the script you need. Here's the script template for the <a href="https://support.speedcurve.com/recipes/simulate-a-user-journey">'Simulate a User Journey' recipe</a>:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/453/script.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>2. Customize the script template</h3> <p>For example, in the sample above, you would add your own URLs for the first and last pages in the user flow.</p> <h3>3. Insert the script into your synthetic test settings</h3> <p>You can add a script to any URL in your synthetic settings by opening up the site, and then clicking the tag icon to the right of the URL. That will expand the test parameters for that image, including a window for you to insert your script.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/453/script-icon.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><strong>IMPORTANT:</strong> <strong>You need to add separate versions of the same URL if you want to test it in multiple ways.</strong> For example, if you want to test the same URL as a regular first view as well as a repeat view, you'll need to create a separate test URL for each view, and then apply the 'Repeat View' script to the second URL.&nbsp;</p> <h2>RUM Recipes</h2> <p>We've also recently started to create Recipes for real user monitoring. 
For these Recipes, you can add custom JavaScript to your pages:</p> <ul> <li><a href="https://support.speedcurve.com/recipes/track-size-for-a-single-resource"><strong>Track size for a single resource</strong></a>&nbsp;&ndash; Capture a number of size metrics (total bytes, compressed/uncompressed size, compression ratio, HTTP headers) for a specific resource.</li> <li><a href="https://support.speedcurve.com/recipes/track-size-for-a-group-of-assets"><strong>Track size for a group of images</strong></a>&nbsp;&ndash; Find all &lt;img&gt; resources that come from a specific domain, and then capture a number of metrics, such as largest downloaded image size, total bytes for all images, largest non-cached image size, and more.</li> </ul> <p>We'll be adding more RUM Recipes in the future. Let us know what you'd like to see!</p> <h2>Questions or feedback?</h2> <p>We hope that Recipes help make your testing workflow easier, so you have more time to focus on making your pages faster. If you have any questions about how to add a Recipe to your testing, send us a note at support@speedcurve.com.</p> Mon, 05 Jun 2023 00:00:00 +1200 Farewell FID... and hello Interaction to Next Paint! https://www.speedcurve.com/blog/interaction-to-next-paint-core-web-vitals <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/449/inp-header.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Today at Google I/O 2023, it was announced that Interaction to Next Paint (INP) is no longer an experimental metric. INP will replace First Input Delay (FID) as a Core Web Vital in March of 2024.&nbsp;</p> <p>It's been three years since the Core Web Vitals initiative was kicked off in May 2020. In that time, we've seen people's interest in performance dramatically increase, especially in the world of SEO. It's been hugely helpful to have a simple set of three metrics &ndash; focused on loading, interactivity, and responsiveness &ndash; that everyone can understand and focus on.</p> <p>During this time, SpeedCurve has stayed objective when looking at the CWV metrics. When it comes to new performance metrics, it's easy to jump on hype-fuelled bandwagons. While we definitely get excited about emerging metrics, we also approach each new metric with an analytical eye. For example, back in November 2020, <a href="https://www.speedcurve.com/blog/first-input-delay-google-core-web-vitals/">we took a closer look at one of the Core Web Vitals, First Input Delay</a>, and found that it was sort of 'meh' overall when it came to meaningfully correlating with actual user behavior.</p> <p>Now that INP has arrived to dethrone FID as the responsiveness metric for Core Web Vitals, we've turned our eye to scrutinizing its effectiveness.</p> <p>In this post, we'll take a closer look and attempt to answer:</p> <ul> <li>What is Interaction to next Paint?</li> <li>How does INP compare to FID?</li> <li>What is a 'good' INP result?</li> <li>Will there be differences between INP collected in RUM vs. Chrome User Experience Report (CrUX)?</li> <li>What correlation does INP have with real user behavior?</li> <li>When should you start caring about INP?</li> <li>How can you see INP for your own site in SpeedCurve?</li> </ul> <p>Onward!</p><h2>What is Interaction to Next Paint?</h2> <p>Interaction to Next Paint (INP) is intended to measure how responsive a page is to user interaction(s).&nbsp;This is measured based on how quickly the page responds <em>visually</em> after a user interaction (i.e. 
when the page paints something in the browser's next frame after you interact with it). Because INP measures actual user interactions, it can only be monitored using a <a href="https://www.speedcurve.com/features/performance-monitoring/">real user monitoring (RUM) tool</a>.&nbsp;&nbsp;</p> <p>For the purposes of INP, an interaction is considered any of the following:</p> <ul> <li>Mouse click</li> <li>Touchscreen tap</li> <li>Key press</li> </ul> <p>INP is measured using the Event Timing API, which as of today is <a href="https://caniuse.com/mdn-api_performanceeventtiming">supported</a> in Chrome, Edge, Firefox and Opera. Alas, there is no support for Safari.</p> <h2><img class="blog-img" src="https://blog-img.speedcurve.com/img/449/inpcaniuse.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Browser support for the EventTiming API" /></h2> <p>For a more detailed explainer of INP &ndash; including a breakdown of what's in an interaction, how INP is calculated and more &ndash; see&nbsp;<a href="https://web.dev/inp/">this post by Jeremy Wagner</a>&nbsp;from the Google Chrome team.</p> <h2>What is a 'good' INP number?</h2> <p style="font-size: 16px;">INP is a new metric, so the suggested thresholds from Google are subject to change. For now, those thresholds are as follows:</p> <ul> <li style="font-size: 16px;"><strong>Good</strong> &ndash; Under 200ms</li> <li style="font-size: 16px;"><strong>Needs improvement</strong> &ndash; Between 200-500ms</li> <li style="font-size: 16px;"><strong>Poor</strong> &ndash; More than 500ms</li> </ul> <p>Note that these thresholds are all based on RUM data at the 75th percentile.&nbsp;</p> <p style="font-size: 16px;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/449/inp-thresholds.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Illustration of recommended thresholds for Good (&lt;200ms), Needs Improvement(200-500ms) and Poor(&gt;500ms)" /></p> <h2>What can affect INP?</h2> <p>Most user interactions will take place after the page is initially loaded, so it's important to understand that the measurement is typically captured after most of your other metrics (i.e. FCP, LCP, Load, etc.) have occurred. Oftentimes the page seems dormant at this point, but this is not always the case.</p> <p>Things that can affect INP include:</p> <ul> <li>Long-running JavaScript event handlers</li> <li>Input delay due to <a href="https://www.speedcurve.com/blog/javascript-long-tasks/">Long Tasks</a> blocking the main thread&nbsp;</li> <li>Poorly performing JavaScript frameworks</li> <li>Page complexity leading to presentation delay</li> </ul> <p>For some ideas around optimizing INP, I've included some great resources at the bottom of this article.</p> <h2>How do INP numbers compare to FID?</h2> <p>There are a few important things to understand about FID versus INP. Most importantly, INP includes more than just input delay. Processing time and presentation delay are also included. This is inclusive of time spent running event handlers, as well as rendering/painting delay. The following is a comparison of FID and INP at the 75th percentile for Mobile and Desktop for RUM data collected from SpeedCurve.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/449/inp_mobile.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Illustration showing mobile comparison of INP and FID numbers. FID 17ms and INP 120ms." 
/><img class="blog-img" src="https://blog-img.speedcurve.com/img/449/inp_desktop.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Illustration showing desktop comparison of INP and FID numbers. FID 17ms and INP 120ms." /></p> <p>As expected, INP values are considerably higher than FID &ndash; a useful confirmation that the metric captures much more than input delay. While desktop numbers look encouraging, mobile is clearly an opportunity for focus. This isn't a big surprise considering the impact JavaScript has on lower-powered mobile devices.&nbsp;</p> <h2>What can I expect when comparing CrUX data to RUM?</h2> <p>There are some important caveats to understand when comparing INP measured from CrUX (Chrome User Experience Report) to INP from RUM. While RUM and CrUX do their best to align, there are times when you may see differences between the two. Barry Pollard wrote a comprehensive post on this, which you can find <a href="https://web.dev/crux-and-rum-differences/">here</a>.</p> <p>For INP specifically, there are a few gotchas to consider:</p> <ul> <li>For CrUX, INP can change&nbsp;during the lifecycle of the page. Today, SpeedCurve is reporting INP for the first interaction. Other RUM tooling may also be limited by when the beacon is being sent, likely only collecting interactions that occur before the page is loaded.</li> <li>RUM, which uses JavaScript APIs, will not be able to collect INP from within iframes for the page it is measuring, while CrUX will.</li> <li>INP is collected for all <a href="https://caniuse.com/mdn-api_performanceeventtiming">supported browsers</a> in SpeedCurve RUM. CrUX is a subset of Chrome users only.</li> </ul> <h2>How does INP correlate with user behavior?</h2> <p>While understanding INP is important, does it really correlate with how users interact with your site? It's easy to focus so much on improving metrics such as Core Web Vitals that we lose sight of the bigger picture. Creating a delightful experience for end users is at the heart of web performance.</p> <p>One way of understanding how metrics such as INP align with users is to correlate with outcomes, such as conversion rates. Here is an example of how INP correlated with conversion rates for four different sites.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/449/inp_v_conversion.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Correlation charts showing the relationship between INP and conversion rates for four different sites." /></p> <p>Not surprisingly, we see the impact is different based on the slope of the conversion line, as well as the distribution of INP across user sessions. However, it's notable that there is an overall negative correlation between INP and conversion. This tells us that yes, INP seems to be a meaningful metric when it comes to user-perceived performance.</p> <h2>Measuring INP in SpeedCurve</h2> <p>We've added INP everywhere that your Vitals are showcased throughout SpeedCurve &ndash; including the Home, Vitals, Performance, and JavaScript dashboards.</p> <p><img class="blog-img" style="font-size: 35px; color: #000000;" src="https://blog-img.speedcurve.com/img/449/inp_vitals_dashboard.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="INP summary, histogram and time series from the SpeedCurve Vitals dashboard." /></p> <p>FID hasn't disappeared from the dashboards yet, but likely will as we get closer to March 2024.
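</p> <p>If you want to sanity-check these responsiveness metrics in the field yourself, both FID and the raw interaction timings that feed INP are exposed through <code>PerformanceObserver</code>. A minimal sketch (just logging to the console rather than beaconing anywhere) looks something like this:</p> <pre><code>// Log the first-input entry that FID is based on.
new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    console.log('FID:', entry.processingStart - entry.startTime, 'ms');
  });
}).observe({ type: 'first-input', buffered: true });

// Log individual slow interactions (the raw material for INP).
new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    console.log('Interaction:', entry.name, entry.duration, 'ms');
  });
}).observe({ type: 'event', durationThreshold: 40, buffered: true });
</code></pre> <p>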
If you're still interested in tracking FID at that point, you can do that by&nbsp;<a href="https://support.speedcurve.com/docs/custom-charts-dashboards">creating custom charts in your Favorites dashboards</a>.</p> <h2>Summary</h2> <p>We hope that focusing on Interaction to Next Paint will drive more attention to the issues that were being glossed over by FID. There has been a lot of back and forth over whether or not improving your Core Web Vitals has a significant impact on search rankings. I don't have any data to make an informed statement on that question.</p> <p>However, I will say that if you're solely optimizing your site for SEO purposes, perhaps you're missing the bigger picture. <strong>Doing the right thing for your users &ndash; including creating a responsive and delightfully fast experience &ndash; will have a lasting impact on your brand and yes, your revenue.&nbsp;</strong></p> <h2>Resources</h2> <ul> <li><a href="https://web.dev/inp/">INP Explainer</a></li> <li><a href="https://web.dev/optimize-inp/">Optimizing INP</a></li> <li><a href="https://support.speedcurve.com/docs/add-interaction-to-next-paint-inp-to-your-core-web-vitals-dashboard">Adding INP to Favorites in SpeedCurve</a> [video]</li> <li><a href="https://developer.chrome.com/blog/inp-in-frameworks/">Modern Frameworks and INP</a></li> <li><a href="https://chromium.googlesource.com/chromium/src/+/master/docs/speed/metrics_changelog/README.md">Core Web Vitals Changelog</a></li> </ul> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/449/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Wed, 10 May 2023 00:00:00 +1200 A Complete Guide to Performance Budgets https://www.speedcurve.com/blog/performance-budgets <p style="text-align: left;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/perf-budgets-hero.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: left;">It's easier to make a fast website than it is to keep a website fast. If you've invested countless hours in speeding up your pages, but you're not using performance budgets to prevent regressions, you could be at risk of wasting all your efforts.</p> <p>In this post we'll cover how to:</p> <ul> <li>Use performance budgets to fight regressions</li> <li>Understand the difference between performance budgets and performance goals</li> <li>Identify which metrics to track</li> <li>Validate your metrics to make sure they're measuring what you think they are &ndash; and to see how they correlate with your user experience and business metrics</li> <li>Determine what your budget thresholds should be</li> <li>Focus on the pages that matter most</li> <li>Get buy-in from different stakeholders in your organization</li> <li>Integrate with your CI/CD process</li> <li>Synthesize your synthetic and real user monitoring data</li> <li>Maintain your budgets</li> </ul> <p>The bottom of this post also contains a collection of case studies from companies that are using performance budgets to stay fast.&nbsp;</p> <p>Let's get started!</p><h2>Background: How performance budgets work</h2> <h3>1. What is a performance budget?</h3> <p>A performance budget is a threshold that you apply to the metrics you care about the most. You can then configure your monitoring tools to send you alerts &ndash; or even break the build, if you're testing in your staging environment &ndash; when your budgets are violated.</p> <h3>2.
What should a performance budget look like?</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/cls-perf-budget.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>A good performance budget chart, such as the one above, should show you:</p> <ul> <li>The metric you're tracking</li> <li>The threshold you've created for that metric</li> <li>When you exceed that threshold</li> <li>How long you stayed out of bounds</li> <li>When you returned to below the threshold</li> </ul> <h3>3. How do you determine budget thresholds?</h3> <p>A good practice is to:</p> <ol> <li>Look at your last 2-4 weeks of data for a given metric</li> <li>Identify the worst number</li> <li>Set your performance budget for that number</li> </ol> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/good-budget.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In the example above, you can see a time series chart that shows Largest Contentful Paint times over a one-month period. The slowest LCP time is 8.64 seconds, therefore the performance budget &ndash; represented by the red line &ndash; is set for 8.64 seconds. As the person responsible for the performance of this page, you don't want to see it get worse than this while you work to make things faster.</p> <h3>4. Performance budgets vs. performance goals</h3> <p>Your performance budgets are NOT the same as your performance goals.</p> <p><strong>Performance goals are aspirational.</strong>&nbsp;They answer the question <em>"How fast do I want to be eventually?"</em></p> <p><strong>Performance budgets are practical.</strong>&nbsp;They answer the question <em>"How can I keep my site from getting slower while I work toward my performance goals?"</em></p> <p>Let's continue with the example above, where the worst LCP time was 8.64 seconds, making that the most pragmatic performance budget. At the same time, the person responsible for this page might know that they would like the LCP time to eventually be 2.5 seconds or less, which is Google's recommendation for page experience and SEO. That's the goal, but it's not the budget.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/budget-vs-goals.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's important to have your 2.5 second performance goal in mind, but it's not helpful to make that number your performance budget. If you do, you'll end up with a chart that looks like this:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/bad-budget.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Why this chart isn't helpful:</p> <ul> <li><strong>It's demoralizing.</strong> It looks like a history of failure that's intended to shame you, not help you.</li> <li><strong>It's not actionable.</strong> Because you've already violated your budget, you won't get alerts if performance degrades even further.</li> <li><strong>It's ignorable.</strong> Because it's demoralizing and not actionable, you'll quickly learn to ignore it or rationalize it away.</li> </ul> <h3>5. Why do you need performance budgets?</h3> <p>Performance budgets mitigate two of the biggest challenges you probably face in your daily life: not enough time, plus too many people touching the site.</p> <p><strong>You're busy.</strong> You don't have time to check your charts daily to make sure your latest deploy hasn't introduced any regressions. 
After you've set up performance budgets, you can relax knowing that everything is fine until you get an alert in your inbox or Slack channel.</p> <p><strong>Your site has a number of people contributing content,</strong>&nbsp;such as unoptimized images and third-party tags that have the potential to seriously hurt the speed of your pages. You might not know that a new 1MB hero image is suddenly slowing down an important landing page, but the performance budget you created for tracking image weight violations knows.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/image-weight.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Now that you understand the "what" and "why", let's talk about getting started with your own budgets.</p> <h2>Get started with performance budgets</h2> <h3>6. Which metrics should you start with?</h3> <p>With hundreds of potential performance metrics to track, this is a huge question. While you can (and arguably should) track many metrics across your site, you don't need to set budgets for all of them. In fact, I strongly urge you <strong>not</strong> to do that.</p> <p>Instead, I recommend starting with what I call Minimum Viable Budgets. Even if you create performance budgets for just one or two metrics, that's a great start. That approach lets you:</p> <ul> <li>Get comfortable with the mechanics of budgets</li> <li>Confirm that your budgets are working (i.e., you're catching regressions in a timely manner)</li> <li>Teach other people in your organization why these metrics matter</li> <li>Avoid alert fatigue</li> </ul> <p>Here are a few metrics to consider, and why:</p> <p><strong>&gt; Backend (Synthetic and RUM)</strong></p> <p>Also known as Time to First Byte (TTFB), this is the time from the start of the initial navigation until the first byte is received by the browser (after following redirects). Even if you're not responsible for backend time (e.g., you're a front-end developer), it's a good idea to track it because it can delay all your other metrics.</p> <p><strong>&gt; Start Render (Synthetic and RUM)</strong></p> <p>Start Render time is measured as the time from the start of the initial navigation until the first non-white content is painted to the browser display. Even if that first visible paint isn't a meaningful amount of content, it's still a useful signal that the page is working, and it can help stop users from bouncing.</p> <p>People don't talk much about Start Render these days, perhaps because newer, shinier metrics have emerged. But I've participated in many usability studies that have found a strong, consistent correlation between Start Render time, user engagement, and business metrics.</p> <p><strong>&gt; Largest Contentful Paint (Synthetic and RUM)</strong></p> <p>Largest Contentful Paint (LCP) is one of Google's&nbsp;<a href="https://www.speedcurve.com/blog/web-vitals-user-experience/">Core Web Vitals</a>. LCP is the time at which the largest visible element in the viewport is rendered. It's only tracked on certain elements, e.g., IMG and VIDEO.</p> <p>LCP is only available in Chromium-based browsers, so if you have a significant number of users that come in via other browsers, you should also consider tracking Last Painted Hero, below.</p> <p><strong><span style="color: #000000;">&gt; Last Painted Hero (Synthetic)</span></strong></p> <p><span style="color: #000000;"><span style="color: #1f1f1f;">Last Painted Hero (LPH) is a synthetic metric that's measurable in any browser. 
(Fun fact:&nbsp;</span><a href="https://www.speedcurve.com/blog/last-painted-hero/">Largest Contentful Paint was partially inspired by Last Painted Hero.</a><span style="color: #1f1f1f;">) LPH shows you when the last piece of critical content is painted in the browser. It's a handy metric for knowing when all your important content has rendered.</span></span></p> <p><strong><span style="color: #000000;">&gt; Cumulative Layout Shift (Synthetic and RUM)</span></strong></p> <p><span style="color: #000000;"><a href="https://www.speedcurve.com/blog/google-cumulative-layout-shift/">Cumulative Layout Shift (CLS)</a><span style="color: #1f1f1f;">&nbsp;is another one of Google's Core Web Vitals. CLS is a score that captures how often a user experiences unexpected layout shifts as the page loads. Elements like ads and custom fonts can push important content around while a user is already reading it. A poor CLS score could be a sign that page feels janky to your users.</span></span></p> <p><strong><span style="color: #000000;">&gt; Interaction to Next Paint (RUM)</span></strong></p> <p><span style="color: #000000;">While Interaction to Next Paint (INP) is not a Core Web Vital, it's currently in the experimental stage as a potential lead-up to being added, so it bears mentioning here.&nbsp;</span></p> <p><span style="color: #000000;">INP measures a page's responsiveness to individual user interactions. According to the <a href="https://web.dev/inp/">Chrome dev team</a>:</span></p> <p><em><span style="color: #000000;">"INP is a metric that aims to represent a page's overall interaction latency by selecting one of the single longest interactions that occur when a user visits a page. </span><span style="color: #000000;">INP logs the latency of all interactions throughout the entire page lifecycle. The highest value of those interactions &ndash; or close to the highest for pages with many interactions &ndash; is recorded as the page's INP. A low INP ensures that the page will be reliably responsive at all times."</span></em></p> <p><strong><span style="color: #000000;">&gt; Long Tasks (Synthetic and RUM)</span></strong></p> <p><span style="color: #000000;"><a href="https://www.speedcurve.com/blog/javascript-long-tasks/">Long Tasks time</a><span style="color: #1f1f1f;">&nbsp;is the total time of all your JavaScript tasks over 50ms, from navigation start until the page is fully loaded. Tracking Long Tasks will give you a better understanding of the impact that Long Tasks have on the entire page load and your users. This can be especially handy if you have a lot of third-party tags on your pages, as third parties can be a heavy contributor to excessive Long Tasks time.&nbsp;</span></span></p> <p><strong><span style="color: #000000;">&gt; Lighthouse Scores (Synthetic)</span></strong></p> <p><a href="https://developers.google.com/web/tools/lighthouse">Google Lighthouse</a>&nbsp;is an open-source tool that checks your page against rules for Performance, PWA, Accessibility, Best Practice, and SEO. 
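</p> <p>The Performance score in particular is essentially a weighted blend of a handful of metric scores. As a rough back-of-the-envelope sketch (hypothetical per-metric scores, using the weights current as of Lighthouse 10), the arithmetic looks something like this:</p> <pre><code>// Hypothetical per-metric scores (each between 0 and 1) from a Lighthouse run.
var scores  = { fcp: 0.90, si: 0.85, lcp: 0.70, tbt: 0.60, cls: 0.95 };
// Lighthouse 10 weights: FCP 15%, Speed Index 10%, LCP 25%, TBT 30%, CLS 25%.
var weights = { fcp: 0.15, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

var performanceScore = Object.keys(weights).reduce(function (sum, metric) {
  return sum + scores[metric] * weights[metric];
}, 0);

console.log(Math.round(performanceScore * 100)); // 81
</code></pre> <p>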
For each of those categories, you get a score out of 100 and recommendations on what to fix.&nbsp;It can be a good idea to track your Performance score to make sure you're not regressing, and then if you do experience a regression, drill down into your audits to identify the cause.</p> <p><strong><span style="color: #000000;">&gt; Page Size and Weight (Synthetic)</span></strong></p> <p>If you're concerned about serving huge pages to your mobile users, or if you're worried about site contributors accidentally publishing huge unoptimized image and video files, then you should consider tracking metrics like page size and weight.</p> <p>In an ideal world, pages served to mobile devices should be under 1 MB &ndash; and definitely not more than 2 MB &ndash; but I often see pages in excess of 5 MB. Media sites are particularly prone to this issue.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/page-size.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Above is a content breakdown for the home page of a mainstream news site. The page contains 480 requests and is over 13 MB in size. Looking at the resource breakdown, you might want to set specific performance budgets on JavaScript and image size.</p> <p><strong><span style="color: #000000;">&gt; Element Timing (Synthetic and RUM)</span></strong></p> <p>Similar to LCP, Element Timing measures when a DOM element is rendered. Unlike LCP, Element Timing allows you (rather than Chrome) to decide which elements you want to measure. And unlike LCP, Element Timing allows you to measure more than one element on a page. (Learn more&nbsp;<a href="https://www.speedcurve.com/blog/element-timing-one-true-metric/">here</a>.)</p> <p>Element Timing is a somewhat more advanced metric, so if you're just getting started, you might want to make note of it now and come back to it later, when you're ready.</p> <p><strong><span style="color: #000000;">&gt; User Timing (Synthetic and RUM)</span></strong></p> <p>You can create custom metrics to track everything from headlines to call-to-action buttons. Twitter has used custom timers to create a&nbsp;<a href="https://blog.twitter.com/engineering/en_us/a/2012/improving-performance-on-twittercom.html">Time to First Tweet</a>&nbsp;metric. Pinterest has created a&nbsp;<a href="https://timkadlec.com/remembers/2018-03-27-notes-on-making-pinterest-fast-at-perfmatters/#:~:text=Pinterest%20chose%20a%20custom%20metric,impact%20on%20the%20actual%20experience.">Pinner Wait Time</a>&nbsp;metric. Using the&nbsp;<a href="https://www.w3.org/TR/user-timing/">W3C User Timing spec</a>, you can add timestamps around the specific page elements that matter most to you. (<a href="https://support.speedcurve.com/docs/custom-metrics">Here's how</a>&nbsp;to add custom timers in SpeedCurve.)&nbsp;</p> <p>Like Element Timing, User Timing requires some expertise to identify what you want to track and then add the timestamps to your pages, as well as ongoing maintenance. Still, they're worth investigating (if not now, then later) if you have the resources and the need.</p> <h3>7. Focus on key pages</h3> <p>You don't need to apply performance budgets to all your pages.
When it comes to the impact of page speed on metrics like conversion rate and bounce rate, some pages are more critical than others.</p> <p>These are some of the key pages for retail, ranked in order of impact:</p> <ul> <li>Product detail</li> <li>Product category&nbsp;</li> <li>Shopping cart</li> <li>Home&nbsp;</li> </ul> <p>And these are some key pages for media and other sites:</p> <ul> <li>Articles</li> <li>Search</li> <li>Home</li> </ul> <p>Keep in mind that your budgets will most likely vary from page to page, because the performance of your pages may differ widely due to how each page is built.</p> <h3>8. Get buy-in from different stakeholders</h3> <p>Everyone who touches a page should understand the performance impact of changes they introduce to that page. They should also collaborate on setting performance budgets and remediating budget violations.</p> <p>For example, if your marketing team is responsible for adding and maintaining third-party tags, they should:</p> <ul> <li><strong>Have a basic understanding of the metrics</strong> &ndash; such as Long Tasks time &ndash; that measure the performance of each tag.</li> <li><strong>Collaborate on setting the performance budget</strong> &ndash; again, based on the worst result over the previous 2-4 weeks &ndash; for each metric.</li> <li><strong>Receive alerts</strong> (typically via email, Slack, or whatever webhook you use) when the performance budget is violated.</li> <li><strong>Participate in identifying and fixing the issue</strong>&nbsp;(or at least be cc'ed) and get alerted when the budget violation has been resolved.</li> </ul> <p>In SpeedCurve, you can <a href="https://support.speedcurve.com/docs/custom-charts-dashboards">set up separate dashboards for each stakeholder group</a> in your organization. You can create charts and performance budgets within each dashboard, and then configure alerts to be sent only to specific stakeholders.</p> <p>Below is an example of a dashboard created for an SEO team. It focuses on the Lighthouse SEO score, as well as Largest Contentful Paint and Cumulative Layout Shift, as those are both Core Web Vitals and therefore important search ranking factors.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/seo-dashboard2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Note that in the Lighthouse chart at the top of this dashboard, you can click on any data point to drill down into your&nbsp;<a href="https://support.speedcurve.com/docs/lighthouse">Lighthouse test details</a>, where among other things, you can get detailed audits that recommend what you can fix on the page.</p> <h3><span style="color: #000000;">9. Use synthetic testing to visually validate your metrics</span></h3> <p><span style="color: #000000;">The metrics mentioned above are not hard-and-fast suggestions. That's because a metric that is relevant and helpful for one page may not be helpful for another.&nbsp;</span><span style="color: #000000;">Before you invest the time and energy in setting up performance budgets for a metric, first take a good look at how that metric aligns with your own data.
</span></p> <p><span style="color: #000000;">The easiest way to validate your metrics is to look at rendering filmstrips in your synthetic test data, like this:</span></p> <p><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/check-metrics.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></span></p> <p><span style="color: #000000;">In the example above (taken from our&nbsp;<a href="https://app.speedcurve.com/benchmark/retail-us/test/230426_9H_62a12a785f91f8d0e2d8843acaf29245/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">Industry Benchmarks</a>) you can see:</span></p> <ul> <li><span style="color: #000000;">Start Render does correlate to content appearing in the viewport.</span></li> <li><span style="color: #000000;">Largest Contentful Paint doesn't quite align with the appearance of the largest image.</span></li> <li><span style="color: #000000;">Last Painted Hero, on the other hand, does align with the largest image.</span></li> <li><span style="color: #000000;">Visually Complete comes in much later and arguably isn't helpful for this page.</span></li> </ul> <p><span style="color: #000000;">Based on these observations, you might choose to focus on Start Render and Last Painted Hero.</span></p> <p><span style="color: #000000;">If you need to validate more metrics, you can <a href="https://support.speedcurve.com/docs/test-details">look at your waterfall chart</a> and see how the various metrics line up with the rendering filmstrip, like this:</span></p> <p><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/waterfall.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></span></p> <p><span style="color: #000000;">Using this view, it's relatively fast and easy to see which metrics work or don't work for a given page. It's important to keep in mind that just because a metric isn't relevant for one page, that doesn't necessarily mean it's a bad metric. Often, any variability you might see is due to how the page is built.</span></p> <h3><span style="color: #000000;">10. Use real user monitoring to validate user and business impact</span></h3> <p><span style="color: #000000;">This step might not be essential, but if you can do it, it's a good way to give yourself the confidence that you're tracking the right metrics. Ultimately, you want to know that changes you make to your site &ndash; for better or for worse &ndash; will directly affect user behaviour and business outcomes.</span></p> <p><span style="color: #000000;">This is where real user monitoring (RUM) really shines. RUM can track data about bounce rate and conversion rate (along with other user experience and business KPIs). Using this data alongside your performance data, you can create correlation charts that demonstrate the relationship between performance and business results.</span></p> <p><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/site-1-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></span></p> <p><span style="color: #000000;">In the correlation chart above, you can clearly see that as LCP times get slower, bounce rate gets worse. This chart demonstrates that, for this particular site, LCP time is a good metric to set a performance budget for.</span></p> <h3>11. Synthesize your synthetic and real user monitoring data</h3> <p>In an ideal world, you're using both synthetic and real user monitoring (RUM). 
Several metrics are available in both tools, so you can create charts in which you track the same metric in both synthetic and RUM.</p> <p>(It's important to know that your synthetic and RUM metrics most likely will not match, for reasons explained <a href="https://support.speedcurve.com/docs/synthetic-vs-lux-data">here</a>. This is nothing to be concerned about. The important thing to track is consistency and changes within a single tool and settings.)</p> <p>For a metric that's available in synthetic and RUM, such as Start Render or Largest Contentful Paint, you might want to consider this type of setup:</p> <p><strong>Track the metric for synthetic and RUM within the same chart.</strong></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/syn-rum-0.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><strong>Create the performance budget for the RUM metric, so you get an alert when the budget is violated. This lets you know that real users are experiencing this issue.</strong></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/syn-rum-1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><strong><span style="color: #000000;">Because you're tracking synthetic data in the same chart, you can easily drill down and get detailed test results and diagnostics.</span></strong></p> <p><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/syn-rum-2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></span></p> <p><strong><span style="color: #000000;">Add a note to the chart, stating when you implemented the necessary fixes.&nbsp;</span><span style="color: #000000;">After your fixes go live, you can see (and get an alert) when your metric returns to normal.</span></strong></p> <p><span style="color: #000000;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/syn-rum-3.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></span></p> <p><span style="color: #000000;">This is just one potential configuration. If you're using your RUM and synthetic data in other ways, I'd love to learn more about it! Leave a comment below.</span></p> <h3>12. Set up alerting (but not too much!)</h3> <p>Avoiding alert fatigue is crucial to the success of your performance budget strategy. If you're just starting out, it's absolutely fine to focus on just a handful of metrics. You can create performance budgets for all of them, but if you're setting up alerting, focus on just setting up alerts for critical metrics such as Backend time, Start Render, and Largest Contentful Paint.&nbsp;</p> <h3>13. Integrate with your CI/CD process</h3> <p>You can <a href="https://support.speedcurve.com/docs/continuous-integration">integrate your performance budgets and alerts with your CI/CD process</a>.&nbsp;This gives you the ability to do a couple of things:</p> <ul> <li><strong>Run synthetic tests in your staging/development environment</strong> and get alerts if any changes you've introduced have caused budget violations before the page goes live. You can even opt to break the build if any of your budgets are violated.</li> <li><strong>Run tests each time you do a deploy</strong>, catching issues immediately after they go live.&nbsp;</li> </ul> <h2>Maintain your budgets</h2> <p>Your budgets will ideally change over time, as your various metrics (hopefully) improve. After you've taken the time to create your performance budgets, you want them to stay relevant and helpful.&nbsp;</p> <h3>14. 
Update your budgets</h3> <p>If you're taking the practical, iterative approach recommended above, then you should revisit your budgets every 2-4 weeks and adjust them (hopefully downward) accordingly.&nbsp;</p> <p>You should also periodically revisit your metrics &ndash; going through the validation steps described in steps 9 and 10 above &ndash; to make sure you're still tracking the right metrics.&nbsp;</p> <h2>Case studies</h2> <p>Here's how SpeedCurve users have used performance budgets in a number of different ways to stay fast:</p> <ul> <li><a href="https://web.dev/telegraph/">Improving Cumulative Layout Shift at The Telegraph</a></li> <li><a href="https://medium.com/farfetch-tech-blog/how-to-build-a-performance-culture-at-scale-1ed8dc8e79cd">How Farfetch built a performance culture at scale</a></li> <li><a href="https://www.shopify.com/partners/blog/narrative-web-performance">Crafting Faster Themes at Shopify</a></li> <li><a href="https://responsivewebdesign.com/podcast/vox-media-performance/">Building a mobile-first and performance-first culture at Vox Media</a></li> <li><a href="https://nooshu.com/blog/2020/06/17/example-synthetic-web-performance-monitoring-dashboards-using-speedcurve/">How GOV.UK uses performance budgets to keep their site accessible</a></li> <li><a href="https://tech.holidayextras.com/tackling-single-page-application-performance-at-scale-85c67613e4b7">Holiday Extras tackles SPA performance at scale</a></li> <li><a href="https://www.zillow.com/tech/bigger-faster-more-engaging-budget/">Zillow Gets Bigger, Faster, and More Engaging While on a Budget</a></li> <li><a href="https://medium.com/wehkamp-techblog/how-performance-budgets-and-speedcurve-are-helping-us-keeping-a-high-performant-website-283c0233daba">How performance budgets help Wehkamp keep a highly performant website</a></li> <li><a href="https://medium.com/adeo-tech/fostering-a-web-performance-culture-on-leroymerlin-fr-41619e1473d6">Fostering a web performance culture on leroymerlin.fr</a></li> </ul> <h2>More resources</h2> <p>If you're a SpeedCurve user, these resources will help you get started with performance budgets. If you're not using SpeedCurve yet, <a href="https://www.speedcurve.com/signup/">signing up for a free trial is easy!</a></p> <ul> <li><a href="https://support.speedcurve.com/docs/performance-budgets-and-alerts">Create performance budgets and alerts</a></li> <li><a href="https://support.speedcurve.com/docs/status-dashboard">See the current state of all your performance budgets on your Status dashboard</a></li> <li><a href="https://support.speedcurve.com/docs/continuous-integration">Integrate your synthetic testing into your CI/CD environment</a></li> <li><a href="https://support.speedcurve.com/docs/first-third-parties">Track individual first and third parties</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Get started with Core Web Vitals</a></li> </ul> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/442/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Tue, 02 May 2023 00:00:00 +1200 NEW!
Lighthouse 10, Core Web Vitals updates, and Interaction to Next Paint https://www.speedcurve.com/blog/lighthouse-core-web-vitals-updates <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/448/inp-lcp-cls.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>There is a lot of excitement in the world of web performance these days, and April has been no exception! At SpeedCurve, we've been focused on staying on top of the items that affect you the most.</p> <p>Here is a look at what's new in SpeedCurve:</p> <ul> <li>Support for Lighthouse 10, including metric scoring changes as well as audits</li> <li>Updated RUM Core Web Vitals, including the much-anticipated addition of Interaction to Next Paint (INP)</li> </ul> <p>All of this work driven by the community is having a big impact in our collective goal&nbsp;to make performance accessible for everyone.</p> <p>Read on to learn more about these exciting changes!&nbsp;</p><h2>What changed in Lighthouse 10?</h2> <p>Here's how the update to Lighthouse 10 may affect your metrics.</p> <h3>Scoring</h3> <p>Lighthouse scoring has been updated for the first time since version 8. Metrics and scoring weights include the deprecation of Time to Interactive (TTI) and the increase in weight for Cumulative Layout Shift (CLS).&nbsp;</p> <p>Weighting adjustments are highlighted in bold below:</p> <ul> <li>First Contentful Paint (FCP): 15</li> <li>Speed Index: 10</li> <li>Largest Contentful Paint (LCP): 25</li> <li><span style="text-decoration: line-through;"><strong>Time To Interactive (TTI): 15</strong></span></li> <li>Total Blocking Time (TBT): 30</li> <li>Cumulative Layout Shift (CLS): 15 -&gt; <strong>25</strong></li> </ul> <h3>What impact will this have on&nbsp;<em>my</em> score?</h3> <p>According to Google:</p> <p style="padding-left: 30px;"><em>&ldquo;We expect this to improve most pages' performance scores, since most pages tend to score better on CLS than TTI. In an analysis of 13 million page loads in the latest HTTP Archive run, 90% of those pages would see an improvement in their Lighthouse performance score, with 50% of them seeing a performance improvement of more than 5 points.&ldquo;</em></p> <p>According to us:</p> <p style="padding-left: 30px;"><em>"It depends."</em></p> <p>Basically, we see a trend similar to the trend observed by Google. Overall, sites saw some improvement in their Lighthouse scores. However, sites that have not-so-great CLS scores might see their Lighthouse Performance score degrade.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/448/lh_losing.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing a large degradation in the Lighthouse Performance score." /></p> <p>If you are delivering a consistent experience AND already have super CLS, you're probably going to see an improvement in your Lighthouse score. It also helps that, in this case, a poor TTI is now being omitted from Lighthouse scoring.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/448/lh_winning.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing a massive improvement in Lighthouse Performance score." /></p> <h3>Audits</h3> <p>SpeedCurve leverages the performance audits from Lighthouse throughout your dashboards. 
With the release of Lighthouse 10, one new performance audit was added and another best practices audit was modified:</p> <ul> <li><strong>Page prevented back/forward (BF) cache navigation &ndash;&nbsp;</strong>When a site is able to leverage BF cache navigations, users can see a huge benefit and a perceived 'instant' loading of the page. When a BF cache navigation is prevented, Lighthouse lists the causes for the failure.<br /><br />(Important:<span style="color: #1f1f1f;">&nbsp;Changes to performance for BF cache navigations will likely not show up in your synthetic monitoring tests &ndash; with the exception of repeat views, which may see an improvement. However, your RUM data should reflect these improvements. Look for an update in a future release that will allow closer inspection of BF cache navigations.)</span></li> </ul> <ul> <li><strong>Allows users to paste into input fields &ndash;&nbsp;</strong>This best practices audit has been updated for Lighthouse 10. Previously, the audit checked for password inputs only. Now the audit includes all input fields that are not read-only.</li> </ul> <h2>Core Web Vitals updates</h2> <h3>Cumulative Layout Shift (CLS)</h3> <p style="font-size: 16px; font-weight: 400;">There was an update to CLS a while ago that didn't quite make sense for us to adopt until now. We've updated our RUM library to include the latest CLS implementation, which includes <a href="https://web.dev/evolving-cls/">session windows</a>. This may have a positive impact on CLS scores, especially for longer-lived pages. We've updated this for synthetic too!</p> <h3>Interaction to Next Paint (INP)</h3> <p style="font-size: 16px; font-weight: 400;">There has been a lot of buzz about this potential replacement for First Input Delay (FID). Our fingers are crossed that we will say good riddance to FID and that this metric will do a better job of identifying responsiveness issues for applications. Look for an upcoming INP explainer and analysis after we've had a chance to observe it in the wild for a bit.</p> <p style="font-size: 16px; font-weight: 400;">For now, <a href="https://support.speedcurve.com/docs/custom-charts-dashboards">you can track INP in your Favorites dashboards</a>, but it has yet to be added to the Vitals dashboard or other RUM dashboards due to its experimental status. We quietly added INP on March 20, so if you start tracking it in your Favorites, you should already have a few weeks' worth of data to explore.</p> <p style="font-size: 16px; font-weight: 400;">The recommended thresholds at the 75th percentile are as follows:</p> <p style="font-size: 16px; font-weight: 400;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/448/inp-thresholds.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="font-size: 16px; font-weight: 400; padding-left: 30px;"><strong>Good: &lt; 200ms</strong></p> <p style="font-size: 16px; font-weight: 400; padding-left: 30px;"><strong>Needs Improvement: 200ms - 500ms</strong></p> <p style="font-size: 16px; font-weight: 400; padding-left: 30px;"><strong>Poor: &gt; 500ms</strong></p> <p style="font-size: 16px; font-weight: 400;">So far, we are seeing that on average most sites are around or above the 200ms threshold.
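</p> <p style="font-size: 16px; font-weight: 400;">If you want to see where your own pages sit relative to that threshold, Google's <code>web-vitals</code> JavaScript library exposes an <code>onINP()</code> helper (available in v3 and later). A minimal sketch &ndash; just logging the value, though in practice you'd beacon it to your analytics or RUM endpoint:</p> <pre><code>// Requires the web-vitals library (v3+), installed from npm or loaded from a module CDN.
import { onINP } from 'web-vitals';

onINP(function (metric) {
  // metric.value is the page's INP in milliseconds; metric.rating is
  // 'good', 'needs-improvement', or 'poor' based on the thresholds above.
  console.log('INP:', Math.round(metric.value), metric.rating);
});
</code></pre> <p style="font-size: 16px; font-weight: 400;">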
It's important to note that the recommended thresholds may still be adjusted, as this is still an experimental metric.</p> <p style="font-size: 16px; font-weight: 400;"><strong>Interested in tracking INP for your site?</strong> Here is a quick explainer video that shows you how to add INP to your custom Favorites dashboard.</p> <div class="video"><iframe src="https://www.youtube.com/embed/_5p3cCkJoLE" width="560" height="315" frameborder="0" allowfullscreen=""></iframe></div> <p>&nbsp;</p> <p>Here are some useful links where you can learn more!</p> <ul> <li><strong><a href="https://developer.chrome.com/blog/lighthouse-10-0/">What's new in Lighthouse 10?</a></strong></li> <li><strong><a href="https://web.dev/bfcache/">What is bfcache?</a></strong></li> <li><a href="https://web.dev/cls/"><strong>Cumulative Layout Shift explained</strong></a></li> <li><a href="https://web.dev/inp/"><strong>Interaction to Next Paint</strong></a></li> </ul> Mon, 10 Apr 2023 00:00:00 +1200 NEW! Home, Site, Page and Tests dashboards https://www.speedcurve.com/blog/new-dashboards-home-site-page-tests <p>Things have been busy over here at SpeedCurve HQ! Coming off of the back of our latest <a href="https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard">RUM Compare dashboard</a> release, we are super excited to launch four new dashboards to make your life better, your work easier, and your websites faster.</p> <ul> <li><strong>Home</strong> - Well, now you have one. We needed a place for folks to land when they were jumping into all that SpeedCurve has to offer. The newly released Home page is a great starting place, pulling together views of both RUM and Synthetic data and directing you on where to dig in next.</li> <li><strong>Sites</strong> - We've rebuilt our Sites dashboard to give you a holistic look across all of the URLs you are monitoring. Start here to understand which areas of your site need the most attention and identify areas you can have the most impact.</li> <li><strong>Pages&nbsp;</strong>- This detailed view gives you trended information for all the metrics you know and love. Easily identify areas of opportunity and how things are performing over time for specific pages you are monitoring.</li> <li><strong>Tests&nbsp;</strong>- We've had a lot of requests for this, so we're happy to deliver it. You can now see a list of your latest synthetic tests and spot any failed tests.&nbsp;</li> </ul> <p>Let's take a look!</p><h2>Home</h2> <p>This is the new default dashboard you'll see when you jump into SpeedCurve for the first time. If you are using both RUM and Synthetic, you'll see both datasets reflected in this view. Otherwise, the metrics reflected will be from your synthetic data.</p> <p>Beginning with a snapshot of your Core Web Vitals, this page highlights key areas of interest and a high-level snapshot for the selected site. Links from each section take you to a deeper look within other areas of your dashboards.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/home.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="SpeedCurve's Home dashboard showcasing various areas for web performance investigation." 
/></p> <p>If you have <a href="https://support.speedcurve.com/docs/continuous-integration">integrated SpeedCurve into your CI/CD environment</a> (leveraging the <a href="https://support.speedcurve.com/reference/get-all-deploys">deploy API</a>), or if you are <a href="https://support.speedcurve.com/docs/manual-test-deploy">manually running synthetic tests</a> using the 'Test Now' option, you'll see the state of your last deployment compared to the previous. And finally, a 7-day trend is displayed for your most important metrics.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/home2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Continued view of the Home page below the first viewport." /></p> <h2>Site Dashboard</h2> <p><span style="color: #000000;">This view is intentionally focused on the latest data you've collected for your Site which gives you a great starting place when managing performance across various URLs. What pages have the slowest LCP? Home, Order and Print look like a good place to start digging.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/site1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Initial view of the Site dashboard illustrating CWV for all urls monitoring in the Site" /></p> <p><span style="color: #000000;">How are users perceiving the visual rendering of all the pages in my site? Which pages should I dive into first?</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/filmstrip.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparison of filmstrips for urls within the site" /></p> <p>From the LightHouse audits run against your site, you can quickly see which pages need the most attention across the 5 areas of focus.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/lighthouse.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Lighthouse scores compared across all urls in a site" /></p> <p>An aggregate view of the most critical areas for improvement identifies which recommendations will have the most collective impact across your pages.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/improvements.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="List of recommended improvements for the site" /></p> <p>&nbsp;</p> <h2>Page Dashboard</h2> <p>From the Site dashboard, you can narrow your focus and dive into the new Page dashboard. Here you'll see key metrics for the page trended over time. You can use the filters at the top of the page to further refine your data by browser, region, and date range.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Initial view of Synthetic Page dashboard showing CWV over time." /></p> <p>A look at the key rendering moments and a full filmstrip for the page focuses on user-perceived performance.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page dashboard illustrating key rendering times for the page as well as full filmstrip." 
/></p> <p>Now that you understand how the page is performing, you can take a look at how the construction for the page has changed over time and where optimization opportunities exist, such as reducing network requests and their file size.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page3.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Content requests over time for the page showing an improvement, decrease in requests over time." /></p> <p>Finally, a list of prioritized improvements for the page is displayed. Pay special attention to critical recommendations that are flagged with the Core Web Vitals metrics. If you implement those recommendations, you should see an improvement in your Vitals.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page_improvements.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="A list of prioritized page improvements from the most recent lighthouse audit " /></p> <h2>Tests Dashboard</h2> <p>We love simplicity, and this dashboard is a great example of why. We have received a lot of feature requests such as "Just show me a list of the tests I've run recently" as well as "How can I quickly see which tests are failing?" This dashboard achieves both!&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/tests.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Tests dashboard showing the most recent synthetic tests that have run for a Site." /></p> <p>If you want to refine this view, you can apply filters to show you tests for a specific site, page, region or browser. You can also filter on test status ('succeeded' or 'failed') and test type ('scheduled test' or 'deploy').</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/testsfilter.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Filter options for the tests dashboard." /></p> <p>Finally, if you have any failed tests we'll show those at the top of the previously mentioned Home page in order to quickly alert you to the failures. You can then click through to the test details to determine why the test failed.</p> <p>We hope this dashboard will help to quickly inform you so you can take action and protect your synthetic checks budget.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/failures.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>&nbsp;</p> <p>We hope you find these latest additions to SpeedCurve useful. 
As always, <a href="mailto:support@speedcurve.com">let us know</a> what you think!</p> <h2>Related:</h2> <ul> <li><a href="https://support.speedcurve.com/docs/get-started-synthetic">Getting started with SpeedCurve</a></li> <li><a href="https://support.speedcurve.com/docs/continuous-integration">Integrating performance testing into your CI environment</a></li> <li><a href="https://support.speedcurve.com/docs/synthetic-site-dashboard-walkthrough">Site dashboard walkthrough</a> (video)&nbsp;</li> <li><a href="https://support.speedcurve.com/docs/filter-your-speedcurve-dashboards">How to filter your SpeedCurve dashboards</a>&nbsp;(video)</li> <li><a href="https://support.speedcurve.com/docs/failed-synthetic-tests">Troubleshooting failed synthetic tests</a></li> </ul> Mon, 13 Mar 2023 00:00:00 +1300 Why you need to know your site's performance plateau (and how to find it) https://www.speedcurve.com/blog/web-performance-poverty-line <p style="text-align: center;"><strong>"I made my pages faster, but my business and user engagement metrics didn't change. WHY???"</strong></p> <p style="text-align: center;"><strong>"How do I know how fast my pages should be?"</strong></p> <p style="text-align: center;"><strong>"How can I demonstrate the business value of performance to people in my organization?"</strong></p> <p>If you've ever asked yourself any of these questions, then you could find the answers in identifying and understanding the performance plateau for your site.</p> <h2>What is the "performance plateau"?</h2> <p>The performance plateau is the point at which changes to your website&rsquo;s rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter because you&rsquo;ve bottomed out in terms of business and user engagement metrics.</p> <p>In other words, if your performance metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The concept of the performance plateau isn't new. I first encountered it more than ten years ago, when I was looking at data for a number of sites and noticed that &ndash; not only was there a correlation between performance metrics and business/engagement metrics &ndash; there was also a noticeable plateau in almost every correlation chart I looked at.&nbsp;</p> <p>A few months ago someone asked me if I've done any recent investigation into the performance plateau, to see if the concept still holds true. When I realized how much time has passed since my initial research, I thought it would be fun to take a fresh look.</p> <p>In this post, I'll show how to use your own data to find the plateau for your site, and then what to do with your new insights.</p><h2>Background</h2> <p>For this new investigation, I selected four sites that experience a significant amount of user traffic. For each site, I used a month's worth of RUM (real user monitoring) data to generate correlation charts.</p> <p><a href="https://support.speedcurve.com/docs/create-correlation-charts">Correlation charts</a> show the relationship between performance metrics &ndash; in these instances, Start Render and Largest Contentful Paint (LCP) &ndash; and user engagement (measured as bounce rate)<strong>.</strong> They're a great tool for showing non-technical folks how performance affects the business. 
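</p> <p>If you're curious what's happening under the hood, the idea is simply to bucket sessions into cohorts by their performance metric and then work out the bounce rate within each cohort. A hypothetical sketch (made-up session data, 500ms LCP buckets):</p> <pre><code>// Hypothetical RUM sessions: LCP in milliseconds, plus whether the session bounced.
var sessions = [
  { lcp: 900, bounced: false },
  { lcp: 1400, bounced: false },
  { lcp: 2600, bounced: true },
  { lcp: 4100, bounced: true }
  // ...many more in a real dataset
];

var bucketSize = 500; // one cohort per 500ms of LCP
var cohorts = {};
sessions.forEach(function (s) {
  var bucket = Math.floor(s.lcp / bucketSize) * bucketSize;
  cohorts[bucket] = cohorts[bucket] || { total: 0, bounces: 0 };
  cohorts[bucket].total += 1;
  if (s.bounced) cohorts[bucket].bounces += 1;
});

Object.keys(cohorts).sort(function (a, b) { return a - b; }).forEach(function (bucket) {
  var c = cohorts[bucket];
  console.log(bucket + 'ms cohort: ' + c.total + ' sessions, ' +
    Math.round((c.bounces / c.total) * 100) + '% bounce rate');
});
</code></pre> <p>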
(You can also create correlation charts that show&nbsp;<a href="https://support.speedcurve.com/docs/conversion-rates">the relationship between performance metrics and business metrics</a>, such as conversion rate and cart size, but bounce rate is easier to measure right out of the box with most RUM tools.)</p> <p><span style="font-size: 35px; color: #000000;">Results</span></p> <p>The correlation charts below show the distribution of all visits, with each yellow bar representing a cohort of visits that experienced a given Start Render or LCP time. The blue bar represents the change in bounce rate across all cohorts.</p> <p>In each of the correlation charts below, I've highlighted:</p> <ul> <li><strong>Optimal speed</strong>&nbsp;&ndash; The cohort of sessions that correlated with the lowest (aka best) bounce rate for that site</li> <li><strong>Beginning of the performance plateau</strong>&nbsp;&ndash; The cohort of sessions where the bounce rate begins to plateau</li> <li><strong>Median measurement</strong>&nbsp;for all visits represented in the chart</li> </ul> <p>Keep reading for observations and takeaways.</p> <h3>Site A</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-1-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-1-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site B</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-2-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-2-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site C</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site D</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-4-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-4-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Observations</h2> <h3>1. A clear performance plateau emerged for each site</h3> <p>Each site experienced a plateau at which business metrics remained more or less the same as performance continued to degrade.</p> <h3>2. Plateaus emerged for both Start Render and Largest Contentful Paint</h3> <p>I expected the results for Start Render, as it's been around as a page speed metric for many years, and has been proven to correlate to business metrics. LCP is relatively new, and has received a great deal of attention since it was introduced a couple of years ago as part of Google's <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a>. It's good to see evidence that it's a meaningful metric to track in terms of business goals.</p> <h3>3. The plateau emerges surprisingly quickly in some cases</h3> <p>For example, Site C's performance plateau starts at 400 milliseconds. That's early!</p> <h3>4. There's a lot of variability in the distance between the optimal bounce rate and the plateau</h3> <p>For some sites, you can see a much steeper incline in the curve from optimal to plateau. 
For some sites (such as Site C), the difference was as little as 300 milliseconds. For others (such as Site A), the gap was as long as 9 seconds.</p> <h3>5. The plateau sometimes started later when looking at LCP</h3> <p>Creating correlation charts for both Start Render and LCP generated interesting results. In two of the four sites I looked at, the charts were roughly comparable. For the other two sites, the plateau started later for LCP than it did for Start Render. This could be attributed to the fact that LCP measures when the largest visual element has completely finished rendering, so it can occur much later than Start Render.</p> <h3>6. For some sites the performance plateau starts well before the median</h3> <p>Predictably, the optimal bounce rate generally correlated with the cohorts of sessions that were much faster than the median. A bit more surprisingly, for some sites the performance plateau started well before the median. This could come as a scary revelation for some site owners, because it means that the bulk of your user sessions are occurring on the plateau.</p> <h2>How to measure the performance plateau for your own site</h2> <p>I can't emphasize enough that the examples I've shared are illustrative, not prescriptive. The performance plateau for your site will be different from the plateau for another site. <strong>You need to look at your own real user data. </strong>(If you're new to performance, you might be interested in&nbsp;<a href="https://support.speedcurve.com/docs/synthetic-vs-real-user-monitoring-rum">this synthetic and real user monitoring explainer</a>.)</p> <p>Fortunately, the process for identifying where your site&rsquo;s performance plateau begins is fairly straightforward. All you need is access to a statistically significant amount of your RUM data, plus whatever analytics tool you use for tracking business or user engagement metrics.&nbsp;</p> <h3>Step 1: Identify the metrics you want to measure</h3> <p>As mentioned above, bounce rate is a good metric to start with, because it's already gathered automatically by most real user monitoring tools.</p> <p>If you have access to other data sources, then you can create a variety of correlation charts. If you run an ecommerce site, then you can measure revenue, cart size, and conversion rate. If you work on a media site, then page views and bounce rate matter.</p> <h3>Step 2: Gather lots of real user data</h3> <p>The more data you can gather, the better your chances of getting statistically relevant results. If your dataset is too small, you could get wonky results. When I conducted my investigation, I aggregated millions of transactions that took place over a single month. (If you're interested in trying real user monitoring, you can start a <a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">free RUM trial</a> in SpeedCurve.)</p> <h3>Step 3: Create correlation charts</h3> <p>The charts above demonstrate how I like to show bounce rate (or whatever business/engagement metric you're plotting) across the distribution of sessions. (If you're a SpeedCurve user, <a href="https://support.speedcurve.com/docs/create-correlation-charts">here's how to create correlation charts</a>.)</p>
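<p>If you want to explore the underlying numbers yourself &ndash; for example, from a raw export of your RUM data &ndash; the computation behind a correlation chart is straightforward: bucket your sessions by a rendering metric, calculate the bounce rate for each bucket, and then look for where the curve flattens out. The TypeScript sketch below illustrates the idea. The <code>Session</code> shape and the <code>findPlateauStart</code> heuristic are assumptions for illustration only (this is not how SpeedCurve builds its charts), so treat it as a starting point and sanity-check the output against the charts themselves.</p>
<pre><code>
// Hypothetical shape of one RUM page view: a rendering metric (LCP here)
// and whether the visit bounced. Adapt this to whatever your RUM export contains.
interface Session {
  lcp: number;      // Largest Contentful Paint for this visit, in milliseconds
  bounced: boolean; // true if the visitor left after this single page view
}

interface Cohort {
  bucketStart: number; // lower bound of the LCP bucket, in milliseconds
  sessions: number;    // number of visits in this cohort
  bounceRate: number;  // bounced visits divided by total visits in the cohort
}

// Bucket sessions by LCP (the yellow bars) and compute the bounce rate for
// each bucket (the blue overlay) -- the raw data behind a correlation chart.
function buildCorrelationData(sessions: Session[], bucketMs = 100): Cohort[] {
  const counts: number[] = [];
  const bounces: number[] = [];

  for (const s of sessions) {
    const bucket = Math.floor(s.lcp / bucketMs);
    counts[bucket] = (counts[bucket] ?? 0) + 1;
    bounces[bucket] = (bounces[bucket] ?? 0) + (s.bounced ? 1 : 0);
  }

  const cohorts: Cohort[] = [];
  counts.forEach((count, bucket) => {
    cohorts.push({
      bucketStart: bucket * bucketMs,
      sessions: count,
      bounceRate: bounces[bucket] / count,
    });
  });
  return cohorts; // ordered from fastest cohort to slowest
}

// One rough heuristic for spotting the plateau: treat the average bounce rate
// of the slowest quarter of cohorts as the "plateau level", then report the
// fastest cohort whose bounce rate is already within `tolerance` of that level.
function findPlateauStart(cohorts: Cohort[], tolerance = 0.01): number {
  const tail = cohorts.slice(Math.floor(cohorts.length * 0.75));
  const plateauLevel =
    tail.reduce((sum, c) => sum + c.bounceRate, 0) / tail.length;

  for (const c of cohorts) {
    if (c.bounceRate >= plateauLevel - tolerance) {
      return c.bucketStart; // e.g. 2000 means the plateau starts around a 2s LCP
    }
  }
  return cohorts[cohorts.length - 1].bucketStart;
}
</code></pre>
<p>The 100&nbsp;ms bucket width lines up with the cohorts discussed above (such as Site C's 100-millisecond LCP cohort); widening it smooths out noise at the cost of precision, and the <code>tolerance</code> value is something you'd tune against your own data rather than a magic number.</p>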
<p><span style="font-size: 35px; color: #000000;">What to do with your findings</span></p> <p>After you've finished your own investigation, you can do a few things with the results:</p> <h3>1. Share your findings within your organization</h3> <p>Correlation charts are a powerful tool for showing stakeholders the impact that site speed has on the business. Even if your results aren't what you hoped they would be, you can use this data to prove the value of continuing to invest in performance.</p> <h3>2. Understand why your business metrics are not improving despite your efforts</h3> <p>This might seem a bit demoralizing, but when you think about it, it's actually helpful to know. When you know where your performance plateau begins, you can answer the question "Why don't my business or user engagement metrics improve when I make my site faster?" If you improve Start Render from 5 seconds to 3 seconds, but the performance plateau for your site starts at 2 seconds, you haven't yet made Start Render fast enough.&nbsp;</p> <h3>3. Change your performance targets</h3> <p>Set targets for moving more of your users into the cohorts that experience faster Start Render or LCP times. Ideally, improving key site speed metrics for more of your users should also improve bounce rate (or whatever user engagement or business metric you're tracking). Ultimately, this is good for your business.</p> <p>You can use your performance plateau to set goals. Continuing with the example in point 2, above, if you know that the plateau starts at 2 seconds, you can create a Start Render target of 1.5 seconds to work toward.</p> <h3>4. Or don't change your performance targets</h3> <p>In the Site C example, the optimal bounce rate occurs for the 100-millisecond LCP cohort, and the plateau starts just 300 milliseconds later. With a huge amount of work, you might succeed in delivering faster LCP times to more sessions, but would the effort be worth it?</p> <p>As the close-up view of the chart below shows, the bulk of sessions have speedy LCP times that are at the beginning of the performance plateau. In this case, the chart shows that perhaps you can be satisfied with your efforts, and your focus should be on fighting regressions and staying fast.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-c-closeup.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>5. Create a baseline to measure against</h3> <p>Repeat this exercise periodically &ndash; perhaps monthly, or semi-annually, or after a deploy where you've made a number of performance improvements &ndash; and compare the correlation charts over time. Ideally, you'll see more of your sessions fall into the faster section of the distribution, before the performance plateau.</p> <h2>Questions? Feedback?</h2> <p>If you experiment with creating correlation charts and plotting the performance plateau for your site, I'd love to hear about your results!</p> <h2>Related:&nbsp;</h2> <ul> <li><a href="https://support.speedcurve.com/docs/performance-for-retailers">Web performance for retailers</a></li> <li><a href="https://support.speedcurve.com/docs/create-correlation-charts">How to create correlation charts</a></li> <li><a href="https://support.speedcurve.com/docs/competitive-benchmarking">How to benchmark your site against your competitors</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">How to set up real user monitoring (RUM)</a></li> </ul> Mon, 06 Mar 2023 00:00:00 +1300 2022 in review: New dashboards, Core Web Vitals enhancements, third-party tracking & more!
https://www.speedcurve.com/blog/2022-recap <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/2022.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Every year feels like a big year, and 2022 has been no exception. Not only did we celebrate our ninth birthday (!!!) we also:</p> <ul> <li>Launched a series of new and improved dashboards to help you identify and fix your performance issues more quickly and efficiently (More on those below.)</li> <li>Expanded our <a href="https://support.speedcurve.com/changelog/synthetic-new-regions-italy-bahrain-south-africa-indonesia">test agent locations</a> to bring you even more places to test from worldwide</li> <li>Jumped into in-person events, and were so happy to catch up with some of you at <a href="https://smashingconf.com/">SmashingConf</a> and <a href="https://perfnow.nl/">performance.now()</a> (You can watch videos of <a href="https://www.youtube.com/watch?v=cnr3CJwpaps&amp;t=2s">my talk about performance budgets</a> and <a href="https://www.youtube.com/watch?v=CFvTCiIDwMM">Andy Davies' talk about third parties</a>.)</li> </ul> <p>Keep reading for a full recap of the past year...</p><h2>Home dashboard</h2> <p>It only took nine years, but we now have a <a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards">home page</a>! ;-)</p> <p>Your Home dashboard brings together your Synthetic and RUM data to provide an easy-to-digest &ndash; and easy-to-share &ndash; snapshot of how your site is performing. You can see at a glance:</p> <ul> <li>Key metrics like <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a> and your <a href="https://support.speedcurve.com/docs/user-happiness">User Happiness</a> score</li> <li>The current status of your <a href="https://support.speedcurve.com/docs/performance-budgets-101">performance budgets</a></li> <li>Performance in <a href="https://support.speedcurve.com/docs/test-agent-locations">different geolocations</a> around the world</li> <li>Your current <a href="https://support.speedcurve.com/docs/competitive-benchmarking">competitive benchmarks</a> status</li> <li>Top <a href="https://support.speedcurve.com/docs/lighthouse">Lighthouse recommendations</a> for performance improvements</li> </ul> <p><a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/home-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>(Hint: If you're considering installing a <a href="https://support.speedcurve.com/docs/share-dashboards">big-screen performance monitor</a> at your organization, this dashboard is a good candidate for that.)</p> <h2>Synthetic Site dashboard</h2> <p>You may have noticed that your <a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards">Site dashboard</a> has been given a complete overhaul. 
It now focuses on helping you spot differences between the performance of your pages, making it easier to identify which pages in your site have performance issues.</p> <p><a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/site-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>When you identify a page that needs further investigation, you can quickly navigate through to your new Synthetic Page dashboard...</p> <h2>Synthetic Page dashboard</h2> <p>You'll find your new <a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards">Page dashboard</a> to be similar to your old Synthetic Site dashboard. Here you can still find time series charts that show how performance is trending over time across a number of key metrics, including Core Web Vitals.</p> <p><a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/page-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Contextual Web Vitals recommendations</h2> <p>As you may already know, <a href="https://support.speedcurve.com/docs/lighthouse">we run full Lighthouse audits</a> on every page you test in synthetic. The results &ndash; including detailed performance optimization recommendations &ndash; are available in your test details.</p> <p>In addition to that, we now badge all performance recommendations with the Core Web Vitals metric (LCP, CLS, FID, and TBT) they affect. Fix those performance issues and you should see an improvement in your Core Web Vitals and Lighthouse scores.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/contextual-vitals-recommendations.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Here are the dashboards where you can find your Vitals-badged performance recommendations:</p> <ul> <li>Home</li> <li>Vitals</li> <li>Improve</li> <li>Synthetic Site</li> <li>Synthetic Page</li> </ul> <h2>RUM Compare dashboard</h2> <p>With your <a href="https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard/">RUM Compare dashboard</a>, you can easily generate side-by-side comparisons for any two cohorts of real user data.</p> <p><a href="https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/rum-compare-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>Some of the many reasons you might want to compare RUM cohorts:</p> <ul> <li><strong>Improve Core Web Vitals</strong> by identifying the tradeoffs between pages that have different layout and construction</li> <li><strong>Triage a performance regression</strong> related to the latest change or deployment to your site by looking at a before/after comparison</li> <li><strong>Explore and compare different out-of-the-box cohorts</strong>, such as device types, geographies, page labels, and more</li> <li><strong>Analyze A/B tests or experiments</strong> to understand which had the most impact on user behaviour, as well as performance</li> <li><strong>Optimize your funnel</strong> by understanding differences between users that convert or bounce from your site and users who don't</li> <li><strong>Evaluate CDN performance</strong> by exploring the impact of time-of-day traffic patterns</li> </ul> <h2>RUM update: Page labels</h2> <p>Page labels are an 
integral part of getting the most out of dashboards like your RUM Compare dashboard (above). We've added <a href="https://www.speedcurve.com/blog/page-labels/">new capabilities for defining page labels</a> with an easy, no-code, server-side (SpeedCurve) option using URL patterns. (This means you now have four options for labeling pages in RUM, depending on your use case.)</p> <p><a href="https://www.speedcurve.com/blog/page-labels/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/rum-page-labels.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>RUM update: Performance heat maps</h2> <p>We've added heat maps to your RUM Performance dashboard. These let you quickly spot which page labels or paths are outliers. You can then prioritize and focus on these areas to improve important metrics, such as Largest Contentful Paint. Fixing high-traffic but poor-performing areas of your site will help lift your overall metrics.</p> <p><a href="https://support.speedcurve.com/changelog/rum-performance-heat-maps"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/performance-heatmaps.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Synthetic update: Compare third parties</h2> <p>When <a href="https://support.speedcurve.com/docs/bookmark-and-compare-tests">comparing any two synthetic tests</a>, we now show you a comparison of common third parties and identify which third parties are unique to each test. This lets you quickly spot changes in common third parties, as well as new third parties that might have introduced performance issues.</p> <p><a href="https://support.speedcurve.com/docs/first-third-parties"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/compare-third-parties.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Expanded Industry Speed Benchmarks</h2> <p><a href="https://app.speedcurve.com/benchmarks/">Page Speed Benchmarks</a> is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries &ndash; from retail to media &ndash; over the past year. The dashboard allows you to easily filter by region, industry, mobile/desktop, fast/slow, and key web performance metrics, including Core Web Vitals.</p> <p><a href="https://app.speedcurve.com/benchmarks/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/industry-benchmarks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>The <a href="https://app.speedcurve.com/benchmarks/">Industry Benchmarks dashboard</a> is publicly available (meaning you don't need a SpeedCurve account to explore it) and is a treasure trove of meaningful data that you can use for your own research.</p> <h2>Lots of new videos!</h2> <p>We've added a plethora of new videos to give you everything from <a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough">dashboard tours</a> to best practices for <a href="https://support.speedcurve.com/docs/diagnose-a-third-party-regression">fighting third-party regressions</a> and <a href="https://support.speedcurve.com/docs/track-core-web-vitals">tracking Core Web Vitals</a>. 
You can find them in our <a href="https://support.speedcurve.com/docs/welcome">Support Hub</a>.</p> <p><a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/support-videos.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Here's to the year that was and the year ahead. :-)</h2> <p>As we enter our tenth year of bringing you the very best performance monitoring tools we can muster &ndash; and helping you deliver fast, joyous experiences to your users &ndash; we welcome your feedback and suggestions. Let us know your thoughts by leaving a comment or sending us a note at support@speedcurve.com.</p> Thu, 22 Dec 2022 00:00:00 +1300