SpeedCurve Blog https://www.speedcurve.com/blog/ Speed matters. Get the latest on how the areas of design and performance overlap with a focus on creating great user experiences. NEW! RUM Compare dashboard https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard <p>Exploring real user (RUM) data can be a hugely enlightening process. It uncovers things about your users and their behavior that you might never have suspected. That said, it's not uncommon to spend precious time peeling back the layers of the onion, only to find false positives or uncertainty in all that data.</p> <p>At SpeedCurve, we believe a big part of our job is making <em>your job</em> easier. This was a major driver behind the Synthetic Compare dashboard we released last year, which so many of you have given us great feedback on.</p> <p>As you may have guessed, since then we've been hard at work coming up with the right way to explore and compare your RUM datasets using a similar design pattern. Today, we are thrilled to announce your new RUM Compare dashboard!</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/rum-compare-overview.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>With your RUM Compare dashboard, you can easily generate side-by-side comparisons for any two cohorts of data. Some of the many reasons you might want to do this include:</p> <ul> <li><strong>Improve Core Web Vitals</strong> by identifying the tradeoffs between pages that have different layout and construction</li> <li><strong>Triage&nbsp;a performance regression</strong> related to the latest change or deployment to your site by looking at a before/after comparison</li> <li><strong>Explore and compare different out-of-the-box cohorts,</strong> such as device types, geographies, page labels, and more</li> <li><strong>Analyze A/B tests or experiments</strong> to understand which had the most impact on user behavior, as well as performance&nbsp;</li> <li><strong>Optimize your funnel</strong> by understanding the differences between users who convert (or bounce) and users who don't</li> <li><strong>Evaluate CDN performance</strong> by exploring the impact of time-of-day traffic patterns</li> </ul> <p>Let's take a tour...</p><h2>How to compare RUM sessions</h2> <p>In September of last year, <a href="https://www.speedcurve.com/blog/real-user-monitoring-sessions-dashboard/">we released</a> the RUM Sessions dashboard. With it, we gave you the ability to drill into specific data points on a time series to explore those sessions, as well as the ability to bookmark them. You'll now find the ability to compare sessions from a time series alongside those existing options.</p> <p>To compare two different data points from any RUM time series chart, simply click 'Compare Sessions' on two separate data points:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/tooltip.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="A tooltip that displays a performance metric and options for viewing, bookmarking and comparing sessions."
/></p> <p>After you've selected two sessions to compare, you'll see a dashboard with those sessions in a side-by-side comparison:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/chart_compare.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Initial view of the Compare dashboard after clicking on two data points from a series" /></p> <h2>Filtering your data</h2> <p>You can also navigate directly to the dashboard. From the RUM menu in the side navigation, click on 'Compare'. By default, you are presented with a view of Mobile vs. Desktop device types:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/mobile_desktop.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Compare dashboard with Mobile vs. Desktop cohorts" /></p> <p>You can alter each of the cohorts by adding and removing filters from that cohort's menu:&nbsp;</p> <p><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/444/filter_menu.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Menu displaying different filter options for the cohort" /></p> <p>If you're already experienced with our other RUM dashboards, the options available in your filter will look familiar. The 'Metric Threshold' filter has also been introduced as a way for you to explore 'fast' versus 'slow' sessions for a metric. To use it, simply select the metric along with the upper and lower bounds for each cohort.</p> <p>To demonstrate, let's look at a comparison of fast sessions (0-2 seconds) to slow sessions (2-15 seconds) for Largest Contentful Paint:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/metric_threshold.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Metric threshold filter showing LCP metric with min 0 and max 2 seconds." /></p> <h2>Dashboard components</h2> <h3>Overview</h3> <p>The Overview highlights a comparison of key performance metrics between the cohorts. Here we can see that the faster cohort has a median LCP of 1.25 seconds, while the slower cohort is at 3 seconds. More importantly, we see the bounce rate is 27% higher for the slower subset of users. Speed matters!</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/overview.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparison table called Overview which contains a subset of key metrics compared between the cohorts" /></p> <h3>Sample sizes</h3> <p>If you've spent any amount of time exploring your RUM data, you've probably learned that the size of your population matters a lot. This chart lets you quickly discern whether or not you're comparing statistically viable sets of data. While the example below is not a 50/50 split, there seems to be enough data in the slower cohort, compared to the faster one, to make some valid observations.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/size_sample.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Line chart showing the population size of each cohort by pageviews." /></p> <h3>Location</h3> <p>Understanding where your users are coming from and how their experience may vary is a key part of RUM analysis, especially as you make investment decisions on CDNs or CSPs. While the difference is not hugely pronounced, in the visualization below you can see there is a larger subset of 'fast' users coming from Norway and the United States. Note that you have the ability in this graph to toggle between an absolute value and a percentage.
This is key when you are comparing datasets that are not the same size.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/locationmap.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing a world map with population density visualized as bubbles. Table below showing numeric representation by Country." /></p> <h3>User Happiness</h3> <p>This one is somewhat self-explanatory: the comparison of your 'Happy Page Views' shows that far more of your slower users fall into the 'Unhappy' bucket. You can learn more about our Happiness Index and how it's calculated&nbsp;<a href="https://support.speedcurve.com/docs/user-happiness">here</a>.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/happiness.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Threshold comparison of happy, ok and unhappy users between cohorts" /></p> <h3>Looking at cohorts by other filters</h3> <p>This table defaults to a breakdown across cohorts by browser, but there are several other dimensions to explore. In this example, we don't see much of a browser-driven impact on LCP. (If you're wondering where iPhone is, keep in mind that, unfortunately, not all metrics are supported across platforms.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/browser.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table breakdown of the population of browser types for each cohort" /></p> <p>However, a breakdown by connection speed highlights that, while most of the users are on fast connections, a higher percentage of users in the slow cohort are coming from slower connection speeds.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/connection.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table showing a population comparison by connection speed for each cohort." /></p> <h3>Event Timeline&nbsp;</h3> <p>Your event timeline allows you to compare key metrics between your cohorts. This can be useful for identifying metrics that may be more interesting, or potentially responsible for regressions in downstream metrics such as LCP. Here, backend time is 500ms slower, which affects ALL downstream metrics.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/eventtimeline.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparison of the two cohorts event timeline for key metrics." /></p> <h3>Metrics</h3> <p>The last, and potentially most compelling, component in the dashboard is focused on metrics. This component contains a LOT of information you can dig into.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/metrics.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table showing a comparison of a long list of metrics" /></p> <p>When you expand each metric, you have the ability to look at a comparison distribution, a time series comparison, or a breakdown across all of our key dimensions. In this distribution chart, you can see that fast users often have network times of 0 seconds, most likely due to connection reuse.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/distribution.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Distribution comparison of the two cohorts for Network time" /></p> <p>Stacked bubble charts can help you quickly identify the presence of outliers. In the graph below, a lot of the slow LCP traffic is driven by older versions of Chrome.
Oftentimes this can be automated traffic or other non-user traffic.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/bubbles.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Stacked bubble chart showing a breakdown by browser type. One outlier for Chrome 104 is highlighted." /></p> <h2>We welcome your feedback!</h2> <p>Phew! That's a lot of information packed into a single dashboard.&nbsp;We're excited to hear how you put this new RUM capability to work for your site. <a href="mailto:support@speedcurve.com">Drop us a line</a>&nbsp;and let us know what you think!</p> <p>And if you're not yet using our real user monitoring to learn how actual users are experiencing your site, <a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">try our RUM for free</a>!</p> Thu, 01 Dec 2022 00:00:00 +1300 Page labels: Why they are important and how to use them https://www.speedcurve.com/blog/page-labels <p>Labeling your pages in your synthetic and real user monitoring (RUM) tools is a crucial step in your performance monitoring setup. We recently released some exciting new capabilities for labeling your RUM pages that we want to share with you. This is also a great opportunity to reiterate why page labels are important, and to show you how easy it is to apply labels to your pages.</p> <h2>Why should I care about page labels?</h2> <p>There are so many benefits to labeling your pages in both your synthetic and real user monitoring (RUM) tools. Page labels give you the ability to:</p> <ul> <li>Compare and benchmark similar pages across different sites</li> <li>Create more meaningful performance budgets in RUM</li> <li>Correlate RUM data with synthetic diagnostics in the same charts (e.g., for Core Web Vitals that are measurable in synthetic and RUM)</li> <li>Prioritize performance optimizations according to their projected impact on real users</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/rum-vs-synthetic-budget-lcp.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Ready to learn more? Let's get to it!</p><h2>How do page labels help me?</h2> <p>Page labels give you the opportunity to gain richer insights from your data, and then to make that data actionable. Here are a few examples.&nbsp;</p> <h3>Compare and benchmark similar pages across different sites</h3> <p>For synthetic testing, page labels give you the ability to add URLs to a defined segment. When you configure a synthetic test, you provide the URL as well as a page label. There are a few common use cases for comparing similar pages:</p> <ul> <li><a href="https://support.speedcurve.com/docs/synthetic-page-labels">Compare different URLs of the same type</a> across different sites you manage.</li> <li><a href="https://support.speedcurve.com/docs/private-qa-development-and-staging-websites">Compare a pre-production URL</a> to a page on your production site.</li> <li><a href="https://support.speedcurve.com/docs/competitive-benchmarking" target="_blank" rel="noopener">Benchmark key pages</a>&nbsp;against those of your competitors.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/auto-benchmark.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series chart comparison of LCP for three electric car manufacturers' home pages."
/></p> <p style="text-align: center;"><em>Benchmark Dashboard from SpeedCurve</em></p> <h3>URL paths can be too granular</h3> <p>While <a href="https://support.speedcurve.com/docs/rum-page-labels#page-label-url-rules">using URL paths to filter your RUM data</a> can be really useful, when you're looking at datasets for a specific page template it's also important to be able to analyze those pages as groups or segments.</p> <p>To illustrate, let's compare two distribution charts: the first for a set of grouped product pages (PDP), and the second for a unique URL path for a specific product page.</p> <p>This distribution for pages labeled as 'PDP' (product details page) is a good example of a fully-formed set of data representing a population:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/pdp-histo-blue.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Fully formed histogram for page load time" /></p> <p>Next, this distribution illustrates the issue with cardinality that occurs when attempting to analyze a specific URL. Even the p75 is quite different between the two distributions:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/product-url-histo-blue.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Sparse histogram of page load time for a unique url" /></p> <p>With such a small dataset, it's difficult to draw firm conclusions or use the information in the second chart to inform your performance budgets.</p> <p>For more information on interpreting histograms, <a href="https://support.speedcurve.com/docs/how-to-read-a-histogram" target="_blank" rel="noopener">see this support article</a>.</p> <h3>Combine synthetic and RUM datasets</h3> <p>One of the things SpeedCurve is known for (and extremely passionate about) is using the complementary datasets from RUM and synthetic monitoring. When set up correctly, you have the ability to match synthetic page labels with RUM page labels. This opens up a lot of opportunity to do things like:</p> <h4>Present both datasets on a single chart</h4> <p>This example shows Largest Contentful Paint (LCP) from RUM for a page labeled 'Home' along with Image Size and the Lighthouse Performance score from a synthetic test with the same label:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/combined-chart.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series chart showing LCP from RUM data, image size and lighthouse score from synthetic." /></p> <h4>Correlate RUM data with synthetic diagnostics for Core Web Vitals</h4> <p>When using both RUM and synthetic, <a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough">your SpeedCurve Vitals dashboard</a> provides the most informed metrics (Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift) from RUM, alongside synthetic diagnostics such as LCP element identification and performance optimization recommendations.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/cwv-dash.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Core web vitals dashboard showing combined RUM and Synthetic datasets" /></p> <h4>Prioritize performance opportunities by RUM impact</h4> <p>Your <a href="https://support.speedcurve.com/docs/aggregated-lighthouse-results">Improve dashboard</a> aggregates performance recommendations from synthetic testing and identifies the percentage of your RUM traffic that would benefit.
This is extremely helpful when trying to prioritize performance work for the most impact on your actual users.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/improve-dash.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table from the Improve dashboard showing recommendations from synthetic audit and the percentage of RUM traffic that would be impacted." /></p> <h2>How do I set up page labels in SpeedCurve?</h2> <h3>Synthetic page labels</h3> <p>This is really straightforward. From your Site Settings, simply add a label next to the URL you want to test.</p> <p>As a best practice, page labels should be unique for each URL if you are planning on comparing them in your benchmarks or correlating them with your RUM data.</p> <p>As a rule of thumb, you should have synthetic page labels for everything you label in RUM. For an overview of Site Settings, <a href="https://support.speedcurve.com/docs/setup-guide#step-4--add-sites-and-urls-for-testing">see this support article</a>.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/site-settings.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="URLs section of the Site Settings page highlighting a page label" /></p> <h3>RUM page labels</h3> <p><strong>Product update!</strong> We've recently rolled out the ability for customers to define page labels with a no-code, server-side (SpeedCurve) option using URL patterns (option 2 below). There are now four options for labeling pages in RUM, depending on your use case. The following are in priority order. If you use more than one option to define a unique page label, we will default to the option with the highest priority.</p> <h4>Option 1: Defining page labels using the RUM API</h4> <p>To programmatically assign labels, simply call the API from your pages using the following JavaScript placed after your SpeedCurve snippet:</p> <pre class="language-markup"><code>LUX = window.LUX || {};
LUX.label = "Home";</code></pre> <p>For more information on the RUM JS API, see <a href="https://support.speedcurve.com/docs/rum-js-api" target="_blank" rel="noopener">this support article</a>.</p> <h4>Option 2: Defining page labels using URL rules (<strong>NEW!</strong>)</h4> <p>This no-code approach is by far the most convenient. From your RUM Settings, choose the Page Label URL Rules tab in the top navigation and select 'Add Label'.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/add_page_label.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page Label Groups interface with selected Add Label button" /></p> <p>Next, choose a name for your label and add one or more URL rules. Full paths are supported (including domain), as well as paths relative to the root domain. Wildcard characters may also be used, as demonstrated below.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/new_rule.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="A new page label rule called 'article' being created with an absolute path and a path containing a wildcard" /></p> <p>Now that you've added a rule (or rules), click on 'URLs with no associated Page Label' to show a list of URLs that are unmatched.
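</p> <p>To make the rule syntax concrete, here's a hypothetical example (the label name and paths are made up, not SpeedCurve defaults): a label named 'Article' could be defined with a full path such as <code>https://www.example.com/news/2022/my-article</code>, or with a relative path containing a wildcard, such as <code>/news/*</code>, which would match every URL under that section.</p> <p>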
You can then build a rule that matches unmatched URLs and assign it to an existing page label.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/reconcile.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page Label reconcile tool showing a list of unmatched page labels and an input field for pattern matching from the list." /></p> <h4>Option 3: Use an existing JavaScript variable</h4> <p>This is a good option if you're already using labels that you'd like to repurpose. From your RUM Settings, choose the 'Page Label JS Variable' tab. Simply fill in the field provided with the JavaScript variable, array, or property. (Note that functions are not supported.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/page_label_variable.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page label JS Variable UI with a form field populated with a variable called page_name" /></p> <h4>Option 4: Default to <code>document.title</code></h4> <p>If no other options are used to define a page label, the <code>document.title</code> of the page will be used. This is typically not the most desirable option due to the uniqueness of the title. The titles are sometimes extremely long and separated by pipes and spaces (' | ') which can make it hard to navigate dashboard filters, chart titles, etc. Additionally, in some cases titles are so unique that you'll run into the same issues discussed earlier around URL cardinality.&nbsp;</p> <p>For more information on setting up your RUM page labels, see <a href="https://support.speedcurve.com/docs/rum-page-labels" target="_blank" rel="noopener">this support article</a>.</p> Wed, 02 Nov 2022 00:00:00 +1300 Element Timing: One true metric to rule them all? https://www.speedcurve.com/blog/element-timing-one-true-metric <p><span style="color: #1f1f1f; font-size: 16px;">One of the great things about Google's <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a> is that they provide a standard way to measure our visitors&rsquo; experience. Core Web Vitals can answer questions like:</span></p> <ul> <li><strong>When was the largest element displayed?</strong> Largest Contentful Paint (LCP) measures when the largest visual element (image or video) finishes rendering.</li> <li><strong>How much did the content move around as it loads?</strong> Cumulative Layout Shift (CLS) measures the visual stability of a page.</li> <li><strong>How responsive was the page to visitors' actions?</strong> First Input Delay (FID) measures how quickly a page responds to a user interaction, such as a click/tap or keypress.</li> </ul> <p>Sensible defaults, such as Core Web Vitals, are a good start, but one pitfall of standard measures is that they can miss what&rsquo;s actually most important.</p> <h2>The (potential) problems with Largest Contentful Paint</h2> <p>Largest Contentful Paint (LCP) makes the assumption that the largest visible element is the most important content from the visitors&rsquo; perspective; however, we don&rsquo;t have a choice about which element it measures. LCP may not be measuring the most appropriate &ndash; or even the same &ndash; element for each page view.</p> <h3>The LCP element can vary for first-time vs repeat visitors</h3> <p>In the case of a first-time visitor, the largest element might be a consent banner. 
On subsequent visits to the same page, the largest element might be an image for a product or a photo that illustrates a news story.</p> <p>The screenshots from What Hifi (a UK audio-visual magazine) illustrate this problem. When the consent banner is shown, one of its paragraphs is the LCP element. When the consent banner is not shown, an article title becomes the LCP element. In other words, the LCP timestamp varies depending on which of these two experiences the visitor had!</p> <p style="text-align: center;"><a href="https://www.whathifi.com"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/whathifi2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="https://www.whathifi.com with and without the consent banner visible" /><em>What Hi Fi</em></a><em>&nbsp;with and without the consent banner visible</em></p><h3>Even when it&rsquo;s visible, a cookie banner may not always be the LCP element</h3> <p>In this example from my local newspaper, the story image is displayed and then the consent dialog covers it. But because the text elements in the dialog are smaller than the image, the image is measured for LCP.</p> <p style="text-align: center;"><a href="https://www.stroudnewsandjournal.co.uk/sport"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/stroudnewsandjournal.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="LCP element on www.stroudnewsandjournal.com with and without consent banner visible" /><em>Stroud News</em></a><em> with and without the consent banner visible</em></p> <p style="text-align: center;">&nbsp;</p> <h3>The largest element can vary by viewport size</h3> <p>On some devices What HiFi has a story image hidden behind the ad at the bottom of the page, which gets measured as LCP. But more commonly we see variations between desktop and mobile viewport sizes, where different elements are measured for LCP.</p> <p>Joules clothing has this challenge on its product listing pages, where the message about the impact of UK postal services going on strike is the LCP element at mobile viewport sizes, but one of the product images becomes the LCP element at larger viewports.</p> <p style="text-align: center;"><a href="https://www.joules.com"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/joules-mobile-v-desktop.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparing the LCP element between mobile and desktop on Joules product listing pages" /><em>Joules</em></a><em>&nbsp;on mobile and desktop viewports</em></p> <p style="text-align: center;">&nbsp;</p> <h3>LCP uses the rendering timestamp to prioritize same-size elements</h3> <p>Joules also highlights another challenge with LCP &ndash; if I asked you to guess which element would be measured for LCP, you&rsquo;d probably guess the first product image. But as all four images are the same size, and the third image was actually rendered first, the third image is measured for the LCP timestamp. This might be fine in some situations, but the LCP element might not be the most important image from your &ndash; or your visitors' &ndash; perspective.</p> <h3>Other caveats...</h3> <p>Those are just some examples of the issues we come across when measuring visitors&rsquo; experiences day-to-day.
There are other examples, such as:</p> <ul> <li>image elements that are animated in, and</li> <li>'soft-navigations' in single-page apps (SPAs).</li> </ul> <h2>How to understand which elements are chosen for LCP</h2> <p>If you want to understand which elements are being chosen for LCP, you can either use <a href="https://www.speedcurve.com/features/performance-monitoring/">synthetic monitoring</a>&nbsp;or the <a href="https://github.com/GoogleChrome/web-vitals/blob/main/src/attribution/onLCP.ts#L52">web-vitals.js library</a>, which includes support to help determine which element is the LCP element.</p> <p>With LCP, Chrome is making a best guess on which element is most important, but it may not be consistent between different pages, or different viewports, and it can be affected by other UI elements too.</p> <p>There's no guarantee that LCP is measuring the moments that matter to your site and your visitors. That&rsquo;s where Element Timing helps.</p> <h2>What is Element Timing?</h2> <p>Similar to LCP, Element Timing measures when a DOM element is rendered. Unlike LCP, Element Timing allows you (rather than Chrome) to decide which elements you want to measure. And unlike LCP, Element Timing allows you to measure more than one element on a page.</p> <p>Element Timing is implemented by adding the <code>elementtiming</code> attribute to the element you want to measure, with its value set to the identifier the timestamp will be given.</p> <p>For example, if we wanted to measure when an image is displayed, we could use this markup:</p> <pre class="language-markup"><code>&lt;img src="/images/logo.png" elementtiming="logo-shown" /&gt;</code></pre> <p>Then, when the logo is displayed, an Element Timing entry with the identifier &lsquo;logo-shown&rsquo; will be created, containing a timestamp and other data about the element, e.g., intersection with the viewport.</p> <p>Timing entries can be retrieved using a Performance Observer, and the data can be forwarded to a RUM or analytics product:</p> <pre class="language-markup"><code>&lt;script&gt;
const observer = new PerformanceObserver((list) =&gt; {
  list.getEntries().forEach((entry) =&gt; {
    // process each entry here, e.g. forward entry.identifier
    // and entry.renderTime to your RUM or analytics product
  });
});
observer.observe({ entryTypes: ["element"] });
&lt;/script&gt;</code></pre> <h2>How Element Timing can fill the gaps that Largest Contentful Paint leaves</h2> <p>Revisiting the Joules example from earlier, we could use Element Timing to measure when landmarks such as the logo or category title are displayed.
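</p> <p>As a rough sketch of what that markup might look like (the attribute values here are illustrative, not taken from Joules' actual site):</p> <pre class="language-markup"><code>&lt;img src="/images/logo.svg" elementtiming="logo" /&gt;
&lt;h1 elementtiming="category-title"&gt;Dresses&lt;/h1&gt;</code></pre> <p>Because Element Timing lets you annotate more than one element, each of these landmarks gets its own timing entry.</p> <p>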
We could also measure when the first product image is rendered.</p> <p style="text-align: center;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/joules-landmarks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Possible landmarks we might want to measure using Element Timing on Joules' product listing page" /><em>Example landmarks on Joules' Product Listing Page (PLP)</em></p> <p style="text-align: center;">&nbsp;</p> <p>Similarly, a publisher might want to measure when their logo, the story headline, and the accompanying image are shown.</p> <p>It should also be possible to use Element Timing to measure some pop-ups, such as consent banners; however, for now this probably depends on a site serving its own consent banner rather than relying on a third-party service.</p> <h2>A replacement for LCP in single-page apps?</h2> <p>A common question we get from customers with single-page applications (SPAs) is:&nbsp;<em>"How do I measure LCP after a route change?"</em></p> <p>The short answer is: <em>"You can&rsquo;t."</em></p> <p>LCP starts measuring when a navigation starts and stops on user input. SPAs often only have a single navigation as the initial page loads, with responses to user actions updating the contents of the page rather than navigating to a new one.</p> <p>Element Timing doesn't have these restrictions. It can measure new elements as they're inserted into the DOM and rendered, even after user interaction.<br /><br />In the past we've typically relied on <a href="https://support.speedcurve.com/docs/single-page-applications#using-custom-metrics-user-timing-marks--measures-in-a-spa">User Timing's marks and measures</a> to measure how long a route change or other action takes to complete. With Element Timing we can also measure when the visual changes are displayed and get a more accurate picture of the users' experience.</p> <h2>How to collect Element Timing data in SpeedCurve</h2> <p>Element Timings can be collected in both Synthetic and RUM by configuring them in the <a href="https://support.speedcurve.com/docs/custom-metrics-video">Custom Metrics</a> section of your settings:</p> <p><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/add-custom-metric.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Adding a custom metric in SpeedCurve" /></p> <p style="text-align: center;"><em>Adding a custom metric in SpeedCurve</em></p> <p style="text-align: center;"><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/configuring-element-timing.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Configuring Element Timing in SpeedCurve" /></p> <p style="text-align: center;"><em>Configuring Element Timing in SpeedCurve</em></p> <p style="text-align: center;">&nbsp;</p> <p>After you&rsquo;ve started collecting Element Timings, you can add them to custom charts on your <a href="https://support.speedcurve.com/docs/custom-charts-dashboards">Favorites dashboards</a> and start tracking the moments that matter for you and your visitors. 
You can also <a href="https://support.speedcurve.com/docs/performance-budgets-and-alerts">create performance budgets and get alerts</a> when your metrics exceed their thresholds.</p> <p>In this example, a retailer is tracking when their logo and the first product image are displayed:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/visualising-element-timing.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Timeseries chart visualising the times when the logo and first product image were displayed" /></p> <p style="text-align: center;"><em>Visualising when a retailer's logo and first product image are displayed to a visitor</em></p> <h2>Limitations of Element Timing</h2> <p>Currently, Element Timing has a few limitations.</p> <h3>Only available in Chromium-based browsers</h3> <p>Neither <a href="https://webkit.org/status/">WebKit's</a>&nbsp;nor <a href="https://mozilla.github.io/standards-positions/">Firefox&rsquo;s</a> status trackers mention it,&nbsp;but as it's implemented as an attribute in markup, it will degrade gracefully in those browsers:</p> <p style="text-align: center;"><a href="https://caniuse.com/?search=elementtiming"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/can-i-use-element-timing.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table from caniuse.com showing browser support for Element Timing" /><em>Can I Use Element Timing?</em></a></p> <p>&nbsp;</p> <h3>Supported on a limited subset of HTML elements</h3> <p>For privacy and security reasons, Element Timing is only supported on a limited subset of HTML elements:</p> <ul> <li><code>img</code> elements</li> <li><code>image</code> elements within an SVG</li> <li><code>video</code> element's poster image</li> <li>elements with a <code>background-image</code></li> <li>text nodes</li> </ul> <p>But even with just this subset we can still measure many of the loading milestones we care about, such as key images and headings.</p> <h3>Can't track elements within iframes</h3> <p>As Element Timing only measures elements in the current document, we can&rsquo;t yet use it to track when, say, an embedded YouTube video renders or when ads are displayed, as these are typically contained within an iframe.
(There&rsquo;s a <a href="https://docs.google.com/document/d/1LbXafcZMHagLReSkEkjhQdOIzeA9RS8_2e0_g5QLgY0">proposal to allow iframes to expose some of their performance data to their parent</a> that might enable this, but it&rsquo;s at an early stage.)</p> <h3>Elements in the shadow DOM are currently excluded from Element Timing</h3> <p><a href="https://youtu.be/sOaZsMFScE0">Watch the discussion</a> about this at the most recent Web Performance Working Group TPAC meeting.</p> <h2>Summary</h2> <p>Largest Contentful Paint is a useful default, but the largest element isn't necessarily the most important from a visitor's perspective.</p> <p>Element Timing allows us to choose the elements that are most important on our own pages, and therefore measure moments that are most essential to our visitors' experience.</p> <p>You can use SpeedCurve to collect and track Element Timings so you can see how the performance of key elements changes over time or varies between different pages.</p> <p>If you've been using Element Timings to measure your visitors' experiences, or if you've implemented them in a SPA, we'd love to&nbsp;<a href="mailto:support@speedcurve.com">hear about it</a>.</p> <h2>References</h2> <ul> <li><a href="https://mozilla.github.io/standards-positions/">Mozilla Standards Positions</a></li> <li><a href="https://webkit.org/status/">WebKit Feature Status</a></li> <li><a href="https://wicg.github.io/element-timing/">Element Timing API Specification</a></li> <li><a href="https://w3c.github.io/user-timing/">W3C User Timing API Specification</a></li> <li><a href="https://caniuse.com/?search=elementtiming">Browser support for ElementTiming</a></li> <li><a href="https://support.speedcurve.com/docs/customer-data#adding-custom-data-to-settings">Configuring SpeedCurve Custom Data</a></li> <li><a href="https://docs.google.com/document/d/1LbXafcZMHagLReSkEkjhQdOIzeA9RS8_2e0_g5QLgY0">Proposal for Cross Frame Performance Timeline</a></li> </ul> Wed, 21 Sep 2022 00:00:00 +1200 July 2022 product update: Web Vitals support, more synthetic test agents & performance heat maps https://www.speedcurve.com/blog/july-2022-product-update <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/july-banner.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's been another busy month here at SpeedCurve! Check out our latest product updates below.</p><h2>RUM: Performance heat maps</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/rum-perf-heat-maps.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Fixing high-traffic but poorly performing areas of your site will improve your overall metrics &ndash; and create a faster, better experience for more of your users. To help you do this, we've added heat maps to your RUM Performance dashboard that let you quickly spot which page labels or paths are experiencing performance issues.
You can then prioritize and focus on these areas to improve important metrics like Largest Contentful Paint.</p> <h2>Synthetic: New test regions in Italy, Bahrain, South Africa, and Indonesia</h2> <p>Based on your feedback and requests, we've added four new regions from which you can run your synthetic tests:</p> <ul> <li>Italy (Milan)</li> <li>South Africa (Cape Town)</li> <li>Bahrain (Manama)</li> <li>Indonesia (Jakarta)</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/new-test-regions.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>If none of <a href="https://support.speedcurve.com/docs/test-agent-locations">our current test regions</a> gives you the testing locations you need, you can also <a href="https://support.speedcurve.com/docs/private-agents">run your own private agent</a>.</p> <h2>Change summary metrics on your charts</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/summary-metrics.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>We've added a new display option that lets you change the summary metrics shown at the top of each chart. You can now choose between Aggregate, Minimum, Maximum, or Latest. Hovering over any metric will give you a tooltip with more context.</p> <h2>In case you missed it...</h2> <p>Here are some other features we've released in recent months:</p> <p><a href="https://support.speedcurve.com/changelog/vitals-recommendations"><strong>Core Web Vitals badges in performance recommendations</strong></a> &ndash; Identify exactly which performance issues you need to fix to improve your Lighthouse and CWV metrics. All performance recommendations are badged, so you can see which Core Web Vitals you're improving when you fix performance issues. (You can find recommendations in your Vitals and Improve dashboards, as well as in your test details.)</p> <p><a href="https://support.speedcurve.com/changelog/synthetic-update-compare-third-parties"><strong>Compare third parties</strong></a> &ndash; When comparing two synthetic tests, see a comparison of common third parties and identify which third parties are unique to each test.</p> <p><a href="https://support.speedcurve.com/docs/bookmark-and-compare-tests"><strong>Bookmarks dashboard for both Synthetic and RUM</strong></a> &ndash; Compare any two synthetic tests or RUM sessions.</p> <p><a href="https://support.speedcurve.com/changelog/rum-update-path-filter"><strong>RUM path filters</strong></a> &ndash; Monitor popular or custom user paths in your data.</p> <h2>Most-watched help videos</h2> <p>We've created two video series &ndash; <a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough"><strong>dashboard tours</strong></a> and <a href="https://support.speedcurve.com/docs/track-core-web-vitals"><strong>how-to videos</strong></a> &ndash; to help you find and fix performance issues on your site.
Here are the current top five most-watched videos:</p> <ul> <li><a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough">Vitals Dashboard tour</a></li> <li><a href="https://support.speedcurve.com/docs/synthetic-first-and-third-party-dashboard-walkthrough">First and Third Party Dashboard tour</a></li> <li><a href="https://support.speedcurve.com/docs/track-core-web-vitals">Track Google's Core Web Vitals (and create performance budgets for them)</a></li> <li><a href="https://support.speedcurve.com/docs/diagnose-cumulative-layout-shift-cls-issues">Diagnose Cumulative Layout Shift (CLS) issues</a></li> <li><a href="https://support.speedcurve.com/docs/investigate-slow-time-to-interactive-tti">Investigate why your Time to Interactive (TTI) is slow</a></li> </ul> <h2>Most-read support articles</h2> <p>We take a lot of pride in maintaining a comprehensive <a href="https://support.speedcurve.com/docs/welcome"><strong>Support Hub</strong></a>. These are the most popular articles users looked up in the past month:</p> <ul> <li><a href="https://support.speedcurve.com/docs/average-median-percentiles">Understand medians, averages, and percentiles</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Get started with Core Web Vitals</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">Get started with real user monitoring (RUM)</a></li> <li><a href="https://support.speedcurve.com/docs/synthetic-test-scripts">Synthetic test scripting</a></li> <li><a href="https://support.speedcurve.com/docs/metrics-glossary">Glossary of performance metrics</a></li> </ul> <h2>Questions? Feedback? Suggestions?</h2> <p>We'd love to hear from you! Leave a comment below or email us at support@speedcurve.com.</p> Wed, 27 Jul 2022 00:00:00 +1200 June 2022 product update: Performance recommendations on Vitals dashboard, RUM path filters & more https://www.speedcurve.com/blog/product-update-june-2022 <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-product-june-2022-newer.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>We've been busy here at SpeedCurve HQ! Here's a roundup of our recent product updates.</p><h2>Performance recommendations on your Vitals dashboard</h2> <p><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals"><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-vitals-audits.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>On <a href="https://www.youtube.com/watch?v=6dOBbvh4ZLA">your Vitals dashboard</a>, you now get performance recommendations that are specific to each of the Vitals you're tracking &ndash; Largest Contentful Paint, First Input Delay, Total Blocking Time, and Cumulative Layout Shift. 
This makes your Vitals dashboard a powerful tool for not only seeing how your metrics perform relative to Google's thresholds, but also diagnosing your biggest pain points and getting prioritized solutions.</p> <h2>Vitals badges on Lighthouse performance audits</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-vitals-badges2.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Elsewhere in SpeedCurve, all the performance recommendations you see in your&nbsp;<a href="https://youtu.be/6dOBbvh4ZLA">Vitals</a>&nbsp;and&nbsp;<a href="https://support.speedcurve.com/docs/aggregated-lighthouse-results">Improve</a>&nbsp;dashboards &ndash; as well as in your synthetic test details &ndash; are now badged so you can see which&nbsp;<a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Web Vitals</a>&nbsp;they affect. Fix those issues and you should see improvements in your Vitals and&nbsp;<a href="https://support.speedcurve.com/docs/lighthouse">Lighthouse</a>&nbsp;scores.</p> <h2>RUM update: Path filters</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-rum-paths.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>RUM update: lux.js v301</h2> <p>New features:</p> <ul> <li>The synthetic onload time for <a href="https://support.speedcurve.com/docs/single-page-applications">SPAs</a> can be marked with <code>LUX.markLoadTime()</code>, allowing <code>LUX.send()</code> to be called later in the page lifecycle.</li> <li>Added the <a href="https://speedcurve-metrics.github.io/lux.js/debug-parser.html">SpeedCurve RUM Debug Parser</a> to help interpret the debug messages.</li> <li><code>LUX.getDebug()</code> now includes events that help to debug some metrics including LCP, CLS, element timing, and long tasks.</li> <li>Source maps are now available for lux.js.</li> </ul> <p>Bug fixes:</p> <ul> <li>Fixed a bug where JavaScript errors were only tracked on the first SPA page view.</li> </ul> <h2>Synthetic update: Compare third parties</h2> <p><a href="https://youtu.be/UrF6HpFC08Q"><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-third-parties2.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>When&nbsp;<a href="https://support.speedcurve.com/docs/bookmark-and-compare-tests">comparing two tests</a>, we now give you a comparison of common&nbsp;<a href="https://support.speedcurve.com/docs/first-third-parties">third parties</a>.
We also identify which third parties are unique to each test. Using this feature, you can quickly identify new and problematic third parties.&nbsp;<a href="https://youtu.be/UrF6HpFC08Q">This short video</a>&nbsp;explains how to diagnose third-party regressions in SpeedCurve.</p> <h2>New in the Support Hub</h2> <ul> <li><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Get started with Core Web Vitals</a></li> <li><a href="https://support.speedcurve.com/docs/seo-and-web-performance">SEO and web performance</a></li> <li><a href="https://support.speedcurve.com/docs/custom-metrics-for-anti-flicker-snippets">Custom metrics for anti-flicker snippets</a></li> <li><a href="https://support.speedcurve.com/docs/cls-scores-in-rum-vs-synthetic">Understand Cumulative Layout Shift (CLS) scores in RUM vs synthetic</a></li> <li><a href="https://support.speedcurve.com/docs/investigate-rum-sessions">Investigate RUM Sessions</a></li> </ul> <h2>Questions? Feedback? Suggestions?</h2> <p>We'd love to hear from you! Send us a note at support@speedcurve.com</p> Mon, 13 Jun 2022 00:00:00 +1200 Sampling RUM: A closer look https://www.speedcurve.com/blog/sampling-rum <p>Being able to set a sample rate in your real user monitoring (RUM) tool allows you to monitor your pages while managing your spending. It's a great option if staying within a budget is important to you. With the ability to sample real user data comes this question...</p> <h2>"What should my RUM sample rate be?"</h2> <p>This frequently asked question doesn't have a simple answer. Refining your sample rate can be hit or miss if you aren&rsquo;t careful. In a <a href="https://www.speedcurve.com/blog/sampling-real-user-monitoring/">previous post</a>, I discussed a few considerations when determining how much RUM data you really need to make informed decisions. If you sample too much, you may be collecting a lot of data you'll never use. On the other hand, if you sample too little, you risk creating variability in your data that is hard to trust.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/rum-sample-rate2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In this post, we are going to do a bit of research and let the data speak for itself. I took a look at the impact of sampling at various levels for three t-shirt-sized companies (Small, Medium, Large) with the hope of providing some guidance for those of you considering sampling your RUM data.</p> <p>In this post, I'll cover:</p> <ul> <li>Methodology</li> <li>Key findings</li> <li>Considerations</li> <li>Recommendations</li> </ul><h2 id="methodology">Methodology</h2> <h3>Traffic size</h3> <p>I tried to keep this research as simple as possible. We see a large variety of sites at SpeedCurve, representing an assortment of countries, industry segments, traffic levels and more. For the purposes of this study, I'll use example sites from three cohorts:</p> <ol> <li>Large: &gt;1M daily page views</li> <li>Medium: 250K-500K daily page views</li> <li>Small: 10K-100K daily page views</li> </ol> <p>It's important to note that the sites I looked at collect 100% of their RUM data.&nbsp;</p> <h3>Time frame</h3> <p>24 hours. Traffic fluctuates based on the hour of the day, the day of the week, and seasonality. I looked at the same date, mid-week, for each of the sites, which represented a consistent pattern of daily traffic.</p> <h3>Metric</h3> <p>This was a little tough.
Not all metrics are created equal, and I try to avoid picking favorites. At the time of this writing, Largest Contentful Paint (LCP) is <a href="https://caniuse.com/mdn-api_largestcontentfulpaint" target="_blank" rel="noopener">not supported by all browsers</a>, so it brings with it a bit of bias. This is true of many of the metrics we collect at SpeedCurve. We'll discuss this and other considerations a bit later. In the end, I settled on <a href="https://caniuse.com/mdn-api_performancetiming_loadeventend" target="_blank" rel="noopener">loadEventEnd</a>&nbsp;because it has widespread support across browser platforms.&nbsp;</p> <h3>Sampling method</h3> <p>At SpeedCurve, we have the ability to sample based on sessions versus randomly sampling page views. We feel it's more important to maintain the integrity of the session than to specify precisely how many page views you want to look at. Because we track and identify user sessions, it made things a lot easier for me to sample the data&nbsp;<em>after</em> the fact.</p> <h3>Interpreting the data</h3> <p>There are a lot of ways to compare the data. I'm not a data scientist, and I wanted to demonstrate the impact of sampling using views of the data that are familiar to those who have at least seen performance data before.</p> <p style="padding-left: 30px;"><strong>Aggregates:</strong> We will look at the percentage change between the 50th, 75th, and 95th percentiles. I considered anything under 5% acceptable.</p> <p style="padding-left: 30px;"><strong>Histograms:</strong> You can learn a lot if you just look at your data. Histograms are great for showing the performance characteristics of an entire population. (<a href="https://support.speedcurve.com/docs/how-to-read-a-histogram">Learn more about understanding and interpreting histograms</a>.)&nbsp;For this experiment, we are comparing the overall shape and distribution of our sampled versus unsampled&nbsp;populations. In some cases, the aggregates may have been under 5%, but the histogram was very sparse and didn't resemble the original distribution. For example, the differences between these two histograms are obvious despite their medians being reasonably close. When looking at the 95th percentile, you observe that the long tail is essentially 'missing' from the sampled data. While somewhat unscientific, I used the eyeball test along with the aggregates to decide if the rate was appropriate.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/bad_histo.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of 100% and 1% histogram samples." /></p> <p style="padding-left: 30px;"><strong>Time series:</strong>&nbsp;Intraday variability is important if you're using the RUM data operationally. A simple time series will be used to illustrate how sampling impacts the 'noise' factor.</p> <h2 id="findings">Key findings</h2> <h3>TL;DR</h3> <p>For the most part, I found that if the sampled population of users was greater than 3,000, the aggregate stats were pretty close to the unsampled population (1-2% difference in the median). However, you should read on to understand some of the trade-offs, which depend on your use case for RUM.
Or, if you'd rather, go ahead and <a href="#recommendations">jump to the results.</a></p> <h3>RUM for reporting</h3> <p>If you're simply using RUM as a reporting tool that can represent your daily performance, you're in luck. You can get away with a relatively small sample of your overall population, depending on your traffic volume.</p> <p>To determine the smallest sample rate for each group, we looked at a combination of the aggregate numbers and a comparison of the histograms. Note the consistency in the 95th percentile illustrated in these comparison charts.&nbsp;</p> <h4>Small (10K-100K daily page views sampled at 50%)</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/histo_compare_small.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Histogram comparison of full data set and 50% of population for a small site." /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h4>Medium (250K-500K daily page views sampled at 10%)</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/histo_compare_medium.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Histogram comparison of full data set and 10% of population for a medium site." /></p> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h4><br />Large (&gt;1M daily page views sampled at 1%)</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/histo_compare_large.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Histogram comparison of full data set and 1% of population for a large site." /></p> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table3.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h3>&nbsp;</h3> <h3>Intraday performance monitoring</h3> <p>You might be one of those sites deploying code multiple times a day. Maybe you're susceptible to variability from things such as third parties, varying traffic patterns, or other unknowns. (Aren't we all?) If this is the case, you may have more operational need for RUM. Your sampling rate can have a real impact on whether or not your data appears noisy or unpredictable.</p> <p>Looking at the recommended rates from the previous use case, the examples below show you how much you'll need to dial those rates up to get a reliable picture of hourly performance, and even more if you are looking at real-time monitoring (by minute).</p> <h4 style="text-align: left;"><strong><span style="color: #1f1f1f;">Hourly monitoring:</span></strong></h4> <h4 style="text-align: left; padding-left: 90px;">Small &ndash; increased from 50% -&gt; 75%</h4> <h4><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/small_timeseries_compare_by-hour.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of time series hourly data for a small site."
<p style="text-align: center;"><em>While increasing the rate helped remove some of the large deviations, the data is naturally much more variable for small-traffic sites.</em></p> <h4 style="text-align: left; padding-left: 90px;">Medium &ndash; increased from 10% -&gt; 25%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/medium_timeseries_compare_by-hour.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of time series hourly data for a medium site." /></p> <p style="text-align: center;"><em>While the peak hours were somewhat consistent at 10%, increasing the rate to 25% removed the larger off-peak deviations.</em></p> <h4 style="text-align: left; padding-left: 90px;">Large &ndash; increased from 1% -&gt; 10%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/large_timeseries_compare.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of time series hourly data for a large site." /></p> <p style="text-align: center;"><em>Increasing the rate to 10% greatly improved consistency for the large-traffic site.</em></p> <h4 style="text-align: left;"><strong>Real-time monitoring:</strong></h4> <h4 style="text-align: left; padding-left: 90px;">Small &ndash; increased from 75% -&gt; 95%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/small_timeseries_compare_by-minute.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series comparison of realtime data for a small site." /></p> <p style="text-align: center;"><em>For some of the larger spikes in the data, increasing the sample to 95% was effective. However, given how variable the data is, it's hard to say whether real-time monitoring is really effective for smaller sites like this one.</em></p> <h4 style="text-align: left; padding-left: 90px;">Medium &ndash; increased from 25% -&gt; 75%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/medium_timeseries_compare_by-minute.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series comparison of realtime data for a medium site." /></p> <p style="text-align: center;"><em>For the medium-traffic site, there was a clear benefit to increasing the rate to 75%.</em></p> <h4 style="text-align: left; padding-left: 90px;">Large &ndash; increased from 10% -&gt; 40%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/large_timeseries_compare_by-minute.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series comparison of realtime data for a large site." /></p> <p style="text-align: center;"><em>For this particular large-traffic site, getting real-time data consistent with the whole population required a much larger increase in the sample rate than anticipated.</em></p> <div style="overflow-x: auto;"> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table4.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h2 id="considerations">Considerations</h2> <h3>Data segmentation</h3> <p>Here comes the kicker. One of the great things about RUM is the ability to slice and dice your data. The distribution of your user population is made up of all types of user experiences. This has a pretty big impact on your sample rate: when you filter, segment, slice, or dice your data, you're effectively reducing the size of your population.</p> <p><strong>When determining how sampling will be affected by the segments you care about, get an idea of the percentage of traffic represented by the segment and factor that percentage into your overall rate</strong> (see the sketch below). Common segments include country, page type, device type, and browser. After applying a lot of segmentation to the experiments above, a good rule of thumb is to increase your sample rate by 50% (or collect 100% of the data for small sites).</p>
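<p>The arithmetic behind that rule of thumb is simple enough to sketch. This is a rough, hypothetical calculation &ndash; the traffic numbers are made up, and the 3,000 floor comes from the findings above &ndash; but it shows how quickly a segment can shrink your effective sample:</p> <pre class="language-javascript"><code>// Rough, illustrative arithmetic for segment-aware sampling.
// Effective sample for a segment = daily page views x segment share x sample rate.
function effectiveSample(dailyPageViews, segmentShare, sampleRate) {
  return dailyPageViews * segmentShare * sampleRate;
}

// Smallest rate that still yields a target sample (e.g. the ~3,000 page view
// floor from the findings above), capped at 100%.
function rateForTarget(dailyPageViews, segmentShare, target) {
  return Math.min(1, target / (dailyPageViews * segmentShare));
}

// Hypothetical medium site (300K page views/day) where a segment is 4% of traffic.
console.log(effectiveSample(300000, 0.04, 0.10)); // 1200 - too small at a 10% rate
console.log(rateForTarget(300000, 0.04, 3000));   // 0.25 - sample at ~25% instead</code></pre>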
<h3>Metrics</h3> <p>As mentioned earlier, there are some metrics (okay, many metrics) that aren't supported across browsers. Just as you would increase your sample rate for the segments, <strong>you should consider increasing the sample rate for metrics such as FCP, LCP and Total Blocking Time, which don't have broad browser support</strong>. This is also true of some network-related metrics that don't occur on every page load (DNS, Connect, SSL, Redirect).</p> <h3>Increasing time windows</h3> <p>It's sometimes said that you need to capture 100% of your data if you're comparing RUM data across experiments, or capturing conversion data in order to understand the business impact of performance. This is not always the case. As an alternative, <strong>you can look at a much larger time window with a LOT more sampled data</strong>. This also works for sites with low traffic numbers. Simply expand your time window until you have a healthy distribution.<br /><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/largerwindows-histo.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparing 3 histograms to show the impact of widening your time window." /></p> <h2 id="recommendations">Recommendations</h2> <p>The intent of this post is to provide some direction around sampling RUM data. The recommended levels are not intended to be precise, as there are too many factors that could influence things one way or the other. Use this table as a guide, in addition to the knowledge you have about your users:</p> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table5.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h2>Learn more about RUM sampling</h2> <p>As you may have guessed, SpeedCurve supports data sampling in RUM. <a href="https://support.speedcurve.com/docs/sample-rate" target="_blank" rel="noopener">This article goes into detail about how our RUM sampling works</a> and explains the different ways you can implement it. If you have any questions or feedback, we'd love to hear from you. Leave a comment below or send us a note at support@speedcurve.com.</p> </div> Wed, 01 Jun 2022 00:00:00 +1200 Understanding the performance impact of anti-flicker snippets https://www.speedcurve.com/blog/web-performance-anti-flicker-snippets <p>Experimentation tools that use asynchronous scripts &ndash; such as Google Optimize, Adobe Target, and Visual Web Optimizer &ndash; recommend using an anti-flicker snippet to hide the page until they've finished executing.
But this practice comes with some performance measurement pitfalls:</p> <ul> <li>Hiding the contents of the page can have a dramatic effect on the Web Vitals that measure visual experience, such as First Contentful Paint (FCP) and Largest Contentful Paint (LCP)</li> <li>Anti-flicker snippets can also affect Cumulative Layout Shift (CLS) and the synthetic equivalent of First Input Delay (FID), Total Blocking Time (TBT).</li> </ul> <p>In this post we'll look at how anti-flicker snippets work, their impact on Web Vitals, and how to measure the delay they add to visitors' experience.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/436/anti-flicker-vc-bounce-rate.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p><h2>Hiding the page contents</h2> <p>Normally, web pages render progressively. As browsers receive content, they can lay out and render page elements, displaying the content bit by bit.</p> <p>Anti-flicker snippets hide the contents of a page until the experimentation tool (e.g. Google Optimize) has finished applying its experiments. The hypothesis is that if a visitor sees the page changing, it may influence how they behave &ndash; either because they had an unpleasant experience or simply because they became aware that they're in an experiment.<br /><br />But as you can see in these filmstrips of Wiggle, a UK cycling retailer, hiding the page can have a dramatic impact on a visitor's experience:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/436/wiggle.001.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Filmstrip showing how Google Optimize's Anti-Flicker snippet delays content from being visible" /></p> <p style="text-align: center;"><em>Comparison of a site loading with (top) and without (bottom) the anti-flicker snippet enabled</em></p> <p><strong>The test in the top row has the anti-flicker snippet in place.</strong> The content is displayed 'all at once' when the anti-flicker snippet reveals the page.</p> <p><strong>The test in the bottom row has the anti-flicker snippet removed.</strong> The page loads progressively, with the menu and logo appearing about 1.5s before the hero image. The anti-flicker snippet delayed showing the content for two seconds.</p> <p>These tests were run over a fast network connection, so First Contentful Paint and Largest Contentful Paint happened simultaneously. On slower connections, however, there is a gap between FCP and LCP &ndash; and some progressive rendering &ndash; but FCP still won't start until the anti-flicker snippet finishes.<br /><br />(I chose Wiggle semi-randomly after searching the HTTP Archive for sites that use Google Optimize.)</p> <h2>Effects on other Web Vitals</h2> <p>Anti-flicker snippets can affect other Web Vitals, too:</p> <ul> <li><strong>Decrease in Total Blocking Time</strong> &ndash; TBT is a synthetic monitoring metric that measures how long JavaScript and other tasks prevent the page from handling user interaction. It starts measuring at First Contentful Paint (FCP) and stops at Time to Interactive (TTI). Because anti-flicker snippets delay FCP, the window for measuring Long Tasks gets smaller, which means you may see a decrease in TBT.</li> <li><strong>Lower Cumulative Layout Shift score</strong> &ndash; CLS measures how much content moves around on the page.
If some of this movement happens while the page is hidden, then the CLS score will be lower.</li> </ul> <p>To summarize: On one hand, anti-flicker snippets make metrics such as FCP (and possibly LCP) worse. On the other hand, they can appear to improve TBT and CLS.</p> <p>My instinct is that showing content to the visitor sooner may be a higher priority than the incidental TBT or CLS boost. But rather than trust my instincts, we should measure the impact of hiding the page. To do that, we need to understand a little about how the snippets work.</p> <h2>How anti-flicker snippets work</h2> <p>Anti-flicker snippets typically add a style with <code>opacity: 0</code> to the elements to be hidden. In Google Optimize's case, the default is the whole document.</p> <p>The (un-minified) Optimize snippet below declares the <code>.async-hide</code> class in a style block, and then applies it to the document using a script. It also defines a function to remove the class and sets a timer to call this function after four seconds.</p> <p>The <code>.async-hide</code> class will either be removed when Google Optimize finishes applying its variants or when the timeout value is reached. In the example below, FCP can be delayed by up to four seconds (the default in the snippet example).</p> <pre class="language-markup"><code>&lt;!-- anti-flicker snippet for Google Optimize (recommended) --&gt;
&lt;style&gt;
  .async-hide { opacity: 0 !important }
&lt;/style&gt;
&lt;script&gt;
  (function(a, s, y, n, c, h, I, d, e) {
    s.className += ' ' + y;
    h.start = 1 * new Date;
    h.end = I = function() {
      s.className = s.className.replace(RegExp(' ?' + y), '');
    };
    (a[n] = a[n] || []).hide = h;
    setTimeout(function() {
      I();
      h.end = null;
    }, c);
    h.timeout = c;
  })(window, document.documentElement, 'async-hide', 'dataLayer', 4000, { 'GTM-XXXXXX': true });
&lt;/script&gt;</code></pre> <p>If you'd like to understand the snippet in more detail, there's an annotated version in this Optimize support article: <a href="https://developers.google.com/optimize">Using the Optimize anti-flicker snippet | Google Developers</a></p> <p>As a fallback, four seconds is a long time. Based on the Chrome UX Report thresholds, a page needs to display the Largest Contentful Paint element within 2.5s for it to be considered good.</p> <p>Not every visitor will hit that timeout. For some visitors, the experiments may complete soon enough to avoid it.</p> <p>How often the snippet reaches the timeout will depend on factors like:</p> <ul> <li>the number of experiments,</li> <li>how long the experiments take to execute,</li> <li>what device the visitor is using, and</li> <li>the speed of the network the device is connected to.</li> </ul> <p>If we measure how long the page is hidden, we can start to understand how Optimize affects our visitors' experiences, the range of delays it adds, and how it influences visitor behaviour.</p> <h2>Measuring how long the page is hidden</h2> <p>Unfortunately, Google Optimize &ndash; like most third-party tags &ndash; doesn't expose any timing information for its key milestones (page hidden, page shown), but there are still ways we can measure them.</p>
<h3>1. Update the Optimize anti-flicker snippet to include performance marks and measures</h3> <p>A start mark is recorded just before the hide class is added; when the class is removed, an end mark and a duration measure are recorded.</p> <pre class="language-markup"><code>&lt;!-- anti-flicker snippet for Google Optimize (recommended) --&gt;
&lt;style&gt;
  .async-hide { opacity: 0 !important }
&lt;/style&gt;
&lt;script&gt;
  (function(a, s, y, n, c, h, I, d, e) {
    performance.mark('anti-flicker-start');
    s.className += ' ' + y;
    h.start = 1 * new Date;
    h.end = I = function() {
      s.className = s.className.replace(RegExp(' ?' + y), '');
      performance.mark('anti-flicker-end');
      performance.measure('anti-flicker-duration', 'anti-flicker-start', 'anti-flicker-end');
    };
    (a[n] = a[n] || []).hide = h;
    setTimeout(function() {
      I();
      h.end = null;
    }, c);
    h.timeout = c;
  })(window, document.documentElement, 'async-hide', 'dataLayer', 4000, { 'GTM-XXXXXX': true });
&lt;/script&gt;</code></pre> <p>Editing the predefined snippet might be a bit fragile, as in the future someone might not notice it's been customised and overwrite it with the default version.</p> <h3>2. Create a second snippet that uses a MutationObserver to detect when the <code>async-hide</code> class is removed from the document</h3> <p>This is probably more sustainable, as it's less prone to being overwritten.</p> <pre class="language-javascript"><code>(function (node, selector, name) {
  performance.mark(name + '-start');
  const callback = function (mutationsList, observer) {
    // Watch for the hide class being removed from the element's class attribute
    for (const mutation of mutationsList) {
      if (mutation.attributeName === 'class' &amp;&amp;
          !mutation.target.classList.contains(selector) &amp;&amp;
          mutation.oldValue.includes(selector)) {
        performance.mark(name + '-end');
        performance.measure(name + '-duration', name + '-start', name + '-end');
        observer.disconnect();
        break;
      }
    }
  };
  const observer = new MutationObserver(callback);
  observer.observe(node, { attributes: true, attributeOldValue: true });
})(document.documentElement, 'async-hide', 'anti-flicker');</code></pre> <p>This measurement snippet should be placed immediately after Google Optimize's anti-flicker snippet. It creates a mark when it runs, and then another when the class is removed from the document. It also creates a measure to record how long the page was hidden.</p> <p>The snippet takes three parameters:</p> <ol> <li>the element that's being hidden,</li> <li>the name of the class used to hide it, and</li> <li>a prefix for the name of the marks and measures.</li> </ol> <p>The first two must match their equivalents in the anti-flicker snippet.<br /><br />A similar measuring approach can be used for Adobe Target and Visual Web Optimizer. There are example snippets for these in our support docs: <a href="https://support.speedcurve.com/docs/custom-metrics-for-anti-flicker-snippets">Custom metrics for anti-flicker snippets</a></p>
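<p>Whichever approach you use, it's worth sanity-checking the marks and measures in the browser console before wiring them into your RUM tool. Here's a minimal sketch, assuming the 'anti-flicker' naming used in the examples above:</p> <pre class="language-javascript"><code>// Read back the anti-flicker measure created by either snippet above.
// Assumes the names used in the examples: 'anti-flicker-start/-end/-duration'.
var entries = performance.getEntriesByName('anti-flicker-duration', 'measure');
if (entries.length) {
  console.log('Page was hidden for ' + Math.round(entries[0].duration) + ' ms');
} else {
  // The measure may not exist yet (the hide class is still applied), so watch for it.
  new PerformanceObserver(function (list, observer) {
    list.getEntries().forEach(function (entry) {
      if (entry.name === 'anti-flicker-duration') {
        console.log('Page was hidden for ' + Math.round(entry.duration) + ' ms');
        observer.disconnect();
      }
    });
  }).observe({ entryTypes: ['measure'] });
}</code></pre>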
<h2>Using the data</h2> <p>Once the snippet is installed on the page and the <a href="https://support.speedcurve.com/docs/custom-metrics">User Timing metrics configured in SpeedCurve</a>, they can be included in dashboard charts. For example, you can <a href="https://support.speedcurve.com/docs/create-correlation-charts">create a correlation chart</a> in RUM to plot how bounce rate is affected by the length of time the page is hidden.</p> <p><a href="https://support.speedcurve.com/docs/create-correlation-charts"><img class="blog-img" src="https://blog-img.speedcurve.com/img/436/anti-flicker-duration-vs-bounce-rate.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing bounce rate increasing as the page is hidden for longer" /></a></p> <p style="text-align: center;"><em>Correlation chart showing the relationship between bounce rate and how long the page is hidden</em></p> <p>In this example, the chart shows the bounce rate increasing when the page is hidden for longer durations. It also shows that some visitors are waiting four seconds before they see any content!</p> <p>Measuring how long the page is hidden opens up opportunities to get a better understanding of how this wait time affects our visitors' experience. We can use the <a href="https://support.speedcurve.com/docs/investigate-rum-sessions">RUM Sessions dashboard</a> to identify and explore which visitors are being affected by slow Optimize experiments. We can experiment with reducing the timeout so that visitors won't see a blank screen for as long. Or, if we recorded which variant the visitor was seeing via <a href="https://support.speedcurve.com/docs/customer-data">RUM's custom data API</a>, we could see which experiments took the longest to execute.<br /><br />(While testing the snippet, I discovered that a common third-party reviews service was corrupting the duration measure, so in some cases you may need to switch to using the anti-flicker end mark instead.)</p> <h2>Summary</h2> <p>Managing the performance of third-party tags is a key aspect of delivering great web experiences. Unfortunately, the performance of third-party tags can be pretty opaque. In an ideal world, tag vendors would use marks and measures to help us understand how their tags behave in the field, but until they do, browser APIs such as MutationObserver and User Timing can help us to measure some aspects of them.</p> <p>If you've got other third-party tags that you'd like help with measuring, or if you try out one of the snippets for measuring anti-flicker snippets, we'd love to <a href="mailto:support@speedcurve.com">hear from you</a>.</p> Thu, 28 Apr 2022 00:00:00 +1200 Industry page speed benchmarks (March 2022) https://www.speedcurve.com/blog/page-speed-benchmarks-march-2022 <p><a href="https://app.speedcurve.com/benchmarks/">Page Speed Benchmarks</a> is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries &ndash; from retail to media &ndash; over the past year. This dashboard is publicly available (meaning you don't need a SpeedCurve account to explore it) and is a treasure trove of meaningful data that you can use for your own research.</p> <p>The dashboard allows you to easily filter by region, industry, mobile/desktop, fast/slow, and key web performance metrics, including Google's Core Web Vitals.
(Scroll down to the bottom of this post for more testing details.)</p> <p>At the time of writing this post, these were the home pages with the fastest Start Render times in key industries:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/industry-benchmarks-top-sites-march2022.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>As you can see, I've included Largest Contentful Paint alongside Start Render in this chart, for reasons I explain below.</p><h2>Key metrics</h2> <p>Numbers are good, but visuals are even better. Below you can see the fastest sites &ndash; ranked by Start Render time &ndash; in each category, along with screenshots taken from their rendering timelines, which show you what the viewer sees at key render moments:</p> <ul> <li><a href="https://support.speedcurve.com/docs/metrics-glossary#start-render-synthetic-and-rum"><strong>Start Render</strong></a> &ndash; The time from the start of the initial navigation until the first non-white content is painted to the browser display.</li> <li><strong><a href="https://support.speedcurve.com/docs/metrics-glossary#largest-contentful-paint-synthetic-and-rum">Largest Contentful Paint</a></strong> &ndash; When the largest element &ndash; usually an image or video &ndash; in the viewport is rendered. LCP is one of Google's <a href="https://www.speedcurve.com/blog/web-vitals-user-experience/">Core Web Vitals</a>, so it should be on your radar, especially if you care about SEO.</li> <li><a href="https://support.speedcurve.com/docs/metrics-glossary#hero-rendering-times-synthetic"><strong>Last Painted Hero</strong></a> &ndash; When the last piece of critical content (largest image, largest background image and/or first H1 tag) is painted in the browser.</li> <li><a href="https://support.speedcurve.com/docs/metrics-glossary#visually-complete-synthetic"><strong>Visually Complete</strong></a> &ndash; The time at which all the content in the viewport has finished rendering and nothing changes in the viewport after that point as the page continues loading.</li> </ul> <p>These visuals are a great tool for validating the best metrics to focus on for your pages.
Looking at the screenshots below, you can really see the disparities between metrics, especially for pages served to mobile.</p> <h2><span style="font-size: 35px; color: #000000;">Fastest Media Sites</span></h2> <h3>UK &amp; EU</h3> <p><strong>The Guardian</strong> (<a href="https://app.speedcurve.com/benchmark/media-eu/test/220331_K1_5e8901097f34e8917d0a816eb8200720/?share=freljsuj6913s9s5an29pktz92alec">view desktop test results</a>)&nbsp;</p> <p><a href="https://app.speedcurve.com/benchmark/media-eu/test/220331_K1_5e8901097f34e8917d0a816eb8200720/?share=freljsuj6913s9s5an29pktz92alec"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/media-eu-desktop-guardian.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>El Pais</strong> (<a href="https://app.speedcurve.com/benchmark/media-eu/test/220331_SD_d05b2d6a5a95a36b020af0d025e49151/?share=freljsuj6913s9s5an29pktz92alec">view mobile test results</a>)&nbsp;</p> <p><a href="https://app.speedcurve.com/benchmark/media-eu/test/220331_SD_d05b2d6a5a95a36b020af0d025e49151/?share=freljsuj6913s9s5an29pktz92alec"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/media-eu-mobile-el-pais.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h3>United States</h3> <p><strong>LA Times</strong> (<a href="https://app.speedcurve.com/benchmark/media-us/test/220330_7J_14ea715c67cf8fa06caee4349e8dc934/?share=5o1gzxgw7797gujjwwgn9i1kffs4g1">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/media-us/test/220330_7J_14ea715c67cf8fa06caee4349e8dc934/?share=5o1gzxgw7797gujjwwgn9i1kffs4g1"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/media-us-desktop-latimes.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>LA Times</strong> (<a href="https://app.speedcurve.com/benchmark/media-us/test/220331_XH_6910e72661c1f6aa7faec6e84fd05cc0/?share=5o1gzxgw7797gujjwwgn9i1kffs4g1">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/media-us/test/220331_XH_6910e72661c1f6aa7faec6e84fd05cc0/?share=5o1gzxgw7797gujjwwgn9i1kffs4g1"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/media-us-mobile-latimes.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h3>Japan&nbsp;</h3> <p><strong>Jiji</strong> (<a href="https://app.speedcurve.com/benchmark/media-jp/test/220331_73_07c3c7fd88cf65dd31fa7d3dee01b33e/?share=oo2me887vtotlylh8bjtx9pz259zt7">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/media-jp/test/220331_73_07c3c7fd88cf65dd31fa7d3dee01b33e/?share=oo2me887vtotlylh8bjtx9pz259zt7"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/media-jp-desktop-jiji.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Nikkei</strong>&nbsp;(<a href="https://app.speedcurve.com/benchmark/media-jp/test/220331_60_89dab7755b00ada5e33ff1238e809c33/?share=oo2me887vtotlylh8bjtx9pz259zt7">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/media-jp/test/220331_60_89dab7755b00ada5e33ff1238e809c33/?share=oo2me887vtotlylh8bjtx9pz259zt7"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/media-jp-mobile-nikkei.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Fastest Retail Sites</h2> <h3>UK &amp; EU</h3> <p><strong>Carrefour</strong> (<a 
href="https://app.speedcurve.com/benchmark/retail-eu/test/220331_M7_e2c7a3106e36482fa6dc0b0b9aaae374/?share=tib9jvxn423ft62vmp5x4ntc16krkb">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/retail-eu/test/220331_M7_e2c7a3106e36482fa6dc0b0b9aaae374/?share=tib9jvxn423ft62vmp5x4ntc16krkb"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/retail-eu-desktop-carrefour.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Edeka</strong>&nbsp;(<a href="https://app.speedcurve.com/benchmark/retail-eu/test/220331_63_c44fa6aefc7a5bfb3dfec3196e2a1fe5/?share=tib9jvxn423ft62vmp5x4ntc16krkb">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/retail-eu/test/220331_63_c44fa6aefc7a5bfb3dfec3196e2a1fe5/?share=tib9jvxn423ft62vmp5x4ntc16krkb"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/retail-eu-mobile-edeka.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h3>United States</h3> <p><strong>Amazon</strong> (<a href="https://app.speedcurve.com/benchmark/retail-us/test/220331_X5_d8c87f369d2a841f704fb8b6cf5b3ca7/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/retail-us/test/220331_X5_d8c87f369d2a841f704fb8b6cf5b3ca7/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/retail-us-desktop-amazon.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Wish</strong>&nbsp;(<a href="https://app.speedcurve.com/benchmark/retail-us/test/220331_4N_b384f1781bc7986c7715fc0e277377fb/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/retail-us/test/220331_4N_b384f1781bc7986c7715fc0e277377fb/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/retail-us-mobile-wish.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h3>Japan</h3> <p><strong>Rakuten</strong> (<a href="https://app.speedcurve.com/benchmark/retail-jp/test/220331_2T_a3655522fb0df5c109622b465b1fbb9c/?share=u1lwaz44pc9tv5m44vpbqp41w1ldfy">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/retail-jp/test/220331_2T_a3655522fb0df5c109622b465b1fbb9c/?share=u1lwaz44pc9tv5m44vpbqp41w1ldfy"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/retail-jp-desktop-rakuten.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Yodobashi</strong>&nbsp;(<a href="https://app.speedcurve.com/benchmark/retail-jp/test/220331_TW_6331ab3eee98f4888ce770970a2eaf66/?share=u1lwaz44pc9tv5m44vpbqp41w1ldfy">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/retail-jp/test/220331_TW_6331ab3eee98f4888ce770970a2eaf66/?share=u1lwaz44pc9tv5m44vpbqp41w1ldfy"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/retail-jp-mobile-yodobashi.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Fastest Travel Sites</h2> <h3>UK &amp; EU</h3> <p><strong>Airbnb</strong> (<a href="https://app.speedcurve.com/benchmark/travel-eu/test/220331_KN_760b1e4a0ba44c7a796d8bae4fbb9fcf/?share=i256t8bjv28n4i2ui7ffbvwkulnrr2">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/travel-eu/test/220331_KN_760b1e4a0ba44c7a796d8bae4fbb9fcf/?share=i256t8bjv28n4i2ui7ffbvwkulnrr2"><img class="blog-img" 
src="https://blog-img.speedcurve.com/img/435/travel-eu-desktop-airbnb.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Skyscanner</strong>&nbsp;(<a href="https://app.speedcurve.com/benchmark/travel-eu/test/220331_2P_90e975b1abea387ff9d6e4ab438e67fc/?share=i256t8bjv28n4i2ui7ffbvwkulnrr2">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/travel-eu/test/220331_2P_90e975b1abea387ff9d6e4ab438e67fc/?share=i256t8bjv28n4i2ui7ffbvwkulnrr2"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/travel-eu-mobile-skyscanner.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h3>United States</h3> <p><strong>HomeAway</strong> (<a href="https://app.speedcurve.com/benchmark/travel-us/test/220331_XY_70b091a425ecb3683215ad1727f1ddf6/?share=lzc3770o3nzhn5zwl6j5xvqeak8np1">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/travel-us/test/220331_XY_70b091a425ecb3683215ad1727f1ddf6/?share=lzc3770o3nzhn5zwl6j5xvqeak8np1"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/travel-us-desktop-homeaway.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Lonely Planet</strong>&nbsp;(<a href="https://app.speedcurve.com/benchmark/travel-us/test/220331_TH_e01e26d48bedc21a8f2cd2d41cb55ea1/?share=lzc3770o3nzhn5zwl6j5xvqeak8np1">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/travel-us/test/220331_TH_e01e26d48bedc21a8f2cd2d41cb55ea1/?share=lzc3770o3nzhn5zwl6j5xvqeak8np1"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/travel-us-mobile-lonelyplanet.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h3>Japan</h3> <p><strong>Rakuten Travel</strong> (<a href="https://app.speedcurve.com/benchmark/travel-jp/test/220331_P4_7ea2cc9bee57c81882c00df5a51ba5c1/?share=9ub9f5s75dvgwcquk3me8qxj769ldm">view desktop test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/travel-jp/test/220331_P4_7ea2cc9bee57c81882c00df5a51ba5c1/?share=9ub9f5s75dvgwcquk3me8qxj769ldm"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/travel-jp-desktop-rakuten.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Expedia</strong>&nbsp;(<a href="https://app.speedcurve.com/benchmark/travel-jp/test/220331_MB_0acb3a49090c47b3e43ba9a607d08000/?share=9ub9f5s75dvgwcquk3me8qxj769ldm">view mobile test results</a>)</p> <p><a href="https://app.speedcurve.com/benchmark/travel-jp/test/220331_MB_0acb3a49090c47b3e43ba9a607d08000/?share=9ub9f5s75dvgwcquk3me8qxj769ldm"><img class="blog-img" src="https://blog-img.speedcurve.com/img/435/travel-jp-mobile-expedia.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Takeaways</h2> <h3>1. Start Render is an important metric for measuring user-perceived responsiveness.</h3> <p>Visual feedback that something is happening on the page is important to users. This is why it's frequently proven to be a good metric to use when creating correlation charts that map performance to user engagement and business metrics, like bounce rate and conversion rate.&nbsp;</p> <h3>2. If you're not tracking Largest Contentful Paint, you should be... especially for mobile.</h3> <p>To me, the biggest takeaway here is noting the disparity between Start Render and Largest Contentful Paint. 
Some disparity is to be expected, as Start Render measures when the first pixels start to appear on the page, and LCP measures when the largest visual element (image or video) finishes rendering.</p> <p>In most cases, the difference between Start Render and Largest Contentful Paint isn't huge for pages served to desktop. Mobile tells a different story, even among the fastest pages we tested. For example:</p> <ul> <li>The LA Times home page had a Start Render time of 2.5 seconds, but its LCP was 37.42 seconds.</li> <li>Lonely Planet's Start Render was 3.1 seconds, while its LCP was 37.22 seconds.</li> <li>Expedia had a Start Render time of 5.8 seconds, while its LCP was 35.4 seconds.</li> </ul> <p>While those were the most glaring examples, most of the other sites also had significant differences. There's also this gotcha...</p> <h3>3. LCP doesn't always correlate to meaningful content in the viewport... specifically for mobile.</h3> <p>If you're serving a lot of mobile users, you may want to validate that LCP is a valid metric to track &ndash; and troubleshoot any issues that are preventing it from being measured correctly.</p> <h3><span style="color: #000000;">4. Visually Complete is not a meaningful metric for most sites.</span></h3> <p>Visually Complete was a helpful metric for its time, but it came with its fair share of gotchas &ndash; such as the fact that it sometimes didn't fire until long after the page had fully rendered. You can see examples of this above. While we still track Visually Complete in SpeedCurve for folks who are still using it, <a href="https://www.speedcurve.com/blog/performance-budgets-guide/">we consider it unofficially deprecated</a> in favour of more precise metrics.</p> <h2>Testing details</h2> <p>Here's how we set up testing for the Page Speed Benchmarks:</p> <ul> <li>Home pages of 10 leading sites in the US, EU, and Japan, in each of the following industries: Auto, Finance, Media, Retail, Tech, and Travel.</li> <li>Tested on our private agents in Frankfurt (EU), US East Coast (US), and Japan.</li> <li>Tested once per day on a Chrome desktop browser with a fast connection (25Mbps/10Mbps 10ms RTT).</li> <li>Tested once per day on an emulated Nexus 5X mobile at 3G Slow (400Kbps/400Kbps 400ms RTT).</li> <li>Three tests per test time, with the medians used in the charts.</li> </ul> <h2>Create your own custom benchmark dashboard</h2> <p>If you're not already using SpeedCurve, I encourage you to <a href="https://app.speedcurve.com/setup/trial/"><strong>sign up for a free trial</strong></a> and <a href="https://support.speedcurve.com/docs/competitive-benchmarking"><strong>create your own benchmarks dashboard</strong></a> where you can see how your site compares to your competitors.</p> <p>I also encourage you to check out the entire <a href="https://app.speedcurve.com/benchmarks/"><strong>Industry Page Speed Benchmarks dashboard</strong></a> (no login required). You can drill down into the historical test data for every site. If you spot something interesting, let me know!</p> Mon, 04 Apr 2022 00:00:00 +1200 Ten years of page bloat: What have we learned? https://www.speedcurve.com/blog/ten-years-page-bloat <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/desktop-2012-2022.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>I've been writing about page size and complexity for years.
If you've been working in the performance space for a while and you hear me start to talk about page growth, I'd forgive you if you started running away. ;)</p> <p>But pages keep getting bigger and more complex year over year &ndash; and this increasing size and complexity is not fully mitigated by faster devices and networks, or by our hard-working browsers. Clearly we need to keep talking about it. We need to understand how ever-growing pages work against us. And we need to have strategies in place to understand and manage our pages.</p> <p>In this post, we'll cover:</p> <ul> <li>How big are pages today versus ten years ago?</li> <li>How does page bloat hurt your business?</li> <li>How does page bloat affect other metrics, such as Google's Core Web Vitals?</li> <li>Is it possible to have large pages that deliver a good user experience?</li> <li>What can we do to manage our pages and fight regression?</li> </ul><h2>What do we mean when we talk about page size?</h2> <p style="font-size: 16px;">When we talk about page size, we're referring to overall page weight and complexity. This includes the following (a quick way to tally these for any page you have open is sketched after the list):</p> <ul style="font-size: 16px;"> <li><strong>Size</strong> &ndash; Total page weight in bytes. Size matters especially to mobile users who have limited and/or metered data.</li> <li><strong>Resources</strong> &ndash; Total number of resources on the page (listed below). The more resources, the greater the complexity and the increased likelihood of rendering delays and blockages.</li> <li><strong>HTML</strong> &ndash; Typically the smallest resource on the page, HTML's performance risk is usually negligible. Having said that, I recently did some digging into a page where the total HTML size jumped dramatically because of a bunch of inline JavaScript, which led to rendering delays. Keeping an eye on HTML size is still a good idea.</li> <li><strong>Images</strong> &ndash; Often the greatest contributor to page bloat. Looking at the 90th percentile of the distribution of page weight, images account for a whopping 5.7 MB of a roughly 8.2 MB page. In other words, images comprise almost 75% of the total page weight. And if that weren't enough, the number of images on a page has been linked to lower conversion rates on retail sites. (More on that later.)</li> <li><strong>JavaScript</strong> &ndash; A page can have a relatively low JS weight but still suffer from JS-inflicted performance problems. Even a single 100 KB third-party script can wreak havoc with your page. The more scripts on your page, the greater the risk. It's not enough to focus solely on blocking JavaScript. It's possible for your pages to contain zero blocking resources and still have less-than-optimal performance because of how your JavaScript executes. That's why it's so important to understand CPU usage on your pages: JavaScript consumes more CPU than all other browser activities combined. While JavaScript blocks the CPU, the browser can't respond to user input. This creates what's commonly called &ldquo;jank&rdquo; &ndash; that annoying feeling of jittery, unstable page rendering.</li> <li><strong>CSS</strong> &ndash; Like JavaScript, CSS doesn't have to be bulky to cause problems. Poorly executed stylesheets can create a host of performance problems, ranging from stylesheets taking too long to download and parse, to improperly placed stylesheets that block the rest of the page from rendering. And, similar to JavaScript, more CSS files means more potential trouble.</li> </ul>
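<p>Here's that tally. This is a quick, back-of-the-napkin sketch using the standard Resource Timing API &ndash; note that <code>transferSize</code> is reported as zero for cached responses and for cross-origin resources that don't send a Timing-Allow-Origin header, so treat the output as a floor rather than a full audit:</p> <pre class="language-javascript"><code>// Tally request counts and transferred bytes by resource type for the current page.
// transferSize is 0 for cached and (most) cross-origin resources, so this
// understates true page weight.
var totals = {};
performance.getEntriesByType('resource').forEach(function (resource) {
  var type = resource.initiatorType || 'other'; // e.g. img, script, link, css
  totals[type] = totals[type] || { requests: 0, kb: 0 };
  totals[type].requests += 1;
  totals[type].kb += resource.transferSize / 1024;
});
console.table(totals);</code></pre>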
<h2><span style="font-size: 35px; color: #000000;">How does page bloat hurt your business?</span></h2> <p>A <a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/mobile-page-speed-load-time/">Google machine-learning study</a> I participated in a few years ago found that the total number of page elements was the single greatest predictor of conversions. The number of images on the page was the second greatest predictor.</p> <p><a href="https://www.slideshare.net/tammyeverts/using-machine-learning-to-determine-drivers-of-bounce-and-conversion-66319405"><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/machine-learning-images.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>We also found that the more scripts there were in a series of pages in a session, the less likely that session was to convert.</p> <p><a href="https://www.slideshare.net/tammyeverts/using-machine-learning-to-determine-drivers-of-bounce-and-conversion-66319405"><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/machine-learning-scripts.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>Image size is another issue, as excessive image weight hurts your SEO ranking in Google Image Search. Given that Image Search comprises upwards of 26% of Google searches, this is something you should care about. (You can dive deeper into image optimization and SEO tips in <a href="https://developers.google.com/search/docs/advanced/guidelines/google-images">this article</a> in Google Search Central.)</p> <h2>How big are pages today versus ten years ago?</h2> <p>Before we get into these numbers, some background and caveats:</p> <ul> <li><strong>These numbers all come from the <a href="https://httparchive.org/reports/page-weight">HTTP Archive</a>.</strong> It's important to mention that there have been changes to how the Archive collects data over the years. Having said that, looking at data over the past ten years, it's safe to say that pages are definitely trending bigger.</li> <li><strong>I intentionally left out the numbers for video, because they seemed inconsistent.</strong> For the purposes of this post, they're not high priority, so I'm fine with setting them aside for now.</li> <li><strong>These numbers should not be taken as a benchmark for your own site.</strong> You haven't necessarily achieved anything great if your pages are smaller than this, nor have you failed if your pages are bigger.</li> <li><strong>Not all pages are getting bigger.</strong> Many have gotten smaller over the years. Maybe yours is one of them!</li> </ul> <h3>1. The median desktop page is 3X bigger now than ten years ago</h3> <p>As someone who's been watching these numbers for more than ten years, this growth doesn't come as a surprise. The median size of 2159 KB is about what I expected to see, given how many pages I inspect in any given week that are much larger than this.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/desktop-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>2. Images and JavaScript comprise two-thirds of total page weight</h3> <p>Predictably, much of this page growth is driven by images and JavaScript.
Images account for roughly 945 KB (44%) of median desktop page weight, and JS accounts for about 500 KB (23%).&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/new-desktop-page-weight-breakdown-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>3. The median mobile page is almost 7X bigger than ten years ago</h3> <p>The pages being served to mobile users have experienced massive growth. At 1984 KB, the median mobile page is only somewhat smaller than the median desktop page (2159 KB). While it is possible to have large, robust pages that feel fast, you should care about page bloat in terms of how it affects mobile users, especially mobile-only users who are on older low-CPU devices, or who are dealing with bandwidth constraints or data limits.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/mobile-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>4. Images and JavaScript comprise the bulk of mobile page weight</h3> <p>We're serving about 876 KB of images and 453 KB of scripts to mobile &ndash; in other words 67% of total page weight. JavaScript is a massive CPU hog, so this is concerning, especially if your users are on older devices with less processing power. (If you're counting on your users having newer devices, you might want to rethink that. In recent years, <a href="https://www.cnet.com/tech/mobile/getting-a-new-iphone-every-2-years-is-making-less-sense-than-ever/">the smartphone replacement cycle has gotten longer</a> and it looks like this trend is here to stay.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/new-mobile-page-weight-breakdown-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>5. Pages are huge at the 90th percentile, and the bulk is image weight</h3> <p>Focusing on medians is not enough. You should also care about your cohort of users at the 90th percentile. Ten percent of your users may not sound like much, but if your site gets 10 million visitors a month, that means a million of those people are having a really poor experience.</p> <p>The 90th percentile page served to desktop is 8271 KB and contains 177 resources. Almost 75% of page weight is consumed by images, which add up to more than 5 MB.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/desktop-90th-percentile-kb.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The 90th percentile page served to mobile is only slightly smaller, at 7574 KB and 168 resources.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/mobile-90th-percentile-kb.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>6. The number of resources served to desktop has stayed flat over the years</h3> <p>You can see this relative flatness at both the median and 90th percentile. This actually came as a bit of a surprise. I had assumed that there'd be more significant growth, especially given the growth in total page size. More on that in a bit.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/new-desktop-resources.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>7. But more resources are being served to mobile</h3> <p>No surprises here. 
We've moved considerably beyond the pared-down pages we used to serve to mobile users a decade ago.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/new-mobile-resources.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>8. Image requests are way down, while image size is way up</h3> <p>We're serving fewer images, but the images we are serving are high-resolution and/or unoptimized. The median page today serves 25 images, compared to 42 images back in 2012. While the number of image requests has reduced dramatically, the combined size has increased almost threefold, from 331 KB to 945 KB.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/desktop-images-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>This trend carries over to mobile. The number of image requests has remained the same, but in this case the total image size has increased almost 6X &ndash; from 151 KB to 876 KB.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/mobile-images-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>9. JavaScript requests have doubled, while JS size has almost quadrupled</h3> <p>Not only are we serving more scripts than ever &ndash; with all the performance risks that those entail &ndash; we're also bulking out pages with 500 KB of JS weight.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/desktop-scripts-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Mobile pages fare only slightly better with 453 KB of JS weight.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/mobile-scripts-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>10. CSS requests have more than doubled for desktop and mobile</h3> <p>More stylesheets equal more risk of performance degradation. The amount of CSS on your pages is something to keep an eye on, because problematic CSS can block the rest of your page from rendering.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/desktop-css-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/mobile-css-2012-2022.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>How does page bloat affect Core Web Vitals?</h2> <p>Google's <a href="https://web.dev/learn-web-vitals/">Core Web Vitals</a> are a set of metrics that are intended to focus on measuring performance from a user-experience perspective. While total page size and weight don't directly affect Web Vitals, there are some nuanced ways that you should think about the number and size of resources you're serving.</p> <h3>Largest Contentful Paint&nbsp;</h3> <p>Largest Contentful Paint (LCP) measures when the largest visual element on the page renders. Page bloat issues that can hurt your LCP time include:</p> <ul> <li><strong>Slow or blocking scripts and stylesheets</strong> that load at the beginning of the page's rendering path can delay when images start to render.</li> <li><strong>Unoptimized images with excessive load times.</strong> Largest Contentful Paint includes the entire time it takes for the image to finish rendering. If your image starts to render at the 1-second mark but takes 4 seconds to fully render, then your LCP time is 5 seconds. 
This far exceeds <a href="https://web.dev/vitals/">Google's threshold of 2.5 seconds for Largest Contentful Paint</a>.</li> </ul> <h3>First Input Delay</h3> <p>First Input Delay (FID) measures how quickly a page responds to a user interaction. Input delay happens when the browser's main thread is too busy to respond to the user. Commonly, this is due to the browser being busy parsing and executing large JavaScript files.</p> <p>There's a lot of unnecessary JS on many pages, and as noted above, JS files have gotten bigger over the years. The more JS on your page, the more potential for slow FID times. As Tim Kadlec said a couple of years back in his performance.now() talk <a href="https://www.youtube.com/watch?v=JvJ0v5OohNg">When JavaScript Bytes</a>:</p> <blockquote> <p>JavaScript is, byte-for-byte, the most expensive resource on the web and we're using more of it than ever in our sites. You can optimize the delivery, the parsing and the execution until you're blue in the face but you'll never make it as performant as simply not using it in the first place.</p> </blockquote> <h3>Cumulative Layout Shift</h3> <p>Cumulative Layout Shift (CLS) measures how visually stable a page is. It's a formula-based metric that, put simply, takes into account how much a page's visual content shifts within the viewport, combined with the distance that those visual elements shifted. You can <a href="https://web.dev/cls/">dig deeper</a> into the mechanics of how CLS is calculated, but the human-friendly definition is that CLS helps you understand how likely a page is to deliver a janky, unpleasant experience to viewers.</p>
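<p>If you're curious what that looks like in practice, here's a simplified sketch using the browser's layout-shift entries. (Real CLS, as reported by Chrome, groups shifts into session windows; this version just accumulates every shift that wasn't caused by recent user input.)</p> <pre class="language-javascript"><code>// Accumulate a simplified CLS-style score from layout-shift entries.
// Note: real CLS uses session windows; this just sums every qualifying shift.
var clsScore = 0;
new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    // Shifts triggered by recent user input don't count toward CLS.
    if (!entry.hadRecentInput) {
      clsScore += entry.value; // impact fraction x distance fraction
    }
  });
  console.log('CLS so far: ' + clsScore.toFixed(4));
}).observe({ type: 'layout-shift', buffered: true });</code></pre>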
<p>CLS is strongly affected by the number of resources on the page, and by how and when those resources are served. You can see this by looking at the <a href="https://app.speedcurve.com/benchmark/retail-us/test/220307_0Q_629df0001e6f8cd98dabbb42196ff7a5/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">synthetic test result for Sears.com</a> (again, available via our <a href="https://app.speedcurve.com/benchmarks/usa/retail/slow/cumulative-layout-shift/">Industry Benchmarks</a>). The CLS score for this page is 1.0468. For context, Google recommends a score of 0.1 or less. Translation: This is a really, really janky page!</p> <p>These screenshots highlight the most significant visual element shifts:</p> <p><a href="https://app.speedcurve.com/benchmark/retail-us/test/220307_0Q_629df0001e6f8cd98dabbb42196ff7a5/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae"><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/cls-sears.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>Not surprisingly, this page, while not overly huge in terms of total size (almost 3 MB), contains a massive number of requests. Of those 559 requests, the bulk is images (175 requests), JavaScript (140 requests), and 'other' (133 requests).</p> <p><a href="https://app.speedcurve.com/benchmark/retail-us/test/220307_0Q_629df0001e6f8cd98dabbb42196ff7a5/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae&amp;share=3ssmi8mdfi7g5j2m3oinu6d74c9tae"><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/sears-page-size.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>Looking at the <a href="https://app.speedcurve.com/benchmark/retail-us/test/220307_0Q_629df0001e6f8cd98dabbb42196ff7a5/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae&amp;share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">waterfall chart for this same page</a>, we see that there are:</p> <ul> <li>16 requests before Start Render</li> <li>52 requests before Largest Contentful Paint, and</li> <li>62 requests before the Largest Layout Shift (a CLS-related metric that SpeedCurve captures)</li> </ul> <p>That's a lot of requests!</p> <h2>Is it possible to have big pages that deliver a good user experience?</h2> <p style="font-size: 16px;">Yes. While page size can be a red flag for real performance issues, if you care about user experience, you need to take a closer look at how your pages are built to see if the size and complexity of your pages actually affect how fast your site feels to your users.</p> <p style="font-size: 16px;">It's not enough to look at crude metrics like total requests and size. You need to know:</p> <ul style="font-size: 16px;"> <li>How many of your requests are blocking requests?</li> <li>If your page contains blocking requests, how many of them occur in the critical rendering path? That is, how many blocking requests are there before key page metrics like Start Render and Largest Contentful Paint?</li> <li>How many of your potentially problematic requests come from third parties, and how do you maintain visibility into how they're performing?</li> <li>Are the most important images on your page the first images to render? How quickly do they show up?</li> </ul> <p style="font-size: 16px;">Amazon is a good example of a site that serves large, fast pages. In <a href="https://app.speedcurve.com/benchmark/retail-us/test/220302_0C_0b8d7eaa99a061a20d14cd485217b33a/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">this recent test run</a> from our <a href="https://app.speedcurve.com/benchmarks/usa/retail/fast/start-render/">Industry Page Speed Benchmarks</a>, you can see that the Amazon home page ranks fastest in terms of Start Render &ndash; despite the page containing 410 requests and weighing in at 4,311 KB, far beyond the median sizes shared above. Yet the page has a Start Render time of 0.3 seconds, a Largest Contentful Paint time of 0.48 seconds, and a CLS score of 0.1526.</p> <p style="font-size: 16px;"><a href="https://app.speedcurve.com/benchmarks/usa/retail/fast/start-render/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/amazon-benchmarks-2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p style="font-size: 16px;">Looking at a close-up of Amazon's waterfall chart (below) reveals why.
While there are 38 resources that load before Largest Contentful Paint, only one of them is render blocking, and all of them are extremely lean.</p> <p style="font-size: 16px;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/434/amazon-waterfall.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Takeaways</h2> <p>I meet with so many people whose job it is to build and optimize websites. When we look at how their pages are built, I routinely witness their surprise at spotting things like ghost scripts, huge unoptimized images, and blocking resources they weren't aware of. These are smart people. The problem isn't them &ndash; it's the scale of their sites, the speed of their release cycles, and the number of people who touch each page.</p> <p>We're never going to get our lean, pre-1999, under-1MB web pages back. But we can regain control over the pages we have today.</p> <h3>1. Understand the critical rendering path for each page</h3> <p>Your pages probably have a lot of unnecessary junk on them, and some of that junk is unoptimized. Too much stuff means you can't see the forest for the trees. You can have large, complex pages that still feel fast. The key to a good user experience is quickly delivering the most important content first. Here are some <a href="https://developers.google.com/web/fundamentals/performance/critical-rendering-path/measure-crp">great resources for analyzing and optimizing the critical rendering path</a>.</p> <h3>2. Make sure everyone who touches a page understands the performance impact of what they do</h3> <p>All the fancy performance monitoring tools in the world can't help you if you don't have a strong performance culture at your organization. Here are some <a href="https://support.speedcurve.com/docs/performance-culture-best-practices">tips and best practices</a> to help on that journey.</p> <h3>3. Fight regression</h3> <p>Page bloat happens when people stop paying attention. We need to monitor our pages consistently over time. <a href="https://support.speedcurve.com/docs/continuous-integration">Integrating performance testing into your CI/CD process</a> is a great way to fight regression, especially if you combine this with creating <a href="https://support.speedcurve.com/docs/performance-budgets-101">performance budgets</a>. By creating performance budgets for key metrics &ndash; such as Start Render, Largest Contentful Paint, and various page size and weight metrics &ndash; you can get alerted when they go out of bounds.</p>
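<p>Conceptually, a budget check is nothing more than comparing measured values against thresholds and failing the build when one is exceeded. The sketch below is purely illustrative &ndash; the budget numbers, metric names, and function are all hypothetical, not SpeedCurve's API:</p> <pre class="language-javascript"><code>// Hypothetical illustration of a performance-budget check in a CI step (Node).
// Budgets and metric names are examples only.
var budgets = {
  'start-render-ms': 1500,
  'largest-contentful-paint-ms': 2500,
  'page-weight-kb': 2000,
  'image-weight-kb': 1000
};

function checkBudgets(measured) {
  var failures = Object.keys(budgets).filter(function (metric) {
    return measured[metric] &gt; budgets[metric];
  });
  if (failures.length) {
    console.error('Over budget: ' + failures.join(', '));
    process.exitCode = 1; // fail the CI job
  }
}

checkBudgets({ 'start-render-ms': 1300, 'largest-contentful-paint-ms': 2900,
               'page-weight-kb': 2400, 'image-weight-kb': 950 });</code></pre>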
<h3>4. Don't assume hardware and networks will mitigate page bloat</h3> <p>While some of your users may have newer devices and speedy networks, not all of them are that lucky. If you're using a <a href="https://support.speedcurve.com/docs/synthetic-vs-real-user-monitoring-rum">real user monitoring</a> tool, <a href="https://support.speedcurve.com/docs/performance-for-product-managers">keep an eye on your performance metrics at the 75th and 95th percentiles</a> so you understand your site's less-optimal performance, not just its best case.</p>
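<p>If you're wondering what those percentiles actually represent, here's a quick sketch of the arithmetic using the nearest-rank method and a handful of hypothetical LCP samples. Your RUM tool does this for you at much larger scale; the point is that the median can look healthy while the tail tells a very different story.</p> <pre><code>// Nearest-rank percentile: sort the sample, then take the value
// at rank ceil(p/100 * n).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// A hypothetical sample of LCP timings, in milliseconds.
const lcpSamples = [4200, 900, 6800, 1200, 480, 3100, 2500];

console.log(percentile(lcpSamples, 50)); // 2500 - the median looks fine
console.log(percentile(lcpSamples, 75)); // 4200
console.log(percentile(lcpSamples, 95)); // 6800 - but the tail is slow</code></pre>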
<a href="https://www.speedcurve.com/blog/new-vitals-dashboard/">Core Web Vitals</a> are presented by default, but you can select any of the metrics you care about for these charts.</p> <h3><img class="blog-img" src="https://blog-img.speedcurve.com/img/433/happiness_cwv.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Charts illustrating SpeedCurve's User Happiness index and Core Web Vitals." /></h3> <h3>Sessions</h3> <p>Finally, the last component on this dashboard is the 'Latest Sessions' table. This is a list of the most current user sessions on your site.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/433/sessions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table of unique user sessions." /></p> <p>From any one of the listed sessions in the table, you can click through to look at the details, including the user's journey and the page level performance across all collected metrics. (Note that SpeedCurve RUM does not collect personal identifiable information (PII). <a href="https://support.speedcurve.com/docs/rum-data">Learn more</a> about the data that our RUM does and does not collect.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/433/session_details.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="image of the session details for an anonymous user" /></p> <h2><img class="blog-img" src="https://blog-img.speedcurve.com/img/433/page_details.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Performance details for a specific page in the user's session" /></h2> <h2>Page Views</h2> <p>Are things working as expected? Did my most recent change to RUM take effect? Am I collecting any data? Are my new custom metrics working as expected? Sometimes, you just need to <a href="https://www.youtube.com/watch?v=coNDCIMH8bk" target="_blank" rel="noopener">look at your data</a>. The Page Views dashboard allow you to do just that.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/433/pageviews.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="SpeedCurve's RUM Page Views dashboard" /></p> <p>By drilling into any of the summarized entries in the table, you'll find everything we know about that page view. This includes both default and <a href="https://support.speedcurve.com/docs/custom-metrics" target="_blank" rel="noopener">custom metrics</a> as well as any <a href="https://support.speedcurve.com/docs/customer-data">custom data</a> you've defined.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/433/pageview_detail.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Expanded view of custom and default metrics for a specific page view" /></p> <h2>There was at least one good thing about 2021</h2> <p>We hope you enjoy all the new features we've added to RUM over the last year. It's exciting for us to continue to push the boundaries of what we can do with real user data and see firsthand how it helps our customers. In case you are wondering, 2022 has even more RUM goodness in store! As always, <a href="mailto:support@speedcurve.com" target="_blank" rel="noopener">feedback is welcomed</a> and encouraged.</p> <p>Not a RUM user? <a href="https://www.speedcurve.com/" target="_blank" rel="noopener">Start a free trial today</a>!</p> Tue, 18 Jan 2022 00:00:00 +1300