SpeedCurve Blog https://www.speedcurve.com/blog/ Speed matters. Get the latest on how the areas of design and performance overlap with a focus on creating great user experiences. NEW! Home, Site, Page and Tests dashboards https://www.speedcurve.com/blog/new-dashboards-home-site-page-tests <p>Things have been busy over here at SpeedCurve HQ! Coming off the back of our latest <a href="https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard">RUM Compare dashboard</a> release, we are super excited to launch four new dashboards to make your life better, your work easier, and your websites faster.</p> <ul> <li><strong>Home</strong> - Well, now you have one. We needed a place for folks to land when they were jumping into all that SpeedCurve has to offer. The newly released Home page is a great starting place, pulling together views of both RUM and Synthetic data and directing you on where to dig in next.</li> <li><strong>Sites</strong> - We've rebuilt our Sites dashboard to give you a holistic look across all of the URLs you are monitoring. Start here to understand which areas of your site need the most attention and identify areas where you can have the most impact.</li> <li><strong>Pages</strong> - This detailed view gives you trended information for all the metrics you know and love. Easily identify areas of opportunity and how things are performing over time for specific pages you are monitoring.</li> <li><strong>Tests</strong> - We've had a lot of requests for this, so we're happy to deliver it. You can now see a list of your latest synthetic tests and spot any failed tests.</li> </ul> <p>Let's take a look!</p><h2>Home</h2> <p>This is the new default dashboard you'll see when you jump into SpeedCurve for the first time. If you are using both RUM and Synthetic, you'll see both datasets reflected in this view.
Otherwise, the metrics reflected will be from your synthetic data.</p> <p>Beginning with a snapshot of your Core Web Vitals, this page highlights key areas of interest and gives a high-level overview of the selected site. Links from each section take you to a deeper look within other areas of your dashboards.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/home.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="SpeedCurve's Home dashboard showcasing various areas for web performance investigation." /></p> <p>If you have <a href="https://support.speedcurve.com/docs/continuous-integration">integrated SpeedCurve into your CI/CD environment</a> (leveraging the <a href="https://support.speedcurve.com/reference/get-all-deploys">deploy API</a>), or if you are <a href="https://support.speedcurve.com/docs/manual-test-deploy">manually running synthetic tests</a> using the 'Test Now' option, you'll see the state of your last deployment compared to the previous one. And finally, a 7-day trend is displayed for your most important metrics.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/home2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Continued view of the Home page below the first viewport." /></p> <h2>Site Dashboard</h2> <p><span style="color: #000000;">This view is intentionally focused on the latest data you've collected for your Site, which gives you a great starting place when managing performance across various URLs. What pages have the slowest LCP? Home, Order and Print look like a good place to start digging.</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/site1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Initial view of the Site dashboard illustrating CWV for all URLs monitored in the Site" /></p> <p><span style="color: #000000;">How are users perceiving the visual rendering of all the pages in my site?
Which pages should I dive into first?</span></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/filmstrip.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparison of filmstrips for URLs within the site" /></p> <p>From the Lighthouse audits run against your site, you can quickly see which pages need the most attention across the five areas of focus.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/lighthouse.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Lighthouse scores compared across all URLs in a site" /></p> <p>An aggregate view of the most critical areas for improvement identifies which recommendations will have the most collective impact across your pages.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/improvements.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="List of recommended improvements for the site" /></p> <h2>Page Dashboard</h2> <p>From the Site dashboard, you can narrow your focus and dive into the new Page dashboard. Here you'll see key metrics for the page trended over time. You can use the filters at the top of the page to further refine your data by browser, region, and date range.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Initial view of Synthetic Page dashboard showing CWV over time." /></p> <p>A look at the key rendering moments and a full filmstrip for the page focuses on user-perceived performance.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page dashboard illustrating key rendering times for the page as well as full filmstrip."
/></p> <p>Now that you understand how the page is performing, you can take a look at how the construction for the page has changed over time and where optimization opportunities exist, such as reducing network requests and their file size.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page3.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Content requests over time for the page showing an improvement, decrease in requests over time." /></p> <p>Finally, a list of prioritized improvements for the page is displayed. Pay special attention to critical recommendations that are flagged with the Core Web Vitals metrics. If you implement those recommendations, you should see an improvement in your Vitals.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/page_improvements.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="A list of prioritized page improvements from the most recent lighthouse audit " /></p> <h2>Tests Dashboard</h2> <p>We love simplicity, and this dashboard is a great example of why. We have received a lot of feature requests such as "Just show me a list of the tests I've run recently" as well as "How can I quickly see which tests are failing?" This dashboard achieves both!&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/tests.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Tests dashboard showing the most recent synthetic tests that have run for a Site." /></p> <p>If you want to refine this view, you can apply filters to show you tests for a specific site, page, region or browser. You can also filter on test status ('succeeded' or 'failed') and test type ('scheduled test' or 'deploy').</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/testsfilter.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Filter options for the tests dashboard." 
/></p> <p>Finally, if you have any failed tests, we'll show those at the top of the previously mentioned Home page to alert you to the failures quickly. You can then click through to the test details to determine why the test failed.</p> <p>We hope this dashboard keeps you informed so you can take action and protect your synthetic checks budget.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/445/failures.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Home page highlighting failed synthetic tests." /></p> <p>We hope you find these latest additions to SpeedCurve useful. As always, <a href="mailto:support@speedcurve.com">let us know</a> what you think!</p> <h2>Related:</h2> <ul> <li><a href="https://support.speedcurve.com/docs/get-started-synthetic">Getting started with SpeedCurve</a></li> <li><a href="https://support.speedcurve.com/docs/continuous-integration">Integrating performance testing into your CI environment</a></li> <li><a href="https://support.speedcurve.com/docs/synthetic-site-dashboard-walkthrough">Site dashboard walkthrough</a> (video)</li> <li><a href="https://support.speedcurve.com/docs/filter-your-speedcurve-dashboards">How to filter your SpeedCurve dashboards</a> (video)</li> <li><a href="https://support.speedcurve.com/docs/failed-synthetic-tests">Troubleshooting failed synthetic tests</a></li> </ul> Mon, 13 Mar 2023 00:00:00 +1300 Why you need to know your site's performance plateau (and how to find it) https://www.speedcurve.com/blog/web-performance-poverty-line <p style="text-align: center;"><strong>"I made my pages faster, but my business and user engagement metrics didn't change.
WHY???"</strong></p> <p style="text-align: center;"><strong>"How do I know how fast my pages should be?"</strong></p> <p style="text-align: center;"><strong>"How can I demonstrate the business value of performance to people in my organization?"</strong></p> <p>If you've ever asked yourself any of these questions, then you could find the answers in identifying and understanding the performance plateau for your site.</p> <h2>What is the "performance plateau"?</h2> <p>The performance plateau is the point at which changes to your website&rsquo;s rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter because you&rsquo;ve bottomed out in terms of business and user engagement metrics.</p> <p>In other words, if your performance metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The concept of the performance plateau isn't new. I first encountered it more than ten years ago, when I was looking at data for a number of sites and noticed that &ndash; not only was there a correlation between performance metrics and business/engagement metrics &ndash; there was also a noticeable plateau in almost every correlation chart I looked at.&nbsp;</p> <p>A few months ago someone asked me if I've done any recent investigation into the performance plateau, to see if the concept still holds true. When I realized how much time has passed since my initial research, I thought it would be fun to take a fresh look.</p> <p>In this post, I'll show how to use your own data to find the plateau for your site, and then what to do with your new insights.</p><h2>Background</h2> <p>For this new investigation, I selected four sites that experience a significant amount of user traffic. 
For each site, I used a month's worth of RUM (real user monitoring) data to generate correlation charts.</p> <p><a href="https://support.speedcurve.com/docs/create-correlation-charts">Correlation charts</a> show the relationship between performance metrics &ndash; in these instances, Start Render and Largest Contentful Paint (LCP) &ndash; and user engagement (measured as bounce rate). They're a great tool for showing non-technical folks how performance affects the business. (You can also create correlation charts that show <a href="https://support.speedcurve.com/docs/conversion-rates">the relationship between performance metrics and business metrics</a>, such as conversion rate and cart size, but bounce rate is easier to measure right out of the box with most RUM tools.)</p> <h2>Results</h2> <p>The correlation charts below show the distribution of all visits, with each yellow bar representing a cohort of visits that experienced a given Start Render or LCP time.
The blue bar represents the change in bounce rate across all cohorts.</p> <p>In each of the correlation charts below, I've highlighted:</p> <ul> <li><strong>Optimal speed</strong>&nbsp;&ndash; The cohort of sessions that correlated with the lowest (aka best) bounce rate for that site</li> <li><strong>Beginning of the performance plateau</strong>&nbsp;&ndash; The cohort of sessions where the bounce rate begins to plateau</li> <li><strong>Median measurement</strong>&nbsp;for all visits represented in the chart</li> </ul> <p>Keep reading for observations and takeaways.</p> <h3>Site A</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-1-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-1-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site B</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-2-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-2-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site C</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-3-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Site D</h3> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-4-start-render.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-4-lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Observations</h2> <h3>1. 
A clear performance plateau emerged for each site</h3> <p>Each site experienced a plateau at which business metrics remained more or less the same as performance continued to degrade.</p> <h3>2. Plateaus emerged for both Start Render and Largest Contentful Paint</h3> <p>I expected the results for Start Render, as it's been around as a page speed metric for many years, and has been proven to correlate to business metrics. LCP is relatively new, and has received a great deal of attention since it was introduced a couple of years ago as part of Google's <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a>. It's good to see evidence that it's a meaningful metric to track in terms of business goals.</p> <h3>3. The plateau emerges surprisingly quickly in some cases</h3> <p>For example, Site C's performance plateau starts at 400 milliseconds. That's early!</p> <h3>4. There's a lot of variability in the distance between the optimal bounce rate and the plateau</h3> <p>For some sites, you can see a much steeper incline in the curve from optimal to plateau. For some sites (such as Site C), the difference was as little as 300 milliseconds. For others (such as Site A), the gap was as long as 9 seconds.</p> <h3>5. The plateau sometimes started later when looking at LCP</h3> <p>Creating correlation charts in both Start Render and LCP generated interesting results. In two of the four sites I looked at, the charts were roughly comparable. For the other two sites, the plateau started later for LCP than it did for Start Render. This could be attributed to the fact that LCP measures when the largest visual element has completely finished rendering, so it can occur much later than Start Render.</p> <h3>6. For some sites the performance plateau starts well before the median</h3> <p>Predictably, the optimal bounce rate generally correlated to the cohort of sessions that is much faster than the median. 
A bit more surprisingly, for some sites the performance plateau started well before the median. This could come as a scary revelation for some site owners, because it means that the bulk of your user sessions are occurring on the plateau.</p> <h2>How to measure the performance plateau for your own site</h2> <p>I can't emphasize enough that the examples I've shared are illustrative, not prescriptive. The performance plateau for your site will be different from the plateau for another site. <strong>You need to look at your own real user data. </strong>(If you're new to performance, you might be interested in <a href="https://support.speedcurve.com/docs/synthetic-vs-real-user-monitoring-rum">this synthetic and real user monitoring explainer</a>.)</p> <p>Fortunately, the process for identifying the low end of your site&rsquo;s performance threshold is fairly straightforward. All you need is access to a statistically significant amount of your RUM data, plus whatever analytics tool you use for tracking business or user engagement metrics.</p> <h3>Step 1: Identify the metrics you want to measure</h3> <p>As mentioned above, bounce rate is a good metric to start with, because it's already gathered automatically by most real user monitoring tools.</p> <p>If you have access to other data sources, then you can create a variety of correlation charts. If you run an ecommerce site, you can measure revenue, cart size, and conversion rate. If you work on a media site, then page views and bounce rate matter.</p> <h3>Step 2: Gather lots of real user data</h3> <p>To ensure that you get statistically significant results, the more data you can gather, the better. If your dataset is too small, you could get wonky results. When I conducted my investigation, I aggregated millions of transactions that took place over a single month.
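</p>

<p>To make Steps 2 and 3 a little more concrete, here's a rough sketch of how you might bucket raw RUM sessions, compute a bounce rate per cohort, and then look for the point where the curve flattens. This is purely illustrative &ndash; it is not how SpeedCurve builds its correlation charts &ndash; and the session fields (<code>lcp_ms</code>, <code>bounced</code>) and the naive flatness test are assumptions you would adapt to your own RUM export.</p>

```python
from collections import defaultdict

def bounce_rate_by_bucket(sessions, metric="lcp_ms", bucket_ms=500):
    """Group sessions into time buckets and compute bounce rate per bucket.

    `sessions` is a list of dicts holding the metric value in milliseconds
    and a boolean `bounced` flag (field names are assumptions).
    """
    totals = defaultdict(int)
    bounces = defaultdict(int)
    for s in sessions:
        bucket = (s[metric] // bucket_ms) * bucket_ms
        totals[bucket] += 1
        if s["bounced"]:
            bounces[bucket] += 1
    return {b: bounces[b] / totals[b] for b in sorted(totals)}

def plateau_start(rates, tolerance=0.01):
    """Return the first bucket after which bounce rate stops changing by
    more than `tolerance` -- a deliberately naive flatness test."""
    buckets = sorted(rates)
    for prev, cur in zip(buckets, buckets[1:]):
        if abs(rates[cur] - rates[prev]) <= tolerance:
            return prev
    return buckets[-1] if buckets else None
```

<p>Plotting the bucketed bounce rates against the bucket boundaries gives you the same shape as the correlation charts above, and the flattening point is your candidate plateau.</p>

<p>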
(If you're interested in trying real user monitoring, you can start a <a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">free RUM trial</a> in SpeedCurve.)</p> <h3>Step 3: Create correlation charts</h3> <p>I've demonstrated how I like to show bounce rate (or whatever business/engagement metric you're plotting) across the distribution of sessions. (If you're a SpeedCurve user, <a href="https://support.speedcurve.com/docs/create-correlation-charts">here's how to create correlation charts</a>.)</p> <h2>What to do with your findings</h2> <p>After you've finished your own investigation, you can do a few things with the results:</p> <h3>1. Share your findings within your organization</h3> <p>Correlation charts are a powerful tool for showing stakeholders the impact that site speed has on the business. Even if your results aren't what you hoped they would be, you can use this data to prove the value of continuing to invest in performance.</p> <h3>2. Understand why your business metrics are not improving despite your efforts</h3> <p>This might seem a bit demoralizing, but when you think about it, it's actually helpful to know. When you know where your performance plateau begins, you can answer the question "Why don't my business or user engagement metrics improve when I make my site faster?" If you improve Start Render from 5 seconds to 3 seconds, but the performance plateau for your site starts at 2 seconds, you haven't yet made Start Render fast enough.</p> <h3>3. Change your performance targets</h3> <p>Set targets for moving more of your users into the cohorts that experience faster Start Render or LCP times. Ideally, improving key site speed metrics should improve bounce rate (or whatever user engagement or business metric you're tracking) for more of your users.
Ultimately, this is good for your business.</p> <p>You can use your performance plateau to set goals. Continuing with the example in point 2, above, if you know that the plateau starts at 2 seconds, you can create a Start Render target of 1.5 seconds to work toward.</p> <h3>4. Or don't change your performance targets</h3> <p>In the Site C example, the optimal bounce rate occurs for the 100-millisecond LCP cohort, and the plateau starts just 300 milliseconds later. With a huge amount of work, you might succeed in delivering faster LCP times to more sessions, but would the effort be worth it?</p> <p>As the close-up view of the chart below shows, the bulk of sessions have speedy LCP times that are at the beginning of the performance plateau. In this case, the chart shows that perhaps you can be satisfied with your efforts, and your focus should be on fighting regressions and staying fast.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/443/site-c-closeup.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>5. Create a baseline to measure against</h3> <p>Repeat this exercise periodically &ndash; perhaps monthly, or semi-annually, or after a deploy where you've made a number of performance improvements &ndash; and compare the correlation charts over time. Ideally, you'll see more of your sessions fall into the faster section of the distribution, before the performance plateau.</p> <h2>Questions?
Feedback?</h2> <p>If you experiment with creating correlation charts and plotting the performance plateau for your site, I'd love to hear about your results!</p> <h2>Related:</h2> <ul> <li><a href="https://support.speedcurve.com/docs/performance-for-retailers">Web performance for retailers</a></li> <li><a href="https://support.speedcurve.com/docs/create-correlation-charts">How to create correlation charts</a></li> <li><a href="https://support.speedcurve.com/docs/competitive-benchmarking">How to benchmark your site against your competitors</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">How to set up real user monitoring (RUM)</a></li> </ul> Mon, 06 Mar 2023 00:00:00 +1300 2022 in review: New dashboards, Core Web Vitals enhancements, third-party tracking & more! https://www.speedcurve.com/blog/2022-recap <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/2022.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Every year feels like a big year, and 2022 has been no exception. Not only did we celebrate our ninth birthday (!!!), but
we also:</p> <ul> <li>Launched a series of new and improved dashboards to help you identify and fix your performance issues more quickly and efficiently (More on those below.)</li> <li>Expanded our <a href="https://support.speedcurve.com/changelog/synthetic-new-regions-italy-bahrain-south-africa-indonesia">test agent locations</a> to bring you even more places to test from worldwide</li> <li>Jumped into in-person events, and were so happy to catch up with some of you at <a href="https://smashingconf.com/">SmashingConf</a> and <a href="https://perfnow.nl/">performance.now()</a> (You can watch videos of <a href="https://www.youtube.com/watch?v=cnr3CJwpaps&amp;t=2s">my talk about performance budgets</a> and <a href="https://www.youtube.com/watch?v=CFvTCiIDwMM">Andy Davies' talk about third parties</a>.)</li> </ul> <p>Keep reading for a full recap of the past year...</p><h2>Home dashboard</h2> <p>It only took nine years, but we now have a <a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards">home page</a>! ;-)</p> <p>Your Home dashboard brings together your Synthetic and RUM data to provide an easy-to-digest &ndash; and easy-to-share &ndash; snapshot of how your site is performing. 
You can see at a glance:</p> <ul> <li>Key metrics like <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a> and your <a href="https://support.speedcurve.com/docs/user-happiness">User Happiness</a> score</li> <li>The current status of your <a href="https://support.speedcurve.com/docs/performance-budgets-101">performance budgets</a></li> <li>Performance in <a href="https://support.speedcurve.com/docs/test-agent-locations">different geolocations</a> around the world</li> <li>Your current <a href="https://support.speedcurve.com/docs/competitive-benchmarking">competitive benchmarks</a> status</li> <li>Top <a href="https://support.speedcurve.com/docs/lighthouse">Lighthouse recommendations</a> for performance improvements</li> </ul> <p><a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/home-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>(Hint: If you're considering installing a <a href="https://support.speedcurve.com/docs/share-dashboards">big-screen performance monitor</a> at your organization, this dashboard is a good candidate for that.)</p> <h2>Synthetic Site dashboard</h2> <p>You may have noticed that your <a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards">Site dashboard</a> has been given a complete overhaul. 
It now focuses on helping you spot differences between the performance of your pages, making it easier to identify which pages in your site have performance issues.</p> <p><a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/site-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>When you identify a page that needs further investigation, you can quickly navigate through to your new Synthetic Page dashboard...</p> <h2>Synthetic Page dashboard</h2> <p>You'll find your new <a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards">Page dashboard</a> to be similar to your old Synthetic Site dashboard. Here you can still find time series charts that show how performance is trending over time across a number of key metrics, including Core Web Vitals.</p> <p><a href="https://support.speedcurve.com/changelog/new-home-site-and-page-dashboards"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/page-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Contextual Web Vitals recommendations</h2> <p>As you may already know, <a href="https://support.speedcurve.com/docs/lighthouse">we run full Lighthouse audits</a> on every page you test in synthetic. The results &ndash; including detailed performance optimization recommendations &ndash; are available in your test details.</p> <p>In addition to that, we now badge all performance recommendations with the Core Web Vitals metric (LCP, CLS, FID, and TBT) they affect. 
Fix those performance issues and you should see an improvement in your Core Web Vitals and Lighthouse scores.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/contextual-vitals-recommendations.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Here are the dashboards where you can find your Vitals-badged performance recommendations:</p> <ul> <li>Home</li> <li>Vitals</li> <li>Improve</li> <li>Synthetic Site</li> <li>Synthetic Page</li> </ul> <h2>RUM Compare dashboard</h2> <p>With your <a href="https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard/">RUM Compare dashboard</a>, you can easily generate side-by-side comparisons for any two cohorts of real user data.</p> <p><a href="https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/rum-compare-dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>Some of the many reasons you might want to compare RUM cohorts:</p> <ul> <li><strong>Improve Core Web Vitals</strong> by identifying the tradeoffs between pages that have different layout and construction</li> <li><strong>Triage a performance regression</strong> related to the latest change or deployment to your site by looking at a before/after comparison</li> <li><strong>Explore and compare different out-of-the-box cohorts</strong>, such as device types, geographies, page labels, and more</li> <li><strong>Analyze A/B tests or experiments</strong> to understand which had the most impact on user behaviour, as well as performance</li> <li><strong>Optimize your funnel</strong> by understanding differences between users that convert or bounce from your site and users who don't</li> <li><strong>Evaluate CDN performance</strong> by exploring the impact of time-of-day traffic patterns</li> </ul> <h2>RUM update: Page labels</h2> <p>Page labels are an integral part of getting the most out of dashboards like your RUM 
Compare dashboard (above). We've added <a href="https://www.speedcurve.com/blog/page-labels/">new capabilities for defining page labels</a> with an easy, no-code, server-side (SpeedCurve) option using URL patterns. (This means you now have four options for labeling pages in RUM, depending on your use case.)</p> <p><a href="https://www.speedcurve.com/blog/page-labels/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/rum-page-labels.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>RUM update: Performance heat maps</h2> <p>We've added heat maps to your RUM Performance dashboard. These let you quickly spot which page labels or paths are outliers. You can then prioritize and focus on these areas to improve important metrics, such as Largest Contentful Paint. Fixing high-traffic but poor-performing areas of your site will help lift your overall metrics.</p> <p><a href="https://support.speedcurve.com/changelog/rum-performance-heat-maps"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/performance-heatmaps.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Synthetic update: Compare third parties</h2> <p>When <a href="https://support.speedcurve.com/docs/bookmark-and-compare-tests">comparing any two synthetic tests</a>, we now show you a comparison of common third parties and identify which third parties are unique to each test. 
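</p>

<p>Conceptually, this kind of comparison is simple set arithmetic over the hostnames each test requested. Here's a hedged sketch of the idea &ndash; not SpeedCurve's actual implementation, which identifies third parties far more robustly than the naive "not my domain" check assumed below:</p>

```python
from urllib.parse import urlparse

def third_party_domains(request_urls, first_party="example.com"):
    """Collect third-party hostnames from a test's request URLs.

    Treats any host other than the first-party domain (or its subdomains)
    as third party -- a simplification for illustration only.
    """
    domains = set()
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if host != first_party and not host.endswith("." + first_party):
            domains.add(host)
    return domains

def compare_third_parties(urls_a, urls_b, first_party="example.com"):
    """Return third parties common to both tests and unique to each."""
    a = third_party_domains(urls_a, first_party)
    b = third_party_domains(urls_b, first_party)
    return {"common": a & b, "only_a": a - b, "only_b": b - a}
```

<p>A new domain showing up only in the second test after a deploy is exactly the kind of change worth investigating first.</p>

<p>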
This lets you quickly spot changes in common third parties, as well as new third parties that might have introduced performance issues.</p> <p><a href="https://support.speedcurve.com/docs/first-third-parties"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/compare-third-parties.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Expanded Industry Speed Benchmarks</h2> <p><a href="https://app.speedcurve.com/benchmarks/">Page Speed Benchmarks</a> is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries &ndash; from retail to media &ndash; over the past year. The dashboard allows you to easily filter by region, industry, mobile/desktop, fast/slow, and key web performance metrics, including Core Web Vitals.</p> <p><a href="https://app.speedcurve.com/benchmarks/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/industry-benchmarks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>The <a href="https://app.speedcurve.com/benchmarks/">Industry Benchmarks dashboard</a> is publicly available (meaning you don't need a SpeedCurve account to explore it) and is a treasure trove of meaningful data that you can use for your own research.</p> <h2>Lots of new videos!</h2> <p>We've added a plethora of new videos to give you everything from <a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough">dashboard tours</a> to best practices for <a href="https://support.speedcurve.com/docs/diagnose-a-third-party-regression">fighting third-party regressions</a> and <a href="https://support.speedcurve.com/docs/track-core-web-vitals">tracking Core Web Vitals</a>. 
You can find them in our <a href="https://support.speedcurve.com/docs/welcome">Support Hub</a>.</p> <p><a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough"><img class="blog-img" src="https://blog-img.speedcurve.com/img/446/support-videos.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <h2>Here's to the year that was and the year ahead. :-)</h2> <p>As we enter our tenth year of bringing you the very best performance monitoring tools we can muster &ndash; and helping you deliver fast, joyous experiences to your users &ndash; we welcome your feedback and suggestions. Let us know your thoughts by leaving a comment or sending us a note at support@speedcurve.com.</p> Thu, 22 Dec 2022 00:00:00 +1300 NEW! RUM Compare dashboard https://www.speedcurve.com/blog/real-user-monitoring-compare-dashboard <p>Exploring real user (RUM) data can be a hugely enlightening process. It uncovers things about your users and their behavior that you might never have suspected. That said, it's not uncommon to spend precious time peeling back the layers of the onion, only to find false positives or uncertainty in all that data.</p> <p>At SpeedCurve, we believe a big part of our job is making <em>your job</em> easier. This was a major driver behind the Synthetic Compare dashboard we released last year, which so many of you have given us great feedback on.</p> <p>As you may have guessed, since then we've been hard at work coming up with the right way to explore and compare your RUM datasets using a similar design pattern. Today, we are thrilled to announce your new RUM Compare dashboard!</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/rum-compare-overview.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>With your RUM Compare dashboard, you can easily generate side-by-side comparisons for any two cohorts of data.
Some of the many reasons you might want to do this include:</p> <ul> <li><strong>Improve Core Web Vitals</strong> by identifying the tradeoffs between pages that have different layout and construction</li> <li><strong>Triage&nbsp;a performance regression</strong> related to the latest change or deployment to your site by looking at a before/after comparison</li> <li><strong>Explore and compare different out-of-the-box cohorts,</strong> such as device types, geographies, page labels, and more</li> <li><strong>Analyze A/B tests or experiments</strong> to understand which had the most impact on user behavior, as well as performance&nbsp;</li> <li><strong>Optimize your funnel</strong> by understanding the differences between users who convert and users who bounce</li> <li><strong>Evaluate CDN performance</strong> by exploring the impact of time-of-day traffic patterns</li> </ul> <p>Let's take a tour...</p><h2>How to compare RUM sessions</h2> <p>In September of last year, <a href="https://www.speedcurve.com/blog/real-user-monitoring-sessions-dashboard/">we released</a> the RUM Sessions dashboard. With the Sessions dashboard, we gave you the ability to drill into specific datapoints on a time series to explore those sessions, as well as the ability to bookmark them. You'll now find the ability to compare sessions from a time series alongside those existing options.</p> <p>To compare two different data points from any RUM time series chart, simply click 'Compare Sessions' on two separate data points:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/tooltip.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="A tooltip that displays a performance metric and options for viewing, bookmarking and comparing sessions."
/></p> <p>After you've selected two sessions to compare, you'll see a dashboard with those sessions in a side-by-side comparison:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/chart_compare.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Initial view of the Compare dashboard after clicking on two data points from a series" /></p> <h2>Filtering your data</h2> <p>Navigating directly to the dashboard is another option. From the RUM menu in the side navigation, click on 'Compare'. By default you are presented with a view of Mobile vs. Desktop device types:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/mobile_desktop.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Compare dashboard with Mobile vs. Desktop cohorts" /></p> <p>You can alter each of the cohorts by adding and removing filters from the menu for the cohort:&nbsp;</p> <p><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/444/filter_menu.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Menu displaying different filter options for the cohort" /></p> <p>If you're already experienced with our other RUM dashboards, the different options available in your filter will look familiar. The 'Metric Threshold' filter has also been introduced as a way for you to explore 'fast' versus 'slow' sessions for a metric. To use it, simply select the metric along with the upper and lower bounds for each cohort.</p> <p>To demonstrate, let's look at a comparison of fast sessions (0-2 seconds) to slow sessions (2-15 seconds) for Largest Contentful Paint:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/metric_threshold.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Metric threshold filter showing LCP metric with min 0 and max 2 seconds." /></p> <h2>Dashboard components</h2> <h3>Overview</h3> <p>This component highlights a comparison of key performance metrics between the cohorts.
Here we can see that the faster cohort has a median LCP of 1.25 seconds, while the slower is at 3 seconds. More importantly, we see the bounce rate is 27% higher for the slower subset of users. Speed matters!</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/overview.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparison table called Overview which contains a subset of key metrics compared between the cohorts" /></p> <h3>Sample sizes</h3> <p>If you've spent any amount of time exploring your RUM data, you've probably learned that the size of your population matters a lot. This chart lets you quickly discern whether or not you're comparing statistically viable sets of data. While the example below is not a 50/50 split, there seems to be enough data in the slower cohort compared to the faster one to make some valid observations.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/size_sample.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Line chart showing the population size of each cohort by pageviews." /></p> <h3>Location</h3> <p>Understanding where your users are coming from and how their experience may vary is a key part of RUM analysis, especially as you make investment decisions on CDNs or CSPs. While the difference is not hugely pronounced, in the visualization below you can see there is a larger subset of 'fast' users coming from Norway and the United States. Note that you have the ability in this graph to toggle between an absolute value and a percentage. This is key when you are comparing datasets that are not the same size.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/locationmap.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing a world map with population density visualized as bubbles. Table below showing numeric representation by Country."
/></p> <h3>User Happiness</h3> <p>Somewhat self-explanatory, the comparison of your 'Happy Page Views' shown here illustrates that many more of your slower users fall into the 'Unhappy' bucket. You can read more about our Happiness Index and how it's calculated&nbsp;<a href="https://support.speedcurve.com/docs/user-happiness">here</a>.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/happiness.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Threshold comparison of happy, ok and unhappy users between cohorts" /></p> <h3>Looking at cohorts by other filters</h3> <p>This table defaults to a breakdown across cohorts by Browser, but there are several other dimensions to explore. In this example, we don't see much of a browser-driven impact to LCP. (If you're wondering where iPhone is, keep in mind that, unfortunately, not all metrics are supported across platforms.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/browser.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table breakdown of the population of browser types for each cohort" /></p> <p>However, here we can see that a breakdown by connection speed highlights that while most of the users are on fast connections, there is a higher percentage of users coming from slower connection speeds in the slow cohort.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/connection.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table showing a population comparison by connection speed for each cohort." /></p> <h3>Event Timeline&nbsp;</h3> <p>Your event timeline allows you to compare key metrics between your cohorts. This can be useful in identifying other metrics that may be more interesting or potentially responsible for downstream metrics such as LCP.
Backend time here is 500ms slower, which affects ALL metrics.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/eventtimeline.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparison of the two cohorts event timeline for key metrics." /></p> <h3>Metrics</h3> <p>The last, and potentially most compelling, component in the dashboard is focused on metrics. This component contains a LOT of information you can dig into.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/metrics.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table showing a comparison of a long list of metrics" /></p> <p>When you expand each metric, you have the ability to look at a comparison distribution, a time series comparison or a breakdown across all of our key dimensions. In this distribution chart, you can see that fast users often have network times of 0 seconds, most likely due to connection reuse.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/distribution.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Distribution comparison of the two cohorts for Network time" /></p> <p>Stacked bubble charts can help you quickly identify the presence of outliers. In the graph below, a lot of the slow LCP traffic is driven by older versions of Chrome. Oftentimes this can be automated traffic or other non-user traffic.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/444/bubbles.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Stacked bubble chart showing a breakdown by browser type. One outlier for Chrome 104 is highlighted." /></p> <h2>We welcome your feedback!</h2> <p>Phew! That's a lot of information packed into a single dashboard.&nbsp;We're excited to hear how you put this new RUM capability to work for your site. 
<a href="mailto:support@speedcurve.com">Drop us a line</a>&nbsp;and let us know what you think!</p> <p>And if you're not yet using our real user monitoring to learn how actual users are experiencing your site, <a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">try our RUM for free</a>!</p> Thu, 01 Dec 2022 00:00:00 +1300 Page labels: Why they are important and how to use them https://www.speedcurve.com/blog/page-labels <p>Labeling your pages in your synthetic and real user monitoring (RUM) tools is a crucial step in your performance monitoring setup. We recently released some exciting new capabilities for labeling your RUM pages that we want to share with you. This is also a great opportunity to reiterate why page labels are important, and to show you how easy it is to apply labels to your pages.</p> <h2>Why should I care about page labels?</h2> <p>There are so many benefits to labeling your pages in both your synthetic and real user monitoring (RUM) tools. Page labels give you the ability to:</p> <ul> <li>Compare and benchmark similar pages across different sites</li> <li>Create more meaningful performance budgets in RUM</li> <li>Correlate RUM data with synthetic diagnostics in the same charts (e.g., for Core Web Vitals that are measurable in synthetic and RUM)</li> <li>Prioritize performance optimizations according to their projected impact on real users</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/rum-vs-synthetic-budget-lcp.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Ready to learn more? Let's get to it!</p><h2>How do page labels help me?</h2> <p>Page labels give you the opportunity to gain richer insights from your data, and then to make that data actionable. Here are a few examples.&nbsp;</p> <h3>Compare and benchmark similar pages across different sites</h3> <p>For synthetic testing, page labels give you the ability to add URLs to a defined segment. 
When you configure a synthetic test, you provide the URL as well as a page label. There are a few common use cases for comparing similar pages:</p> <ul> <li><a href="https://support.speedcurve.com/docs/synthetic-page-labels">Compare different URLs of the same type</a> across different sites you manage.</li> <li><a href="https://support.speedcurve.com/docs/private-qa-development-and-staging-websites">Compare a pre-production URL</a> to a page on your production site.</li> <li><a href="https://support.speedcurve.com/docs/competitive-benchmarking" target="_blank" rel="noopener">Benchmark key pages</a>&nbsp;against those of your competitors.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/auto-benchmark.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series chart comparison of LCP for three electric car manufacturers' home pages." /></p> <p style="text-align: center;"><em>Benchmark Dashboard from SpeedCurve</em></p> <h3>URL paths can be too granular</h3> <p>While <a href="https://support.speedcurve.com/docs/rum-page-labels#page-label-url-rules">using URL paths to filter your RUM data</a> can be really useful, when looking at datasets for a specific page template it's also important to be able to analyze those pages as groups or segments.</p> <p>To illustrate, let's compare two distribution charts: the first for a set of grouped product pages (PDP), and the second for a unique URL path for a specific product page.</p> <p>This distribution for pages labeled as 'PDP' (product details page) is a good example of a fully-formed set of data representing a population:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/pdp-histo-blue.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Fully formed histogram for page load time" /></p> <p>Next, this distribution illustrates the issue with cardinality that occurs when attempting to analyze a specific URL.
Even the p75 is quite different between the two distributions:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/product-url-histo-blue.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Sparse histogram of page load time for a unique url" /></p> <p>With such a small dataset, it's difficult to draw firm conclusions from the second chart or to use it to inform your performance budgets.</p> <p>For more information on interpreting histograms, <a href="https://support.speedcurve.com/docs/how-to-read-a-histogram" target="_blank" rel="noopener">see this support article</a>.</p> <h3>Combine synthetic and RUM datasets</h3> <p>One of the things SpeedCurve is known for (and extremely passionate about) is using the complementary datasets from RUM and synthetic monitoring. When set up correctly, you have the ability to match synthetic page labels with RUM page labels. This opens up a lot of opportunity to do things like:</p> <h4>Present both datasets on a single chart</h4> <p>This example shows Largest Contentful Paint (LCP) from RUM for a page labeled 'Home' along with Image Size and the Lighthouse Performance score from a synthetic test with the same label:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/combined-chart.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series chart showing LCP from RUM data, image size and lighthouse score from synthetic."
/></p> <h4>Correlate RUM data with synthetic diagnostics for Core Web Vitals</h4> <p>When using both RUM and synthetic, <a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough">your SpeedCurve Vitals dashboard</a> provides the most informed metrics (Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift) from RUM, alongside synthetic diagnostics such as LCP element identification and performance optimization recommendations.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/cwv-dash.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Core web vitals dashboard showing combined RUM and Synthetic datasets" /></p> <h4>Prioritize performance opportunities by RUM impact</h4> <p>Your <a href="https://support.speedcurve.com/docs/aggregated-lighthouse-results">Improve dashboard</a> aggregates performance recommendations from synthetic testing and identifies the percentage of your RUM traffic that would benefit. This is extremely helpful when trying to prioritize performance work for the most impact on your actual users.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/improve-dash.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table from the Improve dashboard showing recommendations from synthetic audit and the percentage of RUM traffic that would be impacted." /></p> <h2>How do I set up page labels in SpeedCurve?</h2> <h3>Synthetic page labels</h3> <p>This is really straightforward. From your Site Settings, simply add a label next to the URL you want to test.</p> <p>As a best practice, page labels should be unique for each URL if you are planning on comparing them in your benchmarks or correlating them with your RUM data.</p> <p>As a rule of thumb, you should have page labels for everything you label in RUM. 
For an overview of Site Settings, <a href="https://support.speedcurve.com/docs/setup-guide#step-4--add-sites-and-urls-for-testing">see this support article</a>.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/site-settings.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="URLs section of the Site Settings page highlighting a page label" /></p> <h3>RUM page labels</h3> <p><strong>Product update!</strong> We've recently rolled out the ability for customers to define page labels with a no-code, server-side (SpeedCurve) option using URL patterns (option 2 below). There are now four options for labeling pages in RUM depending on your use case. The following are in priority order. If you use more than one option to define a unique page label, we will default to the option with the highest priority.</p> <h4>Option 1: Defining page labels using the RUM API</h4> <p>To programmatically assign labels, simply call the API from your pages using the following JavaScript placed after your SpeedCurve snippet:</p> <p style="padding-left: 60px;"><code>LUX = window.LUX || {};</code></p> <p style="padding-left: 60px;"><code>LUX.label = "Home";</code></p> <p>For more information on the RUM JS API, see <a href="https://support.speedcurve.com/docs/rum-js-api" target="_blank" rel="noopener">this support article</a>.</p> <h4>Option 2: Defining page labels using URL rules (<strong>NEW!</strong>)</h4> <p>This no-code approach is by far the most convenient. From your RUM Settings, choose the Page Label URL Rules tab in the top navigation and select 'Add Label'.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/add_page_label.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page Label Groups interface with selected Add Label button" /></p> <p>Next, choose a name for your label and add one or more URL rules. Full paths are supported (including domain), as well as paths relative to the root domain.
Wildcard characters may also be used as demonstrated below.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/new_rule.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="A new page label rule called 'article' being created with an absolute path and a path containing a wildcard" /></p> <p>Now that you've added a rule (or rules), click on 'URLs with no associated Page Label' to show a list of URLs that are unmatched. You can then build a rule that matches unmatched URLs and assign it to an existing page label.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/reconcile.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page Label reconcile tool showing a list of unmatched page labels and an input field for pattern matching from the list." /></p> <h4>Option 3: Use an existing JavaScript variable</h4> <p>This is a good option if you're already using labels that you'd like to repurpose. From your RUM Settings, choose the 'Page Label JS Variable' tab. Simply fill in the field provided with the JavaScript variable, array, or property. (Note that functions are not supported.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/441/page_label_variable.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Page label JS Variable UI with a form field populated with a variable called page_name" /></p> <h4>Option 4: Default to <code>document.title</code></h4> <p>If no other options are used to define a page label, the <code>document.title</code> of the page will be used. This is typically not the most desirable option due to the uniqueness of the title. The titles are sometimes extremely long and separated by pipes and spaces (' | ') which can make it hard to navigate dashboard filters, chart titles, etc. 
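<p>If your titles do follow that long, pipe-separated pattern, one mitigation is to derive a shorter label yourself and assign it via the RUM JS API (option 1). The sketch below makes assumptions: the helper name and the title format shown are hypothetical, so adjust the split to match your own titles.</p>

```javascript
// Sketch (hypothetical helper): derive a short page label from a
// pipe-separated document.title such as "Blue Widget | Acme Store".
function labelFromTitle(title) {
  // Keep only the first segment and trim surrounding whitespace.
  return title.split("|")[0].trim();
}

// In the browser, you might then assign the label after the SpeedCurve snippet:
//   LUX = window.LUX || {};
//   LUX.label = labelFromTitle(document.title);

console.log(labelFromTitle("Blue Widget | Acme Store")); // "Blue Widget"
```

<p>Note that this only shortens the label; grouping pages with URL rules (option 2) is still the more robust way to avoid one label per page.</p>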
Additionally, in some cases titles are so unique that you'll run into the same issues discussed earlier around URL cardinality.&nbsp;</p> <p>For more information on setting up your RUM page labels, see <a href="https://support.speedcurve.com/docs/rum-page-labels" target="_blank" rel="noopener">this support article</a>.</p> Wed, 02 Nov 2022 00:00:00 +1300 Element Timing: One true metric to rule them all? https://www.speedcurve.com/blog/element-timing-one-true-metric <p><span style="color: #1f1f1f; font-size: 16px;">One of the great things about Google's <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a> is that they provide a standard way to measure our visitors&rsquo; experience. Core Web Vitals can answer questions like:</span></p> <ul> <li><strong>When was the largest element displayed?</strong> Largest Contentful Paint (LCP) measures when the largest visual element (image or video) finishes rendering.</li> <li><strong>How much did the content move around as it loads?</strong> Cumulative Layout Shift (CLS) measures the visual stability of a page.</li> <li><strong>How responsive was the page to visitors' actions?</strong> First Input Delay (FID) measures how quickly a page responds to a user interaction, such as a click/tap or keypress.</li> </ul> <p>Sensible defaults, such as Core Web Vitals, are a good start, but one pitfall of standard measures is that they can miss what&rsquo;s actually most important.</p> <h2>The (potential) problems with Largest Contentful Paint</h2> <p>Largest Contentful Paint (LCP) makes the assumption that the largest visible element is the most important content from the visitors&rsquo; perspective; however, we don&rsquo;t have a choice about which element it measures. 
LCP may not be measuring the most appropriate &ndash; or even the same &ndash; element for each page view.</p> <h3>The LCP element can vary for first-time vs repeat visitors</h3> <p>In the case of a first-time visitor, the largest element might be a consent banner. On subsequent visits to the same page, the largest element might be an image for a product or a photo that illustrates a news story.</p> <p>The screenshots from What Hifi (a UK audio-visual magazine) illustrate this problem. When the consent banner is shown, one of its paragraphs is the LCP element. When the consent banner is not shown, an article title becomes the LCP element. In other words, the LCP timestamp varies depending on which of these two experiences the visitor had!</p> <p style="text-align: center;"><a href="https://www.whathifi.com"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/whathifi2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="https://www.whathifi.com with and without the consent banner visible" /><em>What Hi Fi</em></a><em>&nbsp;with and without the consent banner visible</em></p><h3>Even when it&rsquo;s visible, a cookie banner may not always be the LCP element</h3> <p>In this example from my local newspaper, the story image is displayed and then the consent dialog covers it.
But because the text elements in the dialog are smaller than the image, the image is measured for LCP.</p> <p style="text-align: center;"><a href="https://www.stroudnewsandjournal.co.uk/sport"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/stroudnewsandjournal.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="LCP element on www.stroudnewsandjournal.com with and without consent banner visible" /><em>Stroud News</em></a><em> with and without the consent banner visible</em></p> <p style="text-align: center;">&nbsp;</p> <h3>The largest element can vary by viewport size</h3> <p>On some devices What HiFi has a story image hidden behind the ad at the bottom of the page, which gets measured as LCP. But more commonly we see variations between desktop and mobile viewport sizes, where different elements are measured for LCP.</p> <p>Joules clothing has this challenge on its product listing pages, where the message about the impact of UK postal services going on strike is the LCP element at mobile viewport sizes, but one of the product images becomes the LCP element at larger viewports.</p> <p style="text-align: center;"><a href="https://www.joules.com"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/joules-mobile-v-desktop.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparing the LCP element between mobile and desktop on Joules product listing pages" /><em>Joules</em></a><em>&nbsp;on mobile and desktop viewports</em></p> <p style="text-align: center;">&nbsp;</p> <h3>LCP uses the rendering timestamp to prioritize same-size elements</h3> <p>Joules also highlights another challenge with LCP &ndash; if I asked you to guess which element would be measured for LCP you&rsquo;d probably guess the first product image, but as all four images are the same size, and the third image was actually rendered first, it&rsquo;s measured for the LCP timestamp.
This might be fine in some situations, but the LCP element might not be the most important image from your &ndash; or your visitors' &ndash; perspective.</p> <h3>Other caveats...</h3> <p>Those are just some examples of the issues we come across when measuring visitors&rsquo; experiences day-to-day. There are other examples, such as:</p> <ul> <li>image elements that are animated in, and</li> <li>'soft-navigations' in single-page apps (SPAs).</li> </ul> <h2>How to understand which elements are chosen for LCP</h2> <p>If you want to understand which elements are being chosen for LCP, you can either use <a href="https://www.speedcurve.com/features/performance-monitoring/">synthetic monitoring</a>&nbsp;or the <a href="https://github.com/GoogleChrome/web-vitals/blob/main/src/attribution/onLCP.ts#L52">web-vitals library</a>, which includes support to help determine which element is the LCP element.</p> <p>With LCP, Chrome is making a best guess on which element is most important, but it may not be consistent between different pages, or different viewports, and it can be affected by other UI elements too.</p> <p>There's no guarantee that LCP is measuring the moments that matter to your site and your visitors. That&rsquo;s where Element Timing helps.</p> <h2>What is Element Timing?</h2> <p>Similar to LCP, Element Timing measures when a DOM element is rendered. Unlike LCP, Element Timing allows you (rather than Chrome) to decide which elements you want to measure.
And unlike LCP, Element Timing allows you to measure more than one element on a page.</p> <p>Element Timing is implemented by adding the <code>elementtiming</code> attribute to the element you want to measure, with its value set to the identifier the timestamp will be given.</p> <p>For example, if we wanted to measure when an image is displayed, we could use this markup:</p> <pre class="language-markup"><code>&lt;img src="/images/logo.png" elementtiming="logo-shown" /&gt;</code></pre> <p>Then, when the logo is displayed, an Element Timing entry with the identifier &lsquo;logo-shown&rsquo; will be created, containing a timestamp and other data about the element, e.g., intersection with the viewport.</p> <p>Timing entries can be retrieved using a Performance Observer, and the data can be forwarded to a RUM or analytics product:</p> <pre class="language-markup"><code>&lt;script&gt;
const observer = new PerformanceObserver((list) =&gt; {
  list.getEntries().forEach((entry) =&gt; {
    // process each Element Timing entry here
  });
});
observer.observe({ entryTypes: ["element"] });
&lt;/script&gt;</code></pre> <h2>How Element Timing can fill the gaps that Largest Contentful Paint leaves</h2> <p>Revisiting the Joules example from earlier, we could use Element Timing to measure when landmarks such as the logo or category title are displayed.
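<p>As a sketch of how several landmark timings could be gathered for beaconing, the helper below folds Element Timing entries into a simple payload keyed by their <code>elementtiming</code> identifiers. The helper name and the identifiers shown are hypothetical, not part of the SpeedCurve snippet:</p>

```javascript
// Sketch (hypothetical helper): fold Element Timing entries into a payload
// keyed by identifier, keeping the earliest render time seen for each.
function collectElementTimings(entries, payload = {}) {
  for (const entry of entries) {
    if (!(entry.identifier in payload) || entry.startTime < payload[entry.identifier]) {
      payload[entry.identifier] = entry.startTime;
    }
  }
  return payload;
}

// In the browser this would be fed from a PerformanceObserver, e.g.:
//   new PerformanceObserver((list) => collectElementTimings(list.getEntries(), payload))
//     .observe({ type: "element", buffered: true });

// Demo with mock entries shaped like PerformanceElementTiming:
const payload = collectElementTimings([
  { identifier: "logo-shown", startTime: 320 },
  { identifier: "category-title", startTime: 410 },
  { identifier: "logo-shown", startTime: 900 }, // later duplicate is ignored
]);
console.log(JSON.stringify(payload)); // {"logo-shown":320,"category-title":410}
```

<p>Passing <code>buffered: true</code> to the observer lets it pick up entries that were recorded before the observer was registered, which matters for elements that render very early.</p>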
We could also measure when the first product image is rendered.</p> <p style="text-align: center;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/joules-landmarks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Possible landmarks we might want to measure using Element Timing on Joules' product listing page" /><em>Example landmarks on Joules' Product Listing Page (PLP)</em></p> <p style="text-align: center;">&nbsp;</p> <p>Similarly, a publisher might want to measure when their logo, the story headline, and the accompanying image are shown.</p> <p>It should also be possible to use Element Timing to measure some pop-ups, such as consent banners; however, for now this probably depends on a site serving its own consent banner rather than relying on a third-party service.</p> <h2>A replacement for LCP in single-page apps?</h2> <p>A common question we get from customers with single-page applications (SPAs) is:&nbsp;<em>"How do I measure LCP after a route change?"</em></p> <p>The short answer is: <em>"You can&rsquo;t."</em></p> <p>LCP measurement starts when a navigation begins and stops at the first user input. SPAs often only have a single navigation as the initial page loads, with responses to user actions updating the contents of the page rather than navigating to a new one.</p> <p>Element Timing doesn't have these restrictions. It can measure new elements as they're inserted into the DOM and rendered, even after user interaction.<br /><br />In the past we've typically relied on <a href="https://support.speedcurve.com/docs/single-page-applications#using-custom-metrics-user-timing-marks--measures-in-a-spa">User Timing's marks and measures</a> to measure how long a route change or other action takes to complete.
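<p>Combining the two ideas, here is a hedged sketch of how a route change's render time might be derived: a User Timing mark records when the route change starts, and the Element Timing entry for an element rendered by the new view provides the end point. The helper name and the <code>route-hero</code> identifier are hypothetical:</p>

```javascript
// Sketch (hypothetical helper): both a User Timing mark's startTime and an
// Element Timing entry's startTime are relative to navigation start, so the
// route change's render time is a simple difference.
function routeChangeRenderTime(markStartTime, elementEntry) {
  return elementEntry.startTime - markStartTime;
}

// Browser wiring (not runnable outside a browser):
//   performance.mark("route-change-start"); // call when the route change begins
//   new PerformanceObserver((list) => {
//     for (const entry of list.getEntries()) {
//       if (entry.identifier === "route-hero") {
//         const mark = performance.getEntriesByName("route-change-start").pop();
//         console.log(routeChangeRenderTime(mark.startTime, entry));
//       }
//     }
//   }).observe({ type: "element" });

// Demo with mock values: route change began at 5000ms, hero rendered at 5650ms.
console.log(routeChangeRenderTime(5000, { startTime: 5650 })); // 650
```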
With Element Timing we can also measure when the visual changes are displayed and get a more accurate picture of the users' experience.</p> <h2>How to collect Element Timing data in SpeedCurve</h2> <p>Element Timings can be collected in both Synthetic and RUM by configuring them in the <a href="https://support.speedcurve.com/docs/custom-metrics-video">Custom Metrics</a> section of your settings:</p> <p><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/add-custom-metric.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Adding a custom metric in SpeedCurve" /></p> <p style="text-align: center;"><em>Adding a custom metric in SpeedCurve</em></p> <p style="text-align: center;"><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/configuring-element-timing.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Configuring Element Timing in SpeedCurve" /></p> <p style="text-align: center;"><em>Configuring Element Timing in SpeedCurve</em></p> <p style="text-align: center;">&nbsp;</p> <p>After you&rsquo;ve started collecting Element Timings, you can add them to custom charts on your <a href="https://support.speedcurve.com/docs/custom-charts-dashboards">Favorites dashboards</a> and start tracking the moments that matter for you and your visitors. 
You can also <a href="https://support.speedcurve.com/docs/performance-budgets-and-alerts">create performance budgets and get alerts</a> when they exceed their thresholds.</p> <p>In this example, a retailer is tracking when their logo and the first product image is displayed:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/visualising-element-timing.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Timeseries chart visualising the times when the logo and first product image was displayed" /></p> <p style="text-align: center;"><em>Visualising when a retailer's logo and first product image is displayed to a visitor</em></p> <h2>Limitations of Element Timing</h2> <p>Currently, Element Timing has a few limitations.</p> <h3>Only available in Chromium-based browsers</h3> <p>Neither <a href="https://webkit.org/status/">WebKit's</a>&nbsp;nor <a href="https://mozilla.github.io/standards-positions/"><span class="s1" style="color: #dca10d;">Firefox&rsquo;s</span></a> status trackers mention it,&nbsp;but as it's implemented as an attribute in markup, it will degrade gracefully in those browsers:</p> <p style="text-align: center;"><span class="s2" style="text-decoration-line: underline;"><a href="https://caniuse.com/?search=elementtiming"><img class="blog-img" src="https://blog-img.speedcurve.com/img/440/can-i-use-element-timing.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table from caniuse.com showing browser support for Element Timing" /><em>Can I Use Element Timing?</em></a></span></p> <p>&nbsp;</p> <h3>Supported on a limited subset of HTML elements</h3> <p>For privacy and security reasons Element Timing is only supported on a limited subset of HTML elements:</p> <ul> <li><code>img</code> elements</li> <li><code>image</code> elements within an SVG</li> <li><code>video</code> element's poster image</li> <li>elements with a <code>background-image</code></li> <li>text nodes</li> </ul> <p>But even with just this subset we can still measure 
many of the loading milestones we care about, such as key images and headings.</p> <h3>Can't track elements within iframes</h3> <p>As Element Timing only measures elements in the current document, we can&rsquo;t yet use it to track when, say, an embedded YouTube video renders or when ads are displayed, as these are typically contained within an iframe. (There&rsquo;s a <a href="https://docs.google.com/document/d/1LbXafcZMHagLReSkEkjhQdOIzeA9RS8_2e0_g5QLgY0"><span class="s1" style="color: #dca10d;">proposal to allow iframes to expose some of their performance data to their parent</span></a> that might enable this, but it&rsquo;s at an early stage.)</p> <h3>Elements in the Shadow DOM are currently excluded from Element Timing</h3> <p><a href="https://youtu.be/sOaZsMFScE0">Watch the discussion</a> about this at the most recent Web Performance Working Group TPAC meeting.</p> <h2>Summary</h2> <p>Largest Contentful Paint is a useful default, but the largest element isn't necessarily the most important from a visitor's perspective.</p> <p>Element Timing allows us to choose the elements that are most important on our own pages, and therefore measure the moments that are most essential to our visitors' experience.</p> <p>You can use SpeedCurve to collect and track Element Timings so you can see how the performance of key elements changes over time or varies between different pages.</p> <p>If you've been using Element Timings to measure your visitors' experiences, or if you've implemented them in a SPA, we'd love to&nbsp;<a href="mailto:support@speedcurve.com">hear about it</a>.</p> <h2>References</h2> <ul> <li><span class="s1" style="text-decoration-line: underline;"><a href="https://mozilla.github.io/standards-positions/">Mozilla Standards Positions</a></span></li> <li><span class="s1" style="text-decoration-line: underline;"><a href="https://webkit.org/status/">WebKit Feature Status</a></span></li> <li><span class="s1" style="text-decoration-line: underline;"><a 
href="https://wicg.github.io/element-timing/">Element Timing API Specification</a></span></li> <li><a href="https://w3c.github.io/user-timing/">W3C User Timing API Specification</a></li> <li><span class="s1" style="text-decoration-line: underline;"><a href="https://caniuse.com/?search=elementtiming">Browser support for ElementTiming</a></span></li> <li><span class="s1" style="text-decoration-line: underline;"><a href="https://support.speedcurve.com/docs/customer-data#adding-custom-data-to-settings">Configuring SpeedCurve Custom Data</a></span></li> <li><span class="s1" style="text-decoration-line: underline;"><a href="https://docs.google.com/document/d/1LbXafcZMHagLReSkEkjhQdOIzeA9RS8_2e0_g5QLgY0">Proposal for Cross Frame Performance Timeline</a></span></li> </ul> Wed, 21 Sep 2022 00:00:00 +1200 July 2022 product update: Web Vitals support, more synthetic test agents & performance heat maps https://www.speedcurve.com/blog/july-2022-product-update <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/july-banner.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's been another busy month here at SpeedCurve! Check out our latest product updates below.</p><h2>RUM: Performance heat maps</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/rum-perf-heat-maps.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Fixing high-traffic but poorly performing areas of your site will improve your overall metrics &ndash; and create a faster, better experience for more of your users. To help you do this, we've added heat maps to your RUM Performance dashboard that let you quickly spot which page labels or paths are experiencing performance issues. 
You can then prioritize and focus on these areas to improve important metrics like Largest Contentful Paint.</p> <h2>Synthetic: New test regions in Italy, Bahrain, South Africa, and Indonesia</h2> <p>Based on your feedback and requests, we've added four new regions from which you can run your synthetic tests:</p> <ul> <li>Italy (Milan)</li> <li>South Africa (Cape Town)</li> <li>Bahrain (Manama)</li> <li>Indonesia (Jakarta)</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/new-test-regions.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>If none of <a href="https://support.speedcurve.com/docs/test-agent-locations">our current test regions</a> gives you the testing locations you need, you can also <a href="https://support.speedcurve.com/docs/private-agents">run your own private agent</a>.</p> <h2>Change summary metrics on your charts</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/summary-metrics.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>We've added a new display option that lets you change the summary metrics shown at the top of each chart. You can now choose between Aggregate, Minimum, Maximum, or Latest. Hovering over any metric will give you a tooltip with more context.</p> <h2>In case you missed it...</h2> <p>Here are some other features we've released in recent months:</p> <p><a href="https://support.speedcurve.com/changelog/vitals-recommendations"><strong>Core Web Vitals badges in performance recommendations</strong></a> &ndash; Identify exactly which performance issues you need to fix to improve your Lighthouse and CWV metrics. All performance recommendations are badged, so you can see which Core Web Vitals you're improving when you fix performance issues. 
(You can find recommendations in your Vitals and Improve dashboards, as well as test details.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/439/vitals-badges.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><a href="https://support.speedcurve.com/changelog/synthetic-update-compare-third-parties"><strong>Compare third parties</strong></a> &ndash; When comparing two synthetic tests, see a comparison of common third parties and identify which third parties are unique to each test.</p> <p><a href="https://support.speedcurve.com/docs/bookmark-and-compare-tests"><strong>Bookmarks dashboard for both Synthetic and RUM</strong></a> &ndash; Compare any two synthetic tests or RUM sessions.</p> <p><a href="https://support.speedcurve.com/changelog/rum-update-path-filter"><strong>RUM path filters</strong></a> &ndash; Monitor popular or custom user paths in your data.</p> <h2>Most-watched help videos</h2> <p>We've created two video series &ndash; <a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough"><strong>dashboard tours</strong></a> and <a href="https://support.speedcurve.com/docs/track-core-web-vitals"><strong>how-to videos</strong></a> &ndash; to help you find and fix performance issues on your site. 
Here are the current top five most-watched videos:</p> <ul> <li><a href="https://support.speedcurve.com/docs/vitals-dashboard-walkthrough">Vitals Dashboard tour</a></li> <li><a href="https://support.speedcurve.com/docs/synthetic-first-and-third-party-dashboard-walkthrough">First and Third Party Dashboard tour</a></li> <li><a href="https://support.speedcurve.com/docs/track-core-web-vitals">Track Google's Core Web Vitals (and create performance budgets for them)</a></li> <li><a href="https://support.speedcurve.com/docs/diagnose-cumulative-layout-shift-cls-issues">Diagnose Cumulative Layout Shift (CLS) issues</a></li> <li><a href="https://support.speedcurve.com/docs/investigate-slow-time-to-interactive-tti">Investigate why your Time to Interactive (TTI) is slow</a></li> </ul> <h2>Most-read support articles</h2> <p>We take a lot of pride in maintaining a comprehensive <a href="https://support.speedcurve.com/docs/welcome"><strong>Support Hub</strong></a>. These are the most popular articles users looked up in the past month:</p> <ul> <li><a href="https://support.speedcurve.com/docs/average-median-percentiles">Understand medians, averages, and percentiles</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Get started with Core Web Vitals</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-real-user-monitoring">Get started with real user monitoring (RUM)</a></li> <li><a href="https://support.speedcurve.com/docs/synthetic-test-scripts">Synthetic test scripting</a></li> <li><a href="https://support.speedcurve.com/docs/metrics-glossary">Glossary of performance metrics</a></li> </ul> <h2>Questions? Feedback? Suggestions?</h2> <p>We'd love to hear from you! 
Leave a comment below or email us at support@speedcurve.com.</p> Wed, 27 Jul 2022 00:00:00 +1200 June 2022 product update: Performance recommendations on Vitals dashboard, RUM path filters & more https://www.speedcurve.com/blog/product-update-june-2022 <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-product-june-2022-newer.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>We've been busy here at SpeedCurve HQ! Here's a roundup of our recent product updates.</p><h2>Performance recommendations on your Vitals dashboard</h2> <p><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals"><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-vitals-audits.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p>On <a href="https://www.youtube.com/watch?v=6dOBbvh4ZLA">your Vitals dashboard</a>, you now get performance recommendations that are specific to each of the Vitals you're tracking &ndash; Largest Contentful Paint, First Input Delay, Total Blocking Time, and Cumulative Layout Shift. This makes your Vitals dashboard a powerful tool for not only seeing how your metrics perform relative to Google's thresholds, but also diagnosing your biggest pain points and getting prioritized solutions.</p> <h2>Vitals badges on Lighthouse performance audits</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-vitals-badges2.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Elsewhere in SpeedCurve, all the performance recommendations you see in your&nbsp;<a href="https://youtu.be/6dOBbvh4ZLA">Vitals</a>&nbsp;and&nbsp;<a href="https://support.speedcurve.com/docs/aggregated-lighthouse-results">Improve</a>&nbsp;dashboards &ndash; as well as in your synthetic test details &ndash; are now badged so you can see which&nbsp;<a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Web Vitals</a>&nbsp;they affect. 
Fix those issues and you should see improvements in your Vitals and&nbsp;<a href="https://support.speedcurve.com/docs/lighthouse">Lighthouse</a>&nbsp;scores.</p> <h2>RUM update: Path filters</h2> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-rum-paths.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Path filters let you monitor popular or custom user paths in your RUM data.</p> <h2>RUM update: lux.js v301</h2> <p>New features:</p> <ul> <li>The synthetic onload time for <a href="https://support.speedcurve.com/docs/single-page-applications">SPAs</a> can be marked with <code>LUX.markLoadTime()</code>, allowing <code>LUX.send()</code> to be called later in the page lifecycle.</li> <li>Added the <a href="https://speedcurve-metrics.github.io/lux.js/debug-parser.html">SpeedCurve RUM Debug Parser</a> to help interpret the debug messages.</li> <li><code>LUX.getDebug()</code> now includes events that help to debug some metrics, including LCP, CLS, element timing, and long tasks.</li> <li>Source maps are now available for lux.js.</li> </ul> <p>Bug fixes:</p> <ul> <li>Fixed a bug where JavaScript errors were only tracked on the first SPA page view.</li> </ul> <h2>Synthetic update: Compare third parties</h2> <p><a href="https://youtu.be/UrF6HpFC08Q"><img class="blog-img" src="https://blog-img.speedcurve.com/img/438/blog-third-parties2.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p style="color: #1f1f1f; font-family: Gotham, sans-serif; font-size: 16px; font-style: normal; 
font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">When&nbsp;<a style="text-decoration: underline; color: #35a2d3;" href="https://support.speedcurve.com/docs/bookmark-and-compare-tests">comparing two tests</a>, we now give you a comparison of common&nbsp;<a style="text-decoration: underline; color: #35a2d3;" href="https://support.speedcurve.com/docs/first-third-parties">third parties</a>. We also identify which third parties are unique to each test. Using this feature, you can quickly identify new and problematic third parties.&nbsp;<a style="text-decoration: underline; color: #35a2d3;" href="https://youtu.be/UrF6HpFC08Q">This short video</a>&nbsp;explains how to diagnose third-party regressions in SpeedCurve.</p> <h2>New in the Support Hub</h2> <ul> <li><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Get started with Core Web Vitals</a></li> <li><a href="https://support.speedcurve.com/docs/seo-and-web-performance">SEO and web performance</a></li> <li><a href="https://support.speedcurve.com/docs/custom-metrics-for-anti-flicker-snippets">Custom metrics for anti-flicker snippets</a></li> <li><a href="https://support.speedcurve.com/docs/cls-scores-in-rum-vs-synthetic">Understand Cumulative Layout Shift (CLS) scores in RUM vs synthetic</a></li> <li><a href="https://support.speedcurve.com/docs/investigate-rum-sessions">Investigate RUM Sessions</a></li> </ul> <h2>Questions? Feedback? Suggestions?</h2> <p>We'd love to hear from you! 
Send us a note at support@speedcurve.com.</p> Mon, 13 Jun 2022 00:00:00 +1200 Sampling RUM: A closer look https://www.speedcurve.com/blog/sampling-rum <p>Being able to set a sample rate in your real user monitoring (RUM) tool allows you to monitor your pages while managing your spending. It's a great option if staying within a budget is important to you. With the ability to sample real user data comes this question...</p> <h2>"What should my RUM sample rate be?"</h2> <p>This frequently asked question doesn't have a simple answer. Refining your sample rate can be hit or miss if you aren&rsquo;t careful. In a <a href="https://www.speedcurve.com/blog/sampling-real-user-monitoring/">previous post</a>, I discussed a few considerations when determining how much RUM data you really need to make informed decisions. If you sample too much, you may be collecting a lot of data you'll never use. On the other hand, if you sample too little, you risk creating variability in your data that is hard to trust.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/rum-sample-rate2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In this post, we are going to do a bit of research and let the data speak for itself. I took a look at the impact of sampling at various levels for three t-shirt-sized companies (Small, Medium, Large) with the hope of providing some guidance for those of you considering sampling your RUM data.</p> <p>I'll cover:</p> <ul> <li>Methodology</li> <li>Key findings</li> <li>Considerations</li> <li>Recommendations</li> </ul><h2 id="methodology">Methodology</h2> <h3>Traffic size</h3> <p>I tried to keep this research as simple as possible. We see a large variety of sites at SpeedCurve, representing an assortment of countries, industry segments, traffic levels and more. 
For the purposes of this study, I'll use example sites from three cohorts:</p> <ol> <li>Large: &gt;1M daily page views</li> <li>Medium: 250K-500K daily page views</li> <li>Small: 10K-100K daily page views</li> </ol> <p>It's important to note that the sites I looked at collect 100% of their RUM data.&nbsp;</p> <h3>Time frame</h3> <p>24 hours. Traffic fluctuates based on the hour of the day, day of the week, and due to seasonality. I looked at the same date, mid-week, for each of the sites, which represented a consistent pattern of daily traffic.</p> <h3>Metric</h3> <p>This was a little tough. Not all metrics are created equal and I try to avoid picking favorites. At the time of this writing, Largest Contentful Paint (LCP) is <a href="https://caniuse.com/mdn-api_largestcontentfulpaint" target="_blank" rel="noopener">not supported by all browsers</a>, so it brings with it a bit of bias. This is true of many of the metrics we collect at SpeedCurve. We'll discuss this and other considerations a bit later. In the end, I settled on <a href="https://caniuse.com/mdn-api_performancetiming_loadeventend" target="_blank" rel="noopener">loadEventEnd</a>&nbsp;due to the fact that it has widespread support across browser platforms.&nbsp;</p> <h3>Sampling method</h3> <p>At SpeedCurve, we have the ability to sample based on sessions versus randomly sampling page views. We feel it's more important to maintain the integrity of the session than to specify precisely how many page views you want to look at. Because we track and identify user sessions, it made things a lot easier for me to sample the data&nbsp;<em>after</em> the fact.</p> <h3>Interpreting the data</h3> <p>There are a lot of ways to compare the data. 
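The session-based sampling described above can be sketched as a deterministic, per-session decision: hash the session ID into a uniform bucket and compare it against the rate, so every page view in a session gets the same keep-or-drop verdict. This is a hypothetical illustration (the function name and hashing scheme are mine, not SpeedCurve's or lux.js's actual implementation):

```python
import hashlib

def session_is_sampled(session_id: str, sample_rate: float) -> bool:
    """Deterministic per-session sampling decision.

    Hashing the session ID (rather than flipping a coin per page view)
    keeps entire sessions together in the sample, which preserves
    session integrity as described above.
    """
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < sample_rate

# Every page view in a session gets the same decision:
decisions = {session_is_sampled("session-abc-123", 0.25) for _ in range(5)}
print(len(decisions))  # 1 - always consistent for the same session ID
```

Because the decision depends only on the session ID, the same session can even be re-evaluated after the fact against a lower rate, which is how sampling was simulated for this study.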
I'm not a data scientist and I wanted to demonstrate the impact of sampling using views of the data that are familiar to those who have at least seen performance data before.</p> <p style="padding-left: 30px;"><strong>Aggregates:</strong> We will look at the percentage change at the 50th, 75th, and 95th percentiles. I considered anything under 5% acceptable.</p> <p style="padding-left: 30px;"><strong>Histograms:</strong> You can learn a lot if you just look at your data. Histograms are great for showing the performance characteristics of an entire population. (<a href="https://support.speedcurve.com/docs/how-to-read-a-histogram">Learn more about understanding and interpreting histograms</a><span style="color: #1f1f1f;">.)&nbsp;</span><span style="color: #1f1f1f;">For this experiment, we are comparing the overall shape and distribution of our sampled versus unsampled&nbsp;populations. In some cases, the aggregates may have been under 5%, but the histogram was very sparse and didn't resemble the original distribution. </span><span style="color: #1f1f1f;">For example, the differences between these two histograms are obvious despite their medians being within reason. When looking at the 95th percentile, you observe that the long tail is essentially 'missing' from the sampled data. While somewhat unscientific, I used the eyeball test along with the aggregates to decide if the rate was appropriate.</span></p> <p><span style="color: #1f1f1f;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/bad_histo.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of 100% and 1% histogram samples." /><br /></span></p> <p style="padding-left: 30px;"><span style="color: #1f1f1f;"><strong>Time series:</strong>&nbsp;</span>Intraday variability is important if you're using the RUM data operationally. 
A simple time series will be used to illustrate how sampling impacts the 'noise' factor.</p> <h2 id="findings">Key findings</h2> <h3>TL;DR</h3> <p>For the most part, I found that if the sampled population of users was greater than 3,000, the aggregate stats were pretty close to your unsampled population (1-2% difference in the median). However, you should read on to understand some of the trade-offs, depending on your use case for RUM. Or, if you'd rather, go ahead and <a href="#recommendations">jump to the results.</a></p> <h3>RUM for reporting</h3> <p>If you're simply looking to RUM as a reporting tool that can represent your daily performance, you're in luck. You can get away with a relatively small sample of your overall population depending on your size.</p> <p>To determine the smallest sample rate for each group, we looked at a combination of the aggregate numbers and a comparison of the histograms. Note the consistency in the 95th percentile illustrated in these comparison charts.&nbsp;</p> <h4>Small (10K-100K daily page views sampled at 50%)</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/histo_compare_small.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Histogram comparison of full data set and 50% of population for a small site." /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h4>Medium (250K-500K daily page views sampled at 10%)</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/histo_compare_medium.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Histogram comparison of full data set and 10% of population for a medium site." 
/></p> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h4>Large (&gt;1M daily page views sampled at 1%)</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/histo_compare_large.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Histogram comparison of full data set and 1% of population for a large site." /></p> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table3.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h3>Intraday performance monitoring</h3> <p>You might be one of those sites deploying code multiple times a day. Maybe you're susceptible to variability from things such as third parties, varying traffic patterns, or other unknowns. (Aren't we all?) If this is the case, you may have more operational need for RUM. Your sampling rate can have an impact on whether your data appears noisy or unpredictable.</p> <p>Looking at the recommended rates from the previous use case, the examples below show you how much you'll need to dial that up to get a reliable picture of hourly performance, and even more if you are looking at real-time monitoring (by minute).</p> <h4 style="text-align: left;"><strong><span style="color: #1f1f1f;">Hourly monitoring:</span></strong></h4> <h4 style="text-align: left; padding-left: 90px;">Small &ndash; increased from 50% -&gt; 75%</h4> <h4><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/small_timeseries_compare_by-hour.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of time series hourly data for a small site." 
/></h4> <p style="text-align: center;"><em>While increasing the rate helped remove some of the large deviations seen, the data is naturally much more variable for small-traffic sites.</em></p> <h4 style="text-align: left; padding-left: 90px;">Medium &ndash; increased from 10% -&gt; 25%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/medium_timeseries_compare_by-hour.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of time series hourly data for a medium site." /></p> <p style="text-align: center;"><em style="text-align: center;">While the peak hours were somewhat consistent at 10%, increasing the rate to 25% removed the larger off-peak deviations.</em></p> <h4 style="text-align: left; padding-left: 90px;">Large &ndash; increased from 1% -&gt; 10%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/large_timeseries_compare.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Side by side comparison of time series hourly data for a large site." /></p> <p style="text-align: center;"><em style="text-align: center;">Increasing the rate to 10% greatly improved consistency for the large-traffic site.</em></p> <h4 style="text-align: left;"><strong><span style="color: #1f1f1f;">Real-time monitoring:</span></strong></h4> <h4 style="text-align: left; padding-left: 90px;">Small &ndash; increased from 75% -&gt; 95%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/small_timeseries_compare_by-minute.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series comparison of realtime data for a small site." /></p> <p style="text-align: center;"><em style="text-align: center;">For some of the larger spikes in the data, increasing the sample to 95% was effective. 
However, given how variable the data is, it's hard to say if real-time monitoring of smaller sites like this is really effective.</em></p> <h4 style="text-align: left; padding-left: 90px;">Medium &ndash; increased from 25% -&gt; 75%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/medium_timeseries_compare_by-minute.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series comparison of realtime data for a medium site." /></p> <p style="text-align: center;"><em style="text-align: center;">For the medium-traffic site, there was benefit&nbsp;when increasing the rate to 75%.&nbsp;</em></p> <h4 style="text-align: left; padding-left: 90px;">Large &ndash; increased from 10% -&gt; 40%</h4> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/large_timeseries_compare_by-minute.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series comparison of realtime data for a large site." /></p> <p style="text-align: center;"><em style="text-align: center;">For this particular large-traffic site, getting real-time data consistent with the whole population required a much larger increase in the sample rate than anticipated.</em></p> <p>&nbsp;</p> <div style="overflow-x: auto;"> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table4.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h2 id="considerations">Considerations</h2> <h3>Data segmentation</h3> <p>Here comes the kicker. One of the great things about RUM is the ability to slice and dice your data. The distribution of your user population is made up of all types of user experiences. 
This has a pretty big impact on your sample rate, as when you filter/segment/slice/dice your data, you're effectively reducing the size of your population.</p> <p><strong>When determining how sampling will be affected by the segments you care about, get an idea of the percentage of traffic that is represented by the segment and factor that percentage into your overall rate.</strong> Some of the common segments include country, page types, device types and browsers. After applying a lot of segmentation to the experiments above, a good rule of thumb is to increase your sample rate by 50% (or collect 100% of the data for small sites).</p> <h3>Metrics</h3> <p>As mentioned earlier, there are some metrics (okay, many metrics) that aren't supported across browsers. Just as you would increase your sample rate for the segments, <strong>you should consider increasing the sample rate for metrics such as FCP, LCP and Total Blocking Time, which don't have broad browser support</strong>. This is also true of some network-related metrics that don't occur on every page load (DNS, Connect, SSL, Redirect).</p> <h3>Increasing time windows</h3> <p>It's sometimes recommended that you need to capture 100% of your data if you are comparing RUM data for different experiments, or capturing conversion data in order to understand the business impact of performance. This is not always the case. As an alternative, <strong>you can look at a much larger time window with a LOT more sampled data</strong>. This is also true of sites with low traffic numbers. Simply expand your time window until you have a healthy distribution.<br /><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/largerwindows-histo.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Comparing 3 histograms to show the impact of widening your time window." /></p> <h2 id="recommendations">Recommendations</h2> <p>The intent of this post was to help provide some direction around sampling RUM data. 
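The arithmetic behind the segmentation advice above is worth making concrete: filtering to a segment multiplies your sample down. The helper below (hypothetical function names and traffic numbers) estimates how many sampled page views a segment retains, and what rate would keep it above the roughly 3,000-user level where the aggregates in this study stabilized:

```python
def sampled_segment_views(daily_views, sample_rate, segment_share):
    """Expected sampled page views remaining after filtering to one segment."""
    return daily_views * sample_rate * segment_share

def rate_for_segment(daily_views, segment_share, target=3000):
    """Sample rate needed for a segment to retain roughly `target` sampled
    views (aggregates in this study stabilized above ~3,000 users)."""
    return min(1.0, target / (daily_views * segment_share))

# A 10% sample of 1M daily views leaves a 5% segment with only ~5,000 views
views_left = sampled_segment_views(1_000_000, 0.10, 0.05)

# Keeping ~3,000 views for a 1% segment needs roughly a 30% sample rate
needed_rate = rate_for_segment(1_000_000, 0.01)
```

The same back-of-the-envelope check works for poorly supported metrics: treat the share of browsers reporting the metric like a segment share and adjust the rate accordingly.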
The recommended levels are not intended to be precise, as there are too many factors that could influence things one way or the other. Use this table as a guide in addition to the knowledge you have about your users:</p> <p>&nbsp;</p> <div class="table-responsive"><img class="blog-img" src="https://blog-img.speedcurve.com/img/437/table5.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></div> <h2>Learn more about RUM sampling</h2> <p>As you may have guessed, SpeedCurve supports data sampling in RUM. <a href="https://support.speedcurve.com/docs/sample-rate" target="_blank" rel="noopener">This article goes into detail about how our RUM sampling works</a> and explains the different ways you can implement it. If you have any questions or feedback, we'd love to hear from you. Leave a comment below or send us a note at support@speedcurve.com.</p> </div> Wed, 01 Jun 2022 00:00:00 +1200 Understanding the performance impact of anti-flicker snippets https://www.speedcurve.com/blog/web-performance-anti-flicker-snippets <p>Experimentation tools that use asynchronous scripts &ndash; such as Google Optimize, Adobe Target, and Visual Website Optimizer &ndash; recommend using an anti-flicker snippet to hide the page until they've finished executing. 
But this practice comes with some performance measurement pitfalls:</p> <ul> <li>Hiding the contents of the page can have a dramatic effect on the Web Vitals that measure visual experience, such as First Contentful Paint (FCP) and Largest Contentful Paint (LCP).</li> <li>Anti-flicker snippets can also affect Cumulative Layout Shift (CLS) and the synthetic equivalent of First Input Delay (FID), Total Blocking Time (TBT).</li> </ul> <p>In this post we'll look at how anti-flicker snippets work, their impact on Web Vitals, and how to measure the delay they add to visitors' experience.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/436/anti-flicker-vc-bounce-rate.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p><h2>Hiding the page contents</h2> <p>Normally web pages are rendered progressively. As browsers start to receive content, they can lay out and render page elements and display the content bit by bit.</p> <p>Anti-flicker snippets hide the contents of a page until the experimentation tool (e.g. Google Optimize) has finished applying its experiments. 
The hypothesis is that if a visitor sees the page changing, it may influence how they behave &ndash; either because they had an unpleasant experience or simply because they became aware that they're in an experiment.&nbsp;<br /><br />But as you can see in these filmstrips of Wiggle, a UK cycling retailer, hiding the page can have a dramatic impact on a visitor's experience:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/436/wiggle.001.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Filmstrip showing how Google Optimize's Anti-Flicker snippet delays content from being visible" /></p> <p style="text-align: center;"><em>Comparison of a site loading with (top) and without (bottom) the anti-flicker snippet enabled</em></p> <p style="text-align: center;">&nbsp;</p> <p><strong>The test in the top row has the anti-flicker snippet in place.</strong> The content is displayed 'all at once' when the anti-flicker snippet reveals the page.&nbsp;</p> <p><strong>The test in the bottom row has the anti-flicker snippet removed.</strong> The page loads progressively, with the menu and logo appearing about 1.5s before the hero image. The anti-flicker snippet delayed showing the content for two seconds.</p> <p>These tests were over a fast network connection, so First Contentful Paint and Largest Contentful Paint happened simultaneously. On slower connections, however, there is a gap between FCP and LCP &ndash; and some progressive rendering &ndash; but FCP still won't start until the anti-flicker snippet finishes.<br /><br />(I chose Wiggle semi-randomly after searching the HTTP Archive for sites that use Google Optimize.)</p> <h2>Effects on other Web Vitals</h2> <p>Anti-flicker snippets can affect other Web Vitals, too:</p> <ul> <li><strong>Decrease in Total Blocking Time</strong> &ndash; TBT is a synthetic monitoring metric that measures how long JavaScript and other tasks prevent the page from handling user interaction. 
It starts measuring at First Contentful Paint (FCP) and stops at Time to Interactive (TTI). Because anti-flicker snippets delay FCP, the window for measuring Long Tasks gets smaller, which means you may see a decrease in TBT.</li> <li><strong>Lower Cumulative Layout Shift score</strong> &ndash; CLS measures how much content moves around on the page. If some of this movement happens while the page is hidden, then the CLS score will be lower.</li> </ul> <p>To summarize: On one hand, anti-flicker snippets make metrics such as FCP (and possibly LCP) worse. On the other hand, they can appear to improve TBT and CLS.</p> <p>My instinct is that showing content to the visitor sooner may be a higher priority than the incidental TBT or CLS boost. But rather than trust my instincts, we should measure the impact of hiding the page. To do that, we need to understand a little about how the snippets work.</p> <h2>How anti-flicker snippets work</h2> <p>Anti-flicker snippets typically add a style with <code>opacity: 0</code> to the elements to be hidden. In Google Optimize's case, the default is to hide the whole document.</p> <p>The (un-minified) Optimize snippet below declares the <code>.async-hide</code> class in a style block, and then applies it to the document using a script. It also defines a function to remove the class and sets a timer to call this function after four seconds.</p> <p>The <code>.async-hide</code> class will be removed either when Google Optimize finishes applying its variants or when the timeout value is reached. In the example below, FCP can be delayed by up to four seconds (the default in the snippet example).</p> <pre class="language-markup"><code>&lt;!-- anti-flicker snippet for Google Optimize (recommended) --&gt;
&lt;style&gt;
  .async-hide { opacity: 0 !important }
&lt;/style&gt;
&lt;script&gt;
  (function(a, s, y, n, c, h, I, d, e) {
    s.className += ' ' + y;
    h.start = 1 * new Date;
    h.end = I = function() {
      s.className = s.className.replace(RegExp(' ?' + y), '');
    };
    (a[n] = a[n] || []).hide = h;
    setTimeout(function() { I(); h.end = null }, c);
    h.timeout = c;
  })(window, document.documentElement, 'async-hide', 'dataLayer', 4000, { 'GTM-XXXXXX': true });
&lt;/script&gt;</code></pre> <p>&nbsp;</p> <p>If you'd like to understand the snippet in more detail, there's an annotated version in this Optimize support article: <a href="https://developers.google.com/optimize">Using the Optimize anti-flicker snippet | Google Developers</a></p> <p>As a fallback, four seconds is a long time. Based on the Chrome UX Report thresholds, a page needs to display the Largest Contentful Paint element within 2.5s for it to be considered good.</p> <p>Not every visitor will reach that timeout. For some visitors, the experiments may complete soon enough to avoid it.</p> <p>How often the snippet reaches the timeout will depend on factors like:</p> <ul> <li>the number of experiments,</li> <li>how long the experiments take to execute,</li> <li>what device the visitor is using, and</li> <li>the speed of the network the device is connected to.</li> </ul> <p>If we measure how long the page is hidden, we can start to understand how Optimize affects our visitors' experiences, the range of delays it adds, and how it influences visitor behaviour.&nbsp;</p> <h2>Measuring how long the page is hidden</h2> <p>Unfortunately, Google Optimize &ndash; like most third-party tags &ndash; doesn't expose any timing information for its key milestones (page hidden, page shown), but there are still ways we can measure them.</p> <h3>1.
Update the Optimize anti-flicker snippet to include performance marks and measures</h3> <p>A start mark is recorded just before the hide class is added; then, when the class is removed, an end mark and a duration measure are recorded.</p> <pre class="language-markup"><code>&lt;!-- anti-flicker snippet for Google Optimize (recommended) --&gt;
&lt;style&gt;
  .async-hide { opacity: 0 !important }
&lt;/style&gt;
&lt;script&gt;
  (function(a, s, y, n, c, h, I, d, e) {
    performance.mark('anti-flicker-start');
    s.className += ' ' + y;
    h.start = 1 * new Date;
    h.end = I = function() {
      s.className = s.className.replace(RegExp(' ?' + y), '');
      performance.mark('anti-flicker-end');
      performance.measure('anti-flicker-duration', 'anti-flicker-start', 'anti-flicker-end');
    };
    (a[n] = a[n] || []).hide = h;
    setTimeout(function() { I(); h.end = null }, c);
    h.timeout = c;
  })(window, document.documentElement, 'async-hide', 'dataLayer', 4000, { 'GTM-XXXXXX': true });
&lt;/script&gt;</code></pre> <p>&nbsp;</p> <p>Editing the predefined snippet might be a bit fragile: in the future, someone might not notice it's been customised and overwrite it with the default version.</p> <h3>2.
Create a second snippet that uses a MutationObserver to detect when the <code>async-hide</code> class is removed from the document</h3> <p>This is probably more sustainable, as it's less prone to being overwritten.</p> <pre class="language-javascript"><code>(function (node, selector, name) {
  performance.mark(name + '-start');

  // Watch for the hide class being removed from the element's class attribute
  const callback = function (mutationsList, observer) {
    for (const mutation of mutationsList) {
      if (mutation.attributeName === 'class' &amp;&amp;
          !mutation.target.classList.contains(selector) &amp;&amp;
          mutation.oldValue.includes(selector)) {
        performance.mark(name + '-end');
        performance.measure(name + '-duration', name + '-start', name + '-end');
        observer.disconnect();
        break;
      }
    }
  };

  const observer = new MutationObserver(callback);
  observer.observe(node, { attributes: true, attributeOldValue: true });
})(document.documentElement, 'async-hide', 'anti-flicker');</code></pre> <p><br />This measurement snippet should be placed immediately after Google Optimize's anti-flicker snippet. It creates a mark when it runs, and another when the class is removed from the document. It also creates a measure to record how long the page was hidden.</p> <p>The snippet takes three parameters:</p> <ol> <li>the element that's being hidden,</li> <li>the name of the class used to hide it, and</li> <li>a prefix for the name of the marks and measures.</li> </ol> <p>The first two must match their equivalents in the anti-flicker snippet.<br /><br />A similar measuring approach can be used for Adobe Target and Visual Web Optimizer (VWO).
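<p>As a quick sanity check, the recorded measure can be read back from the User Timing API. (This is a hypothetical helper for illustration, not part of the official snippets; <code>anti-flicker-duration</code> matches the measure name used in the examples above.)</p>

```javascript
// Read back the duration recorded by the measurement snippet.
// 'anti-flicker-duration' is the measure name used in the snippets above.
function getAntiFlickerDuration() {
  const [measure] = performance.getEntriesByName('anti-flicker-duration');
  // Returns the hidden time in milliseconds, or null if no measure was recorded
  return measure ? Math.round(measure.duration) : null;
}

console.log('Page hidden for', getAntiFlickerDuration(), 'ms');
```

<p>Running this in the browser console after page load is an easy way to confirm the snippet is wired up before configuring the metric in your monitoring tool.</p>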
There are example snippets for these in our support docs:&nbsp;<a href="https://support.speedcurve.com/docs/custom-metrics-for-anti-flicker-snippets">Custom metrics for anti-flicker snippets</a></p> <h2>Using the data</h2> <p>Once the snippet is installed on the page and the&nbsp;<a href="https://support.speedcurve.com/docs/custom-metrics">User Timing metrics are configured in SpeedCurve</a>, they can be included in dashboard charts. For example, you can <a href="https://support.speedcurve.com/docs/create-correlation-charts">create a correlation chart</a> in RUM to plot how bounce rate is affected by the length of time the page is hidden.<br /><br /><a href="https://support.speedcurve.com/docs/create-correlation-charts"><img class="blog-img" src="https://blog-img.speedcurve.com/img/436/anti-flicker-duration-vs-bounce-rate.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing bounce rate increasing as the page is hidden for longer" /></a></p> <p style="text-align: center;"><em>Correlation chart showing the relationship between bounce rate and how long the page is hidden</em></p> <p><br />In this example, the chart shows the bounce rate increasing when the page is hidden for longer durations. It also shows that some visitors are waiting four seconds before they see any content!</p> <p>Measuring how long the page is hidden opens up opportunities to better understand how this wait time affects our visitors' experience. We can use the <a href="https://support.speedcurve.com/docs/investigate-rum-sessions">RUM Sessions dashboard</a> to identify and explore which visitors are being affected by slow Optimize experiments. We can experiment with reducing the timeout so that visitors won't see a blank screen for as long.
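<p>To make the timeout experiment concrete, here's a simplified sketch of the hide/reveal mechanism with a lower fallback. (<code>installAntiFlicker</code> is a hypothetical helper for illustration, not the actual Optimize code, and 2000ms is an arbitrary example value you'd want to validate against your own experiment timings.)</p>

```javascript
// Simplified stand-in for the anti-flicker mechanism, with a configurable
// fallback timeout. The experimentation tool is expected to call the
// returned reveal function earlier, once its variants have been applied.
function installAntiFlicker(doc, className, timeoutMs) {
  const el = doc.documentElement;
  el.className += ' ' + className;             // hide the page
  const reveal = () => {
    el.className = el.className.replace(new RegExp(' ?' + className), '');
  };
  const timer = setTimeout(reveal, timeoutMs); // fallback reveal
  return () => { clearTimeout(timer); reveal(); };
}

// In a browser: install with a 2s fallback instead of Optimize's 4s default.
if (typeof document !== 'undefined') {
  installAntiFlicker(document, 'async-hide', 2000);
}
```

<p>The trade-off is the same one the real snippet makes: a shorter timeout means less blank-screen time, but a higher chance visitors see the experiment being applied.</p>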
Or, if we recorded which variant the visitor was seeing via&nbsp;<a href="https://support.speedcurve.com/docs/customer-data">RUM's custom data API</a>, we could see which experiments took the longest to execute.&nbsp;<br /><br />(While testing the snippet, I discovered that a common third-party reviews service was corrupting the duration measure, so in some cases you may need to switch to using the anti-flicker end mark instead.)</p> <h2>Summary</h2> <p>Managing the performance of third-party tags is a key aspect of delivering great web experiences. Unfortunately, the performance of third-party tags can be pretty opaque. In an ideal world, tag vendors would use marks and measures to help us understand how their tags behave in the field, but until they do, browser APIs such as MutationObserver and User Timing can help us measure some aspects of them.</p> <p>If you've got other third-party tags that you'd like help with measuring, or if you try out one of the snippets for measuring anti-flicker snippets, we'd love to <a href="mailto:support@speedcurve.com">hear from you</a>.</p> Thu, 28 Apr 2022 00:00:00 +1200