SpeedCurve Blog https://www.speedcurve.com/blog/ Speed matters. Get the latest on how the areas of design and performance overlap with a focus on creating great user experiences. NEW: On-demand testing in SpeedCurve! https://www.speedcurve.com/blog/on-demand-web-performance-testing <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/adhoc-testing-hero.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p style="text-align: right;"><sup><em>Image by <a href="https://www.freepik.com/free-vector/finger-pressing-red-start-button_2947389.htm#query=finger%20pressing%20button&amp;position=3&amp;from_view=keyword&amp;track=ais&amp;uuid=f3ff2ad6-50be-46d6-ae4a-7b03d8b6ba86">Freepik</a></em></sup></p> <p>On-demand testing has sparked a lot of discussion here at SpeedCurve over the past year. You've always had the ability to manually trigger a round of tests &ndash; based on the scheduled tests in your settings &ndash; using the 'Test Now' button. But there hasn't been a lot of flexibility to support nuanced use cases, such as...</p> <p style="padding-left: 30px;"><em>"I just deployed changes to my site and want to check for any regressions."</em></p> <p style="padding-left: 30px;"><em>"I saw a change to my RUM data and I want to see if I can replicate it with synthetic for further diagnostics."</em></p> <p style="padding-left: 30px;"><em>"I have a paused site that I don't want to test regularly, but would like to test from time to time."</em></p> <p style="padding-left: 30px;"><em>"Please just let me test any URL I want without setting up a site and scheduling testing."</em></p> <p style="padding-left: 30px;"><em>"I need to quickly debug this script without kicking off tests for my entire site."</em></p> <p style="padding-left: 30px;"><em>"I would like to get a first look at a page in order to troubleshoot regressions I saw in RUM."</em></p> <p><strong>Based on your feedback, we've just launched new capabilities for <a 
href="https://support.speedcurve.com/docs/ondemand-site-testing">on-demand testing</a>.</strong> We're pretty excited about these, and we hope you will be, too!</p> <p>In this post, we'll:</p> <ul> <li>Highlight the differences between on-demand and scheduled testing</li> <li>Cover the various types of on-demand testing, including some of the more common use cases we've heard from SpeedCurve users</li> <li>Step you through running an on-demand test</li> </ul> <p>Let's goooooooo!</p><h2>What are the two types of tests within SpeedCurve?</h2> <p>Synthetic performance testing comes in two forms:</p> <h3>Scheduled testing</h3> <p>Baselining. Benchmarking. Continuous performance testing. Tried-and-true performance monitoring goes by many names. This is a necessary offering that keeps folks honest, supports the use of <a href="https://www.speedcurve.com/blog/performance-budgets/">performance budgets</a>, and gives you a lot of consistency when identifying what has changed over time or between deploys.</p> <h3>On-demand testing</h3> <p>This is something our new release fully embraces. Based on feedback from our customers &ndash; as well as the industry at large &ndash; we now give you the ability to run tests when debugging, benchmarking, or just because you feel like it. This is a highly sought-after arrow in your web performance quiver.</p> <h2>On-demand testing in SpeedCurve</h2> <p>You now have two options for testing on demand:&nbsp;</p> <h3>Site testing</h3> <p>For each site you have configured in your SpeedCurve test settings, you have the ability to test on demand. Using this option, you'll test the existing URLs you have configured for a site using the pre-defined configuration in your settings.</p> <h3>Custom URL testing</h3> <p>You now have the ability to test any URL using selected browsers and locations.
You can also script a test using any of the scripting options, such as blocking third parties or testing a page within a multi-step transaction.</p> <h2>How do I run tests on demand?</h2> <p>You can trigger on-demand tests either automatically or manually, using the options described below.</p> <h3>Automatic: API, CLI, GitHub integration</h3> <p>On-demand testing of a site can be triggered using these options:</p> <ul> <li><a href="https://support.speedcurve.com/reference/get-all-deploys">SpeedCurve REST API</a></li> <li><a href="https://github.com/SpeedCurve-Metrics/speedcurve-cli">SpeedCurve CLI</a></li> <li><a href="https://support.speedcurve.com/docs/github-integration">SpeedCurve GitHub integration</a>&nbsp;&ndash; NEW! This is a great option for CI/CD or other means of triggering non-scheduled tests.</li> </ul> <h3>Manual: 'Test Now'</h3> <p>We've added new options to the 'Test Now' feature in SpeedCurve. To run an on-demand test, simply click 'Test Now' in the side menu.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/test-now-button.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="SpeedCurve portal with Test Now button highlighted" /></p> <p><strong>Support article:</strong> <a href="https://support.speedcurve.com/docs/manual-deployment">Trigger deployment tests</a></p> <h2>On-demand site testing</h2> <p>Here are some common use cases that may sound familiar to you:</p> <p style="padding-left: 30px;"><em>"I just deployed changes to my site and want to check for any regressions."</em></p> <p style="padding-left: 30px;"><em>"I saw a change to my RUM data and I want to see if I can replicate it with synthetic for further diagnostics."</em></p> <p style="padding-left: 30px;"><em>"I have a paused site that I don't want to test regularly, but would like to test from time to time."</em></p> <p>To test an existing site, select the site (or sites) to be tested.
You can optionally add a note here, or elect to group the tests as a deployment. (I'll discuss that later in this post.)</p> <p><em><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/477/sitetest.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Dialog for testing an existing Site configuration" /></em></p> <p><strong>Support article:</strong> <a href="https://support.speedcurve.com/docs/ondemand-site-testing">Test a site on demand</a></p> <h2>Custom URL (adhoc) testing</h2> <p>Some of these use cases may be familiar to you as well:</p> <p style="padding-left: 30px;"><em>"Please just let me test any URL I want without setting up a site and scheduling testing."</em></p> <p style="padding-left: 30px;"><em>"I need to quickly debug this script without kicking off tests for my entire site."</em></p> <p style="padding-left: 30px;"><em>"I would like to get a first look at a page in order to troubleshoot regressions I saw in RUM."</em></p> <p>There are many reasons you may want to execute an adhoc test in SpeedCurve. The custom URL option when selecting 'Test Now' allows you to do just that.</p> <p>To test a custom URL, click the 'Test Now' button and select the custom URL in the dialog. You have the option to add one or more URLs for testing, as well as the ability to select different regions, browser types, and the number of times to test the URL.<br /><br /><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/477/customurltestnow.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Dialog for running an on-demand test with custom url selected" /></p> <p>You also have the option to add basic authentication, as well as a <a href="https://support.speedcurve.com/docs/synthetic-test-scripts">scripted test</a>. 
This can be very useful if you are trying to debug a script before adding it to a site, or if you simply need advanced options for the site or user flow you are testing.</p> <p><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/477/customscript.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Dialog of a custom url test using a script command" /></p> <p><strong>Support article:</strong> <a href="https://support.speedcurve.com/docs/adhoc-testing">Adhoc (custom URL) testing</a></p> <h2>Viewing on-demand tests</h2> <p>Once your test(s) are kicked off, you'll be directed to the <strong>Synthetic Tests</strong> dashboard. From here, you can see the status and history of all of your tests &ndash; and even filter by test type.</p> <p>For each test, you have the option to:</p> <ul> <li>View the results&nbsp;</li> <li><a href="https://support.speedcurve.com/docs/bookmark-and-compare-tests#how-to-bookmark-and-compare-synthetic-tests">Compare the results</a> with another test</li> <li>Retest with the same on-demand settings</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/testsinqueue.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Tests dashboard showing queued custom url tests." 
/></p> <h2>Test details</h2> <p>Once your tests have completed, the <a href="https://support.speedcurve.com/docs/test-details">Test Details dashboard</a> is ready for viewing.</p> <p>If you are new to this dashboard, here is a summary of what's captured and displayed for every test.</p> <h3>Test overview and Lighthouse scores</h3> <ul> <li>See what was tested, including your test settings</li> <li><a href="https://support.speedcurve.com/docs/lighthouse">Lighthouse results</a> with details are provided for every synthetic test</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/testdetailslighthouse.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Test details with lighthouse scores" /></p> <h3>Render times</h3> <p>Key rendering moments of the page lifecycle are visualized with user-focused milestone metrics. Both the LCP and&nbsp;<a href="https://www.speedcurve.com/blog/last-painted-hero/">Last Painted Hero</a> elements are highlighted.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/rendertimes.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Test details with render times visualization" /></p> <h3>Compressed and expanded waterfall</h3> <p>The best of both worlds. A simplified event timeline, paired with a filmstrip scrubber, makes it easy to see how key metrics line up with what was actually rendered.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/waterfall_condensed.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Condensed version of waterfall" /></p> <p>Expanding the waterfall exposes a great amount of detail for every request.
If you are looking for more detail, click on 'Detailed Results' to see the full test results for every test run.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/waterfall_expanded.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Test details expanded waterfall" /></p> <h3>CPU and content breakdowns</h3> <p>The impact JavaScript has on CPU usage can be seen in these visuals for three points in the page lifecycle: Start Render, Page Load, and Fully Loaded.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/cpubreakdown.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Test details with breakdown by CPU" /></p> <p>Page construction &ndash; both HTTP request count and size &ndash; is a helpful indicator of good/bad performance. This breakdown shows you how the content you choose affects the overall size, weight, and complexity of the page.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/contentbreakdown.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Test details with breakdown by content type" /></p> <h3>CLS diagnostics</h3> <p><a href="https://www.speedcurve.com/blog/cls-windows-core-web-vitals/">Cumulative Layout Shift</a>&nbsp;is broken down by each layout shift and displayed within session windows. Start with the largest layout shifts in the highlighted window!</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/layoutshifts.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Test details dashboard with layout shift diagnostics including CLS windowing" /></p> <h2>Grouping tests as a deployment</h2> <p>For all on-demand tests, you have the option to group them as a deployment.
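</p>
<p>If you'd rather trigger a deployment from a build pipeline than from the dialog, the REST API's deploys endpoint accepts a <code>note</code> and <code>detail</code> for the deploy. Here's a minimal Node sketch &ndash; the key handling and field values are illustrative, not official client code:</p>

```javascript
// Sketch: trigger a deployment round of tests via the SpeedCurve REST API
// (POST /v1/deploys). The API key is sent as the basic-auth username;
// the password is unused.
function buildDeployRequest(apiKey, note, detail) {
  return {
    url: "https://api.speedcurve.com/v1/deploys",
    options: {
      method: "POST",
      headers: {
        Authorization: "Basic " + Buffer.from(apiKey + ":x").toString("base64"),
        "Content-Type": "application/x-www-form-urlencoded",
      },
      body: new URLSearchParams({ note, detail }).toString(),
    },
  };
}

// Usage (assumes SPEEDCURVE_API_KEY is set in the environment):
// const { url, options } = buildDeployRequest(
//   process.env.SPEEDCURVE_API_KEY, "Release 42", "Checking for regressions");
// fetch(url, options).then((res) => res.json()).then(console.log);
```

<p>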
You'll want to add a name and any details you wish to include.</p> <p><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/477/sitetestdeploy.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Dialog showing deployment notes" /></p> <p>Once the deployment test(s) are triggered, you'll be taken to the <strong>Deployments</strong> dashboard instead of the Synthetic Tests dashboard to view your deploy.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/deploydash.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Deploys dashboard showing queued on-demand tests" /></p> <p>Once completed, you'll be able to see the high-level pass/fail results for the deploy.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/deployfail.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Deployments showing failed test status." /></p> <p>Clicking through to <strong>Deployment Details</strong> shows you the performance budget status, filmstrips, and more for each test in the deployment.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/deploydetails.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Deployment details dashboard showing details for the on-demand test deployment" /></p> <p><a href="https://support.speedcurve.com/docs/deployment-testing-synthetic">Read this article</a>&nbsp;to learn more about deployments.</p> <h2>Summary</h2> <p>You asked, we listened. We hope you continue to get value from your testing in SpeedCurve with the on-demand testing capabilities we've introduced. As always, <a href="mailto:%20support@speedcurve.com">your feedback</a> is welcome!&nbsp;</p> <p>Not a SpeedCurve customer?
Start a <a href="https://www.speedcurve.com/signup/">free trial</a> today!</p> <p><br /><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/477/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Thu, 29 Feb 2024 00:00:00 +1300 Debugging Interaction to Next Paint (INP) https://www.speedcurve.com/blog/debugging-interaction-to-next-paint-inp <p>Interaction to Next Paint (INP) will become a Core Web Vital on March 12th. Not surprisingly, most of the conversations I've had with SpeedCurve users over the last few months have focused on improving INP.</p> <p>INP measures how responsive a page is to visitor interactions. It measures the elapsed time between a tap, a click, or a keypress and the browser next painting to the screen.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/inp.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Definition of INP" /></p> <p>INP breaks down into three sub-parts:</p> <ul> <li><strong>Input Delay</strong>&nbsp;&ndash; How long the interaction handler has to wait before executing</li> <li><strong>Processing Time</strong>&nbsp;&ndash; How long the interaction handler takes to execute</li> <li><strong>Presentation Delay</strong>&nbsp;&ndash; How long it takes the browser to execute any work it needs to paint updates triggered by the interaction handler</li> </ul> <p>Pages can have multiple interactions, so the INP time you'll see reported by RUM products and other tools, such as Google Search Console and Chrome's UX Report (CrUX), will generally be the worst/highest INP time at the 75th percentile.</p> <p>Like all Core Web Vitals, INP has a set of thresholds:</p> <p style="text-align: center;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/inp-thresholds.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="INP thresholds for Good, Needs Improvement and Poor" /><br /><em>INP thresholds for
Good, Needs Improvement, and Poor</em></p> <p><br />Many sites tend to be in the Needs Improvement or Poor categories. My experience over the last few months is that getting to Good is achievable, but it's not always easy.</p> <p>In this post I'm going to walk through:</p> <ul> <li>How I help people identify the causes of poor INP times</li> <li>Examples of some of the most common issues</li> <li>Approaches I've used to help sites improve their INP</li> </ul><h2>Identifying Interactions</h2> <p>The Chrome UX Report (CrUX) can provide a high-level view of INP. Individual pages can be spot-checked via the CrUX API or tools such as PageSpeed Insights.</p> <p>But as Cliff's already covered in&nbsp;<a href="https://www.speedcurve.com/blog/interaction-to-next-paint-find-ix-elements/">How to find (and fix!) INP interactions on your pages</a>, CrUX is no substitute for having your own RUM data that you can group and filter by dimensions such as the different page and device types.&nbsp;<br /><br />My favourite place to start is with the Web Vitals heatmap on SpeedCurve's <strong>RUM &gt; Performance</strong>&nbsp;dashboard.
It gives a high-level summary that can be filtered by page label to check if the behavior is consistent across all paths in the group.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/heatmap2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Heatmap showing 75th Percentile values for Core Web Vitals per Page Group" /></p> <p style="text-align: center;"><em>Heatmap of the most popular pages and their Web Vitals metrics</em></p> <p><br />I then typically switch to the&nbsp;<strong>RUM &gt; Design</strong> dashboard and use the list of popular interaction elements to determine which ones I want to investigate further.</p> <p style="text-align: center;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/common-interactions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Common visitor interactions ranked by frequency" /><br /><em>Most popular interactions</em></p> <p><br />Ideally we'd have a view that shows which interactions are responsible for high INP times. (We're currently working on adding full attribution for INP to SpeedCurve.) In the meantime, we've discovered that, in practice, just knowing which page types have high INP times and the interactions visitors are using on those pages is really effective at identifying interactions to investigate further.</p> <p>Some companies I work with don't have RUM. In those cases, we think about the common interactions visitors are likely to use &ndash; dismissing cookie dialogs, opening menus, zooming on product images, etc. &ndash; and investigate those further. 
The caveat here is that it's not as effective as having RUM data to work from and can lead to improvements that don't seem to influence INP much.</p> <h2>Profiling Interactions</h2> <p>Once we know which pages have high INP times, and which interactions are popular on those pages, I switch to Chrome DevTools, profile the interactions, and identify ways to improve them.</p> <p>The Performance panel can be overwhelming &ndash; even for experienced engineers &ndash; as it exposes how much work the browser is doing to load pages or handle interactions.</p> <p>Here is the approach I use when debugging interactions. Although I'm using Chrome Canary in these examples, the same approach works in stable Chrome and other Chromium-based browsers.</p> <h3>1. Switch to a guest user profile</h3> <p>As guest user profiles don't load extensions, they help minimise some of the noise that extensions and other factors can have on performance analysis.</p> <p>Guest profiles also&nbsp;start with empty caches, empty cookie stores, empty browser storage, etc. These may get populated during testing, but we can clear them at any time via&nbsp;<em>Application</em> &gt; <em>Storage</em> &gt; <em>Clear Site Data</em> in DevTools.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/chrome-guest-profile.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Switching to a guest profile in Chrome" /></p> <p style="text-align: center;"><em>Opening a guest profile in Chrome</em><br /><br /></p> <h3>2.
Open DevTools and switch to the Performance panel</h3> <p>As mobile visitors tend to be the majority of visitors for most sites, I also switch to mobile emulation.</p> <p><img class="blog-img" style="display: block; margin-left: auto; margin-right: auto;" src="https://blog-img.speedcurve.com/img/472/chrome-devtools-performance-panel.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chrome DevTools Performance Panel" /></p> <p style="text-align: center;"><em>Switching to the DevTools Performance panel<br /><br /></em></p> <h3>3. Load a page</h3> <p>Load the page you want to investigate. Wait until it has finished loading before profiling it.</p> <h3>4. Hit 'Record' and interact with the page</h3> <p>After the page has loaded, press the record icon in the DevTools toolbar, wait for the profile to start recording, and then interact with the page.</p> <p>The profiler starting up often creates a Long Task right at the start of the profile, so I tend to wait a second or so before actually interacting.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/chrome-devtools-recording.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Starting a trace in the DevTools Performance panel" /></p> <p style="text-align: center;"><em>Recording a profile<br /><br /></em></p> <h3>5. Stop recording</h3> <p>After you've recorded data on the interactions you're interested in, stop recording.
After a few moments you should be greeted with a view something like the one below.</p> <p>In this view I've opened the tracks for Frames, Interactions, and Main Thread so I can see what's visible on the page when I interacted with it as well as the activity that happened on the main thread.</p> <p style="text-align: center;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/performance-profile.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Example performance profile from Chrome DevTools" /><br /><em>Example performance profile in Chrome DevTools</em><br /><br /></p> <p>In the Main Thread track you can see the Profiling Overhead task right at the start of the profile and then a second call stack in response to the interaction.&nbsp;</p> <p>A quick guide to interpreting this panel:</p> <ul> <li>The width of each cell in the call stack represents the <strong>elapsed time</strong> it (and its children) executed.</li> <li>The dark yellow, magenta, and green cells represent <strong>internal browser work</strong>.</li> <li>The pastel cells represent the <strong>scripts</strong> included in the page. (Each script gets its own color.)</li> </ul> <p>Clicking on an individual cell will show more detail in the summary panel at the bottom of the tab (not shown in screenshot). You can zoom in/out and scroll using either the mouse or the W A S D keys.</p> <h2>Analyzing Interactions</h2> <p>After we've captured profiles, we can start analyzing them to understand why we're seeing long INP times, and perhaps more importantly, what we can do to reduce them.</p> <p>While writing this post, I tested interactions on a few sites and chose three examples that illustrate the common issues I see.</p> <p>The examples were captured in Chrome Canary on a 2020 i5 MacBook Pro without CPU throttling enabled.
If CPU throttling were enabled or the tests were carried out on an actual Android device, then I'd expect the INP times to be higher.</p> <p>If you want to explore the traces in more detail, I've uploaded them to <a href="https://toot.cafe/@paul_irish">Paul Irish's</a>&nbsp;<a href="https://trace.cafe">trace.cafe</a>.</p> <h3>Example 1 &ndash; Opening the menu on H&amp;M</h3> <p>In this first example, I opened the menu on the mobile version of H&amp;M by clicking on the icon in the top right.&nbsp;</p> <p>Even though I only clicked on the page once, multiple event handlers were invoked. The one for the menu was the longest and had an INP time of 350ms &ndash; in other words, 150ms longer than Google's 200ms threshold for 'Good'.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/hm-three-interactions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chrome DevTools showing three interactions as the menu was opened" /></p> <p style="text-align: center;"><em>Long interaction when opening the menu on H&amp;M</em></p> <p><br />In this case, most of the time is spent in the actual event handler (<em>Processing Time</em>) for the menu, but there is a slight delay before the event handler can execute.</p> <p>Examining the flame chart reveals four main groups of processing that happen in response to the interaction:</p> <ol> <li>Akamai's bot manager is the source of the first two event handlers.
These event handlers execute before the one for the menu, creating the <em>Input Delay</em> for the menu interaction.</li> <li>Within the longest event handler, the first group of processing creates an analytics event to record the visitor opening the menu.</li> <li>The second group is a JS component that constructs the menu and then adds it to the DOM, triggering style recalculations and layout.</li> <li>The last group adds a style element to the DOM, again triggering style recalculations.</li> </ol> <p><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/hm-flame-chart-overlay.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Flame Chart with overlay showing the main split in work" /></p> <p style="text-align: center;"><em><span style="text-align: center;">Main thread activity when opening the menu on H&amp;M</span></em></p> <p><br />Here I'd start by focusing on the source of the long <em>Processing Time</em>, asking questions such as:</p> <ul> <li>Can the menu be rendered without using React?</li> <li>Does the stylesheet need to be injected?</li> <li>Could the design of the interaction be changed to avoid it?</li> </ul> <p>&gt; <a href="https://trace.cafe/t/gyeN0QKToq">Explore the trace</a></p> <h3>Example 2 &ndash; Opening the menu on John Lewis</h3> <p>For the second example, I've also chosen to open the menu. The trace for John Lewis shows similar patterns to the one for H&amp;M.</p> <p>Again there's another event handler that fires before the handler for the menu, but the execution of both handlers is also delayed by a separate task on the Main Thread.
These tasks create a 170ms <em>Input Delay</em> before the interaction handler for the menu executes.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/john-lewis---interactions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="DevTools showing long interactions when opening the menu on John Lewis" /></p> <p style="text-align: center;"><em><span style="text-align: center;">Long interaction when opening the menu on John Lewis</span></em></p> <p><br />Breaking down the main thread activity shows eight groups of activities that delay the response to the interaction:</p> <ol> <li>The initial interaction occurs while a Long Task caused by a customer feedback widget is executing.</li> <li>Even though the feedback widget isn't visible, it triggers a style recalculation and layout.</li> <li>A <em>pointerdown</em> event handler within the site's own code executes.</li> <li>Akamai's bot manager is listening to <em>pointerdown</em> and <em>touchstart</em> events, and handlers for these events execute.</li> <li>Handlers within the site's own code for&nbsp;<em>gotpointercapture</em>, <em>lostpointercapture</em>, <em>pointerup</em>, <em>pointerout</em> and <em>touchend</em> events execute.</li> <li>A style recalculation (which I believe is triggered by the bot manager) and a&nbsp;<em>mousedown</em>&nbsp;handler registered by Akamai's bot manager execute.</li> <li>The menu handler finally executes and generates the DOM for the menu.</li> <li>Lastly, a style recalculation triggered by the menu handler executes.</li> </ol> <p>John Lewis uses New Relic. New Relic wraps many of the script calls, and this has some impact on the duration of the tasks.
If I were investigating this further, I'd profile with New Relic disabled to measure what impact it's having (if any).<br /><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/john-lewis---call-stacks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chrome DevTools showing the Main Thread activity during an interaction" /></p> <p style="text-align: center;"><em>Main thread activity when opening the menu on John Lewis</em></p> <p><br />The main question I'm asking when I see this kind of profile is this:</p> <p>What can be done to reduce the <em>Input Delay</em>? I'd start with the Long Task at the beginning, then focus on the intermediate event handlers, and lastly the style and layout calculations.</p> <p>(In the chart above, the whisker for the <em>Presentation Delay</em> extends into a GTM task, but I believe this is a Chrome issue. You might also notice Chrome Canary doubles up some Long Tasks in the Profile, too.)<br /><br />&gt; <a href="https://trace.cafe/t/46PZ6lDnH7">Explore the trace</a></p> <h3>Example 3 &ndash; Accepting the consent dialog on Wales Online</h3> <p>For the last example, I'm closing the consent dialog that all sites in Europe are required to display before they inject third parties such as ads and analytics into the page.</p> <p>Here the main issue is the amount of work the event handler is trying to complete in a single task.
The <em>Processing Time</em> for the interaction is 382ms.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/wales-online---interactions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chrome DevTools showing the long interaction when the Consent Manager is closed" /></p> <p style="text-align: center;"><em>Long interaction when clicking 'accept' on Wales Online</em></p> <p><br />Examining the flame chart reveals six main groups of processing that happen in response to the interaction:</p> <ol> <li>Closing the dialog is actually pretty quick as it just needs to be removed from the DOM.</li> <li>The consent manager starts communicating consent to the ad providers, so they can begin to load.</li> <li>Amazon Ads executes.</li> <li>Prebid executes.</li> <li>A second Prebid component executes.</li> <li>A 'Bad Ads reporting tool' adds a stylesheet. After the styles are parsed and recalculated, something forces a layout task.</li> </ol> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/wales-online---call-stacks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chrome DevTools showing the main browser tasks when someone clicks Yes in the Consent Manager on Wales Online" /></p> <p style="text-align: center;"><em>Main thread activity after clicking 'accept' on Wales Online</em></p> <p><br />One thing that's noticeable with the Wales Online example is that the <em>Processing Time</em>&nbsp;is entirely due to third-party scripts. That can limit the options to reduce it, but even then it should be possible to divide the task up.<br /><br />&gt; <a href="https://trace.cafe/t/Jeqw44miet">Explore the trace</a></p> <h2>Fixing Interactions</h2> <p>After we've identified why an interaction has a high INP time, our next goal is to reduce it.
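</p>
<p>It also helps to confirm in the field which sub-part dominates. The Event Timing API exposes the timestamps needed to derive all three sub-parts; here's a sketch (the 40ms <code>durationThreshold</code> is just an illustrative value):</p>

```javascript
// Sketch: compute the INP sub-parts from an Event Timing API entry.
function inpSubParts(entry) {
  return {
    // Time the interaction waited for the main thread before handlers ran
    inputDelay: entry.processingStart - entry.startTime,
    // Time the event handlers took to execute
    processingTime: entry.processingEnd - entry.processingStart,
    // Time from the handlers finishing until the next paint
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd,
  };
}

// In the browser, log the sub-parts for slow interactions:
// new PerformanceObserver((list) => {
//   for (const entry of list.getEntries()) {
//     console.log(entry.name, inpSubParts(entry));
//   }
// }).observe({ type: "event", durationThreshold: 40, buffered: true });
```

<p>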
I find that separating how I think about <em>Input Delay</em> versus&nbsp;<em>Processing Time and Presentation Delay</em>&nbsp;can help.</p> <ul> <li><strong><em>Input Delay</em></strong>&nbsp;is caused by other tasks blocking the main thread, which delays when the interaction handler can execute. As such,&nbsp;<em>Input Delay</em>&nbsp;is outside an interaction handler's control.</li> <li><strong><em>Processing Time and Presentation Delay</em></strong> are the time the interaction handler takes to execute, and then the time it takes the browser to complete layout, styling, paint, and other tasks created by the event handler.</li> </ul> <p style="text-align: center;"><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/inp-input-delay-vs-processing-time-and-presentation-delay.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Division between Input Delay, and combined Processing Time and Presentation Delay" /></p> <p style="text-align: center;"><em>Split between Input Delay, and combined Processing Time and Presentation Delay</em></p> <p><br />As <em>Processing Time</em> and <em>Presentation Delay</em> are easier to identify and fix, I'm going to cover them first before moving on to <em>Input Delay.</em></p> <h3>How to improve Processing Time and Presentation Delay</h3> <p>When it comes to reducing&nbsp;<em>Processing Time</em>&nbsp;and&nbsp;<em>Presentation Delay</em>, many articles focus on breaking Long Tasks up or 'yielding to the main thread' with <code>setTimeout</code>, <code>scheduler.yield</code> or <code>requestIdleCallback</code>, etc.</p> <p>While that is one place to start, it's not the only approach. My view is that reducing the time Long Tasks take to execute is just as important as splitting tasks up.</p> <p>My other guiding view is to focus on what's most important from a user perspective and optimize that. For example, if the user is opening a menu, then showing them the menu is the most important activity.
Anything else that might be triggered by the same action should be secondary.</p> <h4>1. Defer less important activities&nbsp;</h4> <p>Both the H&amp;M and John Lewis menus record an analytics event when someone opens the menu. These calls happen before the menu is actually displayed. (I've seen this pattern on many other sites, too.)</p> <p>While analytics events are useful to help us understand our visitors' behavior, they're secondary and shouldn't delay the visitor's primary goal, in this case opening the menu.</p> <p>Scheduling these events into a new task via setTimeout (e.g. <code>setTimeout(analytics_fn, 0)</code> or <code>scheduler.postTask</code> for browsers that support it) moves the work out of the interaction handler to be executed later and allows the interaction handler to complete sooner.</p> <p>The same method can be used with Wales Online's consent manager. After the visitor has clicked 'accept' or 'reject' they want to get on and read the news rather than wait for multiple ad providers to be given permission to load (or not). Scheduling <code>setConsentInfo</code> into a separate task enables the browser to paint the next frame sooner, while the ad providers can carry on loading in the background.</p> <p>Here's an example of what this looks like in a trace:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/cmp-with-timeout.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="DevTools profile showing Long Task scheduled into a new task" /></p> <p style="text-align: center;"><em>Breaking up a click event handler by scheduling work into a separate task</em></p> <p style="text-align: left;"><br />The Long Task was originally part of the click event handler. Using <code>setTimeout</code> to schedule it into its own separate task allows the browser to paint before the Long Task executes.
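<p>As a sketch, the deferral pattern looks something like this. (The <code>showMenu</code> and <code>trackMenuOpen</code> names are hypothetical stand-ins for a real handler and analytics call, and <code>scheduler.postTask</code> is feature-detected because browser support varies.)</p>

```javascript
// Sketch of deferring secondary work out of an interaction handler.
let analyticsSent = false;

function trackMenuOpen() {
  analyticsSent = true; // stand-in for an analytics beacon
}

function showMenu(menu) {
  menu.hidden = false; // primary, user-visible work
}

function onMenuClick(menu) {
  showMenu(menu); // do the visible work first
  // Defer the secondary work into its own task so the browser can
  // paint the next frame before it runs.
  if (typeof scheduler !== 'undefined' && scheduler.postTask) {
    scheduler.postTask(trackMenuOpen, { priority: 'background' });
  } else {
    setTimeout(trackMenuOpen, 0);
  }
}

const menu = { hidden: true };
onMenuClick(menu);
// Immediately after the handler returns, the menu is shown but the
// analytics call has not yet run - it executes in a later task.
```

<p>Because the secondary work runs in its own task, the browser is free to paint the frame that shows the menu before the analytics code executes.</p>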
The Long Task might still be problematic if someone interacts while it's executing, but it's no longer part of the click handler's INP.</p> <p>I've seen many examples where publishers have improved INP by scheduling the setting of consent into a separate task. I expect consent managers to adopt this approach as a default.</p> <h4>2. Do less work</h4> <p><a href="/blog/@ryantownsend@webperf.social">Ryan Townsend</a> spoke about <a href="https://www.youtube.com/watch?v=f5felHJiACE">The Unbearable Weight of Massive JavaScript</a> at <a href="https://perfnow.nl/">performance.now()</a> in November 2023. He shared a case study where they replaced over 50,000 lines of JavaScript with native HTML and CSS features. The result was a faster user experience with a more maintainable codebase.</p> <p>H&amp;M relies on JavaScript components to create the menu elements, add them to the DOM, and apply styles. The result: a processing time of 303ms.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/hm-three-interactions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chrome DevTools showing three interactions as the menu was opened" /></p> <p style="text-align: center;"><em>Long interaction when opening the menu on H&amp;M</em></p> <p><br />Let's compare this to another fashion retailer, French Connection. French Connection creates the menu elements when they render the page server-side, and then just changes the elements' styles to display the menu.
This illustrates the dramatic difference in processing time between the two approaches:</p> <ul> <li>H&amp;M processing time: <strong>303ms</strong></li> <li>French Connection processing time: <strong>22ms</strong> (of which nearly half is sending an analytics event to Google Tag Manager!)</li> </ul> <p><strong><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/french-connection-menu.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Performance Profile when opening the menu on French Connection" /></strong></p> <p style="text-align: center;"><em>Interaction when opening the menu on French Connection</em></p> <p><br />Of course the French Connection page is going to have more DOM elements. When tools like Lighthouse warn you to "avoid an excessive DOM size", it's tempting to choose other approaches without perhaps fully considering the tradeoffs. Menus often contain large numbers of DOM elements. The choice is whether they're created when the page is initially generated, or at runtime using JavaScript when the menu is requested.</p> <p>Lighthouse warns about DOM size because it "will increase memory usage, cause longer style calculations, and produce costly layout reflows." But when used carefully, CSS properties like&nbsp;<em>content-visibility</em>, <em>isolation</em>, and <em>will-change</em> can help reduce the cost of a large number of DOM nodes.</p> <p>Talking of CSS, you might have noticed that in some of the examples above, INP was affected by some long style and layout calculations. These were caused by interaction handlers injecting styles that affected the whole DOM, or by interaction handlers querying style or size properties via methods like&nbsp;<em>getComputedStyle</em> or <em>getBoundingClientRect</em>, so forcing recalculations. 
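<p>A common way to avoid these forced recalculations is to batch all DOM reads before all DOM writes. A simplified sketch (the element list and the halving logic are purely illustrative):</p>

```javascript
// Layout thrashing: each write invalidates the layout that the next
// read needs, forcing a synchronous recalculation per iteration.
function resizeThrashing(items) {
  for (const el of items) {
    const width = el.getBoundingClientRect().width; // read (forces layout)
    el.style.width = width / 2 + 'px'; // write (invalidates layout)
  }
}

// Batched: all reads happen first, then all writes, so the browser
// recalculates layout at most once for the whole loop.
function resizeBatched(items) {
  const widths = items.map((el) => el.getBoundingClientRect().width); // reads
  items.forEach((el, i) => {
    el.style.width = widths[i] / 2 + 'px'; // writes
  });
}
```

<p>Both versions produce the same end result, but the first can force the browser to recalculate layout on every iteration, while the second triggers at most one recalculation.</p>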
(Paul Irish keeps a list of JavaScript methods that typically&nbsp;<a href="https://gist.github.com/paulirish/5d52fb081b3570c81e3a">trigger style and layout calculations</a>.)</p> <p>Think about the work you're asking the browser to do and whether there are more efficient ways of achieving the end result.</p> <h4><strong>3. Yield to the main thread</strong></h4> <p>Sometimes there isn't work that can be deferred, or the efficiency of interaction handlers can't be improved. In those cases, we just need to give control back to the main thread so that it can get on with painting the next frame.</p> <p>One approach is to use <em>setTimeout</em> wrapped in a Promise:<br /></p> <div> <p><code>function yieldToMain() {</code></p> <p><code>&nbsp; return new Promise(resolve =&gt; {</code></p> <p><code>&nbsp; &nbsp; setTimeout(resolve, 0);</code></p> <p><code>&nbsp; });</code></p> <p><code>}</code></p> </div> <p>&nbsp;</p> <p>And then at suitable points in the code insert:<br /><br /></p> <div> <p><code>// Yield to the main thread:</code></p> <p><code>await yieldToMain();</code></p> </div> <p>&nbsp;</p> <p><em>setTimeout</em> creates a new task, enabling the browser's scheduler to take over and process other tasks like input before resuming.</p> <p><a href="https://www.linkedin.com/in/malchata">Jeremy Wagner</a>&nbsp;discusses this approach in more detail in his web.dev posts on <a href="https://web.dev/articles/optimize-inp#yield_to_the_main_thread_often">Optimizing INP</a>&nbsp;and <a href="https://web.dev/articles/optimize-long-tasks#use_asyncawait_to_create_yield_points">Optimizing Long Tasks</a>. I'd suggest you read those for a deeper view.</p> <p>The other option to consider for tasks that are hard to optimize is whether the work they're doing can be moved off the main thread via a Web Worker.
I've not had the need to use this approach with clients yet.</p> <h3>How to improve Input Delay</h3> <p>Diagnosing the causes of slow Input Delay isn't always easy with just Synthetic monitoring and DevTools. That's because the length of the delay depends on when the visitor interacted with the page and what tasks were executing when the visitor interacted.&nbsp;<br /><br />The&nbsp;<a href="https://developer.chrome.com/docs/web-platform/long-animation-frames">Long Animation Frames (LoAF) API</a>, which is set to ship in Chrome 123, will help RUM tools identify the tasks that caused <em>Input Delay</em>. Once identified, we will be able to profile the tasks in DevTools.</p> <p>Until LoAF is widely supported, there are profiling approaches in DevTools that can help identify some of the problematic scripts.</p> <h4><strong>1. Investigate other interaction handlers</strong></h4> <p>As the H&amp;M and John Lewis examples demonstrated, other touch, mouse, and keyboard event handlers are also triggered by interactions and can execute before our event handler for the main interaction.</p> <p>Fortunately these event handlers are captured in the DevTools profile. We can also inspect which event handlers are active using the&nbsp;<em>Event Listeners</em> panel in the right sidebar of the <em>Elements</em> panel in DevTools.<br /><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/event-listeners.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="DevTools Event Listeners Panel" /></p> <p style="text-align: center;"><em>Viewing active event listeners in Chrome DevTools</em></p> <p><br />Some monitoring products, such as New Relic, wrap other scripts' calls. This can make identifying the code that's actually going to execute a bit harder.
To identify the actual event handler, you can either:</p> <ul> <li>Block the external instrumentation script</li> <li>Use DevTools&nbsp;<em>Content Overrides</em> to create a copy of the page with New Relic, etc., removed</li> </ul> <p>Once we've identified event listeners that are active for click, key, mouse, and tap events, we can review and remove any that aren't really necessary. <br /><br />If third-party tags are adding their own event handlers, then it's a case of:</p> <ol> <li>Evaluating the tradeoff between the features the third party provides and its impact on the visitor's experience</li> <li>Raising the issue with the provider</li> <li>Expecting the provider to fix the issue</li> </ol> <p>Some third-party providers are serious about addressing the impact they have on INP. If you're using one that's not, then I'd advocate switching to an alternative.</p> <h4>2. Investigate other Long Tasks</h4> <p>It's harder to identify any other Long Tasks that contribute to <em>Input Delay.</em>&nbsp;It depends on what task is executing when the visitor interacts (this is where LoAF will really help) and how long the task continued executing after the initial interaction.</p> <p>But we do know there is a relationship between total Long Task Time and INP:<br /><br /></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/total-long-task-time-vs-inp.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Relationship between Total Long Task Time and Interaction to Next Paint" /></p> <p style="text-align: center;"><em>Relationship between Long Tasks and Interaction to Next Paint (INP)</em></p> <p><br />And we also know that visitors start to interact shortly after they start to see content:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/fcp-vs-first-interation.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Correlation chart showing relationship between First Contentful Paint and First Click Interaction" /></p> <p 
style="text-align: center;"><em>Relationship between First Contentful Paint (FCP) and First Click Interaction</em><br /><br /></p> <p>The exact relationships will vary from site to site, but the overall pattern was pretty consistent across the many sites I checked.</p> <p>Knowing this, we can make an informed guess that any Long Tasks that occur after useful content starts to appear are in danger of contributing to <em>Input Delay</em>.</p> <p>In John Lewis' case, profiling the page while it loads shows there are a bunch of Long Tasks that happen after 2.2s &ndash; notice the gap in the filmstrip &ndash; and these are likely to lead to higher INP times if the visitor tries to interact at this point.</p> <p style="text-align: center;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/john-lewis-long-tasks.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Main Thread activity when loading John Lewis' home page" /><br /><em>Profile showing Long Tasks during John Lewis home page loading</em>&nbsp;</p> <p><br />Some other things you might want to experiment with:</p> <ul> <li>Profiling a page while it loads and then trying to interact as soon as there's visible content.</li> <li>Profiling the acceptance of a cookie consent dialog and then interacting immediately after, while third parties might still be starting up.</li> <li>If you have tasks that repeat at a regular frequency (e.g. ad refreshes), you can profile what happens if a visitor interacts during those.</li> </ul> <p>As far as optimising these Long Tasks goes, my advice is very similar to optimising slow interaction handlers:</p> <ul> <li>Defer secondary work into new tasks</li> <li>Optimize code so it uses the browser efficiently</li> <li>Yield to the main thread appropriately</li> </ul> <p>One thing to watch out for:</p> <p>There's often a Long Task just before the DOM Content Loaded event.
This is because <a href="https://issues.chromium.org/issues/40894694">any deferred and module scripts execute as a single &ndash; potentially long &ndash; task just before DOM Content Loaded</a>. Until browser makers change this behaviour, there's always a potential for this Long Task to create <em>Input Delay</em> if someone interacts at this point.</p> <h2>Wrapping up</h2> <p>Getting to the root of high INP times and fixing them can be quite complex and sometimes overwhelming. It's important to remember that even small incremental changes add up to larger overall improvements.</p> <p>Some other things to keep in mind...</p> <h3>Use RUM when you can</h3> <p>RUM is great for quickly identifying pages and interactions with high INP times. But even without RUM it's possible to start improving INP. There's just a danger that you might not be profiling the most influential interactions.</p> <p>When it comes to actually understanding and optimizing INP:</p> <ul> <li>Profile in a guest window to remove noise caused by extensions or previous state</li> <li>Identify secondary activities and defer them into separate tasks</li> <li>Optimize the primary tasks &ndash; lean on the native power of browsers but watch activities that create unnecessary style and layout calculations</li> <li>Break remaining Long Tasks up by yielding to the main thread</li> <li>Review other interaction handlers to check they're necessary and aren't contributing to&nbsp;<em>Input Delay</em></li> </ul> <h3>It's easy to write a list of actions, but not always easy to implement them</h3> <p>This is especially true if a site is heavily reliant on a JavaScript framework.</p> <h3>Measurement of INP isn't a perfect science</h3> <p>You may come across things that don't quite make sense. I've seen <code>requestAnimationFrame</code> and <code>setTimeout</code> loops delay input handlers. I've also seen JavaScript dialogs really affect INP. As with all Core Web Vitals, it's a work in progress.
I expect the Chrome team will address some of these edge cases eventually.</p> <h3>Progress may not be linear</h3> <p>The reported INP measurement represents the worst interaction on the page, and pages can have many interactions. If the slowest interaction has an INP time of 500ms, and the second slowest has an INP time of 450ms, then fixing the worst interaction will only reduce INP by 50ms!</p> <p>To borrow the words of Pok&eacute;mon, you gotta catch 'em all!</p> <h3>If you need some inspiration...</h3> <p>I've been using the techniques from this post to help companies improve their INP time. For one company, we reduced INP by more than 50% in the space of just a couple of weeks.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/social/472/inp-75th.png?auto=format,compress&amp;fit=crop&amp;ar=2:1&amp;w=1200" alt="Chart showing a reduction in INP at the 75th percentile" /></p> <p style="text-align: center;"><em>Reduction in INP due to Long Tasks improvements</em></p> <h2>Need help measuring and fixing INP?</h2> <p>If you'd like to start measuring INP effectively,&nbsp;<a href="https://www.speedcurve.com/signup/">we offer a free 30-day trial</a> that includes both real user and synthetic monitoring.</p> <p>We also have <a href="https://www.speedcurve.com/features/consulting/">some of the most experienced web performance consultants</a>&nbsp;in the world.
If you're not sure where to start with INP, or are stuck with what to do next, feel free to get in touch via <a href="mailto:support@speedcurve.com">support@speedcurve.com</a>.<br /><br /><br /><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/472/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Mon, 26 Feb 2024 00:00:00 +1300 How to use Server Timing to get backend transparency from your CDN https://www.speedcurve.com/blog/server-timing-time-to-first-byte <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/ttfb.jpg?auto=format,compress&amp;fit=max&amp;w=2000" /></p> <p><a href="https://www.stevesouders.com/blog/2012/02/10/the-performance-golden-rule/">80% of end-user response time is spent on the front end.</a></p> <p>That performance golden rule still holds true today. However, that pesky 20% on the back end can have a big impact on downstream metrics like First Contentful Paint (FCP), Largest Contentful Paint (LCP), and any other 'loading' metric you can think of.</p> <p><strong>Server-timing headers are a key tool in understanding what's happening within that black box of Time to First Byte (TTFB).</strong>&nbsp;</p> <p>In this post we'll explore a few areas:</p> <ul> <li>Look at industry benchmarks to get an idea of how a slow backend influences key metrics, including Core Web Vitals</li> <li>Demonstrate how you can use server-timing headers to break down where that time is being spent</li> <li>Provide examples of how you can use server-timing headers to get more visibility into your content delivery network (CDN)</li> <li>Show how you can capture server-timing headers in SpeedCurve</li> </ul><h2>How slow backend times influence key metrics</h2> <p>First, we need to understand what 'slow' means. For more years than I care to mention, I've been advising folks that their TTFB should be under 500ms. 
<a href="https://web.dev/articles/ttfb#what_is_a_good_ttfb_score">Google recommends</a> that TTFB be under 800ms at the 75th percentile. <strong>For the purpose of this post, let's say that 500ms backend time is 'good enough'.</strong></p> <p>Looking at the <a href="https://app.speedcurve.com/benchmarks/usa/retail/fast/backend/">industry benchmarks for US retailers</a>, four well-known sites have backend times that are approaching &ndash; or well beyond &ndash; that threshold.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/slow_us_retail_backend.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing the backend times for US retailers in the SpeedCurve Industry Benchmarks" /></p> <p style="text-align: center;"><a style="text-align: center;" href="https://app.speedcurve.com/benchmarks/usa/retail/fast/backend/">Pagespeed Benchmarks - US Retail - Backend</a></p> <p style="text-align: left;">Those same sites, with the exception of Lowe's, were also in the slower cohorts for First Contentful Paint (FCP), Largest Contentful Paint (LCP), and other loading metrics.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/slow_us_retail_lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing LCP times for US retailers in the SpeedCurve Industry Benchmarks" /></p> <p style="text-align: center;"><a style="text-align: center;" href="https://app.speedcurve.com/benchmarks/usa/retail/fast/largest-contentful-paint/">Pagespeed Benchmarks - US Retail - LCP</a></p> <p style="text-align: left;">When you examine a waterfall, it's pretty obvious that <strong>TTFB is the long pole in the tent, pushing out render times for the page</strong>.
Given that TTFB is synchronous by nature, we can expect to see this pattern for any site with backend optimization opportunities.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/waterfall_wish_slow_lcp.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Waterfall showing a slow backend time of 1.83s driving a slow LCP of 2.39s" /></p> <h2>Cue server-timing headers</h2> <p>Historically, when looking at page speed, we've had the tendency to ignore TTFB when trying to optimize the user experience. I mean, why wouldn't we? <a href="https://www.stevesouders.com/blog/2012/02/10/the-performance-golden-rule/">Steve told us to focus on the front-end!</a>&nbsp;;)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/ttfb.jpg?auto=format,compress&amp;fit=max&amp;w=2000" /></p> <p>Well, a lot has changed over the years, including the ability to get more detailed information from the server to the user agent through the use of&nbsp;<a href="https://www.w3.org/TR/server-timing/">Server Timing</a>.</p> <p><strong>Server Timing&nbsp;is a specification that allows communication of data from the server to the client through the use of a server-timing header.</strong> This is a special header that is accessible through a JavaScript interface. SpeedCurve's RUM collection library &ndash; lux.js &ndash; accesses this interface to collect custom data from the user agent.</p> <p>The specification requires that header values are passed through a duration field (dur) and/or a description field (desc). In practice, it looks something like this:</p> <p style="padding-left: 30px;"><code>server-timing: processing_time; dur=123.4; desc="Time to process request at origin"</code></p> <p><strong>NOTE:</strong> This is not a new API. Charlie Vazac introduced server timing in a <a href="https://calendar.perfplanet.com/2018/server-timing/">Performance Calendar post</a> circa 2018.
However, wider adoption has only just started to take off.&nbsp;</p> <h2><span style="color: #000000; font-size: 35px;">It's not just about timing</span></h2> <p>While the intention of server-timing headers may have originally been to get more insight into how long things are taking, another great use case involves sending across dimension data or metadata.</p> <p>For example, you might use the headers to pass across identifiers such as a datacenter location or other geographic details:</p> <p style="padding-left: 30px;"><code>server-timing: datacenter; desc="denDRP01"</code></p> <p>For debugging purposes, maybe send along a link to a session trace to provide a 'front-end' APM solution:</p> <p style="padding-left: 30px;"><code>server-timing: sessiontrace; desc="https://your.logfiles.com/transId=</code><span style="font-family: monospace;">T1234-5678-9012-3456"</span></p> <p>Or, did the request use early hints (or some other form of web performance magic):</p> <p style="padding-left: 30px;"><span style="font-family: monospace;">server-timing: earlyhints</span></p> <h2><span style="font-size: 35px; color: #000000;">Server timing and your CDN</span></h2> <p>For a large majority of sites, content delivery networks (CDNs) serve a critical role in delivering consistent user experiences. Caching the base page/HTML is common, and it should have a positive impact on backend times.&nbsp;But what happens when it doesn't?</p> <p>CDNs have traditionally been a bit of a black box when it comes to finding out where time is being spent. <strong>As we move more and more compute and other capabilities (e.g., bot management, WAF) to the 'edge', there are more and more checkpoints that often go unreported.</strong></p> <p>The use of server-timing headers by content delivery networks closes a big gap.
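<p>Client-side, these headers are exposed to JavaScript through the PerformanceServerTiming interface, but it's worth understanding the underlying format. Here's a simplified sketch of a parser for the header value (it ignores edge cases such as escaped quotes or commas inside quoted descriptions):</p>

```javascript
// Parse a server-timing header value into { name, dur, desc } entries.
// Simplified: assumes no escaped quotes or commas inside quoted strings.
function parseServerTiming(headerValue) {
  return headerValue.split(',').map((entry) => {
    const [name, ...params] = entry.trim().split(';');
    const metric = { name: name.trim() };
    for (const param of params) {
      const [key, ...rest] = param.trim().split('=');
      const value = rest.join('=');
      if (key === 'dur') metric.dur = parseFloat(value);
      if (key === 'desc') metric.desc = value.replace(/^"|"$/g, '');
    }
    return metric;
  });
}

const parsed = parseServerTiming(
  'cdn-cache; desc="HIT", edge; dur=4, origin; dur=0'
);
// parsed[0] -> { name: 'cdn-cache', desc: 'HIT' }
// parsed[1] -> { name: 'edge', dur: 4 }
```

<p>In the browser you rarely need to parse the raw header yourself: entries appear as parsed objects on <code>performance.getEntriesByType('navigation')[0].serverTiming</code> (for cross-origin resources, only when a <code>Timing-Allow-Origin</code> header is present).</p>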
Today, it's possible to add these headers from your CDN with ease, if they aren't already set up out of the box.</p> <h3>Key things to understand from your CDN</h3> <ul> <li><strong>Cache Hit/Cache Miss</strong>&nbsp;&ndash; Was the resource served from the edge, or did the request have to go to origin?</li> <li><strong>Latency</strong>&nbsp;&ndash; How much time does it take to deliver a packet from A to B? Also measured by round trip time (RTT).</li> <li><strong>Origin Time</strong>&nbsp;&ndash; How much time did the request spend at your origin? (In the case of a cache hit, this should be zero.)</li> <li><strong>Edge Time</strong>&nbsp;&ndash; How much time was spent at the CDN? This can include a lot of different service layers, not just serving from cache. For example, processing of web application firewall (WAF) rules, detecting bots or other malicious traffic through security services, and &ndash; growing in popularity &ndash; edge compute.&nbsp;</li> </ul> <p>Below are some examples from the major CDN providers that you can leverage.</p> <h3>Akamai</h3> <p>Akamai was the first to start emitting server-timing headers and set the tone for CDN transparency. This data is available by enabling the mPulse behavior in property manager. With the behavior enabled, you will start seeing the following server-timing headers:</p> <p style="padding-left: 30px;"><strong>Cache HIT/MISS</strong></p> <p style="padding-left: 30px;"><code>Server-Timing: cdn-cache; desc=&lt;MISS|HIT&gt; </code></p> <p style="padding-left: 30px;"><strong>Edge Time</strong></p> <p style="padding-left: 30px;"><code>Server-Timing: edge; dur=&lt;# ms&gt;</code></p> <p style="padding-left: 30px;"><strong>Origin Time</strong></p> <p style="padding-left: 30px;"><code>Server-Timing: origin; dur=&lt;# ms&gt;</code></p> <p><span style="color: #1f1f1f;">Note that if you don't use the mPulse product, you can still enable the headers without the snippet by modifying property settings.
Another option would be using EdgeWorkers to add the headers to the request, similar to the example shown next.</span></p> <h3>Amazon&nbsp;CloudFront</h3> <p>You can opt in to server-timing headers via the AWS Console, as mentioned&nbsp;<a href="https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-cloudfront-introduces-server-timing-headers/">in this post</a>.</p> <p style="padding-left: 30px;"><b>CDN Layer (edge, edge-cache, origin shield)&nbsp;</b></p> <p style="padding-left: 30px;"><b>DNS Time</b></p> <p style="padding-left: 30px;"><b>Connect Time</b></p> <p style="padding-left: 30px;"><b>Upstream/downstream first byte latency</b></p> <p style="padding-left: 30px;"><b>Cache status</b></p> <p style="padding-left: 30px;"><b>POP</b></p> <p>Here is an example of server-timing headers taken from&nbsp;<a href="https://www.perfwork.com/">https://www.perfwork.com/</a>:</p> <p style="padding-left: 30px;"><code>server-timing: cdn-upstream-layer;desc="EDGE",cdn-upstream-dns;dur=0,cdn-upstream-connect;dur=69,cdn-upstream-fbl;dur=562,cdn-cache-miss,cdn-pop;desc="DEN52-P3",cdn-rid;desc="5McHcGf1pCMEZKUtTuHH-UI7Co2qq-817CJu_cD7oVUo9BmxBtpIHQ==",cdn-downstream-fbl;dur=563</code></p> <h3>Cloudflare</h3> <p>Using <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, you can add values from existing headers such as <code>CF-Cache-Status</code> into server-timing headers.&nbsp;</p> <p><strong>Here is an example returning the cache status (hit, miss, revalidate, etc.):</strong></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">/**</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp;* @param {Response} response</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp;* @param {number} startTime</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">*/</span></code></p> <p style="padding-left: 30px;"><code><span style="color: 
#000000;">function addServerTimingHeaders(response, startTime) {</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp; const serverTiming = [];</span></code><code><span style="color: #000000;"><br /></span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp; const cfCache = response.headers.get('cf-cache-status');</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp; if (cfCache) {</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp; &nbsp; serverTiming.push(`cf_cache;desc=${cfCache}`);</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp; }</span></code><code><span style="color: #000000;"><br /></span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp; serverTiming.push(`worker;dur=${Date.now() - startTime}`);</span></code><code><span style="color: #000000;"><br /></span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">&nbsp; response.headers.set('Server-Timing', serverTiming.join(', '));</span></code></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">}</span></code></p> <p>Alternatively, you can add/modify response headers using transform rules via the dashboard or API as described <a href="https://developers.cloudflare.com/rules/transform/response-header-modification/">here</a>.&nbsp;</p> <h3><span style="color: #000000;">Fastly</span></h3> <p><span style="color: #000000;">While you could likely use Compute@Edge to add server-timing headers via Fastly, using VCL is pretty straightforward as discussed in <a href="https://www.fastly.com/blog/supercharging-server-timing-http-trailers">this post</a>.</span></p> <p><span style="color: #000000;">TLDR:</span></p> <p style="padding-left: 30px;"><strong><span style="color: #000000;">Request start (from edge)</span></strong></p> <p style="padding-left: 
30px;"><strong><span style="color: #000000;">Elapsed time (edge time)</span></strong></p> <p style="padding-left: 30px;"><strong><span style="color: #000000;">POP (edge location)</span></strong></p> <p style="padding-left: 30px;"><strong><span style="color: #000000;">Cache status (hit, miss)</span></strong></p> <p><span style="color: #000000;">To get the following:</span></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">Server-Timing: time-start-msec;dur=1544705663920,time-elapsed;dur=0,fastly-pop;desc=LCY,hit-state;desc=HIT</span></code></p> <p><span style="color: #000000;">Use the following VCL:</span></p> <p style="padding-left: 30px;"><code><span style="color: #000000;">set resp.http.Server-Timing = "time-start-msec;dur=" time.start.msec ",time-elapsed;dur=" time.elapsed.msec ",fastly-pop;desc=" server.datacenter ",hit-state;desc=" fastly_info.state;</span></code></p> <p><span style="color: #000000;">There are a lot of <a href="https://developer.fastly.com/reference/vcl/variables/">VCL variables</a> available that might be useful in server-timing headers.</span></p> <h3>Shopify</h3> <p>Shopify provides the following server-timing headers for all Shopify sites.
It's important to note that these are not considered public, so use at your own risk.</p> <p style="padding-left: 30px;"><code>server-timing: processing;dur=15, db;dur=5, asn;desc="7922", edge;desc="DFW", country;desc="US", theme;desc="Prestige", pageType;desc="index", servedBy;desc="8jlx", requestID;desc="4ab33c3d-21e6-425a-9754-a6f42a27d36f"</code></p> <p style="padding-left: 30px;"><code>server-timing: cfRequestDuration;dur=48.999786, earlyhints</code></p> <p>As of the writing of this article, our understanding of each of the headers is as follows:</p> <p style="padding-left: 30px;"><strong>cfRequestDuration</strong> = Duration from the time the request hits Cloudflare (CDN) until it is finished processing.</p> <p style="padding-left: 30px;"><strong>processing</strong> = Duration from the time the request reaches the Storefront until processing of the request is complete.</p> <p style="padding-left: 30px;"><strong>db</strong> = Duration of the request processing spent querying the database. (Subset of processing time)</p> <p style="padding-left: 30px;"><strong>asn</strong> = Autonomous System Number</p> <p style="padding-left: 30px;"><strong>edge</strong> = Location of CDN edge server</p> <p style="padding-left: 30px;"><strong>country</strong> = Country of CDN edge server</p> <p style="padding-left: 30px;"><strong>theme</strong> = Shopify theme used</p> <p style="padding-left: 30px;"><strong>pageType</strong> = page identifier</p> <p style="padding-left: 30px;"><strong>earlyhints</strong> = Were early hints used for the request (if present, assume yes)</p> <h2>Collecting server-timing headers with SpeedCurve</h2> <p>If you use&nbsp;<a href="https://www.speedcurve.com/features/performance-monitoring/">SpeedCurve RUM</a>, server timing is one of the preferred methods for capturing custom data. 
<a href="https://support.speedcurve.com/docs/using-server-timing">See this guide</a> to learn how you can define custom dimension data, metadata, or metrics (timing, size, numeric).</p> <p>Here are a few example charts created by leveraging server-timing headers provided by CDNs:<br /><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/backend_by_status.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing TTFB for each cache state (HIT | MISS | REVALIDATE)" /></p> <p style="text-align: center;"><em>TTFB by cache state</em></p> <p style="text-align: center;"><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/edge_ttfb.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Chart showing a correlated increase in TTFB and CDN Edge time" /></p> <p style="text-align: center;"><em style="text-align: center;">TTFB and CDN Edge time</em></p> <p>If you're not using SpeedCurve RUM and want to experiment with capturing server-timing headers,&nbsp;<a href="https://www.speedcurve.com/signup/">start a free trial today!</a></p> <h2>Recent server timing case studies&nbsp;</h2> <p>It's great to see server timing starting to get more use in the wild. Here are a couple of great blog posts from last year's <a href="https://calendar.perfplanet.com/">Web Performance Calendar</a>.
Definitely worth a read!</p> <ul> <li><a href="https://calendar.perfplanet.com/2023/server-timing-api-70-percent-perf-improvement/"><strong>How epilot used Server Timing to drive a 70% performance improvement</strong></a>&nbsp;</li> <li><a href="https://calendar.perfplanet.com/2023/ttfb-server-timing-measuring-monitoring-optimizing/"><strong>Breaking down TTFB with Server Timing</strong></a><br /><br /></li> </ul> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/458/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Tue, 06 Feb 2024 00:00:00 +1300 The psychology of site speed and human happiness https://www.speedcurve.com/blog/psychology-site-speed <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/mobile-stress-eeg.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In the fourteen years that I've been working in the web performance industry, I've done a LOT of research, writing, and speaking about the psychology of page speed &ndash; in other words, <strong>why we crave fast, seamless online experiences</strong>. In fact, the entire first chapter of my book, <a href="https://www.amazon.com/Time-Money-Business-Value-Performance/dp/1491928743">Time Is Money</a> (reprinted <a href="https://support.speedcurve.com/docs/psychology-of-web-performance">here</a>&nbsp;courtesy of the good folks at O'Reilly), is dedicated to the subject.&nbsp;</p> <p>I recently shared some of my favourite research at Beyond Tellerrand (video <a href="https://www.youtube.com/watch?v=leQZOmD7IYQ">here</a>) and thought it would be fun to round it up in a post. 
Here we're going to cover:</p> <ul> <li>Why time is a crucial (and often neglected) usability factor</li> <li>How we perceive wait times</li> <li>Why our memory is unreliable</li> <li>How the end of an experience has a disproportionate effect on our perception</li> <li>How fast we expect pages to be (and why)</li> <li>"Flow" and what it means in terms of how we use the web</li> <li>How delays hurt our productivity</li> <li>What we can learn from measuring "web stress"</li> <li>How slowness affects our entire perception of a brand</li> </ul> <p>There's a lot of fascinating material to cover, so let's get started!</p><h2>Time is a crucial usability factor</h2> <p>If you don't consider time a crucial usability factor, you're missing a fundamental aspect of the user experience.</p> <p>I'm embarrassed to admit that, in my previous career as a usability tester, I spent years testing websites in lab conditions. It never crossed my mind to take rendering time into consideration.</p> <p>In fairness, that was in the early 2000s, and site speed was barely on anyone's radar. A lot has changed since then, thankfully. <strong>There's been a wealth of research into why waiting is hard &ndash; which is why site speed matters &ndash; not just from a business perspective, but from a hard-wired neurological perspective.</strong>&nbsp;</p> <p>Let's start with a wide-angle look at how we humans handle waiting, in all its forms.</p> <h2>How do we perceive wait times?</h2> <p>Short answer: Poorly.</p> <p><a href="https://en.wikipedia.org/wiki/Queueing_theory">Queueing theory</a> is the mathematical study of waiting lines, both real and virtual. If you jump down the research rabbit hole, you can find some fascinating stories. My favourite is one that takes place at a Houston airport.&nbsp;</p> <p>The airport's customer relations department was fielding a huge number of complaints about how long they were forced to wait for their luggage at the baggage carousels. 
Airport executives tried to fix the problem by hiring more baggage handlers, which cut average wait times down to seven minutes. But the number of complaints remained the same.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/airport.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The solution? Rather than hire more baggage handlers, they simply made sure that the arrival gates for each flight were located as far as possible from the assigned baggage carousel for that flight. As a result, passengers had to walk six times longer to get to their bags, while the average wait time at the carousel dropped to a minute.&nbsp;</p> <p><strong>Passenger complaints dropped to almost zero.</strong></p> <p>The takeaways from this experiment:</p> <ul> <li>Waiting is hard</li> <li>Passive waiting is even harder</li> <li>Perceived speed is more important than reality</li> </ul> <p>These principles apply to waiting in almost any context, including waiting for pages to load.</p> <h2>Our memory is unreliable</h2> <p>Our perception of time varies according to many factors, including (but certainly not limited to) our age, our location, our emotions, and assorted external stimuli. Not surprisingly, this inconsistency applies to our online experiences as well:</p> <p><strong>The average web user perceives load times as being 15% slower than they actually are.</strong> Later, when recalling the experience, they remember load times as being 35% slower.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/perception.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The average person believes they spend 9 minutes per day waiting for slow websites. This translates to two full days every year. 
(This statistic is an interesting gauge of how people feel about the web, even if it is not entirely accurate.)</p> <p>Adding indicators like spinners and progress bars can trick us into believing that pages are up to 10% faster than they actually are. Not only do we feel wait times to be slower than they actually are, we later remember them as being even slower.</p> <h2>The end of an experience has a disproportionate effect on perception&nbsp;</h2> <p>The "colonoscopy effect" was identified in a study in which two patients tracked their perceived levels of pain throughout a colonoscopy procedure.</p> <p>In the chart below, you can see the data for Patient A and Patient B. Even though Patient A's experience was shorter and had similar-sized pain peaks, they <em>believed</em> their experience was longer and more painful. The conclusion was that, because patient A's experience ended at a painful peak, that pain coloured their overall perception.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/colonoscopy.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />What does this finding mean in web performance terms? <strong>If your site delivers a relatively fast experience throughout most of a user's journey, but then is slow and janky during the last stage &ndash; for example, checkout &ndash; then your users may take away a disproportionate sense of your site's overall slowness.</strong>&nbsp;</p> <p>Is that fair? Perhaps not, but it's how our brains work.</p> <h2>How fast do we expect web pages to be?</h2> <p>While what we say we expect from online experiences is inaccurate and highly variable, how we actually respond to different page speeds is much more consistent &ndash; and has been so for several decades.</p> <p>In 1968, Robert Miller published a study called <a href="https://dl.acm.org/doi/10.1145/1476589.1476628">Response Time in Man-Computer Conversational Transactions</a>. 
In the study, Miller shared that <strong>a wait of longer than 2 seconds breaks concentration and hurts productivity</strong>.</p> <p>In <a href="https://www.nngroup.com/articles/response-times-3-important-limits/">1993</a> and again in <a href="https://www.nngroup.com/articles/website-response-times/">2010</a>, usability expert Jakob Nielsen found that:</p> <ul> <li>0.1 seconds gives us the illusion of instantaneous response</li> <li>1 second keeps our flow of thought seamless</li> <li>10 seconds is enough to keep our attention &ndash; barely.</li> <li>After 10 seconds, our minds wander, making it harder to get back on task once a page finally loads.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/nielsen-response-times-no-bckgrnd.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The internet may change, and web pages may grow and evolve, but user expectations are constant. <strong>The numbers about human perception and response times have been consistent for more than 45 years.</strong> These numbers are hard-wired. We have zero control over them. They are consistent regardless of the type of device, application, or connection we are using at any given moment.</p> <p>But why? This is where things get really interesting.</p> <p>Nielsen has stated that human responses to poor load times are based on two aspects of how our brains function:</p> <ul> <li><strong>Our poor short-term memory</strong>&nbsp;&ndash; Information stored in short-term memory decays quickly.</li> <li><strong>Our need to feel in control</strong>&nbsp;&ndash; Being forced to wait makes us feel powerless and frustrated.</li> </ul> <h2>Impatience: It's in our heads</h2> <p>Our impatience is an indelible part of our incredible human circuitry. 
At any given moment, there are three types of memory processing at work in your brain:</p> <ul> <li>Sensory memory</li> <li>Short-term memory</li> <li>Working memory</li> </ul> <p>(There's also long-term memory, but it doesn't really come into play here.)</p> <h3>Sensory memory</h3> <p>Every time you see something, this visual information is taken in by photoreceptor cells in your eyes and sent to the occipital lobe in your brain. This is your iconic memory. It's just one of your three types of sensory memories. (The other two govern sound and touch.)</p> <p>People have been studying how iconic memory works for almost 300 years. In <a href="https://en.wikipedia.org/wiki/Persistence_of_vision#1820%E2%80%931866:_Revolving_wheel">one of the earliest studies</a>, back in the early 1800s, a glowing coal was attached to the wheel of a cart. The wheel was spun faster and faster until observers perceived an unbroken circle of light. The study concluded that the glowing coal had to perform a complete cycle in 100 milliseconds or less in order to create the illusion of a fiery circle.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/burning-wheel.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>That early study identified the phenomenon we now call "persistence of vision", which is predicated on the fact that <strong>our iconic memory holds on to visual information for about 100 milliseconds</strong>. After that, the "memory store" runs out and the iconic memory needs to be refreshed with new visual information. This number has remained consistent throughout the centuries.</p> <p>Interestingly, and perhaps not coincidentally, 100 milliseconds is Google's stated goal when it comes to page load times.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/100ms-new.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />Iconic memory, along with the other two types of sensory memory, is primitive. 
We can't consciously choose what information is stored in it, and we can't will it to last longer. (If we could, we'd probably go insane or accidentally walk in front of a bus.)</p> <p>Some sensory memory does stick, of course... provided it's used quickly and eventually consolidated into your long-term memory.</p> <h3>Short-term memory and working memory</h3> <p>If our sensory memory's role is to provide comprehensive information on our entire sensory experience, it's our short-term memory's job to extract the relevant bits and throw them into the hopper of our working memory. <strong>Your short-term memory can store information for 10-15 seconds</strong> &ndash; at most &ndash; just enough time for your working memory to process, manipulate, and control it.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/15s-new.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" />So the goal in getting perceived page rendering down to 100 milliseconds is to:</p> <ol> <li>Keep information from falling through the cracks in our iconic memory, while we</li> <li>Give our short-term and working memory ample time to do all the parsing they need to do before they start losing information.</li> </ol> <h2>What is "flow" and what does it mean in terms of how we use the web?</h2> <p>For hundreds of thousands of years, human beings have evolved to perform actions in beautiful, sequential flows. Our day-to-day tasks &ndash; building a fire, hunting antelope, baking bread, milking a cow &ndash; have been composed of a series of minute actions that flow more or less seamlessly into the next.</p> <p>In his book <em>Finding Flow: The Psychology of Engagement with Everyday Life</em>, noted psychology researcher <a href="https://en.wikipedia.org/wiki/Mihaly_Csikszentmihalyi">Mihaly Csikszentmihalyi</a> observes that <strong>people who perform seamless, sequence-based activities on a regular basis are happier than people who do not</strong>.
He coined the term "flow" to describe this state of being.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/flow.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's only in the past 40 years, with the advent of computers in our homes and workplaces &ndash; and even our pockets! &ndash; that we have imposed a new set of demands on our brains. As most of us are painfully aware, instead of offering a series of smoothly sequential actions, computer use is characterized by lag, downtime, and restarts.</p> <p>Simply put, <strong>our flow-oriented brains are not wired to deal with the fits and starts of human-computer interaction</strong>.</p> <p>There are some people who are skeptical about the impact of lag, downtime, and restarts on productivity and other performance indicators. A common argument is that most people do, in fact, adjust to poor performance. As it turns out, those people may be somewhat correct in their assumption, but they may also be focusing on the wrong part of the picture.</p> <h2>Do delays really hurt productivity?</h2> <p>In a <a href="https://dl.acm.org/doi/10.1145/985692.985715">1999 study of workplace interruptions</a>, groups of workers were subjected to various disruptions in the course of their day-to-day responsibilities. 
They were then measured in terms of:</p> <ul> <li>their productivity, and</li> <li>their self-reported state of mind.</li> </ul> <p>While that study focused on general workplace interruptions, with only some attention given to human-computer interaction, there were some fascinating findings that are quite arguably relevant to web performance:</p> <p><strong>Finding 1: Participants developed strategies that let them deal effectively with interruptions and maintain their productivity</strong></p> <p>The research suggested that, at least for some workers in some environments, not only did they learn how to cope with interruptions, they may even have striven to overcompensate for their potential performance decline.</p> <p><strong>Finding 2: However, this coping mechanism is achieved at the expense of higher psychological costs</strong></p> <p>Cumulatively, interruptions had a negative impact on emotions and well-being. In addition, participants ultimately needed to increase the amount of effort required to perform the same tasks.</p> <p><strong>Finding 3: Over time, interruptions affected participants' ability and willingness to resume work and take on new tasks</strong></p> <p>Interruptions seemed to have a cumulative effect. When the number of interruptions grew, the resumption time (i.e., the time needed to restart the task) became disproportionately longer. The participants seemed to lose motivation and develop mental fatigue.</p> <h2>What do these findings mean in web performance terms?</h2> <p>When dealing with application delays, it is possible that people can develop coping strategies that allow them to maintain productivity in the short term. But the missing ingredient here is flow. 
And without flow, eventually our sense of motivation and well-being suffers.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/interruptions.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>It's important to remind ourselves that application performance is just one part of the greater world. Our everyday lives are filled with events &ndash; from sitting in traffic to standing in line at the grocery store &ndash; that challenge our need for flow.</p> <p>Slow websites are just one problem, but for those of us who spend much of our work and personal time online, <strong>slow sites create extra friction in an already friction-filled world.</strong>&nbsp;The effects are cumulative, as most of us are not capable of compartmentalizing our stress.</p> <h2>"Web stress" is measurable</h2> <p>When websites perform poorly, we react badly. (There is even some research that suggests <a href="https://simonhearne.com/2021/web-stress/">using slow websites increases our blood pressure</a>!) This is not surprising given what we now know about our deep craving for flow.</p> <p>In 2011, CA Technologies commissioned Foviance, a customer experience consultancy, to conduct <a href="https://docplayer.net/5537237-Web-stress-a-wake-up-call-for-european-business-study-conducted-by-foviance-on-behalf-of-ca-february-2010.html">a series of lab experiments</a> at Glasgow Caledonian University. The participants wore an EEG (electroencephalography) cap to monitor their brainwave activity while they performed routine online transactions.
Participants completed tasks using either a 5 Mbps web connection or a connection that had been artificially slowed down to 2 Mbps.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/desktop-eeg.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Brainwave analysis from the experiment revealed that <strong>participants had to concentrate up to 50% more when using websites via the slower connection</strong>. When asked what they liked most and least about the websites they used during the study, participants frequently cited speed as a top concern:</p> <p style="padding-left: 30px;"><em>"The website was very slow, so it took a really long time to load the book preview."</em></p> <p style="padding-left: 30px;"><em>"What I liked least about the site is its speed."</em></p> <p>The study also found that people were most likely to experience the greatest levels of stress during these points in the transaction process:</p> <ul> <li>Search</li> <li>Finding and selecting products</li> <li>Checkout</li> <li>Entering personal information and concluding the sale</li> </ul> <p>Intuitively, this makes sense. Online shopping already comes with an inherent amount of stress, as most of us are concerned with finding the right item at the best possible price. And the checkout process &ndash; when we hand over our personal and credit card information &ndash; is fraught with a certain amount of stress as well. Add page slowdowns to the mix and it is easy to understand why the online shopping experience can become unpleasant.</p> <h2>Mobile users feel "web stress" too</h2> <p>Based on the desktop neuroscientific research conducted by CA Technologies, Radware conducted a <a href="https://www.radware.com/blog/applicationdelivery/wpo/2014/06/mobile-web-performance-stress/">similar study</a> in 2013, this time focusing on users of mobile devices.</p> <p>(Disclosure: I worked at Radware at the time and directed the study.
In order to ensure that there were no biases, the research and analysis were outsourced to a third-party neuroscientific research company called Neurostrata.)</p> <p>The mobile stress study involved using a groundbreaking combination of eyetracking and electroencephalography (EEG) technologies to monitor neural activity in a group of mobile users who were asked to perform a series of online transactions via mobile devices. (Below is a photo of one of the study participants.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/mobile-stress-eeg.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>In the study, participants were asked to complete standardized shopping tasks on four ecommerce sites while using a smartphone. Some participants were served pages at normal speeds over WiFi, while others were served pages at a consistently slowed-down speed (using software that created an artificial 500-millisecond network delay).</p> <p>The participants did <em>not</em> know that speed was a factor in the tests; rather, they believed they were participating in a generic usability/brand perception study.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/mobile-strees.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Some highlights of the study's findings:</p> <ul> <li><strong>Users experienced frustration peaks of up to 26% at critical points.</strong></li> <li>As in the CA Technologies study, frustration peaks were most common during the browsing and checkout phases.</li> <li>Faster pages correlated with increased user engagement.
(That's a good thing!)</li> <li>All users experienced some level of "web stress" even under ideal mobile browsing conditions.</li> </ul> <h2>Slowness affects our entire perception of a brand</h2> <p>Yes, that includes non-performance aspects of the site, such as content, design, and navigation.</p> <p>After the mobile stress study above, we conducted exit interviews with participants, in which we asked them about their impressions of the site and the company. We then poured all the adjectives from the interviews into a word cloud generator and generated clouds for each version (normal and slow) of each site.</p> <p>Reminder: The only differentiator was site speed. And because this was a blind study, the testers were not consciously aware of the speed difference. The results indicate that slower page speed affects the brand on a global level.</p> <p>What we found: <strong>Slow pages undermine overall brand health.</strong></p> <p>This is the word cloud that was generated by test participants after using the site at normal speed:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/mobile-stress-normal.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>And this is the word cloud that was generated by participants after experiencing the same site with a 500ms network delay:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/mobile-stress-slow.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>While it&rsquo;s true that both word clouds contain positive and negative descriptors, it is important to note that the word cloud for the slower site contains almost three times more negative adjectives than the faster site. 
The adjectives shift from mainly easy-to-use (in the first word cloud) to a range of negative associations (in the second word cloud) &mdash; solely because of the page delays.</p> <p>While some participants clearly picked up on the slight deterioration in performance (&ldquo;slow&rdquo; and &ldquo;sluggish&rdquo;), participants also developed negative perceptions of areas that are unrelated to speed. They reported that the site seemed "boring", "inelegant", "clunky", "tacky", and &ldquo;hard to navigate".</p> <p>In other words, <strong>slower page loads affected people&rsquo;s perception of three important aspects of the site</strong> that are completely unrelated to load time:</p> <ul> <li>Content (&ldquo;boring&rdquo;)</li> <li>Visual design (&ldquo;tacky&rdquo; and &ldquo;confusing&rdquo;)</li> <li>Ease of navigation (&ldquo;frustrating&rdquo; and &ldquo;hard-to-navigate&rdquo;)</li> </ul> <h2>Takeaway</h2> <p>There is a fascinating disconnect between what we say we want and what &ndash; deep down &ndash; we really <em>need</em> from our online experiences.</p> <p>Over the past dozen or so years, user surveys have revealed that what we claim to want changes over time &ndash; from 8-second load times back in 1999 to 4 seconds in 2006 to around 2 seconds today. If we were to believe these surveys, then we would conclude that we are an increasingly hasty, impatient species. We might be tempted to judge (or pity) ourselves as victims of our frantic modern lives.</p> <p>But neuroscientific research &ndash; which studies how we actually take in and respond to visual information &ndash; tells a very different story. Over the decades, researchers have reproduced the same results: that, by and large, we function at our happiest best when our websites and apps (and technology in general) respond in fractions of a second. 
We may learn how to adapt to slower response times, but this adaptation will always &ndash; or at least for the foreseeable future &ndash; be awkward and uneasy.</p> <p>Yes, there's a <a href="https://www.speedcurve.com/customers/">business case</a> for making your site faster. But caring about web performance is about more than business.</p> <p>As technologists &ndash; and as empathetic human beings &ndash; we need to do more than just deliver adequate online experiences. <strong>Our goal should be to deliver online experiences that are frictionless and delightful, so our visitors leave our sites and apps happier than when they arrived.</strong></p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/happy.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/463/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Tue, 30 Jan 2024 00:00:00 +1300 Building the future of web performance with SpeedCurve https://www.speedcurve.com/blog/building-future-performance <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/473/tim-circle-blog.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>I&rsquo;m beyond excited to announce that I&rsquo;m joining the SpeedCurve team this year! I&rsquo;ll still be doing some consulting work, but I&rsquo;ll be taking on fewer clients this year so I can focus on helping to make an already amazing performance tool even better, working alongside some of my favorite people in the performance community.</p><h2><span style="font-size: 35px; color: #000000;">A long time coming</span></h2> <p>I started working on performance back in 2010 (oh no, I&rsquo;m old).
I remember reading Steve Souders's <a href="https://www.amazon.com/High-Performance-Web-Sites-Essential/dp/0596529309" target="_blank" rel="noopener">book</a> and immediately becoming hooked&mdash;living in the middle of nowhere with terrible connectivity and devices that tended to lag behind meant the pain of bad performance was something I couldn&rsquo;t ignore.</p> <p>I got to know Steve a bit and when I attended my first Velocity event in 2012, <a href="https://twitter.com/tameverts" target="_blank" rel="noopener">Tammy</a>, <a href="https://twitter.com/AndyDavies" target="_blank" rel="noopener">Andy</a> and <a href="https://twitter.com/cliffcrocker" target="_blank" rel="noopener">Cliff</a> were some of the first people I met. So much of what I know about performance, I learned from this group of folks. They were patient with advice and feedback and endless questions over the years, and I continue to learn from them daily.</p> <p>I met <a href="https://twitter.com/MarkZeman" target="_blank" rel="noopener">Mark</a> shortly after.</p> <p>I can remember seeing very, very early versions of SpeedCurve before it was launched and getting incredibly excited about the tool and what Mark was doing with it.</p> <p>I got even more excited when, later, Mark and Steve pulled me aside at a conference to show me the new RUM capabilities they were going to start adding. (At the time, I think it was just a simple log of all the beacons coming back to them.) I&rsquo;m a firm believer that the best possible thing a company can do with monitoring is pair their <a href="https://www.speedcurve.com/features/performance-monitoring/" target="_blank" rel="noopener">RUM and synthetic</a> data together, and SpeedCurve was the first tool I&rsquo;d ever seen put them both under the same umbrella.</p> <p>It&rsquo;s been fun to watch the tool develop over the years from a small newcomer to an influential tool that has quite literally paved the way to better metrics.
(SpeedCurve&rsquo;s fingerprints are all over the development of critical paint metrics like Largest Contentful Paint, for example.)</p> <p>Long story short, I&rsquo;ve been a fan of the tool for a long time, and the team is composed of people who I&rsquo;ve been learning from since literally day one of my performance career. A chance to work alongside longtime friends on an industry-critical tool&mdash;hard to beat that!</p> <h2>"What would you say you do here?"</h2> <p>I&rsquo;ve been lucky enough to work on performance from a variety of angles over the past 14 years. I&rsquo;ve been a consultant. I&rsquo;ve worked on tools. I&rsquo;ve been a product and engineering leader. I&rsquo;m a bit like a performance squirrel, I guess: there&rsquo;s a lot about performance that interests me and I&rsquo;ve worked on it from pretty much every angle at this point.</p> <p>The one common element through it all is that I love doing whatever I can to help make the web faster.</p> <p>So&hellip; I&rsquo;m going to continue doing a little bit of everything with SpeedCurve.</p> <p>In addition to my own consulting, I&rsquo;ll be doing some <a href="https://www.speedcurve.com/features/consulting/" target="_blank" rel="noopener">consulting</a> with SpeedCurve customers. I&rsquo;m also excited to work alongside this <strong>absurdly</strong> talented <a href="https://www.speedcurve.com/about/">team</a> to help make SpeedCurve an even better product than it already is. There are some big plans, and a massive opportunity to make it easier for everyone to build a faster web.</p> <p>I&rsquo;m over the moon about this&mdash;it&rsquo;s been a long time coming and I feel honored and humbled to be able to join the team.</p> Tue, 16 Jan 2024 00:00:00 +1300 How to find (and fix!)
INP interactions on your pages https://www.speedcurve.com/blog/interaction-to-next-paint-find-ix-elements <p>Andy Davies &ndash; fellow SpeedCurver and web performance consultant extraordinaire &ndash; recently shared an impressive Interaction to Next Paint (INP) success:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/469/inp-interactions-3.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Andy has promised us a more in-depth post on debugging Interaction to Next Paint. While he's working on that, I'll try not to steal his thunder while I share a tip that may help you identify element(s) causing INP issues for your pages.</p><h2>What is Interaction to Next Paint?</h2> <p>As of March 2024, Interaction to Next Paint (INP) will be one of the three <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals">Core Web Vitals</a>, a set of metrics intended to measure web performance from an end user's perspective. <strong>INP measures how responsive a page is to user interaction(s).</strong> This is measured based on how quickly the page responds visually after a user interaction (i.e., when the page paints something in the browser's next frame after you interact with it).&nbsp; &nbsp;</p> <p>For the purposes of INP, an interaction is considered any of the following:</p> <ul> <li>Mouse click</li> <li>Touchscreen tap</li> <li>Key press</li> </ul> <p>Because INP measures actual user interactions, it can only be monitored using a real user monitoring (RUM) tool. INP is measured using the Event Timing API, which is supported in Chrome, Edge, Firefox and Opera. (Alas, currently there is no support for INP in Safari.)</p> <h2>What are interaction metrics?</h2> <p>Interaction (IX) metrics focus on the first interaction a user has with an element on your page. 
In SpeedCurve RUM, these interactions include:</p> <ul> <li>IX Type &ndash; Click/tap, scroll, key press</li> <li>IX Time &ndash; When the first interaction occurred relative to the start of the initial navigation to the page</li> <li>IX Element &ndash; The element the user interacted with</li> </ul> <p>We've been measuring interaction metrics (IX metrics) in RUM for a while. <strong>Knowing when users are able to interact with your site is an important part of understanding the user experience.</strong></p> <p>We've talked a lot about when a user interacts with the page, which most commonly is well after the page has loaded. This poses a challenge for measurement in the field, as a lot of RUM providers, including us, are somewhat beholden to the onload event to fire our beacon. We've addressed this by sending a second beacon on that first interaction, which in turn has allowed us to capture other metrics such as First Input Delay (FID) and Interaction to Next Paint (INP).</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/469/inp-ix-timeline.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>INP measures the responsiveness of those first interactions. We've yet to start exposing the INP element or associated LoAF attribution (coming soon), but have found that a lot of the time you can use the IX element as a starting point for troubleshooting INP.</p> <h2>How to identify popular IX elements in SpeedCurve</h2> <p>With SpeedCurve RUM, we've found that the IX element is likely to be the INP element in A LOT of cases. 
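<p>If you want to poke at the raw data yourself, the Event Timing entries that INP is computed from are easy to observe. Here's a rough sketch (the helper name is ours, and this simplification ignores the grouping of events into interactions via <code>interactionId</code> that the real metric performs): INP is approximately the worst interaction latency, ignoring one outlier for every 50 interactions observed.</p>

```javascript
// Rough INP approximation over Event Timing durations (in ms).
// INP reports the worst interaction, but skips one outlier per 50
// interactions so long sessions aren't judged by a single fluke.
function estimateINP(durations) {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const skip = Math.min(Math.floor(durations.length / 50), sorted.length - 1);
  return sorted[skip];
}

// In a browser, feed it Event Timing entries. (Guarded so the pure
// helper above can also run outside a browser.)
if (typeof PerformanceObserver !== 'undefined') {
  const durations = [];
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) durations.push(entry.duration);
    console.log('approximate INP:', estimateINP(durations), 'ms');
  }).observe({ type: 'event', durationThreshold: 16, buffered: true });
}
```

<p>None of this is required to use SpeedCurve RUM, but it's a handy sanity check when you're debugging a specific page in DevTools.</p>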
Here is a quick workflow you can use that may help you find and fix INP issues.</p> <p><strong>Step 1:&nbsp;</strong>Using the heatmap in the RUM Performance dashboard, identify pages with slow INP.</p> <h3><img class="blog-img" src="https://blog-img.speedcurve.com/img/469/heatmap.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Heatmap showing CWV for pages sorted by popularity" /></h3> <p><strong>Step 2:</strong>&nbsp;Navigate to the <strong>RUM Design dashboard</strong> and filter to the page label from the heatmap associated with the slow INP value.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/469/design_filter.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Dropdown filter on the Design dashboard showing a list of options with page label selected" /></p> <p><strong>Step 3:</strong>&nbsp;Identify the most popular interaction elements for that page shown in the Interaction Element section. (Note that the elements listed are NOT associated with scroll interactions. This is important, as INP, FID, LCP, etc. do not consider the scroll event an interaction.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/469/ix_element_filtered.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Bar chart showing a list of interaction elements for the filtered page sorted by popularity" /></p> <p>Voil&agrave;! I hope this helps you address any outstanding INP issues.</p> <p>We're always working on ways to provide deeper RUM diagnostics. It's going to be a great year!</p> <h2>More INP reading</h2> <ul> <li><a href="https://www.speedcurve.com/blog/interaction-to-next-paint-core-web-vitals/" target="_blank" rel="noopener">Farewell FID...
and hello Interaction to Next Paint!</a></li> <li><a href="https://support.speedcurve.com/docs/add-interaction-to-next-paint-inp-to-your-core-web-vitals-dashboard" target="_blank" rel="noopener">How to add Interaction to Next Paint to your Core Web Vitals dashboard</a></li> <li><a href="https://www.speedcurve.com/blog/INP-user-experience-correlation/" target="_blank" rel="noopener">Does Interaction to Next Paint actually correlate to user behavior?</a></li> <li><a href="https://www.speedcurve.com/blog/core-web-vitals-inp-mobile/" target="_blank" rel="noopener">Mobile INP performance: The elephant in the room</a></li> <li><a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals" target="_blank" rel="noopener">Get started with Core Web Vitals</a></li> </ul> <h2>Have you tried SpeedCurve RUM?</h2> <p><strong>Existing customer?</strong> <a href="mailto:support@speedcurve.com">Reach out to us</a> to enable your free trial.</p> <p><strong>Not using SpeedCurve yet?</strong>&nbsp;<a href="https://www.speedcurve.com/signup/">Start your free trial</a>.</p> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/469/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Wed, 03 Jan 2024 00:00:00 +1300 2023 recap: This year was all about making performance easy (well, easier) https://www.speedcurve.com/blog/2023-easy-web-performance <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/easy-button.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Every year feels like a big year here at SpeedCurve, and 2023 was no exception.</p> <p>Among other things, <a href="https://www.speedcurve.com/blog/10-things/" target="_blank" rel="noopener">we turned ten</a>! Ten years is a lot of time to reflect, and over this past year&nbsp;our team has been thinking a lot about not just the "what" and "how" of web performance, but also the "why". 
Why should we &ndash; and you &ndash; care about delivering a fast, delightful experience to all your users? This "why" informs all the choices we make about the "what" and "how" of our tools.</p> <p>Looking back over the past year, if I were to pick a word that defined our goals, that word would be "easier". It's no secret that the past couple of years have been challenging for the tech community. <strong>In the current landscape of smaller teams, aggressive goals, and an ever-increasing tech stack, how can we make it easier for you to create impact?</strong>&nbsp;</p> <p>Our biggest achievements this year have centred on making it easier for you to:</p> <ul> <li>Fight regressions and keep your sites fast</li> <li>Measure the impact of Core Web Vitals on actual user experience</li> <li>See correlations between site speed and business metrics</li> <li>Visualize your data</li> <li>Leverage synthetic and real user monitoring</li> <li>Focus on what matters</li> <li>Get help when you need it</li> </ul> <p>Keep reading to learn more...</p><h2>Easier to fight regressions</h2> <p>Integrating SpeedCurve into your development pipeline &ndash; so you can take advantage of <a href="https://www.speedcurve.com/blog/performance-budgets/">performance budgets and alerts</a> to catch issues before (or right after) they go into production &ndash; has been a popular practice over the last several years.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/budget-vs-goals.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Our latest <a href="https://www.speedcurve.com/blog/december-2023-update/">CI/CD workflow improvements</a> have radically evolved the developer experience and the overall time to value for folks who use SpeedCurve in their CI/CD pipeline.&nbsp;</p> <h2>Easier to measure the impact of Core Web Vitals on actual user experience</h2> <p>It's been almost four years since the Core Web Vitals initiative was kicked off in May 2020. 
In that time, we've seen people's interest in performance dramatically increase, especially in the world of SEO. It's been hugely helpful to have a simple set of three UX-oriented metrics &ndash; focused on loading, interactivity, and responsiveness &ndash; that everyone can understand and focus on.</p> <p>Over the past few years, we've done more than just support Core Web Vitals. We've analyzed them to validate their usefulness, and we've developed new ways to visualize your Vitals data to be as helpful as possible. Most recently, this includes:&nbsp;</p> <h3>Interaction to Next Paint&nbsp;</h3> <p>INP is intended to measure how responsive a page is to user interaction(s). This is measured based on how quickly the page responds visually after a user interaction (i.e. when the page paints something in the browser's next frame after you interact with it).&nbsp;</p> <ul> <li>We've added INP everywhere your Vitals are showcased throughout SpeedCurve &ndash; including the Home, Vitals, Performance, and JavaScript dashboards.</li> <li>We've also demonstrated how you can&nbsp;<a href="https://www.speedcurve.com/blog/INP-user-experience-correlation/">validate that INP correlates to user behaviour</a>&nbsp;for your own site.</li> <li>And we've discovered that&nbsp;<a href="https://www.speedcurve.com/blog/core-web-vitals-inp-mobile/">INP for mobile has an even stronger correlation with user engagement metrics than INP for desktop</a>.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/inp-conversions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Cumulative Layout Shift<span style="color: #000000;">&nbsp;</span></h3> <p>When CLS was originally launched, it measured all the layout shifts on a page. Unfortunately, this collided head-on with developers moving to single-page apps (SPAs). Since a SPA is by definition a full app shoved inside a single page, the CLS for a SPA would continue to grow for the lifetime of the app/page. 
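<p>The difference between the two definitions is easy to express in code. Given a list of layout-shift entries (each with a value and a start time), the original metric simply summed every shift for the life of the page, while the session-window version described below caps each burst. This sketch is illustrative; the entry shape and function names are ours:</p>

```javascript
// Each shift is { value, startTime }, with startTime in ms.

// Original CLS: an ever-growing sum, unbounded for long-lived SPAs.
function originalCLS(shifts) {
  return shifts.reduce((sum, s) => sum + s.value, 0);
}

// Session-window CLS (the 2021 definition): a window lasts at most 5s,
// a gap of more than 1s starts a new window, and the page's CLS is the
// largest window total.
function sessionWindowCLS(shifts) {
  let cls = 0;
  let windowTotal = 0;
  let windowStart = -Infinity;
  let lastShift = -Infinity;
  for (const { value, startTime } of shifts) {
    if (startTime - lastShift > 1000 || startTime - windowStart > 5000) {
      windowTotal = 0; // start a new session window
      windowStart = startTime;
    }
    windowTotal += value;
    lastShift = startTime;
    cls = Math.max(cls, windowTotal);
  }
  return cls;
}
```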
This led to massive CLS scores for SPAs and a rethink of how CLS should be measured.</p> <p>In 2021, Google relaunched CLS with a session window. The idea was to measure a burst of layout shifts within a 5s session window. A new session starts if there is a gap of more than 1s between layout shifts. At the end of the page lifespan, the session window with the highest total layout shifts becomes CLS.&nbsp;By introducing a session window, Google reduced the impact of the SPA problem. The CLS for any long-lived page was now limited to a 5s window containing the largest layout shifts.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/cls-windows.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>To help people get their heads around CLS windows, we've introduced a new visualization (above) to SpeedCurve that shows you each session window, as well as which session is the largest and therefore used as the CLS score.</p> <h3>Element Timing: An alternative to LCP&nbsp;</h3> <p>Largest Contentful Paint is a sound concept for a metric, but sometimes it's not the right metric for your pages. This is where Element Timing comes in.&nbsp;Similar to LCP, Element Timing measures when a DOM element is rendered, and it's available in synthetic and RUM. Unlike LCP, Element Timing allows you to do the following:</p> <ul> <li>Decide which elements you want to measure</li> <li>Measure more than one element on a page</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/element-timing.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>We've made enhancements to help you capture Element Timing data so you can always make sure you're measuring the most important content on your pages. 
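<p>Under the hood, Element Timing is opt-in: you annotate the elements you care about with an <code>elementtiming</code> attribute and then observe the resulting entries. A minimal sketch (the identifiers and helper function are our own):</p>

```javascript
// Markup: annotate the elements you want timed, e.g.
//   <img src="hero.jpg" elementtiming="hero-image">
//   <h1 elementtiming="headline">Holiday deals</h1>

// Pure helper: find the render time for one identifier among
// PerformanceElementTiming-like entries. renderTime can be 0 for
// cross-origin images without Timing-Allow-Origin, so fall back
// to loadTime.
function renderTimeFor(entries, id) {
  const entry = entries.find((e) => e.identifier === id);
  if (!entry) return undefined;
  return entry.renderTime || entry.loadTime;
}

// Observe the entries in the browser (guarded for non-browser runs):
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    const t = renderTimeFor(list.getEntries(), 'hero-image');
    if (t !== undefined) console.log('hero image rendered at', t, 'ms');
  }).observe({ type: 'element', buffered: true });
}
```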
Learn more <a href="https://support.speedcurve.com/docs/using-element-timing">here</a>.</p> <h2>Easier to see correlations between site speed and business metrics</h2> <p>Earlier this year, I wrote about <a href="https://www.speedcurve.com/blog/web-performance-plateau/">why you need to know the performance plateau for your site</a> &ndash; the point at which changes to your website&rsquo;s rendering metrics (such as Start Render and Largest Contentful Paint) cease to matter because you&rsquo;ve bottomed out in terms of business and user engagement metrics. In other words, if your performance metrics are on the performance plateau, making them a couple of seconds faster probably won't help your business.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/plateau.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>To find your performance plateau, first you need to create a correlation chart (above) that shows the relationship between your performance metrics and business KPIs. <a href="https://support.speedcurve.com/docs/conversions">We've made it easier for you to add your own custom data</a> &ndash; such as conversion data &ndash; to SpeedCurve.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/conversion.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>After you've added conversion data, you're all ready to create lots of compelling visualizations to get your organization to care about site speed.</p> <h2>Easier to visualize your data</h2> <p>We're always trying to create new ways to help you understand and share your web performance data as effectively as possible. This year we added and enhanced a few chart types.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/chart-types.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Filmstrip</h3> <p>Previously this ability was limited to curated synthetic dashboards. 
Now you can create your own filmstrip visuals in your Favorites to illustrate comparisons or show the impact of changes to the critical rendering path for your site.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/charts-filmstrips.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>Value</h3> <p>Several users have asked for the ability to show 'numbers only' when creating a Favorites dashboard. This new chart type does just that. Value charts can be useful for reporting or when you want to show a lot of numbers in a dashboard, but don't necessarily want the noise of time series data.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/charts-value.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h3>RUM heatmap</h3> <p>RUM heatmaps (found on your RUM Performance dashboard) have been updated to include:</p> <ul> <li><strong>Interaction to Next Paint (INP)</strong> &ndash; The heatmap now includes all Core Web Vitals (LCP, CLS and INP) in addition to Backend Time and Start Render. Now you can see at a glance which pages are suffering from poor Vitals, and which pages are in great shape.</li> <li><strong>Dimension filter</strong> &ndash; Heatmaps are now filterable by dimension. 
While 'Page Label' and 'Path' remain popular options, seeing the breakdown across various out-of-the-box dimensions can help you quickly identify potential pain points.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/heatmaps.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Easier to leverage synthetic and RUM</h2> <p>We're always looking for ways to make it easier to extract actionable data from RUM, and lately we've taken some evolutionary next steps to make it easier for you to take advantage of having access to both RUM and synthetic data in your dashboards.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/syn-rum-1.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><a href="https://support.speedcurve.com/docs/synthetic-page-labels" target="_blank" rel="noopener">Applying the same labels to pages in synthetic and RUM</a> allows you to do things like create a custom chart in your Favorites that tracks a metric (like Start Render, shown above) in both synthetic and RUM, set a performance budget so that you get an alert if the metric degrades in RUM, and then drill down into your synthetic diagnostics... all within a single chart. To help you do this, we've made it easier &ndash; not to mention given you more flexibility &ndash; to&nbsp;<a href="https://support.speedcurve.com/docs/rum-page-labels" target="_blank" rel="noopener">label your RUM page groups</a>.&nbsp;</p> <h2>Easier to focus on what matters</h2> <p>It's easy to get overwhelmed with the sheer volume of performance work. 
This year at <a href="https://perfnow.nl/" target="_blank" rel="noopener">performance.now()</a>, both Mark Zeman (SpeedCurve's founder) and I talked about how to focus on what matters most.</p> <p>Mark's talk &ndash; <a href="https://www.youtube.com/watch?v=1jGaov-4ZcQ" target="_blank" rel="noopener">Pushing Pixels</a> &ndash; reminded us that not every pixel on a page has the same importance. Your users want to see the most important content first, while your ad team might want their content delivered early as well. Content elements need deliberate prioritisation, rather than trying to make the whole page fast. Mark pulled the idea of a "page" apart and explored a design and development pipeline that focuses instead on islands of content.&nbsp;</p> <p>In my talk &ndash; <a href="https://www.youtube.com/watch?v=L6gZp3-7w8c" target="_blank" rel="noopener">Creating Performance Impact in a Changing Tech Landscape</a> &ndash; I shared best practices to chase impact in our current landscape of smaller teams, aggressive goals, and an ever-growing tech stack. Topics included how to:</p> <ul> <li>Identify your critical rendering path</li> <li>Focus on the pages that matter</li> <li>Fix the low-hanging fruit</li> <li>Manage expectations</li> <li>Use performance budgets to fight regressions</li> <li>Quantify success &ndash; and brag about it!</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/critical-rendering-path.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>One thing both our talks had in common was our mentions of the critical rendering path. 
<strong>The critical rendering path is the set of steps a browser takes to convert all a page's resources &ndash; from images and HTML to CSS and JavaScript &ndash; into a complete, functional web page.</strong> Optimizing the critical rendering path means taking a good look at the order in which the resources on your pages render, and then making sure that each resource in the rendering path is as performant as possible. It sounds simple &ndash; and conceptually it is &ndash; yet it can be tricky to achieve (as this recent <a href="https://www.speedcurve.com/blog/web-performance-audit-lego/">performance audit of LEGO.com</a> reveals).&nbsp;</p> <h2>Easier to get help when you need it</h2> <p>One of the biggest challenges everyone seems to face these days is finding time to focus on keeping their sites fast. So we've recently expanded to include <a href="https://www.speedcurve.com/features/consulting/">great consulting</a> to go with our great tools.</p> <p>Our team includes some of the most experienced people in our industry. People like <strong>Steve Souders</strong>, <strong>Andy Davies</strong>, <strong>Cliff Crocker</strong>, <strong>Mark Zeman</strong>, and me. We've started global conferences, written books, taught courses, run design agencies, and improved conversion rates for all the big brands. We care about making the web faster and we want to help you.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/consulting-team-5.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p><a href="https://www.speedcurve.com/features/consulting/">Learn more about our consulting services</a> (including our KickStart and HealthCheck services), and contact us at support@speedcurve.com if you have any questions.&nbsp;</p> <h2>Looking ahead</h2> <p>"Make things easy (or at least easier)" continues to be our mantra. 
We want to make your web performance monitoring &ndash; not to mention your ability to fix performance issues &ndash; as easy as possible. If you have any feedback or suggestions, we'd love to hear it! Send us a note at support@speedcurve.com.</p> <p>If you're not yet using SpeedCurve, we'd love to have you give us a try! <a href="https://www.speedcurve.com/signup/"><strong>Start your free trial.</strong></a></p> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/470/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Thu, 21 Dec 2023 00:00:00 +1300 How to automatically test the performance of your GitHub pull requests https://www.speedcurve.com/blog/web-performance-test-pull-requests <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/467/continuous-integration.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>As highlighted in our <a href="https://www.speedcurve.com/blog/december-2023-update/">December product update</a>, we've been making a lot of improvements in the area of CI/CD. In addition to the new Deployments dashboards and Notes updates, we've launched a new GitHub integration. Our purpose in building this integration is to lower the barrier for getting web performance feedback for your code changes, directly in the environment you are working in.</p> <p>Follow along below for an example of how you can use this integration in practice to fight web performance regressions and keep your pages fast.</p><h2>Getting started</h2> <p>To get started with installing the SpeedCurve CI GitHub app, please review our <a href="https://support.speedcurve.com/docs/github-integration">support guide</a>. This will take you through the simple steps of installing the app and linking a repository to a site in SpeedCurve. 
The following walkthrough is an example of using the integration in practice.&nbsp;</p> <p>For demonstration purposes, I have created a <a href="https://visionary-wisp-31b1e2.netlify.app/">simple static page</a> that loads a few images. The hero image is small and it is hosted locally, so it loads very quickly. Some smaller images are hosted on another service, so there is a small delay before they load:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-baseline-3.gif" /></p> <p>Soon I will make some changes to this page to show how SpeedCurve's GitHub integration can catch performance issues before they are merged. But before that, I need to make sure that the repository for this page is linked to a site in SpeedCurve.</p> <h2>Linking the GitHub repository to a SpeedCurve site</h2> <p>I have already created a site in my SpeedCurve account that tests my static page against a desktop browser and an emulated mobile browser. To link this site to its GitHub repository, I need to scroll down to the "integrations" section of the site settings page and select the repository from the list:<img class="blog-img" src="https://blog-img.speedcurve.com/img/467/site-settings.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Site settings showing the Github Integration drop down menu" /></p> <h2>Making changes through pull requests</h2> <p>Now that the GitHub repository is linked to my SpeedCurve site, I can start to make pull requests to change the content of the page. The first change I'm going to make is to change the font of the page. I'm hoping this won't affect the performance of the page, but I'll open a pull request and let SpeedCurve test it for me anyway. 
A key thing to note is that my hosting platform creates a GitHub deployment for me:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-pr-deployment-1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Once the pull request has been deployed, SpeedCurve will start running tests against the deployment. These tests will show up in the Deployments dashboard in SpeedCurve:</p> <h2><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-deployments-dashboard-1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></h2> <h2>Reviewing the results</h2> <p>After a few minutes, SpeedCurve should post the performance test results as a comment on my pull request. I'm happy with the results - all of the Core Web Vitals metrics are green. This pull request is good to go!</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-pr-report-1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>&nbsp;</p> <h2>What about changes that make performance worse?</h2> <p>Changing the font showed us how SpeedCurve's performance testing can give pull requests the green light, but what about pull requests that make performance worse? To test this out, I'm going to open another pull request that increases the size of the hero image.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-pr-2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Once this pull request has been deployed, I can load it in my browser to see how it affects the page loading. 
As suspected, the hero image now loads much slower than before.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-pr-1.gif" alt="" /></p> <p>The SpeedCurve performance report confirms what I'm seeing in the browser: the larger hero image has made Largest Contentful Paint much slower. It has made LCP so slow that the page now has a poor Core Web Vitals score.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-pr-report-2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Investigating in SpeedCurve</h2> <p>If I want to look further into these results, I can go through to the deployment details in SpeedCurve. This page shows all the information about the deployment, including which pull request it is linked to.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-deployment-details-2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" /></p> <p>I can also see that the average LCP across all of the regions and browsers that were tested is way higher than I'd like it to be. The filmstrip shows how on an emulated mobile device, the LCP element isn't rendered until 7.5 seconds!</p> <h2><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-deployment-metrics-1.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></h2> <p>If I wanted to dig even further, I can click on any of the filmstrip images to see the detailed test results. Scrolling down to the Browser Waterfall section and clicking on the waterfall will show me all of the requests that were made to load this page. 
In case it wasn't clear already, this chart shows that the new hero image is responsible for delaying the page load.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/471/github-integration-example-browser-waterfall-2.jpg?auto=format,compress&amp;fit=max&amp;w=2000" /></p> <h2>Conclusion</h2> <p>In this demonstration we saw how valuable it is to integrate performance testing into your pull request workflow. The biggest benefit is catching performance regressions before they're merged, but it's also great to know when your changes <em>don't</em> make performance worse.</p> <p>We're very excited that our new GitHub integration makes it easier than ever to bring performance testing into your workflow &ndash; and we hope you are too! This integration is available for all accounts today.</p> <p>Don't have a SpeedCurve account? <a href="https://www.speedcurve.com/signup/"><strong>Start your free trial today!</strong></a></p> <p><a href="https://www.speedcurve.com/signup/" target="_blank" rel="noopener"><strong><img class="blog-img" src="https://blog-img.speedcurve.com/img/467/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></strong></a></p> Thu, 21 Dec 2023 00:00:00 +1300 Performance audit: Lego.com https://www.speedcurve.com/blog/web-performance-audit-lego <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/lego-home-page.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>I love LEGO. My kids love LEGO, too, which means that every year I find myself spending a fair bit of time on the LEGO website during the holidays. So I thought it would be fun to spend some time poking around behind the scenes and give the site a holiday performance audit.
Keep reading to find out what I learned, and some lessons you may want to apply to your own pages.</p><h2>Background</h2> <p>For this performance audit, I looked at <a href="https://app.speedcurve.com/benchmark/retail-us/test/231129_MH_f526d16a74bdd5dbefcdfb4898f90d3b/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae&amp;share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">this synthetic test</a> for the LEGO.com home page.</p> <p>(We track LEGO.com, along with a handful of other leading ecommerce sites, in our public-facing <a href="https://app.speedcurve.com/benchmarks/usa/retail/fast/largest-contentful-paint/">Retail Benchmarks dashboard</a>, which I encourage you to check out. You can drill down into the test details for each site and learn how the fastest sites stay fast.)</p> <p>You can see in this recent snapshot of the leaderboard (which changes daily) that the LEGO home page is at the bottom of the ranking for Largest Contentful Paint (LCP). LCP measures when the largest visual element in the viewport &ndash; typically a hero image or video &ndash; finishes rendering.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/retail-benchmarks.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>A couple of things worth noting:</p> <ul> <li>All of the sites in the leaderboard are pretty speedy, so this is NOT a name-and-shame exercise.</li> <li>The LEGO page actually starts to load visual content FASTER than most of the other pages.</li> </ul> <p>So the question that demands an answer is this: <strong>If the LEGO home page appears to start rendering early, why is it at the bottom of the LCP leaderboard?</strong></p> <p>Let's dig deeper.</p> <h2>LCP is 3.96 seconds</h2> <p>As mentioned, this leaderboard ranks pages based on their LCP time.
Looking at the synthetic test results, we see that while Start Render time is an impressive 0.3 seconds, LCP lags at 3.96 seconds.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/render-times.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>According to Google's thresholds, an LCP time of 3.96 seconds is very close to being categorized as "poor".</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/lcp-thresholds.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Content breakdown</h2> <p>Looking at the page composition, it's actually relatively lean, with 131 resource requests (e.g., HTML, CSS, JavaScript, images, and fonts) weighing in at 1,410 kilobytes. The performance team is clearly doing a good job of fighting page bloat.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/content-breakdown.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>If the issue isn't sheer page bloat, then the next step is to investigate the critical rendering path.</p> <h2>Critical rendering path</h2> <p>The critical rendering path is the set of steps a browser takes to convert all a page's resources &ndash; from images and HTML to CSS and JavaScript &ndash; into a complete, functional web page.</p> <p>Optimizing the critical rendering path means:</p> <ul> <li><strong>Taking a good look at the order in which the resources on your pages render.</strong> Do essential content and scripting load first? Are non-essential or lesser resources deferred?</li> <li><strong>Making sure that each resource in the rendering path is as performant as possible.</strong> Are images optimized? Are you using a content delivery network (CDN) to bring elements like images closer to your users, so that delivery times are faster?
Are you compressing and caching the right things?</li> </ul> <p>The best way to investigate the critical rendering path is to look at a <strong>waterfall chart</strong> for the page. A waterfall chart shows you all the resources on the page, the order in which each resource is requested by the browser, and the size and duration of each request, among other things.</p> <p>This is a <a href="https://app.speedcurve.com/benchmark/retail-us/test/231129_MH_f526d16a74bdd5dbefcdfb4898f90d3b/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae&amp;share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">high-level waterfall</a> for the LEGO.com home page. It shows how key metrics align with the rendering timeline (the filmstrip view at the top of the chart).</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/waterfall-1.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>When you expand the waterfall (as I've done below), you see all the resources &ndash; again, aligned with the rendering timeline at the top of the chart. (This is an interactive element, so screenshots don't do it justice. I encourage you to go to the <a href="https://app.speedcurve.com/benchmark/retail-us/test/231129_MH_f526d16a74bdd5dbefcdfb4898f90d3b/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae&amp;share=3ssmi8mdfi7g5j2m3oinu6d74c9tae" target="_blank" rel="noopener">test details page</a>, click on the waterfall to expand it, and then explore the chart by hovering over and clicking on the various elements.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/waterfall-2.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Each of the 131 resources on this page is represented by its own bar on the waterfall. From a performance perspective, some of these bars are more important than others. This is where the critical rendering path comes in.
In this case, the critical rendering path is everything from the top of the waterfall to the moment when the Largest Contentful Paint element finishes rendering and fires the LCP event in the browser.</p> <p>It only took a minute to locate the hero image in the waterfall. This image is the LCP element &ndash; arguably the most important visual element on the page. Unfortunately, it's the 71st resource requested by the browser.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/lcp-element.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Looking at the same waterfall, I've greyed out everything that comes after the LCP image. For our purposes &ndash; understanding why the LCP time is below par and figuring out how to improve it &ndash; nothing in the greyed-out section of the waterfall is relevant.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/waterfall-3.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>This lets us focus on just this section of the waterfall chart:&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/waterfall-4.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Now that we know what section of the waterfall to focus on, we can take advantage of a nifty feature in SpeedCurve (available under 'Options' in the top right corner of the waterfall), which shows you which requests have high CPU usage, as well as excessive main thread activity.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/waterfall-options.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>So now the waterfall chart looks like this, with the Long Tasks (main thread JS activity over 50ms) clearly visible in red:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/waterfall-chart-long-tasks.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Clicking on
the JavaScript request with all the red bars lets you learn more about that request &ndash; including the fact that this request has a total Long Tasks time of almost 3.5 seconds (which is a lot!):</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/request-11.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Observations</h2> <h3>The LCP image is added to the page using JavaScript</h3> <p>Because of this, the browser doesn't know about the image until the JS is downloaded and fully executed. As we've just seen, some of the JS on this page generates excessive Long Tasks. Even though the hero image &ndash; which in and of itself is a fairly well-optimized image &ndash; has downloaded at around the 2.9-second mark, the LCP event is prevented from firing for another second. One second may not sound like much, but it's the critical difference between a middling LCP score and a poor score.</p> <h3>Non-essential images are prioritized above the LCP image</h3> <p>There are 20 lesser image requests (purple bars) before the hero image. This includes smaller images that aren't even in the initial viewport. These images load before the LCP image because they're referenced directly in the HTML.</p> <h3>Too many JavaScript chunks before the LCP event</h3> <p>There are 33 JavaScript requests (orange bars) before the hero image is called. This page uses code splitting &ndash; a technique that involves breaking JS into smaller, manageable chunks that are ideally loaded only when needed. Reducing the amount of code that needs to be downloaded and parsed is intended to result in faster initial rendering times.
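</p>
<p>To make that concrete, here's a sketch of the deferral side of code splitting: a dynamic <code>import()</code> behind a small "load once" helper, so a non-critical chunk is requested at most once, and only when it's actually needed. (The module path is hypothetical.)</p>

```javascript
// Load-once wrapper around a dynamic import. The chunk stays out of the
// critical rendering path and is only fetched the first time it's needed.
function once(loader) {
  let promise;
  return () => (promise ??= loader()); // cache the in-flight promise
}

// In a real bundle (hypothetical module path):
// const loadChart = once(() => import("./heavy-chart.js"));
// button.addEventListener("click", async () => (await loadChart()).render());
```

<p>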
However, too many chunks can have the opposite effect, because the browser is limited in how many resources it can request at once.&nbsp;</p> <h3>Unused JavaScript before the LCP event</h3> <p>Looking at the <a href="https://app.speedcurve.com/benchmark/retail-us/test/231129_MH_f526d16a74bdd5dbefcdfb4898f90d3b/?share=3ssmi8mdfi7g5j2m3oinu6d74c9tae&amp;share=3ssmi8mdfi7g5j2m3oinu6d74c9tae">Lighthouse audits</a> for this page, there's some unused JS that could be removed. All of the JS files below are requested before the LCP event. While this unused JS isn't the culprit behind the slow LCP time, it's still considered a best practice to remove unused scripts.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/lighthouse3.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>Takeaways</h2> <p>The LEGO home page is lean and quick to start rendering. Those are already big wins. Optimizing the critical rendering path could make this page a star on the <a href="https://app.speedcurve.com/benchmarks/usa/retail/fast/largest-contentful-paint/" target="_blank" rel="noopener">Retail Benchmarks leaderboard</a>.</p> <p>Some lessons to take away:</p> <ol> <li>Prioritize your most important visual element.&nbsp;Have the markup for the image in the served HTML, and include the <code>fetchpriority=&quot;high&quot;</code> attribute on it.&nbsp;</li> <li>Defer non-essential JavaScript.</li> <li>Use code splitting judiciously.</li> <li>Remove unused resources.</li> </ol> <h2>Next?</h2> <p>Is there a page you'd like us to audit? Let us know at support@speedcurve.com!</p> <p>If you'd like to <a href="https://support.speedcurve.com/docs/test-details">analyze your own pages</a>&nbsp;(including Core Web Vitals analysis and Lighthouse audits) &ndash; or <a href="https://support.speedcurve.com/docs/competitive-benchmarking">compare your site to your competitors</a> &ndash; you can do that in SpeedCurve.
If you're not using us yet, <a href="https://www.speedcurve.com/signup/">start your free trial</a>.</p> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/461/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Thu, 14 Dec 2023 00:00:00 +1300 NEW! December product update https://www.speedcurve.com/blog/december-2023-update <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/december-2023-release-balloons.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Holy cow, it's been a busy few months! <a href="https://www.speedcurve.com/blog/10-things/" target="_blank" rel="noopener">SpeedCurve turned ten</a>, we attended (and gave talks at) <a href="https://perfnow.nl/" target="_blank" rel="noopener">performance.now()</a>, Firefox added support for Largest Contentful Paint (LCP), and oh yeah... we just shipped a ton of stuff! (We wanted to wait until the dust settled around Black Friday/Cyber Monday for all of our friends in retail, which led to a pretty monumental release.)<br /><br />So get comfy and check out our holiday updates.</p><h2>Improve your CI/CD workflow</h2> <p><a href="https://support.speedcurve.com/docs/continuous-integration" target="_blank" rel="noopener">Integrating SpeedCurve into your development pipeline</a>&nbsp;&ndash; so you can catch issues before (or right after) they go into production &ndash; has been a popular practice over the last several years. In our last product planning session, we made the decision to invest a lot of time into improving the developer experience and the overall time to value for customers using us in their CI/CD pipeline.&nbsp;</p> <h3>Deployment dashboards</h3> <p>Previously, the Deploys dashboard (now renamed 'Compare' in the Synthetic navigation menu) was the only way you could see your recent and historical deployments. 
We've introduced two brand-new dashboards, with the goal of letting you see all of your deployments and their status in a single view, and then drill down into the details for each deployment.</p> <p>Your <strong>Deployments </strong>dashboard is in the main navbar. Your deployments history (for the last month by default) is on display in the dashboard. At a glance you can see:</p> <ul> <li>The status of the deploy</li> <li>The status of your <a href="https://www.speedcurve.com/blog/performance-budgets/" target="_blank" rel="noopener">performance budgets</a> (if you have created budgets)</li> <li>An aggregate metric for all of the URLs tested in the deploy (set to <a href="https://support.speedcurve.com/docs/metrics-glossary#largest-contentful-paint-synthetic-and-rum---chrome" target="_blank" rel="noopener">Largest Contentful Paint</a> by default)</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/deployment_dashboard.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Deployments dashboard showing list of past deployments in various states." /><br />Clicking on any deployment in the list brings up your <strong>Deployment Details</strong> dashboard. This view is specific to that deploy. 
It includes:</p> <ul> <li>The status of your performance budgets</li> <li>An overview of your <a href="https://support.speedcurve.com/docs/get-started-with-core-web-vitals" target="_blank" rel="noopener">Core Web Vitals</a>&nbsp;</li> <li>A breakdown of Core Web Vitals by each page</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/deployment-details-dashboard.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Deployment details showing failed status and overview of core web vitals" /></p> <h3>Notes cleanup</h3> <p>If you're one of the many SpeedCurve users who frequently <a href="https://support.speedcurve.com/docs/notes-and-deploys" target="_blank" rel="noopener">deploy changes with notes</a> &ndash; or if you have been taking advantage of <a href="https://support.speedcurve.com/docs/notes-and-deploys" target="_blank" rel="noopener">adding notes manually</a> &ndash; you're in for a treat.</p> <p>Previously, charts could get REALLY noisy and look something like this:<br /><br /><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/lots_of_annotations.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series charts with an overwhelming number of annotations making it unreadable" /></p> <p>The chart above shows how the annotations &ndash; as well as the associated name for each deploy &ndash; made the data in the chart hard to read. We're happy to share that we've cleaned this up (to say the least).</p> <p>Now you'll see something like this:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/new_annotations.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Time series chart with new markers below the x-axis replacing the old annotation style" /></p> <p>Below the x-axis, we've introduced markers to designate where deploys have occurred or notes have been posted. When hovering over each marker, you'll see a clear presentation of the associated name. 
The chart is now readable, and you can still see your deployments and find their details.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/deployments_tooltip.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Tooltip showing the deployment selected and the option to View Deploy." /></p> <p>Clicking on a deployment brings up a tooltip with the full name associated with the deployment and the option to 'View Deploy', which takes you to the Deployment Details dashboard shown earlier.</p> <p>Notes are now separate from deploys. Every deploy has an associated marker and the name provided for the deployment. In addition, if you choose to add a note from the chart or when using the 'Test Now' option, those will be shown as Notes alongside deployments.</p> <h3>Main navigation changes</h3> <p>There have been a few notable changes to the main navigation menu.&nbsp;</p> <ul> <li><strong>'Status' has been renamed 'Budgets'</strong> &ndash; The underlying dashboard is the same (for now!). The goal is to make it clearer that this is where to see the status of all your <a href="https://www.speedcurve.com/blog/performance-budgets/" target="_blank" rel="noopener">performance budgets</a> at a glance.</li> <li><strong>'Deployments' has been added</strong> &ndash; As shown earlier, this takes you to your Deployments dashboard.</li> <li><strong>'Test Now' is now located in the navbar</strong> &ndash; Now you can trigger a test from any dashboard.&nbsp;Options for 'Test Now' include the ability to test one or more sites standalone or as part of a deploy.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/navigation-changes.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Dashboard highlighting changes to the side navigation bar" /></p> <h3>Future CI/CD updates</h3> <p>We aren't done yet! This is just the start of building a more solid foundation for CI/CD.
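</p>
<p>In the meantime, many teams already trigger tests from their CI pipeline using SpeedCurve's deploy API. Here's a rough sketch in Node that just builds the request (the API key and site ID are placeholders &ndash; check the API docs for the current parameters):</p>

```javascript
// Sketch: build a request for SpeedCurve's v1 deploy endpoint. A CI job
// would send this with fetch(). API key and site ID are placeholders.
function buildDeployRequest(apiKey, siteId, note) {
  return {
    url: "https://api.speedcurve.com/v1/deploys",
    method: "POST",
    headers: {
      // SpeedCurve uses HTTP basic auth: API key as username, "x" as password
      Authorization: "Basic " + Buffer.from(apiKey + ":x").toString("base64"),
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ site_id: String(siteId), note }).toString(),
  };
}

// const req = buildDeployRequest(process.env.SPEEDCURVE_API_KEY, 12345, "release v2.1");
// await fetch(req.url, req);
```

<p>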
Here are a few more things we're progressing quickly on.&nbsp;</p> <p><strong>GitHub integration</strong></p> <p>This integration will provide the ability to link your GitHub repo to a site in SpeedCurve and automatically run performance tests against a GitHub deployment. Additionally, results will report back to the pull request for that deployment.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/speedcurve-pr-performance-report-2.png?auto=format,compress&amp;fit=max&amp;w=1834" alt="GitHub deployment report showing results from a SpeedCurve deployment test" /></p> <p><strong>Ad hoc testing</strong></p> <p>Soon you'll be able to run a test on demand that is scoped to:</p> <ul> <li>a deploy,</li> <li>a site,</li> <li>a URL within the site, or</li> <li>a specified URL of your choosing.</li> </ul> <p>The ability to run ad hoc tests has been a frequent request from SpeedCurvers for a while now, so we are REALLY looking forward to shipping this feature.</p> <p><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/466/adhoctesting.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Modal dialog showing options for testing via deploy, site or URL" /></p> <h2>More powerful RUM</h2> <p>We've made some huge strides with real user monitoring (RUM) over the past couple of years. We're thrilled to see the appetite for RUM continue to grow across all industries.</p> <p>Historically, RUM has been a bit challenging for some organizations. Making code changes to modify your configuration, collect custom data, or further segment your pages isn't hard, per se, but getting those changes into production can be slow and time-consuming.
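</p>
<p>For context, the code-level configuration in question looks something like this &ndash; a sketch using the <code>LUX</code> global created by SpeedCurve's RUM snippet (the values are illustrative; see the RUM JS API docs):</p>

```javascript
// Sketch of code-level RUM configuration, the kind of change that normally
// requires a production deploy. In the browser, LUX already exists by the
// time this runs; the stub below just keeps the sketch self-contained.
var LUX = LUX || {};
LUX.label = "Home"; // report a friendly page label instead of the raw URL
LUX.auto = false;   // take manual control of when the beacon is sent
// ...later, once the page is considered "done":
// LUX.send();
```

<p>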
This year we took some big steps to help remove some of this friction:</p> <ul> <li><a href="https://support.speedcurve.com/docs/rum-page-labels">Page labels via URL rules</a>&nbsp;</li> <li>Updated <a href="https://support.speedcurve.com/docs/conversions">conversion tracking</a>&nbsp;</li> <li><a href="https://support.speedcurve.com/docs/using-server-timing">Server timing</a> support for custom data</li> </ul> <p>We are continually looking for ways to make it easier to extract actionable data from RUM, and this month's release is a big jump forward.</p> <h3>Advanced settings</h3> <p>The SpeedCurve <a href="https://support.speedcurve.com/docs/rum-js-api">RUM JavaScript API</a> contains many properties, functions, and attributes that give you more control over your integration. However, previously you were required to make code changes in order to take advantage of them. We've just introduced some advanced settings in RUM that allow you to change some of these properties within our UI, with no coding required.</p> <p>Now you have easier access to controls like:</p> <ul> <li>Control when a beacon is sent, such as sending on 'Page Hidden'</li> <li>Adjust the min/max measure time for a page</li> <li>Track navigations that were previously ignored, such as hidden pages and pages served from the back-forward cache</li> <li>Control JavaScript error collection</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/advanced-settings.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="Advanced RUM settings page" /></p> <h3>New filters for RUM dashboards</h3> <p>We've added two new filters to RUM dashboards.</p> <p>The <strong>Navigation Type</strong> filter allows filtering on pages that were:</p> <ul> <li>a standard navigation,</li> <li>a page reload, or</li> <li>a back-forward navigation.&nbsp;</li> </ul> <p><img class="blog-img"
src="https://blog-img.speedcurve.com/img/466/nav_type.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="SpeedCurve dashboard with navigation type filter expanded" /></p> <p>The <strong>Page Attribute</strong> filter selects different states of a page, including whether it was:</p> <ul> <li>a soft navigation,</li> <li>a hidden page,</li> <li>a pre-rendered page, or</li> <li>a page that was restored from the back-forward cache.</li> </ul> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/page_attribute.gif?auto=format,compress&amp;fit=max&amp;w=2000" alt="SpeedCurve dashboard with page attribute menu expanded" /></p> <p>These filters will also be available in your custom Favorites dashboards soon. We are looking forward to seeing how different navigation types and states impact performance. More on that soon!</p> <h2>What's next?</h2> <p>We are pumped. These are exciting times in the world of web performance, and our backlog is full of amazing possibilities. What's on your feature wishlist? Please&nbsp;<a href="mailto:support@speedcurve.com">let us know!</a></p> <p><a href="https://www.speedcurve.com/signup/"><img class="blog-img" src="https://blog-img.speedcurve.com/img/466/customer-logos-free-trial-banner.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> Thu, 07 Dec 2023 00:00:00 +1300