SpeedCurve Blog https://www.speedcurve.com/blog/ Speed matters. Get the latest on how the areas of design and performance overlap with a focus on creating great user experiences. NEW: Exploring RUM sessions https://www.speedcurve.com/blog/real-user-monitoring-sessions-dashboard <p>If you want to understand how people actually experience your site, you need to monitor real users. The data we get from real user monitoring (RUM) is extremely useful when trying to get a grasp on performance. Not only does it serve as the source of truth for your most important budgets and KPIs, it also helps us understand that performance is a broad distribution encompassing many different cohorts of users.</p> <p><strong>While real user monitoring gives us the opportunity for unparalleled insight into user experience, the biggest challenge with RUM data is that there's so much of it.</strong> Navigating through all this data has typically been done by peeling back one layer of information at a time, and it often proves difficult to identify the root cause when we see a change:</p> <p>"What happened here?"</p> <p>"Did the last release cause a drop in performance?"</p> <p>"How can I drill down from here to see what's going on?"</p> <p>"Is the issue confined to a specific region? Browser? Page?"</p> <p>Today we're excited to release a new capability &ndash; your RUM Sessions dashboard &ndash; which allows you to <strong>drill into a dataset and explore the sessions that occurred within a given span of time</strong>.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/117/session_dashboard.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="SpeedCurve Sessions dashboard" /></p><h2>Explore your data</h2> <p>Drill into any RUM data point from any chart in your dashboards to investigate RUM sessions. Whether you're looking into a spike, a baseline change, or a segment of a distribution you're interested in, clicking through to 'View Sessions' takes you to your new Sessions dashboard. (You can also navigate to the dashboard from the left-hand navbar.)</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/117/view-sessions.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Click on a data point in your chart to view those sessions" /></p> <p>&nbsp;</p> <p>When you arrive in the dashboard, you'll get a great breakdown of your sessions for that particular point in time. Any context from your chart comes along as well, including any filters that have been applied to that dataset. You'll instantly be able to understand the unique breakdown of user sessions across several dimensions, including geography, device, browser, page groups, and more.&nbsp;</p> <p>Get a tour of the dashboards in this short video:&nbsp;</p> <div class="video"><iframe src="https://www.youtube.com/embed/YkMB8vMkHMU" width="560" height="315" frameborder="0" allowfullscreen=""></iframe> <div>&nbsp;</div> </div> <h2>Understand what changed</h2> <p>Answering the question "What changed?" can be difficult and time-consuming when you are dealing with a vast amount of RUM data. We make a point of highlighting changes from the previous period. The change may be related to volume ("I'm seeing a lot more users from Ireland today" or "We are getting a lot more traffic to our landing pages") or it may be related to a metric ("LCP has really slowed down after this last release, and it looks to be driven by slower backend time").</p>
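<p>Under the hood, this kind of change highlighting comes down to comparing an aggregate (such as the 75th percentile) for the current period against the same aggregate for the previous period. Here's a minimal sketch of that arithmetic &ndash; illustrative only, not SpeedCurve's actual implementation, and the sample values are hypothetical:</p> <pre><code>// Illustrative sketch: period-over-period change for p75 LCP.
function p75(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  return sorted[Math.floor(0.75 * (sorted.length - 1))];
}

var lcpThisWeek = [1800, 2100, 2500, 3200, 2900]; // ms, hypothetical samples
var lcpLastWeek = [1700, 1900, 2300, 2600, 2400];

var current = p75(lcpThisWeek);  // 2900
var previous = p75(lcpLastWeek); // 2400
var changePct = ((current - previous) / previous) * 100;
console.log("p75 LCP changed by " + changePct.toFixed(1) + "%"); // 20.8%
</code></pre>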
<p>Whatever the change may be, the comparison to the previous period can be found in several areas of the dashboard.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/117/metrics_change.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Table of metrics illustrating change from previous period" /></p> <h2>Investigate anomalies</h2> <p>Weird stuff happens. Whether you want to understand the long tail of performance or a specific pattern you're seeing in your data, it's useful to be able to quickly slice, dice, and compare.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/117/histogram_slider.gif" alt="Filtering a histogram" /></p> <p>&nbsp;</p> <p>In this video, we explore the mystery of what's behind a site's segment of fast bounces.&nbsp;</p> <div class="video"><iframe src="https://www.youtube.com/embed/F2yqvLWUfrQ" width="560" height="315" frameborder="0" allowfullscreen=""></iframe> <div>&nbsp;</div> </div> <h2>Questions?</h2> <p>Want to learn more about this feature? Do you have feedback on how we can make it even better? <a href="mailto:support@speedcurve.com" target="_blank" rel="noopener">Let us know!</a></p> Tue, 21 Sep 2021 00:00:00 +1200 UPDATE: Bookmark and compare synthetic tests https://www.speedcurve.com/blog/compare-web-performance <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/compare.gif" alt="" /></p> <p>One of the huge benefits of tracking web performance over time is the ability to <a href="https://support.speedcurve.com/en/articles/3996064-trend-metrics-compare-time-periods">see trends and compare metrics</a>. Last year we added new functionality that makes it easy for you to bookmark and compare different synthetic tests in your test history. <strong>We recently added some additional enhancements to make comparing tests even easier.</strong></p> <p>With the 'Compare' feature, you can generate side-by-side comparisons that let you not only spot regressions, but easily identify what caused them:</p> <ul> <li>Compare the same page at different points in time</li> <li>Compare two versions of the same page &ndash; for example, one with ads and one without</li> <li>Understand which metrics got better or worse</li> <li>Identify which common requests got bigger/smaller or slower/faster</li> <li>Spot any new or unique requests &ndash; such as JavaScript and images &ndash; and see their impact on performance</li> </ul> <p>Along the way, we've also made it much more intuitive for you to drill down into your detailed synthetic test results. Let's take a look...</p><h2>How to bookmark sites for comparison</h2> <p><em>(If you're more into watching than reading, scroll down to the bottom of this post to check out our short explainer video.)</em></p> <p>To get started, hover over a data point in any time series chart and click. Within the popup, you can click "View Test" to see the full median test result for that test run.
Or, if you already know that you want to bookmark that test for comparison, you can simply hit "Bookmark Test" within the popup.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/99/new-compare-screenshot.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>If you've drilled down into a Synthetic test result and confirmed that you want to bookmark it for comparison, all you need to do is click the "Add Bookmark" link in the top-left corner of the window.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Then you have the option of giving the bookmarked test a meaningful name and/or description.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-3.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>If you change your mind, just click "Remove Bookmark".</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-4.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Clicking on the bookmark icon in the top-right corner of the window shows you a list of all the tests you've bookmarked for comparison. Note that you can bookmark multiple tests, but you can only compare two at a time.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-5.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <h2>NEW: Compare directly from your charts</h2> <p>If you want to compare two tests without bookmarking them, just click "Compare Test" within the popup. You'll then see a message instructing you to select another test for comparison.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/99/new-compare-2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>You can click on any synthetic test within the dashboard (or another dashboard with synthetic charts). Once you click on the second test, you'll be directed to the Compare dashboard.</p> <h2>Test case: CNN&nbsp;</h2> <p>Now let's walk through how to run a comparison and analysis using a real-world example from our "Top Sites" demo account. In this demo account, we track a number of leading retail, travel, and media sites.
I've randomly selected CNN for the purpose of this post.</p> <p>Looking at the <a href="/speedcurve-enterprise/top-sites/site/?b=chrome&amp;cs=lg&amp;d=90&amp;dc=2&amp;de=1&amp;ds=1&amp;r=us-west-1&amp;s=20746&amp;tv=1&amp;u=41453&amp;share=0cv2ymz1ymyhv8b0hwbuvsz1a480u3" target="_blank" rel="noopener">last three months' worth of Synthetic test data</a>, we can see that the CNN home page has a Start Render time that's consistently above the 2-second threshold that we usually recommend &ndash; and that Start Render has sometimes hit almost 5 seconds.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-6.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Let's look at some of the peaks and valleys on their time series chart to see if we can spot the issues.&nbsp;To start, we'll quickly bookmark these two tests:</p> <ul> <li><a href="/speedcurve-enterprise/top-sites/test/200510_DZ_b1b85f151f93b4e59956b0932593599c/?tv=1&amp;share=0cv2ymz1ymyhv8b0hwbuvsz1a480u3" target="_blank" rel="noopener">Monday 11 May</a> - Start Render 3s&nbsp;</li> <li><a href="/speedcurve-enterprise/top-sites/test/200523_JJ_ac2b69c998ed279753ec25bebb461bb2/?tv=1&amp;share=0cv2ymz1ymyhv8b0hwbuvsz1a480u3" target="_blank" rel="noopener">Saturday 24 May</a> - Start Render 4s&nbsp;</li> </ul> <p>Now that we've selected these two tests, <a href="/speedcurve-enterprise/top-sites/compare/?tests=200510_DZ_b1b85f151f93b4e59956b0932593599c%2C200523_JJ_ac2b69c998ed279753ec25bebb461bb2" target="_blank" rel="noopener">let's look at them side by side</a>.</p> <p>Here you can see the visuals are stacked compactly so you can easily spot any differences. Hovering over any metric in the waterfall chart shows you the metric for both tests. It also shows you how the metrics align with the rendering filmstrips. For example, you can see that not much is happening in either filmstrip when Start Render fires:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-7.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>For this particular page, it looks like Largest Contentful Paint is a more meaningful metric for tracking when important content has rendered, so let's look at that. Things look a bit better in the May 11 test, where Largest Contentful Paint fires at 4.95 seconds. In the May 24 test, LCP lags at almost 7 seconds.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-8.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>The gaps get much wider when you look at Visually Complete and Fully Loaded (below). Delays in those metrics can be an indicator that, even though the page's visible content might have rendered, the page might be janky and unpleasant to interact with.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/Untitled-9.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Just below the waterfalls, the CPU timelines are really telling:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-10.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Ideally, you want to see a busy CPU at the start of the page render, but both tests show pockets of dead time early on. CPU utilization doesn't pick up until around 1.7 seconds for the May 11 test and 2.2 seconds for the May 24 test.
This begins to explain the slow Start Render.&nbsp;</p> <p>The CPU timeline for the May 24 test also shows that the CPU is thrashing for the entire 80 seconds that the page is rendering.&nbsp;</p> <h2>Two questions for investigation</h2> <p>Based on the comparison so far, two questions stand out:</p> <ol> <li><strong>What is causing the delays in Start Render and Largest Contentful Paint?</strong> High Start Render and LCP times can give users the feeling that the page is <em><strong>unresponsive</strong></em>.</li> <li><strong>What is causing the excessive CPU thrashing?</strong> This thrashing can give users the feeling that the page is <em><strong>janky</strong></em>.&nbsp;</li> </ol> <p>Time to dig deeper. To do that, we show you detailed metrics, along with a calculation of what got better or worse. For obvious reasons, the metrics that see the biggest changes are shown at the top of each list.&nbsp;</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-11.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Looking at this list, it's pretty clear where some of the issues lie:</p> <ul> <li>Video size has jumped from 32KB to more than 4MB. This would help explain the extremely delayed Visually Complete and Fully Loaded times.</li> <li>The number of JavaScript Long Tasks has leapt from 1 to 51 &ndash; and 32 of these come from third parties. The longest Long Task is 722ms. (See the sketch below for a way to observe Long Tasks in the browser.)</li> <li>On the slower page, third-party JavaScript CPU time is more than 7 seconds &ndash; 3 seconds of which block the CPU.</li> </ul> <p>Now that we know that the number and complexity of JS requests are a major issue, we can find out exactly which scripts appear to be the culprits. This is a list of all the common requests that are shared across both pages, along with a calculation of how much performance degradation each request experienced:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-12.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Now let's take a closer look at one of those scripts:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-13.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>I'm not familiar with Exelator, so I had to look it up to identify it as a data collection tracker that is <a href="https://better.fyi/trackers/exelator.com/">reportedly</a> owned by Nielsen. Whatever it is, its duration has increased by 1727% &ndash; from 59ms to 1078ms. Clicking through to the <a href="/speedcurve-enterprise/top-sites/test/200523_JJ_ac2b69c998ed279753ec25bebb461bb2/">full test result page for May 24</a> and searching for Exelator yielded four separate requests, with a total duration of more than 1600ms. Here's one of those requests:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/screenshot-14.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Something that's important to note here is that three out of four of those requests had a size of 0 bytes, and the fourth was only 43 bytes. Keep this in mind next time you ponder adding an allegedly small third-party request to your pages.</p> <p>That was just one request issue among the many listed above.</p>
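<p>(A side note on those Long Task counts: our synthetic agents surface them for you, but you can observe the same thing in any browser session using the standard Long Tasks API. Here's a minimal sketch &ndash; the 50ms threshold comes from the spec, and support for the <code>buffered</code> flag varies by browser version.)</p> <pre><code>// Minimal sketch: count long tasks (main-thread blocks over 50ms)
// and tally the "blocking time" beyond that threshold, TBT-style.
var longTaskCount = 0;
var blockingTime = 0;

var observer = new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    longTaskCount += 1;
    blockingTime += Math.max(0, entry.duration - 50);
  });
});

// buffered: true asks for long tasks recorded before the observer started.
observer.observe({ type: "longtask", buffered: true });
</code></pre>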
<p>To drill down further, next steps could include looking at the detailed waterfall charts for each test run &ndash; <a href="/speedcurve-enterprise/top-sites/test/200510_DZ_b1b85f151f93b4e59956b0932593599c/" target="_blank" rel="noopener">May 11</a> and <a href="/speedcurve-enterprise/top-sites/test/200523_JJ_ac2b69c998ed279753ec25bebb461bb2/" target="_blank" rel="noopener">May 24</a> &ndash; to see what else is happening before Largest Contentful Paint and Start Render to delay those metrics.&nbsp;</p> <p>A quick glance at the waterfalls for both tests reveals:</p> <ul> <li><strong>40 requests before Start Render</strong> fires on the May 11 test</li> <li><strong>64 requests before Start Render</strong> fires on the May 24 test</li> <li><strong>105 requests before First Contentful Paint</strong> fires on the May 11 test</li> <li><strong>144 requests before First Contentful Paint</strong> fires on the May 24 test</li> </ul> <p>I hope I don't have to tell you that those are a LOT of requests. The volume alone is enough to create performance issues. Add to that any episodic issues with serving those requests, and you can see how easily performance can degrade.</p> <h2>We'd love your feedback!</h2> <p>I adore this feature. We've been talking about building this for quite a while &ndash; not least because it's been requested by a number of our customers. Take it for a spin and let us know what you think!</p> <p>Here's Cliff's tutorial, for all you video lovers out there.</p> <p><iframe src="https://www.youtube.com/embed/SckTlcICY84" width="560" height="315" frameborder="0" allowfullscreen=""></iframe></p> <h3>Related reading</h3> <ul> <li><a href="https://support.speedcurve.com/en/articles/869994-benchmark-yourself-against-your-competitors">Benchmark yourself against your competitors</a></li> <li><a href="https://support.speedcurve.com/en/articles/1461602-get-comparison-videos">Generate comparison videos</a></li> <li><a href="https://support.speedcurve.com/en/articles/3996064-trend-metrics-compare-time-periods">Trend metrics and compare time periods</a></li> <li><a href="https://support.speedcurve.com/en/articles/1430942-use-custom-labels-to-compare-urls-or-page-groups">Compare your Synthetic and RUM data</a></li> <li><a href="https://support.speedcurve.com/en/articles/74071-add-a-repeat-view-using-a-wpt-script">Compare first and repeat views in Synthetic</a></li> <li>A/B testing in <a href="https://support.speedcurve.com/en/articles/74080-a-b-performance-testing">Synthetic</a> and <a href="https://support.speedcurve.com/en/articles/2129265-a-b-testing-with-lux">LUX</a></li> </ul> Mon, 13 Sep 2021 00:00:00 +1200 New feature: Streamlined test comparison https://www.speedcurve.com/blog/update-226 <p>A short while ago we introduced the ability to <a href="https://support.speedcurve.com/en/articles/4222741-bookmark-and-compare-synthetic-tests">bookmark and compare</a> synthetic tests. Today we've made comparing tests even easier by including an option to compare directly from the chart you are focused on. While the capability to bookmark tests remains, this enhancement streamlines the process for quick comparison. Happy sleuthing!</p> Mon, 23 Aug 2021 00:00:00 +1200 NEW: Lighthouse v8 support! 
https://www.speedcurve.com/blog/new-lighthouse-v8 <p><img class="blog-img-sm" src="https://blog-img.speedcurve.com/img/lighthouse-1024x1024.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="Google Lighthouse logo" /></p> <p>After Google's&nbsp;<a href="https://github.com/GoogleChrome/lighthouse/blob/master/changelog.md#800-2021-06-02">announcement</a>&nbsp;about Lighthouse 8 this past month, we have updated our test agents. We've gotten a lot of questions about what has changed and the impact on your performance metrics, so here's a summary.</p><h2>What is Lighthouse?</h2> <p>In case you missed it, Lighthouse is a speed tool created by the Chrome Developer team at Google. Lighthouse is a score-based system that evaluates lab (synthetic) data through a series of audits to identify how your application will perform in the wild.</p> <p>Five categories are evaluated:&nbsp;<strong>Performance, Accessibility, Best Practices, SEO, and Progressive Web App (PWA).</strong> As part of our synthetic testing, we run a separate Lighthouse test that produces a full audit and allows you to track these scores over time, alongside your other favorite performance metrics.&nbsp;</p> <h2>What has changed?</h2> <p>No metrics were added or removed from the Lighthouse scoring in version 8. The biggest changes are:</p> <ul> <li>the weighting of metrics used for the performance score, and</li> <li>the adoption of the recently updated Cumulative Layout Shift (CLS) calculation.</li> </ul> <h3>Weighting adjustments</h3> <ul> <li>First Contentful Paint (FCP): 15 -&gt; <strong style="color: red;">10</strong></li> <li>Speed Index: 15 -&gt; <strong style="color: red;">10</strong></li> <li>Largest Contentful Paint (LCP): 25 -&gt; <strong>25</strong></li> <li>Time To Interactive (TTI): 15 -&gt; <strong style="color: red;">10</strong></li> <li>Total Blocking Time (TBT): 25 -&gt; <strong style="color: green;">30</strong></li> <li>Cumulative Layout Shift (CLS): 5 -&gt; <strong style="color: green;">15</strong></li> </ul> <p>Not surprisingly, we see a shift in importance toward metrics related to Core Web Vitals. Largest Contentful Paint (LCP) remains the heaviest weighted, while Cumulative Layout Shift (CLS) shows the biggest increase.</p> <p>We were happy to see the weighting change for Total Blocking Time (TBT), which &ndash; while technically not a Web Vital &ndash; shines the brightest light on the importance of optimizing first- and third-party JavaScript.</p> <h3>Updated Cumulative Layout Shift (CLS) calculation</h3> <p>CLS now uses session windowing to more accurately measure layout shifts for longer-lived pages, including single-page applications. While we don't anticipate a major change for CLS scores measured in a lab environment such as Lighthouse, field data (aka real user monitoring) should generally see improvement for those types of sites as it is adopted more broadly.</p> <h2>How will these changes affect your metrics?</h2> <p>Google anticipates <strong>the majority of sites could see improvements to their scores</strong>, based on their analysis of the HTTP Archive.</p> <p>Our own research here at SpeedCurve suggests that there will be a <strong>moderate increase in the performance score (1-5 points) for most sites</strong>.</p>
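<p>To make those weighting changes concrete: the Lighthouse performance score is essentially a weighted average of the individual metric scores. Here's a minimal sketch using the v8 weights listed above. The per-metric scores are hypothetical &ndash; in real Lighthouse, they're derived from the raw metric values via log-normal scoring curves:</p> <pre><code>// Lighthouse v8 performance weights (out of 100), per the list above.
var weights = { FCP: 10, SI: 10, LCP: 25, TTI: 10, TBT: 30, CLS: 15 };

// Hypothetical per-metric scores (0-100) for a page.
var scores = { FCP: 90, SI: 85, LCP: 60, TTI: 70, TBT: 40, CLS: 95 };

var total = 0;
for (var metric in weights) {
  total += (weights[metric] / 100) * scores[metric];
}
console.log("Performance score:", Math.round(total)); // 66
</code></pre> <p>Because TBT and CLS now carry 45 of the 100 points between them, pages that do poorly on those two metrics lose more ground than they did under v7.</p>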
<p>Sites that had high CLS and/or TBT values were penalized more heavily under the new weightings and saw a decrease in their score that was sometimes significant (5-10 points).</p> <h2>What else is new?</h2> <p>In addition to the performance weighting changes, Lighthouse has added a few pretty cool features to the report:</p> <h3>Treemaps</h3> <p>Examine JavaScript size and coverage so you can identify optimization opportunities.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/116/treemap.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Google Lighthouse Treemap visualization" /></p> <h3>Sort Lighthouse audits by metric</h3> <p>It can be a bit overwhelming to look through the list of audits in your report. Filtering by the metric you're interested in can help slim down your performance to-do list.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/116/filter.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Google Lighthouse filtering by metric" /></p> <h2>Learn more</h2> <p>If you are looking for more details about Lighthouse, or specifics about version 8, here are some good resources:</p> <ul> <li><a href="https://github.com/GoogleChrome/lighthouse/blob/master/docs/v8-perf-faq.md" target="_blank" rel="noopener">Lighthouse v8 Performance FAQ</a></li> <li><a href="https://support.speedcurve.com/en/articles/2569490-lighthouse-scores-and-audits" target="_blank" rel="noopener">Lighthouse Scores and Audits</a>&nbsp;in SpeedCurve</li> <li><a href="https://web.dev/performance-scoring/" target="_blank" rel="noopener">Understanding how Performance scoring works in Lighthouse</a> (Google Webdev)</li> <li><a href="https://youtu.be/_G3X_IsozKk">State of Speed Tooling</a>&nbsp;(video from the 2020 Chrome Developer Summit)</li> </ul> Wed, 07 Jul 2021 00:00:00 +1200 Test agent updates: Lighthouse 8.0.0 https://www.speedcurve.com/blog/update-225 <p>We are rolling out an update to our test agents. Changes include:</p> <ul> <li>Lighthouse has been updated from version <b>7.5.0</b> to <b>8.0.0</b></li> </ul> Wed, 07 Jul 2021 00:00:00 +1200 LUX update: lux.js v216 https://www.speedcurve.com/blog/update-224 <p>We have released v216 of lux.js. This release contains bug fixes and other improvements:</p> <ul> <li>Long tasks buffering (which was previously reverted in v214) has been reinstated, enabling LUX to instrument long tasks that occurred before the snippet was loaded.</li> <li><a href="https://support.speedcurve.com/en/articles/3816439-lux-page-labels">JavaScript page labels</a> are now evaluated more quickly than before.</li> <li>LUX can now use a simpler beacon mechanism with <code>LUX.beaconMode = "simple"</code>. This sends a beacon by inserting an image into the page, rather than a <code>&lt;script&gt;</code> tag.</li> </ul> Mon, 21 Jun 2021 00:00:00 +1200 Hello from Europe! https://www.speedcurve.com/blog/hello-from-europe <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/115/andy.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Andy Davies" /></p> <p>I&rsquo;m delighted to be joining some of my favourite web performance people at SpeedCurve!</p> <h2 dir="ltr">Who am I?</h2> <p>I&rsquo;ve been a full-time web performance consultant for around nine years. 
For about half that time I worked freelance, and for the other half at Site Confidence / NCC Group in the UK.</p> <p>My journey into performance started in the late 1990s, while I was working for an elearning provider and discovering the challenges of delivering rich content over the internet. To overcome some of these challenges, we built our own Java-based player, complete with caching, content compression, and even bandwidth detection so it could switch between video, audio, and text versions of a course depending on network speed.</p> <p>Ultimately the business didn&rsquo;t survive the dotcom bust, but it lit a spark...</p><p dir="ltr">Then in 2008, while I was helping an educational publisher launch their elearning platform (and running into the familiar challenges of building and delivering rich content over the internet), I came across Steve Souders&rsquo;s first book, <a href="https://www.oreilly.com/library/view/high-performance-web/9780596529307/">High Performance Web Sites</a>.</p> <p dir="ltr">We were already using both synthetic and network-based real user monitoring (RUM) tools to measure how fast our site was, so we knew we had room to improve. Steve&rsquo;s book gave us some recipes to make those improvements.</p> <p dir="ltr">Within a few releases, we went from customers calling our support team to complain about how slow the platform was to customers calling to say how much faster it was.</p> <p dir="ltr">I think that was the point at which I became hooked!</p> <p dir="ltr"><img class="blog-img-md" src="https://blog-img.speedcurve.com/img/115/andy-book-2.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p dir="ltr">Since then I&rsquo;ve worked with organisations large and small &ndash; across sectors such as retailing, publishing, and marketing &ndash; to help them deliver faster experiences for their customers. I&rsquo;ve written <a href="https://andydavies.me/books/">a couple of books</a>, <a href="https://andydavies.me/talks/">spoken at conferences</a>, and become an organiser of <a href="https://ldnwebperf.org/">London&rsquo;s Web Performance Meetup</a>.</p> <p dir="ltr">As much as I enjoy freelance consulting, I&rsquo;ve always felt there aren&rsquo;t enough consultants. To help more sites, we&rsquo;ve got to take the knowledge that consultants have and build it into tools and products.</p> <h2 dir="ltr">Why SpeedCurve?</h2> <p dir="ltr">I&rsquo;ve known some of the SpeedCurve team for quite a while. I met <a href="https://twitter.com/Souders">Steve</a> at the first O'Reilly Velocity Conference in the EU in 2011, <a href="https://twitter.com/cliffcrocker">Cliff</a> gave the first tutorial at my first Velocity in the US, and I was fascinated by <a href="https://twitter.com/tameverts">Tammy</a>&rsquo;s <a href="https://www.oreilly.com/pub/e/3149">neuroscience-based talk</a> in 2014. (After all, performance is about people.)</p> <p dir="ltr">Velocity 2014 was also when I met <a href="https://twitter.com/MarkZeman">Mark</a>, right at the time that he&rsquo;d built SpeedCurve alongside his day job, launched it in New Zealand, and won a prize to launch it in the US. 
I think this was the first time I&rsquo;d seen someone bring strong visual design to a web performance product, and it was lovely.</p> <p dir="ltr">The performance company I worked for had a similar product, but it had never gotten much traction, so it was both fascinating and frustrating to watch a small bootstrapped business innovate and execute faster than we could, and to grow in a space where we&rsquo;d been unable to.</p> <p dir="ltr">While I worked for a competitor, my exposure to SpeedCurve was limited. It wasn&rsquo;t until I started freelancing again &ndash; and working with clients who used SpeedCurve &ndash; that I really came to appreciate its capabilities. (SpeedCurve's <a href="https://support.speedcurve.com/en/articles/2418481-create-custom-metrics">support for User Timing</a> was the &lsquo;cherry on top&rsquo; for me.)</p> <p dir="ltr">Unshackled from competitive pressures &ndash; and with clients in common &ndash; we talked more. I worked on a couple of projects for SpeedCurve, too.&nbsp;One of those projects was a competitive analysis of synthetic and real user monitoring (RUM) products. That analysis reinforced for me just how strong SpeedCurve was compared to many other vendors in this space.</p> <p dir="ltr">If the way to help more sites get faster is to build what I learned as a consultant into products, then one of the strongest web performance vendors seems an excellent place to do that.</p> <h2 dir="ltr">What I&rsquo;m going to be doing</h2> <p dir="ltr">I&rsquo;m going to be working alongside Tammy and Cliff, supporting our customers, helping them to get the most out of our products, and providing advice about how to improve the speed of their sites.</p> <p dir="ltr">We&rsquo;re also going to be exploring what our consulting offering could look like and what services we should provide to complement our products.</p> <p dir="ltr">As an industry, we&rsquo;ve gotten really good at measuring how fast (or slow) a site is, but we&rsquo;re not so good at helping answer the &lsquo;so what&rsquo; questions. And those are the questions that intrigue me. Such as...</p> <ul> <li dir="ltr">How can we help customers understand whether their site is fast enough, or what&rsquo;s the benefit of being faster?</li> <li dir="ltr">If they need to be faster, where should they focus their optimisation efforts?</li> <li dir="ltr">And, of course, what can actually be done to make a site faster?</li> </ul> <p dir="ltr">We collect a treasure trove of data about the performance of our customers&rsquo; sites. I&rsquo;m going to be researching how we can use our data to help answer those types of questions.</p> <p dir="ltr"><strong>If you&rsquo;d like to talk about what you&rsquo;d be looking for from our consulting offering, I&rsquo;d love to hear from you.</strong></p> <p dir="ltr">It&rsquo;s been a real pleasure to watch SpeedCurve grow over the past seven years. I&rsquo;m thrilled to have joined the team!</p> Tue, 15 Jun 2021 00:00:00 +1200 NEW! Chrome Beta and Canary support & LCP element highlighting https://www.speedcurve.com/blog/largest-contentful-paint-canary-beta <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/114/rendering-times.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></p> <p>Phew! Between the fast-paced release cycle for Chrome and the rapid evolution of Core Web Vitals, the month of May has been a busy one here at SpeedCurve. 
With that, we are excited to bring you some new features and enhancements to help you stay focused and ahead of the game as we move into summer.</p> <p>Read on to learn more about:</p> <ul> <li>Chrome Beta and Canary support</li> <li>Largest Contentful Paint (LCP) element highlighting</li> <li>Key rendering times</li> </ul><h2>Introducing Chrome Beta and Canary support</h2> <p>Keeping pace with Chrome release cycles is challenging. On one hand, we like to keep <a href="https://support.speedcurve.com/en/articles/74069-what-browsers-and-devices-can-i-test-with">our synthetic monitoring agents</a> as stable as possible, while on the other, we know how important it is to get in front of changes in order to keep your app running smoothly. We're excited to announce custom browser support for Chrome Beta and Canary, in addition to our stable version.</p> <h3>Understanding Chrome releases</h3> <p>The Chromium team provides the community with early&nbsp;<a href="https://www.chromium.org/getting-involved/dev-channel" target="_blank" rel="noopener">access to all releases</a>.&nbsp;For our purposes, we provide three versions for synthetic testing:</p> <ul> <li><strong>Stable</strong>&nbsp;&ndash; This is the latest supported version of Chrome that has been tested for stability on our platform. We update our stable release about once a quarter, so it's quite possible we will be a version or two behind the current stable Chromium build.</li> <li><strong>Beta</strong>&nbsp;&ndash; Think of this as 'What's next'. It's typically very consistent, with builds rolled out about once a week.</li> <li><strong>Canary</strong>&nbsp;&ndash; This is the very latest and greatest. It's under heavy development, with new builds rolling out daily.</li> </ul> <h3>Getting started</h3> <p>Adding Beta and/or Canary to the list of available browsers in your synthetic test settings is quite simple. <a href="https://support.speedcurve.com/en/articles/5258057-running-synthetic-tests-with-chrome-canary">Check out our guide for a quick walkthrough</a>.</p> <p><a href="https://support.speedcurve.com/en/articles/5258057-running-synthetic-tests-with-chrome-canary" target="_blank" rel="noopener"><img class="blog-img" src="https://blog-img.speedcurve.com/img/114/canary-how-to.png?auto=format,compress&amp;fit=max&amp;w=2000" alt="" /></a></p> <p><strong>Pro tip: If you want to compare across all three versions, set up a custom stable version as well.</strong> While the default desktop version of Chrome is already available, creating a custom stable profile lets you ensure that all other profile settings are the same across the three versions.</p> <p>Once created, the new profiles can easily be added to your existing or future site configurations.&nbsp;</p> <h3>Common use cases</h3> <h4>Validate changes seen in RUM</h4> <p>Changes to your real user monitoring (RUM) metrics can happen at any time, for any number of reasons. Our stable agents are only updated after extensive testing and can be out of step with the latest Chrome release. 
Having early access to Beta helps you spot browser-related issues before your users do.</p> <h4>Get ahead of key metric changes</h4> <p>Whether it's <a href="https://speedcurve.com/blog/new-vitals-dashboard/">Core Web Vitals</a>, <a href="https://support.speedcurve.com/en/articles/2569490-lighthouse-scores-and-audits">Lighthouse</a>, or your own <a href="https://support.speedcurve.com/en/articles/2418481-create-custom-metrics">custom metrics</a>, having early access to Chrome builds can help you manage any surprises coming your way and keep you moving in the right direction.</p> <h4>Future-proof your application</h4> <p>While we've historically seen browsers getting faster, that doesn't mean your application will always follow suit. Understanding the behavior of modern-day applications on a rapidly changing platform like Chrome can bring you peace of mind. Your future self will thank you.</p> <h3>What if things look strange?</h3> <p>Don't panic. The Beta and Canary releases are under active development and it's expected that you'll see some instability. If you see an unexpected change related to your metrics or Lighthouse behavior,&nbsp;<a href="mailto:support@speedcurve.com">we'd love to hear about it.</a>&nbsp;Otherwise, cross-check with&nbsp;<a href="https://www.chromestatus.com/features/schedule" target="_blank" rel="noopener">Chrome Platform Status</a>&nbsp;and the&nbsp;<a href="https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/speed/metrics_changelog/README.md">Web Vitals Changelogs</a>.</p> <h2>LCP highlights and key rendering</h2> <p>Chrome recently did everyone a solid by giving us more context for Largest Contentful Paint. As of Chrome 90, the bounding rectangle of the element responsible for LCP has been exposed. (Thanks, <a href="https://twitter.com/yoavweiss">Yoav</a>!) This allows us to highlight the element within the frame, removing any doubt about which element Chrome identified when reporting LCP.</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/114/lcp_netflix.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Image of Netflix website with highlighted LCP" width="351" height="241" /></p> <p>Pairing this with our own Last Painted Hero metric gives you a great way to determine meaningful rendering moments within the page:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/114/renderingmoments.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Image of SpeedCurve rendering moments for Netflix" /></p> <h3>Where can I find these visualizations?</h3> <p>The LCP highlights can be seen in your <a href="https://speedcurve.com/blog/new-vitals-dashboard/">Vitals dashboard</a>, your <a href="https://support.speedcurve.com/en/articles/1565262-see-your-detailed-historic-synthetic-test-results">test details</a>, and your Sites dashboard, where you'll also see how the metrics are trending over time, like this:</p> <p><img class="blog-img" src="https://blog-img.speedcurve.com/img/114/sites-dashboard.jpg?auto=format,compress&amp;fit=max&amp;w=2000" alt="Image of summary rendering times in SpeedCurve Site dashboard." /></p> <p>Things are moving pretty fast in the world of performance these days, but we hope this gives you the head start you need to stay out in front. We think these new capabilities are pretty cool and hope you do, too.</p>
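<p>If you're curious which element Chrome picks on your own pages, the same information is exposed in the browser through the standard PerformanceObserver API. Here's a minimal sketch (the outline styling is purely for demonstration):</p> <pre><code>// Minimal sketch: log the latest LCP candidate element and highlight it.
new PerformanceObserver(function (list) {
  var entries = list.getEntries();
  var lastEntry = entries[entries.length - 1]; // most recent LCP candidate
  if (lastEntry && lastEntry.element) {
    console.log("LCP element:", lastEntry.element);
    console.log("Bounding rect:", lastEntry.element.getBoundingClientRect());
    lastEntry.element.style.outline = "3px solid magenta"; // demo highlight
  }
}).observe({ type: "largest-contentful-paint", buffered: true });
</code></pre>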
<p>As always, if you have any questions or feedback, please shout out in the comments or via email at support@speedcurve.com.</p> Thu, 03 Jun 2021 00:00:00 +1200 Test agent updates: Chrome 91, Lighthouse 7.5 https://www.speedcurve.com/blog/update-221 <p>Updated test agents are being rolled out to all regions today. They include updates to several pieces of core software:</p> <ul> <li>Chrome has been updated from version <b>89</b> to <b>91</b></li> <li>Lighthouse has been updated from version <b>7.3.0</b> to <b>7.5.0</b></li> </ul> Wed, 02 Jun 2021 00:00:00 +1200 New features: Chrome Beta and Canary support & LCP element highlighting https://www.speedcurve.com/blog/update-222 <p>Today's release includes some great new features:</p> <ul> <li>Support for both Chrome Beta and Canary test agents</li> <li>LCP element highlighting and new key rendering visuals</li> </ul> <p>For more information on these features, <a href="https://speedcurve.com/blog/largest-contentful-paint-canary-beta/">see our latest post</a>.</p> Wed, 02 Jun 2021 00:00:00 +1200