One of the great things about Google's Core Web Vitals is that they provide a standard way to measure our visitors’ experience. Core Web Vitals can answer questions like:
Sensible defaults, such as Core Web Vitals, are a good start, but one pitfall of standard measures is that they can miss what’s actually most important.
Largest Contentful Paint (LCP) assumes that the largest visible element is the most important content from the visitor's perspective; however, we don't get to choose which element it measures. LCP may not be measuring the most appropriate – or even the same – element for each page view.
In the case of a first-time visitor, the largest element might be a consent banner. On subsequent visits to the same page, the largest element might be an image for a product or a photo that illustrates a news story.
The screenshots from What Hi-Fi (a UK audio-visual magazine) illustrate this problem. When the consent banner is shown, one of its paragraphs is the LCP element. When the consent banner is not shown, an article title becomes the LCP element. In other words, the LCP timestamp varies depending on which of these two experiences the visitor had!
What Hi-Fi with and without the consent banner visible
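If you want to check which element was chosen for a given page view, the Largest Contentful Paint API exposes it. Here's a minimal sketch you could paste into the browser console (assuming a browser that supports the largest-contentful-paint entry type):

// Log the element the browser chose as the LCP candidate for this page view.
// The last entry reported before user input is the final LCP element.
const lcpObserver = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log('LCP element:', lastEntry.element, 'at', lastEntry.startTime, 'ms');
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });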
Experimentation tools that use asynchronous scripts – such as Google Optimize, Adobe Target, and Visual Website Optimizer – recommend using an anti-flicker snippet to hide the page until they've finished executing. But this practice comes with some performance measurement pitfalls:
In this post we'll look at how anti-flicker snippets work, their impact on Web Vitals, and how to measure the delay they add to visitors' experience.
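For context, here's a simplified sketch of the anti-flicker pattern – not any vendor's exact snippet. A CSS rule hides the page, and a small inline script reveals it once the experimentation tool is done (or a safety timeout fires):

// Simplified anti-flicker sketch – not any vendor's exact snippet.
// Assumes a CSS rule like .async-hide { opacity: 0 !important } in the page.
document.documentElement.classList.add('async-hide');
function showPage() {
  document.documentElement.classList.remove('async-hide');
}
// Hypothetical hook: the experimentation tool would call showPage() once
// it has applied its variants.
window.onExperimentsReady = showPage;
// Safety net: never keep the page hidden for more than 4 seconds.
setTimeout(showPage, 4000);

Until showPage() runs, the visitor is staring at a blank page – which is exactly why these snippets matter for performance.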
Chances are, you're here because of Google's update to its search algorithm, which affects both desktop and mobile, and which includes Core Web Vitals as a ranking factor. You may also be here because you've heard about the most recent potential candidates for addition to Core Web Vitals, which were just announced at Chrome Dev Summit.
A few things are clear:
If you're new to Core Web Vitals: it's a Google initiative that launched in early 2020. Core Web Vitals is (currently) a set of three metrics – Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift – intended to measure the loading, interactivity, and visual stability of a page.
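If you want to capture these three metrics from your own real users, one common approach is Google's open-source web-vitals JavaScript library. A minimal sketch (in practice you'd beacon the values to your RUM or analytics endpoint rather than log them):

// Minimal sketch using Google's open-source web-vitals library.
import { getCLS, getFID, getLCP } from 'web-vitals';

getLCP((metric) => console.log('LCP', metric.value)); // loading
getFID((metric) => console.log('FID', metric.value)); // interactivity
getCLS((metric) => console.log('CLS', metric.value)); // visual stability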
When Google talks, people listen. I talk with a lot of companies and I can attest that, since Web Vitals were announced, they've shot to the top of many people's list of things to care about. But Google's prioritization of page speed in search ranking isn't new, even for mobile. As far back as 2013, Google announced that pages that load slowly on mobile devices would be penalized in mobile search.
Keep reading to find out:
One of the huge benefits of tracking web performance over time is the ability to see trends and compare metrics. Last year we added new functionality that makes it easy for you to bookmark and compare different synthetic tests in your test history. We recently added some additional enhancements to make comparing tests even easier.
With the 'Compare' feature, you can generate side-by-side comparisons that let you not only spot regressions, but easily identify what caused them:
Along the way, we've also made it much more intuitive for you to drill down into your detailed synthetic test results. Let's take a look...
After Google's announcement about Lighthouse 8 this past month, we have updated our test agents. We've gotten a lot of questions about what has changed and the impact on your performance metrics, so here's a summary.
I love conversations about performance, and I'm fortunate enough to have them a lot. The audience varies. A lot of the time it's a front-end developer or head of engineering, but more and more I'm finding myself in great conversations with product leaders. As great as these discussions can be, I often walk away feeling like there must be a better way to streamline the conversation while still conveying my passion for bringing fellow PMs into the world of webperf. I hope this post can serve that purpose and cover a few of the fundamental areas of web performance that I've found most useful while honing the craft of product management.
So, whether you are a PM or not, if you're new to performance I've put together a few concepts and guidelines you can refer to in order to ramp up quickly. This post covers:
Let's get started...
Performance budgets are one of those ideas that everyone gets behind conceptually, but then struggles to put into practice – and for very good reason. Web pages are unbelievably complex, and there are hundreds of different metrics available to track. If you're just getting started with performance budgets – or if you've been using them for a while and want to validate your work – this post is for you.
A performance budget is a threshold that you apply to the metrics you care about the most. You can then configure your monitoring tools to send you alerts – or even break the build, if you're testing in your staging environment – when your budgets are violated.
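To make that concrete, here's a hypothetical build-time check – not SpeedCurve's implementation, and the metric names and thresholds are examples only – that fails when a budget is violated:

// Hypothetical CI check: compare measured metrics against budget thresholds
// and break the build when any budget is violated. Names and numbers are
// examples – use the metrics and thresholds that matter most to you.
const budgets  = { lcp: 2500, cls: 0.1, jsTransferKB: 300 };
const measured = { lcp: 2750, cls: 0.08, jsTransferKB: 280 }; // from your test run

const violations = Object.keys(budgets).filter((m) => measured[m] > budgets[m]);
if (violations.length > 0) {
  console.error('Performance budget violated:', violations.join(', '));
  process.exit(1); // fail the build
}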
Understanding the basic premise of performance budgets is pretty easy. The tricky part comes when you try to put them into practice. This is when you run into three important questions:
Depending on whom you ask, you could get very different answers to these questions. Here are mine.
Getting up to speed on Core Web Vitals seems to be at the top of everyone's to-do list these days. Just in time for the holidays, we are happy to bring you our new Vitals dashboard to help you get a huge jumpstart.
We love to visualize performance data (in case you haven't heard). We love it even more when we can be true to one of our biggest motivations at SpeedCurve: leveraging both RUM and Synthetic data to help you take action on what matters most.
We’ve been pretty vocal about Core Web Vitals since Google announced this initiative last spring. We love the idea of having a lean, shared set of metrics that we can all rally around – not to mention having a broader conversation about web performance that includes teams throughout an organization.
For many site owners, the increased focus on Core Web Vitals is driven by the fact that Google will be including them as a factor in search ranking in May 2021. Other folks are more interested in distilling the extremely large barrel of performance metrics into an easily digested trinity of guidelines to follow in order to provide a delightful user experience.
We’ve had some time to evaluate and explore these metrics, and we're committed to transparently discussing their pros and cons.
The purpose of this post is to explore First Input Delay (FID). This metric is unique among the three Web Vitals in that it can only be measured using real user monitoring (RUM), while the other two (Largest Contentful Paint and Cumulative Layout Shift) can be measured using both RUM and synthetic monitoring.
In this post we'll cover:
Let's dig in!
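As a quick illustration of why FID needs RUM: it's derived from a real input event via the Event Timing API. Here's a rough sketch of how a RUM script can capture it:

// Sketch: capture First Input Delay from the Event Timing API.
// FID = delay between the user's first interaction and when the browser
// was able to start running the event handler – it needs a real user.
const fidObserver = new PerformanceObserver((list) => {
  const firstInput = list.getEntries()[0];
  console.log('FID:', firstInput.processingStart - firstInput.startTime, 'ms');
});
fidObserver.observe({ type: 'first-input', buffered: true });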
Back in May, we shared that SpeedCurve supports Google's Core Web Vitals in both our synthetic monitoring and real user monitoring tools. Two of the Web Vitals – Largest Contentful Paint (LCP) and First Input Delay (FID) – were actually available in SpeedCurve for quite a while prior to the announcement. The newcomer to the scene was Cumulative Layout Shift (CLS), and, not surprisingly, it's the metric that's gotten the most questions.
A few of the questions I've been asked (or asked myself) about Cumulative Layout Shift:
Six months in, I've had a chance to gather and look at a lot of data, talk with customers, and learn from our friends in the performance community. Here's what I've learned so far.
One of the big challenges with Google's new Cumulative Layout Shift (CLS) metric is understanding which elements actually moved on the page, when they moved, and by how much. To help with debugging your CLS scores, we've added a new visualization to SpeedCurve that shows each layout shift and how each individual shift adds up to the final cumulative metric.
For each layout shift, we show you the filmstrip frame right before and right after the shift. We then draw a red box around the elements that moved, highlighting exactly which elements caused the shift. The Layout Shift Score for each shift also helps you understand the impact of that shift and how it adds to the cumulative score.
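If you want to poke at this yourself in the browser, the Layout Instability API exposes each individual shift, including which elements moved. A rough sketch:

// Sketch: log each layout shift and the elements that moved.
let cls = 0;
const clsObserver = new PerformanceObserver((list) => {
  for (const shift of list.getEntries()) {
    if (!shift.hadRecentInput) { // shifts right after user input don't count toward CLS
      cls += shift.value;
      console.log('Layout shift of', shift.value, 'at', shift.startTime, 'ms');
      shift.sources.forEach((source) => console.log('  moved element:', source.node));
    }
  }
  console.log('CLS so far:', cls);
});
clsObserver.observe({ type: 'layout-shift', buffered: true });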
Is your site fast? Adding to a string of recent announcements including Lighthouse v6 and Core Web Vitals, Google has introduced Fast page labelling in Chrome 85 for Android. If you are curious about what this means for your site and how you can get in front of it, read on!
Lighthouse v6 has arrived! The much-anticipated update to Lighthouse is now available to SpeedCurvers as part of our latest test agent updates. Keep reading to find out what this means and how it may affect your performance metrics.
Google recently announced an initiative called 'Web Vitals', which focuses on three performance metrics they consider essential for improving the user experience:
With the exception of FID, all of these metrics are available in both RUM and Synthetic. FID requires a real user for calculation and therefore is only available in RUM. In place of FID for Synthetic, we recommend tracking JS Long Tasks or Total Blocking Time as an alternative CPU metric.
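As a rough illustration of the Long Tasks approach (a sketch, not our exact implementation): a PerformanceObserver can record every task over 50ms, and summing the portion of each task beyond 50ms gives you a Total Blocking Time-style CPU metric:

// Sketch: track Long Tasks (any task over 50ms) as a CPU metric.
let blockingTime = 0;
const longTaskObserver = new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    blockingTime += Math.max(0, task.duration - 50); // only the portion over 50ms blocks
    console.log('Long task:', task.duration, 'ms; blocking time so far:', blockingTime, 'ms');
  }
});
longTaskObserver.observe({ type: 'longtask' });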
The Chrome team has snuck a fundamental change to how user timing marks work into Chrome 78. For many years now, our own Steve Souders has championed user timing, and he has always believed you should be able to set a value for a user timing mark or measure.
With User Timing Level 3, now you can! Level 3 lets you explicitly set a startTime, whereas previously you could only pass in the name of a mark.
performance.mark("name", {startTime: 1000})
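Level 3 also lets you attach a detail payload to a mark and pass explicit start and end options to performance.measure(). A quick sketch (the mark names here are just examples):

// User Timing Level 3: marks can take an explicit startTime and a detail payload,
// and measures can be defined between named marks (or explicit timestamps).
performance.mark('fetch-start', { startTime: 500 });
performance.mark('fetch-end', { startTime: 1200, detail: { source: 'cdn' } });
performance.measure('fetch-duration', { start: 'fetch-start', end: 'fetch-end' }); // duration: 700ms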
Here are some common questions I’m asked when I talk with people about performance:
Today, I’m very excited to announce the release of a new project that helps answer those questions – and more!
Page Speed Benchmarks is an interactive dashboard that lets you explore and compare web performance data for leading websites across several industries – from retail to media.
With Page Speed Benchmarks, you can do things like:
If you already like tools like the HTTP Archive, I think you'll love how you can use Page Speed Benchmarks to complement the insights you're already getting. Keep reading to find out how we set up these benchmarks, and how you can mine our test data – even if you're not a SpeedCurve user – for your own performance research.
For the past two years, the performance.now() conference has been the most valuable performance event I've attended. So valuable, in fact, that I've made some of the talks the cornerstone of this list of performance resolutions for 2020. I'd love to know how many – if any – of these are on your list, and as always, I welcome your feedback!
Here at SpeedCurve, the past few months have found us obsessing over how to define and measure user happiness. We've also been scrutinizing JS performance, particularly as it applies to third parties. And as always, we're constantly working to find ways to improve your experience with using our tools. See below for exciting updates on all these fronts.
As always, we love hearing from you, so please send your feedback and suggestions our way!
Over the winter holiday we added a bunch of new metrics to LUX:
The number of performance metrics is large and grows every year. It's important to understand what the different metrics represent and to pick the ones that matter for your site. Our Evaluating rendering metrics post was a popular (and fun) way to compare and choose rendering metrics. Recently I created this timeline of performance metric medians from the HTTP Archive for the world's top ~1.3 million sites: