With Long Animation Frames (commonly referred to as LoAF, pronounced 'LO-aff') we finally have a way to understand the impact of our code on our visitors' experiences.
Long Animation Frame – a frame that took longer than 50ms from its start to when it started painting
LoAF allows us to understand how scripts and other tasks affect both hard and soft navigations, as well as how scripts affect interactions. Using the data LoAF provides, we can identify problem scripts and target changes that improve our visitors' experience. We can also finally start to quantify the impact of third-party scripts as they execute in our visitors' browsers.
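If you want to see these entries for yourself, here's a minimal sketch (assuming a browser that supports the Long Animation Frames API) that logs each long animation frame along with how long it blocked the main thread:

```js
// Minimal sketch: log long animation frames as they happen.
// Requires a browser that supports the Long Animation Frames API.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      `LoAF at ${entry.startTime.toFixed(0)}ms: ` +
      `duration ${entry.duration}ms, ` +
      `blocking ${entry.blockingDuration}ms`
    );
  }
});

// buffered: true also surfaces frames that occurred before the observer was registered
observer.observe({ type: 'long-animation-frame', buffered: true });
```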
Keep reading to learn:
CPU consumption by the browser is one of the main causes – if not the number one cause – of a poor user experience. The primary culprit? JavaScript execution. Now you can use SpeedCurve to monitor Long Animation Frames (LoAFs) and fix the third parties and other scripts that are hurting your page speed.
Until recently, we've had little evidence from the field that definitively attributes the root cause of rendering delays. While JavaScript Long Tasks gave us a good indication that there were blocking tasks affecting metrics such as Interaction to Next Paint and Largest Contentful Paint, there was no way to attribute the work or understand how it was ultimately affecting rendering.
Fortunately, we've gotten a lot of help from Chrome in improving the attribution – and ultimately the actionability – of the data we collect in the field with RUM. The introduction of the Long Animation Frames API (LoAF) not only gives us better methods for understanding what's happening on the browser's main thread, but in some cases also gives us attribution to both first- and third-party scripts that execute during a LoAF.
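To give a sense of what that attribution looks like, here's a rough sketch (not SpeedCurve's implementation) that walks the scripts array on each LoAF entry and tallies blocking work by script origin, so first-party and third-party hosts can be compared:

```js
// Rough sketch: attribute long-animation-frame work to the scripts that caused it.
const totalsByOrigin = new Map();

new PerformanceObserver((list) => {
  for (const frame of list.getEntries()) {
    for (const script of frame.scripts) {
      // sourceURL points at the script file; it can be empty for inline scripts
      let origin = 'inline or unknown';
      try {
        origin = new URL(script.sourceURL).origin;
      } catch (e) { /* keep the fallback label */ }
      totalsByOrigin.set(origin, (totalsByOrigin.get(origin) || 0) + script.duration);
    }
  }
  console.table(Object.fromEntries(totalsByOrigin));
}).observe({ type: 'long-animation-frame', buffered: true });
```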
This has been a highly anticipated addition to SpeedCurve, which is available for all our RUM users today. This post covers what's new in the product and points you to a few new resources to help you get up to speed on all things related to LoAF.
Have you ever wondered why your site got faster, but your business and user engagement metrics didn't improve? The answer might lie on the performance plateau.
Have you ever asked yourself these questions?
"I made my pages faster, but my business and user engagement metrics didn't change. WHY???"
"How do I know how fast my site should be?"
"How can I demonstrate the business value of page speed to people in my organization?"
The answers might lie with identifying and understanding the performance plateau for your site.
Our newest performance hero is passionate, provocative, and unapologetically honest. While he's a true champion for web performance, his impact can be measured more broadly across the web. Join us in celebrating Alex Russell!
Alex Russell has been a strong voice in the web community for as long as I can remember. He's currently a Partner PM at Microsoft, working on Edge. Before that, he spent several years working at Google on Chrome, web standards, and much more.
Not only is Alex an accomplished engineer, he's also an amazing speaker and writer. I last saw Alex on stage at performance.now() in November, where he delivered this inspiring talk that got a lot of attendees talking.
If you could measure the impact of site speed on your business, how valuable would that be for you? Say hello to correlation charts – your new best friend.
Here's the truth: The business folks in your organization probably don't care about page speed metrics. But that doesn't mean they don't care about page speed. It just means you need to talk with them using metrics they already care about – such as conversion rate, revenue, and bounce rate.
That's why correlation charts are your new best friend.
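Under the hood, a correlation chart is just RUM sessions bucketed by a speed metric, with a business metric computed for each bucket. As an illustration only (hypothetical field names, not SpeedCurve's data model), the idea looks something like this:

```js
// Illustrative only: bucket sessions by LCP and compute conversion rate per bucket.
// `sessions` is assumed to be an array of objects like { lcp: 2300, converted: true }.
function correlationBuckets(sessions, bucketSizeMs = 500) {
  const buckets = new Map();
  for (const { lcp, converted } of sessions) {
    const bucket = Math.floor(lcp / bucketSizeMs) * bucketSizeMs;
    const stats = buckets.get(bucket) || { sessions: 0, conversions: 0 };
    stats.sessions += 1;
    if (converted) stats.conversions += 1;
    buckets.set(bucket, stats);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([bucket, s]) => ({
      lcpBucket: `${bucket}-${bucket + bucketSizeMs}ms`,
      sessions: s.sessions,
      conversionRate: s.conversions / s.sessions,
    }));
}
```

Plotting conversion rate against those LCP buckets is the chart: if conversions fall off as LCP climbs, that's the business case for speed in numbers your stakeholders already track.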
Comparing site outages to page slowdowns is like comparing a tire blowout to a slow leak. One is big and dramatic. The other is quiet and insidious. Either way, you end up stranded on the side of the road.
Downtime is horrifying for any company that uses the web as a vital part of its business (which is to say, most companies). Some of you may remember the Amazon outage of 2013, when the retail behemoth went down for 40 minutes. The incident made headlines, largely because those 40 minutes were estimated to have cost the company $5 million in lost sales.
Downtime makes headlines:
It's easy to see why these stories capture our attention. These are big numbers! No company wants to think about losing millions in revenue due to an outage.
While Amazon and other big players take pains to avoid outages, these companies also go to great effort to manage the day-to-day performance – in terms of page speed and user experience – of their sites. That’s because these companies know that page slowdowns can cause at least as much damage as downtime.
This month, we've made some updates to our synthetic testing agents. In addition to upgrading the underlying operating system, we've added support for:
We often hear how special, generous, and supportive the web performance community is. This didn't happen overnight. This month, we're excited to recognize someone who has been a huge part of creating the community culture we enjoy today: Sergey Chernyshev.
Whether answering questions on social media, helping someone with a proposal for a conference talk, or simply being welcoming and kind to newcomers, webperf folks are some of the most generous people you could ever hope to find. There are so many folks out there who are organizing, educating, evangelizing, and building great tooling in an effort to improve user experience on the web. Sergey has been doing all of those things earlier and longer than almost everyone!
Have you ever looked at the page speed metrics – such as Start Render and Largest Contentful Paint – for your site in both your synthetic and real user monitoring tools and wondered "Why are these numbers so different?"
Part of the answer is this: You have a lot of control over the design and code for the pages on your site, plus a decent amount of control over the first and middle mile of the network your pages travel over. But when it comes to the last mile – or more specifically, the last few feet – matters are no longer in your hands.
Your synthetic testing tool shows you how your pages perform in a clean lab environment, using variables – such as browser, connection type, even CPU power – that you've selected.
Your real user monitoring (RUM) tool shows you how your pages perform out in the real world, where they're affected by a myriad of variables that are completely outside your control.
In this post we'll review a handful of those performance-leeching culprits that are outside your control – and that can add precious seconds to the amount of time it takes for your pages to render for your users. Then we'll talk about how to use your monitoring tools to understand how real users experience your site.
The median web page is 8% bigger than it was just one year ago. How does this affect your page speed, your Core Web Vitals, your search rank, your business, and most important – your users? Keep scrolling for the latest trends and analysis.
For almost fifteen years, I've been writing about page bloat, its impact on site speed, and ultimately how it affects your users and your business. You might think this topic would be exhausted by now, but every year I learn new things – beyond the overarching fact that pages keep getting bigger and more complex, as you can see in this chart, using data from the HTTP Archive:
In this post, we'll cover: