
Core Web Vitals how-tos, RUM deep dive & industry benchmarks update

Sep 1, 2021

Hi there,

When I start compiling each of these newsletters, I rarely have a theme in mind, yet a couple of recurring topics have emerged in this edition:

Core Web Vitals. When web performance makes the pages of the Wall Street Journal, you know it’s big news. Core Web Vitals are part of Google's recent page experience update. If you're not already measuring and tracking them, this is a good time to start. 

Real user monitoring. If you want to understand how actual humans experience your site, the only way to do that is by monitoring real user behaviour. But real user monitoring is fraught with complexity. "How do I find meaningful insights?" is one question I hear a lot. Other questions include "Why doesn't my RUM data match my synthetic test data?" and "Why do I need RUM *and* synthetic?"

Luckily, there's an ever-growing body of research and best practices to support you on your journey to master Web Vitals and RUM. These are exciting times in the web performance industry. I'm glad you're along for the ride. :)

Until next time,
Tammy
@tameverts

NEW! RUM Sessions dashboard

Real user monitoring (RUM) data is rich and meaningful. The biggest problem is that there's SO MUCH OF IT. Navigating RUM data has typically been done by peeling back one layer at a time, trying to identify the root cause when we see a change in metrics. 

"Did the last release cause a drop in performance?"
"How do I drill down to see what's going on?"
"Is the issue regional? Or browser based?"

Answering these questions can feel like searching for the proverbial needle in a haystack. Our new RUM Sessions dashboard allows you to drill into a dataset, explore sessions that occurred within a given time window, and locate those needles in your RUM haystack.

UPDATE: Comparing synthetic tests just got easier

One of the huge benefits of tracking web performance over time is the ability to see trends and compare metrics. Last year we added new functionality that makes it easy for you to bookmark and compare different synthetic tests in your test history. Now you can compare tests directly from your charts.

If you just want to compare two tests and aren't interested in bookmarking them, click on the data point and then click on "Compare Test" within the popup. You'll see a message instructing you to select another test for comparison. After you select a second test, you'll be taken straight to a compare dashboard where you can see the side-by-side test results.

NEW! Industry speed benchmarks for Japan

We have a number of SpeedCurve users who want an ongoing understanding of how popular Japanese websites perform, so we've added Japan to our Industry Page Speed Benchmarks dashboard.

This dashboard gives you a high-level perspective into the performance of industry-leading sites, plus the ability to drill down and see detailed test results, including waterfall charts and Lighthouse scores. You don't need a SpeedCurve account to use this dashboard, so head over and check it out.

FREE EVENT: Smashing Meets for Speed

If, like me, you're hungry for more industry events, you're in luck! On Thursday, September 30, from 9am to noon (PDT), the Smashing Conferences team is hosting a free online event – including talks by Addy Osmani, Robin Marx, and me – plus a panel discussion called "The Why and How of Web Performance", moderated by Vitaly Friedman. Get more info on the event and claim your ticket.

ICYMI: Integrate SpeedCurve into your CI/CD environment

It's not enough to make your pages faster... you need to keep them fast. The regression struggle is real! You can integrate your synthetic testing into your continuous integration or continuous deployment (CI/CD) environment as a way to test for and react to performance issues. 

In a typical CI/CD pipeline, synthetic testing fits in either the integration testing stage, the post-deploy stage, or both. If any of your performance thresholds are violated, you can opt to go live anyway (and make fixes later) or break the build.
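The "break the build" option above boils down to a simple gate: compare each measured metric against its performance budget and exit non-zero when the budget is blown. Here's a minimal sketch of that step in shell. The function name, metric names, and values are all placeholders for illustration, not SpeedCurve's actual API; in a real pipeline the measured values would come from your synthetic test results.

```shell
#!/usr/bin/env sh
# Hypothetical post-deploy performance gate.
# check_budget METRIC_NAME MEASURED_MS BUDGET_MS
# Returns non-zero (which your CI runner treats as a failed step,
# breaking the build) when the measurement exceeds the budget.
check_budget() {
  name="$1"; measured="$2"; budget="$3"
  if [ "$measured" -gt "$budget" ]; then
    echo "FAIL: $name ${measured}ms exceeds budget of ${budget}ms"
    return 1
  fi
  echo "PASS: $name ${measured}ms is within budget"
}

# Placeholder values: in practice, fetch these from your synthetic
# test results after the deploy finishes.
check_budget "Largest Contentful Paint" 2100 2500 || exit 1
check_budget "Start Render" 900 1000 || exit 1
```

If you'd rather go live anyway and fix regressions later, drop the `|| exit 1` and just let the script log the failures for follow-up.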

Good reads from the #webperf community: