If you want to understand how people actually experience your site, you need to monitor real users. The data we get from real user monitoring (RUM) is extremely useful when trying to get a grasp on performance. Not only does it serve as the source of truth for your most important budgets and KPIs, it helps us understand that performance is a broad distribution encompassing many different cohorts of users.
While real user monitoring offers unparalleled insight into user experience, the biggest challenge with RUM data is that there's so much of it. Navigating all this data has typically meant peeling back one layer of information at a time, and it often proves difficult to identify the root cause when we see a change:
"What happened here?"
"Did the last release cause a drop in performance?"
"How can I drill down from here to see what's going on?"
"Is the issue confined to a specific region? Browser? Page?"
Today we're excited to release a new capability – your RUM Sessions dashboard – which allows you to drill into a dataset and explore those sessions that occurred within a given span of time.
We have released v216 of lux.js. This release contains bug fixes and other improvements:
LUX.beaconMode = "simple". This sends a beacon by inserting an image into the page, rather than a
We have released v214 of lux.js. This is a hotfix release that reverts the long task buffering introduced in v213 due to a bug in the buffering implementation.
We have released v213 of lux.js. This release contains bug fixes and other improvements:
We have released v211 of lux.js. This is a bug fix release:
LUX.init() no longer causes all user timing marks & measures to be cleared.
We have released a new version of lux.js. This release includes minor changes to the self-updating mechanism within lux.js.
We have released v209 of lux.js. This is a bug fix release:
We have released v208 of lux.js. This is a bug fix release:
User timing marks & measures are now measured relative to the most recent LUX.init call. Previously they were measured since the beginning of the initial page load, causing user timing values in subsequent pages to be much higher than expected.
Previously, the page label was always taken from the initial document.title value. Now the page label is updated whenever document.title changes.
We have released v206 of lux.js. This version contains two small fixes:
Prior to the latest update to lux.js, First Input Delay (FID) was only being captured for the first interaction (IX) after window.load. This update ensures that if a user interacts with the page before window.load, we will capture FID for that interaction. Customers whose sites have longer page load times and/or very interactive users may notice more substantial changes to this metric.
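For readers curious how early interactions can be captured at all, here is a minimal sketch using the standard Event Timing API's buffered 'first-input' entries. This is illustrative only and is not lux.js's actual implementation; the computeFid helper is a name we made up for this example.

```javascript
// Illustrative sketch (not lux.js internals): capturing FID via the
// Event Timing API's 'first-input' entry type.

// FID is the gap between the user's interaction and the moment the
// browser could start running its event handlers.
function computeFid(entry) {
  return entry.processingStart - entry.startTime;
}

// 'first-input' entries are buffered by the browser, so an observer
// registered late (even after the interaction, before window.load)
// still receives the entry.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('first-input')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('FID:', computeFid(entry), 'ms');
    }
  }).observe({ type: 'first-input', buffered: true });
}
```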
I confess, I’m not a statistician. While I pride myself on the 'A' I received in my college statistics class, admittedly it was on a pretty steep curve. That said, I’ve been looking at performance data for many years and have found myself on both sides of the debate about whether sampling performance data is inherently good or bad.
When it comes to real user monitoring (RUM), I’m convinced that the marginal cost of collection, computation, storage, etc. is not always great enough to warrant a practice of collecting ALL THE THINGS by default.
Like any experiment, how you sample RUM data – as well as how much data to sample – depends on the answers you seek. While certainly not an exhaustive list, here are some questions you might ask when looking at implementing a sampled approach to real user monitoring...
If you're already using LUX, the new dashboards are available in your left-hand navbar. If you're not using LUX yet, here's how to activate your free trial.
We fixed a miscalculation in LUX CPU Time metrics. Previously, we returned no CPU time metrics if the browser did not support the Long Tasks API or if there were no long tasks, and those pages were ignored when calculating the CPU Time metrics. This undercounted fast pages. The fix is to return "0" for pages that support the Long Tasks API but are so fast there aren't any long tasks. Most customers won't be affected, as almost all pages have at least one long task, but customers with very fast pages may see a decrease in their LUX CPU Time metrics.
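The essence of the fix is distinguishing "no data" from "a genuinely fast page". A minimal sketch of that logic (our own illustration, not LUX's actual code) might look like:

```javascript
// Illustrative aggregation logic (not LUX's implementation):
// distinguish "Long Tasks API unsupported" (no data, exclude the page)
// from "supported but zero long tasks" (a fast page, count it as 0).
function cpuTime(longTasks, longTasksSupported) {
  if (!longTasksSupported) {
    return null; // unsupported browser: exclude from CPU Time metrics
  }
  // Supported but empty: report 0 so fast pages pull the metric down
  // instead of being silently dropped.
  return longTasks.reduce((total, task) => total + task.duration, 0);
}
```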
We've added a handful of new metrics to LUX, our real user monitoring solution: First Contentful Paint, First CPU Idle, Longest JS Task, Number of JS Tasks, HTML Size, Total Image Requests, and Connection Type. Read more about these metrics in our blog post.
It's exciting working at SpeedCurve and pushing the envelope on performance monitoring to better measure the user's experience. We believe when it comes to web performance it's important to measure what the user sees and experiences when they interact with your site. A big part of our focus on metrics has been around rendering, including comparing TTI to FMP, Hero Rendering, and critical blocking resources.
The main bottleneck when it comes to rendering is the browser main thread getting blocked. This is why we launched CPU charts for synthetic testing over a year ago. Back then it wasn't possible to gather CPU information using real user monitoring (RUM), but the Long Tasks API changes that. Starting today, you can track how CPU impacts your users with SpeedCurve's RUM product, LUX.
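For a sense of what the Long Tasks API exposes, here is a hedged sketch of how a RUM script might observe long tasks (main-thread blocks of 50ms or more) and summarize them at beacon time. The helper names are ours, and lux.js's real collection is more involved.

```javascript
// Illustrative sketch of Long Tasks collection in RUM (assumed
// design; helper names are hypothetical, not lux.js internals).
const longTasks = [];

// Total time the main thread was blocked by long tasks.
function totalCpuBusyTime(tasks) {
  return tasks.reduce((sum, t) => sum + t.duration, 0);
}

// Duration of the single longest task (0 if there were none).
function longestTask(tasks) {
  return tasks.reduce((max, t) => Math.max(max, t.duration), 0);
}

if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('longtask')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      longTasks.push({ duration: entry.duration, start: entry.startTime });
    }
  }).observe({ type: 'longtask' });
  // At beacon time, something like:
  //   send({ cpuBusy: totalCpuBusyTime(longTasks),
  //          longestTask: longestTask(longTasks) });
}
```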
SpeedCurve comes with a great set of dashboards for synthetic and RUM. But we know that one size does not fit all when it comes to data charts, which is why we've invested so much work into the Favorites dashboards. For customers who use LUX, Favorites provides a place to create custom charts that combine metrics from synthetic and RUM.
We just added some new RUM metrics from LUX in Favorites to allow for even more customized monitoring:
SpeedCurve is a SPA (Single Page App) so we construct the charts dynamically using JSONP. It works great, but we're always looking for ways to make the dashboards faster. One downside to making requests dynamically is that the browser preloader isn't used. This isn't a factor for later SPA requests, but on the first page view the preloader might still bring some benefits. Or maybe not. We weren't sure, so we ran an A/B test. Long story short, doing the first JSONP request via markup caused charts to render 300 milliseconds faster.
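To illustrate the difference, here is a rough sketch of the two JSONP styles. The URLs and callback names are made up for this example; the point is that a script tag written directly in the HTML is visible to the browser's preload scanner, while one created from JavaScript is not.

```javascript
// Illustrative JSONP helpers (hypothetical names and URLs).

// Build the JSONP request URL with the callback parameter.
function buildJsonpUrl(url, callbackName) {
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + 'callback=' + encodeURIComponent(callbackName);
}

// Dynamic JSONP: the preload scanner never sees this request, so it
// can't start until this script executes.
function jsonp(url, callbackName, onData) {
  window[callbackName] = onData;
  const script = document.createElement('script');
  script.src = buildJsonpUrl(url, callbackName);
  document.head.appendChild(script);
}

// Markup JSONP for the first page view: because the tag is in the
// HTML, the preload scanner can start the fetch early, e.g.:
//
//   <script>window.handleChartData = (d) => renderCharts(d);</script>
//   <script src="/api/charts?callback=handleChartData"></script>
```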
We're excited to announce SpeedCurve's RUM product, LUX.
The name LUX is a play on "Live User eXperience" and reflects how we've taken a different approach compared to other Real User Monitoring products. SpeedCurve's mission is to help designers and developers create joyous, fast user experiences. To do that, we focus on metrics that do a better job of revealing what the user's experience is really like.
In addition to standard RUM metrics like page load time and total size, LUX includes innovative new metrics that have more to do with the user experience like start render time, number of critical blocking resources, images above the fold, and viewport size. LUX's RUM metrics help you figure out which design and development improvements will make your users happier and your business more successful.