We put a lot of thought into curating a thematic set of dashboards that help you understand the performance of your front-end, but sometimes you just want to play with the data yourself and slice 'n' dice it in all sorts of different ways. We've added a new "Favorites" dashboard that lets you do just that. You can explore the data and build your own charts, then rearrange them and share them with the team to help demonstrate the performance issues you're focused on right now.
Here's a walkthrough showing you how to slice the available data in different ways:
Often when monitoring and debugging site performance we focus on network activity and individual resources, but what about the CPU? As more and more sites switch to large JavaScript frameworks and manipulate the page using JavaScript, the time it takes to execute that code, and the CPU available to run it, can instead become the performance bottleneck.
For any of SpeedCurve's Chrome-based tests, including emulated devices, we capture the Chrome DevTools timeline. From the browser main thread usage in the timeline we extract how busy the CPU is and what it's spending time on. Is it busy executing JavaScript functions, or busy laying out elements and painting pixels?
We also measure the CPU usage up to different key events in the rendering of the page. SpeedCurve's focus is on the user experience and getting content in front of people as fast as possible, so we show you what the CPU is doing up until the page starts to render. This reflects CPU usage during the browser's critical rendering path and can highlight various issues. If there's lots of CPU idle time, then you're not delivering your resources efficiently. You want to get the CPU busy nice and early rendering the page, rather than sitting idle waiting for slow resources.
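If you'd like a rough, in-page feel for how busy the main thread is, the browser's Long Tasks API is one way to peek at it. The sketch below is purely illustrative and isn't how SpeedCurve derives its CPU metrics (those come from the DevTools timeline); it simply tallies main-thread tasks longer than 50ms.

```typescript
// Illustrative only: tally long main-thread tasks (>50 ms) with the Long Tasks API.
// SpeedCurve's CPU metrics come from the Chrome DevTools timeline, not this API.
let longTaskMs = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each "longtask" entry is a block of main-thread work that exceeded 50 ms.
    longTaskMs += entry.duration;
  }
});

observer.observe({ entryTypes: ['longtask'] });

window.addEventListener('load', () => {
  console.log(`Main thread blocked by long tasks for ~${Math.round(longTaskMs)} ms so far`);
});
```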
In the test below we see in the first pie chart that the CPU is spending a lot of time on layout up to the start render event, which is quite a different picture from the Fully Loaded CPU usage.

If you're a performance engineer, then you're familiar with waterfall charts. They are found in browser dev tools as well as other performance services. I use multiple waterfall tools every day, but the waterfall chart I love the most is the one we've built at SpeedCurve:

Progressive Web Apps (PWAs) combine the best and newest features of the Web to deliver an experience that rivals native applications on mobile. Even better, they work on desktop, too. In fact, they work everywhere that the Web works! "Ah", you say, "that's not true! They require features that don't exist in all browsers." Because PWAs are "progressive", they can adapt to older browsers to deliver the best experience possible given the features that are available.
Given these winning attributes, it's no surprise that PWAs are the most popular movement in web development today. And while there are already numerous conferences, videos, and evangelists for PWAs, there's less focus on testing and tracking the performance of PWAs. Mark and I discussed this gap in PWA performance monitoring. In response, we added some new features to SpeedCurve that, coupled with some existing features, provide a great toolkit for evaluating the performance of PWAs.
SpeedCurve’s sweet spot is the intersection of design and performance - where the user experience lives. Other monitoring services focus on network behavior and the mechanics of the browser. Yet users rarely complain that “the DNS lookups are too slow” or “the load event fired late”. Instead, users get frustrated when they have to wait for the content they care about to appear on the screen.
The key to a good user experience is quickly delivering the critical content.
SpeedCurve users may have noticed some changes recently. At the beginning of March we released a major redesign of our Settings UI that gives users more flexibility to get the exact test results they want. The two biggest changes were to the way we emulate devices and the ability to use different testing configurations for different sites.
At SpeedCurve, we focus on metrics that capture the user experience. A big part of the user experience is when content actually appears in front of the user. Since stylesheets and synchronous scripts are the culprits when it comes to blocking rendering, we've rolled out some new metrics that focus on these critical blocking resources.
The most helpful innovation we made is to highlight the critical blocking stylesheets and synchronous scripts in our waterfall charts. In the following example waterfall chart for ESPN, notice how the critical stylesheets (green) and synchronous scripts (orange) have a red hash pattern. Not surprisingly, the Start Render metric is delayed until after the last of these critical blocking resources is done loading. The "scrubber" at the bottom of the waterfall (showing the screenshot at that point in time) confirms that rendering has been blocked up to this point. Explore an example of an interactive waterfall chart.
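If you're wondering how to get a script off that critical blocking list, one common approach is to inject it dynamically, since injected scripts don't block rendering. Here's a minimal sketch; the script path is hypothetical, not a SpeedCurve asset:

```typescript
// Minimal sketch: load a script without blocking rendering.
// '/js/widget.js' is a hypothetical path used for illustration.
const script = document.createElement('script');
script.src = '/js/widget.js';
script.async = true; // injected scripts are async by default; set it explicitly for clarity
document.head.appendChild(script);
```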

If you want to improve performance, you must start by measuring performance. But what should you measure?
Across the performance industry, the metric that's used the most is "page load time" (i.e., "window.onload" or "document complete"). Page load time was pretty good at approximating the user experience in the days of Web 1.0, when pages were simpler and each user action loaded a new web page (multi-page websites). In the days of Web 2.0 and single-page apps, page load time no longer correlates well with what the user sees. A great illustration is found by comparing Gmail to Amazon.
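For reference, here's roughly how "page load time" is captured in the browser. This is a generic Navigation Timing sketch, not SpeedCurve's own instrumentation:

```typescript
// Rough sketch: page load time via the Navigation Timing API.
window.addEventListener('load', () => {
  // loadEventEnd isn't populated until the load event has finished,
  // so read it on the next task.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
    if (nav) {
      // Time from the start of navigation until the load event completed.
      console.log(`Page load time: ${Math.round(nav.loadEventEnd - nav.startTime)} ms`);
    }
  }, 0);
});
```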
In the last few years some better alternatives to page load time have gained popularity, such as start render time and Speed Index. But these metrics suffer from the same major drawback as page load time: they are ignorant of what content the user is most interested in on the page.
Any performance metric that values all the content the same is not a good metric.
Users don't give equal value to everything in the page. Instead, users typically focus on one or more critical design elements in the page, such as a product image or navbar. In searching for a good performance metric, ideally we would find one that measures how long the user waits before seeing this critical content. Since browsers don't know which content is the most important, it's necessary for website owners to put these performance metrics in place. The way to do this is to create custom metrics with User Timing.
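To make that concrete, here's a minimal User Timing sketch. The element selector and metric names are hypothetical, and an image's load event is only an approximation of when it appears on screen, but the pattern is what matters: mark the moment your critical content is ready, and your monitoring tools can pick that mark up.

```typescript
// Minimal sketch: a User Timing custom metric for a critical design element.
// '#hero-image' and the metric names are hypothetical.
function markHeroRendered(): void {
  performance.mark('hero_image_rendered');
  // Turn the mark into a duration measured from the start of navigation.
  performance.measure('hero_image_time', 'navigationStart', 'hero_image_rendered');
}

const hero = document.querySelector<HTMLImageElement>('#hero-image');
if (hero) {
  if (hero.complete) {
    markHeroRendered();
  } else {
    hero.addEventListener('load', markHeroRendered, { once: true });
  }
}
```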
Performance budgets are an important tool for ensuring your site is delivering a great user experience. Steve first experienced performance budgets while Head Performance Engineer at Google. The practice of using budgets to track performance took off with Tim Kadlec's blog post Setting a Performance Budget. The idea is to identify your performance goals and track the metrics that help you achieve your goals.
At SpeedCurve, we give performance budgets first-class status by tracking them in the Site dashboard. Here's an example of tracking a budget for image size.

Before setting your performance budgets, you first have to be monitoring your user experience. Only then can you set budgets and thresholds for improving your baseline user experience. This also allows you to quantify the improvements you're making and share success stories across the organization like "We just improved start render by 32% by reducing image requests to half the budgeted amount".
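If you also want to enforce budgets in your own build or CI scripts, the core idea is just comparing measured values against thresholds. A purely illustrative sketch, with made-up metric names and numbers:

```typescript
// Illustrative sketch of a performance budget check; all names and values are hypothetical.
interface Budget {
  metric: string;
  max: number;
  unit: string;
}

const budgets: Budget[] = [
  { metric: 'start_render', max: 2000, unit: 'ms' },
  { metric: 'image_size', max: 500 * 1024, unit: 'bytes' },
];

// `measurements` would come from your monitoring data.
function checkBudgets(measurements: Record<string, number>): boolean {
  let withinBudget = true;
  for (const budget of budgets) {
    const value = measurements[budget.metric];
    if (value !== undefined && value > budget.max) {
      console.error(`Over budget: ${budget.metric} is ${value} ${budget.unit} (budget ${budget.max})`);
      withinBudget = false;
    }
  }
  return withinBudget;
}

// Example: fail the build if any budget is blown.
if (!checkBudgets({ start_render: 2300, image_size: 420 * 1024 })) {
  throw new Error('Performance budget exceeded');
}
```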
SpeedCurve now provides a visual diff of every deploy. A full-resolution PNG is captured for each URL and every pixel is diffed against the previous deploy, allowing you to easily spot any visual changes, whether you expected them or not.
The key to practising safe continuous deployment is to have a robust set of tools that give you immediate feedback on how your code has changed between deploys and its effect on the user experience. It's often very hard to spot all the visual changes in each deploy, especially in fast-moving teams where a lot of the focus is on unit tests and other automated pass/fail systems. Visual diffs bring an increased level of tracking and confidence when you're able to compare any two deploys and see exactly what has visually changed.
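Conceptually, a pixel diff is straightforward. Here's a rough sketch of the idea using the open-source pixelmatch and pngjs libraries; this isn't SpeedCurve's actual implementation, and the file names and threshold are hypothetical.

```typescript
// Rough sketch of a pixel diff between two deploy screenshots.
// Not SpeedCurve's implementation; file names and threshold are hypothetical.
import fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

const before = PNG.sync.read(fs.readFileSync('deploy-before.png'));
const after = PNG.sync.read(fs.readFileSync('deploy-after.png'));
const { width, height } = before; // assumes both screenshots are the same size
const diff = new PNG({ width, height });

// Compare the two screenshots pixel by pixel and write out a highlighted diff image.
const changedPixels = pixelmatch(before.data, after.data, diff.data, width, height, {
  threshold: 0.1,
});

fs.writeFileSync('visual-diff.png', PNG.sync.write(diff));
console.log(`${changedPixels} of ${width * height} pixels changed`);
```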

We do a visual diff every time you click "Test Now" on the Deploy dashboard or you use the SpeedCurve API to trigger a round of deploy testing. Integrating with the Deploy API is super easy and provides a robust set of metrics and before/after screenshots, visual diffs, waterfall charts, filmstrips and videos for each deploy. You can then compare any two deploys to see exactly what's changed over time.
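As a sketch of what that integration can look like, the snippet below triggers a round of deploy testing with a single authenticated POST. The endpoint, auth scheme, and parameters follow SpeedCurve's public API docs as I understand them, but treat them as illustrative and check the current documentation; the site ID, API key, and release note are placeholders.

```typescript
// Rough sketch: trigger a round of SpeedCurve deploy testing from a build script.
// Endpoint and parameters are based on the public API docs; verify against the
// current documentation. All credentials and values here are placeholders.
const API_KEY = process.env.SPEEDCURVE_API_KEY ?? '';
const SITE_ID = process.env.SPEEDCURVE_SITE_ID ?? '';

async function triggerDeployTest(note: string): Promise<void> {
  const response = await fetch('https://api.speedcurve.com/v1/deploys', {
    method: 'POST',
    headers: {
      // The API key is used as the basic-auth username with a placeholder password.
      Authorization: 'Basic ' + Buffer.from(`${API_KEY}:x`).toString('base64'),
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: new URLSearchParams({ site_id: SITE_ID, note }),
  });
  console.log(`Deploy test triggered: HTTP ${response.status}`);
}

triggerDeployTest('Release 2.3.1').catch(console.error); // hypothetical release note
```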