Performance Hero: Harry Roberts
This month's performance hero is someone who's helped some of the biggest brands in the world speed up their sites – and who generously shares his wealth of experience with the performance community through articles, videos, and conference talks. Thank you for everything you do, Harry Roberts!
What we love about Harry is that he's both an idealist who believes in delivering great user experiences, and a pragmatist who knows how to measure the impact of site speed on businesses.
Harry and I recently had what we jokingly called a fireside chat. As a SpeedCurve power user with his clients, Harry had a lot of great feedback and insights into how he uses our tools. But our conversation was actually much broader, so I thought a lot of folks might be interested to hear Harry's thoughts about performance in general. I also thought it would be great to give him long-overdue kudos as this month's Performance Hero!
Keep scrolling to watch the video of our chat and hear why Harry had the following things to say:
- Your e-commerce site should not be built with JavaScript. It should not be a SPA.
- Sites that have gone all in on JavaScript are slower and more difficult and more expensive to make fast.
- Nearly every website on the planet that is a SPA shouldn't be.
- No one really wants the fastest website. They want the most effective website.
- The number of product images on a page has a direct relationship to conversion rate. There's a sweet spot between too few and too many. You'll have to watch the video to learn what that sweet spot is. ;)
- A/B testing is an essential tool in your performance toolkit.
- Using RUM is probably the biggest predictor of the success of a site.
- Headless commerce and headless content are great for the developer experience and terrible for the user experience.
- Don't optimize Core Web Vitals for SEO. Optimize for user experience.
- Too many people don't realize that Core Web Vitals and CrUX are completely different things.
- Core Web Vitals are only one part of the much bigger performance picture.
Links
- Harry Roberts
- performance.now() conference
- Core Web Vitals
- Chrome User Experience (CrUX) Report
- CrUX vs RUM
- Ryan Townsend's proposal for Core Web Vitals in Safari/WebKit
Transcript
CLIFF CROCKER:
Hey everybody, this is Cliff from SpeedCurve. I'm really excited today to be chatting with a good friend that we know here at SpeedCurve and love: Harry Roberts.
Harry's an independent web performance consultant with over 17 years of experience, although I kind of find that hard to believe. The guy's young and attractive and... 17 years... come on. Lots of experience working with huge brands. Google, BBC, GE, the UN I think I saw on your site. Is that correct?
HARRY ROBERTS:
Correct, yes. Yeah.
CLIFF:
Awesome. Well-known speaker, internationally recognized leader in front-end performance and much, much more. He's also the co-chair of the best performance conference in the world, performance.now(), which is held each fall in Amsterdam, which happens to be the last time that we saw each other. So it's been a while and it's good to see you again. How you doing?
HARRY:
Been a while. That was a phenomenal intro. I'm going to slice that out and I'll just give that out to conferences. That was wild. Thank you very much. The 17 years of experience is true. That's only because I was such a nerd that I registered my domain name when I was 17 years old, which was 17 years ago, and I got my first job at that time. So as a consultant, I think I've got 11 years' experience... overall in tech, wait, no, you're right, 17 years. Wow. But yeah, it's great to do this.
It's been a few months. I guess it was a sort of fall-wintery Amsterdam. We went for dinner together, and I think that's when we had the first idea for this hangout, wasn't it?
CLIFF:
Yeah, absolutely. I mean what we wanted to do is... we thought it'd be great really for everybody, but also for people that are doing consulting or people that are focused on making performance better for clients, agencies, that sort of thing. But also just everybody to hear a little bit more about you and hear about our relationship because it's a pretty unique one.
So obviously we love you and I love when you're up on stage and you're presenting things and I always feel so flattered when you're throwing up a SpeedCurve chart or something like that to illustrate something, but you don't work for SpeedCurve and that's pretty clear, right? You're a friend and you use our tools, but I like to think that's by choice.
HARRY:
It's very much by choice, and in advance of this hangout now and in preparation for this, I was trying to remember how SpeedCurve got on my radar and I simply cannot remember. I dunno if it's worth pointing out for anyone who is watching. We don't have a commercial relationship. I'm not sponsored by you. I just have used SpeedCurve for so long that I think we have just become friends... normally it's me pestering for feature requests or asking for advice.
So I don't actually know where my journey with SpeedCurve started. I can go back into talks and slide decks from many, many years and SpeedCurve dashboards show up, but I can't remember my first ever... that light bulb moment with the tool. It just seems fairly omnipresent now. It's just such a big part of my day-to-day working life that I don't really remember the first time I used it, so to speak, which is crazy to me because it still feels shiny and new and fun.
CLIFF:
Yeah, well absolutely. I think while I'm not necessarily shiny new and fun, it does feel like I'm working on a product that is, and it's been certainly a fun ride here. So can you maybe tell us a little bit about how you're using SpeedCurve with your clients or in these engagements that you're doing?
HARRY:
Happily. So I think... context for anyone who doesn't work in web performance, or who's new to it... normally what happens is I get an email from a client, a potential client, and they say our website is slow and we'd like to make it faster. And one of the first questions is: how do you know it's slow? Is it just general sentiment? Can you just tell? Sometimes you can just tell you've got a slow website, right? There may be customer complaints, and that's the serious one, if your customers are complaining that a site is slow. But if you want to make a website faster, you need to objectively know how, where, and why it's slow in the first place. Without a tool like SpeedCurve – or nowadays CrUX, which is probably the lowest-fidelity way to get there – and without the data supporting the argument of "the site is slow", you can't really make the case for "the site is now faster".
So where it fits... SpeedCurve bookends my projects. If my client doesn't have SpeedCurve or something similar installed already, the first way I use SpeedCurve is when a client gets in touch. Let's say the lead time to start that project is going to be two months, six weeks, whatever. I will try to encourage that client right now, "Go and install SpeedCurve, leave it running until I can join the project or the project starts," and then we've got at least a month of rich, real data that spans various demographics, geographic locales, device types. I've got really rich data that helps me understand whether the site is slow in the first place, where it's slow, why it's slow. That is a huge, huge part of my early work on a project.
If a client is feeling ambitious enough, especially if they've got a conversion-driven website – maybe they're a charity looking for donations, or an e-comm site literally selling things, or I've worked with streaming services that just want people more engaged: do people watch more minutes of video if the site is faster? If my client is ambitious enough and that's relevant to them, then when we install that initial SpeedCurve snippet, when we initially get SpeedCurve set up on the site, I'll also capture conversion rate data.
So not only can we work out whether the site is slow, we can then work out things like "every further 100 milliseconds of slowdown costs you X percent of revenue." I call that kind of work "sprint zero," because it can then be used to actually price the project. So we could say, well, if your site was 250 milliseconds faster, you're going to make an extra $108,000 a year. Therefore the cost of the project probably shouldn't be more than 108K; it should be a percentage of that. And that can really help the business analysis, the business case of web performance. Selfishly, for me, that is the most useful way to use SpeedCurve: being armed with the early data. It goes much, much deeper, but I don't want to give too long an answer. That's the first way I use SpeedCurve: the richer the data I have, the more effective the project can be.
Before using SpeedCurve, or without SpeedCurve, you would to an extent be shooting in the dark. You'd run some synthetic tests, you'd use WebPageTest to say, well, in this laboratory environment, your site is only this fast or slow, but that doesn't really reflect what a real user might experience. Going into a project with real user data is the biggest game changer, and it's probably the biggest predictor of a successful project. This has become a very long answer to a short question. Done, part two.
CLIFF:
It's very pertinent. I think it's absolutely something that we see, and I've seen it from my own personal experience way back in the day (so dating myself as well). When I was at Walmart Labs running the performance team there, I had to really fight for performance. I really had to make a case for it, because it's not like developers were sitting around on their hands or twiddling their thumbs doing nothing. They've always got priorities, they've always got things they're focused on. Those priorities are usually getting pushed top-down; less often they're working on technical debt, things like that.
But I think what you're talking about, what you're illustrating, is what I found I had to do, which was tie performance back to business value. Being able to say, "Hey, I know that we prioritized X, Y, Z, maybe an improved cart flow or an optimized cart and checkout or whatever it might be. However, I can actually show you and demonstrate that improving our front-end time – at the time we were focused on the 95th percentile – by X seconds is potentially going to bring real gains in revenue." So I think that's pretty fantastic.
HARRY:
Yeah, it's huge. And as well, what I find really interesting is you're absolutely right: developers aren't building slow websites on purpose. They're not doing things wrong, but there's always something that needs doing that is usually seen as more important. Performance is invisible. No one cares about security until after they've been hacked. So it's really hard to make a case for something that isn't a shiny new feature. I think intuitively, any CEO, any CTO, any head of e-commerce, anyone intuitively knows that faster is better. All things being equal, would you have the exact same site, fast version, or the exact same site, slow version? Everyone's going to pick the faster version, but to what end? Right. So I think that's one of the biggest things, and it's one of the things my clients... that sounds rude, I'll take that back. It's one of the things that I really try to encourage my clients to think about, that they haven't necessarily thought about: "What is this even worth to you?"
You could spend more hiring me than this is actually worth to you. And I don't want that for anyone because it's not good business sense, is it? Well, I mean, for me I suppose it is, but I don't want to be the guy who sells solutions to problems that don't exist. And using a tool like SpeedCurve, where you can capture that custom data – whether it's a conversion, or size of cart, or average order value – and being able to tie it back to specific customer events is really powerful. And the other thing as well is you can correlate interesting things. So yeah, we can say improved LCP increases the likelihood of purchase by 8%, but because of the amount of rich data that SpeedCurve gives me – and this is a true story – when you've got a product details page with your main product image here and all your thumbnails beneath it, because SpeedCurve captures things like images above the fold, you can correlate the sweet spot.
And we found out – I don't think we ever did any real customer research into this – that the more product thumbnails there were, the less likely someone was to purchase. And I just wonder if that's a case of a bit of overload, a bit of too much information. And we found that there's a sweet spot for how many images you have in your gallery carousel thing. Statistically – causation, correlation, you can argue many different takes on the data – but it's the silly things like that where, okay, yeah, you've hired me to look at site speed, but just out of interest, is there any correlation between the number of images above the fold and likelihood of conversion? And for one client, we found that yeah, the sweet spot was four. They would tend to sell more products if they only had four product images as opposed to seven or 11 and stuff like that, which I just find fascinating.
CLIFF:
Yeah, totally. I've heard that. One of our customers also shared with me that search results work very much the same way. Okay, so we've got search results. You would think, okay, let's just throw as many results as we can on this page, and of course that's not going to do great things for performance, even if you're lazy loading stuff below the fold.
You've got a bigger, more complex DOM, whatever. It's not necessarily the best idea, but in their mind it's more options, more chances that people are going to click through to that product details page. But they found that when they actually cut it in half, from 50 to 25, performance got better and engagement got a lot higher. However, then they cut it down even more – like, hey, okay, let's go the other direction, performance really matters here, we need to minimize this even more – and they cut it down to something like 10 or under 10 search results, which was kind of silly, and they saw engagement drop off even though performance was better. It's all about context. So finding that sweet spot, I do think, is important.
This actually made me think about something about how you go about these engagements. One of the things that we love to do is look at experimentation, or A/B testing, of performance changes. Do you find that your clients are typically up for that? Is it something where they're mature enough to actually say, "Okay, yeah, we're actually going to make this change and send it to X percent of our traffic and see what impact that has on conversion?"
HARRY:
I've got a couple of really good success stories with that. It's not as common as I would like it to be. I feel like experimentation, A/B testing, is always the smartest way to go. Making LCP faster is always going to be a good idea – faster is always better – but there will be a point of diminishing returns. So certain things don't need A/B testing, right? Inlining your CSS: we don't need an A/B test for that. It'll just be faster. That is just better. But certain things have a bit more of a trade-off, so I'll just dive straight in with the example. I worked for a large US manufacturing firm, I guess you might call them, and they were using a particularly slow font provider. I'm a bit of a typography nerd – I like nicely set type and design – but that's only worth it up to a point, if it comes at a cost.
And this particular font provider was quite slow. And, oh, they were licensing a custom font. It wasn't just Google Fonts, it wasn't free. They were licensing a font for just over a hundred thousand dollars a year, and it was just really tanking their site speed. So I just asked them: look, can we run an A/B test for just 1% of your users? And it was really easy to set this A/B test up directly in SpeedCurve, because theirs is quite a traditional stack, which is music to my ears, because it's easy to make old stacks fast. They had a PHP sort of e-comm site, and we just literally wrote a bit of PHP that said: for 1% of requests to this site, don't give them the custom font, set a cookie saying they haven't been given the custom font, and then tell SpeedCurve we've done that.
Bounce rate went from 88% down to 61%, I think roughly those are the numbers. So an over-25-percentage-point improvement in bounce rate just from removing a custom font. Now what that meant was we proved a business case for saving them over a hundred thousand dollars a year because, obviously, get rid of the font. We worked with a really great designer who actually went on to build a really, really nice website about modern font stacks – removing custom fonts spun off into its own side project – so it's win on top of win. We could prove that bounce rate was measurably better, about a 25-plus percentage point improvement, and we saved them licensing costs of over a hundred thousand dollars a year. And that one A/B test provided that data.
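For anyone curious about the mechanics of that test: Harry's actual implementation was a few lines of PHP, but the same idea looks roughly like this as a hypothetical Node/Express-style middleware (the variant names, percentage, and cookie are placeholders, not his real code). The page template then skips the custom font for the test group and reports the variant to RUM, for example via SpeedCurve's custom data API.

```typescript
import express from "express";

const app = express();

// Bucket 1% of traffic into a "no custom font" variant, remember the choice in
// a cookie so each visitor gets a consistent experience, and expose the variant
// so the template can (a) skip the font <link> and (b) report it to RUM.
app.use((req, res, next) => {
  let variant = req.headers.cookie?.match(/fontTest=([\w-]+)/)?.[1];
  if (!variant) {
    variant = Math.random() < 0.01 ? "no-custom-font" : "control";
    res.cookie("fontTest", variant, { maxAge: 30 * 24 * 60 * 60 * 1000 });
  }
  res.locals.fontTest = variant; // template reads this to decide whether to load the font
  next();
});

app.listen(3000);
```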
Another one: I've got a client with this mega Google Tag Manager instance, and that one Google Tag Manager instance contains four different containers. It's this beast, this Frankenstein's monster of a tag manager that's been touched by a hundred people over the last 10 years. And I'm convinced that's the biggest cause of their INP issues, but I can't prove it until we whip it out. But the data that is coming out of there is so critical to the business that we can't just remove it. A very, very brave soul – and I'm very grateful to them – allowed us to run an A/B test whereby 5% of users to one particular part of the site (it's a very highly trafficked site) just weren't given a tag manager at all. Now, I'm not comfortable sharing the results of that just yet; it's still technically an in-flight project. But again, we can tell SpeedCurve this session didn't have Tag Manager.
We can use local storage as well, which is nice. We don't actually need to use cookies, so it's all GDPR compliant. We can use local storage to record that, so we don't need to worry about consent and compliance, and we can just run an A/B test and tell SpeedCurve, "I want to plot all of the non-tag-manager sessions against the with-tag-manager sessions and just quantify the impact." And then we can weigh that up: is the data we're getting more valuable than the engagement we're losing? Because, okay, I'm a performance engineer. No one cares about performance more than me. But if the business could say, right, we can see that tag manager is harming INP, but the data we're getting from tag manager is worth more than what the bad INP is losing us, then just have bad INP, right?
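A minimal sketch of that client-side flagging, assuming SpeedCurve's LUX.addData custom data call; the storage key, bucket size, and variant names here are made up for illustration.

```typescript
// SpeedCurve's RUM snippet exposes a LUX global; typed loosely here for the sketch.
declare const LUX: { addData(key: string, value: string): void } | undefined;

// Read (or assign) this browser's bucket using localStorage rather than a cookie.
let variant = localStorage.getItem("gtmTest");
if (!variant) {
  variant = Math.random() < 0.05 ? "no-gtm" : "control";
  localStorage.setItem("gtmTest", variant);
}

if (variant === "control") {
  // ...inject the Google Tag Manager snippet as normal; the 5% test group skips it
}

// Tag this page view so the two groups can be plotted against each other.
if (typeof LUX !== "undefined") {
  LUX.addData("gtmTest", variant);
}
```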
CLIFF:
Yeah.
HARRY:
No one really wants the fastest website. If someone wanted the fastest website, they'd let me delete all of their CSS, delete all of their JavaScript, use Times New Roman. It's a balancing act and ultimately what we want is the most effective website. And even if that means slower but more profitable, then we just stick with slower. Ideally, we'd have both, but realistically my clients just want more effective websites.
CLIFF:
I think it comes down to revenue. I mean, most of the sites that we're working with are there for a purpose, whether it's generating revenue or, in the case of nonprofits, generating user engagement. That's always the ultimate measure, right?
HARRY:
Yeah, of course.
CLIFF:
The ultimate outcome is not better performance. It's whatever that is that you're striving towards.
HARRY:
Yeah, exactly.
CLIFF:
So you mentioned a lot of examples there, which I really appreciate. Are there any common themes or issues that you're finding across your clients? Is it, "Hey, I always go in and do this because I know it's some low-hanging fruit and I'm going to get a great win"? Or are they all unique, more case-specific? And obviously, without giving away too many of the goods – you are for hire, and we're not trying to get free consulting here – what are some of those patterns you're seeing?
HARRY:
So I'll be completely honest with you, there are only two big patterns that I tend to see.
Sites that have gone all in on JavaScript are slower and more difficult and more expensive to make fast. That's the first pattern I will see.
Any site that is an SPA is going to have two problems. One, it's going to be hard to capture decent metrics, because SPAs sidestep the web platform. It's not SpeedCurve's fault that data can't be captured. It's not Core Web Vitals' or CrUX's fault that data can't be captured. If you're going to sidestep the web platform, you don't get web platform features, and that makes analysis incredibly difficult; even the best tooling in the world at present can't fully bridge that gap. So if you're using an SPA, you're going to struggle to make that site faster. It's harder to know where it's slow.
The common pattern that evolves is the more JavaScript you have, the slower that site is going to be. And the worst thing is that sites tend towards more JavaScript over time, not less. You release a site on Next.js, and it's already got too much JavaScript. In six months' time, you can guarantee it won't have less. It's always going up. So those kinds of sites always tend towards worse performance, which means that guardrails – tracking bundle sizes, regression tests, deploy tests – are vital. That's one common pattern: as soon as I see an inquiry from a client and they've got a single page application... and don't even get me started on single page applications. If you've got a homepage, a search results page, a product page, a product listing page, that's not a single page application; I've just named four pages. Nearly every website on the planet that is an SPA shouldn't be. Photoshop in the browser, that's a single page app.
Your e-commerce site should not be built with JavaScript. It should not be a single page app. That's me ranting now. So that's the most common pattern: if you've got a single page app, if you're using a lot of JavaScript, it's going to be a lot harder to find the problems and remedy them. Not impossible, just more expensive. And usually the tragedy there is that it's a problem the team has caused for themselves. The flip side is – again, I'm reluctant to name any names, even though this is a victory – I worked with a firm here in the UK. They're a worldwide brand, but they're a very British firm, and their site was built on Laravel, a LAMP stack, which, I mean, everything like that is having a bit of a spring, a bit of a resurgence at the moment, which I'm very happy to see.
But we got this project to 99.8% passing Core Web Vitals over 335 million page views. Getting to 99.8% passing over 335 million page views, that was trivial. That was easy when building on a traditional, boring stack. I wouldn't fancy my chances doing that with a React application, Nuxt, Vue, whatever. So that's the main pattern I tend to see.
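On the guardrails Harry mentions above (tracking bundle sizes, regression tests, deploy tests), here's a hypothetical sketch of what one such CI check might look like: fail the build when a bundle grows past an agreed budget so JavaScript only ever trends down. The file names and budgets are placeholders, not anything Harry or SpeedCurve prescribes.

```typescript
import { statSync } from "node:fs";

// Placeholder bundle paths and budgets; adjust to the project's real output.
const budgets: Record<string, number> = {
  "dist/main.js": 170 * 1024,   // 170 KB
  "dist/vendor.js": 250 * 1024, // 250 KB
};

let overBudget = false;
for (const [file, limit] of Object.entries(budgets)) {
  const size = statSync(file).size;
  if (size > limit) {
    console.error(`${file} is ${size} bytes, over its ${limit}-byte budget`);
    overBudget = true;
  }
}

// Non-zero exit fails the CI job, blocking the regression from shipping.
process.exit(overBudget ? 1 : 0);
```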
Another emerging pattern I'm seeing: people are really digging the ergonomics and the developer experience of headless commerce and headless content. And again, I'm not going to name any names, but there are certain providers whose whole business is headless content. You've got your digital asset management and all of your CMS content in one provider, all of your SKUs for your web shop in another provider. And these are clients who've built single page apps on top of this headless architecture, and they're suffering horrendously with Time to First Byte issues. That one request that would traditionally go into a Magento store – it would just say, right, here's Magento, let's go and look in your database; here are the products, here's how many they've got in stock, here's the price – and send that back. One HTTP request, one there-and-back life cycle.
Now what happens is a request comes in for a server-rendered page, or SSR, and all of a sudden you've got an API endpoint that lives in a totally different place. It might even live on a different continent, depending on who your provider is. So that one request that comes in for a page, which traditionally would've just been one request and one response, now fans out: right, we need to go over here to get the CMS content for this product; we need to go to this provider to get the SKU information, the stock and price information, all of that kind of stuff. So now I see these really trendy, modern stacks that are like, yeah, it's convenient, yeah, it's nice, it kind of feels a bit edgy to be using all these new tools to build what is really a solved problem, and I'm just seeing Time to First Byte skyrocket. I've got two clients with very, very similar stacks, very similar objectives, suffering the exact same fate. So those are the two most emergent patterns I've seen in the last couple of years, maybe a bit longer.
CLIFF:
It's kind of crazy. I'm seeing it a lot too. Literally, I was just looking at this before our call with a customer whose Time to First Byte just jumped up horrendously, and it wasn't due to server-side rendering, it wasn't due to any real architectural changes that they made, and we're still trying to get to the bottom of it. They're going to start capturing Server-Timing so they can get a little bit more detail into, "Hey, is this the CDN? Are we seeing requests that are going back to origin?" But then there's also this emergence of looking more at some of those subparts, as I'd call them for want of a better term, of what's inside Time to First Byte: the redirect time, server time, connection time, the TLS handshake, all that stuff. So it still feels like... it's funny, we always talked about 90% of the time being spent on the front end, and I think that's absolutely true, I wouldn't go back on that, but we can't ignore this other part that is so critical to start render and FCP and LCP, all those things that are more of a user experience metric.
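As a quick aside on the Server-Timing idea Cliff mentions: Server-Timing is just a response header the origin (or CDN) can emit, and browsers expose it to RUM tools through the Navigation and Resource Timing APIs. A minimal sketch, assuming a Node/Express origin and made-up metric names:

```typescript
import express from "express";

const app = express();

app.get("/product/:id", async (req, res) => {
  const start = performance.now();
  const product = await fetchProduct(req.params.id); // placeholder data access
  const dbMs = performance.now() - start;

  // Server-Timing entries show up in the browser's PerformanceResourceTiming
  // serverTiming array, so RUM can see where back-end time actually went.
  res.setHeader(
    "Server-Timing",
    `db;dur=${dbMs.toFixed(1)};desc="product lookup", origin;desc="cache miss"`
  );
  res.json(product);
});

// Stand-in for whatever the real data layer looks like.
async function fetchProduct(id: string) {
  return { id, name: "example" };
}

app.listen(3000);
```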
HARRY:
Right. So this is actually exactly what you just said. I had a major breakthrough on a client project with SpeedCurve, and this is going to seem like a setup, but if you want, I can show you it now. If we just take a pause while I go and get a dashboard up, I'll show you the bit you mentioned about subparts of Time to First Byte. SpeedCurve got me to the bottom of a really complex problem recently. Let me just go and grab some dashboards and I'll show you what I mean, because that kind of information is vital.
CLIFF:
Okay, so I swear I'm not throwing Harry a softball here, but I think he's got a good use case from a client that illustrates getting behind Time to First Byte a little bit.
HARRY:
Yeah, yeah. So exactly as you were just saying: the subparts of Time to First Byte. The historical belief – and it is true – is that most slowdowns happen on the front end, but if you've got a bad Time to First Byte, you're really going to struggle to hit your front-end targets. So, as most people watching are probably already aware, Google make this incredible amount of data freely available, and to have this for free is truly remarkable. It's not without its downsides: this is data specific only to Chrome, and that's Chrome not on iOS. But having this for free, I think, is a remarkable sea change for the industry. What you'll see is that this client suffered Time to First Byte regressions last year, but this data is very opaque. What's going on there? Is it increased server load? Is it API calls? Is it database reads?
What is going on here? This data – and remember, it is free, so we can't criticize it – is very opaque. So I told this client, hey look, we can get to the bottom of this and many other things. This client is actually a huge, huge SpeedCurve fan now; they adore SpeedCurve. We installed SpeedCurve and I got some great insights, because what we can see – and this covers, I think, the last month of data – is that SpeedCurve allows us to break down the subparts of what everyone else just calls Time to First Byte, right? What you'll see is that DNS, your initial connection, and your TLS are basically effectively zero now; they're so quick they don't even really register. Other network: again, zero. Server response: 800 milliseconds. But the real concern for me is this 600-millisecond redirect.
So immediately – not problem solved as such, but it's an immediate "right, well, there's your problem." Redirects: this site is suffering with redirects. One thing I really adore about SpeedCurve – and you do need to be a little bit of a graph nerd, or know a little bit about interrogating data, because I think SpeedCurve is the performance engineer's tool. I think it's very, very well suited to the power user. It's a bit like a Swiss Army knife: it's got loads of different tools on it, you just need to know when to use which one. I was fascinated by this data. Obviously it seems that about 600 milliseconds of our delay are attributable to redirects. That seems quite extreme, and I didn't really notice many redirects as I used the application. So one thing, and this is the simplest way to look at it, is just the number of incidences of these things.
So if we look at server response on the 4th of March at 1:00 PM, we've got 32,000, nearly 33,000, page views that logged a timing for server response. The same day, the same snapshot, we logged just 561 redirects. So what we can see here is that redirects, though very long and very slow and very problematic, are in the vast minority of cases. So looking at this data, I can go deeper rather than just saying we've got a problem with redirects. What I can say is, "We appear to have a very, very localized problem with redirects, and wherever these are happening, it's in the minority, but they are very extreme."
So this is just the time series graph, and I guess for most people using SpeedCurve, this will be the most common type of graph they interface with. But I'm really fond of being able to plot histograms. So what I did is I just created a quick chart where I plot the number of incidences of backend time (overall Time to First Byte), the number of redirects, and the number of server responses, again over the same one-month period. And what we'll see is that if we actually try to look at redirects – I dunno if you can even see that, if you look around this sort of 0.2 and 0.4 on the X axis – the number of redirects we actually incur on this site is minuscule. And what this allowed me to do is pinpoint the fact that, yes, redirects do take a lot longer than I would like, but they're not really the problem affecting this site at scale. Most of the time we're not incurring redirects; when we do, they're very, very expensive, right? 600 milliseconds. But it allowed me to not go down the wrong path. It allowed me to not waste time assuming that redirects were happening everywhere.
And being able to just manipulate raw data like this in many different chart types is, to me, phenomenal, because yes, you have to know what you're doing, but even if you've only got a slight intuition for graphs or looking at data, you can save so much time. So if you look at the process: first, we've got backend time, which is very opaque, and we don't really know why it's slow. Then we look at this, and you might draw the conclusion, "Oh, backend time is slow because we've got redirects." Then we look at this and we say, "We have slow redirects, but that's not why we have slow backend times." And it turns out, for this particular client, this particular project, this particular site, the time really is mostly spent on server responses – expensive API calls and things like that – rather than this almost misleading red herring of redirects.
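For readers who want to see where these subparts come from, they're derived from the browser's standard Navigation Timing API. A small in-browser sketch of the same breakdown (SpeedCurve's exact subpart definitions may differ slightly):

```typescript
// Grab the navigation entry for the current page load.
const nav = performance.getEntriesByType("navigation")[0] as PerformanceNavigationTiming;

// Rough TTFB subparts: redirect, DNS, connect, TLS, and server response.
const subparts = {
  redirect: nav.redirectEnd - nav.redirectStart,
  dns: nav.domainLookupEnd - nav.domainLookupStart,
  connect: nav.connectEnd - nav.connectStart,
  tls: nav.secureConnectionStart > 0 ? nav.connectEnd - nav.secureConnectionStart : 0,
  serverResponse: nav.responseStart - nav.requestStart,
  timeToFirstByte: nav.responseStart - nav.startTime,
};

console.table(subparts);
```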
This for me was a bit of a rollercoaster journey. I dunno if she ever told you, but when Tammy [Everts] and I were organizing one of the PerfNow events, we were on a call about the conference and I asked her to stay on for a bit afterwards just so I could tell her about this. I just felt so proud of myself for digging through this data and getting to the bottom of it. You can't do that with tools that don't let you manipulate your own data. For me that was really fun. Don't ever invite me to a dinner party; that's my idea of fun.
I thought this was a really cool use case for the tool. Just really enlightening. So the subparts thing you're talking about – just stepping through each of these three tabs and seeing the progression of digging deeper – I found that really, really valuable. And yeah, it's funny you should mention subparts of Time to First Byte, because that was one of the biggest, and most fun, breakthroughs I've had recently with SpeedCurve.
CLIFF:
Nice.
HARRY:
I'm going to stop sharing now.
CLIFF:
Yeah. Awesome. Thanks for sharing that. I think it's always fun to see people using the tool and how they're using it; there are a lot of similarities to what we run into. So yeah, I think that's going to be helpful for a lot of people. We haven't really gone here yet, but you've mentioned a couple of metrics that I can use to jump to this topic, which is Core Web Vitals, on top of having a great data set in CrUX. I do think Core Web Vitals have been a pretty great move for our industry, certainly in terms of visibility and getting more of a conversation going around performance. But do you find that your customers, the people that you're working with – do you feel like they're focusing too much on Core Web Vitals? Or maybe not focusing on them enough? Where's that line, and what are your thoughts on that?
HARRY:
Not conflicting views... different views. It's a difficult one. I'm with you: I think Core Web Vitals are great as metrics, but I think there are two sides to Core Web Vitals. The move to standardize performance metrics is really valuable, and genuinely, LCP, INP, CLS – I think they're great metrics, and I like how well researched they are, how well defended they are. I'm a fan of Core Web Vitals as metrics. But what we have to remember is there's a different side to Core Web Vitals, and that different side is "you've got to pass these to get up high on Google." That's the cult of Core Web Vitals. And the problem there – well, there are multiple problems. One is: are you prioritizing performance for the wrong reasons? Given that it's now an incentive, there's a slew of people gaming the system. I don't care if they're doing that; it doesn't really affect me, I don't work with those kinds of people. But the ethicist inside of me – there is one somewhere that cares – just thinks they're doing it for the wrong reasons.
But then the other problem, of course, is that even if you care for the right reasons and you think Core Web Vitals are great – and we know that better LCP leads to better user experience, better customer satisfaction – it's Chrome-only for now. That's not Chrome's fault. I mean, we've got the Interop thing – shout out to Ryan Townsend, a good friend of ours, who actually put forward Vitals for Interop 2025. It was a very simple but very important thing he did to get it on the radar, and Safari are now considering implementing it.
But if you're going to go all in on Core Web Vitals – and again, it's not your fault as a site owner or as a business – if you are over-indexing on Core Web Vitals, you're leaving a lot of opportunity on the table for your entire iOS audience. For a start, I've worked with clients in the past – one very high-end fashion retailer – who were over 80% iOS traffic. Optimizing Core Web Vitals for them was not a wasted endeavor, but not a measurable one. I've got a current client who are 51% iOS traffic, and their entire project is based on improving Core Web Vitals. Here's an interesting bit of research you could do. You could take all the data – you've got your anonymous customer data in SpeedCurve – and run the numbers. You know what would be really interesting: to see whether Core Web Vitals data gathered from Chrome visits in any way predicts faster iOS experiences.
Obviously, because there is no LCP in Safari, you can't do it directly, but could you proxy it? FCP is in both Safari and Chrome. So what I'd love to see – and I'm sorry, I'm giving you a job now – is, given the data SpeedCurve has, anonymizing that and saying, well, do you know what? If you do improve Core Web Vitals, it looks like that will have a positive impact on Safari or iOS by this percentage. Do you know what I'm trying to say? Could we still use Core Web Vitals as a predictor for better experiences in Safari? My instinct is of course, yes, but we don't know. So yeah: Core Web Vitals – I'm a fan of the metrics, I'm a fan of the rigorous definitions of the metrics. I'm less of a fan of the cult of Core Web Vitals, optimizing Core Web Vitals for the sake of it rather than understanding the business impact.
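Purely to illustrate the kind of analysis Harry is proposing – not something either of us has run – here's a sketch that correlates per-URL Chrome LCP medians with Safari FCP medians, FCP being the one metric both browsers report. The helper function and the numbers are made up.

```typescript
// Simple Pearson correlation between two equal-length series.
function pearson(xs: number[], ys: number[]): number {
  const mean = (a: number[]) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < xs.length; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// Hypothetical per-URL medians exported from RUM, aligned by URL.
const chromeLcpMedianMs = [2400, 1800, 3100, 2200, 2700];
const safariFcpMedianMs = [1900, 1500, 2600, 1700, 2100];

console.log(`correlation: ${pearson(chromeLcpMedianMs, safariFcpMedianMs).toFixed(2)}`);
```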
And of course Google haven't helped, they haven't actually published any expected increase in ranking. If they said, "Oh yeah, it's about 5% of your ranking," that would give people a lot more confidence in whether they should embark on it or not. So everyone's just been shooting in the dark and I'm starting to see what I think I would call the post Core Web Vitals world. I'm getting inquiries now where people might be using Core Web Vitals as the metrics to optimize, but they're not doing it for SEO. They're doing it because they know that INP will affect engagement and LCP might affect conversions.
CLIFF:
Right.
HARRY:
So it's a bit of a double-edged sword. I think the cult of Core Web Vitals is sort of in the rear-view mirror now. The whole hype train around search rankings. Any studies are just anecdotal, no one knows for definite if it's worth doing. So I feel like people are leaving that behind. I'm seeing in my own inquiries in the work I'm doing, people are less bothered about SEO and they're going back to olden days... the kind of web performance you and I came up on.
CLIFF:
Sure, sure. Well, you heard it here first: the title for your next conference talk, "The Cult of Core Web Vitals." I love it. I agree with you. And again, it's kind of hard, because on one hand I really hold the team up. Annie Sullivan and her team have just done such a fantastic, phenomenal job.
HARRY:
Incredible work.
CLIFF:
Like moving the needle in a very big way. And honestly, I can tell you, just by talking with them and meeting with her team, they're doing it because they love performance and they really want to make the user experience better. And being able to maybe leverage some of the impact on search ranking or whatever to move the dial – kudos to them. That's awesome. However, they have no control over what WebKit does. They have no control over Apple, essentially. And that is just such a huge gap. We do hope to see at least LCP, and potentially Event Timing and INP, land this year in WebKit, but still no CLS. But then that begs an even bigger question: this data that people are starting from and going off of is the Chrome User Experience Report, is CrUX data. I would put money on it that there is no way in hell Chrome is going to say, yeah, let's include Safari data.
HARRY:
Well, I mean, you can't. Because Edge has Core Web Vitals as a first-class citizen – it's Chromium – but you can't get Edge data into CrUX. And again, that's to be expected, because it's CrUX, the Chrome User Experience Report. Which begs the question: would Apple have to call theirs SUX? Safari, right? Sorry, that wasn't very good.
CLIFF:
That would work. That would work. I don't know, that might be a bit of a branding issue. I think the folks from RUM Vision have coined that term already, so I'll give props to them for it.
HARRY:
Really? Yeah. But you're absolutely right. So even if Safari do start supporting the metrics and surfacing that data... because of course – and this confuses a lot of my clients, understandably – Core Web Vitals and CrUX are different things. Core Web Vitals are the metrics, which could be available everywhere. CrUX is just the data captured from Chrome visits that covers those metrics. Even just that subtle distinction, that CrUX and Core Web Vitals are different things – I think a lot of people don't understand the impact that has. And a lot of my clients, a lot of developers, don't even realize that Chrome on iOS, certainly at the time of recording, is just Safari. It's not Chrome at all. Yeah, it syncs your bookmarks across different devices; that's about it. It's not Chrome at all, and it's a huge blind spot.
Not to big you up too much on this fireside chat, but one of the biggest draws for my clients is that if we put SpeedCurve on their site, they get full visibility of their Safari and iOS traffic – and their three Firefox users. So that plugs a huge gap. And my biggest current client, when we looked at their Google Analytics data, saw they were about 51% iOS traffic. I had to tell them, well, you do understand that all of the Core Web Vitals data we've got, and all the CrUX data we're benchmarking ourselves against, only takes into account 49% of our users. I was in with the senior leadership, and that sent a bit of a ripple through the room, because people hadn't realized. And that made SpeedCurve an even easier sell, because I was like, oh, we can remedy that easily, but we can't do it with CrUX.
CLIFF:
Right. Makes sense. Makes sense. So I know that we're kind of coming up on the end of our time here, and we've covered a lot. Certainly I could go on; we should just do this monthly or something. But without trying to sound too pitchy, without trying to sound too much like a paid shill myself: you use a lot of tools, and I'll be the first to say that there are some fantastic tools out there. I've never been more proud to be in a group of competitors that have such a high caliber of focus around performance. So I don't want to knock those tools at all, because credit where credit's due. And I've worked at some of those companies before and worked on those products, so I'm not going to be a hypocrite. But what would you say is different about SpeedCurve? Why would you use SpeedCurve? You've given some examples already, so I apologize for making you restate them, but from your view, what's the real differentiator?
HARRY:
Well, let's take one step back and agree with what you just said: there are some great alternatives to SpeedCurve out there. I've got a personal account with Request Metrics to check out what they're getting up to, and it's really good. I was working on an e-comm site recently and RUM Vision was already installed on there. If you've got a project that is a Core Web Vitals project, those tools are very, very honed in on that; they're very much predisposed to, "If you're fixing Core Web Vitals, here's your Core Web Vitals data."
One thing I really, really do love about SpeedCurve... and actually other competitors, other tools, exist beyond the ones I've just mentioned. One thing I find with a lot of other companies that are a bit older is that their RUM solution seems like a bit of an afterthought in a lot of cases. I've used tools – and I'm not going to name any names, because this is critical, it is a bit of a diss – from companies that traditionally sell APM, backend application performance monitoring tools, and I tend to find that their RUM tools are not a core business unit and therefore don't get the same resources, the same look-in. I've seen companies changing that; I've seen companies improving their RUM offering.
So two things I like about SpeedCurve. One is that you've been around long enough that you aren't just a Core Web Vitals parrot. You were designed before Core Web Vitals even existed, which means there's a key focus on a holistic look at user experience. You've got the User Happiness Score, you've got the ability to track Safari, Firefox, other browsers. It means that if you aren't doing a pure Core Web Vitals project – and a lot of mine aren't; a lot of mine are revenue-based and conversion-based – you've got a very, very mature and solid baseline for doing KPI-oriented performance improvements rather than just keeping Google happy. And the other thing is you weren't spun off from an APM product. SpeedCurve was built from the outset to be... I mean, I've never even asked Mark [Zeman] what his motive was, but I imagine Mark was just like, "We need to build the RUM tool that's missing," and built it from scratch, built it to be a first-class citizen. It's not an add-on to your APM; it's a whole product in its own right, and I think that's strong and it's stood the test of time.
I adore Treo. Treo is just as big a part of my toolkit as SpeedCurve is. They solve incredibly different problems for me; they are entirely complementary. There's no overlap other than sheer quality of product. So, SpeedCurve... I think the three main tools that I could not live without are SpeedCurve, Treo, and WebPageTest. They're all very complementary. I've worked on, like I say, pure Core Web Vitals projects that had RUM Vision installed already, and do you know what? If you want to know everything about your Core Web Vitals, that's a really good place to start.
The biggest problem with a lot of other tools is you don't really get to manipulate that data very well. So the example I just showed with the Time to First Byte subparts: if you're using a tool that is very Core Web Vitals orientated, until Chrome surfaces those subparts to us, they'll always be opaque, because you're not using proper RUM, native in-browser stuff, just CrUX and the like... sorry, I'm going to reword that, because RUM Vision does use proper RUM. They're not using CrUX.
CLIFF:
Yeah, yeah, absolutely.
HARRY:
What I meant to say there was: because you can manipulate your own data very intricately in SpeedCurve, you can build up a far more forensic picture of what's going on. And that's probably why it's my favorite. Like I said before, it's the performance engineer's tool. It's the one I like because it's very forensic; it allows me to get really deep in the weeds, but that's because I've been doing this for, as you say, 17 years. In the weeds is where I belong. So yeah, for me it's just a really, really useful tool for getting sleeves up and hands dirty.
CLIFF:
Yeah. Yeah, I feel like that's where I was always drawn to SpeedCurve as well, as a practitioner and having worked at a competitor prior to SpeedCurve. And I think Mark's initial take on it was also just about how he wanted to present the data. He's a designer by trade, and when Steve [Souders] joined him, it was sort of the design-and-performance element between the two of them: "How do we display this data in a meaningful way that makes sense?"
But further to that, you're right, they were filling a hole – or rather, building a RUM product that hadn't been built yet. And I would say for our peers in this group as well, I would include them in this: the observability focus that everyone has, and for good reason – there's APM, and now it's observability and monitoring all these different services and machines – essentially, I think, missed the mark. Again, where's the user experience in all that? Where's the front-end observability? Because I do believe, and I will say, that some of those larger observability providers have kind of created RUM in that vein, a bit more as a checkbox, where you're not necessarily getting that deep understanding and the ability to share it with people who aren't DevOps or front-end engineers – the people who are going to be key to making business decisions and funding performance initiatives.
HARRY:
Yeah. Well, that leads me on to an interesting point, actually, because one thing I've neglected to mention is that I've been talking about SpeedCurve through my eyes as a performance engineer. But the reason this is a great story arc is because, remember, I told you SpeedCurve bookends my projects: I'll start by installing it for my benefit, so I know what's going on.
Once a project reaches sufficient maturity, I need to hand that over to a client. I need to know that the business understands how to use SpeedCurve. One thing I love about SpeedCurve is just how cheap and free and easy it is to spin up dashboards on a whim. I also really respect that you don't do pricing based on the number of seats because that really helps democratize performance.
Any new client project, whenever they install SpeedCurve and I get my login details for their SpeedCurve account, I'll create a private dashboard that I just call "Scratchpad" or "Playground". That's where I'll just throw things at the wall to see what sticks. No one else needs to see this; this is the messy inner workings of my brain, but it's so cheap and easy for me to do. LCP went up, right? What metrics correlate with that? I'll throw all those at the wall. It'll be messy, like having a messy desk, but it's mine and no one ever needs to see it. And then, when I've distilled an idea, I can go to the shared dashboards – we'll have an overview dashboard – and tidy up my thoughts there. And generally what I tend to do with clients is say, "Look, SpeedCurve is a fantastic tool. You can't really break anything, so go and play around. But for day-to-day observability and day-to-day practical use, go and look at your live dashboard. Go and look at your custom dashboard that we've made."
I might make a dashboard for marketing, one for finance, one for DevOps, or whoever it might be. And that's one thing – I'm a user of SpeedCurve, but the customers, my clients who end up signing up to SpeedCurve, get a very different thing out of it, which is just distilled, useful information, never information overload. It's like, "Look, go create your own dashboards privately, play around, just see what sticks. It's fun, it really is fun. But ultimately we've got one or two places that you're going to go to, and that's your pulse check. Is the site humming nicely? Is it just ticking along? Have we had a regression?"
So yeah, that brings me back. That's the second half of how I use SpeedCurve: setting up those regression checks, making sure clients are comfortable, not overfaced by it. And I love that it can do both. It's a power tool for power users, but equally you can just put it in TV mode, stick it on a screen in the office, and everyone just knows what they're looking at. It's incredible.
CLIFF:
That's awesome. Well, this has been long, so I know I'm going to have to do a lot of post editing here. But again, could go on and on. Thank you so much, Harry. You're a great friend. You're a great champion in our industry, true performance guru that we've all looked up to for a while and we absolutely love working with you. I'm going to include some links to how you can get in touch with Harry. Harry is for hire and obviously does an amazing job. If you work with him, your site will get faster. So definitely want to promote him and hold him up as a strong partner for SpeedCurve. So thanks again, Harry. Really appreciate the time.
HARRY:
That's incredibly kind of you. Yeah, no, it's just nice to hang out, man. We always have a nice time when we hang out, so I've really enjoyed this. And yeah, I dunno, thanks for your support. People don't see the back channel, but at times I'll ping you or Andy [Davies]: "Can we add this feature? Can we do this?" And the fact that you've stayed such a driven and focused and small team just means... I dunno, it's just really nice. It's nice working with you. I appreciate it. Getting emotional now. But no, really, really enjoyed this. It's been so much fun.
CLIFF:
All right. Well, hopefully I don't have to wait until November... or I guess October. It is October. Well, we'll see you at PerfNow in the fall.
HARRY:
See you in Amsterdam. But yeah, until then, we'll catch up soon.
CLIFF:
All right. See you, buddy.
HARRY:
Take care, dude. Thank you.