Chris made a few notes about Core Web Vitals the other day, explaining why measuring these performance metrics is so gosh darn important:
I still think the Google-devised Core Web Vitals are smart. When I first got into caring about performance, it was all: reduce requests! cache things! Make stuff smaller! And while those are all very related to web performance, they are abstractly related. Actual web performance to users are things like how long did I have to wait to see the content on the page? How long until I can actually interact with the page, like type in a form or click a link? Did things obnoxiously jump around while I was trying to do something? That’s why Core Web Vitals are smart: they measure those things.
There are certainly a lot of tools out there that help you measure these extremely important metrics. Chris’ post was timely because where I work at Sentry¹, we’re launching our own take on this. My favorite once-a-year blogger and mentor at Sentry, Dora Chan, explained why using real user data is important when it comes to measuring Web Vitals and how our own product is approaching it:
Google Web Vitals can be viewed using the Google Chrome Lighthouse extension. You may be asking yourself, “Why would I need Sentry to see what these are if I already have Google?” Good question. Google provides synthetic lab data, where you can directly control your environment, device, and network speed. This is good and all, but it only paints part of the picture. Paint. Get it?
Sentry gathers aggregate field data on what your users actually experience when they engage with your application. This includes context on variable network speed, browser, device, region, and so forth. With Web Vitals, we can help you understand what’s happening in the wild, how frequently it’s happening, why, and what else is connected to the behavior.
Sentry breaks down all these transactions into the most important metrics so you can see how your customers are experiencing performance problems. Perhaps 42% of the time a transaction has an input delay of more than 301ms. Sentry would show that problem and how it correlates with other performance issues.
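If the lab-versus-field distinction feels abstract, it helps to remember that field data is just measurements taken in real users’ browsers and shipped somewhere for aggregation. Here’s a rough sketch of that idea using Google’s open-source web-vitals package rather than Sentry’s SDK (the /vitals endpoint is a made-up placeholder, and the exact function names vary between versions of the library):

```
// A minimal field-data collection sketch using Google's open-source `web-vitals`
// package. This is NOT Sentry's implementation; it only illustrates measuring
// vitals in real users' browsers and sending them off for aggregation.
// Note: older versions of the library exposed getCLS/getFID/getLCP instead.
import { onCLS, onINP, onLCP } from "web-vitals";
import type { Metric } from "web-vitals";

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // e.g. "LCP", "CLS", "INP"
    value: metric.value, // milliseconds (or a unitless score for CLS)
    id: metric.id,       // unique per page load, handy for deduplication
  });

  // "/vitals" is a hypothetical collection endpoint. sendBeacon survives
  // page unloads; fall back to fetch with keepalive when it's unavailable.
  (navigator.sendBeacon && navigator.sendBeacon("/vitals", body)) ||
    fetch("/vitals", { body, method: "POST", keepalive: true });
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```

Aggregate enough of those beacons and you can start answering questions like the 42%-over-301ms one above, sliced by browser, device, or region.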
I think this is the power of tying Core Web Vitals to user data — or what Dora calls “field data” — because some of our users are experiencing a fast app! They have great wifi! All the wifis! That’s great and all, but there are still users on the other side who put up with a more miserable experience, and having a visual based on actual user data lets us see which specific actions are slowing things down. This information is what gives us the confidence to hop into the codebase and fix the problem, but it also helps us prioritize these problems in the first place. That’s something we don’t really talk about when it comes to performance.
What’s the most important performance problem with your app right now? This is a trickier question than we might like to admit. Perhaps a First Paint of five seconds isn’t a dealbreaker on the settings page of your app, but three seconds on the checkout page is unbearable for the business and customers alike.
So, yeah, performance problems are weird like that; the same metric value can mean different things depending on the context. And some metrics are more important than others in that context.
That’s really why I’m so excited about all these tools. Viewing how users are experiencing an app in real time and then visualizing how the metrics change over time — that’s just magic. Lighthouse scores are fine and dandy, and, don’t get me wrong, they are very useful. They’re just not an accurate measure of how your users actually experience your app.
This is where another Sentry feature comes into play. After you’ve signed up and configured everything, head to the Performance section and you’ll see which transactions are getting better over time and which have regressed, or gotten slower.
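For context, getting data into that Performance view means having the browser SDK send performance transactions in the first place. Here’s a minimal setup sketch (package names and options differ between SDK versions, and the DSN below is a placeholder, not a real key):

```
// A rough setup sketch for the Sentry browser SDK with performance tracing.
// Package and option names differ between SDK versions; the DSN is a placeholder.
import * as Sentry from "@sentry/browser";
import { Integrations } from "@sentry/tracing";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  integrations: [new Integrations.BrowserTracing()],
  // Sample a fraction of page loads and navigations as performance transactions.
  tracesSampleRate: 0.2,
});
```

Once those transactions are flowing in, the trends described above have something to chart.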
Tony Xiao is an engineer at Sentry and he wrote about how he used this feature to investigate a front-end problem. That’s right: we use Sentry to measure our Sentry work (whoa, inception). By looking at the Most Regressed Transactions report, Tony was able to dig into the specific transaction that triggered a negative result and identify the problem right then and there. Here’s how he described it:
To a fault, code is loyal to its author. It’s why communicating with your code is critical. And it’s why trends in performance monitoring are so valuable: they not only help you understand the ups and downs, but they can help point you in the right direction.
Anyway, I’m not really trying to sell you on Sentry here. I’m more interested in how the field of front-end development is changing, and I think it’s super exciting that all of these tools are coming together at this moment in time. It feels like our understanding of performance problems is getting better — the language, the tools, and the techniques are all evolving, and a tide is turning in our industry.
And that’s something to celebrate.
¹ Yes, I do work at Sentry, but this is not a sponsored post. The topic of Core Web Vitals just so happens to be something I’m paying attention to, and I had some follow-up thoughts after reading the post by Chris.