
Understanding Core Web Vitals


As developers, we are responsible for the experience of users on our websites, as well as for search engine optimization (SEO). This topic has been talked and written about time and again, because it is not as straightforward as one might believe.
There is no single method or step-by-step plan that can guarantee a website's performance. The only thing we can do is make mistakes and learn from them, while leaning on the tools provided to measure our success.
The introduction of the Web Vitals by Google in 2020 offered a way of measuring website performance with quantifiable benchmarks.
This article summarizes the “Core Web Vitals”, a subset of the Web Vitals, and ways to improve those metrics. In a second article, I will move away from the theory and into a case study: what we did at Antistatique to solve some of the recurring issues we have regarding performance.

Core Web Vitals: what are they?

The user experience of a website depends on many factors: the user's internet connection, their location, the device used to browse, and more. This is why it is so difficult to measure how well a website performs.
However, thanks to common metrics, the Web Vitals, and tools such as Lighthouse or PageSpeed Insights, we end up with performance scores that help us improve what we do.
Each of the Core Web Vitals represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome.

Philip Walton - Web Vitals (web.dev/vitals)

Three main aspects of the user experience are targeted by the Core Web Vitals: loading, interactivity and visual stability.

Loading: Largest Contentful Paint

According to the response-time limits theory, and as quite a few studies have shown, the longer a user has to wait for a page to load, the more likely they are to abandon the website. Up to about 1 second, the UX is overall not affected, while around 10 seconds, the user's attention already starts to drift away from the page currently loading.

Obviously, this is the last thing anyone wants! 😅
The Largest Contentful Paint (LCP) metric captures this by measuring when the largest element in the viewport finishes rendering. A few things can be done to improve this metric:

Lazy loading

Lazy loading means that the specified resources are only loaded when they are required, while the rest of the page loads as usual, in one go. This reduces the initial payload.
One issue that can arise is huge JS or CSS files, as loading them up front can prove costly. A possible solution is to split them into multiple smaller files. We can then decide which are the critical assets that need to be loaded or preloaded right away, and which can wait and be lazy-loaded.
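For images and iframes, browsers also support native lazy loading with a single attribute. A minimal sketch (the file names and URL are placeholders):

```html
<!-- Loaded only when it approaches the viewport -->
<img src="gallery-photo.jpg" alt="Gallery photo" loading="lazy" width="800" height="600">

<!-- Iframes support the same attribute -->
<iframe src="https://example.com/embed" loading="lazy" title="Embedded content"></iframe>
```

Note that the image used for the LCP itself should not be lazy-loaded, since that would delay the very render the metric measures.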

Preload the critical resources

This is linked to the previous point. Sometimes we have resources which need to be loaded as soon as possible. For those cases, we can preload them simply by adding a <link rel="preload"> in the head tag.

Another interesting value for the rel attribute of the <link> tag is preconnect. It tells the browser that you will need resources from another domain as soon as possible, so it can set up the connection early. The same can be done for the DNS lookup alone by using rel="dns-prefetch" instead.
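Put together, these hints might look like this in the document head (the font path and domains are placeholders):

```html
<head>
  <!-- Fetch a critical font early; "as" tells the browser what kind of resource it is -->
  <link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>

  <!-- Open the connection (DNS + TCP + TLS) to a third-party origin ahead of time -->
  <link rel="preconnect" href="https://cdn.example.com">

  <!-- Cheaper fallback: resolve only the DNS lookup -->
  <link rel="dns-prefetch" href="https://analytics.example.com">
</head>
```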

Image optimization

Websites today also tend to include more and more images. If not taken care of, they can be the cause of long loading times. Recent developments in HTML help prevent this issue, along with new image formats that are gaining browser support.
The WebP and AVIF formats provide better compression - meaning a lower file size - while keeping the quality of the image. But since they are not yet supported by every browser, we can serve them as alternatives to the JPEG format thanks to the <source> tag.
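A sketch of such a fallback chain, using the <picture> element (file names are placeholders):

```html
<picture>
  <!-- The browser picks the first format it supports, top to bottom -->
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <!-- JPEG fallback for browsers that support neither -->
  <img src="hero.jpg" alt="Hero image" width="1200" height="600">
</picture>
```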

The subject of how to optimize images is quite complex. Here are a few resources I found useful to better understand what can be done: https://observablehq.com/@eeeps/w-descriptors-and-sizes-under-the-hood and https://jakearchibald.com/2015/anatomy-of-responsive-images/#varying-size-and-density

And obviously, the image size is critical as well! With the srcset and sizes attributes of the <img> and <source> tags, we can provide an image in several sizes and let the browser decide which one to load, according to the viewport size, the device's pixel density and other criteria.
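For instance, a sketch with three candidate widths (file names and breakpoints are placeholders):

```html
<img
  src="photo-800.jpg"
  srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="Responsive photo"
  width="800" height="600">
```

Here the w descriptors tell the browser each file's intrinsic width, and sizes tells it how wide the image will be displayed, so it can pick the smallest sufficient candidate.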

Minifying / Caching

This goes without saying, but assets should be minified in order to reduce bundle sizes. The caching strategy most often depends on the stack.
A CDN (Content Delivery Network) is also a good option to improve performance. When a user requests a website, instead of fetching all the resources and assets directly from the origin server - which might be on the opposite side of the world - they are served from an "edge" server located closer to the user, and therefore arrive much faster.

Efficient loading of third-party JavaScript

Third-party JavaScript covers all scripts served from a domain other than that of the page being viewed. They often don't need to be available as soon as the page loads. The script attribute defer, as opposed to async, allows us to - well - defer their execution until the document has been parsed, ensuring better performance.
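The difference between the two attributes, sketched out (the script URLs are placeholders):

```html
<!-- defer: downloads in parallel, executes only after the HTML is fully
     parsed, in document order -->
<script src="https://third-party.example.com/widget.js" defer></script>

<!-- async: downloads in parallel but executes as soon as it arrives,
     potentially interrupting the parser mid-page -->
<script src="https://third-party.example.com/analytics.js" async></script>
```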

Interactivity: First Input Delay (FID)

FID measures the time from when a user first interacts with a page (i.e. when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.

Philip Walton - First Input Delay (web.dev/fid)

The delay in the interaction is often caused by the browser's main thread being busy - for example, parsing and executing a JS file. While doing so, it cannot respond to the user's interaction.
Ways to improve this metric are:

Better handling of third party JavaScript

Just as for the previous metric, LCP, it is better to defer non-essential third-party JavaScript.

Reduce JavaScript execution time

Limiting complex operations, reducing the bundle size and preventing memory leaks all help reduce the execution time.

Visual stability: Cumulative Layout Shift (CLS)

The idea of this metric is to capture unexpected layout shifts. As a user, if I intend to click on a button and - because of a layout shift - suddenly end up clicking on something else, it is really disturbing.
This can happen because the resources are loaded asynchronously and, in some cases, elements might be added dynamically to the content. 
The culprit might be an image or video with unknown dimensions, a font that renders larger or smaller than its fallback, or a third-party ad or widget that dynamically resizes itself.

Philip Walton and Milica Mihajlija - Cumulative Layout Shift (web.dev/cls)

What can be done:

Adding size attributes on image and video elements

This way, the space can be reserved in the layout. Another possibility is to use CSS aspect ratio boxes.
This is useful, for example, when we have a cover image at the top of a page. If the space is not reserved, the main content loads first, and only once the browser has finished loading the image does the layout shift to make room for the cover. Quite disturbing for the user!
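Both options, sketched for a hypothetical cover image (file name and class name are placeholders):

```html
<!-- width/height let the browser compute the aspect ratio and reserve
     the space before the file arrives -->
<img class="cover" src="cover.jpg" alt="Cover" width="1600" height="900">

<style>
  /* Alternative: reserve the slot with the CSS aspect-ratio property */
  .cover {
    width: 100%;
    aspect-ratio: 16 / 9;
    object-fit: cover;
  }
</style>
```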

Prefer transform animation

When animating elements of the DOM, instead of playing on their width or height - which impacts the layout directly - prefer transform properties such as rotate or translate.
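As a small illustration of the difference (class names are placeholders):

```html
<style>
  /* Animating width forces a layout recalculation on every frame,
     and neighbouring elements move with it */
  .grow-layout { transition: width 300ms ease; }
  .grow-layout:hover { width: 220px; }

  /* transform is handled by the compositor and shifts nothing around it */
  .grow-transform { transition: transform 300ms ease; }
  .grow-transform:hover { transform: scale(1.1); }
</style>
```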

Limit dynamically inserted content

Obviously, this applies only to the initial page load, not to content inserted following a user interaction.
If content absolutely must be inserted dynamically, it is a good idea to use a loading placeholder. It at least visually informs the user that movement is to be expected on the page.

The recurring performance issues

At Antistatique, the most common performance issues we face are linked to three topics: image optimization, asset loading and sizing, and - especially when working with a headless stack - caching. As seen above, these issues directly impact the Core Web Vitals and, as such, the Lighthouse performance score.
In a second article, I will share the work we did, and are still doing, on a website we are currently developing. The idea is not to propose a “how to improve performance” kind of article, but to share the thought process we went through on those issues and the solutions and alternatives we came up with.


Updated: 11 February 2022