How can lazy loading improve web portal performance by reducing initial load times and optimizing resource usage? What are the best techniques to implement lazy loading in images, components, and data fetching using JavaScript frameworks like React, Vue, and Angular? How does lazy loading impact SEO and user experience?

Hello,

The easiest way to implement image lazy loading is by simply adding the loading="lazy" HTML attribute to <img> image tags. That will not load the image until it is about to come into view in the web browser as the user scrolls down the page.
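
For instance, a minimal sketch (file names and dimensions here are just placeholders):

    <!-- Above the fold: load normally -->
    <img src="/images/hero.jpg" alt="Hero banner" width="1200" height="600">

    <!-- Further down the page: let the browser defer it -->
    <img src="/images/team-photo.jpg" alt="Our team" width="800" height="600" loading="lazy">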

Something else that we do here at DaniWeb is to use the async and defer attributes on <script> HTML tags, so that they load external Javascript files asynchronously without blocking the parsing and rendering of the HTML page. Use defer if your Javascript file requires the HTML DOM to be ready, or relies on other Javascript files, and use async if you want the Javascript file to be executed the moment that it's finished downloading.
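
For example (the script names are placeholders, not DaniWeb's actual files):

    <!-- Independent of the DOM and of other scripts: execute as soon as it finishes downloading -->
    <script async src="/js/analytics.js"></script>

    <!-- Needs the parsed DOM and runs in document order after parsing completes -->
    <script defer src="/js/app.js"></script>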

The goal, of course, is to do everything within your power to make your site load as fast as possible, and where you can't, to focus on the user experience and perceived performance for the end user. An example of that is to use AJAX techniques to load the portions of the webpage that are a little more computationally intensive with JavaScript after the rest of the page has already been delivered to the user. That is especially worthwhile if the content would otherwise delay generating the HTML, and it's content that is below the fold (i.e. not visible on the first screenful).
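
A rough sketch of that idea, assuming a hypothetical /ajax/related-articles endpoint and a placeholder <div id="related-articles"> already in the page:

    // Once the initial page has rendered, fetch the expensive below-the-fold
    // fragment and inject it into its reserved placeholder.
    window.addEventListener('load', function () {
      fetch('/ajax/related-articles')                 // hypothetical endpoint
        .then(function (response) { return response.text(); })
        .then(function (html) {
          document.getElementById('related-articles').innerHTML = html;
        })
        .catch(function () { /* leave the placeholder empty if the request fails */ });
    });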

The loading="lazy" HTML attribute-value pair was introduced in 2019–2020, so I would guess it will be safe to use after 2030–2035, unless you don't need to care about internet visitors because you are targeting a company intranet where you control the browsers and their versions.

Of course, lazy loading impacts SEO. Modern SEO has more to do with how the web app is constructed and how it performs. That means you shouldn't use lazy loading ATF (above the fold) in most cases. I wrote "in most cases" because there are situations where you can't be sure what will be above the fold while programming. However, most cases are clear. For example, the first image of a top slideshow is above the fold (ATF) and should NOT be lazy loaded.
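
For example, for a slideshow whose first slide is ATF (the markup is illustrative; fetchpriority is optional and only honored by browsers that support it):

    <!-- First slide is above the fold: never lazy load it -->
    <img src="/slides/slide-1.jpg" alt="Slide 1" width="1200" height="500" fetchpriority="high">

    <!-- Later slides are safe to lazy load -->
    <img src="/slides/slide-2.jpg" alt="Slide 2" width="1200" height="500" loading="lazy">
    <img src="/slides/slide-3.jpg" alt="Slide 3" width="1200" height="500" loading="lazy">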

Cumulative Layout Shift (or CLS for friends) is where most sites have problems. Many struggle with other metrics as well, but even the "good" ones have issues with CLS. There are many ways to approach this ATF/CLS paradox. We've tried different strategies over the years, and based on experience, this is what I think works best right now:

Your JavaScript should do only three things:

  1. Lazy load everything that is not ATF, allowing some space to load before appearing on screen (e.g., rootMargin: "0px 0px 300px 0px"); see the sketch after this list.
  2. Handle JSON-LD data. I would write more, but I guess this is out of scope.
  3. Load the app's JS files (which will then load the non-critical CSS files) after an interaction (mousemove on desktop, onVclick on mobile).
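
A minimal sketch of points 1 and 3 (the data-src convention, selectors, and file names are my own assumptions, not necessarily how this is implemented in production):

    // 1. Lazy load everything that is not ATF, starting ~300px before it scrolls into view.
    var io = new IntersectionObserver(function (entries, observer) {
      entries.forEach(function (entry) {
        if (!entry.isIntersecting) return;
        var img = entry.target;
        img.src = img.dataset.src;            // real URL is kept in data-src
        observer.unobserve(img);
      });
    }, { rootMargin: '0px 0px 300px 0px' });

    document.querySelectorAll('img[data-src]').forEach(function (img) { io.observe(img); });

    // 3. Load the app bundle (which then pulls in non-critical CSS) only after the first interaction.
    var appLoaded = false;
    function loadApp() {
      if (appLoaded) return;
      appLoaded = true;
      var script = document.createElement('script');
      script.src = '/js/app.js';              // hypothetical bundle
      document.head.appendChild(script);
    }
    window.addEventListener('mousemove', loadApp, { once: true, passive: true });
    window.addEventListener('touchstart', loadApp, { once: true, passive: true });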

I'm happy that SEO these days has nothing to do with the shady tactics of the past and is more about the quality of the web app and the message it conveys.

If you want an example of lazy loading in the embedded JavaScript for the first web page view, I’d be happy to share it with you.

commented: Agree with everything except the bit that it's unsafe to use loading="lazy" until at least 2030 +34

The loading="lazy" HTML attribute-value pair was introduced in 2019–2020, so I would guess it will be safe to use after 2030–2035, unless you don't need to care about internet visitors because you are targeting a company intranet where you control the browsers and their versions.

According to Can I Use, the attribute currently works for over 96% of all web users across both desktop and mobile browsers. For the others, there's no harm in having it; it just won't do anything.

That means you shouldn't use lazy loading ATF (above the fold) in most cases.

I would agree with this. That is why, in my previous post, I mentioned that the AJAX technique to improve perceived performance should only be used when content is below the fold.

I'm happy that SEO these days has nothing to do with the shady tactics of the past and is more about the quality of the web app and the message it conveys.

Very much agreed!!

Dani, in my small software company (we are only five people), we have a rule for the web: keep supporting a browser until dropping it affects less than 1 in 200 users, or 0.5%. We waited 10 years for Internet Explorer to reach this threshold, and finally, in 2025, we no longer program new web apps to support Internet Explorer.

This is a liberating moment for me personally—supporting all the weird IE quirks alongside modern JavaScript was an ugly puzzle over the years. Of course, in the few months since our new apps stopped supporting Internet Explorer (affecting less than 1 in 200 visitors), we’ve received several complaints from some companies for whom we develop apps.

On "Can I Use," IE was supposedly "dead" in 2013, yet we only dropped support on 01/01/2025—and we still got complaints. (I won’t dwell on those because it was a rational decision.)

Saying that loading="lazy" will be production-ready without any additional JavaScript by 2030 is optimistic. It will probably be 2035 or even 2040.

The reason I disagree with you is that I do not see a reason not to use a performance optimization that will benefit 96% of web users, and that degrades gracefully and causes absolutely no side effects for the remaining 4%. What is the harm in using it in production now?

I especially disagree with you because this is a performance optimization. As a completely unrelated example, if you could implement some low-hanging fruit to improve the CLS for only 25% of users, and the remaining 75% remained completely unchanged and unaffected, why wouldn't you do that? 25% means a better experience for at least 1 out of every 4 users. The alternative, not using it, is a better experience for 0 out of every 4 users.

I think this is a completely different discussion than a web app where functionality breaks or provides an inconsistent user experience depending upon the end-user's tech stack. That's a different beast entirely. In that case, yes, you absolutely need to address bugs that affect 4% of your users.

On "Can I Use," IE was supposedly "dead" in 2013

I'm not sure what you mean by that? I see it saying that IE is still in use by 0.4% of web users. It is up to you to determine if that is an acceptable loss. I don't think they make that determination for you because, as pointed out in my previous post, a performance optimization is entirely different than breaking functionality or affecting the UI/UX.

In reality, the usage of IE is lower than 0.4%. In some categories of web apps, it's around 0.1%, while in others, it's about 0.2%. That is why we decided to stop supporting IE from 01/01/2025. If it were above 0.5%, you shouldn’t even consider introducing something without first testing it in Internet Explorer. I can't understand how any app can be in production while ignoring 0.5% of its audience, because 0.5% is not "an acceptable loss" by any metric.

I don’t get the "performance optimization is entirely different from breaking functionality or affecting the UI/UX", and I couldn’t find any previous post of yours discussing it. In a web app that is already up and running, the priority should be to avoid breaking functionality and affecting the UI/UX before making any performance optimizations. In a new web app, however, you can take the opposite approach—optimize performance first and then build the UI/UX around it.

I think perhaps we are misunderstanding each other.

Of course, it's important to not introduce any breaking functionality for any percentage of users.

However, what is the harm in adding loading="lazy" to an existing web app? 96% of users will experience a performance improvement. The other 4% of users will have no negative consequences, and everything will be exactly the same for them. That 4% will not lose any functionality. That 4% will not experience any UI/UX consequences or see any new bugs. What is the reason to not do it? I'm not understanding.

First, you need to radically reduce the amount of JS and CSS. I don't know about React, but Angular is clearly too big. Pack loadable items into bundles, and don't use one huge bundle: create a collection of bundles. Switch on JavaScript modules (type="module") and load code on demand. Use plain W3C CSS instead of nice but large CSS frameworks. Avoid WASM: it is only useful in pages with heavy calculations; JS performs DOM manipulations faster and requires far fewer resources. Consider Web Components in client-side programming.
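
For instance, a minimal on-demand loading sketch with native modules (the bundle path and element ids are made up):

    <script type="module">
      // Pull in the heavy charting bundle only when the user opens the reports panel.
      document.getElementById('show-reports').addEventListener('click', async () => {
        const { renderCharts } = await import('/js/charts.mjs');   // hypothetical bundle
        renderCharts(document.getElementById('reports'));
      });
    </script>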

However, what is the harm in adding loading="lazy" to an existing web app? 96% of users will experience a performance improvement. The other 4% of users will have no negative consequences, and everything will be exactly the same for them. That 4% will not lose any functionality. That 4% will not experience any UI/UX consequences or see any new bugs. What is the reason to not do it? I'm not understanding.

I agree with you 100%. I just didn't understand this argument until you phrased it like that. Also, sorry for being late to respond, but I wanted to do some tests and dig a little deeper into that.

I made a comparison test of loading="lazy" on the img tag vs JavaScript lazy loading images with 'rootMargin: "0px 0px 300px 0px"' in the IntersectionObserver settings. What I found was something that I didn't know, and therefore I am sharing it.

Browsers have different distance-from-viewport thresholds for loading="lazy" images. For example, Chromium-based browsers have a distance-from-viewport threshold of 1250px to 2500px depending on the speed of the connection (@see Click Here).
On the other hand, Firefox has the policy of starting to load the image when it is "about to become visible," and that threshold could be as little as 1px from the viewport. (If someone has an official number here, please share.) This, of course, makes the user experience awful.

In the first case, lazy loading loses a lot of its meaning and also affects SEO. In the test I mentioned (the same page with the same images), on pagespeed.web.dev, the speed index in the 'loading="lazy"' version was a lot higher (more than one second), and the CLS was almost double. (The same goes for the Lighthouse report in Developer Tools, which has more to do with what Google is really counting.) They use Chromium-based browsers for those tests, so that makes sense because the browser has to load a lot more images from the beginning.

Also, using IntersectionObserver in JavaScript for lazy loading allows you to create your own policies for how lazy loading is done, e.g., start loading those images after the first user interaction (click or mouse move) when the main thread is not busy.
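
For example, one such custom policy might look like this (a sketch, assuming the same data-src convention as above; requestIdleCallback is not available in every browser, hence the fallback):

    var started = false;
    function loadRemainingImages() {
      if (started) return;
      started = true;
      var run = function () {
        document.querySelectorAll('img[data-src]').forEach(function (img) {
          img.src = img.dataset.src;
          img.removeAttribute('data-src');
        });
      };
      // Wait until the main thread is idle before kicking off the downloads.
      if ('requestIdleCallback' in window) {
        requestIdleCallback(run);
      } else {
        setTimeout(run, 200);
      }
    }
    window.addEventListener('click', loadRemainingImages, { once: true });
    window.addEventListener('mousemove', loadRemainingImages, { once: true });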

I guess I don't understand why implementing loading="lazy" will hurt the CLS if you're only doing it for images below the fold, and always specifying image dimensions (which would ensure it never affects CLS at all)?

Yes, I couldn't see the CLS impact either. I understand the "speed index" SEO impact of using loading="lazy" (which in my mind makes it not usable) vs using IntersectionObserver, but not the CLS. I created one more test, and now it looks like the speed index impact is there, but the CLS impact is negligible (although now I see a much bigger LCP impact). I must create a really simple, basic test, with nothing above the fold, to better understand whether there is a CLS impact (which doesn't really make sense). I will post my results here.

Even if using it above the fold, as long as you specify image dimensions, there should never be a CLS impact.

CLS measures how much elements on the page have to be repositioned as it loads or due to user interaction. As long as the spot for the image is carved out from the very beginning, even if the image loads much later, no other element on the page would ever need to shift to make room.
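
Specifying width and height attributes (as in the earlier snippets) is the simplest way to carve that spot out; if the images are instead sized fluidly with CSS, the aspect-ratio property reserves the slot the same way. A sketch (the class name is hypothetical):

    /* Reserve the image's slot before the file arrives, so nothing shifts when it loads */
    img.card-thumb {
      width: 100%;
      aspect-ratio: 16 / 9;   /* should match the intrinsic ratio of the images being served */
      height: auto;
    }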

Speed index should also not be affected at all by using it. Speed index measures how long it takes to load the first screen of content of a webpage. That means that if you're using it only for content below the fold, speed index should never be affected.

Dani, as you wrote, "Speed Index" is supposed to be about the visible part of a web page (ATF). But because in Lighthouse I kept getting a worse "Speed Index" for the loading="lazy" method, I created two simplified HTML pages to test.

I gave a lot of height to what "above the fold" is supposed to be, 940px, so that this would not be the issue. I would be very interested to learn what your results are. I propose testing it by opening two tabs in a Chromium browser with Network > Disable Cache in both, loading the pages on desktop, making sure there is a visible part (the "Above the fold" text), and then performing a Lighthouse audit in each tab, selecting "Desktop".

The version with lazy loading through JS IntersectionObserver is in:
Click Here

And the version with loading="lazy" is in:
Click Here

I have a theory of why this might be happening, but first I will wait for confirmation from you and/or others that this is happening for you as well.

I would like to confirm that the JS version is giving me an LCP of 0.2s and the loading="lazy" version is giving me an LCP of 1.1s, so certainly a huge difference there. FCP, total blocking time, CLS (0), and speed index (0.2s) scores are the same for both.

I admit that, at first glance, I'm not understanding why the LCP is so long for the second version. Interested in hearing your theory.

Actually, I take it back. I assume that, using Chrome on a fast Internet connection, and a super high screen resolution, none of the images are actually being lazy loaded.

Sorry. I’ve just been really exhausted all day. It just clicked that LCP is only for ATF. Soooo, yeah, I don’t know what’s going on there.

As I wrote in a previous post, I believe that the way loading="lazy" is being implemented in browsers makes it almost unusable. Chromium-based browsers have a really high distance-from-viewport threshold, loading many images simultaneously upon page entry, while Firefox has a really low threshold, which makes the user experience awful.

My theory as to why loading="lazy" results are worse in Lighthouse metrics (which really affects SEO) compared with IntersectionObserver relates to the behavior of Chromium-based browsers. If the browser loads many images at once on page entry, the main thread is busy, which delays above-the-fold content. Those delays might be due to the fact that it takes more time for the ATF content to stabilize (busy main thread) or even affect the total "Speed Index."

We often tend to think that browsers read and present HTML in the same order we do. I don't believe this is the case (perhaps very old browsers did). For many years now, browsers have wanted a full picture of the HTML before they even start to present anything.

Since this is only a theory, I don't have any solid evidence of why this is happening. The only thing I can say for sure is that everyone (that I know) who ran this test reported worse Lighthouse metrics for the loading="lazy" version. It's always fun to perform such tests because you learn new things that are rarely documented.

Lighthouse metrics (which really affects SEO)

Please correct me if I'm wrong, but I'm fairly confident that Lighthouse metrics do not affect SEO. Lighthouse is just a tool that Google offers SEOs so they can identify actionable page improvements. In reality, it's real world performance data from Chrome users that factors into a site's SEO. This real world data can be found in the Core Web Vitals section of Google Search Console.

Unfortunately, your theory doesn't really make much sense to me based on what I know about the DOM, HTML rendering, and Javascript execution. However, I don't have a good explanation for the behavior either.

Dani, I completely agree (of course) that Lighthouse, as a tool, doesn't directly impact SEO rankings. However, Core Web Vitals, which Lighthouse helps measure, do influence Google's ranking signals. If a webpage performs poorly in Lighthouse audits for metrics like LCP, Speed Index, CLS, etc., those issues will also affect real Chrome users, ultimately having a negative impact on its SEO.

If a webpage performs poorly in Lighthouse audits for metrics like LCP, Speed Index, CLS, etc., those issues will also affect real Chrome users, ultimately having a negative impact on its SEO.

I have found that to not be the case. As an example, we use Google's signed exchanges (SXG), which allow Chrome to prefetch our URLs from the Google SERPs, so they load faster if a searcher clicks through to DaniWeb. This is an example of when Lighthouse might provide poor results, but the actual user performance tells a different story.

Hello Dani,

I decided last week not to respond to your last post, but, let's be honest, I can't keep myself from responding.

Your example could be valid if you take into account:

What percentage of websites/web apps use SXG?
Of those, what percentage of the indexed pages by Google are actually cached by Google at any given moment?
Of those, what percentage is the first visited page of the website?
Of those, for what percentage does Google caching actually improve Core Web Vitals metrics?

I don't want to give answers to those. I believe that however you do the math, the result will be the same.

Is your example valid? Yes, if you stretch a fringe and long-shot possibility into an opinion. But I am writing this because I genuinely don't understand what you are saying, other than that you disagree with something. (Of course, I give you credit for not disagreeing with the numbers and the reality... that would have been self-evident some years ago among our colleagues, but now I feel great that other notable members of the community, like you, don't say that 1 is 0.)

Do you think that Lighthouse audit metrics are not an efficient way for us to understand what affects Core Web Vitals in a web app?

I have written this in the past too: I get the hostility toward Core Web Vitals and, by extension, the results of Lighthouse audits. I have that feeling too in the back of my mind. But, with a few painful exceptions, those metrics have led me to try new architectural approaches that are now deeply integrated into our way of doing things.

It's like you are the nerd and the cool kid in the class at the same time. You should find a way to score 100% while keeping all the cool stuff that makes your web app stand out.

Is your example valid? Yes, if you stretch a fringe and long-shot possibility into an opinion.

My post was based on my own use case, which is the only data that I have access to.

What percentage of websites/web apps use SXG?

DaniWeb uses SXG.

Of those, what percentage of the indexed pages by Google are actually cached by Google at any given moment?

While Google is not transparent about this, we can make some educated guesses. According to Google Search Console, at this moment in time, Google is currently indexing 207,000 of our pages. Also according to GSC crawl stats, Google has made 232,000 crawl requests for an SXG certificate over the past 3 months.

Of those, what percentage is the first visited page of the website?

Nearly all of our traffic comes from Google. About 93% of URL requests represent the first visited page of the website.

Of those, for what percentage does Google caching actually improve Core Web Vitals metrics?

Without SXG, our mobile traffic used to fail core web vitals. Once we implemented SXG, our mobile traffic consistently began passing all core web vitals. It's been my experience that SXG has brought our real world LCP for mobile traffic down from about 3s to 2.1s. As you know, Google considers 2.5s the cutoff.

Do you think that Lighthouse audit metrics are not an efficient way for us to understand what affects Core Web Vitals in a web app?

I believe that they can be helpful in determining where your low hanging fruit is or where you may need to make improvements, but no, I personally don't believe they are an effective representation of CWV. That is just based on my own experience with DaniWeb.

There are two reasons for that. The first is that my use of SXG makes what I see in Lighthouse much worse than what Google Search Console shows me for real world visitors' CWV. The second is that me being on a very fast connection in Silicon Valley and the majority of my users being on a very slow connection overseas makes what I see in Lighthouse much better than what Google Search Console shows me for real world visitors' CWV.

If a webpage performs poorly in Lighthouse audits for metrics like LCP, Speed Index, CLS, etc., those issues will also affect real Chrome users, ultimately having a negative impact on its SEO.

Basically what I was saying was that I found that to not be the case with my own website. Like I said, half the time SXG makes what I see in Lighthouse worse than what the average real Chrome users experience. The other half of the time me being on a fast connection and the majority of my website's visitors being on a slow connection makes what I see in Lighthouse better than what the average real Chrome users experience.

Dani, first of all, thank you for sharing the numbers. Yes, if LCP for mobile devices has improved from 3 seconds to 2.1 seconds (with SXG being the only difference), it’s clear that SXG is working well for DaniWeb.

232,000 crawl requests for an SXG certificate over the past three months for 207,000 pages is negligible. Your SXG certificates have a limited lifespan, and even if they didn’t, Google can determine that a page like this one (which we’re on right now) is dynamic and shouldn’t be cached.

Of course, Lighthouse audit results are influenced by factors like your location, connection, and other variables. That’s why we have tools like PageSpeed Insights, VPNs, and others to account for these differences. But again, if you’ve seen such a significant real-life improvement using SXG, all I can say is: that’s great!

I also don't believe that Lighthouse audit results are an effective representation of CWV, but I do believe they are an efficient way for us to understand what affects CWV in a web app.

Your SXG certificates have a limited lifespan

SXG certificates have a maximum lifespan of 90 days, and the majority of our pages have Cache-Control HTTP headers anywhere between 7 and 30 days (some longer, some shorter). If a forum thread is 10 years old and has not received a new reply in 9 years and 10 months, we can feel comfortable caching that thread for non-logged-in users for longer than, for example, a thread that is only a few days old. But yes, if you are logged in, no pages are cached. However, we do still use caching mechanisms such as Memcached to cache portions of a page, such as the list of Related Topics in the sidebar, even if you are logged in.

Google can determine that a page like this one (which we’re on right now) is dynamic and shouldn’t be cached

All of our pages (including this one!) are cached for non-logged-in users, which includes Googlebot. We use a Vary: Cookie HTTP header so that we can send different Cache-Control headers depending on whether you are logged in with a cookie or not. We have definitive evidence that SXG is working properly for our forum threads.
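
Roughly, the header combination being described looks like this (the values are illustrative, not DaniWeb's actual configuration):

    Response to a visitor without a login cookie (cacheable, eligible for SXG):
      HTTP/1.1 200 OK
      Vary: Cookie
      Cache-Control: public, max-age=604800

    Response to a logged-in visitor (never cached):
      HTTP/1.1 200 OK
      Vary: Cookie
      Cache-Control: private, no-cache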

Of course, Lighthouse audit results are influenced by factors like your location, connection, and other variables.

That's why I disagreed with you that a Lighthouse audit is always representative of real world users. In my case, the majority of my real world users do not have the same location and/or technology as I do.

I decided last week not to respond to your last post, but, let's be honest, I can't keep myself from responding.

Face it. It's because you just love talking to me.

commented: I love being part of the DaniWen experience +11