The Google Analytics javascript tag we use in our HTML <head> is:

<script async src="https://www.googletagmanager.com/gtag/js?id=G-3BQYMGHE7E"></script>
<script>
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());    
    gtag('config', 'G-3BQYMGHE7E');
</script>

Then, there are a handful of gtag() calls elsewhere on the page.

I'm confused by how async works in this context. How does the page know not to call any of the gtag() calls until after the async JS has completed loading? I thought that by adding async, it's non-blocking? Doesn't that mean the rest of the page continues?


How does the page know not to call any of the gtag() calls until after the async JS has completed loading?

Short answer: it doesn't.

What you do in the first line, window.dataLayer = window.dataLayer || [];, is define an array if it isn't already there. Then, with the gtag function, you push elements (the call arguments) onto it.
When the Google Analytics gtag.js file loads, the first thing it does is check whether there is already any data in this array; if there is, it uses it to do whatever it has to do. It also sets up a kind of "listener" on this array object so it can track future changes and handle whatever new data gets pushed in.
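
Conceptually it is something like this rough sketch (this is not Google's actual code, just an illustration of the queue pattern; onGtagJsLoaded and processCommand are made-up names):

window.dataLayer = window.dataLayer || [];
function gtag(){ dataLayer.push(arguments); }   // before gtag.js loads, calls just queue up in the array

// ...and when gtag.js finally loads, it can do something like this:
function onGtagJsLoaded() {
    // replay every command that was queued while the file was downloading
    window.dataLayer.forEach(processCommand);
    // then intercept future pushes so new commands are handled right away
    var originalPush = window.dataLayer.push;
    window.dataLayer.push = function () {
        originalPush.apply(window.dataLayer, arguments);
        Array.prototype.forEach.call(arguments, processCommand);
    };
}

function processCommand(args) {
    // imaginary handler; the real gtag.js does the actual analytics work here
}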

In my opinion it is extremely useful to create a Tracker js function object that asynchronously loads all the tracker js scripts you need (Google Analytics, Facebook Pixel, etc.) and does whatever is needed with them once they are loaded.
That way:

  • You can load and use only the trackers the user has accepted (external cookies acceptance + data privacy policy)
  • You avoid the penalty in PageSpeed Insights (which Google uses more than people realize to rank sites, using real-user data from the Chrome browser), because you don't load your tracker js when the page loads but only when the user does an action, e.g. accepts cookies or scrolls a bit.
    There are many more reasons to have a Tracker js function object, but since that isn't the question I won't bore you with them. A rough sketch of the idea is shown just below.
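
A very rough sketch of that idea (the Tracker name and the cookies-accepted event are made up for the example; only the GA ID is from your snippet):

var Tracker = {
    loaded: {},
    load: function (name, src) {
        if (this.loaded[name]) { return; }   // never inject the same script twice
        this.loaded[name] = true;
        var s = document.createElement('script');
        s.async = true;
        s.src = src;
        document.head.appendChild(s);
    }
};

// queue GA commands right away; they just wait in dataLayer until gtag.js arrives
window.dataLayer = window.dataLayer || [];
function gtag(){ dataLayer.push(arguments); }
gtag('js', new Date());
gtag('config', 'G-3BQYMGHE7E');

// only fetch the real gtag.js after the visitor accepts cookies (hypothetical event)
document.addEventListener('cookies-accepted', function () {
    Tracker.load('ga', 'https://www.googletagmanager.com/gtag/js?id=G-3BQYMGHE7E');
});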

That makes a whole lotta sense. What about Google ads?

This goes in the document header:

<script async src="https://securepubads.g.doubleclick.net/tag/js/gpt.js"></script>
<script>
  window.googletag = window.googletag || {cmd: []};
  googletag.cmd.push(function() {
    googletag.defineSlot('/86406173/ArticleLeaderboard', [[970, 250], [320, 50], [468, 60], 'fluid', [970, 90], [320, 100], [728, 90]], 'div-gpt-ad-1698633249866-0').addService(googletag.pubads());
    googletag.pubads().enableSingleRequest();
    googletag.enableServices();
  });
</script>

And this in the document body:

<!-- /86406173/ArticleLeaderboard -->
<div id='div-gpt-ad-1698633249866-0' style='min-width: 320px; min-height: 50px;'>
  <script>
    googletag.cmd.push(function() { googletag.display('div-gpt-ad-1698633249866-0'); });
  </script>
</div>

So, following what I'm understanding from you, first the async tag that loads gpt.js loads in its own time and then, once it has eventually loaded, it monitors the window.googletag.cmd array.

Then, googletag.cmd.push() is passed an anonymous function that calls things like defineSlot() that I assume are only found in gpt.js. I guess the anonymous functions, both in the header and in the body, just get added to the stack and are eventually called by gpt.js?

Also, we don’t have a lot of trackers or third party JS on DaniWeb. Just Google Analytics and Google DFP (ad server). Just those two above. That’s all.

Which leads me into my next question. I’ve now done a whole bunch of research over the past hour or so and everything I’m seeing says to use async if you’re not manipulating the DOM and to use defer if you are. Google recommends in their documentation to use async for their ad server (which injects ads into the DOM with the JS posted in my previous post). Why wouldn’t there be performance gains here using defer instead?

As for the Google ads part:

I guess the anonymous functions, both in the header and in the body, just get added to the stack and are eventually called by gpt.js?

I don't use Google Publisher Tags the same way, but yes, from the code you provided I would also guess that this is what is happening.
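
The pattern presumably looks something like this rough sketch (not the real gpt.js internals, just an illustration of how a queue of callbacks can be drained; onGptJsLoaded is a made-up name):

window.googletag = window.googletag || { cmd: [] };

// what a library like gpt.js could do once it has finished loading
function onGptJsLoaded() {
    // run every callback that was pushed while the file was still downloading
    googletag.cmd.forEach(function (fn) { fn(); });
    // from now on, push() simply executes the callback immediately
    googletag.cmd = {
        push: function (fn) { fn(); return 1; }
    };
}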

Some time back, when using Google Ads remarketing, Google needed to detect the code they provided in order to complete its setup, and it couldn't if you used a dynamic Tracker function object. The solution is simple: first put in the code they provide, complete the setup, and then move it into a Tracker js function object. I don't know whether they detect dynamically added functionality nowadays, but whenever I use something like that I first put their code in directly and then switch to a Tracker js function object.

As for your next question, I guess Google provides that code because they deal with millions of webmasters and want to make it as simple as possible for them to integrate with Google Analytics, Google Ads remarketing, or Google Publisher Tags. If I had to choose between defer and async in a tag in the original HTML of a page, I would use defer, but luckily I don't have to, because I don't see any reason to load javascript files before the DOM content is loaded; the only thing that matters at this stage is what the visitor sees above the fold.

But there is a real performance issue that PageSpeed Insights (or better, Chrome Lighthouse; real-user data is used by Google for ranking) highlights immediately.

Take a look at:
https://pagespeed.web.dev/analysis/https-www-daniweb-com/dspf58qozh?form_factor=desktop
Or even better, open Lighthouse in Chrome Developer Tools and analyze daniweb.com for desktop. As you can see in PageSpeed Insights, DaniWeb is failing, and the data from Lighthouse is even worse.

I will not go one by one through the points Insights or Lighthouse make; I will just mention one of them:
"Avoid long main-thread tasks", where it lists /gtag/js?id=G-3BQYMGHE7E (www.googletagmanager.com).

When you load a js file you use the main thread, and this happens at the same time it has a lot of other things to do, e.g. fetching CSS and applying it. Of course, with async in a tag in the original HTML things are worse because it may pause the DOM parser, but with defer things are also bad: if the js executes right when the DOM parser finishes, at the same moment the main thread has many other things to do, you still have a problem.

There are many approaches to solve this, and here is just one based on the code of Daniweb.

I see that you have minified your CSS and load it in one file. That's great (it would be even better if you had two files: one for the critical CSS that loads with the page, and one for the non-critical CSS (e.g. hover styles) that is loaded by js after all the other main-thread operations). Why don't you do the same with javascript? You could have one minified js file that loads after the DOM content is loaded. From there you could load and execute external js files (like Google Analytics) only when needed: when cookies are accepted (see GDPR), or even when the visitor interacts with the page (e.g. the first mousemove on screen, or a bit of a first scroll in a mobile web app), using passive listeners. That way the main thread will complete whatever it has to do before it starts loading and executing Google Analytics or Google Tag Manager.
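
A simplified sketch of that idea (the loadScript helper and the list of events are only illustrative, and the GA URL is the one from your snippet):

function loadScript(src) {
    var s = document.createElement('script');
    s.async = true;
    s.src = src;
    document.head.appendChild(s);
}

var trackersStarted = false;
function startTrackers() {
    if (trackersStarted) { return; }
    trackersStarted = true;
    loadScript('https://www.googletagmanager.com/gtag/js?id=G-3BQYMGHE7E');
    // ...load any other third-party scripts here
}

// passive listeners never block scrolling; once:true removes each listener after it fires
['mousemove', 'scroll', 'touchstart'].forEach(function (type) {
    window.addEventListener(type, startTrackers, { passive: true, once: true });
});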

If you want, I could provide a really simplified example of this strategy, which is itself already a simplification (e.g. it would be better to use a shared worker to fetch every non-critical (below-the-fold) resource into a createObjectURL from a blob in a form the main thread can use, but I won't go there).

My last post was from my mobile phone, so here's a more thought out response:

In my opinion it is extremely useful to create a Tracker js function object that asynchronously loads all the tracker js scripts you need (Google Analytics, Facebook Pixel, etc.) and does whatever is needed with them once they are loaded.

Are you referring to something like Google Tag Manager, where you just insert the one javascript on your page, and all third-party trackers and tags all load from within the tag manager instead of being hard coded directly into your HTML?

Or, are you referring to something like Cloudflare Zaraz? Although I haven't yet used it, from my understanding, Zaraz allows you to run Google Analytics server-side from the cloud. I've been hesitant to use Zaraz because it costs money and DaniWeb is already operating in the red as it is. I don't feel as if the performance gains will outweigh the costs since it would just be Google Analytics we're talking about.

You can load and use only the trackers the user has accepted (external cookies acceptance + data privacy policy)

We use Google Ad Manager + AdSense, which require (more or less) that we use Google's GDPR messaging functionality. You can use a third-party messaging tool that has been certified by Google, but the available options cost money and/or are heavier javascript-wise than Google's native one. Since we use Google's functionality, our Google Ad Manager JS loads Google Funding Choices JS (essentially GDPR messaging) itself. We can't stick Google Ad Manager into external javascript because it differs widely on every page, depending on how many ads are on the page (which is generally based on how many posts there are in the topic, etc.), category targeting for the ads on the page, etc.

But there is a real performance issue that PageSpeed Insights (or better, Chrome Lighthouse; real-user data is used by Google for ranking) highlights immediately.

Our emulated LCP is 1.2s. Our real world LCP has always fluctuated very widely month-to-month, depending on how much Google loves us that month. In other words, sometimes the majority of traffic Google sends us (which accounts for > 90% of our overall traffic) is from the US/UK/CAN/etc., and sometimes it's primarily from developing countries with really slow Internet connections and people connecting with older technology.

That's great (it would be even better if you had two files: one for the critical CSS that loads with the page, and one for the non-critical CSS (e.g. hover styles) that is loaded by js after all the other main-thread operations)

I can look into doing this.

Why don't you do the same with javascript? You could have one minified js file that loads after the DOM content is loaded. From there you could load and execute external js files (like Google Analytics) only when needed: when cookies are accepted (see GDPR), or even when the visitor interacts with the page (e.g. the first mousemove on screen, or a bit of a first scroll in a mobile web app), using passive listeners. That way the main thread will complete whatever it has to do before it starts loading and executing Google Analytics or Google Tag Manager.

Simply because I use Google Analytics to track bounce rate, etc. and how many people come to a webpage and then either don't interact with it at all, or leave after < 1s. We then use Google Analytics events to record a handful of user interactions we care about. We would lose a lot of useful information used to improve our bounce rate if GA completely dismisses non-interactive users. For example, we wouldn't even know if 90% of the people arriving via a Google search immediately bounced after 5s without so much as moving their mouse or interacting with the page at all.

One of the things I find so difficult when it comes to trying to figure out what works best is that no two page requests are the same. I tried switching Google Ad Manager from async to defer, and PageSpeed Insights showed a much shorter blocking time on desktop, but a significantly longer blocking time on mobile, and with the FCP and LCP being within .1s of each other on both desktop and mobile. The problem is that every time you load a page, you are served a different ad. Some ads are very heavy, and some ads are rather lightweight.

One of the things I find so difficult when it comes to trying to figure out what works best is that no two page requests are the same.

Exact same HTML/JS/CSS code on both. Both use async to load Google Analytics and Google Ad Manager:
Yours vs Mine and then Mine a few minutes later

Desktop:
FCP ranges from .5s to .9s
LCP ranges from .8s to 1.2s
Blocking is always 0ms

Mobile:
FCP ranges from 1.8s to 2.4s
LCP ranges from 2.3s to 2.9s
Blocking time is 430 ms when FCP/LCP is higher and 980 ms when FCP/LCP is lower

I switched to using defer and PageSpeed Insights was giving me even wider variability each time I refreshed.

That's great (it would be even better if you had two files: one for the critical CSS that loads with the page, and one for the non-critical CSS (e.g. hover styles) that is loaded by js after all the other main-thread operations)

So we already do that with things like the CSS used to generate this Markdown editor that I am typing in right now. You can view source of this current page and search for js-editor-css and you'll see where we dynamically inject both the javascript to handle this editor as well as the CSS for it, only when necessary. It's been like that for a very long time.

At your suggestion, I took some time to investigate whether it makes sense to additionally defer loading the CSS used to generate modals, tooltips, popovers, and anything else I can find that doesn't contribute to static elements on the page, and that we aren't already deferring. I discovered that those things only account for 5.5% of our existing CSS file. I think the overhead of an additional HTTP request might negate any performance gains of removing such a small amount from our primary CSS file.

Also, I thought about it some more, and I'm leaning towards not switching from async to defer for the Google Ad Manager tag. Firstly, when I tried it and then tested with PageSpeed Insights, the results were all over the place because the cost of the ad differed on every page load.

Secondly, despite reading about how things that manipulate the DOM should be deferred, I realized that there's a ton of computation that needs to happen on Google's side in terms of selecting a targeted ad, before it needs to manipulate the DOM. Therefore, if I understand correctly (which I'm not so sure I do), by using async, the Google servers can do some of that work simultaneously while the DOM is still getting ready.

Note that, per the above code, gpt.js needs to have that googletag.cmd queue populated with all the ad slots defined for the page before it can start doing all the calculations on its end and is ready to inject anything into the DOM.

Dani, I have spent many hours of my life frustrated trying to achieve 100% in every metric in PageSpeed Insights and, more importantly, in Chrome Lighthouse. I don't optimize an existing web app for this; I delete it and write it from scratch. The architectural changes that need to be made are so deep that it isn't worth modifying an existing project. What frustrates me more is when they change the rules of those metrics to the opposite of what they were (e.g. image sizes in mobile web apps). I don't consider 100% in those metrics the goal (you could easily make an almost empty web site that achieves 100%) but the precondition.

I can't share any well-grounded opinion about how to integrate DoubleClick ads and Google Funding Choices without receiving the "Reduce the impact of third-party code" message, because I haven't done it for many years. I understand why you don't want to load gtag dynamically after a visitor action or a simple delay: you don't want to lose the data from visits that don't interact with your site and stay < 1s. I don't have a ready-made solution for this one. I am thinking about an implementation of Google Tag Manager server-side tagging for those, but you would still lose some of that data. In the web apps we create in the EU this isn't really an issue, because under the EU GDPR you are not allowed to load anything external that uses cookies before the user accepts their usage (and this acceptance should happen in your own app container, because you are responsible for it), so you couldn't legally measure that with Google Tag Manager anyway (you can measure it in your own data in your own db, or with the Facebook Pixel server-side Conversions API, or even with some server-side options Tag Manager has, losing some interesting data there, e.g. which campaign the visit came from).

The client-side app architecture that we use is fundamentally different from DaniWeb's. That means my opinions, based on what I have seen work, don't apply as-is to DaniWeb. But I would suggest some changes that are easy.

For example you have the message:
Serve static assets with an efficient cache policy 26 resources found
that could easily change with a single line in an nginx conf file (if I remember correctly you use nginx), e.g.:
add_header Cache-Control "max-age=31556952, public";

The -Values assigned to role="" are not valid ARIA roles.- issue is something that can also easily be fixed. The role="form" there is the problem, and even the Mozilla docs that list it carry a big warning not to use it unless you have a really good reason.

The -Displays images with incorrect aspect ratio- one is easy too (change either the sizes of the original images or the img width and height values).
The -Avoid an excessive DOM size- one is trickier, but, for example: why do you load the wysiwyg (HTML, JS, CSS) before the user presses the reply button?
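
For example, something roughly like this sketch (the button id and file paths are hypothetical, just to show the idea of loading the editor on demand):

// fetch the editor's CSS and JS only when the visitor actually wants to reply
document.getElementById('reply-button').addEventListener('click', function () {
    var css = document.createElement('link');
    css.rel = 'stylesheet';
    css.href = '/css/editor.css';      // hypothetical path
    document.head.appendChild(css);

    var js = document.createElement('script');
    js.src = '/js/editor.js';          // hypothetical path
    document.head.appendChild(js);
}, { once: true });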

These are just some ideas that I thought I'd share with you. You know your architecture better than I do, so of course you know better what to do, but the "failed" in PageSpeed Insights on the DaniWeb home page, coming from Chrome Lighthouse real-visitor data, is something that in my opinion must change in order to give DaniWeb a fair shot in Google search results.

I have a slightly different attitude towards this stuff than you have. I honestly don't care at all about achieving 100% in PageSpeed Insights. To me, those are just numbers that an algorithm is using to count the number of good or bad techniques they discover on the page, but aren't necessarily a reflection of what end-users think of the page or what the performance or UI/UX experience is like at all. As you pointed out, it's showing anywhere from 98% to 100% for desktop performance, and also saying the core web vitals assessment failed.

Instead, I use the recommendations from PageSpeed Insights, WebPageTest.org, etc. to let me know about low hanging fruit (that's easy to change), or let me know where my big problem areas are so that I can investigate them further myself. But then I leave it up to my own investigation to decide whether it warrants being fixed, or is a non-issue in my book, etc.

As pointed out in my previous messages, my biggest problem with all of this is that DaniWeb relies on ads, and that's not going to change anytime soon. Ads are very resource-intensive, CPU-intensive, and bandwidth-intensive. That's also not going to change. Sometimes we run a campaign for a few months that has a very lightweight ad, and sometimes we don't. We're at the whim of ad agencies that inject all sorts of tracking pixels into their ads. Not to mention ad servers point to other ad servers that point to other ad servers that finally load the ad. Without any ads, DaniWeb performance is faster than a rocketship.

I am thinking about an implementation of Google Tag Manager server-side tagging for those

Yup, that's why I mentioned Zaraz, which implements GA4 server-side on the edge (of the CDN), closest to the website visitor. We don't do that simply because of $.

Serve static assets with an efficient cache policy 26 resources found

Funny you should mention this now. Up until yesterday, cdn.daniweb.com cached assets for 90 days and static.daniweb.com cached assets for only 30 days. I already changed this yesterday for static.daniweb.com to instead cache assets for 1 year and also be immutable. It's going to take up to a month for the change to trickle down to end users' browsers because the assets are all currently cached by the CDN in the cloud, so we have to wait for them to expire and for the CDN to re-fetch them from our servers and discover the updated cache times. For technical reasons, we need to keep cdn.daniweb.com at 90 days.

The -Values assigned to role="" are not valid ARIA roles.- issue is something that can also easily be fixed.

This was a case of me being misinformed. I didn't realize that role="form" should never be used on <form> HTML tags. I'll fix this right away.

The -Displays images with incorrect aspect ratio- one is easy too (change either the sizes of the original images or the img width and height values).

This is a case of some legacy avatars not being perfect squares. Nowadays we resize server-side, but many years ago we would just display the avatar as a proper square as long as what the user uploaded was roughly a square and within the file size limits. I think there are probably still a handful of avatars from members who joined 15 or 20 years ago that are potentially off by 1 or 2 pixels in one direction, and that's what is being complained about here.

but the "failed" in PageSpeed Insights at daniweb home page , coming from Chrome Lighthouse real visitors data is something that in my opinion must change in order to give Daniweb a fair shot at Google search results

I don't necessarily agree here, primarily because I don't trust what it says. In Google Search Console, the Page Experience section says that I have 99.7% good URLs on desktop. When I click into Core Web Vitals, it shows 0 poor URLs and a small list of URLs that need improvement because they have a group LCP of 2.7s. None of those URLs are the homepage.

Mobile CWV is another matter entirely. We were squarely in "good" for a year or two, and then a couple of months ago the majority of URLs shifted to "needs improvement" despite me not changing anything on my end. It now shows that the majority of URLs fall into a group with a group LCP of 2.7s. But then it shows examples of URLs in this group with LCP usage data, and shows they have an LCP of 2.3s.

[Screenshot: Search Console Core Web Vitals report, 2023-10-31]

That started me on this adventure trying to improve things that included a big server software upgrade over the weekend (upgrading to PHP 8 with JIT compiling). I'm expecting end-user mobile performance to shift back into "good" in Search Console in about a month since that's roughly how long it takes after making a change before Search Console tends to notice it IMHO.

As far as desktop goes, this is the first time I've ever seen it complain about failing CWV. But I think the PHP upgrade, plus switching Google Analytics from async to defer, should help there as well.

The only thing I changed 2 weeks ago (per this thread) was switching Google Analytics from async to defer.

Now, Desktop is passing and Mobile is failing (the opposite of before). I really don't think it has anything to do with the change I made. As I alluded to before, it is always constantly in flux based on the geographic locations of the visitors Google sends to us that month.
