I have just designed this website: https://linkstore.com.vn/

I dunno, rproffitt. I don't really put a whole lot of weight in those things because they are soooo cookie cutter. For example, he got dinged because "There are 5 static components without a far-future expiration date." But maybe that was by design? There are times when you actually do want that. Not only that, but obviously they are not judging you on the quality of your coding or site usability, professionalism, etc.
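For context, a "far-future expiration date" just means the server tells browsers an asset can be cached for a long time. Whether that's right depends on the asset, which is exactly the kind of intent these tools can't see. A hypothetical nginx sketch (paths and lifetimes invented for illustration):

```nginx
# Hypothetical sketch: long-lived caching suits fingerprinted assets
# like app.3f2a1c.js, whose URL changes whenever the content changes.
location /assets/ {
    expires 1y;     # the "far-future expiration date" the tool wants
}

# Assets that change in place may deliberately get a short lifetime.
# This is the "by design" case that an automated score dings anyway.
location /banners/ {
    expires 10m;    # intentionally short: content rotates frequently
}
```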

Also, Link Store is using WordPress with the WooCommerce plugin, so half the things that it complains about are due to the platform and you can't do anything about them. The other half show up if you're using a horribly-coded theme. WordPress themes are always soooo bloated.

I just plugged DaniWeb's homepage into GTMetrix and we got a C grade for browser caching just because we aren't caching the Google Analytics tracking pixel for an extended period of time. Caching it would defeat the purpose of it, first of all. Second of all, it's completely out of my control, and every site that uses Google Analytics and Google Ads would get the same ding.

I like the analyzers because they give you a good first look at sites, and then you use your expertise to fine-tune or ignore some of the results.

Not everyone has access to Dani or her kind. Automation is needed.

I wonder how you feel about the W3C checker.

I put a lot more weight in the W3C Validator because it's either valid code or it's not. Using a compiled language as an analogy, W3C Validator checks for parse errors and whether or not the thing is valid code and will compile. Things like GTMetrix attempt to score you on performance quality, but they do so with a very rudimentary algorithm that doesn't take intent into consideration.

I'm confused as to what benefit you get out of GTMetrix. Don't get me wrong. I use them myself quite heavily when I want to see if there's something I'm not thinking of that could maybe shave a fraction of a second off of page load times, or that sort of thing. I might use them for competitive analysis in SEO, to see if a competitor is doing something in particular more efficiently than I am.

But in terms of using them to evaluate someone else's site as a whole, that wasn't their intended purpose, and they are horrible at doing it. Mostly because their purpose is to find some potential edge cases that you could use to improve your loading times, and they don't factor in site usability, navigation, ease of use, UI/UX, design, professionalism, valid code, efficient code, the quality of the front-end code, or anything else that could possibly be used to distinguish a well-designed/developed site from a bad one. They are simply created as tools to help you find edge cases to improve site loading times. If you're using one to get an overall impression of someone else's site, you're basically forming your first impression based on a simple count of the third-party plugins they implement, and how many lines of code their JavaScript file is, as opposed to the efficiency, effectiveness, or quality of said JavaScript code.

I really like W3C's tool. So beyond that, how can folk automate site checking for things other than valid code? As in, items that matter.

Both tools have helped fledgling web authors to get their sites up to speed with fewer errors. Isn't that a good thing?

So beyond that, how can folk automate site checking for things other than valid code? As in, items that matter.

I don't know of any free tools, but I have come across some paid platforms that can evaluate code quality, although I don't know how good they are because I've never tried. But it's like any other language, right? I can use a compiler to quickly detect if your C++ code has any parse errors preventing it from compiling (e.g. the equivalent of a W3C validator). But are there any good tools that actually evaluate the code to tell you if it's well-written, efficient, elegant code? In my experience, that really falls to the stronger, more experienced programmers to be able to evaluate code quality. If it was something that could be automated, then code wouldn't be an art, right?

Both tools have helped fledgling web authors to get their sites up to speed with fewer errors. Isn't that a good thing?

Tools such as GTMetrix do not help to point out or fix errors in any way, shape, or form. In fact, they have nothing at all to do with errors. They are simply evaluating loading time in a browser, and make some suggestions and recommendations on what you can do to cut corners and put caching in place to speed things up a bit. Their recommendations are rather broad, and not necessarily what's in the best interest of your web app, either. For example, on DaniWeb, they simply did a brute force check of all of the images that loaded to see if they are being cached by the web browser. They found and suggested that Google's tracking pixels should be cached. Obviously, they shouldn't be, or their functionality would break.

Tools such as GTMetrix are good as a last check to see if you forgot any low-hanging fruit that could help improve your loading times, but only after you've ensured your application is well written and your code is efficient. However, it's then up to you to look through each of their suggestions and decide whether acting on it makes sense for your use case or not. The score they assign is completely arbitrary, and really has much more to do with how many lines of code there are than anything else.

What paid tools do you like?

After seeing many folk correct errors on their own with tools such as those mentioned so far, should I recant? And, well, what should we be doing?

Tools such as GTMetrix can be used to make improvements, so suggesting that people use it to check their site and look into the suggested recommendations is not bad advice. However, two important things to note: It just checks site load times in your browser and suggests some browser recommendations you can employ for more efficient client-side caching, etc. It has nothing at all to do with the quality, efficiency, performance, or accuracy of the code. The other thing to note is it just gives you a brute-force list of all possible suggestions that have the potential to improve performance in certain conditions. They are not recommendations that are specific to your web app. It's up to you to not blindly MacGyver ways to use them all, which would do more harm than good, but rather figure out which ones make sense for your use case and your environment/stack, and employ the low-hanging fruit.

The best way I could describe it would be if this were C++, and there was a GTMetrix for your code that basically resulted in:

  • We detect you are using an If-Else statement with multiple Else-Ifs. Consider using a Switch statement instead.
  • We detect you are using a class with no public methods. Consider using a struct instead.
  • We detect you are defining a variable and never redefining it. Consider declaring it as const instead.
  • etc.

Not all of the recommendations make sense for all use cases, and it's up to you to decide which makes the most sense for your web app. I should also note that my C++ example actually evaluates code, whereas GTMetrix doesn't evaluate lines of code at all. It simply measures load time in a web browser and suggests performance tuning, such as telling the web browser to cache certain page elements, and does not take into consideration how complex the code is or isn't. Again, let me repeat: it does not evaluate a website's code at all.

What paid tools do you like?

I have no recommendations to offer because I've never used any. I just know they exist because I've come across landing pages or ads for paid tools over the years. However, it goes back to my initial thought, which was that if it were possible to automate evaluating code quality, then coding wouldn't be the art form that it is.

You know what ... I actually thought about it some more, and I realized there actually are tools that can be used to test each different component of a site. For example, I rely on Screaming Frog and DeepCrawl for my SEO audits. Again, it's an audit ... it audits your site and gives you very detailed data, but then it's up to you (or the SEO you hire) to figure out how to make that actionable.

It's the same with performance audits such as GTMetrix or Chrome PageSpeed Insights. They are automated tools which perform audits on your site's performance, but then it's up to a non-AI intelligence (aka a human developer) to decide what should be made actionable.

We've veered off the welcome mat here. I'll share that we turn on -Wall for GCC compiles. For our code, we resolve all the errors and warnings, or understand why GCC complains. Just because it works isn't good enough.

I think the difference is that W3C is a validator ... the markup is either correct or it's not. All these other tools are just providing suggestions to optimize browser-side caching to either increase the speed of HTTP requests, or simulate performance enhancements. The point I was initially trying to make was using a score based on a count of how many optimizations out of a pre-defined list your web app is using to gain perceived performance improvements isn't necessarily a reflection of a web app's quality. But yes, I digress. Hello Linkstore ;)
