www.domain.com/favicon.ico is showing up as a Soft 404 in my GSC Coverage report. I can't imagine blocking it with robots.txt because it seems as if bots might want to access it from time to time. Suggestions? Or should I just ignore?

All 4 Replies

I ran into the same problem with my website. One of my clients pointed out that it was reporting favicon.ico as a soft 404. To get rid of the error, I followed the tips below, which worked well for me:

  • It can be a content issue. If Google lists a page as a soft 404 even though the page exists, thin content is a common cause, and it's simple to fix by adding more material to the page. Edit the page's content if necessary and resubmit it to Google; once updated, the page is easier for Google to classify correctly, which helps clear the error.
  • Another option is to keep the page on your website but deindex it from search results. If a noindex directive is added to the page's header, the search engine will not index that specific page. Google then stops flagging it in the Coverage report, which resolves the soft 404 error.
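The noindex route in the second bullet can be checked programmatically. Here is a minimal sketch in Python (standard library only; the sample markup is hypothetical) that tests whether a page carries a robots noindex directive:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())

def has_noindex(html: str) -> bool:
    """True if the page contains a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)

# Hypothetical page that has been deindexed via the meta robots tag:
page = '<html><head><meta name="robots" content="noindex, follow"></head><body>...</body></html>'
print(has_noindex(page))  # True
```

Note that this only applies to HTML pages. A non-HTML resource like favicon.ico cannot carry a meta tag, so the equivalent there would be an `X-Robots-Tag: noindex` HTTP response header set by the server.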

These tips helped me to solve my issue, and now my page is working properly. You can also try it and let me know if this helped you solve your issue.

What page are you referring to? Yes, it's true that pages can be incorrectly categorized as soft 404s if they don't have much content. In that case, the solution is either to add more relevant content to the thin page or to deindex it from search results. However, my issue is with a favicon.ico file, not a page.

The content type for the image file is image/x-icon. There should be no reason for Googlebot to mistake it for an HTML page.
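One quick sanity check is to confirm what the server actually returns for the URL. Below is a hedged sketch in Python (standard library only; the URL is a placeholder, and Google's real soft-404 heuristics are not public, so this only flags the obvious mismatches):

```python
import urllib.request

def classify(status: int, content_type: str) -> str:
    """Rough triage of the signals a soft-404 call is usually based on:
    a 200 with an icon type is healthy; a 200 serving HTML where an
    icon was expected is the classic soft-404 pattern."""
    if status == 404:
        return "hard 404"
    if status == 200 and content_type.startswith(
        ("image/x-icon", "image/vnd.microsoft.icon")
    ):
        return "ok"
    if status == 200 and content_type.startswith("text/html"):
        return "possible soft 404 (HTML served where an icon was expected)"
    return "inspect manually"

# Live check against your own site (placeholder URL, uncomment to run):
# resp = urllib.request.urlopen("https://www.domain.com/favicon.ico")
# print(classify(resp.status, resp.headers.get("Content-Type", "")))

print(classify(200, "image/x-icon"))               # ok
print(classify(200, "text/html; charset=utf-8"))   # possible soft 404 ...
```

If the live check reports `ok`, the file is being served correctly and the GSC flag is more likely noise on Google's end.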

There is no need to block favicon.ico with robots.txt, as it is not an important file for SEO purposes; you can safely ignore the warning in your GSC Coverage report. That said, if you are confident that bots should not be accessing that URL, you can block it with robots.txt. Otherwise, just ignore it.

Of course I am not blocking favicon.ico with robots.txt. I am aware of the importance of favicon. That’s why I was worried when I posted this topic.

However, it appears to have fixed itself. Apparently it was a bug on Google’s end. I will mark this as resolved.
