Actually, I own a website where a particular set of inner pages is not getting indexed in Google and other search engines. I have checked the sitemap.xml, the robots.txt file, and also the meta robots tag, and I have not found any issues. Then I did a Fetch as Google via Google Search Console, but the result was a 404 Page Not Found. So, I need some proper guidance to rectify this issue.


From my observations, Google won't necessarily index every web page it is aware of within a domain, regardless of how you bait it, particularly if the page is extremely similar to others within the domain. The fact that it doesn't index a page returning a 404 is logical.

Try building some backlinks for those URLs. Maybe then Google will consider them.

The page to be indexed must contain rich, unique content. Ideally it should be based on research, and visitors should feel enlightened when reading it.

That said, it should also have a fair amount of popularity on the internet.


You have to create unique content for the pages, make the page design more responsive, and try to build some backlinks.

A friend recommended a backlink indexer for this. I haven't used it myself yet, but you could give it a try.

Maybe Google is not finding anything relevant to index on your web pages. Check whether your content actually matches the topic.

So, Google Fetch returned an HTTP 404 error?

You can rule out issues with robots.txt. The file tells bots which resources they should not request. You simply would not receive an HTTP response (or error), because a well-behaved bot will not make requests for blocked resources.
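If you want to double-check this locally, Python's standard library can evaluate robots.txt rules for you. This is a minimal sketch; the rules, user agent, and URLs below are made-up examples, not taken from the site in question:

```python
# Check whether robots.txt rules would block a given URL.
# The rules and URLs here are hypothetical examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

# An inner page outside the disallowed path should be fetchable;
# anything under /private/ should not.
print(rp.can_fetch("Googlebot", "https://example.com/articles/page-1"))  # True
print(rp.can_fetch("Googlebot", "https://example.com/private/secret"))   # False
```

If `can_fetch` returns True for the affected inner pages, robots.txt is not the culprit, which matches the reasoning above.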

The robots meta tag can also be ruled out. The tag is embedded in an HTML document and can only be read if the page is retrieved, which would mean an HTTP 2xx response (document found), not a 404.
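For reference, this is what the tag looks like when present; it sits in the document's head, so a bot can only ever see it after a successful fetch (the values shown are the defaults, included here purely as illustration):

```html
<!-- In the <head> of the page; "index, follow" is the default behaviour -->
<meta name="robots" content="index, follow">
```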

It's probably not a firewall issue because other pages on the site can be accessed, or a permissions issue as that would result in an HTTP 403 Forbidden error.

Sitemaps tell bots where to find resources on a host. If the sitemap contains errors, such as bad URLs, it will cause the web server to return an HTTP 404 Not Found when a bot attempts to download the resource.
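For comparison, a minimal well-formed sitemap entry looks like the fragment below; the URL is a placeholder. Each `<loc>` must be the exact, fully-qualified URL the server will answer, or the bot will request something the server cannot find:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/articles/page-1</loc>
  </url>
</urlset>
```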

I'd take a closer look at the URLs in your sitemap. Watch out for any unusual characters that might cause bots to truncate URLs. For example, I have known GoogleBot to mis-read URLs containing square brackets unless they were encoded as %5B and %5D.

Another possibility is a misconfigured redirect, but then why would that affect search engines and not all visitors to your site? If you do find a redirect is responsible it's probably safer to remove it. Search engines expect to see exactly the same content as shown to human visitors of your site.

If you're unable to find the fault, mail me a troublesome URL and I will look at it here.

Check in Google Webmaster Tools.

You can submit your web page to the search engines for indexing. Try searching for the URL prefixed with "cache:" in Google; that will tell you whether the page is indexed or not.

Try doing a Google Fetch for the inner pages one day after another. You can change the crawl-frequency hint for the pages in the XML sitemap. In parallel, check Google for a cached copy of your website.

The techniques you have tried are correct. The only thing is that Google might not have crawled the website immediately after submission. Wait for some time and keep doing the Fetch consistently.

Keywords are my primary concern here. Yes, do good keyword research based on what you serve or offer. Backlinking tactics should come second: based on those keywords, apply off-page SEO strategies to each page. Before any of that, the content on each page should be rich, since what matters most is who visits and how long visitors stay on each page.

