
We have a set of valid URLs under the /foo path on the site.

If I want to disallow them all in robots.txt, are both of the following valid, and will they do the same thing?

Disallow: /foo
Disallow: /foo/

Will the latter also block the bare URL /foo, or will /foo be interpreted as a page directly under the root rather than within the foo directory? Conversely, will the former be interpreted as blocking only that single page and not the foo directory?

Using "Disallow: /foo/" would block the foo directory and everything in it.

Technically, without the trailing slash, the rule names one item, such as a single file. I would assume that indicates a disallow on a single file named foo, not the directory /foo/.
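Feeding that form to the same parser suggests it is broader than a single file: urllib.robotparser treats Disallow values as prefix matches, so the slash-less rule also catches the directory and even other paths that merely start with /foo (again a minimal sketch against a hypothetical example.com):

from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /foo",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/foo"))      # False: the bare URL
print(rp.can_fetch("*", "https://example.com/foo/bar"))  # False: the directory contents too
print(rp.can_fetch("*", "https://example.com/foobar"))   # False: any path starting with /foo
print(rp.can_fetch("*", "https://example.com/bar"))      # True: unrelated path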

You can also use "Disallow: /foo*/" to block any subdirectory that begins with "foo". Note that * wildcards are an extension honored by major crawlers such as Googlebot rather than part of the original robots.txt standard, so not every parser understands them.
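urllib.robotparser does not understand those wildcards, but the way wildcard-aware crawlers match such patterns can be approximated with a small regex translation. This is purely a hypothetical illustration of the matching idea, not any crawler's actual code:

import re

def rule_matches(rule: str, path: str) -> bool:
    # Anchored prefix match where '*' stands for any run of characters.
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in rule)
    return re.match(regex, path) is not None

print(rule_matches("/foo*/", "/foobar/page.html"))  # True: subdirectory starting with foo
print(rule_matches("/foo*/", "/foo/page.html"))     # True: /foo/ itself also matches
print(rule_matches("/foo*/", "/other/foo/"))        # False: must match from the start of the path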

BTW, Google has a robots.txt testing tool in its webmaster tools that will check the file and report the results.
