robots.txt tells the search engines which pages to crawl and which not to. There are robots.txt generators online, and a sample robots.txt line looks like this:
Sitemap: [your site's sitemap.xml URL]
hope this helps
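To flesh that out, here is a minimal robots.txt sketch — the directory names and sitemap URL are placeholders, not required values:

```
# Rules for all crawlers
User-agent: *
# Block these (hypothetical) paths from being crawled
Disallow: /admin/
Disallow: /tmp/
# Point crawlers at your XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

`User-agent: *` applies the rules to every crawler; each `Disallow` line blocks one path prefix; the `Sitemap` line tells crawlers where to find your sitemap.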
Hi, robots.txt is a file that tells search engine crawlers/bots not to crawl particular pages or content. If you want search engines not to crawl some of your pages, you should use a robots.txt file to give crawlers direction about which pages they should crawl and which they should not.
A robots.txt file restricts access to your site by search engine robots that crawl the web. These bots are automated, and before they access pages of a site, they check to see if a robots.txt file exists that prevents them from accessing certain pages.
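You can see this check in action with Python's standard-library `urllib.robotparser`. This is just a sketch: the rules are parsed from an inline string (with a made-up `/private/` path and a made-up bot name) so the example is self-contained; a real bot would fetch the site's actual robots.txt instead.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules, parsed from a string so the example is self-contained
# (a real bot would fetch https://example.com/robots.txt instead).
rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The parser answers the same question a crawler asks before fetching a page:
print(rp.can_fetch("MyBot", "https://example.com/private/page.html"))  # blocked -> False
print(rp.can_fetch("MyBot", "https://example.com/public/page.html"))   # allowed -> True
```

Well-behaved crawlers run exactly this kind of check before requesting a page, which is why robots.txt works at all.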
Robots.txt is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website which is otherwise publicly viewable.
Robots.txt is a file which spiders read to determine which parts of a website they may and may not visit.
By default (in some CMS installs), your robots.txt file keeps bots out of directories such as /images/, /components/, etc.
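Using the directories mentioned above, such a stock file might contain lines like this (the exact paths depend on your CMS, so treat these as examples only):

```
User-agent: *
Disallow: /images/
Disallow: /components/
```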
Basically, robots.txt is a plain text file placed in a server's root directory. It includes information on whether search engine robots should index the site or parts of the site, and it is mostly used to hide some of a site's content from search engines. Why don't you Google it?
A robot is technically a program from search engines like Google, Yahoo, and MSN that is sent out on the internet to do the job of finding new websites, indexing them, and gathering the right information about them. They are sometimes called "spiders", "crawlers", or even "bots".
Robots.txt is a text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit. Robots.txt is by no means mandatory for search engines, but generally search engines obey what they are asked not to do.