robots.txt tells the search engines which pages to crawl and which not to. There are robots.txt generators online; a sample robots.txt looks like this:
User-agent: *
Sitemap: https://www.yoursite.com/sitemap.xml
Disallow: /cgi-bin/
Disallow: /wp-includes/
Disallow: /wp-content/
Disallow: /wp-admin/
hope this helps

robots.txt is used to control search engines and crawlers on your site. It is a plain text file accessed over HTTP.

Robots.txt is a text file that grants or denies spiders/crawlers permission to crawl your site.

By default everything is crawlable, so your site gets crawled by spiders, but if you Disallow a file in robots.txt, compliant crawlers won't fetch it. (Note that nofollow is a link/meta attribute, not a robots.txt directive.)

You can check it at: xyz.com/robots.txt
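You can also test rules programmatically with Python's built-in `urllib.robotparser`. This is just a sketch: the rules below mirror the sample earlier in this thread, and example.com is a placeholder domain.

```python
# Check whether a crawler may fetch a URL, using Python's stdlib
# robots.txt parser. Rules are parsed from a string here instead of
# being downloaded, so the example is self-contained.
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /cgi-bin/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A disallowed path is blocked; everything else is allowed.
print(rp.can_fetch("*", "https://example.com/wp-admin/options.php"))  # False
print(rp.can_fetch("*", "https://example.com/blog/hello-world"))      # True
```

In a real check you would call `rp.set_url("https://yoursite.com/robots.txt")` and `rp.read()` to download the live file instead of parsing a string.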

Hi. robots.txt is a file that tells search engine crawlers/bots not to crawl particular pages or content. If you want search engines to skip some of your pages, use a robots.txt file to tell crawlers which pages they may crawl and which they may not.

A robots.txt file restricts access to your site by search engine robots that crawl the web. These bots are automated, and before they access pages of a site, they check to see if a robots.txt file exists that prevents them from accessing certain pages.

Robots.txt is a convention to keep cooperating web spiders and other web robots out of all or part of a website that is otherwise publicly viewable.
Robots.txt is a file which spiders read to determine which parts of a website they may and may not visit.
By default, some CMS installs ship a robots.txt that keeps bots out of /images/, /components/, etc.

Basically, robots.txt is a plain text file placed in a server's root directory. It tells search engine robots whether they should index the site, or parts of it, and is mostly used to hide some of a site's content from search engines. Why don't you Google it?

A robot is technically a program from search engines like Google, Yahoo and MSN that is sent out on the internet to find new websites, index them and gather the right information about them. They are sometimes called "spiders", "crawlers" or even "bots".

Robots.txt is a text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit. Robots.txt is by no means mandatory for search engines, but generally they obey what they are asked not to do.
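Rules can also target one crawler by its User-agent name while leaving everyone else unrestricted. A small sketch (the path here is just a placeholder):

```
# Keep only Google's crawler out of /private/
User-agent: Googlebot
Disallow: /private/

# All other bots may crawl everything (empty Disallow = no restriction)
User-agent: *
Disallow:
```

Each User-agent group is matched independently, so a bot uses the most specific group that names it and ignores the rest.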

It is used to prevent crawlers from indexing any of your personal files, pages, etc.
