I want to disallow certain query strings with robots.txt and I want to check I am doing this correctly... I am using:

Disallow: /browse.asp?cat=*-*

I want to check that this rule will allow these URLs to be indexed:

browse.asp?cat=123/1234/1234-1
browse.asp?cat=123/1234-1

while disallowing these URLs:

browse.asp?cat=1234-1
browse.asp?cat=1234-2

Will this rule work, or will the wildcard cause it to disallow all the cat query strings? Thanks.
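For what it's worth, the wildcard behaviour can be checked offline. Below is a minimal sketch of Google-style robots.txt pattern matching (`*` matches any run of characters, a trailing `$` anchors the end); the function name is mine, not part of any library. Note that any cat value containing a hyphen matches `*-*`, including the ones hoped to stay indexable:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Google-style robots.txt matching: '*' is a wildcard,
    a trailing '$' anchors the pattern to the end of the URL."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape everything except '*', which becomes '.*'
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

rule = "/browse.asp?cat=*-*"
print(robots_pattern_matches(rule, "/browse.asp?cat=123/1234/1234-1"))  # True  (disallowed)
print(robots_pattern_matches(rule, "/browse.asp?cat=1234-1"))           # True  (disallowed)
print(robots_pattern_matches(rule, "/browse.asp?cat=1234"))             # False (allowed)
```

So the rule blocks every cat value that contains a hyphen, regardless of how many slashes precede it.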


Hi friends! I've just started working on SEO for my sites, but I have problems in my first attempt. I signed in to Google Webmaster Tools, added a site, and verified it successfully. My site even started appearing in Google Search, and within a very short time it was on the first-page results! But then it disappeared from Google Search. Even if I search for site:kilivacation.com it's not showing up; Google says there is no such site... if you are the owner.... Going into Google Webmaster Tools, if I go to Crawl --> robots.txt Tester, I get the error "You have a robots.txt file …


Hi, I am going to submit a new website (developed in PHP) to Google today. I have an 'includes' folder which has header.php, footer.php and navigation.php files. I have included them in each web page of my site using <?php include ('./includes/header.php'); ?>. I do not want these PHP files to be indexed separately in Google, so: can I block the includes folder, and if I block header/footer/navigation, will it affect the pages? Will all my pages be indexed as usual, with the HTML from header/footer/navigation? Since I have completed my website, I can not do major changes now :) thanks for coming …
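Blocking the folder should be safe: PHP's include() runs server-side, so crawlers never fetch header.php through your pages; they only see the final rendered HTML, which is indexed normally. A minimal robots.txt sketch (assuming the folder is named /includes/, as in the include path above):

```
User-agent: *
Disallow: /includes/
```

This only stops crawlers requesting the .php files directly; the HTML that those includes contribute to each page is unaffected.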


We have the valid URLs: www.daniweb.com/foo, www.daniweb.com/foo/, www.daniweb.com/foo/1, www.daniweb.com/foo/2, www.daniweb.com/foo/3. If I want to disallow them all in robots.txt, are *both* of these valid, and will they do the same thing?

Disallow: /foo
Disallow: /foo/

Will the latter also block the URL www.daniweb.com/foo, or will that be interpreted as a page directly under the root directory and not within the foo directory? Conversely, will the former be interpreted as blocking only the single page and not the foo directory?
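One way to sanity-check this is with Python's standard urllib.robotparser, which applies plain prefix matching like the original robots.txt spec (a sketch; the helper function name is mine):

```python
from urllib.robotparser import RobotFileParser

def can_fetch(robots_txt: str, url: str) -> bool:
    """Parse a robots.txt body and ask whether '*' may fetch the URL."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("*", url)

without_slash = "User-agent: *\nDisallow: /foo"
with_slash = "User-agent: *\nDisallow: /foo/"

# "Disallow: /foo" is a prefix match: it blocks /foo AND everything under it.
print(can_fetch(without_slash, "http://www.daniweb.com/foo"))    # False (blocked)
print(can_fetch(without_slash, "http://www.daniweb.com/foo/1"))  # False (blocked)

# "Disallow: /foo/" only blocks paths under the directory, not /foo itself.
print(can_fetch(with_slash, "http://www.daniweb.com/foo"))       # True  (allowed)
print(can_fetch(with_slash, "http://www.daniweb.com/foo/1"))     # False (blocked)
```

So the two rules are not equivalent: only the version without the trailing slash blocks all five URLs. One caveat of the prefix behaviour is that `Disallow: /foo` would also block an unrelated path like /foobar.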


My URL is: http://www.mildaspergers.com/sitemap.txt and for robots: http://www.mildaspergers.com/robots.txt Hello, when I started, I only submitted one site to Google (which went fine). Then I tried adding more pages, and it would not go through. I have added a phpBB script and tried editing it, to no avail. I tried messing around with eliminating or re-uploading my robots.txt file; no go. So what is the deal? I can reach it with my browser, but not with the fetch or sitemap tool. I have 644 permissions on both, so that is not the problem. I would ask my host, but the only problem is …


I would like to know which parts of a website an expert SEO should disallow in the robots.txt file. Which pages are better not shown to search engines?
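There is no universal list, but a common pattern is to disallow pages that have no search value or that waste crawl budget: admin areas, carts and checkouts, internal search results, and login pages. An illustrative sketch only; the paths below are hypothetical examples, not a prescription for any particular site:

```
User-agent: *
Disallow: /admin/
Disallow: /cart/
Disallow: /checkout/
Disallow: /search
Disallow: /login
```

Keep in mind that robots.txt only controls crawling, not indexing; for pages that must never appear in results, a noindex meta tag (on a crawlable page) or authentication is the safer tool.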


I have created a subdomain and put in a robots.txt file as follows:

User-agent: *
Disallow: /

Will it affect my main domain? I have a robots.txt file in my main domain as follows:

User-agent: *
Allow: /
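Crawlers fetch robots.txt separately for each hostname, so the file on the subdomain governs only that subdomain and cannot affect the main domain. A sketch of the two independent files (sub.example.com and www.example.com are placeholder hostnames):

```
# http://sub.example.com/robots.txt -- blocks crawling of the subdomain only
User-agent: *
Disallow: /

# http://www.example.com/robots.txt -- the main domain is unaffected
User-agent: *
Allow: /
```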


I have page1.html that is being 302 redirected (temporary redirect) to page2.html, and page2.html is disallowed in my robots.txt file. Under normal circumstances, when Googlebot encounters a 301 redirect from page1.html to page2.html, it will index page2.html, and when Googlebot encounters a 302 redirect from page1.html to page2.html, it will index page1.html. Since, theoretically, the URL of page1.html is what would be indexed, would it still be indexed considering page2.html is blocked?


I want to report dead links on my site. My current script does work, but it allows search engine bots to click the dead-link report, which makes it hard for me to determine which reports are from visitors and which are from bots. What am I doing wrong here? The bots are still able to click the link and send the report to me. Any help is much appreciated. Thanks.

[CODE]
$x = $_GET['Deadlink'];
$bots = array(
    'bing'   => 'http://onlinehelp.microsoft.com/en-us/bing/gg132928.aspx',
    'yahoo'  => 'http://help.yahoo.com/help/us/ysearch/slurp',
    'google' => 'http://www.google.com/support/webmasters/bin/answer.py?answer=182072',
    'ask'    => 'http://www.ask.com/questions-about/Webmaster-Tool',
);
$agent = strtolower($_SERVER['HTTP_USER_AGENT']);
foreach ($bots as $name => $bot) {
    // Compare the user agent against the bot's NAME ($name); the original
    // compared it against $bot, the documentation URL, which never matches.
    if (stripos($agent, $name) !== false) { …

