Hi, I am rather new to SEO. As you all know, one-way links are worth more than reciprocal links. Many sites, including some directories, ask for reciprocal linking (very annoying). If I link back to these sites but don't allow GoogleBot to index my links page, will the bot treat the links from those sites as one-way links, or will it still know they are reciprocal?
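
For context, by "don't allow GoogleBot to index my links page" I mean something like the following robots.txt rule (/links.html is just an example path for my links page):

# robots.txt -- tells Google's crawler not to fetch the links page
User-agent: Googlebot
Disallow: /links.html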

And would you consider this black-hat SEO?

It's still a reciprocal link, and it's not considered a black-hat technique.

But how will Google know the links are reciprocal if it cannot index my links page?

It would be cheating the other sites.

Don't fool yourself: just because you use robots.txt or rel=nofollow to "exclude" a page doesn't mean Google will not look at that page and evaluate it. Google has separate algorithms for detecting spam, and these do not behave the same as the ones that actually index pages. I've seen several examples of this recently -- their anti-spam detection is getting stronger and definitely goes by different rules.

So, even if you use robots exclusion on pages with reciprocal links, Google will see them and still discount the return links. Worse, if Google detects a pattern of abuse at your site, such as exchanging reciprocal links while trying to make them look one-way, you could be penalized or banned.

Think about it. Google wants to index quality, honest websites. You know this is a method to cheat PageRank, and not a very good one. Google will see it as an indication of a poor-quality site and treat you accordingly.

So, I would not do this. It may work in the short term, but it will catch up with you.

Thanks for the tip, John. I have to say that I agree with you; I just thought I'd post here to be sure. Though I was under the (naive?) impression that GoogleBot wouldn't crawl your page if your robots.txt file disallows it. I guess not.
Cheers, mate.

Though I was under the (naive?) impression that GoogleBot wouldn't crawl your page if your robots.txt file disallows it.

To be clear, the standard GoogleBot crawler will honor robots.txt, so by all means continue to use robots protocols (robots.txt, robots meta, rel=nofollow) to manage crawling and indexing of your site.
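
For reference, here's a minimal sketch of the other two mechanisms, alongside the robots.txt rule quoted earlier in the thread (the file names and URLs are only placeholders):

<!-- robots meta tag: asks compliant engines not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- rel=nofollow: asks engines not to pass credit through this one link -->
<a href="http://example.com/" rel="nofollow">Example partner site</a>

Just remember that these only control crawling and indexing by well-behaved bots; they don't hide the fact that two sites link to each other.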

However, Google has other 'bots that look at websites, evaluating them for cloaking, spam, relevancy, etc. Google also has experimental 'bots that follow many things you would not normally expect, such as forms, JavaScript links, etc. These don't crawl with the same frequency or under the same rules as the standard GoogleBot, but they can and will look at any page on your site, especially when looking for spam.
