Nofollow

NoFollow Link Attribute

In January of 2005, Google made the unprecedented move of actually working together with the other major search engines, and they agreed! The result was the nofollow attribute for links. Many people continue to call it the “nofollow tag”, but that terminology is incorrect – nofollow is an attribute value applied to a link, not a tag of its own.

The impetus behind this was threefold. First, it’s common for people who are desperate for links to buy them, and for webmasters desperate for money to sell them. As a business transaction, this is no problem, but because MSN, Yahoo and Google all use links as a “voting” system, it caused a lot of issues: these are not votes of confidence but advertising links, and therefore they work against the idea of using backlinks to assign credibility and authority to sites.

Second, there are a lot of blogs out on the web that were started and then abandoned by their owners. Spammers have taken to adding links to their own sites in the comment areas of these blogs, further distorting the relevancy algorithms of the search engines.

Finally, sometimes you may want to link to something you find objectionable in order to shine a light on it – a link to a racist site from an anti-racist one, for example. The last thing you want such a link to do is tell a search engine that you approve of, and vote for, the site it leads to, and yet that’s exactly how link analysis works.

As a result, Google, MSN and Yahoo have agreed upon a new attribute – rel=”nofollow”. This allows a website owner to mark a link as “untrusted” or something similar. Each engine will treat the attribute within its own results in its own way (they may or may not spider the link, they may or may not assign link weight to it, etc.).

Will this attribute get rid of spam? No. But it’s yet another tool in the webmaster’s and SEO’s toolkit.

How do I tell Googlebot not to crawl a single outgoing link on a page?

Meta tags can exclude all outgoing links on a page, but you can also instruct Googlebot not to crawl individual links by adding rel="nofollow" to a hyperlink. When Google sees the attribute rel="nofollow" on hyperlinks, those links won’t get any credit when we rank websites in our search results. For example, a link

<a href="http://www.example.com/">This is a great link!</a>

could be replaced with

<a href="http://www.example.com/" rel="nofollow">I can't vouch for this link</a>

Source: http://www.google.com/intl/en/webmasters/bot.html

Usage

The usage is quite simple – the attribute looks like this:

<a href="http://www.untrustedsite.com/" rel="nofollow">Untrusted Site</a>

If you use FrontPage 2003 or earlier (as well as many other WYSIWYG editors), you must add the attribute manually in code view – there is no “right click” option, since the attribute did not exist when those editors were created.

Conclusion

Ask does not intend to support this attribute in the near future, since their link-measuring algorithm is not as susceptible to manipulation as those of the other three search engines. But using it certainly will not break links in any way. In short, if it makes sense to use nofollow, then you should do so. Currently, there is no known downside to it, other than the fact that the person buying the link on your site may have been counting on the link weight, and may be upset that you have deprived them of it.

For linking to sites you hate, it’s great. For commercial links, there are ongoing arguments about the ethics and responsibilities of website owners. On the one hand, if you add a whole bunch of links that are not related to your site without nofollow on them, you may harm your own relevancy score. On the other hand, if you do add the attribute, those who paid you or traded links with you will likely get upset.

Finally, if spammers know that their links are not going to count, you may find they stop trying to add them. The problem is that since this type of spam is automated and done on a mass scale rather than personally, it’s not likely to make much of a difference in the number of attempts – just in the effectiveness of the spam itself.

Use it wisely and carefully, and be aware of what you are saying when you do.

Rule of thumb: if you want to restrict robots from entire websites and directories, use the robots.txt file. If you want to restrict robots from a single page, use the robots meta tag. If you want to restrict the spidering of a single link, use the “nofollow” attribute on that link.

Granularity               Best Method
Websites or Directories   robots.txt
Single Pages              robots meta tag
Single Links              nofollow attribute
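
To make the three levels concrete, here is a minimal sketch of each method (the /private/ directory and the example.com URL are hypothetical placeholders, not taken from any source above):

# robots.txt – keep all robots out of an entire directory
User-agent: *
Disallow: /private/

<!-- robots meta tag, placed in the <head> of a single page -->
<meta name="robots" content="noindex, nofollow">

<!-- nofollow attribute, applied to a single link -->
<a href="http://www.example.com/" rel="nofollow">Untrusted link</a>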

Geolocation 01

NOTE: This article is out of date. An updated one can be found here: Should SEOs Redirect or Park For Geolocation? and a complete guide to Website Geolocation is in progress.

Only in Canada, eh?

When people do a search with “Canadian Sites Only” selected, the search engines filter out what they think are non-Canadian sites. How the search engines make this decision may surprise and shock you. This article applies to people in other countries as well.

Local Searches

When a person wishes to search for a product or service that is specific to Canada, they will often choose the “Canadian Sites Only” button on Google.ca, Sympatico.ca, or a number of other Canadian search engines. Searchers in the UK and other countries do the same for their local search engines – searching only for websites from their own country.

 

[Screenshots: the Google.ca “Pages from Canada” option and the Sympatico.ca “Canadian Sites Only” option]

 

I recently did a ranking report on a highly regarded law firm in Calgary, Alberta, Canada. They ranked fairly well in Google (the most popular search engine in the world), had a nice site, and felt that they had basically no real problems, website-wise.

As part of my services, I check local searches whenever a company is not from the US. When I did a “Canadian Sites Only” search, they did not show up! Even typing in the domain name specifically (a search that will turn up even the most poorly optimized site) did not return their site. A few companies that had linked to them showed up, but not the firm itself.

A quick check traced their website to a web host in Florida, who is no doubt giving them a good price. Unfortunately, anyone looking for a lawyer in Canada (or Calgary) who searches for “Canadian” websites will never find them.

Some services – those requiring detailed knowledge of local affairs, like lawyers, accountants and other professionals – are obvious candidates for these searches. It doesn’t matter to the searcher how great a lawyer from Brazil is when they have a problem in Italy. Usually they do not wish to wade through pages of non-local results hoping that someone who could help them will eventually show up.

It is critical that anyone whose work is related to or derived from the government or legal system of a particular country show up not only for generic searches, but also for country-specific ones.

Additionally, many people prefer to deal with local companies, either out of a desire to support local businesses, or to ensure that if there is a serious problem, the company can be found and dealt with under local courts and laws.

Both of these situations provide a great opportunity for smaller, local websites to rank highly on a search engine, since the search engine discards non-local results and presents only local ones when this option is selected.

How the Decision is Made

Most people don’t really think about how the search engines decide where a specific website is located. They assume that perhaps the engine looks for an address on the page, or at the domain extension.

Many people in Canada have registered their domains as .com, and not .ca. If you have a .com website, how would a search engine decide if you were Canadian or not? What about foreign companies who just bought a .ca domain?

If a spammer put the word Canada 10 times on his page, or typed in a Canadian address, would he be considered Canadian? If that worked, spammers could just create pages full of the names of each country they wanted to rank highly in.

Knowing this, the programmers of the major search engines have decided to use the IP address of a website to determine what country it’s from. If you register a .co.uk (United Kingdom) or .com / .net / .org domain but host it in Canada, you will get a Canadian IP address from your web host, and therefore be considered a Canadian website.

Basically, the country of your website’s “residence” is the distinguishing factor, not its owner, domain name or content.

So What’s the Problem?

The problem is that most web hosts don’t know this, and often wouldn’t care much if they did. Most purchasers of web hosting are interested in the price, performance and features of the web server, not the location of its IP. Likewise, many web hosts, including Canadian website hosts, buy reseller or co-location packages from US companies due to the price, performance and features.

So it’s not uncommon for a Canadian website host to have IPs that are provided by the US company supplying their upstream service. In short, they are using US IPs. The US company cannot even offer Canadian IPs to its clients and resellers without opening a physical presence in Canada.

So it’s possible for a Canadian company, selling products and services to Canadians, with a .ca domain name and a Canadian website host (whom they are paying in Canadian dollars), not to be considered Canadian – and to likely be removed from any searches looking for Canadian companies!

So how can you tell if your site has a local IP address?

Well, if you show up on Google during a normal search, then the same search with “Pages from Canada” selected should display your site as well. If it doesn’t, then you are not considered to be a Canadian site, and your IP is likely the culprit.
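
If you want to check the IP directly, here is a quick sketch using standard command-line tools (the domain and IP below are hypothetical placeholders):

nslookup www.example.ca
(note the IP address it returns, e.g. 192.0.2.1)

whois 192.0.2.1
(the registry record for the IP block normally includes a country field, which tells you where the address is allocated – and that is what the search engines go by)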

Conclusion

Even if you don’t care whether or not you are considered to be from your local country, we strongly recommend getting a local IP anyway. The fact that a large number of internet users routinely use local searches means that your site may be getting dropped from legitimate, focused searches.

It’s worth noting that Google.com does not offer a “US Sites Only” button, so by being hosted on a local IP you get the best of both worlds – you don’t get filtered out by either US customers or those from your home country. Note: some smaller portals in the UK (and elsewhere) filter by the country extension (like .co.uk) instead of IP, but I know of none in Canada that filter by .ca – it’s always by IP.

One easy way to find a web host using local IPs is to do a search for a web host with “Canadian Sites Only” selected, and choose someone from that list. Since a host may have more than one upstream provider, it’s still important to check with them to make sure they put you on a local IP address. If they don’t know what you are talking about, you likely need a different host.

It would be a real shame to have a great website and then find that your most highly qualified customers can’t see it.

This article was also published in WebPro News and Web Pro News Canada.

Recommended Reading: Should SEOs Redirect or Park For Geolocation?

Unless otherwise noted, all articles written by Ian McAnerin, BASc, LLB. Copyright © 2002-2004 All Rights Reserved. Permission must be specifically granted in writing for use or reprinting anywhere but on this site, but we do allow it and don’t charge for it, other than a backlink. Contact Us for more information.