According to Wikipedia:
"Cloaking is a black hat search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the users' browser."
DAMN! I am writing a BLACK HAT SEO article again.
Cloaking is a term used for presenting one page to human visitors and an entirely different page to search engine spiders or bots for the same URL. That means the page you find at www.abc.com is entirely different from the page a search engine spider finds at www.abc.com.
Cloaking is done by cloaking software that compares the IP address of the incoming request for a webpage to a database of known IP addresses of specific search engine spiders. If the IP address matches one on the list, it serves a page that was written specifically for the search engines; if not, it shows a page made for human visitors. Most cloaking software has all the search engine spiders' IP addresses indexed, and the lists are updated regularly.
So where is the BLACK HAT in there?
Ahh! I get it. You can build hundreds of targeted, keyword-rich pages designed specifically for the search engine spiders, or a link farm with 1000 links pointing to a website (HINT: use 50 links pointing to Google.com), while at the same time building a regular page for human visitors so they don't get annoyed. So whenever a search engine spider visits your site, it is detected and sent to those exclusive pages built just for it. On the other hand, whenever a human visitor arrives, the software detects that the IP is not from a search engine spider, so the human lands on the regular, non-exclusive page.
Cloaking software looks at the following things when someone comes to a website (there is a small sketch after this list):
- IP address
- User-Agent
- The HTTP_REFERER header
- The HTTP Accept-Language header
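To make that concrete, here is a minimal sketch in Python of a server reading those four things from an incoming request. It uses only the standard library's http.server; the handler name and the port are my own choices for illustration, not anything from a real cloaking product.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class InspectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]                         # IP address
        user_agent = self.headers.get("User-Agent", "")     # User-Agent
        referer = self.headers.get("Referer", "")           # HTTP_REFERER header
        language = self.headers.get("Accept-Language", "")  # HTTP Accept-Language header

        # Echo back what a cloaking script would see for this request.
        body = (f"ip={ip}\nuser-agent={user_agent}\n"
                f"referer={referer}\nlanguage={language}\n").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), InspectHandler).serve_forever()
```

A cloaking script makes its serve-this-page-or-that-page decision from exactly these values.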
Types Of Cloaking
There are two types of cloaking:
- User Agent Cloaking
- IP based Cloaking
User Agent Cloaking
In User Agent Cloaking, the cloaking script compares the User-Agent string sent when a page is requested with its list of search engine User-Agent names, and then serves the appropriate page. Generally, if a search engine spider requests a page, the User-Agent variable contains the name of the search engine. If the cloaking script does not detect the name of a search engine in the User-Agent variable, it assumes that the request has been made by a human being and delivers the page designed for human beings.
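Here is a minimal sketch of what such a check could look like, in Python. The spider name list and the two page filenames are illustrative assumptions, not taken from any real cloaking software.

```python
# Hypothetical list of substrings that identify search engine spiders.
SPIDER_NAMES = ("googlebot", "bingbot", "slurp", "duckduckbot")

def page_for(user_agent: str) -> str:
    """Pick which page to serve based on the User-Agent string."""
    ua = user_agent.lower()
    if any(name in ua for name in SPIDER_NAMES):
        return "optimized_page.html"  # keyword-rich page for spiders
    return "regular_page.html"        # normal page for human visitors

print(page_for("Mozilla/5.0 (compatible; Googlebot/2.1)"))   # optimized_page.html
print(page_for("Mozilla/5.0 (Windows NT 10.0; rv:125.0)"))   # regular_page.html
```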
IP Based Cloaking
This is the more complicated method of cloaking. It involves the IP addresses of search engine spiders. When a visitor (a search engine or a human) requests a page, the cloaking script checks the IP address of the visitor. If the IP address is present in the IP database, the cloaking script knows that the visitor is a search engine and delivers the page optimized for that search engine. If the IP address is not present in the IP database, the cloaking script assumes that a human has requested the page, and delivers the page meant for human visitors.
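And a minimal sketch of the IP-based variant, again in Python. The two sample networks below stand in for the big, regularly updated spider IP database that real cloaking software ships; treat them as placeholders.

```python
import ipaddress

# Placeholder "spider IP database": networks publicly associated with crawlers.
SPIDER_NETWORKS = [
    ipaddress.ip_network("66.249.64.0/19"),   # range associated with Googlebot
    ipaddress.ip_network("157.55.39.0/24"),   # range associated with Bingbot
]

def page_for_ip(client_ip: str) -> str:
    """Pick which page to serve based on the visitor's IP address."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in SPIDER_NETWORKS):
        return "optimized_page.html"  # IP matched the spider database
    return "regular_page.html"        # assume a human visitor

print(page_for_ip("66.249.66.1"))     # optimized_page.html
print(page_for_ip("203.0.113.10"))    # regular_page.html
```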
How Do Search Engines Detect Page Cloaking?
- If a site is using the user-agent cloaking method, the search engine can send a spider that does not report a user agent name (a do-it-yourself version of this test follows the list).
- If a site is using the IP-based cloaking method, the search engine can send a spider or bot from an IP address it has never used before.
- A human representative from the search engine. Emm. Well, it may be possible.
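You can run a crude version of the user-agent test yourself: fetch the same URL twice with two different User-Agent strings, one browser-like and one spider-like, and compare the responses. If they differ, the site is probably user-agent cloaking. A Python sketch using the standard library's urllib; the URL is a placeholder, and the Googlebot string is the one Google publishes for its crawler.

```python
import urllib.request

def fetch(url: str, user_agent: str) -> bytes:
    """Fetch a URL while presenting the given User-Agent string."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

url = "http://www.example.com/"  # placeholder: put the site you want to test here
as_browser = fetch(url, "Mozilla/5.0 (Windows NT 10.0; rv:125.0) Gecko/20100101 Firefox/125.0")
as_spider = fetch(url, "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

print("Different content served!" if as_browser != as_spider else "Same content for both.")
```

Note that this only catches user-agent cloaking; an IP-based cloaker will serve you the human page both times, since your requests come from your own IP.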
So Does This Technique Still Have Some Worth?
I don't think so, because Google now gives importance not to the number of links but to their quality and relevancy. So this type of thing either doesn't work or gets caught after some time, resulting in that sweet ban from Google.
Here is a great visual representation of the whole process of cloaking.
The image is from ELLIANCE: LINK. There you can also find visual representations of some other SEO processes.
Here is a Firefox extension to change your user agent name so you can check whether a site is cloaking or not: Extension
Any suggestions or comments are welcome.