All the Bots…

Google currently indexes over 8 billion web pages. Before any of those pages entered the index, however, each one was crawled by a special spider known as Googlebot. Unfortunately, many webmasters know little about the internal workings of this virtual robot.
In fact, Google uses a number of spiders to crawl the Web, and you can spot them at work by examining your server log files.
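
For example, here is a minimal sketch of how you might pick Google's crawlers out of a standard combined-format access log. The file name access.log and the user-agent substrings checked are illustrative assumptions, not something prescribed by this article.

import re
from collections import Counter

# Substrings that Google's crawlers commonly report in their user-agent headers.
GOOGLE_BOTS = ("Googlebot", "Mediapartners-Google", "AdsBot-Google")

hits = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        for bot in GOOGLE_BOTS:
            if bot in line:
                hits[bot] += 1
                break

for bot, count in hits.most_common():
    print(f"{bot}: {count} requests")

Running a script like this against your own logs shows which of Google's spiders have visited and how often, which is a quick way to confirm the crawling activity described below.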
This article will attempt to reveal some of the most important Google spiders, their functions, and how they affect you as a webmaster. We'll start with the well-known Googlebot.


Googlebot
Googlebot, as you probably know, is the search bot Google uses to scour the web for new pages. It comes in two versions: deepbot and freshbot. Deepbot is a deep crawler that tries to follow every link on the web and download as many pages as it can for the Google index. It also examines the internal structure of a site, giving the index a complete picture.
Freshbot, on the other hand, is a newer bot that crawls the web looking for fresh content. Google introduced freshbot to take some of the pressure off Googlebot. Freshbot revisits pages that are already in the index and crawls them for new, modified, or updated content. In this way, Google is better equipped to keep up with the ever-changing Web.
This means that the more you update your web site with new, quality content, the more often Googlebot will come by to check for it.
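
If you want to see whether that is actually happening, you can chart how often Googlebot requests your pages over time. The sketch below assumes the same combined-format access log used earlier, with timestamps like [10/Oct/2005:13:55:36 -0700]; the file name is again an assumption for illustration.

import re
from collections import Counter
from datetime import datetime

DATE_PATTERN = re.compile(r"\[(\d{2}/\w{3}/\d{4})")  # matches e.g. [10/Oct/2005

visits_per_day = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        match = DATE_PATTERN.search(line)
        if match:
            day = datetime.strptime(match.group(1), "%d/%b/%Y").date()
            visits_per_day[day] += 1

# A rising count over time suggests Googlebot is returning more frequently.
for day, count in sorted(visits_per_day.items()):
    print(f"{day}: {count} Googlebot requests")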
