How to create a robots.txt file and check that it is working

The robots.txt file is covered in Google's webmaster guidelines and is of great importance to Google, since their core business is indexing the web with their crawler, Googlebot.

Webmasters can control how Google interacts with their web pages by using the robots.txt file. The contents of this file tell search engine crawlers how they should visit your site.

If there are files and directories you do not want indexed by search engines, you can use a robots.txt file to define where the robots should not go. These files are very simple text files that are placed on your web server.

The file must be placed in the root folder of your site, for example…
www.yourwebsite.com/robots.txt

If you want to see any website’s robots.txt file, you can just add “/robots.txt” to its domain name.
Here for example is the robots.txt file I use on this site – http://blog.mylinuxvps.com/robots.txt

What do they do exactly?

A robots.txt file delivers your instructions to search engine robots.
The first thing a search engine spider looks at when visiting a site is the robots.txt file. It looks for it because it wants to know whether you have any instructions it should follow before it starts crawling.

The most common problem people have with robots.txt files is that they don’t know how to create them.

If you can make web pages, you can also make a robots.txt file. The file is a plain text file, which means you can use Notepad, WordPad, or any other plain text editor. You can also make one in FrontPage or Dreamweaver by using the “code” view. You can even “copy and paste” one.
So instead of thinking “I am making a robots.txt file”, just think “I am writing a note”; it is exactly the same process.

What should the robots.txt say?

That depends on what you want it to do.
Most people want robots to visit everything on their website. If this is the case with you, and you want robots to index all parts of your site, there are three options to let them know they are welcome.

1) Do not have a robots.txt file

If your website does not have a robots.txt file then this is what happens –
A robot comes to visit. It looks for the robots.txt file. It does not find it because it isn’t there. The robot then feels free to visit all your web pages and content because this is what it is programmed to do in this situation.

2) Make an empty file and call it robots.txt

If your website has a robots.txt file that has nothing in it then this is what happens… A robot comes to visit. It looks for the robots.txt file. It finds the file and reads it. There is nothing to read, so the robot then feels free to visit all your web pages and content because this is what it is programmed to do in this situation.

3) Make a file called robots.txt and write the following two lines in it… (These are “instructions” for the robot to follow)

User-agent: *
Disallow:

If your website has a robots.txt with these instructions in it then this is what happens… A robot comes to visit. It looks for the robots.txt file. It finds the file and reads it. It reads the first line. Then it reads the second line. The robot then feels free to visit all your web pages and content because that is what you told it to do.
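If you want to confirm this behaviour yourself, Python’s standard library ships a robots.txt parser. Here is a minimal sketch using the two lines above (the domain is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# The two-line "everything is welcome" robots.txt from above.
rules = """\
User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# An empty Disallow means nothing is blocked, so any page is fetchable.
print(parser.can_fetch("Googlebot", "http://www.yourwebsite.com/any/page.html"))  # True
```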

What do the robot instructions mean?

Here is an explanation of what the different words mean in a robots.txt file

User-agent:

The “User-agent” part is there to specify directions to a specific robot if needed. There are two ways to use this in your file.

If you want to tell all robots the same thing, you put a “*” after “User-agent:”. It would look like this…

User-agent: *

(This line is saying “these directions apply to all robots”)
If you want to tell a specific robot something (in this example Googlebot) it would look like this…

User-agent: Googlebot

(This line is saying “these directions apply to just Googlebot”)
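As a quick sketch of how a robot-specific group combines with the catch-all group, here is a hypothetical file that keeps Googlebot out of a made-up “/private” folder while leaving every other robot unrestricted, checked with Python’s standard-library parser (the folder and domain are illustrative, not from this article):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: one group just for Googlebot, one for everyone else.
rules = """\
User-agent: Googlebot
Disallow: /private

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot is kept out of /private; other robots are not.
print(parser.can_fetch("Googlebot", "http://www.yourwebsite.com/private/page.html"))      # False
print(parser.can_fetch("SomeOtherBot", "http://www.yourwebsite.com/private/page.html"))   # True
```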

Disallow:

The “Disallow” part is there to tell the robots what folders they should not look at. This means that if, for example, you do not want search engines to index the photos on your site, you can place those photos into one folder and exclude it.

Let’s say that you have put all these photos into a folder called “images”. Now you want to tell search engines not to index that folder.
Here is what your robots.txt file should look like:

User-agent: *
Disallow: /images

The above two lines of text in your robots.txt file would keep robots from visiting your images folder. The “User-agent: *” part is saying “this applies to all robots”. The “Disallow: /images” part is saying “don’t visit or index my images folder”. (Disallow rules match by prefix, so this blocks every URL whose path starts with “/images”.)
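To sanity-check a file like this, Python’s built-in robots.txt parser can tell you which URLs it blocks. A small sketch using the example rules above (the domain and file names are placeholders):

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt that blocks the "images" folder for all robots.
rules = """\
User-agent: *
Disallow: /images
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Anything under /images is blocked; the rest of the site is still fetchable.
print(parser.can_fetch("Googlebot", "http://www.yourwebsite.com/images/holiday.jpg"))  # False
print(parser.can_fetch("Googlebot", "http://www.yourwebsite.com/about.html"))          # True
```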

Googlebot specific instructions

The robot that Google uses to index their search engine is called Googlebot. It understands a few more instructions than other robots. The instructions it follows are well defined in the Google help pages (see resources below).

In addition to “User-agent:” and “Disallow:”, Googlebot also understands the…

Allow:

The “Allow:” instruction lets you tell a robot that it is okay to see a file in a folder that has been “Disallowed” by other instructions. To illustrate this, let’s take the above example of telling robots not to visit or index your photos. We put all the photos into one folder called “images” and made a robots.txt file that looked like this…

User-agent: *
Disallow: /images

Now let’s say there is a photo called mycar.jpg in that folder that you want Googlebot to index. With the “Allow:” instruction, we can tell Googlebot to do so. It would look like this…

User-agent: *
Disallow: /images
Allow: /images/mycar.jpg

This would tell Googlebot that it can visit “mycar.jpg” in the images folder, even though the “images” folder is otherwise excluded.
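One caveat worth knowing if you test this with Python’s standard-library parser: Googlebot applies the most specific matching rule regardless of order, but urllib.robotparser applies the first rule that matches, so in the sketch below the Allow line is listed before the Disallow line to get the same effect (the domain and file names are placeholders):

```python
from urllib.robotparser import RobotFileParser

# The /images folder is blocked, but mycar.jpg is explicitly allowed.
# Note: urllib.robotparser uses first-match order, so the more specific
# Allow rule is placed first; Googlebot itself picks the most specific
# rule no matter where it appears in the file.
rules = """\
User-agent: *
Allow: /images/mycar.jpg
Disallow: /images
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "http://www.yourwebsite.com/images/mycar.jpg"))  # True
print(parser.can_fetch("Googlebot", "http://www.yourwebsite.com/images/other.jpg"))  # False
```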

Testing your robots.txt file

To find out whether an individual page is blocked by robots.txt, you can open our robots.txt file located at http://blog.mylinuxvps.com/robots.txt and see what we allow and disallow.

If you are using a Google sitemap as part of their webmaster tools, you can log in and see if Google is having any issues crawling your site. There is also a robots.txt tool that lets you experiment a little, telling you if there are any problems with your file before you put it online.

Key Concept:

– If you use a robots.txt file, make sure it is written correctly, because an incorrect robots.txt file can block the bots that index your website.
