Webmasters use robots.txt to give instructions about their site to web robots. You can use it to block specific robots, restrict access to specific areas of your website, or both. The file must be placed in the root of your domain. For example, if your web logs show that certain bots (other than well-known crawlers such as Google or Yahoo) are generating heavy traffic on your pages, you may want to block them to reduce data transfer.
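
For example, for a site served at www.example.com (a placeholder domain), robots expect to find the file at:

https://www.example.com/robots.txt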

The simplest robots.txt file uses the following two rules:

User-agent: the robot you want the following rule to apply to.

Disallow: the URL path you want to block from web spiders and robots.

Examples:

To disallow all robots from the contents of a folder:

User-agent: *

Disallow: /myprivatefolder/
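
A single User-agent group can also contain multiple Disallow lines. For instance, assuming a hypothetical site with two private folders:

User-agent: *

Disallow: /myprivatefolder/

Disallow: /tmp/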

To allow a single robot (here, Googlebot) and disallow all others:

User-agent: Googlebot

Disallow:

User-agent: *

Disallow: /
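
Here, a robot identifying itself as Googlebot matches the first group, and the empty Disallow line allows it to crawl the entire site; every other robot matches the * group and is blocked from the whole site by Disallow: /.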

To disallow a single robot:

User-agent: BadBot

Disallow: /
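
If you want to check how a crawler would interpret rules like these, one option is a quick test with Python's standard urllib.robotparser module. This is a minimal sketch; example.com, BadBot, and SomeCrawler are placeholder names, and the rules combine the examples above:

import urllib.robotparser

# Rules combining the examples above: block BadBot everywhere,
# and keep all other robots out of /myprivatefolder/.
rules = """\
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /myprivatefolder/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# example.com and the crawler names are placeholders for illustration.
print(parser.can_fetch("BadBot", "https://example.com/page.html"))                    # False
print(parser.can_fetch("SomeCrawler", "https://example.com/page.html"))               # True
print(parser.can_fetch("SomeCrawler", "https://example.com/myprivatefolder/x.html"))  # False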

Note: Bad robots can ignore your robots.txt instructions. Malicious robots in particular, such as email address harvesters and spam bots, will pay no attention to it.

For more details, read the Caspio Best Practices Guide to SEO Deployment; additional information is available in online resources.