All major search engines, online advertising services and affiliate networks use automated robot software to crawl and index the content of websites. This ensures that the search engine or network has an up-to-date copy of your site's content in its cache. These robots consume some of your website's bandwidth as they crawl, which can slow the site down for other visitors. If you want to stop Amazon's robot, which crawls from hosts on the amazonaws.com domain, from automatically accessing your website, you can block it with your .htaccess file. This file sits in the root directory of your website and controls access to the site.
Log in to your website host's server through the host's online content-management system or with a File Transfer Protocol (FTP) client.
Double-click on the website root folder to open it. This is the folder that contains the main pages, images, scripts and multimedia content for your website, including the index file. Web hosts usually label this folder htdocs, public_html or www.
Click the "Name" or "Filename" tab to display the folder and file list alphabetically.
Find the ".htaccess" file in the displayed list. The file name starts with a period, so it should be easy to find toward the top of the list.
Double-click the file to open it. The file opens automatically in Notepad or your computer's default plain-text editor.
Paste the following code at the bottom of the document, below all other entries:
# Evaluate Allow directives first, then Deny directives
Order allow,deny
# Admit all visitors by default
Allow from all
# Refuse requests from hosts that resolve to amazonaws.com
Deny from amazonaws.com
Save the file. The updated .htaccess file instructs your Web server to refuse requests from any host that resolves to amazonaws.com, which blocks Amazon's robot.
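Note that the Order, Allow and Deny directives are Apache 2.2 syntax; on an Apache 2.4 server they work only if the legacy mod_access_compat module is enabled. If your host runs Apache 2.4, a minimal sketch of the same rule in the newer Require syntax looks like this, assuming the standard mod_authz_host module is loaded:
<RequireAll>
# Admit all visitors by default
Require all granted
# Refuse hosts that resolve to amazonaws.com
Require not host amazonaws.com
</RequireAll>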
Tips and warnings
- Repeat this procedure to block other robots from accessing your website; a sketch for blocking several robots at once follows this list.
- Blocking the Amazon.com bot might prevent Amazon ads and content links from displaying on your website.
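To block several robots with a single rule, one approach is to match their User-Agent headers with Apache's mod_setenvif module. The following sketch assumes "BadBot" and "OtherBot" stand in for the user-agent strings of the robots you want to block:
# Flag any request whose User-Agent matches either pattern (case-insensitive)
SetEnvIfNoCase User-Agent "BadBot" block_bot
SetEnvIfNoCase User-Agent "OtherBot" block_bot
Order allow,deny
Allow from all
# Refuse flagged requests
Deny from env=block_bot
Keep in mind that a robot can forge its User-Agent header, so this approach only keeps out well-behaved crawlers.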