Is a crawl-delay rule ignored by Googlebot?

Is a crawl-delay rule ignored by Googlebot? John Mueller answers this question on Google’s SEO Snippets video series.

The crawl-delay directive for robots.txt files was introduced by other search engines in the early days. The idea was that webmasters could specify how many seconds a crawler would wait between requests to help limit the load on a web server. That’s not a bad idea overall.
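For reference, here is a minimal sketch of how the directive was typically written in a robots.txt file; the user-agent wildcard and the ten-second value are placeholders, and only crawlers that support the rule will honor it:

```
# Illustrative robots.txt using crawl-delay (values are placeholders)
User-agent: *
# Ask supporting crawlers to wait 10 seconds between requests
Crawl-delay: 10
```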

However, it turns out that servers are really quite dynamic, and sticking to a single period between requests doesn’t really make sense.

The value given there is the number of seconds to wait between requests, which is not that useful now that most servers are able to handle so much more traffic per second. Instead of the crawl-delay directive, we decided to automatically adjust our crawling based on how your server reacts. So if we see a server error, or we see that the server is getting slower, we’ll back off on our crawling.

Additionally, we give site owners a way to send us feedback on our crawling directly in Search Console, so they can let us know about their preferred changes in crawl rate.

With that, if we see this directive in your robots.txt file, we’ll try to let you know that this is something that we don’t support.

Of course, if there are parts of your website that you don’t want to have crawled at all, letting us know about that in the robots.txt file is fine.
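As a rough sketch, blocking crawling of a section is done with Disallow rules rather than crawl-delay; the paths below are hypothetical examples, not recommendations from the video:

```
# Illustrative robots.txt blocking specific sections from crawling (paths are hypothetical)
User-agent: Googlebot
Disallow: /internal-search/
Disallow: /tmp/
```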
