Why aren't all my pages crawled?

Oh Dear! crawls websites to report broken links and mixed content. In some circumstances we won't crawl all pages. This page explains some of those situations.

Crawl prevented by robots.txt or similar #

If you have a robots.txt file with content similar to this, Oh Dear! will not crawl a single page on your site.

User-agent: *
Disallow: /

This tells robots (search engines like Google and Bing, but also our crawler) not to crawl any page on the site, starting from and including the root page /.

If you have a robots.txt file like this, we will report 0 pages scanned in Oh Dear!.

You can tweak your robots.txt to control exactly which pages we can and can't crawl, for more fine-grained control.
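For example, a robots.txt along these lines (the /admin/ path is purely illustrative) blocks that section while leaving the rest of the site open to crawlers:

User-agent: *
Disallow: /admin/

Anything not covered by a Disallow rule remains crawlable.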

Additionally, we listen to the robots meta tag in your HTML as well as the x-robots-tag HTTP header. If we see an HTML tag similar to this, we won't crawl that particular page:

<meta name="robots" content="noindex" />

And here's an example HTTP header that prevents robots from crawling the page it is sent on:

x-robots-tag: noindex
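How you send this header depends on your web server or application. As one hypothetical example, if you use nginx you could add it to the responses of a given server or location block with:

add_header X-Robots-Tag "noindex";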

JavaScript initiated content #

Our crawler currently does not execute JavaScript. We fetch the content from your site and its pages and look at the raw HTML we get back to find links.

If your HTML is empty because the content is injected dynamically with JavaScript during page load, we won't be able to find and crawl any pages.
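A typical single-page app shell illustrates the problem: the raw HTML contains no links at all, so there is nothing for our crawler to follow (the bundle path below is just a placeholder):

<!doctype html>
<html>
  <body>
    <div id="app"></div>
    <!-- Links only appear after this script runs in the browser -->
    <script src="/assets/app.js"></script>
  </body>
</html>

You can check this yourself by viewing the page source in your browser: if the links aren't in the initial HTML, we won't see them either.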

Note: we are working on JavaScript support and plan to launch it shortly. If you have a SPA without any prerender component, please reach out to try our beta program.

Rate limits against our crawler #

Web servers sometimes implement a feature called rate limiting. It reduces the number of requests a particular IP address or User-Agent can make in a given timeframe. Since we crawl websites frequently, our crawlers are sometimes affected by this.

If we receive a 429 Too Many Requests HTTP status code during our crawls, we'll notify you of this in the detailed view of the report.

To resolve this issue, please whitelist our IP addresses so we are no longer rate limited.
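As a hypothetical sketch, this is what a whitelist could look like if you rate limit with nginx's limit_req module; the 203.0.113.0/24 range is only a placeholder and should be replaced with the actual Oh Dear crawler IP addresses:

# In the http {} block: whitelisted ranges get an empty key and are not rate limited
geo $limit {
    default         1;
    203.0.113.0/24  0;  # placeholder: replace with the Oh Dear crawler IPs
}

map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

# Use together with "limit_req zone=perip ...;" in your server or location block
limit_req_zone $limit_key zone=perip:10m rate=10r/s;

Other rate limiters (a CDN, a firewall, or application middleware) have their own allow-list mechanisms; the idea is the same: exclude our IP addresses from the limit.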

Limitations of the crawler & broken links checking #

We have a few limitations in place to help protect your website when we crawl it.

  • We crawl at most 5,000 pages per website added in Oh Dear
  • Crawls are limited to a 20-minute duration

Whichever limit is hit first (the maximum number of pages or the 20-minute duration) will stop the crawl.

This helps protect your site from infinite page loops or excessive load caused by our crawler.

If your site has more than 5,000 pages, you can add the site multiple times with different starting points. We start crawling from the URL you enter. For instance, if you add the following five sites, each will be crawled from its own entry point and report broken pages found from there.

  • yoursite.tld/en
  • yoursite.tld/fr
  • yoursite.tld/nl
  • yoursite.tld/blog/archive
  • yoursite.tld/...

This gives you more control over where we should start crawling and checking for broken pages.

Was this page helpful to you? Feel free to reach out via support@ohdear.app or on Twitter via @OhDearApp if you have any other questions. We'd love to help!