Enhancing our API for better agentic consumption

Published on March 27, 2026 by Mattias Geniar

AI coding agents like Claude Code and Codex are becoming a real part of developer workflows. They don't just write code: they call APIs, interpret responses, and take action based on what they find. That means the quality of your API responses directly affects how useful an agent can be.

We've shipped a series of improvements to the Oh Dear API with this in mind. Every change helps humans too, but we specifically optimized for how agents consume and reason about data.

Historical check runs #

Until recently, our API only returned the latest results for each check. That's fine for a quick status check, but it makes it impossible for an agent to answer questions like "compare yesterday's broken links to today's, show me the diff."

Now you can list all completed runs for any check type:

curl https://ohdear.app/api/monitors/1/checks/broken_links/runs \
    -H "Authorization: Bearer $OHDEAR_TOKEN" \
    -H 'Accept: application/json'
{
  "data": [
    {
      "id": 98765,
      "check_type": "broken_links",
      "result": "succeeded",
      "started_at": "2026-03-17T00:30:00+00:00",
      "ended_at": "2026-03-17T00:53:58+00:00",
      "ohdear_url": "https://ohdear.app/monitors/1234/check/broken-links/report/98765"
    },
    {
      "id": 98710,
      "check_type": "broken_links",
      "result": "failed",
      "started_at": "2026-03-16T00:30:00+00:00",
      "ended_at": "2026-03-16T00:48:12+00:00",
      "ohdear_url": "https://ohdear.app/monitors/1234/check/broken-links/report/98710"
    }
  ]
}

You can filter by date range and result status:

Parameter              | Example        | Description
-----------------------|----------------|-------------------------------------------------
filter[started_after]  | 20260315000000 | Runs started at or after this UTC timestamp
filter[started_before] | 20260320000000 | Runs started at or before this UTC timestamp
filter[result]         | failed         | Only runs with this result: succeeded, warning, or failed
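As a quick sketch of how an agent might assemble these filters, here's a small Python helper that formats timestamps in the compact UTC form shown above (YYYYMMDDHHMMSS) and builds the query string. The helper name and structure are illustrative, not part of our API:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

def run_filters(started_after=None, started_before=None, result=None):
    """Build the filter[...] query string for the runs endpoint.

    Timestamps are converted to UTC and formatted as the compact
    YYYYMMDDHHMMSS strings the filters expect."""
    params = {}
    if started_after:
        params["filter[started_after]"] = started_after.astimezone(
            timezone.utc).strftime("%Y%m%d%H%M%S")
    if started_before:
        params["filter[started_before]"] = started_before.astimezone(
            timezone.utc).strftime("%Y%m%d%H%M%S")
    if result:
        params["filter[result]"] = result
    return urlencode(params)

query = run_filters(
    started_after=datetime(2026, 3, 15, tzinfo=timezone.utc),
    started_before=datetime(2026, 3, 20, tzinfo=timezone.utc),
    result="failed",
)
# Append the result to the endpoint, e.g.:
# https://ohdear.app/api/monitors/1/checks/broken_links/runs?<query>
```

Note that urlencode percent-encodes the square brackets; the API decodes them as usual.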

Once you have a run ID, pass it to the check endpoint to get the detailed results from that specific run:

curl "https://ohdear.app/api/broken-links/1?run_id=98710" \
    -H "Authorization: Bearer $OHDEAR_TOKEN"

This works for broken links, mixed content, certificate health, and sitemap checks. Runs are kept for approximately 10 days.

For an agent, the workflow becomes straightforward: list runs, pick two, fetch both, diff the results. That's a powerful debugging tool when you're trying to figure out when a new broken link appeared or when your SSL certificate chain changed.
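The diff step at the end of that workflow can be sketched in a few lines of Python. The payloads here are illustrative stand-ins for two fetched runs; the crawled_url field name is an assumption for the sketch, not the documented schema:

```python
# Sketch of the "list runs, pick two, fetch both, diff" workflow.
# The sample payloads below stand in for two fetched broken-link runs.

def diff_broken_links(old_run, new_run):
    """Return links that appeared in, or disappeared from, the newer run."""
    old = {item["crawled_url"] for item in old_run}
    new = {item["crawled_url"] for item in new_run}
    return {"appeared": sorted(new - old), "fixed": sorted(old - new)}

yesterday = [{"crawled_url": "https://example.com/old-page"},
             {"crawled_url": "https://example.com/typo"}]
today = [{"crawled_url": "https://example.com/typo"},
         {"crawled_url": "https://example.com/new-404"}]

changes = diff_broken_links(yesterday, today)
# changes["appeared"] -> ["https://example.com/new-404"]
# changes["fixed"]    -> ["https://example.com/old-page"]
```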

Direct dashboard links #

When an agent fetches data from an API, it often needs to show the user where to look for more details. But constructing dashboard URLs from API data is error-prone: different check types have different URL structures, and an agent would need to know our routing conventions.

We solved this by including an ohdear_url field in every response that links directly to the right dashboard page:

{
  "meta": {
    "run_id": 98765,
    "run_started_at": "2026-03-17T00:30:00+00:00",
    "run_ended_at": "2026-03-17T00:53:58+00:00",
    "ohdear_url": "https://ohdear.app/monitors/1234/check/broken-links/report/98765"
  }
}

The URL points to the exact report page for that specific run. An agent can now say "you have 3 new broken links since yesterday, view the full report" with a link that actually works.

This is a small addition, but it removes a whole category of guesswork. The API tells you exactly where to look.

Documentation URLs and markdown-friendly docs #

Every API response now includes a documentation_url in the links section:

{
  "links": {
    "documentation_url": "https://ohdear.app/docs/features/broken-links-detection"
  }
}

When an agent encounters a response it doesn't fully understand, it can fetch the documentation URL to learn more. And to top it all off, our docs are served as clean markdown when you ask for it:

curl -H 'Accept: text/markdown' \
    https://ohdear.app/docs/features/broken-links-detection

Instead of getting a full HTML page (with navigation, footers, scripts, and all the chrome), the agent gets just the content as markdown. The token savings are significant:

Page                | HTML   | Markdown | Savings
--------------------|--------|----------|--------
Monitor API docs    | 386 KB | 22.9 KB  | -94%
Uptime feature docs | 370 KB | 14.5 KB  | -96%

We wrote about the caching challenges we ran into when building this feature. The short version: both Cloudflare and our Laravel response cache needed to learn that Accept: text/markdown requests should get different responses than regular browser requests.

The detection logic is simple. We support two ways to request markdown:

  • Append .md to any docs URL: https://ohdear.app/docs/api/monitors.md
  • Send the Accept: text/markdown header

Both return plain markdown with a text/markdown content type and a noindex robots tag.
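To make the two forms concrete, here's a tiny Python sketch of how an agent could prepare either variant of the request. The helper is hypothetical; only the .md suffix and the Accept header come from this post:

```python
# Sketch: the two equivalent ways to request a docs page as markdown.

def markdown_request(docs_url):
    """Return (url_with_md_suffix, headers); either form works."""
    suffixed = docs_url.rstrip("/") + ".md"
    headers = {"Accept": "text/markdown"}
    return suffixed, headers

url, headers = markdown_request("https://ohdear.app/docs/api/monitors")
# url     -> "https://ohdear.app/docs/api/monitors.md"
# headers -> {"Accept": "text/markdown"}
```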

Full list of crawled URLs #

Our broken links endpoint has always returned the URLs that are broken. But sometimes you need the full picture: how many URLs were crawled, how many are internal vs. external, and what types of resources were found.

We've added two new endpoints for this.

Summary gives you the counts without the weight of individual URLs:

curl https://ohdear.app/api/crawled-urls/1/summary \
    -H "Authorization: Bearer $OHDEAR_TOKEN"
{
  "data": {
    "total": 1234,
    "internal": 800,
    "external": 434,
    "by_type": {
      "image": 250,
      "link": 900,
      "og:image": 14,
      "script": 40,
      "stylesheet": 30
    }
  }
}

Details returns the full paginated list (100 per page) with the same fields as the broken links response:

curl https://ohdear.app/api/crawled-urls/1/details \
    -H "Authorization: Bearer $OHDEAR_TOKEN"

Each entry now also includes type (link, image, script, stylesheet, og:image) and error_message fields, so you know exactly what kind of resource failed and why.

Both endpoints support run_id for historical data, just like broken links.

This is particularly useful for agents doing site audits. Instead of just knowing what's broken, they can understand the full scope of a crawl and reason about the ratio of broken to healthy URLs.
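That kind of reasoning is straightforward once the summary is in hand. A minimal sketch, using the summary payload shown above and an illustrative broken-link count of 3:

```python
# Reasoning about crawl health from the summary payload.
# The broken_count of 3 is an illustrative value, not from the API.

summary = {"total": 1234, "internal": 800, "external": 434,
           "by_type": {"image": 250, "link": 900, "og:image": 14,
                       "script": 40, "stylesheet": 30}}
broken_count = 3

broken_ratio = broken_count / summary["total"]
external_share = summary["external"] / summary["total"]
print(f"{broken_count}/{summary['total']} URLs broken "
      f"({broken_ratio:.2%}), {external_share:.0%} of crawled URLs are external")
```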

What this unlocks #

These changes aren't just incremental API improvements. Together, they create a feedback loop that makes agents more capable when working with Oh Dear.

An agent using Claude Code or Codex can now:

  1. List historical runs and compare results across time periods
  2. Link users to the right dashboard page instead of generic "check Oh Dear" suggestions
  3. Read our documentation as markdown when it needs more context, without wasting tokens on HTML chrome
  4. Audit all crawled URLs, not just the broken ones

The pattern we're following is: give the agent everything it needs to be self-sufficient. If it can answer its own questions by following links in the API response, it doesn't need hardcoded knowledge about our URL structure or documentation layout.

We also provide an llms.txt file and a full MCP server for AI assistants that want deeper integration.

Try it out #

If you're building with AI agents and want to integrate site monitoring, here's how to get started:

# List your monitors
curl https://ohdear.app/api/monitors \
    -H "Authorization: Bearer $OHDEAR_TOKEN"

# Check historical broken link runs
curl https://ohdear.app/api/monitors/1/checks/broken_links/runs \
    -H "Authorization: Bearer $OHDEAR_TOKEN"

# Get a crawl summary
curl https://ohdear.app/api/crawled-urls/1/summary \
    -H "Authorization: Bearer $OHDEAR_TOKEN"

# Read our docs as markdown
curl -H 'Accept: text/markdown' https://ohdear.app/docs/api/monitors

All of these changes are live now and documented in our API docs.

Feedback? #

If you run into anything unexpected or have ideas for making the API more agent-friendly, reach out via [email protected] or ping @mattiasgeniar or @OhDearApp. We'd love to hear how you're using these features!
