An Oh Dear skill for use in Claude Code or Codex

Published on March 25, 2026 by Mattias Geniar

AI coding agents are getting good at calling tools. Claude Code, Codex, and others can run shell commands, parse JSON, and reason about the results. But they need to know what tools are available and how to use them.

That's what skills are for. A skill is a small package of documentation that teaches an AI agent how to use a specific tool. We've built one for Oh Dear.

What this gives you #

Once installed, your AI coding agent can do anything the Oh Dear CLI can do. That covers every endpoint in the Oh Dear API: listing monitors, checking uptime, reviewing broken links, managing maintenance windows, posting status page updates, and more.

You don't need to remember command names or look up parameter syntax. You describe what you want in plain language, and the agent figures out the right commands.

Installing the skill #

Two steps. First, install the Oh Dear CLI if you haven't already:

$ composer global require ohdearapp/ohdear-cli
$ ohdear login

You'll be prompted for your API token. Generate one at your API token page.

Then install the skill:

$ ohdear install-skill

That's it. Your AI agent now knows how to work with Oh Dear.

What you can ask #

Here's where it gets interesting. These are real prompts, not hypotheticals.

"Is example.com up right now?"

The simplest question. The agent runs ohdear get-monitor, checks the status, and tells you. No need to open a browser or switch tabs.
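Under the hood, the agent does something like the following sketch. The stub function stands in for the real CLI so the snippet is self-contained, and the JSON field names are assumptions — the real shape comes from the Oh Dear API:

```shell
# Stub standing in for the real `ohdear` CLI (illustrative only).
ohdear() {
  echo '{"id": 42, "url": "https://example.com", "status": "up"}'
}

# Fetch the monitor as JSON and pull out the status field.
status=$(ohdear get-monitor --monitor-id=42 --json | python3 -c \
  'import json, sys; print(json.load(sys.stdin)["status"])')
echo "example.com is ${status}"
```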

"Compare the broken links between two dates"

This is the kind of task that used to require manual work: opening two reports side by side in the dashboard. Now the agent lists the historical runs, fetches both, diffs the results, and tells you exactly which links are new, which got fixed, and which are still broken.

> Compare the broken links from March 17 to March 20 for ohdear.app

Tool calls:
  ohdear list-check-runs --monitor-id=42 --check-type=broken_links --json
  ohdear list-broken-links --monitor-id=42 --run-id=98765 --json
  ohdear list-broken-links --monitor-id=42 --run-id=98710 --json

Result:
  3 new broken links found
  2 links fixed since March 17
  7 persistent broken links across both runs

We built historical check run endpoints specifically to make this kind of workflow possible.
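The diff itself is plain set arithmetic. A minimal sketch of what the agent computes, with two local files standing in for the link lists extracted from the two check runs:

```shell
# Sample data standing in for the URL lists from two check runs.
printf '%s\n' /blog/a /docs/x /pricing > run_march17.txt
printf '%s\n' /blog/a /docs/y /pricing > run_march20.txt

# comm requires sorted input.
sort -o run_march17.txt run_march17.txt
sort -o run_march20.txt run_march20.txt

comm -13 run_march17.txt run_march20.txt   # new: only in the later run
comm -23 run_march17.txt run_march20.txt   # fixed: only in the earlier run
comm -12 run_march17.txt run_march20.txt   # persistent: in both runs
```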

"Start a maintenance window and post a status update"

Deploying? Tell your agent to handle the communication:

> Start a 30-minute maintenance window on ohdear.app
  and post a status update: "Deploying v2.4"

Tool calls:
  ohdear start-maintenance --monitor-id=42
    --field stop_maintenance_after_seconds=1800
  ohdear create-status-page-update
    --field status_page_id=123
    --field title="Deploying v2.4"

Result:
  Maintenance window started (30 min)
  Status page updated: "Deploying v2.4"

No false downtime alerts. Your users see the status update. You stay in your terminal.

"Our checkout is down. Post a status update and snooze the uptime check for 1 hour."

Incident response from your editor. The agent posts the update to your status page and snoozes the check so you're not getting pinged while you're actively fixing the problem.

"Cross-reference sites.csv with our Oh Dear monitors. Which ones are missing?"

This is one of my favorites. The agent reads your file, fetches your monitor list as JSON, compares the two, and tells you which sites aren't being monitored. Try doing that in a web dashboard. 😅
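The comparison boils down to a set difference. A sketch with sample data, assuming sites.csv holds one URL per line and the monitor list has already been flattened to URLs (e.g. from `--json` output through a small filter):

```shell
# Sample data standing in for the real files.
printf '%s\n' https://a.test https://b.test https://c.test > sites.csv
printf '%s\n' https://a.test https://c.test > monitored.txt

# Lines in sites.csv with no exact match in monitored.txt,
# i.e. sites that are not being monitored.
grep -vxF -f monitored.txt sites.csv
```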

"What do the latest Lighthouse scores look like for my homepage?"

Quick performance check without leaving your editor. The agent fetches the latest Lighthouse report and summarizes the scores for performance, accessibility, best practices, and SEO.

ohdear.app Lighthouse scores (March 22, desktop from Frankfurt):

┌────────────────┬───────┐
│    Category    │ Score │
├────────────────┼───────┤
│ Performance    │ 81    │
├────────────────┼───────┤
│ Accessibility  │ 96    │
├────────────────┼───────┤
│ Best Practices │ 100   │
├────────────────┼───────┤
│ SEO            │ 100   │
└────────────────┴───────┘

Core Web Vitals:
- FCP: 495ms, LCP: 862ms (both good)
- CLS: 0 (perfect)
- TBT: 415ms (the main drag on the performance score)
- TTI: 1.3s

"Show me the certificate health for all my monitors"

Bulk SSL overview. The agent loops through your monitors, checks certificate health on each, and flags anything expiring soon or with chain issues.

All done. Here's the summary:

80 monitors with certificate health enabled. All clear.

- 79 monitors: All certificates healthy, no issues found
- 1 monitor (18953, uptime-checker-bahrain-1): Certificate check hasn't run yet, no data available

Nearest expirations to keep an eye on:
- uptime-checker-frankfurt-2 (11108): Apr 14 (3 weeks)
- uptime-checker-frankfurt-4 (13332): Apr 15
- uptime-checker-paris-4 (13335): Apr 17
- uptime-checker-toronto-2 (13333): Apr 15

These are all Let's Encrypt certs that auto-renew, so nothing to worry about.

The Bahrain node is interesting: it's hosted at the Bahrain AWS datacenter that's been down for a while due to bombing/collateral damage near the UAE.
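The bulk loop behind this looks roughly like the sketch below. The stub stands in for the real CLI, and the JSON shapes are assumptions; only the command names (`list-monitors`, `get-certificate-health`) come from the post itself:

```shell
# Stub standing in for the real CLI (illustrative data only).
ohdear() {
  case "$*" in
    *get-certificate-health*) echo '{"expires_in_days": 21}' ;;
    *)                        echo '[{"id": 42}, {"id": 43}]' ;;
  esac
}

# Loop over every monitor and report certificate expiry.
for id in $(ohdear list-monitors --json | python3 -c \
    'import json, sys; [print(m["id"]) for m in json.load(sys.stdin)]'); do
  days=$(ohdear get-certificate-health --monitor-id="$id" --json | python3 -c \
    'import json, sys; print(json.load(sys.stdin)["expires_in_days"])')
  echo "monitor $id: certificate expires in $days days"
done
```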

How the CLI stays in sync with the API #

One thing worth mentioning: the Oh Dear CLI is auto-generated from our OpenAPI specification. When we add a new API endpoint, it automatically becomes a CLI command. No manual work, no lag between API and CLI.

This is powered by spatie/laravel-openapi-cli, which reads the OpenAPI spec at runtime and generates commands from it. The operation IDs in the spec map directly to CLI command names in kebab-case: listMonitors becomes list-monitors, getCertificateHealth becomes get-certificate-health.
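The camelCase-to-kebab-case mapping is simple enough to sketch as a one-liner. This is just the idea, not the package's actual implementation:

```shell
# Insert a dash before each uppercase letter, then lowercase everything.
to_kebab() {
  echo "$1" | sed -E 's/([a-z0-9])([A-Z])/\1-\2/g' | tr '[:upper:]' '[:lower:]'
}

to_kebab listMonitors            # list-monitors
to_kebab getCertificateHealth    # get-certificate-health
```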

For the skill, this means it always covers the full API surface. If we ship a new endpoint today, the CLI has it tomorrow and the skill can use it immediately.

Lessons from building the skill #

The skill went through several iterations before we landed on something that works well. The commit history tells the story:

First attempt: document everything. We started with exhaustive documentation of every command, every parameter, every output format. The result was a massive prompt that burned through context window tokens.

Second iteration: trim it down. We realized agents are good at discovery. Instead of documenting every flag, we document the patterns: how commands are named, how to use --json for structured output, how to use --help to learn more. The commit message says it well: "let agents auto-discover with enough hints on what's available."

Third iteration: add workflows. Raw command documentation wasn't enough. We added step-by-step workflows for common tasks: investigating downtime, comparing historical data, managing maintenance windows. These give agents a playbook to follow.

Final tweaks: enforce good habits. We added rules like "always use --json when you need to analyze data" and "when you see an ohdear.app URL, use the CLI to fetch the data." These small nudges make the agent significantly more useful in practice.

The takeaway: good skill design isn't about completeness; it's about giving the agent the right level of guidance. Too little and it guesses wrong. Too much and you waste context tokens on information the agent can discover on its own.

You can see the entire SKILL.md and workflows on GitHub.

Skills protocol #

The Oh Dear skill uses the Skills protocol, an open standard for teaching AI agents how to use tools. Any agent that supports the protocol can use our skill, not just Claude Code.

Try it #

composer global require ohdearapp/ohdear-cli
ohdear login
ohdear install-skill

Then ask your agent something like "are there any broken links on my site?" and watch it work.

The skill, CLI source, and all documentation are open source at github.com/ohdearapp/ohdear-cli.

Feedback? #

If you run into anything unexpected or have ideas for the skill, reach out via [email protected] or @OhDearApp. We'd love to hear how you're using this.
