# AI-powered Monitoring

[AI-powered monitoring](/docs/features/ai-powered-monitoring.md) uses AI to verify anything you want on your pages. Through the API, you can retrieve the AI's analysis results, including the prompt used, the response text, and token usage.

You'll need a [monitor ID](/docs/api/monitors.md) for these endpoints.

**Example request:**

```bash
$ OHDEAR_TOKEN="your API token"
$ curl https://ohdear.app/api/monitors/1/ai-responses/latest \
    -H "Authorization: Bearer $OHDEAR_TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json'
```

All endpoints below follow the same [authentication](/docs/api/introduction.md#authenticate-against-the-api) pattern.

## AI response fields
<!-- toc: Response fields -->

Every AI-powered monitoring endpoint returns a response object with these fields:

```json
{
  "id": 456,
  "result": "ok",
  "finish_reason": "stop",
  "prompt": "Ensure the page on https://example.com contains a login link and lists at least 3 product features",
  "text": "The page contains a login link in the top navigation bar. I found 5 product features listed: real-time alerts, SSL monitoring, uptime tracking, broken link detection, and performance insights.",
  "notification_title": "AI check passed",
  "notification_body": "The page contains all required elements.",
  "used_tools": [
    {
      "name": "fetch_url",
      "arguments": { "url": "https://example.com" },
      "displayName": "Fetch URL",
      "displayArguments": ["https://example.com"]
    }
  ],
  "used_prompt_tokens": 1250,
  "used_completion_tokens": 340,
  "raw_response": "{\"output\": [{\"type\": \"message\", \"content\": [{\"type\": \"output_text\", \"text\": \"...\"}]}]}",
  "started_at": "2026-02-11T09:00:00.000000Z",
  "ended_at": "2026-02-11T09:00:05.000000Z",
  "created_at": "2026-02-11T09:00:05.000000Z"
}
```

- `id`: unique identifier for this AI response
- `result`: the AI's verdict -- possible values: `ok`, `failed`, `error` (see [result values](#result-values) below)
- `finish_reason`: why the AI stopped generating -- possible values: `stop`, `length`, `contentFilter`, `toolCalls`, `error`, `other`, `unknown`
- `prompt`: the prompt that was sent to the AI
- `text`: the AI's full response text explaining its analysis
- `notification_title`: short title used in [notifications](/docs/api/notification-destinations.md) when this check triggers an alert
- `notification_body`: detailed body text used in notifications
- `used_tools`: array of tools the AI invoked during its analysis (see [tool format](#tool-format) below)
- `used_prompt_tokens`: number of input tokens consumed
- `used_completion_tokens`: number of output tokens generated
- `raw_response`: the complete raw API response from the AI provider (only included on single-response endpoints, not on the [list endpoint](#list-all-ai-responses))
- `started_at`: when the AI check started running (UTC)
- `ended_at`: when the AI check finished (UTC)
- `created_at`: when this response was stored (UTC)
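If you consume these responses programmatically, the timing and token fields combine naturally into useful metrics. As a minimal sketch (not official client code), here is how you might compute the total token usage and check duration from a response shaped like the example above; the `response` dict simply inlines the sample data:

```python
from datetime import datetime

# Sample AI response, mirroring the fields documented above.
response = {
    "id": 456,
    "result": "ok",
    "used_prompt_tokens": 1250,
    "used_completion_tokens": 340,
    "started_at": "2026-02-11T09:00:00.000000Z",
    "ended_at": "2026-02-11T09:00:05.000000Z",
}

def parse_ts(value: str) -> datetime:
    """Parse the UTC timestamps returned by the API."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%fZ")

total_tokens = response["used_prompt_tokens"] + response["used_completion_tokens"]
duration = parse_ts(response["ended_at"]) - parse_ts(response["started_at"])

print(total_tokens)              # 1590
print(duration.total_seconds())  # 5.0
```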

## Get a specific AI response
<!-- toc: Get one -->

<pre>GET /api/monitors/{monitorId}/ai-responses/{aiResponseId}</pre>

Returns a single AI-powered monitoring response by its ID. The response follows the [AI response shape](#ai-response-fields) above, including the full `raw_response`.

## Get the latest AI response
<!-- toc: Get latest -->

<pre>GET /api/monitors/{monitorId}/ai-responses/latest</pre>

Returns the most recent AI-powered monitoring response for a monitor. The response follows the same [AI response shape](#ai-response-fields), including the full `raw_response`.
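Outside of curl, the same request can be issued from any HTTP client. As an illustrative sketch using Python's standard library (the helper name is our own, not part of any official SDK), this builds the request with the required headers; sending it is left to `urllib.request.urlopen`:

```python
import urllib.request

API_TOKEN = "your API token"  # replace with your real API token
BASE = "https://ohdear.app/api"

def latest_ai_response_request(monitor_id: int) -> urllib.request.Request:
    """Build (but don't send) the request for a monitor's latest AI response."""
    return urllib.request.Request(
        f"{BASE}/monitors/{monitor_id}/ai-responses/latest",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Accept": "application/json",
        },
    )

req = latest_ai_response_request(1)
# urllib.request.urlopen(req) would perform the actual call.
```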

## List all AI responses
<!-- toc: List all -->

<pre>GET /api/monitors/{monitorId}/ai-responses</pre>

Returns a paginated list of AI-powered monitoring responses for a monitor. Each item follows the same [AI response shape](#ai-response-fields), except `raw_response` is excluded to keep response sizes manageable.

**Query parameters:**

- `filter[created_at]` (string, optional) -- only return responses created at or after this timestamp in `YmdHis` format, in UTC (e.g., `20260211000000`)
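The `YmdHis` format is a plain digit string (year, month, day, hour, minute, second). A small sketch of building that filter value from a timezone-aware Python datetime (the helper name is our own):

```python
from datetime import datetime, timezone

def created_at_filter(since: datetime) -> str:
    """Format a timezone-aware datetime as the UTC `YmdHis` string the filter expects."""
    return since.astimezone(timezone.utc).strftime("%Y%m%d%H%M%S")

since = datetime(2026, 2, 11, tzinfo=timezone.utc)
print(created_at_filter(since))  # 20260211000000
```

The result goes into the query string, e.g. `?filter[created_at]=20260211000000` (URL-encode the brackets if your client requires it).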

## Result values

| Value | Description |
|-------|-------------|
| `ok` | The AI determined the check passed |
| `failed` | The AI determined the check failed |
| `error` | An error occurred during the AI check (e.g., the page couldn't be loaded) |
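When acting on these values in your own tooling, `failed` and `error` usually both deserve attention, for different reasons: one means the check ran and the page didn't satisfy the prompt, the other means the check couldn't complete. A minimal, hypothetical dispatch:

```python
def should_alert(result: str) -> bool:
    """Treat both a failed check and an errored run as alert-worthy."""
    if result == "ok":
        return False
    if result in ("failed", "error"):
        return True
    raise ValueError(f"unexpected result value: {result}")

print(should_alert("ok"))      # False
print(should_alert("failed"))  # True
```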

## Tool format

Each entry in the `used_tools` array describes a tool the AI called during its analysis:

- `name`: internal tool identifier
- `arguments`: the arguments passed to the tool
- `displayName`: human-readable tool name
- `displayArguments`: simplified argument list for display purposes
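The `displayName` and `displayArguments` fields are intended for rendering; a short sketch (our own helper, using the sample tool entry from the response above) shows one way to turn an entry into a human-readable line:

```python
def format_tool_call(tool: dict) -> str:
    """Render a `used_tools` entry using its display fields."""
    args = ", ".join(tool["displayArguments"])
    return f"{tool['displayName']}({args})"

tool = {
    "name": "fetch_url",
    "arguments": {"url": "https://example.com"},
    "displayName": "Fetch URL",
    "displayArguments": ["https://example.com"],
}
print(format_tool_call(tool))  # Fetch URL(https://example.com)
```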
