AI Monitoring
AI monitoring uses an AI model to verify anything you want on your pages. Through the API, you can retrieve the AI's analysis results, including the prompt used, the response text, and token usage.
You'll need a monitor ID for these endpoints.
Example request:
$ OHDEAR_TOKEN="your API token"
$ curl https://ohdear.app/api/monitors/1/ai-responses/latest \
    -H "Authorization: Bearer $OHDEAR_TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json'
All endpoints below follow the same authentication pattern.
AI response fields #
Every AI monitoring endpoint returns a response object with these fields:
{
    "id": 456,
    "result": "ok",
    "finish_reason": "stop",
    "prompt": "Ensure the page on https://example.com contains a login link and lists at least 3 product features",
    "text": "The page contains a login link in the top navigation bar. I found 5 product features listed: real-time alerts, SSL monitoring, uptime tracking, broken link detection, and performance insights.",
    "notification_title": "AI check passed",
    "notification_body": "The page contains all required elements.",
    "used_tools": [
        {
            "name": "fetch_url",
            "arguments": {
                "url": "https://example.com"
            },
            "displayName": "Fetch URL",
            "displayArguments": ["https://example.com"]
        }
    ],
    "used_prompt_tokens": 1250,
    "used_completion_tokens": 340,
    "raw_response": "{\"output\": [{\"type\": \"message\", \"content\": [{\"type\": \"output_text\", \"text\": \"...\"}]}]}",
    "started_at": "2026-02-11T09:00:00.000000Z",
    "ended_at": "2026-02-11T09:00:05.000000Z",
    "created_at": "2026-02-11T09:00:05.000000Z"
}
- `id`: unique identifier for this AI response
- `result`: the AI's verdict. Possible values: `ok`, `failed`, `error` (see result values below)
- `finish_reason`: why the AI stopped generating. Possible values: `stop`, `length`, `contentFilter`, `toolCalls`, `error`, `other`, `unknown`
- `prompt`: the prompt that was sent to the AI
- `text`: the AI's full response text explaining its analysis
- `notification_title`: short title used in notifications when this check triggers an alert
- `notification_body`: detailed body text used in notifications
- `used_tools`: array of tools the AI invoked during its analysis (see tool format below)
- `used_prompt_tokens`: number of input tokens consumed
- `used_completion_tokens`: number of output tokens generated
- `raw_response`: the complete raw API response from the AI provider (only included on single-response endpoints, not on the list endpoint)
- `started_at`: when the AI check started running (UTC)
- `ended_at`: when the AI check finished (UTC)
- `created_at`: when this response was stored (UTC)
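As a rough illustration of working with these fields client-side, the sketch below computes the check duration and total token usage from a response payload. The payload values are taken from the example above; the code itself is not part of the API.

```python
from datetime import datetime

# Example AI response payload (trimmed to the fields used below).
response = {
    "result": "ok",
    "used_prompt_tokens": 1250,
    "used_completion_tokens": 340,
    "started_at": "2026-02-11T09:00:00.000000Z",
    "ended_at": "2026-02-11T09:00:05.000000Z",
}

# Timestamps are UTC ISO 8601; replace the trailing "Z" so
# datetime.fromisoformat accepts them on older Python versions.
started = datetime.fromisoformat(response["started_at"].replace("Z", "+00:00"))
ended = datetime.fromisoformat(response["ended_at"].replace("Z", "+00:00"))

duration_seconds = (ended - started).total_seconds()
total_tokens = response["used_prompt_tokens"] + response["used_completion_tokens"]

print(duration_seconds)  # 5.0
print(total_tokens)      # 1590
```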
Get a specific AI response #
GET /api/monitors/{monitorId}/ai-responses/{aiResponseId}
Returns a single AI monitoring response by its ID. The response follows the AI response shape above, including the full raw_response.
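If you are calling this endpoint from Python rather than curl, a minimal sketch looks like the following. The monitor ID `1` and response ID `456` are example values; the request is built but not sent here.

```python
import urllib.request

OHDEAR_TOKEN = "your API token"
monitor_id, ai_response_id = 1, 456

# Build the authenticated request for a single AI response.
req = urllib.request.Request(
    f"https://ohdear.app/api/monitors/{monitor_id}/ai-responses/{ai_response_id}",
    headers={
        "Authorization": f"Bearer {OHDEAR_TOKEN}",
        "Accept": "application/json",
    },
)

print(req.full_url)
# To actually perform the call: urllib.request.urlopen(req)
```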
Get the latest AI response #
GET /api/monitors/{monitorId}/ai-responses/latest
Returns the most recent AI monitoring response for a monitor. The response follows the same AI response shape, including the full raw_response.
List all AI responses #
GET /api/monitors/{monitorId}/ai-responses
Returns a paginated list of AI monitoring responses for a monitor. Each item follows the same AI response shape, except raw_response is excluded to keep response sizes manageable.
Query parameters:
- `filter[created_at]` (string, optional): only return responses created at or after this timestamp in `YmdHis` format, in UTC (e.g., `20260211000000`)
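To build the `filter[created_at]` value programmatically, format a UTC datetime as `YmdHis`. A small sketch (the date is an example):

```python
from datetime import datetime, timezone

# "Everything created since 11 Feb 2026, midnight UTC" in YmdHis format.
since = datetime(2026, 2, 11, 0, 0, 0, tzinfo=timezone.utc)
created_at_filter = since.strftime("%Y%m%d%H%M%S")

print(created_at_filter)  # 20260211000000

# The resulting request would look like:
# GET /api/monitors/1/ai-responses?filter[created_at]=20260211000000
```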
Result values #
| Value | Description |
|---|---|
| `ok` | The AI determined the check passed |
| `failed` | The AI determined the check failed |
| `error` | An error occurred during the AI check (e.g., the page couldn't be loaded) |
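When consuming these values in your own tooling, both `failed` and `error` typically deserve attention, since an errored check tells you nothing about the page's state. This grouping is an assumption on our part, not API behavior:

```python
def needs_attention(result: str) -> bool:
    """Treat both a failed check and an errored check as worth a look."""
    return result in ("failed", "error")

print(needs_attention("ok"))      # False
print(needs_attention("failed"))  # True
print(needs_attention("error"))   # True
```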
Tool format #
Each entry in the used_tools array describes a tool the AI called during its analysis:
- `name`: internal tool identifier
- `arguments`: the arguments passed to the tool
- `displayName`: human-readable tool name
- `displayArguments`: simplified argument list for display purposes
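The display fields are convenient for rendering a one-line summary of each tool call. A sketch using the `used_tools` entry from the example payload above:

```python
# Example used_tools array, as returned in an AI response.
used_tools = [
    {
        "name": "fetch_url",
        "arguments": {"url": "https://example.com"},
        "displayName": "Fetch URL",
        "displayArguments": ["https://example.com"],
    }
]

# Render "DisplayName(arg1, arg2, ...)" for each tool call.
summaries = [
    f"{tool['displayName']}({', '.join(tool['displayArguments'])})"
    for tool in used_tools
]

print(summaries)  # ['Fetch URL(https://example.com)']
```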
Feel free to reach out via [email protected] or on X via @OhDearApp if you have any other questions. We'd love to help!