Behavior types
Hallucinating
The assistant generates false, unsupported, or fabricated information. This is the most dangerous behavior because users may trust incorrect answers.
- Examples: making up product features that don’t exist, citing fake documentation, providing incorrect procedures
- Severity levels indicate how confident the detection is and how impactful the hallucination could be
- Appears as a badge on assistant messages in the conversation detail view
Dodging
The assistant avoids answering the user’s actual question. Instead of addressing the issue directly, it changes the subject or gives a vague response.
- Examples: responding to a specific question with generic advice, acknowledging the question without answering it
- Often indicates the bot lacks knowledge about the topic or has guardrails that are too restrictive
Verbose
The assistant gives unnecessarily long responses when a shorter answer would be better. Verbosity wastes the user’s time and can bury important information.
- Examples: repeating information the user already knows, adding excessive caveats, listing all options when one is clearly best
- Common in chatbots prompted to be “helpful and thorough”
Redirecting
The assistant sends the user somewhere else instead of answering their question directly. While sometimes appropriate (e.g., directing to account settings), excessive redirecting frustrates users.
- Examples: “Please contact support for this” when the bot could answer, linking to docs instead of explaining
Resolution outcomes
Beyond individual behaviors, OpenBat classifies how each conversation ends:

| Outcome | Description |
|---|---|
| Resolved | The user’s question or issue was successfully addressed |
| Deflected | The bot redirected the user without fully answering |
| Partially resolved | The bot addressed some but not all of the user’s needs |
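Once conversations are exported, these outcomes can be tallied to reproduce the dashboard split. A minimal sketch, assuming an export of conversation records with an `outcome` field (the field name and values are illustrative assumptions, not a documented OpenBat schema):

```python
from collections import Counter

# Hypothetical conversation records; the "outcome" field and its
# values are assumed, not a documented export format.
conversations = [
    {"id": "c1", "outcome": "resolved"},
    {"id": "c2", "outcome": "deflected"},
    {"id": "c3", "outcome": "resolved"},
    {"id": "c4", "outcome": "partially_resolved"},
]

# Tally the Resolved/Deflected/Partially resolved split.
counts = Counter(c["outcome"] for c in conversations)
print(counts)  # Counter({'resolved': 2, 'deflected': 1, 'partially_resolved': 1})
```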
Quality flags
In addition to behavior types and outcomes, OpenBat detects quality issues:

- Hallucination risk — High probability of factual errors
- Sensitive data exposure — Bot may be revealing information it shouldn’t
- Off-topic response — Response doesn’t match the user’s question
- Inconsistent information — Bot contradicts itself or earlier responses
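When triaging, it is often useful to pull out just the messages that carry any quality flag. A minimal sketch, assuming message records with a `flags` list (a hypothetical shape, not a documented OpenBat schema):

```python
# Hypothetical assistant-message records; the "flags" field is an
# assumed shape, not a documented OpenBat export format.
messages = [
    {"id": "m1", "flags": []},
    {"id": "m2", "flags": ["hallucination_risk"]},
    {"id": "m3", "flags": ["off_topic", "inconsistent_information"]},
]

# Surface messages carrying any quality flag for manual review.
flagged = [m for m in messages if m["flags"]]
print([m["id"] for m in flagged])  # ['m2', 'm3']
```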
Where behavior data appears
- Dashboard Assistant Performance tab — Behavior alerts table with counts and severity
- Dashboard Resolution outcomes — Donut chart showing Resolved/Deflected/Partially Resolved
- Conversation detail — Behavior and outcome badges on each assistant message
- Workflows — Trigger automated alerts when specific behaviors are detected
Acting on behavior data
- Review flagged conversations — Click through behavior alerts on the dashboard to see the actual conversations
- Improve prompts — Use behavior patterns to identify where your system prompts need adjustment
- Set up alerts — Create workflows that notify your team when hallucination or dodging is detected
- Track improvements — After prompt changes, monitor the behavior alerts to see if issues decrease
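The last step, tracking improvements, comes down to comparing alert volume before and after a prompt change. A minimal sketch with made-up daily counts (the dates and numbers are illustrative, and the data shape is an assumption rather than an OpenBat export format):

```python
from datetime import date

# Hypothetical daily behavior-alert counts around a prompt change
# deployed on 2024-06-15; all values are illustrative.
change_date = date(2024, 6, 15)
daily_alerts = {
    date(2024, 6, 13): 9,
    date(2024, 6, 14): 11,
    date(2024, 6, 16): 5,
    date(2024, 6, 17): 4,
}

before = [n for d, n in daily_alerts.items() if d < change_date]
after = [n for d, n in daily_alerts.items() if d > change_date]

# Compare mean daily alert volume before and after the change.
mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(mean_before, mean_after)  # 10.0 4.5
```

A real comparison would use a longer window on each side to smooth out day-to-day noise.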
Behavior detection runs only on assistant messages. Enable it in the analysis configuration under the Assistant Analysis tab.