Analysis configuration controls what OpenBat extracts from every conversation. You can toggle each analysis type on or off, define custom vocabularies for intents, topics, and flags, and let OpenBat auto-discover new patterns from your data.
All analysis settings live in your chatbot’s Settings page under the User Analysis and Assistant Analysis tabs.
User analysis
User analysis runs on every message sent by the end user. You can configure four analysis types.

Sentiment analysis
Default state: on

Every user message receives a sentiment score between -1 (very negative) and +1 (very positive). Scoring includes chunk-level reasoning so you can understand which part of a message drove the result.

Toggle this off if you do not need per-message sentiment tracking. Disabling it stops user_sentiments records from being created.

Intent classification
Default state: on

Each user message is assigned exactly one intent from your custom intent list. If the model’s confidence falls below 0.35 on all known intents, it suggests a new one automatically. You can approve or reject these suggestions.

Defining intents
Each intent definition has three fields:

| Field | Example | Purpose |
|---|---|---|
| Slug | troubleshooting | Machine-readable identifier used in filters and the API |
| Display name | Troubleshooting | Human-readable label shown in the dashboard |
| Description | User is trying to fix a broken feature or resolve an error | Tells the LLM when to assign this intent |
Write clear, specific descriptions. The LLM uses your description to decide which intent fits each message, so vague descriptions lead to misclassification.

Auto-suggestion: When the LLM encounters a user message that does not match any existing intent with sufficient confidence, it proposes a new intent definition. Suggestions appear in your intent list as pending items that you can approve or reject.
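To make the mechanism concrete, here is a minimal sketch of how the 0.35 confidence threshold and auto-suggestion could behave. The definition fields (slug, display name, description) and the threshold come from this page; the function, data shapes, and the second example intent are illustrative assumptions, not OpenBat's actual implementation.

```python
CONFIDENCE_THRESHOLD = 0.35  # below this on ALL known intents, a new intent is suggested

# Intent definitions follow the slug / display name / description structure.
# "billing_question" is a made-up example intent for illustration.
intents = [
    {
        "slug": "troubleshooting",
        "display_name": "Troubleshooting",
        "description": "User is trying to fix a broken feature or resolve an error",
    },
    {
        "slug": "billing_question",
        "display_name": "Billing question",
        "description": "User is asking about invoices, charges, or plans",
    },
]

def classify_or_suggest(scores: dict[str, float]) -> dict:
    """scores maps intent slug -> model confidence for one user message."""
    best_slug, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score < CONFIDENCE_THRESHOLD:
        # No known intent fits well enough: propose a new, pending intent.
        return {"intent": None, "suggest_new_intent": True}
    # Exactly one intent is assigned per user message.
    return {"intent": best_slug, "suggest_new_intent": False}

print(classify_or_suggest({"troubleshooting": 0.82, "billing_question": 0.10}))
print(classify_or_suggest({"troubleshooting": 0.20, "billing_question": 0.15}))
```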
Topic detection
Default state: off (opt-in)

When enabled, each user message can be tagged with up to 3 topics. Topics are useful when a single message touches multiple subject areas — for example, a message about billing that also mentions a bug.

Defining topics
Topic definitions follow the same structure as intents: slug, display name, and description. Auto-discovery works the same way — the LLM suggests new topics when it encounters patterns not covered by your current list.

Flag detection
Default state: on

Flags represent business signals detected in user messages. Unlike intents, multiple flags can fire on a single message.

System flags
These six flags are always available:

| Flag | What it detects |
|---|---|
| churn_risk | Language suggesting the user may leave |
| frustrated_user | Expressions of frustration or anger |
| buying_signal | Interest in purchasing, upgrading, or expanding usage |
| competitor_mention | References to competing products or services |
| urgent_request | Time-sensitive or high-priority asks |
| confused_user | Signs the user does not understand the response or product |
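Because several flags can fire on the same message, flag results are best thought of as a set rather than a single label. This sketch uses the six system flags from the table above; the detection function and the example message are illustrative assumptions.

```python
# The six system flags, as listed in the table above.
SYSTEM_FLAGS = [
    "churn_risk", "frustrated_user", "buying_signal",
    "competitor_mention", "urgent_request", "confused_user",
]

def fired_flags(detections: dict[str, bool]) -> list[str]:
    """Return every system flag detected for one message, in table order."""
    return [slug for slug in SYSTEM_FLAGS if detections.get(slug, False)]

# Hypothetical detections for a message like:
# "I'm fed up, if this isn't fixed today we're moving to CompetitorX"
message_detections = {
    "churn_risk": True,
    "frustrated_user": True,
    "competitor_mention": True,
    "urgent_request": True,
}
print(fired_flags(message_detections))
# -> ['churn_risk', 'frustrated_user', 'competitor_mention', 'urgent_request']
```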
Custom flags
You can add your own flags per chatbot. Each custom flag follows the same slug, display name, and description structure. Use custom flags to track domain-specific signals that matter to your business.

Assistant analysis
Assistant analysis runs on every message generated by your chatbot. You can configure four analysis types.

Behavior classification
Default state: on

Every assistant message is classified into one behavior category. The vocabulary is fixed and cannot be customized.

| Behavior | What it means |
|---|---|
| hallucinating | The response contains fabricated information |
| dodging | The assistant avoids answering the question directly |
| yes_man | The assistant agrees without substance or pushback |
| over_apologizing | Excessive apologies that add no value |
| redirecting | The assistant points the user elsewhere instead of helping |
| verbose | The response is unnecessarily long |
| robotic | The tone feels mechanical or templated |
| helpful | The response is on-topic and useful |
| other | None of the above categories apply |
Outcome classification
Default state: on

Each assistant response is classified by resolution outcome. The vocabulary is fixed.

| Outcome | What it means |
|---|---|
| fully_resolved | The user’s question or problem was completely addressed |
| partially_resolved | Some progress was made but the issue is not fully solved |
| deflected | The assistant redirected without attempting to solve |
| capability_failure | The assistant could not help due to a limitation |
| incorrect | The response contained wrong information |
| other | None of the above outcomes apply |
Response quality
Default state: on

Each assistant message is scored across six quality dimensions. Every dimension receives its own score, and the weighted total gives you an overall quality rating.

| Dimension | Weight | What it measures |
|---|---|---|
| Relevance | 25% | Does the response address what the user asked? |
| Completeness | 25% | Does it cover the full scope of the question? |
| Clarity | 20% | Is the response easy to understand? |
| Accuracy | 15% | Is the information factually correct? |
| Conciseness | 10% | Is the response free of unnecessary filler? |
| Tone match | 5% | Does the tone fit the context of the conversation? |
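The overall rating is a weighted sum of the six dimension scores using the weights in the table above. The per-dimension scores below are invented example values, and the 0-to-1 score scale is an assumption for illustration.

```python
# Weights from the quality dimensions table (they sum to 100%).
WEIGHTS = {
    "relevance": 0.25,
    "completeness": 0.25,
    "clarity": 0.20,
    "accuracy": 0.15,
    "conciseness": 0.10,
    "tone_match": 0.05,
}

def overall_quality(scores: dict[str, float]) -> float:
    """Weighted total across the six quality dimensions."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Hypothetical per-dimension scores for one assistant message.
scores = {
    "relevance": 0.9, "completeness": 0.8, "clarity": 1.0,
    "accuracy": 0.9, "conciseness": 0.6, "tone_match": 1.0,
}
print(round(overall_quality(scores), 3))  # -> 0.87
```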
Assistant flag detection
Default state: off (opt-in)

This applies the same flag mechanism used for user messages to assistant responses. Enable it when you want to detect specific response patterns — for example, flagging when the assistant mentions a deprecated feature or offers a discount without authorization.
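A custom assistant flag uses the same slug, display name, and description structure as user-side flags. This definition is a hypothetical example built from the deprecated-feature scenario mentioned above, not a built-in flag.

```python
# Hypothetical custom assistant flag definition.
deprecated_feature_flag = {
    "slug": "deprecated_feature_mention",
    "display_name": "Deprecated feature mention",
    "description": (
        "The assistant references a feature that has been deprecated "
        "and should no longer be recommended to users"
    ),
}
print(deprecated_feature_flag["slug"])
```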
Generate definitions
You do not have to write intent, topic, or flag definitions from scratch. The Generate Definitions feature analyzes your existing conversation history and suggests a vocabulary tailored to your chatbot’s domain.
This is available for:
- Intent definitions — generates intents based on recurring user question patterns
- Topic definitions — generates topics based on subject areas that appear in your conversations
- Flag definitions — generates custom flags based on business signals detected in your data
To use it, open the relevant definition list in your analysis settings and select Generate Definitions. OpenBat scans recent conversations and returns a set of suggested definitions with slugs, display names, and descriptions. Review each suggestion and accept or discard it.
Generate Definitions works best when you already have a meaningful volume of conversations. If your chatbot is new, start by defining a few intents or topics manually and let auto-discovery fill in the gaps over time.