1. Why this matters
When an AI assistant sounds sure of itself but is actually guessing, the result can be worse than no answer at all. Fullview AI avoids that trap by attaching a Confidence Score to every topic‑level answer it generates and by letting you enforce a Confidence Score Minimum. Together, these two controls determine whether the assistant:
answers the user immediately,
requests more detail, or
defers/escalates to a human teammate.
Because you have direct, no‑code control over the threshold, you can keep customer‑facing quality where you want it without resorting to the opaque “trust‑me” settings of many other AI providers.
2. What is a Confidence Score?
A Confidence Score is an internal, continuously updated signal (0–100) that expresses the agent’s self‑estimated probability of providing a complete, correct, and context‑appropriate answer for a given topic.
Complete – the reply will cover all core sub‑questions we have seen in training.
Correct – the content matches documentation, knowledge articles, and conversational feedback.
Context‑appropriate – the answer fits the product scope you defined in your Organization Prompt (see Organization Prompt article).
Note: We deliberately keep the exact scoring formula private. This protects the model from adversarial prompt‑hacking and preserves our competitive edge while still giving you full functional control of confidence thresholds.
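To make the 0–100 scale concrete, here is a minimal sketch of what a topic‑level confidence record could look like. The shape and field names (topicId, score, updatedAt) are illustrative assumptions only, not Fullview’s actual schema, and the scoring formula itself remains private as noted above.

```typescript
// Illustrative only: shape and field names are assumptions, not Fullview's schema.
interface TopicConfidence {
  topicId: string;   // the topic the user's question was matched to
  score: number;     // 0-100: self-estimated probability of a complete,
                     // correct, and context-appropriate answer
  updatedAt: Date;   // refreshed in near real-time as signals change
}

// A score of 72 reads as "roughly a 72% self-estimated chance that the
// answer will be complete, correct, and within scope".
const example: TopicConfidence = {
  topicId: "billing-refunds",
  score: 72,
  updatedAt: new Date(),
};
```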
3. How the score is calculated (high‑level view)
| Signal category | Examples | Weighting notes |
| --- | --- | --- |
| Historical performance on the topic | End‑user thumbs‑up/down, QA audits | Negative feedback lowers the score fast; positive feedback decays more slowly to avoid over‑confidence. |
| Scope alignment | Fit to Organization Prompt constraints | Questions that are out of scope or irrelevant to the nature of your application will not be attempted. |
| Training data | Knowledge center articles, human training | Fullview AI prioritizes human training over knowledge center articles wherever human training has been applied. |
Scores are recalculated in near real‑time whenever any of these signals change.
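The exact formula is private, but the asymmetric weighting described above (negative feedback pulls the score down quickly, positive feedback lifts it slowly) can be illustrated with a deliberately simplified sketch. The step sizes below are invented for demonstration and do not reflect Fullview’s real weights.

```typescript
// Purely illustrative: the real scoring formula is private, and these step
// sizes are invented for demonstration only.
const NEGATIVE_STEP = 12; // negative feedback lowers the score fast
const POSITIVE_STEP = 3;  // positive feedback raises it slowly, to avoid over-confidence

function applyFeedback(score: number, feedback: "up" | "down"): number {
  const delta = feedback === "down" ? -NEGATIVE_STEP : POSITIVE_STEP;
  // Keep the result within the documented 0-100 range.
  return Math.min(100, Math.max(0, score + delta));
}

// A single thumbs-down costs far more than a single thumbs-up restores:
let score = 70;
score = applyFeedback(score, "down"); // 58
score = applyFeedback(score, "up");   // 61
```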
4. Confidence Score Minimums: What they are and why they matter
A Confidence Score Minimum is the threshold below which the agent will not automatically answer a user. Administrators and workspace owners can:
Define the minimum confidence needed before the agent answers or guides the user on their screen.
Define fallback behavior. Fallbacks are configured in your help center software via workflows or custom routing logic (a minimal routing sketch follows this list). For example:
Ask a clarifying question (recommended for onboarding flows).
Transfer to human chat or ticketing queue.
Surface a curated help‑center article only.
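To show how the threshold gates these outcomes, here is a minimal routing sketch. The names routeReply and FallbackAction, and the specific option strings, are hypothetical and not part of Fullview’s API; the decision logic simply mirrors the behavior described above.

```typescript
// Hypothetical names throughout; this only illustrates the gating behavior
// described above, not Fullview's actual workflow API.
type FallbackAction =
  | "ask_clarifying_question"   // recommended for onboarding flows
  | "transfer_to_human"         // human chat or ticketing queue
  | "surface_article";          // curated help-center article only

function routeReply(score: number, minimum: number, fallback: FallbackAction): string {
  // At or above the minimum, the agent answers or guides the user directly.
  if (score >= minimum) return "answer_user";
  // Below the minimum, the agent never answers automatically.
  return fallback;
}

// With a 40% minimum, a topic scoring 35 falls back instead of answering:
routeReply(35, 40, "ask_clarifying_question"); // -> "ask_clarifying_question"
```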
5. How to adjust confidence score minimums
You can find Confidence Score settings under Settings -> AI Agent -> Minimum confidence score.
See video below:
6. Best‑practice recommendations
| Goal | Suggested setting |
| --- | --- |
| Stay within the recommended range | A threshold in the recommended 20–60% range ensures a good balance of automation volume and high‑quality answers (remember, confidence score does not equal resolution rate). |
| Handle edge cases | Create explicit “Out‑of‑scope” topics for peace of mind. |
| Lift confidence with training | Make thumbs‑up/down mandatory for internal testers; review topics with negative feedback and apply human training to improve quickly. |
7. Frequently Asked Questions
Does a low score mean the AI can't successfully answer?
Not at all. It often means the question is new, and confidence scores do not equal resolution rates. Treat a low score as an invitation to enrich your training data (e.g. human training) or update your Organization Prompt.
How often are scores updated?
Within seconds of new feedback.
What happens if I set the minimum too high?
You’ll lower answer coverage and may end up with fewer answered questions, even in cases where the AI could have given a correct answer. We therefore recommend staying within a 20–60% threshold.
8. Key takeaways
Confidence Scores quantify answer reliability; Confidence Score Minimums give you the power to control the agent’s behavior.
Scores adapt in real time, driven by training data quality, feedback, and scope alignment.
Thoughtful threshold tuning plus active feedback loops keep your assistant both helpful and safe.