Confidence Meter
Show how confident the agent is in its next step.
Install

npx ax-depute@latest add confidence-meter
pnpm dlx ax-depute@latest add confidence-meter
yarn dlx ax-depute@latest add confidence-meter
bunx ax-depute@latest add confidence-meter

Overview
Displays an agent's confidence score with optional reasoning. Supports a full meter display (a horizontal bar) as well as a compact badge display for tight spaces like roster rows in Subagent Card.
Basic usage
<ConfidenceMeter
  value={85}
  showLabel={true}
  showValue={true}
  reasoning="I have successfully retrieved the user's billing history."
/>

Compact Badge Display
For tight UI spaces like tables or agent rosters, use the badge variant.
<ConfidenceMeter
  value={42}
  display="badge"
  size="sm"
/>

Props
| Prop | Type | Default | Description |
|---|---|---|---|
| value | number | undefined | Confidence score from 0-100. If omitted, the meter shows an indeterminate pulsing state. |
| display | 'meter' \| 'badge' | 'meter' | Structural variant: full horizontal bar (meter) or a compact inline pill (badge). |
| size | 'sm' \| 'md' \| 'lg' | 'md' | Size of the component. |
| showValue | boolean | true | Whether to show the exact numeric percentage. |
| showLabel | boolean | true | Whether to display a textual confidence category (e.g., "High", "Medium", "Low"). |
| reasoning | string | undefined | An optional explanation from the agent justifying its confidence score. |
| animate | boolean | true | Whether to animate the bar fill and color transitions when the value changes. |
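The showLabel prop maps the numeric value onto a textual category. As a rough sketch of that mapping, the helper below shows one plausible implementation; the function name and the exact cutoffs (70 and 40) are assumptions for illustration, not the component's actual internals:

```typescript
// Hypothetical sketch: maps a 0-100 confidence value to the textual
// category rendered when showLabel is true. The 70/40 thresholds are
// assumed; check the installed component source for the real cutoffs.
type ConfidenceLabel = "High" | "Medium" | "Low";

function confidenceLabel(value: number): ConfidenceLabel {
  if (value >= 70) return "High";
  if (value >= 40) return "Medium";
  return "Low";
}
```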
When to use
- The agent's confidence in its next action is a meaningful signal — e.g., before an Approval Gate, where confidence should influence the human's decision
- The agent is operating in a domain with variable accuracy (medical, legal, financial) where uncertainty carries real stakes
- You want to preemptively surface a human review trigger: if confidence drops below a threshold, escalate automatically
When not to use
- Your agent backend doesn't produce calibrated confidence scores — displaying a fabricated or uncalibrated number is actively misleading and erodes trust
- The operation is routine and fully reversible — injecting a confidence signal for low-stakes actions adds noise without value
- The agent has already completed the action — Confidence Meter communicates intent before execution, not retrospective certainty
Accessibility
- The meter renders as a native HTML `<meter>` element with `min`, `max`, and `value` attributes set, giving screen readers a built-in percentage readout
- The confidence label (High / Medium / Low) is always present as visible text, not conveyed through color alone
- Color differentiation (green / amber / red) meets WCAG 3:1 minimum contrast ratio against the background
- Reasoning text is rendered as a visible `<p>` and not hidden behind a tooltip — no hover-only content
- Badge variant includes `aria-label="Confidence: 42%"` for screen reader clarity in dense layouts
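A minimal sketch of how such an accessible name could be constructed, treating an omitted value as the indeterminate state; the function name and the "unknown" wording are assumptions, not the component's actual code:

```typescript
// Hypothetical helper: builds the badge's aria-label string.
// An undefined value corresponds to the indeterminate pulsing state.
function confidenceAriaLabel(value?: number): string {
  return value === undefined ? "Confidence: unknown" : `Confidence: ${value}%`;
}
```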
Solution Patterns
Confidence Meter is typically evaluated continuously during an agent's runtime, often surfaced right before an Approval Gate:
Plan Card → [Confidence Meter] → Approval Gate → Run Controls

If confidence drops below a specified threshold, the flow can be automatically paused for human review.
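The threshold check that gates the Approval Gate step can be sketched as a small predicate; the function name and the default threshold of 60 are illustrative assumptions, and treating a missing score as low confidence is a conservative choice, not library behavior:

```typescript
// Hypothetical sketch of the auto-pause decision before an Approval Gate.
// An indeterminate score (undefined) is conservatively treated as
// requiring review; the default threshold of 60 is an assumption.
function shouldPauseForReview(confidence?: number, threshold = 60): boolean {
  if (confidence === undefined) return true;
  return confidence < threshold;
}
```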
Design rationale
Why quantitative trust signaling? Humans struggle to calibrate trust in LLMs. Without a signal, every agent response carries the same implicit weight — whether the model is 95% confident or guessing. Surfacing a score before execution creates a moment of deliberate comparison: the human sees the number, and that number reframes how they read the output. A 58% confidence on a financial recommendation is a different document than a 96% one.
Why the reasoning prop? A score without rationale is still partially opaque. An agent that outputs 58% with no explanation is almost as unhelpful as no score at all. The reasoning prop gives the agent space to surface what it's uncertain about — missing context, competing interpretations, ambiguous inputs. That explanation is what makes the confidence score actionable rather than decorative, and what gives the human something to actually verify.