# Confidence Meter
Show how confident the agent is in its next step.
## Install

```bash
npx ax-depute@latest add confidence-meter
# or, with another package manager:
pnpm dlx ax-depute@latest add confidence-meter
yarn dlx ax-depute@latest add confidence-meter
bunx ax-depute@latest add confidence-meter
```

## Overview
Displays an agent's confidence score with optional reasoning. Supports a full meter display (a horizontal bar) as well as a compact badge display for tight spaces like roster rows in Subagent Card.
## Basic usage
```tsx
<ConfidenceMeter
  value={85}
  showLabel={true}
  showValue={true}
  reasoning="I have successfully retrieved the user's billing history."
/>
```

## Compact Badge Display
For tight UI spaces like tables or agent rosters, use the badge variant.
```tsx
<ConfidenceMeter
  value={42}
  display="badge"
  size="sm"
/>
```

## Props
| Prop | Type | Default | Description |
|---|---|---|---|
| `value` | `number` | `undefined` | Confidence score from 0–100. If omitted, the meter shows an indeterminate pulsing state. |
| `display` | `'meter' \| 'badge'` | `'meter'` | Structural variant: a full horizontal bar (`meter`) or a compact inline pill (`badge`). |
| `size` | `'sm' \| 'md' \| 'lg'` | `'md'` | Size of the component. |
| `showValue` | `boolean` | `true` | Whether to show the exact numeric percentage. |
| `showLabel` | `boolean` | `true` | Whether to display a textual confidence category (e.g., "High", "Medium", "Low"). |
| `reasoning` | `string` | `undefined` | An optional explanation from the agent justifying its confidence score. |
| `animate` | `boolean` | `true` | Whether to animate the bar fill and color transitions when the value changes. |
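The exact score ranges behind the "High", "Medium", and "Low" labels are not specified here; a plausible mapping might look like the following sketch, where the thresholds and the `confidenceCategory` helper are illustrative assumptions, not part of the component's source:

```typescript
// Hypothetical mapping from a 0-100 confidence score to the textual
// category shown when `showLabel` is true. The threshold values (70, 40)
// are assumptions for illustration only.
type ConfidenceCategory = "High" | "Medium" | "Low";

function confidenceCategory(value: number): ConfidenceCategory {
  if (value >= 70) return "High";
  if (value >= 40) return "Medium";
  return "Low";
}

console.log(confidenceCategory(85)); // "High"
console.log(confidenceCategory(42)); // "Medium"
```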
## Composition flow
Confidence Meter is typically evaluated continuously during an agent's runtime, often surfaced right before an Approval Gate:

```
Plan Card → [Confidence Meter] → Approval Gate → Run Controls
```

If confidence drops below a specified threshold, the flow can be automatically paused for human review.
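The threshold-based pause described above lives in application code rather than in the component itself; a minimal sketch of that gating logic, where `shouldPauseForReview` and the default threshold of 60 are illustrative assumptions rather than ax-depute API:

```typescript
// Sketch of the pause-for-review decision: halt the agent flow when the
// reported confidence falls below a caller-chosen threshold.
// `shouldPauseForReview` is a hypothetical helper, not part of ax-depute.
function shouldPauseForReview(
  confidence: number | undefined,
  threshold = 60
): boolean {
  // A missing score corresponds to the meter's indeterminate state,
  // which we conservatively treat as requiring human review.
  if (confidence === undefined) return true;
  return confidence < threshold;
}

console.log(shouldPauseForReview(42)); // true  → route to Approval Gate
console.log(shouldPauseForReview(85)); // false → proceed to Run Controls
```

Treating an undefined score as "pause" is a deliberate fail-safe choice: an agent that cannot report confidence should not bypass review.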
## Design rationale
**Why quantitative trust signaling?** Humans struggle to calibrate trust in LLMs. By requiring the agent to output a confidence score before execution, the UI shifts from a "black box" to a measurable partnership.

**Why the `reasoning` prop?** A score without rationale is opaque. The `reasoning` prop gives the agent space to state explicitly why it arrived at its score (e.g., "I found 3 distinct APIs matching the intent, which introduces ambiguity"), which is crucial for informed human intervention.