
Confidence Meter

Show how confident the agent is in its next step.

Install

npx ax-depute@latest add confidence-meter
pnpm dlx ax-depute@latest add confidence-meter
yarn dlx ax-depute@latest add confidence-meter
bunx ax-depute@latest add confidence-meter

Overview

Displays an agent's confidence score with optional reasoning. Supports a full meter display (a horizontal bar) as well as a compact badge display for tight spaces like roster rows in Subagent Card.

[Live example: a meter at 74% with the "Medium" label and the reasoning "Score is slightly lower due to ambiguous phrasing in the third source document."]

Basic usage

<ConfidenceMeter
  value={85}
  showLabel={true}
  showValue={true}
  reasoning="I have successfully retrieved the user's billing history."
/>

Compact badge display

For tight UI spaces like tables or agent rosters, use the badge variant.

<ConfidenceMeter
  value={42}
  display="badge"
  size="sm"
/>

Props

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| value | number | undefined | Confidence score from 0-100. If omitted, the meter shows an indeterminate pulsing state. |
| display | 'meter' \| 'badge' | 'meter' | Structural variant: full horizontal bar (meter) or a compact inline pill (badge). |
| size | 'sm' \| 'md' \| 'lg' | 'md' | Size of the component. |
| showValue | boolean | true | Whether to show the exact numeric percentage. |
| showLabel | boolean | true | Whether to display a textual confidence category (e.g., "High", "Medium", "Low"). |
| reasoning | string | undefined | An optional explanation from the agent justifying its confidence score. |
| animate | boolean | true | Whether to animate the bar fill and color transitions when the value changes. |
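The mapping from a numeric score to the category shown by showLabel could look like the sketch below. The cutoffs (80 and 50) are illustrative assumptions, not documented behavior, though they are consistent with the 74% → "Medium" example above.

```typescript
// Hypothetical thresholds for mapping a 0-100 score to the textual
// category displayed when `showLabel` is true. The component's actual
// cutoffs may differ.
type ConfidenceLabel = "Low" | "Medium" | "High";

function confidenceLabel(value: number): ConfidenceLabel {
  if (value >= 80) return "High";
  if (value >= 50) return "Medium";
  return "Low";
}
```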

Composition flow

Confidence is typically re-evaluated continuously during an agent's run, and the meter is often surfaced right before an Approval Gate:

Plan Card → [Confidence Meter] → Approval Gate → Run Controls

If confidence drops below a specified threshold, the flow can be automatically paused for human review.
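The pause-on-low-confidence decision belongs to the surrounding orchestration code rather than the component itself. A minimal sketch, where gateOnConfidence and its threshold parameter are hypothetical names:

```typescript
// Hypothetical gating helper: decide whether to pause the flow for
// human review before the Approval Gate. `threshold` is supplied by
// the orchestration layer; it is not a Confidence Meter prop.
interface GateDecision {
  pause: boolean;
  reason?: string;
}

function gateOnConfidence(
  value: number | undefined,
  threshold: number
): GateDecision {
  // An indeterminate score (value omitted) is treated as low confidence.
  if (value === undefined || value < threshold) {
    return {
      pause: true,
      reason: `confidence ${value ?? "unknown"} is below threshold ${threshold}`,
    };
  }
  return { pause: false };
}
```

The caller would render the Approval Gate in a blocking state whenever pause is true, and let the run proceed otherwise.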

Design rationale

Why quantitative trust signaling? Humans struggle to calibrate trust in LLMs. By forcing the agent to output a confidence score before execution, the UI shifts from a "black box" to a measurable partnership.

Why the "reasoning" prop? A score without rationale is opaque. The reasoning prop provides space for the agent to explicitly state why it arrived at its score (e.g., "I found 3 distinct APIs matching the intent, which introduces ambiguity"), which is crucial for human intervention.
