Evaluation Metrics

The Eval Metrics page in the dashboard provides visibility into how your feature flags are being evaluated.

Accessing Metrics

Navigate to Eval Metrics in the sidebar.

What's Displayed

Summary Statistics

  • Total evaluations since the last reset
  • Window start — when the current counting window began

Per-Environment Breakdown

Evaluation counts grouped by environment, showing which environments are most active.
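The grouping behind this view can be sketched in a few lines. The event shape below (`flag_key`, `env_id`, `reason` fields) is an assumption for illustration; the real server aggregates these counts internally.

```python
from collections import Counter

# Hypothetical raw evaluation events (illustrative shape, not the server's wire format).
events = [
    {"flag_key": "new-checkout", "env_id": "prod", "reason": "TARGETED"},
    {"flag_key": "new-checkout", "env_id": "staging", "reason": "FALLTHROUGH"},
    {"flag_key": "dark-mode", "env_id": "prod", "reason": "ROLLOUT"},
]

# Per-environment breakdown: count evaluations grouped by env_id.
by_env = Counter(e["env_id"] for e in events)
# by_env == {"prod": 2, "staging": 1}
```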

Reason Distribution

A color-coded breakdown of evaluation reasons:

Reason               Color    Meaning
TARGETED             Green    Matched a targeting rule
ROLLOUT              Blue     Included in percentage rollout
FALLTHROUGH          Gray     No rules matched, returned default
DISABLED             Red      Flag is off
VARIANT              Purple   A/B variant assigned
MUTUALLY_EXCLUDED    Orange   Lost mutex group contest
PREREQUISITE_FAILED  Yellow   Prerequisite not met
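A minimal sketch of the legend above as a lookup, e.g. for rendering the same colors in a custom report. The mapping mirrors the table; the `reason_color` helper and its gray fallback are assumptions, not part of the dashboard's API.

```python
# Color assignments mirroring the dashboard's reason legend (sketch).
REASON_COLORS = {
    "TARGETED": "green",
    "ROLLOUT": "blue",
    "FALLTHROUGH": "gray",
    "DISABLED": "red",
    "VARIANT": "purple",
    "MUTUALLY_EXCLUDED": "orange",
    "PREREQUISITE_FAILED": "yellow",
}

def reason_color(reason: str) -> str:
    """Return the legend color for a reason, defaulting to gray for unknowns."""
    return REASON_COLORS.get(reason, "gray")
```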

Top Evaluated Flags

A bar chart showing the most frequently evaluated flags, helping identify:

  • High-traffic flags that need optimization
  • Unused flags that could be cleaned up
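Ranking flags for a chart like this is a simple top-N over the aggregate counts. The flag names and counts below are made up for illustration.

```python
from collections import Counter

# Hypothetical per-flag evaluation counts.
flag_counts = Counter({"new-checkout": 1200, "dark-mode": 87, "legacy-banner": 2})

# Highest-traffic flags first; flags near zero are cleanup candidates.
top = flag_counts.most_common(2)
# top == [("new-checkout", 1200), ("dark-mode", 87)]
```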

Resetting Counters

Click Reset Counters to clear all metrics and start a fresh counting window. This is useful for:

  • Starting a new measurement period
  • Establishing a clean baseline after deploying changes
  • Clearing test data

Data Characteristics

  • Metrics are stored in-memory on the server
  • Metrics are lost on server restart
  • Counters track flag_key + env_id + reason combinations
  • No per-user tracking — metrics are aggregate counts
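The characteristics above can be sketched as a small in-memory store: one counter per (flag_key, env_id, reason) combination, a window start timestamp, and a reset that clears both. The `EvalMetrics` class and its method names are illustrative assumptions, not the server's actual implementation.

```python
import time
from collections import Counter

class EvalMetrics:
    """In-memory aggregate counters (sketch); all data is lost on restart."""

    def __init__(self) -> None:
        self.counters: Counter = Counter()
        self.window_start = time.time()  # when the current counting window began

    def record(self, flag_key: str, env_id: str, reason: str) -> None:
        # One counter per flag_key + env_id + reason combination.
        # No per-user tracking: only aggregate counts are kept.
        self.counters[(flag_key, env_id, reason)] += 1

    def total(self) -> int:
        # Total evaluations since the last reset.
        return sum(self.counters.values())

    def reset(self) -> None:
        # Clear all metrics and start a fresh counting window.
        self.counters.clear()
        self.window_start = time.time()
```

Because the counters live only in server memory, a restart behaves like an implicit reset.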