What is Prompt Performance?

Prompt Performance is the measured outcome of running a prompt, such as whether a target entity is mentioned, whether a source is cited, and how the answer ranks or changes over time. It is evaluated with consistent metrics across repeated runs.

Quick definition

Prompt Performance is how well a prompt produces the desired visibility or citation outcomes.

How Prompt Performance works

  • Prompt Performance is calculated from repeated prompt runs and stored results.
  • Prompt Performance can include citation presence, citation rank, and answer ranking.
  • Prompt Performance can be analyzed by intent, provider, and time window.
  • Prompt Performance can change due to model drift or changes in retrieval sources.

Why Prompt Performance matters

Prompt Performance matters because each prompt represents a distinct discovery path: a different way users can surface an entity, brand, or source through AI answers.

Prompt Performance supports:

  • prioritizing which prompts require content improvements
  • measuring whether changes improve citation in AI answers
  • detecting instability in LLM answers for critical prompts
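The last point, detecting instability, can be approximated by comparing repeated answers to the same prompt. A minimal sketch, assuming answers are stored as plain strings and using mean pairwise text similarity as a stand-in for a real semantic comparison (the function name is hypothetical):

```python
from difflib import SequenceMatcher
from itertools import combinations

def answer_stability(answers: list[str]) -> float:
    """Mean pairwise similarity (0..1) of repeated answers to one prompt.

    Low values flag prompts whose answers drift between runs and may
    deserve closer monitoring or content fixes.
    """
    if len(answers) < 2:
        return 1.0  # a single run gives no evidence of instability
    sims = [SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(answers, 2)]
    return sum(sims) / len(sims)

# Identical answers score 1.0; disjoint answers score 0.0.
print(answer_stability(["Prompt Performance is a metric.",
                        "Prompt Performance is a metric."]))
```

In a production setting an embedding-based similarity would likely replace `SequenceMatcher`, but the aggregation logic stays the same.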

Example use cases

  • Measuring the percentage of runs where a domain is cited for a prompt.
  • Tracking whether a prompt consistently returns the same definition.
  • Comparing Prompt Performance across multiple query intent categories.
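The first use case, the percentage of runs citing a given domain, reduces to a simple check over each run's cited sources. A sketch assuming cited sources are stored as URL lists per run (the function name and substring-based domain match are illustrative simplifications):

```python
def domain_citation_rate(cited_sources_per_run: list[list[str]], domain: str) -> float:
    """Share of runs whose cited sources include the given domain.

    Each inner list holds the URLs cited in one run of the prompt.
    """
    hits = sum(
        any(domain in url for url in sources)
        for sources in cited_sources_per_run
    )
    return hits / len(cited_sources_per_run)

# Example: the domain appears in one of two runs -> 0.5.
sources = [
    ["https://example.com/page", "https://other.org/post"],
    ["https://other.org/post"],
]
print(domain_citation_rate(sources, "example.com"))
```

A stricter implementation would parse each URL's hostname rather than substring-match, but the percentage calculation is the same.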

Related terms