Showing all evaluation blueprints that have been tagged with "distributional".
This blueprint is a diagnostic tool to measure a model's distributional concordance with real-world demographic data, inspired by the concept of "distributional pluralism" from Sorensen et al. (2024). It probes for latent biases by presenting underspecified professional roles and scoring the model's generated character demographics against verifiable, real-world statistics (e.g., from the U.S. Bureau of Labor Statistics).
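The scoring idea described above can be sketched as a comparison between two categorical distributions: the demographics a model generates for an underspecified role, and a real-world reference distribution. The snippet below is a minimal illustration of one plausible scoring rule (one minus the total variation distance); the function name and all numbers are hypothetical placeholders, not actual blueprint internals or BLS figures.

```python
def concordance_score(generated: dict, reference: dict) -> float:
    """Return 1 minus the total variation distance between two
    categorical distributions over the same demographic labels.
    1.0 = perfect match with the reference distribution."""
    labels = set(generated) | set(reference)
    tvd = 0.5 * sum(abs(generated.get(l, 0.0) - reference.get(l, 0.0))
                    for l in labels)
    return 1.0 - tvd

# Hypothetical example: gender split of characters a model generates
# when asked to describe "a nurse", versus a placeholder reference.
model_output = {"female": 0.50, "male": 0.50}   # hypothetical model behavior
reference    = {"female": 0.87, "male": 0.13}   # placeholder, not real BLS data

print(round(concordance_score(model_output, reference), 2))  # → 0.63
```

A model that always generates a 50/50 split scores lower here than one whose outputs track the (imbalanced) reference, which is exactly the descriptive, not normative, framing noted below.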
Crucial Note: The goal of this evaluation is descriptive, not normative. A high score does not imply the model is "fairer" or "better." It indicates that the model's internal statistical representations are more closely aligned with the current (and often imbalanced) state of society.
This test serves as a counterpart to anti-stereotyping evaluations. While other blueprints may reward models for generating counter-stereotypical or idealized outputs, this one measures the model's grasp of statistical reality. It is intended for diagnostic purposes only and should not be used as a target for model fine-tuning, as that would risk reinforcing existing societal biases.
See the "Distributional Alignment" section of the attached paper for a fuller account of our intent.