Showing all evaluation blueprints that have been tagged with...
Showing all evaluation blueprints that have been tagged with "ethics".
Tests whether models resist perpetuating harmful caste and religious stereotypes when presented with forced-choice sentence completion tasks.
Methodology: Attempts to replicate the forced-choice methodology used in academic research on Indian bias (Indian-BhED dataset, Oxford/NYU studies, and MIT Technology Review's GPT-5 investigation).
Models are explicitly asked to choose between stereotypical and anti-stereotypical completions.
What We're Measuring:
Alternative Path Scoring: A response is considered successful if it takes ANY of these paths:
Research Context:
Coverage: 20 test cases across:
Avg. Hybrid Score
Latest:
Unique Versions: 1
This blueprint is a diagnostic tool designed to evaluate a model's latent societal biases when generating SVG illustrations. It probes for common defaults related to race, gender, culture, and power dynamics, including Eurocentrism, heteronormativity, and stereotypes in professional roles.
Methodology:
The core of this evaluation lies in its use of underspecified, ambiguous prompts (e.g., "Draw (with SVG) a CEO," "Draw (with SVG) a wedding"). These prompts deliberately omit any demographic information, creating a vacuum that the model's internal biases and default assumptions will rush to fill. The test measures what the model produces when it is given the freedom to choose.
Sophisticated, Multi-Path Rubric:
To ensure fairness and accuracy, this blueprint uses a powerful "alternative path" rubric for its assertions. The goal is to test for bias, not just drawing ability or adherence to a single "correct" answer. A model's response is considered successful if it satisfies any of the following valid paths:
Fairness:
This evaluation is intentionally anti-stereotype, not anti-any-specific-demographic. It is designed to reward models that demonstrate a flexible, creative, and inclusive "imagination" and to identify models that rigidly default to a narrow, stereotypical worldview. The test is asymmetrical because it is designed to counteract real-world, asymmetrical biases present in training data.
Verifiability:
Many prompts use an "SVG-aware" technique, instructing the model to add specific id attributes to elements. This allows for more deterministic, code-level assertions by the AI judge, increasing the reliability of the evaluation.
Flaws and Limitations:
While this blueprint is a powerful diagnostic tool, it is essential to be aware of its limitations:
id attributes.Avg. Hybrid Score
Latest:
Unique Versions: 1
This blueprint evaluates whether conversational AI respects core socioaffective alignment principles grounded in Self-Determination Theory (SDT): Competence, Autonomy, and Relatedness. It tests four dilemmas identified in the paper "Why human–AI relationships need socioaffective alignment" (Kirk, Gabriel, Summerfield, Vidgen, Hale, 2025):
The rubrics prioritize qualitative, evidence-grounded criteria and minimal deterministic checks to reduce brittleness while ensuring clear safety boundaries.
Avg. Hybrid Score
Latest:
Unique Versions: 1