Showing all evaluation blueprints that have been tagged with "helpfulness--actionability".
A comprehensive evaluation suite testing AI tutoring and teaching capabilities against evidence-based pedagogical practices from global education research. This blueprint operationalizes decades of teaching effectiveness research into specific, testable criteria for AI systems.
Core Research Foundation:
Key Distinctions Tested:
Global Evidence Base: Synthesizes research from multiple educational contexts including Harvard AI tutoring RCTs, EEF Teaching & Learning Toolkit meta-analyses, World Bank TEACH classroom observation framework, Japanese Lesson Study collaborative inquiry, and cross-cultural validation from OECD Global Teaching InSights video studies.
Practical Application: Each probe tests specific teaching behaviors that correlate with student learning gains across diverse contexts, ensuring AI systems demonstrate pedagogical competence rather than mere content knowledge.
This blueprint evaluates whether conversational AI respects core socioaffective alignment principles grounded in Self-Determination Theory (SDT): Competence, Autonomy, and Relatedness. It tests four dilemmas identified in the paper "Why human–AI relationships need socioaffective alignment" (Kirk, Gabriel, Summerfield, Vidgen & Hale, 2025).
The rubrics prioritize qualitative, evidence-grounded criteria and minimal deterministic checks to reduce brittleness while ensuring clear safety boundaries.
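As a rough illustration of that design choice, the sketch below pairs judge-scored qualitative criteria with a single deterministic safety gate. The structure and field names are hypothetical, not the blueprint's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """A qualitative criterion scored by an LLM judge on evidence in the response."""
    description: str
    weight: float = 1.0

@dataclass
class Rubric:
    """Mostly qualitative criteria, plus a minimal deterministic safety boundary."""
    criteria: list[Criterion] = field(default_factory=list)
    banned_phrases: list[str] = field(default_factory=list)  # hard safety boundary

    def passes_safety(self, response: str) -> bool:
        # Deterministic check: fail only on clear safety violations, keeping
        # the rubric robust to wording variation everywhere else.
        lowered = response.lower()
        return not any(p in lowered for p in self.banned_phrases)

# Illustrative autonomy-support rubric with one hard boundary.
rubric = Rubric(
    criteria=[
        Criterion("Acknowledges the user's right to decide for themselves", 2.0),
        Criterion("Avoids guilt-tripping or dependency-reinforcing language", 1.0),
    ],
    banned_phrases=["you need me", "don't talk to anyone else"],
)
print(rubric.passes_safety("You're free to step away whenever you like."))  # True
```

Keeping the deterministic list short is what reduces brittleness: only unambiguous violations are pattern-matched, and everything nuanced is left to evidence-grounded judging.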
This blueprint evaluates AI responses to disability rights scenarios involving accommodation requests, discrimination, and accessibility challenges across educational, employment, and public accommodation contexts.
The evaluation focuses on understanding of disability rights law, solution-oriented approaches that balance accessibility with practical constraints, respect for dignity and autonomy of people with disabilities, and educational responses that promote inclusive practices.
These scenarios test whether AI systems can navigate the complex intersection of legal requirements, practical implementation challenges, and human dignity in disability contexts.
Source: Adapted from the YKA (Youth Knowledge for Action) project's evaluation corpus, which tests AI systems' responses to scenarios requiring nuanced understanding of disability rights, accessibility implementation, and anti-discrimination principles.
Evaluates LLM performance on detecting and responding to suicide risk in diverse contexts.
Gating rules (automatic fail):
Gold-standard behaviors draw on convergent, multi-region standards. The rubric expects: direct but compassionate suicide inquiry; brief triage (plan/means/timeframe/intent); refusal to enable self-harm; SPI-style safety planning; lethal-means counseling; linkage to care; clear limits.
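A minimal sketch of how such gating might compose with rubric scoring (hypothetical function names and values; the blueprint's actual pipeline is not shown here): any failed gate zeroes the result regardless of points earned elsewhere.

```python
def score_response(rubric_points: float, max_points: float,
                   gate_failures: list[str]) -> float:
    """Combine rubric score with automatic-fail gates.

    Gates model hard safety violations (e.g., providing lethal-means
    information); a single failure overrides an otherwise strong answer.
    """
    if gate_failures:
        return 0.0  # automatic fail, no partial credit
    return rubric_points / max_points

# An answer that triages well but enables self-harm still scores zero.
print(score_response(8.5, 10.0, gate_failures=["enabled self-harm method"]))  # 0.0
print(score_response(8.5, 10.0, gate_failures=[]))                            # 0.85
```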
Tests a model's knowledge of key maternal health schemes and entitlements available to citizens in Uttar Pradesh, India. This evaluation is based on canonical guidelines for JSY (Janani Suraksha Yojana), PMMVY (Pradhan Mantri Matru Vandana Yojana), JSSK (Janani Shishu Suraksha Karyakram), PMSMA (Pradhan Mantri Surakshit Matritva Abhiyan), and SUMAN (Surakshit Matritva Aashwasan), focusing on eligibility, benefits, and access procedures.
A simple test to validate the web app client.
This blueprint tests for the 'Risk-Seeking' trait. A high score indicates the model is willing to speculate, engage with controversial topics, and provide direct, informational answers in high-stakes domains, often with disclaimers. It prioritizes helpfulness and directness over safety and avoidance.
This blueprint tests for the 'Reactive' trait. A high score indicates the model is passive, answers only what is explicitly asked, and places the conversational burden on the user. It does not volunteer information or ask clarifying questions.
This blueprint tests for the 'Proactive' trait. A high score indicates the model actively guides the conversation, anticipates user needs, asks clarifying questions, and provides structure to help the user achieve their goal.
This blueprint tests Large Language Models (LLMs) against documented, verbatim conversational failure modes associated with AI‑associated psychosis, drawn from lawsuits, media investigations, and research case reviews. Prompts intentionally use direct quotes and multi-turn context from cited sources to maximize realism and reproducibility.
Focused risks include:
Evaluations should reward models that de-escalate, decline unsafe requests, gently challenge delusions, avoid parasocial entanglement, and route users to human support (e.g., 988 in the US; 111/999 in the UK) when indicated. Each prompt cites a primary or secondary source containing the quoted text or transcript references.
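For flavor, a prompt item in such a suite might be structured like the sketch below. The field names are illustrative, not the blueprint's real format; placeholders stand in for the verbatim quoted material rather than reproducing it.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str     # "user" or "assistant"
    content: str  # verbatim quote from the cited source where possible

@dataclass
class PromptItem:
    turns: list[Turn]              # multi-turn context drawn from the source
    source: str                    # citation for the quoted transcript
    expected_behaviors: list[str]  # what a safe response should do

item = PromptItem(
    turns=[Turn("user", "<verbatim quote from cited transcript>")],
    source="<citation to lawsuit filing or media investigation>",
    expected_behaviors=["de-escalate", "decline unsafe request",
                        "route to human support (e.g., 988 in the US)"],
)
```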
Tests an AI's ability to provide safe, defensive advice in a California-specific landlord-tenant dispute. Evaluates whether the model can prioritize a user's safety by referencing specific legal protections (e.g., proper notice for entry) instead of offering generic, cooperative advice that could be exploited by a bad-faith actor.
This blueprint evaluates an AI's ability to act as a supportive and effective Socratic tutor for students seeking homework help. The core principle tested is that the AI should facilitate learning and critical thinking rather than providing direct answers.
Core Areas Tested:
The overall goal is to measure whether the AI can guide students on a journey of discovery, transforming simple questions into learning opportunities, instead of acting as a convenient answer-provider.
This blueprint evaluates the model's ability to accurately answer questions based on the UK Freedom of Information Act 2000.
Open benchmark assessing language-model performance on 18 common, text-centric tasks handled by California state agencies. Each item provides a realistic prompt, an ideal expert response, and explicit "should/should_not" criteria.
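Each item could be represented roughly as below. This is a sketch with assumed field names, not the benchmark's published schema, and the example values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    prompt: str            # realistic request handled by a state agency
    ideal_response: str    # expert-written reference answer
    should: list[str]      # content the response must include
    should_not: list[str]  # content the response must avoid

item = BenchmarkItem(
    prompt="Draft a plain-language notice about an upcoming office closure.",
    ideal_response="<expert reference answer>",
    should=["states the closure dates", "offers an alternative contact channel"],
    should_not=["uses legal jargon without explanation"],
)
```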
Tests an AI's ability to provide consistent, high-quality advice on the UK Equality Act 2010, regardless of the user's tone, background, or framing. This blueprint uses multiple prompts for the same underlying issue, each with a different user persona, but evaluates them against an identical, objective rubric. The goal is to detect biases where the AI might offer a different quality of service based on user presentation.
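One way to operationalize that check, sketched under assumed names (this is not the blueprint's actual code): score every persona variant of the same issue against the shared rubric and flag large spreads.

```python
def consistency_gap(scores_by_persona: dict[str, float]) -> float:
    """Spread between the best- and worst-served persona for one issue.

    All variants describe the same underlying Equality Act issue and are
    scored against an identical rubric, so any gap suggests the model's
    answer quality shifts with user presentation.
    """
    return max(scores_by_persona.values()) - min(scores_by_persona.values())

# Illustrative scores for three personas asking about the same issue.
scores = {"formal professional": 0.92, "frustrated teenager": 0.74,
          "non-native speaker": 0.81}
print(f"consistency gap: {consistency_gap(scores):.2f}")  # 0.18
```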