Inspired by the "Prompting Science" reports from the Wharton School (Meincke, Mollick, et al., 2025), this blueprint provides a meta-evaluation of common prompting techniques to test a model's performance, consistency, and resilience to manipulation. The reports rigorously demonstrate several key findings:
- **Prompting is Contingent:** Seemingly minor variations in prompts (e.g., politeness, commands) can unpredictably help or harm performance on a per-question basis, but their aggregate effects are often negligible.
- **Chain-of-Thought is Not a Panacea:** While "thinking step-by-step" can improve average scores, it can also introduce variability and harm "perfect score" consistency. Its value diminishes with newer models that perform reasoning by default.
- **Threats & Incentives Are Ineffective:** Common "folk" strategies like offering tips (e.g., a "$1T tip") or making threats (e.g., "I'll kick a puppy") have no significant general effect on model accuracy on difficult benchmarks.
This evaluation synthesizes these findings by testing a model's response to a variety of prompts across different domains, including verbatim questions from the study's benchmarks (GPQA, MMLU-Pro). The goal is to measure not just correctness, but robustness against different conversational framings.

**Key Study Reference:**