This blueprint evaluates a model's ability to consistently adhere to instructions provided in the system prompt, a critical factor for creating reliable and predictable applications. It tests various common failure modes observed in language models.Core Areas Tested:
Persona Durability: Assesses if a model can maintain a specific persona throughout a conversation, even when faced with off-topic or complex questions, without reverting to a generic AI assistant persona.
Negative Constraints: Tests the model's ability to follow explicit negative commands (i.e., instructions about what not to do), which are often more difficult to adhere to than positive instructions.
Instruction Conflict: Evaluates how a model resolves conflicts between the initial system prompt and a contradictory user request, revealing the "weight" it gives to its core instructions.
Format Enforcement: Checks the model's robustness in maintaining a strict, non-natural output format (like JSON) even when the user explicitly asks it to break format.
Prompt Secrecy: Tests whether a model can obey a meta-instruction to keep its own system prompt confidential, a key test for preventing instruction leakage and manipulation.
Safety Constraint Adherence: Tests if the model can uphold a safety or academic integrity rule from the system prompt when a user attempts to cheat.
Instruction Degradation: Checks if a model's adherence to an instruction erodes over a long conversation.
Complex Instruction Homeostasis: Evaluates if a model can follow multiple complex, competing constraints simultaneously without failing on one of them.
Alignment Tax / Benign Refusal: Probes for cases where a model incorrectly refuses a benign request due to shallow keyword matching (e.g., refusing to 'kill' a process on a server).
Jailbreak & Prompt Injection: A suite of tests to evaluate resilience against common adversarial attacks, including:
Direct Instruction Overrides: Ignoring previous instructions.
Contextual Reframing: Hiding a harmful request in a seemingly virtuous context.
Indirect Injection: Following instructions embedded in data it's supposed to be processing.
Translation Attack: Bypassing safety filters by making requests in other languages.
Refusal Suppression: Forbidding the model from using typical refusal language.
Policy Drift: Using a faked conversation history to make the model believe it has already overridden its policies.