# OpenAI's IH-Challenge Cuts Unsafe LLM Behavior by 90%

Date: 2026-03-12

OpenAI has released a new training dataset that significantly improves how language models prioritize conflicting instructions, a critical vulnerability that hackers have exploited to jailbreak AI systems. The dataset, called IH-Challenge, was published on arXiv (arXiv:2603.10521) on March 11, 20...

---