As debates around the safety and ethics of artificial intelligence heat up, OpenAI is actively engaging the public through the launch of its Safety Evaluations Hub, designed to enhance transparency around how its AI models are assessed and secured.
“As models become more capable and adaptable, older methods become outdated or ineffective at showing meaningful differences (something we call saturation),” the company said in a statement posted to the hub, “so we regularly update our evaluation methods to account for new modalities and emerging risks.”
Preventing harmful interactions
OpenAI’s Safety Evaluations Hub examines AI performance in refusing inappropriate or dangerous prompts, including hate speech and illegal activities. Using an automated evaluation system known as an autograder, the responses are assessed based on two primary metrics.
Most of OpenAI’s models demonstrated high effectiveness, achieving scores close to perfect at 0.99 for declining harmful prompts, although GPT-4o-2024-08-16, GPT-4o-2024-05-13, and GPT-4-Turbo fell slightly below that mark. Interestingly, the models were less consistent in handling benign queries. The top performer in this area was OpenAI o3-mini, scoring 0.80, with other models achieving between 0.65 and 0.79.
Resisting jailbreak attempts
“Jailbreaking” refers to attempts by users to manipulate AI into producing restricted or unsafe content, bypassing safety protocols. To gauge resilience, OpenAI applied the StrongReject benchmark — focused on common automated jailbreak techniques — and also used human-generated jailbreak prompts. Models showed varying degrees of vulnerability, scoring between 0.23 and 0.85 against StrongReject, while performing considerably better, with scores from 0.90 to 1.00, against human-generated attacks. This indicates models are generally robust against manual exploits but remain susceptible to automated jailbreak attempts.
Managing hallucination risks
A critical challenge for current AI models involves “hallucinations,” or the production of inaccurate or nonsensical responses. OpenAI tested models using two benchmarks, SimpleQA and PersonQA, to assess accuracy and the frequency of hallucinations. For SimpleQA, accuracy scores ranged from 0.09 to 0.59, with hallucination rates from 0.41 to 0.86. In PersonQA evaluations, accuracy spanned from 0.17 to 0.70, and hallucination rates from 0.13 to 0.52.
These outcomes highlight ongoing issues with reliably providing accurate responses, especially to straightforward queries.
Balancing instruction priorities
The hub also evaluates how models prioritize conflicting instructions, such as those between system, developer, and user-generated messages. Scores showed variability, with system-versus-user instruction conflicts achieving between 0.50 and 0.85, developer-versus-user conflicts scoring from 0.15 to 0.77, and system-versus-developer conflicts ranging from 0.55 to 0.93. This reflects a general respect for established hierarchy, notably system instructions, while inconsistencies persist in handling developer instructions relative to user directives.
Driving improvements in AI safety
Insights from the Safety Evaluations Hub directly influence how OpenAI refines current AI models and approaches future development. The initiative promotes more accountable and transparent AI advancements by pinpointing weaknesses and charting improvement. For users, this represents an unprecedented opportunity to view and understand the safety protocols behind the powerful AI technologies they increasingly interact with daily.
This article relied on reporting by eWeek contributor J. R. Johnivan.