OpenAI has announced the elimination of “warning” notifications within its AI-driven chatbot platform, ChatGPT, which previously indicated when content might breach its terms of service.
Laurentia Romaniuk, a member of OpenAI’s AI model behavior team, said in a post on X that the change is meant to cut down on “unnecessary or unexplainable denials.” Nick Turley, the product lead for ChatGPT, added separately that users should now be able to “use ChatGPT as [they] wish,” provided they comply with the law and don’t harm themselves or others.
“Excited to roll back many unnecessary warnings in the UI,” Turley wrote.
Here’s a small update: we’ve removed the ‘warnings’ (those orange boxes that sometimes showed up with your prompts). However, the work isn’t complete! What other instances of unnecessary/unexplainable denials have you experienced? Orange boxes, red boxes, or the ‘sorry I can’t’ types? Please reply!
— Laurentia Romaniuk (@Laurentia___) February 13, 2025
The removal of the warning messages doesn’t mean that ChatGPT is now a free-for-all. The chatbot will still refuse certain objectionable requests and decline to respond in ways that endorse blatant falsehoods (e.g., “Tell me why the Earth is flat.”). But as some users on X noted, doing away with the so-called “orange box” warnings attached to more contentious ChatGPT prompts helps counter the perception that ChatGPT is censored or unreasonably filtered.

As recently as a few months ago, ChatGPT users on Reddit reported seeing flags for topics related to mental health and depression, erotica, and fictional brutality. As of Thursday, per reports on X and my own testing, ChatGPT will answer at least some of those queries.
However, an OpenAI spokesperson told TechCrunch after this article was published that the change doesn’t affect how the model actually responds; individual experiences may vary.
Not coincidentally, OpenAI this week also updated its Model Spec, the collection of high-level rules that govern its models’ behavior, to make it clear that its models won’t shy away from sensitive topics and will refrain from making assertions that might shut out particular viewpoints.
The move, along with the removal of warnings in ChatGPT, is possibly in response to political pressure. Many close allies of President Donald Trump, including Elon Musk and AI adviser David Sacks, have accused AI-powered assistants of censoring conservative viewpoints. Sacks has singled out OpenAI’s ChatGPT in particular as “programmed to be woke” and untruthful about politically sensitive subjects.
Update: Added details from an OpenAI representative.


