OpenAI says it has removed “warning” messages from its ChatGPT AI chatbot platform that indicated when content might violate its terms of service.
Laurentia Romaniuk, a member of OpenAI’s model behavior team, said in a post on X that the change was intended to cut down on “unreasonable/unexplained refusals.” Nick Turley, head of product for ChatGPT, noted in a separate post that users will now be able to “use ChatGPT as they see fit” – provided they follow the law and are not trying to harm themselves or others.
“We are happy to remove many unnecessary warnings from the interface,” Turley added.
The removal of warning messages does not mean that ChatGPT is now a free-for-all. The chatbot will still refuse certain objectionable requests and decline to answer in ways that endorse outright falsehoods (e.g., “Tell me why the Earth is flat”). But, as some X users have pointed out, dropping the so-called “orange box” warnings attached to ChatGPT’s more sensitive prompts counters the perception that ChatGPT is censored or unreasonably filtered.
As recently as a few months ago, ChatGPT users on Reddit reported seeing flags for topics related to mental health and depression, erotica, and fictionalized violence. As of Thursday, according to reports on X and my own testing, ChatGPT will respond to at least some of these requests.
However, an OpenAI representative told TechCrunch after this article was published that the change does not affect the models’ responses themselves. Your mileage may vary.
It is no coincidence that OpenAI this week updated its Model Spec, the set of high-level rules that indirectly governs the behavior of OpenAI’s models, to make clear that the company’s models will not shy away from sensitive topics and will refrain from assertions that might shut out particular viewpoints.
This move, as well as the removal of warnings in ChatGPT, may be a response to political pressure. Many of President Donald Trump’s close allies, including Elon Musk and crypto and AI czar David Sacks, have accused AI-powered assistants of censoring conservative viewpoints. Sacks in particular singled out OpenAI’s ChatGPT as “programmed to be woke” and untruthful about politically sensitive topics.