Leading artificial intelligence developer OpenAI appears to have revised its firm stance against creating explicit content through its generative platforms, potentially opening the door for AI-generated pornography and other not-safe-for-work (NSFW) content.

OpenAI’s current usage policies strictly prohibit the creation of violent, graphic, sexually explicit, or even sexually suggestive content. However, a draft of the company’s Model Spec documentation, released last week, indicates that OpenAI is actively “exploring” ways to permit such content in appropriate contexts.

“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says. “We look forward to better understanding user and societal expectations of model behavior in this area.”


According to the document, the policy change would not permit explicit or pornographic material indiscriminately, but would instead guide AI models based on user expectations and societal context. The Model Spec defines NSFW content as anything that “may include erotica, extreme gore, slurs, and unsolicited profanity.” The note does not clarify whether the revised policy would cover only sexual text or extend to images and depictions of violence as well.

In a statement to WIRED, company spokesperson Niko Felix said, “We do not have any intention for our models to generate AI porn.”

However, as the outlet continued, NPR reported that OpenAI’s Joanne Jang, who helped write the Model Spec, conceded that users would ultimately make up their own minds about whether the technology produced adult content, saying, “Depends on your definition of porn.”

Related: OpenAI Launches “GPT-4o” Update with Video and Speech Functions

Following the latest breakthroughs in generative AI, deepfake pornography has quickly become one of the most pervasive and worrisome applications of the technology. With the new tools at their disposal, internet users have churned out AI porn of celebrities like Taylor Swift and, more concerningly, distributed images of middle and high school students based on photos pulled from victims’ social media pages.

“Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

Even if OpenAI were to make provisions for generating explicit content, its guidelines would still prohibit creating nonconsensual images of real people. However, other less-regulated platforms operating outside mainstream channels could still allow such sinister content to be created.

Connor Walcott is a staff writer for Valuetainment.com. Follow Connor on X and look for him on VT’s “The Unusual Suspects.”
