The Former Staffer Calling Out OpenAI’s Erotica Claims

By admin | November 20, 2025

When the history of AI is written, Steven Adler may just end up being its Paul Revere—or at least, one of them—when it comes to safety.

Last month Adler, who spent four years in various safety roles at OpenAI, wrote a piece for The New York Times with a rather alarming title: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In it, he laid out the problems OpenAI faced when it came to allowing users to have erotic conversations with chatbots while also protecting them from any impacts those interactions could have on their mental health. “Nobody wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully,” he wrote. “We decided AI-powered erotica would have to wait.”

Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow “erotica for verified adults.” In response, Adler wrote that he had “major questions” about whether OpenAI had done enough to, in Altman’s words, “mitigate” the mental health concerns around how users interact with the company’s chatbots.

After reading Adler’s piece, I wanted to talk to him. He graciously accepted an offer to come to the WIRED offices in San Francisco, and on this episode of The Big Interview, he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenge he’s set out for the companies providing chatbots to the world.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Before we get going, I want to clarify two things. One, you are, unfortunately, not the same Steven Adler who played drums in Guns N’ Roses, correct?

STEVEN ADLER: Absolutely correct.

OK, that is not you. And two, you have had a very long career working in technology, and more specifically in artificial intelligence. So, before we get into all of the things, tell us a little bit about your career and your background and what you’ve worked on.

I’ve worked all across the AI industry, particularly focused on safety angles. Most recently, I worked for four years at OpenAI. I worked across, essentially, every dimension of the safety issues you can imagine: How do we make the products better for customers and rule out the risks that are already happening? And looking a bit further down the road, how will we know if AI systems are getting truly extremely dangerous?

