Startup Dreamers
The Former Staffer Calling Out OpenAI’s Erotica Claims

By admin | November 20, 2025

When the history of AI is written, Steven Adler may just end up being its Paul Revere—or at least, one of them—when it comes to safety.

Last month Adler, who spent four years in various safety roles at OpenAI, wrote a piece for The New York Times with a rather alarming title: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In it, he laid out the problems OpenAI faced when it came to allowing users to have erotic conversations with chatbots while also protecting them from any impacts those interactions could have on their mental health. “Nobody wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully,” he wrote. “We decided AI-powered erotica would have to wait.”

Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow “erotica for verified adults.” In response, Adler wrote that he had “major questions” about whether OpenAI had done enough to, in Altman’s words, “mitigate” the mental health concerns around how users interact with the company’s chatbots.

After reading Adler’s piece, I wanted to talk to him. He graciously accepted an offer to come to the WIRED offices in San Francisco, and on this episode of The Big Interview, he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenge he’s set out for the companies providing chatbots to the world.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Before we get going, I want to clarify two things. One, you are, unfortunately, not the same Steven Adler who played drums in Guns N’ Roses, correct?

STEVEN ADLER: Absolutely correct.

OK, that is not you. And two, you have had a very long career working in technology, and more specifically in artificial intelligence. So, before we get into all of the things, tell us a little bit about your career and your background and what you’ve worked on.

I’ve worked all across the AI industry, particularly focused on safety angles. Most recently, I worked for four years at OpenAI. I worked across, essentially, every dimension of the safety issues you can imagine: How do we make the products better for customers and rule out the risks that are already happening? And looking a bit further down the road, how will we know if AI systems are getting truly extremely dangerous?


© 2026 Startup Dreamers. All Rights Reserved.