Startup Dreamers
Meet the Humans Trying to Keep Us Safe From AI

By admin | June 28, 2023 | 6 min read

A year ago, the idea of holding a meaningful conversation with a computer was the stuff of science fiction. But since OpenAI’s ChatGPT launched last November, life has started to feel more like a techno-thriller with a fast-moving plot. Chatbots and other generative AI tools are beginning to profoundly change how people live and work. But whether this plot turns out to be uplifting or dystopian will depend on who helps write it.

Thankfully, just as artificial intelligence is evolving, so is the cast of people who are building and studying it. This is a more diverse crowd of leaders, researchers, entrepreneurs, and activists than those who laid the foundations of ChatGPT. Although the AI community remains overwhelmingly male, in recent years some researchers and companies have pushed to make it more welcoming to women and other underrepresented groups. And the field now includes many people concerned with more than just making algorithms or making money, thanks to a movement—led largely by women—that considers the ethical and societal implications of the technology. Here are some of the humans shaping this accelerating storyline. —Will Knight

About the Art

“I wanted to use generative AI to capture the potential and unease felt as we explore our relationship with this new technology,” says artist Sam Cannon, who worked alongside four photographers to enhance portraits with AI-crafted backgrounds. “It felt like a conversation—me feeding images and ideas to the AI, and the AI offering its own in return.”


Rumman Chowdhury led Twitter’s ethical AI research until Elon Musk acquired the company and laid off her team. She is the cofounder of Humane Intelligence, a nonprofit that uses crowdsourcing to reveal vulnerabilities in AI systems, designing contests that challenge hackers to induce bad behavior in algorithms. Its first event, scheduled for this summer with support from the White House, will test generative AI systems from companies including Google and OpenAI. Chowdhury says large-scale, public testing is needed because of AI systems’ wide-ranging repercussions: “If the implications of this will affect society writ large, then aren’t the best experts the people in society writ large?” —Khari Johnson


Sarah Bird’s job at Microsoft is to keep the generative AI that the company is adding to its office apps and other products from going off the rails. As she has watched text generators like the one behind the Bing chatbot become more capable and useful, she has also seen them get better at spewing biased content and harmful code. Her team works to contain that dark side of the technology. AI could change many lives for the better, Bird says, but “none of that is possible if people are worried about the technology producing stereotyped outputs.” —K.J.


Yejin Choi, a professor in the School of Computer Science & Engineering at the University of Washington, is developing an open source model called Delphi, designed to have a sense of right and wrong. She’s interested in how humans perceive Delphi’s moral pronouncements. Choi wants systems as capable as those from OpenAI and Google that don’t require huge resources. “The current focus on the scale is very unhealthy for a variety of reasons,” she says. “It’s a total concentration of power, just too expensive, and unlikely to be the only way.” —W.K.


Margaret Mitchell founded Google’s Ethical AI research team in 2017. She was fired four years later after a dispute with executives over a paper she coauthored. It warned that large language models—the tech behind ChatGPT—can reinforce stereotypes and cause other ills. Mitchell is now ethics chief at Hugging Face, a startup developing open source AI software for programmers. She works to ensure that the company’s releases don’t spring any nasty surprises and encourages the field to put people before algorithms. Generative models can be helpful, she says, but they may also be undermining people’s sense of truth: “We risk losing touch with the facts of history.” —K.J.


When Inioluwa Deborah Raji started out in AI, she worked on a project that found bias in facial analysis algorithms: They were least accurate on women with dark skin. The findings led Amazon, IBM, and Microsoft to stop selling face-recognition technology. Now Raji is working with the Mozilla Foundation on open source tools that help people vet AI systems for flaws like bias and inaccuracy—including large language models. Raji says the tools can help communities harmed by AI challenge the claims of powerful tech companies. “People are actively denying the fact that harms happen,” she says, “so collecting evidence is integral to any kind of progress in this field.” —K.J.


Daniela Amodei previously worked on AI policy at OpenAI, helping to lay the groundwork for ChatGPT. But in 2021, she and several others left the company to start Anthropic, a public-benefit corporation charting its own approach to AI safety. The startup’s chatbot, Claude, has a “constitution” guiding its behavior, based on principles drawn from sources including the UN’s Universal Declaration of Human Rights. Amodei, Anthropic’s president and cofounder, says ideas like that will reduce misbehavior today and perhaps help constrain more powerful AI systems of the future: “Thinking long-term about the potential impacts of this technology could be very important.” —W.K.


Lila Ibrahim is chief operating officer at Google DeepMind, a research unit central to Google’s generative AI projects. She considers running one of the world’s most powerful AI labs less a job than a moral calling. Ibrahim joined DeepMind five years ago, after almost two decades at Intel, in hopes of helping AI evolve in a way that benefits society. One of her roles is to chair an internal review council that discusses how to widen the benefits of DeepMind’s projects and steer away from bad outcomes. “I thought if I could bring some of my experience and expertise to help birth this technology into the world in a more responsible way, then it was worth being here,” she says. —Morgan Meaker


This article appears in the Jul/Aug 2023 issue.

Let us know what you think about this article. Submit a letter to the editor at [email protected].
