AI Giants Pledge to Allow External Probes of Their Algorithms, Under a New White House Pact

By admin · July 22, 2023 · 3 min read

The White House has struck a deal with major AI developers—including Amazon, Google, Meta, Microsoft, and OpenAI—that commits them to take action to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a “voluntary commitment,” the companies pledge to carry out internal tests and permit external testing of new AI models before they are publicly released. The tests will look for problems including biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI’s ChatGPT, also participated in the agreement.

“Companies have a duty to ensure that their products are safe before introducing them to the public by testing the safety and capability of their AI systems,” White House special adviser for AI Ben Buchanan told reporters in a briefing yesterday. The risks that companies were asked to look out for include privacy violations and even potential contributions to biological threats. The companies also committed to publicly reporting the limitations of their systems and the security and societal risks they could pose.

The agreement also says the companies will develop watermarking systems that make it easy for people to identify audio and imagery generated by AI. OpenAI already adds watermarks to images produced by its DALL-E image generator, and Google has said it is developing similar technology for AI-generated imagery. Helping people discern what’s real and what’s fake is a growing issue as political campaigns appear to be turning to generative AI ahead of US elections in 2024.

Recent advances in generative AI systems that can create text or imagery have triggered a renewed AI arms race among companies adapting the technology for tasks like web search and writing recommendation letters. But the new algorithms have also triggered renewed concern about AI reinforcing oppressive social systems like sexism or racism, boosting election disinformation, or becoming tools for cybercrime. As a result, regulators and lawmakers in many parts of the world—including Washington, DC—have increased calls for new regulation, including requirements to assess AI before deployment.

It’s unclear how much the agreement will change how major AI companies operate. Already, growing awareness of the potential downsides of the technology has made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publicizes some information, like the intended use cases and ethical considerations for certain AI models. Meta and OpenAI sometimes invite external experts to try to break their models in an approach dubbed red-teaming.

“Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices—such as red-team testing and the publication of transparency reports—that will propel the whole ecosystem forward,” Microsoft president Brad Smith said in a blog post.

The potential societal risks the agreement pledges companies to watch for do not include the carbon footprint of training AI models, a concern that is now commonly cited in research on the impact of AI systems. Creating a system like ChatGPT can require thousands of high-powered computer processors, running for extended periods of time.
