AI Giants Pledge to Allow External Probes of Their Algorithms, Under a New White House Pact

By admin | July 22, 2023

The White House has struck a deal with major AI developers—including Amazon, Google, Meta, Microsoft, and OpenAI—that commits them to take action to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a “voluntary commitment,” the companies pledge to carry out internal tests and permit external testing of new AI models before they are publicly released. The tests will look for problems including biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI’s ChatGPT, also participated in the agreement.

“Companies have a duty to ensure that their products are safe before introducing them to the public by testing the safety and capability of their AI systems,” White House special adviser for AI Ben Buchanan told reporters in a briefing yesterday. The risks that companies were asked to look out for include privacy violations and even potential contributions to biological threats. The companies also committed to publicly reporting the limitations of their systems and the security and societal risks they could pose.

The agreement also says the companies will develop watermarking systems that make it easy for people to identify audio and imagery generated by AI. OpenAI already adds watermarks to images produced by its Dall-E image generator, and Google has said it is developing similar technology for AI-generated imagery. Helping people discern what’s real and what’s fake is a growing issue as political campaigns appear to be turning to generative AI ahead of US elections in 2024.

Recent advances in generative AI systems that can create text or imagery have triggered a renewed AI arms race among companies adapting the technology for tasks like web search and writing recommendation letters. But the new algorithms have also triggered renewed concern about AI reinforcing oppressive social systems like sexism or racism, boosting election disinformation, or becoming tools for cybercrime. As a result, regulators and lawmakers in many parts of the world—including Washington, DC—have increased calls for new regulation, including requirements to assess AI before deployment.

It’s unclear how much the agreement will change how major AI companies operate. Already, growing awareness of the potential downsides of the technology has made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publicizes some information, like the intended use cases and ethical considerations for certain AI models. Meta and OpenAI sometimes invite external experts to try to break their models, an approach dubbed red-teaming.

“Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices—such as red-team testing and the publication of transparency reports—that will propel the whole ecosystem forward,” Microsoft president Brad Smith said in a blog post.

The potential societal risks the agreement pledges companies to watch for do not include the carbon footprint of training AI models, a concern that is now commonly cited in research on the impact of AI systems. Creating a system like ChatGPT can require thousands of high-powered computer processors, running for extended periods of time.

