Inside the Summit Where China Pitched Its AI Agenda to the World

By admin | August 4, 2025 | 4 min read

Three days after the Trump administration published its much-anticipated AI action plan, the Chinese government put out its own AI policy blueprint. Was the timing a coincidence? I doubt it.

China’s “Global AI Governance Action Plan” was released on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the largest annual AI event in China. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the festivities in Shanghai. Our WIRED colleague Will Knight was also on the scene.

The vibe at WAIC was the polar opposite of Trump’s America-first, regulation-light vision for AI, Will tells me. In his opening speech, Chinese Premier Li Qiang made a sobering case for the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers, who gave technical talks highlighting urgent questions the Trump administration appears to be largely brushing off.

Zhou Bowen, leader of the Shanghai AI Lab, one of China’s top AI research institutions, touted his team’s work on AI safety at WAIC. He also suggested the government could play a role in monitoring commercial AI models for vulnerabilities.

In an interview with WIRED, Yi Zeng, a professor at the Chinese Academy of Sciences and one of the country’s leading voices on AI, said that he hopes AI safety organizations from around the world find ways to collaborate. “It would be best if the UK, US, China, Singapore, and other institutes come together,” he said.

The conference also included closed-door meetings about AI safety policy issues. Speaking after he attended one such confab, Paul Triolo, a partner at the advisory firm DGA-Albright Stonebridge Group, told WIRED that the discussions had been productive, despite the noticeable absence of American leadership. With the US out of the picture, “a coalition of major AI safety players, co-led by China, Singapore, the UK, and the EU, will now drive efforts to construct guardrails around frontier AI model development,” Triolo told WIRED. He added that it wasn’t just the US government that was missing: Of all the major US AI labs, only Elon Musk’s xAI sent employees to attend the WAIC forum.

Many Western visitors were surprised to learn how much of the conversation about AI in China revolves around safety regulations. “You could literally attend AI safety events nonstop in the last seven days. And that was not the case with some of the other global AI summits,” Brian Tse, founder of the Beijing-based AI safety research institute Concordia AI, told me. Earlier this week, Concordia AI hosted a day-long safety forum in Shanghai with prominent AI researchers including Stuart Russell and Yoshua Bengio.

Switching Positions

Comparing China’s AI blueprint with Trump’s action plan, it appears the two countries have switched positions. When Chinese companies first began developing advanced AI models, many observers thought they would be held back by censorship requirements imposed by the government. Now, US leaders say they want to ensure homegrown AI models “pursue objective truth,” an endeavor that, as my colleague Steven Levy wrote in last week’s Backchannel newsletter, is “a blatant exercise in top-down ideological bias.” China’s AI action plan, meanwhile, reads like a globalist manifesto: It recommends that the United Nations help lead international AI efforts and suggests governments have an important role to play in regulating the technology.

Although their governments are very different, when it comes to AI safety, people in China and the US are worried about many of the same things: model hallucinations, discrimination, existential risks, cybersecurity vulnerabilities, etc. Because the US and China are developing frontier AI models “trained on the same architecture and using the same methods of scaling laws, the types of societal impact and the risks they pose are very, very similar,” says Tse. That also means academic research on AI safety is converging in the two countries, including in areas like scalable oversight (how humans can monitor AI models with other AI models) and the development of interoperable safety testing standards.

