Startup Dreamers
Innovation

As Davos & India Celebrated AI, Paris Sounded The Alarm On AI Safety

By admin | February 28, 2026 | 11 Mins Read

The most important AI governance meeting of 2026 was not in Davos. It was not in Mumbai. It happened in Paris, in a room most business leaders have never heard of, and what was said there should have landed on every board agenda going forward.

While executives networked in Switzerland and applauded India’s AI ambitions, more than 800 researchers from 65 countries gathered at UNESCO House for IASEAI’26, the second annual conference of the International Association for Safe and Ethical AI. They were not there to celebrate. They were there to name, precisely and on the record, what is already going wrong, and why the governance responses being proposed may not be adequate for the problem they are meant to solve.

What followed was the most honest three days in AI this year.

What Is IASEAI And Why Does It Matter?

IASEAI was born from the 2023 AI Safety Summit at Bletchley Park, England, when a steering committee that included many of the leading minds in AI, among them Stuart Russell, Geoffrey Hinton, Yoshua Bengio, Max Tegmark, and Kate Crawford, concluded the world needed a permanent institution, not occasional summits, to keep safety governance in step with capability growth. Mark Nitzberg, a researcher and AI governance architect who had spent years arguing that safety work required institutional infrastructure rather than conference declarations, became a co-founder and now serves as Interim Executive Director. IASEAI formally incorporated as a nonprofit in November 2024.

The choice of Nitzberg as the organization’s operational leader is itself a signal about what IASEAI is trying to be. He is not a celebrity researcher. He is a builder, someone focused on the unglamorous work of turning expert consensus into durable institutions. His background spans AI research, policy, and the organizational design questions that most safety conversations skip over entirely: who decides, who verifies, who is accountable when something goes wrong, and what mechanisms exist to enforce any of it.

IASEAI’s inaugural 2025 conference, held at OECD headquarters, produced a ten-point Call to Action urging binding safety standards and global cooperation. The return to UNESCO headquarters in 2026 signaled both growing institutional legitimacy and escalating urgency.

Three weeks before the conference, the Second International AI Safety Report, led by Turing Award winner Yoshua Bengio and authored by more than 100 experts from over 30 countries, confirmed the central paradox driving Nitzberg’s work: general-purpose AI can now solve graduate-level mathematics, write production-grade software, and design proteins. Yet the same models still hallucinate, and reasoning breaks down across multi-step processes. Powerful enough to transform industries. Unreliable enough to cause catastrophic failures. That paradox defined three days of debate.

What Are The Biggest AI Safety Risks In 2026?

The conference organized sessions around themes that read like a threat assessment: alignment and value learning, agentic safety, control and containment, AI in warfare, interpretability, and the future of work. The inclusion of “agentic safety” as a standalone track was the most significant signal. As AI systems evolve from chatbots into autonomous agents that browse the web, execute code, make purchases, and coordinate with other agents, the safety challenge changes fundamentally. It is no longer about filtering offensive outputs. It is about preventing systems that act in the world from acting in ways we did not intend and cannot reverse.

That shift is not hypothetical. In February 2026 alone, OpenAI recruited OpenClaw founder Peter Steinberger to accelerate personal AI agents, Alibaba launched Qwen3.5 explicitly for the “agentic AI era,” and DeepSeek V4 arrived as the expected sequel to the January 2025 release that erased $600 billion from Nvidia’s market cap in a single day. The AI arms race is no longer only between companies. It is between nations. And the agents being deployed grow more autonomous by the week.

What Did The IASEAI Researchers Actually Find?

Stuart Russell opened the conference by advancing his framework for “provably beneficial AI,” systems designed with built-in uncertainty about human objectives so they defer to human judgment rather than pursue fixed goals without constraint. His metaphor was stark: humanity’s current AI trajectory resembles “everyone in the world getting onto a brand-new kind of airplane that has never been tested before. It’s going to take off, and it’s never going to land.” The difference between 2025 and 2026, he noted, is that the plane is now accelerating.

Geoffrey Hinton compared the moment to early climate change disputes, where scientific consensus eventually emerged, but only after years of costly inaction. The cost of delay is not linear. It compounds alongside the capability curve.

The most commercially urgent finding came from Matija Franklin, a Google DeepMind researcher whose work on AI manipulation was incorporated into the EU AI Act. His paper on “Virtual Agent Economies” documents the emergence of a vast, spontaneous agent economy in which autonomous AI systems already transact, negotiate, and coordinate at scales and speeds beyond human oversight. No one designed it. No one governs it. And no major company has fully examined what it means when their AI agent makes a commitment their legal team would never have approved. His DeepMind co-author Iason Gabriel, named, along with Stuart Russell, to TIME’s list of the 100 most influential people in AI, extended that analysis to the manipulation and inequality risks that emerge when millions of AI assistants interact with each other on behalf of users across systems that no governance framework was built to address.

The most sobering session came from Zuzanna Wojciak of WITNESS, the human rights organization that defends the evidentiary value of authentic documentation. Her point cut through every technical debate in the room: deepfakes are not primarily a technology problem. They are an evidence problem. When perpetrators can dismiss authentic footage of human rights abuses as AI-generated, and when detection tools fail on non-facial content like conflict zones, the infrastructure of accountability itself is under attack. That argument reaches well beyond human rights documentation. It extends into courtrooms, boardrooms, and every organization that relies on verified information to make decisions.

Has The United States Abandoned AI Safety Leadership?

The question was not posed from the main stage at IASEAI’26. It did not need to be. The answer was visible in the empty seats.

The United States sent no meaningful delegation. While European institutions, Asian governments, and civil society organizations from more than 65 countries debated binding safety frameworks and whistleblower protections, Washington was largely absent. Conference participants who spoke privately described the US posture as a strategic choice, not a scheduling conflict.

That choice is legible in the policy record. The Biden administration’s 2023 executive order on AI safety established voluntary commitments from leading developers. The current administration revoked it in January 2025, replacing it with a framework explicitly prioritizing “American AI dominance” over safety coordination. A December 2025 executive order then proposed preempting state AI laws, creating what constitutional scholars describe as a deliberate vacuum: federal law too weak to constrain the industry, state law too fragmented to fill the gap, and international frameworks dismissed as constraints on competitiveness.

Representatives from allied nations were notably direct in off-record conversations: they are building safety frameworks without the United States, and they expect those frameworks to become the de facto global standard by sheer market weight, regardless of what Washington does. Nitzberg, who has engaged with policymakers across multiple governments in his work building IASEAI, has argued consistently that governance gaps created by one major power do not stay empty. They get filled by whoever shows up.

When The Governance System Held. Once. By Luck.

The sharpest evidence of where US AI policy actually stands came from Anthropic. CEO Dario Amodei recently disclosed that the Department of War had demanded removal of two specific safeguards from Claude as a condition of continued government contracts: the capability to enable mass domestic surveillance and the capability to power fully autonomous weapons without human oversight. Anthropic refused. President Trump has now banned Anthropic from use in government systems, and the Pentagon plans to designate Anthropic a “supply chain risk,” a label previously reserved for adversary nations, while simultaneously calling Claude essential to national security.

The governance system held once, for one company, under unusual circumstances. Anthropic could refuse because it had the financial runway, the public profile, and the founding mission to absorb the political cost. Most AI companies operating on government contracts have none of those things. The question that went largely unasked in press coverage: how many companies received similar demands and complied, quietly, because refusing was not a viable option?

This is the pattern that US absence from IASEAI makes visible. The country that built the most capable AI systems in the world has chosen to govern them primarily through pressure and procurement rather than frameworks and accountability. That is not a governance system. That is luck. And it is the kind of luck that does not repeat reliably across an entire industry.

The Job Cuts That Are Actually An AI Safety Story

The same week IASEAI concluded, Block announced the layoff of 4,000 employees, nearly half its workforce. Jack Dorsey’s shareholder letter was direct: AI automation over headcount. Block’s stock rose 24%. Dorsey predicted most companies would follow within a year.

This is not a jobs story. It is an AI safety story the safety community has been slow to claim. When AI systematically decouples capital from labor, employment-linked tax revenues contract, consumer demand erodes, and social stability degrades faster than any safety net was designed to respond. Unlike in previous technological disruptions, the speed and concentration of AI displacement are outpacing the historical pattern of new work creation. The question for board directors is not whether their industry reaches the Block conclusion. It is what happens to their customer base when their customers’ industries get there first.

What Is The Best AI Governance Framework Available Right Now?

The most actionable idea from IASEAI came from Gillian Hadfield, Bloomberg Distinguished Professor of AI Alignment and Governance at Johns Hopkins, and the team at Fathom led by Bri Treece. Their Independent Oversight Marketplace for AI, built around Independent Verification Organizations, is the most practical governance framework currently in circulation. Expert-led IVOs verify AI safety standards. State governments authorize the marketplace. Companies that earn certification gain a credible trust signal. The framework moves at the pace of innovation rather than legislation.

It has one structural flaw that honest advocates, including Nitzberg, have been willing to name directly. Voluntary certification creates adverse selection. The organizations most eager to seek verification are, almost by definition, not the organizations most likely to be the problem. Without embedding IVO certification into procurement requirements, liability exposure, or insurance pricing, the framework risks becoming a trust signal for organizations that never needed the signal in the first place.

The path forward is straightforward but requires saying it plainly: voluntary governance is the bridge to mandatory governance. Companies that build toward IVO certification now will have structural advantages when the mandate arrives. Boards that treat AI safety as a compliance cost today are making the same mistake organizations made when they treated cybersecurity as an IT expense in 2010. Customers, regulators, and talent are all asking the same question: can we trust your AI?

The Alpha Institute for AI Governance is one IVO built specifically for the boardroom, helping directors assess and verify AI governance maturity at the organizational level. As autonomous agents begin transacting on behalf of companies in ways no current legal framework anticipated, independent verification is a fiduciary question, not a reputational one.

Four Questions Every Board Must Answer Before The Next Meeting

As one IASEAI participant put it during the final workshop: “We are building the plane, flying it, and writing the safety manual simultaneously. The question is whether we finish the manual before the turbulence gets worse.”

The turbulence is already here. Four questions cut to what matters.

Can your board define “safe AI” in technical terms rather than compliance terms, in a single sentence it wrote itself? If not, you are governing a system you have not defined.

Where are autonomous AI agents making or influencing decisions on behalf of your organization right now, without human review before those decisions have consequences? Not in theory. Right now.

Which of your AI vendors would have complied with the Pentagon’s demand if they lacked Anthropic’s profile and resolve? Do you know your vendors’ safety commitments well enough to answer that?

When the Block workforce thesis reaches your industry, what happens to the customers of every competitor that makes the same decision? Have you modeled the demand destruction on the other side of your cost savings?

The organizations that navigate what comes next will not be the ones that moved fastest or the ones that moved most cautiously. They will be the ones that knew precisely what they were building, what it was capable of doing without them, and what they were responsible for when it did.

The manual is not finished. The plane is already in the air. The only question that matters now is who is writing the next page.

