What is superintelligence and why are tech giants calling for its ban?

admin · October 22, 2025

More than 800 prominent figures from across the technology, scientific, political, and cultural spectrum have signed a public statement calling for a prohibition on the development of “superintelligent” artificial intelligence systems.

The coalition, coordinated by the Future of Life Institute and announced Wednesday, includes Apple co-founder Steve Wozniak, Virgin Group founder Richard Branson, five Nobel laureates, and AI pioneers Geoffrey Hinton and Yoshua Bengio, both often called “godfathers of AI.”

It also includes former US Joint Chiefs Chairman Mike Mullen, Pope Francis’s AI advisor Paolo Benanti, and even Prince Harry and Meghan, the Duke and Duchess of Sussex.

The politically diverse group reflects what organizers describe as universal concern over the existential risks posed by machines that could surpass human intelligence.

What is superintelligence?

Superintelligence, sometimes called artificial superintelligence or ASI, refers to a type of AI that would be smarter than humans in virtually every area that matters.

That’s very different from today’s AI tools, such as ChatGPT, which fall under the category of “narrow AI” because they are designed to perform specific tasks.

Even the most advanced systems we have now, including large language models like GPT-5, are still limited.

They can write code, draft essays, and even pass exams, but they don’t think independently, set their own goals, or understand the world the way humans do.

They are really just predicting patterns based on training data, not reasoning about the future or making autonomous decisions.

Some researchers think the next milestone is AGI, artificial general intelligence, which would match human intelligence and learn new tasks on its own.

But superintelligence would go far beyond that.

In theory, it could outthink humans in science, strategy, engineering, medicine, basically every cognitive domain, and could solve problems that are currently well outside our reach.

That leap in capability is exactly why the debate around its risks is so intense.

Why are tech leaders calling for a ban?

Those who signed the petition say the current race toward superintelligence, led by companies like OpenAI, Google, and Meta, is moving far faster than governments or regulators can keep up with.

Even insiders are sounding alarms: Sam Altman has said he would be surprised if superintelligence weren’t here by 2030, and Meta has gone so far as to rename part of its AI division “Meta Superintelligence Labs,” making its ambitions crystal clear.

The petition describes superintelligence as a risk on the scale of pandemics or nuclear weapons.

That framing isn’t new: it echoes a 2023 statement from AI executives urging world leaders to treat AI extinction risk as a top global priority.

Backers of the ban say the stakes are existential.

If superintelligent systems are built without strong safety rules, they argue, humans could lose control over critical systems, suffer mass economic displacement, or face far worse outcomes.

Stuart Russell, the UC Berkeley AI safety researcher who signed the petition, stressed that this isn’t meant to be a blanket ban but rather a demand that safety protocols be in place for a technology that, according to its own creators, could plausibly end humanity.

This petition is the latest in a growing wave of coordinated attempts to slow down the rush toward ever-more powerful AI.

The post What is superintelligence and why are tech giants calling for its ban? appeared first on Invezz