
Worried about superintelligence? So are these AI leaders - here's why


Image: MirageC/Moment via Getty Images


ZDNET's key takeaways

Leaders argue that AI could pose an existential threat to humanity.

Prominent AI figures, alongside 1,300 others, endorsed the warning.

The public is equally concerned about "superintelligence."

The surprise release of ChatGPT just under three years ago was the starting gun for an AI race that has been rapidly accelerating ever since. Now, a group of industry experts is warning -- and not for the first time -- that AI labs should slow down before humanity drives itself off a cliff.


A statement published Wednesday by the Future of Life Institute (FLI), a nonprofit organization focused on existential AI risk, argues that the development of "superintelligence" -- an AI industry buzzword that usually refers to a hypothetical machine intelligence that can outperform humans on any cognitive task -- presents an existential risk and should therefore be halted until a safe pathway forward can be established.

A stark warning
