Why pharma-style governance doesn’t work for tech

At a recent AI summit in New Delhi, Sam Altman warned that early versions of superintelligence could arrive by 2028, that AI could be weaponized to create novel pathogens, and that democratic societies need to act before they are overtaken by the technology they have built. These concerns are widely shared across the industry. Geoffrey Hinton, the Nobel laureate known as “the godfather of AI,” has warned that creating digital beings more intelligent than ourselves poses a genuine existential threat. Mustafa Suleyman, CEO of Microsoft AI, devoted much of his book The Coming Wave to arguing that AI’s fusion with synthetic biology could put the tools to engineer a deadly pandemic within reach of a single individual. These are not warnings about a distant future: just last week, a clash over who controls AI, and on what terms, led to the complete collapse of one company’s relationship with the Pentagon.
You can’t recall AI like a defective drug
Why This Matters
The article highlights urgent concerns about the rapid development of AI and its potential risks, including weaponization, existential threats, and governance challenges. Unlike pharmaceuticals, AI cannot be recalled or easily regulated once deployed, which underscores the need for proactive oversight. These issues are critical for both the tech industry and consumers, as they affect safety, security, and the future of human-AI interaction.
Key Takeaways
- Sam Altman warns that early versions of superintelligence could arrive by 2028, raising safety concerns.
- Traditional governance models like pharma-style regulation are ineffective for AI.
- There is an urgent need for proactive AI oversight to prevent misuse and existential risks.