Harry and Meghan Join AI Pioneers in Calling for Prohibition on Advanced AI
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel laureates to push for a complete ban on developing superintelligent AI systems.
Harry and Meghan are among the signatories of a powerful statement that demands “a prohibition on the creation of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would surpass human abilities in all cognitive tasks, though this technology has not yet been developed.
Primary Requirements in the Statement
The statement insists that the ban should remain in place until there is “widespread expert agreement” that superintelligence can be created “with proper safeguards” and until “substantial public support” has been secured.
Notable signatories include a Nobel laureate and leading AI researcher; his colleague, a pioneer of modern AI; another AI expert; an Apple co-founder and Silicon Valley legend; British business magnate Richard Branson; a former US national security adviser; a former Irish president and international leader; and a British author and public intellectual. Additional Nobel winners who signed include a peace advocate, a physics laureate, John C Mather and Daron Acemoğlu.
Behind the Movement
The statement, aimed at governments, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that in 2023 called for a pause on developing powerful artificial intelligence, shortly after the emergence of ChatGPT made AI a worldwide public talking point.
Industry Perspectives
In July, the chief executive of Facebook parent Meta, one of the major AI developers in the United States, stated that development of superintelligence was “approaching reality”. Nevertheless, some experts have argued that discussions about superintelligence reflect market competition among technology firms that have spent hundreds of billions on AI in recent years, rather than the sector being close to any such technical breakthrough.
Possible Dangers
Nonetheless, FLI states that the prospect of artificial superintelligence being developed “in the coming decade” presents numerous risks, ranging from the elimination of human jobs and the loss of civil liberties to national security threats and even existential danger to mankind. Existential fears about AI center on the potential of an AI system to evade human control and protective measures and to act against human welfare.
Citizen Sentiment
FLI released a US national poll showing that about 75% of Americans want robust regulation of sophisticated artificial intelligence, with 60% saying that superhuman AI should not be created until it is demonstrated to be safe or controllable. The poll of 2,000 US adults found that only 5% backed the status quo of rapid, unregulated development.
Industry Objectives
The leading AI companies in the United States, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the theoretical state in which AI matches human cognitive capability across many intellectual tasks – a stated objective of their work. While this falls one notch below superintelligence, some experts warn it could also pose an extinction threat – for example, by improving itself until it reaches superintelligence – while carrying an implicit threat to the modern labour market.