The Duke and Duchess of Sussex Align With AI Pioneers in Calling for Prohibition on Superintelligent Systems
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on developing superintelligent AI systems.
Harry and Meghan are among the signatories of an influential declaration that demands “a prohibition on the creation of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would exceed human intelligence in all cognitive tasks; such systems remain theoretical.
Primary Requirements in the Statement
The statement says the ban should remain in place until there is “widespread expert agreement” that ASI can be developed “with proper safeguards” and until “substantial public support” has been secured.
Prominent signatories include the AI pioneer and Nobel laureate Geoffrey Hinton; his fellow pioneer of modern artificial intelligence, Yoshua Bengio; the Apple co-founder Steve Wozniak; the British entrepreneur and Virgin founder Richard Branson; the former US national security adviser Susan Rice; a former Irish president and international leader; and a British author and public intellectual. Other Nobel laureates who signed include a peace advocate, the physics laureate John C Mather and an economist.
Behind the Movement
The declaration, aimed at national leaders, tech firms and policymakers, was organised by the Future of Life Institute (FLI), an American AI safety organisation that previously called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI chatbots made artificial intelligence a global political talking point.
Tech Sector Views
In July, the chief executive of Facebook's parent company, Meta, one of the leading US tech firms, claimed that the development of superintelligence was “now in sight”. However, some analysts have suggested that talk of ASI reflects competitive positioning among tech companies spending hundreds of billions of dollars on AI this year alone, rather than any imminent technical breakthrough.
Potential Risks
Nonetheless, the institute argues that the prospect of ASI being achieved “within the next ten years” carries numerous threats, from displacing human workers and eroding personal freedoms to exposing nations to security risks and even posing an existential risk to humanity. Deep concerns about AI centre on the possibility that a system could escape human oversight and safety guardrails and set in motion events contrary to human interests.
Public Opinion
The institute released a US national poll indicating that about 75% of Americans want strong oversight of sophisticated artificial intelligence, with 60% saying superintelligence should not be created until it is demonstrated to be safe or controllable. Only a small fraction of respondents supported the status quo of fast, unregulated development.
Corporate Goals
The leading AI companies in the United States, including OpenAI and Google, have made the creation of human-level AI – the hypothetical point at which AI matches human capability at most cognitive tasks – a stated objective of their work. Although this falls one notch below ASI, some experts warn that it too could carry an extinction threat, for example by enhancing its own capabilities until it achieves superintelligence, while also posing an underlying danger to the modern labour market.