Elon Musk and a group of artificial intelligence experts are calling for a pause in the training of powerful AI systems due to the potential risks to society and humanity.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, warned that human-competitive AI systems could disrupt economies and politics, posing risks to society and civilisation.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter warns.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
It called for a six-month halt to the “dangerous race” to develop systems more powerful than OpenAI’s newly launched GPT-4.
If such a pause cannot be enacted quickly, the letter says governments should step in and institute a moratorium.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says.
“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
The letter was also signed by Apple co-founder Steve Wozniak, Yoshua Bengio, often referred to as one of the "godfathers of AI", and Stuart Russell, a pioneer of research in the field, as well as researchers at Alphabet-owned DeepMind.
The Future of Life Institute is primarily funded by the Musk Foundation, the London-based effective altruism group Founders Pledge, and the Silicon Valley Community Foundation, according to the European Union's transparency register.
Musk has long been vocal about his concerns over AI. His carmaker, Tesla, uses AI in its Autopilot driver-assistance system.
Since it was released last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate the development of similar large language models and encouraged companies to integrate generative AI models into their products.
UK unveils proposals for ‘light touch’ regulations around AI
It comes as the UK government unveiled proposals for a “light touch” regulatory framework around AI.
The government’s approach, outlined in a policy paper, would split the responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.
Meanwhile, earlier this week Europol added to a chorus of ethical and legal concerns over advanced AI such as ChatGPT, warning that the technology could be misused for phishing attempts, disinformation and cybercrime.