A recorded speech which sounded like a leading senator at the start of a US hearing on artificial intelligence (AI) was actually written and voiced by ChatGPT.
Senator Richard Blumenthal – the Connecticut Democrat who also chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law – asked the chatbot how he would open the hearing.
Voice-cloning technology then recited the resulting speech in his voice.
The result was impressive, Mr Blumenthal admitted.
But he added: “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or Vladimir Putin’s leadership?”
The congressional hearing was convened to assess the potential harm AI technology could do to society, and whether the government should crack down on harmful AI products that violate existing civil rights and consumer protection laws.
One of the key people giving evidence at the hearing was Sam Altman, CEO of OpenAI, the artificial intelligence company that makes ChatGPT – a free chatbot tool that answers questions with convincingly human-like responses.
His San Francisco-based startup launched into the limelight after it released ChatGPT late last year.
ChatGPT has proved so popular that Microsoft has invested billions of dollars in the company and integrated its technology into its own products, including its Bing search engine.
But now, Mr Altman is calling for new rules to guide the rapid development of AI technology.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” he said on Tuesday.
What began as concerns that ChatGPT would enable children to cheat on their homework has expanded into broader fears that AI technology could mislead people, spread falsehoods, violate copyright protections and upend some jobs.
Steering clear of his own specific concerns, Mr Altman recommended that a new US regulatory agency impose safeguards to block AI models that could “self-replicate and self-exfiltrate into the wild”.
Mr Altman is not the only one to have raised concerns.
Mr Blumenthal echoed these thoughts, saying that AI companies ought to be required to test their systems and disclose known risks before releasing them.
Gary Marcus, a professor at New York University, also gave evidence to the committee.
He was among a group that called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks.
“This hearing marks a critical first step towards understanding what Congress should do,” said Republican Senator Josh Hawley of Missouri, the panel’s ranking member.
He said AI will be “transformative in ways we can’t even imagine”, with implications for “elections, jobs and security”.
For now, there are no signs that Congress will introduce new AI rules of the kind being considered in Europe.