The Federal Trade Commission (FTC) unanimously approved a measure to streamline its staff’s ability to issue civil investigative demands (CIDs) in AI investigations while retaining its authority to determine when CIDs are issued.
The U.S. Federal Trade Commission (FTC) has announced the approval of a new streamlined process for investigating cases involving the unlawful use of artificial intelligence (AI), marking an increased focus on addressing potential legal violations related to AI applications.
The commission unanimously approved a measure to streamline FTC staff's ability to issue civil investigative demands (CIDs), a form of compulsory process similar to a subpoena, in investigations relating to AI, while retaining the commission's authority to determine when CIDs are issued.
The FTC issues CIDs to obtain documents, information and testimony that advance its consumer protection and competition investigations. According to the FTC's statement, the omnibus resolution will remain in effect for 10 years.
In a Nov. 21 post, the FTC (@FTC) wrote: "FTC authorizes compulsory process for AI-related products and services," linking to https://t.co/ALlbc4Gecw.
In conjunction with other measures, this action underscores the FTC's commitment to investigating potential misuse of artificial intelligence. Critics of the technology have expressed concerns that it could amplify fraudulent activities.
According to a report, during a September hearing, Commissioner Rebecca Slaughter, a Democrat nominated for another term, aligned with two Republicans at the agency in agreeing that the focus should be on challenges such as the use of AI to make phishing emails and robocalls more persuasive.
Related: OpenAI to rehire Sam Altman as CEO with new initial board members
The emergence of artificial intelligence has opened up new avenues for human expression and creativity. However, the ability to carry out various tasks behind a digitally generated AI identity has also created new challenges. According to Sumsub data, the proportion of fraud attributed to deepfakes more than doubled from 2022 to the first quarter of 2023, with the share in the United States rising from 0.2% to 2.6%.
On Nov. 16, the agency unveiled a competition to determine the most effective method of safeguarding consumers from fraud and other risks associated with voice cloning. Voice cloning technology has grown more sophisticated as text-to-speech AI has improved. The technology also holds promise for consumers, such as providing medical assistance to those who have lost their voices due to accident or illness.
Magazine: AI Eye: Apple developing pocket AI, deep fake music deal …