The Canadian agency noted privacy violations, social manipulation and bias among the concerns that AI raises.
The Canadian Security Intelligence Service — Canada’s primary national intelligence agency — raised concerns about disinformation campaigns conducted across the internet using artificial intelligence (AI) deepfakes.
Canada sees the growing “realism of deepfakes” coupled with the “inability to recognize or detect them” as a potential threat to Canadians. In its report, the Canadian Security Intelligence Service cited instances where deepfakes were used to harm individuals.
“Deepfakes and other advanced AI technologies threaten democracy as certain actors seek to capitalize on uncertainty or perpetuate ‘facts’ based on synthetic and/or falsified information. This will be exacerbated further if governments are unable to ‘prove’ that their official content is real and factual.”
It also referred to Cointelegraph’s coverage of the Elon Musk deepfakes targeting crypto investors.
Yikes. Def not me.
— Elon Musk (@elonmusk) May 25, 2022
Since 2022, bad actors have used sophisticated deepfake videos to convince unwary crypto investors to willingly part with their funds. Musk’s warning against his deepfakes came after a fabricated video of him surfaced on X (formerly Twitter) promoting a cryptocurrency platform with unrealistic returns.
The agency listed privacy violations, social manipulation and bias among the other concerns AI raises, and urged that government policies, directives and initiatives evolve alongside the growing realism of deepfakes and synthetic media:
“If governments assess and address AI independently and at their typical speed, their interventions will quickly be rendered irrelevant.”
The Security Intelligence Service recommended collaboration among partner governments, allies and industry experts to address the global distribution of legitimate information.
Related: Parliamentary report recommends Canada recognize, strategize about blockchain industry
Canada’s intent to involve the allied nations in addressing AI concerns was cemented on Oct. 30, when the Group of Seven (G7) industrial countries agreed upon an AI code of conduct for developers.
As previously reported by Cointelegraph, the code has 11 points that aim to promote “safe, secure, and trustworthy AI worldwide” and help “seize” the benefits of AI while still addressing and troubleshooting the risks it poses.
The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, with the European Union also participating.
Magazine: Breaking into Liberland: Dodging guards with inner-tubes, decoys and diplomats