A growing number of malware creators are taking advantage of the significant public interest in ChatGPT to lure victims, Facebook owner Meta has found. According to its head of information security, the AI-based chatbot is “the new crypto” for bad actors, and the social media giant is preparing for various abuses.
Malware Inspired by ChatGPT Is on the Rise, Facebook’s Parent Company Says
Meta, the corporation behind Facebook, has found that malware purveyors are now exploiting public interest in ChatGPT, OpenAI’s chatbot powered by artificial intelligence (AI), to entice users into downloading malicious apps and browser extensions.
The company has identified around 10 malware families and over 1,000 malicious links that have been promoted as tools featuring ChatGPT since March, according to a report quoted by Reuters. On Wednesday, its representatives likened the phenomenon to crypto-themed scams.
In some cases, the malware delivered working ChatGPT functionality alongside abusive files, Meta noted. At a press conference on the report’s findings, its Chief Information Security Officer Guy Rosen remarked that for bad actors, “ChatGPT is the new crypto.”
During the briefing on Wednesday, Rosen and other Meta executives also pointed out that Facebook’s parent company is preparing its defenses for a variety of potential abuses related to generative AI technologies like ChatGPT.
The rising popularity and rapid development of platforms like the Microsoft-funded chatbot have raised concerns among authorities around the world, including fears that such tools could make online disinformation campaigns easier to propagate, Reuters noted.
Meta executives believe it is still early to see generative AI being used in information operations, although Rosen commented that he expects some bad actors to employ such technologies to “try to speed up and perhaps scale up” their activities.
In a statement issued after their meeting in Japan at the end of April, digital ministers of the G7 countries agreed that their nations should adopt “risk-based” AI regulations while still enabling the development of AI technologies.
In a recent interview, entrepreneur and investor Elon Musk accused OpenAI, the developer of ChatGPT and a company he helped found, of “training the AI to lie.” He also announced plans to create a rival to the offerings of the tech giants, which he called “TruthGPT.”
Do you think the trend of malware actors leveraging public interest in ChatGPT to lure victims will continue to grow? Share your thoughts on the subject in the comments section below.