The move follows a recent data breach suffered by the AI platform on March 20, in which some users' data was exposed to other users.
Italy’s data protection watchdog has announced that it is temporarily blocking the artificial intelligence chatbot ChatGPT and opening an investigation into suspected breaches of data privacy rules.
The agency has ordered OpenAI, the United States-based company behind ChatGPT, to immediately limit its processing of Italian users’ data, citing the data breach the AI platform suffered on March 20 as the trigger for the move.
We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted. We take this very seriously and are sharing details of our investigation and plan here. 2/2 https://t.co/JwjfbcHr3g
— OpenAI (@OpenAI) March 24, 2023
In addition, the Italian watchdog said that OpenAI gives users insufficient information about the data it collects and that there is no legal basis justifying the mass collection and storage of personal data used to train the chatbot’s algorithms. The agency also noted that the information ChatGPT provides does not always reflect real data, meaning personal data may be processed inaccurately.
The watchdog further highlighted a potential breach of ChatGPT’s own rules: although the service is restricted to people over 13 years old, the application has no filter to verify users’ ages, meaning minors could be exposed to answers unsuitable for their stage of development.
Related: Here’s how the crypto industry is using artificial intelligence
Apart from Italy, the AI chatbot is also facing heat in other parts of the world. On March 31, the Center for Artificial Intelligence and Digital Policy (CAIDP) filed a complaint against ChatGPT in an attempt to stop the release of powerful AI systems to the masses. The CAIDP described the chatbot as a “biased” and “deceptive” platform that poses a risk to public safety and privacy.
Magazine: All rise for the robot judge: AI and blockchain could transform the courtroom