Italy’s temporary ban on ChatGPT has prompted other European nations to consider whether they need stricter measures to govern increasingly popular chatbots and the ever-growing use of generative AI.

Despite split opinion among European parliamentarians over the content and scope of the EU AI Act, some regulators have noted that existing mechanisms can be used to regulate the emerging category of generative AI firms. These include the General Data Protection Regulation (GDPR), which gives users control over their personal data.

ChatGPT has also been gathering pace in Latin America, with countries such as Brazil beginning to consider how best to address the technology’s ethical issues.

Generative AI, including OpenAI’s ChatGPT, uses algorithms to produce human-like responses to text queries by analyzing vast amounts of data, which may include internet users’ personal data.

Italy’s ban

Last month, OpenAI took ChatGPT offline in Italy after the country’s data protection authority temporarily banned the chatbot and launched a probe into the artificial intelligence application’s suspected breach of privacy rules.

The investigation into OpenAI was launched after a security breach exposed users’ ChatGPT conversations and payment information.

The Italian agency, known as the Garante, faulted Microsoft-backed OpenAI for failing to verify the ages of ChatGPT users and for the “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot.

Additionally, the regulator expressed concern that the lack of age verification exposed minors to content unsuitable for their level of development. Bard, Google’s chatbot, is restricted to users aged 18 and over due to similar concerns.

Italy has now laid out several requirements for OpenAI to continue to operate within the country. These include:

– Full transparency: OpenAI must publish an information notice detailing its data processing.

– Age verification: The firm must implement age gating measures to prevent minors from accessing ChatGPT.

– Legal clarification: It must clarify the legal basis for processing people’s data for training its AI.

– User rights: The company must provide ways for users to exercise their data rights and object to data processing.

– Local awareness: OpenAI must conduct a local awareness campaign to inform Italians about the use of their data.

The Italian data protection authority has given OpenAI until April 30 to address its concerns or face a fine of €20 million ($21.7 million) or up to 4% of its annual global revenue.

Italy is the first Western nation to take action against an AI-powered chatbot.

A domino effect?

Privacy regulators in France and Ireland have contacted their Italian counterparts to gain more insight into the rationale for the ban on ChatGPT. The German commissioner for data protection has also expressed concerns over data security and has hinted that Germany might follow Italy’s lead and prohibit ChatGPT.

“We will collaborate with all EU data protection authorities regarding this issue,” said a spokesperson for Ireland’s Data Protection Commissioner. 

Meanwhile, the privacy regulator in Sweden stated that it had no intention of banning ChatGPT and had not been in touch with the Italian watchdog. 

Spain’s regulator has asked the European Union’s privacy watchdog, the European Data Protection Board, to assess the privacy issues surrounding OpenAI’s ChatGPT.

In Latin America, Brazil has been ranked as the country with the most ChatGPT users per capita. However, the country is playing catch-up on AI regulation.

A Senate working group has recommended a focus on citizens’ rights, risk categorization, and governance measures, including requirements to inform individuals when they are interacting with AI systems, to explain system decisions, and to give people the right to contest those decisions. The proposal also covers non-discrimination and the protection of personal data.

There is concern that AI regulation will be overbearing and limit the development potential of models such as ChatGPT. Arjun Chandar, founder and CEO of IndustrialML, told 150Sec, “Any restrictions being planned over the next year need to be carefully limited to applications which put safety and social order at risk, while still allowing companies to use it to improve their business processes.”

Nutan B, Vice President of Consulting at Gramener, said: “Achieving a delicate equilibrium between fostering AI innovation and addressing regulatory priorities such as privacy, security, fairness, and ethical considerations will demand proactive collaboration among all stakeholders. I think this will pave the way for a brighter and more ethical future for AI-powered solutions.”

Like other privacy regulators, Italy’s Garante is independent of the government and was one of the first to caution Chinese-owned TikTok about violating existing European Union privacy regulations.

Although privacy commissioners are in favor of tighter regulation, governments appear to be more lenient.

Italy’s deputy prime minister criticized the decision of the country’s own regulator, describing it as “excessive,” and a spokesperson for the German government stated that banning ChatGPT would not be necessary.

OpenAI, whose AI platform gained widespread attention after its launch in November 2022, has said that it actively works to minimize the use of personal data in training its AI systems. The company has no offices in the European Union.