Italy has banned ChatGPT. Here’s what other countries are doing

Italy last week became the first Western country to ban ChatGPT, the popular AI chatbot. ChatGPT has impressed researchers with its capabilities while worrying regulators and ethicists about its negative implications for society. The move has highlighted the absence of any concrete regulation, with the European Union and China among the few jurisdictions developing rules tailored to AI. Various governments are exploring how to regulate AI, and some are considering how to deal with general-purpose systems such as ChatGPT.

This photo illustration shows the ChatGPT logo in an office in Washington, DC, on March 15, 2023.

Stephen Reynolds | AFP | Getty Images

Italy has become the first country in the West to ban ChatGPT, the popular artificial intelligence chatbot from US startup OpenAI.

Last week, the Italian data protection watchdog ordered OpenAI to temporarily stop processing Italian users' data amid an investigation into a suspected breach of Europe's strict privacy regulations.

The regulator, which is also known as Garante, cited a data breach at OpenAI that allowed users to view the threads of conversations other users were having with the chatbot.

“There appears to be no legal basis supporting the mass collection and processing of personal data to ‘train’ the algorithms on which the platform relies,” Garante said in a statement on Friday.

Garante also noted concerns about the lack of age restrictions on ChatGPT and how the chatbot could serve up factually incorrect information in its responses.

OpenAI, which is backed by Microsoft, risks a fine of up to 20 million euros ($21.8 million), or 4% of its annual global revenue, if it does not come up with a solution to the situation within 20 days.

Italy is not the only country reckoning with the rapid pace of AI progress and its implications for society. Other governments are rolling out their own AI rules, which, whether or not they mention generative AI by name, will undoubtedly affect it. Generative AI refers to a set of AI technologies that generate new content based on user prompts. It is more advanced than previous iterations of AI, thanks in no small part to large language models trained on massive amounts of data.

There have long been calls for AI to face regulation. But the pace at which technology has advanced is such that it is difficult for governments to keep up. Computers can now create realistic art, write entire essays, or even generate lines of code, in seconds.

“We have to be very careful that we don’t create a world where people are somehow subservient to a larger machine future,” Sophie Hackford, a futurist and global technology innovation adviser for US agricultural equipment maker John Deere, told CNBC’s “Squawk Box Europe” on Monday.

“Technology is here to serve us. It’s there to make our cancer diagnosis faster or to free people from jobs they don’t want to do.”

“We need to think about this very carefully now, and we need to act on this now, from a regulatory perspective,” she added.

Various regulators are concerned by the challenges AI poses to job security, data privacy and equality. There are also concerns about advanced artificial intelligence manipulating political discourse through the generation of false information.

Many governments are also beginning to consider how to deal with general-purpose systems like ChatGPT, with some considering joining Italy in banning the technology.

Last week, the UK announced plans for regulating AI. Instead of imposing new rules, the government asked regulators in various sectors to apply existing regulations to AI.

The UK proposals, which do not mention ChatGPT by name, outline several key principles that companies should follow when using AI in their products, including security, transparency, fairness, accountability and contestability.

Britain is not at this stage proposing restrictions on ChatGPT, or any type of AI for that matter. Instead, it wants to ensure that companies are developing and using AI tools responsibly and giving users enough information about how and why certain decisions are made.

In a speech to Parliament last Wednesday, Digital Minister Michelle Donelan said the sudden popularity of generative AI showed that the risks and opportunities surrounding the technology were “emerging at an incredible pace”.

By taking a non-statutory approach, the government will be able to “respond quickly to advances in AI and intervene further if necessary,” she added.

Dan Holmes, a fraud prevention lead at Feedzai, which uses AI to fight financial crime, said the key priority of the UK approach was addressing “what good use of AI looks like”.

“Moreover, if you’re using AI, these are the principles you need to think about,” Holmes told CNBC. “And it often comes down to two things, which is transparency and fairness.”

The rest of Europe is expected to take a far more restrictive stance on artificial intelligence than Britain, which has increasingly diverged from EU digital laws since the UK’s withdrawal from the bloc.

The European Union, which is often at the forefront when it comes to regulating technology, has proposed groundbreaking AI legislation.

Known as the European AI Act, the rules will greatly limit the use of AI in critical infrastructure, education, law enforcement and the judicial system.

It will operate in tandem with the EU’s General Data Protection Regulation, which governs how companies can process and store personal data.

When the AI Act was first drafted, officials had not counted on the astonishing progress of AI systems capable of generating impressive art, stories, jokes, poems and songs.

According to Reuters, draft EU rules consider ChatGPT to be a form of general-purpose AI used in high-risk applications. High-risk AI systems are defined by the commission as those that could affect people’s fundamental rights or safety.

They would face measures including tougher risk assessments and a requirement to root out discrimination arising from the datasets that feed algorithms.

“The EU has a large and deep pocket of expertise in AI. They have access to some of the best talent in the world, and this is not a new conversation for them,” Max Heinemeyer, chief product officer at Darktrace, told CNBC.

“They can be trusted to have the best interests of member states at heart, and to be fully aware of the potential competitive advantages these technologies could bring, weighed against the risks.”

But while Brussels has been hammering out AI laws, some EU countries are already watching Italy’s actions on ChatGPT and debating whether to follow suit.

“In principle, a similar procedure is also possible in Germany,” Ulrich Kelber, Germany’s Federal Commissioner for Data Protection, told the Handelsblatt newspaper.

French and Irish privacy regulators have contacted their counterparts in Italy to learn more about its findings, Reuters reported, while the Swedish data protection authority ruled out a ban. Italy was able to move ahead with such action because OpenAI does not have an office in the EU.

Ireland is typically the most active regulator when it comes to data privacy, as most US tech giants, such as Meta and Google, have their European headquarters there.

The US has yet to propose any formal rules to bring oversight of AI technology.

The country’s National Institute of Standards and Technology has issued a national framework that gives companies that design, use or deploy AI systems guidance on managing potential risks and harms.

But it operates on a voluntary basis, meaning firms will not face consequences for not complying with the rules.

So far, there is no word on any action being taken to restrict ChatGPT in the US.

Last month, the Federal Trade Commission received a complaint from a nonprofit research group alleging that GPT-4, OpenAI’s latest large language model, is “biased, misleading, and a risk to privacy and public safety” and violates agency AI guidelines.

The complaint could lead to an investigation into OpenAI and the suspension of commercial deployment of its large language models. The FTC declined to comment.

ChatGPT is not available in China, nor in various countries with heavy internet censorship, such as North Korea, Iran and Russia. It is not officially blocked there, but OpenAI does not allow users in those countries to sign up.

Some big tech companies in China are developing alternatives. Baidu and Alibaba, two of China’s biggest tech firms, have announced plans to rival ChatGPT.

China has been keen to ensure that its tech giants are developing products in line with its strict regulations.

Last month, Beijing introduced a first-of-its-kind regulation on so-called deepfakes: synthetically generated or altered images, videos or text made using AI.

Chinese regulators have previously introduced rules governing how companies operate recommendation algorithms. One of the requirements is that companies must submit details of their algorithms to the cyberspace regulator.

Such regulations could theoretically apply to any type of ChatGPT-style technology.

— CNBC’s Arjun Kharpal contributed to this report
