No, it’s not an April Fools’ joke: OpenAI has started geoblocking access to its generative AI chatbot, ChatGPT, in Italy.
The move follows an order by the local data protection authority Friday that it must stop processing Italians’ data for the ChatGPT service.
In a statement that appears online to users with an Italian IP address who try to access ChatGPT, OpenAI writes that it "regrets" to inform them that it has disabled access in Italy at the "request" of the data protection authority, which is known as the Garante.
It also says it will issue refunds to everyone in Italy who bought the ChatGPT Plus subscription service last month, and notes that it is "temporarily pausing" subscription renewals there so that users won't be charged while the service is suspended.
OpenAI appears to be applying a simple geoblock at this point, which means that using a VPN to switch to a non-Italian IP address offers a straightforward workaround. That said, if a ChatGPT account was originally registered in Italy it may no longer be accessible, and users wanting to circumvent the block may have to create a new account from a non-Italian IP address.
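For illustration only, here is a minimal sketch of how a simple IP-based geoblock of this kind can work. It is not OpenAI's actual implementation; the lookup_country helper and the sample IP-to-country mappings are hypothetical stand-ins for a real GeoIP database. The point is simply that the check keys off the requester's IP address, which is why routing traffic through a VPN exit outside Italy sidesteps it.

# Illustrative sketch only; NOT OpenAI's actual implementation.
BLOCKED_COUNTRIES = {"IT"}  # assumption: the block is keyed on an ISO country code

def lookup_country(ip_address: str) -> str:
    """Hypothetical GeoIP lookup; a real service would query a GeoIP database."""
    sample_geoip = {"151.15.0.1": "IT", "8.8.8.8": "US"}  # made-up sample data
    return sample_geoip.get(ip_address, "UNKNOWN")

def handle_request(ip_address: str) -> str:
    """Return a block notice for requests that resolve to a blocked country."""
    if lookup_country(ip_address) in BLOCKED_COUNTRIES:
        return "Access disabled in your country at the regulator's request."
    return "Serving ChatGPT response..."

if __name__ == "__main__":
    print(handle_request("151.15.0.1"))  # Italian IP: blocked
    print(handle_request("8.8.8.8"))     # non-Italian IP (e.g. via a VPN): served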
On Friday the Garante announced it had opened an investigation into ChatGPT over suspected breaches of the European Union's General Data Protection Regulation (GDPR), saying it's concerned OpenAI has unlawfully processed Italians' data.
OpenAI does not appear to have informed the people whose online data it found and used to train the technology, such as by scraping information from Internet forums. Nor has it been entirely open about the data it's processing, certainly not for the latest iteration of its model, GPT-4. And while the training data it used may have been public (in the sense of being posted online), the GDPR still contains transparency principles, suggesting both users and the people whose data it scraped should have been informed.
In its statement yesterday, the Garante also pointed to the lack of any system to prevent minors from accessing the tech, raising a child safety flag; it noted, for example, that there is no age verification feature to prevent inappropriate access.
Additionally, the regulator has raised concerns over the accuracy of the information the chatbot provides.
ChatGPT and other generative AI chatbots are known to sometimes produce erroneous information about named individuals — a flaw AI makers refer to as “hallucinating”. This looks problematic in the EU since the GDPR provides individuals with a suite of rights over their information — including a right to rectification of erroneous information. And, currently, it’s not clear OpenAI has a system in place where users can ask the chatbot to stop lying about them.
The San Francisco-based company has still not responded to our request for comment on the Garante’s investigation. But in its public statement to geoblocked users in Italy it claims: “We are committed to protecting people’s privacy and we believe we offer ChatGPT in compliance with GDPR and other privacy laws.”
“We will engage with the Garante with the goal of restoring your access as soon as possible,” it also writes, adding: “Many of you have told us that you find ChatGPT helpful for everyday tasks, and we look forward to making it available again soon.”
Despite the upbeat note at the end of the statement, it's not clear how OpenAI can address the compliance issues raised by the Garante, given the wide scope of GDPR concerns the regulator has laid out as it kicks off a deeper investigation.
The pan-EU regulation calls for data protection by design and default, meaning privacy-centric processes and principles are supposed to be embedded into a system that processes people's data from the start. Aka, the opposite of grabbing the data first and asking forgiveness later.
Penalties for confirmed breaches of the GDPR, meanwhile, can scale up to 4% of a data processor’s annual global turnover (or €20M, whichever is greater).
Additionally, since OpenAI has no main establishment in the EU, any of the bloc's data protection authorities is empowered to regulate ChatGPT, which means authorities in all the other EU member countries could choose to step in, investigate and issue fines for any breaches they find (in relatively short order, as each would be acting only in its own patch). So it's facing the highest level of GDPR exposure, unable to play the forum-shopping game other tech giants have used to delay privacy enforcement in Europe.