Meta has confirmed that it will pause plans to start training its AI systems using data from its users in the European Union (EU) and U.K.
The move follows pushback from the Irish Data Protection Commission (DPC), Meta’s lead regulator in the EU, which is acting on behalf of several data protection authorities (DPAs) across the bloc. The U.K.’s Information Commissioner’s Office (ICO) also requested that Meta pause its plans until it could satisfy concerns it had raised.
“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said in a statement today. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
While Meta is already tapping user-generated content to train its AI in markets such as the U.S., Europe’s stringent GDPR regulations have created obstacles for Meta — and other companies — looking to improve their AI systems with user-generated training material.
However, Meta began notifying users of an upcoming change to its privacy policy last month, one that it said would give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, photos and their associated captions. The company argued that it needed to do this to reflect “the diverse languages, geography and cultural references of the people in Europe.”
These changes were due to come into effect on June 26, 2024 — 12 days from now. But the plans spurred not-for-profit privacy activist organization NOYB (“none of your business”) to file 11 complaints with constituent EU countries, arguing that Meta is contravening various facets of GDPR. One of those relates to the issue of opt-in versus opt-out: where personal data processing takes place, users should be asked for their permission first, rather than being required to take action to refuse.
Meta, for its part, was relying on a GDPR provision called “legitimate interest” to contend that its actions were compliant with the regulations. This isn’t the first time Meta has used this legal basis in its defence, having previously done so to justify processing European users’ data for targeted advertising.
It always seemed likely that regulators would at least put a stay of execution on Meta’s planned changes, particularly given how difficult the company had made it for users to “opt out” of having their data used. The company says that it has sent out more than 2 billion notifications informing users of the upcoming changes, but unlike other important public messaging that is plastered to the top of users’ feeds, such as prompts to go out and vote, these notifications appeared alongside users’ standard notifications: friends’ birthdays, photo tag alerts, group announcements and more. So for anyone who doesn’t regularly check their notifications, it was all too easy to miss this one.
And those who did see the notification wouldn’t automatically know that there was a way to object or opt out, as it simply invited users to click through to find out how Meta would use their information. There was nothing to suggest that an option to object existed.
Moreover, users technically weren’t able to “opt out” of having their data used. Instead, they had to complete an objection form where they put forward their arguments for why they wanted to opt out — it was entirely at Meta’s discretion as to whether this request was honored, though the company said it would honor each request.
Although the objection form was linked from the notification itself, anyone proactively looking for it in their account settings had to click through six separate not-so-obvious links to get there, with the “right to object” link discreetly placed 1,100 words into a generative AI policy page.
When asked why this process required the user to file an objection, rather than opt-in, Meta’s policy communications manager Matt Pollard pointed TechCrunch to its existing blog post, which says: “We believe this legal basis is the most appropriate balance for processing public data at the scale necessary to train AI models, while respecting people’s rights.”
To interpret this another way, making this opt-in likely wouldn’t generate enough “scale” in terms of people willing to offer their data. So the best way around this, it seems, was to issue a solitary notification amongst users’ other notifications; hide the objection form behind half a dozen clicks for those seeking the “opt-out” independently; and then make users justify their objection, rather than give them a straight opt-out.
In an updated blog post today, Meta’s global engagement director for privacy policy Stefano Fratta said that the company was “disappointed” by the request it had received from the DPC.
“This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Fratta wrote. “We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.”