EU Investigates Google AI for GDPR Compliance
The European Union’s lead privacy watchdog for Google is scrutinising how the company uses personal data in developing one of its models, Pathways Language Model 2 (PaLM2). On 12 September, Ireland’s Data Protection Commission (DPC), the regulator responsible for companies with their EU headquarters in Ireland, launched an investigation into whether Google’s use of personal data in developing PaLM2 complies with the General Data Protection Regulation (GDPR). The inquiry underlines regulators’ growing concern about how personal data is handled as AI development accelerates.
PaLM2 is Google’s “next-generation language model, with improved multilingual, reasoning and coding capabilities.” It was pre-trained on a “large quantity of webpage, source code, and other datasets,” an approach that is becoming standard across AI platforms. However, the investigation will examine whether such data processing is likely to result in what the DPC calls a “high risk to the rights and freedoms of individuals” in the EU. Google has stated that it is committed to complying with the GDPR and is cooperating fully with the DPC.
This inquiry is part of a broader EU effort to regulate how AI platforms handle user data: regulators are examining the extent to which first-party and third-party data is appropriately processed during AI model training, and are taking action against the tech giants accordingly.
The DPC has also announced that social media platform X has agreed to halt the processing of European user data for its generative AI chatbot, Grok, after concerns about privacy risks. In June, Meta similarly paused plans to use European user data for training its AI models after discussions with regulators. Last week, the UK’s Information Commissioner’s Office (ICO) announced that LinkedIn, which is owned by Microsoft, had also paused its use of UK users’ information for training its AI models, in response to regulators’ concerns.
As AI continues to evolve, the need for compliance with privacy laws like GDPR will intensify. Is your organisation prepared to meet such data privacy challenges whilst leveraging the power of AI? Get in touch for help navigating these complex dilemmas.
UPDATE 2ND OCTOBER:
Meta, however, has since notified users in the UK that it intends to proceed with its plans to use their data in its AI development. Yesterday, the company confirmed that it will use public posts and comments from Facebook and Instagram users over 18 to train its AI models, citing “legitimate interests” as the legal basis. While users have the right to object, there is no universal opt-out; only those whose objections are honoured will have their data excluded from training.