What worries me is not that Meta or Google continuously choose to infringe the GDPR. What I fear is that, despite all their data collection and usage, such companies are actually in compliance with the Regulation. If so, no amount of enforcement will change the status quo.
What happened in Brussels on June 16-17, 2022?
Europeans think that the problem with the GDPR is insufficient enforcement.
On June 16-17, 2022, several hundred privacy professionals gathered in Brussels at a conference titled “The future of data protection: effective enforcement in the digital world,” convened by the European Data Protection Supervisor, Wojciech Wiewiórowski. Don’t let the generic title fool you; this was a major political event and attracted the top players in the GDPR game. Among one hundred speakers, one could find prominent policymakers (Margrethe Vestager, Věra Jourová, Birgit Sippel, Karen Melchior, Axel Voss, Juan Fernando López Aguilar), regulators (Marie-Laure Denis, Ulrich Kelber), activists (Ursula Pachl, Max Schrems), industry representatives (Julie Brill, Microsoft; Jane Horvath, Apple; William Malcolm, Google), and academics (Orla Lynskey, Paul De Hert, Michael Veale). All the people whose job is to shape the narrative and the trajectory of the data protection law in Europe were present.
What motivated the conference was a shared disappointment with the GDPR’s influence on the Big Tech. Four years after the law became applicable, we still live in a world of commercial surveillance. In 2022, just like in 2012, when work on the GDPR began, essentially everything we do gets recorded as data and used to advance some corporate interest.
The participants seemed to agree about the reason behind the GDPR’s suboptimal performance: insufficient enforcement in cross-border cases. The GDPR applies directly throughout the Union but is enforced locally by the national Data Protection Authorities. What works well for local matters fails when transnational corporations are concerned. Under the so-called “One Stop Shop” mechanism, corporations like Meta and Google can choose one DPA to be overseen by (and they usually select Ireland). Meaning: one member state is overburdened with enforcement costs and overendowed with enforcement power, vis-à-vis the Big Tech coming from overseas.
The disagreement concerned the way forward: what shall we do about the suboptimal enforcement? On the one hand, the defenders of the status quo called for more time, funding, and mutual dialogue. The law is fine; no need to reform – they seemed to say – just give the DPAs and the NGOs more money and let us do our work. Until about a year ago, this was the orthodox view in European mainstream circles: the GDPR is perfect on paper; if any intervention is needed, it concerns the factual capacities of the DPAs.
On the other hand, an increasingly large and loud chorus of calls for legal reform could be heard. Among several ideas, including harmonization of procedural rules, the most straightforward has been the proposal to centralize enforcement in cross-border cases. Let’s leave 99% of enforcement as is, with the national DPAs – the reformers seemed to suggest – but carve out the truly expensive, complex, and politically loaded cases for a supranational body, like the EDPB, the EDPS, or a newly created entity. From a purely academic standpoint, this proposal seems commonsensical. Indeed, this is the way the EU enforces many of its regulations, including competition law. As personal data protection has been enshrined as a fundamental right in the Charter, one is left puzzled seeing how the EU, on the ground, treats the market economy as more important than human dignity. Do as I say, not as I do, I suppose. Politically, though, centralization would require opening a new battle that few have an appetite for now.
So, what happens now? What will be the results of the conference? In the short term, probably nothing. The EU is busy finishing the DSA and DMA business (both of which feature centralized enforcement mechanisms, btw) and is in the trenches fighting over the AI Act, the Data Act, and a whole range of digital policy pieces already on the agenda. In the mid-term, however, centralization of enforcement in cross-border cases seems hard to avoid. When the new Commission and the new European Parliament begin their terms in two years, given how often scandals regarding data processing hit nowadays, the idea that today seems practically impossible might turn into a politically inevitable one. In this sense, the reformists at the EDPS Conference succeeded. An idea has been planted, moved from the academic outskirts into the political mainstream, and sooner or later will become a reality.
So let us imagine that five years from now, GDPR enforcement concerning the Big Tech has been centralized in the hands of a powerful, well-financed, and well-staffed regulatory body. Will we then, finally, move from the world of commercial surveillance into a world of perfect privacy and data autonomy?
I have my doubts.
Unpopular view: the problem with the GDPR is its substance
As we are in the moment when unpopular opinions are uttered, let me express a view considered an absolute heresy in Europe: the GDPR is just a bad law, on substance, when it comes to taming the excesses of data collection and usage by the Big Tech. It could not have stopped commercial surveillance, and it will not save us, regardless of how beefed up the enforcers become.
The GDPR is – to its core – a neoliberal regulation delegating political choices to the market. Europeans hate to hear this; they genuinely do not believe it, but it’s true. Under all the rhetoric about fundamental rights and the substantive principles, procedural requirements, and data subjects’ rights, the GDPR is exactly the same as the American “notice and choice” model, just with extra steps.
Don’t close this blog post; let me elaborate.
At the foundation of the GDPR model lie several principles that seem to distinguish it from the American in-your-face-neoliberal counterpart: the purpose limitation principle (data can only be processed for the purposes for which it was gathered), the data minimization principle (one cannot collect more data than necessary for a particular purpose), and the legality principle (one must secure a legalizing basis, like consent or a necessity to perform the contract, for processing to be lawful). In addition, one faces robust transparency and accountability obligations, paired with people’s rights to know what data is processed, correct it, or object to further processing. This sounds promising, doesn’t it?
The problem with the GDPR system is that it says nothing about the legality of particular purposes of processing or the lawfulness of specific contracts or business models. On substance, corporations are essentially unconstrained when it comes to specifying what purposes they want to process the data for, or what place this processing has in the overall commercial transaction. Following the adoption of the Digital Content Directive 2019/770, which (in a supposedly pro-consumer attempt to extend legal protection to “free” services for which consumers “pay” with personal data) effectively legalized B2C contracts treating personal data as “payment,” corporations are free to specify what processing they consider necessary to perform the contract.
Consequently, if Facebook or Google hire stellar lawyers to draft their terms of service and privacy policies (which they do), they are absolutely free to decide what purposes to process the data for, or how to construct their contracts. Sure, a lot of legal engineering needs to happen around this – accountability procedures need to be established, and lengthy documents that no one will read must be drafted – but this is a monetary cost, not a substantive constraint, on data collection and usage.
What I fear, as a citizen concerned about the Big Tech’s power over individuals’ lives, and its impact on autonomy and mental health, is not that Facebook or Google choose to continuously infringe the GDPR, for whatever reasons (lax enforcement among them). What I fear, as an academic who empirically studies their terms of service and privacy policies in the light of the binding law, is that what Facebook and Google do is perfectly compliant with the GDPR. Sure, they might be infringing the law on the margins – personalized advertising systems need to be improved, disclosures could be clearer, etc. – but the very core of their business models is not only outside of the GDPR’s policing power; the GDPR legalizes these practices.
Meta and Google do what they did before; it’s just that now they have hundreds of pages of documents explaining how, under the GDPR, these practices are legal. If this is the case, no amount of enforcement will help us.
So, what can be done?
The GDPR, in its name and ambitions, is a general law, applying to both private and public bodies, in the same way, throughout the Union. And, in many cases, it works well. For example, regarding the public administration, which does not come up with purposes of processing on its own but is endowed with competencies by legislation, the GDPR is a perfect tool to safeguard individuals’ privacy. And in many private sector contexts – like the paradigmatic “a pizzeria does not need more than your address and phone number to deliver pizza, and should not use your data to send you further ads” – it curtails unwanted commercial communications.
However, the GDPR was not designed for the Big Tech, which bases its entire business model on data collection and advertising.
Of course, we need more enforcement, and centralization in cross-border cases is a no-brainer for anyone who thinks about it seriously. But centralization itself won’t help. We need substantive regulation of the purposes of processing.
Put simply: the EU, or the Member States, should take some purposes of processing, some contract types, and some business models outside of the realm of market choices and regulate them directly. Maybe there are some data practices that we want to forbid across the board, like using addictive design in apps used by minors, or directly promoting self-harm and eating disorders, like Instagram did. Or maybe we want to create specific conditions for other practices, like mental-health protection measures for social media, or specifying the kinds of products we don’t want to be advertised based on specific data, or in specific hours, or to some social groups. These are political, not technocratic, decisions to be taken.
Regulation of purposes of processing needs to be done case-by-case and sector-by-sector, something the Europeans don’t like. And yet, as the problems are very specific (different normative considerations, and different solutions, come into play when speaking about data leading to discrimination in hiring versus data contributing to depression in teens), responses need to be tailor-made as well.
In a world in which almost everything is data-driven, the activities of the Big Tech are no longer a personal data protection problem (only). They are consumer law problems, employment law problems, discrimination law problems, mental health law problems, etc. And they need to be addressed as such, by these laws, with a deep understanding of the technology and business models beneath them.
So, is the GDPR a bad law, as I provocatively wrote a couple of paragraphs earlier? Today, in action, it is. It does not have to be, if it is accompanied by substantive regulation of specific purposes of processing, business models, and types of contracts. If one looks at the history of the idea that ultimately became the GDPR, this was the plan back in the 1970s. But then, you know, Ronald Reagan and Margaret Thatcher happened, followed by Bill Clinton, Gerhard Schröder, and Tony Blair, and we all kind of fell in love with neoliberalism, and delegated these choices entirely to the market. It’s time to wake up.
Concluding: the reformists’ call for the centralization of GDPR enforcement in cross-border cases – against companies like Meta or Google – is a step in the right direction but will solve much less than participants in the EDPS Conference have been assuming. It is a necessary move but, by far, an insufficient one. Or, put differently: it is a second-order problem, discussed widely, while the first-order problem remains unaddressed. The good thing is that regulation of purposes of processing might actually be easier than re-opening the GDPR. The bad thing is that no one is thinking about doing it.
Shall we try to plant this idea now?