Zuboff v. Hwang, or: are targeted ads a bubble?

The Internet runs on ads. Ads pay for the operations of Google and Facebook, and a lot of other stuff, including journalism. You might dislike them, but they’re really important. But what if they’re just one huge bubble — a scam waiting to fall apart like the subprime mortgage derivatives did back in 2008?

tl;dr: Read Tim Hwang’s Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet, or at least listen to this podcast with him.

Advertising is the prime source of revenue for big tech companies like Google and Facebook. It is also the cornerstone of the “Grand Bargain”: you get access to services and content for free, but we get to collect data about you and use it to personalize the ads you see. Even though everyone’s (correctly) upset about all this data collection and the threats to privacy, one must admit: the consumption of the Internet’s perks is still extremely egalitarian. One might be unable to afford a dentist appointment or a daily healthy dinner, but with a smartphone and internet access, everyone can “afford” to use Instagram, Google Maps, Gmail, WhatsApp, YouTube, and everything else. Ads subsidize all this.

Now, there are two narratives about online ads that seldom meet. On the one hand, academics and privacy/digital rights advocates tell the story of how personalized ads influence our minds and behavior, stripping us of autonomy. Because ads are based on data about us and millions of others, their timing, content, and context can be so well tuned as to influence purchasing behavior to a degree that threatens human freedom. This, in turn, provides an incentive to keep collecting all this data.

The most well-known elaboration of this critique is Shoshana Zuboff’s 2019 “The Age of Surveillance Capitalism.” Zuboff not only described the phenomenon of data-driven marketing; she also provided a conceptual framework to talk about it, and a theory explaining it. In her view (admittedly criticized by some academics), the mechanisms behind online ads are so reliable that corporations now trade in so-called “behavioral futures.” The idea is this: if I’m a marketer, I am so good and sophisticated that I can guarantee that if you spend X on my services, I will increase your sales by Y within the period Z. Of course, we don’t know who exactly will buy your product – this is just statistical certainty – but we know that someone will. Because of this certainty, you can sell this future profit today, or use it as collateral in some other transaction. A complex web of financial products surrounds online ads.

Scary, isn’t it? Or exciting, if you want to make money.

The second narrative about online ads is somewhat contradictory: they suck. How many times has it happened to you that you already bought something, and yet keep receiving ads for the same or a similar product? How many times have you seen an ad and thought, “how can they be so dumb?” Recently, a colleague of mine, a law professor at an American law school, got an ad suggesting a part-time law degree program at that very same school. A Google ad, the best on the market! This is just an anecdote, I know, but I’m sure you have your own.

A tremendous book I just read (well, listened to on Audible) is Tim Hwang’s “Subprime Attention Crisis.” Hwang analyzes the wealth of available data about the efficacy of online ads and makes the case that they’re just one huge bubble. Many corporations think ads are valuable and actually work, but it might soon turn out that they don’t. Once that happens, the whole financial ecosystem funding the operation of the internet will collapse. How could that happen?

One option is that companies will simply realize they’re overpaying and limit their spending on programmatic ads. This could lead to some sort of “Internet recession,” but not necessarily a crisis. The other option, however – and here we get back to Zuboff’s claim that “behavioral futures” already serve as collateral – is that at some point we’ll realize that all this promised value, value already reinvested, does not exist. That’s when the bubble bursts.

Now, whether this is actually the case – that behavioral futures are packaged together and sold to a degree that threatens the stability of the internet ecosystem – or who’s betting on this future value, is beyond my ability to know. But the idea is so intriguing that it got me back to blogging after a pause of a couple of years.

All this to say: a “shock” enabling policymakers to radically remake the Internet as we know it might be around the corner. And, to follow Naomi Klein’s reading of Milton Friedman, our job is to keep alive the ideas of what a better world could look like.

CLAUDETTE: Automating Legal Evaluation of Terms of Service and Privacy Policies using Machine Learning

It is possible to teach machines to read and evaluate terms of service and privacy policies for you.

Have you ever actually read the privacy policies and terms of service you accept? If so, you’re an exception. Consumers do not read these documents: they are too long, too complex, and there are too many of them. And even if they did read them, they would have no way to change them.

Regulators around the world, acknowledging this problem, have put in place rules on what these documents must and must not contain. For example, the EU enacted regulations on unfair contractual terms, and recently the General Data Protection Regulation. The latter, applicable since 25 May 2018, makes clear what information must be presented in privacy policies, and in what form. And yet, our research has shown that, despite the substantive and procedural rules in place, online platforms largely do not abide by the norms concerning terms of service and privacy policies. Why? Among other reasons, there is simply too much for the enforcers to check. With thousands of platforms and services out there, the task is overwhelming. NGOs and public agencies might have the competence to verify ToS and privacy policies, but lack the actual capacity to do so. Consumers have rights, civil society has its mandate, but no one has the time and resources to put them into practice. Battle lost? Not necessarily. We can use AI for this good cause.

The ambition of the CLAUDETTE Project, hosted at the Law Department of the European University Institute in Florence, and supported by engineers from the University of Bologna and the University of Modena and Reggio Emilia, is to automate the legal evaluation of terms of service and privacy policies of online platforms, using machine learning. The project’s philosophy is to empower consumers and civil society using artificial intelligence. Currently, artificial intelligence tools are used mostly by large corporations and states. However, we believe that, with the efforts of academia and civil society, AI-powered tools for consumers and NGOs can and should be created. Our most technically advanced tool, described in our recent paper CLAUDETTE: an Automated Detector of Potentially Unfair Clauses in Online Terms of Service, can detect potentially unfair contractual clauses with 80%–90% accuracy. Such tools can be used both to increase consumers’ autonomy (by telling them what they are accepting), and to increase the efficiency and effectiveness of civil society’s work, by automating large parts of it.
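At its core, the task is sentence-level text classification: label each sentence of a ToS as potentially unfair or not. CLAUDETTE’s actual models are far more sophisticated than what fits here, so the following is only a minimal sketch of the pipeline using a toy bag-of-words Naive Bayes classifier in pure Python; the training sentences and labels are invented for illustration, not taken from the project’s dataset.

```python
import math
from collections import Counter

def tokenize(sentence):
    """Crude tokenizer: split on whitespace, strip punctuation, lowercase."""
    return [w.strip(".,;:()").lower() for w in sentence.split()]

class NaiveBayes:
    """Toy bag-of-words Naive Bayes for sentence-level clause tagging."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.label_counts = Counter()
        self.vocab = set()

    def train(self, labeled_sentences):
        for sentence, label in labeled_sentences:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(sentence):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, sentence):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, n in self.label_counts.items():
            score = math.log(n / total)  # log prior
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for w in tokenize(sentence):
                # Laplace smoothing so unseen words don't zero out the score
                score += math.log((counts[w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical, hand-made training examples (for illustration only)
train = [
    ("We may terminate your account at any time without notice.", "potentially_unfair"),
    ("Any dispute will be resolved exclusively by arbitration.", "potentially_unfair"),
    ("You may cancel your subscription at any time.", "fair"),
    ("We will notify you before changes take effect.", "fair"),
]
clf = NaiveBayes()
clf.train(train)
print(clf.predict("The provider may suspend the service without notice."))
# → potentially_unfair
```

A real system replaces the toy tokenizer and classifier with richer features and stronger models, and is trained on a hand-annotated corpus of actual ToS sentences — which is exactly why building the labeled dataset is the expensive part of the work.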

Our most recent work has been an attempt to automate the analysis of privacy policies under the GDPR. This project, funded and supported by the European Consumer Organization, has led to the publication of the report: Claudette Meets GDPR: Automating the Evaluation of Privacy Policies Using Artificial Intelligence. Our findings indicate that the task can indeed be automated once a significantly larger learning dataset is created. This learning process was interrupted by major changes in privacy policies undertaken by the majority of online platforms around 25 May 2018, the date when the GDPR started being applicable. Nevertheless, the project led us to interesting conclusions.

Doctrinally, we have outlined the requirements a GDPR-compliant privacy policy should meet (comprehensive information, clear language, fair processing), as well as the ways in which these documents can be unlawful (if required information is insufficient, language unclear, or potentially unfair processing indicated). Anyone – researchers, policy drafters, journalists – can use these “gold standards” to help them assess existing policies, or draft new ones compliant with the GDPR.

Empirically, we have analyzed the contents of the privacy policies of Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, Airbnb, Booking.com, Skyscanner, Netflix, Steam, and Epic Games. Our normative study indicates that none of the analyzed privacy policies meets the requirements of the GDPR. The evaluated corpus, comprising 3,658 sentences (80,398 words), contains 401 sentences (11.0%) that we marked as containing unclear language and 1,240 sentences (33.9%) that we marked as potentially unlawful clauses, i.e. either a “problematic processing” clause or an “insufficient information” clause (under Articles 13 and 14 of the GDPR). Hence, there is significant room for improvement on the side of business, as well as for action on the side of consumer organizations and supervisory authorities.
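As a quick sanity check, the reported percentages follow directly from the sentence counts:

```python
# Corpus statistics as reported in the study
total_sentences = 3658
unclear_sentences = 401      # marked as unclear language
unlawful_sentences = 1240    # marked as potentially unlawful clauses

pct_unclear = round(100 * unclear_sentences / total_sentences, 1)
pct_unlawful = round(100 * unlawful_sentences / total_sentences, 1)
print(pct_unclear, pct_unlawful)  # → 11.0 33.9
```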

The post originally appeared at the Machine Lawyering blog of the Centre for Financial Regulation and Economic Development at the Chinese University of Hong Kong.