Zuboff v. Hwang, or: are targeted ads a bubble?

The Internet runs on ads. Ads pay for the operations of Google and Facebook, and a lot of other stuff, including journalism. You might dislike them, but they’re really important. However, what if they’re just one huge bubble: a scam waiting to fall apart like subprime mortgage derivatives did back in 2008?

tl;dr: Read Tim Hwang’s Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet, or at least listen to this podcast with him.

Advertising is the prime source of revenue for big tech companies like Google and Facebook. It is also the cornerstone of the “Grand Bargain”: you get access to services and content for free, but we get to collect data about you and use it to personalize the ads you see. Even though everyone’s (correctly) upset about all this data collection and the threats to privacy, one must admit: consumption of the Internet’s perks is still extremely egalitarian. One might be unable to afford a dentist appointment or a daily healthy dinner, but with a smartphone and internet access, everyone can “afford” to use Instagram, Google Maps, Gmail, WhatsApp, YouTube, and everything else. Ads subsidize all this.

Now, there are two narratives about online ads that seldom meet. On the one hand, academics and privacy and digital rights advocates tell the story of how personalized ads influence our minds and behavior, stripping us of autonomy. Because ads are based on data about us and millions of others, their timing, content, context, etc. can be so well chosen as to influence purchasing behavior to a degree that threatens human freedom. This also provides an incentive to keep collecting all this data.

The best-known elaboration of this critique is Shoshana Zuboff’s 2019 “The Age of Surveillance Capitalism.” Zuboff not only described the phenomenon of data-driven marketing; she also provided a conceptual framework for talking about it, and a theory explaining it. In her view (admittedly criticized by some academics), the mechanisms behind online ads are so reliable that corporations now trade in so-called “behavioral futures.” The idea is this: if I’m a marketer, I am so good and sophisticated that I can guarantee that if you spend X on my services, I will increase your sales by Y within the time period Z. Of course, we don’t know who exactly will buy your product – this is just statistical certainty – but we know that someone will. Because of this certainty, you can sell this future profit now, or use it as collateral in some other transaction. A complex web of financial products surrounds online ads.
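To make the X/Y/Z mechanics concrete, here is a toy illustration (the numbers and the discount rate are mine, not Zuboff’s) of why a “guaranteed” future uplift in sales is something you can price today, and therefore sell or pledge:

```python
# Toy illustration (hypothetical numbers): if an ad campaign "guarantees"
# extra future sales, that promise has a present value today and can,
# in principle, be sold or used as collateral.
spend = 100_000          # X: what you pay the marketer now
uplift = 150_000         # Y: "guaranteed" extra sales...
months = 6               # ...over Z months
annual_discount = 0.10   # assumed discount rate

# Standard discounted-cash-flow formula for a payoff arriving in the future.
present_value = uplift / (1 + annual_discount) ** (months / 12)
print(f"PV of the promised uplift: ${present_value:,.0f}")  # ~$143,020
```

If the guarantee is credible, the gap between that present value and the spend is tradable today; if it isn’t, whoever bought the claim is holding the bag.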

Scary, isn’t it? Or exciting, if you want to make money.

The second narrative about online ads somewhat contradicts the first: they suck. How many times have you already bought something, only to keep receiving ads for the same or a similar product? How many times have you seen an ad and thought, “how can they be so dumb?” Recently, a colleague of mine, a law professor at an American law school, got an ad suggesting a part-time law degree program at that very school. A Google ad, the best on the market! This is just an anecdote, I know, but I’m sure you have your own.

A tremendous book I just read (well, listened to on Audible) is Tim Hwang’s “Subprime Attention Crisis.” Hwang analyses the considerable data available on the efficacy of online ads and makes the case that they’re one huge bubble. Many corporations think ads are valuable and actually work, but it might soon turn out that they don’t. Once that happens, the whole financial ecosystem funding the operation of the internet will collapse. How could that happen?

One option is that companies will simply realize they’re overpaying and limit their spending on programmatic ads. This could lead to some sort of “Internet recession,” but not necessarily a crisis. The other option, however – and here we get back to Zuboff’s claim that “behavioral futures” already serve as collateral – is that at some point we’ll realize that all this promised value, value already reinvested, does not exist. That’s when the bubble bursts.

Now, whether this is actually the case – whether behavioral futures are packaged together and sold to a degree that threatens the stability of the internet ecosystem, and who is betting on this future value – is beyond my ability to know. But the idea is so intriguing that it got me back to blogging after a pause of a couple of years.

All this to say: a “shock” enabling policymakers to radically remake the Internet as we know it might be around the corner. And to follow Naomi Klein’s reading of Milton Friedman: our job is to keep alive ideas about what a better world could look like.

CLAUDETTE: Automating Legal Evaluation of Terms of Service and Privacy Policies using Machine Learning

It is possible to teach machines to read and evaluate terms of service and privacy policies for you.

Have you ever actually read the privacy policies and terms of service you accept? If so, you’re an exception. Consumers do not read these documents. They are too long, too complex, and there are too many of them. And even if consumers did read them, they would have no way to change them.

Regulators around the world, acknowledging this problem, have put in place rules on what these documents must and must not contain. For example, the EU enacted rules on unfair contractual terms and, more recently, the General Data Protection Regulation. The latter, applicable since 25 May 2018, makes clear what information must be presented in privacy policies, and in what form. And yet, our research has shown that, despite the substantive and procedural rules in place, online platforms largely do not abide by the norms concerning terms of service and privacy policies. Why? Among other reasons, there is simply too much for enforcers to check. With thousands of platforms and services out there, the task is overwhelming. NGOs and public agencies might have the competence to verify the ToS and PPs, but lack the actual capacity to do so. Consumers have rights, civil society has its mandate, but no one has the time and resources to put them into practice. Battle lost? Not necessarily. We can use AI for this good cause.

The ambition of the CLAUDETTE Project, hosted at the Law Department of the European University Institute in Florence and supported by engineers from the University of Bologna and the University of Modena and Reggio Emilia, is to automate the legal evaluation of terms of service and privacy policies of online platforms using machine learning. The project’s philosophy is to empower consumers and civil society using artificial intelligence. Currently, artificial intelligence tools are used mostly by large corporations and states. However, we believe that with the efforts of academia and civil society, AI-powered tools for consumers and NGOs can and should be created. Our most technically advanced tool, described in our recent paper, CLAUDETTE: an Automated Detector of Potentially Unfair Clauses in Online Terms of Service, can detect potentially unfair contractual clauses with 80%-90% accuracy. Such tools can be used both to increase consumers’ autonomy (by telling them what they are accepting) and to increase the efficiency and effectiveness of civil society’s work, by automating large parts of it.
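To give a flavor of what such a detector involves, here is a minimal sketch of a sentence-level clause classifier using TF-IDF features and a linear SVM. This is my illustration, not the actual CLAUDETTE pipeline (the real system, described in the paper, uses richer features and models); the example clauses and labels below are made up:

```python
# A minimal sketch of an unfair-clause detector, in the spirit of
# CLAUDETTE. Hypothetical illustration only: in practice the training
# set is thousands of clauses annotated by lawyers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "We may terminate your account at any time, without notice.",
    "You may cancel your subscription at any time in the settings.",
    "Any dispute shall be resolved exclusively by arbitration.",
    "We will notify you by email before any change takes effect.",
]
labels = [1, 0, 1, 0]  # 1 = potentially unfair, 0 = fair

# TF-IDF bag-of-words (with bigrams) feeding a linear SVM: a standard
# baseline for sentence classification.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(),
)
clf.fit(sentences, labels)

# Flag a new clause as potentially unfair (1) or fair (0).
print(clf.predict([
    "We reserve the right to modify these terms at our sole discretion."
]))
```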

Our most recent work has been an attempt to automate the analysis of privacy policies under the GDPR. This project, funded and supported by the European Consumer Organization, has led to the publication of the report Claudette Meets GDPR: Automating the Evaluation of Privacy Policies Using Artificial Intelligence. Our findings indicate that the task can indeed be automated once a significantly larger learning dataset is created. The learning process was interrupted by the major changes in privacy policies undertaken by the majority of online platforms around 25 May 2018, the date when the GDPR became applicable. Nevertheless, the project led us to interesting conclusions.

Doctrinally, we have outlined the requirements a GDPR-compliant privacy policy should meet (comprehensive information, clear language, fair processing), as well as the ways in which these documents can be unlawful (if the required information is insufficient, the language unclear, or potentially unfair processing is indicated). Anyone – researchers, policy drafters, journalists – can use these “golden standards” to help them assess existing policies, or to draft new ones compliant with the GDPR.

Empirically, we have analyzed the contents of the privacy policies of Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, Airbnb, Booking.com, Skyscanner, Netflix, Steam, and Epic Games. Our normative study indicates that none of the analyzed privacy policies meets the requirements of the GDPR. The evaluated corpus, comprising 3,658 sentences (80,398 words), contains 401 sentences (11.0%) that we marked as containing unclear language and 1,240 sentences (33.9%) that we marked as potentially unlawful clauses, i.e. either a “problematic processing” clause or an “insufficient information” clause (under Articles 13 and 14 of the GDPR). Hence, there is significant room for improvement on the side of businesses, as well as for action on the side of consumer organizations and supervisory authorities.
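A quick check that the reported shares follow from the sentence counts:

```python
# Sanity check on the shares reported above (counts from the report).
total = 3658
for count in (401, 1240):
    print(f"{count}/{total} = {count / total:.1%}")  # 11.0%, 33.9%
```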

This post originally appeared on the Machine Lawyering blog of the Centre for Financial Regulation and Economic Development at the Chinese University of Hong Kong.

Law & economics against property and for central planning

I just had a wonderful Italian-style rice-and-pasta lunch, which made me too sleepy to read, and so I wrote this post. The post itself is a joke. Or is it?

“Grant property rights on this!” seems to be the remedy for all evil, according to some Chicago-style law&economics utilitarians. In consequence, law&economics scholars of the pretending-to-be-analytic-while-actually-being-political-activists kind often go hand in hand with the prophets and proponents of neoliberalism. But this will soon end.

Property theories (like all normative theories) can be roughly divided into deontological and consequentialist. The former say: there is a reason to grant property rights to people (flourishing, natural law, etc.), and so they should be granted, regardless of whether this will lead to the most efficient outcome. The latter, on the other hand, claim that we should grant property rights to people (or not) because in consequence they will be better off; or rather: the total utility will be the highest, and the division the fairest, if we grant subjects property rights.

Proponents of l&e, often considering themselves intellectual heirs of Hume (reason! and there’s no God!), will nonetheless just as often start with an is-to-ought argument: radio spectrum has been distributed more efficiently since property rights were granted in place of administrative distribution; capitalist states were better off than communist ones, because they had a clear and working system of property rights; Moscow’s streets in the ’90s were ruled by gangs, because there was no such system; and so it follows that property rights and markets are better than their absence and/or central planning, so let’s grant them.

(This is the pasta; sometimes a formaggio of Hayek-and-local-knowledge and Akerlof’s market-for-lemons gets added.)

However, even if we derive ‘ought’ from ‘is,’ the direction of the arrow of time makes a difference. Just because some social ordering was less efficient in the past does not mean that it will be less efficient in the future. The world is changing.

What is the problem with central planning?
1. There is an awful lot of data;
2. This data cannot be gathered, because it’s spread everywhere and people’s preferences keep changing;
3. Since there is so much data, and we don’t actually have it, we can’t really build a proper equation;
4. Even if we had such an equation, and managed to collect the data to plug in, we wouldn’t have the computing power to solve it;
5. And even if we did, we wouldn’t be able to keep distributing the goods fast enough.

Oh, wait: that was the problem with central planning in the ’70s. Or maybe even the late ’90s. Or maybe it still is one; hard to tell, since it’s been quite a while since any government really tried.

So let’s jump to the future: 2050. Everyone has a Google (or whatever replaces it) account; info about all our preferences, purchases, searches, and actions is collected, BigData and stuff; we also have chips in our veins scanning our blood and DNA and sending the data to the supercomputer, which will be 262,144 times faster than the current one (Moore’s law, see the quick arithmetic after the list below; and even if not, way faster); and drones fly around bringing you stuff. So our problems are solved by:
1. Google&BigData
2. Google&BigData
3. Google people
4. Super-supercomputer (probably owned by Google)
5. Drones (Amazon, I guess)
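Where does the oddly precise 262,144 come from? A quick back-of-the-envelope check, assuming the loose popular reading of Moore’s law as a doubling every two years (the cadence is my assumption):

```python
# 262,144 = 2**18, i.e. 18 doublings. At one doubling every two years
# (a common, loose reading of Moore's law), that's 36 years of doubling:
# roughly the stretch from the mid-2010s to 2050.
doublings = 18
print(2 ** doublings)   # 262144
print(doublings * 2)    # 36 (years)
```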

Suddenly it will turn out that having all this property, contracts, bargaining, market and stuff leads to a less efficient outcome, both for you and for society; and it will be a Google algorithm, knowing your skills and talents, telling you what work to do and giving you the best possible stuff in exchange. (‘Wow, I didn’t even know that I really wanted to have salmon for dinner, thx Google and Amazon!’) Or taking it away, when someone else will have more fun with it.
In short: public/private, government/corporations, and other such details are insignificant here.

What matters is this: if you believe in freedom and think it’s cool that you can buy something and it’s yours, stop being a consequentialist-utilitarian neoliberal; otherwise your grandchildren will live in a Google dystopia.
So what should you be?
Repent and believe in the Gospel!
(joking)
(not really, but I sort of have to pretend I am)
(so: joking! haha..)
So: read some Locke, or Kant, or this third guy, what’s his name…? The one who ruined all meaningful moral philosophy…? The one who invented the veil of ignorance; google him.


Ah, digested. Now I feel much better and can get back to work. And you should do the same!