Fixing Social Media: Hit the Cause, Not the Effects, of the Grand Bargain

This post builds, in part, on the ideas I got during the 1st Istanbul Privacy Symposium: Data Protection and Innovations, especially conversations with R.E. Leenes. Everything that is wrong here is obviously my fault; but I want to acknowledge that many points here were inspired by others.

In his excellent Fixing Social Media’s Grand Bargain, Jack Balkin demonstrates how the “nature” of digital capitalism creates perverse incentives for social media companies to surveil, addict and manipulate their users. He then surveys a range of regulatory options, from treating social media as public actors in some respects, through antitrust and pro-competition law, before reiterating his intriguing idea to treat social media companies as “information fiduciaries”.

In this brief post, I would like to build upon Balkin’s idea and offer an additional perspective on both the problem and the possible solutions. I want to argue that the role for law is not only to mitigate the results of the “nature” of digital capitalism, but to disrupt the very incentives that led to the Grand Bargain. I first look at the conditions that led to the current model, and question the assumption that this model is necessary, as well as the assumption that the surveillance and manipulation problem can be fixed within this paradigm. Then, I take a look at the “information fiduciaries” proposal and set out my reservations about it, also re-characterizing the way in which the GDPR is constructed. The GDPR is an imperfect instrument, but in my opinion for different reasons than the ones Balkin puts forward. Finally, I throw in a couple of alternative ideas – coming from a consumer law mindset – which are one way to go about changing the very incentives that led to the Grand Bargain.

Where are we?

Obviously, there is no single problem with the ways social media companies currently operate, and so there will be no single solution to all of them. Hence, at some point we could do with a map of what exactly the challenges are, what precisely the regulatory goals are, and what regulatory means have a chance of bringing these goals about. However, it seems to me that an analysis of the causes of, and possible cures for, the “grand bargain” makes for a good start.

The “grand bargain”, according to Balkin, is this: online companies (social media, search engines etc.) offer their marvelous products to users without asking for money, but in exchange collect, analyze and act upon users’ personal data. These companies make money out of advertising. The more time users spend using their products, the more ads they will see. The more data companies have about users, the more effective targeted ad campaigns will be. Hence the incentive to surveil, addict and manipulate.

This bargain is the “nature” of digital capitalism, Balkin tells us. I could not agree more, if by “nature” we mean an explanation of how things are right now. However, I would question the assumption – especially if we are to talk about political economy – that things must be this way. Two questions are worth addressing: how did we get where we are; and how can we get out?

How did we get here?

Jaron Lanier interestingly argues that the mistake was made at the very beginning of the Internet’s public existence. We allowed two, possibly contradictory, ideas to flourish at the same time. On the one hand, the radical idea that stuff online should be free: that one should not pay for using browsers, visiting websites, sending emails etc. On the other, the liberal idea that innovation is good and tech entrepreneurship should be incentivized. Given the strong commitment to both, advertising was the only solution. And when online companies realized that the data produced as a by-product can be useful, and that machine learning algorithms can squeeze a lot of knowledge out of it, the arms race in micro-targeted, behavioral advertising started. Two observations here.

First, it is by no means obvious or proven that targeted advertising leads to “more efficient advertising campaigns, which allow greater revenues”. One naturally assumes that it does – why else would companies, rational economic actors, spend money on it? But more and more research seems to show that these increased revenues are minimal (if they exist at all), and that companies’ behavior is a herd phenomenon, based on hype.

Second, we should seriously ponder the question whether an internet and a public sphere in which stuff is free and, at the same time, users retain privacy and autonomy are possible. Whether it makes sense to strive for a world in which one does not pay with money for using email, social media, browsers and search engines, and in which one retains full (or high) privacy and autonomy. The answer, obviously, will not be binary. But we should spend time thinking about whether the trade-off between free usage of convenient innovative products, on the one hand, and personal privacy and autonomy, on the other, is not inevitable.

“Information fiduciaries” cure symptoms, not the cause

Balkin’s “information fiduciaries” idea has two huge advantages and three problems. It’s a good idea because it’s 1) simple and 2) possible for courts to realize. It seems to me problematic when one thinks about its 1) operationalization in the design process; 2) oversight and enforcement; and 3) the fact that it does not change the perverse incentives, but merely puts legal constraints on how companies may act upon them.

The EU’s adventure with enacting the GDPR seems to make two things clear in the American context. It might be impossible to push any complex data processing regulation through the over-lobbied Congress. And even if it were possible, the result would be so complex and watered-down that it wouldn’t do us any good. That is where employing the concept of a “fiduciary” in the common law courts seems very tempting.

Speaking of the GDPR, Balkin is clearly skeptical of this “neoliberal” regulation. As imperfect as the GDPR might be, I disagree strongly with his characterization that “GDPR relies heavily on securing end-user consent (…) [and] is still based on a contractual model of privacy protection”. This is an American idea and, with regard to the GDPR, it is simply not true. The GDPR is an administrative regulation par excellence. It clearly specifies the duties of data controllers, including the need to demonstrate a legal basis for processing, consent being only one of them. In other words, what companies write in their terms of service and privacy policies does not affect their obligations, and does not change what they are or are not allowed to do with personal data. The “individual rights and transparency” part of the Regulation belongs to the oversight and enforcement side, which relies on a mix of public and private engagement. Realizing that public supervisory authorities will never have enough power to combat big tech by themselves, the GDPR equips individuals with information and access rights, which allows for “class action” by NGOs, increasing the chance of spotting infringements. This is not perfect, but it is not imperfect for the reasons Balkin invokes. And this helps one see where “information fiduciaries” fall short of being the cure.

First, this sounds like a great idea, but even within a good-will company, at some point engineers need guidance on how to implement it. Does showing me ads for sleeping pills at 3 a.m. go against the duties of care, confidentiality and loyalty? Sure, I guess. Do those duties impose an obligation to pull addictive games from my platform? That’s where stuff gets tricky. The GDPR’s problem is that it’s long and complex. But the problems caused by social media in 2018 are very complex as well.

Second, if we imagine that social media companies do become information fiduciaries, and even if we assume that their duties are specified sufficiently well, the question remains: what do we do if they violate those duties? The big difference between doctors, lawyers and nurses sharing my secrets, and social media building up a system that manipulates and addicts me, is that in the second case I might simply not know. The fiduciary model works perfectly if we assume that people will realize when these duties are infringed. But that is a bold assumption.

Finally, Balkin’s proposal does not really change the incentives to make money out of advertising; it just puts constraints on the ways in which social media companies would be legally allowed to do so. It does not disrupt the grand bargain, it civilizes it. And that is where my biggest skepticism lies. Because, as I wrote above, it just might be impossible to sustain innovation and free access to products without some sort of abuse of power stemming from access to data and control over products.

To “Fix” Social Media, Change their Incentives

Here we get back to the question of whether the “nature” of digital capitalism is fixed. And, as Larry Lessig made us see already 20 years ago, the answer is no. Instead of taking it as given and thinking about how to civilize it, let us think about how to disrupt the very system that gave rise to these business models.

From the perspective of political economy, my conviction is that we should not (only) regulate data processing, or privacy, directly; but regulate the market in a way that will change the incentives. How?

For example, ban targeted advertising. Or some forms of it. Or some types of content. Especially if we learn that it does not really work. Ban news feeds shaped by unknown algorithms. Require that users are in control of the choices. If companies are not allowed to use the data they collect and the patterns they infer, the incentive to collect and use them dramatically goes down.

The immediate response I fear is “but the First Amendment!”. I fear it because I know nothing about it, and cannot properly engage in a discussion. But just let me say: even Americans have bans on ads for cigarettes or alcohol, and rules on ads for medications. Even with the First Amendment there are bans on speech directly endangering national security (I don’t want to use the “t” word, since the perfect surveillance will immediately hit me ;). So if social media are or might be addictive and cause mental health problems (as it seems they are), and if they have created environments where a foreign power can influence American presidential elections, it seems to me that health or national security could be arguments justifying such an intrusion.

Or let’s do something else. Make it obligatory to offer a track-free, ad-free, paid option. Facebook’s yearly revenue is $40 billion, and it has 2 billion users. That is 20 bucks per user per year. We pay ten dollars a month each for Netflix, Spotify and Amazon Prime; why not for Facebook or Google? Sure, that is not an option for many people in less wealthy countries; as I said, it’s of course more complex. And yes, Amazon and Netflix also surveil and addict us. So such a move is not sufficient. But it’s easier to make companies stop when they have a secured income from sources other than abusive ads, manipulation or political propaganda.

Those are obviously imperfect ideas. But they are just one possible way to act on the claim that I am certain of: the role for law is to change the incentives that led to the “grand bargain”, not only to mitigate the bargain’s results.

CLAUDETTE: Automating Legal Evaluation of Terms of Service and Privacy Policies using Machine Learning

It is possible to teach machines to read and evaluate terms of service and privacy policies for you.

Have you ever actually read the privacy policies and terms of service you accept? If so, you’re an exception. Consumers do not read these documents. They are too long, too complex, and there are too many of them. And even if consumers did read them, they would have no way to change them.

Regulators around the world, acknowledging this problem, have put in place rules on what these documents must and must not contain. For example, the EU enacted regulations on unfair contractual terms and, recently, the General Data Protection Regulation. The latter, applicable since 25 May 2018, makes clear what information must be presented in privacy policies, and in what form. And yet, our research has shown that, despite the substantive and procedural rules in place, online platforms largely do not abide by the norms concerning terms of service and privacy policies. Why? Among other reasons, there is just too much for the enforcers to check. With thousands of platforms and services out there, the task is overwhelming. NGOs and public agencies might have the competence to verify the ToS and PPs, but lack the actual capability to do so. Consumers have rights, civil society has its mandate, but no one has the time and resources to bring them into application. Battle lost? Not necessarily. We can use AI for this good cause.

The ambition of the CLAUDETTE Project, hosted at the Law Department of the European University Institute in Florence, and supported by engineers from the University of Bologna and the University of Modena and Reggio Emilia, is to automate the legal evaluation of terms of service and privacy policies of online platforms, using machine learning. The project’s philosophy is to empower consumers and civil society using artificial intelligence. Currently, artificial intelligence tools are used mostly by large corporations and the state. However, we believe that with the efforts of academia and civil society, AI-powered tools for consumers and NGOs can and should be created. Our most technically advanced tool, described in our recent paper CLAUDETTE: an Automated Detector of Potentially Unfair Clauses in Online Terms of Service, can detect potentially unfair contractual clauses with 80%-90% accuracy. Such tools can be used both to increase consumers’ autonomy (telling them what they are accepting) and to increase the efficiency and effectiveness of civil society’s work, by automating large parts of the job.
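To give a flavour of how such a detector works, here is a minimal sketch of a sentence-level classifier, assuming a TF-IDF bag-of-words representation and a linear SVM in scikit-learn. The toy sentences, labels and parameters are invented for illustration only; the actual CLAUDETTE system is trained on a hand-annotated corpus of real terms of service and uses its own models, described in the paper.

# Minimal sketch of a "potentially unfair clause" detector (illustrative only).
# Assumes a hand-labelled corpus of ToS sentences; the real CLAUDETTE system
# uses its own annotated dataset and more sophisticated models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: 1 = potentially unfair, 0 = unproblematic
sentences = [
    "We may terminate your account at any time without notice.",
    "You can export your data at any time from the settings page.",
    "Any dispute will be resolved by arbitration in a forum of our choosing.",
    "We will notify you thirty days before any change to these terms.",
]
labels = [1, 0, 1, 0]

# TF-IDF over word unigrams and bigrams, fed into a linear SVM
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)

print(clf.predict(["We reserve the right to remove any content for any reason."]))

In practice, the hard part is not the pipeline but the corpus: every sentence has to be labelled by lawyers before any learning can happen, which is why the size of the annotated dataset matters so much.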

Our most recent work has been an attempt to automate the analysis of privacy policies under the GDPR. This project, funded and supported by the European Consumer Organization, has led to the publication of the report Claudette Meets GDPR: Automating the Evaluation of Privacy Policies Using Artificial Intelligence. Our findings indicate that the task can indeed be automated, once a significantly larger learning dataset is created. The learning process was interrupted by major changes in privacy policies undertaken by the majority of online platforms around 25 May 2018, the date when the GDPR became applicable. Nevertheless, the project led us to interesting conclusions.

Doctrinally, we have outlined what requirements a GDPR-compliant privacy policy should meet (comprehensive information, clear language, fair processing), as well as the ways in which these documents can be unlawful (if the required information is insufficient, the language unclear, or potentially unfair processing is indicated). Anyone – researchers, policy drafters, journalists – can use these “golden standards” to help them assess existing policies, or draft new ones compliant with the GDPR.

Empirically, we have analyzed the contents of the privacy policies of Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, AirBnB, Booking.com, Skyscanner, Netflix, Steam and Epic Games. Our normative study indicates that none of the analyzed privacy policies meets the requirements of the GDPR. The evaluated corpus, comprising 3,658 sentences (80,398 words), contains 401 sentences (11.0%) which we marked as containing unclear language, and 1,240 sentences (33.9%) which we marked as potentially unlawful clauses, i.e. either a “problematic processing” clause or an “insufficient information” clause (under Articles 13 and 14 of the GDPR). Hence, there is significant room for improvement on the side of business, as well as for action on the side of consumer organizations and supervisory authorities.

The post originally appeared on the Machine Lawyering blog of the Centre for Financial Regulation and Economic Development at the Chinese University of Hong Kong.

“Revolution!” found & re-posted

The text below was found written on a toilet cabin door somewhere around the HLS campus. I was shown a photo of it, and sent a transcript. It’s a piece of fiction, and I don’t know what the author’s intention was. I don’t know who the author is either. I totally disagree with the ideas presented there. Read it critically, and try coming up with better and acceptable ways of dealing with the problems it identifies:

“In the end, it was quite simple. In the end, history had the answers. And now everything is better.

There used to be two big internet companies, providing services to more than a billion people. These services became essential to societies’ functioning. Yet these companies were not really respecting the rights of the people. Privacy was nonexistent, freedom of speech and assembly were often violated, the right to digital property was not respected at all. Many wise women and men spent thousands of hours thinking about how to make these companies respect the human rights of their users. Regulation? No one wanted to regulate. Competition? There was no competition. Petition? They didn’t listen.

Then, somebody remembered how constitutionalism was born. When states, not multibillion-dollar transnational firms, were the major sources of human rights violations. Sure, there were philosophers, cool ideas, public discontent… But in the end, a lot of French people got mad, rallied together, demolished a few buildings, killed a few people, and said: look, here is a list of principles that one needs to follow while governing the state, otherwise we do it again.

And so, a few thousand people gathered in one valley, stormed the HQs of the two companies, demolished this and that, pondered beheading some people, handed over a list of principles to some of those people, and placed some new people in some positions. And said: govern your platforms based on these principles, or else.

Revolution. It worked once, why wouldn’t it work again?

It’s much better now. Is it perfect? No. Was it the ideal way to do it? No. Is it better than it was? Waay better.

The notions of public and private actors are contingent and fluid. The notion of power that is public in nature is fixed. And the nature of the social universe is such that when such a power is not tamed, it gets tamed.

In the end, it was quite simple.”

As I wrote above, it is a terrifying story. So I repost it here, for us to be able to think of better ways of dealing with the problem of uncontrolled power that might one day emerge within some internet companies.

We have an innovation problem…

…but as in ‘alcohol problem’.

The last time I checked, innovation was not a fundamental right protected by any constitution. And yet, for reasons that are not particularly clear to me, it so often seems to be an argument that trumps (…) strengthening the legal protection of consumers or privacy. In a discussion, whether in the classroom or over a beer, whenever someone even mentions regulating algorithms, or taking a stronger stance on the internet giants’ practices, there is always someone else to say: “No, this will slow down/impede innovation!”. And then you’re supposed to say “ah, yeah, sorry”.

Really? We’ve had 20 years of that innovation now; should we not run a little assessment of what went well and what went wrong, and whether this really is the way to go? Three points.

Firstly, we have numerous laws that impede innovation, and everyone seems to acknowledge their importance. We have product safety laws and standards, we have rules on clinical trials of drugs on humans and animals, we have labour law – all of these clearly make innovation in many spheres more expensive, slower and more difficult. But we have them, to protect human health, life and well-being; even if innovation in these spheres could also contribute to those values.

Secondly – what type of innovation are we talking about? Even more apps and platforms: Spotify and Netflix, and Uber, and Deliveroo. Even better targeted advertising. Even more stuff that can be done on one’s smartphone. Cool, it’s convenient, it makes life easier for some of us, but it also has side effects – alienation, the uberification of the economy, new types of addiction, fake news, filter bubbles – I could go on, but you know all that.

And yet, even though it’s clear and obvious that Google, Facebook et al. are openly violating European personal data protection law, consumer law on unfair commercial practices and unfair terms, and discrimination law – as well as all the values not yet explicitly protected by law (because “innovation”) – so many people seem to be fine with that. We won’t regulate them, we won’t actually enforce the laws we have in place, because that could slow down the Progress.

Don’t get me wrong – I’m advocating neither harsh regulation of new technologies, nor large-scale enforcement of the laws we created before their emergence. On the contrary, I think we need a proper, informed, balanced and serious discussion on what to do with law and regulation in the new reality. However, innovation should not be an argument against protecting privacy, increasing transparency or combating discrimination.

I get it: privacy is not as fundamental a value as life and health. But a new dating app, or the fact that your automatically generated playlist is now so perfect, or that you can order any pizza you want, is not as socially valuable as a new medication either.

Thirdly, and finally, I wonder where this comes from. Why are we so easily lured by this rhetoric? Who created it? These are the questions, and this is a post, in the research-I-would-do-if-only-I-had-time series. I don’t know, though I have a guess. It’s a mix of our modernist idolisation of progress and the really good PR of big business. And of the fact that we mistook our lives getting more convenient for our lives getting better.

All I wanted to say is that privacy is not absolute, but neither is innovation. And that we should start thinking about what type of innovation we are buying, and at what cost.

Facebook’s exercise of public power

In this post I want to argue that Facebook’s banning of pages and profiles and removal of posts is an exercise of public power, and as such should be subjected to the material and procedural standards of public law and human rights.

Ok, I’m not gonna actually argue that much. But I want to defend a weaker claim: it is not obvious that Facebook’s discretion should not be limited by fundamental rights and freedoms simply because it is a private company. The same applies to other platforms of equal social importance, like Google, YouTube and Twitter. And many other ‘private’ actors.

Context: one international case, and one Polish. You probably all remember Facebook’s removal of the photo of the ‘napalm girl’ and the outcry that followed. Critics were accusing Facebook of the ‘abuse of power’ and ‘censorship’, leading the company to change its initial decision. The critics’ arguments included the fact that the photo is ‘iconic’, and that Facebook’s role in news dissemination is enormous (44% of adults in the US get their news there).

In Poland, the case is of a different political colour. In recent days, a group combating hate speech and xenophobia held a mass-scale action of reporting extreme right-wing Facebook pages, which led to the deletion of dozens of them, including the pages of a member of parliament and of several nation-wide organisations, some with hundreds of thousands of supporters and followers. This also caused an outcry, and even made it to the national TV news on the station currently controlled by the government. The arguments invoked by the critics are essentially the same: freedom of speech, censorship, abuse of power etc. The difference is that this time Facebook’s decision has many supporters, who among other arguments claim that Facebook is a private company, acting for profit, and not only is but also should be allowed to do such things.

Now, there is a clear difference between the two cases. In the case of the ‘napalm girl’, Facebook did a ‘bad’ thing. In the case of the right-wing pages, it did a ‘good’ thing. There are two reasons for that classification to be widely shared. Firstly, many of the right-wing pages contained content that might be against the law on hate speech and promoting violence. I will deal with this soon. Secondly, there is an emotional reason. Let me dwell on it first.

It just so happens that Facebook currently has a clearly liberal and progressive agenda. And this agenda suits many commentators, probably including you and me. However, it is not clear that it will always be so. Today Facebook enjoys quite some freedom. Today liberal and progressive sells. But make two thought experiments. Imagine that Facebook had a right-wing agenda and blocked extreme-left pages. Or even just liberal pages, or whatever pages suit your worldview. Would you still be so sure that what it does is perfectly legit? Secondly, imagine that the political winds change. Imagine that Trump wins the election. Imagine that suddenly there is pressure on Facebook to change course (‘or else we tax you high’, or ‘we grant people property rights in their personal data’, or anything else that would hurt Fb). And that society at large approves. Will we still defend Facebook’s freedom and full discretion? Or will we then say: hey, but come on, everyone uses your services, you shape how people think, you have a public responsibility and duty?

Emotions aside: in classical legal thinking, which still prevails in many continental legal traditions, including the Polish one, the world was neat and ordered. There were public bodies, allowed to do only what the law says they can do and holding the monopoly on the use of force; and private bodies, allowed to do everything that the law does not forbid them from doing and not allowed to use physical force against each other. The 19th and 20th centuries witnessed the rise of constitutionalism, which subjected the exercise of public power by public bodies to human-rights limitations and control.

Within that picture, Facebook is indeed a private company. It can do everything that the law does not forbid it from doing. It is under no direct obligation to facilitate freedom of speech, the right to associate, fair trial etc. However, notice three things:

  1. Factually, Facebook’s power is enormous. With billions of people using it, billions of people trusting it to provide news, billions of people using it for organisation and communication, it can easily affect the abovementioned rights and freedoms. It might be a private company, but it holds a ‘public’ position in many senses.
  2. Even assuming that Facebook just deletes what it believes is against the law, it:
    1. interprets the law by itself, without relying on any court;
    2. executes the law by itself, because it has a factual monopoly on ‘digital force’. In the tangible world, the owner of a debate club might want to kick a speaker out of his property, but would need the police to actually take him or her out. In the tangible world, one might find some banners outrageous, but destroying them would still infringe someone else’s property rights. In the digital world, where there are no ‘bodies’, and people do not hold any property in their digital content, this is legally fine, and factually easy, since Facebook unilaterally controls the platform.
  3. However, Facebook does more than just delete illegal content. It sets its own rules and standards, often stricter than the law. Moreover, it not only deletes stuff, but through the underlying algorithms it chooses what will be displayed to whom and how often. In this sense, if we look at it as a public space, which in many senses it is (remember social media’s role in the Arab Spring and the Ukrainian Maidan?), it is the sole legislator, the court, and the executor of the ‘law’. It does not hold public power de lege, but it holds a de facto power perfectly imitating the one we have limited where the state is concerned.

Given all this, I think we need a debate on limiting the discretion of socially important internet platforms when it comes to policing the content displayed/allowed there. Obviously, dozens of questions arise: which platforms, who would limit them, is the market not enough, how would that impact innovation, etc.? There are other private parties who exercise other ‘public’ powers elsewhere (think of FIFA, multinational corporations etc.). Should we regulate business at large, or sectors, or what? There is much to be thought through. There is already a lot written on this. Much less read on this. The questions are on the table, and I don’t have tweet-long answers.

But I simply cannot accept the claim that it is perfectly fine for Facebook to interpret and execute the law, or actually do whatever it wants, because it is a private company. The power it holds is public in nature, just not yet labelled so by our analog laws. And if that does not convince you, remember: the fact that ‘our’ agenda sells might soon change. Just like with contracts: we need to make them when everything is fine, because we will need them when something goes wrong.

 

Law & economics against property and for central planning

I just had a wonderful Italian-style lunch, which made me too sleepy to read, and so I wrote this post. The post itself is a joke. Or is it?

“Grant property rights on this!” seems to be a remedy for all evil according to some Chicago-style law & economics utilitarians. In consequence, law & economics pretending-to-be-analytic-while-actually-being-political-activists scholars often go hand in hand with the prophets and proponents of neoliberalism. But this will soon end.

Property theories (like all normative theories) can be roughly divided into deontological and consequentialist. The former say: there is a reason to grant property rights to people (flourishing, natural law etc.) and so they should be granted, regardless of whether this will lead to the most efficient outcome. The latter, on the other hand, claim that we should grant property rights (or not) to people because, in consequence, they will be better off; or rather: the total utility will be the highest and the division the fairest if we grant subjects property rights.

Proponents of l&e, often considering themselves intellectual heirs of Hume (reason! and there’s no God!), will however just as often start with a from-is-to-ought argument: radio spectrum has been distributed more efficiently since property rights were granted in place of administrative distribution; capitalist states were better off than communist ones, because they had a clear and working property rights system; Moscow streets in the 90s were ruled by gangs, because there was no such system; and so it means that property rights and the market are better than their lack and/or central planning, so let’s grant them.

(This is the pasta; sometimes the formaggio of Hayek-and-local-knowledge and Akerlof-market-for-lemons gets added.)

However, even if we derive ‘ought’ from ‘is’, the direction of the arrow of time makes a difference. Just because some social ordering was less efficient in the past, it does not mean that it will be less efficient in the future. The world is changing.

What is the problem of central planning?
1. There is a huge amount of data;
2. This data is not aggregable, because it’s spread everywhere, and people’s preferences happen to change;
3. Since there is so much data, and we also don’t have it, we can’t really build a proper equation;
4. Even if we had such an equation, and managed to collect the data to insert into it, we wouldn’t have the computing power to compute it;
5. And even if we did, we wouldn’t be able to keep distributing the goods fast enough.

Oh, wait: that was the problem of central planning in the 70s. Or maybe even the late 90s. Or maybe it still is one; hard to tell, it’s been quite a while since any government really tried.

So let’s jump to the future: 2050. Everyone has a Google (or whatever will replace it) account; info about all our preferences, purchases, searches and actions is collected, BigData and stuff; we also have some chips in our veins scanning our blood and DNA and sending the data to the supercomputer, which will be 262,144 times faster than the current one (Moore’s law; see the back-of-the-envelope check after the list below; and even if not, way faster); and drones fly around bringing you stuff. So our problems are solved by:
1. Google&BigData
2. Google&BigData
3. Google people
4. Super-supercomputer (probably owned by Google)
5. Drones (Amazon, I guess)
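For the curious, the 262,144 figure is just Moore’s law compounded: assuming a baseline of around 2014 and computing power doubling every two years, the 36 years to 2050 give 18 doublings, and 2^18 = 262,144. A back-of-the-envelope check in Python (the function name, baseline year and doubling period are my assumptions):

# Back-of-the-envelope check of the Moore's law multiplier used above,
# assuming computing power doubles every two years from a ~2014 baseline.
def moore_multiplier(years: int, doubling_period: int = 2) -> int:
    return 2 ** (years // doubling_period)

print(moore_multiplier(2050 - 2014))  # 18 doublings -> 262144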

Suddenly it will turn out that having all this property, contracts, bargaining, market and stuff leads to a less efficient outcome, both for you and for society; and it will be the Google algorithm, knowing your skills and talents, telling you what work to do and giving you the best possible stuff in exchange. (‘Wow, I didn’t even know that I really wanted to have salmon for dinner, thx Google and Amazon!’). Or taking it away, when someone else will have more fun with it.
In short: public/private, government/corporations and other details are insignificant here.

What matters is: if you believe in freedom and you think it’s cool that you can buy something and it’s yours, stop being a consequentialist-utilitarian neoliberal, or otherwise your grandchildren will live in a Google dystopia.
So what should you be?
Repent and believe in Gospel!
(joking)
(not really, but I sort of have to pretend I am)
(so: joking! haha..)
So: read some Locke or Kant or this third guy, what’s his name…? The one who ruined all meaningful moral philosophy…? The one who invented the veil of ignorance; google him.

 

Ah, digested. Now I feel much better, can get back to work. And you should do the same!

When the state of exception becomes the rule

Europe has got to the point where the state of exception might become the rule. If this happens, a social/political/legal response will be necessary. In my opinion, we are not intellectually prepared to give such a response. And I believe it is high time we got started. In this brief post, I sketch my idea of what this could look like.

This is an atypical post here; I treat it as a suggestion-giver for a possible EUI-wide initiative, which would connect scholars with diverse substantive and methodological expertise/interests. However, I would obviously welcome any external cooperation, should this thing take off.

In short: in the aftermath of the horrible events in Paris, France has extended the state of emergency for 3 months (a 90-second explanation of what this means). We don’t know how the situation in Brussels will develop. God forbid, but it might be that as a result of another attack, or as a means of preventing one, other countries will follow.

No one questions the fact that we need security, that the criminals must be caught and the next attacks must be stopped. However, the process might be longer and more difficult than it seems now. Further measures, based on real or merely strategic secret service reports, might add up over the course of the next months or years. I do not mean the militarisation of the streets or curfews; I mean more subtle and ‘less visible’ changes: mass surveillance, arrests without warrants; something we already witnessed in the US after the Patriot Act, to name just the CIA secret prisons or the NSA scandal.

If this happens, our notions of Democracy, Freedom, Human Rights or the Rule of Law will be challenged by the new factual (social and political) situation.

I think we can all argue well why this is undesirable. There might be little room for argument, though.
What I think we are unprepared for is to argue how, in this new situation, to best preserve them.
The trade-off between liberty and security is not a simple one; it is not even the correct one. What matters is not only what is done, but how it is done.


My idea for a possible response at the EUI would stand on two pillars: theory and fact-collection.

THEORY 1
What needs to be theorised first is the state of exception, which remains in a dialectical relationship with the ‘standard/desirable/everyday state’.
On the former, I would suggest reading:

  1. The State of Exception by Giorgio Agamben (2005), the classic, where he analyses the concept from Rome, through Modernity, the scary-but-sharp work of Carl Schmitt (Die Diktatur (1921) and Politische Theologie (1922)) and WWII, to the Patriot Act of 2001 (a lecture is also available here);
  2. Normalising the State of Exception by Günter Frankenberg (2014), a longer but really thorough monograph, connecting strong insight into philosophy, political theory and law with legal analysis of what has happened after 9/11 in the EU and US.

OBSERVATORY
Another task would be data collection on what is actually going on and what the media report, both on the level of ‘announced threats’ and on that of the responses, including explicit or implicit announcements of the state of exception, limitations of liberties, counter-actions etc. That would obviously lead to the enrichment of the concept.

With these two in mind, THEORY 2
Knowing what exactly is being compromised, and how to theorise it, it would be possible to reconstruct which parts of our ‘traditional’ understanding of Democracy, Human Rights, Freedom and the Rule of Law are being challenged, and to prepare the path for creative work.

Obviously, this will be much more complicated, and the scheme above might be challenged in many ways, but I just wanted to show what I have in mind. And to ask if anyone would be interested in doing something like this.

I know we are all super busy, and it’s not that I have that much spare time, but I somehow have the feeling that we owe people something like this. And the more people join, the better (and faster) this could be done. I might be wrong,

but if you’re interested, drop me an email (Przemyslaw Palka).