Data Protection in Europe: Discourse at CPDP

Feb 12, 2014

Recently I found myself at the Computers, Privacy and Data Protection (CPDP) conference in Brussels. As someone coming from an NGO known for its capacity-building and practical education efforts, and with a background in Internet policy in the United States, I found the conference useful for understanding the current discourse around privacy and data protection in the EU. Below are my main impressions of this motley gathering of high-ranking academics, advocates, and industry reps.

1. Several company representatives expressed worry over the erosion of their users' trust in recent months. Yet while companies want to engender trust, these reps gave no indication that they plan to open up their coats and show how their services actually protect user data. Meanwhile, the FOSS community is trying to move beyond this kind of “just trust us” model, in which company policies and reputation often take the place of accountability processes and independent verification.

The Lavabit scandal from last summer is a prime example of the brokenness of this model: founder Ladar Levison made bold claims about his email service's security and privacy-protecting properties to people frightened by the NSA news cycle, presenting himself as an ethical service provider and badly needed hero figure. But Lavabit was never externally audited to verify those claims. After the service was shut down, Levison and reps from another reputation-riding service, Silent Circle, announced their new Dark Mail Alliance. This caused some ire in the security community, which challenged the lack of transparency at both service providers, pointed out known flaws in the Lavabit server encryption scheme, and ultimately delivered the verdict that, until it proves otherwise, this new effort can be seen primarily as a marketing gambit. More than ever, the greater Free and Open Source Software community feels it is essential to move beyond commercial claims of trustworthiness and promises of goodwill; many companies evidently do not. The FOSS community had a minimal presence at CPDP.

2. A big point of discussion was the need to reframe the concept of a “right to be forgotten.” Invisibility rights aren't new, and the way people forget is not the way computers forget. For example, suing someone will not cause them to forget what they saw about you on Facebook. There is no tool for achieving the eternal sunshine of the spotless mind, and if there were, we might be worse off. My takeaway: we must ask more deeply what it would mean for the network to forget things.

3. I've often heard Information and Privacy Commissioner of Ontario Ann Cavoukian's framework for Privacy by Design used interchangeably with “security by design” or “privacy by default” to describe a way of building and layering technologies and policies to ensure privacy and data protection at a fundamental level in tools and services. Post-CPDP, my impression is that circa 2014, a belief in Privacy by Design is often touted by companies to engender trust in their policies while sidestepping the bigger questions around data collection. Two decades in development, this conception of Privacy by Design appears to have been co-opted to justify data collection schemes with no framework for minimizing data collection or surveillance. Your ultimate faith in Privacy by Design might depend on how fully you believe in the power of corporate accountability in the first place. I'd like to see case studies on the implementation of this framework. Can you point me to some?

3.a By the way, did you know that Privacy-Protective Surveillance is a bona fide concept? Seeking to get beyond zero-sum societal debates around more vs. less surveillance, Cavoukian proposes that societies can justify the need for more surveillance as long as governance structures incorporate a set of principles into the design of systems to prevent the misuse of the data contained in them. This argument appears to echo some of the discourse around the principles of the EU data protection scheme. It goes something like this: we're going to keep collecting more and more of your data, but we promise to protect it in a way that engenders trust, so that we can enjoy your political goodwill. At the CPDP panel on Privacy by Design, one service provider framed his enhanced verification technique for biometric smart cards as an innovative form of “Privacy-Protective Surveillance,” which in turn fit comfortably into the Privacy by Design model, according to one of Cavoukian's deputies present at CPDP. The deputy later explained her view that in an age in which certain populations depend on verifiable biometric identification to claim many forms of government benefits, it is in everyone's interest to build these mechanisms more securely than ever. Hard to argue with, if you stay within the positive-sum surveillance framework described.

4. A lot of policy makers and academics agree that “we should educate the user!” As someone working for an NGO concerned very much with privacy education and outreach, I was a bit heartened to hear a plethora of exasperated calls for more user education around privacy and digital security. However, the call always seemed to come at the end of the talk, signifying the exhaustion of all other levers and instruments of data protection, or the conversation revolved around certification schemes, which one audience member called “the equivalent of requiring a driver's license to use the internet.” Privacy outreach and education are crucial, but the need for more of them should be seen as the start of a conversation. The how of privacy education and outreach is the real problem space.

Conclusions:

The rifts between the conversations around privacy that I witnessed at CPDP and the ones going on in the FOSS world seem quite wide. That's fine, but actors standing at odds within our seemingly unified privacy community could reap enormous benefits from sharing the same space more often. I'm glad I went to CPDP: it underlined for me that advocates and allies of the FOSS community must work harder on the framing of their arguments. If a policy maker thinks it's advisable to frame the newest, most advanced form of invasive biometrics as a privacy-protecting element of ever-broadening surveillance schemes, then it seems we need to define the progress we're pushing for more precisely. Virtually no one at the conference asked why the policy makers present aren't pushing for less surveillance or less data collection, i.e., “data minimization.” For me, this was the elephant in the room.

Do you think it's adequate for the predominant discourse to revolve around building “just trust us”-style accountability into ever-growing collection schemes? If not, then we will need to expand our thinking about what's acceptable in our data-permeated society at large, and work to establish social norms around the practices we deem acceptable.