As soon as the so-called “datagate” scandal emerged, the press finally understood what some practitioners and scholars had grasped long before: the present legal system meant to protect privacy, at least as we know it in Europe, is not working. In fact, one cannot even say it is in crisis; it simply does not work.
Let’s be honest: the so-called “datagate” story has more to do with espionage than with privacy (the two are quite different things), but the truth is that if mail, internet browsing and telephone conversations are monitored by the million, then any attempt to protect privacy has clearly failed. Another point is that, apparently, the spying was not exclusive to the USA: Mr. Snowden has indicated that several European governments (allegedly Sweden, Spain, Germany and the UK) had joined forces to monitor communications in Europe, i.e. to do exactly what the US government has been accused of doing. The Italians were left out of this plot, apparently because their intelligence organization lacked reliability (according to Mr. Snowden’s reported statements). Interestingly enough, this story has never been denied; in fact, the only denial has come from the Italian intelligence services, which stated that they did not join the plot because it was in violation of the law. In other words, far from denying it, the Italians confirmed the story.
This being the state of things as we know them (and one tends to believe that (a) we know only a small portion of the facts and (b) what we still do not know is at least as bad), there is a fundamental question to ask ourselves: is privacy a right of the human person? Assuming a positive answer, the next question is: should it be protected? Assuming again a positive answer, and also assuming that the surveillance society is wrong, if we want to restore credibility in privacy laws the first thing to do is to go back to the origin of privacy laws in Europe and start the process all over again. Going back in history, one will find that privacy laws were enacted to protect citizens from governmental interference in their lives. In the mid-seventies, governments and public administrations started using computers to collect and store information on their citizens, with the stated purpose of orderly planning and providing the services that such societies had promised. The administrations needed to know how many children were born in order to plan the number of schools and teachers, and so on. This systematic collection of personal data by public administrations sparked the beginning of privacy legislation: since computers are so powerful, since they can store, sort and process data at a capacity and speed never known before, the fear arose that they could be used for illegal ends, i.e. to control individuals and to control society. Each administration may collect and store information for a lawful purpose, but someone could access all these data (stored and processed by separate entities for lawful purposes) and assemble, process and use them to pursue an illegal purpose, i.e. to discriminate against the very people the government is supposed to serve. In other words, it was felt that there was a potential risk of creating a “big brother” that would know everything about everyone and control everything and everyone.
Hence, privacy laws were meant to set the rules on how to use the data (these rules are what today we know as data principles). For this very reason these laws were correctly identified as laws on the protection of data (not as privacy laws), because their purpose was to establish how and under which conditions personal data could be used.
By the way, this was not just another law: if one considers that in those very days all the countries of the communist bloc were busy spying on and collecting data about their citizens in order to prevent, stop and in every way fight dissent, data protection laws were a strong signal that western Europe sent to its citizens and to the communist bloc as well. We are different, our governments told us. We live in a democratic world, and we shall not spy on our citizens; and to prove it, we set the rules and standards that we, as governments, will stick to.
European laws were first enacted in the mid-seventies and were based on the IT model of those times; by the time the European Directive was approved in 1995, that technology was obsolete. The PC revolution had exploded in the 1980s, and the processing of data was no longer confined to the large, centralized data centers of private companies or public administrations: everyone had a PC on the desk, so the focus shifted to the “processing” of personal data. At this point the original risk posed by public administrations was forgotten, since now the problem was that everyone, not just governments, had the ability to process data. In fact, the EU Directive stated that all “processing operations concerning public security, defense, State security (including the economic well-being of the State when the processing operation relates to State security matters) and the activities of the State in areas of criminal law” were exempt from its provisions. When the EU legislators exempted governments and public administrations from the application of privacy laws, they forgot and set aside the original seed of those laws: a quantum leap had just been made, and practically no one noticed.
As time and technology progressed, the gap between the EU Directive (and hence the local legislation of the Member States) and the real world has grown to a point of no return. The EU Directive (Directive 95/46/EC) is based on the Strasbourg Convention of 1981; as I have just pointed out, the Strasbourg Convention reflects the IT world of the mid-seventies, which in turn has nothing to do with the reality of today’s technology. The consequences (among others) are that (a) local privacy authorities have struggled to keep up with the pace of innovation, (b) they have succeeded only to a limited extent, and (c) privacy laws are seen as a bureaucratic burden with little effect on the protection of privacy. The newly proposed privacy legislation, i.e. the EU Regulation whose approval and implementation is presently pending before the European legislators, does not show any change that could significantly modify this scenario.
It has been said that privacy has “an image problem” (Julie E. Cohen, What Privacy Is For, 126 Harv. L. Rev. 1904 (2013)). If this is true, in my view it is due to the present inability of privacy laws to address the real issue of modern society, i.e. the shift from the information society to the surveillance society.
The core problem with European law is that it is still based on the “consent” model. The data subject (i.e. the person whose data are being processed) must be informed of the processing of her/his data and, in some cases, must consent to it. This model was adequate in a world where the use of computers was limited, as it was in the seventies. In today’s world, this model simply does not work and will become less and less effective as technology progresses; yet the proposed EU Regulation keeps moving in the same direction and using the same model. To make things worse, some modifications proposed in the draft Regulation (compared to the Directive) would render it one of the most bureaucratic, burdensome and ineffective pieces of legislation. At this point it is fair to say that privacy laws (at least in Europe) have more than just an image problem!
To name just a few: social networks, mobile computing, big data and cloud computing are far from being addressed in the Directive, as well as in the newly proposed Regulation. The question, again, is: where do we go from here?
For the sake of discussion, let me step aside for a moment and take a comparative look at privacy law and IP law. There is a striking similarity between IP rights and privacy rights. The owner of a patent is the sole person or entity that can use it and that has the exclusive right to license it to others; by the same token, no one can use a registered trademark without the owner’s consent; the owner of a software program can control its reproduction, distribution, modification, etc.; and the owner of protected content (music, movies, books, etc.) has the same exclusive rights to it. The owner’s consent is required for any use of anything protected under IP law.
Under privacy laws, in just the same way, no one can use or communicate to a third party a person’s data without that person’s consent, save for some limited cases (just as, under IP law, “fair use” is permitted without the owner’s consent). Hence, in principle one has the right to control the distribution of her/his data. Personal data have significant economic value, just like all the content and other materials protected under IP law. Just like protected content (be it music, software or videos), personal data have a life of their own, yet the person whose data are being processed has hardly any control over it. The question then is: if we accept the protection of IP rights, why shouldn’t we accept the same level of protection for something that (in some instances) can be very intimate and personal?
Marketers all around the world know that there is a wealth of information available to anyone capable of using and aggregating simple, basic personal data. The familiar “Like” button, originally introduced by social networks, is used to create a profile of each user; these data, once aggregated in any given fashion, become content that is sold at premium prices. Why is the same level of protection granted to IP-protected content not granted when someone is selling my own data?
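The aggregation described above can be sketched in a few lines of code. Everything in this sketch (the event data, the function name, the profile format) is a hypothetical illustration, not any platform’s actual pipeline; it only shows how trivially a ranked, marketable interest profile falls out of simple “Like” events:

```python
from collections import Counter

# Hypothetical "Like" events as (user, interest) pairs -- illustrative data only.
likes = [
    ("alice", "running shoes"),
    ("alice", "running shoes"),
    ("alice", "marathon training"),
    ("alice", "jazz"),
    ("bob", "jazz"),
]

def build_profile(events, user):
    """Aggregate one user's likes into a ranked interest profile --
    the kind of derived, marketable content discussed above."""
    interests = Counter(topic for who, topic in events if who == user)
    return {
        "user": user,
        "top_interests": [topic for topic, _ in interests.most_common(3)],
    }

print(build_profile(likes, "alice"))
# e.g. {'user': 'alice', 'top_interests': ['running shoes', 'marathon training', 'jazz']}
```

The point of the sketch is not the code itself but how little of it is needed: each individual datum is banal, yet the aggregate is exactly the kind of content that, were it a song or a film, would enjoy full IP protection.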
This parallel may be deceiving (and probably it is), but it highlights the contradictions of a system that is still at an initial stage. The fact is that privacy laws are not very sophisticated, for the mix of reasons I have tried to explain above; and being perceived as “having an image problem” has not helped move the development of the law forward.
Looking at the pace at which new technologies are being developed, it is fair to say that the present crisis of privacy laws may become irreversible if there is no swift change and if that change does not arrive soon; indeed, the sooner, the better. But on what basis can this change arrive?
I have identified seven points as a possible start for a new basis on which to build a new protection of privacy.

First and foremost, we need to know from our legislators whether they are serious about protecting privacy; without a strong political position, privacy protection will remain an illusion.

Second: it may be a difficult step, but a step we must take is to define, once and for all, who owns personal data. Is it the person whose data are being processed? Is it someone else? Right now this is not clear, so we need to define the rules.

Third: let’s abandon, once and for all, the “consent” system as we know it today, and let’s work on a better and more efficient opt-out system.

Fourth: in Europe, any product you buy from the shelves of a supermarket carries a mandatory label with all the information about it. Let’s do the same for all the apps being sold and/or used today (social networks included). Let’s create a privacy scorecard, simple and easy to understand, and let’s make it mandatory. Everyone has a right to know how her/his data are being used and, above all, whether and which data are being used.

Fifth: let’s impose a transparency report. Every year Google publishes a transparency report, but it is a very partial and limited one: it only states how many requests for personal data have been received from public officials around the world. Google can certainly do better, and so can all other companies.

Sixth: storage time must be reasonable. Right now Google and the European Data Protection Authorities (DPAs) are fighting over this very issue. Google says it needs the data forever and ever; the DPAs want the data kept for two months. Between eternity and two months there is significant room to find a reasonable compromise.
Seventh, and finally: let’s enforce access rights. Any person must have the right to know what data are being used and by whom, and to request that their processing be stopped; and this right must be enforceable (with stiff fines for those who fail to respond).
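A scorecard of the kind proposed in the fourth point could be as simple as a fixed set of mandatory fields that can be checked automatically. The sketch below is a minimal illustration under my own assumptions: every field name in it is hypothetical, not an existing standard, and the example app is invented:

```python
# Minimal sketch of a machine-checkable "privacy scorecard".
# The required fields below are hypothetical assumptions, not an existing standard.
REQUIRED_FIELDS = {"app", "data_collected", "shared_with",
                   "retention_days", "opt_out_contact"}

def missing_fields(card: dict) -> list:
    """Return the mandatory fields a scorecard fails to declare
    (an empty list means the label is complete)."""
    return sorted(REQUIRED_FIELDS - card.keys())

example = {
    "app": "ExampleSocialApp",                      # hypothetical app
    "data_collected": ["email", "location", "contacts"],
    "shared_with": ["advertisers"],
    "retention_days": 60,                           # cf. the two-month figure above
    "opt_out_contact": "privacy@example.com",
}

print(missing_fields(example))       # [] -- all mandatory fields declared
print(missing_fields({"app": "X"}))  # every other mandatory field reported missing
```

Like a food label, the value of such a scorecard lies in being uniform and mandatory: a regulator (or an app store) could reject any app whose label is incomplete, and a user could compare apps at a glance.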
These are just some ideas; some are based on “traditional” concepts of present privacy laws, some are merely suggestions. But if we do not start thinking (and thinking quickly), if we do not start discussing new alternatives to effectively enforce privacy protection, there is no way to stop the surveillance society.