There is a classic episode of the television show I Love Lucy in which Lucy goes to work wrapping candies on an assembly line. Initially, Lucy and her sidekick Ethel have no trouble neatly wrapping the candies as they move across the line. As the line speeds up, however, the pair scrambles to keep up. Eventually the candies move so quickly that Lucy and Ethel resort to eating chocolates and stuffing their uniforms to avoid being reprimanded by their boss. As the situation becomes unmanageable, Lucy exclaims, “I think we are fighting a losing game.” Lucy’s dilemma serves as the perfect metaphor for the world’s data privacy problem.
Data is being generated at a rate that is difficult to fathom. Nearly three quintillion bytes of data are created each day. This explosion in data collection was sparked by the doubling of computer processing power every two years, and it is now compounded by the billions of devices that collect and transmit data, by storage devices and data warehouses that make it cheaper and easier to retain data, by greater bandwidth that moves data faster, and by more sophisticated software that extracts information from this mass of data. All of this is both enabled and magnified by network effects—the value added by being connected to others in a network—in ways we are still learning.
The European Union’s General Data Protection Regulation (GDPR), effective May 25, 2018, is the most recent piece of legislation that aims to regulate the massive influx of personal data being processed by entities around the world. According to the European Parliament, the protection of natural persons in the processing of their personal data is a fundamental right. The GDPR, through its 173 recitals, which cover forty-five specific regulations on data processing, forty-three conditions of applicability, thirty-five bureaucratic obligations for EU member states, and seventeen enumerated rights, aims to protect this fundamental right to data protection. The European Commission states that the purpose of the legislation is to give consumers more control over their data and to let businesses “benefit from a level playing field.”
In the U.S., many popular media outlets have praised the GDPR, and Senators Edward Markey, Dick Durbin, Richard Blumenthal, and Bernie Sanders have called on U.S. companies to adopt its provisions voluntarily. In fact, a growing number of senators want to make some of the provisions mandatory. A closer look at the GDPR, however, reveals various pitfalls with serious consequences for consumers around the world. This note provides a detailed look at the elements of the GDPR, reviews its effects in light of U.S. law and policy, urges restraint in adopting GDPR-style measures, and highlights the need for careful attention in formulating any new data protection legislation.
I. A Review of the GDPR
A. The Difference Between Data Protection and Data Privacy
Popular among misinformed consumers is the idea that the GDPR protects privacy when, in reality, the statute is focused on data protection or, more precisely, data governance. In fact, the word “privacy” fails to appear in the final text of the GDPR. Data privacy relates to the use of data by people authorized to hold that data. In contrast, data protection considers the technical systems that prevent unauthorized persons from accessing protected data.
Because the European Parliament has framed the GDPR as a “protection policy,” many people believe that the GDPR creates a morally superior regime to the one that currently exists in the U.S. This belief, however, conflates the value of privacy with a secular set of technical requirements on data protection. In addition, while the EU’s regulator for data protection labels itself the “global gold standard,” this assertion is not yet warranted because various critical components of the GDPR, such as data portability and the right to erasure, are still being tested both in the marketplace and in the courts.
As a growing number of tech executives assert the need for broad new federal privacy legislation in the U.S., many Americans are being persuaded by lofty descriptions of the GDPR, which they contrast with what they see as a morally inferior laissez-faire approach at home. They do so both because they confuse data privacy with data protection and because they are unaware of America’s own substantive informational privacy protections, developed since the founding of the country. This skewed understanding of the U.S. privacy framework exists, in part, because a growing number of journalists refer to the U.S. as the “wild west,” as if there were no laws or regulations on data privacy and protection. In reality, the U.S. privacy and data protection regime is arguably the oldest, most robust, well-developed, and effective in the world. The EU’s laws are relatively new, officially dating from this century, and still lack the history of judicial scrutiny and case law that characterizes U.S. law.
Utilizing these two principles, the FTC has developed a robust record of settlements that privacy professionals pay close attention to in order to determine best practices in the area of informational privacy. While settlements do not set precedent, their influence in the privacy community means that companies treat consent orders much like judicial decisions that have the weight of precedent. This is true even though consent orders do not require companies to admit to any wrongdoing. An added benefit of settling is efficiency, in that the FTC and the company in question do not have to tie up the courts and spend vast sums of money in litigation.
Whereas the GDPR assumes that any data collection is suspect and therefore regulates it ex ante, the FTC focuses its enforcement efforts on sensitive information that should be protected against unwarranted disclosure. The U.S. privacy system has a relatively flexible and non-prescriptive nature, relying more on ex post FTC enforcement and private litigation, and on the corresponding deterrent value of such enforcement and litigation, than on detailed prohibitions and rules. This system helps avoid imposing costly and draconian compliance mandates on entities that are not a priori threats to personal privacy, such as personal blogs, small businesses, and informational websites. The FTC’s approach seeks to allocate scarce regulatory resources to prevent the greatest threats to online privacy. To clarify, if a small entity behaves in an unfair or deceptive way, it can be prosecuted, but the FTC does not presume that every entity wants to harm online users. Several additional laws form the foundation on which the FTC carries out its charge: the Privacy Act of 1974, the Gramm-Leach-Bliley Act, the Fair Credit Reporting Act, and the Children’s Online Privacy Protection Act.
The American conception of privacy is predicated on ensuring the individual’s freedom from government intrusion and on pushing back against the growth of the administrative state. The framers’ distaste for excessive government power to invade the privacy of the people was forged into the Bill of Rights in the Third, Fourth, and Fifth Amendments. These amendments responded to egregious British abuses of personal privacy, including the quartering of soldiers in private homes, the search and seizure of colonists’ property, and the forcing of colonists to divulge information. Some of the first laws of the new republic constrained the government’s use of the census and its ability to compel information in court. The 1966 Freedom of Information Act (FOIA) ensured that people could access records held by the government. Given this history of pushing back against government intrusion, it is reasonable to be skeptical that increasing government power is now the key to privacy in the U.S.
B. GDPR-type Policy in the EU Has Failed to Increase Consumer Trust
The argument for adopting GDPR-like legislation in the U.S. would be stronger if Europe’s laws had successfully increased consumer trust in the digital ecosystem. Unfortunately, reports and surveys provide no such evidence. The biannual Eurobarometer survey, which interviews 100 individuals from each EU country on a variety of topics, has been tracking European trust in the Internet since 2009. Interestingly, European trust in the Internet remained flat from 2009 through 2017, despite the European Union strengthening its regulations in 2009 (implemented over the subsequent few years) and significantly changing its privacy rules, for example through the 2014 court decision that established the right to be forgotten. The evidence suggests that Europe’s data protection regulations have had little to no positive effect on trust compared with countries that have moderate or limited levels of data regulation. This is perhaps because consumers interpret heavy regulation as a signal from the government that technology cannot be trusted.
A poll conducted by the U.S. Census Bureau for the National Telecommunications and Information Administration (NTIA) in 2015 and 2017 gives insight into Americans’ sense of online trust. During this period, the proportion of American households that reported online privacy or security concerns “fell from 84 percent to 73 percent.” Similarly, the proportion of households that said privacy concerns stopped them from doing certain activities online dropped from 45 percent to 33 percent.
Trust among Americans can also be inferred from the user response to Facebook in the wake of the Cambridge Analytica scandal. In March 2018, Facebook estimated that Cambridge Analytica had been able to leverage its “academic” research into data on more than 87 million Americans. The Cambridge Analytica scandal, however, veered from the traditional data-leak narrative, in which a hacker illegally obtains information directly from a data controller without the controller’s knowledge. Here, victims were shocked to learn that data they had affirmatively consented to share with a third-party app provider through Facebook had subsequently been handed to yet another party to craft targeting strategies for political campaigns.
Despite the media’s efforts to perpetuate user distrust in Facebook, the number of daily active users (DAU) on Facebook in the U.S. has held steady. It seems millions of U.S. Facebook users are either completely oblivious to Facebook’s recent mishaps or simply not concerned. Overall, the platform has gained over five million DAU since the 2016 election. Further, a recent poll reports that more than half of Americans under the age of 53 either oppose the regulation of tech companies or believe that any attempt at regulation would make no difference. Many regulatory advocates have responded to this data by arguing that Facebook users suffer from a “privacy paradox”: they understand the value of privacy but fail to advocate for mechanisms, such as legislation, that would better protect it. These advocates, however, fail to understand the value that users derive from Facebook; users like having their family and friends, photo albums, and messaging all in one place. Most reasonable consumers understand that Facebook is a for-profit enterprise, one with a legal obligation to maximize value for its shareholders. They understand that data collection and targeted advertising are the main drivers of profit that make Facebook’s valuable services possible. Naturally, users expect to be treated fairly, but it would be unrealistic for them to assume that the platform will never make mistakes as it continues to innovate.
Indeed, rather than quitting Facebook in the wake of the Cambridge Analytica scandal, it seems as though users responded by giving Facebook a chance to make improvements. The scandal’s relatively small effect on Facebook’s bottom line may be related to Facebook having a resilient “brand personality” such that users understand that it is an imperfect and evolving platform. Indeed, Facebook experienced an increase in engagement from U.S. users following the Cambridge Analytica scandal, as users went online to update their privacy settings.
However, some U.S. users do quit Facebook. In fact, Hill Holliday’s recent survey of Generation Z found that more than one-half had switched off social media for an extended period and one-third had canceled all of their social media accounts. While 44% of respondents cited “time wasting” as the largest factor behind their decisions, 22% cited privacy concerns. The risk of declining user engagement is, on its own, enough to catalyze change at Facebook. A perfect example of this reality comes from Facebook’s recent attempt to change its model to emphasize posts from family and friends over news. In response to the changed model and the Cambridge Analytica scandal, the market rebuked the company: it lost $119 billion in value in a single day, the largest one-day drop for a company in U.S. history. This amount is roughly ten times the maximum fine that authorities could levy under the GDPR. Moreover, Facebook’s shareholders have demanded leadership changes and have initiated lawsuits against the company. This response demonstrates once again that users and the marketplace are the most effective regulators of corporate entities. In the midst of public relations disasters, firms possess the tools to improve user safety, and will do so without being compelled by the government.
II. The Risks Associated With GDPR-Style Legislation
The American free enterprise system has been one of history’s most successful engines of prosperity and liberty, and it retains the potential to deliver a promising future for the United States. Recent decades have seen a decline in economic growth and innovation, and one important cause is poorly designed government policy. The United States should therefore proceed with caution before it follows the EU’s lead and expands the power of regulatory entities in the U.S. data privacy regime. Under the GDPR, Data Protection Authorities (DPAs) become intermediaries between consumers and corporate entities. DPAs are independent public authorities that supervise, through investigative and corrective powers, the application of data protection law. Through the implementation of the GDPR, significant legal and policy risks have emerged that must be examined thoroughly before Americans can seriously consider elements of the legislation for our country. When regulation is deemed necessary, policies should be designed in ways that encourage competition and allow for experimentation. That is not the case with the GDPR, which allows for selective enforcement, excessive litigation, and the strengthening of bureaucratic entities.
A. Selective Enforcement
In law, selective enforcement occurs when government officials exercise enforcement discretion, which is the power to choose whether or how to punish a person who has violated the law. While selective enforcement can be efficient, it can also produce bias, corruption, and prejudice. The European Commission avoids phrasing its operations in terms of ‘selective enforcement’. Instead, the Commission relies on the term ‘prioritization’, which does not have the same definite and discerning connotation.
The Commission acknowledges that, in addition to prioritizing violations among policy sectors, it also prioritizes violations within each sector. A recent doctoral thesis published by the Department of Law at the European University Institute contends that the Commission’s policy of selective enforcement rests on four pillars: confidentiality, bilateralism, flexibility, and autonomy. For years, the European Parliament and various other stakeholders have pressured the Commission to reform its enforcement policy in order to increase its legitimacy in the eyes of EU citizens. They have sought to replace the Commission’s existing discretionary model of enforcement with a new approach characterized by standards such as transparency, trilateralism, objectivity, and accountability. The Commission, however, supported by the European Court of Justice, has for the most part resisted these challenges. Indeed, the Commission is unwilling to raise its standards and formalize its enforcement procedures because doing so would create administrative burdens, which would in turn decrease its efficiency.
As stakeholders struggle to comply with GDPR regulations, EU politicians have responded to the panic by touting selective enforcement as a safety mechanism. Green Party Parliamentarian Jan Philipp Albrecht, the “father of the GDPR,” has assured critics that GDPR investigations will not focus on small and medium enterprises, but instead “will concentrate on the bigger ones that pose a threat to many consumers.” Albrecht made clear that the firms “already under suspicion of not complying with European data protection rules… will be the first to be looked at.” He further noted that “if smaller companies are trying in good faith to comply with the GDPR, it would be disproportionate to sanction them.” Albrecht’s statements make clear that EU regulators passed the GDPR with a preconceived desire to strong-arm large companies such as Facebook, Google, and Apple that have a history of run-ins with European authorities. However, as the GDPR approaches its first anniversary, enforcement appears to be ramping up significantly. Fines have been levied against both large and small businesses, directly contradicting the assurances made by the politicians who spearheaded the GDPR movement. While France’s January 2019 fine of €50,000,000 against Google comports with Albrecht’s assertions, many recent fines do not. In October 2018, a small Austrian business was fined €4,800 because its security camera captured too much public space. Additionally, the French DPA’s rulings against two startups, Teemo and Fidzup, for data protection violations illustrate that the French DPA has no problem prosecuting startups, a rebuke of the German policymaker’s assurance that enforcement would focus on the big players. Selective enforcement of the GDPR over the past year has created a perfect mechanism for abuses of power to occur.
If European government officials continue to keep the public in the dark as to their enforcement criteria, they will create an environment of chaos, one in which businesses operate in fear. Selective enforcement should not be accepted as a simple fix for a poorly structured or unnecessary regulation.
B. Undue Empowerment of Litigants
In the United States, standing is a jurisdictional prerequisite, so a federal court will be quick to dismiss a claim for lack of standing if the plaintiff is unable to show injury from an alleged privacy violation. For a plaintiff to have standing, he or she must show “(1) [he or she has] suffered an ‘injury in fact’ that is (a) concrete and particularized and (b) actual or imminent, not conjectural or hypothetical; (2) the injury is fairly traceable to the challenged action of the defendant; and (3) it is likely, as opposed to merely speculative, that the injury will be redressed by a favorable decision.” Privacy plaintiffs often run afoul of this test: absent monetary damages arising from the alleged privacy violation, they are unable to demonstrate an “injury in fact” that is “concrete and particularized.”
Without a statute that provides a private right of action, even when a plaintiff has suffered economic harm as a result of trying to mitigate an anticipated future privacy violation, the Supreme Court has held that such an injury is too speculative for standing. In Clapper v. Amnesty International USA, the Court held that plaintiffs could not “manufacture standing” based on their fears of a speculative future harm and the money they spent in order to safeguard against surveillance of their client communications. This strict reading of the imminence requirement for standing further restricts the situations in which a privacy plaintiff may bring a claim absent a showing of an imminent violation.
In Europe, the GDPR has armed litigants with a new set of rights, including the rights to lodge complaints, to select representatives, and to seek judicial remedies when firms fail to comply with the GDPR. Just hours after the GDPR came into effect, Austrian activist Max Schrems’ non-profit None of Your Business (NOYB) filed complaints against Google, Facebook, Instagram, and WhatsApp, arguing that they act illegally by forcing users to accept intrusive terms of service or lose access.
The complaints demand investigations by the European supervisory authorities and, under Article 83, propose fines of up to four percent of the companies’ worldwide annual turnover for the preceding year—the maximum possible fine under the GDPR. Importantly, claims by advocacy groups such as NOYB under the GDPR need not allege injury or harm—which would be required for class actions in U.S. federal court—but only a failure to comply with the regulation, even if no harm results. This allows privacy plaintiffs to overcome a difficult hurdle, as there are frequently no concrete harms for courts to latch onto in privacy claims. While class actions can be viewed as a convenient, effective remedy for harm, they also create potential for abuse by activists and lawyers attempting to circumvent democratic procedures. Unlike the U.S., which has been at the forefront of collective actions with its far-reaching class action regime, European states have traditionally been hesitant to adopt such an expansive and powerful redress mechanism. While great uncertainty exists as to the scope and potential impact of the GDPR’s private right of action, the complaints already pending in European courts highlight the potentially devastating roles that private actors may end up playing in GDPR enforcement.
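To make the scale of that cap concrete, the fine ceiling can be expressed as a simple calculation. This is an illustrative sketch only: the EUR 20 million floor and the four percent rate come from Article 83(5) of the GDPR, which sets the ceiling at whichever of the two is higher; the function name and the turnover figures are hypothetical.

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Illustrative Article 83(5) fine ceiling for a given worldwide turnover.

    The ceiling is the higher of EUR 20 million or 4% of the undertaking's
    total worldwide annual turnover for the preceding financial year.
    """
    FLAT_CAP_EUR = 20_000_000   # EUR 20 million alternative ceiling
    TURNOVER_RATE = 0.04        # 4% of preceding-year worldwide turnover
    return max(FLAT_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)


# For a small firm, the EUR 20 million figure governs...
print(gdpr_max_fine(5_000_000))         # turnover of EUR 5 million
# ...while for a large platform the ceiling scales with turnover.
print(gdpr_max_fine(40_000_000_000))    # turnover of EUR 40 billion
```

The asymmetry is the point: for nearly all small businesses the flat EUR 20 million ceiling is the binding number, which is why assurances that enforcement will focus only on "the bigger ones" matter so much in practice.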
Generally, individuals have two routes to vindicate an alleged infringement of their privacy rights under the GDPR. First, under Articles 77 and 78(2), they can lodge a complaint against the infringing company with a supervisory authority, and if the supervisory authority fails to conduct an investigation, the private actor can seek a judicial remedy against the supervisory authority. Second, under Articles 79 and 82, the private actor can seek a judicial remedy directly against the infringing company for damages. Additionally, Article 80(1) allows a non-profit organization—like NOYB—to represent (and even receive compensation on behalf of) an individual, as long as the organization’s statutory objectives are in the public interest and the organization is active in the field of data rights.