We Can’t Trust Facebook to Regulate Itself (Totally, yes! A response.)

Note: This piece considers this 2017 op-ed printed in the New York Times. It was written shortly thereafter and has not been updated to reflect the myriad new complaints one can now levy against the company.

Though Facebook has long enjoyed a seemingly immovable place in the American public consciousness, company executives surely hoped for rather less attention when conversation swirled earlier this fall about the social media giant’s entanglement with the Russians in the 2016 elections. One opinion piece—of many—written during that time comes courtesy of Sandy Parakilas, a former operations manager at the company. His article contends less with Facebook’s role in election meddling, however, than with its general inability to keep itself in check despite repeated assurances to the contrary. The result is an unforgiving op-ed that justifies its mistrust of Facebook by turning to the company’s extensive past failings on issues of user privacy.

That Parakilas chiefly uses the issue of privacy to cast doubt on the company’s ability to regulate itself in other areas is undeniable. Rather than exploit the topic, however, Parakilas offers a reminder of the unprecedented and worrisome nature of Facebook’s data collection and use policies. In a short space, he alludes to no fewer than half a dozen of Daniel Solove’s enumerated “privacy problems,” most notably emphasizing startling instances of information dissemination to third-party developers. These problems are made even more troubling by their legality and by the unwitting complicity of Facebook users, who “often authorize access to sensitive information without realizing it,” essentially greenlighting its eventual misuse.[1]

Intentionally or not, then, Parakilas’ piece, which initially focuses on corporate self-policing, also raises important questions about the feasibility of self-regulation at the consumer/user level, specifically just how well we can look after our own best privacy interests. Surely, internet users could opt not to use the site or, as the article states, deny its explicit requests for permission to use and share account information. For most, however, this ability to say “no” has proven illusory, in no small part due to the company’s past efforts to tilt the balance of power with its users massively in its own favor. As Parakilas describes, Facebook’s transformation of users’ personal information into online currency occurred innocuously enough yet with devastating effect. By establishing one’s online profile as the cost of admission for “addictive games” such as “Farmville and Candy Crush,” the site normalized what would become a steady one-way flow of information into unknown hands. In so doing, Facebook devalued the importance placed on keeping personal information private while simultaneously amassing a collection of such data whose actual value has only continued to climb.

Interestingly, this present-day predicament calls into question many of our past assumptions about how we value privacy and our ability to protect it. In Privacy and Freedom, for instance, Alan Westin dismisses (in rather short order) the dangers of self-invasion—perhaps the most significant of Facebook’s effects—and instead places faith in the average person’s continued ability to be discriminating about what they share and with whom.[2] He also, by the book’s end, describes a highly idealized set of “private forces” capable of ensuring adequate privacy protections throughout the country.[3] The basis of these forces (such as moral consciousness and ethical thinking) lies in a fundamental conviction not only in the public’s ability to defend its own privacy but, more vitally, in its desire to do so at all. His is an optimistic view that, several decades removed, now betrays a bit of naivete from an era still on the cusp of a technological revolution. Today, the internet is full not of informed users alert to the dangers of the information age but of users who have proven themselves ill-equipped to act in their own best interests.

Though average internet users are not entirely blameless in this, Parakilas spares them any responsibility in his piece. Nevertheless, he cannot fully separate their actions—often driven by ignorance—from their own exploitation. In one especially alarming instance, Parakilas cites a developer that used Facebook data to “automatically generate profiles of children, without their consent.” A textbook case of Solove’s appropriation (“the use of one’s identity or personality for the purposes and goals of another”), the data misuse was made possible only with user approval for the transfer of information willingly entered into the site earlier on.[4] While these fake profiles are perhaps the most egregious examples of wrongdoing, they represent the culmination of dubious data collection practices (often carried out without user knowledge), aggregation efforts, data insecurity, and more.

Parakilas’ account, of course, is neither the first to sound the alarm on Facebook’s abuses nor the most painstaking in its detail. It is, however, unique in offering an insider’s perspective on how the company has responded to these issues in the past, which is to say: ignore them completely until you can’t (or, as he puts it, “when negative press or regulators are involved”).[5] Further, the company’s calls for self-regulation are unsurprising in a corporate landscape averse to government intervention of any kind. Today, the same hollow arguments for efficiency and innovation that neutered the Privacy Act of 1974 persist and remain effective. This speaks to the larger issue of the United States’ failure to adequately protect its citizens’ privacy through legislation, effectively leaving a vacuum in which private industry can declare itself its own watchdog. While this would be worrisome under any circumstances, Parakilas emphasizes the absurdity of Facebook’s proposition, noting that its profits are tied to satisfying advertisers whose interests are often diametrically opposed to safeguarding user data. Any promise to balance these interests, Parakilas continues, could not be trusted given how the company has continually “prioritized data collection over user protection and regulatory compliance.”[6]

That a private company has taken advantage of gaps in American law to exploit the personal data of millions is unambiguously reprehensible and something Parakilas repeatedly takes Facebook to task for in his op-ed. Yet, in focusing so much on the company, he perhaps underplays the impact of absent legislative efforts and an internet user base largely uneducated on issues of privacy, both of which have created an environment ripe for abuse by far more companies than just Facebook.

[1] Sandy Parakilas, “We Can’t Trust Facebook to Regulate Itself,” New York Times, November 19, 2017.

[2] In imagining what it might take to radically reshape how, where, and with whom we share information about ourselves, Westin earnestly suggests only a “widespread use […] of drugs.” I’m not sure social media quite counts as one, but it seems to have had the same effects Westin attributes to LSD!
(see: Alan Westin, Privacy and Freedom (New York: Atheneum, 1967), 53.)

[3] Westin, 378.

[4] Daniel Solove, “A Taxonomy of Privacy,” University of Pennsylvania Law Review 154 (2006): 543.

[5] Parakilas.

[6] Parakilas.
