On March 20, 2018—two years ago this past Friday—the Federal Trade Commission (FTC) announced its investigation of Facebook and Cambridge Analytica.
Where to begin? Well, let’s just say that the phrase “Cambridge Analytica” has the same feel as “subprime mortgage crisis” in that (A) both events are clearly very bad and yet (B) the average person doesn’t fully understand how or why. (If we’re being honest, we didn’t have a full understanding ourselves—not until we sat down to write this, got up to read more, and then sat back down.)
The story of Cambridge Analytica, like any other (hi)story, starts earlier than you think—not in 2018, but back in 2011. Here is a greatly condensed rundown:
November 2011 — Facebook agrees to settle FTC charges, which the agency summarizes as follows: “[Facebook] deceived consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public.” The settlement requires Facebook to live up to its privacy promises, and to give consumers clearer notice whenever their information is shared.
June 2014 — An affiliate of Cambridge Analytica (SCL) enters into an agreement with Global Science Research (GSR) which, according to The Guardian, is entirely premised on harvesting and processing Facebook data. GSR is Aleksandr Kogan’s company, and through his app thisisyourdigitallife, he begins collecting detailed personal information on as many as 87 million Facebook users—most of them American—for Cambridge Analytica.
How was this possible? The app was essentially an online personality test; several hundred thousand Facebook users took it, thereby granting the app access permissions. It’s neither surprising nor illicit, in itself, that Kogan and company would hold personal info on their users. The real problem is that the app could also access the detailed personal data of those users’ friends, which greatly multiplied the breadth (and eventual manipulative value) of its data mining. But—in case it’s not obvious—those friends could not and did not consent to that invasion of privacy.
December 2016 — The Guardian’s Carole Cadwalladr is researching the U.S. election and smells something fishy when she turns up Cambridge Analytica. During her investigations, she comes under increasing attack—and without diminishing the weight of any personal trauma, that kind of reaction is often an early sign that a journalist has sniffed out a valuable secret.
April 2017 — Facebook meekly confirms that the social network has been manipulated in attempts to interfere with the 2016 presidential election, though the exact details are unclear.
March 17, 2018 — The New York Times and The Guardian (through its sister paper The Observer) break the Cambridge Analytica story at the same time. The FTC announces its investigation three days later.
So what’s happened since then? 2019 was the year of techlash, and in hindsight that doesn’t seem surprising. Facebook paid a record $5 billion fine to the FTC, which hadn’t forgotten the settlement it had reached with the company eight years before. Facebook is still scrambling to address all of this (they can’t afford not to), but whether they can truly fix these problems is another question… and whether they can earn our trust is another question still.
Two of the latest developments to point out:
(1) Official word from Facebook has (unsurprisingly) been apologetic, flowery, and full of promises. This CBS piece outlines eight of Facebook’s promises in the wake of the Cambridge Analytica scandal; conveniently, the piece lists those promises in roughly the order you might rank Facebook’s follow-through on them (best to worst).
(2) Facebook has launched at least two new initiatives aimed at rebuilding trust where it has been most badly damaged: the Facebook Journalism Project (addressing FB’s damage to credible reporting) and Social Science One (addressing FB’s damage to democracy). Neither one has borne substantial fruit yet—and it’s hard to say where PR ends and actually-trying-but-need-time begins.