On March 20, 2018, three years ago tomorrow, the Federal Trade Commission (FTC) announced its investigation into Facebook and Cambridge Analytica.
Let’s start by saying that the phrase “Cambridge Analytica” has the same feel as “subprime mortgage crisis,” in that (A) everyone knows these things were bad and yet (B) the average American doesn’t understand what happened, much less why it’s bad.
Some of this could be called a Brave New World problem, but in our time, the added challenge is the sheer complexity of the information, spanning everything from technical workings to legal boundaries. Theft, abuse, fraud, and other dishonorable behaviors were once obvious when detected; technology has abstracted many of these things beyond the public’s understanding.
When in doubt, it’s usually helpful to start at the beginning and go forward, and that’s what we’ll do now. But the story of Cambridge Analytica starts not in 2018, but back in 2011. Here’s our greatly condensed rundown:
November 2011 — Facebook agrees to settle FTC charges which the latter summarizes as follows: “[Facebook] deceived consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public.” The settlement requires Facebook to live up to its privacy promises, and to give consumers clearer notice whenever their information is shared.
June 2014 — An affiliate of Cambridge Analytica, SCL, enters into an agreement with Global Science Research (GSR) which, according to The Guardian, is entirely premised on harvesting and processing Facebook data. GSR is Aleksandr Kogan’s company, and through his app thisisyourdigitallife, he begins collecting detailed personal information on as many as 87 million Facebook users, most of them in the U.S., for Cambridge Analytica.
How was that possible? The app was essentially an online personality test, and several hundred thousand Facebook users took it, thereby granting the app access to their profiles. It’s neither surprising nor illicit, in itself, that Kogan and company would have personal info on their users. The real problem is that the app could also access the detailed personal data of those users’ friends, which greatly multiplied the breadth (and eventual manipulative value) of the app’s data-mining capability. But, in case it’s not obvious, the users’ friends could not and did not consent to that breach of privacy.
December 2016 — The Guardian’s Carole Cadwalladr is researching the U.S. election and smells something fishy when she turns up Cambridge Analytica. During her investigations, she comes under increasing attack—and without diminishing any trauma, that kind of reaction is often a red flag that a journalist might have sniffed out a valuable secret.
April 2017 — Facebook meekly confirms that the social network has been manipulated in attempts to interfere with the 2016 presidential election, though the exact details are left unclear.
March 17, 2018 — The New York Times and The Guardian (through its sister paper The Observer) break the Cambridge Analytica story at the same time. The FTC announces its investigation three days later.
It’s hard to say exactly what will come next, but to get an idea, let’s summarize public sentiment about tech in each of the last three years:
2019: The Year of Techlash. People are pissed about Cambridge Analytica because it’s maybe the first scandal of its kind; it breaks a lot of public trust and, like other big scandals, causes a lot of other curtains to be pulled open.
2020: Overture to Antitrust. The turning point came with the Big Tech hearing in July, which proved Congress was no longer messing around. There won’t be any more “Senator, we run ads” stupidity from this point forward.
2021: First Shots Fired. Google and Facebook have both been hit with antitrust suits, and there’s no reason to think Apple and Amazon are safe.