But that is just what they may have done, in enabling nefarious disinformation agents to access the most intimate details about our lives and leverage that information to undermine the 2016 presidential elections.
Facebook understands this well. In light of the Cambridge Analytica scandal, the company has made momentous announcements in recent days about its APIs, which allow third-party developers access to Facebook users' sensitive data. The company's chief technology officer wrote that Facebook would be shutting down some of the APIs' core functions and features.
This comes just days after the company announced changes to the Graph API, which Aleksandr Kogan had used to vacuum up personal data for sharing with Cambridge Analytica, the digital advisory firm that was engaged by Donald Trump's presidential campaign. But while Facebook's latest move may seem a positive step for the individual user's privacy, it could in fact only worsen the problems germinated by foreign disinformation operations.
An API, or application programming interface, is nothing more than a communication channel that lets one program interact with another. In the internet age, when big data has taken hold of nearly every web-based service, from banking to dating apps, the ability to access third-party data to power your app is both a tremendously powerful and a common business practice.
For instance, the Google Maps API includes a rich set of data about the physical world that many third-party developers would covet. Airbnb, say, might wish to use data from Google Maps to show users where housing is without having to create its own mapping database. It's the Google Maps API that enables Airbnb to do this.
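That pattern can be sketched in miniature. Everything below is hypothetical -- the class names, the data, and the `geocode` method are invented stand-ins, not the real Google Maps API -- but it shows the essential shape: one program exposes data through a defined interface, and another program builds on it.

```python
# A minimal sketch of the API pattern. All names and data here are
# hypothetical stand-ins, not the real Google Maps or Airbnb APIs.

class MappingService:
    """Stand-in for a provider like Google Maps, which owns the data."""
    _places = {
        "123 Main St": (40.7128, -74.0060),
        "456 Oak Ave": (34.0522, -118.2437),
    }

    def geocode(self, address: str) -> tuple[float, float]:
        """The 'API': a documented entry point other programs may call."""
        return self._places[address]


class RentalApp:
    """Stand-in for a third party like Airbnb: it has listings,
    but no mapping database of its own."""
    def __init__(self, maps_api: MappingService):
        self.maps_api = maps_api

    def show_listing(self, address: str) -> str:
        lat, lon = self.maps_api.geocode(address)  # delegate to the API
        return f"{address} is at ({lat}, {lon})"


app = RentalApp(MappingService())
print(app.show_listing("123 Main St"))
```

The rental app never touches the mapping data directly; it only calls the documented entry point. That delegation is what makes APIs powerful, and it is also what made Facebook's APIs a conduit for sensitive user data.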
In its announcement, Facebook stated more specifically that it was shutting down some features of its broadly used login functionality as well as its groups, events, pages and Instagram platform APIs. Technically, this means that third-party developers with access to these APIs will no longer be able to pull sensitive user data -- including member lists, names, profile photos and wall posts. Again, all seemingly positive steps toward protecting privacy.
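To make the change concrete, here is a rough sketch of the kind of request a third-party app could previously issue against the groups API. The endpoint shape follows Facebook's documented `/{group-id}/members` pattern, but the API version, field names, group ID and token below are illustrative placeholders, and the sketch only builds the request URL rather than calling Facebook.

```python
# Illustrative only: the version, fields, group ID, and token are
# placeholders. This builds the request URL; it makes no network call.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com/v2.12"  # version is illustrative

def group_members_url(group_id: str, access_token: str) -> str:
    """Build the kind of Graph API request a third-party app could once
    issue to pull a group's member list -- now restricted by Facebook."""
    params = urlencode({"fields": "name,picture", "access_token": access_token})
    return f"{GRAPH_BASE}/{group_id}/members?{params}"

url = group_members_url("1234567890", "EAAB_placeholder_token")
print(url)
```

Under the announced changes, requests of this shape for member lists, names, photos and posts will no longer return that data to third-party developers.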
But what does this really mean? To get at that we need to first understand how the digital advertising industry works. This ecosystem -- dominated by Google, Facebook, Twitter and a few other leading internet companies -- is based on a simple three-pronged business model.
First, these companies work hard to develop the most compelling services possible to hook the user and maximize space for ads on their platforms -- whether in search engine results pages, scrolling feeds, or news reader apps.
Second, they develop highly sophisticated ad-targeting algorithms that let marketers reach specific audience segments with great accuracy.
And third -- and perhaps most important because it underpins the implicit power of the first two -- they collect, unchecked, a vast amount of fine-grained personal data.
This last piece allows firms across the digital advertising industry to learn sensitive information about internet users and develop detailed data profiles based on inferences about each user's interests, preferences, behaviors and beliefs. These profiles are then repackaged and used by internet companies to drive content curation and targeted advertising on their platforms.
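The profiling-and-targeting loop described above can be sketched with toy data. The users, the inferred interests, and the matching rule below are all invented for illustration; real targeting systems are vastly more sophisticated, but the core operation -- matching an advertiser's criteria against inferred user profiles -- looks like this.

```python
# Toy illustration of profile-based ad targeting. The users, inferred
# interests, and matching rule are all hypothetical.

users = [
    {"id": 1, "inferred_interests": {"hiking", "politics"}, "region": "OH"},
    {"id": 2, "inferred_interests": {"cooking"}, "region": "OH"},
    {"id": 3, "inferred_interests": {"politics", "cooking"}, "region": "CA"},
]

def target_segment(users, required_interest, region):
    """Return the IDs of users whose inferred profile matches an
    advertiser's criteria -- the core of targeted ad delivery."""
    return [
        u["id"]
        for u in users
        if required_interest in u["inferred_interests"] and u["region"] == region
    ]

# An advertiser asks for politically interested users in Ohio.
print(target_segment(users, "politics", "OH"))
```

The policy question the article raises is not whether this matching happens, but who holds the profiles and runs the query: the advertiser, or the platform.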
So in tamping down third-party data access, isn't Facebook cutting into its core business model? Not likely. While the change may cause some difficulty for marketers and political communicators in the near term, Facebook will most likely find ways to take all the data it would once have shared with marketers, internalize it, and let marketers leverage the intelligence Facebook gleans about its users from that data from behind the company's walled garden.
Under this paradigm, Facebook would keep all the data itself; marketers would have to ask Facebook whom to target with ads and how, as opposed to making the decisions themselves.
Indeed, if Facebook makes no further changes beyond what it has announced, the key tools that enabled the notorious Russian disinformation operations run out of the Internet Research Agency will still be available to anyone who wants to use them. That includes legitimate political communicators, such as campaigns and their associates, who will still be able to leverage the sophisticated ad-targeting tools offered by Facebook and other leading internet platforms.
In other words, the tools of targeted advertising built on broker data will still be available. The difference is where the data sits: instead of residing in advertisers' hands, it will sit in the tech companies' hands. Campaigns will tell the companies where to route the ads, and the companies will make it happen through internal data analysis, rather than the campaigns doing that analysis externally. Both will still happen, but the balance will shift toward analysis inside the tech companies.
In a sense, the API changes Facebook is implementing could even contribute to the potential for nefarious disinformation operations, including the tactics that have been pursued by the Russians. These new changes will very likely make it harder for academic researchers, journalists and the broader body politic to analyze the integrity of public spaces; for instance, it will be harder for the public to understand how disinformation and hate speech spread on social media platforms if access to the underlying data is restricted.
Anecdotal evidence of misleading content -- like the revelations recently on CNN that the biggest Facebook page purporting to support the Black Lives Matter movement was in fact entirely fake -- will remain invisible to the public in the absence of more thorough data access.
Meanwhile, these steps will do little to deter the Russian operations that targeted so many American voters with egregious disinformation.
What should happen instead? First, our national policymakers must work to better understand the business models that underlie web-based services. The root enabler of disinformation is the unchecked collection of sensitive user data combined with the opacity of targeting algorithms. That business model is not likely to change, but when a disinformation operator begins to leverage this commercial paradigm, we have a problem.
Second, we need to be more proactive about the forthcoming risks to our democracy and spend less time looking back. The 2018 elections are fast approaching, and in their lead-up, we will most likely witness significant coordinated efforts, foreign and domestic, to misinform American voters.
Now that Facebook has established a transparency requirement on the funding behind political ads, it must also make sure it can detect ads that claim not to be "issue ads" but that nonetheless advocate ideas favoring one political candidate or philosophy over others. It must furthermore establish practices to identify and respond to nefarious disinformation operations, using advanced algorithmic detection technologies.
Finally, our lawmakers must understand that the disinformation problem is not exclusive to Facebook. Facebook CEO Mark Zuckerberg, who testified before Congress this week, and his company are under tremendous public scrutiny, for good reason: On many counts, Facebook's ugly underbelly has been exposed to harsh light and the company must be more accountable to the public.
But to say that Facebook is uniquely engaged in these business practices would be folly; on the contrary, these advertising and content infrastructures are commonplace across the consumer internet, which makes every company in the digital ecosystem an easy channel through which the Russians can push political disinformation. It is only a matter of time before they employ those alternate attack vectors, too.
In the long term, the United States will need to address the challenges raised by the internet's core business model by establishing comprehensive reform of the ways we assure consumer privacy and market competition in the digital sector.