Editor’s Note: Dipayan Ghosh is a fellow at New America and the Shorenstein Center at Harvard Kennedy School. He was a technology and economic policy advisor in the Obama White House, and more recently, a privacy and public policy advisor at Facebook. Stephen Wicker is professor of electrical and computer engineering at Cornell University and author of “Cellular Convergence and the Death of Privacy,” published by Oxford University Press. The opinions expressed in this commentary are theirs.
Amid a fresh cycle of reports last week, Facebook confirmed that it had data partnerships with no fewer than 60 device manufacturers, including four Chinese firms – Huawei, Lenovo, Oppo and TCL. These companies maintained access to Facebook user data, as well as information on users' friends, even though Facebook did not collect prior consent.
Many have already highlighted the tremendous harm that such expansive sharing of data with third parties – particularly with firms that have close associations with foreign governments that harbor their own agendas against the United States – poses for American democracy. Beyond the obvious risk to individual privacy is the concern that this never-ending leakage of data could add fuel to the raging fire of political disinformation. Indeed, access to sensitive personal data offers exactly the foothold necessary for the propagators of disinformation – both foreign and domestic – to operate with effectiveness and precision.
For their part, Facebook chief executive Mark Zuckerberg and leaders in the rest of the industry have consistently outlined a stark vision: that the antidote to the spread of disinformation on their platforms will lie in their mastery of artificial intelligence.
In the Zuckerberg vision, the problem is clear: the dark world of disinformation operators – among them the Russia-based Internet Research Agency – is vast, diverse and global. These malevolent actors continuously create manipulative content and shower the internet's leading platforms with their subversive falsehoods around the clock.
Moving forward, disinformation operators will act with ever greater speed, scale and sophistication. No matter how many humans Silicon Valley or the US government might engage in attempts to impede this destructive activity, their combined countermeasures will never match the universe of anonymous disinformation agents without the assistance of machine-trained algorithms.
Artificial intelligence (AI) has to be part of the answer; executed on Silicon Valley's tremendously powerful, state-of-the-art servers, AI will accelerate and fine-tune automated propaganda filters, making the industry far more effective in the fight against disinformation. No other technology can scale to match the malicious, state-funded online actors ready to pounce on democratic discourse at every opportunity, from every angle imaginable. We are entering an escalating war, with computerized mercenaries as our infantry against ever-active digital bomb-throwers taking aim at our democracy.
But what if Zuckerberg’s vision of AI saving Americans from Russian disinformation agents is a mirage? After all, a different kind of AI is principally responsible for this mess in the first place; it was social media algorithms that empowered political operatives in 2016 to identify and target the audiences that were most likely to react to misleading content.
As leaders in Silicon Valley hone their weapons of detection, foreign and domestic propagandists will continue to leverage internet targeting algorithms, dividing us up into opposing groups and feeding us the disinformation we are most likely to find relevant.
By default, internet companies don’t care that we’re looking at disinformation so long as we’re engaging with content and looking at ads disseminated on their platforms. That is part and parcel of how internet companies work; optimal routing of relevant content attracts more eyeballs to the screen for longer periods of time, which in turn means higher advertising revenues for the industry.
The real problem thus lies in the core of the consumer internet’s business model. These companies are founded on algorithms that take as inputs the immense amount of personal data that we offer up every minute of the day. The outputs are highly targeted ads and a selective presentation of digital content – each of which is driven by sophisticated AI tools. The data and algorithmic tools that Steve Bannon and Cambridge Analytica have wielded with such terrible power are the same tools that are driving profit margins for the leading internet companies.
In a sense, Zuckerberg and the rest of the industry are pointing to AI to highlight its positive use while remaining silent about the potential for AI's increased integration into the same ad-targeting technologies that have in the past so empowered both the Russians and manipulative political commentators in the United States.
If the industry does increase the use of AI in ad-targeting and content curation algorithms, American voters – and politicians seeking election – should be worried as the 2018 election cycle approaches. AI that is integrated into ad-routing and content curation will only exacerbate the same problems that caused malicious content to pervade social media platforms in the lead-up to the 2016 elections and beyond.
The root of the problem is clear: American consumers have little control over their personal data, and even less control over how that data is used. Internet-based algorithms are opaque to the public; as consumers, we have little sense of how Twitter or YouTube curate our content feeds, how Google ranks search results, or how Facebook enables the targeted placement of digital advertisements (to be fair, Facebook is rolling out new tools to make digital ads more transparent, though critics contend they do not go far enough) – all of which will increasingly be powered by AI. Similarly, we will likely have little visibility into how these companies deploy AI to overcome malicious actors like the Internet Research Agency.
Does this mean that we should discourage the industry from deploying AI in the fight against the ongoing spread of disinformation and other forms of malicious content? Certainly not; as Mark Zuckerberg has rightly noted, the key to combating disinformation must lie in the use of AI combined with human review teams.
But it does mean that unless internet companies act with greater transparency, they will only be addressing specific symptoms, instead of treating the underlying disease. Given that this will run counter to their business models, government regulation of data collection and its use is necessary.
Consumers need to know what data about them is collected and how it is used. In an ideal world, they would also have the ability to opt out of such data collection without losing access to the service. With its General Data Protection Regulation, which went into effect last month, Europe is already well ahead of the United States. And while several attempts have been made over the past few years to legislate an American privacy law – among them the Obama administration's Consumer Privacy Bill of Rights Act, the Data Broker Accountability and Transparency Act and the CONSENT Act – each effort has been swiftly met with intense partisanship and political gridlock.
The United States needs to do better. After all, the best defense against manipulation is a little space, away from all the noise, in which the individual can think independently.