Editor’s Note: Arun Vishwanath is a technologist who studies the people problem of cybersecurity. He is a faculty associate at the Berkman Klein Center at Harvard University. The views expressed in this commentary are his own.
In the not-so-distant future, we will be presented with the version of the news we wish to read – not the news that some reporter, columnist or editorial board decides we need to read. And it will be entirely written by artificial intelligence (AI).
Think this is science fiction? Think again. Many of us probably don’t realize that AI programs authored many parts of the summer Olympics coverage and, during the last election cycle, provided readers with up-to-date reports, personalized by location, on nearly 500 House, Senate and gubernatorial races.
And those news feeds on Facebook and Google News that the majority of people trust more than the original news sources? Those, too, employ machine-learning algorithms to match us with news and ads. And we saw how easily they were co-opted by the Russians to influence our last presidential election.
Follow the natural progression of these developments, and it leads to an ominous future in which AI entirely writes and presents the news exactly the way each of us would like to read it – forever altering democracy as we know it.
In this future, journalists might still report on events, but it will be AI that will take these inputs, inject data from its vast historical repositories and formulate a multitude of different themes, each making different arguments and coming to different conclusions. Then, using data about readers’ interests learned from their social media, online shopping and browsing history, AI will present them with the version of the news they would like to read.
For example, for a reader with strong views on the environment, news of heavy flooding in some place of interest might be presented from a global warming standpoint, with conclusions about how human activity has harmed the environment. For another reader who is skeptical of climate change, the same story might be presented with data and conclusions questioning the validity of climate science.
Stories might be presented in brief, for readers who like to skim the news, or in depth, for those who like to delve into details. They may even include actionable links to online stores selling essential supplies for those in the flood zone, or social media links connecting readers with others who share their interests. In essence, it will be the perfect AI-created echo chamber – where each person is an audience of one, connected to others who are always agreeable.
This hyper-personalized, AI-driven reality is closer than people realize – and it goes beyond the Olympic and election coverage I mentioned. After his purchase of the Washington Post, Jeff Bezos introduced Heliograf, an AI-based writing tool that, given predefined themes and phrases, can write complete articles. This software, while still far from autonomous, has already authored about 850 articles that have cumulatively garnered half a million page views.
Others like The New York Times, the Associated Press and many financial organizations are also testing and utilizing similar software for everything from local news reporting to financial report writing. Just consider this AP story on a Maryland-based company’s third-quarter results, written by AI.
Furthermore, thanks to Google, Facebook, Amazon and other online services tracking virtually every aspect of people’s online and even offline behaviors, we already have deep data on almost every American’s personal opinions and preferences – which these companies already use to target and position advertisements. All that’s missing is for one media organization to combine these processes.
And there is nothing to stop a company, especially one such as Amazon or even Apple, from doing it. After all, it would create the perfectly “sticky website,” where people, content and products are precisely matched – an advertiser’s dream come true.
Besides, no policy or law prohibits any of this – none whatsoever prescribing that the news must be authored by people. And news consumers would love such personalized news. After all, nearly half of news consumers, both right- and left-leaning, not only prefer to see political views on social media that align with their own thinking, but also tend to block or defriend people who disagree with their avowed political views.
The majority of news consumers also “happen upon the news” online rather passively, often while doing something else. They usually follow the same few news sources rather than seeking out another source to confirm what they are presented with, let alone to get a different perspective.
So the audience preference for an AI-driven, single news website that targets them with hyper-personalized content is already here, policies prohibiting it are absent and the technology for it is almost ready. In other words, this media future is primed for disruption.
A win-win for marketers, advertisers and readers – but a giant loss for democracy as we know it, because it will take away the core of what makes democracies successful: well-informed citizens, who form opinions not by simply reading articles they agree with, but by examining that which they don’t agree with – and then finding common ground.
However, we can preserve this critical part of our democracy through forward-thinking policy, media self-policing and a bit of introspection.
More specifically, first, when it comes to communication technology, policymaking tends to be highly reactive. From the Radio Act of 1912, a reaction to the sinking of the Titanic that eventually led to the creation of the Federal Communications Commission, to the many congressional hearings after Russian interference in our elections, we have dealt with the media reactively. What we need instead is to proactively address what we know is more than likely coming.
The problem with AI is not only that it will do things faster or better than human journalists; it is also that we will trust it implicitly. We already see this trend in court systems across the nation, which use AI-based programs to decide the punishments meted out to people convicted of crimes, without fully examining the underlying algorithms governing those programs.
Likewise, the AI-generated news of the future will likely be considered more trustworthy – unless policies are enacted that limit the extent to which algorithms can access audience profile data, thereby reducing the media’s ability to target each reader with their own version of “alternative facts.”
Second, the news media needs to act responsibly and self-police. With so many articles already being generated and matched to readers by AI, news sites need to start disclosing how such content matching was done, what parts of the content were authored by AI and, in the future, how many different versions of a story were created. This would help readers make up their own minds about the credibility of what they read.
Finally, the reading public bears the largest responsibility. What our recent presidential election has taught us is that what matters is not simply the availability of the media, the presence of competing content or even its accessibility. It is human agency. In other words, we the people have to actively seek information – some that is agreeable, a lot that is not; some that’s online, and some that comes from discussions with people who disagree with us – and form our own informed views. And that’s something tomorrow’s AI could well take away from us.