Editor’s Note: Gregory C. Allen is an adjunct fellow at the Center for a New American Security. In July 2017, he co-authored a report, “Artificial Intelligence and National Security” which was published by the Harvard Belfer Center for Science and International Affairs. Follow him on Twitter @Gregory_C_Allen. The views expressed in this commentary are his own.
Conversations about the military use of artificial intelligence usually bring to mind the “Terminator” movies, where a super-intelligent AI turns evil and tries to destroy humanity. This month, the US Defense Department announced that it has indeed taken a major step toward regular use of artificial intelligence, but it’s a far cry from the Terminator approach.
The Pentagon revealed that it had completed its crash program to bring state-of-the-art artificial intelligence technology to America's military. Project Maven, which the DOD began funding in June, has operationally deployed its AI system to the fight against ISIS in the Middle East. This marks the first time the military has fielded an advanced AI system built on deep learning and neural networks. Its mission? Monitor the video feeds from tactical unmanned aerial vehicles – better known as drones.
Project Maven’s AI system, however, is nothing like the “Skynet” of the Terminator movies or even human intelligence. Project Maven’s AI possesses only narrow intelligence, meaning it is smart at the task of monitoring drone surveillance videos and literally useless for doing anything else.
Today, the military employs thousands of service members and contractors to analyze video from drone sensors. Though these individuals are highly (and expensively) trained, much of their day-to-day work involves tediously counting the people, objects, and activities that are picked up by drone cameras. Project Maven’s AI system automates this low-level counting and logging activity so that defense intelligence analysts can focus on more complicated tasks.
If watching drone video sounds like no big deal, think again. The DOD has spent tens of billions of dollars to develop, build, and fly its fleet of more than 11,000 drones. America’s military doesn’t suffer from a shortage of eyes in the sky, but it could never find enough people to watch all the video that drones record. Even though the DOD has been hiring and training video analysts as fast as it can, 99% of drone video data is never analyzed by anyone. Project Maven’s AI technology has finally provided a way for the DOD to surf the tidal wave of data that it collects, rather than drown in it.
Project Maven’s greatest impact, though, goes beyond drones and the fight against ISIS. The project managed to attract support from the academic and commercial tech sectors that are driving today’s AI breakthroughs. Incorporating commercial, state-of-the-art machine learning technology is a major milestone in the military use of AI, on a par with the first use of heat-seeking missiles nearly 60 years ago.
Breakthroughs in machine learning – especially the deep learning and neural network technologies used by Maven – have led to a massive expansion in the diversity of activities that can effectively be automated. In 2004, before the current machine-learning revolution, two prominent labor economists, Frank Levy and Richard Murnane, used driving as their prime example of a job that is difficult to automate. Thanks to machine learning, Uber began selling rides in self-driving taxis in 2016.
Project Maven’s AI is only smart at one thing, but its success at that one thing is likely to spark hundreds of similar AI projects throughout America’s defense and espionage agencies. Each of these other AI systems will be smart at its own, different thing.
At the Nvidia GPU Technology Conference, Jack Shanahan, the Air Force general overseeing Project Maven, said that the military will now likely seek to adapt AI technology to work with more types of surveillance platforms, such as larger drones and spy satellites. The technology will also have to be adapted to work with more types of data, such as radar, and in more regions and operational contexts. Eventually, advanced AI technology is likely to spread to every part of the military that has an abundance of data – which, these days, is nearly every part.
The US military is wise to pursue greater adoption of AI technology. As I wrote in a study on AI and national security on behalf of the US Intelligence Advanced Research Projects Activity (IARPA), advances in AI technology are poised to revolutionize 21st century warfare and espionage, much as aircraft and nuclear weapons revolutionized the 20th century.
America’s nearest military competitors, Russia and China, have reached the same conclusion. In his September 2017 speech on artificial intelligence, Russian President Vladimir Putin said that “whoever becomes the leader in this sphere will become the ruler of the world.” This past July, China released its national strategy for AI, which aims to establish China’s dominance in both military and commercial AI technology. Former Google CEO and current Chairman of the Defense Innovation Advisory Board Eric Schmidt said in November that he views China’s AI strategy as a credible threat to US tech leadership.
Project Maven’s success foreshadows AI’s incredible opportunities and challenges for the US national security community. As AI systems become more capable, and as countries like Russia grow bolder in their military use, AI will be fielded in more diverse missions and operational contexts. This will bring about ever more difficult legal, ethical, and strategic dilemmas.
To its credit, the DOD chose counting objects in drone videos as its first AI prototype because it sought an activity in which occasional mistakes wouldn’t have life-and-death consequences.
Counting objects is one thing, but drone surveillance can also be used to establish whether an individual is participating in combat and is therefore potentially subject to retaliation. Figuring out how to field safe and ethical AI systems with appropriate human oversight and control is only going to get more complicated.
In navigating such future dilemmas, Project Maven’s successful partnership with leading AI research organizations may prove to be its most important legacy. National security officials desperately need the counsel of the AI research community on how to utilize AI technology ethically and effectively, just as AI researchers need the defense community to explain the national security implications of the technologies they pioneer.
Every technology revolution brings both promise and peril, but not since the invention of nuclear weapons has the need for frank conversation been more urgent. Project Maven has finally brought state-of-the-art AI systems to the front lines of the US military, but that is only the beginning of America’s challenges with artificial intelligence and national security.