Social media faces a crisis of trust. Europe wants to make sure artificial intelligence doesn’t go the same way.
The European Commission on Monday unveiled ethics guidelines that are designed to influence the development of AI systems before they become deeply embedded in society.
The intervention could help break the pattern of regulators being forced to play catch-up with emerging technologies that lead to unanticipated negative consequences.
The importance of doing so was underscored Monday when Britain proposed new rules that would make internet companies legally responsible for ridding their platforms of harmful content.
“It’s like putting the foundations in before you build a house … now is the time to do it,” said Liam Benham, the vice president for regulatory affairs in Europe at IBM (IBM), which was involved in drafting the AI guidelines.
The European Union has taken the global lead on tech regulation, introducing a landmark data privacy law last year while going after big tech companies for anti-competitive behavior and unpaid taxes.
AI, which has captured the public’s imagination and produced dire warnings on the potential for misuse, is the latest regulatory front for the bloc. It’s not an easy topic.
Google (GOOGL), for example, shuttered its new artificial intelligence ethics council last week after a wave of employee protests demanding the removal of the president of a conservative think tank from the group.
The European Commission has crafted seven principles for guiding AI development and building trust. While the guidelines are not binding, they could form the basis of further action in coming years.
Transparency is key
Mariya Gabriel, Europe’s top official on the digital economy, said companies using AI systems should be transparent with the public.
“People need to be informed when they are in contact with an algorithm and not another human being,” said Gabriel. “Any decision made by an algorithm must be verifiable and explained.”
An insurance company that rejects a claim based on an algorithm, for example, should ensure the customer knows how and why the decision was made. A human should be able to step in and reverse the decision.
The European Commission said that future AI systems need to be safe and reliable for their entire life cycle. It also said that data protection must be a priority, with users in control of their own information.
The guidelines put responsibility squarely on those who build and deploy the AI systems.
“If a company puts in an AI system, that company is responsible for it … this is very important if there is any accident,” said Gabriel.
Gabriel also said companies need to ensure their AI systems are fair.
She said, for example, that a hiring algorithm trained on data from a company that had employed only men would likely reject female candidates.
“If you have biased input data, that really can be a problem,” said Gabriel.
AlgorithmWatch, a non-profit group, said that while it’s a good idea to put guidelines in place, there are problems with Europe’s approach.
“The guidelines center around the idea of ‘trustworthy AI’ and that is problematic because it’s not a well-defined term,” said Matthias Spielkamp, the group’s co-founder. “Who is to trust and who is to be trusted?” he added. He also said that it is not yet clear how future oversight will be handled.
Thomas Metzinger, a philosopher and professor at the University of Mainz, helped draft the guidelines but criticized them for not prohibiting the use of AI to develop weapons.
Others are worried about the impact the guidelines will have on innovation.
“We are concerned that the granularity of the guidelines would make it difficult for many companies, particularly [small and medium-sized businesses], to implement,” said Antony Walker, deputy CEO of TechUK, an industry group.
The European Union will now try to work through these questions and others in a pilot program with Big Tech companies.