This commitment follows protests from staff over the US military’s research into using Google’s vision recognition systems to help guide drones.
Google insisted last week that its AI technology is not being used to help drones identify human targets, but told employees that it would not renew its contract after it expires in 2019.
Google chief executive Sundar Pichai said: “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas.
“These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”
Mr Pichai did not explain how Google would reach decisions about when to limit the use of AI, but added that the company was not offering “theoretical concepts”. He said: “They are concrete standards that will actively govern our research and product development and will impact our business decisions.”
The new AI principles follow weeks of protest from more than 3,000 Google employees over “Project Maven”, a programme with the US Pentagon to develop AI for drones.
Miles Brundage, a Research Fellow at the University of Oxford, said on Twitter: “A bit vague in places, they don’t exclude offensive cyber security or anti-materiel autonomous weapons but it’s a start.”
The principles clearly state that Google will not work on AI for weapons, but they also leave room for interpretation by company executives and allow Google to work for the military.
Among its objectives, the project’s aim is to develop and integrate “computer-vision algorithms needed to help military and civilian analysts.”
In an open letter to Mr Pichai, Google employees expressed concern that the military could weaponise AI. “We believe that Google should not be in the business of war…Google’s unique history and its direct reach to the lives of billions of users set it apart.”
The principles also address a much broader range of concerns. Mr Pichai pledges to avoid creating systems that reinforce “societal biases on gender, race or sexual orientation,” and says that privacy safeguards should be incorporated into AI.
Source: The Telegraph