The new "Partnership on AI" initiative will develop best practices and explore the impact of artificial intelligence tools on society.
Google and four other technology giants have launched a nonprofit initiative to develop best practices in the field of artificial intelligence and to advance public understanding of AI technologies.
The group, formally named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI), will explore opportunities for AI to benefit people and society. The nonprofit will publish research under an open license in areas such as ethics and inclusivity in AI, privacy, interoperability and technology reliability.
The other members of the nonprofit are Facebook, IBM, Amazon and Microsoft.
For the moment at least, the Partnership on AI will not be used for lobbying government and policy-making bodies, the five founding members said in a joint statement
released this week. The partnership was announced at IBM Watson's headquarters in New York.
The initiative grew out of conversations among research scientists at the five founding members over the past year or so. Informal discussions about establishing a nonprofit to explore advances in AI began earlier this year and culminated in this week's announcement.
"Google and DeepMind strongly support an open, collaborative process for developing AI," said Mustafa Suleyman, co-founder and head of applied AI at DeepMind, the UK-based artificial intelligence company that Google acquired two years ago.
The initiative represents a major step toward overcoming barriers that have kept AI teams from sharing knowledge and collaborating in critical areas like the impact of AI on ethics and society, he said.
Representatives from each of the companies, as well as a board comprising leaders in policy, law and academia, will govern the Partnership on AI. All work developed under the initiative will be shared publicly, as will details of all discussions, meetings and guidance produced by the group.
Google has staked a lot on AI technologies. In comments to shareholders earlier this year, company CEO Sundar Pichai described a vision
in which AI technologies from Google's acquisition of DeepMind will be central to its capabilities in areas ranging from search to autonomous vehicles.
The company has said it wants to use AI and machine learning to reshape how people go about their daily tasks, and to tackle major challenges in areas such as health care and the environment, including climate change.
AI and machine intelligence technologies already power many Google products, including core ones such as Search, Maps and Google Translate. In addition, the company's DeepMind group is using its technology to support medical research in the UK. Among the projects DeepMind researchers are involved in is one exploring how AI tools can better analyze eye scan data so health professionals can detect and treat eye diseases more effectively.
Recently, Google researchers have also been conducting extensive research on ensuring the safety of future AI systems. Earlier this year, for example, researchers from the company released
a paper developed in collaboration with researchers from multiple universities on safety problems pertaining to the use of AI systems and measures for mitigating them.
Another study, conducted in collaboration with researchers from Oxford University, explored the potential consequences of AI systems malfunctioning and suggested ways to address those concerns.