What’s next for ‘AI for Good’? 4 key transformations on the horizon

Dozens of top artificial intelligence (AI) experts from industry, academia, government and civil society are meeting this week for the AI for Good Global Summit at the ITU headquarters in Geneva, Switzerland.

Day 1 of the three-day Summit, the United Nations’ top platform for dialogue on AI, was dedicated to framing the enormous potential – and great challenges – for AI to improve lives worldwide.

Distinguished experts in the ‘Transformations on the Horizon’ panel framed some of the top issues facing the growing AI for Good movement. Below are four key areas highlighted by the group.
1) The great potential of AI for Good
From detecting cancer to precision agriculture to more accurate climate models, experts detailed some of the top ways AI can accelerate progress on the United Nations’ Sustainable Development Goals.

“We need to transform the way we think about AI, give it a positive attitude to help societies,” said Wolfram Burgard, Professor of Computer Science at the Albert-Ludwigs-Universität in Freiburg, Germany.

He added that AI for Good has applications in logistics and manufacturing, self-driving vehicles, health care, and precision farming, and that fresh advances in highly precise robot mobility will be a ‘key enabler’ for industry in Europe and beyond.

“It’s an exciting moment for AI. We are at a point where startups and industry can use technologies that a decade ago only a handful of research labs had access to,” said Celine Herweijer, Partner, Innovation and Sustainability at PwC UK, adding that PwC is trying to embed ‘responsible’ AI advice into the work it does for both industry and government clients.

Ms. Herweijer spoke of how AI can speed up natural cycles of intelligence and deliver massive productivity gains to optimize systems for water, mobility, farming, and the sustainable use of raw materials.
But realizing this potential will require all stakeholders to work together to turn the growing AI for Good movement into real progress.
2) Need for partnerships and accelerators for innovation for good
“We are at a critical juncture for AI,” said Terah Lyons, Executive Director of the Partnership on AI, pointing out that tech policy should be developed with more voices than just those of tech developers. “We need to confront and address questions now. This is crucial to ensure we develop AI that benefits all. We cannot determine solutions alone. We are united by an interest in a collective, multi-stakeholder approach to this.”

“We also all think tapping AI’s potential means grappling with its challenges,” said Ms. Lyons.
3) Need to proactively address challenges and risks
Panelists shared Ms. Lyons’ view that more needs to be done proactively to tackle the range of critical challenges and risks that AI raises. Because AI is moving so fast, these are no longer theoretical concerns.

Critical issues include weaponization, bias in AI algorithms, lack of transparency, manipulation of our behaviours through AI, job displacement, data ownership and more, said Wendell Wallach, a consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics, who wrote one book on how to keep technology from slipping beyond our control and another on whether we can teach robots right from wrong.

“We are not as subservient to tech idealists as we were last year,” Wallach said, referencing the first AI for Good Global Summit in 2017. “That is good news.”

So what do we need to do?

There needs to be more focus on ways we can protect each other, more focus on what can go wrong, said Wallach.

He also drew a distinction between “outwardly turning” AI, which would consistently focus on how specific applications can help the billions of more vulnerable people worldwide, and “inwardly turning” AI, which would consistently seek to identify and mitigate the societal harm that could come from specific uses of AI. Such harm could include, for instance, the appropriation of the technology by rogue actors or elites for self-serving purposes.
4) Keep it simple
There is a need to focus on the simple AI applications for good that already exist, not on arcane debates about technologies that don’t yet exist, experts agreed.

Too often, discussions about AI turn hypothetical, even though real use cases exist today that could be scaled up to great effect. One example Wallach gave is a small insurance company that uses deep learning to analyze satellite data, estimate rainfall for crop production in particular areas, and send messages to African farmers’ mobile phones so they can precisely time purchases of seed or fertilizer.
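To make that kind of service concrete, here is a minimal, hypothetical sketch in Python of the decision step that might sit between a rainfall model and the farmers’ phones. The data structure, thresholds, district names, and message wording are illustrative assumptions, not details of the actual system Wallach described.

```python
from dataclasses import dataclass

# Hypothetical sketch: once a deep-learning model has turned satellite imagery
# into a rainfall estimate for a district, a simple rule composes the advisory
# text to send to farmers' phones. Thresholds and wording are illustrative only.

@dataclass
class RainfallEstimate:
    district: str
    expected_mm_next_14_days: float  # model output, in millimetres of rain

def planting_advisory(est: RainfallEstimate,
                      sowing_threshold_mm: float = 40.0) -> str:
    """Compose an SMS-length advisory from a rainfall estimate."""
    if est.expected_mm_next_14_days >= sowing_threshold_mm:
        return (f"{est.district}: rains likely "
                f"({est.expected_mm_next_14_days:.0f} mm expected). "
                "Good window to buy seed and fertilizer now.")
    return (f"{est.district}: dry spell expected "
            f"({est.expected_mm_next_14_days:.0f} mm). "
            "Hold off on planting purchases.")

if __name__ == "__main__":
    # Example run with made-up model outputs for two districts.
    for estimate in (RainfallEstimate("Dodoma", 55.0),
                     RainfallEstimate("Turkana", 12.0)):
        print(planting_advisory(estimate))
```

In a real deployment, the estimate would come from the deep-learning model rather than hard-coded values, and the resulting text would be dispatched through an SMS gateway instead of printed.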
