Image credit: AdobeStock
A critical moment for global AI governance
ITU News
Leaders from the United Nations, industry, and academia gathered online to answer journalists’ burning questions about the rise of generative artificial intelligence and what it means for global governance ahead of the AI for Good Global Summit to be convened by ITU from 6 to 7 July.
The goal of the Summit, said ITU Secretary-General Doreen Bogdan-Martin, who opened the media roundtable, is to advance AI dialogue as a global community: with civil society, academia, the private sector, governments, and the UN.
“We are at a critical moment in history where we can get this right and build the global governance we need,” said AI expert Gary Marcus, professor emeritus at NYU who recently testified before the US Senate Judiciary Committee. Marcus looks forward to the Summit as an ideal place to “plan and think what the right next move is.”
Juan Lavista Ferres, Chief Scientist at Microsoft, pointed out that for certain problems, AI offers “the only solution we have” – emphasizing that we need “all the help we can get.”
While the revolutionary potential of the technology was not up for debate, it was the "way forward" in terms of global guardrails and governance that generated lively discussion.
Questioning the AI race
Talk of winning the AI race might be misguided, according to some panellists.
It wouldn’t make sense to have 193 foundational models that are expensive to train, Marcus pointed out. “We need a coordinated response across all countries.”
At the same time, there is no one-size-fits-all solution. But what is clear is that AI will need agile, flexible governance mechanisms.
For Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO, the rise of generative AI has been positive in that it has sharpened the world's focus on the institutional capacities governments need to take the best of AI and "control the downsides." She cited the United States Food and Drug Administration as an interesting example of "ex-ante" governance worth exploring.
Panellists agreed that the conversation has moved on from principles and towards action.
“We know what the issues are,” said Bogdan-Martin. “We have agreement on many of these principles. We have to look at implementation and compliance. When it comes to who and how, a multi-stakeholder attempt is needed.”
The question on everyone’s minds was whether a global regulator for AI is needed. But the answer is more complex.
Roles and responsibilities
When it comes to AI governance, international organizations have a role to play. "We bring the evidence and expertise of countries to inspire, to define what is more effective policy-wise," said Ramos. "But we are not enforcers."
When asked whether technical or ethical issues are being discussed at ITU, Standardization Bureau Deputy Director Reinhard Scholl answered: “It depends on the area – for something like health, the entire life cycle of AI must be considered. Discussions on machine learning for communication networks might be technical only.”
Should the fate of AI governance, then, lie in the hands of Big Tech giants like Microsoft?
“I don’t think business can self-regulate,” replied Bogdan-Martin. “Governments need to engage, and there’s an important role for the UN, academia and civil society, too.”
Lavista Ferres highlighted how private sector involvement in global initiatives like AI for Good is key. “We have been working with all UN agencies,” he added, noting the importance of discussions on topics like human rights that “we need to address together.”
For Ramos, the duty of care clearly lies with governments.
“Big Tech will respond to incentives and frameworks as any other sector.” If there are none, she warned, industry players will continue operating “as they have been until now.”
The way forward
Doreen Bogdan-Martin is optimistic about the Summit being “more than just words.”
“Let’s remember many countries are trying to engage – many are developing AI policies and strategies right now,” she pointed out. “Let’s get them around the table and try to avoid exacerbating inequalities further.”
When asked about clear outcomes, Bogdan-Martin hopes “to end up with a clear blueprint for the way forward.”
One thing was clear: for Bogdan-Martin, doing nothing is not an option – because humanity depends on global leaders taking action.
“We have to engage and ensure a responsible future with AI.”