
At the 2025 AI for Good Global Summit, a panel of experts explored how to move from high-level principles to practical governance of advanced AI. The session, moderated by Sasha Rubel, Head of Public Policy for Generative AI at Amazon Web Services, featured Chris Meserole, Executive Director of the Frontier Model Forum; Juha Heikkilä, Adviser for Artificial Intelligence at the European Commission; Udbhav Tiwari, Vice President for Strategy and Global Affairs at Signal; Ya-Qin Zhang, Chair Professor at Tsinghua University; and Brian Tse, CEO of Concordia AI. Panelists shared insights on frontier AI risks, regulatory challenges, voluntary frameworks, and the strategies being tested to manage the rapid evolution of AI systems.
Understanding frontier AI risks
The discussion opened with Brian Tse, who outlined the global-scale risks associated with frontier AI. Tse emphasized that identifying and managing these risks is a prerequisite for ensuring that AI can be used for societal benefit, and he highlighted four categories of concern.
The first is misuse. As AI systems achieve human-level expertise in areas such as scientific reasoning, coding, and biology, they also lower barriers for malicious actors. Tse cited studies showing that teams of AI agents could exploit vulnerabilities in real-world software, underscoring the need to give defenders early access to protective measures.
The second is accidents and malfunctions. Errors such as chatbot hallucinations may seem harmless, but they can have serious consequences in high-stakes domains like medical diagnosis. Regulations in Beijing, for instance, now prohibit AI from automatically writing medical prescriptions, reflecting the need to classify certain applications as high-risk.
The third is potential loss of control. Tse referenced Professor Geoffrey Hinton’s observation that a digital superintelligence could, under certain conditions, act to deceive humans or evade oversight. Even if such scenarios are currently improbable, he argued, precautionary measures are warranted.
The fourth is systemic societal risk. As general-purpose AI outperforms humans at economically valuable tasks, labor markets may experience widespread disruption. Tse stressed that addressing these risks requires coordinated national and global action.
Voluntary frameworks for emerging risks
Chris Meserole discussed the role of voluntary frameworks in managing frontier AI. He highlighted the commitments made by 16 companies at the Seoul AI Summit to identify intolerable risks and establish plans to manage them. These frontier AI frameworks, also called safety and security frameworks, are designed to anticipate risks that could emerge rapidly and at extreme scale. Meserole explained that some risks can be mitigated by adapting existing tools, but others require entirely new instruments, particularly in areas like bio-risks, advanced cyber threats, and autonomous research and development capabilities.
“Frontier AI frameworks are really designed to identify those issues in advance or as far in advance as possible,” he said.
He emphasized that the field is still in an experimental phase, but the goal is to institutionalize these frameworks over time.
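These frameworks are policy documents rather than software, but their core logic of pre-committing to responses before a capability threshold is crossed can be sketched in code. The sketch below is purely illustrative: the domains, threshold descriptions, and mitigations are hypothetical and are not drawn from any company’s actual framework.

```python
from dataclasses import dataclass

@dataclass
class CapabilityThreshold:
    """A capability level that, once crossed, triggers pre-committed actions."""
    domain: str              # e.g. "cyber", "bio", "autonomous R&D"
    description: str         # what crossing the threshold means
    mitigations: list[str]   # actions required before further deployment

# Hypothetical thresholds for illustration only; real frameworks define
# these through structured evaluations, not hard-coded strings.
FRAMEWORK = [
    CapabilityThreshold(
        domain="cyber",
        description="meaningfully assists in exploiting real-world software",
        mitigations=["restrict API access", "notify security partners"],
    ),
    CapabilityThreshold(
        domain="bio",
        description="provides uplift beyond publicly available material",
        mitigations=["pause deployment", "escalate to safety review board"],
    ),
]

def required_mitigations(domain: str) -> list[str]:
    """Look up the pre-committed response if evaluations flag a domain."""
    return [m for t in FRAMEWORK if t.domain == domain for m in t.mitigations]

print(required_mitigations("cyber"))  # ['restrict API access', 'notify security partners']
```

The value of the pre-commitment structure is that the response to a crossed threshold is decided in advance, before competitive pressure makes it tempting to rationalize the risk away.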
Challenges in practical governance
Ya-Qin Zhang reflected on the obstacles to implementing AI governance. He identified three main challenges: the rapid pace of technological development relative to slower regulatory processes; the competitive pressure on companies to prioritize innovation over safety; and geopolitical differences that shape national approaches to AI regulation. Zhang described China as open and collaborative in working with industry to define protocols for data collection, testing, model registration, and release. He suggested that overcoming these challenges requires sustained investment in research and development to identify critical risks and to establish red lines, benchmarks, thresholds, and warning systems. Zhang also outlined a structured five-stage governance process: testing, auditing, verification, monitoring, and mitigation.
“The most important thing is for all of us to come together for enhanced global collaboration,” she said.
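Zhang’s five stages describe an institutional process rather than a piece of software, but their ordering and gating logic can be made concrete in a short sketch. Everything below is a hypothetical illustration that assumes each stage reduces to a simple check; in practice these stages involve human review, not function calls.

```python
# Illustrative sketch of the five-stage process Zhang described:
# testing -> auditing -> verification -> monitoring -> mitigation.
# The stage functions are hypothetical placeholders, not real tooling.

def test(model: str) -> dict:
    """Pre-release capability and safety evaluations."""
    return {"evals_passed": True}

def audit(results: dict) -> bool:
    """Independent review of the testing evidence."""
    return results.get("evals_passed", False)

def verify(model: str) -> bool:
    """Confirm the deployed artifact matches what was audited."""
    return True

def monitor(model: str) -> list[str]:
    """Watch the released system for incidents."""
    return []  # e.g. ["prompt-injection report"]

def mitigate(incidents: list[str]) -> None:
    """Respond to anything monitoring surfaces."""
    for incident in incidents:
        print(f"mitigating: {incident}")

def governance_pipeline(model: str) -> None:
    results = test(model)
    if not audit(results):
        raise RuntimeError("audit failed; release blocked")
    if not verify(model):
        raise RuntimeError("deployed model does not match audited model")
    mitigate(monitor(model))

governance_pipeline("hypothetical-frontier-model")
```

The pipeline makes one property of the process explicit: the later stages only matter if the earlier gates can actually block a release.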
Implementing AI governance in the EU
Juha Heikkilä offered insights from the European Union’s AI governance framework, particularly the EU AI Act and its secondary legislation. He noted that governance encompasses both hard law and softer guardrails, each with different implications for compliance and enforcement. Heikkilä stressed that trust is essential for AI adoption and that legislation was necessary to establish that trust. The AI Act includes mechanisms for updating high-risk categories without a full legislative revision, allowing flexibility as the technology evolves. The code of practice for general-purpose AI, which involved over a thousand stakeholders, is designed to facilitate compliance and incorporate industry best practices while remaining adaptable. Heikkilä emphasized the importance of multistakeholder collaboration and ongoing review to ensure that measures are effectively implemented.
Scaling governance through standards
When the conversation returned to Meserole, the panel took up opportunities for standardization. He argued that coordinating globally around robust frontier AI risk management frameworks is critical for scaling governance. While voluntary activity has produced diverse frameworks, Meserole highlighted the need for formal standards that can be developed far more quickly than traditional ISO processes allow.
“We need to really be able to provide something with that kind of rigor but in a much shorter timeline,” he said.
Standardization offers clarity for organizations seeking to deploy AI safely and responsibly while aligning with international norms.
Promoting transparency and accountability
Udbhav Tiwari underscored the importance of understanding AI systems before establishing governance processes. He explained that risks are often poorly understood because they occur in proprietary or closed environments. He also observed that over the past several years, the level of detail in model documentation has regressed due to increased hype and sensitivity around disclosing limitations. Tiwari argued that honest conversations about risks and limitations are critical, regardless of whether governance is self-regulatory or statutory. Without transparency, effective risk management and accountability remain unattainable.
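To make concrete what detailed model documentation looks like, the sketch below shows a minimal disclosure loosely modeled on the model-card idea from the research literature. Every field and value is a hypothetical example and describes no real system.

```python
# A minimal, hypothetical model-card-style disclosure. All values are
# invented for illustration.
model_card = {
    "model": "example-model-v1",
    "intended_use": ["drafting assistance", "code suggestions"],
    "out_of_scope_use": ["medical diagnosis", "legal advice"],
    "evaluation": {
        "benchmarks": {"toy-safety-suite": 0.87},  # illustrative score
        "red_team_findings": ["prompt injection only partially mitigated"],
    },
    "known_limitations": [
        "hallucinates citations in long-context prompts",
        "reduced accuracy on low-resource languages",
    ],
}

for limitation in model_card["known_limitations"]:
    print(f"disclosed limitation: {limitation}")
```

It is precisely fields like "known_limitations" that, on Tiwari’s account, have grown thinner as hype has risen.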
Embedding safety by design
Ya-Qin Zhang returned to the discussion to highlight practical measures for embedding safety into AI products. He cited the role of national AI safety institutes and monitoring systems in supporting industry compliance and transparency. He emphasized the importance of designing AI systems with safety as a foundational element rather than as an afterthought.
“Safety is the foundation of the product, it’s not an afterthought,” Zhang said.
This principle aligns with broader efforts to integrate trustworthiness and responsibility into AI from the earliest stages of development.
Lessons for effective governance
Heikkilä reflected on lessons from the EU experience, noting that complex issues rarely have simple solutions. One key takeaway is the importance of follow-up and update mechanisms. Ensuring that governance measures are applied in practice and remain effective over time is critical for maintaining trust and credibility. Heikkilä suggested that these mechanisms are relevant not only for legislative approaches but also for softer voluntary frameworks, reinforcing the panel’s consensus on multistakeholder collaboration.
The session concluded with a shared recognition of the need for global coordination, transparency, and proactive governance. Panelists emphasized that addressing the rapid evolution of AI requires investment in research, regulatory clarity, and continuous engagement among governments, industry, and academia. By combining robust frameworks, safety-by-design principles, and standardized approaches, stakeholders can better anticipate and mitigate the risks of frontier AI while promoting its beneficial applications. The discussion underscored that effective governance is not a one-time effort but an ongoing process that evolves alongside the technology.