Cyber stability under pressure: A reality check for cyber norms in an era of AI-driven cyber risks
On-site registration: RSVP to [email protected] and register for the Geneva Cyber Week by 30 April
2026 Geneva Cyber Week
Cyber stability is under increasing strain — not only from more sophisticated attacks, but from the rapid integration of artificial intelligence into both the tools used to carry them out and those used to defend against them. The same technology that makes defenders faster also makes attackers faster. The same AI model that helps a security team identify weaknesses in their own systems can help an adversary find them first.
At the same time, several major AI providers have revised the terms governing how their models may be used, including in some cases terms that previously restricted military and national security applications. Governments in a number of jurisdictions have actively sought to expand their access to commercial AI capabilities for defence and intelligence purposes. There is evidence that criminal and APT groups — including those allegedly affiliated with states — are increasingly adopting commercial AI tools to automate cyber attacks at greater scale, while reducing the investment in time and human resources required. Commercial AI security products, including those being procured by critical infrastructure operators, are built on underlying models whose permitted uses and governance terms may not be fully visible to the organisations deploying them.
This raises fundamental questions: when the same AI tools serve both attack and defence, what does “responsible use” actually mean in practice? Who sets the boundaries, and what happens when those boundaries are moved? How do existing cyber norms hold up when the technology they are supposed to govern has changed faster than the norms themselves?

This scenario-based session takes place during the Geneva Cyber Week and is open to both onsite and online participants. It brings together experts and decision-makers from across stakeholder groups — including public policymakers, critical infrastructure operators, technology providers, cybersecurity practitioners, AI governance specialists, compliance and risk professionals, and civil society and academic experts.
The session will be held under the Chatham House Rule.
Its findings will directly inform the third chapter of the Geneva Manual on Responsible Behaviour in Cyberspace. To join online, please RSVP to [email protected].
About the Geneva Dialogue
The Geneva Dialogue on Responsible Behaviour in Cyberspace is an international multistakeholder process examining how agreed cyber norms and confidence-building measures (CBMs) are implemented in practice by non-state stakeholders. Established in 2018 by the Swiss Federal Department of Foreign Affairs and implemented by DiploFoundation with the support of key partners, it brings together experts from the private sector, academia, civil society, and technical communities.
The Geneva Dialogue is designed to surface practical insights, trade-offs, and constraints in the implementation of agreed cyber norms and CBMs across different stakeholder communities. Its objective is to document practical roles, responsibilities, and points of friction in the implementation of cyber norms, and to translate these insights into concrete guidance through the Geneva Manual. The Manual currently comprises two chapters: the first covers supply chain security and responsible vulnerability reporting; the second covers the implementation of cyber norms and CBMs related to the protection of critical infrastructure.
In 2026, the Geneva Dialogue focuses on stress-testing cyber norms and cybersecurity practices under real-world pressure, examining how they perform amid geopolitical tension, technological acceleration, and systemic interdependence. The outcomes will inform the next chapter of the Geneva Manual, grounded in operational and policy realities.
Experts