
How to secure the digital future: Resilience, trust and leadership
Network of the Global Future Councils

2025-2026, World Economic Forum

Martina Szabo

Head of Knowledge Communities, World Economic Forum

Kaiser Kuo

Writer, World Economic Forum, and host of the Sinica Podcast

 

This article is part of: Annual Meetings of the Global Future Councils and Cybersecurity

  • Global challenges have become inseparable, making collaboration across disciplines more important than ever.
  • Experts from the Global Future Councils met with global cybersecurity leaders in Dubai to discuss interconnected challenges.
  • Here are their reflections on digital resilience, trust and leadership in an increasingly intertwined world.
     

Amid profound geopolitical and geo-economic realignments, and as rapid advances in technology reshape the global security landscape, economic resilience, societal trust, sustainability and cybersecurity are now part of the same equation.

To confront these intertwined challenges, the World Economic Forum convened a joint session of the Annual Meeting of the Global Future Councils and the Annual Meeting on Cybersecurity, bringing more than 500 experts from business, government, civil society, academia, and media, alongside 150 global cybersecurity leaders, together in Dubai. The gathering underscored the growing recognition that cyber resilience is no longer a technical concern, but a shared societal imperative.

“Cybersecurity touches every facet of modern life, and this meeting brought together the Councils to see how it is reshaping each of our fields, from quantum transition to financial regulation,” said Akshay Joshi, Head of the World Economic Forum’s Centre for Cybersecurity.

 

Resilience is universal – and so is vulnerability

Cybersecurity has always been asymmetric. Attackers need only find a single overlooked vulnerability; defenders must secure every possible point of entry across sprawling networks and complex supply chains.

The digital attack surface – encompassing every connected device, platform, and line of code – has expanded exponentially, creating a terrain so vast that even minor oversights can have cascading consequences.

In 2024, human error contributed to roughly 95% of data breaches – a reminder that even sophisticated systems fail at the human interface. The introduction of AI has deepened this imbalance, turning it into a contest not just of code but of time and scale. AI-generated phishing campaigns now succeed at rates more than four times higher than their traditional counterparts, while automated tools crawl networks continuously, identifying misconfigurations or unpatched systems within seconds.

Daniela Rus, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT and a member of the Council on Autonomous Systems, warned that the same extraordinary AI tools empowering scientists and doctors are also empowering cybercriminals. “Now someone who does cybercrime doesn’t have to know how to break into systems,” she said. “They can sit in their own living rooms and inflict a lot of damage.”

At the same time, the nature of what is at risk has evolved. Cyberattacks no longer target only bank accounts or financial institutions. Increasingly, they seek to manipulate or exfiltrate intellectual property, disrupt supply chains, or compromise critical infrastructure.

Data on new molecules, AI training sets, and industrial control systems now carry as much strategic and monetary value as cash once did. The rise of cryptocurrency has further accelerated this shift, providing near-instant, pseudonymous liquidity and fuelling transnational networks that treat data theft as a scalable business model.

As Ohood bint Khalfan Al Roumi, Minister of State for Government Development and Future of the United Arab Emirates, said during the Opening Plenary of the Annual Meetings of the Global Future Councils and Cybersecurity, “The global cost of cyber threats is expected to reach around $20 trillion annually by 2030, underscoring that resilience, trust, and secure digital infrastructure have become fundamental or foundational pillars of future prosperity.”

It’s not just prosperity that is under threat.

“There was a state actor a couple of years ago in this region that performed a cyberattack on infrastructure, and it was the first time ever we saw that somebody was trying to kill people through a cyberattack,” said Robert Lee, CEO of cybersecurity technology company Dragos. “They failed. It only cost around $300 million in downtime, but the intent of the attack was to try to kill people at a facility,” he said. “We see criminals breaking into water systems around the world, trying to change chemical levels. These are not actions that drive profit. These are not actions that drive an outcome for anybody that would be good. It’s just bad behaviour trying to hurt our families.”

For defenders of prosperity and security, meanwhile, incentives remain fragmented. The financial sector invests heavily in fraud detection; cloud providers focus on infrastructure hardening; governments prioritize the resilience of critical systems. Yet the connective tissue between these efforts is still thin. The cost of failure is borne systemically, while the cost of prevention is distributed unevenly. The result is an economic disequilibrium in which it remains cheaper, faster, and easier to attack than to defend.

As the Council on Decentralized Finance has observed, distributed systems may appear inherently resilient, yet their interconnections can turn a single code-level flaw into a systemic failure. Cybersecurity faces a similar paradox – resilience depends less on perfect defence than on anticipating how local disruptions propagate through complex, code-dependent ecosystems.

Here again, lessons from the decentralized finance ecosystem apply – when interdependent systems lack shared safeguards, isolated vulnerabilities can cascade through entire networks. Cyber resilience, like financial stability, requires mechanisms to absorb shocks rather than merely repel them.

AI could, in theory, reverse this dynamic by enabling predictive defence, automated patching, and real-time threat correlation across sectors. But those benefits depend on trust, data-sharing, and common standards that are still largely aspirational. For now, the attacker’s advantage lies in agility, while the defender’s lies in accountability – a structural mismatch that technology alone cannot resolve.

The Council on Cybersecurity and 10 other Global Future Councils have explored how emerging technologies intersect with digital trust, governance, and resilience.

Economics of digital trust

Breach costs have reached an average of $4.45 million per incident, with reputational damage and loss of customer confidence often eclipsing the direct financial hit.

Yet investment decisions in cybersecurity remain shaped by short-term incentives and asymmetric information. Organizations tend to spend to meet compliance requirements, not resilience thresholds.

Digital interdependence compounds this risk. A single compromised vendor can expose thousands of clients; a flaw in an open-source library can ripple through entire industries. The resulting externalities mirror those of climate or financial risk – systemic vulnerabilities that no single actor can meaningfully mitigate alone. The opacity of modern digital supply chains – where code, data and services are recursively embedded – further obscures accountability, creating blind spots even for the most mature institutions.

Here, insights from other Global Future Councils are instructive. The Council on Decentralized Finance notes that transparency and verifiability – long hallmarks of blockchain – are necessary but insufficient substitutes for trust. Even in cryptographically secured systems, confidence ultimately depends on education, governance and the perceived integrity of intermediaries. Similarly, the Council on Financial Education argues that resilience begins not with regulation but with literacy – a populace capable of assessing risk, reading signals and distinguishing credible information from manipulation.

The Council on Data Frontiers adds that trust itself is becoming quantifiable, as organizations experiment with metrics to assess data provenance, model integrity and algorithmic transparency. The Council on Climate and Nature Governance, in turn, underscores that the same collective-action logic applies to sustainability and cybersecurity alike – both depend on actors investing in shared resilience that may not yield immediate private returns.

What is emerging is a digital commons – a shared infrastructure of trust that depends on collective maintenance yet remains governed by private incentives. In theory, AI can help close these trust gaps – correlating anomalies across sectors, predicting emergent threats and coordinating rapid responses. In practice, however, the very data needed to train such systems is often locked behind legal, competitive or privacy barriers. Trust, paradoxically, is what must exist before it can be engineered.

During the Global Future Council’s Output Lab discussions, participants outlined what they called the “seven Ps” of digital trust: Protection, Proficiency, Provenance, Personalization, Perception, Prevalence, and Payments.

The framework captures how resilience must be cultivated across technological, human, and economic dimensions. Protection concerns the basic defences that underpin shared security; Proficiency highlights digital literacy as a civic and leadership competency; Provenance demands transparent labeling and traceability of digital and AI-generated content; Personalization calls for empowering users to curate credible information sources; Perception addresses the psychological and cognitive aspects of trust-building; Prevalence urges increasing the reach of high-quality content while reducing exposure to manipulation; and Payments underscores the need for secure and equitable digital transaction systems.

Taken together, these seven principles offer a holistic framework for strengthening both individual and systemic resilience in the digital ecosystem.

From information to intelligence

The digital domain produces abundance without comprehension. Every network, platform and endpoint now generates vast amounts of data – logs, alerts and anomalies that accumulate faster than human analysts can process. Most organizations are data-rich but insight-poor, trapped in a perpetual state of reactive defence. The challenge is no longer the absence of information, but the inability to translate it into intelligence that can guide timely action.

AI offers a path out of this trap – but only if it is deployed as a system-level capability rather than a series of disconnected tools.

Machine learning can now correlate weak signals across sectors, detect anomalies invisible to human analysts and predict threat trajectories before they materialize. Properly designed, these systems could act as guardian agents – AI systems that continuously monitor other AI systems, validating their actions against human intent and flagging divergence before it cascades into failure.
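The guardian-agent idea described above can be illustrated with a minimal sketch: a monitor that watches a stream of metrics from another system and flags behaviour that diverges sharply from the recent baseline, handing the decision to a human rather than acting on its own. All names and thresholds here are illustrative assumptions, not a reference to any real implementation.

```python
from collections import deque
from statistics import mean, stdev

class GuardianAgent:
    """Illustrative monitor: watches another system's output stream and
    flags statistical divergence from recent behaviour for human review."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent observations
        self.z_threshold = z_threshold       # how many standard deviations counts as divergence

    def observe(self, value: float) -> bool:
        """Record one observation; return True if it diverges from the baseline."""
        divergent = False
        if len(self.history) >= 10:          # require a minimal baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                divergent = True             # flag it; escalation stays with humans
        self.history.append(value)
        return divergent

# Example: steady readings pass quietly; a sudden spike is flagged.
guard = GuardianAgent()
for v in [1.0, 1.2] * 15:
    guard.observe(v)                         # baseline behaviour, no flags
alert = guard.observe(100.0)                 # sharp divergence -> True
```

The design choice mirrors the article's point: the agent validates behaviour against an expectation and flags divergence early, but correction remains a matter of human intent and governance, not automation.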

Insights from other Global Future Councils reinforce this shift from volume to value. The Council on Data Frontiers emphasizes that data’s utility lies not in its scale but in its stewardship – in how provenance, context and consent are encoded into its lifecycle. The Council on Generative Biology similarly observes that intelligence arises from structured diversity: when biological data are integrated across silos, emergent patterns yield discoveries otherwise invisible. Cyber resilience depends on a comparable synthesis – connecting datasets, models and threat signals across institutional boundaries without compromising privacy or sovereignty. The Council on Neurotechnology extends the lesson further, warning that intelligent systems must remain interpretable to human minds if they are to preserve agency and accountability. Intelligence without comprehension, in either brains or machines, invites error on a systemic scale.

Yet intelligence is not simply an analytical product; it is an institutional function. Converting information into foresight depends as much on governance as on computation. The obstacles are rarely technical – they lie in data silos, liability fears and the absence of trusted frameworks for sharing sensitive signals. When Singapore’s Protection from Scams Act 2025 empowered authorities to freeze suspicious transfers in real time, it demonstrated what becomes possible when legal frameworks catch up to operational need.

The Council on Natural Capital draws an instructive parallel: just as environmental systems require shared baselines and trust in measurement before coordinated stewardship is possible, digital ecosystems demand common standards of verification and accountability to sustain collective defence.

True intelligence emerges only when disparate actors – public and private, national and transnational – can converge their partial views into coordinated action. That requires not just technological interoperability but institutional trust: shared standards, legal safe harbours and ethical norms about how AI systems learn from, and act upon, sensitive data. Without this connective tissue, we risk building faster machines that think in fragments.

As AI amplifies both speed and scale in cyberspace, the economic balance between attacker and defender grows ever more skewed. Yet the same technologies that empower exploitation can also enable collective defence. The question now is whether incentives and institutions can be realigned so that intelligence moves faster than threat.

The leadership gap

The economic misalignment is reinforced by a governance void. World Economic Forum research found that 72% of organizations say their cyber risks have increased over the past 12 months, and that private-sector leaders differ on what sufficient cybersecurity standards entail. Meanwhile, regulators call for “appropriate measures”, but the term remains undefined, leaving boards unsure how to translate compliance into actual resilience.

Cybersecurity can no longer be delegated downward as a purely technical function. When breaches can destabilize societies, erode institutional trust and cascade across interconnected systems, security becomes a question of governance – and of leadership. Executives need not become technologists, but they must develop the literacy to ask the right questions: What is the business’s appetite for risk? How quickly can it recover? Which dependencies are existential?

Bridging this gap requires a new model of cyber governance – one that embeds security into organizational DNA, links operational readiness with strategic decision-making, and measures performance not only by uptime but by resilience. As Forum participants repeatedly noted, cybersecurity today is 80% governance and 20% technology. The critical task is to build cultures of informed oversight, where boards can weigh trade-offs between innovation, risk and trust with confidence rather than deference.

The Council on Climate and Nature Governance highlights that effective stewardship in complex systems depends on distributed but coordinated authority – a model that could inform cyber leadership, where responsibility must be shared across jurisdictions and value chains rather than centralized in compliance offices. The Council on Generative Biology draws a parallel from science: progress accelerates when leaders recognize the epistemic limits of their systems and create incentives for transparency rather than perfection. Similarly, the Council on Neurotechnology cautions that as decision-support systems grow more autonomous, leadership must evolve from command to calibration – defining intent, setting ethical boundaries and ensuring human judgment remains the ultimate failsafe.

In cybersecurity as in climate, the next frontier of leadership lies in designing governance architectures capable of learning, adapting and sharing accountability across the entire digital ecosystem.

Coordinating global incentives

Across the interconnected global ecosystem, progress remains uneven. Some governments are exploring “safe-harbour” provisions to encourage breach disclosure; others are experimenting with shared-risk insurance pools or cyber-resilience metrics. These are encouraging developments – but they remain early experiments. Without binding coordination mechanisms, voluntary cooperation risks succumbing to free-rider problems. As one GFC expert observed, “everyone agrees that information-sharing is essential until it becomes inconvenient.” Yet when platforms, telecom operators, and banks have coordinated in real time – as in recent takedowns of transnational fraud networks – the results demonstrate what becomes possible when incentives align.

Shared frameworks often falter not at the policy level but in execution – when competitive pressures, liability concerns, or incompatible standards prevent the operational sharing of threat intelligence. The challenge is to design incentive structures that make security economically rational rather than merely ethical. Markets have long underpriced digital risk because consequences are diffused and delayed, while the benefits of openness and speed are immediate.


Rebalancing will require new forms of public-private alignment: liability frameworks that distribute responsibility more equitably, and cross-sector compacts that treat threat data as a collective asset rather than a proprietary advantage. This coordination problem is, at heart, a governance problem. AI may enable real-time anomaly detection and predictive modelling, but only institutions can decide how much transparency, information-sharing, and accountability are acceptable in pursuit of resilience. Technology can illuminate interdependence; only leadership can manage it.

“The perimeter is gone, the tempo has changed, and trust is the new strategic centre of gravity,” said Sami Khoury, Senior Official for Cyber Security at the Communications Security Establishment of Canada and co-chair of the Council on Cybersecurity. “Resilience isn’t just an individual responsibility — it’s an ecosystem responsibility.”

Khoury noted that proactive defence depends on “developing the muscle memory to deal with incidents” and that “how we handle a breach will set the tone for trust.” He added, “We can let the future happen — or we can change it.”

Toward a resilient digital commons

Cybersecurity is no longer a technical domain apart, but a condition of modern interdependence. The path forward requires action on three fronts: institutional, operational and cultural.

The systems now emerging – AI-enhanced monitoring, shared-risk insurance pools, interoperable threat-intelligence networks – suggest that the technical foundations are already in place. What remains is institutional courage – the commitment to build trust architectures that can operate at the speed of threat.

The World Economic Forum’s Centre for Cybersecurity and Network of Global Future Councils are already translating these principles into practice. Joint taskforces are developing collective-defence playbooks to enable rapid, coordinated responses across sectors and borders. The Forum’s Partnership against Cybercrime and Cybercrime Atlas analyse and map cybercrime, connecting law-enforcement agencies with private platforms to disrupt transnational threat networks.

These efforts mark the shift from isolated fortification to shared stewardship. What emerges is not just a safer internet but a more resilient digital commons built on transparency, accountability, and trust at scale.

If the twentieth century built institutions to manage physical interdependence, the twenty-first must build institutions to manage digital interdependence. That is the work of resilience – not perfect protection, but the capacity to adapt, recover, and rebuild trust at the speed of change.

These insights draw from the expertise and collaboration of expert members of 17 Global Future Councils, including the GFCs on Information Integrity, GovTech and Digital Public Infrastructure, Artificial General Intelligence, Cybersecurity, Generative Biology, Autonomous Systems, Neurotechnology, Financial Education, Next Generation Computing, Faith in Action, Space Technologies, Human Science of Environment Action, Good Governance, Human Capital Development, Jobs and Frontier Technologies, and Leadership.