Managing AI security risks

Governments and organizations worldwide are rapidly integrating AI into essential services—from digital public infrastructure to healthcare and finance. As AI becomes mission-critical, ensuring its security is emerging as a priority, including for developing countries.
Today’s AI models are exposed to new vulnerabilities such as misalignment, prompt injection, data poisoning and model inversion. When exploited, they can lead to significant risks such as unauthorized access to sensitive data, generation of harmful or misleading outputs, and potential misuse with far-reaching implications for public safety.
This World Bank webinar will explore the main security risks faced by AI systems, as well as the concrete safeguards emerging across the AI lifecycle to better manage them—from “constitutional” guardrails and red teaming exercises to post-deployment monitoring and incident response protocols. The discussion will also consider how these risks and mitigations may apply to emerging AI architectures, such as “small AI” and agentic AI.
Tailored for policymakers and practitioners in developing countries, the webinar will offer practical steps to build and operate AI systems that are resilient and trustworthy.