
WBG
From data to inference: Why AI governance matters for central banks in developing economies

Central banks are among the most data-dense public institutions. As they modernize digital payments and financial supervision, artificial intelligence (AI)-enabled analytics and supervisory technology (SupTech) enable them to draw connections across the large datasets they hold.  In doing so, these tools expand central banks’ ability to infer information about legal entities and individuals beyond what is directly contained in any single dataset.

In many emerging markets and developing economies (EMDEs), central banks are being asked to do more with their data—support faster payments, improve risk detection, and enhance supervisory capacity. Yet often these advances are being pursued while data protection, oversight, and redress mechanisms are still evolving.

Ensuring that governance keeps pace with the growing inferential capacity of central banks is critical to financial inclusion, institutional trust, and the legitimacy of digital financial reform. As a recent World Bank report on AI for Financial Sector Supervision notes, supervisory authorities in EMDEs already identify data privacy, security, and cybersecurity as key challenges to AI adoption.

AI-enabled inference raises a sharper legal question: not only what data is held, but what can be inferred and how those inferences are governed and constrained across central bank functions.
 

Inference is changing the data protection equation

Central banks operate under diverse data protection frameworks (Table 1). Yet across high-income economies and EMDEs alike, a common gap emerges: central banks may acknowledge that they process personal data, but their public data protection materials (e.g., the South African Reserve Bank's Protection of Personal Information Policy) or AI strategies (e.g., the Bank of England's) do not clearly explain how profiling or AI-derived inference is governed.
 

Table 1 (image): central banks' data protection frameworks

 

Data protection rights cannot be meaningfully exercised without visibility into how personal data is processed. In developing countries, this opacity heightens the risk that individuals become visible to public institutions and state scrutiny before existing legal frameworks can respond. Kenya's courts, for example, halted the Huduma Namba digital ID scheme until proper legal protections were in place, illustrating how a country's digital infrastructure can outpace its law.

Advanced analytics can generate a wide range of inferences, including from existing datasets. The European Central Bank (ECB)'s AnaCredit dataset, which records loans granted to legal entities (not individuals), is a case in point. In 2025 the ECB acknowledged that an individual could be identifiable where a legal entity's name includes a person's name together with an address. This shows how institution-level supervisory data can carry personal data implications, especially as AI-enabled analytics increases the risk of identifiability, producing "inferred identities" before information is formally recognized or classified as personal data.
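The mechanics of this risk are simple to illustrate. The sketch below uses entirely synthetic data and Python's standard library: an entity-level register that contains no explicit personal-data fields is linked against a separate public record by fuzzy name matching, and a sole proprietor becomes identifiable. The datasets, names, and similarity threshold are all invented for illustration; this is not the ECB's methodology.

```python
# Illustrative sketch (synthetic data): how entity-level records can
# become personal data once they are linked with other sources.
from difflib import SequenceMatcher

# Supervisory credit register: legal entities only, no explicit personal data.
credit_register = [
    {"entity": "J. Mwangi Trading Ltd", "address": "12 Market Rd", "exposure": 50_000},
    {"entity": "Acme Holdings PLC", "address": "1 Tower Sq", "exposure": 2_000_000},
]

# A separately available public registry of individuals (also synthetic).
public_registry = [
    {"name": "Jane Mwangi", "address": "12 Market Rd"},
    {"name": "Ada Obi", "address": "7 Hill St"},
]

def likely_same_person(entity, person):
    """Flag entity records whose name and address resemble an individual's."""
    name_sim = SequenceMatcher(
        None, entity["entity"].lower(), person["name"].lower()
    ).ratio()
    return name_sim > 0.4 and entity["address"] == person["address"]

inferred = [
    (e["entity"], p["name"])
    for e in credit_register
    for p in public_registry
    if likely_same_person(e, p)
]
print(inferred)  # the sole proprietor is re-identified through linkage alone
```

The point of the sketch is that no single dataset "contains" the personal data: identifiability emerges from the combination, which is exactly what purpose-limitation and governance rules need to anticipate.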
 

Digital payments and CBDCs

These issues are particularly visible in payment systems. While central banks traditionally lack routine access to retail transaction data, this can shift when they operate retail payment infrastructures or central bank digital currency (CBDC) systems that centralize transactional flows. For example, Brazil's PIX fast payment system gives the Banco Central do Brasil visibility into all transactions on the platform.

AI can derive insights from payments data that go beyond the purposes for which the data was originally collected. This is important in digital payments and CBDCs, where policymakers are already trying to balance data protection with legal objectives such as anti-money laundering/counter-terrorist financing (AML/CTF). Although personal data processing in this context is typically justified by legal obligation or public interest, broad legal bases may still enable extensive transactional analytics and inference-based profiling such as scoring or categorizing individuals.

For developing economies, the stakes are high. According to the World Bank’s AI supervision report, AI-based credit scoring is common in some African countries partly because many consumers lack formal credit histories. Risks become acute when AI-based flags trigger additional scrutiny without meaningful human review or safeguards to challenge errors. In countries pursuing fast payment systems and CBDCs to advance financial inclusion and digital economy goals, such misjudgments may disproportionately affect people least able to understand or contest them. Here, weak governance around inference can deter participation and undermine the very development objectives these systems are meant to advance.
 

What should change

Safeguards should scale as AI systems become more agentic and capable of acting on inferred insights. They should cover the insights generated, the actions triggered, and the vulnerabilities associated with AI-enabled tools.

These issues should not be left only to data protection laws or emerging AI governance frameworks. They also need to be addressed through the rules governing central bank functions, such as payment systems and supervisory practice, as well as through practical guidance and governance of AI tools.

Three priorities therefore stand out for EMDEs:

  1. Institutional transparency. Central banks should maintain an inventory of AI-enabled tools, describing each tool’s purpose, the types or categories of personal data it may process, whether and how it influences decisions, and whether it generates or relies on inferences about individuals, including those who did not directly provide their data.
  2. Meaningful human oversight. Oversight should be based on ongoing risk assessments across the analytics lifecycle. This includes meaningful human intervention, independent review, impact assessments, auditable records, and contestability. In many developing countries, this is essential because legal remedies may be limited or costly.
  3. Data separation and controls. Setting clear rules for data sharing and re-use, enforcing access controls, and deploying privacy-enhancing technologies can help prevent function creep, inference spillovers, and the misuse of data for political control. In EMDEs, safeguards should be proportionate, risk-based, and adapted to local capacity.
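The inventory described under the first priority need not be elaborate: a structured record per tool would already make inference practices legible to oversight bodies. The sketch below shows one possible shape for such a record; the field names and the example tool are hypothetical, not drawn from any regulatory standard.

```python
# Illustrative sketch: a minimal inventory record for one AI-enabled tool.
# Field names and the example tool are hypothetical, not a standard.
from dataclasses import dataclass, asdict

@dataclass
class AIToolRecord:
    name: str
    purpose: str
    personal_data_categories: list  # types of personal data the tool may process
    influences_decisions: bool      # does output feed supervisory or payment decisions?
    generates_inferences: bool      # derives information not directly collected
    affects_non_providers: bool     # covers individuals who never supplied their data
    human_oversight: str            # how intervention and contestability work

record = AIToolRecord(
    name="transaction-anomaly-screener",  # hypothetical tool
    purpose="Flag unusual retail payment patterns for AML review",
    personal_data_categories=["transaction history", "counterparty identifiers"],
    influences_decisions=True,
    generates_inferences=True,
    affects_non_providers=True,  # counterparties appear without providing data
    human_oversight="Flags reviewed by an analyst before any account action",
)
print(asdict(record))
```

Even a record this simple forces the questions that matter for governance: whether a tool infers, whom its inferences reach, and where a human can intervene.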

For central banks, meaningful governance is essential to protecting personal data in the era of AI. The risk that these powerful tools will outpace oversight mechanisms is real and imminent. However, with institutional transparency, human oversight, and strong data controls, central banks can strengthen governance even as their inferential capacity rises. The task ahead is to harness these tools in ways that foster financial inclusion and maintain public confidence in digital reform.