Header image credit: ITU

Standards help unlock trustworthy AI opportunities for all

By Tomas Lamanauskas, Deputy Secretary-General, ITU

As artificial intelligence (AI) motors ahead, standards are key to lowering costs, increasing affordability, and driving the spread of reliable AI technology. Equally, they are essential to ensuring that AI benefits everyone and that people’s fundamental rights are upheld.

Legal frameworks are slow to change. However, technical standards help meet national policy objectives in an agile, swift and flexible manner – keeping up with private-sector innovation.

To date, ITU has published 120 standards on AI, with 130 more under development. Standards like these are crucial to enable AI development, while aligning it with the needs of people and our planet.

Risks and opportunities of AI

AI has become an integral part of our daily lives – woven into finance, healthcare, agriculture, education, entertainment, and more. But biases, abuse and malfunctions remain common.

It’s one thing when AI hallucinates a factoid in a blog post, but quite another when it delivers a medical misdiagnosis or mixes up the accelerator and brakes in a driverless vehicle.

At the same time, we must acknowledge extreme imbalances between countries and regions in terms of who holds patents, who releases new AI tools, where data gets processed and stored, and, ultimately, who has capabilities to leverage the power of AI.

Severe infrastructure, skills and policy gaps risk leaving an enormous part of the world even further behind.

Key initiatives by ITU and partners address some of these challenges. Our growing AI Skills Coalition has a new online hub with over 50 courses available to keep pace with the AI-driven job market. Our Digital Infrastructure Investment Initiative, counting seven development finance institutions as core partners, aims to address an estimated USD 1.6 trillion digital infrastructure investment gap.

Importantly, as we tackle those risks, we’re also witnessing tremendous AI breakthroughs for good ends.

The United Kingdom’s publicly accessible AlphaFold database, for instance, provides open access to over 214 million protein structure predictions – accelerating scientific research on human health.

Research shows AI can accelerate progress on nearly 80 per cent of targets under the UN Sustainable Development Goals.

That’s why we need a boost in AI-driven innovation, together with the right governance and standards – to minimize the risks and maximize the benefits that AI brings.

Human rights in AI

Standards will shape actual AI design and usage – with implications for privacy, freedom of expression, and access to information. The UN Human Rights Council has called for integrating such considerations into technical standard-setting, effectively aligning AI standards with international human rights law.

To achieve this, standards development must be transparent and consultative, reflecting input from governments, industry, civil society and technical experts alike.

The World Standards Cooperation – ITU’s partnership with the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) – aims to embed that human rights lens into technical standards.

We have taken up this challenge in collaboration with the UN’s Office of the High Commissioner for Human Rights.

Greening AI

Let’s not overlook how AI bolsters climate monitoring, mitigation, and adaptation. Current AI could help mitigate 5-10 per cent of greenhouse gas emissions by 2030 – equivalent to annual emissions from the European Union.

In the telecom sector, a winning solution in one of our AI and Machine Learning challenges used neural networks to cut the energy consumption of 5G base stations by up to 20 per cent.

The problem is, we still need to get AI’s own environmental impact under control – and we need to do it fast. So far, we are heading in the opposite direction – with several leading tech firms producing around 50 per cent more greenhouse gas emissions than five years earlier.

Green Digital Action is the rallying cry for our industry to take responsibility – and provide climate solutions, not add to the problem. Our newest track focuses on Green Computing, directly addressing the impact of AI.

The recent UN climate conference, COP29, brought a notable win: the Declaration on Green Digital Action, put forward by the COP29 Presidency in a joint initiative with ITU. With endorsements from over 80 countries and nearly 2000 companies and organizations, this unprecedented declaration puts digital issues at the forefront of climate talks going forward.

The Coalition for Sustainable Artificial Intelligence – recently launched by France, the UN Environment Programme and ITU – further strengthens the global public-private commitment to make AI friendly to the environment.

The next UN climate change conference, COP30, promises a strong focus on digital technologies, in particular AI.

Our responsibility – as the digital industry and community – is to ensure AI reduces, rather than adds to, net greenhouse gas emissions. Standards are key to making AI part of the solution, not a rapidly growing part of the problem.

AI for verticals

The power and, indeed, the risks of AI express themselves in specific application areas. That’s why we work with our sister UN entities and industry experts to unlock responsible AI transformation across vertical sectors.

We’re also leveraging AI to strengthen Early Warnings for All.

Alongside AI experts, we work with experts from key user communities, who are essential to ensure that tech serves the needs of their specific sectors.

Beyond the UN system, our standards collaboration on AI watermarking, multimedia authenticity, and deepfake detection is strengthening the technical underpinnings to fight misinformation and disinformation.

Inclusive and collaborative standardization

The way we work – collaboratively, inclusively – is what will ultimately create trustworthy, sustainable, and effective AI standards. Strong partnerships are key.

The World Standards Cooperation between IEC, ISO and ITU helps align our work with global goals, boost our combined impact, and build a better future for all.

We invite all standard-setting bodies to collaborate with us.

Global policy imperatives

The Global Digital Compact, adopted as part of the UN Pact for the Future, provides a framework to enhance international AI governance for humanity. Specifically, it calls on standards development organizations to collaborate in promoting the development and adoption of interoperable AI standards that uphold safety, reliability, sustainability, and human rights.

We, as the standards community, responded to this call immediately. ITU’s World Telecommunication Standardization Assembly (WTSA-24) in New Delhi last October featured the first edition of the new International AI Standards Summit, convened together with ISO and IEC. The next will follow in Seoul, Republic of Korea, on 2-3 December.

Standards will take centre stage at our upcoming AI for Good Global Summit between 8 and 11 July, in Geneva, Switzerland, closing with a dedicated AI standards day.

In the meantime, we are building an AI standards database – a comprehensive global repository for trusted, responsible, ethical, international standards on AI.

Opportunities to seize

We recognize risks in AI growth, and we take them seriously. We also know this is an opportunity not to miss.

Generative AI alone could add trillions in value to the global economy – up to USD 4.4 trillion annually, according to a McKinsey study in 2023.

Technical standards must enable us to seize the AI opportunity and keep pace with the fast-evolving AI landscape – for everyone to benefit.

Based on remarks by Tomas Lamanauskas on 17 March 2025 at the AI Standards Hub Global Summit, organized in partnership with the Organisation for Economic Co-operation and Development, the Office of the United Nations High Commissioner for Human Rights, and the Partnership on AI, with support from the UK Department for Science, Innovation and Technology.

The UK’s AI Standards Hub is a partnership of the Alan Turing Institute, British Standards Institution, and National Physical Laboratory.