
Deepfakes use AI technology to create highly realistic fake images, videos and audio recordings. Image: REUTERS
- Denmark is proposing new deepfake legislation as part of its digital copyright law, aiming to protect individuals' rights from the impact of AI-generated deepfakes.
- From spreading fake news to enabling financial fraud and cybercrime, deepfake attacks are on the rise, causing substantial financial losses.
- The World Economic Forum's Global Coalition for Digital Safety aims to accelerate public-private cooperation to tackle harmful online content, including deepfakes, and promote digital media literacy.
Deepfakes can range from being funny and absurd to being manipulative and dangerous.
In Denmark, the government is taking action, aiming to strengthen its copyright law to prevent the creation and sharing of AI-generated deepfakes. The amendment, believed to be the first of its kind in Europe, is designed to protect the rights of individuals over their identities, including their appearance and voice. With cross-party support, the government hopes to submit the amendment in the autumn, suggesting that preventing deepfakes is considered a matter of urgency.
So just how threatening are deepfakes, and what can policymakers do about them?
What are deepfakes?
Deepfakes use artificial intelligence (AI) technology to create highly realistic fake images, videos and audio recordings. The term comes from “deep learning” and “fake” and describes both the AI technology used and the resulting content.
Deepfakes either alter existing content – like replacing Michael J. Fox’s face with Tom Holland’s in clips from Back to the Future – or generate new content showing someone saying or doing something they didn’t.
While superimposing faces in a film scene may seem innocuous at first glance, it still challenges the individual's right to their image. US actors went on strike for this right in 2023, bringing film and TV productions to a standstill and securing the industry’s commitment that, in future, any AI use of actors’ images would require consent.

Why are deepfakes a threat?
A more concerning use of deepfakes is spreading fake news, as in the cases involving former US President Joe Biden and Ukrainian President Volodymyr Zelenskyy. Making fabricated messages appear to come from a trustworthy source lends them a high level of credibility.
But not all deepfake attacks are politically motivated – financial fraud and cybercrime are other big growth areas, according to recent research by Resemble.ai. While 41% of those targeted are public figures – celebrities, politicians and business leaders – 34% are private individuals, predominantly women and children, and 18% are organizations.
Take Arup, a UK engineering firm, which fell prey to a sizable deepfake scam when criminals used an AI-generated clone of a senior manager on a video call to convince a finance employee to transfer $25 million.
A fraud attempt on Ferrari, using the AI-generated voice of CEO Benedetto Vigna, was narrowly thwarted by an employee asking a tricky question that only the real CEO could answer.
A BBC journalist was able to bypass her bank’s voice identification system with a synthetic version of her own voice.

In its deepfake security report (Q2, 2025), Resemble.ai – a company specializing in detecting harmful deepfakes – reported 487 publicly disclosed deepfake attacks in the second quarter of 2025, a 41% increase from the previous quarter and more than 300% year-on-year. Direct financial losses from deepfake scams have reached nearly $350 million, with deepfake attacks doubling every six months, the company found.
What action are governments taking?
According to Resemble, deepfake fraud is a global issue, concentrated mainly in technologically advanced regions, with emerging markets increasingly affected. The US leads in reported incidents, but deepfake cases are also widespread across Asia Pacific and Europe, and rapidly growing in Africa.
Policymakers are stepping up in response to deepfakes, with the Take It Down Act in the United States being one of the most significant measures so far. It requires harmful deepfakes to be removed within 48 hours and imposes federal criminal penalties for their distribution. Public websites and mobile apps must establish reporting and takedown procedures. State legislatures in Tennessee, Louisiana and Florida have also passed deepfake laws.
In Europe, the European Union’s Digital Services Act (or DSA), which came into effect in 2024, is designed to “prevent illegal and harmful activities online and the spread of disinformation”. Online service providers are now under greater EU scrutiny than ever before, and several formal investigations for non-compliance are already underway. The UK adopted a similar approach in early 2025 with the Online Safety Act.
The Danish amendment currently under consideration would allow people affected by deepfake content to request its removal, and artists to demand compensation for unauthorized use of their image. This right would extend for 50 years beyond the artist’s death. Online platforms like Meta and X could face substantial fines if the amended bill is passed as proposed. While the bill does not itself provide for compensation or criminal charges, it would lay the legal foundations for seeking damages under Danish law.
- How is the World Economic Forum creating guardrails for Artificial Intelligence?
In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum’s Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.
The Alliance unites industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.
This includes workstreams that are part of the AI Transformation of Industries initiative, in collaboration with the Centre for Energy and Materials, the Centre for Advanced Manufacturing and Supply Chains, the Centre for Cybersecurity, the Centre for Nature and Climate, and the Global Industries team.
A cornerstone of democracy
With Denmark currently holding the Presidency of the Council of the European Union, it has expressed a clear ambition to make media and culture central to European democracy – promoting initiatives like the European Democracy Shield. Its proposed amendment to domestic copyright law is therefore likely to send strong political signals to both Brussels and the wider EU.
Stressing the need for cross-regional cooperation to make the online world safer, the World Economic Forum’s Global Coalition for Digital Safety aims to accelerate public–private collaboration to address harmful content, including deepfakes. It also promotes the exchange of best practices in online safety regulation and supports efforts to improve digital media literacy.

Have you read?
- Why detecting dangerous AI is key to keeping trust alive in the deepfake era
- Deepfakes proved a different threat than expected. Here's how to defend against them
- How do you spot a deepfake? This is what the experts say