
How AI can help fight misinformation

Disinformation has become a global problem affecting citizens, governments and businesses.

Identifying and isolating so-called “fake news” poses a major challenge across today’s growing digital information ecosystem. But advances in artificial intelligence (AI) could increasingly help online information users sort out fact from fiction.

The Global Disinformation Index (GDI) collects data on how misinformation – or disinformation, when deliberate – travels and spreads. The index, put out by a US-based non-profit organization, can help governments, media professionals, and other web users assess the trustworthiness of online content.

Companies can use the GDI to decide where to place their advertising and to avoid associating their brands with untrustworthy news sites.

Tall-tale triage

Using AI tools and techniques, the GDI “triages” unreliable content from several of the world’s most prominent news markets. Combining AI results with independent human analysis, the index then rates global news publications based on their respective disinformation risk scores.
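To make the triage step concrete, here is a minimal Python sketch of how an automated risk score might be combined with independent human analysis. The phrase list, thresholds, and weighting are illustrative assumptions, not GDI's actual methodology.

```python
# Hypothetical triage-and-scoring pipeline in the spirit of the process
# described above. All names and numbers are illustrative assumptions,
# not GDI's actual methodology.
from dataclasses import dataclass

@dataclass
class Article:
    url: str
    text: str

def model_risk_score(article: Article) -> float:
    """Stand-in for an ML classifier returning a 0-1 disinformation risk score."""
    suspect_phrases = ("miracle cure", "what they don't want you to know")
    hits = sum(p in article.text.lower() for p in suspect_phrases)
    return min(1.0, 0.2 + 0.4 * hits)

def triage(articles: list[Article], threshold: float = 0.5) -> list[Article]:
    """Keep only the articles risky enough to merit human analysis."""
    return [a for a in articles if model_risk_score(a) >= threshold]

def publication_risk(model_score: float, analyst_score: float,
                     analyst_weight: float = 0.6) -> float:
    """Blend the automated score with an analyst's rating into one site-level score."""
    return analyst_weight * analyst_score + (1 - analyst_weight) * model_score
```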

“Fake news comes from a handful of people who have huge, vested interest,” said Sam Pitroda, inclusive tech advocate and chairman of software development firm the Pitroda Group.

Disinformation has become a lucrative business for websites that monetize the traffic generated by false or malicious content. But it creates harmful effects in all areas of society.

Fake news sites have hindered the global fight to cut greenhouse gas emissions and curb global warming, notes the latest Intergovernmental Panel on Climate Change (IPCC) report.

“AI has huge potential to reduce the damage done by fake news, but it will take time. It is not going to eliminate fake news,” Pitroda added at a recent AI for Good webinar hosted by the International Telecommunication Union (ITU).

Labelling misinformation

Plenty of room remains to filter and, if necessary, regulate misinformation, according to participants at the webinar.

“Society cannot work without a clear understanding about what is true and what is not,” said Arthur van der Wees, co-founder of the Institute for Accountability in the Digital Age (I4ADA). “Anyone needs tools and other capabilities to be able to distinguish between truth and misinformation.”

Misinformation labelling could help to establish and maintain basic standards for news and information.

The globally accepted nutrition facts panel for food products could provide a suitable template, as could the labelling required by some countries on digitally retouched photos in the fashion and influencer industries.
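As a rough illustration of what such a label might disclose, the sketch below defines a hypothetical "content facts" record by analogy with the nutrition facts panel. No formal standard is cited in the article, so every field here is an assumption.

```python
# A purely hypothetical "content facts" label, by analogy with the nutrition
# facts panel; every field below is an assumption about what a label could
# disclose, since no actual standard is defined in the article.
from dataclasses import dataclass, field

@dataclass
class ContentLabel:
    outlet: str                        # publishing source
    author_disclosed: bool             # is authorship transparent?
    sources_cited: int                 # count of verifiable citations
    independently_fact_checked: bool   # reviewed by a third-party checker?
    sponsored: bool                    # paid or promotional content?
    corrections: list[str] = field(default_factory=list)  # published corrections
```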

Still, flagging false or misleading online content may not be enough to thwart determined creators of false content. After all, capturing people’s attention and reinforcing their pre-existing “filter bubbles” can drum up more website traffic and ad revenue.

Fake news generators, therefore, deliberately feed into a target audience’s confirmation bias, making some people reluctant to accept “official” warning labels.

Information is trusted online regardless of its truthfulness, often because users are actually looking for shared values aligned with a certain group, website or platform, argued Silvia De Conca, assistant professor in law and technology at the Vrije Universiteit Amsterdam. “We need to decouple facts from values, and that changes the perception,” she added.

Humans in the loop

With reams of new information generated daily, in multiple languages, content moderation and fake news flagging are increasingly ambitious tasks. AI can help with the filtering process. However, full automation may leave too much to chance.

“Humans do need to be in the loop. AI is merely a tool for accelerating certain human judgements,” said Daniel Rogers, GDI’s executive director and co-founder. “Computers are good at repeating a raw task many times, but anywhere a judgement is involved, you need a human.”
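One common way to keep a human in the loop is confidence-based routing: the system acts automatically only on its most confident calls and queues everything ambiguous for a reviewer. The thresholds and action names in this sketch are illustrative assumptions.

```python
# Minimal human-in-the-loop sketch: auto-resolve only the most confident
# cases and route everything ambiguous to a reviewer queue. Thresholds
# and action names are illustrative assumptions.
def route(risk_score: float, auto_low: float = 0.1, auto_high: float = 0.9) -> str:
    if risk_score <= auto_low:
        return "publish"       # confidently benign: no review needed
    if risk_score >= auto_high:
        return "flag"          # confidently unreliable: label or limit reach
    return "human_review"      # the judgement calls go to a person
```

Only the middle band consumes reviewer time, which is how AI can accelerate certain human judgements without replacing them.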

Communities of users can contribute significantly to effective monitoring. Crowd-sourced knowledge and collaboration among professional news organizations, for example, can help to validate and verify raw information.

Within the United Nations system, the Office of the High Commissioner for Human Rights (OHCHR) advocates a human rights-based approach to data in relation to the UN Sustainable Development Goals for 2030.

Keeping the web healthy and reliable requires eliminating any monetary incentives for the spread of misinformation.

Ultimately, clear international standards are needed to categorize online data and its ownership.

“We actually need to get to that point of specificity about data,” said Mei Lin Fung, chair and co-founder of the People-Centered Internet initiative. “We can’t talk about data ownership. It is about very specific data rights: by whom, to do what, for what purpose, when, etc.”

Watch the full AI for Good session recording.
