WEF

AI will be a hot topic at Davos 2024 – here’s what the experts are saying

Johnny Wood
Writer, Forum Agenda

  • Artificial intelligence has the potential to transform most facets of today’s world, but left unchecked it could pose serious threats.
  • Multi-stakeholder collaboration is key to developing policies that regulate AI systems without stifling their capacity for innovation.
  • These are some of the views of experts gathered for an AI edition of the World Economic Forum’s Radio Davos podcast.

“It’s very rare to have a technology which overnight is used by millions of people. When that happens, you have both the excitement of how it’s being used in ways that are beneficial and unexpected, but also the brittleness of technology that is used everywhere all at once.”

So says Sara Hooker, VP of Research at Cohere and Leader of Cohere For AI, as she joins other artificial intelligence authorities in a round-up of views for an AI edition of the World Economic Forum’s Radio Davos podcast.

The Forum hosted two summits on artificial intelligence in 2023 and launched the AI Governance Alliance (AIGA), a multi-stakeholder group addressing how AI policies can catch up with the tech. The growing impact of AI also looks set to be a hot topic at its upcoming Annual Meeting in Davos.

So what are the experts saying? What have we learned about AI this year and how can human societies govern this rapidly evolving and potentially highly disruptive technology in the future?

Why AI and its governance are so important

AI has the power to transform our world: reshaping how we live and interact, increasing productivity and engagement at work, bolstering creativity and… everything in between.

However, these technologies are evolving at an exponential pace, taking the world into unknown territory filled with potential challenges, including technology misuse, security breaches, job displacement and economic disruption.

The World Economic Forum’s Future of Jobs Report 2023 says that, while AI could displace millions of jobs, the emergence of this technology will create many new ones.

The fastest growing jobs are AI and machine learning specialists. Image: World Economic Forum

AI and machine learning specialists are the fastest growing jobs, along with other roles that require human skills like judgement, creativity, physical dexterity and emotional intelligence.

The fastest declining roles involve repetitive tasks, such as bank tellers, cashiers, and postal service and ticket clerks.

“I wouldn’t minimize anxiety, because I think it’s natural. With every big technological change, we’ve had anxiety and some of it has a lot of merit,” said Hooker. “I think it’s important that we have realistic conversations about how we build educational programmes and figure out support for how people use AI technology.”

So how can we ensure that AI models are deployed safely and responsibly?

Several attempts at safeguarding AI use have been made, explains Cathy Li, Head of AI, Data and Metaverse, and Deputy Head of C4IR, at the World Economic Forum.

October 2023 saw the US government issue an Executive Order on AI, mandating the establishment of new standards to ensure AI is used safely and securely.

In the same month, the G7 group of leading nations also issued an international code of conduct for organizations developing advanced AI systems.

This was followed by the UK government hosting an AI summit, which led to the Bletchley Declaration calling for multi-stakeholder action to harness the benefits of AI while addressing its risks.


Is collaboration important for safe AI deployment?

At the World Economic Forum’s AI Governance Summit held in San Francisco in November 2023, discussions reflected a commitment to ensuring the ethical and responsible development of artificial intelligence.

“Participants from various sectors highlighted the vast opportunities of AI integration, while emphasizing the critical need for responsible development aligned with global ethical standards. The summit addressed topics such as adaptive regulatory frameworks and harmonized standards,” said Li.

The event brought together regulators, governments, companies, academia and other stakeholders to discuss the challenges and opportunities of AI.

“The transformation opportunity that AI brings for all of society, for governments, business, communities and just human beings, can only be achieved if we have one strong public and private sector collaboration,” said Sabastian Niles, President and Chief Legal Officer at Salesforce.

“If we lead with trust and inclusion, think about equality, sustainability and innovation and really embrace stakeholder success as we look at AI, I think we can raise the floor and improve business outcomes, human outcomes, societal outcomes, civil society outcomes and achieve the really powerful moonshot goals, too.

“We need sound law and sound public policy to undergird and protect the development of these types of technologies in ways that promote responsible innovation.

“We need to have systems that are even more multilateral, that are even more multi-stakeholder,” said Niles.

Bridging the digital divide

“Artificial intelligence by the name is not something that you can actually govern. You can govern the sub-effects or the sectors that artificial intelligence can affect. And if you take them on a case-by-case basis, this is the best way to actually create some kind of a policy,” said Khalfan Belhoul, Chief Executive Officer, Dubai Future Foundation.

“But the biggest challenge is how do you unify those policies and set best practices and standards and then apply them on a global basis to ensure that everyone can use AI in the best way possible.

“The AIGA Alliance can focus on step one, getting the right voices in the room and coming up with an aggregated plan that has all those views in it, and then convert those into action items.

“When you try to convert those actions through this Alliance, I would probably say the first action would be some kind of a tangible use case or a pilot project that can be an example for the world, on which it can be gradually standardized.

“With artificial intelligence specifically, you would need to focus on a specific sector. For example, how can I impact the media sector and what kind of content can we use? How will we use that content? Once that’s done, then you can gradually jump into different sectors,” said Belhoul.

What differentiates the Forum’s AIGA and its summits is, first, that they are community-based and, second, that these agile events can pursue some of the most urgent and needed actions on the ground in a timely manner.

“One significant focus was on ensuring inclusive benefits of AI development extending to both developed and developing countries,” said Li.

“Bridging the digital divide became a central topic, with participants advocating for increased access to critical infrastructure like data cloud services and computers, alongside essential foundations for improved training and education.

“Key takeaways included the need for clear definitions and thoughtful consideration in the open source and innovation debate, promoting public-private collaboration for global access to digital resources, and advancing AI governance through adaptive regulations, harmonized standards and ongoing international discussions,” said Li.


Could over-regulation of AI stifle innovation?

Amid the policy debates on AI, some fear over-regulation could legislate away innovation, leaving AI’s full transformative potential unfulfilled.

“My biggest fear for AI right now is stifling regulation putting a stop to this wonderful progress that otherwise would make so many people in the world have healthier, longer, more fulfilling lives,” said Andrew Ng, Founder of Coursera and DeepLearning.AI.

“AI technology is very powerful, but it is a general purpose technology, meaning tools like ChatGPT or Bard aren’t useful for just one thing. It’s helping healthcare systems improve how they process medical records, it’s making processing of legal documents more efficient, it’s helping customer service operations, and so on.

“Individual applications have risks and should be regulated: so if you want to sell a medical device, let’s make sure that’s safe. If you build a self-driving car, that needs to be regulated. If you have an underwriting system to make loans, let’s ensure we know how to check it’s not biased.

“The danger comes with regulation of the raw technology.

“Compare AI to electricity. There are multiple use cases to be worked out, and yes, it can electrocute people, it can spark dangerous fires. But today, few if any of us would give up heat, refrigeration and lighting for fear of electrocution.

“And I think the same would apply to AI. There are a number of harmful use cases, but we’re making it safer every day and regulating the applications to help us move forward. But imposing flawed regulations to slow down AI’s technological development would be a huge mistake,” said Ng.

Looking toward our AI future

So what does the future hold for AI development and its governance?

“The World Economic Forum’s AI Governance Summit achieved the formulation of practical plans for the accountable and inclusive development of generative AI technology,” said Cathy Li.

“The next steps involve consolidating and sharing those plans at the Annual Meeting in Davos in January 2024, through the publication of our first report.

“Discussions at Davos will include the debate between open and closed source AI models, and the near-term and long-term risks of AI technologies.

“The expectation is that those initiatives will guide further collaborative efforts and actions within the AI Governance Alliance and the broader ecosystem to ensure responsible and ethical advancements in the field of artificial intelligence,” she said.
