News

WEF
The Davos Agenda virtual event offers the first global platform of 2022 for world leaders to come together to share their visions for the year ahead.

The week-long virtual event, taking place on the World Economic Forum website and social media channels from 17-21 January 2022, will feature heads of state and government, CEOs and other leaders. They will discuss the critical challenges facing the world today and present their ideas on how to address them.

The event will also mark the launch of several Forum initiatives including efforts to accelerate the race to net-zero emissions, ensure the economic opportunity of nature-positive solutions, create cyber resilience, strengthen global value chains, build economies in fragile markets through humanitarian investing, bridge the vaccine manufacturing gap and use data solutions to prepare for the next pandemic.

“Everyone hopes that in 2022 the COVID-19 pandemic, and the crises that accompanied it, will finally begin to recede,” said Klaus Schwab, Founder and Executive Chairman of the World Economic Forum. “But major global challenges await us, from climate change to rebuilding trust and social cohesion. To address them, leaders will need to adopt new models, look long term, renew cooperation and act systemically. The Davos Agenda 2022 is the starting point for the dialogue needed for global cooperation in 2022.”


Davos Agenda 2022 participants

World leaders delivering “State of the World” Special Addresses will include:

  • Narendra Modi, Prime Minister of India
  • Kishida Fumio, Prime Minister of Japan
  • António Guterres, Secretary-General, United Nations
  • Ursula von der Leyen, President of the European Commission
  • Scott Morrison, Prime Minister of Australia
  • Joko Widodo, President of Indonesia
  • Naftali Bennett, Prime Minister of Israel
  • Janet L. Yellen, Secretary of the Treasury of the United States
  • Yemi Osinbajo, Vice-President of Nigeria.

The programme will also feature speakers including:

  • Tedros Adhanom Ghebreyesus, Director-General, World Health Organization (WHO)
  • Fatih Birol, Executive Director, International Energy Agency
  • José Pedro Castillo Terrones, President of Peru
  • Ivan Duque, President of Colombia
  • Anthony S. Fauci, Director, National Institute of Allergy and Infectious Diseases, National Institutes of Health of the United States of America
  • Yasmine Fouad, Minister of Environment of Egypt
  • Kristalina Georgieva, Managing Director, International Monetary Fund (IMF)
  • Alejandro Giammattei, President of Guatemala
  • Al Gore, Vice-President of the United States (1993-2001) and Chairman and Co-Founder, Generation Investment Management
  • Paulo Guedes, Minister of Economy of Brazil
  • Paula Ingabire, Minister of Information Communication Technology and Innovation of Rwanda
  • Paul Kagame, President of Rwanda
  • John F. Kerry, Special Presidential Envoy for Climate of the United States of America
  • Haruhiko Kuroda, Governor of the Bank of Japan
  • Christine Lagarde, President, European Central Bank
  • Guillermo Lasso, President of Ecuador
  • Ngozi Okonjo-Iweala, Director-General, World Trade Organization (WTO)
  • Abdulaziz Bin Salman Bin Abdulaziz Al Saud, Minister of Energy of Saudi Arabia
  • Nicolas Schmit, Commissioner for Jobs and Social Rights, European Commission
  • François Villeroy de Galhau, Governor of the Central Bank of France
  • Sarah bint Yousif Al-Amiri, Minister of State for Advanced Technology, Ministry of Industry and Advanced Technology of the United Arab Emirates.

Davos Agenda 2022 sessions and launches

Conversations will focus on critical collective challenges across several key areas:

Climate action

Climate action failure, extreme weather and biodiversity loss are ranked the top three most-severe risks for the world over the next decade, according to the Forum’s Global Risks Report 2022, published 11 January 2022.


Top 10 Global Risks by Severity 2022
Climate-related risks top the list of global risks by severity.
Image: World Economic Forum Global Risks Report 2022

For a brief moment, a drop in emissions in 2020 proved climate action is possible – and the collective response to COVID-19 is evidence that, if we work together, it’s not too late to save the planet. This requires reaching net zero, achieving the energy transition, committing to circular economies and sustainable consumption and – above all – putting climate and nature at the heart of recovery plans.


Global cooperation

Recent years have seen deepening political and social divides, heightened mistrust of institutions, and the spread of misinformation and disinformation. We must renew our commitment to global cooperation and shared prosperity – from vaccine equity to wherever the new era of global space exploration may take us.

At the same time, the shocks of COVID-19 accelerated the digital transformation of business and society – and innovations in vaccines, therapeutics, diagnostics and contact tracing have helped us to address the pandemic’s worst impacts. Looking ahead, technology holds the keys to solving the biggest challenges ahead of us: decarbonizing energy, diagnosing and treating disease, securing our food supply and helping small businesses and entrepreneurs everywhere survive and thrive.

But this rapid digital transformation is not without risk, as we’ve seen cybercrime spike and digital divides widen in the past two years, too. We must work together to balance innovation and responsibility to ensure the digital transformation is driving growth and innovation, and not creating harm.

The Forum will release the Global Cybersecurity Outlook 2022 report on 18 January.


How to follow the Davos Agenda 2022

The event will be livestreamed across the Forum’s website and social media channels. All content will be shared using the official event hashtag #DavosAgenda.

Make sure to follow us on all of our platforms to stay up to date on key quotes, moments and news from the event:

WEF

Digital currencies are growing: the market is valued at more than $2 trillion and spans more than 15,000 varieties. In 2021, El Salvador even adopted Bitcoin as legal tender.

While private digital currencies are booming, central banks are catching up. In October 2021, Nigeria joined the Bahamas, the Eastern Caribbean States and Cambodia as one of the first jurisdictions to officially launch a central bank digital currency (CBDC). According to the Atlantic Council’s CBDC tracker, 14 countries have launched CBDC pilots, 16 are developing CBDCs and 41 are conducting research.

From precious metals to paper money, currencies are crucial for global trade and commerce. As society enters the digital age and more forms of digital currency compete for adoption, what does this mean for international trade?

There are three potential ways digital currencies could change international trade:

1. Digital currencies could make cross-border payments more efficient

The speed of settlement for cross-border payments varies from the same business day to five business days. Human interaction is often required in the process of verifying the sender and recipient’s information, for example for anti-money laundering and combatting terrorism financing (AML and CTF) purposes. As a result, the speed of payment is often determined by how much the business hours of the sending institution and the receiving institution overlap; and whether the sending and receiving institutions rely on the same messaging standards.

For digital currencies that rely on decentralized ledgers, money could be sent and received within seconds and around the clock. Future regulatory compliance requirements on digital currency service providers and foreign exchange controls may have an impact on the speed.
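
To make that contrast concrete, here is a minimal sketch, assuming a toy shared ledger, of how a digital-currency transfer could settle the moment it is submitted, with a placeholder hook where AML and CTF screening might run. The account names, balances and screening list are invented for illustration; this is not any real CBDC or blockchain API.

```python
# Toy shared-ledger transfer: settlement and compliance screening happen
# in the same instant the transaction is processed, at any hour of any
# day. All names, balances and rules below are illustrative assumptions.
import time

ledger = {"alice": 1000, "bob": 250}   # account balances (toy data)
sanctioned = {"mallory"}               # toy AML/CTF screening list

def transfer(sender: str, recipient: str, amount: int) -> bool:
    """Settle a payment immediately if compliance and balance checks pass."""
    if sender in sanctioned or recipient in sanctioned:
        return False                   # blocked by screening
    if ledger.get(sender, 0) < amount:
        return False                   # insufficient funds
    ledger[sender] -= amount
    ledger[recipient] = ledger.get(recipient, 0) + amount
    return True                        # settled within this call

start = time.time()
assert transfer("alice", "bob", 100)
print(f"settled in {time.time() - start:.6f}s")
```

The point of the sketch is that verification and settlement occur in one step as the transaction is processed, rather than across the overlapping business hours of two correspondent institutions.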

2. Digital currencies could provide alternative credit information for trade finance

There’s a $1.7 trillion global trade financing gap, which heavily impacts SMEs, which typically don’t have established financial records with banks. Public ledgers of digital currencies could be used to share payment and financial history to underwrite loans for import and export. At the same time, strong privacy protocols would need to be enforced to achieve this.

3. Digital currencies could alleviate the issues of de-risking

De-risking creates obstacles for countries perceived as carrying high AML and CTF risks that want to participate in global trade, and it can increase transaction costs for buyers and sellers in those countries. While digital currencies do not reduce AML and CTF risks themselves, they could provide alternative payment methods that reconnect consumers and merchants in those countries with international buyers and sellers.

What European countries are developing digital currencies?
Image: Statista/Atlantic Council

New issues caused by digital currencies

Despite their promising potential, digital currencies may not solve some of the existing problems facing international trade, and could raise new issues, including:

  • Last-mile problems for financial inclusion: Financial inclusion will continue to be a problem for countries or communities that cannot afford the digital devices needed to hold digital currencies or do not have access to basic infrastructures such as electricity, internet, identification services or outlets to convert cash into digital formats. In the context of global trade, without the basic infrastructure, communities, and especially SMEs, that are excluded today will face an even greater challenge in a world where money is widely digitized.
  • Supply and demand of foreign exchange: It is debatable whether digital currencies could encourage all countries to trade more. While the potential benefits may help increase trade volume for certain countries, digital currencies do not change the fundamentals of international trade, which depend on comparative advantages. Countries that struggle with economic development or political stability may continue to face those challenges even with digital currencies, and the currencies of countries with limited trade with the outside world would remain undesirable. As a result, even if one type of digital currency gains a global presence, converting it into a local currency to allow for international trade may still be expensive and difficult if international demand for that local currency is limited.
  • Implications for foreign direct investment (FDI): Many questions are raised by the intersection of cross-border investments and digital currency, as the current framework, such as the bilateral investment treaty (BIT) and the protections it offers, was built well before the age of digital currencies. Would digital currencies be considered as “covered investments” under BIT? Would BIT protections apply to investments made by and in digital currencies? How would the tokenization of FDI work under the current rules? Both states and foreign investors need guidance on these questions.

The international trade community needs to prepare for this new age and capture its opportunities by closing the digital divide. As money and trade in goods and services become more and more digitized, it is crucial to ensure no one is left behind. Investments are needed to provide the right infrastructure for the future and to ensure accessible and affordable connectivity for all.

It is also important for policy-makers to work closely with the technical service providers behind digital currencies to fully understand the potential benefits and risks. Laws and regulations can then provide sufficient protection without stifling innovation. The Digital Currency Governance Consortium has provided a great example of public-private partnership, with more than 85 public and private organizations working together to address issues related to digital currencies.

Furthermore, the advancement of payments technology needs to be accompanied by the digitization of trade. A chain is as strong as its weakest link and with heavy reliance on paper documents and a lack of legal support for e-documents or e-signature, the benefits of digital currencies will be limited. Trade policy-makers need to focus on building the right physical and legal infrastructures to create trade for tomorrow.

To achieve the full potential of digital currencies, it will be crucial for countries to sign new types of trade agreements that enable market access for private issuers of digital currencies, allow payment systems to interoperate, and allow data to flow freely and with trust. Singapore, Australia, the UK, Chile and New Zealand have championed such forward-looking trade agreements.

While traditional financial institutions have started to offer settlement through digital currencies and some retailers have started to accept them, large-scale adoption is still a long way off, particularly in the cross-border setting. Many technical and regulatory challenges remain to be overcome, ranging from interoperability to AML, CTF and consumer protection. There’s no doubt, however, that we are entering the age of digital currency, and more work needs to be done to allow participants in international trade to reap the benefits.

ILO

Find out about the world of digital labour platforms and the people who work on them, via our interactive map.

 

Digital labour platforms have become a common feature in today’s world and part of our everyday lives. Platforms have grown five-fold over the past decade and have become even more prominent since the outbreak of the COVID-19 pandemic.

Online platforms provide businesses with a new way of outsourcing work to a global workforce. Platform work covers a range of tasks such as designing a website, developing software or training an algorithm, and platforms are changing the way work is organised and regulated. According to an ILO study, digital platforms are creating new opportunities, particularly for women, young people, persons with disabilities and marginalised groups in different parts of the world.

However, digital platforms are also blurring the previously clear distinction between employees and the self-employed. Most of the time, workers are poorly paid, and their everyday experience is defined by algorithms. They also often lack access to traditional employment benefits such as social protection, paid leave, minimum wages and collective bargaining.

Consumers International

Every year, on March 15, the consumer movement celebrates World Consumer Rights Day, raising global awareness of consumer rights, consumer protection and empowerment. Consumers International is proud and privileged to coordinate this day of global collaboration, with 200 consumer advocacy members in over 100 countries. In 2021, 73 of our Members carried out local campaigns to ‘Tackle Plastic Pollution’. Collectively, we reached a total global audience of over 31 million consumers.

 

Today, Consumers International announces that the theme for World Consumer Rights Day 2022 is Fair Digital Finance.

The global consumer advocacy movement will call for fair digital finance for consumers everywhere. The movement will generate new consumer-centred insights and campaigns for digital finance that is inclusive, safe, data protected and private, and sustainable.

In a rapidly changing marketplace, World Consumer Rights Day will spark the first-ever global conversation, making the case for solutions that put consumer rights at the core of meaningful and long-lasting change.


About the theme

Digital technologies are reshaping payments, lending, insurance, and wealth management everywhere, becoming a key enabler for consumers of financial services.

Digital financial services and financial technology have driven significant changes across the world:

  • By 2024, digital banking consumers are expected to exceed 3.6 billion (Juniper Research, 2020).
  • In the developing world, the proportion of account owners sending and receiving payments digitally grew from 57% in 2014 to 70% in 2017 (Findex, 2017).
  • 39% of companies are making fintech adoption a high priority, highlighting the worldwide demand for a more innovative financial landscape (JD Supra, 2020).

However, digital financial services have created new risks, and exacerbated traditional ones, in ways that can lead to unfair outcomes for consumers and leave those who are vulnerable behind in an increasingly cashless society.

There is strong evidence to suggest these risks have increased in recent years, and crises such as the COVID-19 pandemic have amplified them, leaving vulnerable consumers more fragile due to economic hardship. Achieving fair digital finance for all requires a global, collaborative, and coordinated approach. The rapidly evolving and complex nature of digital financial services demonstrates the need for innovative regulatory approaches, and for digital financial services and products that centre consumer protection and empowerment.

It is more important now than ever to build on our knowledge and work together to understand what fair financial services look like in a digital world, and what role consumer-centred financial services can play in global challenges like sustainability. 2022 will be a crucial moment for change, with upcoming international policy moments such as the G20 and the OECD review of the High-Level Principles on Financial Consumer Protection.

Our previous work in this area includes: “Banking on the Future: An Exploration of FinTech and the Consumer Interest” and “The role of consumer organisations to support consumers of financial services in low- and middle-income countries”.


Fair Digital Finance Summit (14-18 March 2022)

Consumers International will be hosting a week-long event starting on 14 March 2022, the Fair Digital Finance Summit. The Summit will spark the first-ever global conversation around consumer-centred solutions in digital financial services by bringing together diverse voices of consumer advocates and key marketplace actors in digital financial services to accelerate change. This global summit will showcase the work, perspectives, and ideas from consumer advocates around the world.

The Summit will kick off with the Consumer Vision for Fair Digital Finance, offering insights from leaders in the consumer movement on what actions are needed to ensure fair digital finance that is inclusive, safe, data protected and private, and sustainable for consumers everywhere. The week of events will take the form of consumer-centred design sprints and incubators, high-level leadership dialogues and multi-stakeholder workshops, with representation from governments, business, academia, and civil society.


How to join the global movement this World Consumer Rights Day?

We invite all marketplace stakeholders to celebrate World Consumer Rights Day and collaborate with us to promote Fair Digital Finance.

What you can do:

  1. Collaborate with Consumers International and our Members for World Consumer Rights Day 2022 to support our Vision and/or take part in our Summit.
  2. Share information about your plans for World Consumer Rights Day 2022 or for any questions, email wcrd@consint.org.
  3. Connect with us on Twitter, Facebook, LinkedIn and Instagram for all the latest news and announcements on World Consumer Rights Day 2022.
  4. Engage in this global conversation by using our hashtags #FairDigitalFinance and #BetterDigitalWorld on social media.
  5. Read about our previous work on Financial Services.

IDB

Closing the digital access gap between the Caribbean and the more advanced economies could increase the region’s GDP by about 6 to 12 percent over the medium term and provide a strong boost to a post-COVID-19 recovery, according to a new report by the Inter-American Development Bank.

The Regional Overview: Digital Infrastructure and Development in the Caribbean is part of the IDB’s Quarterly Bulletin economic series. In addition, it has economic sections for Suriname, Jamaica, Guyana, Trinidad and Tobago, The Bahamas, and Barbados.

The report looked at economic growth in the region, with a focus on productivity – a key driver of long-term economic growth and an opportunity for the Caribbean to get to the level of similar economies across the world.

“Access to faster internet is more than just streaming Netflix and Zoom calls,” said David Rosenblatt, the Regional Economic Advisor for the IDB’s Caribbean Department. “For the Caribbean, a modern and robust digital and telecommunications infrastructure is a connection with powerful global trends that are driving growth. It is the key to unlock faster productivity growth for decades to come.”

The payoffs of investing in digital infrastructure are large, the report notes, with potential GDP gains that range between 2 times and nearly 50 times the estimated costs.

Digitalization is one of the critical areas that would allow the region to close the development gap with comparable economies. Caribbean economies have experienced volatile growth rates over the past five decades, averaging under 1 percent or negative growth for long stretches; they are vulnerable to global economic shocks and have been hard hit by the COVID-19 pandemic.

Figure. Estimated GDP and Productivity Gains from Closing Digital Infrastructure Gaps in Latin America and the Caribbean (percent)

Source: Authors’ calculations based on gaps from Table 1 and elasticities from Zaballos and López-Rivas (2012).

The report includes estimated fixed and mobile broadband gaps between countries and advanced economies grouped in the Organization for Economic Cooperation and Development. For instance, Trinidad and Tobago has a fixed broadband gap of 9.2 percentage points against the OECD, The Bahamas’ gap is 11.2 percentage points, and Jamaica’s 24 percentage points. Except for Uruguay, all Latin American and Caribbean countries have positive gaps relative to the OECD.

The authors used an econometric model to estimate the benefits of closing the digital infrastructure gap. A 10 percentage point change in digital infrastructure is associated with 3.2 percent higher GDP and 2.6 percent higher productivity over a six-year period. For nearly half of Caribbean economies, digital investments could yield cumulative GDP increases in double digits, which the report calls “transformative improvements.”

When looking at the cost-benefit ratio – the so-called “multiplier effect” – of infrastructure investments, the report found that for The Bahamas, Trinidad and Tobago, and Barbados, the potential benefit in terms of the cumulative positive impact on growth could be between 23 and 58 times the associated costs.

Benefits versus Costs of Closing Digital Infrastructure Gaps in Caribbean Countries (percent of GDP and multiplier)

Source: Authors’ calculations based on data from Table 1, Zaballos and López-Rivas (2012) and IMF (2021).

Note: “Gap” refers to the cost of closing the estimated digital infrastructure gap relative to Organization for Economic Co-operation and Development economies. Figures expressed in percentage points are relative to end-2019 GDP. The multiplier is defined as the estimated GDP growth impact of closing these gaps relative to their costs.
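
As a back-of-the-envelope illustration of the arithmetic above, the sketch below applies the reported elasticities to the fixed broadband gaps quoted for Trinidad and Tobago, The Bahamas and Jamaica. Only the gaps and elasticities come from the report; the cost figure used to illustrate the multiplier is a hypothetical placeholder.

```python
# Elasticities from the text: a 10-percentage-point change in digital
# infrastructure ~ 3.2% higher GDP and 2.6% higher productivity over
# roughly six years.
GDP_ELASTICITY = 3.2 / 10            # % GDP per percentage point of gap closed
PRODUCTIVITY_ELASTICITY = 2.6 / 10   # % productivity per percentage point

gaps_pp = {                          # fixed broadband gaps vs. OECD (from text)
    "Trinidad and Tobago": 9.2,
    "The Bahamas": 11.2,
    "Jamaica": 24.0,
}

for country, gap in gaps_pp.items():
    gdp_gain = gap * GDP_ELASTICITY              # cumulative % GDP gain
    prod_gain = gap * PRODUCTIVITY_ELASTICITY    # cumulative % productivity gain
    print(f"{country}: ~{gdp_gain:.1f}% GDP, ~{prod_gain:.1f}% productivity")

# The "multiplier" divides the GDP gain by the cost of closing the gap,
# both as shares of GDP. With a hypothetical cost of 0.2% of GDP,
# Jamaica's ~7.7% gain would imply a multiplier of about 38, comparable
# to the 23-58 range cited above for other Caribbean economies.
print(round(24.0 * GDP_ELASTICITY / 0.2, 1))     # -> 38.4
```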

Governments can play a big role in facilitating more digital investments, including updating regulatory frameworks for issues such as “rights of way”, spectrum allocation and universal service funds. The report calls for governments to establish a close relationship between digital agendas and national connectivity plans.

The IDB publishes a Broadband Index Report annually for 65 countries, looking at public policies, regulations, infrastructure, and application and training. While some Caribbean economies rank high when compared to Latin American countries, the report notes, the region lags behind leading countries such as Sweden, the United States, India, Iceland, and Australia.

“If this recent crisis has taught us anything, it is that the ability to communicate, transact, and reach clients and markets virtually has never before been more critical,” the report says. “The future will reward economies that can do so most effectively.”

UNCDF

The responsible and effective digitalization of tax payments has the potential to deliver major benefits for governments, businesses, and individuals.

 

For governments, digitalization can lead to cost savings by improving administrative efficiency and operational productivity, increasing net revenue.

As Dr. Vera Songwe, Under-Secretary-General of the United Nations Economic Commission for Africa, highlighted during the Pan-African Peer Exchange, tax digitalization holds immense potential for economic recovery from the pandemic: “Digitizing tax payments and related processes can raise additional resources for African governments to fight COVID-19 and help move the countries back to growth”.

For taxpayers, digitalization can reduce voluntary compliance costs, and boost trust and confidence through greater transparency and accountability.

In 2019, the Alliance investigated Success Factors in Tax Digitalization in three countries (Indonesia, Mexico, and Rwanda) digitalizing their tax administration systems to increase domestic revenue.

The Mexican government saw a 48 percent increase in tax revenue by streamlining revenue collection.

One key observation was that in many countries small and micro merchants are often reluctant to engage with digital payments due to fear of taxation. We connected this research to an Alliance-led Working Group on Merchant Payments Digitization in Mexico. This working group built on the efforts of the Mexican government to streamline revenue collection, including mandatory e-invoicing between 2012 and 2017, driving a 48 percent increase in tax revenue from goods and services. This increased the tax-to-GDP ratio from 12.6 percent in 2012 to 16.5 percent in 2019.

In Indonesia, the Directorate General of Taxes promoted digitalization to encourage taxpayer compliance, achieving a 20 percent reduction in business tax compliance time between 2014 and 2019. In Rwanda, tax reforms combined with investments in digital tax services between 2010 and 2020 increased the tax-to-GDP ratio from 13.1 percent to 15.9 percent and led to 14 percent average annual growth in revenue collected from 2010 to 2018.

India is also embracing digital transformation in tax collection. Having implemented arguably the largest-scale tax reform with the introduction of the Goods and Services Tax, the Government is now pioneering “stack” infrastructure, defined by its interoperability and building on Aadhaar, the national unique identification system.

Achieving this success was neither easy nor smooth. Three key learnings emerged:

  • Invest in a strong and resilient foundational tax system
    Designing a comprehensive tax administration system is a complex task. Not only must it cater to the multiple needs of revenue authorities and taxpayers; to build user trust, it must also efficiently manage high transaction volumes, complex calculations, and time-bound responses. Taxpayers expect clear communication, prompt confirmation of receipts, and quick resolution of complaints. But there is also an opportunity for proactive, user-centered design features, such as advance payment incentives to avoid high transaction volumes around payment deadlines.
    Fully digital payment methods are much less costly to administer, and typically enable taxpayer accounts to be updated more quickly. For banks, too, digital processing costs are much lower than those associated with cash or check payments. But it may be necessary to merge multiple tax administration functions into one comprehensive system. Our members Indonesia and Rwanda have emphasized the importance of leveraging legacy systems rather than re-inventing the wheel. In Indonesia, tax payments can be made at regular and mini-ATMs, and through e-banking.
  • Embrace a comprehensive change management program
    Implementing a digital tax administration system is a mammoth task that requires open-mindedness and flexibility. Internally, tax department employees may have concerns about new skill requirements, or even job security. Externally, taxpayers may doubt a digital system’s ease, transparency, and reliability. Addressing these concerns requires a comprehensive change management program.
    The Rwanda Revenue Authority invested heavily in staff training, including identifying tech-savvy employees as early adopters and ambassadors for the system. Front-line staff were newly trained as compliance officers and customer service officers. Indonesia offered a competitive remuneration scheme to recruit and retain highly skilled staff.
    Comprehensive change management should also extend to taxpayers. Sustained and clear communications are critical to building confidence in the digitization journey. Proactive, timely information prevents missed deadlines and subsequent penalties, helping establish trust. The utmost care should be taken to prevent algorithmic biases or technical errors damaging taxpayers’ trust. The complaints process should also be transparent and prompt.
  • Forge private sector partnerships and leverage data for better service delivery
    Partnerships with the private sector – including fintechs, banks, and microfinance institutions – have been effective in obtaining user feedback and developing prompt and effective recourse mechanisms. In Indonesia, public APIs have enabled tax authorities to expedite new services to taxpayers. Through private sector partnerships, Indonesia provided taxpayers with greater access to value-added services such as tax liability estimates and advance payment reminders. Tax payments worth around IDR 100 trillion (US$6.43 billion) were processed through the OnlinePajak application in 2018 – approximately 5–10 percent of the country’s total tax revenue. Emails encouraging payment, designed by a Behavioral Insights Task Force, resulted in the collection of an additional US$13.53 million in 2017 alone. COVID-19 has encouraged an even greater focus on the digitization journey. According to Iwan Djuniardi, Director of ICT Transformation at DJP: “With the pandemic, we are forced to go through these digital transformations”.

A shared vision for tax digitalization is important in establishing an effective, user-centric tax system, so supporting regional and municipal tax authorities in their digitization journeys is essential. Institutions involved may have varying levels of digital maturity, so providing tailored support is also key to obtaining commitment from all.

While payments are just one component of the highly complex end-to-end tax collection process, tax digitalization – when designed and implemented effectively – has the potential to deliver major benefits for society, reduce inequalities, and contribute to the financing of the SDGs.

ISOC

We are thrilled to award a third round of grant funding through our Research grant program to six exciting projects that examine the future and sustainability of the Internet. Launched in 2020, this program supports a diverse group of researchers around the world who are generating solutions today to meet the Internet challenges of tomorrow.

The selected projects examine important issues around the Internet’s relationship to society, such as: the role language plays in promoting digital inclusion, the impact of data sharing practices on children’s privacy, and more.

Through these grants, we look forward to enabling new research on the future of the Internet, research that will influence policy and industry decisions and ultimately help shape a more equitable and sustainable future for the Internet and the people it serves.

Learn more about each awardee in the list below.

Data & Society Research Institute – United States – $200,000

Theme: A Trustworthy Internet

Project Title: Platform Mediation and the Verified Internet

Research Questions: What types of evidence are platforms requesting from users as they verify their identities, their property, or their content? How might this evidence unintentionally privilege or disadvantage certain groups who are requesting verification?

Jens Finkhaeuser – Germany – $100,000

Theme: A Trustworthy Internet

Project Title: On the Far Side of REST: An Architecture for a Future Internet

Research Question: How can we establish an alternative web-scale architecture that embraces REST’s strengths, yet addresses its current drawbacks around privacy and security?

Joanna Kulesza and Berna Akcali Gur – Poland –  $192,000

Theme: Decolonizing the Internet

Project Title: Global Governance of LEO Satellite Broadband

Research Question: What are the potential ‘data/digital sovereignty’ and jurisdictional challenges to the integration of Low Earth Orbit (LEO) satellite constellations onto the (5G) telecommunications network? How will the current geopolitical and economic hegemony of developed states in standard-setting for LEO broadband impact developing countries?

ME2B Alliance Inc – United States – $100,171.46

Theme: A Trustworthy Internet

Project Title: U.S. Ed Tech Industry Benchmark: Data Sharing in Primary & Secondary School Mobile Utility Apps

Research Question: How is mandated technology treating our most vulnerable (young students)?

Pollicy – Uganda – $199,421.25 

Theme: Decolonizing the Internet

Project Title: Are we Together? The Role of Local Languages (and the Lack of) for Digital Inclusion

Research Questions: How do the predominant languages that digital platforms use impact the usability and accessibility of digital platforms and content? What are the barriers for both users and content creators that communicate in local languages?

Uppsala University – Sweden –  $194,791.00

Theme: Greening the Internet

Project Title: Developing the Internet Microscope

Research Question: How does the traffic volume and energy requirement of Internet services support what people do in everyday life? Conversely, what kinds of traffic are difficult to tie to any value or meaning, and thus might be wasteful?

The Research grant program is open to independent researchers and research institutions worldwide and is currently accepting statements of interest, to be reviewed on a rolling basis. Research themes include: Greening the Internet, The Internet Economy, Decolonizing the Internet, and A Trustworthy Internet. Grants of up to US$200,000 will be awarded for research lasting up to two years.

ESCWA

Beirut-Amman, 04 January 2022. Female entrepreneurship boosts employment and career advancement opportunities for women. To address the national challenges hindering women’s empowerment in technology and entrepreneurship across the Arab region, the Technology Centre (ETC) of the United Nations Economic and Social Commission for Western Asia (ESCWA) launched the Arab Women Empowerment for Technology and Entrepreneurship (AWETE) programme at the beginning of 2021.

ETC also announced the “Arab women empowerment ecosystem” maps developed under the programme, in addition to a brief study on the challenges that women face when accessing technology and entrepreneurial opportunities in the region.

The AWETE programme included a series of five regional roundtable discussions held in Egypt, Iraq, Jordan, Lebanon and Palestine, gathering key actors and experts advocating for women’s rights.

“Effective partnerships are essential to the sustainability of the work done by the many organizations across the region to increase and enhance the participation of women as leaders of change,” said Kareem Hassan, ETC Executive Director. “We need to work collaboratively to ensure that women possess all the right tools to be in the lead, as entrepreneurship is the primary trend for economic independence,” he added.

Discussions revolved around the team’s desk research findings on data demonstrating the extent to which women are provided access to digital technologies, entrepreneurial opportunities, and the intersection between the two. The program provided a space for gender experts and relevant stakeholders to share national findings and consequent recommendations related to the development of women’s status in their country, to further feed into the holistic and comprehensive brief study.

Participants agreed on a set of recommendations emphasizing the need to reform and decentralize non-governmental assistance provided for women entrepreneurs to integrate innovation and sustainability, to adopt community-based approaches to increase awareness of financial and digital literacy, and to set up gender-sensitive social and digital security mechanisms to protect small businesses from financial and human crises.

They also stressed the importance of incorporating the concept of entrepreneurship in the educational curriculum to inspire girls and young women and establishing a unified platform through which women seeking entrepreneurship can reach national networks and funding opportunities.

WEF
  • AI – artificial intelligence – is transforming every aspect of our lives.
  • Professor Stuart Russell says we need to make AI ‘human-compatible’.
  • We must prepare for a world where machines replace humans in most jobs.
  • Social media AI changes people to make them click more, Russell says.
  • We’ve given algorithms ‘a free pass for far too long’, he says.

Six out of 10 people around the world expect artificial intelligence to profoundly change their lives in the next three to five years, according to a new Ipsos survey for the World Economic Forum – which polled almost 20,000 people in 28 countries.

A majority said products and services that use AI already make their lives easier in areas such as education, entertainment, transport, shopping, safety, the environment and food. But just half say they trust companies that use AI as much as they trust other companies.

But what exactly is artificial intelligence? How can it solve some of humanity’s biggest problems? And what threats does AI itself pose to humanity?

Kay Firth-Butterfield, head of artificial intelligence and machine learning at the World Economic Forum’s Centre for the Fourth Industrial Revolution, joined Radio Davos host Robin Pomeroy to explore these questions with Stuart Russell, one of the world’s foremost experts on AI.


Transcript: The promises and perils of AI – Stuart Russell on Radio Davos

Kay Firth-Butterfield: It’s my pleasure to introduce Stuart, who has written two books on artificial intelligence. One is Human Compatible: Artificial Intelligence and the Problem of Control. But perhaps the one that you referred to, saying that he had ‘literally written the book on artificial intelligence’, is Artificial Intelligence: A Modern Approach – the book from which most students around the world learn AI. Stuart and I first met in 2014 at a lecture that he gave in the UK about his concerns around lethal autonomous weapons. And whilst we’re not going to talk about that today, he’s been working tirelessly at the UN for a ban on such weapons. Stuart has worked extensively with us at the World Economic Forum. In 2016, he became co-chair of the World Economic Forum’s Global Future Council on AI and Robotics. And then in 2018, he joined our Global AI Council. As a member of that Council, he galvanised us into thinking about how we could achieve positive futures with AI, by planning and developing policies now to chart a course to that future.

Robin Pomeroy: Stuart, you’re on the screen with us on Zoom. Very nice to meet you.

Stuart Russell: Thank you very much for having me. It’s really nice to join you and Kay.

Robin: Where in the world are you at the moment?

Stuart: I am in Berkeley, California.

Robin: Where you’re a professor. I’ve been listening to your lectures on BBC Radio 4 and the World Service, the Reith Lectures. So I feel like I’m an expert in it now, as I wasn’t a couple of weeks ago. Let’s start right at the very beginning, though. For someone who only has a vague idea of what artificial intelligence is – we all know what computers are, we use apps. How much of that is artificial intelligence? And where is it going to take us in the future beyond what we already have?

Stuart: It’s actually surprisingly difficult to draw a hard and fast line and say, Well, this piece of software is AI and that piece of software isn’t AI. Because within the field, when we think about AI, the object that we discuss is something we call an ‘agent’, which means something that acts on the basis of whatever it has perceived. And the perceptions could be through a camera or through a keyboard. The actions could be displaying things on a screen or turning the steering wheel of a self-driving car or firing a shell from a tank, or whatever it might be. And the goal of AI is to make sure that the actions that come out are actually the right ones, meaning the ones that will actually achieve the objectives that we’ve set for the agent. And this maps onto a concept that’s been around for a long time in economics and philosophy, called the ‘rational agent’ – the agent whose actions can be expected to achieve its objectives.

And so that’s what we try to do. And they can be very, very simple. A thermostat is an agent. It has perception – it just measures the temperature. It has actions – switch on or off the heater. And it sort of has two very, very simple rules: If it’s too hot, turn it off. If it’s too cold, turn it on. Is that AI? Well, actually, it doesn’t really matter whether you want to call that AI or not. So there’s no hard and fast dividing line like, well, if it’s got 17 rules then it’s AI, if it’s only got 16, then it’s not AI. That wouldn’t make sense. So we just think of it as a continuum, from extremely simple agents to extremely complex agents like humans.
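
To make the agent idea concrete, here is a minimal sketch of the two-rule thermostat Russell describes: perception in, action out. The temperature thresholds are assumptions added for the example.

```python
# A minimal "agent": perceive the temperature, act by switching a heater
# on or off, using exactly the two rules from the transcript. The
# thresholds are illustrative assumptions.
class ThermostatAgent:
    def __init__(self, too_cold: float = 18.0, too_hot: float = 22.0):
        self.too_cold = too_cold
        self.too_hot = too_hot
        self.heater_on = False

    def act(self, temperature: float) -> bool:
        """Map a perception (temperature) to an action (heater state)."""
        if temperature > self.too_hot:      # rule 1: too hot -> turn it off
            self.heater_on = False
        elif temperature < self.too_cold:   # rule 2: too cold -> turn it on
            self.heater_on = True
        return self.heater_on

agent = ThermostatAgent()
for reading in [15.0, 19.5, 23.0]:
    print(reading, "->", "heater on" if agent.act(reading) else "heater off")
```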

AI systems now are all over the place in the economy – search engines are AI systems. They’re actually not just keyword look-up systems any more – they are trying to understand your query. About a third of all the queries going into search engines are actually answered by knowledge bases, not by just giving you web pages where you can find the answer. They actually tell you the answer because they have a lot of knowledge in machine readable form.

Your smart speakers, the digital assistants on your phone – these are all AI systems. Machine translation – which I use a lot because I have to pay taxes in France – it does a great job of translating impenetrable French tax legislation into impenetrable English tax legislation. So it doesn’t really help me very much, but it’s a very good translation. And then the self-driving car, I think you would say that’s a pretty canonical application of AI that stresses many things: the ability to perceive, to understand the situation and to make complex decisions that actually have to take into account risk and the many possible eventualities that can arise as we drive around. And then, of course, at the very high end are human beings.

Robin: At some point in the future, machines, AI, will be able to do everything a human can do, but better. Is that the thing we’re moving towards?

Stuart: Yes. This has always been the goal – what I call ‘general purpose AI’. There are other names for it: human-level AI, superintelligent AI, artificial general intelligence. But I settled on ‘general purpose AI’ because it’s a little bit less threatening than ‘superintelligent AI’. And, as you say, it means AI systems that, for any task that human beings can do with their intellects, will be able to, if they cannot do it already, very quickly learn how to do it and do it as well as or better than humans. And I think most people understand that once you reach a human level on any particular task, it’s not that hard then to go beyond the human level. Because machines have such massive advantages in computation speed, in bandwidth, in the ability to store and retrieve stuff from memory at vast rates that human brains can’t possibly match.

Robin: Kay, I’m going to hand it over to you and I’m going to take a back seat. I’ll be your co-host if you like. I’m bursting with questions as well, so I’ll annoyingly cut in. But basically for the rest of this interview, all yours.

Kay: Thank you. You talked, Stuart, a little bit about some of the examples of AI that we’re encountering all the time. But one of the ways that AI is being used every day by human beings, from the youngest to the oldest, is in social media. We hear a great deal about radicalisation through social media. Indeed, at a recent conference I attended, Cédric O, the IT minister from France, described AI as the biggest threat to democracy existing at the moment. I wonder whether you could actually explain for our listeners how the current use of AI drives that polarisation of ideas.

Stuart: So I think this is an incredibly important question. The problem with answering your question is that we actually don’t know the answer because the facts are hidden away in the vaults of the social media companies. And those facts are basically trillions of events per week – trillions! Because we have billions of people engaging with social media hundreds of times a day and every one of those engagements – clicking, swiping, dismissing, liking, disliking thumbs up, thumbs down – you name it – all of that data is inaccessible, even, for example, to Facebook’s oversight board, which is supposed to be actually keeping track of this, that’s why they made it. But that board doesn’t have access to the internal data.

You maximise click-through by sending people a chain of content that turns them into somebody else who is more susceptible to clicking on whatever content you’re going to send them in future.

So there is some anecdotal evidence. There are some data sets on which we are able to do some analysis that’s suggestive, but I would say it’s not conclusive. However, if you think about the way the algorithms work, what they’re trying to do is maximise click-through. They want you to click on things, engage with content or spend time on the platform, which is a slightly different metric, but basically the same thing.

And you might say, Well, OK, the only way to get people to click on things is to send them things they’re interested in. So what’s wrong with that? But that’s not the answer. That’s not the way you maximise click-through. The way you maximise click-through is actually to send people a chain of content that turns them into somebody else who is more susceptible to clicking on whatever content you’re going to be able to send them in future.

So the algorithms have, at least according to the mathematical models that we built, the algorithms have learnt to manipulate people to change them so that in future they’re more susceptible and they can be monetised at a higher rate.

The algorithms don’t care what opinions you have, they just care that you’re susceptible to stuff that they send. But of course, people do care.

Now, at the same time, of course, there’s a massive human-driven industry that has sprung up to feed this whole process: the click-bait industry, the disinformation industry. So people have hijacked the ability of the algorithms to very rapidly change people – because it’s hundreds of interactions a day, each one a little nudge. But if you nudge somebody hundreds of times a day for days on end, you can move them a long way in terms of their beliefs, their preferences, their opinions. The algorithms don’t care what opinions you have, they just care that you’re susceptible to stuff that they send. But of course, people do care, and they hijacked the process to take advantage of it and create the polarisation that suits them for their purposes. And, you know, I think it’s essential that we actually get more visibility. AI researchers want it because we want to understand this and see if we can actually fix it. Governments want this because they’re really afraid that their whole social structure is disintegrating or that they’re being undermined by other countries who don’t have their best interests at heart.

Robin: Stuart, do we know whether that’s a kind of a by-product of the algorithms or whether a human at some point has built that into the algorithms, this polarisation?

Stuart: I think it’s a by-product. I’m willing to give the social media platforms some benefit of the doubt – that they didn’t intend this. But one of the things that we know is that when algorithms work well, in the sense that they generate lots of revenue and profit for the company, that creates a lot of pressure not to change the algorithm. And so whether it’s conscious or unconscious, the algorithms are in some sense protected by this multinational superstructure that’s enjoying the billions of dollars being generated and wants to protect that revenue stream.

Kay: Stuart, you used the word ‘manipulating’ us, but you also said the algorithms don’t care. Can you just explain what one of these algorithms would look like? And presumably it doesn’t care because it doesn’t know anything about human beings?

[AI] doesn’t know that human beings exist at all. From the algorithm’s point of view, each person is simply a click history.

Stuart: That’s right, it doesn’t know that human beings exist at all. From the algorithm’s point of view, each person is simply a click history. So what was presented and did you or did you not click on it? And so let’s say the last 100 or the last 1,000 such interactions – that’s you. And then the algorithm learns, OK, how do I take those thousand interactions and choose the next thing to send? We call that a ‘policy’, that decides what’s the next thing to send, given the history of interactions, and the policy is learned over time in order to maximise the long-term rate of clicking. So it’s not just trying to choose the next best thing that you’re going to click on. It’s also, just because of the way the algorithm is constructed, it’s choosing the thing that is going to yield the best results in the long term.

Just as if I want to get to San Francisco from here, right? I make a long-term plan and then I start executing the plan, which involves getting up out of my chair and then doing some other things, right? And so the algorithm is sort of embarking on this journey, and it’s learned how to get to these destinations where people are more predictable, and it’s predictability that the algorithm cares about in terms of maximising revenue. The algorithms wouldn’t be conscious anyway, but it’s not deliberate in the sense that it has an explicit objective to radicalise people or cause them to become terrorists or anything like that. Now, future algorithms that actually know much more, that know that people do exist and that we have minds and that we have a particular kind of psychology and different susceptibilities, could be much more effective. And I think this is one of the things that feels a little bit like a paradox at first – that the better the AI, the worse the outcome.
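
As a rough illustration of the ‘policy’ idea, the sketch below treats a user as nothing but a recent click history and incrementally learns which content category gets clicked for each history, choosing greedily with occasional exploration. It is a bare-bones, bandit-style toy invented for this transcript, not any platform’s actual recommender; the content categories and parameters are assumptions.

```python
# Toy "policy" over click histories: the user is just their last few
# clicks, and we learn an estimated click rate per (history, category).
import random
from collections import defaultdict

CATEGORIES = ["news", "sport", "outrage"]   # assumed content categories
value = defaultdict(float)   # estimated click rate per (history, category)
counts = defaultdict(int)

def choose(history: tuple, epsilon: float = 0.1) -> str:
    """The policy: map a click history to the next item to show."""
    if random.random() < epsilon:            # explore occasionally
        return random.choice(CATEGORIES)
    return max(CATEGORIES, key=lambda c: value[(history, c)])

def update(history: tuple, category: str, clicked: bool) -> None:
    """Nudge the click-rate estimate for this (history, category) pair."""
    key = (history, category)
    counts[key] += 1
    value[key] += (float(clicked) - value[key]) / counts[key]

# One simulated interaction: show an item, observe a click, learn.
history = ("sport", "news")                  # the user's last two clicks
item = choose(history)
update(history, item, clicked=True)
```

Because this toy only credits the click that just happened, it is myopic; the systems Russell describes optimise the long-term click rate, which is precisely what rewards content that turns the user into a more predictable clicker.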

Kay: You talked about learning – the algorithms learning – and that’s something that I think quite a lot of people don’t really understand – because we use software, we’ve been using software for a long time, but this is slightly different.

Stuart: Actually, much of the software that that we have been using was created by a learning process. For example, speech recognition systems. There isn’t someone typing in a rule for how do you distinguish between ‘cat’ and ‘cut’? We just give the algorithm lots of examples of ‘cat’ and lots of examples of ‘cut’, and then the algorithm learns the distinguishing rule by tweaking the parameters of some kind of – think of it as a big circuit with lots of tuneable weights or connection strengths in the circuit. And then as you tune all those weights in the circuit, the output of the circuit will change and it’ll start becoming better at distinguishing between ‘cat’ and ‘cut’, and you’re trying to tune those weights to agree with the training data – the labelled examples, as we say – the ‘cats’ and the ‘cuts’. And as that process of tuning all the weights proceeds, eventually it will give you perfect or near-perfect performance on the training data. And then you hope that when a new example of ‘cat’ or ‘cut’ comes along, that it succeeds in classifying it correctly. And so that’s how we train speech recognition systems, and that’s been true for decades. We’re a little bit better at it now so our speech recognition systems are more accurate, much more robust – it’ll be able to understand what you’re saying even when you’re driving a car and talking on a crackly cellphone line. It’s good enough now to understand that speech.
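
As a toy version of that weight-tuning process, the sketch below trains a two-weight ‘circuit’ by repeatedly nudging its weights against the error on a handful of labelled examples. The feature values standing in for ‘cat’ and ‘cut’ audio are fabricated for illustration; real recognisers use far larger networks and real acoustic features.

```python
# Tune the weights of a tiny "circuit" (logistic unit) to separate
# labelled examples of two classes. The two-feature "audio" data is
# fabricated purely for illustration.
import math

# (feature1, feature2) -> label: 1 for "cat", 0 for "cut"
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.2, 0.9), 0), ((0.1, 0.8), 0)]
weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    z = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 / (1 + math.exp(-z))            # probability the sound is "cat"

for _ in range(1000):                         # tune weights to fit the data
    for x, label in data:
        error = predict(x) - label            # positive if we overshoot
        weights[0] -= 0.1 * error * x[0]      # nudge each weight against
        weights[1] -= 0.1 * error * x[1]      # its share of the error
        bias -= 0.1 * error

print([round(predict(x), 2) for x, _ in data])  # near 1, 1, 0, 0
```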

When you buy speech recognition software, for example, dictation software, there’s often what we might call a post-purchase learning phase, where it’s already pretty good but, by training on your voice specifically, it can become even better. So it will give you a few sentences to read out, and then that additional data means that it can be even more accurate on you.

And so you can think of what’s going on in the social media algorithms as like that – a sort of post-purchase customisation – it’s learning about you. So, initially, it can recommend articles that are of interest to the general population, which seems to be Kim Kardashian, as far as I can tell. But then after interacting with you for a while, it will learn: actually, no, I’d rather get the cricket scores or something like that.

There is no equivalent of city planning for AI systems.

Kay: Is it the same for facial recognition? Because we hear a lot about facial recognition perhaps making mistakes with people of colour. Is that because the data’s wrong?

Stuart: Usually, it’s not because the data is wrong. In that case, it’s because the data has many fewer examples of particular types of people. And so when you have few examples, the accuracy on that subset will be worse. This is a very controversial question, and it’s actually quite hard to get agreement among the various different parties to the debate about whether one can eliminate these disparities in recognition rates by actually making more representative datasets. And of course, there isn’t one perfectly representative dataset, because what counts as perfectly representative depends on whether you’re in Namibia or Japan or Iceland or Mongolia. It wouldn’t necessarily be appropriate to use exactly the same data for all four of those settings. And so these questions become not so much technical, but ‘socio-technical’. What matters is what happens when you deploy the AI system in a particular context and it operates for a while. There are many things that go on. For example, people might start avoiding places where there are cameras, but maybe only one type of person avoids the places where there are cameras and the other people don’t mind. And so now you’ve created a new bias in the collection of data, and that bias is really hard to understand because you can’t predict who’s going to avoid the area with the cameras. And so, understanding that, we’re nowhere near having a real engineering discipline or scientific approach to understanding the socio-technical embedding of AI systems, what effect they can have, what effect society has on them and their operation. And then, are we all better off as a result? Or are we all worse off as a result? Early anecdotes suggest that there are all kinds of weird ways that things go wrong that you just don’t expect, because we’re not used to thinking about this.

Now, if you’re in city planning, they have learnt over centuries that weird things happen. When you broaden a road you think that’s going to improve traffic flow, but it turns out that sometimes making bigger roads makes the traffic flow worse. Same thing: should you add a bridge across the river? They’ve learnt to actually think through the consequences – pedestrianising a street, you might think, oh, that’s good, but then it just moves the traffic somewhere else and things get worse in the next neighbourhood. So there are all kinds of complicated things, and we’re just beginning to explore these and we don’t really yet have a proper discipline. There is no equivalent of city planning for AI systems and their socio-technical embeddings.

Kay: Back in 2019, I think it was, you came to me with a suggestion: that to truly optimise the benefits for humans of AI and, in particular, general purpose AI, which you spoke to Robin about earlier, we need to rethink the political and social systems we use. We were going to lock people in a room, and those people were specifically going to be economists and sci-fi writers. We never did that because we got COVID. But we had such fantastically interesting workshops, and I wonder whether you could tell us a little bit about why you thought that was important and the sort of ideas that came out of it.

Stuart: I just want to reassure the viewers that we didn’t literally plan to lock people into a room – it was meant in a metaphorical sense. The concern, or the question, was: what happens when general purpose AI hits the real economy? How do things change? And can we adapt to that without having a huge amount of dislocation? Because, you know, this is a very old point. Amazingly, even Aristotle has a passage where he says: Look, if we had fully automated weaving machines and fully automated plectrums that could pluck the lyre and produce music without any humans, then we wouldn’t need any workers. It’s a pretty amazing thing for 350 BC.

That idea – Keynes called it ‘technological unemployment’ in 1930 – is very obvious to people, right? They think: Yeah, of course, if the machine does the work, then I’m going to be unemployed. And the Luddites worried about that. And for a long time, economists actually thought they had a mathematical proof that technological unemployment was impossible. But think about it: if technology could make a twin of every person on Earth, and the twin was more cheerful, less hungover and willing to work for nothing, how many of us would still have our jobs? I think the answer is zero. So there’s something wrong with the economists’ mathematical theorem.

Over the last decade or so, opinion in economics has really shifted. It was, in fact, at the first Davos meeting I ever went to, in 2015. There was a dinner supposedly to discuss the ‘new digital economy’. But the economists – there were several Nobel prize winners there, and other very distinguished economists – got up one by one and said: You know, actually, I don’t want to talk about the digital economy. I want to talk about AI and technological unemployment, because this is the biggest problem we face in the world, at least from the economic point of view. Because as far as they could see, as general purpose AI becomes more and more of a real thing – right now, we’re not very close to it – we’ll see AI systems capable of carrying out more and more of the tasks that humans do at work.

So just to give you one example, if you think about the warehouses that Amazon and other companies are currently operating for e-commerce, they are half-automated. The way it works is that instead of having an old warehouse where you’ve got tons of stuff piled up all over the place and then the humans go and rummage around and then bring it back and send it off, there’s a robot who goes and gets the shelving unit that contains the thing that you need and brings it to the human worker. So the human worker stands in one place and these robots are going in collecting shelving units of stuff and bringing them. But the human has to pick the object out of the bin or off the shelf, because that’s still too difficult.

It’s all very well saying: Oh, we’ll just retrain everyone to be data scientists. But we don’t need 2.5 billion data scientists.

And there are, let’s say, three or four million people with that job in the world. But at the same time, Amazon was running a competition for bin-picking: could you make a robot accurate enough to pick pretty much any object – and there’s a very wide variety of objects that you can buy – from shelves, bins, et cetera, and then send it off to the dispatch unit? That would, at a stroke, eliminate three or four million jobs. And the system is already set up to do that, so it wouldn’t require rejigging everything – you’d really just be putting a robot in the place where the human was.

People worry about self-driving cars. As they become a reality, a self-driving taxi is going to be maybe a quarter of the price of a regular taxi. And so you can see what would happen, right? There are, I think, about 25 million formal or informal taxi drivers in the world. So that’s a somewhat bigger impact. And then, of course, this continues with each new capability: more tasks are automated. Can we keep up with that rate of change in terms of finding other things that people will do, and then training them to do those new things that they may not know how to do? It’s all very well saying: Oh, we’ll just retrain everyone to be data scientists. But, number one, we don’t need 2.5 billion data scientists. I’m not even sure we need 2.5 million data scientists. So it’s a drop in the bucket.

But other things – yes, we need more people who can do geriatric care. But it’s not that easy to take someone who’s been a truck driver for 25 years and retrain them for the geriatric care industry. Tutoring of young children. There are many other unmet needs in our world. I think that’s obvious to almost everyone – there are unmet needs. Machines may be able to fulfil some of those needs, but humans can only meet them if they are trained and have the knowledge, aptitude and even inclination to do those kinds of jobs.

When we look in science fiction, there are models for worlds that seem quite desirable. But as economies, they don’t hang together.

So the question we were asking is: OK, if this process continues until general purpose AI is doing pretty much everything we currently call work, what is the world going to look like – or what would a world look like that you would want your children to go into, to live in? And when we look in science fiction, there are models for worlds that seem quite desirable. But as economies, they don’t hang together. So the economists say: the incentives wouldn’t work in that world – these people would stop doing that and those people would do this instead, and it just wouldn’t be stable. And the economists don’t really invent things, right? They just talk about, well, we could raise this tax or decrease this interest rate. The economists at Davos who were talking all said: perhaps we could have private unemployment insurance. Well, yeah, right – that really solves the problem! So I wanted to put these two groups together, so you could get imagination tempered by real economic understanding.

Robin: I’m curious to know who were the optimists and who were the pessimists between the economists and the science fiction writers. Science fiction writers, I imagine, love a bit of dystopia and things going horribly wrong. But I wonder whether the flip side is that they’ve actually got a more optimistic outlook than the economists, who are embedded in the real world where things really are going wrong all the time. Did you notice a trend either way?

Stuart: The science fiction writers, as you say, are fond of dystopias, but the economists are mostly pessimistic – certainly the ones at the dinner, and I’ve been interacting with many during these workshops. I think many economists still hold the view that there are compensating effects – it’s not as simple as saying, if the machine does Job X, then the person isn’t doing Job X, and so the person is unemployed. If a machine is doing something more cheaply, more efficiently, more productively, that increases total wealth, which then increases demand for all the other jobs in the economy. And so you get this sort of recycling of labour from areas that are becoming automated to areas that are still not automated. But if you automate everything, then this is the argument about the twins, right? It’s like making a twin of everyone who’s willing to work for nothing. And so you have to think: are there areas where we aren’t going to be automating, either because we don’t want to or because humans are just intrinsically better? So that’s one optimistic view, and I think you could argue that Keynes had this view. He called it ‘perfecting the art of life’. We will be faced with man’s permanent problem, which is how to live agreeably and wisely and well. And those people who cultivate better the art of life will be much more successful in this future.

And so cultivating the art of life is something that humans understand. We understand what life is, and we can do that for each other because we are so similar. We have the same nervous systems. I often use the example of hitting your thumb with a hammer. If you’ve ever done that, then you know what it’s like and you can empathise with someone else who does it.

There’s this intrinsic advantage that we have for knowing what it’s like to be jilted by the love of your life … so we have this extra comparative advantage over machines.

You don’t need a Ph.D. in neuroscience to know what it’s like to hit your thumb with a hammer, and if you haven’t done it, well, you can just do it – and now you know what it’s like, right? So there’s this intrinsic advantage that we have for knowing what it’s like: knowing what it’s like to be jilted by the love of your life, knowing what it’s like to lose a parent, knowing what it’s like to come bottom in your class at school and so on. So we have this extra comparative advantage over machines, which means that those kinds of professions – interpersonal professions – are likely to be ones where humans have a real advantage. More and more people, I think, will be moving into those areas. Some interpersonal professions, like executive coach, are relatively well paid because they’re providing services to very rich people and corporations. But others, like babysitting, are extremely poorly paid, even though supposedly we care enormously more about our children than we care about our CEO. We pay someone $5 an hour and everything they can eat from the fridge to look after our children, whereas if we break a leg, we pay an orthopaedic surgeon $5,000 an hour and everything he can eat from the fridge to fix our broken leg. Why is that? Because he knows how to do it, and the babysitter doesn’t really know how to do it. Some babysitters are good and some are absolutely terrible – like the one who tried to teach me and my sister to smoke when we were seven and nine.

Robin: Someone’s got to do it.

Stuart: Someone’s got to do it! In order for this vision of the future to work, we need a completely different science base – one that’s oriented towards the human sciences. How do you make someone else’s life better? How do you educate an individual child, with their individual personality, traits and characteristics, so that they have, as Keynes called it, the ability to live wisely and agreeably and well? We know so little about that. And it’s going to take a long time to shift our whole scientific research orientation and our education system to make this vision an economically viable one. If that’s the destination, then we need to start preparing for that journey sooner rather than later. And so the idea of these workshops was to envision these possible destinations and then figure out the policy implications for the present.

Kay: And so these destinations, we’re talking about something in the future. I know that this may be crystal ball gazing, but when might we expect general purpose AI, so that we can be prepared? You say we need to prepare now.

Stuart: I think this is a very difficult question to answer. And it’s also not the case that it’s all or nothing. The impact is going to be increasing: every advance in AI significantly expands the range of tasks that can be done. So, you know, we’ve been working on self-driving cars, and the first demonstrated freeway driving was in 1987. Why has it taken so long? Mainly because the perceptual capabilities of the systems were inadequate, and some of that was just hardware – you need massive amounts of hardware to process high-resolution, high-frame-rate video. That problem has been largely solved. And so, with visual perception now, a whole range opens up: not just self-driving cars, but robots that can work in agriculture, robots that can do the part-picking in the warehouse, et cetera. Just that one thing easily has the potential to impact 500 million jobs. And then as you get to language understanding, that could be another 500 million jobs. So each of these changes causes this big expansion.

Most experts say by the end of the century we’re very, very likely to have general purpose AI. The median is something around 2045.

And so these things will happen. The actual date of arrival of general purpose AI, you’re not going to be able to pinpoint – it isn’t a single day: ‘Oh, today it arrived – yesterday we didn’t have it’. I think most experts say that by the end of the century we’re very, very likely to have general purpose AI. The median is something around 2045, and that’s not so long – less than 30 years from now. I’m a little more on the conservative side; I think the problem is harder than we think. But I liked what John McCarthy, one of the founders of AI, said when he was asked this question: Well, somewhere between five and 500 years. And we’re going to need, I think, several Einsteins to make it happen.

Robin: On the bright side, if these machines are going to be so brilliant, will there come a day when we can just say: Fix global hunger, fix climate change? And off they go, and you give them six months, or whatever a reasonable amount of time is, and suddenly they’ve fixed climate change. In one of your Reith Lectures you broach the subject of climate change, reducing it to one area: the acidification of the oceans. You envisage a scenario where a machine can fix the acidification of the oceans that’s been caused by climate change. But there is a big ‘but’ there. Perhaps you can tell us what the problem is when you set an AI off to do a specific job?

When you ask a human to fetch you a cup of coffee, you don’t mean this should be their life’s mission and nothing else in the universe matters, even if they have to kill everybody else in Starbucks.

Stuart: So there’s a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to fetch you a cup of coffee, you don’t mean that this should be their life’s mission and nothing else in the universe matters – even if they have to kill everybody else in Starbucks to get you the coffee before it closes. That’s not what you mean. All the other things that we mutually care about should factor into their behaviour as well.

If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviours, like asking permission before getting rid of all the oxygen in the atmosphere.

And the problem with the way we build AI systems now is that we give them a fixed objective. The algorithms require us to specify everything in the objective. If you say: can we fix the acidification of the oceans? – yes, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours. So how do we avoid this problem? You might say: OK, just be more careful about specifying the objective – don’t forget the atmospheric oxygen. And then, of course, it might produce a side-effect of the reaction in the ocean that poisons all the fish. OK, well, I meant don’t kill the fish either. And then, what about the seaweed? OK, don’t do anything that’s going to cause all the seaweed to die – and on and on and on. The reason we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about, and so they are likely to come back and check. If you ask a human to get you a cup of coffee, and you happen to be in the Hotel George V in Paris, where the coffee is, I think, 13 euros a cup, it’s entirely reasonable for them to come back and say: ‘Well, it’s 13 euros. Are you sure? Or I could go next door and get one for much less.’ That’s because they might not know your price elasticity for coffee – they don’t know whether you want to spend that much. And it’s a perfectly normal thing for a person to do – to ask. I’m going to repaint your house – is it OK if I take off the drainpipes and then put them back? We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it, because the way we build them now, they have to know the full objective.

Control over the AI system comes from the machine’s uncertainty about what the true objective is.

And in my book Human Compatible, which Kay mentioned, the main point is that if we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviours, like asking permission before getting rid of all the oxygen in the atmosphere. They do that because that’s a change to the world, and the algorithm may not know whether it’s something we prefer or disprefer. So it has an incentive to ask, because it wants to avoid doing anything that’s dispreferred. You get much more robust, controllable behaviour. And in the extreme case, if we want to switch the machine off, it actually wants to be switched off, because it wants to avoid doing whatever it is that’s upsetting us. It doesn’t know which thing it’s doing that’s upsetting us, but it wants to avoid that, so it wants us to switch it off if that’s what we want. In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. It’s when you build machines that believe with certainty that they have the objective that you get sort of psychopathic behaviour, and I think we see the same thing in humans.

Kay: And would that help with those basic algorithms that we were talking about – the ones leading us down the journey of radicalisation? Or is it only applicable to general purpose AI?

Stuart: No, it’s applicable everywhere. We’ve actually been building algorithms that are designed along these lines. You can set it up as a formal mathematical problem. We call it an ‘assistance game’ – ‘game’ in the sense of game theory, which means decision problems that involve more than one entity. So it involves the machine and the human, coupled together by this uncertainty about the human objective. And you can solve those assistance games: you can mathematically derive algorithms that come up with a solution, and you can look at the solution and say: Gosh, yes, it asked for permission. Or: the human half of the solution actually wants to teach the machine, because it wants to make sure the machine understands human preferences so that it avoids making mistakes.
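A tiny numerical illustration of that incentive to ask – toy numbers of our own, not the formal assistance-game mathematics – might look like this:

```python
# Suppose an action yields +10 if the human approves of its side-effect
# and -100 if not; asking first costs 1 and reveals the answer.
# All three numbers are invented for illustration.

p_approve = 0.9          # the machine's belief that the human approves
GAIN, LOSS, ASK_COST = 10, -100, 1

# Option 1: just act, gambling on the uncertain objective.
act_now = p_approve * GAIN + (1 - p_approve) * LOSS

# Option 2: ask first, then act only if approved (otherwise do nothing).
ask_first = p_approve * GAIN - ASK_COST

print(f"just act:  {act_now:+.1f}")   # just act:  -1.0
print(f"ask first: {ask_first:+.1f}")  # ask first: +8.0
```

Setting `p_approve` to 1.0 flips the comparison (+10 for acting versus +9 for asking), which is exactly the point being made: a machine that is certain of its objective never asks, and the deferential behaviour comes from the uncertainty.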

And so with social media, this is probably the hardest problem, because it’s not just that it’s doing things we don’t like – it’s actually changing our preferences. That’s a failure mode, if you like, of any AI system that’s trying to satisfy human preferences, which sounds like a very reasonable thing to do. One way to satisfy them is to change them so that they’re already satisfied. I think politicians are pretty good at doing this, and we don’t want AI systems doing it. But it’s sort of a wicked problem, because it’s not as if all the users of social media hate themselves, right? They don’t. They’re not sitting there saying: How dare you turn me into this raving neo-fascist. They believe that their newfound neo-fascism is actually the right thing, and that they were just deluded beforehand. And so it gets to some of the most difficult current problems in moral philosophy. How do you act on behalf of someone whose preferences are changing over time? Do you act on behalf of the present person or the future person? Which one? There isn’t a good answer to that question, and I think it points to gaps in our understanding of moral philosophy. So in that sense, what’s happening in social media is really difficult to unravel.

But I think one of the things I would recommend is simply a change in mindset at the social media platforms. Rather than thinking: OK, how can we generate revenue? – think: what do our users care about? What do they want the future to be like? What do they want themselves to be like? And if we don’t know – and I think the answer is we don’t know; we’ve got billions of users, they’re all different, they all have different preferences and we don’t know what those are – then think about ways of having systems that are initially very uncertain about the true preferences of the user and try to learn more about those, while respecting them. The most difficult part is that you can’t say ‘don’t touch the user’s preferences; under no circumstances are you allowed to change the user’s preferences’, because just reading the Financial Times changes your preferences. You become more informed, you learn about all sorts of different points of view, and then you’re a different person. And we want people to be different people over time – we don’t want to remain newborn babies forever. But we don’t have a good way of saying that this process of changing a person into a new person is good. We think of university education as good, or global travel as good – those usually make people better people – whereas brainwashing is bad, and what cults do to people is bad, and so on. But what’s going on in social media is right at the place where we don’t know how to answer these questions. So we really need some help from moral philosophers and other thinkers.

Robin: You quoted Keynes earlier saying that when the machines are doing all our work for us, humans will be able to cultivate the art of life – they’ll live the fullest possible life in an age of plenty. But of course, he was writing that before everyone was scrolling through social media in their downtime.

Stuart: There’s an interesting story that E.M. Forster wrote. Forster usually wrote novels about British upper-class society and the decay of Victorian morals. But he wrote a science fiction story in 1909 called The Machine Stops, which I highly recommend, where everyone is entirely machine-dependent. They use email, they suffer from email backlogs. They do a lot of video conferencing, or Zoom meetings. They have iPads.

Robin: How could E.M. Forster have written about iPads?

Stuart: Exactly. He called it a video disc but, you know, it’s exactly an iPad. And people become obese from not getting any exercise, because they’re glued to their email on their screens all the time. The story is really about the fact that if you hand over the management of your civilisation to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it.

And you could see Wall-E as a modern version of The Machine Stops, where everyone is enfeebled and infantilised by the machine, because we’ve lost the incentive to actually understand and run our own civilisation. And that hasn’t been possible up to now, right? We put a lot of our civilisation into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person-years of teaching and learning in an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks? I think that’s what the story is about, and that’s something we have to understand ourselves as AI moves forward.

Robin: On the optimistic side, though, current generations have at their fingertips knowledge and wisdom that we just didn’t have 30 years ago. So if I want to read the poetry of William Shakespeare, I don’t need to go to the library or bookshop – it’s right there in front of me – or hear the symphonies or learn how to play the piano. I envy people who are 30 years younger than me because they’ve got all this access to that knowledge if they choose not to be Wall-E.

Stuart: They’ve never heard of William Shakespeare or Mozart. They can tell you the names of all the characters in all the video games. But, you know, I think this comes back to the discussion we were having earlier about how we need to think about our education system. Even Keynes’s rosy view of the future didn’t involve an economy based on interpersonal services – just people living happy lives, with maybe a lot of voluntary interactions and a sort of non-economic system. But he still felt we would need to understand how to educate people to live such a life successfully. And our current system doesn’t do that; it’s not about that. It’s actually educating people to fulfil different sorts of economic functions – designed, some people argue, for the British civil service of the late Victorian period. And how do you do that? We don’t have a lot of experience with that, so we have to learn it.

Kay: You’ve talked a bit about the things we need to do in order to control general purpose intelligence, and we’ve talked about how they’re applicable to social media today. Asimov had three principles of robotics, and I think you’ve got three principles of your own that you hope – and are testing – would work to ensure that all AI prioritises us humans.

Robin: Stuart, before you answer, could I just remind the listeners what the three laws are? I’ve just gone onto my AI – Wikipedia – to find out what they are. So this is from the science fiction writer Isaac Asimov. The first law: a robot may not injure a human being or through inaction allow a human being to come to harm. The second law: a robot must obey the orders given it by human beings except where such orders would conflict with the first law. The third law is: a robot must protect its own existence as long as such protection does not conflict with the first or second law. So, Stuart, what have you come up with along those lines?

Stuart: I have three principles, sort of as a homage to Asimov. But Asimov’s rules in the stories are laws that, in some sense, the algorithms in the robots are constantly consulting so they can decide what to do. I think of the three principles I give in the book as guides for AI researchers – for how you set up the mathematical problem that your algorithm is a solution to.

And so the three principles: the first one is that the only objective for all machines is the satisfaction of human preferences. ‘Preferences’ is actually a term from economics. It doesn’t just mean what kind of pizza you like or who you voted for. It really means your ranking over all possible futures for everything that matters. It’s a very big, complicated, abstract thing, most of which you would never be able to explicate even if you tried, and some of which you literally don’t know. Because I literally don’t know whether I’m going to like durian fruit if I eat it. Some people absolutely love it, and some people find it absolutely disgusting. I don’t know which kind of person I am, so I literally can’t tell you whether I’d like the future where I’m eating durian every day. So that’s the first principle: we want the machines to be satisfying human preferences.

The second principle is that the machine does not know what those preferences are. It has initial uncertainty about human preferences, and we already talked about the fact that this sort of humility is what enables us to retain control. It makes the machines, in some sense, deferential to human beings.

The third principle really just grounds what we mean by preferences in the first two principles: it says that human behaviour is the source of evidence for human preferences. That can be unpacked a bit. Basically, the model is that humans have these preferences about the future, and those preferences are what cause us to make the choices we make. Behaviour means everything we do and everything we don’t do – speaking, not speaking, sitting, reading your email while you’re watching this lecture or this interview, and so on. So with those principles, when we turn them into a mathematical problem, this is what we mean by the ‘assistance game’.
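As a rough sketch of the third principle in code – using an invented one-parameter preference (how much a user values speed over price) and a standard noisy-rational choice model, not Russell’s actual formulation – a machine can start out uncertain, as the second principle requires, and update its beliefs from observed choices:

```python
import numpy as np

rng = np.random.default_rng(1)

candidate_weights = np.linspace(0, 1, 11)  # hypotheses about the user
posterior = np.ones(11) / 11               # start out uncertain

# Two alternatives, each a (speed, price) pair:
# option 0 is fast but expensive, option 1 is slow but cheap.
options = [(3.0, 3.0), (1.0, 1.0)]

def choice_probs(w):
    """Noisy-rational human: picks options in proportion to exp(utility)."""
    utils = np.array([w * speed - (1 - w) * price for speed, price in options])
    e = np.exp(utils - utils.max())
    return e / e.sum()

true_w = 0.8  # the user mostly values speed (hidden from the machine)

for _ in range(50):
    # Observe one choice, then do a Bayesian update over the hypotheses.
    chosen = rng.choice(len(options), p=choice_probs(true_w))
    posterior *= np.array([choice_probs(w)[chosen] for w in candidate_weights])
    posterior /= posterior.sum()

print("most probable weight:", candidate_weights[posterior.argmax()])
# Typically lands near 0.8 after a few dozen observed choices.
```

The behaviour itself is the evidence: nothing is ever asked directly, yet the posterior over the hidden preference narrows with every observed choice.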

You could say: ‘guarantee no harm’. But a self-driving car that followed Asimov’s first law would never leave the garage.

And so there are significant differences from Asimov’s. Some aspects of Asimov’s principles are reflected: the idea of not allowing a human to come to harm, I think you could translate into ‘satisfy human preferences’, since harm is more or less the opposite of preference satisfaction. But the language of Asimov’s principles reflects a mindset that was pretty common up until the 50s and 60s, which was that uncertainty didn’t really matter very much. So you could say: guarantee no harm. But think about it: a self-driving car that followed Asimov’s first law would never leave the garage, because there is no way to guarantee safety on the freeway – you just can’t do it, because someone else can always sideswipe you or squish you. And so you have to take into account uncertainty about the world, about how the other agents are going to behave, and the fact that your own senses don’t give you complete and correct information about the state of the world. There’s lots of uncertainty all over the place. Like: where is my car? Well, I can’t see my car. I’m sitting in my house – it’s down there somewhere, but I’m not sure it’s still there. Someone might have stolen it while we’ve been having this conversation. There’s uncertainty about almost everything in the world, and you have to take account of that in decision-making.
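A toy contrast makes the self-driving-car point numerically – the probabilities and payoffs here are invented purely for illustration, not taken from the book:

```python
# Compare a "guarantee no harm" rule with expected-utility decision
# making. Staying parked carries zero risk but also zero value.
actions = {
    "stay in garage": {"p_harm": 0.0,  "value": 0.0},
    "drive to work":  {"p_harm": 1e-6, "value": 10.0},
}
HARM_COST = -1_000_000  # how heavily a serious accident is weighted

# Asimov-style rule: forbid any action with nonzero probability of harm.
asimov_choice = max(
    (a for a, s in actions.items() if s["p_harm"] == 0.0),
    key=lambda a: actions[a]["value"],
)

# Expected-utility rule: weigh the tiny risk against the benefit.
eu = {a: s["value"] + s["p_harm"] * HARM_COST for a, s in actions.items()}
eu_choice = max(eu, key=eu.get)

print(asimov_choice)  # stay in garage  (the car never leaves)
print(eu_choice)      # drive to work   (10 - 1 > 0)
```

Under the zero-risk rule the only admissible action is to stay put; once uncertainty is priced in rather than forbidden, ordinary behaviour becomes possible again.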

The third law says the robot must preserve its own existence as long as that doesn’t conflict with the first two laws. That’s completely unnecessary in the new framework, because if the robot is useful to humans – in other words, if it is helping us satisfy our preferences at all – then that’s the reason for it to stay alive. Otherwise, there is none. If it’s completely useless to us, or its continued existence is harmful to us, then absolutely it should get rid of itself.

And if you watched the movie Interstellar – which is, from the AI point of view, one of the most accurate and reasonable depictions of how AI should work with us – one of the robots, TARS, just sacrifices itself because, I think, its mass is somehow causing a problem with the black hole, and so it says: ‘OK, I’m going off into the black hole so the humans can escape.’ It’s completely happy about it, and the humans are really upset and say, ‘No, no, no.’ That’s because they were probably brought up on Asimov. They should realise that, actually, this is entirely reasonable.

Kay: One of the things that I very much hope, by having you come in to do this podcast with us, is that everybody who’s listening will end up much more informed about artificial intelligence, because there’s so much that’s incorrect about AI in the media. You’ve just given the prestigious BBC Reith Lectures and are reaching a lot of people through your work. I know it’s hard, but what would be the vital thing you want our listeners to take away about artificial intelligence?

AI is a technology. It isn’t intrinsically good or evil. That decision is up to us.

Stuart: So, as we know from business memos, there are always three points I’d like to get across here. The first point is that AI is a technology. It isn’t intrinsically good or evil. That decision is up to us, right? We can use it well or we can misuse it. There are risks from poorly designed AI systems, particularly ones pursuing wrongly specified objectives.

I actually think we’ve given algorithms in general – not just AI systems – a free pass for far too long. If you think back, there was a time when we gave pharmaceuticals a free pass: there was no FDA or other agency regulating medicines, and hundreds of thousands of people were killed or injured by poorly formulated medicines, by fake medicines, you name it. Eventually, over about a century, we developed a regulatory system for medicines that, yes, is expensive, but most people think it’s a good thing that we have it. We are nowhere close to having anything like that for algorithms, even though, perhaps to a greater extent than medicines, these algorithms are having a massive effect on billions of people in the world, and I don’t think it’s reasonable to assume that it’s necessarily going to be a good effect. I think governments are now waking up to this and really struggling to figure out how to regulate without making a mess of things with rules that are too restrictive.

Algorithms are having a massive effect on billions of people in the world, and I don’t think it’s reasonable to assume that it’s necessarily going to be a good effect.

So, second point: I know we’ve talked quite a bit about dystopian outcomes, but the upside potential of AI is enormous. And going back to Keynes: yes, it really could enable us to live wisely and agreeably and well, free from the struggle for existence that has characterised the whole of human history.

Up to now, we haven’t had a choice: we have to get out of bed, otherwise we’ll die of starvation. In the future, we will have a choice. I hope that we don’t just choose to stay in bed, but have other reasons to get out of it, so that we can actually live rich, interesting, fulfilling lives. That was something Keynes thought about and predicted and looked forward to, but it isn’t going to happen automatically – there are all kinds of possible dystopian outcomes, even when this golden age comes.

And then the third point is that, whatever the movies tell you, machines becoming conscious, deciding that they hate humans and wanting to kill us is not really on the cards.


UNCTAD

International cooperation on the science, technology and innovation frontiers can fast-track sustainable development progress after the COVID-19 crisis, experts say.

The coronavirus pandemic has compelled leaders, policymakers and everyday people to think carefully about what makes healthy and resilient communities.

At the same time, it has prompted a rethink of how to address other pre-pandemic catastrophes, such as climate change, food insecurity and social inequality.

To address these challenges, the UN Commission on Science and Technology for Development (CSTD) will examine how to make science and technology work for all, at its inter-sessional panel for 2020-2021, slated for 18 to 22 January.

During the event, experts will examine two key issues. The first focuses on health and how science, technology and innovation can be used to close the gap on SDG3 for health and wellbeing. The second explores the prospects of blockchain for sustainable development.

International collaboration 

Since the outbreak of COVID-19, scientists in many countries have largely collaborated under the principle of ‘open science’ – where knowledge, methods, data and evidence are made freely available and accessible to everyone.

Collaborative arrangements of open science, especially the mapping of the virus’s genome, helped in the development of the COVID-19 vaccines being administered in various countries.

“In the same way that the development of the vaccines greatly benefited from scientists collaborating in unity for a common cause, governments must also unite in solidarity to ensure that everyone, especially the poorest, gain access to the vaccines,” said Shamika N. Sirimanne, UNCTAD’s director of technology and logistics.

Ms. Sirimanne, who also heads the CSTD secretariat, said international collaboration in scientific research can play a critical role in improving health, equity and sustainable development.

She said the need for countries to come together and share their experiences and lessons learned is no less critical in dealing with emerging issues in the digital age.

“Just as the pandemic sees no borders, digital technologies also transcend national jurisdictions,” she added, emphasizing the importance of the CSTD sessions in helping share lessons in scientific approaches and policy thinking.

The UN and the international community have an important role in shaping global norms and frameworks on frontier technologies.

“It’s important for the international community to better understand the risk-reward tradeoffs,” Ms. Sirimanne said, whether this is for the implementation of blockchain technology in consumer services, or using artificial intelligence, gene editing, and other new and emerging innovations in healthcare.

Avoiding unintended consequences

Digital technologies in health can generate several unplanned risks, with implications for the resilience of social, cultural and political institutions.

These need to be tempered and controlled for as far as possible, according to experts.

For example, “infodemics”, the overabundance of inaccurate health information online, can make it difficult to access trustworthy and reliable guidance on the COVID-19 pandemic.

An area where there is increasing risk is in digital technologies such as blockchain. A widely known application of blockchain technology is cryptocurrency – Bitcoin being the most prominent.

The value of Bitcoin reached an all-time high, topping the $40,000 mark, during the first week of 2021, only to plummet by more than 20% the following week.

While cryptocurrency has remarkable potential to ensure financial inclusion for marginalized people, there is a growing need to prevent systemic risk from speculative activities that create asset bubbles.

For example, if investors take on debt to purchase large sums of cryptocurrency with fiat money (e.g. US dollars or euros), and the cryptocurrency’s exchange rate then falls – as is currently evident – this could lead to defaults on payments owed in the respective fiat currency, potentially leading to personal financial ruin.

“Yet the absence of an international effort for regulating blockchain in financial markets is a serious concern, given the transnational nature of both global finance and digital technologies,” Ms. Sirimanne said. “We need to leverage benefits, but guard against negative impacts.”

The CSTD offers member States a platform to explore ways of strengthening the science-policy interface at the national and global levels and better coordinate STI-focused international cooperation in the spirit of multilateralism.

The CSTD inter-sessional panel will also review progress made in the implementation of and follow-up to the outcomes of the World Summit on the Information Society (WSIS) at the regional and international levels.

These deliberations by experts will then be taken up at the ministerial level during the annual session of the CSTD, scheduled for 17 to 21 May 2021.
