Artificial intelligence (AI) is becoming increasingly prominent in the world around us, even though it has existed for a long time. Internet search engines have long used AI to identify articles that correspond with our queries, enabling product advertisers to correlate their marketing with our search history. Similar programs underpin many of the rewards schemes we use. For example, at the supermarket all our purchases are linked to our account through a loyalty card, and we receive discount coupons or email advertisements based on our shopping history.
The recent advances with Chat Generative Pre-trained Transformer (ChatGPT), an AI chatbot developed by the company OpenAI that allows individuals to ask all kinds of questions and receive automated answers, have brought AI technology to the forefront of discussions about regulatory compliance and standard setting.
AI technologies offer promising prospects for the automation of certain products, but their wide application also raises many questions.
In line with the focus of UNECE’s 70th Commission session on harnessing digital and green transformations for sustainable development over the next two years, the Working Party on Regulatory Cooperation and Standardization Policies (WP.6) is examining artificial intelligence from two perspectives.
First, it is looking at AI from a gender-responsive standards perspective. AI reflects the society in which it is created and may contain inherent gender biases in its code. Even if programmers make efforts to ensure that a product treats men and women equally, it may become biased as it learns from other available material that the programmers may not have foreseen.
UNECE’s Team of Specialists on Gender-Responsive Standards (GRS) under WP.6 is bringing these challenges to light and seeking ways to ensure that all products benefit all consumers, men and women, as equally as possible, at its annual meeting on 24 May 2023.
Second, WP.6 is looking at AI from a product conformity perspective. Products are tested against technical regulations and standards before being considered safe to be placed on the market. This safety is not only physical, in that the product is not expected to directly cause any physical harm to consumers; it can also be strategic, in that the product should not undermine the economic, environmental or governance interests of societies.
Products with integrated AI and/or embedded software pose a particular challenge to this conformity check because they can inherently change over time. Such products may be conformant when they enter the market, but as they evolve and learn, they may become non-conformant. To rethink current compliance models without impeding the innovative advantages of AI and other digital technologies, WP.6 has recently launched a project on “Regulatory compliance of products with embedded artificial intelligence or other digital technologies”.
These and other topics related to regulatory cooperation and standardization policies will be discussed during the WP.6 Second Forum, which will be held virtually on 22-26 May 2023.
To join and find out more, contact the WP.6 secretariat.