Why each country must forge its own definition of what ethical AI would be

In France as elsewhere, each country must now decide what it considers an acceptable use of artificial intelligence (AI), to determine, for example, whether facial recognition in public spaces should be permitted or prohibited. For Ieva Martinkenaite, head of analytics and AI at Scandinavian operator Telenor, public debate is the key to striking a balance between seizing market opportunities and ensuring the ethical use of AI.

In Martinkenaite's view, governments face a heavy task: ensuring that AI regulations are appropriate for their local populations. That is what the Norwegian operator is trying to do as it applies AI and machine learning models to deliver more personalized and targeted sales campaigns to customers, achieve better operational efficiency and optimize its network resources. As the operator's management knows, these technologies can also help fight global warming, for example by switching off antennas when usage is low.

For Ieva Martinkenaite, who also chairs the GSMA's working group on artificial intelligence (the organization bringing together the world's main operators), regulators must take greater account of the commercial impact of these technologies and of the laws that govern them. AI ethics and governance frameworks may look good on paper, but we also need to ensure that they are workable when it comes to adoption, she notes.

Finding the right balance

In fostering the meaningful adoption of AI in our daily lives, nations must strive to find a "balance" between exploiting market opportunities and the ethical use of the technology. Noting that technology is constantly evolving, Martinkenaite acknowledged at a symposium in Singapore that regulations cannot always keep pace.

In drafting AI regulation on the Old Continent, EU lawmakers faced several challenges, including how laws governing the ethical use of AI could be introduced without hampering the flow of talent and innovation, she explained. This proved a significant hurdle, as some feared the regulations would create too much bureaucracy for businesses. The growing dependence on IT infrastructure and machine learning frameworks developed by a handful of internet giants, including Amazon, Google and Microsoft, or Tencent, Baidu and Alibaba, is also a cause for concern, notes Ieva Martinkenaite.

She also recalls the perplexity of EU officials as to how the region can maintain its sovereignty and independence in this emerging landscape. Discussions in Brussels focus more specifically on the need to build key AI capabilities in the region, such as data, computing power, storage and machine learning architectures, she explains. With the focus on achieving greater technological independence in AI, she argues, it is essential that EU governments create incentives and stimulate local investment in the ecosystem.

Benefits at stake

Beyond questions of sovereignty, Ieva Martinkenaite calls for vigilance regarding the possible misuse of AI for political purposes. Recently, Michelle Bachelet, the United Nations High Commissioner for Human Rights, called for the use of AI to be banned where it violates international human rights law. She underscored the urgency of assessing and addressing the human rights risks AI could pose, noting that stricter legislation should be introduced wherever its use presents higher risks to human rights.

"AI can be a force for good, helping societies overcome some of the great challenges of our time. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard for how they affect people's human rights," Bachelet said, echoing the position expressed by Ieva Martinkenaite.

According to Martinkenaite, it is now up to each country to determine what it means by ethical AI. She also points out that, until the accuracy issues associated with analyzing different skin colors and facial features have been properly addressed, this AI technology should not be deployed without human intervention, adequate governance and quality assurance in place. What is at stake in this public debate and in-depth reflection on AI? A wealth of benefits for authorities, businesses and citizens alike, promises the head of research at Telenor.

Source: ZDNet.com
