
Desire to win the AI race could compromise safety

15 December 2020


The rapid race for supremacy in the use of Artificial Intelligence (AI) could compromise safety without tighter regulations, as companies strive to get ahead of their competitors, warns a Teesside University research team.

Dr The Anh Han

Technological advances in AI, such as the increasingly wide use of intelligent technologies in robotics, face recognition systems and self-driving vehicles, are creating fierce competition between businesses, nations and regions, with each seeking to be the first to harness these technologies.

Dr The Anh Han, from the University’s School of Computing, Engineering & Digital Technologies, who led the research team, said the need to be ‘first’ can mean ethical or safety procedures are potentially ignored or underestimated, putting lives at risk by compromising safety in order to win the AI race.

The team explored how mathematical game theory can be used to predict how different incentives and constraints influence the development of AI technologies. The researchers also investigated how these technologies are likely to be developed as forces for good or ill in different environments.

Dr Han said: “There is a temptation to cut corners on safety compliance in order to move more quickly than competitors. But if AI is not developed in a safe way, it could have catastrophic consequences.

“Our research aims to understand what sort of behaviours emerge and how we can use different, efficient incentives to drive the race in a more beneficial direction.”

In a new research paper published in the flagship Journal of Artificial Intelligence Research (https://jair.org/index.php/jair/article/view/12225), the research team constructed a model which captures the key aspects of the dynamics associated with the AI race. Their study showed that the need for regulation depends on the balance between innovation speed and the risk of negative externalities.
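
To make this kind of modelling concrete, here is a minimal, hypothetical sketch of a two-strategy evolutionary game in which developers choose between SAFE and UNSAFE (corner-cutting) development and the population evolves under replicator dynamics. The payoff structure and parameter names (benefit, speed_gain, disaster_prob) are illustrative assumptions for this sketch, not the model or notation used in the paper.

```python
# Illustrative sketch only: a two-strategy "AI race" game (SAFE vs UNSAFE)
# evolving under replicator dynamics. The payoff values below are
# hypothetical placeholders, not the paper's parameterisation.

def payoffs(benefit: float, speed_gain: float, disaster_prob: float):
    """Return a 2x2 payoff matrix for the row player: 0 = SAFE, 1 = UNSAFE.

    UNSAFE developers move faster (speed_gain) but risk a disaster that
    wipes out their gains with probability disaster_prob.
    """
    safe_vs_safe = benefit / 2.0                       # split the prize evenly
    safe_vs_unsafe = 0.0                               # slower side loses the race
    survive = 1.0 - disaster_prob                      # shortcut pays off only if no disaster
    unsafe_vs_safe = survive * (benefit + speed_gain)
    unsafe_vs_unsafe = survive * (benefit + speed_gain) / 2.0
    return [
        [safe_vs_safe, safe_vs_unsafe],
        [unsafe_vs_safe, unsafe_vs_unsafe],
    ]


def replicator_step(x_safe: float, matrix, dt: float = 0.01) -> float:
    """One Euler step of the replicator equation for the SAFE fraction."""
    x = [x_safe, 1.0 - x_safe]
    fitness = [sum(matrix[i][j] * x[j] for j in range(2)) for i in range(2)]
    avg = sum(x[i] * fitness[i] for i in range(2))
    return min(1.0, max(0.0, x_safe + dt * x_safe * (fitness[0] - avg)))


if __name__ == "__main__":
    # Compare a low-risk and a high-risk development environment.
    for p_disaster in (0.1, 0.8):
        m = payoffs(benefit=4.0, speed_gain=2.0, disaster_prob=p_disaster)
        x = 0.5                                        # start with half SAFE
        for _ in range(5000):
            x = replicator_step(x, m)
        print(f"disaster_prob={p_disaster:.1f} -> SAFE fraction ~ {x:.2f}")
```

In this toy setting, a low disaster risk lets the faster, unsafe strategy take over, while a sufficiently high risk of catastrophe tips the population towards safe development, illustrating the qualitative trade-off between innovation speed and negative externalities that the research examines.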

If AI is not developed in a safe way, it could have catastrophic consequences

Dr The Anh Han

The work is a result of funding from The Future of Life Institute, a volunteer-run research and outreach organisation based in the United States that works to mitigate existential risks facing humanity.

Dr Han added: “When defining codes of conduct and regulatory policies for AI, a clear understanding about the time-scale of the race is required for effective AI governance.

“Regulation might not always be necessary and could even have detrimental effects if not applied in the right circumstances. The need for regulation depends on the balance between innovation speed and the risk of negative externalities.”

Dr Han has been working alongside international researchers Professor Luis Moniz Pereira, New University of Lisbon, Professor Tom Lenaerts, Université Libre de Bruxelles and Vrije Universiteit Brussel, and Professor Francisco Santos, University of Lisbon.

Dr Han said: “We were delighted to be awarded a grant by the Future of Life Institute to progress our research. Our ambition is to understand the dynamics of safety compliant behaviours within the ongoing AI research and development race.

“We hope to provide advice on how to timely regulate the present wave of developments and provide recommendations to policy makers and involved participants around prevention of undesirable race escalation.”

Dr Han recently secured a Leverhulme Trust research fellowship award, and the research team is currently examining how different forms of incentives and interventions, such as sanctioning unsafe behaviour or rewarding safe behaviour, can ensure a more beneficial outcome for society despite the tension induced by an AI race, as sketched below.
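
The kind of intervention being studied can be sketched in the same illustrative spirit: below, a hypothetical institution fines unsafe developers (a sanction) and can subsidise safe ones (a reward), shifting which strategy pays better. All names and numbers here (expected_payoff, sanction, reward) are assumptions for illustration, not the team's actual formulation.

```python
# Illustrative sketch: how a sanction on UNSAFE development (or a reward for
# SAFE development) can change which strategy is more profitable. Numbers
# are hypothetical.

def expected_payoff(strategy: str, opponent_safe_fraction: float,
                    benefit: float = 4.0, speed_gain: float = 2.0,
                    disaster_prob: float = 0.1,
                    sanction: float = 0.0, reward: float = 0.0) -> float:
    """Expected payoff of one developer against a mixed population."""
    x = opponent_safe_fraction
    if strategy == "SAFE":
        # Wins half the prize against SAFE opponents, nothing against faster
        # UNSAFE opponents, plus any subsidy for compliant development.
        return x * benefit / 2.0 + reward
    # UNSAFE: faster, but the shortcut fails with disaster_prob and is fined.
    survive = 1.0 - disaster_prob
    vs_safe = survive * (benefit + speed_gain)
    vs_unsafe = survive * (benefit + speed_gain) / 2.0
    return x * vs_safe + (1.0 - x) * vs_unsafe - sanction


if __name__ == "__main__":
    for sanction in (0.0, 3.5):
        safe = expected_payoff("SAFE", 0.5)
        unsafe = expected_payoff("UNSAFE", 0.5, sanction=sanction)
        better = "SAFE" if safe > unsafe else "UNSAFE"
        print(f"sanction={sanction}: SAFE={safe:.2f}, UNSAFE={unsafe:.2f} "
              f"-> {better} pays better")
```

Without a sanction the unsafe shortcut pays better in this toy example; a large enough sanction reverses the comparison, which is the general mechanism by which such incentives can steer the race towards safer outcomes.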
