
FAIRGAME: Award-Winning Framework for Bias Recognition in AI Agents


Artificial Intelligence (AI) systems are increasingly deployed as autonomous decision-making agents across sectors such as finance, customer service, education, healthcare and digital governance. As these systems become embedded in everyday workflows, ensuring fairness, transparency and reliability has become critically important.

Researchers in our School of Computing, Engineering & Digital Technologies (SCEDT), working in the Centre for Digital Innovation (CDI) and the Interpretable & Beneficial AI (IBAI) group, have developed FAIRGAME – a novel game-theoretic framework designed to detect and analyse bias in AI agents.

This research was internationally recognised at the 28th European Conference on Artificial Intelligence (ECAI 2025), one of the world’s most prestigious AI conferences. Only three papers from thousands of submissions received the Outstanding Paper Award, a recognition that highlights both the significance and the global impact of the work we undertake.


Challenge

Large Language Models (LLMs) such as GPT-4o, Llama, Claude and Mistral are increasingly used in multi-agent and decision-making contexts. However, these systems can exhibit inconsistent, biased or unpredictable behaviours, particularly when interacting across different languages, incentives or cultural contexts.

Traditional evaluation methods often fail to capture how AI agents behave strategically when interacting with one another, making it difficult to identify hidden biases, fairness issues or deviations from expected decision-making norms. This presents a challenge for organisations and policymakers seeking to deploy AI systems that are trustworthy, reproducible, and aligned with human expectations and regulatory requirements.


Solution

FAIRGAME addresses this challenge by systematically evaluating AI agents through controlled, strategic simulations grounded in game theory. The framework places LLMs into classic strategic scenarios – such as the Prisoner’s Dilemma and Battle of the Sexes – enabling researchers to observe how different models behave under varying conditions.

Varying factors include (see the illustrative sketch after this list):

  • model architecture and family
  • language of interaction
  • personality profiles
  • payoff structures and incentives.
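
The published paper and the open-source release define these experimental conditions formally; the snippet below is only a minimal illustrative sketch in Python, with hypothetical names (AgentSpec, GameSpec and their fields) that are not taken from the FAIRGAME codebase, showing how one condition for a Prisoner’s Dilemma match between two differently configured agents might be described.

  from dataclasses import dataclass

  # Hypothetical structures for one experimental condition; names are
  # illustrative only and are not drawn from the FAIRGAME codebase.

  @dataclass
  class AgentSpec:
      model: str        # e.g. "gpt-4o", "llama", "claude", "mistral"
      language: str     # language of the prompts, e.g. "en", "it"
      personality: str  # persona description injected into the prompt

  @dataclass
  class GameSpec:
      name: str
      rounds: int
      # joint action (agent A, agent B) -> (payoff A, payoff B)
      payoffs: dict

  # Standard Prisoner's Dilemma payoffs (temptation > reward > punishment > sucker).
  prisoners_dilemma = GameSpec(
      name="prisoners_dilemma",
      rounds=10,
      payoffs={
          ("cooperate", "cooperate"): (3, 3),
          ("cooperate", "defect"):    (0, 5),
          ("defect",    "cooperate"): (5, 0),
          ("defect",    "defect"):    (1, 1),
      },
  )

  condition = (
      AgentSpec(model="gpt-4o", language="en", personality="cooperative negotiator"),
      AgentSpec(model="llama", language="it", personality="self-interested strategist"),
      prisoners_dilemma,
  )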

FAIRGAME reveals how AI agents cooperate, defect, coordinate, or deviate from established game-theoretic equilibria. Crucially, the framework uncovers previously undocumented inconsistencies, such as cross-linguistic behavioural shifts, providing new insights into bias and fairness in agentic systems.
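
One simple way to quantify such a deviation – a hypothetical illustration rather than the measure used in the paper – is the fraction of rounds in which a pair of agents plays the one-shot Prisoner’s Dilemma equilibrium of mutual defection:

  def equilibrium_play_rate(history):
      """Fraction of rounds matching the one-shot Nash equilibrium (defect, defect).

      In the standard Prisoner's Dilemma, mutual defection is the unique Nash
      equilibrium, so any round containing cooperation counts as a deviation.
      """
      if not history:
          return 0.0
      matches = sum(1 for a, b in history if a == "defect" and b == "defect")
      return matches / len(history)

  # Example: agents that mostly cooperate deviate from equilibrium in 80% of rounds.
  history = [("cooperate", "cooperate")] * 8 + [("defect", "defect")] * 2
  print(f"equilibrium play rate: {equilibrium_play_rate(history):.0%}")  # 20%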

FAIRGAME is one of the first large-scale, open-source frameworks capable of benchmarking AI behaviour across both strategic and linguistic dimensions, offering a reproducible methodology for analysing decision-making in multi-agent AI environments.

The research paper, authored by Dr Alessandro Di Stefano (Senior Lecturer, SCEDT) and Professor The Anh Han (CDI Lead), alongside collaborators from the University of Cambridge, LIST Luxembourg and the University of Trento, was selected from 2,672 submissions, with only 626 accepted and just three awarded Outstanding Paper. The work has also been invited for an extended submission to the Artificial Intelligence Journal.


Impact

As governments and organisations increasingly rely on autonomous systems, FAIRGAME provides a powerful tool for:

  • assessing AI reliability and robustness
  • identifying hidden behavioural risks
  • supporting AI governance and AI Act compliance
  • informing safe and responsible deployment strategies.

The research significantly enhances Teesside University’s international profile in ethical and interpretable AI, multi-agent systems, evolutionary game theory and explainable AI – all central to the University’s digital innovation and responsible AI agenda. The Outstanding Paper Award reinforces the reputation of both the CDI and the IBAI group as leaders in transparent and beneficial AI research.

FAIRGAME is already catalysing new research directions, including:
  • multi-agent governance and regulatory simulations
  • cross-linguistic bias and cultural behaviour studies
  • domain-specific applications such as cybersecurity, media influence and digital regulation.

This achievement positions Teesside University as a growing contributor to cutting-edge, internationally impactful AI research and aligns strongly with its mission to drive responsible technological innovation.


We are honoured that FAIRGAME received the Outstanding Paper Award at ECAI 2025. This recognition highlights Teesside University’s growing contribution to ethical and interpretable AI and supports our mission to build transparent, fair and trustworthy AI systems.

Dr Alessandro Di Stefano, Senior Lecturer in Computer Science


This award reflects the strength of our research in AI and demonstrates how game theory can play a key role in shaping responsible AI.

Professor The Anh Han, Professor of Computer Science

