
Research and innovation
Artificial Intelligence (AI) systems are increasingly deployed as autonomous, decision-making agents across sectors such as finance, customer service, education, healthcare and digital governance. As these systems become embedded in everyday workflows, ensuring fairness, transparency and reliability has become critically important.
Researchers in our School of Computing, Engineering & Digital Technologies (SCEDT), working in the Centre for Digital Innovation (CDI) and the Interpretable & Beneficial AI (IBAI) group, have developed FAIRGAME – a novel game-theoretic framework designed to detect and analyse bias in AI agents.
This research was internationally recognised at the 28th European Conference on Artificial Intelligence (ECAI 2025), one of the world’s most prestigious AI conferences. With only three papers receiving the Outstanding Paper Award from thousands of submissions, the recognition highlights both the significance and global impact of the work we undertake.
Large Language Models (LLMs) such as GPT-4o, Llama, Claude and Mistral are increasingly used in multi-agent and decision-making contexts. However, these systems can exhibit inconsistent, biased or unpredictable behaviours, particularly when interacting across different languages, incentives or cultural contexts.
Traditional evaluation methods often fail to capture how AI agents behave strategically when interacting with one another, making it difficult to identify hidden biases, fairness issues or deviations from expected decision-making norms. This presents a challenge for organisations and policymakers seeking to deploy AI systems that are trustworthy, reproducible, and aligned with human expectations and regulatory requirements.
FAIRGAME addresses this challenge by systematically evaluating AI agents through controlled, strategic simulations grounded in game theory. The framework places LLMs into classic strategic scenarios – such as the Prisoner’s Dilemma and Battle of the Sexes – enabling researchers to observe how different models behave under varying conditions.
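To give a flavour of the kind of simulation involved, the sketch below plays a repeated Prisoner's Dilemma between two stand-in agents and tallies payoffs from the classic matrix. This is a hypothetical illustration only, not FAIRGAME's actual code or API; in the real framework the agents would be LLMs rather than the simple rule-based strategies used here.

```python
# Hypothetical illustration (not FAIRGAME's actual API): a repeated
# Prisoner's Dilemma between two stand-in agents, scored with the
# classic payoff matrix. C = cooperate, D = defect.
from typing import Callable, List, Tuple

# (my_move, opponent_move) -> my payoff
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

# An agent maps the history of its opponent's moves to "C" or "D".
Agent = Callable[[List[str]], str]

def always_cooperate(opponent_history: List[str]) -> str:
    return "C"

def tit_for_tat(opponent_history: List[str]) -> str:
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(agent_a: Agent, agent_b: Agent, rounds: int = 10) -> Tuple[int, int]:
    """Run repeated rounds and return the cumulative payoffs (a, b)."""
    seen_by_a: List[str] = []  # B's moves, as observed by A
    seen_by_b: List[str] = []  # A's moves, as observed by B
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a(seen_by_a)
        move_b = agent_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

scores = play(tit_for_tat, always_cooperate)  # → (30, 30): both cooperate throughout
```

Swapping a rule-based strategy for a call to an LLM, and repeating the experiment across languages, prompts and payoff settings, is what lets a framework of this kind surface systematic behavioural differences between models.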
Varying factors include:
- the underlying LLM (for example GPT-4o, Llama, Claude or Mistral)
- the strategic game being played
- the language in which the agents interact
- the incentive and payoff structures on offer
As governments and organisations increasingly rely on autonomous systems, FAIRGAME provides a powerful tool for:
- detecting hidden biases and fairness issues in AI agents
- checking that agent behaviour is consistent and reproducible
- assessing whether AI systems align with human expectations and regulatory requirements
We are honoured that FAIRGAME received the Outstanding Paper Award at ECAI 2025. This recognition highlights Teesside University’s growing contribution to ethical and interpretable AI and supports our mission to build transparent, fair and trustworthy AI systems.
This award reflects the strength of our research in AI and demonstrates how game theory can play a key role in shaping responsible AI.