Centre for Responsible AI
Our interdisciplinary Centre for Responsible AI advances explainability, safety, and fairness in AI through fundamental and applied research.

Overview
It takes a human to harness the power of AI
The rapid advancement of AI, particularly with Large Language Models (LLMs) and generative AI, has largely prioritised system performance.
As AI systems move beyond supporting roles to autonomous operation, where machines perform tasks without human intervention, the stakes grow significantly higher. This shift brings not only technical challenges but also profound ethical, legal, and social concerns around fairness, safety, and explainability.
The success of AI is no longer a matter of accuracy, technical performance, or financial profit, but of how responsibly it connects with and serves human beings.

Ensuring AI is trustworthy and aligns with societal values
Hallucinations, bias, poor explainability, and safety risks in AI systems, especially LLMs, are unacceptable in regulated domains such as law, healthcare, defence, finance, and sustainability, where accuracy and trust are paramount.
Our Centre focuses on the responsible development of AI by enhancing explainability, safety, and fairness. Through cross-disciplinary collaboration and knowledge transfer, we drive impactful applications across regulated sectors and beyond.
Partnerships
Unlocking possibilities through strategic partnerships
At the Centre for Responsible AI, we work with businesses of different sizes and geographical reach to embed our research into real-life applications and practice.
We create social and economic impact via funded projects (EPSRC, Innovate UK, the European Union) and knowledge transfer activities, including Knowledge Transfer Partnerships.
Our research
What we specialise in
At the Centre for Responsible AI, we specialise in Neurosymbolic AI, explainability techniques, and the development of safe and trustworthy systems, including Federated Learning and Large Language Models. Our work spans conversational AI, digital twins, and decision support systems, with applications in regulated domains such as healthcare, sustainability, law, defence, and finance.
Responsible AI
We are enhancing explainability, safety, and fairness in artificial intelligence systems.
Regulation
We are interweaving software solutions with governance, ethics, and regulation.
Real-life applications
We embed our research into real-world practice through partnerships with businesses of all sizes and geographical reach.

Impact through collaboration
We drive impactful applications across healthcare, the circular economy, net zero and sustainability, defence, law, and beyond through cross-disciplinary collaboration and knowledge transfer.
Professor Dhaval Thakker, Centre lead, said: “I'm an applied scientist, deeply committed to harnessing Responsible AI to address contemporary challenges. We have seen a marked shift towards Deep Learning, which seeks to emulate the complexities of our brain using neural networks. These networks enable us to process vast datasets and identify intricate patterns, leading to the creation of bespoke machine learning algorithms. The emergence of Large Language Models, as seen in ChatGPT and Gemini, along with Generative AI, is revolutionising the domain, highlighting the vast potential and versatility of artificial intelligence.
Our research delves into constructing such AI systems responsibly. It's pivotal to establish guardrails, ensuring that these systems act responsibly, explain their decisions and recommendations, and remain trustworthy.”