Centre for Responsible AI
Our Centre for Responsible AI is enhancing explainability, safety, and fairness in artificial intelligence systems.

Overview
It takes a human to harness the power of AI
The rapid advancement of Artificial Intelligence (AI), particularly in large language models and generative AI, has largely prioritised system performance.
However, AI is increasingly used for autonomous operations, enabling machines to perform tasks without human intervention. This brings ethical, legal, social, fairness, safety and explainability challenges.
The success of AI is no longer a matter of accuracy, technical performance or financial profit, but of how it connects with human beings.


Ensuring AI is trustworthy and aligns with societal values
Developing AI responsibly requires a socio-technical approach to the design, deployment and use of AI systems, interweaving software solutions with governance, ethics, and regulation.
Our research focuses on the responsible development of AI, enhancing explainability, safety, and fairness in artificial intelligence systems. Addressing these challenges is essential to ensure that AI systems are not only powerful but also trustworthy and aligned with societal values.
Partnerships
Unlocking possibilities through strategic partnerships
At the Centre for Responsible AI, we work with businesses of different sizes and geographical reach to embed our research into real-life applications and practice.
We create social and economic impact via funded projects (EPSRC, Innovate UK, European Union) and knowledge transfer activities, including Knowledge Transfer Partnerships.
Our research
What we specialise in
At the Centre for Responsible AI, we specialise in Neurosymbolic AI, explainability techniques, safe AI, safe federated learning, safe LLMs, conversational AI, decision support systems, digital twins, and applications in healthcare, sustainability, law and defence.
Responsible AI
We are enhancing explainability, safety, and fairness in artificial intelligence systems.
Regulation
We are interweaving software solutions with governance, ethics, and regulation.
Real-life applications
We work with businesses of different sizes and geographical reach to embed our research into real-life applications and practice.

Impact through collaboration
We drive impactful applications across healthcare, the circular economy, net zero and sustainability, defence, law, and beyond through cross-disciplinary collaboration and knowledge transfer.
Through this research, we are examining human-level explainability using Neurosymbolic and statistical approaches.
We are also developing safe and trustworthy AI systems for cloud and edge computing environments, alongside cross-disciplinary research on ethics and fairness to support theory-driven AI development.
Get in touch