Research centre

Centre for Responsible AI

Our interdisciplinary Centre for Responsible AI advances explainability, safety, and fairness in AI through fundamental and applied research.

Overview

It takes a human to harness the power of AI

The rapid advancement of AI, particularly Large Language Models (LLMs) and generative AI, has largely prioritised system performance over explainability, safety, and fairness.

As AI systems move beyond supporting roles to autonomous operations, where machines perform tasks without human intervention, the stakes grow significantly higher. This shift brings not only technical challenges but also profound ethical, legal, social, fairness, safety, and explainability concerns.

The success of AI is no longer a matter of accuracy, technical performance, or financial profit, but of how responsibly it connects with and serves human beings.

The challenge

Ensuring AI is trustworthy and aligns with societal values

Hallucinations, bias, poor explainability, and safety risks in AI systems, especially LLMs, are unacceptable in regulated domains such as law, healthcare, defence, finance, and sustainability, where accuracy and trust are paramount.

Our Centre focuses on the responsible development of AI by enhancing explainability, safety, and fairness. Through cross-disciplinary collaboration and knowledge transfer, we drive impactful applications across regulated sectors and beyond.

Partnerships

Unlocking possibilities through strategic partnerships

At the Centre for Responsible AI, we work with businesses of different sizes and geographical reach to embed our research into real-life applications and practice.

We create social and economic impact via funded projects (EPSRC, Innovate UK, European Union) and knowledge transfer activities, including Knowledge Transfer Partnerships.

Our research

What we specialise in

At the Centre for Responsible AI, we specialise in neuro-symbolic AI, explainability techniques, and the development of safe and trustworthy systems, including Federated Learning and Large Language Models. Our work extends to conversational AI, digital twins, and decision support systems, with applications in regulated domains such as healthcare, sustainability, law, defence, and finance.

Responsible AI

We are enhancing explainability, safety, and fairness in artificial intelligence systems.

Regulation

We are interweaving software solutions with governance, ethics, and regulation.

Real-life applications

We collaborate with businesses of all sizes and geographical reach to translate our research into real-life applications and practice.

Our impact

Impact through collaboration

We drive impactful applications across healthcare, the circular economy, net zero and sustainability, defence, law, and beyond through cross-disciplinary collaboration and knowledge transfer.

EPSRC National Edge Artificial Intelligence Hub

We are a node of this national AI hub, one of nine hubs funded as part of a £100M investment from EPSRC. We contribute to the "Artificial Intelligence" and "Cyber Security" research themes, focusing on cutting-edge techniques for safe and trustworthy AI at the edge.

EPSRC funded Assuring AI (Large Language Models) based Systems Using Neuro-Symbolic AI

This project focuses on developing neuro-symbolic AI based approaches to assuring Large Language Models. We will explore how neuro-symbolic AI can help with known issues with LLMs, including poor explainability, hallucinations, and a lack of grounding in factual knowledge.
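To illustrate the general idea, not the project's actual method, the toy sketch below shows how a symbolic layer might vet statements produced by a language model against a curated fact base, rejecting claims it cannot derive. All names here (FACTS, derive, query_llm, assure) are hypothetical placeholders, and query_llm stands in for a real model call.

    # Illustrative sketch only: a minimal neuro-symbolic "assurance" loop.
    # Candidate claims from a (mocked) LLM are checked against a symbolic
    # knowledge base before being accepted.

    # Symbolic fact base of (subject, predicate, object) triples.
    FACTS = {
        ("paris", "capital_of", "france"),
        ("berlin", "capital_of", "germany"),
    }

    def derive(facts: set) -> set:
        """Apply a simple symbolic rule: a capital is located in its country."""
        derived = set(facts)
        for (s, p, o) in facts:
            if p == "capital_of":
                derived.add((s, "located_in", o))
        return derived

    def query_llm(question: str) -> tuple:
        # Stand-in for a real model call; returns a candidate triple
        # extracted from the model's answer. Here it hallucinates.
        return ("paris", "located_in", "germany")

    def assure(question: str) -> str:
        """Accept a claim only if the symbolic layer can entail it."""
        claim = query_llm(question)
        if claim in derive(FACTS):
            return f"Grounded answer: {claim}"
        # Unsupported claims are flagged rather than passed to the user.
        return f"Rejected (not entailed by the knowledge base): {claim}"

    if __name__ == "__main__":
        print(assure("Which country is Paris in?"))

In this sketch the hallucinated triple is caught because it cannot be derived from the fact base, which is one way a symbolic component can provide grounding and an explainable audit trail for neural outputs.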

EU funded project on Smart Cities and Open data REuse (SCORE)

The SCORE project focused on increasing the efficiency and quality of public services in cities through smart, open, data-driven solutions. Our team worked with nine European cities and contributed to the development of open solutions for ambient and indoor air quality monitoring, and for flood monitoring.

Please see the list of further projects here.

Get in touch

Have a question for the Centre for Responsible AI?