Dhaval's passion for Responsible AI shines through

Dhaval Thakker is a Professor of Artificial Intelligence (AI) and the Internet of Things (IoT) at the University of Hull. He has over fifteen years of experience working on European Union and industrial projects, researching and delivering innovative solutions.

His broad area of research interest and expertise is interdisciplinary, focusing on the use of AI and IoT technologies for the betterment of society. His current and evolving research interests include exploring the role of AI and IoT technologies in the context of Smart Cities, Digital Health, and the Circular Economy/Net Zero. His research interests also include ethical considerations and the concept of 'Responsible AI'.

Dhaval has been successful as Principal Investigator and Co-Investigator on over £4 million worth of research and innovation projects funded by national and international funding bodies and commercial organisations. Notable funders include the European Commission, Innovate UK, HEFCE, and the GCRF, with projects addressing societal challenges around themes such as Smart Cities, air quality monitoring, flood monitoring, children's health, Industry 4.0 (smart factories), and archaeological and drone-based surveys in conflict zones.

“I'm an applied scientist, deeply committed to harnessing Responsible AI to address contemporary challenges. Over the past half-decade, there has been a marked shift towards Deep Learning, which seeks to emulate the complexities of our brain using neural networks. These networks enable us to process vast datasets and identify intricate patterns, leading to the creation of bespoke machine learning algorithms. The emergence of Large Language Models, as seen in ChatGPT and Bard, along with Generative AI, is revolutionising the domain, highlighting the vast potential and versatility of artificial intelligence. My research delves into constructing such AI systems responsibly. It's pivotal to establish guardrails, ensuring that these systems act responsibly, elucidate their decisions and recommendations, and remain trustworthy.”
Professor Dhaval Thakker

Professor of Artificial Intelligence and Internet of Things

In the health field, Dhaval's research is used to improve the lives of people with asthma. The research project ‘Smart Cities and Open Data REuse (SCORE)’ investigated the use of indoor air quality sensor data to support patients in self-managing their condition. Dhaval's research is also applied in oncology, helping doctors to identify tumours accurately. Other research uses AI applications to support the circular economy by providing information about early routine maintenance of laptop computers to extend the life of these devices.

Dhaval's work on Responsible AI centres on developing a framework based on three fundamental tenets: 'explainability' (AI solutions should not operate as black boxes but should explain their decisions to both engineers and end-users), 'trustworthiness' (AI solutions should be reliable and safe to depend on), and 'equitability' (AI solutions should adhere to societal norms in terms of ethics, legal considerations, and professional standards). In cancer diagnosis, for example, 'explainability' provides doctors with a rationale for how results are generated through AI.

“Responsibility is a cornerstone of what we do in any health research. When we apply AI to solve any health-related issue, whether that's encouraging certain activities which help people to manage their conditions or detecting a tumour, this needs to be done responsibly. For example, AI systems should explain to a doctor why certain images are more likely to be breast cancer than other images. In self-management applications, an AI system should clearly explain why it recommends certain diet or exercise routines for diabetic patients, ensuring both the user and healthcare providers understand its rationale.”
Professor Dhaval Thakker

Professor of Artificial Intelligence and Internet of Things

Can you tell us about your research into Responsible AI?

My research focuses on explainability, trustworthiness and equitability in AI, and on the responsible use of Deep Learning. For example, because of the number of parameters used in a system like ChatGPT, it gives users the impression that it understands what they are saying. So, when you ask a question, it leads you to believe that it understands the context of the question and gives you an answer that seems sensible, logical and often helpful in a range of scenarios.

Although AI can appear to be ground-breaking, it can be misused or behave unpredictably. Hence, my research delves deeply into constructing AI systems with responsibility at the forefront. It's pivotal to establish robust guardrails, ensuring that these platforms not only function effectively but also act responsibly, elucidate their decisions and recommendations, and consistently earn users' trust.

My research is about being responsible with the power AI brings and addressing the challenges that come with it. Working with social scientists, we ask: how do you make AI systems that are equitable? How do you bring in social aspects, ethics, and the law when building AI? And how do we deal with bias that might be inherent when building AI systems?

Could you tell us about your AI research in health?

We work on a number of health issues, with a focus on self-management and support for patients and caregivers managing chronic conditions such as asthma and diabetes. We use Internet of Things sensors, self-reporting, and AI-based monitoring, recommendations and predictive analytics.

For example, I have worked with health scientists on the care of asthma patients, promoting self-care. We look at how AI can use data from indoor air quality sensor devices to offer guidance on the kinds of activities patients can safely do and the kinds they might want to avoid. By using data on air quality, patients can avoid situations that could exacerbate their condition.
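
As a rough illustration of the kind of guidance system described here, the sketch below maps indoor air quality readings to simple activity advice. It is a minimal sketch: the sensor fields, thresholds, and messages are hypothetical stand-ins, not the SCORE project's actual logic.

```python
# Hypothetical sketch: turning indoor air quality readings into activity
# guidance for an asthma patient. Field names, thresholds, and messages
# are illustrative stand-ins, not the SCORE project's actual rules.
from dataclasses import dataclass

@dataclass
class AirQualityReading:
    pm2_5: float  # fine particulate matter (PM2.5), micrograms per cubic metre
    voc: float    # volatile organic compounds, parts per billion

def activity_guidance(reading: AirQualityReading) -> str:
    """Map a sensor reading to simple self-management advice."""
    if reading.pm2_5 > 35 or reading.voc > 500:
        return "Poor indoor air quality: avoid strenuous activity and ventilate the room."
    if reading.pm2_5 > 12:
        return "Moderate air quality: prefer light activity; consider ventilating."
    return "Good air quality: normal activities are fine."

print(activity_guidance(AirQualityReading(pm2_5=40.0, voc=120.0)))
```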

We also collaborate with city councils across the UK and Europe to develop AI-based frameworks that assess interventions aimed at enhancing air quality in their cities. For instance, we have partnered with Bradford City Council to gauge the efficacy of their Clean Air Zone. This involves augmenting air pollution monitoring with AI and IoT, complemented by spatial-temporal AI research to strengthen that monitoring.
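
One simple way to frame such an intervention assessment is a before/after comparison against a control site outside the zone, which accounts for background trends. The sketch below illustrates the idea with invented figures; it is not the actual framework used with Bradford City Council.

```python
# Illustrative before/after comparison for a Clean Air Zone, with a control
# site outside the zone to account for background trends (a simple
# difference-in-differences estimate). All figures are invented.
import statistics

# Mean NO2 (micrograms per cubic metre) per monitoring period, hypothetical
zone_before    = [48.2, 51.0, 47.5, 50.1]  # inside the zone, pre-intervention
zone_after     = [41.3, 39.8, 42.0, 40.5]  # inside the zone, post-intervention
control_before = [46.0, 47.2, 45.8, 46.5]  # comparable site outside the zone
control_after  = [44.9, 45.1, 44.0, 45.3]

zone_change    = statistics.mean(zone_after) - statistics.mean(zone_before)
control_change = statistics.mean(control_after) - statistics.mean(control_before)

# Change attributable to the intervention, net of the background trend
effect = zone_change - control_change
print(f"Estimated NO2 change attributable to the zone: {effect:+.1f} µg/m³")
```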

Can you define what you mean by 'explainability'?

Explainability and transparency are crucial in ensuring Responsible AI. They make systems more accountable and interpretable. In the detection of tumours, for example, can AI systems explain to a doctor why certain images are more likely than others to show breast cancer? Can the AI system explain its decisions? Can AI explain the rules it has developed internally? What does it learn from the data provided? If it can explain, we can 'train' it more accurately and think about how to include different sorts of data.

My research group specialises in a distinct strand of AI known as Neuro-Symbolic AI (NS-AI) to achieve 'explainability' in AI systems. NS-AI integrates neural network approaches with symbolic reasoning to enable AI models to provide more interpretable and logical explanations for their decisions.
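
A minimal sketch of that pattern, under assumptions: a neural model (stubbed out here) supplies perceptual scores, and an explicit symbolic rule layer combines them into a decision together with a human-readable justification. The feature names, rules, and thresholds are all invented for illustration, not the group's actual models.

```python
# Hypothetical neuro-symbolic sketch: a neural network (stubbed out below)
# scores image features, and explicit symbolic rules combine those scores
# into a decision plus a readable explanation. Everything here is invented.
from typing import Callable

def neural_scores(image) -> dict[str, float]:
    """Stand-in for a trained neural network that scores image features."""
    return {"mass_present": 0.91, "irregular_margin": 0.78, "calcification": 0.12}

# Symbolic layer: because the rules are explicit, each one that fires
# can be reported back to the clinician as part of the explanation.
RULES: list[tuple[str, Callable[[dict], bool]]] = [
    ("a mass is detected with high confidence", lambda s: s["mass_present"] > 0.8),
    ("the mass margin appears irregular",       lambda s: s["irregular_margin"] > 0.7),
]

def classify_with_explanation(image) -> tuple[str, list[str]]:
    scores = neural_scores(image)
    reasons = [text for text, rule in RULES if rule(scores)]
    decision = "refer for further review" if len(reasons) >= 2 else "routine follow-up"
    return decision, reasons

decision, reasons = classify_with_explanation(image=None)
print(f"{decision}, because: " + "; ".join(reasons))
```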

Can AI be trustworthy?

There's a lot of work going on to ensure that AI is trustworthy and safe. At the University of Hull, our research delves into the vulnerabilities of machine learning algorithms, exploring scenarios where they might be misled into rendering erroneous decisions. If they can be fooled, it undermines the safety of AI. We are working on techniques to estimate the safety of machine learning classifiers.
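
One standard way a classifier can be misled is an adversarial perturbation in the style of the fast gradient sign method (FGSM), and measuring accuracy under such perturbations gives a crude estimate of robustness. The toy sketch below demonstrates this on synthetic data with a from-scratch logistic regression; it is illustrative only, not the group's actual technique.

```python
# Toy demonstration of fooling a classifier with an FGSM-style adversarial
# perturbation, and using accuracy under attack as a crude robustness estimate.
# Synthetic data and a from-scratch logistic regression; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clusters in 5 dimensions: class 0 around -1, class 1 around +1
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n, 5)), rng.normal(+1.0, 1.0, (n, 5))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit logistic regression by plain gradient descent
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval: np.ndarray) -> float:
    return float(np.mean(((X_eval @ w + b) > 0).astype(float) == y))

# FGSM: nudge each input in the direction that increases its own loss;
# for a logistic model, the gradient of the loss w.r.t. x is (p - y) * w.
eps = 1.0
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
X_adv = X + eps * np.sign(np.outer(p - y, w))

print(f"clean accuracy:       {accuracy(X):.1%}")
print(f"adversarial accuracy: {accuracy(X_adv):.1%}")
```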

What is your personal motivation for working in this field?

I get excited by societal challenges, especially in the health and environmental fields. My drive stems from a desire to harness the power of AI to tackle and find solutions to these challenges.

What are your reasons for joining the University of Hull?

I wanted to come to the University of Hull particularly because of the potential for working with the Hull York Medical School to expand the use of AI to address health challenges. Additionally, the University boasts an impressive research track record in environmental fields and Net Zero, which aligns with another focal area of my research.

Community Impact

Dhaval actively publishes in leading high-impact journals such as the Semantic Web Journal, Elsevier's Engineering Applications of Artificial Intelligence, and Transactions on Emerging Telecommunications Technologies.

His research has been recognised with several best paper awards (2019 at the 10th IEEE conference on IoT, Big Data and AI for a Smart and Safe Future; 2015 at the 12th European Semantic Web Conference). He regularly reviews for the Engineering and Physical Sciences Research Council (EPSRC) and the Natural Environment Research Council (NERC).
