Dhaval Thakker is a Professor of Artificial Intelligence (AI) and the Internet of Things (IoT) at the University of Hull. Dhaval has over fifteen years of experience working on European Union and industrial projects, researching and delivering innovative solutions.
His broad area of research interest and expertise is interdisciplinary, focusing on the use of AI and IoT technologies for the betterment of society. His current and evolving research interests include exploring the role of AI and IoT technologies in the context of Smart Cities, Digital Health, and the Circular Economy/Net Zero. His research interests also include ethical considerations and the concept of 'Responsible AI'.
Dhaval has been successful as Principal and Co-Investigator on over £4 million worth of research and innovation projects funded by national and international funding bodies and commercial organisations. Notable funders have included the European Commission, Innovate UK, HEFCE, and GCRF, supporting projects that address societal challenges around themes such as Smart Cities, Air Quality Monitoring, Flood Monitoring, Children's Health, Industry 4.0 (Smart Factories), and Archaeological and Drone-based surveys in conflict zones.
In the health field, Dhaval's research is used to improve the lives of those with asthma. The research project 'Smart Cities and Open Data REuse (SCORE)' investigated the use of indoor air quality sensor data to support patients in self-managing their condition. Dhaval's research is also applied in oncology, assisting doctors in identifying tumours accurately. Other research applies AI to support the circular economy, providing information about early routine maintenance of laptop computers to extend the life of these devices.
Dhaval's work on Responsible AI centres on developing a framework based on three fundamental tenets: 'explainability' (AI solutions should not operate as a black box but should explain their decisions to both engineers and end-users), 'trustworthiness' (AI solutions should be reliable and dependable in operation), and 'equitability' (AI solutions should adhere to societal norms in terms of ethics, legal considerations, and professional standards). In the cancer diagnosis example above, 'explainability' provides doctors with a rationale for how the AI generates its results.