

University of Hull research helps to improve human-robot interactions

New research from the University of Hull has found that people are usually better at understanding human movements than robotic movements, but this gap can close when robots are positioned amongst environmental cues that make their behaviour more predictable.

The research team wanted to find out if people's understanding of a robot's behaviour changes based on the environment around the robot. They compared how well people could understand the intentions of humans and humanoid robots by having participants observe and predict their actions based on simple social cues such as gaze directed towards an object.

The study revealed that when two objects were present in the environment, people were generally faster at understanding human behaviour than robotic behaviour. However, when only one object was present, people were just as fast to make inferences about human and robotic gazes. This effect was not seen when participants had to predict the behaviour of an object with a non-human-like shape (such as a lectern).

These findings imply that people may understand robotic and human actions in a similar manner when there are fewer objects in the environment for the robot to interact with, suggesting that the complexity of the environment also plays a role in how well people can read the actions of humans and robots.

Dr Emmanuele Tidoni, a Lecturer in Psychology at the University of Hull and the lead author of the report, said: “The research suggests that both the internal characteristics of the robot (such as its human-like shape) and the external environment (like its complexity) can impact how easily people can understand a robot's behaviour. By considering both of these factors, it may be possible to create more intuitive and effective human-robot interactions.”

Dr Tidoni added: “The research has practical applications for improving human-robot interactions. One potential strategy for doing so is programming robots to position themselves in ways that give clear cues about their intended actions. For example, imagine a robot approaching a workstation to pick and hand one object (among many) over to you. If the robot positions itself in a way that the grasping hand is closer to the object-to-be-grasped, then your coordination with the robot will be easier and safer. This could be especially useful in scenarios where robots are integrated into our homes and workplaces.”

The paper, “Body form modulates the prediction of human and artificial behaviour from gaze observation”, was published in the International Journal of Social Robotics by Dr Emmanuele Tidoni (University of Hull), Dr Michele Scandola (University of Verona), Professor Emily Cross (University of Glasgow and Macquarie University), and Dr Nathan Caruana (Macquarie University).
