In the future, socially interactive robots could help seniors age in place or assist residents of long-term care facilities with their activities of daily living. However, people may not actually accept advice or instructions from a robot. A new study from the University of Toronto Engineering suggests that whether they do hinges on how that robot behaves.
“When robots present themselves as human-like social agents, we tend to play along with that sense of humanity and treat them much like we would a person.”
– Lead Author
Simple acts of persuasion, such as asking someone to take their medication, carry a lot of social depth. Before putting robots in those situations, researchers need to better understand the psychology of human-robot interaction. Even among humans, persuasion is complex and involves many variables. One key concept is authority, which can be divided into two types: formal authority and real authority.
Formal authority comes from one's role: teachers, parents and bosses all hold a certain amount of formal authority. Real authority has to do with control over decisions, often backed by incentives such as financial rewards or punishments.
To test these concepts, the researchers set up an experiment in which a humanoid robot helped 32 volunteer test subjects complete a series of simple tasks, such as memorising and recalling items in a sequence.
For some participants, the robot was presented as a formal authority figure: it was the experimenter and the only ‘person’ the subjects interacted with. For others, the lead researcher was presented as the experimenter, and the robot was introduced to help the subjects complete the tasks.
Each participant ran through a set of three tasks twice: once with the robot offering financial rewards for correct answers, simulating positive real authority, and once with the robot imposing financial penalties for incorrect answers, simulating negative real authority.
Generally, the robot was less persuasive when it was presented as an authority figure than when it was presented as a peer helper. This result might stem from a question of legitimacy: because social robots are not commonplace today, people lack both relationships and a sense of shared identity with them, which may make it hard to see a robot as a legitimate authority.
Another possibility is that people might disobey an authoritative robot because they feel threatened by it. The aversion to being persuaded by a robot acting authoritatively seemed to be particularly strong among male participants, who have been shown in previous studies to be more defiant toward authority figures than females, and who may perceive an authoritative robot as a threat to their status or autonomy.
A robot’s social behaviours are critical to its acceptance, use and trust by society as a whole. This ground-breaking research deepens our understanding of how persuasive robots should be developed and deployed in everyday life, and how they should behave to help different demographics, including vulnerable populations such as older adults.
The big takeaway for designers of social robots is to position them as collaborative and peer-oriented rather than dominant and authoritative. The research suggests that robots face additional barriers to successful persuasion beyond the ones that humans face. If robots are to take on these new roles in society, their designers will have to be mindful of that and find ways to create positive experiences through the robots' behaviour.
U.S. researchers have also been developing robots to help humans in other fields, including a modern robotic white cane for the visually impaired community. As reported by OpenGov Asia, a newly improved robotic cane, equipped with a colour 3D camera, an inertial measurement sensor and its own onboard computer, could offer blind and visually impaired users a new way to navigate indoors.