Robots can do many things, but opening doors has long been their kryptonite: it is a surprisingly difficult challenge. Engineers in UC’s Intelligent Robotics and Autonomous Systems Laboratory have solved this complex problem in three-dimensional digital simulations. Now they’re building an autonomous robot that not only can open its own doors but also can find the nearest electric wall outlet to recharge without human assistance.
This simple advance in independence represents a huge leap forward for helper robots that vacuum and disinfect office buildings, airports and hospitals. Helper robots are part of a $27 billion robotics industry, which includes manufacturing and automation.
Some researchers have addressed the problem by scanning an entire room to create a 3D digital model so the robot can locate a door. But that is a time-consuming custom solution that works only for the particular room that is scanned.
An autonomous door-opening operation is a complex task consisting of identifying the door and door handle, navigating the vehicle to the door, operating the door handle, and pulling or pushing the door open while traversing the doorway. Doors also come in different colours and sizes, with handles that might sit slightly higher or lower. Robots have to know how much force to use to overcome a door’s resistance. A self-closing door adds significant difficulty to the last step, because the door usually needs to be held open while the vehicle passes through the doorway.
Because the engineers are using machine learning, the robot has to “teach” itself how to open a door, essentially through trial and error. This can be time-consuming initially, but the robot corrects its mistakes as it goes, and simulations let it accumulate the data and experience needed for training before it attempts the actual task.
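The article does not name the learning algorithm the UC team uses, but trial-and-error learning of this kind is commonly formulated as reinforcement learning. A minimal sketch of tabular Q-learning on a hypothetical toy version of the task (the four-step chain and all rewards below are illustrative assumptions, not the team’s actual model):

```python
import random

# Hypothetical toy model of the door task as a 4-step chain:
# 0 approach door -> 1 grasp handle -> 2 turn handle -> 3 door open (goal).
N_STATES, N_ACTIONS, GOAL = 4, 3, 3
CORRECT = {0: 0, 1: 1, 2: 2}  # the one action that advances each state

def step(state, action):
    """Deterministic toy dynamics: the right action advances, else stay put."""
    if action == CORRECT[state]:
        nxt = state + 1
        return nxt, (1.0 if nxt == GOAL else 0.0)
    return state, -0.1  # small penalty for a wasted move

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy: mostly exploit, sometimes explore (trial and error)
            if rng.random() < eps:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            s2, r = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)  # → [0, 1, 2]: approach, grasp, turn
```

Early episodes wander through wrong actions and collect penalties; the value updates gradually steer the policy toward the correct sequence, mirroring how simulated mistakes become training signal.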
This is a big challenge for other robotic applications that use Artificial Intelligence (AI)-based approaches to accomplish real-world tasks: how to transfer a control policy learned in simulation to reality. Policies trained in digital simulations typically succeed only 60% to 70% of the time in initial real-world applications.
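The article does not say how this sim-to-real gap is narrowed, but one widely used technique is domain randomization: varying the simulated world’s properties during training so the learned policy generalises to real doors. A minimal sketch, with entirely hypothetical parameter names and ranges drawn from the door variations the article describes:

```python
import random

def randomized_door_params(rng):
    """Sample hypothetical door properties for one training episode."""
    return {
        "handle_height_m": rng.uniform(0.9, 1.1),    # handles sit higher or lower
        "opening_force_n": rng.uniform(10.0, 40.0),  # resistance varies per door
        "self_closing": rng.random() < 0.5,          # some doors close themselves
        "hinge_friction": rng.uniform(0.05, 0.3),
    }

def train_policy(episodes, seed=0):
    rng = random.Random(seed)
    seen = []
    for _ in range(episodes):
        params = randomized_door_params(rng)
        seen.append(params)
        # ... run one simulated episode with these door properties and update
        # the policy (omitted); exposure to varied doors is the point.
    return seen

doors = train_policy(3)
```

A policy that only ever sees one simulated door overfits to it; randomizing the door each episode forces the policy to cope with the spread of real-world conditions.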
Another study has shown that robots can indeed open doors, given enough machine learning. Researchers fed complex AI algorithms millions of simulated scenarios that a robot might encounter, and through rigorous trial and error the robot was trained to open doors. Sure enough, simulated robots could generally recognise most doors and open them with little difficulty.
U.S. researchers have been developing robots to perform many tasks, including robots that can be socially interactive and persuasive. As reported by OpenGov Asia, in the future, socially interactive robots could help seniors age in place or assist residents of long-term care facilities with their activities of daily living. However, people may not actually accept advice or instructions from a robot. A new study from the University of Toronto Engineering suggests that acceptance hinges on how the robot behaves.
Generally, the robot was less persuasive when it was presented as an authority figure than when it was presented as a peer helper. This result might stem from a question of legitimacy. Because social robots are not commonplace today, people lack both relationships and a sense of shared identity with them, which may make it hard to see a robot as a legitimate authority.
The big takeaway for designers of social robots is to position them as collaborative and peer-oriented rather than dominant and authoritative. The research suggests that robots face additional barriers to successful persuasion beyond those that humans face. If robots are to take on these new roles in society, their designers will have to be mindful of that and find ways to create positive experiences through their behaviour.