Lifting a heavy object is easy for humans: we simply wrap our fingers and palms around it and hoist. For a robot, however, the same task is an intricate activity that demands careful planning.
To the robot, each spot where the box could touch any point on the carrier’s fingers, arms, and torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.
In light of this, MIT researchers have devised a method to streamline the complex process of contact-rich manipulation planning. They employ an AI technique called “smoothing,” which condenses numerous contact events into fewer decisions. It allows even a basic algorithm to discern an effective manipulation plan for the robot rapidly.
This innovation holds the potential to facilitate the use of smaller, mobile robots in factories capable of manipulating objects with their entire arms or bodies, as opposed to large robotic arms limited to fingertip grasping. Such a shift could lead to reduced energy consumption and cost savings. Furthermore, this technique may prove invaluable for robots on exploration missions to celestial bodies like Mars, where they must adapt swiftly to their surroundings using only onboard computing power. Traditionally, researchers have tackled such contact-rich tasks with a technique called reinforcement learning.
Reinforcement learning is a machine-learning technique in which an agent, such as a robot, learns to accomplish a task by repeatedly attempting actions and receiving rewards as it gets closer to its goal. The researchers note that this method is something of a black box, because the system must learn almost everything about the world through trial and error.
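The trial-and-error loop described above can be illustrated with a toy sketch (this is a generic tabular Q-learning example, not the MIT system): an agent on a short one-dimensional track learns, purely from rewards, which action moves it toward a goal cell.

```python
import random

# Toy reinforcement learning: an agent learns by trial and error which
# action (move left or right) brings it to the goal cell of a 1-D track.
# Rewards arrive only at the goal, so knowledge spreads backwards slowly.
random.seed(0)

GOAL, N_STATES, ACTIONS = 4, 5, [-1, +1]   # goal cell, track length, moves
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls clip the motion
        r = 1.0 if s2 == GOAL else 0.0          # reward only at the goal
        # standard Q-learning update (learning rate 0.5, discount 0.9)
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy moves right (+1) from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Even on this trivial problem the agent wastes many early steps exploring blindly, which hints at why trial-and-error learning becomes expensive when there are billions of contact events to consider.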
In reinforcement learning, the smoothing process occurs implicitly by exploring various contact points and calculating a weighted average of the outcomes. Building upon this concept, MIT scientists devised a straightforward model that executes a comparable form of smoothing. It enables the model to concentrate on fundamental robot-object interactions and predict long-term behaviour. Their findings demonstrate that this approach can be equally proficient as reinforcement learning in generating intricate plans.
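The smoothing idea can be sketched in a few lines (a hedged toy illustration, not the researchers' code). A contact outcome is typically discontinuous: a finger either touches the box or it does not. Averaging that outcome over many randomly perturbed configurations produces a smooth function whose slope tells a planner which way to move, even though the raw outcome has no usable gradient.

```python
import random

# Toy illustration of smoothing a discontinuous contact outcome by
# averaging it over random perturbations (Monte-Carlo smoothing).
random.seed(0)

def contact_force(x):
    """Discontinuous outcome: contact occurs only when x > 0."""
    return 1.0 if x > 0 else 0.0

def smoothed_force(x, sigma=0.5, n_samples=4000):
    """Weighted (here: uniform) average of the outcome under
    Gaussian perturbations of the configuration x."""
    total = sum(contact_force(x + random.gauss(0.0, sigma))
                for _ in range(n_samples))
    return total / n_samples

# The raw outcome jumps abruptly from 0 to 1; the smoothed version
# rises gradually, so simple gradient-based planning becomes possible.
for x in (-1.0, 0.0, 1.0):
    print(x, contact_force(x), round(smoothed_force(x), 2))
```

The smoothed curve rises gradually from near 0 to near 1 around the contact boundary, which is what lets a simple planner reason about "how close" a configuration is to making contact.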
Although smoothing significantly simplifies decision-making, searching for the remaining options can still pose a challenging problem. Therefore, the researchers combined their model with an algorithm capable of swiftly and efficiently exploring all possible choices that the robot might make. This combination substantially reduced computation time, taking only approximately one minute on a standard laptop.
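The pairing of a simple model with an exhaustive search can be sketched as follows (a minimal assumption-laden example; the planner in the actual work is far more sophisticated). Here the "model" predicts the next state given an action, and a breadth-first search enumerates action sequences until it reaches the goal.

```python
from collections import deque

# Toy model-plus-search planner: the model predicts the next state,
# and breadth-first search explores all action sequences, returning
# the shortest one that reaches the goal.
ACTIONS = [-1, +1]

def model(state, action):
    """Simplified dynamics: a position on a line bounded to [-3, 3]."""
    return min(max(state + action, -3), 3)

def plan(start, goal):
    frontier = deque([(start, [])])   # (state, actions taken so far)
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for a in ACTIONS:
            nxt = model(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [a]))
    return None  # goal unreachable

print(plan(-2, 3))  # shortest sequence of moves from -2 to 3
```

Because smoothing has already collapsed the decision space to a handful of options, even this kind of brute-force enumeration finishes quickly; in the researchers' experiments the full pipeline took only about a minute on a standard laptop.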
Initially, they tested their approach in simulations where robotic hands were assigned tasks like repositioning a pen, opening a door, or lifting a plate to a specific configuration. Across all scenarios, their model-based strategy achieved the same level of performance as reinforcement learning but within a fraction of the time. Similar results were observed when they conducted experiments using actual robotic arms.
However, it’s important to note that their model relies on a simplified representation of the real-world environment, limiting its ability to handle highly dynamic motions, such as objects in free fall. While effective for slower manipulation tasks, their approach isn’t suitable for planning actions like having a robot throw a can into a trash bin. In the future, the researchers aim to refine their techniques to address these more dynamic scenarios.
Terry Suh, one of the researchers, emphasised, “If you study your models carefully and truly understand the problem you are trying to solve, you can definitely achieve some gains. There are benefits to doing things that go beyond the black box.”