Many deep learning models struggle to perceive the world the way people do: as a collection of objects and the relationships between them. Without knowledge of these relationships, a robot designed to assist someone in a kitchen would have difficulty following commands such as “grab the spatula on the left side of the stove and place it on the cutting board.”
In an effort to solve this problem, MIT researchers have developed a model that understands the underlying relationships between objects in a scene. Their model represents individual relationships one at a time, then combines these representations to describe the overall scene. This allows the model to generate more accurate images from text descriptions, even when the scene contains multiple objects arranged in different relationships to each other.
This work can be applied in situations where industrial robots need to perform complex, multi-step manipulation tasks, such as stacking items in a warehouse or assembling devices. It also brings the field one step closer to enabling machines that can learn from and interact with their environment more like humans do.
When I look at a table, I cannot tell there is an object in the XYZ location. Our minds do not work that way. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we can use that system to more effectively manipulate and change our environments.
– Yilun Du, PhD Computer Science and Artificial Intelligence Laboratory & Co-Lead Author
The framework the researchers developed can generate an image of a scene based on a text description of objects and their relationships, such as “A wooden table to the left of a blue stool. A red bench to the right of a blue stool.”
Their system breaks such a description into smaller pieces, one for each individual relationship, and models each part separately. Those pieces are then combined through an optimisation process that generates an image of the scene.
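The decomposition step can be illustrated with a minimal sketch. The helper below is hypothetical (the authors' actual pipeline is not described in this detail here); it simply splits a multi-relation description into one clause per relation, so each clause can later be modeled on its own before the results are combined:

```python
# Hypothetical sketch: split a multi-relation scene description into
# one clause per relation. Each clause would then be handled by its
# own relational model before the pieces are recombined.

def split_relations(description: str) -> list[str]:
    """Break 'A wooden table to the left of a blue stool. A red bench
    to the right of a blue stool.' into one clause per sentence."""
    return [s.strip() for s in description.split(".") if s.strip()]

clauses = split_relations(
    "A wooden table to the left of a blue stool. "
    "A red bench to the right of a blue stool."
)
print(clauses)  # one entry per relation
```

A real system would use a learned or grammar-based parser rather than sentence splitting, but the principle is the same: each relation becomes an independent unit.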
The researchers used a machine learning technique called energy-based models to represent the individual object relationships in a scene description. This technique allows them to use one energy-based model to encode each relational description, then assemble them in a way that infers all objects and relationships.
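The composition idea can be sketched in a few lines. In an energy-based model, each relation is scored by an energy function that is low when the scene satisfies it, and a scene satisfying *all* relations is found by minimizing the sum of the energies. The toy example below stands in for the real system: the "scene" is just two object coordinates and the energies are hand-made quadratics rather than learned neural networks, but the summation-and-optimisation structure is the core of the technique:

```python
# Toy sketch of composing energy-based models. Each relation i has an
# energy E_i(x) that is low when scene x satisfies that relation; the
# composed scene is found by gradient descent on the SUM of energies.
# Here x = (position of object 0, position of object 1) on a line, and
# the energies are simple quadratics standing in for learned networks.

import numpy as np

def e_left_of(x):
    # low when object 0 sits 2 units to the left of object 1
    return (x[1] - x[0] - 2.0) ** 2

def e_near_origin(x):
    # low when object 0 is near coordinate 0
    return x[0] ** 2

energies = [e_left_of, e_near_origin]

def combined_energy(x):
    # composing relations = summing their energies
    return sum(e(x) for e in energies)

def num_grad(f, x, eps=1e-5):
    # central-difference gradient, to keep the sketch dependency-free
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

x = np.array([5.0, -3.0])            # arbitrary initial scene
for _ in range(500):                  # descend on the summed energy
    x -= 0.05 * num_grad(combined_energy, x)

print(x)  # converges near [0, 2]: both relations satisfied at once
```

In the actual work, each energy is a neural network conditioned on one relational description, and sampling uses stochastic gradient-based methods rather than plain descent; the key point is that adding a relation just adds a term to the objective.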
The system also works in reverse: with an image, it can find text descriptions that correspond to the relationships between objects in the scene. In addition, their model can be used to edit an image by rearranging the objects in the scene to match a new description.
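The reverse direction follows from the same machinery: instead of optimising the scene to reduce energy, hold the scene fixed and score candidate descriptions, keeping the one whose energy is lowest. The sketch below uses toy energies as stand-ins for the learned models:

```python
# Sketch of the reverse direction: given a fixed "scene", rank candidate
# relational descriptions by their energy and keep the best fit. The
# energy functions here are toy stand-ins for learned models.

def e_left_of(scene):
    # zero when object 0 is left of object 1, positive otherwise
    return max(0.0, scene[0] - scene[1])

def e_right_of(scene):
    return max(0.0, scene[1] - scene[0])

scene = (1.0, 4.0)  # object 0 at x=1, object 1 at x=4

candidates = {
    "object 0 to the left of object 1": e_left_of,
    "object 0 to the right of object 1": e_right_of,
}

best = min(candidates, key=lambda desc: candidates[desc](scene))
print(best)  # the left-of description has the lower energy
```

Editing an image to match a new description is the forward direction again: swap in the energies for the new relations and re-optimise the scene.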
The researchers compared their model with other deep learning methods that were given text descriptions and tasked with generating images showing the associated objects and their relationships. In every case, their model outperformed the baselines.
They also asked people to evaluate whether the images generated matched the original scene description. In the most complex examples, where descriptions included three relationships, 91% of participants concluded that the new model performed better.
While these initial results are encouraging, the researchers would like to see how their model performs on more complex real-world images, with noisy backgrounds and objects blocking each other. They are also interested in eventually incorporating their model into robotic systems, allowing a robot to derive object relationships from videos and then apply this knowledge to manipulate objects in the world.