Predicting what someone is about to do next based on their body language comes naturally to humans but not to computers. In a new study, U.S. engineering researchers unveil a computer vision technique that gives machines a more intuitive sense of what will happen next by leveraging higher-level associations between people, animals, and objects.
The algorithm is a step toward machines being able to make better predictions about human behaviour, and thus better coordinate their actions with humans. The results of the research open several possibilities for human-robot collaboration, autonomous vehicles, and assistive technology.
This Artificial Intelligence (AI) system is the most accurate method to date for predicting video action events up to several minutes in the future. After analysing thousands of hours of movies, sports games, and shows, the system learns to predict hundreds of activities, from handshaking to fist-bumping. When the AI system can’t predict the specific action, it finds the higher-level concept that links the possible actions.
Past attempts in predictive machine learning have focused on predicting just one action at a time. Such algorithms decide whether to classify the action as a hug, high five, handshake, or even a non-action like “ignore.” But when uncertainty is high, most machine learning models are unable to find commonalities between the possible options. This algorithm is the first to learn to reason abstractly about future events.
Prediction is the basis of human intelligence, yet machines make mistakes that humans never would because they lack our ability to reason abstractly. This work is a pivotal step towards bridging that technological gap. The mathematical framework developed by the researchers enables machines to organise events by how predictable they are in the future. The system is aware of uncertainty, providing more specific actions when it is certain and more generic predictions when it is not.
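The idea of backing off to a more generic prediction under uncertainty can be illustrated with a minimal sketch. This is not the researchers' actual framework; the hierarchy, action names, and confidence threshold below are all hypothetical, chosen only to show the mechanism: if no single action is predicted with enough confidence, the probability mass is pooled over each higher-level concept and the best-supported concept is returned instead.

```python
import math

# Hypothetical action hierarchy: fine-grained actions mapped to a
# higher-level parent concept (names are illustrative only).
HIERARCHY = {
    "handshake": "greeting",
    "high_five": "greeting",
    "fist_bump": "greeting",
    "hug": "affection",
}

def softmax(scores):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(actions, scores, threshold=0.5):
    """Return a specific action when confident; otherwise back off to
    the higher-level concept that links the likely actions."""
    probs = softmax(scores)
    best_i = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best_i] >= threshold:
        return actions[best_i]  # confident: predict the specific action
    # Uncertain: pool probability mass over each parent concept.
    parent_mass = {}
    for action, p in zip(actions, probs):
        parent = HIERARCHY[action]
        parent_mass[parent] = parent_mass.get(parent, 0.0) + p
    return max(parent_mass, key=parent_mass.get)

actions = ["handshake", "high_five", "fist_bump", "hug"]

# One clear winner: the model commits to a specific action.
confident = predict(actions, [4.0, 0.5, 0.2, 0.1])   # → "handshake"
# Three greeting-like actions tie: no single action is confident
# enough, so the shared parent concept wins.
uncertain = predict(actions, [1.0, 1.0, 1.0, 0.1])   # → "greeting"
```

In the first call the top action holds over 90% of the probability, so the specific label is returned; in the second, no action clears the threshold, but the three greeting-like candidates jointly dominate, so the system falls back to the concept that links them.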
The technique could move computers closer to being able to size up a situation and make a nuanced decision, instead of a pre-programmed action. This is a critical step in building trust between humans and computers. Trust comes from the feeling that the robot really understands people. If machines can understand and anticipate human behaviours, computers will be able to seamlessly assist people in daily activity.
While the new algorithm makes more accurate predictions on benchmark tasks than previous methods, the next step is to verify that it works outside the lab. If the system can work in diverse settings, there are many possibilities to deploy machines and robots that might improve our safety, health, and security. The researchers plan to continue improving the algorithm’s performance with larger datasets, more computing power, and other forms of geometry.
AI has been adopted for a variety of functions, but it can sometimes create bias or perpetuate bias that already exists. As reported by OpenGov Asia, to counter the negative effect of biases in AI that can damage people’s lives and public trust in AI, the National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing these biases. NIST outlines the approach in a publication titled “A Proposal for Identifying and Managing Bias in Artificial Intelligence”.
Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear. NIST wants to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause.
Bias in AI-based products and systems can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move us closer to agreement on understanding and measuring bias in AI systems.