The ability to reason about causality is one property that sets human intelligence apart from Artificial Intelligence (AI). Modern AI algorithms perform well on clearly defined pattern recognition tasks but fall short of generalising in the ways that human intelligence can.
This often leads to unsatisfactory results on tasks that require extrapolation beyond the training data, such as recognising events or objects in contexts that differ from the training set. To address this problem, U.S. researchers have built a high-fidelity simulation environment designed for developing algorithms that improve the causal discovery and counterfactual reasoning capabilities of AI.
The researchers illustrated the problem with an analogy. Suppose a self-driving car were trained only on the streets of a neighbourhood in Arizona, with few pedestrians, wide, flat roads and street signs written in English, and then deployed on the narrow, busy streets of Delhi, where street signs are written in Hindi. Pattern recognition alone would be insufficient to operate safely.
The patterns in the training set would be very different from the deployment context. Yet humans adapt so quickly to situations they have not previously observed that someone with an Arizona state-issued driving license is allowed to drive a car in India.
The recent paper takes a closer look at this problem: the researchers designed a high-fidelity simulation environment whose causal structure can be explicitly controlled. A more robust AI model does more than simply learn patterns; it captures the causal relationships between events.
Humans do this very well, which enables them to reason about the world and adapt more quickly, and more generally, from fewer examples. Humans often do so by taking a specific action, an intervention, in the environment, observing the result, building a mental model, and then repeating this process to refine the model.
Interventions are one way to learn about systems, such as the traffic in a city, and their underlying causal structure. The presence of confounders, factors that affect both the intervention and the outcome, complicates the task of causal learning. Imagine driving in a city and noticing an ambulance. In this context, the behaviour of other drivers would be a confounder that might affect the path of both the ambulance and a possible follower vehicle.
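To make the role of a confounder concrete, here is a minimal, hypothetical sketch in Python; the variable names and numbers are illustrative and not taken from the paper. A hidden "traffic" factor influences both whether a car follows the ambulance and how long its trip takes, so the observed association between following and delay has the opposite sign of the true causal effect, which only an intervention reveals.

```python
import random
from statistics import mean

def simulate(do_follow=None, n=20_000, seed=0):
    """Toy confounded system: hidden 'traffic' influences both whether a
    car follows the ambulance and how long the trip takes."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        traffic = rng.random()                 # hidden confounder in [0, 1)
        if do_follow is None:                  # observational regime:
            follow = rng.random() < traffic    # heavy traffic -> more following
        else:                                  # interventional regime: do(follow)
            follow = do_follow
        # Outcome: traffic adds delay; following actually *reduces* it by 2.
        delay = 10 * traffic - (2 if follow else 0) + rng.gauss(0, 0.5)
        results.append((follow, delay))
    return results

obs = simulate()
obs_diff = (mean(d for f, d in obs if f)
            - mean(d for f, d in obs if not f))
int_diff = (mean(d for _, d in simulate(do_follow=True))
            - mean(d for _, d in simulate(do_follow=False)))
# Observationally, following is associated with longer delays (obs_diff > 0),
# yet intervening shows following shortens the trip (int_diff near -2).
```

A model that merely learned the observational pattern would conclude that following slows the car down; only the intervention recovers the true causal effect, which is exactly the kind of experiment a controllable simulator makes cheap and safe to run.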
Machine learning researchers are increasingly developing models that involve causal reasoning to increase robustness and generalisability. Computer graphics simulations have proven helpful in investigating problems involving causal and counterfactual reasoning as they provide a way to model complex systems and test interventions safely.
The parameters of synthetic environments can be systematically controlled, thereby enabling causal relationships to be established and confounders to be introduced. However, much of the prior work has approached this via a relatively simplistic set of entities and environments.
This leaves little room to explore, and control for, different causal relationships among entities. One challenge involved in creating more realistic systems is the complexity involved in dictating every state and action of every agent at every timestep. To help address this problem, the researchers proposed giving agency to each entity to create simulation environments that reflect the nature and complexity of these types of temporal real-world reasoning tasks.
This includes scenarios where each entity makes decisions on its own while interacting with the others, like pedestrians on a crowded street and cars on a busy road. Agency makes it possible to define scenarios at a higher level, rather than specifying every single low-level action. The researchers can now more easily model scenarios such as the car following the ambulance described above.
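As an illustration of how agency raises the level of scenario description, here is a hypothetical sketch, not the researchers' code: each agent chooses its own speed from a simple local rule, so the "car follows the ambulance" scenario becomes a one-line goal assignment instead of a per-timestep script.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    name: str
    position: float = 0.0
    speed: float = 1.0
    target: Optional["Agent"] = None   # another agent to follow, if any

    def step(self):
        # Each agent decides its own action; the scenario only assigns goals.
        if self.target is not None:
            gap = self.target.position - self.position
            self.speed = max(0.0, min(2.0, gap))   # close the gap, speed-capped
        self.position += self.speed

def run(agents, steps):
    for _ in range(steps):
        for agent in agents:
            agent.step()
    return {agent.name: agent.position for agent in agents}

# High-level scenario definition: the car follows the ambulance.
# No per-timestep scripting of either vehicle is needed.
ambulance = Agent("ambulance", position=5.0, speed=1.0)
car = Agent("car", position=0.0, target=ambulance)
final = run([ambulance, car], steps=20)
```

Because behaviour emerges from each agent's own rule, the scenario designer can vary high-level conditions, such as traffic density or which vehicle is followed, without rewriting every agent's trajectory.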
The environment that the researchers have developed reflects the real-world, safety-critical scenario of driving. They seek to build a simulation environment that enables controllable scenario generation that can be used for temporal and causal reasoning. This environment allows them to create complex scenarios including different types of confounders with relatively little effort.
AI has been adopted for a variety of functions, such as predicting human behaviour from videos. As reported by OpenGov Asia, U.S. engineering researchers unveiled a computer vision technique that gives machines a more intuitive sense of what will happen next by leveraging higher-level associations between people, animals, and objects.
The algorithm is a step toward machines being able to make better predictions about human behaviour, and thus better coordinate their actions with humans. The results of the research open several possibilities for human-robot collaboration, autonomous vehicles, and assistive technology.