The National University of Singapore’s Associate Professor Gary Tan employs technology to model and forecast human movement and then uses that information to optimise evacuation, reduce accidents, and ease traffic congestion during emergency situations. He is particularly interested in modelling how people would run or flee in such circumstances.
According to Associate Professor Gary, people behave very differently when they panic, and his models try to anticipate what would happen if, for example, an MRT station had to be evacuated because of a bomb threat or a fire.
“In a crisis, each second is crucial. Effective evacuation and rescue plans are essential because delays might result in more fatalities,” says Associate Professor Gary.
Associate Professor Gary, together with his PhD students Wang Chengxin and Muhammad Shalihin bin Othman, has created a framework that uses deep learning methods to track the real-life movement of pedestrians through video feeds. This behaviour is then converted into information that a virtual simulator can use to recreate situations and events that would be too expensive or risky to stage in real life.
The project’s main goal was to create a disaster simulation. This data-driven approach makes it easier to build crowd management tactics that are more effective and delivers a more accurate prediction of human reactions in a crisis.
The framework interprets the movement patterns of pedestrians in real-world video feeds and converts them into data that can be used in a virtual simulator. The technology uses deep learning techniques to identify objects in specific video frames and accurately track them across the video feed.
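The detect-then-track idea described above can be sketched in a few lines. This is a minimal, hand-rolled illustration, not the team's actual system: it assumes a detector (in their case, a deep learning model) has already produced bounding boxes for each frame, and simply links boxes across frames by overlap. The function names and the greedy matching scheme are illustrative assumptions.

```python
# Sketch of linking per-frame detections into tracks by bounding-box
# overlap. The deep learning detector itself is out of scope here; we
# assume boxes (x1, y1, x2, y2) are already given for each frame.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def update_tracks(tracks, detections, threshold=0.3):
    """Greedily assign each detection to the best-overlapping track,
    or start a new track when nothing overlaps enough."""
    next_id = max(tracks, default=-1) + 1
    for box in detections:
        best_id, best_iou = None, threshold
        for tid, history in tracks.items():
            score = iou(history[-1], box)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
            tracks[best_id] = []
        tracks[best_id].append(box)
    return tracks
```

Feeding two consecutive frames whose boxes overlap keeps the same pedestrian on one track; a box with no overlap starts a new one.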
They recreate settings and imitate actions that would be too expensive or risky to be carried out in real life. This enables the researchers to simulate various evacuation and rescue plans to determine the best course of action to take in an emergency.
The methodology is distinctive because, in contrast to earlier pedestrian simulation methods, it takes a data-driven approach, learning human behaviour directly from real-life footage. Because the behaviours are adapted from real video, the simulations are more realistic.
The researchers considerably improved the tracking algorithm that analyses how people move in the videos. A good tracking algorithm is necessary to extract realistic trajectories from real-world recordings; with highly accurate trajectory data, they can simulate realistic human movements and make more accurate predictions.
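To show what "extracting a trajectory" from tracked boxes can mean in practice, here is a hedged sketch: take the centre of each bounding box and smooth the resulting path with a moving average to suppress detection jitter. The helper names are illustrative, not the team's actual API.

```python
# Turning a pedestrian's per-frame bounding boxes into a smoothed
# trajectory a simulator could consume. Boxes are (x1, y1, x2, y2).

def centres(boxes):
    """Centre point (x, y) of each bounding box."""
    return [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]

def smooth(points, window=3):
    """Moving-average smoothing over a sliding window of positions."""
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - window + 1), i + 1
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

boxes = [(0, 0, 2, 2), (1, 0, 3, 2), (2, 0, 4, 2)]
trajectory = smooth(centres(boxes))
```

Real pipelines would also map pixel coordinates into world coordinates before simulation, a step omitted here.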
In testing, the team found that a “greater than expected” number of trajectories was successfully imported into the simulator from the videos.
Since releasing their research, the team has focused on developing further pedestrian monitoring systems. One, known as the Graph-based Temporal Convolutional Network (GraphTCN), uses artificial intelligence to track pedestrians’ temporal and geographical interactions with one another. The outcome is a behavioural model that can more faithfully simulate human movement.
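The core graph idea behind a model like GraphTCN can be illustrated without any learning machinery: pedestrians become nodes, and edges connect those close enough to influence one another. The sketch below only builds such a spatial interaction graph with inverse-distance weights; the actual model learns edge weights and adds temporal convolutions, neither of which is attempted here.

```python
# Minimal spatial interaction graph: connect pedestrians within an
# interaction radius and weight each edge by inverse distance. This is
# only the graph-construction step, not the learned GraphTCN model.
import math

def interaction_graph(positions, radius=2.0):
    """Adjacency dict: node index -> {neighbour index: weight}."""
    graph = {i: {} for i in range(len(positions))}
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if d <= radius:
                graph[i][j] = 1.0 / (d + 1e-6)
    return graph

g = interaction_graph([(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)])
# pedestrians 0 and 1 interact; pedestrian 2 is too far away
```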
The researchers are currently developing a new model that reasons more deeply about behaviour. The Conscious Movement Model (CMM) analyses CCTV footage and other real-world recordings to identify human behavioural patterns. These patterns are used to build a deep learning model that then influences the movements of pedestrians in the simulation.
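As a toy illustration of behavioural patterns influencing simulated motion, the step rule below blends a pedestrian's direct route to a goal with a pull away from a nearby crowd. The 0.7/0.3 weights stand in for what a trained model such as the CMM would supply; everything here is a hypothetical stand-in, not the team's model.

```python
# One simulation step for a pedestrian who heads for a goal while
# shying away from a crowd centre. Weights are illustrative placeholders
# for learned behavioural parameters.
import math

def step(pos, goal, crowd, speed=1.0, goal_w=0.7, avoid_w=0.3):
    """Return the pedestrian's next (x, y) position."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    gn = math.hypot(gx, gy) or 1.0          # direction to goal
    ax, ay = pos[0] - crowd[0], pos[1] - crowd[1]
    an = math.hypot(ax, ay) or 1.0          # direction away from crowd
    dx = goal_w * gx / gn + avoid_w * ax / an
    dy = goal_w * gy / gn + avoid_w * ay / an
    dn = math.hypot(dx, dy) or 1.0
    return (pos[0] + speed * dx / dn, pos[1] + speed * dy / dn)
```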
By incorporating genuine pedestrian movements, the researchers can increase the precision of their predictive simulations. This will let them automatically run optimisation algorithms and suggest the best course of action in various what-if scenarios. Beyond disaster situations, the research can also be used to model the movement of both pedestrians and vehicles in simulations of traffic congestion and accidents.
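The what-if optimisation described above can be caricatured in a few lines: score each candidate plan with a simple evacuation-time model and keep the best. The flow-rate formula here is a deliberately crude stand-in for the team's full crowd simulator, and all numbers are invented for illustration.

```python
# Toy "what-if" search: evaluate candidate evacuation plans (here,
# numbers of open exits) with a crude queueing model and pick the best.

def evacuation_time(people, exits, flow_per_exit=1.5, walk_time=30.0):
    """Seconds to clear the venue: walking time plus queueing at exits."""
    return walk_time + people / (exits * flow_per_exit)

def best_plan(people, candidate_exit_counts):
    """Return the exit count with the lowest predicted clearance time."""
    return min(candidate_exit_counts,
               key=lambda exits: evacuation_time(people, exits))

plan = best_plan(600, [2, 4, 6])
# with more exits open, queueing falls, so 6 exits wins here
```

A real system would replace `evacuation_time` with runs of the pedestrian simulator, but the surrounding search loop has the same shape.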