Researchers from Nanyang Technological University, Singapore (NTU Singapore) have created a computer programme that, from just an audio clip and a face photo, generates lifelike videos mimicking the speaker’s facial expressions and head movements.
DIverse yet Realistic Facial Animations, or DIRFA, is an artificial intelligence programme that combines speech recognition and image processing to create a three-dimensional (3D) video that synchronises a subject’s realistic and consistent facial animations with spoken audio. The programme created by NTU is an improvement over current methods, which have trouble controlling emotions and handling varying head poses.
The team trained DIRFA on more than one million audiovisual clips from more than 6,000 individuals obtained from The VoxCeleb2 Dataset, an open-source database, to predict speech cues and associate them with head movements and facial expressions.
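The article does not describe DIRFA’s training pipeline in detail, but a rough sketch of how audio-to-animation training pairs from such audiovisual clips might be assembled is shown below. The feature dimensions, window size, and the make_training_pairs helper are illustrative assumptions, not published details of DIRFA or the VoxCeleb2 dataset.

```python
# Hypothetical sketch: aligning a clip's audio features with per-frame
# facial-animation targets (head pose + expression). All sizes are assumed.
import numpy as np

AUDIO_DIM = 13   # e.g. MFCC coefficients per audio frame (assumed)
POSE_DIM = 3     # head yaw, pitch, roll (assumed)
EXPR_DIM = 64    # facial expression coefficients (assumed)
WINDOW = 20      # audio frames of context per video frame (assumed)

def make_training_pairs(audio_feats, anim_params):
    """Pair each video frame's animation target with a window of audio features.

    audio_feats: (num_audio_frames, AUDIO_DIM) array
    anim_params: (num_video_frames, POSE_DIM + EXPR_DIM) array
    Returns a list of (audio_window, target) pairs.
    """
    pairs = []
    ratio = len(audio_feats) / max(len(anim_params), 1)
    for t, target in enumerate(anim_params):
        centre = int(t * ratio)
        lo = max(centre - WINDOW // 2, 0)
        window = audio_feats[lo:lo + WINDOW]
        if len(window) < WINDOW:  # pad windows that run past the clip end
            pad = np.zeros((WINDOW - len(window), AUDIO_DIM))
            window = np.vstack([window, pad])
        pairs.append((window, target))
    return pairs

# Toy clip: 200 audio frames and 50 video frames of animation targets.
audio = np.random.randn(200, AUDIO_DIM)
anim = np.random.randn(50, POSE_DIM + EXPR_DIM)
print(len(make_training_pairs(audio, anim)))  # -> 50 training pairs
```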
According to the researchers, DIRFA has the potential to open up new applications in a variety of fields and industries, including healthcare, by enabling chatbots and virtual assistants that are more realistic and sophisticated, thus enhancing user experiences.
Additionally, it could be a valuable tool for people who have trouble using their voices or faces, allowing them to express their feelings and ideas through animated figures or digital representations and thereby improving their ability to communicate.
The study’s lead author, Associate Professor Lu Shijian of NTU Singapore’s School of Computer Science and Engineering (SCSE), stated: “Our study could have a significant and wide-ranging impact as it revolutionises multimedia communication by enabling the creation of incredibly lifelike videos of people speaking, combining techniques like AI and machine learning (ML).”
The programme also builds on previous research and represents a technological advancement: videos created with it include accurate lip movements, vivid facial expressions, and natural head poses, produced using only audio recordings and static images.
“Speech exhibits a multitude of variations,” said first author Dr Wu Rongliang, a PhD graduate of NTU’s SCSE. People pronounce the same words differently in different contexts, with variations in duration, amplitude, tone, and other factors. Beyond its linguistic content, speech also conveys rich information about the speaker’s emotional state and identity factors such as gender, age, ethnicity, and even personality traits.
Dr Wu, a Research Scientist at Singapore’s Agency for Science, Technology and Research (A*STAR) Institute for Infocomm Research, added that the approach is a pioneering effort to improve performance in AI and ML from the standpoint of audio representation learning.
According to the researchers, creating lifelike facial expressions driven by audio poses a complex challenge. There are numerous possible facial expressions for a given audio signal, and these possibilities multiply when dealing with a sequence of audio signals over time.
Because audio has strong associations with lip movements but weaker associations with facial expressions and head positions, the team set out to create talking faces with precise lip synchronisation, rich facial expressions, and natural head movements that corresponded to the provided audio.
To address this, the team first created DIRFA, an AI model that captures the complex relationships between audio signals and facial animations. The team trained their model on over one million audio and video clips from a publicly available database of over 6,000 people.
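As a rough illustration only, since DIRFA’s actual architecture is not described in the article, a minimal sequence model that regresses per-frame head-pose and expression parameters from audio features might look like the sketch below. The AudioToAnimation class, layer sizes, and output split are assumptions, and a deterministic regressor like this ignores the one-to-many nature of the mapping the researchers highlight.

```python
# Assumed, minimal sketch of mapping an audio feature sequence to per-frame
# facial-animation parameters; not DIRFA's published architecture.
import torch
import torch.nn as nn

class AudioToAnimation(nn.Module):
    def __init__(self, audio_dim=13, hidden=256, pose_dim=3, expr_dim=64):
        super().__init__()
        # Recurrent encoder over the audio feature sequence.
        self.encoder = nn.GRU(audio_dim, hidden, num_layers=2, batch_first=True)
        # Project each hidden state to a pose + expression vector.
        self.head = nn.Linear(hidden, pose_dim + expr_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim)
        hidden_states, _ = self.encoder(audio_feats)
        # One animation vector (pose + expression) per audio frame.
        return self.head(hidden_states)

model = AudioToAnimation()
dummy_audio = torch.randn(2, 100, 13)   # 2 clips, 100 audio frames each
anim = model(dummy_audio)               # shape: (2, 100, 67)
print(anim.shape)
```

Because many plausible expressions fit the same audio, work in this area typically has to capture a distribution over animations rather than a single deterministic output, which is the challenge the researchers describe DIRFA as addressing.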
To add more options to DIRFA’s interface and further improve it, the NTU researchers will fine-tune its facial expressions using a broader range of datasets that include more diverse facial expressions and voice audio clips.