The internet and social media have made it easier than ever to spread disinformation, which can influence elections, strengthen conspiracy theories, and sow discord. In response, U.S. researchers are developing a system that automatically detects disinformation narratives as well as the individuals spreading those narratives within social media networks.
To better understand these campaigns, the team launched an artificial intelligence (AI) programme called the Reconnaissance of Influence Operations (RIO). The team published a paper on its work in the Proceedings of the National Academy of Sciences.
The project originated in 2014, when the researchers were studying how malicious groups could exploit social media. They noticed increased, unusual activity in social media data from accounts that appeared to be pushing narratives on behalf of a specific country.
In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyse the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96% precision. The RIO system is unique because it combines multiple analytics techniques to create a comprehensive view of where and how the disinformation narratives are spreading.
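The 96% figure refers to precision: of all the accounts the system flags as disinformation accounts, the fraction that are truly disinformation accounts. A minimal illustration, with hypothetical counts chosen only to show the arithmetic:

```python
# Precision: of everything the system flagged, how much was correct?
# The counts below are hypothetical, chosen only to illustrate the metric.
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of flagged accounts that were genuinely disinformation."""
    return true_positives / (true_positives + false_positives)

# e.g. 96 correctly flagged accounts out of 100 flagged in total
print(precision(96, 4))  # 0.96
```

Note that precision says nothing about how many disinformation accounts were missed; that would be recall, which the article does not report.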
The traditional way to determine who is influential on a social network is to look at activity counts. The researchers found that method insufficient, however, because it does not accurately capture an account's impact on the network.
The RIO system determines not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message. RIO classifies accounts by examining behavioural data, such as whether the account interacts with foreign media and which languages it uses. This approach allows RIO to detect hostile accounts that are active across diverse campaigns.
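In general terms, behaviour-based classification like this means turning each account's activity into numeric features and training a classifier on labelled examples. The sketch below is a minimal, hypothetical illustration of that idea; the feature names are assumptions for the example, not RIO's actual feature set, and the toy data is invented:

```python
# Hypothetical sketch of behaviour-based account classification.
# Feature names and data are illustrative, not taken from the RIO system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy behavioural features per account:
# [interacts_with_foreign_media (0/1), languages_used, posts_per_day]
X = np.array([
    [1, 3, 150],   # high-volume, multilingual, shares foreign media
    [1, 2, 120],
    [0, 1, 5],     # typical personal account
    [0, 1, 8],
], dtype=float)
y = np.array([1, 1, 0, 0])  # 1 = suspected influence-operation account

clf = LogisticRegression().fit(X, y)

# Score a previously unseen account with similar behaviour to the flagged ones
new_account = np.array([[1.0, 3.0, 140.0]])
print(clf.predict(new_account)[0])
```

A real system would use far richer features and labelled data at scale, but the pipeline shape (behavioural features in, account label out) is the same.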
RIO can also detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect only bots. RIO can additionally help its users forecast how different countermeasures might halt the spread of a particular disinformation campaign.
The team envisions RIO being used by both government and industry, and applied beyond social media to traditional media such as newspapers and television. They are also diving into the cognitive side of influence operations: how individual attitudes and behaviours are affected by disinformation. Defending against disinformation is not only a matter of national security but also of protecting democracy.
U.S. researchers have been utilising AI for several purposes on social media, including detecting sarcasm. As reported by OpenGov Asia, they have developed a technique that accurately detects sarcasm in social media text. The team's findings were recently published in the journal Entropy.
The team taught the computer model to find patterns that often indicate sarcasm, and combined that with teaching the program to pick out cue words in sequences that were more likely to signal sarcasm. They trained the model by feeding it large data sets and then checked its accuracy.
The researchers demonstrate the effectiveness of the approach by achieving state-of-the-art results on multiple datasets from social networking platforms and online media. Models trained with the approach are easily interpretable and can identify the sarcastic cues in the input text that contribute to the final classification score. The researchers visualise the learned attention weights on sample input texts to showcase the model's effectiveness and interpretability.
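The interpretability described above comes from attention weights: the model assigns each token a relevance score, a softmax turns those scores into weights summing to one, and the highest-weighted tokens are the cues the model relied on. A minimal sketch of that mechanism, using invented scores rather than output from the actual model:

```python
# Hypothetical sketch: how attention weights surface sarcastic cue words.
# The scores below are invented for illustration, not real model output.
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    """Turn raw relevance scores into weights that sum to 1."""
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

tokens = ["great", "another", "monday", "morning"]
scores = np.array([2.5, 1.0, 0.2, 0.1])  # illustrative relevance scores

weights = softmax(scores)
cue = tokens[int(weights.argmax())]
print(cue)  # the token the model attends to most, here "great"
```

Visualising these per-token weights (for example, as a heat map over the sentence) is what lets researchers see which words drove a classification.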