The National Security Agency (NSA) and its federal agency partners have released new guidance concerning a cybersecurity risk posed by deepfakes, a type of synthetic media. This emerging threat poses cybersecurity challenges for National Security Systems (NSS), the Department of Defense (DoD), and organisations within the Defense Industrial Base (DIB).
They have jointly published a Cybersecurity Information Sheet (CSI) titled “Contextualizing Deepfake Threats to Organizations” to assist entities in recognising, safeguarding against, and responding to deepfake threats. NSA developed the CSI with the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA).
The term “deepfake” encompasses multimedia content that has been either artificially created or manipulated through machine learning and deep learning technologies, which are forms of artificial intelligence (AI). Other phrases used to describe such synthetically generated or altered media include “Shallow/Cheap Fakes,” “Generative AI,” and “Computer Generated Imagery (CGI).”
Candice Rockell Gerstner, an NSA Applied Research Mathematician with expertise in Multimedia Forensics, emphasised that while the tools and methods for altering authentic multimedia have been in existence for some time, the noteworthy shift lies in the ease and widespread adoption of these techniques by cyber actors. This evolving landscape introduces a fresh set of challenges to national security.
Gerstner pointed out that organisations, as well as their employees, must adapt to this changing environment. They need to identify the tradecraft and techniques associated with deepfakes. Moreover, it is essential to establish comprehensive plans to respond to potential deepfake attacks and mitigate their impact effectively. As cyber adversaries increasingly leverage these technologies, recognising and countering deepfake threats becomes paramount to ensuring national security and safeguarding sensitive information.
The joint Cybersecurity Information Sheet (CSI) provides valuable recommendations for organisations to address the challenges posed by synthetic media threats, particularly deepfakes. The CSI suggests implementing various technologies and strategies to counter this emerging threat.
One key recommendation is adopting real-time verification capabilities, which enable organisations to identify and respond to potential instances of deepfake content swiftly. Passive detection techniques are also emphasised for ongoing monitoring and early detection. Furthermore, the CSI highlights the importance of safeguarding high-priority officials and their communications, as they are often the targets of deepfake attempts.
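As a concrete illustration of what "real-time verification" can mean in practice, the sketch below shows one possible out-of-band challenge-response check for confirming that a live caller is who they claim to be. This is a hypothetical example, not a protocol from the CSI: all function names, the pre-shared key, and the overall design are assumptions added here for illustration. The idea is that an impersonator who controls only the audio or video stream of a call cannot answer a challenge derived from a secret exchanged in advance over a separate, trusted channel.

```python
import hmac
import hashlib
import secrets

# Illustrative sketch only: a minimal out-of-band challenge-response
# check against live deepfake impersonation. Names and design are
# hypothetical; the CSI does not prescribe a specific protocol.

def issue_challenge() -> str:
    """Generate a short one-time challenge, to be sent over a
    separate pre-verified channel (not the call itself)."""
    return secrets.token_hex(4)

def expected_response(shared_key: bytes, challenge: str) -> str:
    """Both parties derive the response from a pre-shared key, so a
    deepfake operator without the key cannot compute it."""
    mac = hmac.new(shared_key, challenge.encode(), hashlib.sha256)
    return mac.hexdigest()[:8]

def verify(shared_key: bytes, challenge: str, response: str) -> bool:
    """Constant-time comparison of the response the caller provides."""
    return hmac.compare_digest(expected_response(shared_key, challenge),
                               response)

# A legitimate caller holding the key can answer the challenge...
key = b"pre-shared-out-of-band-key"
challenge = issue_challenge()
assert verify(key, challenge, expected_response(key, challenge))

# ...while an answer not derived from the key fails verification.
# ("not-hex!" can never match a truncated hex digest.)
assert not verify(key, challenge, "not-hex!")
```

In a real deployment the challenge would be spoken or typed during the call and checked against the value derived out of band; the point of the sketch is simply that verification rests on something the impersonated media stream cannot supply.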
In addition to detection, the guidance underscores the significance of minimising the impact of deepfake attacks. This involves information sharing within and across organisations to stay ahead of evolving threats. It also advocates for comprehensive planning and rehearsing of responses to potential exploitation attempts, ensuring that organisations are well-prepared to mitigate the consequences of deepfake incidents. Personnel training is another crucial component, equipping individuals with the skills and knowledge to recognise and respond effectively to synthetic media threats.
The CSI underscores the diverse nature of synthetic media threats, encompassing techniques that jeopardise an organisation’s brand, impersonate its leaders and financial officers, and employ fraudulent communications to gain unauthorised access to networks and sensitive information. These threats highlight the need for a holistic approach to cybersecurity.
Advancements in computational power and deep learning have facilitated the mass production of fake media, making it more accessible and cost-effective. This not only undermines brands and financial stability but also has the potential to incite public unrest by disseminating false information on critical issues such as politics, society, the military, and the economy.
The CSI draws attention to the concerning availability of deep learning-based algorithms on open-source repositories. These accessible resources pose a security risk, as their application requires minimal technical skill and can be executed using little more than a personal laptop. Consequently, the widespread availability of such tools amplifies the urgency of addressing synthetic media threats.
In light of these evolving challenges, the NSA, FBI, and CISA strongly encourage security professionals to adopt the strategies outlined in the report. By proactively implementing these recommendations, organisations can enhance their resilience to the growing threats posed by synthetic media and deepfakes. This collaborative effort among government agencies and security experts is vital to ensuring the integrity of digital information and safeguarding national security.