Oak Ridge National Laboratory (ORNL) has introduced the Centre for AI Security Research (CAISER) to confront threats stemming from the widespread adoption of artificial intelligence by governments and industries worldwide. The move recognises AI's potential benefits in data processing, operational streamlining, and decision-making while acknowledging the security challenges that accompany them.
ORNL and CAISER will collaborate with federal agencies such as the Air Force Research Laboratory’s Information Directorate and the Department of Homeland Security Science and Technology Directorate. Together, they will conduct a comprehensive scientific analysis to assess the vulnerabilities, threats, and risks associated with emerging and advanced artificial intelligence, addressing concerns ranging from individual privacy to international security.
Susan Hubbard, Deputy for Science and Technology at ORNL, underscored the significance of the endeavour: “Understanding AI vulnerabilities and risks represents one of the most significant scientific challenges of our time. ORNL is at the forefront of advancing AI to tackle critical scientific issues for the Department of Energy, and we are confident that our laboratory can assist DOE and other federal partners in addressing crucial AI security questions, all while providing valuable insights to policymakers and the general public.”
CAISER represents an expansion of ORNL’s ongoing Artificial Intelligence for Science and National Security initiative, which leverages the laboratory’s unique capabilities, infrastructure, and data to accelerate scientific advancements.
Prasanna Balaprakash, Director of AI Programmes at ORNL, noted that AI technologies substantially benefit the public and government. CAISER aims to apply the lab’s expertise to comprehensively understand threats and ensure AI’s safe and secure utilisation.
Previous research has highlighted vulnerabilities in AI systems, including adversarial attacks that can corrupt AI models, manipulate outputs, or deceive detection algorithms. Additionally, generative AI technologies can produce convincing deepfake content.
Edmon Begoli, Head of ORNL’s Advanced Intelligent Systems section and CAISER’s founding director, stressed the importance of addressing AI vulnerabilities. CAISER aims to pioneer AI security research, developing strategies and solutions to mitigate emerging risks.
CAISER’s research endeavours will provide federal partners with a science-based understanding of AI risks and effective mitigation strategies, ensuring the reliability and resilience of AI tools against adversarial threats.
The centre will also conduct educational outreach and disseminate information to inform the public, policymakers, and the national security community.
CAISER’s initial focus revolves around four national security domains aligned with ORNL’s strengths: AI for cybersecurity, biometrics, geospatial intelligence, and nuclear nonproliferation. Collaboration with national security and industry partners is critical to these efforts.
Col Fred Garcia, Director of the Air Force Research Laboratory (AFRL) Information Directorate, expressed confidence in CAISER’s role in studying AI vulnerabilities and safeguarding against potential threats in an AI-driven world.
Moreover, as ORNL celebrates its 80th anniversary, CAISER embodies the laboratory’s commitment to solving complex challenges, advancing emerging scientific fields, and making a global impact. With its established cybersecurity and AI research programmes, ORNL is well-suited to pioneer AI security research through CAISER.
Moe Khaleel, Associate Laboratory Director for National Security Sciences at ORNL, highlighted the laboratory’s legacy of scientific discovery across many fields and emphasised CAISER’s role in scientifically observing, analysing, and evaluating AI models to meet national security needs.