The Indian Computer Emergency Response Team (CERT-In) has released an advisory regarding the security concerns associated with applications utilising AI language technology. According to CERT-In, which operates under the Ministry of Electronics and Information Technology, AI language-based models are gaining significant recognition and generating discussions due to their beneficial impact. However, these models can also be exploited by malicious actors to target individuals and organisations.
AI language-based applications are being used to comprehend, analyse and categorise cyber-security context. They are also employed to review security events and logs and to interpret malicious code and malware samples. Other potential uses include vulnerability scanning, translating security code between programming languages or converting code into natural language, conducting security audits of code, performing Vulnerability Assessment and Penetration Testing (VAPT), and integrating the applications with Security Operations Centre (SOC) and Security Information and Event Management (SIEM) systems for monitoring, reviewing and generating alerts.
However, AI-based applications can also be used by threat actors to conduct various malicious activities. For example, they can use the tools to write malicious code, exploit vulnerabilities, conduct scanning, and perform privilege escalation and lateral movement, or to create malware or ransomware tailored to a targeted system.
These tools can generate output in the form of human-like text. Attackers can request well-crafted promotional emails, shopping notifications or software-update messages, including in a target's native language, and use them as phishing lures. The tools can also aid in the creation of fake websites and web pages that host and distribute malware through malicious links or attachments, using domains that mimic those of legitimate AI-based applications.
Furthermore, threat actors can develop fake applications impersonating AI-based ones. Cybercriminals could also use AI language models to scrape information from the internet, such as articles, websites, news and posts, potentially collecting personally identifiable information without the owners' explicit consent.
CERT-In has recommended several advisory measures to mitigate adversarial threats associated with AI applications. These measures include:
- Educating developers and users about the risks and threats involved in interacting with AI language models.
- Verifying domains and URLs before use, as fake sites impersonating AI language-based applications can be used for phishing or other malicious activities.
- Implementing appropriate controls to safeguard the security and privacy of data used.
- Ensuring that the generated text is not exploited for illegal or unethical purposes.
- Employing content filtering and moderation techniques within organisations to prevent the dissemination of malicious links, inappropriate content, or harmful information.
- Conducting regular security audits and assessments to identify and address vulnerabilities in the systems.
- Monitoring user interactions with AI language-based applications for any suspicious or malicious activity.
- Establishing an incident response plan and defining a set of activities to be followed in case of a security incident.
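The content-filtering and monitoring recommendations above can be illustrated with a minimal sketch. The allowlist, the sample text, and the `find_suspicious_links` helper below are hypothetical illustrations, not part of CERT-In's advisory; a real deployment would draw its policy from threat-intelligence feeds or proxy configuration.

```python
import re

# Hypothetical allowlist of domains an organisation trusts (illustrative only).
ALLOWED_DOMAINS = {"example.com", "openai.com"}

# Match http(s) URLs, capturing the domain; an optional path may follow.
URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)(/\S*)?")

def find_suspicious_links(text: str) -> list[str]:
    """Return URLs in generated text whose domain is not on the allowlist."""
    suspicious = []
    for match in URL_RE.finditer(text):
        domain = match.group(1).lower()
        # Treat exact matches and subdomains of allowed domains as trusted.
        if not any(domain == d or domain.endswith("." + d) for d in ALLOWED_DOMAINS):
            suspicious.append(match.group(0))
    return suspicious

sample = ("Update here: https://chatgpt-update.example.net/install "
          "and docs at https://openai.com/docs")
print(find_suspicious_links(sample))  # flags only the lookalike domain
```

A filter like this would sit between the model's output and the user, with flagged links routed to moderation or simply stripped before display.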
As AI becomes more widespread, governments around the world are increasingly interested in regulations to protect users. Recently, the Indian Institute of Technology, Madras established the Centre for Responsible Artificial Intelligence (CeRAI). As OpenGov Asia reported, it is a multidisciplinary research centre dedicated to promoting ethical and accountable advancements in AI-powered solutions for practical applications.
Drawing on its research outputs, CeRAI will offer policymakers sector-specific recommendations and guidelines tailored to the unique requirements of various sectors. Additionally, CeRAI will provide stakeholders with essential toolkits to support ethical and responsible management and monitoring of AI systems during their development and deployment stages. These resources will help promote best practices and ensure that AI technologies are used in a responsible and accountable manner.