The Indian Institute of Technology Madras (IIT-Madras) has established the Centre for Responsible Artificial Intelligence (CeRAI), a multidisciplinary research centre dedicated to promoting ethical and accountable advancements in AI-powered solutions for practical applications.
CeRAI aims to establish itself as a leading research facility at both the national and international levels, focusing on fundamental and applied research in Responsible AI and its direct impact on the deployment of AI systems within the Indian ecosystem. The Centre for Responsible AI was formally inaugurated last month by Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, and conducted its first workshop last week.
At the workshop, an industry expert noted that AI plays a major role in everyday life: whether people realise it or not, they use AI-based technologies every day. It is crucial for policymakers and for innovators at the forefront of technology development to understand the persistent risks and challenges of using these technologies to address societal issues, improve healthcare accessibility and affordability, promote inclusive education, and enhance agricultural productivity. Meeting these needs calls for an AI framework that is unbiased, non-discriminatory, and customisable to India’s unique requirements.
CeRAI’s main focus will be on generating high-quality research outputs, such as articles in high-impact journals and conferences, white papers, and patents. It will also work towards creating technical resources, including curated datasets (both universal and India-specific), software, and toolkits for the field of Responsible AI.
CeRAI will also play a crucial role in advising policymakers. Drawing on its research outputs, the centre will formulate recommendations and guidelines tailored to the unique requirements of individual sectors. In addition, it will provide stakeholders with toolkits to support the ethical and responsible management and monitoring of AI systems during their development and deployment, helping to promote best practices and ensure that AI technologies are used in a responsible and accountable manner.
In addition, the centre intends to establish opportunities for conducting specialised sensitisation and training programmes. These initiatives will enable stakeholders to develop a deeper understanding of the ethical and responsible aspects of AI, empowering them to contribute effectively towards problem-solving in their respective domains. The centre plans to organise a series of technical events, including workshops and conferences, centred around deployable AI systems with a strong emphasis on ethics and responsible practices.
An expert explained that there is now a critical need to attribute responsibility to AI tools and to understand the rationale behind their outputs. Key considerations include supporting human augmentation, mitigating bias in datasets, addressing the risk of data leakage, and implementing new policies alongside extensive research efforts. Establishing trust in AI is increasingly important, with particular emphasis on safeguarding privacy. The expert also observed that as long as domain interpretation remains a human task, AI is unlikely to replace jobs. AI models and their predictions must be explainable and interpretable before they are deployed in critical sectors such as healthcare, manufacturing, and banking and finance.