The National Institution for Transforming India (NITI Aayog) has released part one of an approach document titled ‘Towards Responsible AI for All’. It builds on the National Strategy for Artificial Intelligence (NSAI) discussion paper that the organisation released in June 2018.
The document aims to establish broad ethical principles for the design, development, and deployment of AI in India, drawing on similar global initiatives but grounded in the Indian legal and regulatory context. It explores the ethical implications of ‘narrow AI’, a broad term for AI systems designed to solve specific challenges that would ordinarily require domain expertise.
The paper noted that AI systems have gained prominence over the last decade due to their vast potential to unlock economic value and help mitigate social challenges. It is estimated that AI has the potential to add US$957 billion, or 15% of the current gross value added, to India’s economy by 2035. The AI software market is projected to reach US$126 billion by 2025, up from US$10.1 billion in 2018.
The rapid increase in adoption can also be attributed to the strong value proposition of the technology. The document claimed that NSAI had successfully brought AI to the centre-stage of the government’s reform agenda by underlining its ability to improve healthcare, agriculture, and education. AI can improve the scale of delivery of specialised services (remote diagnosis or precision agriculture advisory) and enhance inclusive access to welfare services (regional language chatbots or voice interfaces), creating new paths for government intervention in these sectors.
In India, large-scale applications of AI are being trialled every day. Uttar Pradesh installed 1,100 CCTV cameras for the Prayagraj Kumbh Mela in 2019; the cameras raised an alert when crowd density exceeded a threshold, and the connected Integrated Command and Control Centres (ICCCs) provided security authorities with the relevant information. In Tamil Nadu, researchers from the Indian Institute of Technology-Madras (IIT-M) are looking to use AI to predict the risk of expectant mothers dropping out of healthcare programmes, so as to improve targeted interventions and health outcomes for mothers and infants.
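The crowd-monitoring setup described above can be thought of as a simple threshold rule over per-camera density estimates. The following is a minimal, illustrative sketch only: the actual Prayagraj deployment is not detailed in the document, and the function name, threshold value, and the assumption that a people-count per camera is already available (e.g. from a vision model) are all hypothetical.

```python
# Hypothetical sketch of threshold-based crowd-density alerting.
# Assumes each camera already reports an estimated people count.

DENSITY_THRESHOLD = 4.0  # people per square metre (illustrative value)

def crowd_alerts(counts, areas, threshold=DENSITY_THRESHOLD):
    """Return (camera_id, density) pairs where estimated crowd
    density exceeds the threshold.

    counts: dict mapping camera_id -> estimated people count
    areas:  dict mapping camera_id -> monitored area in square metres
    """
    alerts = []
    for camera_id, count in counts.items():
        density = count / areas[camera_id]
        if density > threshold:
            alerts.append((camera_id, round(density, 1)))
    return alerts

# Example: two cameras, one over the threshold
counts = {"cam-01": 180, "cam-02": 95}
areas = {"cam-01": 40.0, "cam-02": 30.0}
print(crowd_alerts(counts, areas))  # [('cam-01', 4.5)]
```

In a real deployment the alerts would feed into the ICCC dashboards rather than being printed; the sketch only shows the decision rule itself.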
While the potential of these solutions to improve productivity and efficiency is well established, AI systems must be handled responsibly. The document stated that around the world, there have been instances of unjust uses of AI systems. For example, a system used to allocate healthcare in the United States was found to discriminate against black patients.
The black-box nature of AI and its ‘self-learning’ ability make it difficult to explain its decisions and to apportion liability for errors. AI systems often lack transparency: users may not be aware that they are dealing with a chatbot or an automated decision-making system. Unequal access to AI-powered applications can further widen the digital divide for marginalised populations.
The document outlined seven principles to manage AI systems responsibly:
- Safety and Reliability
- Equality
- Inclusivity and Non-discrimination
- Privacy and Security
- Transparency
- Accountability
- Protection and Reinforcement of Positive Human Values
The second part of the Responsible AI strategy, which will be released shortly, explores means of operationalising the principles across the public sector, private sector, and academia.
Regulating AI is complex, and views differ on what degree and form of regulation would be most effective. India currently does not have an overarching framework governing the use of AI systems. The closest is the draft Personal Data Protection Bill, 2019 (PDP Bill), a comprehensive legislation outlining the privacy protections that AI solutions would need to comply with. It covers limitations on data processing, security safeguards against data breaches, and special provisions for vulnerable users such as children.