India’s public policy think tank, the National Institution for Transforming India (NITI Aayog), has released a paper titled “Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology”. The paper is the third in a series of publications on Responsible Artificial Intelligence (RAI).
In 2018, NITI Aayog published the National Strategy on Artificial Intelligence (NSAI), which included a roadmap for adopting AI in five public sectors. Following this, stakeholder consultations on the responsible use of emerging technologies began in 2019 in partnership with the World Economic Forum. This work culminated in 2021 with a two-part approach document outlining principles for the responsible design and deployment of AI in India. It also set out several measures to operationalise the government’s RAI principles: safety and reliability, inclusivity and non-discrimination, equality, privacy and security, transparency, and accountability.
Facial Recognition Technology
Facial recognition technology (FRT) was selected as the first use case for examining the RAI principles and their operationalisation mechanisms. FRT refers to an AI system that can identify or verify a person from image or video data fed to its underlying algorithm. Essentially, an FRT system carries out three stages: facial detection, feature extraction, and facial recognition.
The paper noted that several government programmes across the world gather biometric facial data when individuals register for certain public services. The purpose is to authenticate an individual manually when they furnish identity documents or access services. The rise in FRT’s computational capabilities allows this authentication to be automated, making the process more efficient and minimising human intervention.
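The automated authentication described above is a 1:1 verification check rather than a 1:N search: the live capture is compared only against the template enrolled at registration. A minimal sketch, assuming embeddings have already been extracted and using an illustrative similarity threshold:

```python
import math


def verify(live: list[float],
           enrolled: list[float],
           threshold: float = 0.8) -> bool:
    """1:1 verification: does the live capture match the enrolled
    template stored at registration time?

    Both arguments are face embeddings from the same (assumed)
    feature extractor; the threshold value here is illustrative.
    """
    dot = sum(x * y for x, y in zip(live, enrolled))
    norm = (math.sqrt(sum(x * x for x in live))
            * math.sqrt(sum(y * y for y in enrolled)))
    return dot / norm >= threshold
```

Verification of this kind is what replaces the manual identity check at a service counter: the decision reduces to a single similarity comparison, which is why it can run with minimal human involvement.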
India is home to some of the most surveilled cities in the world, the paper stated. The density of CCTV cameras in Delhi, Chennai, Hyderabad, Indore, and Bangalore ranks among the highest globally, and the country’s market for surveillance equipment is growing at 20-25% annually.
FRT has attracted widespread debate around its potential benefits and uses. Likewise, its growing adoption for both security and non-security purposes has prompted a closer examination of its risks. The paper outlines two major risk categories. First, there are design-based risks: inaccuracy due to technical factors, bias caused by underrepresentation in training data, and security risks from data breaches and unauthorised access. Second, FRT poses rights-based risks, including infringements of fundamental rights such as individual privacy, equality, free speech, and freedom of movement.
To address these risks, the paper recommends legal and policy reform centred on the principles of privacy and security, and of accountability. It suggests developing a data protection regime and setting up public and private committees to draft guidelines and conduct system audits covering how organisations source, build, deploy, and maintain their data and AI models.
Digi Yatra
As part of its efforts to improve the travel experience, the Ministry of Civil Aviation has initiated the Digi Yatra programme, under which FRT and facial verification technology (FVT) will be used at different process points. It aims to create a seamless, paperless, and contactless check-in and boarding experience for passengers. The programme proposes using FRT to authenticate a passenger’s travel credentials, allowing subsequent airport checkpoints to operate automatically with minimal human involvement. FVT will be used at different airports to identify and verify travellers, validate tickets, and carry out other checks as required by each airport’s operational processes.