The Privacy Commissioner, Michael Webster, has warned both the private and public sectors about safeguarding personal information when using artificial intelligence (AI). In releasing his expectations, Webster emphasised the need for adaptability as AI technology continues to advance rapidly.
Webster urged organisations to exercise caution when handling personal information in AI, stressing the need to balance potential productivity gains against the inherent privacy risks. With increasing reliance on AI systems such as ChatGPT, managing and controlling the information fed into these systems has become a pressing challenge.
One key concern lies in the difficulty of retrieving information once it has been input into AI systems. Unlike traditional data storage methods, where retrieval is relatively straightforward, AI systems often lack easily accessible mechanisms to retrieve specific information. This poses significant challenges in ensuring the accuracy, integrity, and privacy of the data that has been processed.
Furthermore, the controls governing the usage of personal information within AI systems are often limited in scope. As AI technologies rapidly advance, it becomes imperative to establish robust frameworks and mechanisms to regulate and govern the use of personal data. Without adequate controls, there is a risk of unauthorised access, misuse, or inappropriate handling of sensitive information, leading to privacy breaches and potential harm to individuals.
Webster’s warning reminds organisations to carefully evaluate and address these concerns before implementing AI solutions. Organisations must thoroughly assess AI’s potential risks and implications, especially when handling personal or confidential information. This includes weighing the AI system’s privacy impact, its security measures, and the ethical questions it raises.
In light of these concerns, Webster emphasised that agencies should conduct comprehensive due diligence and privacy analyses to ensure compliance with the law before venturing into the realm of generative AI. He advised against incorporating personal or confidential information into AI systems unless explicit confirmation is obtained that such data will not be retained or reused. One alternative approach could involve removing any re-identifiable information from input data.
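The idea of removing re-identifiable information before it reaches an AI system can be illustrated with a minimal sketch. The patterns and placeholder labels below are assumptions for illustration only; genuine de-identification requires far more than pattern matching, but this shows the basic "redact before input" step the guidance describes.

```python
import re

# Hypothetical illustration: strip obvious identifiers (email addresses,
# phone-number-like digit runs) from text before it is sent to an
# external AI system. Pattern names and placeholders are invented here.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match of every pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 021 555 1234."
print(redact(prompt))
```

Note what this sketch does not catch: names, addresses, and context that could still re-identify someone, which is why the Commissioner's guidance pairs redaction with due diligence rather than treating it as sufficient on its own.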
Considering the potential privacy implications, staff members were encouraged to evaluate the necessity and proportionality of using AI and to explore alternative methods if available. Seeking approval from supervisors and privacy officers and transparently informing customers about the use of AI were recommended practices. Additionally, Webster emphasised the importance of human review of any AI-generated information before taking any consequential actions based on it.
Webster further outlined the steps agencies should undertake when considering the implementation of AI. These include conducting due diligence, performing a privacy analysis, and carrying out a Privacy Impact Assessment. Seeking feedback from impacted communities, including Māori, and requesting clarification from AI providers regarding privacy protections designed into their systems were identified as critical components of the evaluation process.
Prior to this, the commissioner had communicated his concerns to government agencies, cautioning against the hasty adoption of AI without proper assessment. He underscored the need for a holistic, government-wide response to the emerging challenges posed by the technology.
The Privacy Commissioner’s warnings emphasise the imperative of preserving privacy rights when utilising AI. Organisations must exercise caution, conduct thorough assessments, and implement adequate safeguards to protect personal information in the face of AI’s evolving landscape.