The New Zealand government firmly believes that artificial intelligence will drive significant economic and social benefits for New Zealand. At the same time, it can introduce risks and challenges to the nation that cannot be overlooked.
The AI Forum of New Zealand recently published a set of guiding principles to help build public trust in the development and use of AI across New Zealand. The AI Principles aim to:
- start a conversation about the importance of ethical and legal considerations in AI design, development, deployment and operation
- raise awareness that AI ethical and legal issues need to be identified and addressed early on
- set the groundwork for more detailed, practical guidance that will define and inform good practice in AI design, development and implementation in New Zealand.
Providing high-level guidance for anyone involved in designing or developing AI, the principles are a first step in helping New Zealanders gain access to trustworthy AI. Explaining the Forum’s stance, its Executive Director, Emma Naji, said: “We can’t turn away from the challenges and risks that AI can present, especially when good intent or inclusivity are absent.”
The principles are in line with other global documents aiming to empower, foster and monitor the responsible development of trustworthy artificial intelligence systems. The framers have drawn upon the common themes emerging from the growing body of published AI ethical principles, including the OECD Recommendation on Artificial Intelligence, the European Commission’s Ethics Guidelines for Trustworthy AI, the iTech Principles, the Montréal Declaration and Singapore’s Proposed Model AI Governance Framework.
However, they have attempted to condense the language and concepts used elsewhere into something more accessible and relevant to New Zealand. A key focus of the group has been to make sure the AI principles are simple, succinct and user friendly.
Naji pointed out that AI does not exist in a legal void. Existing laws and regulations, such as those covering privacy, human rights and liability, all apply, though this is often overlooked where AI is concerned. The Forum is keen to raise awareness that ethical and legal issues need to be identified and addressed as early as possible. The principles, she confirmed, have been put in place in the hope they will prompt AI stakeholders to start thinking about how to incorporate processes and measures that work towards the ethical development of AI.
The fundamental purpose of publishing these principles is not to provide a long list that leaves people feeling intimidated, but rather a succinct, useful reference point that can help lay the groundwork for building and informing good practice. It is envisaged that anyone developing AI who follows these principles will be better able to understand and address risks and unintended consequences.
The government has a comprehensive role to play in ensuring AI serves the long-term, inclusive public good. The local AI community welcomes the announcement of the Digital Council, an independent ministerial advisory group designed to advise the government from a whole-of-society perspective.
As such, the AI Forum will be offering as much support as possible to the government as it embarks on these important steps. The council will advise on how to maximise the societal benefits of digital and data-driven technologies to increase equality and inclusivity, wellbeing and community resilience.
“We are all responsible for the application and use of technology, including ensuring New Zealanders can take advantage of the opportunities and benefits AI can offer. We will be holding some events to enable further discussion on ethical AI,” Naji says.
Sharing best practice will become increasingly important, as commitments to ethical AI are only valuable if they are implemented.