Artificial intelligence receives a fair share of distrust. These doubts stem from concerns about civil rights and liberties. How is data being used to improve decision making? Can end users trust these processes? Is artificial intelligence being used fairly?
The non-profit group The Public Voice, established by the Electronic Privacy Information Center (EPIC), wants to draw a line in the sand. The group has drafted Universal Guidelines for Artificial Intelligence to inform and improve the design and use of AI. The proposal was announced at the 2018 International Conference of Data Protection and Privacy Commissioners.
Keeping an Eye on AI
From the financial sector to healthcare, AI has a wide range of use cases. Private and public sector organisations alike have jumped on the bandwagon. Around the world, governments are drafting national blueprints for digital transformation, with AI invariably hardwired into the process.
Comparatively, however, there has been little public policy to regulate AI, even though AI-driven decision making has direct consequences for fundamental rights of fairness, accountability and transparency. The range of areas affected is surprisingly vast: the non-profit states that modern data analysis produces significant outcomes for people in employment, housing, credit, commerce, and even criminal sentencing.
What’s worrying is that these data analysis techniques remain largely unregulated, and few public officials, let alone end users, have full information about how AI is affecting them. Individuals are in the dark about whether the AI decisions generated for them are accurate, fair or even tailored uniquely to them.
In rolling out a set of Universal Guidelines, the advocacy group hopes AI’s usage will maximise benefits, minimise associated risks and, overall, protect human rights. The Guidelines build on the work of scientific societies, think tanks, NGOs and international organisations. Elements of human rights doctrine, data protection law and ethical guidelines were consulted and incorporated during the drafting process. As a result, the Guidelines feature established principles for AI governance and propose new principles not found in similar policy frameworks.
The end goal is for the Guidelines to be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. The Public Voice says the primary responsibility for AI systems should reside with the institutions that fund, develop and deploy them.
Twelve Principles for AI Governance
The Guidelines set out twelve principles:
- Right to Transparency: Individuals have the right to know how an AI decision affects them, including the factors, logic and techniques that produced the outcome.
- Right to Human Determination: Individuals have the right to a final determination made by a human.
- Identification Obligation: The public should be aware of institutions responsible for an AI system.
- Fairness Obligation: Institutions have a duty to ensure AI systems are not unfairly biased and do not make impermissible discriminatory decisions.
- Assessment and Accountability Obligation: AI should only be deployed after a proper evaluation of its purpose, objectives, benefits and risks has been carried out. Institutions are also accountable for AI decisions.
- Accuracy, Reliability, and Validity Obligations: Institutions must ensure the accuracy, reliability and validity of the decisions their AI systems produce.
- Data Quality Obligation: Data provenance must be established by institutions. The quality and relevance of input data for algorithms are the institution’s responsibility.
- Public Safety Obligation: Institutions must assess public safety risks and ensure their AI systems do not endanger the public.
- Cybersecurity Obligation: AI systems must be secured against cybersecurity threats.
- Prohibition on Secret Profiling: No institution should establish or maintain a secret profiling system. Information asymmetry should be avoided to ensure independent accountability.
- Prohibition on Unitary Scoring: Governments should not establish or maintain a general-purpose score of their citizens or residents.
- Termination Obligation: An institution must terminate an AI system once it no longer has human control over it.
The Guidelines have been forwarded to the National Science Foundation of the United States for adoption. The non-profit believes the principles sit well with the seven strategies the United States has already set out, which should make adopting the twelve principles easier.
Presently, more than 200 experts and 50 organisations have signed the Guidelines.