As companies and decision-makers increasingly look to machine learning to make sense of large amounts of data, ensuring the quality of training data used in machine learning problems is becoming critical. That data is coded and labelled by human data annotators—often hired from online crowdsourcing platforms—which raises concerns that data annotators inadvertently introduce bias into the process, ultimately reducing the credibility of the machine learning application’s output.
A team of U.S. researchers has developed a new scientific method to screen human data annotators for bias, ensuring high-quality data inputs for machine learning tasks. The researchers have also designed an online platform that allows for scaling up the screening process.
We have created a very systematic and scientific method for finding good data annotators. This much-needed approach will improve the outcomes and realism of machine learning decisions around public opinion, online narratives and perception of messages.
– Lead Researcher
They investigated how five common attitude and knowledge measures related to Brexit could be combined into an anonymized profile of data annotators who are likely to label data used for machine learning applications in the most accurate, bias-free way. They tested 100 prospective data annotators from 26 countries using several thousand social media posts from 2019.
The lead researcher explained that the team wanted to use machine learning to detect what people were talking about: in the case of their study, whether people were discussing Brexit in a positive or negative way. Are data annotators likely to label data as merely reflecting their own beliefs about leaving or staying in the EU because their bias clouds their performance? Annotators who can set aside their own beliefs, the researcher noted, provide more accurate data labels, and the team's research helps find them.
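The screening idea described above can be sketched in code. The following is a minimal, hypothetical illustration, not the researchers' actual protocol: it assumes we have gold-standard labels for a calibration set and each annotator's own stance, and it flags annotators whose errors systematically favour that stance. All names and thresholds are invented for the sketch.

```python
def accuracy(labels, gold):
    """Fraction of an annotator's labels that match the gold labels."""
    return sum(a == g for a, g in zip(labels, gold)) / len(gold)

def bias_alignment(labels, gold, stance):
    """Fraction of an annotator's errors that agree with their own stance
    (stance = the label the annotator personally favours)."""
    errors = [a for a, g in zip(labels, gold) if a != g]
    if not errors:
        return 0.0
    return sum(a == stance for a in errors) / len(errors)

def screen(annotators, gold, min_acc=0.8, max_bias=0.6):
    """Keep annotators who are accurate and whose errors do not
    systematically favour their own stance. Thresholds are illustrative."""
    return [
        name for name, (labels, stance) in annotators.items()
        if accuracy(labels, gold) >= min_acc
        and bias_alignment(labels, gold, stance) <= max_bias
    ]

gold = ["pos", "neg", "pos", "neg", "pos"]
annotators = {
    "a1": (["pos", "neg", "pos", "neg", "pos"], "pos"),  # matches gold
    "a2": (["pos", "pos", "pos", "pos", "pos"], "pos"),  # errors all match stance
}
print(screen(annotators, gold))  # ['a1']
```

Here `a1` passes both checks, while `a2` fails on accuracy and every one of its errors coincides with its own stance, which is exactly the pattern the study's screening is meant to catch.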
The team’s method is scalable in two ways. First, it cuts across domains, impacting data quality for machine learning problems related to transportation, climate and robotics decisions in addition to health care and geopolitical narratives relevant to national security. Second, the team’s open-source, interactive, web-based platform scales up the measurement of attitudes and beliefs, allowing for profiling of larger groups of prospective data annotators and faster identification of the best hires.
This research strongly indicates that data annotators’ morals, prejudices and prior knowledge of the narrative in question significantly impact the quality of labelled data and, consequently, the performance of machine learning models. Machine learning projects that rely on labelled data to understand narratives must qualitatively assess their data annotators’ worldviews if they are to make definitive statements about their results.
As reported by OpenGov Asia, to reduce bias in AI algorithms, U.S. researchers have developed a new artificial intelligence (AI) programming language that can assess the fairness of algorithms more precisely, and more quickly, than available alternatives. Their Sum-Product Probabilistic Language (SPPL) is a probabilistic programming system.
Probabilistic programming is an emerging field at the intersection of programming languages and artificial intelligence that aims to make AI systems much easier to develop, with early successes in computer vision, common-sense data cleaning, and automated data modelling. Probabilistic programming languages make it much easier for programmers to define probabilistic models and carry out probabilistic inference — that is, work backwards to infer probable explanations for observed data.
SPPL gives fast, exact solutions to probabilistic inference questions. These inference results are based on SPPL programmes that encode probabilistic models of what kinds of applicants are likely, a priori, and also how to classify them. Fairness questions that SPPL can answer include “Is there a difference between the probability of recommending a loan to an immigrant and nonimmigrant applicant with the same socioeconomic status?” or “What’s the probability of a hire, given that the candidate is qualified for the job and from an underrepresented group?”.
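To make the kind of query concrete, here is a small sketch that answers the first fairness question above by exact enumeration over a toy discrete model. It does not use SPPL's actual API, and every probability in it is invented for illustration; the point is only that such conditional probabilities can be computed exactly rather than estimated by sampling.

```python
from itertools import product

# Toy model, invented numbers: P(immigrant), P(high SES), and
# P(loan approved | immigrant, high SES).
P_IMMIGRANT = {True: 0.3, False: 0.7}
P_HIGH_SES = {True: 0.4, False: 0.6}
P_APPROVE = {  # (immigrant, high_ses) -> P(approve)
    (True, True): 0.7, (True, False): 0.3,
    (False, True): 0.8, (False, False): 0.4,
}

def prob(event, given=lambda w: True):
    """Exact P(event | given), enumerating all worlds (imm, ses, approve)."""
    num = den = 0.0
    for imm, ses, app in product([True, False], repeat=3):
        p_app = P_APPROVE[(imm, ses)]
        p = P_IMMIGRANT[imm] * P_HIGH_SES[ses] * (p_app if app else 1 - p_app)
        w = (imm, ses, app)
        if given(w):
            den += p
            if event(w):
                num += p
    return num / den

# Difference in approval probability between an immigrant and a
# non-immigrant applicant with the same (high) socioeconomic status:
p_imm = prob(lambda w: w[2], given=lambda w: w[0] and w[1])
p_non = prob(lambda w: w[2], given=lambda w: not w[0] and w[1])
print(p_imm, p_non)  # 0.7 vs. 0.8 under this toy model
```

A nonzero gap between the two conditional probabilities is exactly the disparity such a fairness query is designed to surface; SPPL's contribution is computing these answers exactly and quickly for models encoded as probabilistic programmes.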