Facial recognition algorithms – which have repeatedly been demonstrated to be less accurate for people with darker skin – are just one example of how racial bias gets replicated within and perpetuated by emerging technologies. There is an urgency as AI is increasingly used to make high-stakes decisions, and the stakes are higher still because new systems can replicate historical biases at scale. One of the fundamental questions of the work is: how can AI models be built to deal with systemic inequality more effectively?
Inequality is perpetuated by technology in many ways across many sectors. One broad domain is health care. The demand for mental health care, for example, far outstrips the capacity for services in the United States. That demand has been exacerbated by the pandemic, and access to care is harder for communities of colour.
Taking the bias out of the algorithm is just one component of building more ethical AI. The researchers also work to develop tools and platforms that can address inequality outside of tech head-on. In the case of mental health access, this entails developing a tool to help mental health providers deliver care more efficiently. They are building a real-time data collection platform that looks at activities and behaviours and tries to identify patterns and contexts in which certain mental states emerge. The goal is to provide data-informed insights to care providers in order to deliver higher-impact services.
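In its simplest form, the pattern-finding step described above might resemble the sketch below: tallying how often a self-reported state co-occurs with a context tag in logged events. The event records, context tags, state labels, and threshold here are all illustrative assumptions, not the team's actual platform.

```python
from collections import Counter

# Hypothetical event records: (context_tag, self_reported_state).
events = [
    ("commute", "anxious"), ("home", "calm"), ("work", "anxious"),
    ("commute", "anxious"), ("home", "calm"), ("work", "calm"),
]

def contexts_for_state(events, state, min_share=0.5):
    """Return contexts in which `state` was reported at least `min_share`
    of the time -- a crude pattern signal a care provider might review."""
    totals, hits = Counter(), Counter()
    for ctx, s in events:
        totals[ctx] += 1
        if s == state:
            hits[ctx] += 1
    return {ctx for ctx in totals if hits[ctx] / totals[ctx] >= min_share}
```

A real system would of course work over far richer behavioural streams and would need careful attention to consent and privacy, but the core idea of surfacing contexts associated with particular states is the same.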
Watkins, a professor at the University of Texas at Austin and the founding director of the Institute for Media Innovation, has joined the newly launched Initiative on Combatting Systemic Racism (ICSR), an IDSS research collaboration that brings together faculty and researchers from the MIT Stephen A. Schwarzman College of Computing and beyond. The aim of the ICSR is to develop and harness computational tools that can help effect structural and normative change toward racial equity.
The ICSR collaboration has separate project teams researching systemic racism in different sectors of society, including health care. Each of these verticals addresses different but interconnected issues, from sustainability to employment to gaming. Watkins is a part of two ICSR groups, policing and housing, that aim to better understand the processes that lead to discriminatory practices in both sectors. Discrimination in housing contributes significantly to the racial wealth gap in the U.S.
Models can also predict outcomes, but Watkins is careful to point out that no algorithm alone will solve racial challenges. Models can inform policy and strategy that we as humans have to create. Computational models can inform and generate knowledge, but that doesn’t equate with change. It takes additional work—and additional expertise in policy and advocacy—to use knowledge and insights to strive toward progress.
One important lever of change will be building a more AI-literate society through access to information and opportunities to understand AI and its impact in a more dynamic way. He hopes to see greater data rights and a greater understanding of how societal systems impact lives.
As reported by OpenGov Asia, creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.
To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items—chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
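One simple way to combine a human's categorical confidence with a classifier's continuous score – not the authors' published model, just an illustrative sketch – is to map the categorical rating to an assumed probability of correctness and then average the two estimates in log-odds space. The mapping values and function names below are assumptions for illustration.

```python
import math

def human_prob(confidence: str) -> float:
    """Map a categorical self-reported confidence ("low"/"medium"/"high")
    to an assumed probability that the human's label is correct.
    These values are illustrative, not empirically calibrated."""
    return {"low": 0.55, "medium": 0.70, "high": 0.90}[confidence]

def combine(p_human: float, p_machine: float) -> float:
    """Fuse two probability estimates for the same label by averaging
    their log-odds, then mapping back to a probability."""
    def logit(p: float) -> float:
        return math.log(p / (1.0 - p))
    avg = 0.5 * (logit(p_human) + logit(p_machine))
    return 1.0 / (1.0 + math.exp(-avg))

# Example: a "high"-confidence human label fused with a 0.60 machine score
# yields a combined estimate between the two individual probabilities.
fused = combine(human_prob("high"), 0.60)
```

Averaging log-odds keeps the fused estimate between the two inputs; a production system would instead calibrate both sources against held-out data before combining them, which is where the gains from a hybrid human-machine approach come from.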