Princeton Researchers Develop Tool to Spot Bias in AI Training Images

A group of researchers at Princeton University has developed a new, open-source tool designed to identify potential bias in image sets used to train artificial intelligence (AI) systems. Called REVISE (REvealing VIsual biaSEs), the tool surfaces potential bias involving people, objects, and actions, makes that bias known to users and data set creators, and suggests actionable steps to correct it.

The tool was developed by three Princeton researchers: Olga Russakovsky, an assistant professor of computer science and principal investigator in the Princeton Visual AI Lab; Arvind Narayanan, an associate professor of computer science; and Angelina Wang, a graduate student. REVISE builds on earlier work with Stanford University that involved filtering and balancing a data set's images in a way that required more direction from the user. They presented their research in August at the virtual European Conference on Computer Vision. The tool is available now on GitHub.

The researchers described the tool and the problem they're aiming to solve with it in a paper ("REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets"). "Machine learning models are known to perpetuate and even amplify the biases present in the data," they wrote. "However, these data biases frequently do not become apparent until after the models are deployed. To tackle this issue and to enable the preemptive analysis of large-scale datasets, we present our tool…."

REVISE was designed to be a "broad-purpose tool for surfacing the under- and different-representations hiding within visual datasets." It assists in the investigation of biases in visual datasets along three specific dimensions: object-based, gender-based, and geography-based. Object-based biases relate to the size, context, or diversity of object representation. Gender-based metrics aim to reveal stereotypical portrayals of people of different genders. Geography-based analyses consider the representation of different geographic locations.
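As an illustration of what an object-based metric might look like, the sketch below estimates how much of the image frame each object category typically occupies, assuming COCO-style bounding-box annotations. It is a simplified stand-in for this kind of analysis, not code taken from REVISE itself, and the function and field names are invented for the example.

# Illustrative sketch only (not code from REVISE): per object category,
# estimate the average fraction of the image frame covered by that object,
# assuming COCO-style bounding-box annotations.
from collections import defaultdict

def mean_area_fraction(annotations):
    # annotations: dicts with 'category', 'bbox' = (x, y, w, h) in pixels,
    # and 'image_size' = (width, height) of the containing image.
    fractions = defaultdict(list)
    for ann in annotations:
        _, _, w, h = ann["bbox"]
        img_w, img_h = ann["image_size"]
        fractions[ann["category"]].append((w * h) / (img_w * img_h))
    return {cat: sum(vals) / len(vals) for cat, vals in fractions.items()}

demo = [
    {"category": "airplane", "bbox": (10, 10, 30, 20), "image_size": (640, 480)},
    {"category": "person", "bbox": (100, 50, 200, 350), "image_size": (640, 480)},
]
for category, frac in sorted(mean_area_fraction(demo).items()):
    print(f"{category}: mean area fraction {frac:.3f}")

Categories whose objects are consistently tiny, cropped, or pictured only in certain contexts can then be flagged for closer inspection.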

In one example cited in the paper, REVISE uncovered a potential gender bias in OpenImages photos containing both people and organs (the musical instrument). Analyzing the distribution of inferred 3-D distances between the person and the organ showed that males tended to be featured actually playing the instrument, whereas females were often merely pictured in the same space as the instrument.
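The sketch below shows, in simplified form, how such a comparison of distance distributions might be carried out. The distance values and group labels are made up for illustration and are not drawn from the paper or from OpenImages.

# Illustrative sketch only: compare inferred person-to-instrument distances
# for two annotated groups. All numbers are invented for the example.
import numpy as np

inferred_distances_m = {
    "male": np.array([0.4, 0.5, 0.6, 0.7, 0.5]),
    "female": np.array([1.8, 2.5, 0.6, 3.0, 2.2]),
}

for group, vals in inferred_distances_m.items():
    within_reach = np.mean(vals < 1.0)  # share of images with the person within 1 m
    print(f"{group}: median {np.median(vals):.1f} m, within 1 m: {within_reach:.0%}")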

Providing actionable insights on gender bias is a less concrete and more nuanced process, the researchers found. "In contemporary societies, gender representation in various occupations, activities, etc. is unequal," they wrote, "so it is not obvious that aiming for gender parity across all object categories is the right approach. Gender biases that are systemic and historical are more problematic than others, and this analysis cannot be automated. Further, the downstream impact of unequal representation depends on the specific models and tasks."

REVISE uses a Jupyter notebook interface that allows exploration and customization of metrics. It is an analytics tool designed to shed light on data sets, but it is up to the users and the data set designers to act on that analysis. "[T]he responsibility then lies with the user to consider the cultural and historical context, and to determine which of the revealed biases may be problematic," the researchers wrote. "The tool then further assists the user by suggesting actionable steps that may be taken to mitigate the revealed biases. Overall, the key aim of our work is to tackle the machine learning bias problem early in the pipeline."
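A notebook session with this kind of tool might look something like the sketch below, in which an analyst inspects a small table of per-category counts for skew. The table, column names, and figures are invented for illustration and do not come from REVISE or its documentation.

# Hypothetical notebook cell: inspect a toy metrics table for skewed
# gender representation by object category. All values are invented.
import pandas as pd

metrics = pd.DataFrame({
    "category": ["organ", "laptop", "flower", "tie"],
    "images_with_men": [120, 300, 80, 210],
    "images_with_women": [95, 150, 240, 30],
})
metrics["men_to_women_ratio"] = metrics["images_with_men"] / metrics["images_with_women"]
print(metrics.sort_values("men_to_women_ratio", ascending=False))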

The Princeton Visual AI Lab focuses on developing AI systems "able to reason about the visual world," according to its website. The group's research combines the fields of computer vision, machine learning, and human-computer interaction in an effort to develop "the fundamental perception building blocks of visual recognition," and to ensure "the fairness of the vision systems with respect to people of all backgrounds by improving dataset design, algorithmic methodology and model interpretability."

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
