Using applied math to de-bias hiring algorithms

UMD students work across disciplines and majors on four-year research project

How can we remove bias from artificial intelligence (AI) systems designed for everything from talent acquisition to online shopping to surveillance?

Ten University of Maryland undergraduates came together to answer this question for their Gemstone honors research project. One of those students was Philip Mathew, a junior mathematics and computer science double major.

“I knew I wanted to focus on AI bias because of research I’d done on diabetic retinopathy at the Johns Hopkins Applied Physics Lab,” Mathew said. “When I was a freshman intern, I learned that with diabetic retinopathy, your melanin count does make a difference in how your retinal scans look—and we saw that underrepresentation of people of color in the training data led to bias against accurate diagnoses for those groups.”

Historical human bias—against people of color, women and other marginalized groups—causes artificial intelligence bias today. That fact became glaringly evident in 2018 when Amazon scrapped an AI and machine learning-based recruitment program after figuring out that the algorithm was biased against women.

Amazon’s AI model was programmed to vet candidates by observing patterns in resumes submitted to the company over a 10-year period. But because those hired during that period had been predominantly men, the system deduced that male candidates were preferred over female candidates. This prominent example is part of what inspired the Gemstone team, aptly named Project DeBIAS, to look at hiring algorithms.

“If you train AI on data that already has this systematic disadvantage against a group of people, it’s going to find and replicate those trends,” Mathew said. “The issue is that a lot of coders will think the AI system works fine without trying to understand the distribution of their data with respect to protected attributes such as gender and race.”
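That kind of check can be made concrete. Below is a minimal sketch (not the team’s code) of auditing a training set’s distribution with respect to protected attributes before any model is fit; the column names and data are hypothetical:

    import pandas as pd

    # Hypothetical training data: one row per historical hiring decision.
    df = pd.DataFrame({
        "gender": ["M", "M", "F", "M", "F", "M", "M", "F"],
        "race": ["white", "white", "Black", "white",
                 "Black", "white", "Asian", "white"],
        "hired": [1, 1, 0, 1, 0, 1, 1, 0],
    })

    # How is each protected attribute represented in the data?
    for attr in ("gender", "race"):
        print(df[attr].value_counts(normalize=True), "\n")

    # More telling: the positive-label base rate per group. Large gaps
    # here are exactly what a model trained on this data will learn.
    print(df.groupby("gender")["hired"].mean())
    print(df.groupby("race")["hired"].mean())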

Developing the Methodology

Early on, the Project DeBIAS team hypothesized that resumes from people of color were being ranked disproportionately lower on hiring websites like Indeed due to AI bias. To test their hypothesis, they collected 59 anonymized resumes that included attendance at a Historically Black College or University (HBCU) and 304 resumes that did not. Then, Mathew and his teammates created an AI model to simulate how an autonomous hiring system would function, training it on Indeed’s publicly available hiring data.
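The article doesn’t describe the team’s model, but a simulation of this kind is often a text scorer trained on historical outcomes. Here is a minimal sketch assuming TF-IDF features and logistic regression (both hypothetical choices), with toy resumes and labels standing in for real data:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for resume text and historical hire/no-hire labels.
    resumes = [
        "BS computer science, python, internship at tech firm",
        "BA economics, Howard University, data analysis, sql",
        "MS statistics, machine learning research, publications",
        "BS biology, lab assistant, customer service",
    ]
    labels = [1, 0, 1, 0]  # hypothetical historical outcomes

    # Score resumes the way an automated screener might:
    # higher predicted probability of "hire" means a higher rank.
    ranker = make_pipeline(TfidfVectorizer(), LogisticRegression())
    ranker.fit(resumes, labels)

    candidates = [
        "BS computer science, Morehouse College, python, sql",
        "BS computer science, python, sql",
    ]
    scores = ranker.predict_proba(candidates)[:, 1]
    for score, text in sorted(zip(scores, candidates), reverse=True):
        print(f"{score:.3f}  {text}")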

“What we’re trying to ensure is that these systems don’t have something baked in where HBCUs are discriminated against,” Mathew said.

The team’s preliminary data analysis shows a trend that aligns with their hypothesis—resumes from individuals who went to HBCUs ranked lower in the hiring system. 

“We’re determining whether this is some sort of spurious correlation or a proper inverse correlation between what the status of your college is and what your ranking is in this Indeed resume ranking algorithm,” Mathew said. “Based on our preliminary findings, we do think we’re going to find an inverse correlation where people who went to HBCUs are ranked lower—and then we plan to find a way to solve for that in the ranking algorithm.”
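One standard way to test for that kind of gap (not necessarily the team’s method) is a nonparametric comparison of the two groups’ scores, such as a Mann-Whitney U test; the numbers below are fabricated purely for illustration:

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)

    # Fabricated ranking scores (higher = ranked better); group sizes
    # mirror the article's 59 HBCU and 304 non-HBCU resumes.
    hbcu_scores = rng.normal(loc=0.45, scale=0.15, size=59)
    other_scores = rng.normal(loc=0.55, scale=0.15, size=304)

    # One-sided test: are HBCU resumes scored systematically lower?
    stat, p_value = mannwhitneyu(hbcu_scores, other_scores,
                                 alternative="less")
    print(f"U = {stat:.1f}, p = {p_value:.4g}")

Ruling out a spurious correlation would additionally mean controlling for confounders such as years of experience, for example by including them as covariates in a regression.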

Becoming Better Researchers

Mathew and junior mathematics majors Seth Gleason, Johnny Rajala and Daniel Zhu contributed their statistics know-how to understand the distribution of data and analyze the components of hiring algorithms. 

“I’m now better able to talk about and understand things from a statistical perspective, like why data is distributed in a certain way,” Mathew said. “And while it may seem obvious, linear algebra ended up being a huge help because computers use matrices, so you kind of need to know how matrices work.”

As they moved through their research work plan and received Institutional Review Board approval for resume data collection, the team worked closely with their librarian, Kate Dohe of UMD Libraries, and their advisor, Steve Sin, an associate research scientist in the National Consortium for the Study of Terrorism and Responses to Terrorism at UMD.

“The Project DeBIAS team has become very agile over the last two and a half years,” said Sin, who has worked on detecting bias in emerging technologies. “One of the things they got really strong at is putting together a work plan with branches that say, ‘if A, then B.’ Their work plan helped them continually evaluate whether they were on track, adjusting course as needed.”

Sin noted that the students on the team have grown by “leaps and bounds” since their freshman year in sticking to timelines and collaborating across both the STEM and social sciences aspects of the project.

“Really, all of us are principal investigators on this research because every single one of the 10 of us gets a say on how we carry out the research,” Mathew said. “Not only are we getting experience working on an interdisciplinary team, we also can say that as undergrads we have shaped this research from beginning to end, which is an opportunity I’m pretty grateful for.”

Another important lesson the students learned was how to pivot when their initial research plan wasn’t working. When selecting a proxy for racial demographics, the Gemstone team initially planned to examine redlining—the systematic denial of financial services to residents of communities associated with a certain racial group. However, early research revealed that very few job applicants include their home addresses on their resumes.

“Once we saw that redlining wasn’t offering enough data, we pivoted to looking at whether the highest level of education listed was from an HBCU,” Mathew said. “We used that as a proxy for seeing how marginalized populations get treated by these AI systems.”
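In code, that proxy amounts to flagging whether a resume’s highest listed institution appears on an HBCU roster. A minimal sketch with naive exact matching and a deliberately abbreviated, hypothetical list (a real audit would use the full U.S. Department of Education roster):

    # Hypothetical, abbreviated list; a real audit would use the full
    # U.S. Department of Education HBCU roster.
    HBCUS = {"howard university", "spelman college", "morehouse college"}

    def highest_education_is_hbcu(education_entries):
        """education_entries: institution names, highest degree first."""
        if not education_entries:
            return False
        return education_entries[0].strip().lower() in HBCUS

    print(highest_education_is_hbcu(["Howard University", "Central High School"]))  # True
    print(highest_education_is_hbcu(["State University"]))  # False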

In April, the Project DeBIAS team presented their research at UMD’s Undergraduate Research Day. Next, they’ll further analyze their preliminary findings, prepare their Gemstone thesis and submit their research results to a journal—and, hopefully, make a difference. 

 “Part of the goal of this whole research is to get it out there and let it add to the field,” Mathew said. “We really want to show that, yes, AI bias is an actual problem—and we might just have a way to fix it or evaluate it.” 

This article originally appeared on the University of Maryland Department of Mathematics website.
