Calculated bias: The pitfalls and potential of algorithmic recruitment

Ten years ago, a pair of researchers decided to investigate the role that racial bias plays in the contemporary labor market. They sent fictitious resumes to companies that had published help-wanted ads in Boston and Chicago newspapers. To manipulate the perceived race of the applicant, each resume was given either a very Black-sounding name (e.g., Jamal, Lakisha) or a very White-sounding name (e.g., Emily, Greg). The results revealed significant discrimination against stereotypically African-American names: White names received 50 percent more callbacks for interviews. The gap was particularly stark for well-qualified applicants. For White names, high-quality credentials elicited 30 percent more callbacks, whereas a far smaller increase was documented for African Americans. More recently, a similar method was used to document bias against women pursuing careers in academic science: faculty rated male applicants as significantly more competent and hireable than female applicants with identical credentials.
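
The power of this audit design is worth pausing on: because names are randomly assigned to otherwise identical resumes, any gap in callback rates can only be attributed to the name itself. Here is a minimal sketch of that logic in Python; the callback rates are illustrative assumptions chosen to match the reported 50 percent gap, not the studies’ published figures.

```python
import random

# Correspondence-audit sketch: identical resumes, randomly assigned names.
# The callback rates below are assumed for illustration only.
CALLBACK_RATE = {"white_name": 0.096, "black_name": 0.064}

def send_resume(name_group: str) -> bool:
    """One fictitious resume; credentials are identical, only the name varies."""
    return random.random() < CALLBACK_RATE[name_group]

def run_audit(n_resumes: int = 5000) -> dict:
    """Send n_resumes per name group and record the observed callback rates."""
    return {
        group: sum(send_resume(group) for _ in range(n_resumes)) / n_resumes
        for group in CALLBACK_RATE
    }

rates = run_audit()
print(f"White-name callback rate: {rates['white_name']:.3f}")
print(f"Black-name callback rate: {rates['black_name']:.3f}")
print(f"Callback ratio: {rates['white_name'] / rates['black_name']:.2f}")
```

In a simulation the gap is put in by hand; in the field experiment, observing the same gap from real employers is what demonstrates the bias.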

These studies provide critical insight into the racial and gender biases that prevail in our current labor market. They also reveal how challenging it can be to wrap our heads around the ways these biases shape our individual decisions. After all, in the gender study, women scientists demonstrated the same level of bias as their male counterparts when evaluating applications for a lab manager position. Unlike overt discrimination, these cognitive biases subtly color our decisions and impressions of people from diverse backgrounds, even when we belong to the group in question.

Last week I wrote a blog post about the disconcerting rhetoric coming from pundits in the tech sector who, even as concern rises over the field’s lack of diversity, describe the culture of Silicon Valley as a die-hard meritocracy. I suggested that we need to chart the educational pathways individuals take in pursuit of a career as a programmer, in the hope of getting a better sense of the formal and informal learning experiences that shape who pursues this profession successfully. In addition to strengthening the educational pipeline, the other critical aspect of this issue is the hiring practices of tech companies themselves. As the research above indicates, significant biases persist even when employers evaluate candidates with comparable educational backgrounds. This week I came across a company called Gild, which intentionally seeks to address these known human biases by offloading the job to a more “objective recruiter”: the algorithm.

Gild is a start-up that provides services to companies looking to hire web developers. It uses an algorithm to sift through thousands of bits of publicly available data online in order to identify skilled coders. Dr. Vivienne Ming, Gild’s chief scientist, argues that this approach is not only more efficient for companies, it’s also fairer. She points to the case of Jade Dominguez, a college dropout from a blue-collar family in southern California who taught himself how to code. Although Jade had no college degree and minimal formal work experience, Gild’s algorithm identified him as one of the most promising developers in his region. The young man now works for Gild and serves as an example of the algorithm’s ability to find those precious “diamond in the rough” coders startups are so eager to unearth.

For some, Jade’s story is an optimistic example of how big data can be harnessed to provide opportunities to individuals who would not otherwise be considered for the job because they lack conventional credentials like a college degree. In this way, Gild situates itself squarely in line with the concept of merit-based hiring: the quality of a person’s code matters more than their personal or academic background. One could argue that Gild’s services effectively counteract well-documented human biases by enabling promising candidates to “emerge” from the data.

But before we herald a big data revolution in human resources, we must consider the risks of using data as a proxy for talent and employment potential. Just last week the White House released a report recommending that the government place limits on how private companies gather and use the information they collect about people online. In particular, the report warned about the danger of masking discriminatory practices behind a veneer of data-based “facts.” As the report concludes, decisions based on algorithms are being “used for everything from predicting behavior to denying opportunity” in a way that “can mask prejudices while maintaining a patina of scientific objectivity.” These concerns are echoed by scholars such as Kate Crawford, who has argued incisively against the claim that big data doesn’t discriminate against social groups. Precisely because big data makes claims about group behavior, Crawford argues, it is often deployed to segregate and discriminate against individuals by group. The peril of these algorithms is that they mask deep-seated biases behind the promise that the numbers “speak for themselves.”

In the case of Gild, the danger lies in thinking that, with the sweep of an algorithm, we have wiped away the ugly biases that stubbornly persist in the labor market. The trick now is to understand which values are written into the equation that tabulates a coder’s final Gild score. This is both the most interesting and the most fraught aspect of the company’s project. Gild’s algorithm enables employers to broaden the set of experiences and credentials they value when looking for a new employee. Companies can thus define ‘merit’ according to a wider range of behaviors and experiences, beyond traditional markers of success like a college degree. This opens up the potential for using algorithmic methods to find talented people from unconventional backgrounds: the self-taught learners and the network-poor hustlers. These methods, however, are by no means perfect. In a recent talk at MIT, Tarleton Gillespie reminded the audience that recent attempts to use algorithms as a tool for fair hiring are part of a much longer history:

“People who hire people have already had algorithms to do it. It’s just not a computational algorithm. It’s a set of professional guidelines, and it’s a checklist…and in a lot of ways these were stand-ins for [us] to not let our bias get to [us]…we’re left with this weird calculus… And we try to squeeze the benefit from human acumen and the benefit of some impartial mechanism, but it’s not perfect.”

He suggested that the important thing for us to do now is to hone our ability to recognize and adjust the values we inscribe in new algorithmic tools. This includes questions about the quality of the input data, the implied priorities behind weighted variables, and the limitations of category formation. We still have a long way to go in developing a working vocabulary for discussing these issues in a practical way. It’s time to move beyond critiquing the limits of big data and start working towards a practical understanding of how to identify the promise and pitfalls of data-based approaches to promoting fairness and equality.
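
To make those three questions concrete, consider a deliberately simplified, hypothetical scoring function. Nothing below comes from Gild; the features, weights, and candidates are assumptions invented for illustration. The point is that every weight is a value judgment about which behaviors count as merit.

```python
# A hypothetical "coder score" -- an illustrative assumption, not Gild's model.
# Each weight quietly encodes a value judgment about what counts as merit.
WEIGHTS = {
    "open_source_commits": 0.4,  # rewards people with free time to code in public
    "qa_forum_reputation": 0.3,  # rewards visibility on Q&A sites
    "code_quality_rating": 0.2,  # depends entirely on how "quality" is categorized
    "years_experience": 0.1,     # reintroduces a traditional credential
}

def coder_score(candidate: dict) -> float:
    """Weighted sum of features, each assumed to be normalized to [0, 1]."""
    return sum(weight * candidate.get(feature, 0.0)
               for feature, weight in WEIGHTS.items())

# Two candidates of (by assumption) equal skill whose online visibility differs.
visible_coder = {"open_source_commits": 0.9, "qa_forum_reputation": 0.8,
                 "code_quality_rating": 0.7, "years_experience": 0.2}
quiet_coder = {"open_source_commits": 0.1, "qa_forum_reputation": 0.0,
               "code_quality_rating": 0.9, "years_experience": 0.6}

print(f"visible coder: {coder_score(visible_coder):.2f}")  # 0.76
print(f"quiet coder:   {coder_score(quiet_coder):.2f}")    # 0.28
```

Shift two of the weights and the ranking flips. The “objective” score is only as neutral as the values behind it, and the input data itself (public commits, forum reputation) already favors people with the time and inclination to perform their work online.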

This is cross-posted at MIT’s Center for Civic Media
