The plot thickens. When Adam Tanner entered his name, the same ad from the same company popped up, but without the mention of arrest or criminal records. After hours of searching more names, the pair eventually came to an inescapable conclusion: ads suggestive of arrest were appearing more often for African-American-sounding names than for white-sounding names.
Laura Sydell revisited the topic in a recent NPR segment, interviewing University of Michigan Professor Christian Sandvig. Sandvig explains that the biases of the average searching population are most likely the culprit here.
“Because people tended to click on the ad topic that suggested that that person had been arrested, when the name was African-American, the algorithm learned the racism of the search users and then reinforced it by showing that more often,” says Sandvig.
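The loop Sandvig describes can be sketched in a few lines of Python. This is a hypothetical illustration, not Google’s actual system: the ad templates, click rates, and exploration rate below are all invented for the sake of the example.

```python
import random

# Invented sketch of click-driven ad selection: the server tracks a
# click-through rate (CTR) per ad template and mostly shows whichever
# template users have clicked most in the past.
class AdSelector:
    def __init__(self, templates, explore=0.1):
        self.stats = {t: [0, 0] for t in templates}  # template -> [clicks, impressions]
        self.explore = explore

    def ctr(self, template):
        clicks, shows = self.stats[template]
        return clicks / shows if shows else 0.0

    def choose(self):
        # Occasionally explore at random so every template gets some exposure;
        # otherwise exploit the template with the best observed CTR.
        if random.random() < self.explore:
            return random.choice(list(self.stats))
        return max(self.stats, key=self.ctr)

    def record(self, template, clicked):
        self.stats[template][1] += 1
        if clicked:
            self.stats[template][0] += 1

# Simulate users who click the arrest-themed copy slightly more often
# (the name and the 6% vs. 5% rates are made up).
selector = AdSelector(["Looking for Jane Doe?", "Jane Doe, arrested?"])
for _ in range(5000):
    shown = selector.choose()
    clicked = random.random() < (0.06 if "arrested" in shown else 0.05)
    selector.record(shown, clicked)
```

Because the selector keeps serving whatever got clicked before, it tends over time to favor whichever copy draws more clicks, so a small difference in user behavior can snowball into a large difference in what gets displayed.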
The Search Algorithm is You
For those who don’t know how search engines work, a number of factors go into deciding what to return for a particular query. One common tactic is analyzing your personal data and behavior to predict a more personalized result for you, and we see these types of algorithms all the time. If you listen to a lot of John Coltrane on Spotify, it wouldn’t be unusual for a suggestion for Duke Ellington to start showing up. If you visit a lot of hunting websites, you’ll probably notice more hunting ads popping up during your online experience.
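At its simplest, the Spotify-style suggestion above amounts to co-occurrence counting: recommend whatever most often appears alongside what you already like in other users’ histories. A minimal sketch, with an invented listening dataset:

```python
from collections import Counter

# Toy listening histories, invented for illustration only.
histories = [
    {"John Coltrane", "Duke Ellington", "Miles Davis"},
    {"John Coltrane", "Duke Ellington"},
    {"John Coltrane", "Thelonious Monk"},
]

def recommend(seed, histories):
    # Count how often each other artist appears alongside the seed artist,
    # then suggest the most frequent co-occurrence.
    co = Counter()
    for history in histories:
        if seed in history:
            co.update(history - {seed})
    return co.most_common(1)[0][0] if co else None

print(recommend("John Coltrane", histories))  # → Duke Ellington
```

Real recommenders are far more sophisticated, but the principle is the same: your behavior, pooled with everyone else’s, decides what you see next.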
Interestingly, Sydell also spoke with Professor Sorelle Friedler of Haverford College, who says other studies show that this type of statistically significant discrimination occurs along gender lines as well when people enter job queries into Google’s search engine. Women were shown fewer ads for high-paying jobs than men were. But here’s the kicker: Friedler worries that widespread societal gender bias is subconsciously causing women not to click on high-paying jobs, believing that they won’t get them anyway. Google’s algorithm, after recognizing a significant trend of female users clicking on lower-paying jobs, would then return fewer high-paying jobs to all female users. Kind of a digital self-fulfilling prophecy.
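That self-fulfilling prophecy can be made concrete with a toy, deterministic simulation. The click rates and the proportional-reallocation rule below are assumptions for illustration, not a description of Google’s actual ranking:

```python
# Invented click rates: users click the high-paying job ad slightly
# less often than the lower-paying one (a 10% gap, chosen arbitrarily).
ctr = {"high-paying": 0.9, "lower-paying": 1.0}

share_high = 0.5          # start by showing both job ads equally
history = [share_high]
for _ in range(10):
    clicks_high = share_high * ctr["high-paying"]
    clicks_low = (1 - share_high) * ctr["lower-paying"]
    # Reallocate impressions toward whichever ad drew more clicks.
    share_high = clicks_high / (clicks_high + clicks_low)
    history.append(share_high)
```

Each round, the small click gap pulls impressions away from the high-paying ad, which in turn produces fewer clicks on it, so `share_high` falls every iteration: reduced exposure and reduced clicking feed each other.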
Can Artificial Intelligence be Racist?
After reading Lindsay Bell’s recent article, Artificial Intelligence: The Good, The Bad, and the Orwellian, it’s hard not to draw parallels. The three tech companies Bell mentions are using A.I. either to predict or to identify… well, you. And, as she mentioned, it’s already in play today. So what are the implications of feeding racial or gender-related bias into a machine that builds and rewrites itself based on your input? Is it possible that we’re creating racist artificial intelligence?
The answer is “maybe.” We know we have succeeded in imparting our gender- and race-related biases upon a computer, virtually imprinting them in its algorithm. Obviously, race-related issues are much more complicated than shopping or job-search habits. But if A.I. is currently building itself on a combination of our input and its own mysterious mathematical conclusions, we can’t be 100 percent sure that hints of gender- or race-related bias wouldn’t sneak through, right?
The Hive Mentality
More important than asking what a far-flung future ruled by a racist, sexist A.I. overlord would look like (what a rabbit hole of a writing prompt that sounds like) is asking how these decisions and algorithms are affecting us now. There is already the worry that, despite personalized results pages, the information Google feeds most people is largely the same, creating an echo chamber mentality and maybe making us less intelligent humans who subscribe to more of a hive mind. This would not come as a surprise, seeing as many people today trust search engine results more than actual news sources for news. The interesting effect is that if we are indeed capable of influencing search-based A.I. to produce racist results, it’s entirely possible that a reinforcing feedback loop would be created.
Fortunately, we’re talking about very nebulous and one-off situations. They are important in their own right, but it would be a bit premature and alarmist to worry that we’re creating an army of genocidal robots—especially when there are plenty of other things to worry about when it comes to A.I.
There is hope, though, that as humans continue to build and influence artificial intelligence, we’ll learn more about ourselves than we’ve ever known. Maybe then, finally, instead of trying to build better machines, we’ll learn how to be better people.
Andrew Heikkila is a writer from Boise, ID. He likes to cover tech and cybersecurity topics and considers both relevant to many Millennial-centric issues. He’s currently debating whether or not he spends too much time playing video games. Connect with Andrew on Twitter or on Facebook.