Why is artificial intelligence set to become a human rights issue?

May 16, 2023

This week's guest is Dr. Safiya Noble. Dr. Noble is an internet studies scholar and Professor of Gender Studies and African American Studies at the University of California, Los Angeles (UCLA), where she serves as the Faculty Director of the Center on Race & Digital Justice. In her book "Algorithms of Oppression," Noble explores the ways in which search engines perpetuate systemic racism and discrimination. Noble argues that these search engines are not neutral, but rather are designed and operated by people with their own biases and values, which are often shaped by broader social and cultural forces. She examines the ways in which search algorithms can reinforce and amplify existing biases and stereotypes, particularly against marginalized groups such as women, people of color, and LGBTQ+ individuals. This episode discusses how these biases can have real-world consequences, such as limiting opportunities for employment or housing.

- Hello everyone. I am Chris Hyams, CEO of Indeed. My pronouns are he and him, and welcome to the next episode of "Here To Help." For accessibility, I'll offer a quick visual description. I am a middle-aged man with dark-rimmed glasses, I'm wearing a blue T-shirt, and behind me is the North Austin skyline. At Indeed, our mission is to help people get jobs. This is what gets us out of bed in the morning and what keeps us going all day, and what powers that mission is people. "Here To Help" is a look at how experience, strength, and hope inspire people to want to help others. Helping others is also about looking at the world with a new lens. With every guest, we aim to challenge old assumptions with some new ideas. It is my great pleasure to introduce to you Dr. Safiya Noble. Dr. Noble is an internet studies scholar and professor of gender studies, African American studies, and information studies at the University of California, Los Angeles. At UCLA, Dr. Noble serves as the faculty director of the Center on Race and Digital Justice and co-director of the Minderoo Initiative on Tech and Power at the UCLA Center for Critical Internet Inquiry. Dr. Noble is a board member of the Cyber Civil Rights Initiative, which serves those vulnerable to online harassment, and of the Joint Center for Political and Economic Studies, the nation's oldest Black think tank. I was first introduced to Dr. Noble's work through her extraordinary book, "Algorithms of Oppression: How Search Engines Reinforce Racism," which details the ways in which search engines perpetuate systemic racism and discrimination. Noble argues that search engines are not neutral, but rather designed and operated by people with their own biases and values, which are often shaped by broader social and cultural forces. Today we'll be talking about these biases and the impact they have on the real world, such as limiting opportunities for employment or housing. Dr. Noble, thank you so much for joining me today.

- Thanks so much, Chris. It's really great to be here with you.

- Fantastic. Well, we have a lot to cover today, but I'd love to start by talking about the book "Algorithms of Oppression." It was published in 2018, and you focus on the power and influence of search on society. You open the book with this statement: "I believe that artificial intelligence will become a major human rights issue in the 21st century." Right now that seems pretty much on the nose, and we'll definitely get to the current state of AI in a bit. But I'd like to back up and talk about the thesis of the book, the impact that search engines have had on the world, and really what inspired you to write this book.

- You know, when you write a book about the internet, you assume that it will immediately be out of date. And it is kind of odd to me that this book and some of the arguments are holding up, and that's not what you want, right? You want a book about the terrible things to fade away, in fact, and be solved. So we're not there yet. But I will tell you, I went back to graduate school. I started my first career in corporate America, and part of the reason I did that is because I felt like people who work in industries of many types really have a lot of influence, and sometimes outsized influence, on what our communities look like. So as the economy was crashing and the recession was coming and everybody I knew was losing their jobs in advertising and marketing, I went back to grad school, and I got into a PhD program at the University of Illinois Urbana-Champaign. It's this STEM-oriented university with all these information scientists, and there I was, getting a PhD in library and information science, really wanting to think about and talk about the internet. And everyone around me was talking about this new company, Google, and I was surprised that these librarians were ceding so much space to search engines, because I really understood them as advertising platforms. And it was really there in that disconnect that, like every good grad student looking for a good thing to do research on, I was like, this seems like something I should write about, this dissonance, this disconnect.

- For folks who haven't had a chance to read the book, and everyone who's listening should definitely read it, can you define algorithmic oppression?

- Okay, so one of the things that I was arguing, and this is again 2010, '11, '12, I was conducting a series of kind of experiments, doing searches and looking to see what kind of results come back on the first page. I would just ask questions like, why are Black women so, why are Black girls so, why is any group so anything, because it was very interesting to see the auto-suggestions. And you know, this is like the cover of the book. The first thing is why are Black women so angry, right? And this kind of idea that we're mean, lazy, angry. I saw a pattern, and the pattern was that if you were a woman or girl of color, especially a girl of color, you were almost exclusively represented, misrepresented, with pornography. Now, you didn't have to add the word sex, you didn't have to add the word porn. Black girls, Latina girls, Asian girls were just synonymous with porn. I also saw that many of the kinds of stereotypes and results that would come back really mapped onto racist tropes we hold in our society. And, you know, my undergraduate degree was in sociology. I'd spent many years around African American studies and ethnic studies, gender studies. So I actually really understood what I was looking at, not as an anomaly, but as ideas that had kind of been baked in at the level of code. Someone had coded this project called the "search engine." Many people, in fact, thousands of people over time, and these kinds of discriminatory ideas were normalized, flattened, kind of naturalized in the search results. And I called that algorithmic oppression, because what I was seeing is that the algorithms themselves are a function of, and reinforce, oppression in our society vis-à-vis racist stereotypes and other kinds of damaging ideas. And more importantly, not only do they do that in this kind of automated way, they're coded and they're automated, but they're naturalized under the auspices of just being math, right? Or it's just tech.
Like, in those days, what people would say to me when I presented my research was, "Safiya, algorithms can't be racist because algorithms are just math, right? And math can't be racist." So it was just a hyper-reductionist way of thinking about algorithms and AI. And you know, back then I didn't have the right retorts. I would just say, "That's not true." You know, these algorithms are actually holding all kinds of different types of biases. That's common-sense knowledge now. But I will tell you that in 2010, that was not a common-sense understanding. People would get very angry with me at academic conferences. Men would shout at me. I need that on a T-shirt: Men shout at me at conferences. But that was kind of the idea here, because it was very difficult to understand the social, political, ethical dimensions of AI and algorithms. And I coined the term "algorithmic oppression" just as a way to say, the algorithms are also doing things that contribute to oppression in our society.

- Thinking about the response to the book and to your work, and men yelling at you at conferences. There's been massive pushback and backlash around anything equity- or justice-related in tech. The word 'woke' has become an epithet, and CRT has been totally demonized. How do you respond in the face of that type of rhetorical onslaught?

- You know, if you look at the origins, and I won't bore you with a lecture on this, but these ideas about Black people telling each other to stay woke are really about staying aware of dangerous spaces and places and people as you move about, right? I think the first time I heard a lecture about this, someone was talking about how the first time the term woke was used was in the late 1800s or early 1900s, where a blues artist was telling Black people as we were moving about, "Be careful out here because of racism and racists, and you might get lynched, so stay woke." So that through line is still true. And making a mockery of that is just so sad to me. It's just such a sad commentary that rather than make fun of, or deride, or make unfashionable being racist, instead there's a discourse now of people who notice racism being the problem. Now, you know, I already told you I'm from the '90s, so this is not new, this is old. I mean, this is kind of like the colorblind ideologies that came into vogue in our generation, I'm assuming, and your generation, the idea that if you see racism, you are the racist, rather than that if you experience it and you try to intervene upon it or stop it, that's actually the thing we want. Helen Neville, who is a professor at the U of I, she's a psychologist and she does all these studies on people who adopt colorblind ideology, people who say, "I don't see color." And her experiments show over and over and over again that people who adopt a colorblind ideology are more racist and more willing to tolerate racism on their watch. So we've done a disservice to a whole generation of people who feel disempowered to talk about race and racism in our society, because they've adopted this colorblind stance.
Their parents felt it was impolite to talk about race or racism, so they just shushed that out of them. So what that's left are people who are virulent racists, who've actually taken over the discourse, collapsing that with their free speech, their rights to say and do anything they want, weaponizing the internet and social media in service of that. And those who speak back are actually the problem. And you know, I think of a study by Tiera Tanksley, who is a professor at the University of Colorado Boulder. She studied college-aged Black women who dealt with these kinds of hostile engagements. They would go into their social media, all of it, and there would be people just calling them woke, attacking them, making fun of viral videos of Black people dead or dying. And these women would go in and try to do battle in the comments, because they felt that these kinds of attacks you're talking about, this kind of harsh, inhumane way of engaging, couldn't stand on their watch. They couldn't just let it be there. And they would spend up to eight hours a day, in between work, in between classes, before they went to bed, when they woke up, on their phones commenting, and they had self-reported PTSD, they were suffering from depression, they were really struggling. And so the effects of that kind of hostility toward people who are trying to just speak about their lived experiences is part of the challenge. And I think because Silicon Valley in particular lacks diversity, it lacks in hiring Black and Latinx and Indigenous people, it doesn't have the sensitivity. And I think you couple that with people who feel afraid to speak or talk about racism because they've adopted colorblind ideologies, and you just leave a cesspool. And you know, unfortunately, I think that has happened.
Of course, billionaires have also given so much money to far-right-wing organizations, think tanks, councils, researchers, that those people also have been extremely well funded, and they're able to really seed this narrative, so much so that now it's illegal in some places to talk about things like critical race theory. Which is really just erasing Black people from history books. It's really just ensuring that the history of racism also is not discussed. And you know, the things we don't discuss we're doomed to repeat, for sure.

- I want to bring us a little bit to the present day. And it's impossible to talk about any of these issues without talking about AI, which is not new, but is certainly in the public imagination right now, especially with the very recent revolution of large language models like OpenAI's ChatGPT. Can you talk a little bit about what you're discussing with your students right now?

- For some students who feel that writing is an incredible struggle for them, who aren't oriented toward or just haven't been trained as strongly in those kinds of writing-intensive fields, ChatGPT is like the miracle they prayed for. And some of my colleagues think that this will even the playing field, let's say for the engineering students or the math students or the STEM students who are writing-averse. There are others who absolutely understand the consequences of it. I think there are students who are really struggling with the morality. Am I plagiarizing? Is this a tool, or is this a replacement for the intellectual work that I should be doing? I think we are not going to be able to get away from teaching students what these projects are and what their limits are. In the same way that I taught a whole generation of students about search engines by making them do searches and see the limits, I'm doing that now with ChatGPT. Probably the one thing that I try to get the students to resist is this anthropomorphization of AI, thinking of it like it's human, or thinking of it like it's superhuman or better than human. That probably is one of the most dangerous ideas around these types of AI. And then, of course, there are all these other ethical issues around copyright, around sucking in all the data that could be made available, data that is also people's life's work of art, of writing, of all kinds of things that people have struggled for centuries to make. So it's a very interesting and important moment that we're living through right now.

- Yeah, and I think one of the potential dangers of anthropomorphizing, of thinking of these things as smart, is that there can be a fine line between thinking something is smart and thinking that it's objective and true. So I guess my question is, what kind of critical lens can we bring to our interactions with a system like this?

- It's so interesting, because my first question of ChatGPT was, who are the investors in ChatGPT and OpenAI? Then I was like, who else? You know, is there anyone that you're leaving out? What about companies, who owns those companies? I used it like that. Then I asked it to give me some citations on Black digital feminism, digital Black feminism. And it gave me back all of these citations that looked really legitimate, except I'm an expert in this field, and so I knew they were not. I knew they were made up, I could tell. So it was real journal titles, like New Media & Society, that's a big journal in our field. But the article was not real, it was fake. And the authors were a mix of names of people in the field, like combining their names. So I was like, "Oh no, now this is actually where we're going to be in trouble." 'Cause people are going to ask for the citations, they're going to ask for the receipts, and the fake receipts are going to come. The challenge here is going to be disambiguating. But you know, we already have those challenges of disambiguating fact from fiction in other digital media systems. I mean, social media is the perfect space. YouTube, Google search, other kinds of search. So I think one of the problems that we have failed to see with this is twofold. The large language models are going to be so much larger than the container of the kinds of things in search. And with PageRank or other kinds of rank-ordered information that we are used to getting on the web in lots of contexts, you know there's a point of view. Something, either the AI or people, decided what was the best thing through the least important thing. With large language models and the way in which the output happens, there isn't even a rank order in that way.
So that is going to make this harder to, again, disambiguate. In this, you don't see a person. It's not your racist uncle who's commenting on the Facebook, and so, you know, like, what's happening, right? So I think these are going to be real challenges, and then the question will be, what happens when these large language model systems start to converge and they're talking across one another? The makers of these technologies are not going to be able to contain that either. So that's going to make the point of view even harder to see.

- Yeah. And that disambiguation, when we met last week to talk, that was one of the things I'm really interested in: how much harder it is to identify and address racism the deeper it's embedded in the system. Michelle Alexander in "The New Jim Crow" talks about this. Slavery was an abomination, but it was extremely overt. It was out in the open, which made it easy to spot and, eventually, more or less abolish. It was replaced by Jim Crow, which was also explicit, but had this "separate but equal" framing, so there was another level of abstraction. And now, with mass incarceration, there's yet another level. You know, when you wrote your book and were doing research 10, 12 years ago, Google searches for Black girls made the problem easy to spot. Or Google Photos tagging Black faces as gorillas was easy to spot. Now that those overt examples are cleaned up, how much harder is it with something like ChatGPT to identify where there are problems in the system and how to root them out?

- It's going to be very difficult. Already, ChatGPT has been programmed to put up disclaimers that some of its results might be biased, right? Or if you ask it questions about race or gender, I've noticed it leans toward this kind of colorblindness. That means people think there's empathy, or some type of recognition happening in these systems that bad things could happen. It's also a legal disclaimer, but it doesn't appear that way. It's not as explicit as the legalese of a terms-of-service agreement, right? It's more like these empathic responses that have been programmed in, and yet still you're going to get racist content out of these. So I think that is really troubling, and an obscuring is happening. There's the content of what happens, what these predictive pattern-recognition technologies predict and output, but there's also the entire labor and supply chain ecosystem that it takes for the electronics-filled realities that we live in in the West to happen. These companies outsource so much of that kind of exploitative labor to the global south, right, to countries of the global majority, we could call it. And I think that also is part of the obfuscation, like the incredible energy impact of these systems. And the kind of labor: we're always talking about software engineers when we talk about tech. We're not talking about coltan miners, we're not talking about e-waste workers, people who actually have to disassemble the electronics, who have cancer by the time they're 30 because they've been working in these e-waste sites since they were children. That kind of exploitation is also obfuscated when we talk about things like OpenAI and large language models.
And I am really committed to making those parts more visible, because I think if we knew what it cost us in terms of human beings, human life, environmental degradation and so forth, we would say, "Why do we want this?" The modern-day enslavements that are happening around the world, the modern-day harms, have just been offshored. They're not right here in your household, in your neighborhood, in the same way in the United States as they were before. But this is why we need these kinds of solidarities around the world as workers, and to understand our work in relationship to other people's work.

- You know, some of the folks who are listening today, certainly people who work at Indeed, and others, might be in positions of power: technological, political, organizational. What would you want someone listening to take away from this conversation, in terms of what they can do to be aware of and try to address these dangers of algorithmic bias?

- Justice and the right things are like a million decisions, everyday decisions that we make, all of us, all the time. There's not just some big meteor of justice that's going to hit the planet. You know? Which probably is not the right metaphor, 'cause that seems like it would be terrible. There's not going to be some cooling justice breeze that's going to skim our faces and make it right. It's going to be millions of everyday decisions that we make in service of the right things. Those are the things that I think employees, companies, workers can feel empowered around, and should be empowered to do. And of course, that means having facility with the kinds of conversations that you and I have facility with. I mean, I rarely talk to corporate CEOs who have that facility, who are quoting Michelle Alexander at me, so get it, Chris. But the facility and ease with these kinds of conversations is really important. And the thing that I would say is, don't feel like you need to leave this work to be done by others. You can do it in your own work, in your own decision-making processes: diversifying your supply chain, diversifying your employee workforce, thinking about the technology that you make. Does it facilitate discrimination, or does it intervene upon it? How do you educate clients to understand that the route to success is circuitous, that it doesn't necessarily look like you were valedictorian, and then you went to Stanford, and then you came to work here? Maybe you had all these kinds of experiences that don't look like what the model would predict for success, because that AI model is part of the problem, right? In matching clients with employees, maybe what they need is the person they never would've imagined. And that is the kind of facilitation that your company does, and the kind of education that also has to go with it.
So I think we're all in it together is the bottom line.

- I'd like to close with the same question that we always ask at the end. When we were talking last week, you said at one point, "This is not an optimistic moment." However, I like to always end with this question: given everything that we've been through as a world over the last few years, what, if anything, has left you with some hope for the future?

- Men don't shout at me at conferences anymore. These are common-sense conversations that more and more people are having. I hear arguments made on the internet and in the classroom and around the dinner table that are common sense today that were not common sense a decade ago. And I think part of that is because there's more harm, there's more consequence, there's more negative output from these systems. But we also have a real awakening of awareness about those harms. You know, there really are hundreds of thousands of people around the world who are developing acute expertise in these conversations and are really trying to do the interventions that we need. And that makes me feel incredibly hopeful.

- Dr. Safiya Noble, thank you so much for joining us today, for sharing your experience and your incredible research. And thank you for everything you do to help enlighten the world.

- Thank you. Thanks for having me. It's really an honor.