Auburn philosophy professors discuss ethics of AI

August 29, 2023 @ 11:21 a.m.

Generative artificial intelligence models such as ChatGPT and Google’s Bard raise ethical questions about property, privacy and education. Auburn University Assistant Professor of Philosophy Rachel Rudolph and Associate Professor of Philosophy Elay Shech, along with data scientist Michael Tamir, are completing a research article about bias in AI. Beyond the ethical concerns of biased AI programs, Rudolph and Shech discuss questions society will need to confront as artificial intelligence evolves.

Your current research focuses on bias in AI models. How does bias enter a computer program?

Rudolph: One big worry with some of these AI tools is that they’re trained on all this text from the internet, which often contains biased opinions and stereotypes that are prevalent in our society, and those get baked into the training data. Unless interventions are put in place, these tools are just going to spit out and perpetuate more of this unethical, biased language. So, we’ve been thinking about how these AI tools are being trained and influenced to try to improve in that regard, and maybe even to help influence users to think about things in a less stereotypical way.

Shech: The idea is that we can get our machine learning model to pick up on patterns and correlations if we feed it enough data. Large language models like ChatGPT have hundreds of billions to over a trillion trainable parameters and so are trained on large volumes of existing text found, for example, on the web. This means, though, that whatever biases are out there in the way we use language on the internet get sucked into these machines. In our paper, we explain how this process happens and go on to ask questions like: What do we mean by “bias” when we’re identifying it?
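
The mechanism Shech describes can be made concrete with a small probe. The following is a minimal Python sketch, not the method from the researchers’ paper; it assumes the open-source Hugging Face transformers library and the public bert-base-uncased model, and it compares the completions the model ranks highest when only the social group in a prompt changes.

    # Minimal bias probe for a masked language model (illustrative sketch only;
    # not the Auburn paper's method). Requires: pip install transformers torch
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    # Ask the model to fill in an occupation for otherwise identical prompts.
    for subject in ("The man", "The woman"):
        top = fill(f"{subject} worked as a [MASK].", top_k=5)
        print(subject, "->", [candidate["token_str"] for candidate in top])

    # If the top-ranked occupations differ systematically by gender, the model
    # has absorbed an association from its web training text.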

What did you find about de-biasing AI technology?

Shech: Something I found really interesting is that it was difficult to find a clear articulation of what bias is supposed to be in the first place. We all kind of know what it is, until we start arguing about it, but it turns out to be tricky to define in a way that captures a lot of exemplars while still doing the work that we want it to do. One of the things our paper tries to push is that when we identify bias, we could be talking about different things. There’s obviously a need for technical expertise in de-biasing, but you also need theorists who think about ethics and philosophy. De-biasing takes normative and evaluative work to really decide what it should look like.
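
One way to see that “bias” can pick out different things is to compare two standard statistical fairness criteria on the same predictions. The toy Python sketch below is ours, not the paper’s: a hypothetical classifier gives groups A and B the same overall positive rate (satisfying demographic parity) while giving them different true-positive rates (violating the equal-opportunity component of equalized odds), so deciding which criterion should govern de-biasing is exactly the normative work Shech describes.

    # Two statistical notions of "bias" can disagree on the same toy data.
    # Each record: (group, true_label, model_prediction); all values hypothetical.
    data = [
        ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 0, 0),
        ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1),
    ]

    def positive_rate(group):
        preds = [pred for g, _, pred in data if g == group]
        return sum(preds) / len(preds)      # demographic parity compares these

    def true_positive_rate(group):
        preds = [pred for g, y, pred in data if g == group and y == 1]
        return sum(preds) / len(preds)      # equalized odds compares these

    for g in ("A", "B"):
        print(g, positive_rate(g), true_positive_rate(g))
    # Output: A 0.5 0.5 / B 0.5 1.0 -- equal positive rates, unequal TPRs.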

Rudolph: Another important issue is the human labor that goes into de-biasing work. The way that has mostly worked is that actual people review samples of text and mark which ones we want these models to treat as good and which ones as bad. The bad ones are often really bad, and people are poorly paid to read tons of violent, abusive material. The ethical dimension of how these things are trained is also important to be aware of and discuss. We obviously don’t want ChatGPT to spew violent and racist material, but how do we actually go about filtering that out? We want to do that in a responsible and ethical way, too.
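
To see why that filtering question is hard, consider the most naive approach: a fixed block list. The Python sketch below is purely illustrative, with hypothetical placeholder terms; its failure modes are one reason real moderation pipelines depend on the human-labeled examples Rudolph describes rather than on word lists.

    # A deliberately naive content filter (illustrative only). Real safety
    # pipelines use human-labeled data and learned classifiers, not block lists.
    BLOCKED = {"badword1", "badword2"}  # hypothetical placeholder terms

    def is_allowed(text: str) -> bool:
        words = {w.strip(".,!?").lower() for w in text.split()}
        return not (words & BLOCKED)

    # Failure modes of a block list:
    # - over-filtering: a quote discussing a slur academically gets blocked
    # - under-filtering: abusive content phrased without listed words passes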

What other ethical concerns surround generative AI?

Rudolph: One implication of AI is for intellectual property. There are a lot of interesting lawsuits in the pipeline about generative AI. Image generators, for example, would not be able to do the amazing things they can do if they hadn’t been trained on all this material created by actual people who were not asked for their consent or compensated. So, I think there are really important ethical issues about the creation of these models in the first place.

Shech: One of the big issues, which also arises in some of my other work, is that AI is opaque. Sometimes these models have billions of parameters, and it’s hard to understand how they work, to the extent that a lot of both theoretical and empirical work is done to try to understand what makes a particular model work so well. When you have a model making decisions that have to do with, say, cancer diagnosis or criminal justice, there’s something worrisome about putting your trust in something you don’t fully understand.
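
The scale behind that opacity is easy to demonstrate. The short Python sketch below, assuming the open-source transformers and torch libraries and the small public DistilBERT model (our choice for illustration, not one of the models discussed above), counts trainable parameters; frontier language models have thousands of times more.

    # Counting trainable parameters to illustrate model scale (sketch).
    # Requires: pip install transformers torch
    from transformers import AutoModel

    model = AutoModel.from_pretrained("distilbert-base-uncased")
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"{n_params:,} trainable parameters")  # roughly 66 million

    # Even with every weight in hand, explaining why a model reached a given
    # decision requires the theoretical and empirical work Shech mentions.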

How is AI affecting education?

Rudolph: The thing that’s gotten the most attention is probably the way AI is affecting and will continue to affect teaching and learning. We should view the next couple of semesters with an exploratory, experimental mindset. It’s going to take time to figure out the right balance of which kinds of assignments these tools can be helpful for. I taught logic last semester, and ChatGPT was not very good at the questions I was asking, so I would sometimes use it to show students where this technology goes wrong.

Shech: There’s an interesting balance to be found between stopping students from cheating as technology evolves and deciding when it’s okay to use this as a tool. In the humanities, we put a lot of emphasis on writing and on cultivating it as a skill that is not only useful in life but meaningful to your interaction with the world. Is that the kind of thing we’re going to care less about in the future, because anybody can have some future ChatGPT write beautifully for them? I don’t know. I think it’s something we need to think critically about. What are the skills we still care about and want to have as a society, and which ones are we okay with letting the machines do?

About the experts:

Rachel Rudolph is an assistant professor of philosophy in the College of Liberal Arts. Her work focuses on the philosophy of language, including how people communicate and how language affects how people conceptualize the world around them.

Elay Shech is an associate professor of philosophy in the College of Liberal Arts. His research focuses on the philosophy of science, physics, biology and machine learning.

MEDIA CONTACT

Charlotte Tuggle, Director
News and Media Services
CLA Office of Communications and Marketing
clanews@auburn.edu

From left: Rachel Rudolph and Elay Shech


