
In October of last year, I got an email from my agent saying that one of my published novels had been scraped—that is, used without my permission by companies like Meta and Bloomberg to train AI systems.

The book—a romance published years ago under a pen name—was just one of hundreds of thousands used to train AI. Lawsuits by authors including John Grisham and George R.R. Martin quickly followed. A few months later, in December, the New York Times sued OpenAI and Microsoft for scraping its content as well—in this instance to train AI news bots, with which the paper argued it now must compete. 

During the recent Writers Guild of America strike, the world watched as creatives fought for their jobs and benefits, all while executives claimed AI was just as good as humans at creating content. 

This all gave me pause. As a writer, was AI coming for me? Would I even have a job in five years—or in one?  

At the same time, I knew AI was fueling breakthroughs, including detecting some types of cancer more reliably than humans can and predicting whether a driver is about to hit the car ahead.

It was all a little brain-scrambling. Was AI here to help us or harm us? 

To get a handle on the AI landscape, I reached out to LSA experts—alumni and faculty in a variety of disciplines and industries—for help and perspective. I asked them to illuminate whether, culturally, we’re experiencing the rise of the machines or the dawn of a new day. Or some unblazed trail in between.

AI isn’t new; it’s been around for decades in the form of algorithms that are trained to select pieces of data that they think might be useful to us. Essentially, it’s a math problem that is trying to solve for what we want and need—like getting the right ad to pop up in our social media feed, or returning Google results based on where we live. 
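To make that “math problem” concrete, here is a minimal sketch—with invented signals and weights, not any real platform’s code—of how a recommendation algorithm might score candidate ads and surface the one it predicts is most useful to you:

```python
# Minimal sketch with made-up signals and weights -- not any real platform's code.
# Each candidate ad gets a numeric "usefulness" score; the highest score wins.

ads = [
    {"name": "running shoes", "topic_match": 0.9, "clicked_similar_before": 1.0},
    {"name": "lawn mowers",   "topic_match": 0.2, "clicked_similar_before": 0.0},
]

def relevance_score(ad):
    """Weight each signal and add them up; higher means 'more likely useful to you.'"""
    return 0.7 * ad["topic_match"] + 0.3 * ad["clicked_similar_before"]

best_ad = max(ads, key=relevance_score)
print("Show this ad:", best_ad["name"])  # prints: Show this ad: running shoes
```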

What launched AI into the daily news and started a new tech arms race was generative AI. In this case, a program like ChatGPT could take data about, say, dolphins and do more than just regurgitate facts. It could write a limerick or term paper about dolphins. It could paint you a dolphin picture, or generate an image of Dwayne “The Rock” Johnson riding a dolphin.

AI was suddenly mimicking humans’ cognitive ability to take disparate pieces of data and connect them in new ways.

A University of Michigan committee recently found that 60 percent of faculty and students had used generative AI (GenAI) systems. The committee’s report emphasized both the risks of a tool that can be used to plagiarize and fabricate and the promise of GenAI to drive mind-bending discoveries.

Generative AI was certainly revolutionary, but there were problems right away. Most significantly, there was no guarantee that the information in the dolphin term paper would be right. Or that the dolphin in the picture wouldn’t have human hands. Or that The Rock wouldn’t have dolphin fins. That’s because the material generative AI scrapes comes from an enormous range of sources that may be incomplete or simply wrong. In other words: bad data in, bad data out.

This isn’t a big deal when it comes to dolphin limericks. But there are ways in which AI can replicate and perpetuate harmful biases, according to Apryl Williams, an assistant professor of communication and media in LSA.

“We tend to think of AI as an objective product, a science,” Williams says. “It gets complex and long, but at the end of the day it’s just math.” The problem with that perspective, Williams says, is that it’s omitting both where the data is coming from and who is creating the algorithm—and the biases inherent in both.

“The people who are writing the algorithms will create them based on their background, what they learned in school, and their own cultural contexts,” explains Williams. “Typically, people who are creating the algorithms are white researchers. They often don’t include women, people of color, or people with disabilities.”

Williams has seen these biases firsthand in her research on dating apps, which she recently published in her book Not My Type: Automating Sexual Racism in Online Dating (Stanford University Press, 2024). By studying patents filed by a large dating company, Williams found the technology may rank people on attractiveness based on European standards of beauty. “So if you’re not the ideal Western standard of beauty or attractiveness, your score will be lower,” Williams says. “If you’re not small-framed, if your face is not angular, if you don’t have blond hair, the algorithm will evaluate you as less than attractive.”

And it’s not just dating apps. Recently, Bloomberg analyzed more than 5,000 images generated with Stability AI’s text-to-image model and found that the program reflected race and gender stereotypes. For example, people with lighter skin tones were shown in high-paying jobs, while subjects with darker skin tones were shown in jobs like “dishwasher” and “housekeeper.”

“The computers learn everything from us,” says Jennifer Blum (Ph.D. ’11), a director of AI and analytics. “They’re holding up a mirror. They only know what they’ve been taught.”

Blum is among the researchers working to improve AI so that it produces better results—a process commonly called “training” a model. “You want customers to just be able to use AI and know it’s right,” she says.

Blum started in cybersecurity, writing code to protect industry networks from hackers. These days, she works for a private company called HII, using AI to help her clients solve an array of problems. At the center of some of her work is “finding the correct data—or making sure the data is good—to teach a computer.”

Blum’s work, and that of other data scientists, is significant because one of AI’s biggest limitations is that it doesn’t know when it’s wrong. In fact, AI will often go to great lengths to convince users of an answer, even if that answer is wildly incorrect.

“[AI’s] goal, when you get down to it, is to fill in the blank convincingly, not correctly,” journalist Devin Coldewey wrote in an online article for TechCrunch.
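A toy example makes Coldewey’s point concrete. In the sketch below—with invented word probabilities, nothing like the scale of a real system such as ChatGPT—the model’s only job is to continue a sentence with a statistically plausible word; nothing checks whether that word is true:

```python
import random

# Toy illustration with invented probabilities -- not how any real chatbot is built.
# The model's only job is to pick a plausible next word; truth never enters into it.
next_word_probs = {
    "mammals": 0.55,  # common in the imagined training text, and true
    "fish":    0.30,  # also common in the imagined training text, but false
    "robots":  0.15,  # rare, and false
}

def fill_in_the_blank(probs):
    """Sample the next word in proportion to how often it appeared in training data."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print("Dolphins are", fill_in_the_blank(next_word_probs))
# Roughly three times in ten this prints "Dolphins are fish" -- fluent, confident, wrong.
```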

Indeed, the reason generative AI produces the answers it does is still a mystery given the billions of calculations it performs. Humans built the algorithm, but somewhere along the line, we lost the ability to explain it.

“We don’t quite know how it works,” says Chandra Sripada, the Theophile Raphael Research Professor of Clinical Neurosciences and professor of philosophy at LSA. As a result, scientists are “opening the black box of AI and looking at the steps in its processing. What kind of training is needed to create intelligence? What kind of inputs start to generate its intelligent behavior?”

As it turns out, these questions are tricky when it comes to people, too. “The field of cognitive science has not had a good understanding of how intelligence is generated in humans,” says Sripada, also the director of the LSA Weinberg Institute for Cognitive Science. “How does information come into the mind and become mentally represented for other mental processes to use and transform?”

AI, he says, could begin to crack the code. “Now, here is this great model to potentially learn how thinking, creativity, and reasoning work.”

AI is also learning so fast, with such vast amounts of data, that there is a question of whether it can evolve on its own. Can AI become so learned that it’s aware it contains harmful biases or that its own code is bad? Can it become so smart that it surpasses human cognition and takes over—either for humanity’s benefit or its detriment?

“These machines can potentially become so powerful that we may lose our ability to control them,” Sripada says. “When you have entities that are so powerful and have so many upsides that people want to use them, how do you guard against serious downstream risks? There are challenging questions inherent in this work.”

He adds that those are the questions LSA is poised to address. “LSA is the home of humanistic disciplines—ethics, political science, and other humanities—exactly the intellectual disciplines that can begin to answer the profound and challenging questions AI presents. The deep scholarly and humanistic interrogation of AI will happen in LSA.”

But in order for that to occur, Sripada says, U-M needs to show up at the AI table—and fast—as its peer institutions have. He cites a September 2023 Inside Higher Ed article describing how leading universities are funding AI centers and initiatives and hiring experts across an array of disciplines. “To hold our place as a world-leading institution at the frontier of scientific and humanistic questions, we do need to invest substantially. University-wide certainly, but especially in LSA, in the science and the ethics of AI.”

In the meantime, it may be worthwhile to treat AI with a healthy dose of skepticism. “I’m not worried about computers taking over; I’m more concerned about the people using the computers,” says Blum. “I’m concerned with people getting lazy, getting comfortable, not questioning what they are reading or seeing.”

It’s this place of discomfort—of challenge and difficulty—that Jim Burnstein, professor and director of LSA’s Screenwriting Program, tells his screenwriting students is at the heart of the creative process. “You’ll be trying to come up with that next story and [decide to use AI] to get unstuck, but being stuck is where all of your creative breakthroughs come from,” he says. His advice is for writers to not use AI at all.

Webb Keane, the George Herbert Mead Distinguished University Professor of Anthropology, recently co-wrote an op-ed in The Spectator with Yale professor Scott J. Shapiro about concerns over the use of AI, arguing that it can “trick users into surrendering their autonomy and delegating ethical questions to others.” Specifically, Keane and Shapiro were referring to AI “god-bots,” which take on the persona of a divine entity—Jesus, Krishna, Buddha—and answer questions posed by users.

The inability to explain how the god-bot generates its answers, or why—a limitation inherent in any AI, as Sripada noted—may make it look like the bot is channeling the superhuman or divine. “When such ineffable workings produce surprising results, it seems like magic,” wrote Keane and Shapiro. “When the workings are also incorporeal and omniscient, it all starts to look a lot like something divine.”

Keane and Shapiro also argue that the bots shouldn’t be able to speak in “absolutes and spurious certainties. They should make clear they are only giving probabilities.”

But regulating what AI can and can’t do is a lot like replacing the wheels on a train after it has left the station. Ahead of the 2024 presidential election, for example, deepfakes are proliferating—deceptive audio, video, and still images generated by AI to mislead voters or suppress turnout.

In their op-ed, Keane and Shapiro implored users not to give any AI authority over their lives, to not connect a piece of code to something superhuman or divine.

Blum is more blunt: “Just don’t be stupid about it,” she says. “I really worry about people’s arrogance. We do all these things to make money or because they’re fun and entertaining. But we could benefit from asking if something is a good idea.

“We could stand to be a bit more humble and a bit more cautious.”
 



 
Illustrations by Becky Sehenuk Waite
