A recent study published in Computers in Human Behavior has found that people evaluate others harshly when they know a message was written using artificial intelligence. Yet in everyday situations, recipients rarely suspect that artificial intelligence was involved at all. When left in the dark about how a message was created, they assume a human wrote it and form positive impressions of the sender.
Generative artificial intelligence refers to computer programs that can produce realistic, human-like text based on simple user instructions. People increasingly use these tools (such as Claude, ChatGPT, and Gemini) to draft emails, social media posts, and text messages. Researchers Jiaqi Zhu and Andras Molnar wanted to explore how relying on these programs affects how we view one another in daily life.
Writing a thoughtful message usually requires time and mental energy, and that effort signals a sender's sincerity and investment in a relationship. Because text-generating programs remove this effort, the researchers wanted to know whether the mere availability of these tools makes people more suspicious of the messages they receive.
Past studies have shown that people judge communicators negatively when they know a message was generated by artificial intelligence. However, in the real world, people rarely admit that they used a computer program to write their emails. Zhu and Molnar conducted their research to see how people form impressions in realistic situations where artificial intelligence use is kept secret or remains uncertain.
“In academic settings, discussion of generative AI has become unavoidable since ChatGPT’s release in late 2022. For most instructors, detection and regulation of AI use are now part of the job, and in this climate, it’s easy for vigilance to slide into full-on paranoia. Some instructors may even become overzealous, reading AI into writing that may be entirely human, as evidenced by the growing number of high-profile lawsuits against colleges over students who were failed or expelled based on suspected AI use,” said study author Andras Molnar, an assistant professor of psychology at the University of Michigan.
“But in my conversations with people outside academia, I realized we might be living in a bubble: what feels routine in academia may not reflect how people think elsewhere. That’s what motivated our study: we wanted to understand whether people suspect AI use in everyday contexts like emails, text messages, and social media profiles.”
