In my work, I encounter people who would never trust the internet to tell them what is right and wrong, yet they trust generative AI to tell them how to write.

There is a long, problematic history of people judging different kinds of English, rewarding how it is spoken or written by some sectors of society and devaluing how it is used by others. When generative AI language tools arrived, they scaled up these problems. English-based large language models are trained on text from the public internet, and human instructions then tune them to sound like formal, standardized English. As a result, large language models absorb the bias baked into standardized human texts and ideas.

At its best, AI English is a language database driven by statistics. It’s big, but it’s canned. History tells us that the full range of global human Englishes gives people the greatest possibilities for expression and connection.

Read the full article in The Conversation.