Sweetland Center for Writing, Summer 2023

Chat-GPT and other chatbots have been in the press regularly since the last months of 2022, and the technology provokes fear (this will be the end of the English major or the college essay), excitement (it can answer a wide range of questions and quickly produce texts in a variety of genres), and everything in between. Instructors and students now have access to technologies capable of composing text in response to user prompts: OpenAI’s Chat-GPT, Microsoft’s Bing, Google’s Bard, and the University of Michigan’s UM-GPT. Other generative artificial intelligence (GenAI) tools are also widely available and have the capacity to output visual images, videos, and music. The widespread use of these technologies raises concerns about the future of writing in education, equitable access, academic integrity, responsible and ethical uses, and privacy.

Information about the technologies is available on U-M’s Generative Artificial Intelligence website. In the spring, Sweetland’s Director, Tessa Tinkle, served on a GenAI Advisory Committee charged with making preliminary recommendations to the Provost about how the campus might address the new technologies. The Committee’s report is on the GenAI website, and portions of the report and the thinking behind it have become part of this document and Sweetland’s website. Sweetland’s website will be augmented in the coming months as our research progresses.

Writing is a crucial part of learning in all disciplines, so it is not advisable for instructors to eliminate writing assignments with the hope of preventing students from using a chatbot to cheat. It is also not advisable to require all student texts to be handwritten in class. This strategy does not promote thoughtful writing practices, and it disadvantages some students with disabilities as well as English language learners.

The U-M community will need extensive research to understand and manage the risks and opportunities associated with the ongoing technological transformation. This brief document offers preliminary suggestions about how instructors might integrate chatbots experimentally into writing courses, not to replace students’ intellectual labor, but to explore how the technology might enhance learning, supplement students’ research and writing processes, and advance GenAI literacy.

Why Might Instructors Consider Teaching Writing with a Chatbot?

Chatbots and other GenAI tools are already being used by students and instructors as part of research and writing processes:

  • At the start of a research project, a chatbot can offer a general overview of a topic and suggest some publications for the student’s review.
  • A chatbot can be prompted to refine and narrow the topic, suggest research questions, and create a preliminary bibliography (this might contain references to nonexistent works, so the references have to be checked).
  • A chatbot can be used to suggest several outlines, thesis statements, or titles for a paper. Students may use the suggestions to prompt their own thinking.
  • A chatbot can draft a paper, and a student can use that text as the basis for their own revision practice, tracking changes to reveal their revisions and then reflecting on how the draft advanced or hindered their learning and thinking processes.
  • A chatbot can give feedback on a draft, including suggestions for improvement. (It cannot necessarily implement its own suggestions.)

Chatbots can be used at any point in the writing process. Indeed, the technology can be integrated into scaffolded assignment sequences (e.g., Paul Fyfe, “How To Cheat on Your Final Paper: Assigning AI for Student Writing,” which describes an experiment in asking students to use a chatbot). At present, however, these tools cannot determine which ideas are worth pursuing, or which outline would best meet the writer’s purpose.

Users need to exercise care and be aware of chatbots’ limits and weaknesses, while remaining skeptical about the output:

  • Data input (i.e., questions or prompts) into commercial tools is not private, so users should not disclose sensitive personal information.
  • Chatbots are neither sentient nor capable of reasoning. They cannot evaluate accuracy of information, correctness of scholarly references, or the truth value of output. The words generated by chatbots may sound authoritative but are not based on knowledge and are not necessarily factual. The output must be fact-checked for misinformation, and citations must be verified.
  • Chatbots are probabilistic models that predict likely word sequences, so the output can be overly general, nonspecific, and bland.
  • The technologies raise multiple ethical concerns:
    • exploitative labor practices are evident in the tools’ development;
    • the energy-dependent technology produces environmental risks for those least likely to benefit from the tools; and
    • the training data are not curated for diversity and reproduce social biases and stereotypes while marginalizing non-hegemonic voices and dialects.
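The claim above that chatbots are probabilistic next-word predictors can be made concrete with a toy model. The Python sketch below is purely illustrative (it is not how any production chatbot works): it counts which word most often follows each word in a tiny made-up corpus and “predicts” accordingly. Large language models perform a vastly more sophisticated version of this same next-word prediction over enormous datasets, which is why their output can sound fluent without being grounded in knowledge or fact.

```python
from collections import Counter, defaultdict

# A tiny, invented training corpus (an assumption for illustration only).
corpus = "the student writes the draft and the student revises the draft".split()

# Count how often each word follows each other word (a bigram model).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("draft"))    # prints "and"
print(predict_next("revises"))  # prints "the"
```

Even this toy model shows the key point: the “prediction” reflects only statistical patterns in the training text, not understanding, which is why output skews toward the generic and must be fact-checked.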

UM-GPT protects users’ privacy, so it is preferable to commercial technologies, though it has the same weaknesses and limitations in terms of output and ethics.

Detection tools cannot reliably detect AI-generated text. At present, such tools produce many false positives and false negatives, creating risk for students. If we enter student writing into these tools, moreover, we violate their right to privacy.

Given the broad availability of the technology, and the impossibility of detecting students’ uses of it, instructors should consider integrating a chatbot into courses in order to develop students’ ability to think critically about text output, and to identify biases, inaccuracies, falsifications, ethical implications, benefits, and limits of the technology. We have an opportunity to guide students toward responsible uses of the technology that enhance learning and productivity.

Course Policies for Chatbots

GenAI is changing rapidly, and new tools will become available. In the near term, course policies will be provisional and subject to change. U-M schools and colleges have not yet updated Academic Misconduct policies to take account of GenAI. As policies develop, several principles can guide instructors to develop policies appropriate for their contexts:

  1. Protect the cognitive dimension of learning: GenAI should enhance, not hinder, learning.
  2. Responsibility for content: Students are responsible for all content (ideas, facts, citations), however the work is generated. (Note: chatbots can and do generate untrue, biased, inaccurate, and fictional content.)
  3. Require ethical transparency: Students should be transparent about how they use GenAI and adhere to standards of academic integrity.
  4. Explain academic misconduct: Instructors should distinguish between allowed and disallowed uses of GenAI, including how, when, and why GenAI may or may not be used. 

Instructors have four basic options for course policies (see the Sentient Syllabus Project for more complex and detailed suggestions):

  1. Writing is foundational to this course, and I strongly recommend that students not use ChatGPT or similar technologies. You are expected to use your own knowledge, your own words, and your own critical thinking and reasoning skills at every point in your writing process.
  2. You may use chatbots only with my pre-approval and only in the contexts I specify.
  3. Some assignments in this course require you to use a chatbot to develop generative artificial intelligence literacy and critical thinking skills. You will turn in your prompts and the text output as part of these assignments, as well as your own personal reflection on how well the chatbot performed and how your use of the tool impacted your learning.
  4. You may use chatbots or other GenAI tools in this course so long as you document your use, indicate how you’ve checked the accuracy of the output, turn in your prompts as well as the output, and explain your reasoning for using the technology. You are responsible for what you turn in for assessment, including any inaccuracies or factual errors in the text.

It is difficult at present to specify penalties for unpermitted chatbot use. Since detection tools are not reliable, a ban on chatbots cannot be enforced fairly. An instructor who engages students in evaluating output and considering (un)ethical uses will be in a position to guide responsible uses and critical evaluation of the output. 

Evidence-Based Writing Pedagogy

Ideally, the goal of a writing assignment is for students to learn from the experience of writing, not merely to demonstrate what they have already learned. The following features characterize evidence-based best practices for academic writing at all levels, from the first year through the dissertation. The Sweetland Center for Writing Guides to Teaching Writing explain these practices and provide examples from multiple disciplines. Best practices have new implications specific to chatbots. The following pairings offer preliminary ideas about how instructors might add GenAI awareness to their pedagogy. Since GenAI is constantly changing, all suggestions are provisional and will need to be updated as the technology develops.

Best practice: Make expectations for writing assignments clear by explaining the social context, academic conventions, genre, audience, and purpose for the task.
GenAI-aware practice: Test the assignment by putting it into UM-GPT. Discuss the output with students, demonstrating the strengths and limits of the text in addressing a specific audience in a particular genre, using appropriate academic conventions of citation and documentation of sources.

Best practice: Assign meaning-making and authentic learning tasks that require students to make their own sense of new information in light of their experiences, apply knowledge to specific real-world issues or problems, synthesize multiple sources, and evaluate information.
GenAI-aware practice: ChatGPT is trained on data published before 2021, so it cannot respond plausibly to current events. Moreover, the training data do not include a wide range of materials, including students’ life experiences. GenAI-aware writing tasks can require students to draw on their own experiences or apply their knowledge to current situations.

Best practice: Focus on the process of writing and learning more than on a polished product.
GenAI-aware practice: Since chatbots can be used at any stage of the composing process, students must learn to use the technology effectively and responsibly, which includes prompt writing, assessment of ideas and organization, evaluation of appropriateness for audience and purpose, fact checking, and source/citation verification. Assignments can require them to turn in all prompts and output from a chatbot, using track changes to show how they’ve revised the output. Students can also reflect on the ethics of their use and on how using the chatbot enhances or hinders their learning.

Best practice: Design interactive writing processes in which students receive formative feedback on a draft from peers and/or an instructor.
GenAI-aware practice: GenAI can offer suggestions for text improvement, and students can learn to determine whether and how the feedback is helpful. Interacting with peers and the instructor at specific points in the writing process emphasizes that writing is a social act and that readers react diversely to the same text. Responding to formative feedback from peers and instructors can teach students to re-evaluate their work, check it against their own purposes, and explain why they developed their ideas and expression one way rather than another. Interactive writing processes enhance learning in multiple dimensions of knowledge, skill, and values.

Best practice: Assign metacognitive exercises as part of the planning, drafting, and revision stages of a major writing assignment: e.g., require students to reflect on how well their planning worked, what they learned from formative feedback, and why they revised as they did.
GenAI-aware practice: Machines cannot think, but chatbots can output a “metacognitive” reflection of sorts. Instructors should nonetheless require students to reflect on their writing process, what they learn from interaction with peers, instructors, and even GenAI tools, the decisions they make about revision, and how they adhere to academic conventions (source citation, use of sources). Focusing on process, interactive learning, and metacognition foregrounds individual learning and awareness of learning that does not come from a machine.

Best practice: Assign some high-stakes writing that is completed outside of class, scaffolded so that complex tasks are broken down into a manageable sequence, revised in response to formative peer and/or instructor feedback, and accompanied by reflection on that feedback.
GenAI-aware practice: Focusing on the writing process, interactive learning, and metacognition is perhaps even more important in GenAI environments. Exercises using a chatbot at various stages in the writing and research process can be added for the sake of comparing GenAI output with human invention.

Best practice: Make standards for assessment transparent and explicitly connected to learning goals for the course, whether through a rubric or a less formal tool.
GenAI-aware practice: ChatGPT can output mechanically and grammatically correct prose, but it also eliminates minoritized dialects. Since machines can produce “correct” prose, instructors should consider reducing the weight of correctness in grading standards. Assessment should prioritize critical thinking, the learning process, research, problem-solving, analysis, expression of linguistic identity, effective ways of addressing specific audiences, synthesis of multiple sources, application to unique real-world situations, personal relevance and application of concepts, fact-checking skills, and reflections on feedback and revision.

Best practice: Explicitly teach how to evaluate sources, integrate primary and secondary sources into a composition, and document use of sources.
GenAI-aware practice: If GenAI use is permitted, scaffold the writing process to include fact-checking, verifying sources of information, identifying bias, recognizing fabrications, and adhering to academic genre and citation standards. These are functions chatbots do not perform well, making students’ ability to perform them all the more important.

Sample Writing Assignments using Chatbots

Here are some classroom activities and assignments that require students to use a chatbot and evaluate the text output. For the sake of students’ data privacy, these assignments are intended to be implemented with UM-GPT.

Educate students about the factual unreliability of text output by having everyone in the class ask UM-GPT to create a 100-word biography of themselves. Here’s an example of what UM-GPT produced for Theresa Lynn Tinkle, Professor of English and Director of the Sweetland Center for Writing, born in Stockton, CA, birthdate specified: “Theresa Lynn Tinkle, born in Stockton, California, is a passionate advocate for arts and social justice. Starting her journey at Berkeley City College, she studied fine arts and later transferred to the University of California, Santa Cruz, where she received her Bachelor's degree in Art and Social Documentation. Theresa's work centers around exploring the intersection of art, activism, and community engagement. She has organized numerous art exhibitions, workshops, and public installations all focused on challenging societal norms and promoting inclusivity. With her strong belief in the power of art to inspire change, Theresa continues to impact her community by using her creativity to address pressing social issues.”

In fact, Tinkle’s degrees are a B.S. in education and a Ph.D. in English literature, and she has never attended the schools named in this biography. The chatbot creates a wholly fictional character with specific beliefs, achievements, and passions—ignoring the prompt’s factual information about her current position.

When asked for the source of this information, UM-GPT responds: “As an AI language model, my responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. I have not been directly trained on specific biographical information or have access to sources to cite them. I should note that while I strive to provide accurate and up-to-date information, my responses may not always reflect the most current news events or developments. News is constantly evolving, and new information can emerge that may change the context or understanding of a given topic.”

This exercise can introduce students to the chatbot’s limits and its tendency to fabricate untruths that sound authoritative.

Assign students in small groups to ask UM-GPT questions about a subject they know very well, and then to reflect on how their knowledge compares with GenAI output.

Enter a writing assignment into UM-GPT and ask the chatbot to produce an essay of a specified length. Use the output as the basis for a class discussion about the quality, accuracy, voice, rhetorical strategies, point of view, use of references, and truthfulness of the text. Next, ask the chatbot to create a grading rubric for the assignment, and use the output to initiate a discussion with students about the most important learning goals for the assignment (the chatbot will probably not be right about the course-specific objectives).

Assign students the task of comparing feedback on a draft from a peer, the instructor, a peer writing tutor, and UM-GPT. Require them to identify feedback strategies they find helpful or not, and to evaluate the revisions they made based on each source of feedback.

Assign a scaffolded essay project that uses UM-GPT at specific stages in the writing process (brainstorming, preliminary research, drafting, feedback for revision). Students retain a record of text input and output, and analyze the process in terms of ethics and learning value, quality of the output, and lessons learned about effective prompts. A next step might require students to complete the same process without a chatbot, adding a reflection on how their understanding of the topic was affected by the different composition processes.

Debate UM-GPT. Have students pose a controversial question to the bot and follow up with questions that challenge the output, present counterarguments, and request specific examples. This task may be addressed collaboratively in a class activity, or individually as the start of an assignment sequence about testing arguments. The task may provide the basis for a reflection exercise about the value of the chatbot to the learning process.

Ask students to interact with UM-GPT in order to analyze its social mimicry. What prompts lead to apologies? What prompts lead to corrections of content? How does the social mimicry create the impression of having a conversation with a sentient tool (though the tool cannot actually think or reason)?