The easiest way to discuss the features, affordances, advantages, and risks of AI utilities with students is to actually use those utilities with them so both teachers and students can see how the process may unfold and discuss the results and implications. If teachers are concerned about students using something like ChatGPT to complete assignments, there’s nothing more useful than entering an assignment prompt and seeing what happens; if students are considering using AI to respond to assignment prompts, it’s just as useful to see how a teacher evaluates the response.
1. Here are a few guidelines for modeling an AI prompt response:
- Try feeding the original assignment prompt into the service and requesting clarification or streamlining. This can illuminate assumptions students may make based on the original assignment language.
- Ask the service to complete the assignment and discuss with students the quality of the initial results.
- Modify those initial results over several iterations and exchanges using a combination of the following:
- Style modifications (to determine how well the service can adapt to specific, genre-dependent norms)
- Content modifications (to generate different and/or more thorough responses)
- Structural modifications (to shape and sequence the results in more strategic or effective ways)
- Accuracy modifications (to detect factual errors or misrepresentations)
- Elaboration modifications (to provoke deeper, more thorough or focused inquiry)
2. Discuss each stage of the process and the results with students.
In that discussion, ask students to consider the following:
- How much “better” are the AI-generated results than what the students think they can do, if at all?
- How much effort does it take to make the services produce work that meets the students’ standards?
- How does this process influence their thoughts about what constitutes an effective assignment, both for learning and for developing writing skills?
- If the services can be incorporated into their writing processes, what personal guidelines do they think will optimize their use?
Here (change link) is a thorough and specific example of a test case from a First-Year Writing Course, and here (change link) is a rubric for how to generate a test case for First-Year Writing Courses more generally.
Sweetland hopes to compile an archive of test cases across multiple disciplines, so if you are willing to share any you create for your course, please submit a PDF here.