It's become common to think about artificial intelligence in terms of the speed of its development and use; had this been written even a few months earlier, it could have begun with the revelation that the text had been produced via ChatGPT. The fact that even that trick is already a cliché demonstrates that speed remains the constant theme in both artificial intelligence and how we respond to it.
Accordingly, when Sweetland convened a working group in summer 2023 to complement the University's broader efforts to responsibly consider the utility and effects of generative artificial intelligence tools for students, faculty, and the community at large, we knew that the landscape would be changing even as we traversed it. That's why we focused less on explaining the fundamentals (What is a large language model? Exactly how do algorithmic protocols produce "hallucinations"?) and more on guiding principles and practical examples to establish the context in which the use of these tools will evolve.
Working group members included Laura Clapper, Christopher Crowder, Raymond McDaniel, Monroe Moody, Larissa Sano, Simone Sessolo, Ali Shapiro (Penny Stamps School of Art and Design), Theresa Tinkle, and Clay Walker (College of Engineering).
While we offered some lists and assessments of available tools identified as AI (even as we remain justifiably suspicious of that term as applied to those tools), along with general guidelines for their possible use in text-based and multimodal compositions, most of our work considered possible relationships between the tools' deployment and the specific assumptions and needs of various professions and disciplines. We also offered perspectives on how to consider generative AI tools in terms of linguistic justice, and designed guidelines to assist instructors in developing syllabus materials that establish the terms of conversation between students and faculty about these issues.
Ideas and theories about the role of generative AI in research, scholarship, artmaking, and writing, however, must inevitably bend to how we use the tools in practice. To demonstrate that practice, the working group also produced a number of case studies illustrating how users can apply the tools to some of the more common categories of academic work. These case studies revealed some of the limitations that have prevented generative AI use from becoming quite the problem some had prophesied. But to return to the established theme: those case studies are already somewhat out of date, and while we have laid a solid foundation for the thoughtful contemplation of these technologies, that ground will continue to shift, and we will be required to shift with it if we want to reduce the odds of being shifted, unawares, by it.