by Darrin Zhou

Nominated by Jonathan Holland for English 425: Immersion Writing

Instructor Introduction

Darrin Zhou's essay was completed for the literary journalism segment of Immersion Writing (English 425). This assignment is a long-form investigation written in a first-person, narrative style that weaves together personal experience with research. Students choose topics that are part of the cultural moment and spend nearly two months immersed in their material, investigating the subject with a journalistic approach (e.g., interviews, field observations, reportage) before ultimately crafting that material into a compelling story. To do so successfully, one must not only skillfully observe and engage with subjects, but also find avenues to get beyond the reporting, to find the story behind the story, and to give life to the material in an artful, compelling way. Darrin's essay does just that. His investigation is timely and illuminating, elegantly balancing the literary and journalistic elements of the subject while offering readers insightful reflection, deft characterization, and superb narrative development, all of which culminates marvelously at the end.

— Jonathan Holland

Who Holds the Brush? The Artist's Dilemma in the Age of Artificial Intelligence

Earlier this year, images that seemed to capture the lush, hand-drawn aesthetic of legendary Spirited Away director Hayao Miyazaki went viral. They were not leaked scenes, however, nor were they drawn by the master animator himself. They were the product of a live demo. On March 25, OpenAI CEO Sam Altman commanded a new image model to transform a casual selfie into an "anime frame." The output was uncanny: crisp lines and classic cel shading, the flat-color technique that meticulously replicates the look of hand-drawn animation. The trend ignited instantly. Software engineer Grant Slatton shared his own "Ghiblified" family photo, complete with a corgi, rendered with the studio's signature traits: voluminous hair, luminous eyes, and softly creased clothing (Chayka). His tweet, amassing over 53 million views, spawned a flood of imitations so vast that Altman joked OpenAI's servers were "melting" (Price et al.). This marked a profound shift. Unlike earlier AI memes celebrated for their absurdity, like Will Smith clumsily eating spaghetti or the Pope donning a puffer jacket, the "Ghiblification" wave appealed through the aesthetic itself: people admired the color palettes and line work as art, often forgetting their mechanical origin. The bitter irony is absolute: Miyazaki, whose life's work defined this aesthetic, has called AI "an insult to life itself" (Murthi).

Miyazaki's condemnation is mirrored by a more immediate, economic threat facing working artists. For illustrators like Dungeons & Dragons artist Greg Rutkowski, the danger is tangible. Since September 2022, his name has been used as an AI prompt over 400,000 times without his consent, more frequently than Pablo Picasso or Leonardo da Vinci (Hutchinson and John). He immediately foresaw the consequence: "I won't be able to recognise and find my own works on the internet," Rutkowski confided. "The results will be associated with my name, but it won't be my image. It won't be created by me. So it will add confusion for people who are discovering my works." His prediction proved correct. Although Stability AI eventually disabled the prompt after his public objection, Rutkowski had already noticed a drop in work-for-hire commissions as AI generated convincing imitations of his style. This is the harm of what scholar Trystan Goetze terms "style theft": clients are driven away from the original artist toward cheap, automated imitators (Goetze). Defenders of AI art rightly point to art's long tradition of inspiration. But the Rutkowski case forces a critical question: when does algorithmic "inspiration" cross the line from homage into outright theft of an artist's identity?

This conflict between artist and AI is rooted in a fundamental shift in the technology's purpose, from augmenting human intellect to replicating human creativity. Though the term "artificial intelligence" was coined at a conference at Dartmouth College in 1956 (Diamandis), the desire to automate creativity itself is ancient. Jonathan Swift satirized it in his 1726 Gulliver's Travels, where "The Engine" is a large mechanical contraption used to assist scholars in generating new ideas, sentences, and books (Chatwal). This fantasy became a reality with models like OpenAI's GPT-3, which could generate writing and code from prompts (Mucci). Initially, AI promised pure augmentation. The decisive shift came with image-generation models like DALL-E, which can generate highly detailed images from textual descriptions. These tools moved beyond assisting intellect to simulating artistry, promising to turn ideas into imagery without traditional graphic design skills (Vass). This is the turning point: the technology evolved from a tool for expressing ideas into a system that generates their aesthetic form. It is this capacity that powered the viral "Ghiblification" of photos and enabled the industrial-scale "style theft" that threatens working artists. The tool designed to bypass traditional skills has become the engine that bypasses the artist altogether.

This sidelining of the artist forces us to confront a central dilemma: who holds the brush? A definitive answer requires navigating the debate between two defining perspectives: the philosophical viewpoint, articulated by Noel Carroll, that artistic intent is a non-negotiable product of human consciousness, and the artistic practice of figures like Refik Anadol, who reframe AI as a "thinking brush" for cognitive enhancement. To frame AI as merely art or not art, however, misses the critical point. It is not a replacement for human art, but a transformative medium that, by operating without human intent, functions as an engine of cultural and economic extraction. Navigating this reality therefore requires ethical policies and institutional reforms to protect artistic labor and foster critical literacy.

The case against AI's status as art begins with the argument that artistic intention originates from a conscious mind, a requirement AI fails to meet, and a point that even theories dismissing the importance of intent ultimately presuppose. As coined by critics William Wimsatt and Monroe Beardsley in "The Verbal Icon" (1954), the "Intentional Fallacy" warns that "the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art" (Mambrol). While this argument centers on the belief that we shouldn't over-rely on authorial intent to find meaning, it presupposes that a human author existed in the first place. American philosopher Noel Carroll builds from this, contending that intention remains vital for initially identifying something as artwork (Huddleston 1-8). This intention emerges from an embodied consciousness interacting with the world, the very source the intentional fallacy takes for granted. We see this synthesis of mind, body, and world in Claude Monet's late work, where his repeated paintings of water lilies reflect a deep, perceptual engagement with his garden in Giverny (Muir and Sutherland), and in Edvard Munch's "The Scream," which viscerally channels his personal tragedies, including the deaths of his mother and sister, into iconic form (Sooke). This artistic intent is not merely metaphysical; it has a biological infrastructure. Cognitive science grounds this embodied consciousness in biology, revealing that artistic intention is facilitated by brain systems like the bilateral hippocampus, which activates memory, imagination, and creative thinking (Beaty). Therefore, intentionality is an embodied process of conscious translation, an aspect AI cannot fundamentally replicate.

This absence of lived experience is built into the very architecture of generative frameworks. Models such as DALL-E and Midjourney are trained to reverse a diffusion process, in which a neural network iteratively removes noise from randomness (Wijaya) by predicting the most probable image that corresponds to a text prompt ("What is DALL-E ... "). Generation is thus fundamentally an act of sampling from a learned data distribution, a mechanistic process that linguist Emily Bender frames as the work of a "stochastic parrot": an AI system that uses statistical relationships learned from massive datasets to convincingly generate human-like text while lacking true semantic understanding behind the word patterns (Bender et al. 616-617). This process confines creativity to what scholar Margaret Boden classifies as the combinatorial level: creativity that is proficient at recombining existing elements but structurally incapable of the exploratory and transformational leaps that define groundbreaking human art (Boden). This technical foundation leads to a homogenization effect, a cultural consequence in which AI-driven systems designed to optimize efficiency and cater to widespread preferences produce uniformity in artistic expression (Mann). The effect is magnified by an empirical study showing that large language models built on entirely separate architectures still had striking overlaps in their outputs, recycling the same frames, phrases, and ideas despite their technical diversity (Wenger and Kenett). This drive toward predictable outputs stands in direct opposition to a core artistic tradition that values creative uncertainty. As artist John Lyons asserts, art is "an adventure in creative uncertainty" (Pearson), a principle embodied in works like John Cage's 4′33″, which allows the audience to experience the unpredictability of ambient sound (Marley). Governed entirely by its training data and optimized to avoid true randomness, AI is structurally opposed to this ideal.

A defense of AI art begins by reframing its "derivativeness" as a natural extension of art's inherent intertextuality: a term popularized by literary critic Julia Kristeva referring to the idea that a text "cannot exist as a hermetic or self-sufficient whole, and so does not function as a closed system" (Raj 77-80). This idea of a work fundamentally absorbing and transforming another extends beyond modern theory and can instead be framed as a timeless artistic reality. As T.S. Eliot argues in "Tradition and the Individual Talent," poets must work with the awareness that "art never improves, but that the material of art is never quite the same," resulting in a necessary "conformity between the old and the new" (Eliot). This logic of repetition-with-variation is ancient, and its relevance extends beyond writing to the wider realm of artistic practice. It is evidenced by the prehistoric cave paintings of Lascaux and Chauvet-Pont-d'Arc, which, despite being created roughly 20,000 years apart, exhibit a strikingly similar visual language of animals and symbols (Vamelis). This provides compelling evidence that human expression has always operated within and through tradition and shared conventions. Nor is this kind of repetition empty reproduction. As philosopher Gilles Deleuze establishes in "Difference and Repetition," this form is inherently creative, given that each iteration introduces small yet meaningful variations. By differentiating mechanical repetition, characterized by identical reproduction, from productive repetition, which subtly diverges from the original form, a case can be made that the latter accumulates into genuine innovation (Bell). This allows us to reframe AI's derivative nature as a productive process of intertextual recombination that transforms the visual tradition it was given.

This creative prowess positions AI as a new artistic medium defined by a distinct aesthetic language rather than the mimicry of old forms. It follows a historical pattern: just as photography was dismissed as a thoughtless mechanism for replication that could "never assume a higher rank than engraving" (Teicher), AI is navigating a similar path, a new technology initially dismissed before being accepted as a legitimate and respected artistic medium. Its aesthetic language is one of generative realism and interactive potential: highly realistic works, with a possible future of interactive and dynamic features that respond to user input or environmental factors to create evolving visual experiences ("AI-Generated Art"). The medium's practice is iterative, allowing artists to use features like "inpainting" to tweak, blend, and push outputs until the result is both surprising and satisfying (Adeyemi). This process is exemplified by Amith Venkataramaiah, whose Plastic Animals series uses Midjourney's iterative refinement to visualize hybrid creatures, critiquing climate change through the medium's unique capacity for conceptual fusion (Ohja). This ultimately represents an intellectual shift rather than a mechanical one. As asserted by Refik Anadol, a pioneering artist whose AI-powered installations have been featured at MoMA and the Guggenheim Bilbao, AI is less a tool, which he finds to be "more like a physical object or a keyboard or mouse." Instead, he frames AI as a "cognitive enhancement" of the artist's mind, "enhancing our thoughts, our reasoning, and our creativity." This transforms the artist's relationship to the medium, creating what he calls a "thinking brush" that will expand creative reasoning itself (Carmichael). Through this redefinition as a cognitive collaborator, AI is being validated as a reputable new medium in contemporary art.

AI's legitimacy is reinforced by its demonstrable social impact. The technology is actively dismantling the socioeconomic and technical barriers that have traditionally restricted artistic expression, transforming a specialist skill into a more universal literacy. Scholarly analysis confirms a strong social gradient in arts participation, where individuals from lower socioeconomic backgrounds face prohibitive costs for supplies and training, leading to an uneven cultural landscape (Marino). Generative AI directly counters this exclusion by providing a low-cost alternative. Practically, this means artists like Kendra Germenis can use AI to create client mockups and unique backgrounds, bypassing the need for expensive design software subscriptions (Germenis), while emerging assistive technologies like text-to-speech and tactile graphics open new creative avenues for visually impaired artists to bring their visions to life ("Will AI Create... "). The cumulative impact of this accessibility is a quantifiable market transformation. A 2025 study found that the introduction of AI on a digital platform led to an 88% increase in active creators per month and a 78% surge in monthly image output, empirically validating the technology's power to expand the creative class (Waikar). This dramatic spike in participation signals the rise of a participatory culture, where lowered barriers empower individuals to transition from consumers to active creators (Hennen). Therefore, AI's primary benefit is the democratization of the creative act itself.

Beyond democratizing access, AI instigates a qualitative shift in the creative process itself, replacing linear execution with an iterative workflow. This new method stands in stark contrast to traditional art-making, which often involves a prolonged physical struggle with materials, such as "painting with an outstretched arm, often at shoulder height or higher, or at awkward angles," as artist Ben Benedict contends (Horejs). The AI-assisted process, by contrast, is a rapid feedback loop of prompt, generation, and refinement, in which artists start with an initial query, evaluate outputs, and adjust as needed ("Iterative Improvement: The Art of... "). In practice, this means artists use AI-generated patterns as a starting point, embracing the technology's ability to generate randomness that they then strategically shape ("Art in the Age... "). This transforms the artist's role into that of a director who poses strategic questions, as architect and urban planner Moshe Safdie suggests, to which AI can provide "significant answers" that fuel improvement "if we ask the right questions" (Mineo). This method is rooted in scholarly inquiry: a 2025 study confirmed that such iterative workflows promote innovation by encouraging a broader search for solutions through frequent task-switching (Kagan et al.). Thus, the artist's primary labor shifts from manual creation to conceptual framing and orchestration.

Yet AI's redefined role exists within a structure built on ethical violation, originating from an operational foundation of systemic extraction of human creative labor. Leading AI scholar Kate Crawford frames this as "data colonialism," a process in which human culture and expression are treated as free raw material to be mined for capital (Kissell). Crawford's theory manifests in legal reality: Getty Images' recent lawsuit against Stability AI alleges the use of Getty's material during training without license or compensation (Manshaus). This extraction is not merely corporate; it is deeply personal. Cartoonist Sarah Andersen, creator of "Sarah's Scribbles," discovered her work was used to train AI systems without her consent, arguing in her amended complaint that these systems function as "copyright-laundering devices," using her work and others' to deliver "the benefits of art without the costs of artists" ("The Great Art Heist... "). This legal gray area is exacerbated by institutional policy: the US Copyright Office explicitly denies protection to works that do not "owe their origin to a human agent," a standard AI-generated output fails to meet (Wang and Lee). The result is a perverse system: the copyrighted work of millions of artists fuels a technology whose outputs are legally void of the human authorship required for protection, devaluing the very creative labor it depends on and prioritizing corporate expansion over artists' rights.

Beyond exploiting labor, AI art actively degrades cultural diversity by amplifying biases present in its training datasets. A 2023 study found that prompts for "CEO" generated light-skinned faces 80% of the time, while "fast-food worker" produced dark-skinned faces 70% of the time, hard-coding social stereotypes into visual language (Nicoletti and Bass). This amplification extends beyond present-day biases, automating deep historical prejudice by perpetuating colonial stereotypes and replicating dominant cultural narratives. When visualizing pre-colonial New Zealand, OpenAI's Sora renders the landscape as an "empty wilderness," directly echoing the 19th-century European Romantic paintings used to justify colonization (Hellmann). This homogenization has a disproportionate negative impact on marginalized communities: AI operates on probabilities, and with 40% of the world's languages underrepresented in training data, it inevitably optimizes for the most statistically prevalent Western perspectives ("AI's Hidden Cultural Colonialism ... "). The consequence is active cultural erasure, where AI generates photorealistic yet stereotypical "hybrid" images of indigenous individuals, reducing complex identities to a set of exotic tropes (Rosser and Spennemann). Worse, this homogenized output risks being scraped back into training datasets, creating a degenerative feedback loop that will further exacerbate these biases over time (Amini and Moore). The ultimate consequence is not just more biased images; it is the normalization of a biased visual world where stereotypes about race, history, and culture cease to be seen as errors and instead become the most accessible, "default" representations.

Given that AI-generated media is projected to become a $900 million market by 2030, its presence is not a passing trend but a permanent feature of the cultural landscape (Rosario). Thus, the central question for working artists is no longer how AI will affect their field, but how they will navigate it. Two primary paths for coexistence have emerged. The first is collaborative embrace: artists can integrate AI as a creative partner, training models exclusively on their own work, as exemplified by artists like Sougwen Chung, to generate variations that they can then curate and refine (Baxter). This maintains their role as creative director while leveraging the technology for inspiration.

The second is active advocacy: choosing to resist the technology's unethical foundations and fight for structural protections. This path recognizes that individual adaptation is insufficient without systemic regulation. Therefore, while the choice to collaborate or resist falls to each artist, the viability of either path depends on a three-part policy framework focused on transparency, consent, and structural support. First, transparency must become the baseline. Standards like the Coalition for Content Provenance and Authenticity (C2PA) attach verifiable "content credentials" to digital media, disclosing origin and edit history ("Verifying Content Credentials ... "). This practice has already been adopted by leading institutions such as the BBC and Adobe to combat misinformation (Parsons). Transparency alone, however, only addresses the output; it must be paired with ethical sourcing to resolve the foundational issue of extraction. Tools like "Have I Been Trained?" allow artists to search the 5.8 billion images in training sets and opt out, moving the industry from non-consensual scraping toward a model of informed consent (Wiggers and Kamps). Finally, protecting artists' rights requires modernizing support for artistic labor, including updating intellectual property frameworks and identifying the "authors" of AI art so that users can obtain rights to AI-produced works (Conrad et al.). Implementing this framework would transform AI from an extractive threat into a tool with an ethical pedigree, fostering a sustainable economy in which artists can be compensated. The challenges, though, are significant: creating global standards and updating copyright laws requires unprecedented cooperation between artists and technology platforms. Moreover, legally defining concepts like artistic style requires answering philosophical questions with legal precision.
These complexities are precisely why deliberate policy, rather than passive adoption, is now a necessity.

This call to action is not only global; it must also be enacted institutionally, as generative AI is fundamentally reshaping learning at the University of Michigan. Though often promoted as an efficiency tool, its use short-circuits the developmental learning process. A 2025 study established a significant negative correlation between frequent AI use and critical thinking, driven by increased cognitive offloading (Gerlich 1-5). For art students at STAMPS, this means the erosion of skills in color, composition, and technique. The AI art dilemma is a microcosm of a broader academic threat affecting all fields of study: for LSA writers, AI replaces the struggle to find an authentic voice; for engineering students, it undermines mathematical reasoning and problem-solving. This erodes the foundational basis that makes a college degree valuable. Despite these risks, students are using these tools at a staggering rate. A global survey found that 86% use AI in their studies, with 54% using it weekly (Kelly), often without guidance. The fact that only 22% have received clear policy direction from their schools (Vilcarino and Langreo) reveals a gap that the University of Michigan has both the responsibility and the opportunity to close.

To prepare its students for the world they will lead, the University of Michigan must institutionalize critical AI literacy, moving it from an elective interest to a core competency. This shift is already being modeled by faculty like STAMPS professor Osman Khan, creator of the exhibition "Road to Hybridabad," which features an AI-trained Scheherazade, an "LLM trained on immigrant narratives." His stance on the technology is neither adoption nor rejection: "I'm not an artist who sort of says AI is the future, I'm going to do it, or I'm relinquishing my artistic thing to AI. It's actually just trying to understand what it is. It's a new material, it's a new medium, so as an artist, we have to engage it, and I engage it by using it as opposed to depicting it." This view of AI as a "critical device," rather than a replacement for human artistry, reframes it as a subject for critical literacy, not a mere productivity hack. While a foundation exists in courses like "Using AI to Expand Creativity" and relevant speaker series such as "Technology Meets Creativity: Exploring the Potential of Artificial Intelligence for the Arts," these initiatives remain optional, relying on student self-initiation. To ensure every graduate can engage with these tools ethically, the university should enact several measures. First, it should direct the Center for Research on Learning and Teaching (CRLT) to create a comprehensive "Generative AI Toolkit," empowering faculty to integrate critical AI discussions into their existing courses. Second, it should establish a mandatory, cross-listed course, such as "The Ethics of Generative AI," co-taught by faculty from the School of Information, STAMPS, and LSA, to provide foundational literacy for all undergraduates. Only through such universal requirements can the university close the current guidance gap and graduate skilled users of technology equipped to tackle any field of study.

The question "who holds the brush?" in the age of AI does not yield a simple answer. The brush is now held in the artist's hand, but its strokes are guided by a vast archive of human culture. This partnership generates both groundbreaking innovation and profound ethical conflict. The path forward therefore requires appropriate regulation to ensure this new medium respects the human creativity it learns from, rather than committing the kind of artistic insult that provoked Miyazaki's famous condemnation. The ultimate goal is not to reject the tool, but to master its use, ensuring it serves human creativity rather than consuming it.

Works Cited

Adeyemi, Abigail. “The Rise of AI-Generated Art: How Algorithms Are Creating Masterpieces.” MoMAA, 11 Jun. 2025, https://momaa.org/the-rise-of-ai-generated-art-how-algorithms-are-creating-masterpieces/.

“AI-Generated Art.” IxDF - Interaction Design Foundation, https://www.interaction-design.org/literature/topics/ai-generated-art. Accessed 4 Dec. 2025.

“AI’s Hidden Cultural Colonialism Crisis: When Algorithms Erase Diversity.” Medium, 25 Jun. 2025, https://medium.com/@TheDailyReflection/ais-hidden-cultural-colonialism-crisis-when-algorithms-erase-diversity-ba1988309b1f.

Amini, M. Hadi, and Ervin Moore. “How poisoned data can trick AI - and how to stop it.” The Conversation, 13 Aug. 2025, https://theconversation.com/how-poisoned-data-can-trick-ai-and-how-to-stop-it-256423.

“Art in the Age of AI: The Changing Role of Artists.” AGI Fine Art, 9 Sep. 2024, https://agifineart.com/advice/art-in-the-age-of-ai-the-changing-role-of-artists/.

Baxter, Claudia. “AI art: The end of creativity or the start of a new movement?” BBC, 21 Oct. 2024, bbc.com/future/article/20241018-ai-art-the-end-of-creativity-or-a-new-movement.

Beaty, Roger E. “The Creative Brain.” Cerebrum: The Dana Forum on Brain Science, vol. 2020, 1 Jan. 2020, https://pmc.ncbi.nlm.nih.gov/articles/PMC7075500/.

Bell, Jeffrey A. “Difference and Repetition by Gilles Deleuze.” EBSCO, https://www.ebsco.com/research-starters/literature-and-writing/difference-and-repetition-gilles-deleuze. Accessed 7 Dec. 2025.

Bender, Emily, et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability and Transparency, 1 Mar. 2021, pp. 616-617. https://dl.acm.org/doi/10.1145/3442188.3445922.

Boden, Margaret A. “Creativity in a Nutshell.” Interalia Magazine, Jul. 2016, https://www.interaliamag.org/articles/margaret-boden-creativity-in-a-nutshell/.

Carmichael, Matt. “How ‘generative reality’ will become a new art medium.” Ipsos, 13 Jun. 2024, https://www.ipsos.com/en-us/future/how-generative-reality-will-become-new-art-medium.

Chatwal, Gurbans. “Generative AI’s Satirical Beginnings in Gulliver’s Travels.” Medium, 5 Jun. 2025, https://medium.com/algorithmic-insights-ai/generative-ais-satirical-beginnings-in-gulliver-s-travels-3be2297ca56e.

Chayka, Kyle. “The Limits of A.I.-Generated Miyazaki.” The New Yorker, 2 Apr. 2025, https://www.newyorker.com/culture/infinite-scroll/the-limits-of-ai-generated-miyazaki.

Conrad, Cross, et al. “The Future of AI Art Regulation.” The Regulatory Review, 26 Apr. 2025, https://www.theregreview.org/2025/04/26/seminar-the-future-of-ai-art-regulation/.

Diamandis, Peter H. “The Birth of Artificial Intelligence.” Peter H. Diamandis, 7 Dec. 2023, https://www.diamandis.com/blog/scaling-abundance-series-25.

Eliot, Thomas S. “Tradition and the Individual Talent.” Poetry Foundation, 13 Oct. 2009, https://www.poetryfoundation.org/articles/69400/tradition-and-the-individual-talent.

Gerlich, Michael. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, vol. 15, no. 1, 3 Jan. 2025, pp. 1-5. https://doi.org/10.3390/soc15010006.

Germenis, Kendra. “How I Use AI as an Artist (and Why You Might Want To).” Medium, 28 Mar. 2025, https://medium.com/@kendragermenis/how-i-use-ai-as-an-artist-and-why-you-might-want-to-a05465ddec71.

Goetze, Trystan S. “AI Art is Theft: Labour, Extraction, and Exploitation: Or, On the Dangers of Stochastic Pollocks.” Proceedings of the 2024 ACM Conference on Fairness, Accountability and Transparency, 5 Jun. 2024, doi:10.1145/3630106.3658898.

Hellmann, Olli. “Historical images made with AI recycle colonial stereotypes and bias - new research.” The University of Waikato, 24 Oct. 2025, https://www.waikato.ac.nz/int/news-events/news/historical-images-made-with-ai-recycle-colonial-stereotypes-and-bias-new-research/.

Hennen, Shelby. “Social Media and Participatory Culture.” Medium, 21 Oct. 2021, https://medium.com/@henne665/social-media-and-participatory-culture-b283a816b18c.

Horejs, Jason. “Is Creating Art Hard Work?” RedDotBlog, 27 Apr. 2025, https://reddotblog.com/is-creating-art-hard-work-21d/.

Huddleston, Andrew. “Carroll on Intention and Interpretation.” Andrew Huddleston, pp. 1-8. http://www.achuddleston.com/uploads/6/8/5/5/68559477/carroll_on_intention_and_interpretation-_revised_copy.pdf. Accessed 5 Dec. 2025.

Hutchinson, Clare, and Phil John. “AI: Digital artist’s work copied more times than Picasso.” BBC, 20 Jul. 2023, https://www.bbc.com/news/uk-wales-66099850.

“Iterative Improvement: The Art of Query Refinement in Generative AI.” Medium, 7 Oct. 2024, https://prajnaaiwisdom.medium.com/iterative-improvement-the-art-of-query-refinement-in-generative-ai-58c1b278cf40.

Kagan, Evgeny, et al. “On Repeat: Does Iteration Drive Innovation?” Manufacturing & Service Operations Management, 3 Nov. 2025, https://doi.org/10.1287/msom.2025.0078.

Kelly, Rhea. “Survey: 86% of Students Already Use AI in Their Studies.” Campus Technology, 28 Aug. 2024, https://campustechnology.com/articles/2024/08/28/survey-86-of-students-already-use-ai-in-their-studies.aspx.

Kissell, Ted B. “Kate Crawford maps a world of extraction and exploitation in ‘Atlas of AI.’” USC Annenberg News, 14 Apr. 2021, https://annenberg.usc.edu/news/critical-conversations/kate-crawford-maps-world-extraction-and-exploitation-atlas-ai.

Mambrol, Nasrullah. “Intentional Fallacy.” Literariness, 17 Mar. 2016, https://literariness.org/2016/03/17/intentional-fallacy/.

Mann, Hamilton. “AI Homogenization Is Shaping The World.” Forbes, 1 Nov. 2024, https://www.forbes.com/sites/hamiltonmann/2024/03/05/the-ai-homogenization-is-shaping-the-world/.

Manshaus, Halvor. “AI in the training room - Getty Images vs. Stability AI Limited.” Schjødt, 9 Dec. 2025, https://schjodt.com/news/ai-in-the-training-room-getty-images-vs-stability-ai-limited.

Marino, Sissy. “What’s stopping you from engaging with art? - 5 Hidden Barriers.” Mae, 19 Apr. 2024, https://www.mae.community/magazine/whats-stopping-you-from-engaging-with-art---5-hidden-barriers-.

Marley, Jack. “Why John Cage’s Provocative ‘Silent Piece’ is Still Powerful Today.” Teen World of Arts, 7 Sep. 2024, https://teenworldarts.com/magazine/john-cage-silence.

Mineo, Liz. “If it wasn’t created by a human artist, is it still art?” The Harvard Gazette, 15 Aug. 2023, https://news.harvard.edu/gazette/story/2023/08/is-art-generated-by-artificial-intelligence-real-art/.

Mucci, Tim. “The history of artificial intelligence.” IBM, https://www.ibm.com/think/topics/history-of-artificial-intelligence. Accessed 7 Dec. 2025.

Muir, Kimberly, and Ken Sutherland. “Color, Chemistry, and Creativity in Monet’s Water Lilies.” Art Institute Chicago, 9 Feb. 2021, https://www.artic.edu/articles/862/color-chemistry-and-creativity-in-monets-water-lilies.

Murthi, Vikram. “Hayao Miyazaki Calls Artificial Intelligence Animation ‘An Insult To Life Itself.’” IndieWire, 13 Dec. 2016, https://www.indiewire.com/features/general/hayao-miyazaki-artificial-intelligence-animation-insult-to-life-studio-ghibli-1201757617/.

Nicoletti, Leonardo, and Dina Bass. “Humans are Biased. Generative AI is Even Worse.” Bloomberg, 9 Jun. 2023, https://www.bloomberg.com/graphics/2023-generative-ai-bias/.

Ohja, Srishti. “Seven exciting artists who use AI as a medium of expression.” Stirworld, 31 Oct. 2024, https://www.stirworld.com/see-features-seven-exciting-artists-who-use-ai-as-a-medium-of-expression.

Parsons, Andy. “Adobe co-founds the Coalition for Content Provenance and Authenticity (C2PA) standards organization.” Adobe Blog, 22 Feb. 2021, https://blog.adobe.com/en/publish/2021/02/22/adobe-continues-content-authenticity-commitment-founder-c2pa-standards-org.

Pearson, David. “Embracing Uncertainty: what we can all learn from how artists thrive in an unpredictable world.” The Conversation, 27 Mar. 2025, https://theconversation.com/embracing-uncertainty-what-we-can-all-learn-from-how-artists-thrive-in-an-unpredictable-world-252993.

Price, Rob, et al. “OpenAI is getting overwhelmed by ‘Ghiblified’ photo edits. So is the guy who helped popularize the craze.” Business Insider, 28 Mar. 2025, https://www.businessinsider.com/ghibli-ai-trend-viral-response-heartwarming-2025-3.

Raj, Prayer E. “Text/Texts: Interrogating Julia Kristeva’s Concept of Intertextuality.” Research Journal of Humanities and Social Sciences, vol. 3, pp. 77-80. https://www.researchgate.net/publication/273771676_TextTexts_Julia_Kristeva's_Concept_of_Intertextuality. Accessed 7 Dec. 2025.

Rosario, Bryanna. “AI is Here To Stay, and Art Is Suffering as a Result.” The Heights, 19 Oct. 2025, https://bcheights.com/223780/arts/ai-is-here-to-stay-and-art-is-suffering-as-a-result/.

Rosser, Elise, and Dirk Spennemann. “Indigenous erasure in the age of generative Artificial Intelligence.” Charles Sturt University, https://doi.org/10.2139/ssrn.5689603. Accessed 7 Dec. 2025.

Sooke, Alastair. “What is the meaning of The Scream?” BBC, 4 Mar. 2016, https://www.bbc.com/culture/article/20160303-what-is-the-meaning-of-the-scream.

Teicher, Jordan G. “When Photography Wasn’t Art.” JSTOR Daily, 6 Feb. 2016, https://daily.jstor.org/when-photography-was-not-art/.

“The Great Art Heist: How AI Companies Built Empires on Creative Theft.” Winsome, 27 May 2025, https://winsomemarketing.com/ai-in-marketing/the-great-art-heist-how-ai-companies-built-empires-on-creative-theft.

Varnelis, Kazys. “Humanity and its Double: The Uncanny in Art and Artificial Intelligence.” Varnelis, https://varnelis.net/works_and_projects/humanity-and-its-double-the-uncanny-in-art-and-artificial-intelligence/. Accessed 7 Dec. 2025.

Vass, Kate. “History of AI - The New Tools: DALL-E and Midjourney.” Kate Vass Studio, 8 May 2023, https://www.katevassgalerie.com/blog/history-of-ai-dalle-midjourney.

“Verifying Content Credentials.” Content Credentials, https://contentcredentials.org/adopt/. Accessed 7 Dec. 2025.

Vilcarino, Jennifer, and Lauraine Langreo. “Rising Use of AI in Schools Comes With Big Downsides for Students.” EducationWeek, 8 Oct. 2025, https://www.edweek.org/technology/rising-use-of-ai-in-schools-comes-with-big-downsides-for-students/2025/10.

Waikar, Sachin. “When AI-Generated Art Enters the Market, Consumers Win—and Artists Lose.” Stanford Graduate School of Business, 20 May 2025, https://www.gsb.stanford.edu/insights/when-ai-generated-art-enters-market-consumers-win-artists-lose.

Wang, Judy, and Nicol Turner Lee. “AI and the visual arts: The case for copyright protection.” Brookings, 18 Apr. 2025, https://www.brookings.edu/articles/ai-and-the-visual-arts-the-case-for-copyright-protection/.

Wenger, Emily, and Yoed Kenett. “We’re Different, We’re the Same: Creative Homogeneity Across LLMs.” arXiv, 31 Jan. 2025, https://doi.org/10.48550/arXiv.2501.19361.

“What is DALL-E and How Does It Work To Generate AI Images?” Upwork, 13 May 2024, https://www.upwork.com/resources/what-is-dall-e.

Wiggers, Kyle, and Haje Jan Kamps. “This site tells you if photos of you were used to train the AI.” TechCrunch, 21 Sep. 2022, https://techcrunch.com/2022/09/21/who-fed-the-ai/.

Wijaya, Cornellius Y. “Diffusion Models Demystified: Understanding the Tech Behind DALL-E and Midjourney.” KDnuggets, 13 Aug. 2025, https://www.kdnuggets.com/diffusion-models-demystified-understanding-the-tech-behind-dall-e-and-midjourney.

“Will AI Create New Forms of Art for the Visually Impaired?” Pixel Gallery, 29 Sep. 2023, https://pixel-gallery.co.uk/blogs/pixelated-stories/ai-art-for-the-visually-impaired?srsltid=AfmBOoru6BGPplH7EvaWVM5j_hhhRogI4nsJphpvqYzWf-Gxu8ECBtkI.