In January 2023, shortly after the public release of ChatGPT (then running on GPT-3.5), we proposed a course called “Generative AI in the Wild.” We had very little idea what the class would look like or how we would teach it, but we knew it was essential to start teaching such courses if our students were going to graduate with the skills needed to lead in the emerging world. After four semesters of dramatic change with approximately 170 students across multiple majors, here are our reflections on the journey.1
Approach
While many aspects of the course were unknown, even at the start of the first semester, we were clear on three key features: the course would be applied, multidisciplinary, and built on active learning.
Applied
We named the class “Generative AI in the Wild” because we wanted to: 1) distinguish it from the existing corpus of generative AI coursework focused on engineering approaches to building such systems (AI in the lab) and 2) emphasize our expectation that as AI is used more broadly, it will unexpectedly (“wildly”) influence (and complicate) disciplines and facets of our lives that would not otherwise have engaged with it.
Multidisciplinary
We also knew that we would be dedicated to an “integrated ways of knowing” approach—all students at the University of Notre Dame are required to take at least one course with this moniker. It is given to classes in which multiple disciplinary perspectives and viewpoints are provided, and the interactions between the viewpoints are made apparent. Team-teaching instructors acknowledge these differences—drawing attention to tensions and demonstrating a nuanced dialog—and invite students to practice doing the same.
Accordingly, the course combines prompt engineering techniques and concepts with the critical examination of social, environmental, and ethical considerations that generative AI is forcing people to address in their work, education, and daily lives. Using and understanding generative AI is not merely a technological challenge; it also requires considering humanity’s histories, biases, and values that have led us here (see Figure 1).
Figure 1. Professor John Behrens reflecting on the words of Pope Francis’ 2015 encyclical Laudato si’ while leading the “Generative AI in the Wild” class in 2025. Photograph by Jon L. Hendricks/University of Notre Dame.
Active Learning
We wanted to emphasize active learning with both text and image (and now multimodal) generative AI systems. To give students access to multiple AI models, we adopted the Magai platform [1], an aggregator providing access to a variety of systems, such as the OpenAI GPT series and Anthropic’s Claude series. As the market exploded with new entrants (e.g., Llama, Grok, Perplexity, and DeepSeek), Magai added these new text models, as well as image and video models. Students have access to Magai throughout the semester to experience a breadth of systems, plus additional access to ChatGPT for eight weeks at the end of the semester to gain experience with system-specific features such as image recognition, multimodal functionality, canvas/artifact-style interfaces, and mobile applications.
Curriculum
We initially conceived the course in three parts, aimed at: 1) providing social and cultural context; 2) practicing “how it works” and “how to use it”; and 3) engaging with how generative AI is affecting different fields, as reported by guest speakers grappling with AI in their own disciplines: the university’s director of academic integrity, an art historian, a theologian, a philosopher, a computer scientist, and a poet. Students frequently report delight and awe at the range of professionals they meet who are also grappling with the impacts of generative AI. Over time, the flow of the curriculum has emerged in a way that can be summarized in a series of questions, topics, and associated deliverables (see Table 1).

We focus on the active learning approach with the intention that, at the end of the class, students will have a portfolio of written, experimental, and multimedia works that they can show to prospective employers and graduate schools.
AI Systems Use: From Casual to Informed Users
During the course, we expect students to shift from being a “casual user” of generative AI to a well-informed, quasi-expert who can describe different systems and their functionalities, more consciously manipulate system behavior using prompting techniques, and analyze system outputs. To accomplish this, they must understand some of the fundamental properties of AI systems and know how to evaluate them systematically regardless of their exact composition.
To address the first need, we test the behavior of a language model based on its product capabilities, the prompts used, and the task being attempted. A prompting strategy may work well in one system but not in another. Likewise, a system may perform well at one type of task (e.g., coding) but not another. To conceptualize this interplay with the system, we created the Task-Product-Prompt Triangle (see Figure 2). We refer to it throughout the course while diving into each vertex of the diagram as we proceed. For example, to provide students with conceptual tools needed to evaluate tasks and system performance (the top vertex), we draw on Evidence Centered Design (ECD) literature [2], [3]. Originally developed to support the design and interpretation of educational assessment (i.e., the performance of human systems), ECD traces the flow of inference from claims you want to make about a performance, backward to the evidence you need to support the claims, and to the specification of activities or tasks required to generate such evidence. In our course, the Task-Product-Prompt Triangle and the ECD framework support students in articulating the relationship between what they want to claim about generative AI behavior and what types of experiments or experiences they undertook to generate evidence for that claim.

Figure 2. Task-Product-Prompt Framework for evaluating system performance.
After learning about a wide range of prompting strategies [4], [5], [6], students apply the techniques to tasks of personal interest such as computer programming, accounting, medical diagnosis, musical composition, and fitness coaching, among many others. By the end of the course, students have experimented with multiple text, image, and multimodal systems. Through their assignments and projects, students become familiar with the strengths and weaknesses of individual systems, and they learn to compare and assess model behavior at scale using the Chainforge simulation tool (see Figure 3). Chainforge provides a no-code interface allowing students with nontechnical backgrounds to compare the effects of different prompting strategies across a range of tasks [7]. Students can not only leverage this knowledge to apply generative AI effectively but also proactively determine when a task or system is infeasible or inappropriate.

Figure 3. Screenshot of a Chainforge simulation testing variations in levels of verbal encouragement for the completion of logical reasoning tasks with 100 repetitions. The interface supports students conducting multifactorial simulations across variations of prompt structure, task features, and multiple language models without any computer coding. This is a simple example that can be extended with JavaScript or Python code and the use of LLMs for evaluation, among other features.
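For readers curious what such a multifactorial design looks like outside Chainforge’s no-code interface, here is a minimal Python sketch of the same idea: crossing encouragement framings, tasks, and models, with repetitions, to enumerate every experimental cell. The model names and the `query_model` stub are placeholders for illustration, not any particular vendor’s API or our course materials.

```python
import itertools

# Factors of the simulation: prompt framing x task x model, with repetitions.
ENCOURAGEMENTS = [
    "",  # control: no encouragement
    "You can do this! ",
    "Take a deep breath and think step by step. ",
]
TASKS = [
    "If all bloops are razzies and all razzies are lazzies, are all bloops lazzies?",
]
MODELS = ["model-a", "model-b"]  # placeholder names, not real endpoints
REPETITIONS = 100

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a real API call (e.g., through an aggregator or vendor SDK)."""
    raise NotImplementedError

def build_runs():
    """Enumerate every (model, framing, task, repetition) cell of the factorial design."""
    runs = []
    for model, phrase, task in itertools.product(MODELS, ENCOURAGEMENTS, TASKS):
        for rep in range(REPETITIONS):
            runs.append({"model": model, "prompt": phrase + task, "rep": rep})
    return runs

runs = build_runs()
# 2 models x 3 framings x 1 task x 100 repetitions = 600 calls to make and score.
print(len(runs))
```

Executing each cell and scoring the responses (by hand, by rubric, or by an evaluator LLM) then yields the kind of comparison shown in Figure 3; the point of the sketch is simply that the experiment is a cross product of factors, which is what makes the no-code interface tractable for nontechnical students.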
Sociotechnical Connections
While readers of IEEE Technology and Society Magazine will understand that technologies are not just material and/or computational systems but can also be understood as systems of social activity at large, this is a revelation to many students. One must keep in mind that most of our students were born in the 21st century and, much like publics at the start of the 20th century, have experienced only persistent hype cycles (think Web 2.0, social media, big data, machine learning, blockchain, and Web 3.0) in which technology is presented as an autonomous driving force in the world.
Similar to the way they learn to evaluate a system’s performance, students are also charged with evaluating expertise across disciplines, genres, and media in our integrated format. Furthermore, they consistently observe how the technological histories of art, language, education, labor, ethics, the environment, and other topics allow them to draw meaningful connections to the current moment. We do not need to look far to encounter significant ethical issues and risks related to generative AI [8], [9], including, but not limited to, copyright concerns in media and artistic domains [10], [11], human–AI relationships [12], and misinformation/disinformation and hateful content [13].
In 2025, we have amplified our attention to the environmental impacts of generative AI [14], [15]. As Amazon Web Services constructs one of eight new Indiana-based data centers 30 minutes west of our campus, we bear witness to the displacement of over nine acres of wetlands and 5,000 linear feet of streams, irreversibly impacting the surrounding wildlife and ecosystems [16]. Students are faced with balancing their use, impact, and responsibility in light of these complex real-world challenges. It is admittedly a tough ask for college students to balance the critical and the practical; they can become caught up in the futility of weighing global change against individual choices. Our goal is to give them the tools to critique as informed citizens, so that if and when they use these systems, they engage with them as critical thinkers.
Challenges and Future Directions
One of the biggest challenges of teaching such a course is the remarkable pace of change in technology and the societal response. The curriculum is in near constant evolution and must explicitly address how students should develop their continuous learning skills.
We also see a change in the students. Consider the simple question of whether it would be acceptable to write a wedding speech—a highly sentimental and relatively rare occasion in one’s life—using ChatGPT. While most of our students disliked the very thought of it in 2024, by 2025 they were noticeably more divided on the prospect. What is perceived to be socially acceptable has been evolving across personal, educational, and professional boundaries.
During the last two years, students’ relationships with AI have shifted from theoretical to personal, requiring the course to shift from etic to increasingly emic perspectives as students’ lives become gradually immersed in new realities. Like their professors, they seek both skills and personal sense-making. In terms of career preparation, students seek to understand generative AI so that they will not fall behind in the workforce; it is a valid concern, given that the World Economic Forum Future of Jobs Report 2025 identifies AI and technology literacy as among the most desirable competencies across domains [17].
Even after more than two years of discussion, those in academia are struggling to enforce policies, adapt curricula, and build trust with students around the appropriate use (and refusal) of generative AI. Concerns around academic integrity, cheating, and overreliance are significant and unresolved [18], [19]. Our experience suggests the challenge is to give students the tools they need to make informed assessments while holding a profound ambivalence: one that dissolves neither into a narrative of generative AI as an unstoppable force nor into dismissal of it as a nonexistent factor, but that starts from a place of wanting to know more and know better. By engaging multiple disciplinary perspectives in this course, we underline the need for students, as well as scholars and critics, to expand and innovate on existing approaches and methodologies to use and evaluate generative AI more comprehensively [20], [21], [22].
How do we help students weather this storm of personal, academic, and professional disruption, even as we are learning and adapting? In the face of these changes, the solution might be in continuing to ask ourselves as educators: how should we use and teach about generative AI, and how do we create a shared journey with our students?
There is a constant stream of new systems, prompting strategies, and technical advances useful for broader applications; most will recognize that understanding how generative AI works and how it can be used is an important, timely skill. At the same time, the basic competencies we practice throughout higher education are not going out of style; we still need timeless skills to augment the timely ones—skills such as communication, creativity, and critical thinking—that enable students to forge a meaningful path forward. This path should help students discern where generative AI can help them and others flourish, and where it could diminish those same opportunities.
For those interested in learning more about our approach, the course, and related resources, visit our blog at https://altech.nd.edu/wildai.
Author Information
Alexi Orchard is an assistant teaching professor of technology and digital studies at the University of Notre Dame, Notre Dame, IN, USA. Her academic interests include AI ethics and society, critical design, and responsible innovation. Email: aorchard@nd.edu.
John T. Behrens is a professor of practice and the director of technology and digital studies and a concurrent professor of practice of computer science and engineering at the University of Notre Dame, Notre Dame, IN, USA.
Ranjodh Singh Dhaliwal holds the professorship in digital humanities, artificial intelligence, and media studies in the Department of Arts, Media, and Philosophy at the University of Basel, Basel, Switzerland, where he also directs the Digital Humanities Laboratory. His research interests include intersections of media theory, literary studies, computer science, critical design, and science and technology studies.