07-09, 16:05–16:35 (US/Pacific), Room 318
This talk presents a candid reflection on integrating generative AI into an Engineering Computations course, revealing unexpected challenges despite the best of intentions. Students quickly developed patterns of using AI as a shortcut rather than as a learning companion, leading to decreased attendance and an "illusion of competence." I'll discuss the disconnect between instructor expectations and student behavior, analyze how traditional assessment structures reinforced counterproductive AI use, and share strategies for guiding students to treat AI as a co-pilot rather than a substitute for critical thinking, while still maintaining academic integrity.
Introduction
I'll begin by setting the context of my Engineering Computations course, a beginner Python course focused on computational thinking, numerical tasks, and problem-solving. I'll explain my initial motivation for incorporating generative AI through a retrieval-augmented generation (RAG) chatbot grounded in the course materials, aiming to give students legitimate AI support rather than fighting against inevitable AI use.
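To make the grounding idea concrete, here is a minimal, hypothetical sketch of retrieval-augmented prompting over a few course-note snippets. The chunk texts, helper names, and prompt wording are illustrative assumptions for this proposal, not the chatbot actually deployed in the course.

```python
# Hypothetical illustration only: a toy retrieval step that grounds a prompt in
# course notes. Names like `course_chunks` and `build_grounded_prompt` are
# invented for this sketch; the real chatbot is not shown here.
from collections import Counter

course_chunks = [
    "Numerical integration: the trapezoid rule approximates an integral "
    "by summing trapezoid areas over subintervals.",
    "Root finding: Newton's method iterates x_new = x - f(x)/f'(x) and "
    "converges quadratically near a simple root.",
    "NumPy arrays are homogeneous, fixed-type containers that support "
    "vectorized arithmetic without explicit Python loops.",
]

def score(query: str, chunk: str) -> int:
    """Count shared words between the query and a chunk (toy relevance score)."""
    q = Counter(query.lower().split())
    c = Counter(chunk.lower().split())
    return sum((q & c).values())

def build_grounded_prompt(question: str, k: int = 2) -> str:
    """Retrieve the top-k most relevant chunks and prepend them to the question."""
    ranked = sorted(course_chunks, key=lambda ch: score(question, ch), reverse=True)
    context = "\n".join(ranked[:k])
    return (
        "Answer using ONLY the course notes below. If the notes do not cover "
        "the question, say so.\n\n"
        f"Course notes:\n{context}\n\nStudent question: {question}"
    )

print(build_grounded_prompt("How does Newton's method find a root?"))
# The assembled prompt would then go to a language model; that call is omitted
# because it depends on the particular model and API.
```

The point of the design is simply that answers stay anchored to the course materials rather than to whatever the model would say unprompted.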
The Experiment: Initial Implementation
I'll detail how I introduced AI into the course structure, including:
- The design and capabilities of the RAG-enabled chatbot
- My expectations for how students would use AI as a productivity enhancer
- Students' initial enthusiasm when told that AI use was permitted
- My hope that AI would serve as a collaborative learning partner
What Went Wrong: Unintended Consequences
This section will candidly explore the rapid emergence of problematic student behaviors:
- Students using one-shot prompts to solve entire assignments
- The iterative trial-and-error loop between AI-generated code and the autograder (a minimal sketch of such a check appears after this list)
- Dramatic decrease in class attendance (down to 30%)
- Students deprioritizing the course relative to others with traditional exams
- The collapse of class dynamics when assessment changes were proposed
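To illustrate the feedback loop mentioned above, here is a minimal, hypothetical sketch of an autograder-style check that returns only a pass/fail tally. The function names and test cases are invented for this sketch and are not the course's actual autograder; the point is that a student can paste such feedback back into a chatbot and iterate until everything passes, with no understanding required.

```python
# Hypothetical illustration: a pared-down autograder check of the kind students
# could iterate against with an AI assistant.

def grade_trapezoid(student_func) -> str:
    """Run hidden test cases and report only a pass/fail tally."""
    tests = [
        # (integrand f, a, b, n, expected integral, tolerance)
        (lambda x: x, 0.0, 1.0, 100, 0.5, 1e-3),
        (lambda x: x * x, 0.0, 1.0, 1000, 1.0 / 3.0, 1e-3),
    ]
    passed = 0
    for f, a, b, n, expected, tol in tests:
        try:
            if abs(student_func(f, a, b, n) - expected) < tol:
                passed += 1
        except Exception:
            pass  # any error simply counts as a failed test
    return f"{passed}/{len(tests)} tests passed"

# A correct submission (possibly AI-generated) passes with no explanation required:
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + (f(a) + f(b)) / 2)

print(grade_trapezoid(trapezoid))  # -> "2/2 tests passed"
```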
The Illusion of Competence
I'll analyze the cognitive phenomenon where AI usage created a false sense of mastery:
- Definition and psychology behind the illusion of competence
- How AI-completed assignments gave students high scores without understanding
- Parallels between passive learning methods and superficial AI use
- Student resistance to acknowledging learning gaps
- The striking disparity between assignment scores and exam performance
Assessment Challenges
I'll discuss the difficult balance between maintaining academic integrity and embracing AI:
- The student backlash when assessment changes were considered
- My decision-making process regarding secured exam conditions
- The modifications made to the autograder for exams
- Analysis of exam results despite open AI access
- The impact on course evaluations and student satisfaction
Lessons Learned and Path Forward
This section will outline key insights and strategies for future implementation:
- Designing assignments that work with AI rather than against it
- Creating regular low-stakes assessments to ensure genuine engagement
- Explicitly teaching effective AI collaboration skills
- Balancing innovation with structure and accountability
- Reframing expectations for both instructors and students
Practical Implementation Strategies
I'll offer specific, actionable approaches for educators:
- Example assignments designed for effective AI collaboration
- In-class exercises that leverage AI while ensuring learning
- Assessment structures that maintain academic integrity
- Methods for monitoring and guiding productive AI use
- Approaches to cultivate student buy-in for responsible AI practices
Conclusion and Q&A
I'll close by reframing this challenging experience as valuable learning for the education community, emphasizing the importance of continued experimentation and honest dialogue about both successes and failures in educational innovation.
Lorena A. Barba is professor of mechanical and aerospace engineering at the George Washington University in Washington, DC. Her research interests include computational fluid dynamics, high-performance computing, and computational biophysics. An international leader in computational science and engineering, she is also a long-standing advocate of open-source software for science and education, and is well known for her courses and open educational resources. Barba served (2014–2021) on the Board of Directors of NumFOCUS, a US public charity that supports and promotes world-class open-source scientific software. She is an expert in research reproducibility, and was a member of the National Academies study committee on Reproducibility and Replicability in Science. She served as Reproducibility Chair for the SC19 (Supercomputing) Conference, is Editor-in-Chief of IEEE Computing in Science & Engineering, was founding editor and Associate EiC (2016–2021) of the Journal of Open Source Software, and is EiC of The Journal of Open Source Education. She was General Chair of the global JupyterCon 2020 and was named Jupyter Distinguished Contributor in 2020.