SciPy 2025

Teaching Python with GPUs: Empowering educators to share knowledge that uses GPUs
07-10, 15:00–15:30 (US/Pacific), Room 318

In today’s world of ever-growing data and AI, learning about GPUs has become an essential part of software carpentry, professional development, and education curricula. However, teaching with GPUs can be challenging, from providing access to hardware to managing dependencies and accommodating varying knowledge levels.

During this talk we will address these issues by offering practical strategies to promote active learning with GPUs and by sharing our experiences from running numerous Python conference tutorials that leveraged GPUs. Attendees will learn about the options for providing GPU access, how to tailor content to different expertise levels, and how to simplify package management where possible.

If you are an educator, researcher, or developer interested in teaching or learning about GPU computing with Python, this talk will give you the confidence to teach topics that require GPU acceleration and to get your audience up and running quickly.


GPUs are everywhere, but not necessarily immediately in front of you. How do you get one, and how do you maximize its potential? These are common questions when planning to teach content involving GPUs. With AI spreading through every field and data volumes continuously expanding, teaching GPU computing has never been more relevant.

When it comes to teaching programming concepts, it's well known that the classical lecture style no longer cuts it. Students learn more in an interactive environment. Yet teaching computing interactively, with an active learning approach, can be hard even when the software you are teaching is open source. If you've ever taught a course or given a tutorial, you have probably run into challenges such as resource accessibility, differences in operating systems, the many flavors of library installation, and the varying depth at which concepts need to be covered. Now imagine that the thing you want to teach relies on cutting-edge features of the latest and greatest hardware. You are suddenly motivated to find a way to get that hardware into the hands of everyone in the room.

The CUDA Python ecosystem has matured over the last few years, and leveraging GPUs from Python is now easier than ever. To unlock GPUs in your projects you need the right hardware and software setup in addition to the standard Python stack. During this talk we will go over some of the challenges around getting set up and discuss practical strategies to promote active learning of GPU computing.
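As a concrete illustration of how approachable this has become, here is a minimal sketch of array code that uses the GPU when CuPy is available and falls back to NumPy otherwise. This fallback pattern is an assumption on our part (not something prescribed by the talk), but it is handy in classrooms where not every attendee has a GPU yet:

```python
# Minimal sketch: run the same array code on GPU (CuPy) or CPU (NumPy).
# CuPy deliberately mirrors the NumPy API, so one alias covers both.
try:
    import cupy as xp  # GPU-accelerated, NumPy-compatible arrays
except ImportError:
    import numpy as xp  # CPU fallback for attendees without a GPU

def mean_squared(values):
    """Mean of the squared elements, computed on whichever backend loaded."""
    arr = xp.asarray(values, dtype=xp.float64)
    return float((arr * arr).mean())

print(mean_squared([1.0, 2.0, 3.0]))  # (1 + 4 + 9) / 3, on either backend
```

Because the call sites are identical on both backends, learners can follow along on a laptop and then rerun the very same code on a GPU instance later.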

Outline:

  • Resource Accessibility: How do you provide GPU access? What are the options available? Cloud-based solutions, Colab notebooks, and ready-to-go Jupyter + RAPIDS deployments.
  • Managing Knowledge Levels: zero-code-change, low-code-change, CUDA Python, CUDA C++. When and how?
  • Environment Management and Dependencies: Explain the complexity of managing GPU dependencies, including understanding errors and version incompatibilities, and introduce RAPIDS Doctor.
  • Packaging and deployment: Discuss how GPU software libraries are built and distributed, and how to install and deploy them on various platforms.
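To make the "zero-code-change" level in the outline concrete: the pandas snippet below is ordinary CPU code, and when RAPIDS' cudf.pandas accelerator is loaded first (`%load_ext cudf.pandas` in Jupyter, or `python -m cudf.pandas app.py` on the command line) the same lines run on the GPU where supported. The DataFrame contents are made up for illustration:

```python
# Unmodified pandas code: this is what "zero-code-change" acceleration means.
# With cudf.pandas loaded, supported operations (like this groupby) execute
# on the GPU; without it, the identical code runs on plain CPU pandas.
import pandas as pd

df = pd.DataFrame(
    {"cls": ["a", "b", "a", "b"], "score": [1.0, 2.0, 3.0, 4.0]}
)
means = df.groupby("cls")["score"].mean()
print(means["a"], means["b"])  # per-group means: 2.0 and 3.0
```

For a class, this means you can teach the pandas API once and let the accelerator, rather than the students, worry about where the computation runs.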

This talk is intended for educators, researchers, and developers who are interested in teaching or learning about GPU computing with Python. By the end of the talk, attendees will leave confident that when they want to teach something that requires GPU acceleration, they can get their audience up and running quickly.

Jacob Tomlinson is a senior software engineer at NVIDIA. His work involves maintaining open source projects including RAPIDS and Dask. He also tinkers with kr8s in his spare time. He lives in Exeter, UK.
