SciPy 2024

Warp: Advancing Simulation AI with Differentiable GPU Computing in Python
07-10, 15:25–15:55 (US/Pacific), Ballroom

In this talk we introduce NVIDIA Warp, an open-source Python framework designed for accelerated differentiable computing. Warp enhances Python functions with just-in-time (JIT) compilation, allowing for efficient execution on CPUs and GPUs. The talk’s focus is on Warp’s application in physics simulation, perception, robotics, and geometry processing, along with its capability to integrate with machine-learning frameworks like PyTorch and JAX. Participants will learn the basics of Warp, including its JIT compilation process and the runtime library that supports various spatial computing operations. These concepts will be illustrated with hands-on projects based on research from institutions like MIT and UCLA, providing practical experience in using Warp to address computational challenges. Targeted at academics, researchers, and professionals in computational fields, the talk is designed to inspire attendees and equip them with the knowledge and skills to use Warp in their work, enhancing their projects with efficient spatial computing.


Introduction to Warp. We explain the motivations underpinning Warp and how its kernel-based programming model differs from tensor-based Python frameworks such as PyTorch or JAX. We survey the main modules of Warp: the core module, which enables differentiable kernel programming in Python; the warp.sim module, which includes many common physical simulation models and integrators; and the warp.fem module, which is oriented toward solving partial differential equations (PDEs) with finite-element methods (FEM). We also discuss some projects outside NVIDIA that have used Warp.

Warp language details. We discuss Warp’s compilation, data, and execution models. We review the capabilities and operations supported by Warp’s multidimensional array type wp.array, and show how to write Warp kernels in Python and launch them on a CPU or GPU device using wp.launch(). We describe Warp’s code generation process, which turns an @wp.kernel-decorated Python function into a C++/CUDA intermediate representation by traversing the function’s abstract syntax tree (AST).

Writing basic Warp applications. We discuss how common simulation methods (e.g., mass–spring cloth models, semi-Lagrangian fluids) can be implemented by taking advantage of Warp’s native support for bounding volume hierarchies (BVHs), meshes, hash grids, and sparse volumetric grids (OpenVDB). We discuss common issues developers encounter when writing a new Warp application, along with debugging techniques. We also introduce the renderers available in the warp.render module, which can be used to visualize scenes involving shapes of various types.

Automatic differentiation and Warp. We review the theory behind automatic-differentiation (AD) systems, including the merits and drawbacks of forward- and reverse-mode automatic differentiation for different applications. We discuss the specific design choices that were made in Warp to support AD and use concrete examples to illustrate how Warp’s code-generation pipeline can also automatically generate adjoint programs.

Writing differentiable Warp applications. We discuss how to build differentiable Warp applications, which can be used in conjunction with machine-learning frameworks like PyTorch and JAX to learn control policies. We provide best practices for writing differentiable Warp applications.



Eric is a research scientist at NVIDIA, where he develops the open-source Python library Warp. His research interests lie at the intersection of simulation and robotics, particularly differentiable simulators that can be used to reduce the reality gap and control dynamical systems through optimization.
He received his Ph.D. in Computer Science from the University of Southern California under the supervision of Prof. Gaurav Sukhatme.
