SciPy 2023

New CUDA Toolkit packages for Conda
07-14, 10:45–11:15 (America/Chicago), Amphitheater 204

In this talk, we will examine the new CUDA package layout for Conda (as included in conda-forge). We will show how CUDA components have been broken out, explain how this affects development and package building, walk through the changes made to conda-forge infrastructure to incorporate these new packages, and examine recipes that use the new packages and what was needed to update them. We will also provide guidance on using these new packages in recipes and in library development.

Based on feedback from package maintainers and end users, we have extended and restructured the CUDA Toolkit packages in conda-forge. We have added new packages for requested CUDA components, and we have split the CUDA Toolkit packages more finely by component, giving package maintainers and end users a lightweight, precise way to state CUDA dependencies.

In addition to the CUDA redistributable libraries already available, we have included compilers, debuggers, profilers, and other tools, providing users of the conda-forge channel a full development suite for their own projects. These packages also greatly simplify the conda-forge build infrastructure. Finally, more libraries are included, allowing package maintainers to enable additional features in recipe builds.
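As a sketch of what this looks like in practice, individual tools can be installed without pulling in the entire toolkit. The package names below follow conda-forge's component layout (e.g. `cuda-nvcc` for the compiler, `cuda-gdb` for the debugger); treat them as illustrative, since exact names can vary by CUDA release:

```shell
# Install only the pieces needed -- compiler, debugger, and a profiler --
# rather than one monolithic package (names are illustrative).
conda install -c conda-forge cuda-nvcc cuda-gdb nsight-compute

# Or pull in the full development suite via the metapackage:
conda install -c conda-forge cuda-toolkit
```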

Similarly, packages have become more granular. Each component of the CUDA Toolkit is separated out, and components are further split into build-time and runtime packages. Package maintainers can now select exactly which components a build depends on, and depend only on the needed shared libraries at runtime. For the package ecosystem, this makes CUDA component usage legible in downstream recipes and packages, which makes updates more targeted and easier to manage. For end users, it means quicker downloads, more compact installs, and a smoother upgrade path.
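A minimal sketch of how the build-time/runtime split can look in a conda-forge recipe, assuming the split component naming described above (e.g. a `-dev` package for headers and link libraries, a plain package for the runtime shared library); exact names and pinning conventions may differ:

```yaml
# Illustrative meta.yaml fragment -- package names are examples only.
requirements:
  build:
    - {{ compiler('cuda') }}   # pulls in the CUDA compiler package, not the full toolkit
  host:
    - cuda-cudart-dev          # headers + link libraries, needed only at build time
    - libcublas-dev            # dev package for the one component this build uses
  run:
    - libcublas                # runtime shared library only; in practice this pin is
                               # typically added automatically via run_exports
```

Because each `-dev` package can carry a `run_exports` entry for its runtime counterpart, the run section often needs no explicit CUDA entries at all.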

To help package maintainers and users leverage this new functionality, we will present the overall package structure and how it is integrated into conda-forge, share recipe examples showing how these CUDA packages can be used, and demonstrate how they can be integrated into development workflows.

I earned my B.S. and M.S. in Physics. After graduating, I worked at the Howard Hughes Medical Institute for five years on image-processing problems, particularly in neuroscience. During that work I became more involved in open source, with particular interest in packaging, storage, and distributed array processing. I then joined the NVIDIA RAPIDS team, where there has been good overlap with these past interests as well as new ones.

Thomson Comer has been writing GPU-accelerated libraries at NVIDIA since 2018. He contributes to RAPIDS cuDF, cuSpatial, and node-rapids, and collaborates with customers and curious developers about best practices for GPU acceleration. He earned an M.S. in computer science in 2009 with a concentration in machine learning, computer vision, and graphics. Before NVIDIA, Thomson worked for a decade at the startup accelerator and consulting firm Cardinal Peak.

Rick Ratzel is a technical lead for RAPIDS cuGraph, a library of GPU-accelerated graph algorithms. Rick joined NVIDIA in January 2019, bringing several years of experience as a technical lead for teams in industries that include test and measurement, electronic design automation, and scientific computing. Rick’s focus for cuGraph, and throughout his career, has been on software architecture and API usability.