07-10, 14:35–15:05 (US/Pacific), Room 317
From radio telescopes to proton accelerators, scientific instruments produce tremendous amounts of data at equally high rates. To handle this data deluge and to ensure the fidelity of the instruments’ observations, architects have historically written measurements to disk, enabling downstream scientists and researchers to build applications with pre-recorded files. The future of scientific computing is interactive and streaming; how many Nobel Prizes are hidden on a dusty hard drive that a scientist didn’t have time or resources to analyze? In this talk, NVIDIA and the SETI Institute will present their joint work in building scalable, real-time, high-performance, and AI-ready sensor processing pipelines at the Allen Telescope Array. Our goal is to provide all scientific computing developers with the tools and tips to connect high-speed sensors to GPU compute and lower the time to scientific insights.
Background / Motivations
Most developers have a trained AI model or Python code that they’d like to connect to a real-time sensor data stream. Unfortunately, handling data movement from sensor to compute and ensuring performance through the application pipeline is difficult and typically involves many different skill sets. In this talk, we will introduce the Holoscan SDK, an Apache-2.0-licensed, domain-agnostic platform for building real-time sensor processing pipelines. Holoscan can be used to seamlessly connect GPU-accelerated libraries (CuPy, Numba, TensorRT, PyTorch) to a data stream and contains building blocks for core sensor processing functions like I/O, visualization, and AI inferencing. Our goal is to demonstrate that Holoscan is an effective, high-performance, easy-to-use, and scalable framework for upgrading offline processing to be online and real-time.
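To make the "connect a GPU-accelerated library to a data stream" idea concrete, below is a minimal sketch (not taken from the talk) of a Holoscan Python application: a source operator emits blocks of synthetic samples and a downstream operator processes them with CuPy. The Application/Operator structure follows the public Holoscan SDK Python bindings; the class names, the synthetic data, and the FFT step are purely illustrative, and exact API details should be checked against the installed SDK version.

```python
# Illustrative sketch: a two-operator Holoscan pipeline processed on GPU with CuPy.
# SignalSource/FFTProcessor/SensorApp are made-up names; the Application, Operator,
# OperatorSpec, and CountCondition usage follows the Holoscan SDK Python examples.
import cupy as cp
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec


class SignalSource(Operator):
    """Emits a block of complex samples (stand-in for a real sensor feed)."""

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        block = cp.random.standard_normal(2**20).astype(cp.complex64)
        op_output.emit(block, "out")


class FFTProcessor(Operator):
    """Taps into the stream and runs a CuPy FFT; data stays on the GPU."""

    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        block = cp.asarray(op_input.receive("in"))  # already GPU-resident, no host copy
        spectrum = cp.fft.fft(block)
        print(f"peak bin power: {float(cp.abs(spectrum).max()):.2f}")


class SensorApp(Application):
    def compose(self):
        src = SignalSource(self, CountCondition(self, 10), name="source")
        proc = FFTProcessor(self, name="processor")
        self.add_flow(src, proc, {("out", "in")})


if __name__ == "__main__":
    SensorApp().run()
```

In a real deployment, the synthetic source would be replaced by one of Holoscan's I/O operators and the CuPy step by whatever GPU library the developer already uses.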
Methods
At its core, Holoscan contains a compute scheduler and standardizations for how data optimally moves from sensor to GPU, within a GPU, and from one GPU to another. It is designed to be flexible, allowing C++ and Python developers to access shared pointers to data buffers, meaning developers can “tap into” a data stream directly on the GPU. In this talk, we’ll highlight key Holoscan components, focusing more on developer productivity than software architecture. We will discuss Holoscan’s network operators for UDP Ethernet ingest (at rates below 10 Gbps and up to roughly 200 Gbps) and its AI inferencing operator, demonstrating how one can apply an ONNX-trained model file to a data stream.
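As a rough illustration of the inference piece, the sketch below shows how an ONNX model file might be wired into a pipeline with Holoscan's inference operator. The parameter names follow the SDK's inference examples, but the operator name, tensor names, and model path here are placeholders, and details may differ across Holoscan versions.

```python
# Hypothetical wiring of an ONNX model into a Holoscan pipeline via InferenceOp.
# The keyword names (backend, model_path_map, pre_processor_map, inference_map)
# mirror the SDK's inference examples; "rfi_classifier", "detector", and
# "model.onnx" are placeholders, not names from the talk.
from holoscan.operators import InferenceOp
from holoscan.resources import UnboundedAllocator


def make_inference_op(app):
    """Build an inference operator that runs an ONNX model through TensorRT."""
    return InferenceOp(
        app,
        name="rfi_classifier",                      # illustrative operator name
        backend="trt",                              # TensorRT backend for the ONNX file
        allocator=UnboundedAllocator(app, name="inference_pool"),
        model_path_map={"detector": "model.onnx"},  # placeholder model file
        pre_processor_map={"detector": ["preprocessed"]},
        inference_map={"detector": ["output"]},
    )
```

In practice this operator would be created inside an application's compose() method and connected between a preprocessing operator and a postprocessing or visualization stage.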
The SETI Institute will touch on how it has integrated Holoscan into the signal processing pipeline at the 20-element Allen Telescope Array. We will also show real-time, high-bandwidth visualization in CyberEther with data captured and pre-processed by Holoscan’s Advanced Network Operator; this example, in particular, shows that Holoscan can integrate effectively with existing software tools for various scientific computing tasks.
Results
We can currently collect, process, and visualize one quarter of the data generated at the Allen Telescope Array, with plans to process and visualize all 20 elements and roughly 2 GHz of bandwidth on a single A6000 GPU.
That said, the majority of our talk will focus on “look at what you can do with Holoscan, GPUs, and sensor processing,” while providing concrete evidence of the performance and flexibility of the framework within the radio astronomy community.
Conclusions
In this talk, we hope to cement Holoscan as a developer-friendly and flexible tool for connecting high-speed instruments to GPU compute for both accelerated computing and AI inferencing tasks. While focusing on radio astronomy, we hope to make the case that Holoscan is a tool for all scientific computing developers building real-time pipelines.
Supporting Data
Adam Thompson is the creator of cuSignal, a GPU-accelerated version of SciPy Signal (since transitioned into CuPy), and is the Product Lead for Computational Instruments at NVIDIA. He has presented at a variety of conferences, including SciPy 2020, IEEE ICASSP, NVIDIA’s GTC, and many others.
Luigi Cruz is the creator of BLADE and CyberEther and has presented at SciPy, C++Now, GNU Radio conferences, and other radio-astronomy-specific events.
Holoscan for Ptychography: https://developer.nvidia.com/blog/accelerating-ptychography-workflows-with-nvidia-holoscan-at-diamond-light-source/
Holoscan SDK: https://github.com/nvidia-holoscan/holoscan-sdk
HoloHub (Sample Apps / Operators): https://github.com/nvidia-holoscan/holohub
SETI Beamformer: https://github.com/luigifcruz/blade
CyberEther (Visualization and GUI-Based Programming): https://github.com/luigifcruz/CyberEther
Adam Thompson is a Principal Technical Product Manager at NVIDIA, where he focuses on building hardware and software platforms targeting real-time AI, smart sensors, and tying high-speed sensor I/O to GPU-accelerated compute. His work advances edge and datacenter/cloud collaborative workloads that integrate digital twins of instruments and AI training/fine-tuning deployments.
Adam is also the creator of cuSignal, a GPU-accelerated signal processing library written in Python. With over 400,000 downloads, cuSignal is widely used in the sensor processing community and, as of CuPy v13, is fully integrated into the CuPy library.
He holds a Master’s degree in Electrical and Computer Engineering from Georgia Tech and a Bachelor’s degree in Electrical Engineering from Clemson University.
In his free time, Adam enjoys baking, listening to (and discovering!) indie music, modern lit, pour-over coffee techniques, and teaching.
Luigi Cruz is a computer engineer working as a staff engineer at the SETI Institute. He created BLADE, the CUDA-accelerated digital signal processing backend currently in use at the Allen Telescope Array (ATA) and the Very Large Array (VLA) for beamforming and high-spectral-resolution observations. Luigi also maintains multiple open-source projects, including PiSDR, an SDR-specialized Raspberry Pi image; CyberEther, a heterogeneous, accelerated signal visualization library; and Radio Core, a Python library for demodulating SDR signals on the GPU with the help of CuPy.