07-09, 15:25–15:55 (US/Pacific), Ballroom
X-ray ptychographic imaging is becoming an indispensable tool for visualizing matter at the nanoscale, driving innovation across many fields, including functional materials, electronics, and the life sciences. This imaging mode is particularly attractive because it can generate high-resolution views of an extended object without requiring a lens with a high numerical aperture. The technique relies on advanced mathematical algorithms to retrieve the phase information that is not directly recorded by a physical detector, and is therefore computationally intensive. Advances in accelerator, optics, and detector technologies have greatly increased data generation rates, posing a significant challenge for executing the reconstruction fast enough to support decision-making during an experiment. Here, we demonstrate how efficient GPU-based reconstruction algorithms, deployed at the edge, enable real-time feedback during high-speed continuous data acquisition, increasing the speed and efficiency of experiments. These developments further pave the way for AI-augmented autonomous microscopic experimentation performed at machine speeds.
- Background
The discovery of next-generation materials relies on understanding structure-function relationships in materials across various length and time scales under realistic conditions. Microscopic imaging is fundamental for visualizing material structures and behaviors. Among the many microscopy modalities, hard X-ray ptychography stands out for delivering high-resolution information on large sample volumes with high detection sensitivity through phase imaging. Advances in accelerator technology, X-ray optics, detectors, and data acquisition methods have made modern X-ray ptychographic experiments increasingly accessible to scientists across multiple domains. However, the growing volumes of data generated at ever-increasing rates outpace traditional data processing approaches, which often depend on data transfer to disk and offline post-analysis. The resulting delays inhibit decision-making, reducing the throughput and quality of the data acquired during experiments. Here, we demonstrate how efficient GPU-based reconstruction algorithms deployed at the edge enable real-time feedback in high-speed continuous data acquisition experiments, paving the way for AI-augmented autonomous microscopic experimentation.
- Methods
The edge processing pipeline was developed and deployed at the Hard X-ray Nanoprobe (HXN) 3-ID beamline of the National Synchrotron Light Source II at Brookhaven National Laboratory. The beamline, designed for multimodal microscopy experimentation, offers spatial resolution down to 12 nm and combines structural and elemental information from samples. For ptychographic experiments, the beamline employs rapid continuous scanning with samples mounted on fast translational stages, while the Eiger2 1M X-ray camera (DECTRIS Ltd) collects imaging data at a 1–2 kHz frame rate. The ptychographic data are processed using an in-house-developed, CuPy-based iterative reconstruction algorithm optimized for GPU deployment.
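To give a sense of what such a GPU-resident iteration looks like, the sketch below shows a single ePIE-style update sweep written with CuPy. This is a minimal, illustrative example under assumed names, step sizes, and array shapes; it is not the in-house HXN implementation.

```python
# Minimal ePIE-style update sweep in CuPy (illustrative sketch only;
# not the in-house HXN reconstruction code).
import cupy as cp

def epie_sweep(obj, probe, diffraction, positions, alpha=0.1, beta=0.1):
    """One pass over all scan positions of an ePIE-style update.

    obj         : complex object estimate (2D CuPy array)
    probe       : complex probe estimate (n x n CuPy array)
    diffraction : measured intensities, one n x n pattern per scan position
    positions   : integer (row, col) offsets of the probe on the object grid
    """
    n = probe.shape[0]
    for (r, c), meas in zip(positions, diffraction):
        patch = obj[r:r + n, c:c + n].copy()   # illuminated object region
        exit_wave = probe * patch

        # Propagate to the detector plane and enforce the measured modulus,
        # keeping the computed phase (the missing phase is recovered iteratively).
        fw = cp.fft.fft2(exit_wave)
        fw = cp.sqrt(meas) * cp.exp(1j * cp.angle(fw))
        revised = cp.fft.ifft2(fw)

        # Standard ePIE object and probe updates from the exit-wave difference.
        diff = revised - exit_wave
        obj[r:r + n, c:c + n] = patch + alpha * cp.conj(probe) * diff / (cp.abs(probe) ** 2).max()
        probe = probe + beta * cp.conj(patch) * diff / (cp.abs(patch) ** 2).max()
    return obj, probe
```

In a production setting the per-position updates would typically be batched and kept entirely on the GPU so that each iteration fits within the time budget set by the camera frame rate.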
- Results
We developed a pipeline in which imaging data from the camera and positional data from the translation encoders are streamed into a server equipped with a single NVIDIA A100 GPU for online reconstruction. By deploying the in-house-developed reconstruction algorithm in an edge processing pipeline powered by NVIDIA Holoscan, we demonstrate that image reconstruction can be achieved with minimal latency, keeping pace with the camera’s frame rate. This enables researchers to obtain real-time feedback during experiments, optimizing the throughput and quality of the collected data. The pipeline was further integrated with the EPICS-based beamline control system via the Ophyd library [0] to enable automatic readout of experimental parameters, improving the user experience.
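For illustration, the sketch below shows the general shape of a Holoscan application with a source operator feeding frames to a reconstruction operator. It is a minimal example with assumed operator names, frame shape, and a stub reconstruction step; it is not the production HXN pipeline.

```python
# Minimal Holoscan application sketch (illustrative only; operator names,
# frame shape, and the reconstruction stub are assumptions, not the HXN code).
import cupy as cp
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec


class FrameSourceOp(Operator):
    """Stands in for the detector/encoder stream; emits synthetic frames."""

    def setup(self, spec: OperatorSpec):
        spec.output("frame")

    def compute(self, op_input, op_output, context):
        frame = cp.random.random((1024, 1024)).astype(cp.float32)  # placeholder frame
        op_output.emit(frame, "frame")


class ReconOp(Operator):
    """Placeholder for the GPU reconstruction step applied to incoming frames."""

    def setup(self, spec: OperatorSpec):
        spec.input("frame")

    def compute(self, op_input, op_output, context):
        frame = op_input.receive("frame")
        # A real operator would feed the frame (and its scan position) into the
        # iterative solver; an FFT serves here as a stand-in GPU workload.
        _ = cp.fft.fft2(frame)


class EdgeReconApp(Application):
    def compose(self):
        # Bound the source with a CountCondition so the example terminates.
        src = FrameSourceOp(self, CountCondition(self, 100), name="source")
        recon = ReconOp(self, name="recon")
        self.add_flow(src, recon, {("frame", "frame")})


if __name__ == "__main__":
    EdgeReconApp().run()
```

The EPICS/Ophyd integration mentioned above typically amounts to reading beamline PVs through objects such as ophyd.EpicsSignalRO so that experimental parameters can be pulled into the pipeline automatically; the specific PV names are beamline-dependent and omitted here.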
- Future work
This work builds on recent efforts to increase the efficiency of ptychographic experiments. The application of deep-learning techniques to ptychographic reconstruction has shown significant gains in real-time data reconstruction, despite the time and experimental costs associated with pretraining [1]. Additionally, new AI-driven algorithms for optimal sample scanning have been proposed in electron ptychography, further enhancing data acquisition efficiency [2]. With next-generation cameras expected to deliver data at frame rates of up to 120 kHz [3], integrating these recent and ongoing developments will be essential for performing AI-enabled ptychographic imaging at machine speeds, accelerating the discovery of novel materials.
[0] https://www.tandfonline.com/doi/full/10.1080/08940886.2019.1608121
[1] https://www.nature.com/articles/s41467-023-41496-z
[2] https://www.nature.com/articles/s41598-023-35740-1
[3] https://link.springer.com/article/10.1140/epjp/s13360-024-05224-w
I am a scientific software developer at Brookhaven National Laboratory. I contribute to the Bluesky code stack, a library for experiment control and the collection of scientific data and metadata.
Denis Leshchev is a Senior Application Engineer at NVIDIA. Dr. Leshchev joined NVIDIA in 2024 and works as an application engineer for computational instruments, focusing on customer adoption of hardware and software platforms targeting real-time AI, autonomous instruments, and tying high-speed sensor I/O to GPU-accelerated compute. Dr. Leshchev has an extensive background in building scientific instrumentation, data acquisition and control systems, and data processing pipelines. He holds a PhD in Physics from Université Grenoble Alpes (Grenoble, France).