Sandeep Madireddy is a Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. His research interests span the broader areas of theoretical and applied machine learning, probabilistic modeling and high performance computing, with applications across science and engineering. His current research aims at developing deep learning algorithms and architectures tailored for scientific machine learning, with a particular focus on improving training efficiency, model robustness, uncertainty quantification and feature representation learning. He has experience applying these approaches to address diverse problems in various domains, ranging from physical sciences (material science, high energy physics, climate science) to computer systems modeling and neuromorphic computing.
Before joining Argonne, he obtained his Ph.D. in mechanical and materials engineering from the University of Cincinnati, as part of the UC Simulation Center (a UC Engineering and Procter & Gamble collaboration). Before that, he obtained his master's degree from Utah State University and his bachelor's degree from the Birla Institute of Technology and Science (BITS-Pilani) in India.
A co-design approach that encompasses neuromorphic computing, systems architecture, and data-centric applications, with a focus on high energy physics (HEP) and nuclear physics (NP) detector experiments.
This project brings together ASCR and HEP researchers to develop and apply new methods and algorithms in the area of extreme-scale inference and machine learning. The research program melds high-performance computing and techniques for “big data” analysis to enable new avenues of scientific discovery.
Deep transfer learning to automatically segment precipitates from the matrix in 3D Atom Probe Tomography data.
The project's overarching objective is to develop the simulation capability and to perform extended MHD and (drift-gyro) kinetic simulations of non-ELMing (and some ELMing) regime operating points to close gaps in the understanding, prediction, and optimization of edge stability for an FPP. I am leading a team to develop ML techniques for reduced-order modeling, data reduction, and feature extraction on the existing non-ELM database (based on interpolation of data), and to extrapolate to new parameter regimes (such as coil currents for negative-triangularity shaping) for ELM-free optimization.
Employing architectures inspired by the insect brain to devise efficient, lifelong learning machines.
The overarching objective of this SciDAC-5 project is to create consistent predictions of the dark and visible Universe across redshifts, length scales and wavebands based on state-of-the-art cosmological simulations. The simulation suite will encompass large-volume, high-resolution gravity-only simulations and hydrodynamical simulations equipped with a comprehensive set of subgrid models covering both small and large volumes. The simulations will be coupled to a powerful analysis framework and associated tools to maximize the analysis flexibility and science return.
Multi-dimensional automated scalability tests, program analysis, performance learning and prediction at various levels of the software/hardware stack.
Develop a cross-cutting artificial intelligence framework for fast inference and training on heterogeneous computing resources, along with algorithmic advances in AI explainability and uncertainty quantification.
Machine learning-based probabilistic I/O performance models that take background traffic and system state into account while predicting application performance on HPC systems.
Develop a framework for efficient and accurate equilibrium reconstructions by automating and maximizing the information extracted from measurements, and by leveraging physics-informed ML models constructed from experimental and synthetic solution databases to guide the search for the solution vector.
The project goal is to develop a probabilistic ML framework – PRISM – to improve manufacturing efficiency and to demonstrate the technologies on wing spars.
The objective of RAPIDS2 is to assist the Office of Science (SC) application teams in overcoming computer science, data, and AI challenges in the use of DOE supercomputing resources to achieve scientific breakthroughs.
The goal of RAPIDS (a SciDAC Institute for Resource and Application Productivity through Computation, Information, and Data Science) is to assist Office of Science (SC) application teams in overcoming computer science and data challenges in the use of DOE supercomputing resources to achieve science breakthroughs.
Develop modular characterization approaches that allow us to examine key performance parameters and similarities in application execution.