Team Members

Current

Postdoctoral Researchers

Adarsha Balaji

Postdoc (2023-Current)

Neuromorphic Computing

Anirban Samaddar

NSF-MSGI Fellow (2021); Visiting Student (2022); Postdoc (2023-)

Bayesian Inference

Yixuan Sun

Postdoc (2023-Current)

Physics-informed ML

Graduate Students

Tung Nguyen

Visiting Student (2023-)

Foundation Model for Weather and Climate

Hyunwoong Chang

Summer Intern (2023); Visiting Student (2023-)

Neural Architecture Search

Ray Sinurat

Ph.D. Candidate (2021-Current)

Machine learning for I/O

Zizhang Chen

Summer Intern (2023); Visiting Student (2023-)

UQ for LLMs

Alumni

Postdoctoral Researchers

Jaehoon Koo

Postdoc (2021-2022)

Deep Learning

Graduate Students

Kangrui Wang

OSRE/GSoC Summer Intern (2023)

Drift detection and LLMs

Orune Aminul

Summer Intern (2023)

Probabilistic Machine Learning

Anurag Daram

Summer Intern (2022)

Neuromorphic Computing

Sumegha Premchandar

Givens Fellow (2022); Visiting Student (2022-2023)

Bayesian Inference

Sanket Jantre

Givens Fellow (2021); Visiting Student (2021-2022)

Bayesian Inference

Pankaj Chauhan

Summer Intern (2021)

Deep Learning

Neer Bharadwaj

NSF-MSGI Fellow (2020)

Deep Generative Models

Kelvin Kan

NSF-MSGI Fellow (2020)

Physics-informed Learning

Peihong Jiang

NSF-MSGI Fellow (2019)

Reinforcement Learning

Tianchen Zhao

Givens Fellow (2018)

Deep Generative Models

Projects

A Transformative Co-Design Approach to Materials and Computer Architecture Research (Threadwork)

A co-design approach that encompasses neuromorphic computing, systems architecture, and data-centric applications, with a focus on high energy physics (HEP) and nuclear physics (NP) detector experiments.

Accelerating HEP Science: Inference and Machine Learning at Extreme Scales

This project brings together ASCR and HEP researchers to develop and apply new methods and algorithms in the area of extreme-scale inference and machine learning. The research program melds high-performance computing and techniques for “big data” analysis to enable new avenues of scientific discovery.

Atoms to Manufacturing

Deep transfer learning to automatically segment the precipitate from the matrix in 3D Atom Probe Tomography data.
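
A minimal sketch of the transfer-learning idea, assuming voxelized APT reconstructions, a frozen pretrained 3D encoder, and a newly trained segmentation head; the class names, shapes, and training loop are illustrative placeholders, not the project's actual pipeline.

```python
# Illustrative sketch only: transfer learning for voxel-wise precipitate/matrix
# segmentation of 3D APT reconstructions. All shapes and class names are assumptions.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Stand-in for an encoder pretrained on a related 3D imaging task."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

class SegmentationHead(nn.Module):
    """New task-specific head producing a per-voxel precipitate logit."""
    def __init__(self):
        super().__init__()
        self.head = nn.Conv3d(32, 1, kernel_size=1)

    def forward(self, feats):
        return self.head(feats)

encoder = Encoder3D()            # in practice: load pretrained weights here
for p in encoder.parameters():   # freeze the transferred encoder
    p.requires_grad = False

head = SegmentationHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy batch: one 32^3 voxelized APT volume and a binary precipitate mask.
volume = torch.rand(1, 1, 32, 32, 32)
mask = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()

for _ in range(5):  # fine-tune only the head on the labeled volumes
    logits = head(encoder(volume))
    loss = loss_fn(logits, mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```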

CETOP - A Center for Edge of Tokamak OPtimization

The project's overarching objective is to develop the simulation capability and to perform extended MHD and (drift-gyro) kinetic simulations of non-ELMing (and some ELMing) regime operating points, closing gaps in understanding, prediction, and optimization of edge stability for an FPP. I am leading a team developing ML techniques for reduced-order modeling, data reduction, and feature extraction on the existing non-ELM database (based on interpolation of data), and for extrapolation to new parameter regimes (such as coil currents for negative-triangularity shaping) for ELM-free optimization.
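
As one hedged illustration of what a reduced-order model over such a database could look like, the sketch below combines PCA-based data reduction with regression in the reduced space to query new operating points; the data shapes, parameter meanings, and regressor choice are assumptions, not CETOP's actual methods.

```python
# Illustrative sketch only: data reduction plus interpolation in the reduced space
# as a simple reduced-order model over a profile database. Shapes are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Assumed database: 200 edge profiles sampled at 500 radial points, each tagged
# with 3 operating-point parameters (e.g. shaping and coil-current knobs).
params = rng.uniform(-1.0, 1.0, size=(200, 3))
profiles = np.sin(params @ rng.normal(size=(3, 500))) + 0.01 * rng.normal(size=(200, 500))

# Data reduction: keep a handful of principal components of the profiles.
pca = PCA(n_components=5)
coeffs = pca.fit_transform(profiles)

# Reduced-order model: map operating-point parameters to PCA coefficients.
rom = Ridge(alpha=1e-3).fit(params, coeffs)

# Query a new operating point and reconstruct the predicted profile.
new_point = np.array([[0.2, -0.5, 0.8]])
predicted_profile = pca.inverse_transform(rom.predict(new_point))
print(predicted_profile.shape)  # (1, 500)
```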

Dynamic architectures through introspection and neuromodulation (DARPA’s Lifelong Learning Machines Program)

Employing architectures inspired by the insect brain to devise efficient, lifelong learning machines.

Enabling Cosmic Discoveries in the Exascale Era

The overarching objective of this SciDAC-5 project is to create consistent predictions of the dark and visible Universe across redshifts, length scales and wavebands based on state-of-the-art cosmological simulations. The simulation suite will encompass large-volume, high-resolution gravity-only simulations and hydrodynamical simulations equipped with a comprehensive set of subgrid models covering both small and large volumes. The simulations will be coupled to a powerful analysis framework and associated tools to maximize the analysis flexibility and science return.

Foundations for Correctness Checkability and Performance Predictability of Systems at Scale (ScaleSTUDS)

Multi-dimensional automated scalability tests, program analysis, performance learning and prediction at various levels of the software/hardware stack.

High-Velocity Artificial Intelligence for HEP

Develop a cross-cutting artificial intelligence framework for fast inference and training on heterogeneous computing resources, together with algorithmic advances in AI explainability and uncertainty quantification.

Improving Computational Science Throughput via Model-Based I/O Optimization (SciDAC SUPER-SDAV)

Machine learning-based probabilistic I/O performance models that take background traffic and system state into account while predicting application performance on HPC systems.
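
A minimal sketch of one way such a probabilistic performance model could be set up, using a Gaussian process over synthetic features; the feature names, units, and kernel choice are assumptions rather than the project's actual model.

```python
# Illustrative sketch only: a probabilistic model of application I/O performance
# conditioned on background traffic and system state. Features and the
# Gaussian-process choice are assumptions, not the project's actual model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training data: [background_traffic_GBps, busy_node_fraction, request_size_MB]
X = rng.uniform([0.0, 0.0, 1.0], [10.0, 1.0, 128.0], size=(200, 3))
# Synthetic observed write bandwidth (GB/s), degraded by contention, plus noise.
y = 5.0 - 0.3 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 0.2, 200)

# WhiteKernel lets the model attribute part of the variability to run-to-run noise.
model = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 10.0]) + WhiteKernel())
model.fit(X, y)

# Predict bandwidth for a new system state, with an uncertainty estimate.
x_new = np.array([[4.0, 0.5, 64.0]])
mean, std = model.predict(x_new, return_std=True)
print(f"predicted bandwidth: {mean[0]:.2f} GB/s +/- {std[0]:.2f}")
```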

ML Assisted Equilibrium Reconstruction for Tokamak Experiments and Burning Plasmas

Develop a framework for efficient and accurate equilibrium reconstructions by automating and maximizing the information extracted from measurements, and by leveraging physics-informed ML models constructed from experimental and synthetic solution databases to guide the search for the solution vector.
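
A minimal sketch of the physics-informed surrogate idea, combining a data-fit loss over a solution database with a placeholder physics-residual penalty; the network, residual function, and vector sizes are illustrative assumptions, not the project's reconstruction code.

```python
# Illustrative sketch only: a physics-informed surrogate that maps diagnostic
# measurements to an equilibrium solution vector. The residual function and all
# shapes are placeholders.
import torch
import torch.nn as nn

n_meas, n_solution = 32, 8   # assumed sizes of the measurement and solution vectors

surrogate = nn.Sequential(
    nn.Linear(n_meas, 64), nn.Tanh(),
    nn.Linear(64, n_solution),
)

def physics_residual(solution):
    """Placeholder for a physics-consistency penalty on the predicted solution."""
    return (solution ** 2).mean()

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Toy database of synthetic (measurement, solution) pairs.
measurements = torch.randn(256, n_meas)
solutions = torch.randn(256, n_solution)

for _ in range(100):
    pred = surrogate(measurements)
    data_loss = nn.functional.mse_loss(pred, solutions)  # fit the solution database
    physics_loss = physics_residual(pred)                # penalize unphysical outputs
    loss = data_loss + 0.1 * physics_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained surrogate's output can then seed a conventional equilibrium solver.
```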

Probabilistic Machine Learning for Rapid Large-Scale and High-Rate Aerostructure Manufacturing (PRISM)

The project goal is to develop a probabilistic ML framework, PRISM, to improve manufacturing efficiency and demonstrate the technologies on wing spars.

RAPIDS2: SciDAC Institute for Computer Science, Data, and Artificial Intelligence

The objective of RAPIDS2 is to assist the Office of Science (SC) application teams in overcoming computer science, data, and AI challenges in the use of DOE supercomputing resources to achieve scientific breakthroughs.

RAPIDS: A SciDAC Institute for Computer Science and Data

The goal of the RAPIDS institute (a SciDAC Institute for Resource and Application Productivity through Computation, Information, and Data Science) is to assist Office of Science (SC) application teams in overcoming computer science and data challenges in the use of DOE supercomputing resources to achieve science breakthroughs.

Self-Aware Adaptive Workflow and Data Management Services for Future HPC Systems

Develop modular characterization approaches that examine key performance parameters and application execution similarities.

Recent Publications