Mini-course Registration Deadline is May 1, 2026

Lake Tahoe

Join us for this Informative Mini-Course!

Thursday, June 25, 1:30 PM – 5:00 PM
Friday, June 26, 8:00 AM – 5:00 PM

Theme: ML/AI for Computational Plasma Physics: Foundations, Tools, and Applications

Topics:

  1. ML/AI Foundations and Tools for Plasma Physicists
    • Focus: Demystifying common libraries (e.g., PyTorch, TensorFlow, scikit-learn); interpreting models; understanding “what’s going on in the black box.”
    • Topics:
      • Model interpretability in physics
      • Auto-differentiation and training loops (see the sketch after this topic list)
      • Practical tips for deploying AI tools in plasma physics
  2. AI in Experimental Plasma Physics
    • Focus: How ML/AI enhances experimental design, diagnostics, and control.
    • Topics:
      • Real-time anomaly detection in experiments
      • AI-guided design of experiments
      • Data fusion techniques
  3. AI-Accelerated Plasma Algorithms
    • Focus: How AI augments or replaces traditional numerical methods.
    • Topics:
      • ML-enhanced solvers for kinetic equations
      • Closure models using neural networks
      • Reinforcement learning for control and optimization
  4. Surrogates, Digital Twins, and Emulators
    • Focus: Fast and interpretable reduced-order models for plasma simulations.
    • Topics:
      • Latent space dynamics identification (LaSDI), dynamic mode decomposition (DMD), and neural operators (NO)
      • Emulation strategies for design and control
      • Uncertainty quantification in surrogates
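As a concrete taste of topic 1, the sketch below shows what an auto-differentiation training loop looks like in PyTorch: a few lines define a model, a loss, and an optimizer, and backward() fills in gradients automatically. It is a minimal, self-contained toy (the sine-fitting data and network sizes are invented for illustration), not a prescription for any particular plasma problem.

import torch

# Toy regression data: learn y = sin(x) from samples (a hypothetical
# stand-in for any plasma quantity one might fit).
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss()

for epoch in range(500):
    optimizer.zero_grad()         # clear accumulated gradients
    loss = loss_fn(model(x), y)   # forward pass
    loss.backward()               # reverse-mode auto-differentiation
    optimizer.step()              # gradient update
print(f"final loss: {loss.item():.3e}")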

Mini-Course Presentations

Min Sang Cho

LLNL

LaSDI-Based Physics-Informed Surrogate Modeling for Time-Dependent Atomic Kinetics in Plasma

Abstract

Machine-learning (ML) surrogates are increasingly used to accelerate plasma simulations by learning compact, reusable models that capture essential physics at a small fraction of the computational cost. In this lecture segment on Surrogates, Digital Twins, and Emulators, I will introduce core ideas in reduced-order modeling: compressing high-dimensional simulation outputs into a low-dimensional latent space, and learning interpretable latent dynamics (e.g., LaSDI – Latent Space Dynamics Identification) that enable rapid emulation for design, optimization, and control. I will then connect these concepts to my recent work on time-dependent atomic kinetics in laser-driven plasmas, where high-fidelity inline coupling to radiation–hydrodynamic simulations is often computationally intractable. We develop a LaSDI-based physics-informed surrogate that (i) compresses code-generated atomic populations using an autoencoder trained with a mixed loss that preserves both microscopic populations and macroscopic observables, (ii) identifies latent dynamics using symbolic-regression methods driven by time-dependent temperature and density histories, and (iii) enforces convergence and steady-state constraints—together with stability conditions—to ensure physically meaningful long-time behavior. The result is a fast, interpretable surrogate that illustrates how constraint-aware latent dynamics can bring ML surrogates closer to practical workflows for plasma modeling.
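For orientation, the LaSDI recipe above can be caricatured in a few lines: compress snapshots with an autoencoder, then fit the latent time evolution against a library of candidate terms. The sketch below is a toy illustration with invented data and dimensions, not the speaker's code; a practical LaSDI adds the macroscopic-observable loss terms, sparsity-promoting regression, and the stability constraints the abstract describes.

import numpy as np
import torch

# Toy "simulation" snapshots: 100-dimensional states evolving in time
t = np.linspace(0, 10, 200)
X = np.stack([np.exp(-0.3 * t) * np.sin(t + p)
              for p in np.linspace(0, 1, 100)], axis=1)   # (200, 100)

X_t = torch.tensor(X, dtype=torch.float32)
enc = torch.nn.Sequential(torch.nn.Linear(100, 2))   # encoder to 2-D latent
dec = torch.nn.Sequential(torch.nn.Linear(2, 100))   # decoder back to state
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.mean((dec(enc(X_t)) - X_t) ** 2)    # reconstruction loss
    loss.backward()
    opt.step()

# Identify latent dynamics dz/dt ≈ Θ(z) ξ from a candidate library
z = enc(X_t).detach().numpy()                        # (200, 2) latent trajectory
dz = np.gradient(z, t, axis=0)                       # finite-difference derivative
theta = np.hstack([np.ones((len(t), 1)), z, z**2])   # library: 1, z, z^2
xi, *_ = np.linalg.lstsq(theta, dz, rcond=None)      # least-squares fit
print("identified latent coefficients:\n", xi.round(3))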

Bio

Min Sang Cho is a staff scientist at Lawrence Livermore National Laboratory (LLNL), where he has conducted research since 2022 after earning his Ph.D. from the Gwangju Institute of Science and Technology (GIST), South Korea. His research focuses on the non-equilibrium dynamics of laser-driven and high-energy-density plasmas, with particular emphasis on atomic-physics modeling. More recently, he has been developing physics-informed surrogate models for atomic plasma kinetics, leveraging latent-space dynamics identification and reduced-order modeling to enable fast, interpretable emulators of complex plasma systems for simulation, design, and analysis.

Asif Iqbal

University of Michigan

Surrogate Modeling and Machine Learning Techniques for Computational Plasma Science: A Case Study on Multipactor Prediction

Abstract

This mini-course introduces practical methodologies for applying supervised machine learning (ML) and artificial intelligence techniques to computational plasma physics, using multipactor discharge as a focused case study. Multipactor is a high-power RF-field-driven electron avalanche and a critical reliability concern in high-power microwave, space, and accelerator systems.  The course walks through an end-to-end modeling pipeline covering dataset construction, feature selection, model validation, and performance assessment. Particular emphasis is placed on challenges specific to plasma modeling such as nonlinear response behavior, limited training data, and model generalization to unseen materials. While demonstrated in the context of multipactor, the techniques are broadly applicable across computational plasma problems including low-temperature plasma, high-power RF, fusion, and space applications. The session will conclude with a look at future directions in physics-informed ML, uncertainty quantification, and hybrid modeling frameworks.
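To make the pipeline concrete, here is a minimal supervised-learning sketch in scikit-learn: synthetic features, an invented labeling rule standing in for simulation-labeled multipactor onset, and k-fold cross-validation as the validation step. The feature names and the toy rule are assumptions for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features: RF frequency (Hz), gap voltage (V), peak secondary yield
X = rng.uniform([1e9, 100, 1.0], [1e10, 5000, 3.0], size=(500, 3))
# Invented toy rule standing in for simulation-labeled multipactor onset
y = ((X[:, 1] / X[:, 0] * 1e7 > 1.0) & (X[:, 2] > 1.5)).astype(int)

clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5)   # k-fold validation guards against overfitting
print(f"CV accuracy: {scores.mean():.2f} ± {scores.std():.2f}")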

Bio

Asif Iqbal is a Research Fellow in the Department of Nuclear Engineering and Radiological Sciences at the University of Michigan. He received his Ph.D. in Electrical Engineering from Michigan State University in 2021. His research focuses on fundamental and computational plasma physics, with particular emphasis on physics-based (e.g., Particle-in-cell, fluid, and Monte Carlo) and data-driven (e.g., machine learning, surrogate, empirical, and semi-empirical) model development for low-temperature plasma, RF, and accelerator applications. Dr. Iqbal is actively involved in the plasma science community and currently serves as the Visa/International Chair for the IEEE International Conference on Plasma Science (ICOPS 2026).

Jeph Wang

LANL

Data-driven measurements and instrumentation

Abstract

Data-driven methods (DDMs), such as deep neural networks, offer a generic approach to integrated data analysis (IDA) and integrated diagnostic-to-control (IDC) workflows through data fusion (DF), which includes multi-instrument data fusion (MIDF), multi-experiment data fusion (MXDF), and simulation-experiment data fusion (SXDF). These features make DDMs attractive for nuclear fusion energy and power plant applications, leveraging workflows accelerated by machine learning and artificial intelligence. Here we describe the Physics-informed Meta-instrument for eXperiments (PiMiX), which integrates X-ray (including high-energy photons such as γ-rays from nuclear fusion), neutron, and other measurements (such as proton radiography) for nuclear fusion. PiMiX solves multi-domain high-dimensional optimization problems and integrates multi-modal measurements with multiphysics modeling through neural networks. Super-resolution for neutron detection and energy-resolved X-ray detection have been demonstrated. Multi-modal measurements through MIDF can extract more information than individual or uni-modal measurements alone. Further optimization schemes through DF are possible, pointing toward the discovery of empirical fusion scaling laws and new fusion reactor designs.
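A minimal sketch of the multi-instrument data fusion idea is below: one encoder per modality feeding a joint head. All layer sizes and inputs are invented; this is not the PiMiX architecture, just the generic fusion pattern in PyTorch.

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Toy MIDF-style network: one encoder per instrument, fused head."""
    def __init__(self):
        super().__init__()
        self.xray_enc = nn.Sequential(nn.Linear(64, 16), nn.ReLU())     # X-ray spectrum
        self.neutron_enc = nn.Sequential(nn.Linear(32, 16), nn.ReLU())  # neutron signal
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, xray, neutron):
        # Concatenate per-modality features, then infer a fused quantity
        z = torch.cat([self.xray_enc(xray), self.neutron_enc(neutron)], dim=-1)
        return self.head(z)

net = FusionNet()
out = net(torch.randn(8, 64), torch.randn(8, 32))  # batch of 8 synthetic shots
print(out.shape)  # torch.Size([8, 1])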

Bio

Dr. Zhehui (Jeph) Wang is a scientist and team leader in the Physics Division at Los Alamos National Laboratory (LANL). Since earning his Ph.D. from Princeton University, he has devoted nearly thirty years to experimental physics and scientific instrumentation at one of the world’s premier research institutions. Dr. Wang has made pioneering contributions across a broad range of scientific areas, including plasma dynamos, plasma flow diagnostics, hypervelocity dust technology and applications, novel applications of dust in fusion plasmas, and fundamental neutron physics, with a particular emphasis on precision measurements of the neutron lifetime. Many of these advances were enabled by his innovations and leadership in measurement technologies, lately enhanced by machine learning and AI. He has also played a key role in dynamic materials research through the development and application of ultrafast synchrotron X-ray imaging and tomography techniques, breaking critical technological barriers at state-of-the-art light sources, such as fourth-generation synchrotrons. Dr. Wang has authored and coauthored more than 100 peer-reviewed technical publications and holds a growing portfolio of patents in advanced radiographic imaging and tomography, spanning plasmas, neutrons, and X-rays. In recent years, his scientific and technical leadership has expanded to data-driven experiments and instrumentation, integrating classical and quantum-enabled methods with cutting-edge instruments, and experimental facilities.

Ionut Farcas

Virginia Tech

Fast prediction of plasma instabilities with sparse-grid-accelerated optimized dynamic mode decomposition

Abstract

Many problems in fusion research require running the same simulation repeatedly for different operating conditions, for example, in optimization, uncertainty quantification, or control. These so-called “many-query” tasks are central to emerging digital-twin efforts, but they are often computationally infeasible when each simulation is expensive and depends on variations in many input parameters (e.g., density, temperature, geometry). In this presentation, we introduce an efficient data-driven approach for building parametric reduced-order models that remain accurate while drastically reducing computational cost. Rather than relying on dense parameter sweeps, which become prohibitively expensive as the number of varying inputs grows, we use a sparse grid strategy to select a small but informative set of parameter instances that capture the dominant trends in the system. This approach mitigates the exponential cost growth typically associated with high-dimensional parameter spaces. We demonstrate the method using gyrokinetic simulations of plasma micro-instabilities relevant to fusion experiments. Reduced-order models are constructed for the full five-dimensional gyrokinetic distribution function by combining optimized dynamic mode decomposition with sparse sampling in parameter space. For a realistic electron-temperature-gradient-driven instability based on a DIII-D discharge involving six input parameters, we show that a predictive parametric reduced-order model can be built using only 28 high-fidelity gyrokinetic simulations. The resulting model is up to three orders of magnitude cheaper to evaluate than full simulations while retaining key physical fidelity. These results highlight the potential of sparse-grid-accelerated, data-driven reduced-order modeling to make high-dimensional many-query tasks tractable, supporting future applications in fusion optimization, uncertainty analysis, and real-time control.
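For readers new to DMD, the sketch below implements plain exact DMD with NumPy on synthetic snapshots: truncate an SVD, form the reduced linear operator, and read off modes and eigenvalues. The optimized DMD used in the talk solves a more robust nonlinear least-squares variant, and the sparse-grid parameter sampling is a separate layer on top; both are omitted here.

import numpy as np

rng = np.random.default_rng(1)
# Synthetic snapshot matrix: two spatiotemporal modes plus noise
t = np.linspace(0, 8, 400)
x = np.linspace(0, 1, 64)[:, None]
X = (np.sin(3 * x) * np.cos(20 * t) * np.exp(-0.2 * t)
     + np.cos(5 * x) * np.sin(7 * t)
     + 0.01 * rng.standard_normal((64, 400)))

X1, X2 = X[:, :-1], X[:, 1:]                 # paired snapshots x_k -> x_{k+1}
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 6                                        # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1 / s)   # reduced operator
eigvals, W = np.linalg.eig(A_tilde)
modes = X2 @ Vh.conj().T @ np.diag(1 / s) @ W              # DMD modes
dt = t[1] - t[0]
print("continuous-time eigenvalues:", np.log(eigvals) / dt)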

Bio

I am an Assistant Professor in the Department of Mathematics at Virginia Tech. Prior to joining Virginia Tech, I was a Postdoctoral Fellow at the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin, working with Dr. Karen Willcox and Dr. Frank Jenko on data-driven reduced-order modeling and uncertainty quantification for large-scale numerical simulations in rocket combustion and fusion plasmas. I earned my Ph.D. summa cum laude from the Technical University of Munich in 2020, focusing on efficient numerical methods for uncertainty quantification in large-scale problems. My research lies at the intersection of data-driven learning, model reduction, uncertainty quantification, multi-fidelity methods, and high-performance computing, with applications to complex real-world systems.

Azarakhsh Jalalvand

Princeton University

AI in Experimental Plasma Physics: From Machine Learning Foundations to Real-Time Tokamak Control

Abstract

Artificial intelligence (AI) and machine learning (ML) are rapidly becoming essential tools in experimental plasma physics, enabling new capabilities in diagnostics, control, and experimental optimization. This mini-course provides an accessible introduction to how ML models are designed, validated, and deployed in real experimental environments, with practical examples from DIII-D and KSTAR tokamaks.

The course begins with the fundamentals: how to frame plasma physics problems for ML, prepare experimental data, select appropriate model architectures, and evaluate model reliability. Key concepts such as supervised learning, neural networks, surrogate modeling, uncertainty estimation, and physics-informed learning will be introduced with minimal mathematical prerequisites. Emphasis will be placed on connecting ML tools to physical intuition rather than treating them as black boxes.

We then walk through the full pipeline from offline model development to real-time deployment in plasma control systems. Practical use cases include real-time anomaly detection for identifying disruptions and diagnostic faults, and multimodal data fusion for reconstructing missing or low-resolution diagnostics. Examples will demonstrate how ML enables super-resolution diagnostics, failure-tolerant monitoring, detachment control, and adaptive actuator coordination.
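To illustrate the real-time flavor of these use cases, the toy loop below runs a hypothetical (untrained) disruption-risk model against synthetic diagnostic vectors and measures per-cycle latency. The 12-channel input, the tiny MLP, and the 0.8 alarm threshold are all invented for the sketch.

import time
import torch

# Hypothetical pre-trained disruption-risk model (stand-in: a tiny MLP)
model = torch.nn.Sequential(torch.nn.Linear(12, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1), torch.nn.Sigmoid())
model.eval()
ALARM_THRESHOLD = 0.8   # invented trip level

with torch.no_grad():
    for step in range(5):                    # stand-in for the control-system cycle
        t0 = time.perf_counter()
        signals = torch.rand(1, 12)          # 12 synthetic diagnostic channels
        risk = model(signals).item()
        latency_ms = (time.perf_counter() - t0) * 1e3
        flag = "ALARM" if risk > ALARM_THRESHOLD else "ok"
        print(f"step {step}: risk={risk:.2f} [{flag}] latency={latency_ms:.2f} ms")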

Special attention will be given to real-time constraints: latency, interpretability, validation under distribution shifts, and integration with existing plasma control architectures. Lessons learned from running AI-driven control experiments, where models directly influence actuator commands, will be discussed, highlighting both successes and challenges.

By the end of this mini-course, participants will understand how AI tools are developed and safely implemented in experimental tokamaks, and how these methods can enhance experimental efficiency, resilience, and physics discovery. The goal is to equip students and researchers with a clear conceptual roadmap for applying ML/AI to plasma science problems while maintaining scientific rigor and physical insight.

Bio

Azarakhsh (Aza) Jalalvand is a researcher at Princeton University’s Plasma Control Group, where his work focuses on applying artificial intelligence to fusion energy. His research spans advanced plasma diagnostics, real-time control, and predictive models to support reliable operation of next-generation fusion reactors. He is also engaged internationally: he collaborates with Ghent University as a voluntary researcher and with facilities such as KSTAR in Korea, and contributes to the ITER Control Scientist Fellow Network. His recent work has demonstrated AI-driven super-resolution diagnostics, diagnostic failure mitigation, and scenario optimization, positioning AI as a key enabler for ITER and future Fusion Pilot Plants.

Mark Kostuk

General Atomics

Closed-Loop Model Optimization within the DIII-D Digital Twin

Abstract

The DIII-D National Fusion Facility’s digital twin has made significant strides in integrating real-time predictive modeling with experimental plasma operations. The digital twin is under continual development and the latest results will be discussed. One of the key ongoing challenges remains the integration and validation of models with data to create a continuously updating high-fidelity representation of the experiment. This mini-course talk will focus on an essential aspect of this integration: closed-loop model optimization.

We will explore two critical applications of optimization (model retraining) involving the digital twin: model update loops and control parameter selection. The model update loop optimizes model parameters to improve predictive simulation accuracy, enabling better experimental design over time; if sufficiently fast, these updates could occur in the context of real-time plasma control. Control parameter selection seeks to identify optimal settings to achieve target plasma scenarios. Both applications pose significant challenges because performance must be evaluated within an active closed loop. This loop comprises at least the virtual plasma control system (vPCS) and a predictive plasma dynamics model, yet realistically also contains multiple surrogate models covering different experimental aspects and data modalities; therefore, care must be taken in how this optimization is performed.
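The core of a model update loop can be sketched in a few lines: run the predictive model, compare to measurements, and let an optimizer adjust model parameters. Below is a deliberately tiny example with an invented one-parameter relaxation model and SciPy's minimize; the real digital-twin loop wraps far more machinery (the vPCS, surrogates, multiple data modalities) around the same pattern.

import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 1, 50)

def simulate(tau):
    """Toy 'predictive model': exponential relaxation with time constant tau."""
    return 1.0 - np.exp(-t / tau)

# Synthetic "experimental" data generated with tau = 0.23 plus noise
measured = simulate(0.23) + 0.01 * np.random.default_rng(2).standard_normal(t.size)

def misfit(params):
    # Closed-loop objective: run the model, compare against experiment
    return np.mean((simulate(params[0]) - measured) ** 2)

result = minimize(misfit, x0=[0.5], bounds=[(0.01, 2.0)])
print(f"updated model parameter tau = {result.x[0]:.3f}")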

This talk will discuss the techniques and challenges associated with optimizing models within the digital twin, including the interplay between the vPCS and predictive models, and the need for self-consistent results. We will also highlight the potential for these techniques to improve model accuracy using multiple data modalities and/or combining equation-based models with machine learning models. As the mini-course is intended for researchers new to these challenges, general concepts and approaches will be described as much as specific results, with the goal that every participant can take away information useful to their own areas of interest.

Acknowledgement: This work was supported under General Atomics’ corporate funds. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, using the DIII-D National Fusion Facility, a DOE Office of Science user facility, under Award(s) DE-FC02-04ER54698. An award of computer time was provided by the ASCR Leadership Computing Challenge (ALCC) program. This research used resources of the Argonne Leadership Computing Facility, which is a U.S. Department of Energy Office of Science User Facility operated under contract DE-AC02-06CH11357.

Bio

Dr. Kostuk obtained his PhD in physics from the University of California, San Diego, with a focus on nonlinear dynamical system identification, control theory, and chaotic data assimilation. For the last decade he has applied these skills to fusion simulation, modeling, and data analysis at General Atomics and the DIII-D National Fusion Facility. While there, Kostuk has contributed to many software and simulation projects, including serving as principal investigator on applying quantum computation to the discovery of fusion energy materials. He led a group of collaborators to be the first to address the challenge of on-demand, remote execution of large, high-fidelity simulations at a leadership compute facility in support of ongoing plasma experiments at DIII-D. He currently leads the Advanced Computing Team at General Atomics and, as part of the DIII-D digital twin effort, is presently focused on the problems of heterogeneous model integration, modularity, and performance at variable scales that lie at the core of digital twin development.

Youngsoo Choi

LLNL

Local reduced-order modeling for electrostatic plasmas by physics-informed solution manifold decomposition

Abstract

High-fidelity plasma simulations are essential for understanding kinetic phenomena, but they can be too computationally expensive for rapid parameter studies, design exploration, or repeated analysis. Reduced-order modeling (ROM) offers a practical way to accelerate simulation by constructing compact surrogate models that preserve the dominant behavior of the full system.

This lecture introduces the basic ideas of projection-based reduced-order modeling in a way that is accessible to participants without prior background in machine learning or AI. We begin with simple examples (such as diffusion-type problems) to build intuition for key concepts, including snapshot data, low-dimensional basis construction (e.g., proper orthogonal decomposition), and projection (e.g., Galerkin projection). Throughout the lecture, these ideas will be demonstrated using the open-source libROM toolbox, providing a practical, hands-on view of how projection-based ROM workflows are implemented.
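As a pocket version of that workflow, the NumPy sketch below builds snapshots from a toy 1D diffusion solve, extracts a POD basis via the SVD, and Galerkin-projects the operator to march a five-dimensional reduced state. Grid sizes and time steps are invented; libROM provides scalable versions of each of these steps.

import numpy as np

# Full-order model: 1D heat equation, explicit Euler on n grid points
n, dt, nu = 64, 1e-4, 1.0
x = np.linspace(0, 1, n)
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * nu / (x[1] - x[0]) ** 2
u = np.exp(-100 * (x - 0.5) ** 2)            # initial condition
snapshots = [u.copy()]
for _ in range(500):
    u = u + dt * A @ u
    snapshots.append(u.copy())
S = np.array(snapshots).T                    # snapshot matrix (n x n_t)

# POD: leading left singular vectors form the reduced basis
Phi, sigma, _ = np.linalg.svd(S, full_matrices=False)
r = 5
Phi = Phi[:, :r]
print("retained energy:", (sigma[:r] ** 2).sum() / (sigma ** 2).sum())

# Galerkin projection: reduced operator A_r = Phi^T A Phi, state u ≈ Phi q
A_r = Phi.T @ A @ Phi
q = Phi.T @ snapshots[0]
for _ in range(500):
    q = q + dt * A_r @ q
print("ROM error:", np.linalg.norm(Phi @ q - u) / np.linalg.norm(u))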

We then move to more challenging applications in electrostatic plasma physics, focusing on kinetic models such as the Vlasov–Poisson system. In this setting, a single global reduced basis may struggle to represent multiscale, strongly evolving, or instability-driven dynamics over long time intervals. To address this, the lecture presents local reduced-order modeling, in which the solution manifold is decomposed into multiple regions (or time windows), and a tailored reduced model is built for each region.

A central topic is physics-informed solution manifold decomposition, where physically meaningful indicators (such as physical time and electric-field energy) guide the construction of local ROMs. This approach improves robustness and accuracy while retaining interpretability, especially for challenging electrostatic plasma problems.

Bio

Youngsoo is a staff scientist at LLNL’s CASC group, where he develops efficient foundation models for computational science. His research focuses on creating surrogates and reduced-order models to accelerate time-critical simulations in areas such as inverse problems, design optimization, and uncertainty quantification. He has pioneered advanced ROM techniques—including machine learning-based nonlinear manifolds, space-time ROMs, component-wise ROM optimization, and latent space dynamics identification—and currently leads the libROM team in data-driven surrogate modeling. His contributions extend to open source projects such as libROM, pylibROM, LaghosROM, ScaleupROM, LaSDI, WLaSDI, tLaSDI, gLaSDI, NM-ROM, DD-NM-ROM, and GappyAE. Youngsoo earned his BS from Cornell and his PhD from Stanford, and he was a postdoc at Sandia National Laboratories and Stanford University before joining LLNL in 2017.

William Lewis

Sandia

Surrogate Based Optimization of Magnetized Liner Inertial Fusion Target Design Using Automated Workflows

Abstract

In this talk, I will discuss workflow orchestration tools and machine learning-based optimization of magneto-inertial fusion experiments at Sandia’s Z facility. The need for these tools and methods derives from the typically high-dimensional and computationally complex nature of designing plasma physics experiments. The design process includes elements ranging from selecting fundamental input conditions to specifying instrument configurations to diagnose the plasma generated. Furthermore, each of these components is often studied using software written in a multitude of programming languages by many different authors. As a result, experiment design often follows an iterative process of either manually handing off results from one person to another or using opaque and relatively inflexible scripts to automate workflows. In this lecture, I will demonstrate the use of a workflow orchestration tool developed at Sandia National Laboratories (SNL), known as Next Generation Workflow (NGW), for optimization of composite Magnetized Liner Inertial Fusion (MagLIF) liners simulated by a radiation magneto-hydrodynamics code called Kraken. Our application leverages NGW as the workflow orchestrator because it natively interfaces with SNL high-performance computing assets as well as Dakota, the SNL-developed open-source optimization and uncertainty quantification engine. However, we will emphasize features that are general to all workflow orchestration software and highlight other open-source options.
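Stripped of the orchestration layer, the surrogate-based optimization at the heart of such a workflow looks like the sketch below: fit a Gaussian-process surrogate to a handful of expensive evaluations, pick the next design by an acquisition rule, and repeat. The one-dimensional objective is invented; NGW and Dakota manage the real version of this loop across HPC jobs.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_sim(x):
    """Stand-in for a radiation-MHD run (invented 1-D objective)."""
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(5, 1))           # initial design points
y = expensive_sim(X).ravel()

for it in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  alpha=1e-6).fit(X, y)
    grid = np.linspace(-2, 2, 200)[:, None]
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmin(mu - 1.96 * sd)]  # lower-confidence-bound acquisition
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_sim(x_next))

print(f"best design: x = {X[np.argmin(y)][0]:.3f}, objective = {y.min():.3f}")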

Bio

William Lewis is a principal member of the technical staff in the Pulsed Power Sciences Center at Sandia National Laboratories. He received a Ph.D. in physics from the University of Colorado at Boulder in 2018, publishing in the areas of optics, condensed matter, and atomic physics. After a brief period as a data analyst, he joined Sandia as a postdoctoral appointee in 2019. His current research focuses on development and application of statistics and machine learning methods to data analysis, simulation, and design of pulsed power experiments conducted on Sandia’s Z Machine. During his tenure at Sandia, he has coauthored more than 20 research articles. These works include a review article on applications of data science methods to high energy density physics as well as over 13 peer-reviewed physics and statistics journal articles utilizing methodologies such as fully connected and convolutional neural networks, wavelet transforms, Bayesian inference, sparse tomographic reconstruction, data mining, discovery of hidden variables, and Bayesian optimization. He has collaborated with numerous academics, students, and members of the Sandia workforce to apply these techniques to problems in x-ray and neutron image and spectroscopy analysis, construction of material properties models, and design of pulsed power experiments.

Elliot English

DeepMind

JAX for Scientific Computing and Fusion

Abstract

Recent innovations in machine learning have enabled a new era of computational science where models are no longer static simulations but tools that are scalable, differentiable, hardware-agnostic, and faster to develop. This talk explores the shift toward differentiable programming in physics, demonstrating how JAX allows researchers to fuse traditional PDE solvers with black-box ML representations to create systems that are directly optimizable via gradient-based methods. We provide a technical deep dive into the JAX ecosystem, illustrating how to encode numerical algorithms as tensor expressions while mastering functional transformations like vectorization, automatic differentiation, and seamless scaling across CPU, GPU, and TPU clusters. By implementing a structured grid PDE solver from scratch, including stencil evaluation and boundary conditions, we demonstrate how to optimize complex system parameters against target design objectives. We also examine several relevant open source tools, including both TORAX and DESC.
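A condensed version of that exercise is sketched below, assuming nothing beyond standard JAX: a rolled central-difference heat-equation stencil with Dirichlet boundaries, jax.jit for compilation, and jax.grad to differentiate a design objective with respect to the diffusivity. Problem sizes, step size, and the target value are invented.

import jax
import jax.numpy as jnp

N, dx, dt, steps = 64, 1.0 / 64, 1e-4, 200

def step(u, nu):
    # Second-order central stencil with fixed (Dirichlet) boundaries
    lap = (jnp.roll(u, -1) - 2 * u + jnp.roll(u, 1)) / dx**2
    u = u + dt * nu * lap
    return u.at[0].set(0.0).at[-1].set(0.0)

@jax.jit
def loss(nu, u0, target):
    u = jax.lax.fori_loop(0, steps, lambda i, u: step(u, nu), u0)
    return jnp.mean((u - target) ** 2)   # design objective: match target profile

x = jnp.linspace(0, 1, N)
u0 = jnp.exp(-100 * (x - 0.5) ** 2)
target = jax.lax.fori_loop(0, steps, lambda i, u: step(u, 0.7), u0)  # "true" nu = 0.7

nu = 0.2
for _ in range(150):                     # gradient descent on the diffusivity
    nu = nu - 2.0 * jax.grad(loss)(nu, u0, target)   # step size tuned for this toy
print(f"nu after optimization: {nu:.3f} (data generated with nu = 0.7)")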

Bio

Elliot English is a software engineer at Google, where he leads efforts accelerating ML and HPC applications using JAX and TPUs. His core areas of expertise include frontier AI model development, the design of numerical methods for multiphysics, and high-performance software/hardware codesign. He is particularly interested in applying these to computational engineering problems in fusion energy and sustainable technologies. Prior to joining the ML frameworks and compilers team at Google, he was stellarator optimization lead at Renaissance Fusion, a fellow at Syntiant, CTO at Pilot AI, and software engineer at MetaMind. Elliot was also a postdoctoral scholar at Lawrence Berkeley National Laboratory and holds a PhD in computational mathematics from Stanford University.

William Anderson

LLNL

Latent Space Dynamics Identification for Accelerated Vlasov-Poisson Simulations

Abstract

High-fidelity plasma simulations are often prohibitively expensive for design, optimization, or real-time control. This minicourse introduces practical ML/AI methods for building accurate, fast approximations of these simulations using data from existing solvers. We will cover three widely used reduced-order modeling approaches: dynamic mode decomposition (DMD), latent space dynamics identification (LaSDI), and neural operators (NO). DMD is a natural baseline that models the dynamics as linear. LaSDI compresses simulation data into a low-dimensional representation and learns explicit equations for its time evolution. Neural operators learn mappings from inputs (e.g., parameters, sources, boundary conditions) to full solution fields and can generalize across operating regimes. We will discuss when each approach is appropriate, and outline practical workflows for training, testing, and integrating these models into design and control loops. Finally, we will cover uncertainty quantification and how to assess when a surrogate can be trusted. The emphasis throughout is on intuition and concrete best practices so participants can confidently evaluate and apply these methods to plasma modeling problems.
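As a preview of the uncertainty-quantification discussion, the sketch below uses the simplest trust signal available: a bootstrap ensemble of small surrogates whose disagreement grows where the model is extrapolating. The data, network sizes, and the 0.1 spread threshold are invented for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(4 * X).ravel() + 0.05 * rng.standard_normal(200)

# Bootstrap ensemble of small surrogates
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))    # resample the training set
    m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    ensemble.append(m.fit(X[idx], y[idx]))

X_test = np.linspace(-2, 2, 9)[:, None]      # [-2,-1] and [1,2] are extrapolation
preds = np.stack([m.predict(X_test) for m in ensemble])
mean, spread = preds.mean(0), preds.std(0)
for xt, mu, sd in zip(X_test.ravel(), mean, spread):
    flag = "TRUST" if sd < 0.1 else "CAUTION"   # invented spread threshold
    print(f"x={xt:+.2f}  pred={mu:+.2f}  spread={sd:.2f}  {flag}")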

Bio

Dr. William Anderson is a postdoctoral researcher in the Computing directorate at Lawrence Livermore National Laboratory, specializing in data-driven modeling, model order reduction, and scientific machine learning. Dr. Anderson earned his PhD in Applied Mathematics from North Carolina State University and holds MS and BS degrees in Mathematics from Montclair State University. His research focuses on developing computational methods which accelerate simulations of partial differential equations, enabling faster predictions for complex physical systems. His work combines computational mathematics and machine learning to create accurate, efficient surrogate models. At LLNL, Dr. Anderson collaborates with multidisciplinary teams supporting mission-relevant modeling and simulation.

Yadi Cao

UCSD

TGLF-SiNN: Deep Learning Surrogate Model for Accelerating Turbulent Transport in Fusion

Abstract

Artificial intelligence is rapidly transforming fusion research and technology. This presentation focuses on one critical application: machine-learning surrogate models for turbulent transport simulations, which are central to whole-device simulation in tokamaks. We will discuss our recent work, TGLF-SiNN, which improves robustness and addresses the data-scarcity challenge of training this surrogate through careful feature tuning, physics-guided learning, and active learning. Deployed in GA’s simulation pipeline, TGLF-SiNN reduces the training-data requirement by 75% and accelerates whole-device simulation by 45x.
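The active-learning ingredient can be sketched generically (this is not the TGLF-SiNN code): fit a surrogate, query the candidate input where predictive uncertainty is largest, label it with the expensive calculation, and refit. The "flux" function below is an invented stand-in for a transport calculation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def flux(x):
    """Invented stand-in for an expensive transport calculation."""
    return np.tanh(3 * x) + 0.3 * np.sin(8 * x)

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (4, 1))
y = flux(X).ravel()
pool = np.linspace(-1, 1, 300)[:, None]        # candidate inputs

for it in range(12):
    gp = GaussianProcessRegressor(alpha=1e-6).fit(X, y)
    _, sd = gp.predict(pool, return_std=True)
    x_new = pool[[np.argmax(sd)]]              # query the most uncertain point
    X = np.vstack([X, x_new])
    y = np.append(y, flux(x_new).ravel())

gp = GaussianProcessRegressor(alpha=1e-6).fit(X, y)   # final refit
err = np.abs(gp.predict(pool).ravel() - flux(pool).ravel()).max()
print(f"max surrogate error after {len(X)} labeled points: {err:.3f}")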

Bio

Yadi Cao is a postdoctoral researcher at UCSD with Professor Rose Yu. He develops novel AI methods to scale up engineering research by reducing experimental costs and expert labor. Leveraging surrogate modeling and agentic workflows, his work has driven major advances in fusion energy and fluid dynamics, including TGLF-SiNN—which delivers a 45x speedup and is integrated into General Atomics’ tokamak pipeline—and BSMS-GNN for realistic turbomachinery simulations on million-node meshes. Yadi is currently seeking a tenure-track faculty position and collaboration opportunities.

Christopher McDevitt

University of Florida

Physics-Constrained Deep Learning for Kinetic and Fluid Plasma Modeling

Abstract

Coming soon

Bio

Chris McDevitt is an Associate Professor in the Nuclear Engineering Program at the University of Florida, where his research is focused on the theory and simulation of fusion plasmas. Prior to joining UF in 2019, he completed his Ph.D. in physics at the University of California, San Diego, where he focused on the description of turbulence in magnetized plasmas. After a short stint as a visiting scientist at Ecole Polytechnique, he moved to Los Alamos National Laboratory, where he worked as a staff scientist.

Radha Bahukutumbi

LANL

Prospero: an AI co-scientist for Inertial Confinement Fusion

Abstract

Inertial Confinement Fusion (ICF) plays a critical role in national security, the study of exotic states of matter such as those found in planetary interiors, and the pursuit of fusion energy. However, predictive capabilities in ICF remain limited because it is a highly coupled, multiphysics, and energy-constrained system. Progress in the field has largely relied on simulations to guide experimental design, combined with systematic semi-empirical tuning of experiments, including emerging machine learning approaches. This talk introduces Prospero, a large language model (LLM) developed specifically for ICF applications. Prospero is designed to contextually retrieve and synthesize information from scientific literature. When integrated with experimental data and used interactively with subject matter experts through natural language interfaces, the system has the potential to generate and refine hypotheses that explain observations. The development process behind Prospero is described, with emphasis on building a trusted collaborative aide capable of supporting ICF researchers. Key technical and scientific challenges associated with this effort are discussed. Finally, several use cases are described to illustrate how combining literature knowledge, experimental data, and expert interaction could potentially accelerate progress in the field.
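The retrieval step behind such a literature-aware assistant can be sketched with cosine similarity over document embeddings, as below. The hash-based "embed" function is a deliberately crude stand-in (a real system uses a trained embedding model), and the three-sentence corpus is invented; nothing here reflects Prospero's internals.

import numpy as np

def embed(text, dim=64):
    """Toy hashing 'embedding'; a real system would use a trained model."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

corpus = [
    "hohlraum drive symmetry in indirect-drive implosions",
    "alpha heating and ignition threshold scaling",
    "laser plasma instabilities and cross-beam energy transfer",
]
doc_vecs = np.stack([embed(d) for d in corpus])

query = "what limits drive symmetry in indirect drive?"
scores = doc_vecs @ embed(query)               # cosine similarity (unit vectors)
for rank in np.argsort(scores)[::-1]:
    print(f"{scores[rank]:.2f}  {corpus[rank]}")
# The top-ranked passages would then be placed in the LLM's context for synthesis.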

Bio

Radha Bahukutumbi leads the Plasma Theory and Applications Group at Los Alamos National Laboratory. Her research focuses on high-power laser applications, including inertial confinement fusion (ICF), nuclear astrophysics, hydrodynamics, and laser–plasma interactions. Prior to joining LANL, she was a Distinguished Scientist at the Laboratory for Laser Energetics at the University of Rochester, where she wrote the radiation hydrodynamic code DRACO and led ICF simulation efforts and experiments on the OMEGA and National Ignition Facility lasers for over two decades. Radha is a Fellow of the American Physical Society and a recipient of the Fusion Power Associates Leadership Award. She earned her PhD from the California Institute of Technology.

Important Dates

SEPTEMBER 29, 2025: Abstract Submission Opens

NOVEMBER 12, 2025: Registration Opens

JANUARY 31, 2026: NPSS Awards Nominations Deadline

FEBRUARY 18, 2026: Abstract Submission Deadline (extended from February 4, 2026)

APRIL 10, 2026: Abstract Acceptance

APRIL 17, 2026: Student Travel Grant Deadline

MAY 1, 2026: Mini-course Registration Deadline

MAY 18, 2026: Early Registration Ends

MAY 18, 2026: Hotel Room Reservations Deadline

SEPTEMBER 26, 2026: Manuscript Submissions Deadline

Contact Us

For inquiries about exhibit/sponsorship opportunities, paper submissions, or general information, please contact the ICOPS 2026 organizing committee at icops2026@ieee.org.

Be sure to visit us again as we release exciting updates about the conference leading up to June 2026.