Keynote Abstracts

[Keynote I] HPC in Health: Scaling Vascular Digital Twins from Millions of Heartbeats to Petabytes of Data

– Amanda Randles (Duke University)

High performance computing (HPC) has long driven breakthroughs in physics, chemistry, and engineering. Today, the emergence of digital twins in healthcare introduces a new frontier: personalized, physics-informed simulations of the human vascular system. These models demand solving fluid dynamics over complex 3D anatomies across millions of heartbeats, while integrating continuous data from wearable sensors. The result is petabyte-scale datasets and real-time simulation needs that stretch the limits of algorithms, data handling, and scalability. This keynote will highlight how vascular digital twins expose new challenges and opportunities for HPC—reducing communication overhead in parallel time integration, compressing multimodal data streams without losing fidelity, and enabling adaptive, continuous simulation at exascale. Meeting these challenges requires leadership-scale systems co-designed with novel algorithms and workflows. Beyond medicine, these lessons illustrate how HPC can evolve to support time-critical, data-rich applications across domains, underscoring the need for sustained investment and long-term vision in high performance computing.

About Amanda Randles:  Amanda Randles is the Alfred Winborne Mordecai and Victoria Stover Mordecai Associate Professor of Biomedical Sciences and Biomedical Engineering at Duke University, where she also serves as Director of the Duke Center for Computational and Digital Health Innovation. She holds courtesy appointments in Mechanical Engineering and Materials Science, Computer Science, and Mathematics, and is a member of the Duke Cancer Institute. Her research focuses on the development of patient-specific digital twin models that integrate high performance computing, machine learning, and multiscale biophysical simulations to enable proactive diagnosis and treatment of diseases ranging from cardiovascular disease to cancer. She has published 120 peer-reviewed papers, including in Science, Nature Biomedical Engineering, and Nature Digital Medicine, and holds 121 granted U.S. patents with approximately 75 additional applications pending. Her contributions have been recognized with the ACM Prize in Computing, the NIH Pioneer Award, the NSF CAREER Award, the ACM Grace Hopper Award, the Jack Dongarra Early Career Award, and the inaugural Sony and Nature Women in Technology Award. She was named to the HPCwire People to Watch list in 2025, is a Fellow of the National Academy of Inventors, and has been honored as a World Economic Forum Young Scientist and one of MIT Technology Review’s Top 35 Innovators Under 35. Randles received her Ph.D. in Applied Physics from Harvard University as a DOE Computational Science Graduate Fellow and NSF Fellow, an M.S. in Computer Science from Harvard, and a B.A. in Computer Science and Physics from Duke. Prior to graduate school, she worked as a software engineer at IBM on the Blue Gene supercomputing team.

[Keynote II] The Evolutionary Flexibility of LS-DYNA

– Bob Lucas (Ansys)

Lawrence Livermore National Laboratory’s DYNA3D is an example of a large Computer-Aided Engineering application that was rearchitected in response to a disruptive change in the execution model and has since successfully evolved for four and a half decades. Its progeny, which include LS-DYNA, have adapted to vector processors, shared and distributed memory models, SIMD extensions, and now to acceleration with Graphics Processing Units (GPUs). In each case, initial experiments predated the arrival of standards such as the Message Passing Interface (MPI) or OpenMP. But the standards were quickly adopted when they appeared, and as the execution model they embody expanded, so too did LS-DYNA. Today, LS-DYNA comprises over ten million source lines of code, mostly in Fortran, and has many thousands of users worldwide. Rewriting LS-DYNA in another language to facilitate porting to a new device is not feasible; library calls and compiler directives are the most productive and least disruptive way to continue evolving. This talk will discuss how LS-DYNA is adapting in the era of GPUs and speculate about how OpenMP can help in the future.
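
To make the directive-based path concrete, here is a minimal sketch of GPU offload via OpenMP target directives in Fortran; it is a generic SAXPY kernel, not code from LS-DYNA itself:

    program saxpy_offload
      use iso_fortran_env, only: real64
      implicit none
      integer, parameter :: n = 1000000
      real(real64), allocatable :: x(:), y(:)
      real(real64) :: a
      integer :: i

      allocate(x(n), y(n))
      a = 2.0_real64
      x = 1.0_real64
      y = 3.0_real64

      ! One directive expresses the offload; the loop body itself is
      ! unchanged Fortran, so the same source still runs on a CPU.
      !$omp target teams distribute parallel do map(to: x) map(tofrom: y)
      do i = 1, n
        y(i) = a * x(i) + y(i)
      end do
      !$omp end target teams distribute parallel do

      print *, 'y(1) =', y(1)   ! expected: 5.0
    end program saxpy_offload

Compiled without OpenMP enabled, the directives are treated as comments and the loop runs serially on the host, which is exactly the low-disruption property the abstract highlights.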

About Bob Lucas:  Dr. Robert F. Lucas is a Synopsys Fellow responsible for the default multifrontal linear solver used in LS-DYNA and MAPDL. Previously, he was the Operational Director of the USC – Lockheed Martin Quantum Computing Center. Prior to joining USC, he was the Head of the High-Performance Computing Research Department in the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Before NERSC, Dr. Lucas was the Deputy Director of DARPA’s Information Technology Office. From 1988 to 1998 he was a member of the research staff of the Institute for Defense Analyses’ Center for Computing Sciences, and from 1979 to 1984 he was a member of the Technical Staff of the Hughes Aircraft Company. Dr. Lucas received his BS, MS, and PhD degrees in Electrical Engineering from Stanford University in 1980, 1983, and 1988, respectively.

[Keynote III] Fortran is All You Need

– Damian Rouson (Lawrence Berkeley National Laboratory)

An evolving language is forever a new language even when it’s the world’s first widely used programming language. Viewed from the perspective of parallel and accelerator programming, Fortran 2023 is simultaneously a senior citizen, a young adult, a teenager, and a toddler – depending on whether one focuses on the whole language or on the parallel features’ invention, standardization, and implementation in compilers. This talk will provide an overview of the two feature sets that Fortran programmers can use for parallel programming: multi-image execution for Single Program Multiple Data (SPMD) programming with a Partitioned Global Address Space (PGAS), and ‘do concurrent’ for loop-level parallel and accelerator programming. The talk will highlight the international public/private partnerships that are co-developing these features in the LLVM Flang compiler, whose current main branch supports multi-image execution and automatic loop parallelization on central processing units (CPUs) by translation to OpenMP, with work toward automatic offloading to graphics processing units (GPUs) underway. It will also highlight the latest developments in open-source software that Berkeley Lab (co-)develops to both support and use the new features in high-performance computing (HPC) and artificial intelligence (AI).
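
As a minimal illustrative sketch (not material from the talk itself), the following standard Fortran program combines both feature sets: ‘do concurrent’ expresses order-independent iterations that a compiler may parallelize, and the Fortran 2018 collective subroutine co_sum reduces values across the images of an SPMD run:

    program two_feature_demo
      implicit none
      integer, parameter :: n = 1000000
      real, allocatable :: x(:)
      real :: w, total
      integer :: i

      allocate(x(n))
      w = 1.0 / real(num_images())   ! each image holds an equal share

      ! Loop-level parallelism: 'do concurrent' asserts the iterations are
      ! order-independent, so the compiler may run them in parallel on a
      ! CPU or, eventually, offload them to an accelerator.
      do concurrent (i = 1:n)
        x(i) = w
      end do

      ! SPMD/PGAS parallelism: every image executes this entire program;
      ! co_sum combines each image's local sum onto image 1.
      total = sum(x)
      call co_sum(total, result_image=1)
      if (this_image() == 1) print *, 'global sum =', total   ! about n, for any image count
    end program two_feature_demo

With gfortran and OpenCoarrays, for example, such a program is built with the caf wrapper and launched with cafrun; LLVM Flang’s multi-image support relies on a parallel runtime library such as Caffeine, mentioned in the speaker bio below.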

About Damian Rouson: Damian Rouson is a Senior Scientist and the Group Lead for the Computer Languages and Systems Software (CLaSS) Group at Berkeley Lab, where he researches programming patterns and paradigms for computational science, including multiphysics modeling and deep learning. He has prior research experience in simulating turbulent flow in magnetohydrodynamic, multiphase, and quantum media. He collaborates on the development of open-source software for science, including the Caffeine parallel runtime library, the Fiats deep learning library, the Julienne correctness-checking framework, and the LLVM Flang Fortran compiler. He also teaches tutorials on Fortran and the UPC++ parallel programming model and has taught undergraduate courses in thermodynamics, fluid turbulence, numerical methods, and software engineering at the City University of New York, the University of Cyprus, and Stanford University. He was the lead author of the textbook Scientific Software Design: The Object-Oriented Way (Cambridge University Press, 2011) and won Berkeley Lab’s Developer of the Year award in 2025. He holds a B.S. from Howard University, M.S. and Ph.D. degrees from Stanford University, and a Professional Engineer (P.E.) license in California, all in mechanical engineering.