Paris Perdikaris, University of Pennsylvania
The application of neural networks to solving partial differential equations has experienced a tumultuous journey since the early 1990s, culminating in the development of Physics-Informed Neural Networks (PINNs), which have generated both excitement and skepticism within the scientific computing community. In this talk we will discuss why early approaches showed promise but failed to gain widespread adoption, and why PINNs themselves have faced significant criticism despite their practical appeal. A critical gap in the field has been the failure to recognize that PINNs operate under fundamentally different assumptions from conventional supervised learning paradigms -- unlike traditional machine learning tasks with abundant labeled data, PINNs must simultaneously approximate unknown functions without direct supervision while satisfying physical constraints. This imposes distinct algorithmic requirements under which standard deep learning practices, from network initialization and architecture design to optimization strategies, must be reconsidered. We present an overview of recent methodological advances that address these challenges, including advanced optimization algorithms, improved loss balancing techniques, and specialized architectures designed for physical problems. We demonstrate that properly designed PINNs can successfully perform high-fidelity simulation of complex three-dimensional turbulent flows -- a notoriously challenging setting that has long resisted both classical and ML approaches. These results suggest that with proper understanding of their unique characteristics and careful algorithmic design, PINNs can indeed fulfill their promise as a transformative tool for scientific computing, with the tantalizing possibility that approaches free from resolution constraints may one day enable nearly exact solutions to fundamental problems like Navier-Stokes turbulence.
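To make the abstract's central point concrete -- that a PINN is trained not against labeled data but against a weighted combination of a PDE residual and boundary terms -- here is a minimal, illustrative sketch (not from the talk, and with the neural network replaced by a plain candidate function so the example stays dependency-free). It uses a toy 1D problem u'(x) = -u(x), u(0) = 1, whose exact solution is exp(-x); the weights w_pde and w_bc stand in for the loss-balancing coefficients the talk discusses.

```python
import numpy as np

def pinn_loss(u, xs, w_pde=1.0, w_bc=1.0, h=1e-5):
    """Composite physics-informed loss for the toy ODE u' = -u, u(0) = 1.

    u           : callable candidate solution (in a real PINN, a neural net)
    xs          : unlabeled collocation points -- no supervised targets needed
    w_pde, w_bc : loss-balancing weights; tuning their relative scale is one
                  of the known difficulties in PINN training
    h           : step for the central finite difference standing in for
                  automatic differentiation in this dependency-free sketch
    """
    du = (u(xs + h) - u(xs - h)) / (2 * h)  # approximate u'(x)
    residual = du + u(xs)                   # PDE residual: u' + u should be 0
    bc = u(np.array([0.0])) - 1.0           # boundary condition: u(0) = 1
    return w_pde * np.mean(residual**2) + w_bc * np.mean(bc**2)

xs = np.linspace(0.0, 1.0, 64)
good = pinn_loss(lambda x: np.exp(-x), xs)  # exact solution: near-zero loss
bad = pinn_loss(lambda x: np.cos(x), xs)    # wrong candidate: large residual
```

Note that the loss is computed purely from the candidate function and the physics; this is the structural difference from supervised learning that the abstract argues forces a rethink of standard training recipes.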