Instructor: Ulisses Braga-Neto
Course Description: This course introduces the foundations of Scientific Machine Learning (SciML), a rapidly developing area that brings together the fields of Machine Learning and Scientific Computation. After an introduction to the field, we review the basics of machine learning and classical ODE and PDE discretization methods, and then discuss automatic differentiation, PDE-constrained deep neural networks, PDE-constrained Gaussian processes, and Operator Learning, with applications to forward prediction, inverse modeling, and uncertainty quantification.
Acknowledgment: The development of this course was supported by TAMIDS through its Data Science Course Development Program.
Class Contents: (under construction)
Lecture 1: Introduction to Scientific Machine Learning
- An overview of scientific machine learning and of scientific laws expressed through equations, with several examples.
- Lecture Slides (animations play when opened in Keynote)
- Lecture Slides (static PDF)
- Additional resources:
- Rackauckas: The Use and Practice of Scientific Machine Learning
- Braga-Neto: Fundamentals of Pattern Recognition and Machine Learning, 2nd Edition (Chapter 12)
- DOE Report: Basic Research Needs for Scientific Machine Learning
- Karniadakis et al.: Physics-Informed Machine Learning
- Evans: Partial Differential Equations
Lecture 2: Review of Machine Learning
- An overview of machine learning focusing on regression, including least-squares parametric regression, neural network regression, and nonparametric regression using Gaussian processes. (A minimal least-squares example follows the references below.)
- Lecture Slides (PDF)
- References:
- Bishop: Deep Learning
- Rasmussen and Williams: Gaussian Processes for Machine Learning
- Géron: Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition
- Braga-Neto: Fundamentals of Pattern Recognition and Machine Learning, 2nd Edition (Chapter 11)
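To make the regression material concrete, here is a minimal sketch, not course code, of least-squares parametric regression with a polynomial feature map; the test function, polynomial degree, and noise level are illustrative choices.

```python
# Minimal sketch (not course code): least-squares parametric regression.
# Fit weights w to minimize ||Phi w - y||^2 with polynomial features Phi.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)  # noisy targets

degree = 5
Phi = np.vander(x, degree + 1)               # polynomial design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # closed-form least squares
y_hat = Phi @ w                              # fitted values
print("training MSE:", np.mean((y_hat - y) ** 2))
```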
Lecture 3: Introduction to ODE and PDE Discretization Methods
- An introduction to classical discretization methods for ODEs and PDEs: Runge-Kutta methods for ODEs, finite-difference methods for steady-state PDEs, and semidiscretization methods for time-evolution PDEs, such as the method of lines and Galerkin semidiscretization. The lecture also covers basic elements of the theory of order, convergence, and stability for ODEs and time-evolution PDEs. (An RK4 sketch follows the slides link below.)
- Lecture Slides (PDF)
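As a concrete taste of the ODE methods, here is a minimal sketch, not course code, of the classical fourth-order Runge-Kutta (RK4) scheme on the test problem y' = -2y, y(0) = 1; the step size and test problem are illustrative.

```python
# Minimal sketch (not course code): one-step RK4 integration of y' = f(t, y).
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step of size h from (t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -2.0 * y          # test problem y' = -2y, y(0) = 1
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):               # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(y, np.exp(-2.0))             # numerical vs exact solution at t = 1
```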
Lecture 4: Automatic Differentiation
- Introduction to forward-mode and reverse-mode automatic differentiation, with application to the differentiation of neural networks. (A JAX example follows the references below.)
- Lecture Slides (PDF)
- References:
- Rall: “Perspectives on Automatic Differentiation: Past, Present, and Future?”. Chapter 1 of Automatic Differentiation: Applications, Theory, and Implementations.
- Baydin et al.: Automatic Differentiation in Machine Learning: a Survey.
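Since the course notebooks use JAX, here is a minimal sketch contrasting reverse-mode differentiation (jax.grad) with a forward-mode Jacobian-vector product (jax.jvp); the function f and tangent direction v are illustrative, not from the lecture.

```python
# Minimal sketch (not course code): forward- vs reverse-mode autodiff in JAX.
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(jnp.sin(x) ** 2)   # scalar-valued f: R^n -> R

x = jnp.arange(1.0, 4.0)              # [1., 2., 3.]

# Reverse mode: one backward pass yields the full gradient of a scalar output.
g = jax.grad(f)(x)

# Forward mode: one JVP gives the directional derivative along a tangent v.
v = jnp.ones_like(x)
_, dfdv = jax.jvp(f, (x,), (v,))

print(g)                              # gradient vector
print(dfdv, jnp.dot(g, v))            # these two numbers agree
```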
Lecture 5: PDE-Constrained Neural Networks
- A survey of methods for constraining neural networks via partial differential equations (including ODEs and algebraic equations as special cases). Topics: soft constraints via physics-informed neural networks (PINNs); JAX implementation of PINNs; multi-objective loss functions and self-adaptive PINNs; discrete PINNs with Euler and Runge-Kutta semidiscretization; hard constraints. (A toy PINN loss sketch follows the references below.)
- Lecture Slides (PDF)
- SciML_Burgers.ipynb (JAX implementation of PINN)
- References:
- Raissi, Perdikaris, and Karniadakis: Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations.
- McClenny and Braga-Neto: Self-Adaptive Physics-Informed Neural Networks.
- Braga-Neto: Characteristics-Informed Neural Networks for Forward and Inverse Hyperbolic Problems.
- Braga-Neto: Fundamentals of Pattern Recognition and Machine Learning, 2nd Edition (Chapter 12)
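Here is a minimal sketch, not the course notebook, of the soft-constraint (PINN) loss for the toy ODE u'(t) = -u(t), u(0) = 1: the residual is evaluated at collocation points via automatic differentiation and added to an initial-condition penalty. The MLP architecture and initialization below are illustrative.

```python
# Minimal sketch (not course code): PINN soft-constraint loss for u' = -u.
import jax
import jax.numpy as jnp

def mlp(params, t):
    """Scalar surrogate u_theta(t) with one hidden tanh layer."""
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(W1 * t + b1)          # hidden layer (width = b1.size)
    return jnp.dot(W2, h) + b2         # scalar output

def pinn_loss(params, ts):
    u = lambda t: mlp(params, t)
    du = jax.vmap(jax.grad(u))(ts)     # u'(t) via autodiff at collocation pts
    residual = du + jax.vmap(u)(ts)    # ODE residual u' + u
    bc = (u(0.0) - 1.0) ** 2           # initial-condition penalty
    return jnp.mean(residual ** 2) + bc  # soft constraint: residual + IC

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
width = 16
params = [(jax.random.normal(k1, (width,)), jnp.zeros(width)),
          (jax.random.normal(k2, (width,)), jnp.zeros(()))]

ts = jnp.linspace(0.0, 1.0, 32)        # collocation points
grads = jax.grad(pinn_loss)(params, ts)  # gradient for an optimizer step
print(pinn_loss(params, ts))
```

In a full implementation these gradients would drive an optimizer such as Adam; self-adaptive PINNs additionally learn pointwise weights on the residual and boundary terms.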
Lecture 6: PDE-Constrained Gaussian Process and Kernel Methods
- Nonparametric Bayesian inference with stochastic processes. Detailed derivation of the analytical predictive density under a Gaussian prior and a linear-Gaussian observation model. Gaussian processes with linear constraints. Kernel differentiation. PDE-constrained Gaussian process approximation of linear PDE solutions, with a detailed implementation in Python. Approximation for nonlinear PDEs. Automatic boundary-condition satisfaction. Kernel methods as an infinite-dimensional version of kernel ridge regression. Introduction to Reproducing Kernel Hilbert Spaces (RKHS). The Representer Theorem. PDE-constrained kernel approximation of linear and nonlinear PDE solutions by optimal recovery in an RKHS. (A kernel-differentiation sketch follows the references below.)
- Lecture Slides (PDF)
- c12_advectionGP.ipynb (numpy and scipy implementation of PDE-constrained GP, from “Fundamentals of Pattern Recognition and Machine Learning”, 2nd Edition, Chapter 12)
- References:
- Rasmussen and Williams, Gaussian Processes for Machine Learning
- Raissi, Perdikaris, Karniadakis, Machine Learning of Linear Differential Equations using Gaussian Processes.
- Jazwinski, Stochastic processes and filtering theory.
- Solak, Murray-Smith, Leithead, Leith and Rasmussen, Derivative observations in Gaussian process models of dynamic systems.
- Swiler, Gulian, Frankel, Safta, and Jakeman. A survey of constrained Gaussian process regression: Approaches and implementation challenges.
- Long, Wang, Krishnapriyan, Kirby, Zhe and Mahoney, AutoIP: A Unified Framework to Integrate Physics into Gaussian Processes.
- Solin and Kok, Know Your Boundaries: Constraining Gaussian Processes by Variational Harmonic Features.
- Chen, Hosseini, Owhadi, Stuart, Solving and Learning Nonlinear PDEs with Gaussian Processes.
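Here is a minimal sketch, not the course notebook, of kernel differentiation: if u ~ GP(0, k), a linear operator such as d/dx applied to u yields covariances obtained by differentiating the kernel itself, e.g. cov(u'(x), u(x')) = ∂k(x, x')/∂x, which autodiff provides directly. The RBF kernel and length scale are illustrative.

```python
# Minimal sketch (not course code): differentiated kernels via autodiff.
import jax
import jax.numpy as jnp

ell = 0.5                                          # RBF length scale (assumed)

def k(x, xp):
    return jnp.exp(-0.5 * (x - xp) ** 2 / ell**2)  # RBF kernel

dk_dx = jax.grad(k, argnums=0)                     # cov(u'(x), u(x'))
d2k = jax.grad(jax.grad(k, argnums=0), argnums=1)  # cov(u'(x), u'(x'))

x, xp = 0.3, 0.7
print(k(x, xp), dk_dx(x, xp), d2k(x, xp))

# Analytical check for the RBF kernel: dk/dx = -(x - x') / ell^2 * k(x, x').
print(-(x - xp) / ell**2 * k(x, xp))
```

These differentiated kernels are exactly the blocks that enter the covariance matrix of a PDE-constrained GP, with the autodiff replacing hand-derived kernel derivatives.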
Lecture 7: Inverse Problems
- Introduction to the theory of inverse problems involving differential equations. Integral equations. Criteria for well-posedness. Tikhonov regularization. Solving inverse problems using PINNs and physics-informed Gaussian processes. (A Tikhonov regularization sketch follows the references below.)
- Lecture Slides (PDF)
- SciML_Burgers_Inverse.ipynb (JAX implementation of inverse PINN)
- c12_advectionGP_inverse.ipynb (numpy and scipy implementation of inverse PDE-constrained GP, from “Fundamentals of Pattern Recognition and Machine Learning”, 2nd Edition, Chapter 12)
- References:
- Isakov, Inverse Problems for Partial Differential Equations
- Hasanov Hasanoğlu and Romanov, Introduction to Inverse Problems for Differential Equations
- Raissi, Yazdani, Karniadakis, Hidden Fluid Mechanics: Learning velocity and pressure fields from flow visualizations
- Raissi, Perdikaris, and Karniadakis: Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations.
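Here is a minimal sketch, not course code, of Tikhonov regularization for a discretized linear inverse problem; the smoothing operator A, noise level, and regularization weight are synthetic, illustrative choices.

```python
# Minimal sketch (not course code): Tikhonov regularization,
# min_x ||A x - b||^2 + lam ||x||^2, via the regularized normal equations
# (A^T A + lam I) x = A^T b.
import numpy as np

rng = np.random.default_rng(0)
n = 50
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01)   # smoothing operator
x_true = np.sin(2 * np.pi * s)
b = A @ x_true + 1e-3 * rng.standard_normal(n)         # noisy observations

print("cond(A) =", np.linalg.cond(A))  # enormous: naive inversion is unstable

lam = 1e-4                             # regularization weight (illustrative)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In practice the weight lam is selected by criteria such as the L-curve, the discrepancy principle, or cross-validation.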
Lecture 8: Uncertainty Quantification
- Uncertainty in practical systems. Aleatoric and epistemic uncertainty. Physics-informed surrogate models. Bayesian models. Uncertainty quantification by Bayesian PINNs, ensembles of PINNs, and PDE-constrained Gaussian processes. (An ensemble-UQ sketch follows the references below.)
- Lecture Slides (PDF)
- References:
- Psaros, Meng, Zou, Guo, Karniadakis, Uncertainty Quantification in Scientific Machine Learning: Methods, Metrics, and Comparisons
- Sudret, Marelli, Wiart, Surrogate models for uncertainty quantification: An overview
- Neal, Bayesian Learning for Neural Networks
- Yang, Meng, Karniadakis, B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data
- Davi, Braga-Neto, PSO-PINN: Physics-Informed Neural Networks Trained with Particle Swarm Optimization
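Here is a minimal sketch of ensemble-based uncertainty quantification: the mean over an ensemble of independently trained surrogates (e.g., PINNs with different random initializations) serves as the prediction, and the spread across members estimates epistemic uncertainty. The member predictions below are synthetic stand-ins for trained models, not course code.

```python
# Minimal sketch (not course code): ensemble mean and spread as a
# prediction-plus-uncertainty estimate.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)

M = 10                                   # number of ensemble members
preds = np.stack([np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
                  for _ in range(M)])    # shape (M, len(x)); synthetic stand-ins

mean = preds.mean(axis=0)                # ensemble prediction
std = preds.std(axis=0)                  # epistemic-uncertainty estimate
lower, upper = mean - 2 * std, mean + 2 * std  # ~95% band under Gaussianity
print(mean[:3], std[:3])
```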