Instructor: Ulisses Braga-Neto
Course Description: This graduate course provides an introduction to the algorithmic and computational foundations of Scientific Machine Learning (SciML). It opens with an introduction to scientific machine learning and a brief review of traditional machine learning, and then covers the basics of scientific computation, including ODE and PDE discretization methods, as well as the core SciML algorithms: automatic differentiation, physics-informed neural networks, and physics-informed Gaussian processes. Applications to forward prediction, inverse modeling, and uncertainty quantification are presented. Additional material is discussed through student presentations of selected publications in the area. The course is integrated with, and benefits from, the educational activities of the TAMIDS SciML Lab.
Acknowledgment: The development of this course has been supported by TAMIDS through its Data Science Course Development Program.
Class Contents: (under construction)
Lecture 1: Introduction to Scientific Machine Learning
- An overview of scientific machine learning and scientific laws expressed through equations, including several examples of applications.
- Lecture Slides (this will play animations if opened in Keynote)
- Lecture Slides (static PDF)
- Additional resources:
- Rackauckas: The Use and Practice of Scientific Machine Learning.
- DOE Report: Basic Research Needs for Scientific Machine Learning.
- Karniadakis et al.: Physics-Informed Machine Learning.
Lecture 2: Review of Machine Learning
- A review of the basic concepts of machine learning, followed by an overview of least-squares parametric regression using linear and neural network models, and of nonparametric regression using Gaussian processes. A minimal code sketch contrasting the two settings follows the references below.
- Lecture Slides (PDF)
- References:
- Rasmussen and Williams: Gaussian Processes for Machine Learning.
- Braga-Neto: Fundamentals of Pattern Recognition and Machine Learning.
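As a companion to this review, here is a minimal NumPy sketch contrasting the two settings: parametric least-squares regression with a linear-in-parameters model, and nonparametric Gaussian process regression. The data, features, and hyperparameters are made up for illustration and are not course code.

    import numpy as np

    # Synthetic 1-D data (illustrative only)
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)

    # Parametric: least-squares fit of a cubic polynomial (linear in its parameters)
    Phi = np.vander(x, 4)                        # design matrix of monomial features
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # minimizes ||Phi w - y||^2

    # Nonparametric: Gaussian process regression with an RBF kernel
    def rbf(a, b, ell=0.2):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

    noise = 0.01                                 # assumed observation noise variance
    K = rbf(x, x) + noise * np.eye(len(x))
    xs = np.linspace(0.0, 1.0, 100)
    Ks = rbf(xs, x)
    gp_mean = Ks @ np.linalg.solve(K, y)         # posterior predictive mean
    gp_cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)  # predictive covariance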
Lecture 3: Introduction to ODE and PDE Discretization Methods
- An introduction to the classical discretization methods for ODEs and PDEs, including explicit and implicit Euler and Runge-Kutta methods, and finite-difference methods for steady-state and time-evolution PDEs. The lecture includes basic elements of the theory of order, convergence, and stability; a minimal time-stepping sketch appears below.
- Lecture Slides (PDF)
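To make the order-of-accuracy discussion concrete, here is a minimal sketch comparing explicit Euler and the classical fourth-order Runge-Kutta method on the model problem y' = -λy (the parameters are made up for illustration):

    import numpy as np

    # Model problem y' = -lam * y, y(0) = 1, exact solution exp(-lam * t)
    lam, T, N = 5.0, 1.0, 100
    h = T / N
    f = lambda t, y: -lam * y

    def euler_step(f, t, y, h):   # explicit Euler: first-order accurate
        return y + h * f(t, y)

    def rk4_step(f, t, y, h):     # classical Runge-Kutta: fourth-order accurate
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    y_euler = y_rk4 = 1.0
    for n in range(N):
        y_euler = euler_step(f, n * h, y_euler, h)
        y_rk4 = rk4_step(f, n * h, y_rk4, h)

    exact = np.exp(-lam * T)
    # halving h cuts the Euler error roughly 2x, but the RK4 error roughly 16x
    print(abs(y_euler - exact), abs(y_rk4 - exact))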
Lecture 4: Automatic Differentiation
- Introduction to forward-mode and reverse-mode automatic differentiation, with application to the differentiation of neural networks. A toy forward-mode sketch follows the references below.
- Lecture Slides (PDF)
- References:
- Rall: “Perspectives on Automatic Differentiation: Past, Present, and Future?”. Chapter 1 of Automatic Differentiation: Applications, Theory, and Implementations.
- Baydin et al.: Automatic Differentiation in Machine Learning: a Survey.
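Forward-mode AD is simple enough to sketch with dual numbers, which carry a (value, derivative) pair through every operation. The toy below illustrates the idea; reverse mode (backpropagation) is what neural network training actually uses.

    import math

    # A dual number carries (value, derivative); arithmetic applies the chain rule
    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)  # product rule
        __rmul__ = __mul__

    def sin(u):  # an elementary function together with its derivative rule
        return Dual(math.sin(u.val), math.cos(u.val) * u.dot)

    # Differentiate f(x) = x * sin(x) + x at x = 1 by seeding dot = 1
    x = Dual(1.0, 1.0)
    y = x * sin(x) + x
    print(y.val, y.dot)  # y.dot == sin(1) + cos(1) + 1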
Lecture 5: PDE-Constrained Neural Networks
- A survey of methods for constraining neural networks via partial differential equations (including ODEs and algebraic equations as special cases). Topics: soft constraints via physics-informed neural networks (PINNs); deep neural network training: initialization, gradient descent with adaptive learning rate and momentum, learning rate schedules, second-order methods; TensorFlow 2.x implementation of a PINN; multi-objective loss functions and self-adaptive PINNs; discrete PINNs with Euler and Runge-Kutta semi-discretization; hard constraints; characteristics-informed neural networks (CINNs). A minimal sketch of the PINN loss follows the references below.
- Lecture Slides (PDF)
- SciML_Burgers.ipynb (Jupyter notebook on Google Colab)
- References:
- Raissi, Perdikaris, and Karniadakis: Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations.
- McClenny and Braga-Neto: Self-Adaptive Physics-Informed Neural Networks.
- Braga-Neto: Characteristics-Informed Neural Networks for Forward and Inverse Hyperbolic Problems.
- Géron: Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition.
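The soft-constraint (PINN) formulation is compact enough to sketch in TensorFlow 2.x. Below is a minimal, illustrative version of the residual and loss for viscous Burgers, u_t + u u_x = ν u_xx; the architecture, sampling, and training loop are made up for this sketch, and the Colab notebook above has the full treatment.

    import tensorflow as tf

    nu = 0.01 / 3.14159  # illustrative viscosity

    # Fully connected network u_theta(t, x); size chosen arbitrarily for the sketch
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(20, activation="tanh") for _ in range(4)]
        + [tf.keras.layers.Dense(1)]
    )

    def pde_residual(t, x):
        # Nested tapes give the second derivative u_xx via automatic differentiation
        with tf.GradientTape() as outer:
            outer.watch(x)
            with tf.GradientTape(persistent=True) as inner:
                inner.watch([t, x])
                u = model(tf.concat([t, x], axis=1))
            u_t = inner.gradient(u, t)
            u_x = inner.gradient(u, x)
        u_xx = outer.gradient(u_x, x)
        return u_t + u * u_x - nu * u_xx

    def loss(t_r, x_r, t_d, x_d, u_d):
        # data misfit + PDE residual penalty: the "soft" physics constraint
        u_pred = model(tf.concat([t_d, x_d], axis=1))
        r = pde_residual(t_r, x_r)
        return tf.reduce_mean(tf.square(u_pred - u_d)) + tf.reduce_mean(tf.square(r))

    # One evaluation on made-up collocation and initial-condition points
    t_r = tf.random.uniform((100, 1)); x_r = tf.random.uniform((100, 1), -1.0, 1.0)
    t_d = tf.zeros((50, 1));           x_d = tf.random.uniform((50, 1), -1.0, 1.0)
    u_d = -tf.sin(3.14159 * x_d)       # u(0, x) = -sin(pi x)
    print(loss(t_r, x_r, t_d, x_d, u_d))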
Lecture 6: PDE-Constrained Gaussian Process and Kernel Methods
- Nonparametric Bayesian inference with stochastic processes. Detailed derivation of the analytical predictive density with a Gaussian prior and a linear-Gaussian observation model. Gaussian processes with linear constraints. Kernel differentiation. PDE-constrained Gaussian process approximation of linear and nonlinear PDE solutions. Automatic boundary condition satisfaction. Kernel methods as an infinite-dimensional version of kernel ridge regression. Introduction to Reproducing Kernel Hilbert Spaces (RKHS). The Representer Theorem. PDE-constrained kernel approximation of linear and nonlinear PDE solutions by optimal recovery in an RKHS. A minimal kernel-differentiation sketch follows the references below.
- Lecture Slides (PDF)
- References:
- Rasmussen and Williams: Gaussian Processes for Machine Learning.
- Raissi, Perdikaris, and Karniadakis: Machine Learning of Linear Differential Equations Using Gaussian Processes.
- Jazwinski: Stochastic Processes and Filtering Theory.
- Solak, Murray-Smith, Leithead, Leith, and Rasmussen: Derivative Observations in Gaussian Process Models of Dynamic Systems.
- Swiler, Gulian, Frankel, Safta, and Jakeman: A Survey of Constrained Gaussian Process Regression: Approaches and Implementation Challenges.
- Long, Wang, Krishnapriyan, Kirby, Zhe, and Mahoney: AutoIP: A Unified Framework to Integrate Physics into Gaussian Processes.
- Solin and Kok: Know Your Boundaries: Constraining Gaussian Processes by Variational Harmonic Features.
- Chen, Hosseini, Owhadi, and Stuart: Solving and Learning Nonlinear PDEs with Gaussian Processes.
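Kernel differentiation, covered in this lecture, amounts to differentiating the covariance function in closed form; the resulting cross-covariances let a GP be conditioned on observations of a linear operator such as a derivative. A minimal 1-D sketch for the RBF kernel (lengthscale, data, and jitter are illustrative):

    import numpy as np

    ell = 0.5  # illustrative lengthscale

    def k(a, b):    # Cov[u(a), u(b)] for the RBF kernel
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * d**2 / ell**2)

    def k_d(a, b):  # Cov[u(a), u'(b)] = d/db k(a, b)
        d = a[:, None] - b[None, :]
        return (d / ell**2) * k(a, b)

    def k_dd(a, b): # Cov[u'(a), u'(b)] = d^2/(da db) k(a, b)
        d = a[:, None] - b[None, :]
        return (1.0 / ell**2 - d**2 / ell**4) * k(a, b)

    # Condition a zero-mean GP on derivative data g = u'(x_d) and predict u itself
    x_d = np.linspace(0.0, 1.0, 10)
    g = 2 * np.pi * np.cos(2 * np.pi * x_d)        # derivative of u = sin(2 pi x)
    G = k_dd(x_d, x_d) + 1e-8 * np.eye(len(x_d))   # jitter for numerical stability
    xs = np.linspace(0.0, 1.0, 50)
    u_mean = k_d(xs, x_d) @ np.linalg.solve(G, g)  # posterior mean of u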
Lecture 7: Inverse Problems
- Introduction to the theory of inverse problems involving differential equations. Integral equations. Criteria for well-posedness. Tikhonov regularization. Solving inverse problems using PINNs and physics-informed Gaussian processes. A minimal Tikhonov-regularization sketch follows the references below.
- Lecture Slides (PDF)
- SciML_Burgers_Inverse.ipynb (Jupyter notebook on Google Colab)
- References:
- Isakov: Inverse Problems for Partial Differential Equations.
- Hasanoğlu and Romanov: Introduction to Inverse Problems for Differential Equations.
- Raissi, Yazdani, and Karniadakis: Hidden Fluid Mechanics: Learning Velocity and Pressure Fields from Flow Visualizations.
- Raissi, Perdikaris, and Karniadakis: Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations.
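Tikhonov regularization can be demonstrated on a discretized first-kind integral equation, the prototypical ill-posed problem from the lecture. A minimal NumPy sketch, with a made-up smoothing kernel, noise level, and regularization weight:

    import numpy as np

    # Discretize (A u)(s) = int K(s, t) u(t) dt with a smoothing Gaussian kernel
    n = 100
    s = np.linspace(0.0, 1.0, n)
    A = (1.0 / n) * np.exp(-10.0 * (s[:, None] - s[None, :]) ** 2)

    x_true = np.sin(2 * np.pi * s)
    rng = np.random.default_rng(1)
    b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy indirect data

    # Tikhonov: minimize ||A x - b||^2 + alpha * ||x||^2 (alpha is problem-dependent)
    alpha = 1e-4
    x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    # Without the alpha * I term the normal equations are numerically singular and
    # the solution is dominated by amplified noise -- the mark of ill-posedness
    print(np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true))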
Lecture 8: Uncertainty Quantification
- Uncertainty in practical systems. Aleatoric and epistemic uncertainty. Physics-informed surrogate models. Bayesian models. Uncertainty quantification by Bayesian PINNs, ensembles of PINNs, and PDE-constrained Gaussian processes. A minimal ensemble-statistics sketch follows the references below.
- Lecture Slides (PDF)
- References:
- Psaros, Meng, Zou, Guo, and Karniadakis: Uncertainty Quantification in Scientific Machine Learning: Methods, Metrics, and Comparisons.
- Sudret, Marelli, and Wiart: Surrogate Models for Uncertainty Quantification: An Overview.
- Neal: Bayesian Learning for Neural Networks.
- Yang, Meng, and Karniadakis: B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data.
- Davi and Braga-Neto: PSO-PINN: Physics-Informed Neural Networks Trained with Particle Swarm Optimization.
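Ensemble-based uncertainty quantification, as in ensembles of PINNs, comes down to simple statistics over independently trained models. In the sketch below the ensemble members are cheap stand-ins; in practice each would be a separately initialized and trained PINN.

    import numpy as np

    def ensemble_predict(members, x):
        # Spread across independently trained members estimates epistemic uncertainty
        preds = np.stack([m(x) for m in members])  # shape (M, n_points)
        return preds.mean(axis=0), preds.std(axis=0)

    # Stand-in "members" that differ by a small perturbation (illustration only)
    members = [lambda x, a=a: np.sin(x) + 0.05 * a * np.cos(3 * x) for a in range(5)]
    x = np.linspace(0.0, np.pi, 50)
    mean, std = ensemble_predict(members, x)
    # mean is the ensemble prediction; +/- 2*std gives a crude uncertainty band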