Math Seminars

Upcoming Seminars:

Contact: Jeremy Kozdon (jekozdon@nps.edu) for Zoom information

 


Wednesday, March 16, 2022 in SP-231 at 1430

Recent advances in high order entropy stable schemes

Prof. Jesse Chan, Assistant Professor, Computational and Applied Mathematics, Rice University

Abstract:

High order methods are known to be unstable when applied to nonlinear conservation laws whose solutions exhibit shocks and turbulence. These methods have traditionally required additional filtering, limiting, or artificial viscosity to avoid solution blow up. Entropy stable schemes address this instability by ensuring that physically relevant solutions satisfy a semi-discrete entropy inequality independently of numerical resolution. In this talk, we will review approaches for constructing entropy stable schemes and discuss recent developments, including positivity preserving strategies and the application of entropy stable discontinuous Galerkin methods to under-resolved compressible flows.
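To make the notion of an entropy stable scheme concrete, here is a standard textbook illustration (not taken from the talk) in the simplest setting, the inviscid Burgers equation with a quadratic entropy:

```latex
\partial_t u + \partial_x\!\left(\tfrac{u^2}{2}\right) = 0, \qquad
\eta(u) = \tfrac{u^2}{2}, \quad q(u) = \tfrac{u^3}{3}, \quad
\psi(u) = u\,f(u) - q(u) = \tfrac{u^3}{6}.
% Tadmor's condition for an entropy-conservative two-point numerical flux:
(u_R - u_L)\, f^{*}(u_L, u_R) = \psi(u_R) - \psi(u_L)
\;\;\Longrightarrow\;\;
f^{*}(u_L, u_R) = \frac{u_L^2 + u_L u_R + u_R^2}{6}.
```

Building a high order scheme from such two-point fluxes, and adding dissipation that only decreases the discrete entropy, yields the semi-discrete entropy inequality mentioned in the abstract.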


 

Thursday, February 10, 2022 in SP-256 and via Zoom at 1500

The One Learning Algorithm Hypothesis: Towards Universal Machine Learning Models and Architectures

John S. Baras, Institute for Systems Research, University of Maryland, USA

Abstract:

We revisit the “One Learning Algorithm Hypothesis” of Andrew Ng (Google Brain), according to which the brain of higher-level animals and of humans processes and perceives sensory data (vision, sound, haptics) with the same abstract algorithmic architecture. We develop models, based on our earlier work on automatic target recognition with radar and other sensors, face recognition, and image classification, which employ a multi-resolution preprocessor, followed by a group-invariance based feature extractor, followed by a machine learning (ML) module that employs the two fundamental algorithms of Kohonen Learning Vector Quantization (LVQ), for supervised learning, and the Self-Organizing Map (SOM), for unsupervised learning. In addition, the model and algorithms utilize a “global” feedback from the output of the overall system back to the feature extractor and the multi-resolution preprocessor. We first briefly summarize our older results with such algorithms and their remarkable, domain-agnostic performance on various applications.
We then provide our recent results on the mathematical analysis of the resulting Tree Structure Learning Vector Quantization (TSLVQ) ML architecture and algorithms. We introduce and integrate Deterministic Annealing (DA) with our older architecture and demonstrate the resulting tremendous reduction in the data required for learning and application. The new algorithms even allow on-line progressive learning. We utilize Bregman divergences as dissimilarity measures, which allows a direct transition from “dissimilarity distance” to probability of error, something that cannot be achieved with the commonly used metric-based dissimilarity measures. We show that many deep learning network architectures can be mapped to this “universal” architecture. We show that the integrated algorithm converges to the true Bayes decision surface, albeit with variable resolution at various parts of it, as required. The latter brings a tight connection to integrated hypothesis testing with compressed data. We demonstrate the results in various applications and close with future directions and extensions.
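The supervised building block named in the abstract, Kohonen's LVQ, is simple enough to sketch. The following is a minimal LVQ1 illustration in NumPy (the tree structure, Deterministic Annealing, and Bregman divergences of the speaker's TSLVQ are beyond this sketch; all names here are illustrative, not from the talk):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Kohonen LVQ1: for each sample, find the nearest prototype and pull it
    toward the sample if their labels match, push it away otherwise."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            sign = 1.0 if proto_labels[i] == label else -1.0
            W[i] += sign * lr * (x - W[i])
    return W

# Two well-separated classes; each prototype drifts toward its own class.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
W = lvq1_train(X, y, np.array([[0.3, 0.3], [0.7, 0.7]]), np.array([0, 1]))
```

Classification then amounts to assigning a new point the label of its nearest prototype, which is what makes the learned prototypes a piecewise-linear approximation of the decision surface.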


 

Thursday, June 03, 2021 (via Zoom at 1500)

Quantifying parameter uncertainty for convection within a climate model

Dr. Oliver R. A. Dunbar, postdoctoral scholar, Environmental Sciences and Engineering, California Institute of Technology.

Abstract:

Current state-of-the-art climate models produce uncertain predictions, as evidenced by the variability among competing models, but they are typically ill-equipped to quantify this uncertainty. The models necessarily contain simplified physical schemes used to represent small-scale dynamics or poorly understood physics. These schemes depend on parameters that are calibrated (often by hand) to fit data, though a range of parameters may feasibly reproduce a given piece of data. In climate models, the uncertainty of the parameters used in convection schemes is the dominant source of uncertainty in the resulting decadal predictions; it is therefore essential to quantify it to obtain meaningful predictions. Unfortunately, this task is far more computationally intensive than parameter calibration, and has historically been out of reach for climate models. However, we formulate a suitable Bayesian inverse problem for time-averaged statistical data, and make uncertainty quantification possible by applying the new Calibrate-Emulate-Sample (CES) methodology. CES is based on three steps: a first Calibration step, which treats the climate model as a black box and is well adapted to high-performance computing architectures; a second Emulation step, which automates, smooths, and speeds up evaluation of the black-box climate model by several orders of magnitude, by making use of Gaussian processes (a machine learning tool); and a final Sampling step, which applies standard methods from computational statistics to quantify the uncertainty in the calibration.
In this talk, we consider an idealized aquaplanet general circulation model (GCM). We use CES to perform uncertainty quantification on the closure parameters for convection.
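The three CES steps can be sketched on a toy problem. The "model" below is a cheap stand-in for the GCM, the calibration is a naive design-point search rather than the ensemble methods used in practice, and all names and numbers are illustrative assumptions, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box "climate model": maps a parameter to a time-averaged statistic.
def model(theta):
    return np.sin(theta) + 0.5 * theta

theta_true = 1.2
y_obs = model(theta_true) + 0.05 * rng.normal()   # noisy observation
noise_var = 0.05 ** 2

# --- Calibrate: evaluate the expensive black box at a few design points.
design = np.linspace(0.0, 3.0, 8)
evals = np.array([model(t) for t in design])
theta_cal = design[np.argmin((evals - y_obs) ** 2)]

# --- Emulate: Gaussian-process regression through the design points
# replaces any further calls to the expensive model.
def rbf(a, b, ell=0.7):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = rbf(design, design) + 1e-8 * np.eye(len(design))
alpha = np.linalg.solve(K, evals)
def emulator(theta):
    return rbf(np.atleast_1d(theta), design) @ alpha

# --- Sample: random-walk Metropolis on the emulator's cheap posterior.
def log_post(theta):
    r = y_obs - emulator(theta)[0]
    return -0.5 * r ** 2 / noise_var - 0.5 * theta ** 2 / 4.0  # N(0, 2^2) prior

samples, theta, lp = [], theta_cal, log_post(theta_cal)
for _ in range(2000):
    prop = theta + 0.3 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
posterior = np.array(samples[500:])  # discard burn-in
```

The point of the emulation step is visible in the sampler: the Metropolis loop calls only the Gaussian-process surrogate, never the expensive model, so thousands of posterior samples cost roughly as much as the eight design-point evaluations.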

Biography:

My current interests are in mathematical and statistical modeling for physical systems, and in the corresponding inverse and data assimilation problems to learn from data.
I have experience with mathematical methods such as optimization and variational methods for partial differential equations, modeling free boundary and shape optimization problems, regularization for deterministic inverse problems, and fluid and solid mechanics. I also have experience with statistical methods such as Bayesian inverse problems, uncertainty quantification, data assimilation, and Bayesian experimental design. Most recently I have been working on machine learning and model emulation, graph-based learning, and partial differential equations on graphs.

 


Thursday, June 10, 2021 (via Zoom at 1500)

Why Does Deep Learning Work for High Dimensional Problems?

Prof. Wei Kang, Department of Applied Mathematics, Naval Postgraduate School

Abstract:

Deep learning has had many impressive empirical successes in science and industry. On the other hand, the lack of theoretical understanding of the field has been a large barrier to the adoption of the technology. In this talk, we will discuss some compositional features of high dimensional problems and their mathematical properties that shed light on the question of why deep learning works for high dimensional problems. It is widely observed in science and engineering that complicated, high dimensional input-output relations can be represented as compositions of functions with low input dimensions. Their compositional structures can be effectively represented using layered directed acyclic graphs (DAGs). Based on the layered DAG formulation, an algebraic framework and approximation theory are developed for compositional functions, including neural networks. The theory leads to the proof of several complexity/approximation bounds of deep neural networks for problems of regression and dynamical systems.


Past Seminars

  • February 11, 2021 | Zoom | 1500
    Topics at the Intersection of Deep Learning and Control Theory
    Prof. Wei Kang, Department of Applied Mathematics, Naval Postgraduate School
  • February 18, 2021 | Zoom | 1500
    Optimal Boundary Control of a Nonlinear Reaction Diffusion Equation via Completing the Square and Al'brekht's Method
    Prof. Arthur J. Krener, Department of Applied Mathematics, Naval Postgraduate School
  • March 4, 2021 | Zoom | 1500
    A domain decomposition Rayleigh-Ritz algorithm for symmetric generalized eigenvalue problems
    Dr. Vassilis Kalantzis, Research Staff Member, IBM Research USA, Thomas J. Watson Research Center
  • March 11, 2021 | Zoom | 1500
    A Split-Form, Stable, Hybrid Continuous/Discontinuous Galerkin Spectral Element Method for Wave Propagation
    Prof. David Kopriva, Department of Mathematics, Florida State University and Computational Science Research Center, San Diego State University
  • April 9, 2021 | Zoom | 1500
    An approximation theory perspective on deep learning
    Prof. Alex Townsend, Department of Mathematics, Cornell University
  • April 22, 2021 | Zoom | 1500
    Polynomial-free, Variable High-order Methods using Gaussian Process Modeling for Computational Astrophysics
    Prof. Dongwook Lee, Applied Mathematics, University of California, Santa Cruz
  • May 06, 2021 | Zoom | 1500
    Hidden Physics Models
    Prof. Maziar Raissi, Applied Mathematics, University of Colorado Boulder
  • May 13, 2021 | Zoom | 1500
    The Coming of Game Theory
    Prof. Guillermo Owen, Department of Applied Mathematics, Naval Postgraduate School
  • June 03, 2021 | Zoom | 1500
    Quantifying parameter uncertainty for convection within a climate model
    Dr. Oliver R. A. Dunbar, postdoctoral scholar, Environmental Sciences and Engineering, California Institute of Technology
  • June 10, 2021 | Zoom | 1500
    Why Does Deep Learning Work for High Dimensional Problems?
    Prof. Wei Kang, Department of Applied Mathematics, Naval Postgraduate School
  • February 25, 2020 | Spanagel 257 | 1500
    Wasserstein Gradient Flow for Stochastic Prediction, Filtering and Control: Theory and Algorithms
    Prof. Abhishek Halder, Department of Applied Mathematics, University of California, Santa Cruz
  • January 10, 2019 | Spanagel 257 | 1500
    Control through canalization in modeling the innate immune response to ischemic injury
    Prof. Elena S. Dimitrova, School of Mathematical and Statistical Sciences, Clemson University
  • January 14, 2019 | Spanagel 257 | 1500
    Computational physics at extreme scales: efficient solvers for discontinuous Galerkin methods
    Dr. Will Pazner, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
  • January 28, 2019 | Spanagel 257 | 1500
    Rapid mixing bounds for Hamiltonian Monte Carlo under strong log-concavity
    Dr. Oren Mangoubi, Computer Science, Ecole polytechnique fédérale de Lausanne (EPFL)
  • February 21, 2019 | Spanagel 257 | 1500
    50 Years History of the Cross Correlation between m-Sequences
    Prof. Tor Helleseth, Department of Informatics, University of Bergen
  • March 04, 2019 | Watkins 146 | 1500
    Regulation-Triggered Batch Learning: A New Hope for Adaptive Aircraft Control
    Prof. Miroslav Krstic, Department of Mechanical and Aerospace Engineering, University of California, San Diego
  • April 17, 2019 | Spanagel 257 | 1500
    The Power of Interpolation: From Linear Algebra and Approximation Theory to Exascale and Beyond
    Dr. Anthony P. Austin, Department of Mathematics, Virginia Tech
  • July 25, 2019 | Spanagel 257 | 1500
    Chebfun: Numerical Computing with Functions
    Dr. Anthony P. Austin, Department of Mathematics, Virginia Tech
  • November 04, 2019 | Spanagel 257 | 1500
    Chebfun: Numerical Computing with Functions
    Dr. Boumediene Hamzi, Department of Mathematics, Imperial College London