Applied and Computational Mathematics Seminar

Applied Math Seminar

Title: Weighted quantization using MMD: From mean field to mean shift via gradient flows

Abstract: Approximating a probability distribution using a set of particles is a fundamental problem in machine learning and statistics, with applications including clustering and quantization. Formally, we seek a weighted mixture of Dirac measures that best approximates the target distribution. While much existing work relies on the Wasserstein distance to quantify approximation errors, maximum mean discrepancy (MMD) has received comparatively less attention, especially when allowing for variable particle weights. We argue that a Wasserstein-Fisher-Rao gradient flow is well-suited for designing quantizations optimal under MMD. We show that a system of interacting particles satisfying a set of ODEs discretizes this flow. We further derive a new fixed-point algorithm called mean shift interacting particles (MSIP). We show that MSIP extends the classical mean shift algorithm, widely used for identifying modes in kernel density estimators. Moreover, we show that MSIP can be interpreted as preconditioned gradient descent and that it acts as a relaxation of Lloyd's algorithm for clustering. Our unification of gradient flows, mean shift, and MMD-optimal quantization yields algorithms that are more robust than state-of-the-art methods, as demonstrated via high-dimensional and multi-modal numerical experiments.
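
To make the objective concrete, the following minimal NumPy sketch evaluates the squared MMD between a weighted particle set and an empirical target measure under a Gaussian kernel. It illustrates the quantity that the gradient flow above minimizes, not the speaker's MSIP algorithm itself; the kernel choice and all names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T)
    return np.exp(-sq_dists / (2 * bandwidth**2))

def weighted_mmd_sq(particles, weights, target_samples, bandwidth=1.0):
    """Squared MMD between sum_i w_i * delta_{x_i} and the empirical
    measure of target_samples, for a Gaussian kernel."""
    Kxx = gaussian_kernel(particles, particles, bandwidth)
    Kxy = gaussian_kernel(particles, target_samples, bandwidth)
    Kyy = gaussian_kernel(target_samples, target_samples, bandwidth)
    return (weights @ Kxx @ weights          # particle-particle term
            - 2 * weights @ Kxy.mean(axis=1) # cross term
            + Kyy.mean())                    # target-target constant

# Toy usage: 5 weighted particles approximating 1000 Gaussian samples.
rng = np.random.default_rng(0)
y = rng.normal(size=(1000, 2))
x = rng.normal(size=(5, 2))
w = np.full(5, 1 / 5)
print(weighted_mmd_sq(x, w, y))
```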

Date:
-
Location:
POT 108

Applied Math Seminar

Title: Mixed Precision Iterative Methods for Linear Inverse Problems

Abstract: Many problems in science and engineering give rise to linear systems of equations that are commonly referred to as large-scale linear discrete ill-posed problems. The matrices that define these problems are typically severely ill-conditioned and may be rank deficient. Because of this, regularization is often needed to stem the effect of perturbations caused by errors in the available data. In this talk we consider the solution of the regularized least-squares problem using both mixed precision and Krylov subspace projection techniques. We use a filter factor analysis to investigate the regularizing behavior of the proposed iterative methods.
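
To illustrate the filter factor analysis mentioned in the abstract, here is a hedged textbook sketch (not the speaker's mixed-precision method): for Tikhonov regularization with parameter λ, the filter factors φ_i = σ_i² / (σ_i² + λ²) computed from the SVD show how the regularized solution damps components associated with small singular values.

```python
import numpy as np
from scipy.linalg import svd, hilbert

# Hilbert matrix: a standard example of a severely ill-conditioned matrix.
n = 12
A = hilbert(n)
b = A @ np.ones(n)                                     # data for a known solution
b += 1e-8 * np.random.default_rng(1).normal(size=n)    # perturbed data

U, s, Vt = svd(A)
lam = 1e-6                        # Tikhonov regularization parameter

# Filter factors: close to 1 for large singular values, close to 0 for
# the small singular values whose components are dominated by noise.
phi = s**2 / (s**2 + lam**2)

# Filtered (regularized) solution: x = sum_i phi_i * (u_i^T b / sigma_i) v_i
x_reg = Vt.T @ (phi * (U.T @ b) / s)
print(phi)
```
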
Date:
-
Location:
POT 745

Applied Math Seminar

Title: Preconditioning techniques for almost symmetric operator equations in Newton-NEPv

Abstract: This talk presents several techniques to accelerate the convergence of Krylov subspace solvers for the correction equation in Newton's method for nonlinear eigenvector problems (NEPv). While this operator equation is generally nonsymmetric, its structure allows a splitting into symmetric and nonsymmetric parts, which can be used to derive efficient direct and iterative preconditioners. We will show how to obtain this splitting of the update equation, discuss a selection of preconditioning techniques from the literature, and adapt them to the problem at hand. Numerical comparisons on a discrete Kohn-Sham example complete the talk.
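
As a generic illustration of the splitting idea (a sketch of the standard construction, not the speaker's preconditioners): write A = S + N with symmetric part S = (A + Aᵀ)/2, factor S once, and supply it to GMRES as a preconditioner. The test matrix below is an illustrative assumption.

```python
import numpy as np
from scipy.sparse import csc_matrix, identity, random as sprandom
from scipy.sparse.linalg import splu, gmres, LinearOperator

rng = np.random.default_rng(0)
n = 500
# Almost symmetric test matrix: a dominant symmetric part plus a small
# nonsymmetric perturbation.
B = sprandom(n, n, density=0.01, random_state=0)
A = csc_matrix(B + B.T + 10 * identity(n)
               + 0.05 * sprandom(n, n, density=0.01, random_state=1))

# Splitting A = S + N into symmetric and nonsymmetric parts.
S = csc_matrix((A + A.T) / 2)

# Factor the symmetric part once; use it as the preconditioner M ~ A^{-1}.
S_factor = splu(S)
M = LinearOperator((n, n), matvec=S_factor.solve)

b = rng.normal(size=n)
x, info = gmres(A, b, M=M)
print("converged" if info == 0 else f"info = {info}")
```
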
Date:
-
Location:
POT 745

Applied Math Seminar

Title: Some Modified Matrix Eigenvalue Problems

Abstract: This is the title of a well-known numerical linear algebra survey article by Gene Golub, published in 1973. The article covers a range of matrix eigenvalue problems that require some manipulation before standard algorithms can be applied. I am using the same title to consider a new set of modified matrix eigenvalue problems. These include constrained and bi-level optimizations arising from algorithms for fairness in machine learning, such as spectral clustering with group fairness and fair principal component analysis. We also consider eigenvalue optimization via the 2D eigenvalue problem, with applications including the computation of the distance to instability, as well as stationary values of a quadratic form subject to non-homogeneous linear constraints, with applications such as image segmentation under constraints. I will discuss how to exploit the underlying structures of these problems to turn them into familiar eigenvalue problems and algorithms.
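
As one concrete instance of such a reformulation (the classical homogeneous-constraint case from Golub's survey, not necessarily the speaker's construction; the non-homogeneous case treated in the talk needs more machinery): stationary values of xᵀAx over unit vectors satisfying Cx = 0 are the eigenvalues of the quadratic form restricted to the null space of C.

```python
import numpy as np
from scipy.linalg import null_space, eigh

rng = np.random.default_rng(0)
n, p = 8, 2
A = rng.normal(size=(n, n))
A = (A + A.T) / 2              # symmetric quadratic form x^T A x
C = rng.normal(size=(p, n))    # homogeneous constraints C x = 0

# Orthonormal basis Z for the null space of C; feasible vectors are x = Z y.
Z = null_space(C)

# Stationary values of x^T A x with ||x|| = 1 and C x = 0 are the
# eigenvalues of the projected (n-p) x (n-p) problem Z^T A Z.
vals, Y = eigh(Z.T @ A @ Z)
X = Z @ Y                      # lift eigenvectors back to R^n
print(vals)
print(np.max(np.abs(C @ X)))   # ~0: constraints are satisfied
```
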
Date:
-
Location:
POT 745

Applied Math Seminar

Title: Generating Representative Samples: Neural Networks and More

Abstract: Approximating a probability distribution using a discrete set of points is a fundamental task in modern scientific computation, with applications in uncertainty quantification, among others. We discuss recent advances in this area, including the use of Stein discrepancies and various optimization techniques. In particular, we introduce Stein-Message-Passing Monte Carlo (Stein-MPMC), an extension of the original Message-Passing Monte Carlo model, the first machine-learning algorithm for generating low-discrepancy (space-filling) point sets. Additionally, we present a generalized Subset Selection algorithm, a simpler yet highly effective optimization method.
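
To give a flavor of what a subset selection method optimizes, here is a generic greedy sketch under an assumed Gaussian kernel and MMD objective (not the speaker's generalized algorithm): it grows a representative subset one point at a time, each time adding the candidate that most reduces the squared MMD to the full sample.

```python
import numpy as np

def gaussian_gram(X, Y, h=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * h**2))

def greedy_subset(X, k, h=1.0):
    """Greedily pick k rows of X whose empirical measure has small
    squared MMD to the empirical measure of all of X."""
    n = X.shape[0]
    K = gaussian_gram(X, X, h)
    mean_embed = K.mean(axis=1)          # (1/n) sum_j k(x_i, x_j)
    chosen = []
    for m in range(1, k + 1):            # m = size of subset after this step
        # j-dependent part of MMD^2 when the subset S grows to S + {j}:
        #   (k(x_j, x_j) + 2 * sum_{a in S} k(x_j, x_a)) / m^2
        #   - 2 * mean_embed[j] / m
        sum_to_S = K[:, chosen].sum(axis=1) if chosen else np.zeros(n)
        score = (K.diagonal() + 2 * sum_to_S) / m**2 - 2 * mean_embed / m
        score[chosen] = np.inf           # no repeats
        chosen.append(int(np.argmin(score)))
    return X[chosen], chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
subset, idx = greedy_subset(X, k=20)
print(idx)
```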

Date:
-
Location:
POT 745
