Applied and Computational Mathematics Seminar

Applied Math Seminar

Title:
Mixed Precision Iterative Methods for Linear Inverse Problems
 
Abstract:
Many problems in science and engineering give rise to linear systems of equations that are commonly referred to as large-scale linear discrete ill-posed problems. The matrices that define these problems are typically severely ill-conditioned and may be rank deficient. Because of this, regularization is often needed to stem the effect of perturbations caused by error in the available data. In this talk we consider the solution of the regularized least-squares problem using both mixed precision and Krylov subspace projection techniques. We utilize a filter factor analysis to investigate the regularizing behavior of the proposed iterative methods.
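The filter-factor view mentioned in the abstract can be illustrated with plain Tikhonov regularization: in the SVD basis, the regularized solution damps each singular component by a factor phi_i = s_i^2 / (s_i^2 + lambda^2). Below is a minimal NumPy sketch; the Hilbert test matrix, noise level, and choice of lambda are illustrative assumptions, not from the talk, and the mixed-precision and Krylov aspects are not shown.

```python
import numpy as np

n = 32
# Hilbert matrix: a classic severely ill-conditioned test problem
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy right-hand side

U, s, Vt = np.linalg.svd(A)
lam = 1e-5                                       # regularization parameter (hand-picked)
phi = s**2 / (s**2 + lam**2)                     # Tikhonov filter factors
coeffs = (U.T @ b) / s
x_naive = Vt.T @ coeffs                          # unregularized: noise is amplified
x_reg = Vt.T @ (phi * coeffs)                    # filtered (regularized) solution

rel_err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The filter factors are close to 1 for singular values well above lambda and close to 0 below it, which is what "stems the effect of perturbations" in the data.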
Date:
-
Location:
POT 745

Applied Math Seminar

Title:
Preconditioning techniques for almost symmetric operator equations in Newton-NEPv
 
Abstract:
This talk presents techniques to accelerate the convergence of Krylov subspace solvers for the correction equation in Newton's method for nonlinear eigenvector problems (NEPv). While this operator equation is generally nonsymmetric, its structure admits a splitting into symmetric and nonsymmetric parts that can be used to derive efficient direct and iterative preconditioners. We show how to obtain this splitting of the update equation, discuss a selection of preconditioning techniques from the literature, and adapt them to the problem at hand. Numerical comparisons on a discrete Kohn-Sham example complete the talk.
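To make the "almost symmetric" idea concrete: any square matrix A splits as A = S + K with S = (A + Aᵀ)/2 symmetric and K = (A − Aᵀ)/2 skew-symmetric, and when K is small relative to S, the symmetric part is a natural preconditioner. The toy NumPy sketch below uses a dense stationary iteration on a synthetic operator, not the Krylov solvers or the specific preconditioners from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# "Almost symmetric" operator: SPD part plus a small skew-symmetric perturbation
Q = rng.standard_normal((n, n))
S = Q @ Q.T + n * np.eye(n)        # symmetric positive definite part
G = rng.standard_normal((n, n))
K = 0.05 * (G - G.T)               # small skew-symmetric part
A = S + K

b = rng.standard_normal(n)
x = np.zeros(n)
# Stationary iteration preconditioned by the symmetric part:
#   x_{k+1} = x_k + S^{-1}(b - A x_k)
# Its iteration matrix is -S^{-1}K, so it converges fast when S dominates K.
for _ in range(50):
    x = x + np.linalg.solve(S, b - A @ x)

res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

In practice one would use S (or an approximation of it) inside a Krylov method such as GMRES rather than this plain fixed-point iteration.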
Date:
-
Location:
POT 745

Applied Math Seminar

Title: Some Modified Matrix Eigenvalue Problems

Abstract:
This is the title of a well-known numerical linear algebra survey article by Gene Golub, published in 1973. The article covers a range of matrix eigenvalue problems that require some manipulation before standard algorithms can be applied. I am using the same title to consider a new set of modified matrix eigenvalue problems. These include constrained and bi-level optimizations arising from algorithms for fairness in machine learning, such as spectral clustering with group fairness and fair principal component analysis. We also consider eigenvalue optimization via the 2D eigenvalue problem, with applications including the calculation of the distance to instability, and stationary values of a quadratic form subject to non-homogeneous linear constraints, with applications such as constrained image segmentation. I will discuss how to exploit the underlying structures of these problems to recast them as familiar eigenvalue problems solvable by standard algorithms.
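One of the ingredients, stationary values of a quadratic form under linear constraints, reduces in the homogeneous case to an ordinary symmetric eigenproblem on the constraint null space; the non-homogeneous case treated in the talk requires more work. A small NumPy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 2
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric quadratic form
C = rng.standard_normal((m, n))                       # homogeneous constraints Cx = 0

# Orthonormal basis Z for the null space of C (last n-m right singular vectors)
_, _, Vt = np.linalg.svd(C)
Z = Vt[m:].T
# Stationary values of x^T A x over {Cx = 0, ||x|| = 1} are the eigenvalues
# of the projected matrix Z^T A Z
vals, vecs = np.linalg.eigh(Z.T @ A @ Z)
x = Z @ vecs[:, 0]                                    # minimizer lifted back to R^n
```

Since Z has orthonormal columns, the lifted vector automatically satisfies both the constraint and the unit-norm condition.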
Date:
-
Location:
POT 745

Applied Math Seminar

Title: Generating Representative Samples: Neural Networks and More

Abstract: Approximating a probability distribution using a discrete set of points is a fundamental task in modern scientific computation, with applications in uncertainty quantification, among other areas. We discuss recent advances in this area, including the use of Stein discrepancies and various optimization techniques. In particular, we introduce Stein-Message-Passing Monte Carlo (Stein-MPMC), an extension of the original Message-Passing Monte Carlo model and the first machine-learning algorithm for generating low-discrepancy (space-filling) point sets. Additionally, we present a generalized Subset Selection algorithm, a simpler yet highly effective optimization method.
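For intuition on what "low-discrepancy" means, one can compare a classic deterministic construction against i.i.d. random points: the van der Corput sequence fills [0, 1] far more evenly. This is only background for the abstract's topic, not the Stein-MPMC or Subset Selection methods themselves.

```python
import numpy as np

def van_der_corput(n, base=2):
    """Classic 1-D low-discrepancy sequence: radical inverse in the given base."""
    pts = []
    for i in range(1, n + 1):
        x, f = 0.0, 1.0 / base
        while i > 0:
            i, d = divmod(i, base)
            x += d * f
            f /= base
        pts.append(x)
    return np.array(pts)

n = 256
ld = np.sort(van_der_corput(n))
rnd = np.sort(np.random.default_rng(3).random(n))
# The largest gap (including the interval endpoints) measures how evenly
# a point set covers [0, 1]; low-discrepancy points keep it near 1/n.
gap = lambda p: np.max(np.diff(np.concatenate(([0.0], p, [1.0]))))
```

Random points typically leave gaps several times larger than 1/n, while the deterministic sequence keeps every gap at about 1/n.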

Date:
-
Location:
POT 745

Applied Math Seminar

Title: Log-Sum Regularized Kaczmarz Algorithms for High-Order Tensor Recovery

Abstract: Sparse and low-rank tensor recovery has emerged as a significant area of research, with applications in many fields such as computer vision. However, minimizing the $\ell_0$-norm of a vector or the rank of a matrix is NP-hard, so relaxed surrogates, such as the log-sum penalty, are typically adopted in practice for computational efficiency. In this presentation, we propose novel log-sum regularized Kaczmarz algorithms for recovering high-order tensors with either sparse or low-rank structure. We present block variants along with a convergence analysis of the proposed algorithms. Numerical experiments on synthetic and real-world data sets demonstrate the effectiveness of the proposed methods.
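As background, the classical randomized Kaczmarz method solves a linear system one row at a time, projecting the current iterate onto the hyperplane defined by a randomly sampled row. The talk's algorithms extend this with log-sum regularization and tensor structure, neither of which is shown in this minimal NumPy sketch on a synthetic consistent system.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 300, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                               # consistent overdetermined system

x = np.zeros(n)
row_norms2 = np.einsum('ij,ij->i', A, A)
probs = row_norms2 / row_norms2.sum()        # sample rows prop. to squared norm
for i in rng.choice(m, size=5000, p=probs):
    # Project the iterate onto the hyperplane {x : a_i^T x = b_i}
    x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Sampling rows proportionally to their squared norms gives the well-known expected linear convergence rate governed by the scaled condition number of A.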

Date:
-
Location:
POT 745
