
Smoothed Separable Nonnegative Matrix Factorization

Abstract: Given a set of data points belonging to the convex hull of a set of vertices, a key problem in data analysis and machine learning is to estimate these vertices in the presence of noise. Many algorithms have been developed under the assumption that there is at least one data point near each vertex; two of the most widely used are vertex component analysis (VCA) and the successive projection algorithm (SPA). This assumption is known as the pure-pixel assumption in blind hyperspectral unmixing, and as the separability assumption in nonnegative matrix factorization. More recently, Bhattacharyya and Kannan (ACM-SIAM Symposium on Discrete Algorithms, 2020) proposed an algorithm for learning a latent simplex (ALLS) that relies on the assumption that there is more than one data point near each vertex. In that scenario, ALLS is probabilistically more robust to noise than algorithms based on the separability assumption. In this paper, inspired by ALLS, we propose smoothed VCA (SVCA) and smoothed SPA (SSPA), which generalize VCA and SPA by assuming the presence of several data points near each vertex. We illustrate the effectiveness of SVCA and SSPA over VCA, SPA, and ALLS on synthetic data sets and on the unmixing of hyperspectral images.
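To make the setting concrete, the sketch below is a minimal NumPy implementation of the standard successive projection algorithm mentioned in the abstract (not the smoothed SSPA variant proposed in the paper, nor the authors' code): it greedily selects the data column of largest norm and projects the remaining columns onto its orthogonal complement. The function name, the toy data, and all parameter values are illustrative assumptions.

```python
import numpy as np

def spa(X, r):
    """Standard successive projection algorithm (SPA) for separable NMF:
    greedily pick the column of X with the largest l2 norm, project the
    data onto the orthogonal complement of that column, and repeat r times.
    Returns the indices of the r selected columns (estimated vertices).
    The smoothed variants in the paper would instead exploit several data
    points near each vertex rather than a single selected column."""
    R = X.astype(float).copy()                  # residual matrix, updated in place
    indices = []
    for _ in range(r):
        j = int(np.argmax(np.sum(R**2, axis=0)))  # column with largest norm
        u = R[:, j] / np.linalg.norm(R[:, j])      # unit vector of selected column
        R -= np.outer(u, u @ R)                    # project out its direction
        indices.append(j)
    return indices

# Toy usage on synthetic separable data (illustrative only):
rng = np.random.default_rng(0)
m, r, n = 50, 5, 200
W = rng.random((m, r))                          # ground-truth vertices
H = rng.dirichlet(np.ones(r), size=n).T         # points in the simplex spanned by W
H[:, :r] = np.eye(r)                            # separability: one pure point per vertex
X = W @ H + 0.01 * rng.standard_normal((m, n))  # noisy mixtures
print(spa(X, r))                                # indices of estimated pure points
```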
Document type :
Preprints, Working Papers, ...

https://hal.telecom-paris.fr/hal-03701535
Contributor: Christophe Kervazo
Submitted on: Wednesday, June 22, 2022 - 11:21:21 AM
Last modification on: Friday, June 24, 2022 - 3:49:16 AM

File

2110.05528.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03701535, version 1
  • ARXIV : 2110.05528

Citation

Nicolas Nadisic, Nicolas Gillis, Christophe Kervazo. Smoothed Separable Nonnegative Matrix Factorization. 2022. ⟨hal-03701535⟩
