https://hal.telecom-paris.fr/hal-03701535

Smoothed Separable Nonnegative Matrix Factorization

Nadisic, Nicolas (UMONS - University of Mons, Belgium)
Gillis, Nicolas (UMONS - University of Mons, Belgium)
Kervazo, Christophe (LTCI - Laboratoire Traitement et Communication de l'Information, Télécom Paris, Institut Polytechnique de Paris)

Preprint, HAL CCSD, 2022.
Keywords: blind hyperspectral unmixing, pure-pixel search algorithms, latent simplex, simplex-structured matrix factorization, nonnegative matrix factorization, separability.
Domain: Computer Science / Signal and Image Processing.

Abstract. Given a set of data points belonging to the convex hull of a set of vertices, a key problem in data analysis and machine learning is to estimate these vertices in the presence of noise. Many algorithms have been developed under the assumption that there is at least one data point near each vertex; two of the most widely used are vertex component analysis (VCA) and the successive projection algorithm (SPA). This assumption is known as the pure-pixel assumption in blind hyperspectral unmixing, and as the separability assumption in nonnegative matrix factorization. More recently, Bhattacharyya and Kannan (ACM-SIAM Symposium on Discrete Algorithms, 2020) proposed an algorithm for learning a latent simplex (ALLS) that relies on the assumption that there are several data points near each vertex. In that scenario, ALLS is probabilistically more robust to noise than algorithms based on the separability assumption.
In this paper, inspired by ALLS, we propose smoothed VCA (SVCA) and smoothed SPA (SSPA) that generalize VCA and SPA by assuming the presence of several nearby data points to each vertex. We illustrate the effectiveness of SVCA and SSPA over VCA, SPA and ALLS on synthetic data sets, and on the unmixing of hyperspectral images.
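For context, the standard SPA that SSPA generalizes can be sketched as follows. This is a minimal textbook version of the successive projection algorithm, not the paper's smoothed variant: it greedily selects the column of largest norm (a vertex under the separability assumption) and projects the data onto the orthogonal complement of that column before repeating. All names here are illustrative, not taken from the paper.

```python
import numpy as np

def spa(X, r):
    """Successive Projection Algorithm (basic sketch).

    X : (m, n) data matrix whose columns lie in the convex hull of
        r vertex columns (separability / pure-pixel assumption).
    r : number of vertices to extract.
    Returns the indices of the r selected columns of X.
    """
    R = X.astype(float).copy()  # residual matrix
    indices = []
    for _ in range(r):
        # Under separability, the column of maximal norm is a vertex.
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        indices.append(j)
        # Project the residual onto the orthogonal complement
        # of the selected column.
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)
    return indices
```

The smoothed variants proposed in the paper replace the single extracted point per vertex by an aggregate of several nearby data points, which is what makes them more robust when no single pure pixel is present.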