Distributed speech separation in spatially unconstrained microphone arrays

Abstract: Speech separation with several speakers is a challenging task because of the non-stationarity of speech and the strong signal similarity between interfering sources. Current state-of-the-art solutions can separate the different sources well using sophisticated deep neural networks, which are, however, very tedious to train. When several microphones are available, spatial information can be exploited to design much simpler algorithms that discriminate between speakers. We propose a distributed algorithm that can process spatial information in a spatially unconstrained microphone array. The algorithm relies on a convolutional recurrent neural network that exploits the signal diversity of the distributed nodes. In a typical meeting-room scenario, the algorithm captures an estimate of each source in a first step and propagates it over the microphone array to increase the separation performance in a second step. We show that this approach performs even better as the number of sources and nodes increases. We also study the influence of a mismatch in the number of sources between training and testing conditions.
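The two-step data flow described in the abstract (per-node source estimation, then propagation of the estimates over the array for a refinement pass) can be sketched as follows. This is a minimal illustration of the message-passing structure only: the function names are hypothetical, and random soft masks stand in for the per-node CRNN mask estimator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def node_separate(mixture, n_sources):
    """Step 1 (placeholder for the per-node CRNN): estimate each source
    from the local mixture. Random soft masks that sum to one are used
    here purely to illustrate the shapes and data flow."""
    masks = rng.random((n_sources, mixture.shape[0]))
    masks /= masks.sum(axis=0, keepdims=True)  # soft masks sum to 1
    return masks * mixture  # shape: (n_sources, n_samples)

def refine(local_est, neighbor_ests):
    """Step 2 (placeholder for the second network pass): fuse the local
    estimate with the estimates propagated from the other nodes. A plain
    average stands in for the learned fusion."""
    stacked = np.stack([local_est] + neighbor_ests)  # (n_nodes, n_src, T)
    return stacked.mean(axis=0)

# Toy scenario: 3 nodes, 2 sources, 1 s of 16 kHz audio per node.
n_nodes, n_sources, n_samples = 3, 2, 16000
mixtures = [rng.standard_normal(n_samples) for _ in range(n_nodes)]

# Step 1: each node computes its own source estimates.
first_pass = [node_separate(m, n_sources) for m in mixtures]

# Step 2: each node refines using the estimates from all other nodes.
second_pass = [
    refine(first_pass[i],
           [first_pass[j] for j in range(n_nodes) if j != i])
    for i in range(n_nodes)
]

print(second_pass[0].shape)  # (2, 16000): n_sources x n_samples per node
```

Note that nothing here constrains the node positions, which mirrors the "spatially unconstrained" setting: each node only needs its own mixture and the estimates shared by the others.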
Document type: Conference papers
Contributor: Nicolas Furnon
Submitted on: Thursday, April 15, 2021 - 2:49:04 PM
Last modification on: Saturday, October 16, 2021 - 11:26:10 AM
Files produced by the author(s)
  • HAL Id: hal-02985794, version 3
  • arXiv: 2011.00982


Nicolas Furnon, Romain Serizel, Irina Illina, Slim Essid. Distributed speech separation in spatially unconstrained microphone arrays. ICASSP 2021 - 46th International Conference on Acoustics, Speech, and Signal Processing, Jun 2021, Toronto / Virtual, Canada. ⟨hal-02985794v3⟩


