
Joint phoneme alignment and text-informed speech separation on highly corrupted speech

Abstract: Speech separation quality can be improved by exploiting textual information. However, this usually requires text-to-speech alignment at the phoneme level. Classical alignment methods are designed for rather clean speech and do not work as well on corrupted speech. We propose to perform text-informed speech-music separation and phoneme alignment jointly, using recurrent neural networks and an attention mechanism. We show that this joint approach benefits both tasks. In experiments, phoneme transcripts are used to improve the perceived quality of separated speech over a non-informed baseline. Moreover, our novel attention-based phoneme alignment method achieves state-of-the-art alignment accuracy on both clean and heavily corrupted speech.
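
As a rough illustration of the attention-based alignment idea summarized in the abstract, the sketch below computes a soft alignment between audio frame features and phoneme embeddings with scaled dot-product attention, then derives a hard frame-to-phoneme assignment and a text-informed context vector per frame. This is a minimal NumPy sketch under assumed names and shapes (soft_alignment, frame_features, phoneme_embeddings); it is not the authors' architecture, which uses recurrent networks and is described in the paper.

    import numpy as np

    def soft_alignment(frame_features, phoneme_embeddings):
        """Scaled dot-product attention between audio frames and phonemes.

        frame_features:     (T, d) array, one feature vector per audio frame
        phoneme_embeddings: (N, d) array, one embedding per transcript phoneme
        Returns a (T, N) matrix whose rows are distributions over phonemes.
        """
        d = frame_features.shape[1]
        scores = frame_features @ phoneme_embeddings.T / np.sqrt(d)  # (T, N)
        scores -= scores.max(axis=1, keepdims=True)                  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)                # softmax over phonemes
        return weights

    # Toy usage: random features standing in for learned encoder outputs.
    rng = np.random.default_rng(0)
    T, N, d = 200, 30, 64                    # audio frames, phonemes, feature dim
    frames = rng.standard_normal((T, d))
    phonemes = rng.standard_normal((N, d))

    attn = soft_alignment(frames, phonemes)  # (T, N) soft alignment matrix
    hard_alignment = attn.argmax(axis=1)     # phoneme index assigned to each frame
    context = attn @ phonemes                # (T, d) text-informed context per frame

The context vectors are what a text-informed separation branch could condition on, while the alignment matrix itself yields the phoneme-level timing; this coupling is what allows the two tasks to reinforce each other.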
Document type: Conference papers

https://hal.telecom-paris.fr/hal-02457075
Contributor: Roland Badeau
Submitted on: Thursday, February 6, 2020 - 1:02:01 PM
Last modification on: Friday, July 31, 2020 - 11:28:08 AM
Long-term archiving on: Thursday, May 7, 2020 - 3:02:50 PM

File

ICASSP_2020_paper_HAL.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-02457075, version 1

Citation

Kilian Schulze-Forster, Clément Doire, Gael Richard, Roland Badeau. Joint phoneme alignment and text-informed speech separation on highly corrupted speech. 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), May 2020, Barcelona, Spain. ⟨hal-02457075⟩

Metrics

Record views: 213
File downloads: 267