Microphone array speech separation algorithm based on TC-ResNet
Zhou, Lin; Xu, Yue; Wang, Tianyi; Feng, Kun; Shi, Jingang (2021-07-21)
Zhou, L., Xu, Y., Wang, T., Feng, K., Shi, J. (2021). Microphone Array Speech Separation Algorithm Based on TC-ResNet. CMC-Computers, Materials & Continua, 69(2), 2705–2716, https://doi.org/10.32604/cmc.2021.017080
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://creativecommons.org/licenses/by/4.0/
https://urn.fi/URN:NBN:fi-fe2021120859638
Abstract
Traditional separation methods have limited ability to handle the speech separation problem in highly reverberant, low signal-to-noise ratio (SNR) environments, and thus achieve unsatisfactory results. In this study, a convolutional neural network with temporal convolution and a residual network (TC-ResNet) is proposed to realize speech separation in a complex acoustic environment. A simplified steered-response power phase transform, denoted as GSRP-PHAT, is employed to reduce the computational cost. The extracted features are reshaped into a special tensor as the system input, and temporal convolution is applied, which not only enlarges the receptive field of the convolution layer but also significantly reduces the network's computational cost. Residual blocks are used to combine multiresolution features and accelerate the training procedure. A modified ideal ratio mask is applied as the training target. Simulation results demonstrate that the proposed microphone array speech separation algorithm based on TC-ResNet achieves better performance in terms of source-to-distortion ratio, source-to-interference ratio, and short-time objective intelligibility in low-SNR and highly reverberant environments, particularly in untrained situations. This indicates that the proposed method generalizes well to untrained conditions.
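The abstract outlines the processing chain only at a high level: spatial features (GSRP-PHAT) are reshaped so that convolutions run along the time axis, residual blocks combine multiresolution features, and the network predicts a ratio-mask-style target in [0, 1]. The following PyTorch sketch illustrates that general idea, assuming the feature dimension is treated as channels; the layer sizes, kernel widths, and module names are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class TemporalResBlock(nn.Module):
    """Residual block with 1D temporal convolution.

    The feature dimension is treated as channels, so each convolution
    slides along the time axis only; this enlarges the temporal
    receptive field while keeping the computational cost low.
    """
    def __init__(self, channels: int, kernel_size: int = 9):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, time)
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(x + y)  # skip connection combines multiresolution features


class MaskEstimator(nn.Module):
    """Stack of temporal residual blocks predicting a ratio mask in [0, 1]."""
    def __init__(self, feat_dim: int, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(
            *[TemporalResBlock(feat_dim) for _ in range(num_blocks)]
        )
        self.out = nn.Conv1d(feat_dim, feat_dim, kernel_size=1)

    def forward(self, feats):
        # feats: (batch, time, feat_dim) -> features act as channels
        x = feats.transpose(1, 2)
        x = self.blocks(x)
        mask = torch.sigmoid(self.out(x))  # ratio-mask-style output in [0, 1]
        return mask.transpose(1, 2)        # back to (batch, time, feat_dim)


if __name__ == "__main__":
    net = MaskEstimator(feat_dim=64)
    dummy = torch.randn(2, 100, 64)  # (batch, frames, features)
    print(net(dummy).shape)          # torch.Size([2, 100, 64])
```

The estimated mask would then be applied to the mixture representation to recover the target speech, in the spirit of the modified ideal ratio mask training target described in the abstract.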
Collections
- Open access [34516]