Bottleneck Features from SNR-Adaptive Denoising Deep Classifier for Speaker Identification

Zhili TAN and Man-Wai MAK
Center for Signal Processing, Dept. of Electronic and Information Engineering
The Hong Kong Polytechnic University, Hong Kong SAR
E-mail: [email protected]

Abstract—In this paper, we explore the potential of using deep learning for extracting speaker-dependent features for noise-robust speaker identification. More specifically, an SNR-adaptive denoising classifier is constructed by stacking two layers of restricted Boltzmann machines (RBMs) on top of a denoising deep autoencoder, where the top RBM layer is connected to a softmax output layer that produces the posterior probabilities of speakers, and the same RBM layer outputs speaker-dependent bottleneck features. Both the deep autoencoder and the RBMs are trained by contrastive divergence, followed by backpropagation fine-tuning. The autoencoder aims to reconstruct the clean spectra of a noisy test utterance using the spectra of the noisy utterance and its SNR as input. With this denoising capability, the output from the bottleneck layer of the classifier can be considered a low-dimensional representation of denoised utterances. These frame-based bottleneck features are then used to train an iVector extractor and a PLDA model for speaker identification. Experimental results based on a noisy YOHO corpus show that the bottleneck features slightly outperform the conventional MFCCs under low SNR conditions and that fusion of the two features leads to further performance gains, suggesting that the two features are complementary to each other.

Index Terms—Deep learning; bottleneck features; denoising autoencoder; speaker identification; deep belief networks

I. INTRODUCTION

In recent years, deep learning has achieved great success in many areas, including speech recognition [1], computer vision [2], speech synthesis [3], [4] and music recognition [5]. In many of these studies, deep neural networks (DNNs) and deep belief networks (DBNs) [6] are used as classifiers. This is achieved by adding a softmax layer on top of the hidden layers of restricted Boltzmann machines (RBMs). Deep learning is powerful in that the resulting deep networks have a strong ability to disentangle the variations in the input patterns, which greatly improves performance in many classification problems. The posterior probabilities generated by the softmax layer can replace those generated by generative models, e.g., Gaussian mixture models (GMMs) in speaker recognition [7], hidden Markov models (HMMs) in large vocabulary continuous speech recognition (LVCSR) [1], and the posteriors of senones in i-vector based speaker verification [8].

This paper explores the use of DNNs for extracting speaker-dependent features for speaker recognition. To this end, we stacked a denoising deep autoencoder [9], [10], two layers of RBMs and a softmax layer to form a DNN classifier that produces posterior probabilities of speaker identities as output. However, instead of using the classifier directly for speaker identification, we used the RBM layer just below the softmax output layer of the DNN as a bottleneck layer for feature extraction. More precisely, the bottleneck features are extracted from the RBM's outputs before the sigmoid nonlinearity. The bottleneck features, which provide a low-dimensional representation of the input patterns [11], are used for training an iVector-PLDA speaker identification system. The advantage of using the DNN as a feature extractor rather than using it directly as a speaker identifier is that the number of test speakers is not limited by the number of nodes in the softmax layer.

We used noisy speech as the input and clean speech as the target output to train the denoising autoencoder [10], which is pre-trained using contrastive divergence [12] followed by backpropagation fine-tuning. Then, two layers of RBMs are trained using the outputs of the denoising autoencoder as input. Finally, a softmax layer with the number of nodes equal to the number of training speakers is put on top of the last (bottleneck) RBM layer, and backpropagation fine-tuning is further applied to minimize the cross-entropy training error. Therefore, the first several layers of the DNN classifier help to make the whole network more robust to noise, while the top layers extract speaker-dependent information from the denoised spectra. We demonstrated that at 0 dB SNR, the bottleneck features are slightly more robust than the standard mel-frequency cepstral coefficients (MFCCs) [13].

II. SNR-ADAPTIVE DENOISING DEEP AUTOENCODER

A. Input Preprocessing

To train the denoising autoencoder, it is necessary to preprocess the input speech. On one hand, cepstral features have shown promise in previous research; on the other hand, it is intuitive to use raw features as input to realize the potential of autoencoders in modelling speech signals. In particular, the log-spectra, the log mel-scale triangular filterbank outputs and even MFCCs are candidate inputs.

For the log-spectra, we performed a 512-point fast Fourier transform on 8 kHz speech data, followed by taking the logarithm. Due to the symmetry of the Fourier transform of real-valued signals, only the first 256 components were used in subsequent steps. For the log mel-scale triangular filterbank outputs, 20 triangular filterbanks from 300 Hz to 3700 Hz were used, and therefore the 256-dimensional spectra were reduced to 20 dimensions. After applying the discrete cosine transform (DCT), we obtained the MFCCs.

For each of the input types, we packed it with an additional input node that represents the SNR to form the input patterns of the SNR-adaptive denoising autoencoder. The structure of the hidden layers is identical for all input types. It has been shown [14] that it is beneficial to apply Z-norm to the input vectors. In our experiments, the SNR and the input features were normalized independently, i.e., the mean and standard deviation of the SNR and of the input features were estimated separately.

In addition to the preprocessing techniques above, we can also use a contextual window covering several frames as input to the DNN. For example, a sliding window covering 7 frames of mel filterbank outputs consists of 20 × 7 = 140 nodes. Together with the SNR node, there are 141 nodes in the input layer. Fig. 1 shows the architecture of the denoising autoencoder with the SNR node omitted.
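To make the preprocessing concrete, the following is a minimal sketch in Python/NumPy of the filterbank input stream described above (7-frame context window plus SNR node). The frame extraction, the edge-padding of the context window, the logarithm floor, and all function names are our assumptions; the paper does not specify these details.

import numpy as np

def log_spectra(frames):
    # 512-point FFT of 8 kHz speech frames; keep only the first 256
    # log-magnitude components (the spectrum of a real signal is symmetric).
    spec = np.abs(np.fft.rfft(frames, n=512))[:, :256]
    return np.log(spec + 1e-8)

def stack_context(feats, left=3, right=3):
    # Slide a 7-frame window over 20-dim filterbank outputs: 20 x 7 = 140 dims.
    padded = np.pad(feats, ((left, right), (0, 0)), mode='edge')
    n = len(feats)
    return np.hstack([padded[i:i + n] for i in range(left + right + 1)])

def make_input(fbank, snr_db, feat_mean, feat_std, snr_mean, snr_std):
    # Z-norm the features and the SNR independently (separate means and
    # standard deviations), then append the SNR node: 140 + 1 = 141 nodes.
    x = (stack_context(fbank) - feat_mean) / feat_std
    snr = (snr_db - snr_mean) / snr_std
    return np.hstack([x, np.full((len(x), 1), snr)])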

[Fig. 1 layout: a sliding window of 5 frames of noisy reverberant speech feeds the Input Layer (Gaussian); this is followed by a Hidden Layer (Bernoulli), the Bottleneck Layer (Linear) whose outputs serve as the bottleneck features, another Hidden Layer (Bernoulli), and an Output Layer (Linear) that gives 1 frame of reconstructed speech.]

Fig. 1. Denoising deep autoencoder.

B. RBM Pre-training

Backpropagation (BP) [15] is commonly used for training DNNs. However, BP is a gradient-descent algorithm, which can easily be trapped in local minima, especially when the neural network has a deep structure (many hidden layers), because the gradients in the bottom layers become too small. In this case, we can consider the DNN as comprising a number of stacked RBMs, which are trained layer-by-layer via the contrastive divergence algorithm. It is commonly believed that this pre-training step can bring the DNN close to the global optimum, which helps the backpropagation algorithm converge to a better solution.

An RBM is an energy-based model in which nodes within the same layer do not interact. For denoising deep autoencoders, the structure is symmetric with respect to the middle layer. Thus, we only need to train the first half of the network, i.e., from the input to the middle layer, and then copy the parameters to the upper half of the network before the backpropagation fine-tuning (see Fig. 2).

The nodes of the RBMs in the middle layers of the autoencoder follow a Bernoulli distribution, which means that both the visible and hidden units in the middle layers are binary. However, the nodes in the input layer should follow a Gaussian distribution, because log-spectra and MFCCs follow Gaussian distributions. In the Bernoulli-Bernoulli RBM, the activation function is the sigmoid function

s(z) = \frac{1}{1 + e^{-z}},    (1)

and the energy function is defined as

E(\mathbf{v}, \mathbf{h}) = -\sum_{i \in \mathrm{vis}} a_i v_i - \sum_{j \in \mathrm{hid}} b_j h_j - \sum_{i,j} v_i h_j w_{ij},    (2)

where v_i and h_j are the binary states of visible unit i and hidden unit j, respectively, a_i and b_j are their biases, and w_{ij} is the weight between the i-th visible unit and the j-th hidden unit. By using contrastive divergence [12], we maximize the probability that the network assigns to a visible vector \mathbf{v}:

p(\mathbf{v}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})}.    (3)

For the Gaussian-Bernoulli case, the activation function in the visible layer is linear, and the energy function is defined by

E(\mathbf{v}, \mathbf{h}) = \sum_{i \in \mathrm{vis}} \frac{(v_i - a_i)^2}{2\sigma_i^2} - \sum_{j \in \mathrm{hid}} b_j h_j - \sum_{i,j} \frac{v_i}{\sigma_i} h_j w_{ij},    (4)

where \sigma_i is the standard deviation of the Gaussian noise for visible unit i.
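As an illustration, the following is a minimal sketch of one CD-1 update for a Gaussian-Bernoulli RBM, following Eqs. (1)-(4). The learning rate, the mini-batch averaging, and the assumption \sigma_i = 1 (reasonable after Z-norm) are ours; the paper does not specify these details.

import numpy as np

def sigmoid(z):
    # Eq. (1)
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, a, b, lr=0.01, rng=np.random.default_rng(0)):
    # One contrastive-divergence (CD-1) step for a Gaussian-Bernoulli RBM,
    # assuming unit-variance visible units (sigma_i = 1 after Z-norm).
    h0 = sigmoid(v0 @ W + b)                      # positive phase
    h_sample = (rng.random(h0.shape) < h0) * 1.0  # sample binary hidden states
    v1 = h_sample @ W.T + a                       # linear (Gaussian) reconstruction
    h1 = sigmoid(v1 @ W + b)                      # negative phase
    n = len(v0)
    W += lr * (v0.T @ h0 - v1.T @ h1) / n         # approximate gradient of log p(v), Eq. (3)
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (h0 - h1).mean(axis=0)
    return W, a, b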

C. Backpropagation Fine-tuning

After RBM pre-training, we can stack the RBMs, copy the parameters in the lower half of the DBN to the upper half, and then fine-tune them using backpropagation, as Fig. 2 illustrates. In backpropagation training, we present a mini-batch of input patterns and update the network parameters to bring the actual outputs closer to the target values. To equip our autoencoder with denoising ability, we used noisy speech as input and the corresponding clean counterparts as target outputs, with the squared error L(\mathbf{z}, \tilde{\mathbf{z}}) = \|\mathbf{z} - \tilde{\mathbf{z}}\|_2^2 as the loss function. However, we kept the SNR component in the output the same as in the input, since we only focus on the speech denoising capability of this autoencoder.
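The sketch below shows how the pre-trained RBM weights might be unrolled into the symmetric autoencoder of Fig. 2 and scored with the squared loss. Initializing the decoder with the transposed encoder weights follows the symmetric construction described above; the bias layout, parameter packing and function names are our assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unroll(rbm1, rbm2):
    # rbm = (W, visible_bias, hidden_bias). Decoder weights are initialized
    # with the transposes of the encoder weights; fine-tuning then perturbs
    # every weight matrix (the w + epsilon terms in Fig. 2).
    (W1, a1, b1), (W2, a2, b2) = rbm1, rbm2
    return [(W1, b1), (W2, b2), (W2.T.copy(), a2), (W1.T.copy(), a1)]

def forward(x, layers):
    # The first three hidden layers are Bernoulli (sigmoid activation);
    # the output layer is linear, as described in Sec. II-C.
    h = x
    for W, b in layers[:-1]:
        h = sigmoid(h @ W + b)
    W, b = layers[-1]
    return h @ W + b

def squared_loss(z_hat, z_clean):
    # L(z, z~) = ||z - z~||_2^2, averaged over the mini-batch; the SNR
    # component of the target is kept identical to that of the input.
    return np.mean(np.sum((z_hat - z_clean) ** 2, axis=1))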

[Fig. 2 layout: Hidden Layer 1 RBM (weights w_1) and Hidden Layer 2 RBM (weights w_2) are trained on the Noisy Input; the unrolled network then runs from the Noisy Input through Hidden Layer 1 (w_1 + \epsilon_1), Hidden Layer 2 (w_2 + \epsilon_2) and Hidden Layer 3 (w_2^T + \epsilon_3) to the Clean Target Output (w_1^T + \epsilon_4), where the \epsilon's denote the changes introduced by fine-tuning.]

Fig. 2. Construction of a denoising autoencoder by training two RBMs layer-by-layer and then stacking them symmetrically, followed by backpropagation fine-tuning.

Backpropagation [15] uses the chain rule to iteratively compute the error gradient of each layer, collects the gradients from the top layer to the bottom layer, and updates the weights layer by layer. In our experiments, the first three hidden layers are all Bernoulli layers, and therefore they have a sigmoid activation function. However, the output layer of the autoencoder, which aims to reconstruct the input, uses a linear activation function. The autoencoder is a key component of the DNN classifier, as Fig. 3 shows.

III. SNR-ADAPTIVE DENOISING DEEP CLASSIFIER

Because of the denoising autoencoder, the DNN learns how to extract clean information from noisy input patterns. However, our goal is to enable the DNN to extract speaker-dependent features. To this end, we construct a speaker classifier by putting two more layers of RBMs on top of the autoencoder, as shown in Fig. 3. Finally, a softmax layer with the number of nodes equal to the number of training speakers is added to the network. Backpropagation is then applied to fine-tune the DNN by minimizing the cross-entropy error. Specifically, we assume that we have N training speakers whose spectral feature vectors and speaker labels are given by X = {x_{i,j} ∈