Voice classification: spoof vs. genuine
Welcome to the voice classification (spoof vs. genuine) project.
Pipeline:
INPUT -> FRONT-END -> BACK-END -> OUTPUT
Input:
- The WAV audio files are divided into three data sets: a training, a development, and an evaluation set.
Front-end: Audio pre-processing:
- FFT vs CQT/CQCCs:
- A traditional approach uses the (Fast) Fourier Transform (FFT). This technique is extremely powerful for time-frequency analysis; however, it may lack frequency resolution at lower frequencies and temporal resolution at higher frequencies. On the other hand, an efficient method introduced at ASVspoof 2015 to address this problem is the constant Q transform (CQT). The difference is that the FFT imposes regularly spaced frequency bins while the CQT employs geometrically spaced frequency bins, so across the entire spectrum the CQT achieves higher frequency resolution at lower frequencies and higher temporal resolution at higher frequencies. This reflects the human perception system more precisely. Additionally, the baseline feature extraction proposed at ASVspoof 2015 has shown that it is more efficient to combine the CQT with traditional cepstral analysis, yielding constant Q cepstral coefficients (CQCCs). A short librosa sketch comparing the two transforms follows the Q-factor definition below.
- The Q factor is a measure of the selectivity of each filter and is defined as the ratio between the center frequency \(f_{k}\) and the bandwidth \(\delta f\): \[Q=\frac{f_{k}}{\delta f}\]
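As a rough illustration, both transforms can be computed with librosa; this is a minimal sketch, and the file name audio.wav is a placeholder:

```python
import numpy as np
import librosa

# Placeholder path; the data set files are 16 kHz WAVs
y, sr = librosa.load('audio.wav', sr=16000)

# STFT: linearly spaced frequency bins
stft_mag = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# CQT: geometrically spaced bins -> finer frequency resolution at low
# frequencies and finer temporal resolution at high frequencies
cqt_mag = np.abs(librosa.cqt(y, sr=sr, hop_length=512))
```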
- Mel-spectrogram:
- Each audio file has a different length, so a sliding “window” slices the audio file into multiple short signals of equal length, which makes processing more convenient and more precise.
- The method used to convert the signal into the frequency domain is the Mel-spectrogram. A Mel-spectrogram is a visual representation of frequencies over time. To convert a sound into a spectrogram, we apply a filter bank representation of the sound. A sound file is divided into multiple short-time frames according to a chosen window length and time step. Since the signal changes over time, we apply the Fourier Transform to each short-time frame under the assumption that the signal does not change within that frame (otherwise the transform makes no sense); the result is an approximation of the frequency contours obtained by concatenating all the frames. A window function is applied to each frame to counteract the FFT's assumption that the data is infinite and to reduce spectral leakage. Then we can apply an N-point FFT to each frame to calculate the frequency spectrum: \[P=\frac{\left|\mathrm{FFT}\left(x_{i}\right)\right|^{2}}{N}\]
- The frequency can then be converted to the Mel scale, and vice versa, through these equations: \[m=2595 \log _{10}\left(1+\frac{f}{700}\right)\] \[f=700\left(10^{m / 2595}-1\right)\]
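These two formulas translate directly into code; a minimal sketch:

```python
import numpy as np

def hz_to_mel(f):
    # m = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # f = 700 * (10^(m / 2595) - 1)
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

print(hz_to_mel(1000.0))             # ~1000 mel: 1000 Hz maps close to 1000 mel
print(mel_to_hz(hz_to_mel(440.0)))   # ~440 Hz: the round trip recovers the input
```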
- The steps above are performed by default by librosa.feature.melspectrogram. This function takes several parameters (y=None, sr=22050, S=None, n_fft=2048, hop_length=512, power=2.0, **kwargs). The files in the data set have a sampling rate of 16 kHz, so sr needs to be changed to 16000 instead of the default 22050. The spectrogram is then converted from amplitude to dB and flattened. As a result, the log spectrogram is created with shape (?, 60, 41, 1). We then expand the log spectrogram to shape (?, 60, 41, 2), which is the input tensor for the CNN model. A sketch of this feature-extraction step is given below.
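A minimal sketch of this feature-extraction step, assuming 60 Mel bands, 41 frames per window, half-overlapping windows, and a delta (first-order difference) of the log spectrogram as the second channel; the helper name extract_features and these windowing details are assumptions, following the urban sound classification tutorials listed under Source:

```python
import numpy as np
import librosa

def extract_features(file_path, bands=60, frames=41, hop_length=512):
    # One window spans exactly `frames` STFT frames
    window_size = hop_length * (frames - 1)
    log_specs = []
    y, sr = librosa.load(file_path, sr=16000)  # data set is sampled at 16 kHz
    # Slice the file into fixed-length, half-overlapping windows
    for start in range(0, len(y) - window_size, window_size // 2):
        signal = y[start:start + window_size]
        mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=bands)
        log_specs.append(librosa.power_to_db(mel))  # convert to dB
    features = np.asarray(log_specs).reshape(len(log_specs), bands, frames, 1)
    # Second channel: delta of the log spectrogram -> shape (?, 60, 41, 2)
    features = np.concatenate((features, np.zeros_like(features)), axis=3)
    for i in range(len(features)):
        features[i, :, :, 1] = librosa.feature.delta(features[i, :, :, 0])
    return features
```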
- The goal of this project is to distinguish spoofed from genuine voices. So, we define two labels for the data set and one-hot encode them in Python to get a shape matching the extracted features above: (?, 2).
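A simple NumPy version of this encoding; the mapping 0 = genuine, 1 = spoof is an assumption:

```python
import numpy as np

def one_hot_encode(labels):
    # labels: 1-D array of class indices (assumed 0 = genuine, 1 = spoof)
    labels = np.asarray(labels, dtype=int)
    encoded = np.zeros((len(labels), 2))
    encoded[np.arange(len(labels)), labels] = 1  # result has shape (?, 2)
    return encoded
```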
Back-end: Deep learning model (CNN)
- The methods named weight_variable and bias_variable return TensorFlow variables of the given shapes; the bias variable is initialized with all ones and the weight variable from a normal distribution with zero mean and a standard deviation of 0.1 (sketched below).
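A sketch of these two helpers using the TensorFlow 1.x API assumed throughout this section:

```python
import tensorflow as tf

def weight_variable(shape):
    # Weights: normal distribution with zero mean and stddev 0.1
    initial = tf.random_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Biases: initialized with all ones
    initial = tf.ones(shape)
    return tf.Variable(initial)
```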
- The conv2d method is just a wrapper over TensorFlow's conv2d function. It is called by the apply_convolution function, which takes the input data, the kernel/filter size, the number of channels in the input, and the output depth (the number of channels in the output). It then gets weight and bias variables, applies the convolution, adds the bias to the result, and finally applies a non-linearity (ReLU). Max pooling can be applied using the apply_max_pool function, which takes input data (usually the output of a convolution layer), kernel size, and stride size. A sketch follows.
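A minimal sketch of these helpers, reusing weight_variable and bias_variable from above; stride 1 and 'SAME' padding are assumptions:

```python
def conv2d(x, W):
    # Thin wrapper over TensorFlow's 2-D convolution (stride 1 assumed)
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def apply_convolution(x, kernel_size, num_channels, depth):
    weights = weight_variable([kernel_size, kernel_size, num_channels, depth])
    biases = bias_variable([depth])
    # Convolve, add the bias, then apply the ReLU non-linearity
    return tf.nn.relu(tf.add(conv2d(x, weights), biases))

def apply_max_pool(x, kernel_size, stride_size):
    return tf.nn.max_pool(x, ksize=[1, kernel_size, kernel_size, 1],
                          strides=[1, stride_size, stride_size, 1],
                          padding='SAME')
```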
- Here, we define some configuration parameters for the Convolutional Neural Network, such as the kernel size, the total number of iterations, the number of neurons in the hidden layer, etc., as listed below.
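The values below follow the numbers given in this section; the variable names and the learning rate are assumptions:

```python
# Input dimensions from the feature-extraction step: (?, 60, 41, 2)
bands = 60
frames = 41
num_channels = 2
num_labels = 2           # genuine vs. spoof

kernel_size = 30         # filter size of the convolution layer
depth = 16               # output channels of the convolution layer
num_hidden = 200         # neurons in the fully connected layer

batch_size = 50
total_iterations = 2000
learning_rate = 0.01     # assumption; not stated in the text
```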
- TensorFlow placeholders for the input and output data are defined next. A convolution is applied with a filter size of 30 and a depth of 16 (the number of channels we get as output from the convolution layer). Next, the convolution output is flattened for the fully connected layer's input. There are 200 neurons in the fully connected layer, as defined by the configuration above. The sigmoid function is used as the non-linearity in this layer. Lastly, a softmax layer is defined to output probabilities of the class labels. The sketch below puts these pieces together.
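A sketch of the graph definition under these settings:

```python
X = tf.placeholder(tf.float32, shape=[None, bands, frames, num_channels])
Y = tf.placeholder(tf.float32, shape=[None, num_labels])

# Convolution layer: 30x30 filters, 16 output channels
cov = apply_convolution(X, kernel_size, num_channels, depth)

# Flatten the convolution output for the fully connected layer
shape = cov.get_shape().as_list()
cov_flat = tf.reshape(cov, [-1, shape[1] * shape[2] * shape[3]])

# Fully connected layer: 200 neurons, sigmoid non-linearity
f_weights = weight_variable([shape[1] * shape[2] * shape[3], num_hidden])
f_biases = bias_variable([num_hidden])
f = tf.nn.sigmoid(tf.add(tf.matmul(cov_flat, f_weights), f_biases))

# Softmax output layer: probabilities for (genuine, spoof)
out_weights = weight_variable([num_hidden, num_labels])
out_biases = bias_variable([num_labels])
y_ = tf.nn.softmax(tf.add(tf.matmul(f, out_weights), out_biases))
```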
- The negative log-likelihood cost function is minimized using the Adam optimizer; the code below initializes the cost function and the optimizer, and also defines the accuracy calculation for the model's predictions.
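A sketch, continuing from the graph above:

```python
# Negative log-likelihood (cross-entropy) cost, minimized with Adam
cross_entropy = -tf.reduce_sum(Y * tf.log(y_))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)

# Fraction of examples whose predicted class matches the label
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```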
- The following code trains the CNN using a batch size of 50 for 2000 iterations. After training, it classifies the test set and prints the model's accuracy, along with a plot of the cost as a function of the number of iterations.
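A sketch of the training loop; train_x/train_y and test_x/test_y are assumed to come from the feature-extraction and one-hot-encoding steps above:

```python
import matplotlib.pyplot as plt

cost_history = []
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for itr in range(total_iterations):
        # Draw a mini-batch of 50 examples per iteration
        offset = (itr * batch_size) % (train_y.shape[0] - batch_size)
        batch_x = train_x[offset:offset + batch_size]
        batch_y = train_y[offset:offset + batch_size]
        _, c = session.run([optimizer, cross_entropy],
                           feed_dict={X: batch_x, Y: batch_y})
        cost_history.append(c)

    # Classify the test set and report accuracy
    print('Test accuracy:',
          session.run(accuracy, feed_dict={X: test_x, Y: test_y}))

# Plot cost as a function of the iteration count
plt.plot(cost_history)
plt.xlabel('Iteration')
plt.ylabel('Cost')
plt.show()
```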
- The model achieves an accuracy of around 90%.
Future work:
- As a next step, I will try to implement different methods of audio pre-processing to extract useful features from the audio files, such as constant Q cepstral coefficients (CQCCs), MFCCs, and LCNNs.
- Back-end: I will try to add more fully connected layers and use other models, such as RNNs or fastai, to compare their accuracy.
Source:
Band filtering in the frequency domain
Urban sound classification - part 1
Urban sound classification - part 2
Mel Frequency Cepstral Coefficient (MFCC)