How to train a TensorFlow classifier using custom MFCC audio processing

Posted 2024-04-25 09:32:19


I need to do audio processing for a university semester project, and I'm hoping to build basic speech detection and then develop it further. The other guides and information I've looked at, about GMMs in TensorFlow and similar, generally don't let you do the audio processing yourself.

I want to plug the code I use to decode wav files, build spectrograms, and convert them to MFCCs into a training model that can then classify samples of specific simple words. However, I can't find any information or help on this anywhere.

So far I've tried many different approaches, but the Simple Audio Recognition tutorial on the TensorFlow website seems very close to what I ultimately want to achieve, and it even uses MFCCs as its default audio processing. I've tried reverse-engineering it and breaking down the different function calls, but everything is extremely convoluted.

I'd like to do what the tutorial does, but with my own audio processing. The complexity of the machine-learning part doesn't really matter. Any pre-existing method, or something very simple and plug-and-play, would be fine.

Here is my MFCC and spectrogram code, in case it's needed:

import numpy
import scipy.io.wavfile
import matplotlib.pyplot as plt
from scipy.fftpack import dct


def do_mfcc(spectrogram, upper_frequency_limit=4000, lower_frequency_limit=0, dct_coefficient_count=12):
    # Note: upper_frequency_limit and lower_frequency_limit are currently unused

    mfcc = dct(spectrogram, type=2, axis=1, norm='ortho')[:, 1: (dct_coefficient_count + 1)]  # Keep 2-13
    mfcc -= (numpy.mean(mfcc, axis=0) + 1e-8)  # Mean normalization of mfcc

    return mfcc

def gimmeDaSPECtogram(wav_path, sample_rate, window_size_ms=30.0, stride_ms=10.0, pre_emphasis=0.97, NFFT=512, triangular_filters=40, magnitude_squared=False, name=None):
    # Note: magnitude_squared and name are accepted but unused, and sample_rate
    # is overwritten below by the rate read from the wav file itself
    sample_rate, signal = scipy.io.wavfile.read(wav_path)  # File assumed to be in the same directory
    signal = signal[0:int(1.0 * sample_rate)]  # Keep only the first second
    window_size_s = window_size_ms / 1000.0  # ms -> seconds
    stride_s = stride_ms / 1000.0  # ms -> seconds


    emphasized_signal = numpy.append(signal[0], signal[1:] - pre_emphasis * signal[:-1])  # Pre-emphasis filter
    frame_length, frame_step = window_size_s * sample_rate, stride_s * sample_rate  # Convert from seconds to samples
    signal_length = len(emphasized_signal)
    frame_length = int(round(frame_length))
    frame_step = int(round(frame_step))
    num_frames = int(numpy.ceil(
        float(numpy.abs(signal_length - frame_length)) / frame_step))  # Make sure that we have at least 1 frame


    pad_signal_length = num_frames * frame_step + frame_length
    z = numpy.zeros((pad_signal_length - signal_length))
    pad_signal = numpy.append(emphasized_signal,
                              z)  # Pad Signal to make sure that all frames have equal number of samples without truncating any samples from the original signal

    indices = numpy.tile(numpy.arange(0, frame_length), (num_frames, 1)) + numpy.tile(
        numpy.arange(0, num_frames * frame_step, frame_step), (frame_length, 1)).T
    frames = pad_signal[indices.astype(numpy.int32, copy=False)]

    frames *= numpy.hamming(frame_length)  # Apply a Hamming window to each frame

    mag_frames = numpy.absolute(numpy.fft.rfft(frames, NFFT))  # Magnitude of the FFT
    pow_frames = ((1.0 / NFFT) * ((mag_frames) ** 2))  # Power Spectrum

    low_freq_mel = 0
    high_freq_mel = (2595 * numpy.log10(1 + (sample_rate / 2) / 700))  # Nyquist frequency on the Mel scale
    mel_points = numpy.linspace(low_freq_mel, high_freq_mel, triangular_filters + 2)  # Equally spaced in Mel scale
    hz_points = (700 * (10 ** (mel_points / 2595) - 1))  # Convert Mel back to Hz
    bins = numpy.floor((NFFT + 1) * hz_points / sample_rate)  # FFT bin indices of the filter edges ('bins' avoids shadowing the built-in 'bin')

    fbank = numpy.zeros((triangular_filters, int(numpy.floor(NFFT / 2 + 1))))
    for m in range(1, triangular_filters + 1):
        f_m_minus = int(bins[m - 1])  # left edge
        f_m = int(bins[m])            # centre
        f_m_plus = int(bins[m + 1])   # right edge

        # Rising slope of the m-th triangular filter
        for k in range(f_m_minus, f_m):
            fbank[m - 1, k] = (k - bins[m - 1]) / (bins[m] - bins[m - 1])
        # Falling slope of the m-th triangular filter
        for k in range(f_m, f_m_plus):
            fbank[m - 1, k] = (bins[m + 1] - k) / (bins[m + 1] - bins[m])
    filter_banks = numpy.dot(pow_frames, fbank.T)  # Apply the Mel filter bank to the power spectrum
    filter_banks = numpy.where(filter_banks == 0, numpy.finfo(float).eps, filter_banks)  # Numerical stability: avoid log(0)

    filter_banks = 20 * numpy.log10(filter_banks)  # Convert to dB
    mfccs = do_mfcc(filter_banks, upper_frequency_limit=4000, lower_frequency_limit=0, dct_coefficient_count=12)


    # Code below is for visualising the MFCCs
    plt.subplot(312)
    plt.imshow(mfccs.T, cmap=plt.cm.jet, aspect='auto')
    # linspace keeps tick and label counts equal (the original arange could
    # produce four ticks for five labels, which matplotlib rejects)
    plt.xticks(numpy.linspace(0, mfccs.T.shape[1] - 1, 5),
               ['0s', '0.25s', '0.5s', '0.75s', '1s'])
    plt.yticks(numpy.linspace(0, mfccs.T.shape[0] - 1, 5),
               ['0', '3', '6', '9', '12'])
    ax = plt.gca()
    ax.invert_yaxis()
    plt.show()

    return mfccs




gimmeDaSPECtogram("samples/leftTest.wav", 16000, window_size_ms=30.0, stride_ms=10.0, pre_emphasis=0.97)

I want to train a model on each wav file in the dataset, using the MFCCs as input, so that I can then use the classifier to recognize basic words.
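For concreteness, this is roughly what I'm imagining, as a minimal sketch. It assumes the wav files live in one folder per word under samples/, that every clip is at least one second of 16 kHz audio (so every MFCC array has the same shape), and that the plt.show() call above is removed for batch use. The words list and the load_dataset helper are placeholders of my own, not anything from the tutorial:

import os
import numpy
import tensorflow as tf

def load_dataset(root_dir, words):
    # Compute MFCCs for every wav under root_dir/<word>/ and label it by word index
    features, labels = [], []
    for label, word in enumerate(words):
        word_dir = os.path.join(root_dir, word)
        for fname in os.listdir(word_dir):
            if fname.endswith('.wav'):
                features.append(gimmeDaSPECtogram(os.path.join(word_dir, fname), 16000))
                labels.append(label)
    return numpy.array(features, dtype=numpy.float32), numpy.array(labels)

words = ['left', 'right', 'up', 'down']  # hypothetical label set
x, y = load_dataset('samples', words)    # x: (n_clips, n_frames, 12)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=x.shape[1:]),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(len(words), activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=20, validation_split=0.2)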

Any help or advice on implementing custom audio processing in TensorFlow would be greatly appreciated. Even a link to some kind of guide, or a pointer in the right direction, would be hugely appreciated!
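In case it helps to make the question concrete: I imagine the numpy processing above could be wrapped with tf.py_function so it runs inside a tf.data input pipeline, along these lines. This is only a rough sketch; the file list and labels are placeholders, and it again assumes the plotting inside gimmeDaSPECtogram is removed:

import tensorflow as tf

def extract_mfcc(path_tensor):
    # Decode the tensor back to a Python string and run the numpy pipeline
    path = path_tensor.numpy().decode('utf-8')
    return gimmeDaSPECtogram(path, 16000).astype('float32')

def to_features(path, label):
    mfccs = tf.py_function(extract_mfcc, [path], tf.float32)
    mfccs.set_shape([None, 12])  # frames x cepstral coefficients
    return mfccs, label

paths = tf.constant(['samples/leftTest.wav'])  # placeholder file list
labels = tf.constant([0])                      # placeholder labels
dataset = (tf.data.Dataset.from_tensor_slices((paths, labels))
           .map(to_features)
           .batch(1))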


Tags: sample, numpy, frames, signal, bin, rate, step, plt