I am trying to compute a real FFT on iOS using the Accelerate framework. Here is my Swift code:
```swift
import Accelerate

class FFT {
    private var fftSetup: FFTSetup?
    private var log2n: Float?
    private var length: Int?

    func initialize(count: Int) {
        length = count
        log2n = log2(Float(length!))
        self.fftSetup = vDSP_create_fftsetup(vDSP_Length(log2n!), FFTRadix(kFFTRadix2))!
    }

    func computeFFT(input: [Float]) -> ([Float], [Float]) {
        var real = input
        var imag = [Float](repeating: 0.0, count: input.count)
        var splitComplexBuffer = DSPSplitComplex(realp: &real, imagp: &imag)

        let halfLength = (input.count / 2) + 1
        real = [Float](repeating: 0.0, count: halfLength)
        imag = [Float](repeating: 0.0, count: halfLength)

        // The input is split alternately across the real and imaginary
        // arrays of the DSPSplitComplex structure.
        splitComplexBuffer = DSPSplitComplex(fromInputArray: input, realParts: &real, imaginaryParts: &imag)

        // Even though there are only half as many complex output elements,
        // we still ask the FFT to process all of the input samples.
        vDSP_fft_zrip(fftSetup!, &splitComplexBuffer, 1, vDSP_Length(log2n!), FFTDirection(FFT_FORWARD))

        // zrip results are 2x the standard FFT and need to be scaled.
        var scaleFactor = Float(1.0 / 2.0)
        vDSP_vsmul(splitComplexBuffer.realp, 1, &scaleFactor, splitComplexBuffer.realp, 1, vDSP_Length(halfLength))
        vDSP_vsmul(splitComplexBuffer.imagp, 1, &scaleFactor, splitComplexBuffer.imagp, 1, vDSP_Length(halfLength))

        return (real, imag)
    }

    func computeIFFT(real: [Float], imag: [Float]) -> [Float] {
        var real = [Float](real)
        var imag = [Float](imag)
        var result = [Float](repeating: 0.0, count: length!)
        var resultAsComplex: UnsafeMutablePointer<DSPComplex>? = nil
        result.withUnsafeMutableBytes {
            resultAsComplex = $0.baseAddress?.bindMemory(to: DSPComplex.self, capacity: 512)
        }

        var splitComplexBuffer = DSPSplitComplex(realp: &real, imagp: &imag)
        vDSP_fft_zrip(fftSetup!, &splitComplexBuffer, 1, vDSP_Length(log2n!), FFTDirection(FFT_INVERSE))
        vDSP_ztoc(&splitComplexBuffer, 1, resultAsComplex!, 2, vDSP_Length(length! / 2))

        // Neither the forward nor the inverse FFT does any scaling;
        // here we compensate for that.
        var scale: Float = 1.0 / Float(length!)
        var copyOfResult = result
        vDSP_vsmul(&result, 1, &scale, &copyOfResult, 1, vDSP_Length(length!))
        result = copyOfResult
        return result
    }

    func deinitialize() {
        vDSP_destroy_fftsetup(fftSetup)
    }
}
```
And this is my Python code for computing the rFFT and irFFT:
# calculate fft of input block
in_block_fft = np.fft.rfft(np.squeeze(in_buffer)).astype("complex64")
# apply mask and calculate the ifft
estimated_block = np.fft.irfft(in_block_fft * out_mask)
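The NumPy round trip can be verified in isolation. This is a minimal sketch: `in_buffer` and `out_mask` come from the surrounding pipeline and are not shown in the question, so a random 512-sample block stands in for them, and no mask is applied:

```python
import numpy as np

# Stand-in for one 512-sample input block (in_buffer in the snippet above).
block = np.random.default_rng(0).standard_normal(512).astype("float32")

# rFFT of a length-N real signal yields N/2 + 1 complex bins.
spectrum = np.fft.rfft(block).astype("complex64")
assert spectrum.shape == (257,)

# With no mask applied, irfft(rfft(x)) recovers x up to float rounding.
recovered = np.fft.irfft(spectrum)
assert np.allclose(recovered, block, atol=1e-4)
```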
The problem:

Swift: if I compute the rFFT of a 512-sample frame and then apply the irFFT to the rFFT's output, I get the original array back.

Python: likewise, if I use rFFT followed by irFFT, I get the original array back.

The trouble starts when I compare the results of the Swift rFFT and the Python rFFT: their values differ. Sometimes the real parts are identical, but the imaginary parts are completely different.

I tried several Python frameworks (NumPy, SciPy, and TensorFlow) and they all produce exactly the same results (apart from small differences in the decimals). But when I compute the rFFT of the same input on iOS with the Swift code above, the results are different.
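One likely source of the mismatch is layout and scaling, not numerical error: `vDSP_fft_zrip` returns results that are 2x the mathematical DFT, and it packs the purely real Nyquist bin into `imagp[0]` instead of storing N/2 + 1 bins the way `np.fft.rfft` does. The sketch below builds in NumPy what vDSP would return under those documented conventions (it does not call vDSP itself), and shows how to unpack it back to NumPy's layout:

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(8).astype(np.float32)
r = np.fft.rfft(x)  # 5 complex bins: DC, 3 interior bins, Nyquist

# Simulated vDSP_fft_zrip output: 2x the mathematical DFT, with the
# (purely real) Nyquist value packed into imagp[0].
realp = 2.0 * np.concatenate(([r[0].real], r[1:-1].real))
imagp = 2.0 * np.concatenate(([r[-1].real], r[1:-1].imag))

# Halving and unpacking reproduces NumPy's rfft layout exactly.
unpacked = np.concatenate((
    realp[:1] / 2.0,                      # DC bin
    (realp[1:] + 1j * imagp[1:]) / 2.0,   # interior bins
    imagp[:1] / 2.0,                      # Nyquist bin
))
assert np.allclose(unpacked, r, atol=1e-5)
```

This also explains the symptom of real parts sometimes matching while imaginary parts look "completely different": element 0 of the Swift imaginary array is the Nyquist bin, not an imaginary component.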
If anyone with experience in the Accelerate framework and a working knowledge of FFTs could help with this, it would be much appreciated; my own FFT knowledge is limited.
Yes, it makes a difference. I am porting this functionality from the librosa Python package to iOS, using the same data:
https://github.com/dhrebeniuk/RosaKit
Two observations. First, calls such as
DSPSplitComplex(realp: &real, imagp: &imag)
create temporary (dangling) pointers, and the fromInputArray
initializer is deprecated; have a look at the best practices in Data Packing for Fourier Transforms. Second, the
bindMemory
capacity is a number of elements: the lines below create 512 complex elements, whereas (if I understand correctly) they should create 256 complex elements to represent 512 real values. – Simon
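The capacity point in the comment above can be checked numerically: reinterpreting a buffer of 512 real values as interleaved complex pairs yields 256 complex elements, which matches the `vDSP_Length(length! / 2)` passed to `vDSP_ztoc` in `computeIFFT`. A NumPy sketch of that reinterpretation:

```python
import numpy as np

# A result buffer of 512 real samples, as in computeIFFT above.
result = np.zeros(512, dtype=np.float32)

# Viewing it as interleaved complex pairs (re0, im0, re1, im1, ...) is
# what bindMemory(to: DSPComplex.self, capacity:) does in Swift: it
# yields 512 / 2 = 256 elements, so the capacity should be 256, not 512.
as_complex = result.view(np.complex64)
assert as_complex.shape == (256,)
assert result.nbytes == as_complex.nbytes == 2048  # same underlying bytes
```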