SDR Bandpass Filter

Here are the shortwave BCB frequency bands superimposed on the above multiband filter response (some of these bands are only lightly or regionally in use):

As you can see, some of these SW BCB frequencies are extremely close to the ham bands, and so in those cases the filter will probably not help much, while other SW BCB signals will be significantly attenuated.

Remember that we tell our SDR which frequency to tune to, but the samples that the SDR captures are at baseband, meaning the signal will display as centered around 0 Hz. We will have to keep track of which frequency we told the SDR to tune to. Here is what we might receive:
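As a quick illustration of that bookkeeping (the tune frequency and sample rate below are made-up values, not from any particular SDR), the absolute RF frequency of each FFT bin is simply the baseband bin frequency plus the frequency we tuned to:

```python
import numpy as np

# Made-up tuning values for illustration (not from any particular SDR)
center_freq = 100e6   # Hz, the frequency we told the SDR to tune to
sample_rate = 1e6     # Hz

# Baseband FFT bin frequencies span -sample_rate/2 to +sample_rate/2, centered at 0 Hz
num_samples = 1024
baseband_freqs = np.fft.fftshift(np.fft.fftfreq(num_samples, d=1 / sample_rate))

# The actual RF frequency of each bin is the baseband frequency plus the tune frequency
rf_freqs = baseband_freqs + center_freq
```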

Because our signal is already centered at DC (0 Hz), we know we want a low-pass filter.

Below shows an example of processing blocks of contiguous samples using stateful filtering:

b = taps
a = 1 # for FIR, but non-1 for IIR
zi = lfilter_zi(b, a) # calc initial conditions
while True:
    samples = sdr.read_samples(num_samples) # replace with your SDR's receive-samples function
    samples_filtered, zi = lfilter(b, a, samples, zi=zi) # apply filter
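To see why carrying `zi` between calls works, here is a self-contained sketch (using an arbitrary firwin low-pass in place of the real taps, and random numbers in place of SDR samples) showing that block-by-block filtering with the carried state matches filtering the whole signal in one call:

```python
import numpy as np
from scipy.signal import lfilter, lfilter_zi, firwin

np.random.seed(0)
taps = firwin(51, 0.2)        # arbitrary low-pass FIR, stand-in for the real taps
x = np.random.randn(4096)     # stand-in for received SDR samples

# Filter the whole signal in one call
zi0 = lfilter_zi(taps, 1) * x[0]
y_once, _ = lfilter(taps, 1, x, zi=zi0)

# Filter in blocks of 512 samples, carrying the state zi between calls
zi = zi0.copy()
blocks = []
for i in range(0, len(x), 512):
    y_blk, zi = lfilter(taps, 1, x[i:i + 512], zi=zi)
    blocks.append(y_blk)
y_blocks = np.concatenate(blocks)
```

The carried `zi` holds exactly the filter's internal state, so the block boundaries leave no trace in the output.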

Arbitrary Frequency Response

Now we will consider one way to design an FIR filter ourselves in Python, starting with the desired frequency domain response, and working backwards to find the impulse response.

We need to add resolution to our original frequency domain array, i.e., interpolate it.

H = np.hstack((np.zeros(200), np.arange(100)/100, np.zeros(200)))
w = np.linspace(-0.5, 0.5, 500)
plt.plot(w, H, '.-')
plt.show()
# (the rest of the code is the same)

Both options worked. Lastly, the 'full' option will include all the transients; it outputs the entire convolution result.

We will now use all four functions on the firwin2 taps we created above, with a test signal made of white Gaussian noise.
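Since the exact firwin2 taps aren't reproduced here, this sketch uses a stand-in firwin low-pass instead, but it shows how the four functions compare on a white-Gaussian-noise test signal:

```python
import numpy as np
from scipy.signal import lfilter, filtfilt, fftconvolve, firwin

np.random.seed(0)
h = firwin(101, 0.1)                    # stand-in taps (the text used firwin2 taps)
x = np.random.randn(1000)               # white Gaussian noise test signal

y_lfilter = lfilter(h, 1, x)            # causal; output length equals len(x)
y_filtfilt = filtfilt(h, 1, x)          # zero-phase, forward-backward
y_convolve = np.convolve(x, h, 'full')  # full convolution, includes transients
y_fftconv = fftconvolve(x, h, 'full')   # same result, computed with FFTs
```

Note that `lfilter` with `a = 1` produces exactly the first `len(x)` samples of the full convolution.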

This filter also has the stopband notches placed to optimize attenuation of the 88 – 108 MHz FM broadcast band, which would otherwise be aliased down to 22 – 42 MHz.

Because it’s just a sliding integration, the result is a triangle with a maximum at the point where both square pulses lined up perfectly.
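The triangle result is easy to reproduce with NumPy by convolving a 10-sample square pulse with itself:

```python
import numpy as np

# Convolving two identical square pulses: the sliding overlap grows linearly,
# peaks when the pulses line up perfectly, then shrinks, tracing out a triangle
pulse = np.ones(10)
tri = np.convolve(pulse, pulse)   # values ramp 1, 2, ..., 10, ..., 2, 1

peak_index = np.argmax(tri)       # peak at the point of perfect overlap
```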

Let’s look at a few more convolutions:

Note how a Gaussian convolved with a Gaussian is another Gaussian, but with a wider pulse and lower amplitude.
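A quick numeric check of this property (using a unit-area Gaussian so the amplitude comparison is meaningful): the output's standard deviation comes out to sqrt(sigma1^2 + sigma2^2), i.e., the variances add, while the peak drops.

```python
import numpy as np

sigma = 5.0
t = np.arange(-50, 51)
g1 = np.exp(-t**2 / (2 * sigma**2))  # Gaussian pulse, sigma = 5 samples
g1 /= g1.sum()                       # normalize to unit area

out = np.convolve(g1, g1)            # Gaussian convolved with itself

# Estimate the output's width from its second moment; the variances add,
# so we expect sigma_out = sqrt(sigma^2 + sigma^2) = sigma * sqrt(2)
idx = np.arange(len(out))
mu = np.sum(idx * out)
sigma_out = np.sqrt(np.sum((idx - mu) ** 2 * out))
```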

Because of this “sliding” nature, the length of the output is actually longer than the input.
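Concretely, the 'full' output of an N-sample signal convolved with an M-tap filter has N + M - 1 samples, while the 'same' mode trims it back to the input length:

```python
import numpy as np

x = np.random.randn(100)             # input signal, N = 100
h = np.ones(31) / 31                 # example filter: 31-tap moving average

y_full = np.convolve(x, h)           # mode='full' (the default): N + M - 1 samples
y_same = np.convolve(x, h, 'same')   # trimmed back to the input length
```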

The asterisk (*) is typically used as the symbol for convolution:

\[(f * g)(t) = \int f(\tau) g(t - \tau) d\tau\]

In this above expression, \(g(t)\) is the signal or input that is flipped and slides across \(f(t)\), but \(g(t)\) and \(f(t)\) can be swapped and it’s still the same expression.

The reason is mainly that a smaller transition width results in more taps, and more taps means more computations; we will see why shortly.
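The symmetry of the convolution expression is easy to verify numerically:

```python
import numpy as np

np.random.seed(0)
f = np.random.randn(50)
g = np.random.randn(20)

# Swapping f and g leaves the convolution unchanged
fg = np.convolve(f, g)
gf = np.convolve(g, f)
```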

For example, image processing makes heavy use of 2D filters, where the input and output are images.

We have two options that will allow it to decay to zero:

Option 1: We “window” our current impulse response so that it decays to 0 on both sides. For that 77 tap filter we used earlier, the taps are:

h=[-0.00025604525581002235,0.00013669139298144728,0.0005385575350373983,0.0008378280326724052,0.000906112720258534,0.0006353431381285191,-9.884083502996931e-19,-0.0008822851814329624,-0.0017323142383247614,-0.0021665366366505623,-0.0018335371278226376,-0.0005912294145673513,0.001349081052467227,0.0033936649560928345,0.004703888203948736,0.004488115198910236,0.0023609865456819534,-0.0013707970501855016,-0.00564080523326993,-0.008859002031385899,-0.009428252466022968,-0.006394983734935522,4.76480351940553e-18,0.008114570751786232,0.015200719237327576,0.018197273835539818,0.01482443418353796,0.004636279307305813,-0.010356673039495945,-0.025791890919208527,-0.03587324544787407,-0.034922562539577484,-0.019146423786878586,0.011919975280761719,0.05478153005242348,0.10243935883045197,0.1458890736103058,0.1762896478176117,0.18720689415931702,0.1762896478176117,0.1458890736103058,0.10243935883045197,0.05478153005242348,0.011919975280761719,-0.019146423786878586,-0.034922562539577484,-0.03587324544787407,-0.025791890919208527,-0.010356673039495945,0.004636279307305813,0.01482443418353796,0.018197273835539818,0.015200719237327576,0.008114570751786232,4.76480351940553e-18,-0.006394983734935522,-0.009428252466022968,-0.008859002031385899,-0.00564080523326993,-0.0013707970501855016,0.0023609865456819534,0.004488115198910236,0.004703888203948736,0.0033936649560928345,0.001349081052467227,-0.0005912294145673513,-0.0018335371278226376,-0.0021665366366505623,-0.0017323142383247614,-0.0008822851814329624,-9.884083502996931e-19,0.0006353431381285191,0.000906112720258534,0.0008378280326724052,0.0005385575350373983,0.00013669139298144728,-0.00025604525581002235]
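A minimal sketch of this windowing idea (the cutoff and window choice here are arbitrary, not the ones used for the 77-tap filter above): take a truncated ideal sinc impulse response and multiply it by a Hamming window so that the edge taps shrink toward zero:

```python
import numpy as np

num_taps = 77
n = np.arange(num_taps) - (num_taps - 1) / 2  # sample indices centered at 0

# Truncated ideal low-pass impulse response (a sinc); a cutoff of 0.1
# cycles/sample is an arbitrary choice for illustration
cutoff = 0.1
h_trunc = 2 * cutoff * np.sinc(2 * cutoff * n)

# Multiply by a window that tapers to (near) zero at both ends
h_windowed = h_trunc * np.hamming(num_taps)
```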

And even though we haven’t gotten into filter design yet, here is the Python code that generated that filter:

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

num_taps = 51 # it helps to use an odd number of taps
cut_off = 3000 # Hz
sample_rate = 32000 # Hz

# create our low pass filter
h = signal.firwin(num_taps, cut_off, fs=sample_rate)

# plot the impulse response
plt.plot(h, '.-')
plt.show()

Simply plotting this array of floats gives us the filter’s impulse response:

And here is the code that was used to produce the frequency response, shown earlier.

Well, how do we convert from the frequency domain back to the time domain?

If you provide the sample rate via fs, then the units of your cutoff frequency and transition width are in Hz, but if you don’t provide it then they will be in units of normalized Hz (0 to 1 Hz). The pass_zero parameter is True by default, but if you want a high-pass or band-pass filter then you must set it to False; it indicates whether 0 Hz should be included in the passband.
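To answer the frequency-to-time-domain question: take the inverse FFT of the desired response. A sketch using the triangular H from earlier (the ifftshift calls are needed because our H was arranged from -0.5 to +0.5 Hz while the FFT convention puts DC first; and because this H is not symmetric about 0 Hz, the resulting taps come out complex-valued):

```python
import numpy as np

# The desired (triangular) frequency response from earlier, arranged -0.5 to +0.5 Hz
H = np.hstack((np.zeros(200), np.arange(100) / 100, np.zeros(200)))

# Undo the centered arrangement before the inverse FFT, then re-center
# the resulting impulse response so its peak sits in the middle
h = np.fft.ifftshift(np.fft.ifft(np.fft.ifftshift(H)))
```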


The resulting filter had 77 taps.

Back to filter representation. Convolution is equal to a cross-correlation, defined as \(\int f(\tau) g(t+\tau) d\tau\), when \(g(t)\) is symmetrical, i.e., it doesn’t change when flipped about the origin.
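This equivalence can be checked numerically with a symmetric g, e.g., a Hamming window:

```python
import numpy as np

np.random.seed(0)
f = np.random.randn(100)
g = np.hamming(21)                   # symmetric about its center

# For a real, symmetric g, flipping it changes nothing,
# so convolution and cross-correlation agree
conv = np.convolve(f, g, 'full')
xcorr = np.correlate(f, g, 'full')
```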

Filter Implementation

We aren’t going to dive too deeply into the implementation of filters.

For filters symmetrical in the frequency domain, these floats will be real (versus complex), and there tends to be an odd number of them. Starting from the code above that gave us h2:

# Simulate signal comprising of Gaussian noise
N = 100000 # signal length
x = np.random.randn(N) + 1j*np.random.randn(N) # complex signal

# Save PSD of the input signal
PSD_input = 10*np.log10(np.fft.fftshift(np.abs(np.fft.fft(x))**2)/len(x))

# Apply filter
x = fftconvolve(x, h2, 'same')

# Look at PSD of the output signal
PSD_output = 10*np.log10(np.fft.fftshift(np.abs(np.fft.fft(x))**2)/len(x))
f = np.linspace(-sample_rate/2/1e6, sample_rate/2/1e6, len(PSD_output))
plt.plot(f, PSD_input, alpha=0.8)
plt.plot(f, PSD_output, alpha=0.8)
plt.xlabel('Frequency [MHz]')
plt.ylabel('PSD [dB]')
plt.axis([sample_rate/-2/1e6, sample_rate/2/1e6, -40, 20])
plt.legend(['Input', 'Output'], loc=1)
plt.grid()
plt.savefig('../_images/fftconvolve.svg', bbox_inches='tight')
plt.show()

We can see that the bandpass portion is 3 dB lower than the lowpass portion:

As an aside, there is one more obscure option for applying the filter to a signal, called filtfilt, that performs “zero-phase filtering”, which helps preserve features in a filtered time waveform exactly where they occur in the unfiltered signal.
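A toy demonstration of the difference (the filter and signal here are arbitrary choices): an impulse-like feature filtered causally with lfilter shows up (num_taps - 1)/2 samples late, while filtfilt leaves it exactly where it occurred:

```python
import numpy as np
from scipy.signal import lfilter, filtfilt, firwin

num_taps = 51
h = firwin(num_taps, 0.1)         # symmetric FIR: group delay = (num_taps - 1)/2
x = np.zeros(500)
x[200] = 1.0                      # impulse-like feature located at sample 200

y_causal = lfilter(h, 1, x)       # peak shows up 25 samples late, at sample 225
y_zerophase = filtfilt(h, 1, x)   # forward-backward filtering: peak stays at 200
```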