Question: <p>What's the advantage of using the Bilinear Transform?</p>
<p><span class="math-container">$$H_d(z) = H_c(s)\bigg|_{s=\frac{2}{T_s}\frac{z-1}{z+1}}$$</span></p>
<p>When you can just use this equation?</p>
<p><span class="math-container">$$H_d(\omega) = H_c(\Omega)\bigg|_{\Omega=\omega/T_s}$$</span></p>
<p>In other words, why does the bilinear transform exist? These two equations look almost the same to me... What are the tradeoffs between using one equation versus the other? Is there a case where you would use one versus the other?</p>
Answer: <p><span class="math-container">$$\left . H_d(z) = H_c(s) \right |_{s = \frac{2}{T_s}\frac{z-1}{z+1}}$$</span> describes a transfer function in the <span class="math-container">$z$</span> domain that you can easily translate into a difference equation and realize in software.</p>
<p><span class="math-container">$$\left . H_d(\omega) = H_c(\Omega) \right |_{\Omega = \frac{\omega}{T_s}}$$</span> describes an idealized frequency response that you would like <span class="math-container">$H_d$</span> to have when you are done realizing it physically in software.</p>
<p>They're different. Note particularly the "would like to have" -- any translation from the continuous-time domain to the discrete-time domain is an approximation; part of your job is to make sure it's both practically realizable and good enough.</p>
<p>Note that there are other ways of approximating <span class="math-container">$H_c$</span> with some <span class="math-container">$H_d$</span> -- the bilinear transform is just one way. It has a lot of currency because it's conceptually simple and works pretty well. It also has a lot of currency because it's easy to do with pencil, paper, and a slide rule -- today, there are numerical optimization techniques that can get you closer to some desired filtering goal for less work (but -- I can never remember the search terms :( )</p>
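<p>As a quick numerical illustration (a sketch, not part of the original answer: it assumes a first-order RC prototype, a 1 kHz cutoff, and a 48 kHz sample rate, and uses SciPy), the bilinear substitution can be applied and checked in a few lines:</p>

```python
import numpy as np
from scipy import signal

# Assumed example: first-order analog lowpass H_c(s) = 1 / (1 + s/wc)
fs = 48000.0            # assumed sample rate, Hz
fc = 1000.0             # assumed analog cutoff, Hz
wc = 2 * np.pi * fc

b_s, a_s = [1.0], [1.0 / wc, 1.0]         # continuous-time coefficients
b_z, a_z = signal.bilinear(b_s, a_s, fs)  # substitute s = 2*fs*(z-1)/(z+1)

# Evaluate the discrete response at DC and at the cutoff frequency
w, H = signal.freqz(b_z, a_z, worN=[0.0, 2 * np.pi * fc / fs])
print(abs(H))  # gain at DC (exactly 1) and at fc (~0.707, fc << fs)
```

<p>The mapping is exact at DC (the substitution sends $s=0$ to $z=1$) and warps increasingly toward Nyquist; that warping is the price paid for a realizable discrete-time filter.</p>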
|
https://dsp.stackexchange.com/questions/68567/converting-analog-filter-into-digital-filter-why-bilinear-transform
|
Question: <p>I'm attempting to design an FIR high-pass filter that keeps signals above 200 Hz and rejects signals below 60 Hz, with a sampling frequency of 500 samples/sec. This is my first time attempting this and I'm a little confused.</p>
<p>I started with looking at pole/zero placement. I'm not exactly sure how to go about this but this is what I did. I want a zero on the unit circle at the 60Hz point to minimize that frequency. To find this spot on the axis I did the following:</p>
<p>$$\frac{60}{\frac{500}{2}}\pi=0.24\pi$$</p>
<p>So I believe I need a zero at $0.24\pi$ on the unit circle in the z-plane. Does this sound right? How would I then transform it into a difference equation in the sequence domain?</p>
Answer: <p>This is how you can try to design a short and simple FIR filter by hand. Note that this method is just supposed to be enlightening, a really useful filter can be designed using some software. If you want a zero at 60 Hz you indeed need to place it at an angle of $0.24\pi$ on the unit circle of the $z$-plane. The general relation between angle and frequency is</p>
<p>$$\phi=2\pi\frac{f}{f_s}$$</p>
<p>where $f$ is the desired frequency, and $f_s$ is the sampling frequency. Note that if the filter coefficients are real-valued, zeros (and poles) always occur in complex conjugate pairs. So if you have a zero at $z=z_0$ you must also have a zero at $z=z_0^*$. This just reflects the fact that the spectrum of a real-valued filter is symmetric: $H(\omega)=H^*(-\omega)$. Since you want to design a causal FIR filter, all poles are at the origin of the $z$-plane, and you only have control over the zeros.</p>
<p>So let's design an FIR filter of order $3$ (i.e. length $4$) with a zero at $z_0=e^{j0.24\pi}$, and - because it's a high pass filter - a zero at $z=1$ (i.e., at DC). The zero at $z=1$ contributes a factor</p>
<p>$$H_1(z)=1-z^{-1}$$</p>
<p>to the total transfer function, and the zero at $z_0=e^{j0.24\pi}$ (and its complex conjugate counterpart at $z_0^*=e^{-j0.24\pi}$) contributes a factor</p>
<p>$$H_2(z)=(1-z_0z^{-1})(1-z_0^*z^{-1})=1-(z_0+z_0^*)z^{-1}+|z_0|^2z^{-2}=\\
=1-2\Re\{z_0\}z^{-1}+z^{-2}=1-2\cos(0.24\pi)z^{-1}+z^{-2}$$</p>
<p>because $z_0+z_0^*=2\Re\{z_0\}=2\cos(0.24\pi)$, and $|z_0|=1$ (the zero is on the unit circle). The total transfer function is</p>
<p>$$H(z)=H_1(z)H_2(z)=(1-z^{-1})(1-2\cos(0.24\pi)z^{-1}+z^{-2})=\\
=1-[1+2\cos(0.24\pi)]z^{-1}+[1+2\cos(0.24\pi)]z^{-2}-z^{-3}\tag{1}$$</p>
<p>From (1) you can read the filter coefficients:</p>
<p>$$h_0=1,\quad h_1=-[1+2\cos(0.24\pi)]=-2.4579,\quad h_2=-h_1,\quad h_3=-h_0\tag{2}$$</p>
<p>Note that we just imposed the locations of the zeros but we didn't apply any normalization. A useful normalization for a highpass filter is to require $H(-1)=1$, i.e. the response at the Nyquist frequency (half the sampling frequency) is $1$. Since</p>
<p>$$H(-1)=\sum_n(-1)^nh[n]=h[0]-h[1]+h[2]-h[3]=2h[0]-2h[1]$$</p>
<p>we can define new normalized filter coefficients by</p>
<p>$$\hat{h}[n]=\frac{h[n]}{2(h[0]-h[1])},\quad n=0,1,2,3$$</p>
<p>The magnitude of the resulting frequency response of this filter looks like this (the phase is linear due to the symmetry of the coefficients):</p>
<p><img src="https://i.sstatic.net/2rNXp.png" alt="enter image description here"></p>
<p>You see that the three constraints (zeros at DC and at 60 Hz, and unity response at Nyquist) are satisfied. The transition from stopband to passband is of course not very steep but that's as good as it gets with an FIR filter of order 3.</p>
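<p>The derivation above can be verified numerically. A minimal NumPy sketch (the coefficients are exactly those from the derivation; the helper <code>H</code> is just a direct evaluation of the transfer function):</p>

```python
import numpy as np

# Unnormalized coefficients from the derivation: zeros at DC and at 0.24*pi
c = 1 + 2 * np.cos(0.24 * np.pi)
h = np.array([1.0, -c, c, -1.0])

# Normalize so that H(-1) = 1 (unity gain at Nyquist)
h_hat = h / (2 * (h[0] - h[1]))

def H(coeffs, z):
    """Evaluate H(z) = sum_n h[n] z^{-n}."""
    return sum(hn * z ** (-n) for n, hn in enumerate(coeffs))

print(abs(H(h_hat, np.exp(1j * 0.24 * np.pi))))  # ~0: zero at 60 Hz
print(abs(H(h_hat, 1.0)))                        # 0: zero at DC
print(H(h_hat, -1.0))                            # 1: unity at Nyquist
```

<p>All three constraints from the text (zeros at DC and 60 Hz, unity response at Nyquist) come out satisfied.</p>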
|
https://dsp.stackexchange.com/questions/19062/pole-zero-placement-for-filter
|
Question: <p>There are a plethora of tools, both commercial and free, which I have found online for designing filters. The ones I have tried (so far) prompt for the frequency response and number of taps (for FIR), then generate coefficients and a frequency response plot.</p>
<p><strong>Question</strong>: are there any particular sites or programs which allow you to input or modify the coefficients? Free would be preferable, but (especially for an FIR with a large number of taps) commercial will do in a pinch.<br>
(Matlab is too expensive for me, but I've heard of freeware Octave: if anyone recommends it for filter design I'd be willing to try it out)</p>
Answer: <p>I believe that Octave and Python/Scipy are the most commonly used free software packages for digital signal processing (including filter design). They offer of course much more than you might be looking for. On the other hand, if you want to play around with the coefficients and get some quick feedback, it might be very helpful to have one of these (or similar) packages because then you can easily modify the filter, plot frequency responses, and even write your own little programs if necessary. I've used Octave for that purpose and for me it worked fine. If you use Octave, you'll need to install the signal processing toolbox.</p>
<p>A totally different approach would be to choose your favorite programming language and write some simple routines yourself. This is perfectly feasible for some common FIR design methods (such as windowing). In combination with some plotting software this may be all you need. In this case I would consult <a href="http://dsp-book.narod.ru/DSPMW/11.PDF" rel="nofollow">this document</a> on digital filtering. This latter approach is of course not the most efficient one because you'll be re-inventing the wheel, but in this way you'll learn how it all works and you will not just be using a black box design method.</p>
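<p>As an example of the do-it-yourself route, here is a sketch of the windowing method mentioned above, in Python/NumPy (the length of 101 taps and the cutoff of 0.1 cycles/sample are arbitrary example values):</p>

```python
import numpy as np

def windowed_sinc_lowpass(num_taps, cutoff):
    """Lowpass FIR via the windowing method.
    cutoff is in cycles/sample (0 .. 0.5)."""
    n = np.arange(num_taps)
    m = n - (num_taps - 1) / 2                # center the sinc
    h = 2 * cutoff * np.sinc(2 * cutoff * m)  # ideal (truncated) impulse response
    h *= np.hamming(num_taps)                 # taper to reduce ripple
    return h / h.sum()                        # normalize to unity gain at DC

h = windowed_sinc_lowpass(101, 0.1)

def gain(coeffs, f):
    """Magnitude response at frequency f (cycles/sample)."""
    n = np.arange(len(coeffs))
    return abs(np.sum(coeffs * np.exp(-2j * np.pi * f * n)))

print(gain(h, 0.0))   # 1.0 (passband)
print(gain(h, 0.3))   # ~0 (stopband)
```

<p>Once you can compute the coefficients and the response yourself like this, "playing with the coefficients" is just editing an array and re-plotting.</p>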
|
https://dsp.stackexchange.com/questions/18851/filter-design-analysis-apps
|
Question: <p>Hi, I am a little confused about what the notation in the following statement means.</p>
<p>$$ H_{k}(z)= H(W_{4}^{k} z), k = 0,...,3$$</p>
<p>It comes from a question in which I have designed a FIR low-pass filter $H(z)$ and my goal is to implement a DFT filter bank scheme like this:</p>
<p><img src="https://i.sstatic.net/MZMVd.png" alt="Filter bank showing decimation by M points, application of filters $P_{k}(z)$, application of M-point DFT, output as signals y_{k}[m], and reconstruction in reverse"></p>
<p>Exchange $P(z)$ for $H(z)$; $k$ corresponds to the subscript of $P$, and $M$ in this case equals $4$.</p>
<p>I guess I am confused about how to find $H_{k}(z)$, or what exactly a polyphase filter is.</p>
Answer: <p>$H_k$ are modulated versions of the low-pass filter (band-pass instead of low-pass).</p>
<p>$$
W_{4}^{k} = e^{-2j\pi k /4}.
$$
For $z = e^{j\omega}$: $$H(W_{4}^{k} z) = H(e^{j(\omega-2\pi k/4)})$$
This means that the filters $H_k$ are shifted in frequency- these are the band pass filters you want to get using your filter bank.
For $k=0$, $H_0$ will pass the frequencies $[-\frac{\pi}{4}, \frac{\pi}{4}]$; for $k=1$, $[\frac{\pi}{4}, \frac{3\pi}{4}]$; etc.</p>
<p>In the DFT filter bank scheme,
$y_0[m]$ are the outputs from $H_0$, $y_1[m]$ are the outputs from $H_1$, and so on.</p>
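<p>To make the modulation concrete, here is a small NumPy sketch (the length-4 averaging prototype is an assumed example, not taken from the question):</p>

```python
import numpy as np

M = 4
h = np.ones(M) / M          # assumed lowpass prototype H(z)
n = np.arange(M)

def freq_resp(coeffs, w):
    """H(e^{jw}) = sum_n h[n] e^{-jwn}."""
    return np.sum(coeffs * np.exp(-1j * w * n))

# H_k(z) = H(W_M^k z) with W_M = e^{-2j*pi/M} is equivalent to
# modulating the impulse response: h_k[n] = h[n] * e^{+2j*pi*k*n/M}
for k in range(M):
    h_k = h * np.exp(2j * np.pi * k * n / M)
    # H_k peaks where H peaked, shifted to w = 2*pi*k/M
    print(k, abs(freq_resp(h_k, 2 * np.pi * k / M)))  # ~1 for every k
```

<p>Each modulated filter has the same shape as the prototype, translated to be centered on its own DFT bin.</p>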
|
https://dsp.stackexchange.com/questions/1340/polyphase-filter-notation
|
Question: <p>I have filtered out the low frequency noise from the signal. For further analysis, I want to compare it with FIR filters (for which I require cut off frequency). Hence, I took the power density spectrum of noise using Welch method and found its peak and used the same as cut off frequency. The PSD looks like this <img src="https://i.sstatic.net/OZIXv.jpg" alt="https://i.sstatic.net/OZIXv.jpg">.</p>
<p>What would be the cut-off frequency? The peak occurs at $0.0031$ Hz; or should I take a value between $0$ and $0.5$ Hz?</p>
<p>Note: The frequency range of my signal is $0-2$ Hz</p>
Answer:
|
https://dsp.stackexchange.com/questions/24707/how-to-determine-cut-off-frequency-for-high-pass-filter-using-spectral-density
|
Question: <p>Write a MATLAB listing that designs a low-pass IIR filter based on a Butterworth prototype (Rp=2 dB, Rr=40 dB, fp=1000 Hz, fr=1300 Hz, fs=5000 Hz), using the bilinear transform and without built-in functions. Check the point (fr, Rr). Plot the frequency response (linear scale and dB scale). </p>
<p><a href="https://i.sstatic.net/o0Wlw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o0Wlw.png" alt="formulas"></a> </p>
Answer:
|
https://dsp.stackexchange.com/questions/25673/design-iir-butterworth-filter-using-bilinear-transform
|
Question: <p>Let's say I made this block diagram and I want to explain it:</p>
<p><a href="https://i.sstatic.net/rIxpl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rIxpl.png" alt="enter image description here"></a></p>
<p>FYI: $x$ is a signal and each $y$ box is a matrix</p>
<p><strong><em>I want to say that:</em></strong></p>
<p>The signal $x$ is multiplied by each matrix $y$ in the different branches independently to produce
$f_0=xy_0$ </p>
<p>$f_1=xy_1$ </p>
<p>... </p>
<p>$f_m=xy_m$</p>
<p>I want to ask the experts in the signal processing world if this is formal enough? or am I sounding weird?</p>
<p><strong><em>Edit 1:</em></strong></p>
<p><em>What if I said:</em></p>
<p>The signal $x$ is multiplied by a set of matrices $y_k$ where $k = 0,1,..., m$</p>
<p>does it sound better? or would it imply that $x$ is multiplied like: $x*y_0*y_1*...*y_k$</p>
<p>which is <strong>NOT</strong> what I intend to say</p>
Answer: <p>What I would do is:</p>
<ul>
<li>Add $m$ multiplier blocks, using a circle with a $\times$ inside</li>
<li>Each multiplier would have inputs $X$ and $Y_k$</li>
<li>Each multiplier would have output $f_k$</li>
<li>Be consistent in the use of bold typeface, uppercase and lowercase</li>
</ul>
<p>Then I would say something like "The system has input $X$ and outputs $f_k=XY_k$, for $k = 0,\ldots,m$. In the diagram, $\times$ stands for signal-matrix multiplication, which is defined as follows...".</p>
|
https://dsp.stackexchange.com/questions/30292/describing-a-block-diagram-how-do-i-describe-multiplying-a-signal-through-mul
|
Question: <p>I am following the process described in this question: <a href="https://dsp.stackexchange.com/questions/31028/transfer-function-of-second-order-notch-filter">Transfer function of second order notch filter</a>. I want to create a notch filter with the suppressed band centered at <span class="math-container">$f_c = 4000$</span> Hz, so using <span class="math-container">$\omega_n = 2\pi f_c / f_s$</span> (<span class="math-container">$f_s = 48000$</span>), I obtained <span class="math-container">$\omega_n = \frac{\pi}{6}$</span>; then I used the exact same formula, with <span class="math-container">$a = 0.8$</span>. The pole-zero plot I get has the zero at <span class="math-container">$1$</span> and a pole at <span class="math-container">$0.8$</span>; is this correct? I am getting half of the filter, since the filter is centered at <span class="math-container">$0$</span> and not at <span class="math-container">$4000$</span> Hz.</p>
<p>As far as I know it must be centered at the <span class="math-container">$\omega_n$</span> I have (based on <span class="math-container">$f = 4000$</span> Hz), but I am not sure why it is centered at <span class="math-container">$0$</span>, or how to center it at the desired frequency.
I get a pole-zero plot like this one, with zeros at <span class="math-container">$1$</span> and poles at <span class="math-container">$0.8$</span>.</p>
<p><a href="https://i.sstatic.net/ScRZ5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ScRZ5.png" alt="enter image description here"></a></p>
<p>Please advise.</p>
Answer: <p>This Octave / Matlab code gives you a 2nd-order notch filter at <span class="math-container">$\omega_n = \pi/6$</span></p>
<pre><code>r = 0.99; % notch radius (closer to 1 stiffer)
wn = pi/6; % notch radian frequency...
% Create the 2nd order NOTCH filter coefficients b() and a()
b = r*conv([1, -exp(j*wn)],[1, -exp(-j*wn)]);
a = conv([1, -r*exp(j*wn)],[1, -r*exp(-j*wn)]);
figure,freqz(b,a,2048);
</code></pre>
<p>with the following result:</p>
<p><a href="https://i.sstatic.net/rtQSX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rtQSX.jpg" alt="enter image description here"></a></p>
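<p>The same design can be reproduced and checked with Python/NumPy for readers without Octave (this mirrors the snippet above; the helper <code>H</code> just evaluates the transfer function on the unit circle):</p>

```python
import numpy as np

r = 0.99          # notch radius (closer to 1 is stiffer)
wn = np.pi / 6    # notch frequency (4000 Hz at fs = 48000 Hz)

# b and a as in the Octave code: conjugate zero pair on the unit
# circle at +/- wn, pole pair at radius r just behind the zeros
b = r * np.convolve([1, -np.exp(1j * wn)], [1, -np.exp(-1j * wn)]).real
a = np.convolve([1, -r * np.exp(1j * wn)], [1, -r * np.exp(-1j * wn)]).real

def H(w):
    """Evaluate b(z)/a(z) at z = e^{jw}."""
    z = np.exp(1j * w)
    return np.polyval(b, z) / np.polyval(a, z)

print(abs(H(wn)))    # ~0 at the notch
print(abs(H(0.0)))   # ~1 at DC
```
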
|
https://dsp.stackexchange.com/questions/53695/doubts-about-notch-filter-design
|
Question: <p>In DSP class we have <a href="http://ce.sharif.edu/courses/93-94/1/ce763-2/resources/root/Lecture%20Notes/Lec09-FiltersIntroduction2.pdf" rel="nofollow noreferrer">this</a> slide:</p>
<p><a href="https://i.sstatic.net/IfhM4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IfhM4.jpg" alt="enter image description here"></a></p>
<p>So I would like to know: why is this classification so important?</p>
<p>I have seen this <a href="https://dsp.stackexchange.com/questions/42443/conjugate-reciprocal-pairs-of-zeros-and-poles-in-fir-design">post</a> :</p>
<blockquote>
<p>Conjugate reciprocal pairs of zeros and poles in ...</p>
<p>Assuming the impulse response $h[n]$ of an FIR filter is real for
all $n$,</p>
<p>Why are zeros and poles in FIR design found in reciprocal and
conjugate pairs? Is the assumption necessary for this phenomenon to
take place?</p>
</blockquote>
<p>But I don't get my answer (maybe because I'm a newbie in this field).</p>
Answer: <p>Consider</p>
<p><span class="math-container">$$ h[n] = \sum_{k=0}^{M} b_k \delta[n-k] ~~~~\longleftrightarrow ~~~~H(z) = \sum_{k=0}^{M} b_k z^{-k} $$</span></p>
<p>where <span class="math-container">$b_k$</span> are impulse response coefficients, and and <span class="math-container">$H(z)$</span> is the corresponding Z-Transform.</p>
<p>If an FIR filter is <strong>real</strong>; i.e., its coefficients <span class="math-container">$b_k$</span> are real, then its Z-transform <span class="math-container">$H(z)$</span> is also a polynomial of real coefficients. Then the <strong>roots</strong> of such a polynomial (i.e., zeros of <span class="math-container">$H(z)$</span>) are either real or complex-conjugate pairs. This explains why a real FIR filter's zeros are in <strong>conjugate</strong> pairs if they are complex.</p>
<p>To explain why they are also <strong>reciprocals</strong>, we need to consider the <strong>linear phase</strong> property (without losing generality) expressed as:</p>
<p><span class="math-container">$$ h[n] = h[-n] ~~~\implies ~~~ H(z) = H(1/z) $$</span></p>
<p>where <span class="math-container">$H(1/z)$</span> is the z-transform of <span class="math-container">$h[-n]$</span>. This means that if <span class="math-container">$z_0$</span> is a zero of <span class="math-container">$H(z)$</span>, i.e., <span class="math-container">$H(z_0) = 0$</span>, then its <strong>reciprocal</strong> <span class="math-container">$1/z_0$</span> is also a zero: <span class="math-container">$H(1/z_0) = H(z_0) = 0$</span>.</p>
<p>Therefore, for a <strong>real</strong> and <strong>linear phase</strong> FIR filter, if <span class="math-container">$z_0$</span> is a zero of <span class="math-container">$H(z)$</span>, then <span class="math-container">$z_0^*$</span>, <span class="math-container">$1/z_0$</span>, and <span class="math-container">$1/z_0^*$</span> are also zeros. Hence the zeros of such FIR filters come in groups of four when they are complex (and off the unit circle).</p>
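<p>This quadruple structure is easy to verify numerically; a short NumPy sketch (the zero location <code>0.8*exp(j*pi/4)</code> is an arbitrary example):</p>

```python
import numpy as np

z0 = 0.8 * np.exp(1j * np.pi / 4)            # an arbitrary complex zero
quad = [z0, np.conj(z0), 1 / z0, 1 / np.conj(z0)]

# Polynomial with exactly these four roots
h = np.poly(quad)

print(np.round(h.imag, 12))                  # all ~0: coefficients are real
print(np.allclose(h.real, h.real[::-1]))     # True: symmetric (linear phase)
```

<p>The conjugate pairing makes the coefficients real, and the reciprocal pairing makes them symmetric, which is exactly the linear-phase condition used above.</p>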
|
https://dsp.stackexchange.com/questions/62794/what-is-importance-of-the-conjugate-reciprocal-and-none-reciprocal-zeros-in-fir
|
Question: <p>I don't understand why my output graphs are not showing the real frequencies; I've tried to change it in a number of ways with no luck. So far I'm stuck.</p>
<p>This is what I have done so far while implementing a LowPassFilter using the frequency sampling method:</p>
<pre><code>M=63
Wp=0.25*pi %Number of samples/passband cutoff
%frequency.
m=0:(M+1)/2
Wm=3*Wp*m./(M+1) %sampling points and the stopband
%cutoff frequency
mtr=floor(Wp*(M+1)/(2*pi))+2
Ad=[Wm<=Wp]
Ad(mtr)=0.38
fs=4*Wp %sample frequency
Hd=Ad.*exp(-j*0.5*M*Wm) %frequency sampling vector.
Hd=[Hd conj(fliplr(Hd(2:(M+1)/2)))] %fliplr(A) returns A with its
%columns flipped in the left-right
%direction.
h=real(ifft(Hd)); %h(n)=IDFT(H(k))
w=linspace(0,pi,1000); %get 1000 row vectors between 0 and pi
H=freqz(h,[1],w); %the amplitude/frequency diagram of the filter.
%Frequency response of digital filter.
figure(1)
plot(w/pi,20*log10(abs(H))); %parameters are respectively the
%normalized frequency and
%amplitude.
xlabel('Normalized frequency');
ylabel('Gain/dB');
title('LowPass Filter - Gain response');
axis([0 1 -50 2]);
f1=100; %frequencies of sine input signals.
f2=300;
f3=700;
figure(2)
subplot(211)
T=1/fs; % Sampling period
L=200000; % Length of signal (time duration)
t=(0:L-1)*T; % Time vector
f = fs/2*linspace(0,1,L/2+1); % single-sided positive frequency
s=sin(2*pi*f1*t)+sin(2*pi*f2*t)+sin(2*pi*f3*t); %input signal definition
plot(t,s); %diagmram plot before
%filtering
xlabel('time [Sec]');
ylabel('Amplitude');
title('Time domain diagram before filtering');
axis([0 200 -3 3]);
subplot(212)
n = 2^nextpow2(L);
Fs=fft(s)/L;
AFs=abs(Fs/n); %transformation to frequency domain.
plot(f,AFs(1:L/2+1)); %frequency Domain diagram plot before filtering.
xlabel('Frequency [Hz]');
ylabel('Amplitude');
title('Frequency Domain diagram before filtering')
</code></pre>
<p>The output so far:</p>
<ul>
<li>I'm trying to make the x axis present the original frequencies.</li>
</ul>
<p><a href="https://i.sstatic.net/0uefK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0uefK.jpg" alt="LowPassFilter Gain Response" /></a></p>
<p>And again, the graph "Frequency Domain diagram before filtering", is not presenting the original frequencies:
<a href="https://i.sstatic.net/oUuKV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oUuKV.jpg" alt="Frequency Domain diagram before filtering" /></a></p>
<p>What am I doing wrong?</p>
Answer: <p>To change the filter's frequency axis to units of Hz, multiply it by half the sampling rate, as below, where fs is the sampling rate:</p>
<pre><code>plot(w/pi * fs/2,20*log10(abs(H)));
</code></pre>
<p>For your lower plot you are not sampling high enough for the desired frequencies to lie in the first Nyquist zone. The frequencies desired are 100, 300 and 700 Hz, but the sampling rate is given as <span class="math-container">$\pi$</span> (which would therefore more generally be the normalized radian frequency). Either provide the actual sampling rate in Hz, high enough that 700 Hz is in the first Nyquist zone (<span class="math-container">$f_s \gg 1400$</span> Hz), or describe the frequency components in <span class="math-container">$s$</span> as normalized frequencies by dividing by the actual sampling rate. For example, if the sampling rate were 2 kHz, then f3 would be <span class="math-container">$700/2000 = 0.35$</span> cycles/sample, and the frequency axis would be adjusted just as done above with the filter, by multiplying by half the sampling rate.</p>
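<p>The second point can be sketched in Python/NumPy as well (the 2 kHz sample rate is an assumed example, chosen so that 700 Hz lies in the first Nyquist zone; one full second of signal gives 1 Hz frequency bins):</p>

```python
import numpy as np

fs = 2000.0                   # assumed sample rate, Hz (> 2 * 700)
L = 2000                      # one second of signal -> 1 Hz FFT bins
t = np.arange(L) / fs
s = (np.sin(2 * np.pi * 100 * t)
     + np.sin(2 * np.pi * 300 * t)
     + np.sin(2 * np.pi * 700 * t))

S = np.abs(np.fft.rfft(s))
f = np.fft.rfftfreq(L, d=1 / fs)   # frequency axis directly in Hz

peaks = f[np.argsort(S)[-3:]]      # three largest spectral peaks
print(np.sort(peaks))              # peaks at 100, 300 and 700 Hz
```

<p>Because the axis is built from the sample rate (via <code>rfftfreq</code>), the peaks land at the actual input frequencies with no manual rescaling.</p>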
|
https://dsp.stackexchange.com/questions/80673/fir-filter-how-can-i-change-the-axis-to-unnormalized
|
Question: <p>I've been on it for a few hours trying to tweak it in all sorts of ways, but the output comes out scrambled and I can't understand why.</p>
<p>I am trying to implement an HPF with a stopband frequency of 500Hz and passband frequency of 600Hz.
This is what I've done so far:</p>
<pre><code>M=131072; %the number of samples
Wp=1200*pi; %passband cutoff frequency
m=0:M/2 %the sampling points
q=length(m);
Wm=2*pi*m./(M+1) %stopband cutoff frequency
mtr=ceil(Wp/(2*pi)) %round to positive part,i.e.ceil(3.5)=4;ceil(-3.2)=-3;
Ad=(Wm>=(Wp/(M+1)))
Ad(mtr)=0.38
Hd=Ad.*exp(-j*0.5*M*Wm) %frequency domain sampling vector H(k)
Hd=[Hd conj(fliplr(Hd(2:M/2+1)))]
h=real(ifft(Hd))
w=linspace(0,q,1966140) %linspace(x1,x2,n) generates n points. The spacing between the
%points is (x2-x1)/(n-1).
H=freqz(h,(1),w); %the amplitude -frequency characteristic diagram of the filter
figure(1)
plot(w,20*log10(abs(H))) %parameters are respectively the normalized frequency and
%amplitude
xlabel('the normailzed frequency');ylabel('gian/dB');
title('Gain response - HighPass Filter');
axis([0 2000 -50 2]);
</code></pre>
<p>the output of:</p>
<pre><code>Ad=(Wm>=(Wp/(M+1)))
</code></pre>
<p>gives all 1's after cell 601 in the array (601 included).</p>
<p>after:</p>
<pre><code>Ad(mtr)=0.38
</code></pre>
<p>I get all 1's after cell 600 in the array (600 included).</p>
<p>I think everything is ok until:</p>
<pre><code>Hd=[Hd conj(fliplr(Hd(2:M/2+1)))]
</code></pre>
<p>and maybe something gets scrambled with something here:</p>
<pre><code>w=linspace(0,q,1966140)
H=freqz(h,(1),w);
figure(4)
plot(w,20*log10(abs(H)))
</code></pre>
<p>the output is:
<a href="https://i.sstatic.net/XQXa0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XQXa0.jpg" alt="enter image description here" /></a></p>
Answer: <p>You have no concept of a sample rate in your code. Everything in digital signal processing is relative to the sample rate. You either need to work in normalized frequency (from <span class="math-container">$-\pi$</span> to <span class="math-container">$\pi$</span>) or absolute frequency (from <span class="math-container">$-f_s/2$</span> to <span class="math-container">$f_s/2$</span>). Your frequency axis makes no sense because it contains thousands of periods, and that's why you are seeing this "modulation".</p>
<p>If you want something to happen at 600Hz you need to pick a sample rate.</p>
<p>In general, zeroing out FFT bins is not a great way to implement high or lowpass filters. See for example <a href="https://dsp.stackexchange.com/questions/6220/why-is-it-a-bad-idea-to-filter-by-zeroing-out-fft-bins">Why is it a bad idea to filter by zeroing out FFT bins?</a></p>
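<p>As a sketch of the "pick a sample rate" advice (using SciPy's <code>firwin</code> rather than FFT-bin zeroing; the 2 kHz rate, 201-tap length and 550 Hz cutoff are assumed example values, not from the question):</p>

```python
import numpy as np
from scipy import signal

fs = 2000.0     # assumed sample rate: the 600 Hz passband must be below fs/2
numtaps = 201   # odd length is required for an FIR highpass
cutoff = 550.0  # midway between the 500 Hz stopband and 600 Hz passband edges

h = signal.firwin(numtaps, cutoff, fs=fs, pass_zero=False)

# Check the response at the two band edges, in real Hz
w, H = signal.freqz(h, worN=[500.0, 600.0], fs=fs)
print(np.abs(H))   # small at 500 Hz, ~1 at 600 Hz
```

<p>Once a sample rate exists, "500 Hz" and "600 Hz" become well-defined points on the frequency axis, and the design can be verified directly against the spec.</p>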
|
https://dsp.stackexchange.com/questions/80704/implementing-hpf-using-frequency-sampling-method
|
Question: <p>I want to design a digital filter for pulse shaping. The pulses have a 100 µs fall time, the sampling rate is 100 megasamples/sec, and the shaping time is 5 µs. What should my coefficients be, and how can I obtain them using MATLAB or any other related software?</p>
Answer:
|
https://dsp.stackexchange.com/questions/2760/finding-the-coefficients-of-the-digital-filter
|
Question: <p>I have posted this question on "Electrical Engineering", but this seems a more appropriate place. I am trying to model a bireciprocal Cauer filter in LTspice but I don't get the expected results. More precisely, using this formula for the coefficients</p>
<p><span class="math-container">$\gamma=\frac{re(p_i)−1}{re(p_i)+1}$</span></p>
<p>where <span class="math-container">$re(p_i)$</span> is the realpart of the pole, gives this result:</p>
<p><a href="https://i.sstatic.net/dOE8W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dOE8W.png" alt="normal way" /></a></p>
<p>At this point it doesn't really matter what settings were in the beginning, the "why" is in the following. Among the few references I found online, one that gives a numerical example is a thesis, <i>Design and Realization Methods for IIR Multiple Notch Filters and High Speed Narrow-band and Wide-band Filters, L. Barbara Dai</i> and, simply by looking at the numbers and comparing them with what I had, it seemed as if the poles need to be "normalized" to the single real pole, <span class="math-container">$p_{\frac{N+1}{2}}$</span>. And so I tried:</p>
<p><span class="math-container">$\gamma=\frac{\frac{re(p_i)}{re_{\frac{N+1}{2}}}-1}{\frac{re(p_i)}{re_{\frac{N+1}{2}}}+1}$</span></p>
<p>and, even though the numerical values still differed (though not as much as before), I got this result:</p>
<p><a href="https://i.sstatic.net/X1Yqz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X1Yqz.png" alt=""normalized" coefficients" /></a></p>
<p>The example used here is not the one used in the thesis, but I seem to get good results (I cannot verify them) with either stop-band, or transition-band optimizations and for any (odd) order.</p>
<p>So, my question is: are the <span class="math-container">$A_i$</span> terms from the formula <span class="math-container">$\gamma=\frac{A_i-2}{A_i+2}$</span> calculated as <span class="math-container">$A_i=2\sigma_i$</span>, where <span class="math-container">$\sigma_i$</span> is the realpart of the complex <i>s-domain</i> <span class="math-container">$s_i=-\sigma_i\pm j \omega_i$</span> or somehow else? If else, how?</p>
<hr />
<p>Just for the sake of comparison, the following is a test using the same settings as in the thesis: <span class="math-container">$A_s=68 \Rightarrow A_p,\ \omega_s=\frac{2}{3} \Rightarrow \omega_p,\ f_0=2$</span>. The order is calculated based on these four parameters, which will result in a stop-band attenuation optimization, rather than a transition-band or a pass-band optimization. I say this because I don't know what approach Barbara Dai has.</p>
<p>The first simulation is with the raw values from the thesis for <span class="math-container">$\gamma_i$</span> (black trace) and the quantized values (blue trace) (surprisingly, the quantized values seem to get a better result):</p>
<p><img src="https://s27.postimg.org/txv5djaw3/thesis.png" alt="thesis" /></p>
<p>If I calculate the values for <span class="math-container">$\gamma_i$</span> according to the equation from p.26 from the thesis, I get these values:</p>
<p><span class="math-container">$\gamma_1=−0.098365443613057, \gamma_2=−0.34760115224764, \gamma_3=−0.7329991130665$</span></p>
<p>where the values for the real part(s) of the poles, <span class="math-container">$\sigma_i$</span>, are:</p>
<p><span class="math-container">$\sigma_1=0.15406868065906, \sigma_2=0.48411864791316, \sigma_3=0.82088758493805$</span></p>
<p>The results of the simulation with LTspice is this:</p>
<p><img src="https://s29.postimg.org/8h40dsfnr/test1.png" alt="my results" /></p>
<p>where the black trace is with the above coefficients and the blue trace is with Barbara Dai's unquantized.</p>
<p>Seeing this I tried to transform back the values for <span class="math-container">$\gamma_{1,2,3}$</span> from the thesis, to see what values for the poles were originally and compare them against my results:</p>
<p><span class="math-container">$\sigma_1^{BD}=0.15537305159045, \sigma_2^{BD}=0.48869842253758, \sigma_3^{BD}=0.83127957288835$</span></p>
<p>which are different from mine. However, at a glance, it seemed that I could try to divide each real pole from my calculations by the value of the single, real pole at <span class="math-container">$s_4=\sigma_4+j0 , \sigma_4=0.98572364533093$</span>, in order to calculate the values for the lattice coefficients (which is how the 2nd eq. from the beginning appeared), and I got these values:</p>
<p><span class="math-container">$\gamma_1=−0.091240471459003, \gamma_2=−0.34126450145251, \gamma_3=−0.72965482018797 $</span></p>
<p>and the result of the simulation is this:</p>
<p><img src="https://s16.postimg.org/bs9pdbupx/test2.png" alt="surprise!" /></p>
<p>with the blue trace being this result and the black trace Barbara Dai's unquantized, which seems even better even if the lobes in the stop-band aren't quite equiripple:</p>
<p><img src="https://s22.postimg.org/vwrlg7ykh/lobes.png" alt="lobes" /></p>
<hr />
<p>[edit]</p>
<p>The case of the BLWDF implies that, given the stop-band attenuation and frequency, the pass-band attenuation and frequency can be deduced, or vice-versa. For this case, I'll impose <span class="math-container">$A_s$</span> and <span class="math-container">$\omega_s$</span> and deduce <span class="math-container">$A_p=-10 \log_{10}(1-10^{-\frac{A_s}{10}})$</span> (eq. 2.51 in the above thesis) and <span class="math-container">$f_p=\frac{f_0}{2}-f_s$</span> (in the analog domain) or <span class="math-container">$\omega_p=\frac{1}{\omega_s}$</span> (in the digital domain, eq. 2.52a,b).</p>
<p>The example at p.27 gives <span class="math-container">$A_s=68,\ \omega_s=\frac{16}{48}\,\mathrm{kHz}=\frac{2}{3}$</span> (normalized to <span class="math-container">$\frac{f_0}{2}=1$</span>). From these: <span class="math-container">$A_p=6.8831e-7$</span> and <span class="math-container">$f_p=\frac{1}{3}$</span>, or <span class="math-container">$\omega_p=\frac{1}{1.732}=0.57735$</span>. Using these to find the poles would imply several approaches, due to the complexity of Cauer filters. I don't know what approach the thesis uses but, whichever the case, it shouldn't yield such differences as the ones shown in picture#3. For my case, I'll use stop-band optimization, obtained by imposing <span class="math-container">$A_s, A_p, \omega_s$</span> and <span class="math-container">$\omega_p$</span> and determining the order. The poles (zeroes are not needed here) are <span class="math-container">$\sigma_{1,2,3}$</span> below picture#3 and the result is picture#4.</p>
<p>If I try to reverse Barbara Dai's process, to determine what poles were used to calculate her version of <span class="math-container">$\gamma_i$</span>, I get the values of <span class="math-container">$\sigma_{1,2,3}^{BD}$</span> below picture#4, which are slightly different than mine.</p>
<p>At this point, back then when I obtained them, it seemed to me that I <i>could try</i> to divide each pole by <span class="math-container">$\sigma_4$</span>, the real, single pole, and so I did (second formula from above), which gave the results in picture#5. But this can't be the normal way of doing it; it was a whim tried at the moment, which gave quite the unexpected pleasant surprise. And now I'm left with the question: how are the poles derived in order to calculate the values for <span class="math-container">$\gamma_i$</span>? Because it's not meant to be any different than any other Cauer filter design, with the differences in symmetry due to the bireciprocal nature.</p>
<hr />
<p>Ultra-short-summary:</p>
<ul>
<li><p>using the coefficients calculated as <span class="math-container">$\gamma_i=\frac{A_i-2}{A_i+2}$</span>, where <span class="math-container">$A_i=2\sigma_i$</span> (s-domain <span class="math-container">$s_i=\sigma_i+j \omega_i$</span>), is not working (see picture#1, #4 - black trace)</p>
</li>
<li><p>since only odd orders are valid, there is an extra single, real pole. By sheer ogling, dividing <span class="math-container">$\sigma_i$</span> by <span class="math-container">$\sigma_{\frac{N+1}{2}}$</span> gives the results in pictures #2 and #5 - black trace.</p>
</li>
</ul>
<p>Question: Are the terms <span class="math-container">$A_i$</span> calculated as <span class="math-container">$2 \sigma_i$</span> or somehow else? If else, how?</p>
<hr />
<p>I don't know how to explain better at this time. If there are any English errors, my apologies, it's not my native language.</p>
Answer: <p>Let's try to sort of answer this from the BLWDF point of view (without much of the WDF-theory, since this can to a large extent be skipped as you know which structure you want).</p>
<p>Starting from a second-order BLWDF allpass section (based on symmetric two-port adaptors without any negations in the feedback), the transfer function is
$$\frac{z^{-2}-a}{1-a z^{-2}} = \frac{1-a z^2}{z^2-a},$$
where $a$ is the adaptor coefficient (taking the port connected to the negative side of the subtractor as the input). This has poles at $z=\pm \sqrt{a}$. Hence, the poles can lie either on the real or on the imaginary axis in the $z$-domain. Typically, you would like to map them to the imaginary axis. This clearly holds for the standard approximations such as Cauer/elliptic filters.</p>
<p>So, one approach is to design your filter directly in the $z$-domain, making sure that the poles end up on the imaginary axis, and then place every other pole pair in one allpass branch and the remaining pairs in the other.</p>
<p>As you mention, for this to happen you need an anti-symmetric power-complementary filter, so it should meet Feldtkeller's equation
$$ |H(e^{j\omega})|^2 + |H_C(e^{j\omega})|^2 = 1,$$
where $H_C$ is the complementary filter (in the case of parallel allpass branches, the complementary output is the sum/difference of the branches if the original filter is obtained by subtracting/adding them). This gives that</p>
<p>$$(1-\delta_c)^2 + \delta_s^2 = 1 \Rightarrow \delta_s^2 = 2\delta_c - \delta_c^2 \approx 2\delta_c \Rightarrow \delta_C \approx \frac{\delta_s^2}{2},$$
where $\delta_c$ and $\delta_s$ are the passband and stopband ripples, respectively, leading to $A_p = -20 \log_{10} (1-\delta_c) $ and $A_s = -20 \log_{10} (\delta_s)$.
In addition, the passband and stopband edges should be related as $\omega_c = \pi - \omega_s$. The trick here is to know exactly how to select your specification so that you end up without any over design. If you manage that, you are home.</p>
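<p>These relations are easy to check numerically. A small sketch (mine, in Python rather than anything from the thesis) reproduces the numbers quoted in the question: $A_s=68$ dB gives $A_p\approx 6.8831\cdot 10^{-7}$ dB, and $\omega_p = 1-\omega_s = \frac{1}{3}$:</p>

```python
import math

A_s = 68.0                   # stop-band attenuation in dB (example from the question)
delta_s = 10 ** (-A_s / 20)  # stop-band ripple

# Feldtkeller: (1 - delta_c)^2 + delta_s^2 = 1  =>  exact pass-band ripple
delta_c = 1 - math.sqrt(1 - delta_s ** 2)        # ~ delta_s^2 / 2
A_p = -20 * math.log10(1 - delta_c)              # pass-band attenuation in dB

# band edges must be mirror images (normalized so that Nyquist = 1)
omega_s = 2 / 3
omega_p = 1 - omega_s

print(A_p)       # ~6.8831e-7 dB, the value quoted in the question
print(omega_p)   # 1/3
```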
<p>The same type of problem arises when you start from an analog filter. You need a filter whose specification can be mapped to a BLWDF. The relation is quite straightforward to compute, but you will need to find a spec where all four parameters (passband/stopband ripple and edges) result in an odd-order filter without any over design.</p>
<p>While LWDFs (and all filters constructed from allpass branches in parallel) are very sensitive to coefficient quantization, the quantized results in your first comparison figure are really better since they come from a minimax solution and are not quantized that hard. Your values are, as you've noticed, probably not computed the right way. I tend to believe the reason is that your analog filter is over-designed in one way or another, meaning that it is not actually suitable for a BLWDF but rather for an LWDF, i.e., the poles do not end up exactly on the imaginary axis after the transform. Reading your text again, I think I can confirm that this is the reason:</p>
<blockquote>
<p>The order is calculated based on these four parameters, which will result in a stop-band attenuation optimization, rather than a transition-band or a pass-band optimization. I say this because I don't know what approach Barbara Dai has.</p>
</blockquote>
<p>Hence, you need to adjust the specification such that there is no "optimization" in the design process.</p>
<p>I can extend the answer where required, but please point out where it is needed (I will, e.g., not go into the bilinear transform right now, for time-constraint reasons).</p>
|
https://dsp.stackexchange.com/questions/15112/bireciprocal-lattice-wave-digital-filter
|
Question: <p>I have a requirement to design a minimax filter with linear programming (<code>linprog</code> in MATLAB).
To build the filter I must choose a vector $\omega$ of frequency samples. How many values do I need to take to get the optimal result, and how should I spread them across the interval $[0,\pi]$?</p>
Answer: <p>There is no real rule, but you would usually choose around $10N$ frequency points, where $N$ is the desired filter order. Distribute the points equidistantly over pass band(s) and stop band(s). This does not mean that they are equidistant in the interval $[0,\pi]$ because usually there is at least one "don't care" region where no desired response is specified (such as in the transition band between pass band and stop band).</p>
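<p>As a concrete sketch of that setup (mine, using Python's <code>scipy.optimize.linprog</code> and an arbitrary example specification; MATLAB's <code>linprog</code> takes the same ingredients): minimize $\delta$ subject to $|A(\omega_i)-D(\omega_i)|\le\delta$ on the frequency grid.</p>

```python
import numpy as np
from scipy.optimize import linprog

# Type-I linear-phase FIR: A(w) = c_0 + sum_{k=1}^{M} c_k cos(k w)
N = 20                 # filter order (N + 1 taps), M = N/2 cosine terms
M = N // 2
P = 10 * N             # ~10N grid points, split over pass band and stop band

wp = np.linspace(0, 0.3 * np.pi, P // 2)      # pass band, desired gain 1
ws = np.linspace(0.5 * np.pi, np.pi, P // 2)  # stop band, desired gain 0
w = np.concatenate([wp, ws])
d = np.concatenate([np.ones_like(wp), np.zeros_like(ws)])

C = np.cos(np.outer(w, np.arange(M + 1)))     # cosine basis on the grid

# variables [c_0, ..., c_M, delta]; minimize delta s.t. |C c - d| <= delta
A_ub = np.vstack([np.hstack([C, -np.ones((len(w), 1))]),
                  np.hstack([-C, -np.ones((len(w), 1))])])
b_ub = np.concatenate([d, -d])
cost = np.zeros(M + 2)
cost[-1] = 1.0

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (M + 1) + [(0, None)])
delta = res.x[-1]
print(res.success, delta)    # minimax ripple over the grid
```

<p>With a grid this dense, the LP ripple is essentially the true Chebyshev ripple; with too few points, the response can misbehave between grid points.</p>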
|
https://dsp.stackexchange.com/questions/23770/equiripple-filter-design
|
Question: <p>The FIR low-pass filter was designed in MATLAB which characteristics are listed below. Coefficient of this filter was written in variable h. Basis on this filter design a band-pass filter with central frequency 1/5(normalized to fs) keeping the same gain and bandwidth. Give the listing in MATLAB(no using buitl-in function) which allow to set down coefficients of designing filter hx. </p>
<p><a href="https://i.sstatic.net/njmyB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/njmyB.png" alt="enter image description here"></a></p>
<p>Thanks in advance</p>
Answer: <p>Mathematically, applying a FIR with impulse response $h_\mathrm{lpf}[n]$ to digital signal is convolution:</p>
<p>$y = x * h_\mathrm{lpf}$, or thanks to the properties of the (discrete) Fourier transform,</p>
<p>$Y = X\cdot H_\mathrm{lpf}$, as convolution becomes multiplication.</p>
<p>Now, making a bandpass out of a lowpass can be modeled by shifting the frequency response $H_\mathrm{lpf}$ in the frequency domain. "Shifting" can be represented by a convolution of the lowpass response with a Dirac impulse at the desired center frequency:</p>
<p>$H_\mathrm{bpf}= H_\mathrm{lpf} * \delta_{f_\mathrm{center}} $</p>
<p>Again, convolution becomes multiplication when transformed to the time domain. The inverse (discrete) Fourier transform of a Dirac impulse at ${f_\mathrm{center}}$ is a complex oscillation $e^{j2\pi{f_\mathrm{center}}n}$, so this becomes</p>
<p>$y[n] = x * \left(h_\mathrm{lpf} \cdot e^{j2\pi{f_\mathrm{center}}n}\right)$</p>
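<p>A small numeric sketch of that modulation (mine, with hypothetical cutoff values; using <code>2*cos</code> rather than a single complex exponential keeps the shifted impulse response real, since it combines the positive- and negative-frequency shifts):</p>

```python
import numpy as np

# windowed-sinc lowpass prototype, cutoff 0.05 (frequencies in units of fs)
N = 101
n = np.arange(N) - (N - 1) / 2
fc_lp = 0.05
h_lp = 2 * fc_lp * np.sinc(2 * fc_lp * n) * np.hamming(N)
h_lp /= h_lp.sum()                    # unity gain at DC

# shift the response to a center frequency of 1/5 of fs
f0 = 0.2
h_bp = 2 * h_lp * np.cos(2 * np.pi * f0 * n)

def dtft_mag(h, f):
    k = np.arange(len(h))
    return abs(np.sum(h * np.exp(-2j * np.pi * f * k)))

print(dtft_mag(h_bp, f0))    # ~1: pass band now centered at f0
print(dtft_mag(h_bp, 0.0))   # ~0: DC is rejected
```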
|
https://dsp.stackexchange.com/questions/25672/how-to-transform-lowpass-fir-filter-to-bandpass-fir-filter-without-using-a-built
|
Question: <p>I read an example of LPF design in which I didn't understand something.
The stop band in that example is $\frac { 22 }{ 25 } $ of the overall frequency range, and the filter is applied to white noise.
After it is found in that example that the energy of the white noise in the stop-band area is $0.22\%$ of the overall noise before filtering, they use the equation:
$$\frac { 22 }{ 25 } { \delta }_{ s }^{ 2 }=0.0022$$ </p>
<ul>
<li>Why ${ \delta }_{ s }^{ 2 }$? </li>
<li>How does is related to power/energy relationship?</li>
</ul>
<p>The full example is in the book of Boaz Porat, page 247. Can I upload a picture from the book?</p>
Answer: <p>$\delta_s$ is the stop-band (hence the ${}_s$ subscript) ripple, i.e. the residual stop-band gain.</p>
<p>A signal in the stop-band with amplitude $1$ would have amplitude $\delta_s$ after filtering.</p>
<p>Since power goes as the square of amplitude: if you feed in white noise, $\frac{22}{25}$ of the original energy lies in the stop band, and that part will be reduced by a factor of $\delta_s^2$.</p>
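<p>A quick numerical check of that bookkeeping (my own sketch; $\delta_s = 0.05$, which is exactly what makes $\frac{22}{25}\delta_s^2 = 0.0022$):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
Nfft = 1 << 16
delta_s = 0.05                   # stop-band ripple: (22/25) * delta_s**2 = 0.0022

x = rng.standard_normal(Nfft)    # white noise, unit variance
X = np.fft.rfft(x)

# idealized filter: gain 1 over the pass band (first 3/25 of the band),
# gain delta_s over the stop band (remaining 22/25 of the band)
k_edge = int(len(X) * 3 / 25)
H = np.where(np.arange(len(X)) < k_edge, 1.0, delta_s)
Y = X * H

# noise power left in the stop band, relative to the total input power
ratio = np.sum(np.abs(Y[k_edge:]) ** 2) / np.sum(np.abs(X) ** 2)
print(ratio)                     # ~ (22/25) * delta_s**2 = 0.0022
```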
|
https://dsp.stackexchange.com/questions/33922/filter-design-relationship-between-energy-and-stop-band-ripple
|
Question: <p>I've got to wrap my head around designing a CIC compensation filter.</p>
<p>I'm studying by referring these materials:</p>
<ol>
<li>Altera, "Understanding CIC Compensation Filters"</li>
<li>Saiyu Ren, R. Siferd, R. Blumgold, R. Ewing (Dept. of Electr. Eng., Wright State Univ., Dayton, OH), "Hardware Efficient FIR Compensation Filter for Delta Sigma Modulator Analog to Digital Converters," 48th Midwest Symposium on Circuits and Systems, 2005.</li>
</ol>
<p>First, Altera just says "inverse sinc function"; they don't show how to realize the function or how to get the tap coefficients, other than by using the MATLAB function <code>fir2</code>.</p>
<p>So I found the second reference mentioned above.</p>
<p>They explain how to decide the tap coefficients, but I couldn't figure out some of the equations.</p>
<hr>
<p>they say</p>
<pre><code>H(z)=a0*z^0+a1*z^-1 + ... + an*z^-n => H(f)=h(0)+2*sigma(from 1 to (n-1)/2) h(k)*cos(2k*pi*F) where F=f/fs
</code></pre>
<hr>
<p>Here is my first question: how is H(z) converted to H(f)? If I substitute z with exp(i*w*T) it doesn't make sense.</p>
<hr>
<p>Second question: I thought a0 = an = h(1). Is that correct?</p>
<hr>
<p>Finally, I used MATLAB to calculate the tap coefficients and got the same coefficients as the paper's result, but a simulation gave me totally different, wrong data:</p>
<pre><code>h= [-0.70 2.09 -1.76 0.74 -1.26 0.40 0.34 0.26 0.74
-0.53 0.24 -0.74 0.04 -0.39 0.05 -0.02 0.28 0.14
0.41 -0.02 0.36 -0.02 0.41 0.14 0.28 -0.02 0.05 -
0.38 0.04 -0.74 0.24 -0.53 0.74 0.26 0.34 0.40 -
1.26 0.74 -1.76 2.09 -0.70] 41 taps
</code></pre>
<p>please let me know where I have to modify...</p>
<p>If I'm violating copyright (because of the paper's content), please tell me and I will edit or delete this.</p>
<p>Here's my matlab code and plot<img src="https://i.sstatic.net/zeSIf.png" alt="enter image description here"></p>
<pre><code>clc; clear;
OSR=8
fb=125e6/2
fs=2*fb*OSR
f=[0 14 28 38 45.5 49.5 100 140 170 200 225 250 275 300 325 350 375 400 430 460 490]*10^6;
F=f/fs;
N=41;
L=[1: 1 : (N-1)/2];
S=max(L)
for i=1:S+1
A(i,1)=1;
end
for i=1:S+1
for j=2:S+1
A(i,j)=2*cos(2*pi*(j-1)*F(i));
end
end
H_sinc=((sin(OSR*pi.*F)./(OSR*sin(pi.*F)))).^2;
H_FIR=1./H_sinc;
H_IFIR(1:6)=H_FIR(1:6);
H_IFIR(1)=1;
H_IFIR(7)=0.1;
H_IFIR(8:21)=0;
H_IFIR=H_IFIR';
A_inv=inv(A);
H=inv(A)*H_IFIR;
H=H';
for i=1:S+1
H_sol(i)=H(S+2-i);
end
for i=1:20
H_sol(N-i+1)=H_sol(i);
end
f=[0:100:fs];
F=f/fs;
H_fir1=H_sol(21)+2*H_sol(1)*cos(2*pi.*F) ...
+2*H_sol(2)*cos(2*pi*2.*F) ...
+2*H_sol(3)*cos(2*pi*3.*F) ...
+2*H_sol(4)*cos(2*pi*4.*F) ...
+2*H_sol(5)*cos(2*pi*5.*F) ...
+2*H_sol(6)*cos(2*pi*6.*F) ...
+2*H_sol(7)*cos(2*pi*7.*F) ...
+2*H_sol(8)*cos(2*pi*8.*F) ...
+2*H_sol(9)*cos(2*pi*9.*F) ...
+2*H_sol(10)*cos(2*pi*10.*F) ...
+2*H_sol(11)*cos(2*pi*11.*F) ...
+2*H_sol(12)*cos(2*pi*12.*F) ...
+2*H_sol(13)*cos(2*pi*13.*F) ...
+2*H_sol(14)*cos(2*pi*14.*F) ...
+2*H_sol(15)*cos(2*pi*15.*F) ...
+2*H_sol(16)*cos(2*pi*16.*F) ...
+2*H_sol(17)*cos(2*pi*17.*F) ...
+2*H_sol(18)*cos(2*pi*18.*F) ...
+2*H_sol(19)*cos(2*pi*19.*F) ...
+2*H_sol(20)*cos(2*pi*20.*F);
figure(1), semilogx(f,db(H_fir1))
% hold on
grid on
</code></pre>
<p>please help me to solve this problem...</p>
Answer: <p>I have a simple compensation approach that I've used to implement a reasonable estimate of an inverse $\textrm{sinc}$ for use as a CIC compensator, as well as for other inverse $\textrm{sinc}$ applications. This approach makes use of the fact that the $\textrm{sinc}$ function in the passband can be reasonably approximated by a weighted cosine function. Therefore a raised cosine function in frequency, meaning $b-\alpha\cos(\omega)$, can be used in cascade with the CIC filter as compensation. This function is adjusted by changing the cosine weight $\alpha$ while keeping $b=1+\alpha$, such that the mean squared error over the passband of interest of the cascaded result is minimized.</p>
<p>Once the minimized passband is established (you can do this with least-squares minimization techniques, but I have done it quickly by evaluating the least-squares passband error while doing a binary search on $\alpha$ values, which quickly converges to an $\alpha$ that minimizes the error for a given passband), the coefficients of the filter, which are its impulse response, are as follows:</p>
<p>Coeff 1, 3: $-\alpha/2$</p>
<p>Coeff 2: $b= 1 + \alpha$</p>
<p>It is that easy! The filter is a simple 3 tap FIR with coefficients [$(-\alpha/2)$ $(1+\alpha)$ $(-\alpha/2)$]. Since the filter is symmetric, it can also be implemented with just two multipliers as shown in the figure. Symmetric filters also have the nice benefit that they are linear phase. </p>
<p><a href="https://i.sstatic.net/hbGEI.jpg" rel="noreferrer"><img src="https://i.sstatic.net/hbGEI.jpg" alt="CIC Compensation FIR1"></a></p>
<p>See the figure below for an example of using this CIC compensator in an x8 CIC interpolator. In use, the signal is passed through the 3-tap compensator at the lower rate and then fed into the x8 CIC interpolator, with an output at 8x the sample rate. The cascaded response shows the significant improvement that can be achieved with this simple compensator.</p>
<p>What should now be clearer from this figure is that this compensation approach will slightly decrease overall rejection in portions of the stop band. (For this specific example the decrease was on the order of 1.6 dB). However, since the unity gain positions of the compensator will be aligned with the null locations in the CIC response, this reduction in rejection is not in those rejection regions that matter most for CIC rate conversion. </p>
<p><a href="https://i.sstatic.net/MCIYw.jpg" rel="noreferrer"><img src="https://i.sstatic.net/MCIYw.jpg" alt="enter image description here"></a></p>
<p>Here is a zoom in of the passband showing the excellent flatness that can be achieved with three taps!</p>
<p><a href="https://i.sstatic.net/vp96R.jpg" rel="noreferrer"><img src="https://i.sstatic.net/vp96R.jpg" alt="enter image description here"></a></p>
<h3>Note on solving for the FIR coefficients from the first figure above.</h3>
<p>The coefficients being the impulse response of the filter are determined from the inverse DFT of the frequency response. They can be quickly derived from the generic equation for the frequency response for an FIR filter in terms of the FIR coefficients as:</p>
<p>$H(\omega)= \sum_{n=0}^{N-1}c_ne^{-jn\omega}$</p>
<p>where $c_n$ are each of the $N$ coefficients, and $\omega$ is the digital frequency domain from $0$ to $2\pi$ corresponding to 0 Hz to the sampling rate. </p>
<p>As for the case of our 3 tap symmetric FIR this is </p>
<p>$H(\omega)= -\frac{\alpha}{2}+ be^{-j\omega}-\frac{\alpha}{2}e^{-j2\omega}$</p>
<p>$ = e^{-j\omega}(-\frac{\alpha}{2}e^{+j\omega}+b-\frac{\alpha}{2}e^{-j\omega})$</p>
<p>$=e^{-j\omega}(b-\alpha(\frac{e^{+j\omega}+e^{-j\omega}}{2}))$</p>
<p>$=e^{-j\omega}(b-\alpha cos(\omega))$</p>
<p>which is exactly the desired frequency response as given in the first plot (the first term $e^{-j\omega}$ is just the required delay for the filter to be causal but does not affect the magnitude response (and shows how the phase is linear versus $\omega$ ...the same reason all symmetric FIR filters are linear phase- the response for any symmetric FIR can all be described in terms of cosines with a linear phase delay pulled out of the equation!).</p>
<p>It may be more intuitive to some, when going from the time domain to the frequency domain, if you are familiar with the FT of a cosine wave: a raised cosine wave in the time domain (meaning a cosine with a DC offset) has a Fourier transform with 3 impulse components in the frequency domain: a DC term, and a positive and a negative frequency term. Likewise, the same waveform in the frequency domain will have three impulse components in the time domain, and these components are the impulse response by definition, which are therefore the filter coefficients.</p>
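<p>To make the $\alpha$ search concrete, here is a sketch (mine, with assumed example values: $R=8$, a 3rd-order CIC, and a pass band reaching half of the low-rate Nyquist; a plain grid scan stands in for the binary search described above):</p>

```python
import numpy as np

R, order = 8, 3                      # assumed rate change and CIC order
F = np.linspace(1e-6, 0.25, 500)     # low-rate normalized frequency grid (pass band)
f_hi = F / R                         # the same frequencies at the high rate

# CIC magnitude response (sinc-like droop)
cic = (np.sin(np.pi * R * f_hi) / (R * np.sin(np.pi * f_hi))) ** order

def comp(alpha, F):
    # 3-tap FIR [-a/2, 1+a, -a/2] at the low rate: gain (1+a) - a*cos(2*pi*F)
    return (1 + alpha) - alpha * np.cos(2 * np.pi * F)

# scan alpha for the minimum worst-case deviation of the cascade from unity
alphas = np.linspace(0.0, 1.0, 2001)
err = [np.abs(cic * comp(a, F) - 1).max() for a in alphas]
a_best = alphas[int(np.argmin(err))]

print(a_best)                    # optimized cosine weight
print(np.abs(cic - 1).max())     # droop without compensation (~27% here)
print(min(err))                  # residual pass-band ripple with the 3-tap FIR
```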
|
https://dsp.stackexchange.com/questions/19584/how-to-make-cic-compensation-filter
|
Question: <p>I have a signal like this:<a href="https://i.sstatic.net/82I9mzrT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82I9mzrT.png" alt="enter image description here" /></a>
The frequency I need is in the range of 5.5 kHz-6.5 kHz. I select the band I need, as Dan Boschen showed me in <a href="https://dsp.stackexchange.com/a/82643/46777">this answer</a>.
<a href="https://i.sstatic.net/jtCHAkcF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtCHAkcF.png" alt="enter image description here" /></a>
And it all works, but I ran into what seems to me to be a harmonics problem. For example, if there is a signal with a large amplitude, say in the band 10.5 kHz-11.5 kHz (or 9.5 kHz-10.5 kHz), then its harmonics, as it seems to me, penetrate into my frequency band. And this happens only in the frequency range near the carrier. For example, if there is a signal at a frequency of 10.99 kHz, then I also see it in the 5.99 kHz range, only with a smaller amplitude, and in the 4.99 kHz range with an even smaller amplitude, etc. Moreover, the signal useful to me is also in the 5.99 kHz range, and I do not understand how to get rid of the parasitic signal without touching the main one.</p>
<p>Does anyone know why this is happening and how to combat it?</p>
<p>Below I will describe how the signal is processed.
Before the ADC there is a 4th-order low-pass filter with a cutoff frequency of 15 kHz. The sampling frequency is 48 kHz.
Next, I multiply the input signal by cos(2πn·6/48) and sin(2πn·6/48), after which I pass the cosine and sine components through a 10th-order low-pass filter with a cutoff frequency of 500 Hz.
Next, I shift the signal up and take the real part like this: LPcos(2πn·6/48)·cos(2πn·0.5/48) + LPsin(2πn·6/48)·sin(2πn·0.5/48).
Then I decimate the signal to a rate of 2 kHz, split the entire 1 kHz band into 100 Hz bands with ten BP filters, and then in each band I build an envelope and look at the signal.</p>
Answer: <p>I believe you are seeing the effects of aliasing. An anti-alias filter is required to select a band of interest and reject potential alias frequency zones prior to A/D conversion and similarly prior to any rate conversion.</p>
<p>If that isn't clear, please update your question to include the sampling rate and specific processing, and I can confirm whether or not that is the case and detail this further. Please also show how you addressed the aliasing issues I raised in the linked post, including any testing of your actual filter performance, to confirm that the filtering described there is sufficient, or whether you are seeing the issue already identified in that post.</p>
|
https://dsp.stackexchange.com/questions/95365/cut-out-harmonics-that-occur-in-the-main-signal
|
Question: <p>I want to design a narrow-band filter for a signal that has been sampled at 125 [kHz].</p>
<p>The specifications are:</p>
<ul>
<li>Pass-band frequency 1: 58 [Hz]</li>
<li>Stop-band frequency 1: 59 [Hz]</li>
<li>Stop-band frequency 2: 61 [Hz]</li>
<li>Pass-band frequency 2: 62 [Hz]</li>
</ul>
<p>Ripples are standard, and attenuation 60 [dB].</p>
<p>I'd love to hear any suggestions on how best to do this without needing to downsample.</p>
<p>Thanks</p>
Answer: <p>A 5th-order elliptic filter can theoretically do that (with a ripple of 1 dB):</p>
<pre><code>fs = 125e3;
[z,p,k] = ellip(5,1,60,[58 62]*2/fs,'stop');
</code></pre>
<p>A filter that aggressive will have a lot of tradeoffs: the impulse response will be insanely long and there are serious stability concerns.</p>
<p><a href="https://i.sstatic.net/ESj2xDZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ESj2xDZP.png" alt="enter image description here" /></a></p>
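<p>For reference, a roughly equivalent design in Python/scipy (my sketch), kept in second-order sections, which is the safer realization given how close the poles sit to the unit circle:</p>

```python
import numpy as np
from scipy import signal

fs = 125e3
# 5th-order elliptic band-stop: 1 dB ripple, 60 dB attenuation, 58-62 Hz notch
sos = signal.ellip(5, 1, 60, [58, 62], btype='bandstop', output='sos', fs=fs)

# evaluate the response at the notch center and far out in the pass band
f = np.array([60.0, 30e3])
_, h = signal.sosfreqz(sos, worN=f, fs=fs)
print(20 * np.log10(np.abs(h)))   # deep attenuation at 60 Hz, ~0 dB at 30 kHz
```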
|
https://dsp.stackexchange.com/questions/94797/designing-narrow-band-notch-filter-with-high-sampling-frequency
|
Question: <p>I read about the design methods for FIR filters, which are:
the windowing method, the frequency sampling method, and the equiripple method. I don't understand the use of the ripple in the equiripple method and its effect on the filtering process. Can anyone help me?</p>
Answer: <p>There is ripple to three different degrees depicted in this plot:</p>
<p><a href="https://i.sstatic.net/HhkQz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HhkQz.png" alt="enter image description here"></a></p>
<p>ripple is the deviation of the filter gain from the target gain. in the above, there are 3 different frequency segments with 3 different target gains (well, two of the target gains are the same), and three different error weighting factors.</p>
<p>first frequency segment, from 0 to 0.2 Nyquist: target gain $-\infty$ dB (linear gain = 0, a "stopband"), about -67 dB maximum stopband gain (ripple = 0.00045).</p>
<p>second frequency segment, from 0.2 to 0.6 Nyquist: target gain 0 dB (linear gain = 1, a "passband"), about $\pm \tfrac38$ dB deviation from 0 dB (ripple = 0.44).</p>
<p>third frequency segment, from 0.6 to 1.0 Nyquist: target gain $-\infty$ dB gain (another stopband), about -47 dB maximum stopband gain (ripple = 0.0045).</p>
<p>you will see that the ripple (in linear error terms) times the weighting (100, 1, 10) is about constant. </p>
<p>ripple is error. it is the deviation of the actual FIR filter's gain from your set target. this error in each band, besides being a tradeoff against the other bands (given a fixed number of FIR taps), is also a tradeoff against the steepness of the transition bands, at least given the choice of FIR design method. if the total "average" error is what is salient, then your design method might be <strong>least-squares</strong> (<code>firls()</code> in MATLAB), and if the <em>maximum</em> error (in each band) is what is important, then the design method is likely <strong>Parks-McClellan</strong> (<code>firpm()</code>).</p>
<p>now, theoretically, windowed FIR design using a good window (like Kaiser) can sorta compete with the two "optimized" iterative methods above, and there will also be a measure of ripple. usually the ripple, viewed as linear gain, is the same in the passband as in the stopband, though expressed in dB it looks larger in the stopband; you can hardly see any passband ripple using a Kaiser-windowed design. and the Kaiser-windowed FIR is maybe just a little longer than the optimal Parks-McClellan design, given the same transition band and stopband attenuation.</p>
<p>The "optimal" P-McC design will show ripple (whatever you allow in the tradeoff) in the passband, and that might make the Kaiser-windowed design look a lot better to you. Sometimes i pick a simple Kaiser-window design over P-McC, particularly if i want to upsample by just a factor of two. In that case, if it's a windowed sinc, i need not compute half of my output samples because i am copying them from the input (the P-McC design does not have that feature). but if i am upsampling by 4 or more, the P-McC FIRs will be shorter, enough so to make up for the requirement to compute every output sample instead of copying some.</p>
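<p>as a quick numerical illustration of the tradeoff (my own example spec, nothing from the plot above): a Kaiser-windowed design and a Parks-McClellan design of the same length, with their band ripples measured:</p>

```python
import numpy as np
from scipy import signal

# example spec: pass band to 0.2, stop band from 0.25 (frequencies in units of fs)
fp, fst, atten = 0.2, 0.25, 60.0

# Kaiser-windowed design: kaiserord picks the length and window shape
numtaps, beta = signal.kaiserord(atten, (fst - fp) / 0.5)
numtaps |= 1                      # force an odd length (type-I linear phase)
h_kaiser = signal.firwin(numtaps, (fp + fst) / 2, window=('kaiser', beta), fs=1.0)

# Parks-McClellan (equiripple) with the same length, equal band weights
h_pm = signal.remez(numtaps, [0, fp, fst, 0.5], [1, 0], fs=1.0)

def band_ripple(h):
    w, H = signal.freqz(h, worN=8192, fs=1.0)
    Hm = np.abs(H)
    return (np.abs(Hm[w <= fp] - 1).max(),   # pass-band deviation
            Hm[w >= fst].max())              # worst stop-band gain
print(band_ripple(h_kaiser), band_ripple(h_pm))
```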
|
https://dsp.stackexchange.com/questions/38652/whats-the-use-of-the-ripple-in-the-equiripple-method-and-their-effect-in-the-fi
|
Question: <p>Is it possible to design linear-phase filters that sum to a flat frequency response? If so is it practical to use them in real-time audio processing for as many as 10 bands?</p>
<p>My experience has only been with Linkwitz-Riley IIR filters, but I would like to explore the possibilities of linear phase or minimum phase filters.</p>
<p>From my initial research it looks like the frequency sampling method would result in ripples and wouldn't sum to a flat response (especially across several bands).</p>
Answer: <p>Well, from the definition of a linear-phase filter it follows that <span class="math-container">$A(f)$</span> in the filter response <span class="math-container">$H(f) = A(f)e^{-j2\pi \frac{N}{2} fT}$</span> is a linear combination of cosines of different frequencies, so it is basically impossible to obtain a perfectly flat band (you would need infinitely many impulse-response coefficients).
But you can always approximate it quite well, because the so-called direct optimization or Parks-McClellan method allows you to obtain a linear-phase filter from specifications of the error in the passband and the error in the stopband.</p>
<p>The only drawback of this filter is that you have to do a lot of computation. In audio applications, for example, if you try to design a filter that attenuates <span class="math-container">$40$</span>dB and has a transition bandwidth of <span class="math-container">$100$</span>Hz, you will find that you need <span class="math-container">$100$</span> coefficients. This is not too much for a CPU, and the filter will be really powerful.</p>
<p>I also want you to focus on an underrated problem that a lot of people don't notice: if you increase the sampling frequency (because you want more resolution), the number of filter coefficients must also increase to approximate the same filter impulse response.</p>
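<p>One concrete construction that sums exactly flat: make the complementary band a pure delay minus the other band (and with more bands, the delay minus the sum of all the others). A two-band sketch (mine; the crossover at $f_s/4$ is arbitrary):</p>

```python
import numpy as np
from scipy.signal import firwin, freqz

N = 101                                  # odd length, type-I linear phase
h_lo = firwin(N, 0.25, fs=1.0)           # low band, crossover at fs/4

delay = np.zeros(N)
delay[(N - 1) // 2] = 1.0                # pure delay of (N-1)/2 samples
h_hi = delay - h_lo                      # complementary high band, also linear phase

# the two branches sum to a pure delay, i.e. an exactly flat magnitude response
w, H = freqz(h_lo + h_hi, worN=1024, fs=1.0)
print(np.abs(H).min(), np.abs(H).max())  # both ~1
```

<p>Each band keeps its symmetric (linear-phase) impulse response, and since the branch outputs are summed anyway in a crossover, the reconstruction is a pure delay by construction.</p>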
|
https://dsp.stackexchange.com/questions/70913/linear-phase-crossover-filters
|
Question: <p>I've got an IIR filter, but I only have the coefficients.
Now I'd like to be able to change the characteristics of the low-pass filter (the "cutoff" frequency), but all I have are the a and b coefficients.
I'd like to be able to set a multiplier that will "scale" the filter by that amount: if I set the multiplier to 0.5, I'd like the same filter but two times lower, or two times slower (a 50 Hz cutoff instead of 100 Hz).</p>
<p>Is it possible, or is it a lost cause?
And can it be done with MATLAB?</p>
<p>thanks so much</p>
<p>Jeff</p>
Answer: <p>Probably your best option here is "frequency warping". You can calculate the poles and zeros from the coefficients, warp the poles and zeros, and then recalculate the coefficients.</p>
<p>Warping is a procedure that applies a conformal mapping to the poles/zeros. Specifically, a conformal mapping that maps the unit circle onto itself will maintain the overall shape of the filter but either bunch it up or stretch it in the low frequencies (and vice versa in the high frequencies). A good choice is a first-order allpass filter, which indeed maps the unit circle onto itself.</p>
<p>Loosely speaking, you replace all delays in your filter with a first-order allpass filter and recalculate the poles and zeros. It takes a bit of math to work out the details, but the actual code would be quite efficient.</p>
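<p>A sketch of that procedure (mine) for the simplest such map, the real first-order allpass substitution $z^{-1} \rightarrow \frac{z^{-1}-\alpha}{1-\alpha z^{-1}}$, under which every pole/zero $p$ moves to $\frac{p+\alpha}{1+\alpha p}$:</p>

```python
import numpy as np
from scipy import signal

# a filter known only by its coefficients: here a Butterworth LP, cutoff 0.4*Nyquist
b, a = signal.butter(4, 0.4)
z, p, k = signal.tf2zpk(b, a)

# choose alpha to move the cutoff from theta (old) to wc (new), here halving it
theta, wc = 0.4 * np.pi, 0.2 * np.pi
alpha = np.sin((theta - wc) / 2) / np.sin((theta + wc) / 2)

z2 = (z + alpha) / (1 + alpha * z)    # every zero moves under the allpass map
p2 = (p + alpha) / (1 + alpha * p)    # every pole likewise
# renormalize the gain so the warped lowpass keeps unity gain at DC (z = 1)
k2 = np.real(k * np.prod(1 - p2) * np.prod(1 - z) /
             (np.prod(1 - z2) * np.prod(1 - p)))

b2, a2 = signal.zpk2tf(z2, p2, k2)
_, h = signal.freqz(b2, a2, worN=[1e-9, wc])
print(np.abs(h))    # gain ~1 at DC and ~0.707 (the -3 dB point) at the new cutoff
```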
|
https://dsp.stackexchange.com/questions/68657/changing-filtering-speed-cutoff-frequency-of-an-iir-filter-knowing-only-its
|
Question: <p>I'd like to play with IIR filters, not by designing them with an algorithm, but rather by playing with the coefficients directly.
I have a stability problem, though, because IIR filters are of course sensitive to their design, and we have to make sure the filter is stable and will not blow up to infinity.
So is there a rule the coefficients have to follow that would automatically make the filter stable (the sum or product of the coefficients, etc.)?</p>
<p>Especially for all-pole filters, if possible?</p>
<p>thanks so much</p>
<p>Jeff</p>
Answer:
|
https://dsp.stackexchange.com/questions/68838/what-rule-has-coefficients-to-follow-for-an-iir-filter-to-be-stable
|
Question: <p>Does anyone have any good references for deriving parameters of an IIR Low pass/High Pass filter directly in the digital domain using the magnitude squared at the corner frequency? </p>
<p>I have been able to derive the parameters of a first order Low/High pass filter with $3\textrm{ dB}$ attenuation at the corner frequency i.e. calculating $k$ and $\alpha$ in:</p>
<p>$$H(z) = k\frac{\left(1+z^{-1}\right)}{\left(1-\alpha z^{-1}\right)}$$</p>
<p>My issue is that I distinctly remember deriving the parameters using a $6\textrm{ dB}$ attenuation at the corner frequency in a DSP course I took previously, but I have forgotten the trigonometric identities used to finish the derivation.</p>
<p>The general procedure is as follows:</p>
<ol>
<li>Let $\omega = 0/\pi$ to calculate the gain term $k$ such that there is a $0\textrm{ dB}$ gain at $0/\pi$</li>
<li>Calculate the magnitude squared at the corner frequency to obtain a value for $\alpha$ in terms of the corner frequency.</li>
</ol>
<p>The problem may be that it should be a second-order filter, or that I am recalling the method for a band-pass/stop filter, but I'm not sure, and it appears this method is not used very often except in the case of band-pass/stop filters for parametric EQ.</p>
<p>I hope the question is clear and I will try to improve the structure with the responses so it will be useful for others. Any help will be appreciated.</p>
Answer: <p>To solve the case that you mentioned...</p>
<p>You have 2 variables to determine, so you need two relationships to resolve the two variables. I'm going to use $k$ and $a$ as the variables to make this easy to type up.</p>
<p>$$H(z) = k\frac{1 + z^{-1}}{1-az^{-1}}$$</p>
<p>Start by considering the passband gain. Use $f = 0$ for this. Assume you want unity gain at $f_0=0$. </p>
<p>Assume: $H(f_0) = 1, f_0 = 0$</p>
<p>Substitute $e^{i2\pi f/f_s}$ for $z$, $f_s$ is your sampling rate, set $f = 0$ and solve for $k$ to satisfy $H\left(f_0\right) = 1$.</p>
<p>From this you get $k = \frac{(1-a)}{2}$</p>
<p>Now work on the gain squared at your desired corner frequency ($f_c$) to determine $a$.</p>
<p>$H(f_c) = -3\textrm{ dB}$ (magnitude squared will be $-6\textrm{ dB}$ as you've stated)</p>
<p>We'll work with the magnitude squared at $f_c$ and set the gain to $1/2$ ($-6\textrm{ dB}$).</p>
<p>$\lvert H\left(f_c\right)\rvert^2 = \frac{1}{2}$</p>
<p>This time substitute $e^{i2\pi fc/f_s}$ for $z$.</p>
<p>To simplify the arithmetic you can solve this equation:</p>
<p>$$
\left(\frac{\lvert H(f_0)\rvert}{\lvert H(f_c)\rvert}\right)^2 = 2
$$</p>
<p>This eliminates the factor $k$.</p>
<p>You will end up with a quadratic relationship in $a$. Solving for $a$ yields:</p>
<p>$$
a = \frac {1 - \sqrt{1-\cos^2\left(2\pi\frac{f_c}{f_s}\right)}} {\cos\left(2\pi\frac{f_c}{f_s}\right) }= \frac {1 - \sin\left(2\pi\frac{f_c}{f_s}\right)} {\cos\left(2\pi\frac{f_c}{f_s}\right) }
$$</p>
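<p>A quick numerical check of the result (my sketch; the rates are arbitrary):</p>

```python
import numpy as np

fs, fc = 48000.0, 1000.0                   # example sample rate and corner frequency
theta = 2 * np.pi * fc / fs

a = (1 - np.sin(theta)) / np.cos(theta)    # pole location from the derivation above
k = (1 - a) / 2                            # unity gain at DC

def H(f):
    z = np.exp(2j * np.pi * f / fs)
    return k * (1 + 1 / z) / (1 - a / z)

print(abs(H(0.0)) ** 2)   # 1.0: 0 dB at DC
print(abs(H(fc)) ** 2)    # 0.5: magnitude squared is exactly 1/2 at the corner
```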
|
https://dsp.stackexchange.com/questions/8021/iir-filter-design-in-digital-domain-using-the-magnitude-squared
|
Question: <p>Suppose we have created an IIR filter with the MATLAB function "ellip", and then we want to quantize the coefficients using:</p>
<p>\begin{align*}
bq=Quantize('round',b,2^8); \cr
aq=Quantize('round',a,2^8);
\end{align*}</p>
<p>I have read that there are 4 major types of rounding:</p>
<ul>
<li>truncate</li>
<li>round</li>
<li>convergent rounding</li>
<li>round-to-zero</li>
</ul>
<p>What are the differences between them, and how do I know which method is best to choose?</p>
Answer: <p>The various rounding methods have a computation vs. quantization error tradeoff.</p>
<p><strong>Truncate</strong></p>
<p>Truncation is the simplest method. Everything after the decimal point is simply lopped off. For instance, both 2.1 and 2.9 become 2. This is very simple, but is the worst method in terms of quantization error. It is particularly bad because when you are dealing with non-negative numbers it introduces a strong negative bias. In some algorithms that can be bad.</p>
<p><strong>Round/Convergent Rounding</strong></p>
<p>Simply saying "Round" doesn't tell you enough to know what is meant. What kind of rounding? People usually mean convergent rounding when they say "round", so I will assume that that is what is meant.</p>
<p>Convergent rounding rounds down when the fractional part is less than $.5$ and rounds up when it is greater than $.5$. The question remains: what to do when you have exactly $.5$? Some rounding algorithms always round toward zero in that case (that is "round-to-zero"), but that introduces a very small amount of bias. Convergent rounding tries to eliminate the bias by rounding ties to the nearest even number, the assumption being that around half the time that will be up and half the time it will be down.</p>
<p>This is the best algorithm in terms of quantization noise, but is the most computationally intense. In many situations though, you don't care how computationally intense it is since the calculation isn't done at run-time.</p>
<p><strong>Round-to-Zero</strong></p>
<p>As mentioned previously, round-to-zero is the same as convergent rounding except it always rounds towards zero when the decimal portion is $.5$. This is a compromise between truncation (easy computationally) and convergent rounding (low quantization error).</p>
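<p>A quick way to see the difference between the rules is to implement them. The sketch below is pure Python; the function name and the $2^{8}$ step scale are just illustrative stand-ins for the MATLAB-style <code>Quantize</code> call in the question, not its actual implementation:</p>

```python
import math

def quantize(method, x, scale=2**8):
    """Quantize x to steps of 1/scale using the named rounding rule."""
    v = x * scale                         # value measured in quantization steps
    if method == 'truncate':
        q = math.trunc(v)                 # lop off everything after the point
    elif method == 'round':               # convergent rounding: ties to even
        q = round(v)                      # Python's round() is half-to-even
    elif method == 'round-to-zero':       # nearest value, ties toward zero
        frac, _ = math.modf(v)
        q = math.trunc(v) if abs(frac) == 0.5 else round(v)
    else:
        raise ValueError(method)
    return q / scale

# 2.5 and 3.5 steps are ties; each rule resolves them differently
print(quantize('truncate',      2.9 / 256) * 256)   # 2.0  (always chops)
print(quantize('round',         2.5 / 256) * 256)   # 2.0  (nearest even)
print(quantize('round',         3.5 / 256) * 256)   # 4.0  (nearest even)
print(quantize('round-to-zero', 3.5 / 256) * 256)   # 3.0  (tie toward zero)
```

Truncation's negative bias shows up immediately: on non-negative coefficients it can only ever reduce them, which is exactly the problem described above.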
|
https://dsp.stackexchange.com/questions/8571/types-of-rounding-in-coefficients-quantization
|
Question: <p>I know that many books and papers talk about the DC offset/DC component of a filter. How do we define the DC offset mathematically, for the case of discrete filters?</p>
Answer: <p>Here is a (working) link to a paper relevant to the OP: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.2334&rep=rep1&type=pdf" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.2334&rep=rep1&type=pdf</a></p>
<p>It looks like when they say zero DC, they mean "band pass filter" (see page 4). And when they say DC component they just mean frequency response at zero frequency. For instance, see Eq. (20) where they evaluate $G_b(0)$. That's the answer to your question: you can define the DC offset of the filter response as its gain at zero frequency.</p>
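<p>Concretely, for a discrete filter $H(z)=B(z)/A(z)$ the DC offset/gain is just $H(z)$ evaluated at $z=1$, i.e. the sum of the numerator coefficients divided by the sum of the denominator coefficients. A small sketch (the coefficient values are made up for illustration):</p>

```python
import numpy as np

# Arbitrary example coefficients of H(z) = B(z)/A(z)
b = np.array([0.2, 0.4, 0.2])          # numerator (feed-forward) taps
a = np.array([1.0, -0.5, 0.25])        # denominator (feedback) taps

dc_gain = b.sum() / a.sum()            # H(z) evaluated at z = 1
print(dc_gain)

# cross-check: evaluate B and A as polynomials in z^-1 at z = 1
H1 = np.polyval(b[::-1], 1.0) / np.polyval(a[::-1], 1.0)
print(np.isclose(dc_gain, H1))         # True
```

A filter is "zero-DC" (band-pass or high-pass in the paper's sense) exactly when the numerator taps sum to zero.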
|
https://dsp.stackexchange.com/questions/14500/dc-component-of-a-discrete-filter
|
Question: <p>I'm curious about the feasibility of designing a noise shaping filter (but might generalise to any recursive filter) with the constraint that the most recent output samples aren't available for several iterations of the filter.</p>
<p>The use case I have in mind is reducing the word length of an audio stream sampled at 48kHz, and the filter would try to keep most of the quantisation noise out of the 200Hz-8kHz band (where human hearing is more sensitive); but with the additional constraint that the most recent seven outputs are not available to the filter.</p>
<p>I'm applying this constraint so that the output can be produced several samples at a time using a mixture of SIMD and instruction-level parallelism -- working with a notional four-lane SIMD and two-fold unrolling.</p>
<p>There's a lot of work for each sample which <em>could</em> be performed in parallel (scaling, quantisation, dither) if not for the recursive factor. Normally one might hope to use SIMD for the multiply-accumulate of the coefficients, but that's not a great proportion of the work if the filter has only a handful of taps.</p>
<p>So I'm wondering if it's actually possible to design a filter that doesn't get in the way, here, but is still effective for its intended purpose.</p>
<p>Stereo would obviously offer an opportunity to halve the delay, but avoiding that assumption is preferable, as it makes mono a failure case and multi-channel becomes complicated.</p>
Answer: <p>This is tricky. Any filter with 8-tap latency between coefficients can be represented in the $z$-domain as a rational function in $z^{-8}$. That's basically the same as saying the impulse response has 7 zeros between each non-zero tap. Any filter with this property is periodic in the frequency domain: you can pick any transfer function from $-\pi/8$ to $\pi/8$, but the regions from $\pi/8$ to $3\pi/8$, etc., will just be repetitions of it.</p>
<p>You may be able to do better with varying delays, but that is a complicated design problem.</p>
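<p>The periodicity is easy to check numerically. A sketch (numpy; the tap values are random for illustration):</p>

```python
import numpy as np

# Impulse response with 7 zeros between taps: non-zero only at multiples of 8
rng = np.random.default_rng(0)
h = np.zeros(64)
h[::8] = rng.standard_normal(8)

# Its DFT then repeats every N/8 bins, i.e. the frequency response is
# periodic with period 2*pi/8
N = 1024
H = np.fft.fft(h, N)
print(np.allclose(H, np.roll(H, N // 8)))   # True: shifting by N/8 bins changes nothing
```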
|
https://dsp.stackexchange.com/questions/15522/filter-design-with-8-tap-latency-on-recursion
|
Question: <p>How can you filter out a person's voice from a group of people talking? </p>
<p>We have a sample of each person's voice from the group, and the sample of the entire group talking at once. Both samples are uploaded into matlab for analysis.</p>
<p>Is there a way to single out any one person's voice?</p>
Answer: <p>IMHO,</p>
<p>Using several microphones (at least 2, like our ears) may help. There will be some constant delay in the time domain between the recordings from different microphones. This will help you to amplify the voice of a single person, because no 2 persons can occupy the same space :-) </p>
<p>To detect this time-domain delay, you have to sample at a sufficiently high rate (a higher sampling rate helps estimate the delay more accurately). You have to (sort of) scan your surroundings. Then you have to amplify the voice from one person and treat the others as background noise. You may have to do some trial/error for this. Then you have to repeat this process for each individual...</p>
<p>Hope this helps, I have not implemented such a system, but thinking of doing so...</p>
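<p>The delay detection described above can be sketched with a cross-correlation (numpy; the source signal and the 37-sample delay are synthetic placeholders for real microphone recordings):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
source = rng.standard_normal(4096)              # stand-in for one talker
delay = 37                                      # unknown propagation delay

mic1 = source
mic2 = np.concatenate([np.zeros(delay), source[:-delay]])  # same talker, later

# the lag of the cross-correlation peak estimates the inter-mic delay
xcorr = np.correlate(mic2, mic1, mode='full')
lag = int(np.argmax(xcorr)) - (len(mic1) - 1)
print(lag)                                      # 37
```

With the delay known, the two recordings can be aligned and summed, which reinforces the talker at that position relative to everything else (delay-and-sum beamforming).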
|
https://dsp.stackexchange.com/questions/15719/speaker-recognition
|
Question: <p>I am planning to use Slepian or DPSS window in my application where I want central lobe to be concentrated and also have low bandwidth:</p>
<p><a href="http://en.wikipedia.org/wiki/Window_function#DPSS_or_Slepian_window" rel="nofollow">http://en.wikipedia.org/wiki/Window_function#DPSS_or_Slepian_window</a></p>
<p>However, since the generating function is missing and looking at some online resources was not very helpful. So, I am wondering if someone can explain.</p>
<p>OR</p>
<p>If someone has DPSS window code (C++, Matlab) and would be willing to share.</p>
<p><strong>UPDATE (after getting answer from @jojek):</strong></p>
<p>Thanks, @jojek,
I was just reading Numerical Recipes in C (third edition) to understand the Slepian window. In their terminology, every Slepian window is defined by two indices, jres and kT. Here kT indexes the eigenvectors and "jres" is some sort of frequency resolution. In their terminology, I am interested in Slepian(2,0) and Slepian(3,0). (Please refer to sample page no. 664: <a href="http://www.nr.com/nr3sample.pdf" rel="nofollow">http://www.nr.com/nr3sample.pdf</a>)</p>
<p><strong>Question. 1:</strong> If I understand it right, your solution give me kT = 0 which is what I am also looking for. However, I am still confused about how to choose frequency cut-off.</p>
<p><strong>Question. 2:</strong> Numerical recipe in C discusses the origin of this Slepian window and I am interested in knowing where the relevant expression [1] comes from:</p>
<p><strong>"Copying from Numerical Recipes in C"</strong>
There are two key ideas in multitaper methods, somewhat independent of each other, originating in the work of Slepian.
The first idea is that, for a given data length N and choice jres, one can actually solve for the best possible weights w, meaning the ones that make the leakage smallest among all possible choices. The beautiful and nonobvious answer is that the vector of optimal weights is the eigenvector corresponding to the smallest eigenvalue of the symmetric tridiagonal matrix with diagonal elements</p>
<pre><code>1/4 [ N^2 - (N-1-2j)^2 cos(2 pi jres / N) ];   j = 0, 1, ..., N-1
and off-diagonal elements:
-1/2 j (N-j)                               --------------------[1]
</code></pre>
<p>Regards,
Dushyant</p>
Answer: <p>If you follow the reference link no. <a href="http://en.wikipedia.org/wiki/Window_function#cite_note-JOSKaiserDPSS-45" rel="nofollow">43</a> from Wikipedia, then you will end up on <a href="https://ccrma.stanford.edu/~jos/sasp/Slepian_DPSS_Window.html" rel="nofollow"><strong>this website</strong></a> of Stanford University. They are providing all necessary theory behind DPSS window, together with this MATLAB function (not to mention, that MATLAB already has the <a href="http://www.mathworks.co.uk/help/signal/ref/dpss.html" rel="nofollow">dpss</a> function) :</p>
<pre><code>function [w,A,V] = dpssw(M,Wc);
% DPSSW - Compute Digital Prolate Spheroidal Sequence window of
% length M, having cut-off frequency Wc in (0,pi).
k = (1:M-1);
s = sin(Wc*k)./ k;
c0 = [Wc,s];
A = toeplitz(c0);
[V,evals] = eig(A); % Only need the principal eigenvector
[emax,imax] = max(abs(diag(evals)));
w = V(:,imax);
w = w / max(w);
</code></pre>
|
https://dsp.stackexchange.com/questions/17777/slepian-or-dpss-window
|
Question: <p>I'm trying to create a digital filter in code (C), but any language is fine. Now I've got an analogue filter that I have represented by an equation in the Laplace domain, and I want to try and implement it digitally. </p>
<p>So my filter has this form in the Laplace domain:
$$\frac{as+b}{cs^2+ds}$$</p>
<p>I then use MATLAB's <code>c2d</code> command which uses the zero order hold transformation (I have a really poor grasp on this, so this might be wrong) and it gives me this formula:</p>
<p>$$\frac{\left(5\cdot 10^5\right)z-67}{z^2-z}$$</p>
<p>I tried following an <a href="http://liquidsdr.org/blog/pll-howto/" rel="nofollow">example</a> that I found that used the Tustin's method, though when I use the <code>c2d</code> function in MATLAB with Tustin it gives me an error.</p>
<p>My attempt has been</p>
<p>$$\frac{hz-i}{jz^2-kz}$$</p>
<p>$b_0=-i, b_1=h, b_2=0, a_0=0, a_1=-k, a_2=j$</p>
<p>Then from this I've tried (which is wrong)
\begin{align}
\text{output}&=z_0 b_0+z_1b_1+z_2b_2\\
z_2&=z_1\\
z_1&=z_0\\
z_0&=\text{input}-a_0z_0-a_1z_1-a_2z_2
\end{align}</p>
Answer: <p>The example I looked at used a Tustin (bilinear) conversion, not a zero-order hold (the default for MATLAB's "c2d" command). So this is more an answer to what I wanted to do rather than the question I asked above.</p>
<p>I solved the following (converting the s-domain function into code) by taking the s-domain function
$$\frac{as+b}{cs^2+ds}$$</p>
<p>and putting this into MATLAB (MATLAB command "g=tf([a b],[c d 0])"). Then I performed the bilinear conversion with the MATLAB command "c2d(g,Ts,'tustin')", where g is my transfer function and Ts my sample time. This produced the output</p>
<p>$$\frac{ez^2+fz+g}{iz^2+jz+k}$$</p>
<p>The a and b coefficients can then be taken from this equation such that (if $i \neq 1$, the equation needs to be multiplied through by the inverse of $i$):
$b0=e$ $b1=f$ $b2=g$
$a0=i$ $a1=j$ $a2=k$</p>
<p>This can then be converted to code by setting the initial states; for simplicity let $$z0=z1=z2=0$$</p>
<p>then set up a loop that repeats the following algorithm</p>
<p>$$output=z0*b0+z1*b1+z2*b2$$
$$z2=z1$$
$$z1=z0$$
$$z0=input-a1*z1-a2*z2$$</p>
<p>For anyone else that got lost like me, this is known as an IIR filter, and googling "IIR filter design" helped so much. </p>
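<p>The whole flow above can be sketched in Python (the s-domain constants and sample time are made up; <code>scipy.signal.bilinear</code> plays the role of <code>c2d(g,Ts,'tustin')</code>, and the loop is the same recursion with the state update done before the output so no sample is lost):</p>

```python
import numpy as np
from scipy import signal

# Hypothetical constants for H(s) = (A*s + B) / (C*s^2 + D*s)
A, B, C, D = 1.0, 2.0, 1.0, 3.0
Ts = 1e-3                                       # sample time

b, a = signal.bilinear([A, B], [C, D, 0.0], fs=1 / Ts)   # Tustin conversion
b, a = b / a[0], a / a[0]                       # normalize so a0 == 1

def iir(x, b, a):
    """Direct Form II biquad: the difference-equation loop from above."""
    z0 = z1 = z2 = 0.0
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        z0 = xn - a[1] * z1 - a[2] * z2
        y[n] = b[0] * z0 + b[1] * z1 + b[2] * z2
        z2, z1 = z1, z0
    return y

x = np.random.default_rng(2).standard_normal(256)
print(np.allclose(iir(x, b, a), signal.lfilter(b, a, x)))   # True
```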
|
https://dsp.stackexchange.com/questions/18329/creating-a-digital-filter-from-laplace-to-mathcal-z-transform-zero-order-ho
|
Question: <p>I'm creating windowed sinc filters to apply to certain signals that I'm dealing with. To design the filters, I'm using the approach described in the book "The Scientist and Engineer's Guide to DSP". Here's a brief summary:</p>
\[h[i] =
\left\{
\begin{matrix}
Kw(i)\frac{\sin(2\pi f_c(i - \frac{M}{2}))}{i - M/2} & 0\le i \le (M+1)\\
\\
0 & \text{otherwise}
\end{matrix}
\right.
\]
\[
w(i): \text{A window function}\\ K: \text{DC gain}\\ f_c: \text{The cut frequency expressed as a fraction of the signal sampling rate}
\]
<p>After calculating the filter kernel, I obtain the frequency response of the filter by taking the FFT of h[i], using an algorithm that rounds the filter length up to the nearest power of two (to get better performance). The same procedure is also applied to the input signal in order to get its frequency response. </p>
<p>To my understanding, the next step would be to multiply the two frequency responses to get the filtered signal in the frequency domain, but the problem is that the two signals are not the same size, and I haven't found an explanation of how to deal with this problem. So people, how should I pad the filter signal to make this point-wise multiplication? Should I fill the end of the filter with zeros to match the signal size, or should I do it symmetrically, putting zeros to the left and right of the filter response?</p>
<p>Thanks in advance!</p>
Answer: <p>You don't pad it in the frequency domain, you pad it in the time domain (i.e. before you calculate the FFT). You can put the zeros either symmetrically or after the filter. Either way works, it just results in a shift in the output.</p>
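<p>In code the whole chain looks like this (numpy sketch; the signal length, kernel length and cutoff are arbitrary). The padding happens in the time domain via the FFT-length argument, and the common FFT size is chosen at least <code>len(x) + len(h) - 1</code> so the circular convolution implied by the point-wise product equals the linear one:</p>

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(1000)                   # input signal
M = 100                                         # kernel has M + 1 = 101 points
i = np.arange(M + 1)
h = np.hamming(M + 1) * np.sinc(2 * 0.1 * (i - M / 2))   # fc = 0.1, windowed sinc

# pad BOTH to the same FFT size >= len(x) + len(h) - 1 (next power of two)
N = 1 << (len(x) + len(h) - 1).bit_length()
y = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(h, N), N)
y = y[:len(x) + len(h) - 1]                     # keep the linear-convolution part

print(np.allclose(y, np.convolve(x, h)))        # True: matches direct convolution
```

Padding at the end versus symmetrically only changes where the result lands in the output buffer (a shift), exactly as the answer says.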
|
https://dsp.stackexchange.com/questions/18580/how-to-pad-a-windowed-sinc-filter-in-the-frequency-domain
|
Question: <p>I am designing an FIR filter. My specs are fs=300MHz, Fc=45MHz, Fs=75MHz, passband gain=3dB, stopband attn >= 40dB. What values of <em>a</em> do I have to provide to <em>firpm</em> for the minimum-order filter design?
I am putting <em>a=[0.01 0.01]</em>. Is that correct?</p>
Answer: <p>I suppose you want a low pass filter. Such a filter has one passband and one stopband, and accordingly you need an <code>a</code> vector with 4 elements:</p>
<ol>
<li>the desired magnitude at frequency $0$</li>
<li>the desired magnitude at the passband edge ($f_c$)</li>
<li>the desired magnitude at the stopband edge ($f_s$)</li>
<li>the desired magnitude at Nyquist</li>
</ol>
<p>If I understood your specs correctly, you should use <code>a = [sqrt(2) sqrt(2) 0 0]</code>, and your <code>f</code> vector is <code>f = [0 0.3 0.5 1]</code> because you need to normalize $f_c$ and $f_s$ by the Nyquist frequency $f_s/2=150$MHz. I would suggest you just try some values of <code>n</code> until you reach the desired stopband attenuation. Since you didn't specify any maximum passband ripple, the response in the passband will be fine anyway. If not, you can add a weight vector to trade off maximum passband ripple with stopband attenuation. See the <a href="http://www.mathworks.nl/help/signal/ref/firpm.html" rel="nofollow">MathWorks documentation</a> on how to do this.</p>
|
https://dsp.stackexchange.com/questions/18622/fir-filter-design
|
Question: <p>In my filter, fs=300MHz and the number of coefficients is 31. I have the following queries:
(1) What will be the number of multiplications per second?
(2) Can a multiplication be done in one clock cycle?
(3) Why is the number of multiplications per second normally calculated, and not additions/subtractions, in a filter implementation? </p>
Answer: <p>To answer your questions:</p>
<ol>
<li>The number of multiplies is the number of taps times the number of samples per second. Given a sampling rate of 300MHz and 31 taps, you will have to do 9300 million multiplies per second.</li>
<li>As Paul R says, whether a multiply can be done in one cycle depends on the processor. Some can, some can't, and some appear to by having a long pipeline -- once the pipeline fills, the result of one multiply comes out per cycle. You would have to find out what processor you are using to determine how many cycles it takes for each multiply. Also note, this varies with the type of number you are working with. Integer multiplication is usually faster, but can cause you other difficulties (it is harder to do signal processing with integers).</li>
<li>As Paul R says, really only the multiplies are of interest since most processors have a combined multiply and add instruction. If you are working with a processor without that type of instruction, though, you will have to remember to count the adds when budgeting your clock cycles. Since there are as many adds as multiplies when doing an FIR, knowing how many multiplies there are also tells you the number of adds.</li>
</ol>
|
https://dsp.stackexchange.com/questions/18699/multiplications-per-second-for-fir-filter
|
Question: <p>I'm attempting to apply the following PDE as an image filter to smooth a discrete heightmap with a Helmholtz-type equation, as described in this <a href="http://www.researchgate.net/profile/Manuel_Gamito/publication/222551401_An_accurate_model_of_wave_refraction_over_shallow_water/links/00b4951a8b09d51acf000000.pdf" rel="nofollow">paper</a>. It seemed like an interesting alternative to a Gaussian filter.</p>
<p>The equation is:
$$ddx(h') + ddy(h') + y(h'-h) = 0$$</p>
<p>I solved for h and got:
$$\dfrac{ddx(h')}{y} + \dfrac{ddy(h')}{y} + h' = h$$</p>
<p>I then discretized it with a central finite difference and turned it into a linear system of the form $Ax=b$ where $b$ is the source image and A is a matrix with elements around the diagonal corresponding to the coefficients of a central finite difference approximation of the second derivative. I also added an additional $1$ to the diagonal to account for the standalone $h'$ on the left hand side.</p>
<p>Unfortunately, the results don't look anything like a smoothed version of the original image. For very high y, the resulting image is mostly similar to h, but for lower y it quickly degrades into noise and eventually just a black image.</p>
<p>I suspect part of the issue is the way I'm dealing with the boundary conditions. I recognize that the derivative is undefined at the border of the image, so I've tried a number of different approaches to address this from excluding the borders from the system, to special casing the kernel of border pixels by adding the missing border weights to the diagonal of A.</p>
<p><a href="https://gist.github.com/krisr/034b5558fe4817f53d82" rel="nofollow">Here is some code</a> I've been playing around with to solve this problem.</p>
<p>I would greatly appreciate help understanding how to handle the boundary conditions and learning how to properly apply this PDE as an image filter!</p>
<p>Thanks,
Kris</p>
Answer:
|
https://dsp.stackexchange.com/questions/22821/filtering-an-image-with-a-helmholtz-type-equation
|
Question: <p>Given a system, that behaves as a 1st order filter with network function $H(s)$. We input:
$$v_1 (t)=1+3\cos(10^4 t)$$</p>
<p>And we obtain as output:
$$v_2(t)=1+1.5\cos \left(10^4 t -\dfrac{\pi}{3}\right)$$</p>
<p>Say what kind of filter it is and find its network function $H(s)$.</p>
<p>I'm trying to solve this problem and I have solved the first part by saying that the filter is a <strong>low-pass filter</strong> because the continuous term is conserved in the input as well as in the output. And I have supposed that the network function will be of the form:
$$H(s) = \dfrac{\kappa}{s+a}$$</p>
<p>My question is what do I have to do in order to compute the constants $\kappa, a$ according to the filter characteristics?</p>
<p>I have tried to compute the amplification for the following frequencies:
$$\text{Amplification}(\omega=0) =\dfrac{1}{1} = 1$$
$$\text{Amplification}(\omega=10^4) = \dfrac{1.5}{3} = 0.5$$</p>
<p>But I don't know if this is correct since the phase of cosine in the output is different.</p>
Answer: <p>You need to consider the system's frequency response</p>
<p>$$H(j\omega)=\frac{\kappa}{j\omega+a}\tag{1}$$</p>
<p>Now you know that $$H(0)=1\tag{2}$$ and $$|H(j\omega_0)|=\frac12\tag{3}$$ (with $\omega_0=10^4$). Note that $H(0)$ is real-valued, whereas $H(j\omega_0)$ will generally be complex-valued. From (1) and (2) you immediately get $\kappa=a$. Furthermore, since the filter is stable we know that $a>0$ (i.e. the pole must lie in the left half-plane of the complex $s$-plane). Combining (3) with (1) gives</p>
<p>$$\frac{a}{\sqrt{\omega_0^2+a^2}}=\frac12\tag{4}$$</p>
<p>from which it should be easy for you to compute $a$.</p>
<p>Note that you don't need the phase value to specify the system. What should be checked is if the phase value is the actual phase value of the system that you just computed. In order to do this you need to verify that</p>
<p>$$\arg\{H(j\omega_0)\}=-\frac{\pi}{3}\tag{5}$$</p>
<p>Luckily that's the case, otherwise the assumption (1) would be wrong. I leave the proof of (5) up to you.</p>
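<p>A quick numerical check of the result (working Eq. $(4)$ through gives $a = \omega_0/\sqrt{3}$; with $\kappa = a$, both the magnitude condition and the phase condition $(5)$ then come out right):</p>

```python
import numpy as np

w0 = 1e4
a = w0 / np.sqrt(3)            # solution of a / sqrt(w0**2 + a**2) = 1/2
kappa = a                      # from H(0) = kappa / a = 1

H = kappa / (1j * w0 + a)      # H(j*w0)
print(abs(H), np.angle(H))     # 0.5 and -pi/3 (about -1.0472)

assert np.isclose(abs(H), 0.5)
assert np.isclose(np.angle(H), -np.pi / 3)
```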
|
https://dsp.stackexchange.com/questions/23431/calculating-the-network-function-of-a-filter
|
Question: <p>Basically what it says in the title; I have just started reading about these things and find noncausal filters pretty interesting in concept, but they do not seem to have any advantage worth sacrificing real-time processing for. Since "I have just started reading about these things" I feel as if I should make sure, and would also be interested in hearing: does anyone know if there are noncausal filters commonly used in practice? Why are they preferred? Thanks. </p>
<p>Jeff Boucher.</p>
Answer: <p>Yes.</p>
<p>The problem with a system that operates in (near) real time is that you can't look into the future. If you only need a finite amount of look-ahead, one way to deal with this is to insert some delay and then delay the output, so you're still causal.</p>
<p>However, many filtering problems allow non-causality, e.g. filtering a file on a disk (which occurs a lot: audio or image or video files you download, time series such as finance data, or histories of systems). For example, you can collect a time series and smooth it with the Rauch-Tung-Striebel smoother rather than running a fixed-lag smoother. </p>
|
https://dsp.stackexchange.com/questions/25252/are-noncausal-filters-ever-used-in-practice
|
Question: <p>Consider a signal with a sample rate $f_s = 44.1$ kHz. Let us upsample the signal by a factor of $L = 2$ and interpolate the zeros.</p>
<p>An ideal lowpass interpolator would have a gain of $L$ and a cutoff frequency of:</p>
<p>$$f_c = \frac{f_s}{L}$$</p>
<p>An ideal lowpass filter has an infinitesimally small transition band.</p>
<p>In practice I see real lowpass interpolators have a small transition band centred around $f_c$.</p>
<p>The transition band can be quite large, say, $0.45 f_s$ to $0.55 f_s$.</p>
<p>My question is: why do we centre the transition band of a practical lowpass interpolator around the ideal cutoff frequency? By doing that, the practical lowpass stopband edge sits above the ideal cutoff, which does not make sense to me, as that will allow a small unwanted spectral image from the $0.45 f_s$ to $0.50 f_s$ region to creep into the new signal. The obvious alternative is to put the stopband edge of the practical lowpass at $0.5 f_s$ and put up with a passband edge at $0.4 f_s$, assuming we can't make the transition band steeper. There must be some reason this isn't the way it's done.</p>
Answer: <p>When designing a filter, you really care about its behavior in two regions:</p>
<ol>
<li><p><strong>Passband</strong>: You want little attenuation in this region, and maybe other properties as well, like linear phase, depending upon your application.</p></li>
<li><p><strong>Stopband</strong>: You want as much attenuation as needed in this region.</p></li>
</ol>
<p>Between these two is the transition region. This is treated as somewhat of a "don't-care" band. You don't typically constrain the response too tightly in this area, so you can't really count it being usable. In your example, the passband lies below $0.45 f_s$; after the interpolator, you only plan on using frequency content below this threshold.</p>
<p>This means that you can allow some aliasing in order to simplify your filter design. Your transition region starts at $0.45 f_s$; everything above that frequency in your filter's output will either have a response that is unpredictable given your filter specs (if it lies in the transition band) or one that is highly attenuated (if it lies in the stopband). The takeaway: </p>
<blockquote>
<p><strong>You can't rely upon any frequency content past the passband edge in your filter output anyway.</strong> So if it makes the filter design cheaper, why not allow the frequencies above the passband edge in the filter output to contain aliased garbage? </p>
</blockquote>
<p>This technique is commonly used in multirate filters as you've noticed, as it allows savings in the required filter order in order to meet a given set of passband/stopband specifications.</p>
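<p>The savings are easy to demonstrate: hold the tap count and the passband edge fixed, and compare a design whose transition is centred on the ideal cutoff with one whose stopband starts at the ideal cutoff (numpy/scipy sketch, band edges normalized so Nyquist $=1$; the 31-tap count is arbitrary):</p>

```python
import numpy as np
from scipy import signal

fs = 2.0                                         # so band edges run from 0 to 1
centred = signal.remez(31, [0, 0.45, 0.55, 1], [1, 0], fs=fs)  # aliasing allowed
strict  = signal.remez(31, [0, 0.45, 0.50, 1], [1, 0], fs=fs)  # stopband at cutoff

def stopband_peak(h, edge):
    w, H = signal.freqz(h, worN=4096, fs=fs)
    return np.abs(H[w >= edge]).max()

# same filter order: the centred (wider) transition buys far more rejection
print(20 * np.log10(stopband_peak(centred, 0.55)),
      20 * np.log10(stopband_peak(strict, 0.50)))
```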
|
https://dsp.stackexchange.com/questions/26691/practical-vs-ideal-lowpass-interpolator
|
Question: <p>How can I implement a comb filter to reduce noise in wireless communication?
I am new to signal processing and right now I am still learning about the comb filter. Can I use it to reduce/filter noise in wireless communication?</p>
Answer:
|
https://dsp.stackexchange.com/questions/26974/comb-filter-design-in-wireless-communication
|
Question: <p>How can I design an all-pass filter to have a constant phase shift over a bandwidth centered around a carrier?
I don't care about the phase shift outside the band.
I would like to have the filter in the time domain. This is not a straightforward job, right?
Any keywords, design methods, or external links are appreciated. </p>
<p>Edit: I need it to be causal. Data comes in real time, in time frames.
Thanks </p>
Answer: <p>It's instructive to see what an ideal filter adding a constant phase shift would look like. If $\theta$ is the desired phase shift, the corresponding ideal frequency response is</p>
<p>$$H(e^{j\omega})=\begin{cases}e^{-j\theta}&,\quad 0<\omega<\pi\\
e^{j\theta}&,\quad-\pi<\omega<0\end{cases}\tag{1}$$</p>
<p>Using the sign function $\text{sign}(\omega)$, the frequency response $(1)$ can be rewritten as</p>
<p>$$H(e^{j\omega})=\cos\theta-j\,\text{sign}(\omega)\sin\theta,\quad -\pi<\omega <\pi\tag{2}$$</p>
<p>With the DTFT correspondence</p>
<p>$$-j\,\text{sign}(\omega)\Longleftrightarrow g[n]=\begin{cases}\frac{2}{\pi n},&\quad n\text{ odd}\\
0,&\quad n\text{ even}\end{cases}\tag{3}$$</p>
<p>the impulse response corresponding to $H(e^{j\omega})$ is given by</p>
<p>$$h[n]=\cos\theta\cdot\delta[n]+\sin\theta\cdot g[n]\tag{4}$$</p>
<p>where $g[n]$ is the sequence on the right-hand side of Eq. $(3)$, which is the impulse response of an ideal discrete-time Hilbert transformer. Eq. $(4)$ shows that an ideal phase shifter can be implemented as a weighted parallel connection of a wire ($\cos\theta\cdot\delta[n]$) and a Hilbert transformer.</p>
<p>So your problem can be solved by using any of the many available designs of discrete-time Hilbert transformers. Note that you can get much better performance for a given filter order by taking into account that the approximation needs only be accurate in the given frequency band. For a frequency-domain design method this just means that in the formulation of the desired response given in $(1)$ you replace the positive frequencies by the frequency interval of interest and leave the rest as a "don't care" region. The same is done for the negative frequencies.</p>
|
https://dsp.stackexchange.com/questions/27779/filter-design-for-phase-response
|
Question: <p>I need to get coefficients for my FIR filter.
I know my pass band, let's say between 350 - 400 Hz,
and my stop band(s), let's say 200 - 250 and 500 - 500 Hz.
The other regions of the spectrum I simply don't care about. I want the filter to be relaxed in those regions so it can be more effective in the pass and stop bands. </p>
<p>I am looking for a library with a simple interface to which I will just give the passband and stop bands and the number of taps. Then I will get my coefficients. </p>
<p>What I found is this <a href="http://aquila-dsp.org/articles/updated-frequency-domain-filtering-example/" rel="nofollow">aquila example</a>, but I couldn't see how I can define relaxed regions. </p>
<p>Can you please advise me of a C/C++ library with relaxed, pass, and stop bands that is easy to use, even for a computer scientist? </p>
Answer:
|
https://dsp.stackexchange.com/questions/28947/a-c-c-library-for-fir-filter-design-with-dont-care-region
|
Question: <p>Can anyone tell me how to design wavelets from splines using MATLAB? Can we make wavelets from higher-order splines, or only from B-splines?</p>
Answer:
|
https://dsp.stackexchange.com/questions/29007/how-to-design-wavelets-from-splines
|
Question: <p>Standard bandpass filters can make super-precise analysis filterbanks with 1024 to 4096 filters in Reaktor 4. I tried in code to use the cookbook bandpass and the result was awful.</p>
<p>Does someone know a precise bandpass filter that transmits narrow bands of an intended frequency without noise and irregularity? Resonance is an advantage because it takes the detected frequency's value and adds a large pure amount of that frequency as a sine wave, which is cool because ideally I would be letting through individual sine waves of a given frequency. I want something of that kind. What kind of filter should be used in filterbanks? Do you have an example in code?</p>
Answer: <p>This has already been addressed in depth here:
<a href="https://stackoverflow.com/questions/5901483/simple-audio-filter-bank">https://stackoverflow.com/questions/5901483/simple-audio-filter-bank</a></p>
<p>and I found some C# code on the subject here:<br>
<a href="https://waveletstudio.codeplex.com/" rel="nofollow noreferrer">https://waveletstudio.codeplex.com/</a></p>
<hr>
<p><strong>Copied answer from SO here</strong></p>
<p>Using an FFT to split an audio signal into a few bands is overkill. </p>
<p>What you need is one or two Linkwitz-Riley filters. These filters split a signal into a high and low frequency part.</p>
<p>A nice property of this filter is, that if you add the low and high frequency parts you get almost the original signal back. There will be a little bit of phase-shift but the ear will not be able to hear this.</p>
<p>If you need more than two bands you can chain the filters. For example, if you want to separate the signal at 100 and 2000 Hz, it would look in pseudo-code somewhat like this:</p>
<pre><code>low = linkwitz-riley-low (100, input-samples)
temp = linkwitz-riley-high (100, input-samples)
mids = linkwitz-riley-low (2000, temp)
highs = linkwitz-riley-high (2000, temp);
</code></pre>
<p>and so on..</p>
<p>After splitting the signal you can, for example, amplify the three output bands: low, mids and highs, and later add them together to get your processed signal.</p>
<p>The filter sections themselves can be implemented using IIR filters. A google search for "Linkwitz-Riley digital IIR" should give lots of good hits.</p>
<p><a href="http://en.wikipedia.org/wiki/Linkwitz-Riley_filter" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Linkwitz-Riley_filter</a></p>
|
https://dsp.stackexchange.com/questions/29721/what-filter-to-use-in-audio-analysis-filterbank-instead-of-fft
|
Question: <p>I have a first-order high-pass filter with transfer function:
$$G(f)=\dfrac{G_0 jf}{jf + f_c}$$</p>
<p>where $G_0$ is the gain at high frequencies.</p>
<p>If I input a sine wave with frequency 1 kHz and I want a maximum disturbance of 0.1% in the amplitude, how can I know the maximum value of the corner frequency ($f_c$) that allows this error?</p>
<p>P.S.: The corner frequency is the frequency at which the gain is 3 dB down.</p>
Answer: <p>You set $|G/G_0| = 1 - 0.001 = 0.999$ (the gain may droop by at most 0.1%) and $f = 1\,\text{kHz}$, and then solve your equation (in magnitude form) for $f_c$</p>
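<p>Working that through (assuming the 0.1% disturbance means the gain at 1 kHz may droop to $0.999\,G_0$; for this high-pass, $|G/G_0| = f/\sqrt{f^2+f_c^2}$):</p>

```python
import numpy as np

f, droop = 1000.0, 0.001                 # 1 kHz test tone, 0.1 % amplitude error
target = 1 - droop                       # |G/G0| must stay above 0.999

fc = f * np.sqrt(1 / target**2 - 1)      # largest corner frequency allowed
print(fc)                                # about 44.8 Hz

# sanity check: the magnitude at 1 kHz hits the target exactly
print(np.isclose(f / np.hypot(f, fc), target))   # True
```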
|
https://dsp.stackexchange.com/questions/29738/adjusting-corner-frequency-to-constrain-maximum-disturbance-in-a-high-pass-filte
|
Question: <p>The research paper "<a href="http://download.springer.com/static/pdf/256/art%253A10.1155%252F2010%252F680429.pdf?originUrl=http%3A%2F%2Fjivp.eurasipjournals.springeropen.com%2Farticle%2F10.1155%2F2010%2F680429&token2=exp=1464792290~acl=%2Fstatic%2Fpdf%2F256%2Fart%25253A10.1155%25252F2010%25252F680429.pdf*~hmac=1368b6a8a8f70efe11f4573a825e981cb36e992cb46e41783edc39e637da5867" rel="nofollow"><em>Multidirectional Scratch Detection and Restoration in Digitized Old Images</em></a>" says that,</p>
<blockquote>
<p>4.1. Preprocessing. The preprocessing step aims to enhance image features along a set of chosen directions. First, image is grey-scaled
and filtered with a sharpening filter (we subtract from the image its
<strong>local-mean filtered version</strong>), thus eliminating the DC component.</p>
</blockquote>
<p>Now, I am Googling the term "<a href="https://www.google.com/search?q=local%20mean%20filter&oq=local%20mean%20filter&aqs=chrome.0.69i59.4479j0j7&sourceid=chrome&es_sm=93&ie=UTF-8" rel="nofollow"><em>Local Mean Filter</em></a>" but there is nothing available like that.</p>
<p>Can anyone please provide me any reference of "<em>Local-Mean Filter</em>"?</p>
<p><strong>What filter are they using for Sharpening?</strong></p>
Answer: <p>They probably just wanted to say that the image was blurred (by some local method, e.g. convolution with a Gaussian kernel) in a more scientific way. </p>
<p>On the sharpening: They don't sharpen the image directly. What they do is blur the image and then subtract the blurred version from the original. The result is the same as sharpening directly via an appropriate filter. </p>
<p><em>Why this works</em></p>
<p>Assume the image $I$ and a blurring kernel $g$ (which could be a Gaussian kernel). In Fourier space, blurring the image by convolution becomes a multiplication:</p>
<p>$\mathscr{F}(g\star I) = \mathscr{F}(g)\mathscr{F}(I)$</p>
<p>with $\mathscr{F}(\cdot)$ being the Fourier-Transformation. </p>
<p>Then $I - g \star I$ becomes: $\mathscr{F}(I) - \mathscr{F}(g)\mathscr{F}(I) = \mathscr{F}(I)(1-\mathscr{F}(g))$.</p>
<p>Since $\mathscr{F}(g)$ enhances the low-frequency components in Fourier-Space, $1-\mathscr{F}(g)$ enhances the higher frequencies (I assume here that $\mathscr{F}(g)$ is normalized appropriately in magnitude, which can be achieved via scaling factors). So, in image-space, the resulting image is a sharpened version of $I$. </p>
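<p>A minimal 1-D sketch of this unsharp-masking idea (plain Python, with a simple box filter standing in for the local-mean filter):</p>

```python
def moving_average(x, k):
    """Local mean with a length-k box kernel (zero-padded at the edges)."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / k)
    return out

signal = [0, 0, 0, 10, 10, 10, 0, 0, 0]  # a step-like "edge"
blurred = moving_average(signal, 3)
sharp_component = [s - b for s, b in zip(signal, blurred)]
# The difference is zero in flat regions and large around the edges,
# i.e. the DC/low-frequency content has been removed.
print(sharp_component)
```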
|
https://dsp.stackexchange.com/questions/31219/what-is-local-mean-filter
|
Question: <p>Let's assume we have an $x(n)$ time sequence, whose $f_s$ sample rate is 20 kHz. We are required to design a linear-phase lowpass FIR filter that will attenuate the undesired high-frequency noise beyond 4kHz analog frequency. So we design a lowpass FIR filter and come out with an equation for the unit impulse response $h_{low}(n)$,and assume that our filter design exercise is complete. Sometime later, unfortunately, we learn that the original $x(n)$ sequence's sample rate was not 20kHz, but 40 kHz. What must we do to our lowpass filter's $h_{low}(n)$ coefficients, originally designed based on a 20kHz sample rate, so that they will attenuate $x(n)$'s undesired high-frequency noise when the $f_s$ sample rate is actually 40kHz?</p>
<p>Hint-typical low pass filter's frequency response makes a pair with its unit impulse response which is a sinc function</p>
Answer:
|
https://dsp.stackexchange.com/questions/36103/fir-filter-design-sampling-rate
|
Question: <p>So I understand that a type 3 filter is not suitable for a highpass filter design, but is there any reason why it isnt suitable for a lowpass filter? </p>
<p>So ultimately, can a type 3 linear-phase FIR filter be used to design a lowpass filter? Why or why not? </p>
Answer: <p>A type 3 FIR filter has odd symmetry and an odd number of taps. For this reason it has frequency response zeros at $\omega=0$ (DC) and $\omega=\pi$ (Nyquist), corresponding to transfer function zeros at $z=1$ and $z=-1$:</p>
<p>$$H(1)=\sum_{n=-M}^{M}h[n]=0\\
H(-1)=\sum_{n=-M}^{M}(-1)^nh[n]=0$$</p>
<p>where $N=2M+1$ is the filter length. So it can neither be used as a high pass nor as a low pass filter.</p>
<p>Apart from that, the phase shift of $\pi/2$ caused by the odd symmetry is usually undesirable for frequency-selective filters. You could use such a filter for implementing Hilbert transformers or differentiators.</p>
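<p>The two zeros are easy to verify numerically (plain Python, with an arbitrary type 3 impulse response):</p>

```python
# An arbitrary type 3 FIR filter: odd length, odd (anti)symmetry h[n] = -h[N-1-n].
h = [-0.3, -0.1, 0.5, 0.0, -0.5, 0.1, 0.3]  # length 7, antisymmetric, middle tap 0

H_dc = sum(h)                                              # H(z) at z = 1
H_nyquist = sum(((-1) ** n) * c for n, c in enumerate(h))  # H(z) at z = -1

print(H_dc, H_nyquist)  # both are exactly 0
```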
|
https://dsp.stackexchange.com/questions/40685/fir-filters-type-3
|
Question: <p>I am looking for a lowpass FIR filter with flat passband but equi-ripple stopband. In other words, it likes <a href="https://en.wikipedia.org/wiki/Chebyshev_filter" rel="nofollow noreferrer">Chebyshev_filter Type II</a> except that it is FIR instead of IIR. Linear-phase is preferred.</p>
<p>Thanks</p>
Answer: <p>I suggest you the Parks-McClellan method. In Matlab you can use $\tt{firpm}$.</p>
<p>Matlab's (now obsolete function) $\tt{remez}$ also uses this scheme.</p>
<p>The FIR filter is optimally designed to approximate e.g. a Chebyshev filter such that the maximum error between the filter's response and the desired response is minimized.</p>
<p>For more information see <a href="https://au.mathworks.com/help/signal/ref/firpm.html" rel="nofollow noreferrer">this</a> and the examples therein.</p>
|
https://dsp.stackexchange.com/questions/40822/fir-filter-design-with-flat-passband-but-equi-ripple-stop-band
|
Question: <p>Are least square filters, or filters that minimize error energy, the same as least mean square adaptive filters?</p>
Answer: <p><strong>TL;DR:</strong> No, they are not necessarily the same.</p>
<hr>
<p><strong>Gory Details</strong></p>
<p>Least squares is just an optimization technique. It is used in a variety of ways.</p>
<p>For filter <strong>design</strong> it is used to select that realizable filter $H_r(e^{j\omega})$ that most closely matches, in the least squares sense, the ideal required filter response $H_i(e^{j\omega})$:
$$
H_r(e^{j\omega}) = \arg \min \parallel H_r - H_i \parallel_2
$$
where $\parallel \cdot \parallel_2$ is the 2-norm or least-squares norm.</p>
<p>This sort of filter $H_r$ is not adaptive. That is, it doesn't change once it has been designed.</p>
<p>Adaptive filters may also use the least squares criterion, but in a different way: as part of the <strong>adaptation step</strong>.</p>
<p>Adaptive filters start off with initial filter coefficients $\vec{w}_o[0]$ and then use an update:
$$
\vec{w}_o[n] =\vec{w}_o[n-1] + \mu g[n-1]
$$
where $\mu$ is the step-size and $g$ is the gradient of the least squared error surface in the direction of the minimum (from our current "location" of $\vec{w}_o[n-1]$).</p>
<p>Here, $g$ is determined by our error criterion: least squares. This means:
$$
\parallel \vec{w}_{\tt opt} - \vec{w}_o \parallel_2
$$
where $ \vec{w}_{\tt opt}$ is the unknown optimal (minimizing) solution.</p>
|
https://dsp.stackexchange.com/questions/42192/are-all-least-square-filters-adaptive
|
Question: <p>I always read the word "phase" (like linear phase, phase shift...) in DSP but I'm still not sure what it supposes to mean, in intuition and also in practice.</p>
Answer: <p>The phase of a sinusoid $s(t)=A_0\cos(2\pi f_0 t + \phi_0)$ is $\phi_0$ radians.</p>
<p>If this sinusoid goes through an LTI system with frequency response $H(f)$, then the output is $y(t) = |H(f_0)| A_0 \cos(2\pi f_0 t + \phi_0 + \angle H(f_0))$. So, the phase of the output is different than the phase of the input -- the system introduced a <em>phase shift</em> in the signal.</p>
<p>Note that you can interpret the phase shift as a time delay. If $T_0=1/f_0$, then a phase shift of $2\pi$ corresponds to a delay of $T_0$. This allows you to calculate that the delay $\Delta$ for a phase shift $\theta$ is $$\Delta = \frac{T_0 \theta}{2\pi}.$$ Note that the delay is a function of the sinusoid's frequency. An LTI system that produces the same time delay for all its inputs is said to have <em>linear phase</em>.</p>
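<p>A small sketch of the delay formula (plain Python; the numbers are just illustrative):</p>

```python
import math

def delay_from_phase(f0, theta):
    """Time delay corresponding to a phase shift of `theta` radians
    at frequency f0, via delta = T0 * theta / (2*pi)."""
    T0 = 1.0 / f0
    return T0 * theta / (2.0 * math.pi)

# A quarter-cycle (pi/2) shift of a 100 Hz sinusoid is a 2.5 ms delay:
print(delay_from_phase(100.0, math.pi / 2))  # 0.0025
```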
<p>Note that LTI systems without linear phase will in general distort their input, even if they have constant gain; this is the reason we want to design linear phase systems. In the case of filters, usually we only require linear phase in the filter's passband.</p>
<p>(Note that this explanation is for continuous time, but the ideas are also valid for discrete time).</p>
|
https://dsp.stackexchange.com/questions/43870/an-explanation-of-phase-of-a-filter
|
Question: <p>I'm working on implementing a filter with a very slow step response. This filter is implemented as a cascaded second-order-section filter (transposed direct form 2). I'm using the output of this filter as the input to a controller. Thus I'm trying to slow down how quickly the controller set point is able to change.</p>
<p>I'm running into an issue where I would like to initialize the set point of this controller. In order to do this, I need to provide initial conditions to the filter. </p>
<p>Let's assume I have a 7th order filter implemented in SOS stages as described above. Is it a trivial task to find initial conditions such that I can specify the output but also have it such that there is no "momentum" in the filter for it to drive from its initial output?</p>
<p>If this were an analog filter, I am essentially trying to specify that y(0) = constant, while y'(0), y"(0), ... etc. all equal zero.</p>
<p>I've never attempted to do something similar with a digital filter. Is this trivial? Is this non-trivial? Are there any references on the subject?</p>
<p>Thanks for the help!</p>
<p>Edit: So I've changed up my system design such my filter output isn't the setpoint, but rather a deviation from a setpoint. This allows me to simply zero the filter to achieve what I was originally trying to do. However, my question still stands for a matter of interest. </p>
Answer: <p>If you want a stable linear time-invariant system to output constant $y$, it must have received input $x$ that is the ratio of the constant output and the zero frequency response of the system $H(1):$</p>
<p>$$y = H(1)x\quad \Leftrightarrow \quad x = \frac{y}{H(1)}$$</p>
<p>Or, you'd want to change the state of the system so that it reflects that situation. The system must satisfy the condition $H(1) \ne 0.$</p>
<p>For a composite filter that consists of a serially connected cascade of second-order sections, you can start with the first section with input $x$ and calculate sequentially using $y = H(1)x$ (recycling the variable names to mean the inputs and outputs of a single section) the constant intermediate output of each section. Or you could start from the other end and use $x = y/H(1).$ Knowing both the input $x$ and output $y$, use the signal flow diagram of each section to calculate its state variables, starting with the summation point with dependency only to known constants. Because all oscillation has settled due to constant input and stability of the filter, all signals are constants rather than functions of time, and the delays in the flow diagram have become identities.</p>
<p><a href="https://upload.wikimedia.org/wikipedia/commons/d/d6/Biquad_direct_form_2_transposed.svg" rel="nofollow noreferrer"><img src="https://upload.wikimedia.org/wikipedia/commons/d/d6/Biquad_direct_form_2_transposed.svg" alt="Transposed direct form 2 flow diagram"></a><br><em>Figure 1. Transposed direct form II biquad signal flow diagram. (CC BY-SA 3.0 by Fcorthay)</em></p>
<p>Given constant input $x$, the constant output $y$ of a biquad section depicted in Fig. 1 is:</p>
<p>$$y = H(1)x = \frac{b_0+b_11^{-1}+b_21^{-2}}{1+a_11^{-1}+a_21^{-2}}x = \frac{b_0 + b_1 + b_2}{1 + a_1 + a_2}x$$</p>
<p>As for the state variables, the top sum $m_0,$ the middle sum $m_1,$ and the bottom sum $m_2$ are calculated as (start calculation from the bottom):</p>
<p>$$\begin{align}
m_0 &= b_0x + m_1\\
m_1 &= b_1x - a_1y + m_2\\
m_2 &= b_2x - a_2y\\
\end{align}$$</p>
<p>The top sum $m_0$ should equal $y$, which we can verify:</p>
<p>$$\begin{align}m_0 &= b_0x + b_1x - a_1y + m_2\\
&= b_0x + b_1x - a_1y + b_2x - a_2y\\
&= b_0x + b_1x - a_1\frac{b_0 + b_1 + b_2}{1 + a_1 + a_2}x + b_2x - a_2\frac{b_0 + b_1 + b_2}{1 + a_1 + a_2}x\\
&= \frac{b_0 + b_1 + b_2}{1 + a_1 + a_2}x\\
&= y
\end{align}$$</p>
|
https://dsp.stackexchange.com/questions/44350/is-initializing-a-digital-filters-output-with-no-momentum-a-non-trivial-task
|
Question: <p>In an ideal design, a digital filter has a target gain in the passband and a zero gain (−∞ dB) in the stopband. In a real implementation, a finite transition region between the passband and the stopband, which is known as the transition band, always exists. The gain of the filter in the transition band is unspecified. The gain usually changes gradually through the transition band from 1 (0 dB) in the passband to 0 (−∞ dB) in the stopband <a href="http://zone.ni.com/reference/en-XX/help/371325F-01/lvdfdtconcepts/dfd_filter_spec/" rel="nofollow noreferrer">http://zone.ni.com/reference/en-XX/help/371325F-01/lvdfdtconcepts/dfd_filter_spec/</a>.</p>
<p>Question:
In an ideal design, a digital filter has a target gain in the passband and a zero gain (−∞ dB) in the stopband.
Does this play a role in the appearance of the ripples in the stopband and passband?</p>
<p>Why does the transition band always exist? Is this because the gain changes gradually through the transition band from the passband to the stopband?</p>
<p>In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input to the output port.by adding energy converted from some power supply to the signal <a href="https://en.wikipedia.org/wiki/Gain_(electronics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Gain_(electronics)</a>. I want to know how this definition is applied in the design of digital filters.</p>
Answer: <p>In an actual design you need to allow for a smooth transition from the passband to the stopband because the magnitude response of a realizable (i.e., causal and stable) filter is smooth; it can't jump. Of course you can try to approximate a jump in the magnitude, but you'll always get a smooth magnitude response (cf. Gibbs phenomenon). Defining a "don't care" transition band with no specification will decrease the approximation error in the bands of interest.</p>
<p>I don't understand your question about the ripples in the passband and stopband. Maybe you can clarify this and I'll edit my answer.</p>
<p>The passband gain of a filter is simply the amplification factor for signal components that are in the filter's passband.</p>
|
https://dsp.stackexchange.com/questions/46454/transition-bands-and-passband-gain-in-digital-filter-design
|
Question: <p>I am trying to implement a FIR filter on FPGA and trying to have a solid understanding of the FIR filter tap delay and sampling frequency.</p>
<p>Does the “one tap” delay equal to “1/Fs (sampling frequency)”? If I have N-tap, the total delay will be N/Fs? If the Fs sampling frequency is increased, the “one tap” delay is decreased?</p>
<p>Is there any trade off/drawback of increasing the sampling frequency? Maybe it will take more processing time for the output y(n) to come out?</p>
<p>Thanks</p>
Answer: <blockquote>
<p>Does the “one tap” delay equal to “1/Fs (sampling frequency)”?</p>
</blockquote>
<p>Yes</p>
<blockquote>
<p>If I have N-tap, the total delay will be N/Fs?</p>
</blockquote>
<p>Depends on how you define "total" delay, but in general the answer is no. For a minimum phase filter the delay will be 1. For a linear phase filter it would roughly be N/2</p>
<blockquote>
<p>If the Fs sampling frequency is increased, the “one tap” delay is decreased?</p>
</blockquote>
<p>Yes. But it also changes the frequency response of your filter. </p>
<blockquote>
<p>Is there any trade off/drawback of increasing the sampling frequency?</p>
</blockquote>
<p>Many. It really depends on what the requirements of your application are. </p>
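<p>The roughly-$N/2$ delay of a linear-phase FIR filter is easy to check numerically (plain Python, arbitrary symmetric taps):</p>

```python
def fir(h, x):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

# A 9-tap symmetric (hence linear-phase) kernel:
h = [1, 2, 3, 4, 5, 4, 3, 2, 1]
impulse = [1] + [0] * 19
y = fir(h, impulse)
peak = y.index(max(y))
print(peak)  # 4 taps = (N-1)/2, i.e. (N-1)/(2*Fs) seconds of delay
```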
|
https://dsp.stackexchange.com/questions/46888/about-how-to-increase-the-fir-filter-sampling-frequency-in-fpga-and-what-is-the
|
Question: <p>I'd like to make a filter which essentially masks the spectrum except for frequencies around music notes in the standard tempered scale, i.e. $frequency \in 110 \times 2^\frac{i}{12}, 10 \le i \le 64$, in the case of a violin. The passband around each note should be narrow, perhaps 1% of the space between notes. The idea is that the sound of a violin will be loudest when the played note is in tune, and quieter when not quite hitting the correct note.</p>
<p>What would be the best way to do this? I was thinking perhaps a series connection of 10 comb filters, with the final output subtracted from the input signal. The filter for the $2^\frac{7}{12}\approx1.5$ will be covered by the comb filter $F_c\times 3, F_c\times 6, etc.$, albeit a little out of tune.</p>
<p>Another way would be 55 notch/peaking filters. Would these be best in series or parallel?</p>
<p>Is there a better way?</p>
<p>The solution will be done using 16 or 32 bit fixed-point on a microcontroller, depending on what sounds good enough. I'll try for $F_s$=44kHz, 16bit audio in/out.</p>
<p>Thanks,
James</p>
Answer:
|
https://dsp.stackexchange.com/questions/49426/creating-a-music-note-filter-notch-peaking
|
Question: <p>I am confused about what a two-pass FIR (bandpass) filter with order 40 means. Passband frequencies are [8 13]. Is a Type 2 FIR the same as two-pass? </p>
<p>I have check some previous literature which shows Linear-phase FIR filter can be divided into four basic types.</p>
<p>TYPE I symmetric length is odd</p>
<p>TYPE II symmetric length is even</p>
<p>TYPE III anti-symmetric length is odd</p>
<p>TYPE IV anti-symmetric length is even</p>
<p>But according to mathworks ([firtype][1]) [1]: <a href="https://jp.mathworks.com/help/signal/ref/firtype.html" rel="nofollow noreferrer">https://jp.mathworks.com/help/signal/ref/firtype.html</a></p>
<p>Type 1 - Symmetric factor with even degree</p>
<p>Type 2 - Symmetric factor with odd order</p>
<p>Type 3 - Asymmetric coefficient with even order</p>
<p>Type 4 - Asymmetric coefficient with odd order</p>
<p>kindly correct me if my understanding is wrong : Length = (Order-1)/ 2</p>
<p>If Type II is two-pass then the order of the filter will be even or Odd??</p>
Answer: <p>You can't in general categorize a two-pass filter in any of those listed filter types. "Two-pass" means that the filter processes the data in two passes; the output of the first pass is used as the input of the second pass.</p>
<p>The two-pass filter's impulse response is the convolution of the impulse responses of the filters in the two passes. This comes from the <a href="https://en.wikipedia.org/wiki/Convolution#Algebraic_properties" rel="nofollow noreferrer">algebraic properties of convolution</a>:</p>
<p>$$\text{output}= f*(f*\text{input}) = (f*f)*\text{input},$$</p>
<p>where $*$ denotes convolution and $f$ is the impulse response of a single pass. The two passes have an identical impulse response to match your description. Because convolution is equivalent to multiplication in the frequency domain, the two-pass filter's frequency response will be the square of the frequency response of a single pass.</p>
<p>With infinite impulse response (IIR) filters, also a configuration where the second filter processes the data backwards can be useful, and the two filters may be connected in parallel so that the outputs from the two passes are summed to form the composite filter output. Such configurations are not useful with FIR filters.</p>
|
https://dsp.stackexchange.com/questions/50121/what-is-meant-by-two-pass-fir-filter-a-basic-question
|
Question: <p>The complex function $D(e^{-j\omega})$ is defined on the domain of approximation $\Omega$. In most cases the domain $\Omega$ is the union of several disjoint frequency bands which are separated by transition bands where no desired response is specified. We denote the union of all passbands by $\Omega^p$ and stopbands by $\Omega^s$. If the designed filter is to have real-valued coefficients, only the domain $\Omega\cap[0,\pi]$ is considered. What is the type of coefficients used in the domain $[0,2\pi]$ or $[-\pi,\pi]$?</p>
<p>Thanks in advance.</p>
Answer: <p>If you have to consider the whole frequency spectrum range in $[0,2\pi]$ for the design of the discrete-time filter, without assuming any type of symmetry, then you are considering the most general case of the filter and its coefficients will be <strong>complex</strong> valued. And nothing more can be said about them, unless you impose further constraints on the impulse and frequency responses of the filter.</p>
|
https://dsp.stackexchange.com/questions/50795/the-desired-frequency-response-specifications
|
Question: <p>I have used Scilab functions to produce a low-pass filter for an audio signal and the coefficients for the associated constant coefficient difference equation (CCDE). I then produced filtered signals by running the Scilab <code>filter()</code> function and by running my implementation of the CCDE on the audio signal. The results are identical. </p>
<p>The Scilab <code>filter()</code> function runs considerably faster, by perhaps a factor of 30.</p>
<p>I am new to DSP and I am trying to better understand what the Scilab <code>filter()</code> function is doing that allows it to use the same coefficients so efficiently. In looking at the associated Scilab files, it looks like there is some compiled code behind the <code>filter()</code> function.</p>
<p>Any pointers as to technique or reference materials would be appreciated.</p>
Answer: <p>Scilab's <code>filter</code> function, for <strong>short</strong> coefficient vectors, implements a linear convolution in C code; that alone, since there's no script code to actually be evaluated here, just multiplication and addition, is much much faster than writing something in a scripting language that can't 100% be just-in-time compiled.</p>
<p>For longer vectors, scilab implements <em>fast convolution</em>; ie. it exploits the fact that (circular) convolution in time domain corresponds to point-wise multiplication in (discrete) frequency domain, and uses zero-padding and saving of overlaps to emulate the linear convolution (which filtering represents) with that.</p>
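<p>The time/frequency-domain equivalence behind fast convolution can be checked with a naive DFT (plain Python; a real implementation would of course use an FFT together with overlap-add or overlap-save):</p>

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

h = [1.0, -2.0, 1.0]
x = [3.0, 0.0, 1.0, 2.0]

# Linear convolution computed directly:
direct = [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
          for n in range(len(h) + len(x) - 1)]

# Same result via zero-padding to the full output length and
# point-wise multiplication in the frequency domain:
L = len(h) + len(x) - 1
H = dft(h + [0.0] * (L - len(h)))
X = dft(x + [0.0] * (L - len(x)))
fast = [c.real for c in idft([a * b for a, b in zip(H, X)])]

print(all(abs(d - v) < 1e-9 for d, v in zip(direct, fast)))  # True
```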
<p>So, either way, use your libraries when doing signal processing! Aside from the convolution, there's other things that are generally faster if done via clever usage of library functionality: For example, whenever you have a loop that looks like</p>
<pre><code>sum = 0
for a, b in zip(vectorA, vectorB):
    sum += a*b
</code></pre>
<p>you'd be far, far better off doing a dot product of the two vectors.</p>
<p>You have to consider this: Your CPU is <em>very</em> fast at doing basic math operations – often, it can do for example 8 multiply-and-accumulates (MAC) operations in a single step. Compared to that, parsing the structure of the (precompiled, even) <code>for</code> loop, building temporary python objects to hold the individual values for <code>a</code> and <code>b</code>, and overwriting the <code>sum</code> object, thus removing the old object and replacing it with a new one, leading to garbage collection and so on, is way way way way more work than just doing the maths. I like to put it like this:</p>
<blockquote>
<p>Imagine you're tasked with multiplying a lot of numbers between 0 and 10, but the numbers you need to multiply are written in text form in a book.<bR>
Reading that book will take much, much longer than the multiplications <br/></p>
</blockquote>
<p>That's how it is to use dynamic languages to do basic math operations. </p>
|
https://dsp.stackexchange.com/questions/50895/ccde-processing-vs-scilab-function
|
Question: <p>IIR filters can be designed using different methods,such as: </p>
<ul>
<li>Analog Prototyping</li>
<li>Direct Design</li>
<li>Generalized Butterworth Design</li>
<li>Parametric Modeling</li>
</ul>
<p><a href="https://www.mathworks.com/help/signal/ug/iir-filter-design.html?lang=en#brbq5qb" rel="nofollow noreferrer">https://www.mathworks.com/help/signal/ug/iir-filter-design.html?lang=en#brbq5qb</a></p>
<p>There is another technique named model order reduction (MOR); it is used to reduce the model order while preserving the model characteristics that are important for the application. Generally, working with lower-order models can simplify analysis and control design relative to higher-order models.
<a href="https://www.mathworks.com/help/control/ug/about-model-order-reduction.html" rel="nofollow noreferrer">https://www.mathworks.com/help/control/ug/about-model-order-reduction.html</a></p>
<p>It is still unclear to me how all these methods for designing IIR filters differ. What is the difference between them? Briefly, when can I use each of these methods? </p>
<p>Thank you in advance.</p>
Answer: <p>If you're still new to digital filter design, I would not recommend you to dive into methods for model order reduction. First of all, they are not filter design methods, but they represent a second step to simplify an already existing model/system. Second, these methods are more applicable to control design, where you might have obtained some overly complex model by discretization and/or linearization. The problem of an inappropriate (too high) system order will generally not occur in the design of digital filters, and if it does, you simply design another filter with the same specifications but with a smaller filter order.</p>
<p>It is impossible to explain even just the basics of IIR filter design in one answer, but as a simple guideline, if the desired filter characteristic is one of the four standard characteristics (low pass, high pass, band pass, band stop), and if the phase response is irrelevant, then you're probably best off with a design based on the transformation of an analog prototype filter (Butterworth, Chebyshev, etc.). For non-standard frequency responses, including prescribed phase responses, you will need some non-linear optimization techniques. In some cases it might also be worthwhile to consider FIR filters instead of IIR filters, simply because they are much easier to design, and because they're always guaranteed to be stable.</p>
<p>For a brief overview of digital filter design basics take a look at <a href="https://dsp.stackexchange.com/a/9552/4298">this answer</a>. <a href="http://mattsdsp.blogspot.com/p/phd-thesis.html" rel="nofollow noreferrer">This thesis</a> is probably too specialized for you at this point, but you can find some important references in its bibliography.</p>
<p>I highly recommend these two books:</p>
<p><em>Digital Filter Design</em>, Parks and Burrus</p>
<p><a href="http://www.ece.rutgers.edu/~orfanidi/intro2sp/orfanidis-i2sp.pdf" rel="nofollow noreferrer"><em>Introduction to Signal Processing</em>, Orfanidis</a></p>
|
https://dsp.stackexchange.com/questions/51026/design-digital-filter-with-model-order-reduction-mor-and-other-methods
|
Question: <p>I want to apply least-squares linear-phase FIR filter design, with a frequency-domain specification that is not symmetrical.</p>
<p>The pass-band error function,</p>
<p>$$E(\mathbf{h})_p=\int_{\omega_{p_1}}^{\omega_{p_2}}| \mathbf{c}^T(\omega)\cdot \mathbf{h}-D(\omega)|^2d\omega \tag{1}$$</p>
<p>The stop-band error function,</p>
<p>$$E(\mathbf{h})_s=\int_{\omega_{s_1}}^{\omega_{s_2}}|\mathbf{c}^T(\omega)\cdot \mathbf{h}|^2d\omega\tag{2}$$</p>
<p>h: unknown complex coefficients.</p>
<p>Total error function,</p>
<p>$$E(\mathbf{h})_t=w_1E(\mathbf{h})_p+w_2E(\mathbf{h})_s\tag{3}$$</p>
<p>$w_1,w_2$: weighting constants .</p>
<ul>
<li>The equation $(1)$ : which represents a filter approximation by determine the error between actual and desired response in pass-band , why the method is different in stop-band (equation $(2)$) ?</li>
</ul>
<p>- In Matlab, does the <code>freqz</code> function achieve the transpose operation?</p>
Answer: <p>In the stopband(s), Eqs $(1)$ and $(2)$ are equivalent, because in the stopband the desired response equals zero: $D(\omega)=0$. The reason why you might want to split the error in passband and stopband error is to apply different weights, as shown in your Eq. $(3)$.</p>
<p>The function <code>freqz</code> computes the frequency response of a discrete-time filter on a grid of frequencies, given the filter coefficients. I'm not sure what you mean by "achieve the transpose operation".</p>
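<p>A minimal sketch of what <code>freqz</code> computes (plain Python; real code would just call the built-in function):</p>

```python
import cmath

def freq_response(b, a, w):
    """Evaluate H(e^{jw}) = B(e^{jw})/A(e^{jw}) on a grid of frequencies w
    (radians/sample), given coefficient vectors b and a -- essentially
    what freqz does."""
    out = []
    for wi in w:
        z = cmath.exp(1j * wi)
        num = sum(bk * z ** (-k) for k, bk in enumerate(b))
        den = sum(ak * z ** (-k) for k, ak in enumerate(a))
        out.append(num / den)
    return out

# 3-tap moving average (FIR, so a = [1]):
b = [1 / 3, 1 / 3, 1 / 3]
H = freq_response(b, [1.0], [0.0, cmath.pi])
print(abs(H[0]), abs(H[1]))  # gain 1 at DC, 1/3 at Nyquist
```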
|
https://dsp.stackexchange.com/questions/51930/design-of-a-complex-fir-filter
|
Question: <p>This is my very first time in dealing with signal processing, so I am sorry if I will not use a rigorous terminology.</p>
<p>I am dealing with some issues about noise modeling in matlab. I'm trying to figure out a way to construct a model (filter) of a noise from data. My first problem is that I have no <span class="math-container">$ S_{xx}(f) $</span> of the disturbances but only experimental data.</p>
<p>I have plots of the particular requirement with other noise characteristics in differential (or stray) acceleration <span class="math-container">$ (m/s^{2})/\sqrt{Hz}$</span> vs frequency <span class="math-container">$ Hz $</span>.</p>
<p>I am wondering how can I get a spectrum from those disturbances and then use it to recreate the noise effect of the disturbances in matlab.</p>
<p>Edit:</p>
<p>After some research, I have some questions:
I am inclined to design a noise-shaping filter to introduce disturbances in the model. But my problem still remains: how can I be sure that the noise produced by the filter will generate a PSD consistent with the one that I have in my data?</p>
<p>I am referring to something like <a href="http://faculty.etsu.edu/blanton/lab_3_psd.doc" rel="nofollow noreferrer">this</a> for basic use in MATLAB. But these are old notes and the writer uses old versions of MATLAB functions (such as <code>psd</code>) and I don't know how to apply it to my case.
I found <a href="http://www.schmid-werren.ch/hanspeter/publications/2012fftnoise.pdf" rel="nofollow noreferrer">this</a> for a more detailed and complex procedure, but even here I don't know how to use it properly. </p>
<p>I forgot to present this alternative: <a href="https://it.mathworks.com/matlabcentral/fileexchange/32111-fftnoise-generate-noise-with-a-specified-power-spectrum" rel="nofollow noreferrer">fftnoise</a> which seems to do exactly what I need</p>
<p>EDIT:
Given the interpolation of the ASD (amplitude spectral density), which is <span class="math-container">$ \sqrt{PSD} $</span>, how can I get the $H(f)$ of the filter?
I am not sure how to use <code>prony</code> and <code>frd</code> (System Identification Toolbox).</p>
<pre><code>% points
x = ([0.1 0.12 0.2 0.24 0.4 1 2 3].*1e-3);
y = ([150 100 40 30 20 13 12 12].*1e-15);
xlog = log10(x);
ylog = log10(y);
% fitting
N = 1000;
pp = polyfit(xlog,ylog,3);
freq_log = linspace(xlog(1),xlog(end),N);
CAPACT_sensing_ASD_log = polyval(pp,freq_log);
CAPACT_sensing_ASD = 10.^CAPACT_sensing_ASD_log;
freq = 10.^freq_log;
% ESTIMATION
impulse_resp = ifft(CAPACT_sensing_ASD);
f = logspace(-4,-1,N);
phases=rand(1,N);
phases = 0; %(phases-median(phases))*2*pi;
% phases=complex(cos(phases),sin(phases));
response = impulse_resp.*exp(1j*phases*pi/180);
h = idfrd(response,f,1);
data = frd(h,freq,'FrequencyUnit','Hz'); % object asked by frd! Am I doing it right?
H_CAP_est = tfest(data,3,2);
Num_est = H_CAP_est.Numerator;
Den_est = H_CAP_est.Denominator;
% PRONY
denom_order = 3;
num_order = 2;
[Num_pro,Den_pro] = prony(real(ifft(response)),num_order,denom_order);
s = tf('s');
H_CAP_prony = tf(Num_pro,Den_pro);
</code></pre>
Answer:
|
https://dsp.stackexchange.com/questions/52526/generation-of-noise-shape-filter-from-power-spectrum-density
|
Question: <p>A practical example: I perform a DCT on a time series of discrete values that are spaced in time by 1/30 of a second. What frequencies each bin of this DCT represents? What is the formula to find that?</p>
<p>If I want to filter the DCT to remove all signal bins that correspond to frequencies below 0.4 Hz and above 4 Hz, and keep the values between these intact, how should the filter impulse be constructed? </p>
<p>I understand that in order to filter that DCT I have to to build a filter array that is a series of zeros in the bins I want to remove and ones in the bins I want to keep and multiply elements in each array by its correspondent in the other, zeroing the bins I don't want.</p>
<p>But I need to know the frequency each bin on the original DCT represents.</p>
<p>Please help.</p>
Answer:
|
https://dsp.stackexchange.com/questions/55571/what-frequency-each-bin-of-a-discrete-cosine-transform-represents
|
Question: <p>I'm building a circuit for an electret microphone and I want to build a bandpass filter around the op-amp.
I was using <a href="https://youtu.be/ts-JqEVzvDo?t=394" rel="nofollow noreferrer">this source</a> until I found that most sources (e.g. <a href="https://www.maximintegrated.com/en/app-notes/index.mvp/id/1795" rel="nofollow noreferrer">here</a> and <a href="https://www.electronics-tutorials.ws/filter/filter_3.html" rel="nofollow noreferrer">here</a>) indicate that the RC filter needs to be grounded.
I can't think of a way to check this (an FFT doesn't seem to indicate whether the first source is correct or not). Any ideas on how to check, or information on which source is correct?</p>
Answer: <p>First, the youtube source shows a 1st order active filter, the maxim source shows a 2nd order active filter and the last source shows a first order passive filter. That's three different designs.</p>
<p>A 1st order active filter can be implemented either in the negative feedback branch of the op amp (as in youtube source), this does not need to be grounded.</p>
<p>Or it can be done as a classical RC circuit followed by impedance converter or amplifier. Here, either the R goes to ground (highpass) or the C goes to ground (lowpass)</p>
<p>If you combine these two approaches you get a 2nd order filter. So, wether "the filter needs to be grounded", as you ask, depends strongly on where its implemented. </p>
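<p>For reference, the corner frequency of either first-order RC arrangement is <em>f<sub>c</sub> = 1/(2πRC)</em>, with the familiar first-order magnitude responses around it. A small sketch (the component values are hypothetical, chosen only to land near a 100 Hz corner):</p>

```python
import math

def cutoff_hz(R, C):
    """First-order RC corner frequency, f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * R * C)

def lowpass_mag(f, fc):
    """|H| of a 1st-order low-pass at frequency f."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

def highpass_mag(f, fc):
    """|H| of a 1st-order high-pass at frequency f."""
    return (f / fc) / math.sqrt(1.0 + (f / fc) ** 2)

# hypothetical values: 10 kOhm with 160 nF gives roughly a 100 Hz corner
fc = cutoff_hz(10e3, 160e-9)
```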
|
https://dsp.stackexchange.com/questions/56295/electret-microphone-rc-filter-design-contradiction-among-information-sources
|
Question: <p>I am designing an IIR filter with fixed-point arithmetic and I have to select the proper lengths for the accumulator and product. I would like to know a standard to follow in order to choose their lengths. I have selected the <em>Direct Form 1</em> topology to implement a 6th order digital filter (instead of cascaded second-order blocks), as it has only one point where the sum is done and thus only one point which has to be taken care of when considering quantization error and overflow (correct me if I am wrong!).</p>
<p>For the <em>ACCUMULATOR</em>, I assume that I have to consider the worst case of summing numbers with <strong>N bits</strong>, so the answer should be <strong>N+M bits</strong> to avoid overflow, where <strong>M</strong> represents the quantity of numbers which it is summed. </p>
<p>For the <em>PRODUCT</em>, multiplying two numbers of <strong>N bits</strong>, in the worst scenario gives us a <strong>N+N=2N bits</strong> result.</p>
<p>I don't know if it is this easy or if I am skipping an important concept. </p>
<p>If you have experience in this field, any documentation about designing IIR filters on microcontrollers will be welcomed.</p>
Answer: <ol>
<li>Fixed point processing is very difficult. Floating point is A LOT easier. </li>
<li>The best algorithm and scaling approach depends a lot on your specific filter and the statistics of your signal. There is no "one size fits all" solution.</li>
<li>Cascaded second order sections are almost always the way to go. Primarily they guarantee that your coefficient values are bounded to something reasonable (2 or less). With a transfer function implementation the coefficients tend to get really large or really small, which generates a lot of noise or scaling problems.</li>
<li>Within each section, use either Direct Form I or Transposed Form II. This helps controlling the scaling of the state variables and intermediate results.</li>
<li>The best way to approach this to use "Q notation" <a href="https://en.wikipedia.org/wiki/Q_(number_format)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Q_(number_format)</a> to determine for EVERY SINGLE operation, which bits to use for what purpose. Doing this analysis will determine which bits to compute and which bits to keep (using multiplies, shifts, adds and masks).</li>
<li>Sometimes you need to treat this as a statistical problem. In many cases doing conservative scaling results in very poor signal to noise ratio, so you need to trade off the probability of clipping versus your steady state signal to noise. That's highly dependent on your signal, the specific filter and the requirements of your application.</li>
<li>You also need to carefully control your clipping and rounding behavior to avoid wrap-arounds and limit cycles.</li>
</ol>
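<p>The worst-case sizing arithmetic from the question can be written down directly. A conservative sketch (it deliberately ignores the statistical scaling trade-offs discussed above; the 13-term count is just the Direct Form I tap count for a 6th-order filter, 7 feed-forward plus 6 feedback products):</p>

```python
import math

def product_bits(n_bits):
    """Full-precision product of two signed n-bit operands needs 2n bits."""
    return 2 * n_bits

def accumulator_bits(n_bits, n_terms):
    """Bits needed to sum n_terms values of n_bits each without overflow:
    the sum can grow by ceil(log2(n_terms)) guard bits."""
    return n_bits + math.ceil(math.log2(n_terms))

prod = product_bits(32)           # 32-bit operands -> 64-bit products
acc = accumulator_bits(prod, 13)  # 64 + ceil(log2(13)) = 68-bit accumulator
```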
|
https://dsp.stackexchange.com/questions/59491/criterion-to-choose-length-of-an-accumulator-and-product
|
Question: <p>We are currently designing a type 2 compensator <span class="math-container">$G_1$</span> (1 pole at the origin, 1 zero and 1 pole) to stabilize a power factor correction (PFC) circuitry. The crossover frequency is low - 2-3 Hz - and the compensator is implemented using a biquad structure sampling at 10 kHz. For ripple filtering reasons, this filter will be followed by another low-pass 50-Hz filter <span class="math-container">$G_2$</span> having a sampling frequency of 200 Hz. I need the magnitude and phase responses when the two filters are cascaded. Is it correct to convert these filters in the s-domain then study the response of <span class="math-container">$G_1(s)G_2(s)$</span> or do I need to rescale <span class="math-container">$G_2$</span> in the z-domain with a 10-kHz sampling frequency before analyzing the product of the two transfer functions? Thank you.</p>
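<p>One way to sanity-check this numerically is to evaluate each discrete-time filter at its own sampling rate but on a shared physical frequency axis (limited to the slower Nyquist rate), then multiply the complex responses. A sketch with placeholder Butterworth designs standing in for the actual compensators (the coefficients here are hypothetical, for illustration only):</p>

```python
import numpy as np
from scipy import signal

# hypothetical placeholder coefficients, not the actual compensator designs
b1, a1 = signal.butter(2, 50, fs=10_000)   # stands in for G1 at fs1 = 10 kHz
b2, a2 = signal.butter(1, 50, fs=200)      # stands in for G2 at fs2 = 200 Hz

# evaluate both on the same physical frequency axis, below the slower Nyquist
f = np.linspace(0.1, 99, 500)              # Hz
_, H1 = signal.freqz(b1, a1, worN=f, fs=10_000)
_, H2 = signal.freqz(b2, a2, worN=f, fs=200)
H = H1 * H2                                # cascade response on the shared axis
```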
Answer:
|
https://dsp.stackexchange.com/questions/59608/cascading-filters-at-different-sampling-rates
|
Question: <p>I know that in a RRC filter a high value of the span gives a better response, in the sense that the RRC filter response is more near to the ideal RRC filter response. However, would there be any advantage on using a small span?</p>
<p>Lets say, for example, that I am sending <code>100 BPSK</code> symbols, a value of <code>span = 10</code>, would be better, in terms of the BER, than a <code>span = 2</code>. </p>
<p><strong>Q:</strong> But would there be any advantage with <code>span=2</code>?</p>
Answer: <p>The advantage of a RRC filter with a smaller span is that it has fewer taps, so the filter requires fewer multiplies and additions per sample. Longer filters approximate the ideal RRC response more closely but require more computation. Shorter filters do not approximate the ideal RRC as well but require less computation. Typically, one will choose a filter length that strikes a balance between computational demands and performance.</p>
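<p>To make the trade-off concrete, here is a sketch that generates unit-energy RRC taps from the standard closed-form impulse response (roll-off and samples-per-symbol values are assumed for illustration): a filter spanning <code>span</code> symbols at <code>sps</code> samples per symbol has <code>span*sps + 1</code> taps, so <code>span=10</code> costs roughly five times the multiplies of <code>span=2</code>.</p>

```python
import numpy as np

def rrc_taps(span, sps, beta):
    """Root-raised-cosine taps: `span` symbols long, `sps` samples/symbol,
    roll-off `beta`; yields span*sps + 1 taps, normalized to unit energy."""
    t = np.arange(-span * sps // 2, span * sps // 2 + 1) / sps  # in symbols
    h = np.empty(t.shape, dtype=float)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1.0 - beta + 4.0 * beta / np.pi
        elif np.isclose(abs(ti), 1.0 / (4.0 * beta)):
            h[i] = (beta / np.sqrt(2)) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            h[i] = (np.sin(np.pi * ti * (1 - beta))
                    + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta))) \
                   / (np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return h / np.sqrt(np.sum(h ** 2))

short = rrc_taps(2, 8, 0.35)    # 17 taps: cheap, coarse approximation
long_ = rrc_taps(10, 8, 0.35)   # 81 taps: closer to the ideal response
```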
|
https://dsp.stackexchange.com/questions/59752/is-there-any-advantage-on-using-a-low-span-value-in-a-rrc-filter
|
Question: <p>I have a filter designed in MATLAB with the function <code>cheby2(N, Rs, Ws, 'stop')</code>. The filter gives a nice frequency response for a given parameter set when the filter order is 2 or 4 (N=4). But if I increase the filter order to, say, 14, the magnitude plot of the filter response is not at all smooth; in fact, in the stop band it shows very high gain instead of attenuation. Any suggestion what is going on? What should I look at? Do you need any more information to assess the problem? The parameters I am using are:</p>
<pre><code>Rs = 40;
Wc = 0.2;
Wb = 1/128;
Ws = Wc + 0.5*Wb*[-1, 1];
</code></pre>
Answer: <p>You have a very narrow stop band which means that all the poles are crammed in a very small area of the complex plane, close to the unit circle. This can result in severe numerical problems, even for relatively small filter orders, even with floating point arithmetic.</p>
<p>Another important point that you might not realize is that if you design a band pass or a band stop filter using the command <code>cheby2</code> (and all other similar commands), then <code>N</code> is the order of the prototype low pass filter, and not the order of the resulting band pass or band stop filter. So if you choose <code>N=14</code> then you've designed a band stop filter of order <span class="math-container">$28$</span>. Such a filter is very hard, if not impossible, to realize because of numerical problems.</p>
<p>Also note that calling <code>cheby2</code> with <code>[z,p,g]</code> (zeros, poles, and gain) as output argument usually results in higher accuracy than using the polynomial coefficients.</p>
<p>I would actually suggest trying a notch filter instead of such a narrowband band-stop filter.</p>
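<p>The same spec can be reproduced in SciPy (a sketch of the design from the question, not the asker's exact MATLAB session): requesting second-order sections keeps the order-28 band-stop numerically well behaved, which is exactly why cascaded biquads and zero-pole-gain outputs are preferable to raw polynomial coefficients here.</p>

```python
import numpy as np
from scipy import signal

# narrow band-stop spec from the question: prototype order N = 14,
# i.e. a band-stop filter of order 28
Rs, Wc, Wb = 40, 0.2, 1 / 128
Ws = [Wc - 0.5 * Wb, Wc + 0.5 * Wb]   # normalized edges (1 = Nyquist)

sos = signal.cheby2(14, Rs, Ws, btype='bandstop', output='sos')

# evaluate in the passband (0.1*pi rad/sample) and at the notch center
w, H = signal.sosfreqz(sos, worN=np.array([0.1, 0.2]) * np.pi)
```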
|
https://dsp.stackexchange.com/questions/61177/how-to-smooth-filter-response
|
Question: <p>Consider a transfer function (TF) with <span class="math-container">$n$</span> poles <span class="math-container">$(p_1,..p_n)$</span> and <span class="math-container">$m$</span> zeros <span class="math-container">$(z_1,..z_m)$</span>. One can write the magnitude of the frequency response of the TF in terms of its poles and zeros. </p>
<p>This means one can take the partial derivative of the magnitude of the TF with respect to each pole and zero for a particular frequency. Now consider <span class="math-container">$f_1$</span> to be the frequency at which the TF deviates most from the desired response. Thus one can change the poles and zeros based on the gradient of the magnitude of the TF at <span class="math-container">$f_1$</span> and optimize the poles and zeros for <span class="math-container">$f_1$</span>. Can one keep doing this procedure for each frequency until a desired result is obtained? Is this a valid process of filter design?</p>
<p>Any comment would be appreciated.</p>
Answer: <blockquote>
<p>Does gradient vector of pole zero carry useful information?</p>
</blockquote>
<p>Yes. The partial derivatives can be used in creating iterative search algorithms for fitting IIR filters to arbitrary targets. Examples of algorithms that use the derivatives are Steepest Descent or Conjugate Gradient. </p>
<p>It's not a trivial process though and there are lot of details to be worked out. </p>
<ul>
<li>Poles, zeros and transfer function targets are all complex, so you either need to work with complex derivatives or use a suitable real-valued representation (real/imag, magnitude/phase, etc.). </li>
<li>It's important to formulate a suitable error criterion that properly reflects the requirements and trade-offs for your specific application.</li>
<li>The search algorithms often get stuck in local minima</li>
<li>Log spaced frequency grids may have to be pre-warped to straighten out the error surface a bit. </li>
<li>Many search algorithms have "heuristic" parameters that are hard to get right.</li>
</ul>
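<p>A toy illustration of the idea (a one-parameter sketch, not a production design method): fit the magnitude response of a single-real-pole filter to a target magnitude by steepest descent on a finite-difference gradient.</p>

```python
import numpy as np

# Fit the single real pole a of H(w) = 1 / (1 - a*e^{-jw}) to a target
# magnitude response by steepest descent on a finite-difference gradient.
w = np.linspace(0.0, np.pi, 64)

def mag(p):
    return np.abs(1.0 / (1.0 - p * np.exp(-1j * w)))

target = mag(0.5)          # target generated from a "true" pole at a = 0.5

def err(p):
    return np.mean((mag(p) - target) ** 2)

a, lr, eps = 0.0, 0.02, 1e-6
for _ in range(2000):
    grad = (err(a + eps) - err(a - eps)) / (2 * eps)
    a = float(np.clip(a - lr * grad, -0.99, 0.99))  # keep pole inside |z| < 1
```

<p>With one parameter this converges easily; the list above is about why the full multi-pole, multi-zero version is so much harder.</p>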
|
https://dsp.stackexchange.com/questions/61429/is-gradient-vector-of-pole-zero-carry-usful-information
|
Question: <p>I'm looking for an introductory book to time-frequency analysis. The book should be practical in nature and not mathematics heavy. Suggestions?</p>
Answer: <p>I recommend this book:</p>
<p><a href="http://www.amazon.fr/Understanding-Digital-Signal-Processing-Edition/dp/0137027419" rel="nofollow">Understanding Digital Signal Processing, Richard G. Lyons</a></p>
<p>This book explains the basics concepts of digital signal processing, which includes time-frequency analysis, in a very intuitive way.</p>
|
https://dsp.stackexchange.com/questions/15898/introductory-book-on-time-frequency-analysis
|
Question: <p>I had preliminary knowledge of digital signal processing from Oppenheim's <a href="http://dl.acm.org/citation.cfm?id=1795494" rel="nofollow noreferrer">Discrete-Time Signal Processing</a> and is studying time-frequency analysis now. May someone suggest introductory reference (textbook, website, review paper...) for the fast algorithm aspects of time–frequency analysis?</p>
Answer: <p>If you are interested in discrete linear systems (either time-frequency or time-scale), you could invest in multirate filter banks, which provide tools for computing, optimizing, etc., such as the polyphase matrix or the lifting scheme. A tutorial paper is </p>
<ul>
<li><a href="http://www.systems.caltech.edu/dsp/ppv/papers/ProcIEEEmultirateTUTExtra.pdf" rel="nofollow noreferrer">Multirate digital filters, filter banks, polyphase networks and Applications: A Tutorial</a> by P. P. Vaidyanathan.</li>
</ul>
<p>An earlier reference is </p>
<ul>
<li><a href="http://users.isy.liu.se/en/rt/fredrik/spcourse/multirate.pdf" rel="nofollow noreferrer">Frequency-domain and multirate adaptive filtering</a> by J. Shynk.</li>
</ul>
<p>However, additional notes could be helpful on your side, whether you need:</p>
<ul>
<li>specificities: amount of memory, real-time constraints, quantization</li>
<li>analysis only or analysis/synthesis</li>
<li>linear or nonlinear time-frequency</li>
<li>discrete or discrete approximation to continuous</li>
<li>fast in number of operations, convergence, or using hardware capabilities (finite-precision, approximate versions, etc.)</li>
</ul>
<p>If you do consider sliding windows for Fourier analysis, this can be cast into the framework of modulated <a href="https://doi.org/10.1109/TSP.2009.2023947" rel="nofollow noreferrer">complex oversampled filterbanks</a>, for which fast algorithms do exist, see for instance:</p>
<ul>
<li><a href="https://doi.org/10.1049/el:20001068" rel="nofollow noreferrer">Fast Implementation of Oversampled Modulated Filter Banks</a>, 2000, S. Weiss and R.W. Stewart</li>
</ul>
|
https://dsp.stackexchange.com/questions/43744/introduction-to-fast-algorithms-for-time-frequency-analysis
|
Question: <p>I have been reading Leon Cohen's book "Time Frequency Analysis" as part of a project for university. On page twelve, at equation (1.57), during his derivation of a representation of the average frequency in terms of the time-domain signal, he provides the following relation, which from my perspective came out of thin air. I am wondering if anyone else felt the same, was able to derive the relation, or can at least explain it?</p>
<p><span class="math-container">$$
\langle \omega \rangle = \int \omega |S(\omega)|^2 d\omega = \frac{1}{2 \pi} \int \int \int \omega s^*(t)s(t')e^{j(t-t')\omega} d\omega\; dt'\; dt
$$</span></p>
Answer: <p>They just express <span class="math-container">$S(\omega)$</span> (and its complex conjugate) by the Fourier transform of <span class="math-container">$s(t)$</span>:</p>
<p><span class="math-container">$$S(\omega)=\int s(t)e^{-j\omega t}dt\tag{1}$$</span></p>
<p>From which we get</p>
<p><span class="math-container">$$|S(\omega)|^2=\int s(t)e^{-j\omega t}dt\int s^*(t')e^{j\omega t'}dt'=\int\int s(t)s^*(t')e^{-j\omega(t-t')}dtdt'\tag{2}$$</span></p>
<p>and</p>
<p><span class="math-container">$$\int \omega |S(\omega)|^2d\omega=\int\int\int \omega s(t)s^*(t')e^{-j\omega(t-t')}dtdt' d\omega\tag{3}$$</span></p>
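<p>The identity is easy to verify numerically, with an assumed test signal: a unit-energy Gaussian envelope modulated to 5 rad/s, using the <span class="math-container">$1/2\pi$</span> Parseval factor that matches the non-unitary transform in (1):</p>

```python
import numpy as np

t = np.linspace(-20, 20, 4096)
dt = t[1] - t[0]
s = np.exp(-t**2 / 2) * np.exp(1j * 5 * t)
s /= np.sqrt(np.sum(np.abs(s)**2) * dt)      # unit energy: sum |s|^2 dt = 1

S = np.fft.fftshift(np.fft.fft(s)) * dt      # samples of S(w), up to a phase
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, dt))
dw = w[1] - w[0]

# <w> = (1/2pi) * integral of w*|S(w)|^2 dw, expected to be close to 5
mean_w = np.sum(w * np.abs(S)**2) * dw / (2 * np.pi)
```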
|
https://dsp.stackexchange.com/questions/85395/time-frequency-analysis-equation-derivation
|
Question: <p>I am a beginner in digital communications: I am studying spread spectrum communication and I have a question about the spreading signals. For example, I have 2 spreading signals and I do a time-frequency analysis.</p>
<p>Should the spreading signals overlap in time? And if yes, why?</p>
Answer: <p>The overlap is not important; it is just necessary that the spreading sequences be pseudo-orthogonal.</p>
<p>Can someone confirm?</p>
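<p>The pseudo-orthogonality can be illustrated with a quick sketch (random ±1 chips as a stand-in for real m- or Gold sequences, whose correlation statistics are similar): the normalized autocorrelation at zero lag is 1, while the cross-correlation between two independent sequences is on the order of <code>1/sqrt(N)</code>.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1023
c1 = rng.choice([-1.0, 1.0], size=N)   # spreading sequence 1 (random chips)
c2 = rng.choice([-1.0, 1.0], size=N)   # spreading sequence 2 (random chips)

auto = np.dot(c1, c1) / N    # normalized autocorrelation at lag 0: exactly 1
cross = np.dot(c1, c2) / N   # cross-correlation: small, roughly 1/sqrt(N)
```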
|
https://dsp.stackexchange.com/questions/56293/time-frequency-analysis-for-a-spreading-signal
|
Question: <p>I am a little confused about why we need analytic signals so badly in time-frequency analysis. What might happen if I use non-analytic signals to do time-frequency analysis?</p>
Answer: <p>Assuming time-frequency analysis aims at providing a separation (at least visual) between signal components, the main reasons could be:</p>
<ul>
<li>for quadratic distributions, which tend to yield interference between components, "cancelling" at negative frequencies reduces the quantity of components that can interfere.</li>
<li>for linear distributions, the filter bank formalism, and especially the down-sampling operators, is simplified, reducing the impact of aliasing errors.</li>
</ul>
<p>For real signals, the Hermitian symmetry yields that "no information" is lost in the analytic form, a complex-valued function that has no negative frequency components, and it is easy to go back to real.</p>
<p>However, this is not so simple in practice. </p>
<ol>
<li>While the analytic signal constructed from a wide-sense stationary (WSS) real signal is always proper, this may not be the case for non-stationary signals, as underlined in <a href="https://doi.org/10.1109/TSP.2003.818911" rel="nofollow noreferrer">Stochastic time-frequency analysis using the analytic signal: why the complementary distribution matters</a>. </li>
<li>Moreover, computing the analytic signal on discrete, finite-length data is often only approximate, especially for real-time applications. Hence the energy in the negative frequencies is rarely zero.</li>
<li>Extending the concept of analyticity beyond 1D is not evident, and several competing designs still exist.</li>
</ol>
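<p>For batch processing, the discrete analytic signal is typically computed by zeroing negative-frequency FFT bins, as SciPy's <code>hilbert</code> does. In a sketch with an assumed test tone, the real part returns the original signal and the negative-frequency bins vanish to numerical precision; the caveats above concern causal/real-time approximations, where this is no longer exact.</p>

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(1024) / fs
x = np.cos(2 * np.pi * 50 * t)   # assumed real test tone

xa = hilbert(x)                  # complex analytic signal (FFT-based)
X = np.fft.fft(xa)
neg = X[513:]                    # bins above N/2 are the negative frequencies
```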
|
https://dsp.stackexchange.com/questions/46245/why-are-analytic-signals-so-important-in-time-frequency-analysis
|
Question: <p>I had this question in the exam without any further explanation.</p>
<p>Why is time translation invariance an important feature for time-frequency distributions?</p>
<p>I am writing to ask whether anyone can please explain what time translation invariance is, and why it is an important feature.</p>
Answer:
|
https://dsp.stackexchange.com/questions/19156/time-frequency-analysis
|
Question: <p>Given a signal $x(t)$, how do I implement a form of autocorrelation function defined as $a(t,T) = x(t-T)x(t+T)$, where $T$ is an arbitrary constant? </p>
<p>(a fast implementation would be ideal)</p>
<p>Edit:
I came across this kind of function when seeing a "Parametric Symmetric Autocorrelation Function", defined as above.</p>
<p>It is used in time-frequency analysis methods like the WVD, etc.
$ R(t,\tau) = x(t + \frac{\tau}{2})x^{*}(t-\frac{\tau}{2})$</p>
<p>Thus far, I have implemented the steps below for an example chirp,
but the output of the fft2 at the end is wrong (not the correct frequency).</p>
<p>At the output of the autocorrelation function (the PSIAF variant):
<a href="https://i.sstatic.net/c56fH.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c56fH.jpg" alt="This output from autocorrelation seems correct"></a></p>
<p>The final output of the LVD is wrong (should be point like):</p>
<p>Solved: Will look at using some of the already published C codes as per answer below to compute the $ R(t,\tau) $</p>
Answer: <p>This line seems wrong:</p>
<pre><code>X = signal(X1_signal_indices).*conj(X1conj_signal_indices);
</code></pre>
<p>shouldn't it be</p>
<pre><code>X = signal(X1_signal_indices).*conj(signal(X1conj_signal_indices));
</code></pre>
<p>??</p>
<p>Note that there is some C code for implementing <a href="https://sites.google.com/site/kootsoop/Home/cohens_class_code" rel="nofollow">the WVD and other distributions here.</a> That code calculates your $R(t,\tau)$ first before convolving various 2D functions with it and then taking the FFT in order to generate the different distributions.</p>
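<p>For reference, the symmetric instantaneous autocorrelation itself is straightforward in NumPy (a discrete, zero-padded sketch; integer lags <code>m</code> play the role of <span class="math-container">$\tau/2$</span>, and an FFT over <code>m</code> then yields a pseudo Wigner-Ville distribution):</p>

```python
import numpy as np

def instantaneous_autocorr(x, max_lag):
    """R[n, m] = x[n+m] * conj(x[n-m]) for m = -max_lag..max_lag,
    with zero padding at the signal edges."""
    N = len(x)
    pad = np.zeros(max_lag, dtype=complex)
    xp = np.concatenate([pad, np.asarray(x, dtype=complex), pad])
    R = np.zeros((N, 2 * max_lag + 1), dtype=complex)
    for m in range(-max_lag, max_lag + 1):
        # index k of x sits at position k + max_lag in the padded array
        R[:, m + max_lag] = (xp[max_lag + m: max_lag + m + N]
                             * np.conj(xp[max_lag - m: max_lag - m + N]))
    return R

n = np.arange(256)
x = np.exp(1j * 0.3 * np.pi * n)   # complex exponential test signal
R = instantaneous_autocorr(x, 32)
```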
|
https://dsp.stackexchange.com/questions/31718/auto-correlation-for-time-frequency-analysis
|
Question: <p>Can anyone give me an example of two signals with different temporal waveforms having the same Fourier transform (FT)? </p>
<p>Would the inverse Fourier transform still be able to recover correctly each signal?</p>
<p>Actually, I tried to check the question above, in matlab, using two chirp signals (same duration), the first one having a frequency increasing from 10 to 50 Hz as time increases and the other one having a frequency decreasing from 50 to 10 Hz as time increases.
The result was that the moduli of the FT were identical but not the phases (that were symmetric wrt frequency = 0) even with the addition of noise. </p>
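<p>The experiment is easy to reproduce (a sketch assuming the down-chirp is constructed as the exact time reversal of the up-chirp, which is what makes the moduli identical while the phases differ):</p>

```python
import numpy as np
from scipy.signal import chirp

fs = 1000
t = np.arange(0, 2, 1 / fs)
up = chirp(t, f0=10, t1=2, f1=50)   # 10 -> 50 Hz over 2 s
down = up[::-1]                     # exact time reversal: 50 -> 10 Hz

U, D = np.fft.fft(up), np.fft.fft(down)
# moduli agree; the time information lives entirely in the phase
```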
<p>What surprised me is that the inverse FT was able to recover correctly both signals when I was expecting it to not be able to do it since we usually say that the FT makes it so that we lose the time localization.</p>
<p>I am aware that the time localization information is still contained in the phase part of the FT but if this is the case then what do we really need the time-frequency representations for? Is it just to easily extract this information since it's so hard doing it using the phase of the FT?</p>
<p>Any thoughts on that would be appreciated.</p>
Answer: <p>What a tricky question to overlook. Indeed I'm one of those who would immediately assert that Fourier transforms do lose time localization of the events, as the comments stated. Yet it's certainly (mathematically and practically) true that any (transformable) signal waveform is <strong>exactly</strong> preserved under this reversible transform, including all of its time distribution information as well. Your example of the reversed chirp case clearly demonstrates this. This needs an answer, at least to clarify why, then, there is the well-accepted idea that the FT does not preserve time localization of events.</p>
<p>Now, the idea that the Fourier transform loses time localization of signals comes from the observation that its <strong>bases</strong> are sinusoids of <strong>infinite</strong> extent. And a sine wave of infinite extent and constant amplitude has no time localization. On the other hand it has a perfect frequency localization, being a pair of impulses at an exact frequency. The bases of wavelets, for example, are partly local in both domains.</p>
<p>This is the essence of time-frequency analysis. A base that's exactly local in frequency will lose all time localization, and a base that's exactly local in time will lose any frequency localization. And those in <strong>between</strong> are the transforms that provide a compromise between exact localization and no localization at all. </p>
<p>A consequence of the fact that Fourier bases are of infinite extent is the following: assume part of a signal contains a short duration high frequency spike, and the remaining parts are quite still, low frequency variations. When a Fourier decomposition or synthesis is used to analyse or construct such a signal, the high frequency spike (which is highly local in time) has to be created with high frequency sine bases that not only exist at the position of the spike but also extend along all the rest of the signal, from the beginning to the end of the whole signal duration. This creates the problem that local processing is quite inefficient with Fourier bases for transient signals. And <strong>transient</strong> signals have prime importance in many mathematical, scientific and engineering fields. </p>
<p><strong>Wavelets</strong> are kind of the <strong>optimal</strong> (probably in some sense?) transforms that provide a maximum amount of simultaneous time-frequency localization (resolution). And this is clearly apparent from the wave-packet (or Gaussian, or Mexican hat, or shapeless Daubechies...) shape of their bases.
Wavelets can therefore provide an improved solution to the problem described in the previous paragraph.</p>
<p>A <strong>spectrogram</strong> partly provides information similar to a wavelet time-frequency analysis, giving you the frequency content of the signal over its time duration. </p>
<p>Note that you can still go with Fourier analysis for transient cases as well but that requires what Peter K. referred to as a codswallop work...</p>
|
https://dsp.stackexchange.com/questions/41194/motivation-of-time-frequency-analysis
|
Question: <p>I am trying to perform time-frequency analyses using the PyWavelets (pywt) toolkit for python. My ultimate goal is to perform time-frequency analyses for EEG signals but I am starting with something simpler.<br>
For a sanity test, I am creating a simple signal of length 2 seconds, with sample rate 250Hz, containing 2 sine waves - one of 3Hz and one of 10Hz. I would like to create a time-frequency plot that has two horizontal lines - one for the 3Hz and one for the 10Hz, which looks like this (only for illustration purposes):
<a href="https://i.sstatic.net/Zgw4T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zgw4T.png" alt="enter image description here"></a></p>
<p>For this purpose, I tried using code from the following tutorial : <a href="http://ataspinar.com/2018/12/21/a-guide-for-using-the-wavelet-transform-in-machine-learning/" rel="nofollow noreferrer">http://ataspinar.com/2018/12/21/a-guide-for-using-the-wavelet-transform-in-machine-learning/</a>, specifically in section 3.1 of the tutorial.</p>
<p>This is a minimal example based on the code from the tutorial:</p>
<pre><code>from UliEngineering.SignalProcessing.Simulation import sine_wave
import pywt
import numpy as np
import matplotlib.pyplot as plt

def plot_wavelet(time, signal, scales,
                 waveletname='cmor',
                 cmap=plt.cm.seismic,
                 title='Wavelet Transform (Power Spectrum) of signal',
                 ylabel='Period (seconds)',
                 xlabel='Time'):
    dt = time[1] - time[0]
    [coefficients, frequencies] = pywt.cwt(signal, scales, waveletname, dt)
    power = (abs(coefficients)) ** 2
    period = 1. / frequencies
    levels = [0.0625, 0.125, 0.25, 0.5, 1, 2, 4, 8]
    contourlevels = np.log2(levels)
    fig, ax = plt.subplots(figsize=(15, 10))
    im = ax.contourf(time, np.log2(period), np.log2(power), contourlevels,
                     extend='both', cmap=cmap)
    ax.set_title(title, fontsize=20)
    ax.set_ylabel(ylabel, fontsize=18)
    ax.set_xlabel(xlabel, fontsize=18)
    yticks = 2 ** np.arange(np.ceil(np.log2(period.min())),
                            np.ceil(np.log2(period.max())))
    ax.set_yticks(np.log2(yticks))
    ax.set_yticklabels(yticks)
    ax.invert_yaxis()
    ylim = ax.get_ylim()
    ax.set_ylim(ylim[0], -1)
    cbar_ax = fig.add_axes([0.95, 0.5, 0.03, 0.25])
    fig.colorbar(im, cax=cbar_ax, orientation="vertical")
    plt.show()

def generate_sine_wave(length, samplerate, frequencies):
    wave = np.zeros(int(length * samplerate))
    for frequency in frequencies:
        wave += sine_wave(frequency=frequency, samplerate=samplerate,
                          length=length)
    return wave

signal = generate_sine_wave(2, 250, [3, 10])
N = len(signal)
t0 = 0
dt = 1/250
time = np.arange(0, N) * dt + t0
scales = np.arange(1, 256)
plot_wavelet(time, signal, scales)
</code></pre>
<p>This plot from this code doesn't give me the plot I want, it looks like this:
<a href="https://i.sstatic.net/MheCk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MheCk.png" alt="enter image description here"></a></p>
<p>I tried many modifications for this code but none gave me the result I want. And there are a couple of things I don't understand about the code:<br>
- What is the purpose of the "period" variable in the "plot_wavelet" function and how do I make the y-axis show frequencies instead?<br>
- What is the purpose of the "scales" variable?<br>
- How do I define a frequency range that I want the result to include?<br>
- How do I use linear scaling for the frequencies instead of log scale? </p>
<p>If anyone can give some pointers regarding this I will be very happy. I have been spending some time trying to make standard time-frequency plots but still haven't found a Python tool that produces this simple plot in a way that makes sense to me.<br>
Thank you,<br>
Elad</p>
Answer: <p>You can find a nice tutorial on time-frequency analysis in <em>Numerical Python</em> by Johansson, chapter 17; link to the GitHub <a href="https://github.com/Ziaeemehr/signal_processing/tree/master/Numerical_Python_johansson" rel="nofollow noreferrer">repository</a>.</p>
<p>You can also check the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.spectrogram.html" rel="nofollow noreferrer">scipy.signal.spectrogram</a>.</p>
<pre><code>import numpy as np
from scipy import signal
from scipy.fft import fftshift
import matplotlib.pyplot as plt
# Generate a test signal, a 2 Vrms sine wave whose frequency
# is slowly modulated around 3kHz, corrupted by white noise
# of exponentially decreasing magnitude sampled at 10 kHz.
fs = 1e4
N = 1e5
amp = 2 * np.sqrt(2)
noise_power = 0.01 * fs / 2
time = np.arange(N) / float(fs)
mod = 500 * np.cos(2 * np.pi * 0.25 * time)
carrier = amp * np.sin(2 * np.pi * 3e3 * time + mod)
noise = np.random.normal(scale=np.sqrt(noise_power), size=time.shape)
noise *= np.exp(-time / 5)
x = carrier + noise
fig, ax = plt.subplots(2, figsize=(8, 7))
f, t, Sxx = signal.spectrogram(x, fs)
ax[0].pcolormesh(t, f, Sxx)
ax[0].set_ylabel('Frequency [Hz]')
ax[0].set_xlabel('Time [sec]')
# Note, if using output that is not one sided, then use the following:
f, t, Sxx = signal.spectrogram(x, fs, return_onesided=False)
ax[1].pcolormesh(t, fftshift(f), fftshift(Sxx, axes=0))
ax[1].set_ylabel('Frequency [Hz]')
ax[1].set_xlabel('Time [sec]')
plt.savefig("fig.png")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/140nT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/140nT.png" alt="enter image description here"></a></p>
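<p>Regarding the scales-to-frequency part of the question: for pywt's <code>'cmorB-C'</code> wavelets the center frequency is <code>C</code> cycles per unit, so scale <code>s</code> at sampling step <code>dt</code> maps to roughly <code>C/(s*dt)</code> Hz. This is what <code>pywt.scale2frequency</code> computes; the sketch below does the arithmetic by hand and assumes a <code>'cmor1.5-1.0'</code> wavelet:</p>

```python
import numpy as np

center = 1.0                 # center frequency C, assuming 'cmor1.5-1.0'
dt = 1 / 250                 # the question's sampling step
scales = np.arange(1, 256)
freqs = center / (scales * dt)   # Hz: scale 1 -> 250 Hz, scale 250 -> 1 Hz

# to cover the 3 Hz and 10 Hz tones explicitly, invert the relation
wanted = np.array([3.0, 10.0])
needed_scales = center / (wanted * dt)
```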
|
https://dsp.stackexchange.com/questions/60366/python-tool-for-time-frequency-analysis
|
Question: <p>I have empirically developed a sensor failure detection system which works fine. The system receives inputs from different types of sensors. Because of noise characteristics, I use low pass filters on some sensors' outputs. In the system, all these sensor readings form a signal which is constantly compared with a model, in the end creating a residual signal. In case of a sensor failure, the residual signal violates pre-defined thresholds and raises an alarm.</p>
<p>For the system analysis, I use superposition law, meaning that except one, all inputs are considered zero and a step signal is propagated through the system. Here the step signal represents a sensor failure. With various approximations, I am able to get the corresponding transfer functions. That way, I can justify the system performance. However, this has raised lots of ambiguities. I am asked to justify and optimize the system performance (with the simultaneous consideration of all inputs) through time or frequency analysis. The result can be a frequency response or a mathematical equation or other system performance representations.</p>
<p>My questions: Since I am dealing with a relatively complex, nonlinear system, is there any way to analyze and optimize its performance with the simultaneous consideration of all inputs/sensor readings? Is there some good literature that I can study this topic from?</p>
Answer: <p>If superposition works, then independent mode/component extraction is of interest. <a href="https://dsp.stackexchange.com/a/71399/50076">Synchrosqueezing</a> is well-suited for this task. Extracted features can then be fed to an anomaly detection system - optionally with <a href="https://github.com/gregversteeg/gaussianize" rel="nofollow noreferrer">Gaussianization</a>.</p>
<p>Other methods can be applied to the extracted components as if they were individual signals, so the described approach is expansive and flexible.</p>
|
https://dsp.stackexchange.com/questions/76817/time-frequency-analysis-of-a-nonlinear-system
|
Question: <p>It seems there are several papers from the seventies, but backtracking from the references quickly gets difficult. Who calculated for the first time a time-frequency representation of a signal?</p>
Answer: <p>According to the preface of <a href="https://books.google.com/books?id=sjN2qq99-WwC&lpg=PR1&pg=PR1#v=onepage&q&f=false" rel="nofollow noreferrer">Foundations of Time-Frequency Analysis</a>, a rough timeline is as follows:</p>
<ul>
<li>1930 - Early development of quantum mechanics by H. Weyl, E. Wigner, and J. von Neumann.</li>
<li>1946 - Theoretical foundation of information theory and signal analysis by D. Gabor (cf. <a href="https://web.archive.org/web/20200806201955/https://ieeexplore.ieee.org/document/5298517" rel="nofollow noreferrer">"Theory of communication"</a>).</li>
<li>1980 - Time-frequency analysis established as an independent mathematical field (apart from engineering) by Guido Janssen.</li>
<li>1990 - Development of wavelet theory. Overview of the mutual influence given in <a href="https://ieeexplore.ieee.org/document/57199" rel="nofollow noreferrer">"The wavelet transform, time-frequency localization and signal analysis"</a> by Ingrid Daubechies.</li>
</ul>
|
https://dsp.stackexchange.com/questions/17909/when-was-the-time-frequency-analysis-invented
|
Question: <p>I want to know " Whether there is any Tool Box in Mathematica (MMA) for the Time-Frequency (TF) Signal Analysis".</p>
<p>I am well-versed in MMA programming, so I want to do TF signal analysis in MMA. I think that if there is a TF analysis toolbox, it will be of great help for longer programs.</p>
<p>Thanks in Advance!!!!</p>
Answer: <p>If you are using an older version of <em>Mathematica</em> (pre v.8) and are interested in wavelets - yes, you need an add-on to perform wavelet analysis. More about it <a href="http://library.wolfram.com/infocenter/TechNotes/4639/" rel="nofollow noreferrer">here</a>. If you are using v.8 or above then everything wavelet-related is built-in. If you are interested in Fourier analysis - it is built in (since v.1). To showcase the capabilities of the time-frequency analysis functionality:</p>
<p>Suppose we have the signal $2 \exp \left(-\frac{t^2}{10}\right) \cos (5 t)$
<img src="https://i.sstatic.net/v6rh7.png" alt="Mathematica graphics"></p>
<p>We can visualise its Gabor transform</p>
<p><img src="https://i.sstatic.net/ePbIT.png" alt="Mathematica graphics"></p>
<p>Wigner transform (notice the artifacts)</p>
<p><img src="https://i.sstatic.net/6yecE.png" alt="Mathematica graphics"></p>
<p>and the (somewhat) corrected Gabor-Wigner transform</p>
<p><img src="https://i.sstatic.net/rRoCX.png" alt="Mathematica graphics"></p>
<p>A second test signal</p>
<p><img src="https://i.sstatic.net/W4Nq1.png" alt="Mathematica graphics"></p>
<p>and its spectrogram calculated using partitions of length 256, offset 1 and BlackmanHarrisWindow</p>
<p><img src="https://i.sstatic.net/yYXGp.png" alt="Mathematica graphics"></p>
<p>its periodogram (with the same options as above)</p>
<p><img src="https://i.sstatic.net/EXdfb.png" alt="Mathematica graphics"></p>
<p>Now we move on to wavelets.
Scalogram after a continuous wavelet transform with a Morlet wavelet</p>
<p><img src="https://i.sstatic.net/qud3E.png" alt="Mathematica graphics"></p>
<p>and the respective scalogram after performing a discrete wavelet transform</p>
<p><img src="https://i.sstatic.net/CoO51.png" alt="Mathematica graphics"></p>
<p>Just because I find it pretty - a scalogram of a simulated noise in 3D</p>
<p><img src="https://i.sstatic.net/0Mtj9.png" alt="Noise"></p>
|
https://dsp.stackexchange.com/questions/19188/time-frequency-signal-analysis-in-mathematica-mma
|
Question: <p>Given the history of the sum of a time-varying mixture of periodic signals, say square waves, how would you efficiently estimate the number and frequencies of components active at a particular time? The amplitudes and frequencies of the components are arbitrary but fixed real numbers; if a component is active at a certain moment, it will retain its amplitude and frequency if it is active again.</p>
Answer: <p>Let us start with the unsupervised methods...</p>
<p>A first approach would be to compute a spectrogram and factorize it with NMF (<a href="http://en.wikipedia.org/wiki/Non-negative_matrix_factorization" rel="nofollow">Non-negative Matrix Factorization</a>). If you are unfamiliar with this technique, it decomposes a spectrogram into a sum of $k$ constant-spectrum sources, each of these having a time-varying amplitude envelope applied to them. This model perfectly suits your problem and there is a good chance that the columns of the decomposition will be spectra for each waveform/frequency pair (from which you can estimate a fundamental frequency through spectral sum or by turning it back into an autocorrelation function), and the rows will be the activation signals. There are many implementation out there and it's cheap to just throw your problem at it, there's a good chance it'll just work.</p>
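<p>As a concrete sketch of this first approach (not part of the original answer), here is how it might look with SciPy and scikit-learn's <code>NMF</code>; the test signal, frequencies and parameters below are all invented for illustration:</p>

```python
import numpy as np
from scipy.signal import square, spectrogram
from sklearn.decomposition import NMF

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
# Two square-wave sources, each active over only part of the record
x = np.where(t < 1.2, square(2 * np.pi * 220 * t), 0.0) \
  + np.where(t > 0.6, 0.5 * square(2 * np.pi * 330 * t), 0.0)

f, frames, S = spectrogram(x, fs, nperseg=512)

# Factorize the (non-negative) spectrogram into k = 2 constant spectra
# with time-varying gains
model = NMF(n_components=2, init='nndsvd', max_iter=500)
W = model.fit_transform(S)   # columns: source spectra
H = model.components_        # rows: activation envelopes over time
```

<p>With luck, each column of <code>W</code> holds the harmonic spectrum of one square wave, and the rows of <code>H</code> show when each source is active.</p>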
<p>Note that the underlying model of NMF is less restrictive than your signal model...</p>
<p>First, because your activation envelopes are either 0 or 1. I haven't dealt with such constraints in the past, but there's probably a way of defining a penalty measure on the terms of the activation matrix $W$ different from 0 and 1, adding that to the optimized criterion and derive a new set of multiplicative update equations. One constraint not mentioned in your comment is that you probably also assume that the sources are not "blinking" in and out rapidly, and stay active/inactive, for a rather long number of consecutive frames. Constraints can be added to NMF to penalize discontinuous activation envelopes. See <a href="http://www.cs.tut.fi/sgn/arg/music/tuomasv/virtanen_taslp2007.pdf" rel="nofollow">Virtanen's paper on continuity constraints</a> for that.</p>
<p>The second specificity of your problem is that you might want to make the assumption that all sources have the same waveform, that is to say, that all sources have the same log-frequency spectrum, modulo a translation. To address this specifity, a recommended technique is to compute a constant-Q spectrogram on your input data (which has the right shift-invariance), and perform a <a href="http://homepage.eircom.net/~derryfitzgerald/SSP05.pdf" rel="nofollow">shifted-NMF</a>, with <em>one</em> target source. This will recover the spectrum common to all your sources; and the spectral shifts and temporal activation matrix will provide the equivalent of a spectrogram tailored to your source spectrum. See the synthetic signal in the results section of this paper - this is <strong>exactly</strong> the signal model you describe. This might require some work, but you could probably tweak this method as well to your more rigid signal model in which the activations are either 0 or 1.</p>
<p>Now on to the supervised methods.</p>
<p>If the set of candidate waveforms is known in advance and small and/or the number of frequencies at which they can appear is small (say you want to transcribe music played on a 80s toy keyboard), you could afford matching pursuit. This will correlate your signal with impulse responses consisting of a windowed signal for all searched waveforms/frequencies pair. Computational costs might rapidly make it a bad choice.</p>
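<p>A toy sketch of this idea, using scikit-learn's orthogonal matching pursuit as a stand-in for classic matching pursuit; the dictionary frequencies and amplitudes below are invented for illustration:</p>

```python
import numpy as np
from scipy.signal import square
from sklearn.linear_model import OrthogonalMatchingPursuit

fs = 4000
t = np.arange(0, 0.25, 1 / fs)

# Dictionary: one normalized square-wave atom per candidate frequency
freqs = np.arange(50, 400, 10)
D = np.stack([square(2 * np.pi * f0 * t) for f0 in freqs], axis=1)
D = D / np.linalg.norm(D, axis=0)

# Mixture of two dictionary atoms at different amplitudes
x = 1.5 * square(2 * np.pi * 120 * t) + 0.7 * square(2 * np.pi * 240 * t)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(D, x)
active = freqs[np.flatnonzero(omp.coef_)]   # frequencies judged active
```

<p>As the text warns, correlating against every waveform/frequency pair gets expensive quickly as the dictionary grows.</p>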
<p>Another supervised method is to use the same kind of multiplicative gradient updates as for NMF, but keeping the matrix $W$ containing the spectral templates "locked". That is to say, you build a matrix W with all candidate spectra you want to probe, and perform your multiplicative update to decompose your spectrogram into a product of an activation matrix H and W. Again, update rules tweakable to include the binary activation constraints. See Bertin's work on piano transcription which uses this technique to factorize spectrograms into a sum of synthetic piano note spectrograms (<a href="http://biblio.telecom-paristech.fr/cgi-bin/download.cgi?id=7663" rel="nofollow">a good start here</a>). This has also been used for <a href="http://www.eurasip.org/Proceedings/Eusipco/Eusipco2005/defevent/papers/cr1410.pdf" rel="nofollow">drum transcription by Paulus</a> - using drum samples as a basis.</p>
|
https://dsp.stackexchange.com/questions/4697/time-frequency-analysis-of-non-sinusoidal-periodic-signals
|
Question: <p><a href="https://i.sstatic.net/tZou6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tZou6.png" alt="enter image description here"></a>I have an image of a spectrogram and I wish to detect the tracks/contours of prominent frequencies present in the spectrogram.</p>
<p>In the end, I want to be able to get the various prominent curves from the image.
The second image represents the expected frequencies. I want to be able to detect whether those frequency profiles are present in the noisy spectrogram. At the end, I expect a yes or a no as output.</p>
<p>PS: Clearly I can see and tell that the expected frequency profile is present in the spectrogram. I wish to automate the process using a program preferably in python.</p>
<p><a href="https://i.sstatic.net/bdKMG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bdKMG.png" alt="enter image description here"></a></p>
Answer: <p>The OP is interested in detecting the presence of frequencies in the 155 to 165 Hz frequency band within a block of data (or any other defined frequency band). The spectrogram is useful for observing multiple frequencies versus time, but if only one or even a few frequency bands are desired for detection, then an alternate and computationally simpler approach that provides high accuracy is to use bandpass filters followed by a magnitude detector for each band.</p>
<p>Below I demonstrate with a single FIR bandpass filter. IIR filters could optionally be used. If multiple filters are desired to cover all frequencies, we can easily combine the FIR filter approach as a polyphase implementation together with a DFT to provide a channelizing filter bank as detailed in <a href="https://dsp.stackexchange.com/questions/87697/use-of-dft-for-decimating-channelizers/87698#87698">this other answer</a> with the end result copied in the graphic immediately below. This combines the efficiency and band coverage of the FFT with the frequency selectivity of the filter as designed:</p>
<p><a href="https://i.sstatic.net/FLMib.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FLMib.png" alt="DFT Channelizer" /></a></p>
<p>For any filter (including the FFT response), there must be a transition band between what is passed through and what is rejected. The complexity (length, and with that the delay) of the filter grows with the reciprocal of this transition bandwidth. Below is an example FIR bandpass filter for detecting the frequency range of 155 to 165 Hz at a sampling rate of 1000 Hz. Here I used 951 coefficients to achieve a transition bandwidth of 3 Hz. The complexity of this filter can be reduced further by combining frequency translation and resampling, which I won't go into, but know that it can be done with a significant reduction in processing - the point here is to provide an example bandpass filter to demonstrate power detection within a frequency band.</p>
<p><a href="https://i.sstatic.net/zYNG0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zYNG0.png" alt="bpf filter" /></a></p>
<p>This filter design was with the following Python code using a least squares design method:</p>
<pre><code>import scipy.signal as sig

fs = 1000 # sampling rate
f1 = 155 # first band corner
f2 = 165 # second band corner
ft = 3 # transition band
coeff = sig.firls(951, [0, f1-ft/2, f1+ft/2, f2-ft/2, f2+ft/2, fs/2], [0, 0, 1, 1, 0, 0], fs=fs)
</code></pre>
<p>To detect the presence of a signal with frequency content within the filter's passband, pass the signal through the filter and compute the mean square of the output as a power estimate. That result is then passed through a threshold detector, with the threshold chosen by trading probability of false alarm against probability of detection. The mean square can also be implemented as a filter to provide a moving-window result.</p>
<p>Below I demonstrate the detection using two noise signals, one with a 160 Hz tone and the other with a 170 Hz tone, each present in a portion of the noise signal. The signals are buried in the noise using a standard deviation that is 3x larger than the peak of the sinusoids, as detailed in the plot below.</p>
<p><a href="https://i.sstatic.net/GLiDw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GLiDw.png" alt="test signal" /></a></p>
<p>I implemented a signal detector using a "<a href="https://dsp.stackexchange.com/questions/86636/fast-settling-cic-fir-filter-design/86654#86654">CIC filter</a>", and an approximation to the rms result by using the magnitudes similar to diode detectors (which do not provide a "true-rms" result, and for Gaussian noise will under-report the noise by -1.05 dB as explained <a href="https://dsp.stackexchange.com/questions/80643/snr-of-averaging-fft-in-magnitude/80645#80645">here</a>, but we just need a go-no go metric for signal detection and computing squares and square-roots is processing we can avoid). A CIC filter is an efficient approach to a moving average followed by decimator (so equivalent to a block by block average, and here as a block by block signal detector):</p>
<pre><code>import numpy as np
import scipy.signal as sig

# Efficient CIC signal detector
def detector(signal, dec):
    # accumulate magnitudes (running sum)
    sig2 = np.abs(signal)
    accum = sig.lfilter([1.], [1., -1.], sig2)
    # decimate
    accum_dec = accum[::dec] / dec
    # return the per-block mean magnitude as a moving difference
    return np.diff(accum_dec)
</code></pre>
<p><em>To get a "true-rms" detector, modify the above to compute the square of the signal instead of absolute value, and then return the square root of the result.</em></p>
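<p>A minimal sketch of that modification, assuming the same NumPy/SciPy setup as the detector above:</p>

```python
import numpy as np
import scipy.signal as sig

# "True-rms" variant of the detector: accumulate squared samples and
# return the square root of each block mean
def rms_detector(signal, dec):
    accum = sig.lfilter([1.], [1., -1.], np.asarray(signal) ** 2)  # running sum of squares
    accum_dec = accum[::dec] / dec                                 # block means
    return np.sqrt(np.diff(accum_dec))                             # rms per block
```

<p>For a constant input of amplitude 1 this returns 1.0 per block, as an rms detector should.</p>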
<p>With the following result using an rms block size of 10,000 samples (the horizontal axis shows the index of the original samples, aligned with the 1,000,000/10,000 = 100 samples returned, for comparison with the previous plot):</p>
<p><a href="https://i.sstatic.net/6yfeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6yfeK.png" alt="detected output" /></a></p>
<p>The above case would also be detected with a single FFT of the whole block, which would be more efficient than the filtering approach provided. But the following case, where I reduced the time duration of the signal significantly, demonstrates a situation where a single FFT would be unable to detect the presence of the signal while the approach detailed here is successful (and a sliding FFT with better time localization would not be more efficient for the achieved performance):</p>
<p><a href="https://i.sstatic.net/ZSjQQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZSjQQ.png" alt="Signal + Noise" /></a></p>
<p>The rms block window size was reduced to 100 samples for tighter time localization in this case:</p>
<p><a href="https://i.sstatic.net/JRthU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JRthU.png" alt="detected output" /></a></p>
|
https://dsp.stackexchange.com/questions/64084/time-frequency-analysis-by-frequency-contour-detection-in-spectrogram
|
Question: <p>When do we use time domain analysis and when do we use frequency domain analysis? </p>
<p>As far as I have studied, I know that when we need to study the individual sinusoidal components of a signal, we choose frequency-domain analysis. Is that the only application of frequency-domain analysis? </p>
Answer: <p>Frequency domain analysis has much broader application (more numerous to list) than just analyzing sinusoidal components of a signal. Frequency domain analysis appears as a mathematical tool whenever the equivalent operations in the time domain can be simplified, and vice versa. For example, convolution in one domain is multiplication in the other which can simplify many problems. Further given the great efficiency of the FFT to solve the Discrete Fourier Transform, it has found its way into so many applications where the end result can be viewed as simplifying the number of operations needed to solve for a result (such as radar imaging, autofocus, and highly efficient communications). The broader class of the Laplace Transform is also the frequency domain and is used to convert difficult integro-differential equations to simple algebra!</p>
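<p>The convolution&ndash;multiplication duality mentioned above is easy to check numerically; the following is a self-contained illustrative sketch, not tied to any particular application:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(16)

# Direct (time-domain) linear convolution
direct = np.convolve(x, h)            # length 64 + 16 - 1 = 79

# Same result via multiplication in the frequency domain
# (zero-padding both signals to the full output length)
n = len(direct)
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
# via_fft matches `direct` to machine precision
```

<p>For long signals the FFT route is dramatically cheaper, which is exactly the "simplify the operations" point made above.</p>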
|
https://dsp.stackexchange.com/questions/68083/time-domain-analysis-vs-frequency-domain-analysis-applications-wise
|
Question: <p>I've been a Stack Exchange user for some time and am now registering to ask a simple question (I think!).</p>
<p>I have a vibration signal with an amplitude and time (sampling frequency not constant) in a $10000\times 2$ double variable.</p>
<p>The data is available at: <a href="https://1drv.ms/x/s!AoCOij4si31tzgY89bhr6XH-_gQq" rel="noreferrer">https://1drv.ms/x/s!AoCOij4si31tzgY89bhr6XH-_gQq</a></p>
<p>How can I do some sort of frequency analysis (FFT, DFT) or similar. How can I do it in Matlab?</p>
<p>Sorry if the question is duplicated but I couldn't find any answer for my problem.</p>
Answer: <h1>The DFT Matrix for Non Uniform Time Samples Series</h1>
<h2>Problem Statement</h2>
<p>We have a signal <span class="math-container">$ x \left( t \right) $</span> defined on the interval <span class="math-container">$ \left[ {T}_{1}, {T}_{2} \right] $</span>.<br />
Assume we have <span class="math-container">$ N $</span> samples of it given by <span class="math-container">$ \left\{ x \left( {t}_{i} \right) \right\}_{i = 0}^{N - 1} $</span>. The sample times <span class="math-container">$ {t}_{i} $</span> are arbitrary and not necessarily uniform.</p>
<p>We're after the DFT of the samples, <span class="math-container">$ \left\{ X \left[ k \right] \right\}_{k = 0}^{K - 1} $</span>, as if they were sampled in a uniform manner (which implicitly means the samples in the frequency domain will be uniform as well).</p>
<h2>Deriving the Connection</h2>
<p>In the <a href="https://en.wikipedia.org/wiki/Discrete_Fourier_transform" rel="nofollow noreferrer">DFT Transform</a> the connection between time and frequency is given by:</p>
<p><span class="math-container">$$ x \left[ n \right] = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k}{N} n } \tag{1} \label{EqnIdft} $$</span></p>
<p>In <span class="math-container">$ \eqref{EqnIdft} $</span> we use <span class="math-container">$ n $</span> for modeling the sample index in time. We usually build samples in time as <span class="math-container">$ x \left[ n \right] = x \left( n {T}_{s} \right) $</span> where <span class="math-container">$ {T}_{s} $</span> is a uniform sampling interval.<br />
Hence we could write:</p>
<p><span class="math-container">$$ x \left( n {T}_{s} \right) = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k}{N {T}_{s}} n {T}_{s}} \tag{2} \label{EqnIdft2} $$</span></p>
<p>In <span class="math-container">$ \eqref{EqnIdft2} $</span> we added explicit scaling of time. This is a known property of the Fourier transform family, which scales the domain in order to normalize the transform.</p>
<p>Now, there is nothing which blocks us from using arbitrary time:</p>
<p><span class="math-container">$$\begin{align*} \tag{3} \label{EqnIdft3}
x \left( t \right) & = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k}{N {T}_{s}} t} && \text{} \\
& = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k {F}_{s}}{N} t} && \text{Since $ {F}_{s} = \frac{1}{{T}_{s}} $}
\end{align*}$$</span></p>
<p>As can be seen, <span class="math-container">$ \eqref{EqnIdft3} $</span> makes sense as it goes through each element according to its frequency and sums to give the output at time <span class="math-container">$ t $</span>. We can go a step further and generalize it to cases where we don't have a uniform sampling frequency.<br />
The average sampling frequency is given by <span class="math-container">$ \bar{F}_{s} = \frac{N}{ {T}_{2} - {T}_{1} } $</span>. Let's define <span class="math-container">$ T = {T}_{2} - {T}_{1} $</span> and we'll get:</p>
<p><span class="math-container">$$ x \left( t \right) = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{ j 2 \pi k \frac{t}{T} } $$</span></p>
<p>This in many ways resembles the <a href="https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform" rel="nofollow noreferrer">DTFT Transform</a> equation, which does the same in the other direction, transforming uniform discrete samples in the time domain to arbitrary frequencies (within a frequency interval) in the frequency domain:</p>
<p><span class="math-container">$$\begin{align*} \tag{4}
X \left( f \right) & = \sum_{n = 0}^{N - 1} x \left[ n \right] {e}^{-j 2 \pi f {T}_{s} n } && \text{} \\
& = \sum_{n = 0}^{N - 1} x \left[ n \right] {e}^{-j 2 \pi \frac{f}{ {F}_{s} } n } && \text{Since $ {F}_{s} = \frac{1}{{T}_{s}} $}
\end{align*}$$</span></p>
<p>We see the same scaling, <span class="math-container">$ \frac{f}{ {F}_{s} } $</span> which scales the continuous <span class="math-container">$ f $</span> relative to the interval of frequencies <span class="math-container">$ {F}_{s} $</span> which is equivalent to <span class="math-container">$ \frac{t}{ T } $</span> which scales <span class="math-container">$ t $</span> relative to the time interval of the continuous signal.</p>
<h2>The Transform Matrix</h2>
<p>So, given the set of time indices <span class="math-container">$ {\left\{ {t}_{i} \right\}}_{i = 0}^{N - 1} $</span> the transformation matrix, from frequency domain to time domain, is given by:</p>
<p><span class="math-container">$$ D \in \mathbb{R}^{N \times K}, \; {D}_{i, k} = {e}^{ j 2 \pi k \frac{ {t}_{i} }{T} } $$</span></p>
<h2>The Model</h2>
<p>In vector form the model is:</p>
<p><span class="math-container">$$ x = D y $$</span></p>
<p>Where <span class="math-container">$ y \in \mathbb{C}^{K} $</span> is the vector of frequency coefficients on a uniform grid, <span class="math-container">$ x $</span> is the vector of time samples (non-uniform, or at least with no assumption of uniformity) and <span class="math-container">$ D $</span> is as defined above.<br />
Since in our model we're after <span class="math-container">$ y $</span>, the answer is given by:</p>
<p><span class="math-container">$$ y = {D}^{\dagger} x $$</span></p>
<p>Where <span class="math-container">$ {D}^{\dagger} $</span> is the <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse" rel="nofollow noreferrer">Pseudo Inverse Matrix</a> of <span class="math-container">$ D $</span>.</p>
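<p>The model can also be sketched in a few lines of NumPy; the jittered time grid and the 5-cycle test tone below are invented for illustration:</p>

```python
import numpy as np

rng = np.random.default_rng(7)
N = 32
T = 1.0                                        # record length [s]

# Non-uniform (jittered) sample times; the jitter is kept small so the
# Vandermonde-like matrix D stays well conditioned
t = (np.arange(N) + 0.3 * rng.uniform(0, 1, N)) * T / N

k = np.arange(-N // 2, N // 2)                 # symmetric frequency indices
x = np.cos(2 * np.pi * 5 * t / T)              # 5 cycles over the record

D = np.exp(2j * np.pi * np.outer(t / T, k)) / N
y = np.linalg.pinv(D) @ x                      # uniform-grid DFT estimate
```

<p>The two dominant entries of <code>abs(y)</code> land at <span class="math-container">$ k = \pm 5 $</span>, matching the uniform-grid DFT of the same tone.</p>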
<h2>Implementation & Results</h2>
<p>The code is as follows:</p>
<pre><code>subStreamNumberDefault = 79;
run('InitScript.m');
figureIdx = 0;
figureCounterSpec = '%04d';
generateFigures = ON;
%% Simulation Parameters
samplingFrequency = 101; %<! [Hz]
samplingInterval = 1 / samplingFrequency; %<! [Sec]
startTime = 1; %<! [Sec]
endTime = 4; %<! [Sec]
timeInterval = endTime - startTime; %<! [Sec]
numSamples = round(samplingFrequency * timeInterval);
numSamplesTT = round(1.2 * numSamples);
signalFreq = 2; %!< [Hz]
% The uniform time grid
vT = linspace(startTime, endTime, numSamples + 1);
vT(end) = [];
vT = vT(:);
% The non uniform time grid - Reconstruction
vTT = endTime * rand(numSamplesTT, 1);
vTT = sort(vTT, 'ascend');
% The non uniform time grid - DFT
vTD = linspace(startTime, endTime, (10 * numSamples) + 1);
vTD(end) = [];
vTD = vTD(sort(randperm(length(vTD), numSamples)));
vTD = vTD(:);
% The uniform frequency grid
vF = (samplingFrequency / 2) * linspace(-1, 1, numSamples + 1);
vF(end) = [];
vF = vF(:);
vK = [-floor(numSamples / 2):floor((numSamples - 1) / 2)];
vK = vK(:);
%% Generate Data
vX = cos(2 * pi * signalFreq * vT);
vFx = fftshift(fft(vX));
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = subplot(1, 2, 1);
hLineSeries = plot(vT, vX);
set(hLineSeries, 'LineWidth', lineWidthNormal);
set(get(hAxes, 'Title'), 'String', {['Reference Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Time Index']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Sample Value']}, ...
'FontSize', fontSizeTitle);
hAxes = subplot(1, 2, 2);
hStemObj = stem(vF, abs(vFx));
set(hStemObj, 'LineWidth', lineWidthNormal);
set(get(hAxes, 'Title'), 'String', {['DFT of the Reference Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Frequency [Hz]']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Magnitude']}, ...
'FontSize', fontSizeTitle);
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
%% Analysis - Reconstruction
mD = exp(1j * 2 * pi * (vTT / timeInterval) * vK.') / numSamples;
% Reconstruction according to the model
vY = real(mD * vFx);
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = axes();
set(hAxes, 'NextPlot', 'add');
hLineSeries = plot(vT, vX);
set(hLineSeries, 'LineWidth', lineWidthNormal);
hLineSeries = plot(vTT, vY);
set(hLineSeries, 'LineWidth', lineWidthNormal, 'LineStyle', ':', 'Marker', '*');
set(get(hAxes, 'Title'), 'String', {['Uniform Signal & Non Uniform Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Time Index']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Sample Value']}, ...
'FontSize', fontSizeTitle);
hLegend = ClickableLegend({['Uniform Signal'], ['Non Uniform Signal']});
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
%% Analysis - DFT of the Non Uniformly Sampled Data
vY = cos(2 * pi * signalFreq * vTD);
mD = exp(1j * 2 * pi * (vTD / timeInterval) * vK.') / numSamples;
vFy = pinv(mD) * vY;
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = axes();
set(hAxes, 'NextPlot', 'add');
hLineSeries = plot(vT, vX);
set(hLineSeries, 'LineWidth', lineWidthNormal);
hLineSeries = plot(vTD, vY);
set(hLineSeries, 'LineWidth', lineWidthNormal, 'LineStyle', ':', 'Marker', '*');
set(get(hAxes, 'Title'), 'String', {['Uniform Signal & Non Uniform Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Time Index']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Sample Value']}, ...
'FontSize', fontSizeTitle);
hLegend = ClickableLegend({['Uniform Signal'], ['Non Uniform Signal']});
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = axes();
set(hAxes, 'NextPlot', 'add');
hStemObj = stem(vF, abs([vFx, vFy]));
set(hStemObj, 'LineWidth', lineWidthNormal);
% hLineSeries = plot(vTT, vY);
% set(hLineSeries, 'LineWidth', lineWidthNormal, 'LineStyle', ':', 'Marker', '*');
set(get(hAxes, 'Title'), 'String', {['DFT of the Uniform Signal & Non Uniform Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Frequency [Hz]']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Magnitude']}, ...
'FontSize', fontSizeTitle);
hLegend = ClickableLegend({['Uniform Signal'], ['Non Uniform Signal']});
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
</code></pre>
<p>Results are:</p>
<p><img src="https://i.sstatic.net/ttz0k.png" alt="" />
<img src="https://i.sstatic.net/fiPAH.png" alt="" />
<img src="https://i.sstatic.net/4R1Is.png" alt="" />
<img src="https://i.sstatic.net/FVTTk.png" alt="" /></p>
<h2>Summary</h2>
<p>In this post we derived how to estimate the uniform DFT of a non-uniform time series by solving a linear system of equations.</p>
<p>The full code is available on my <a href="https://github.com/RoyiAvital/StackExchangeCodes" rel="nofollow noreferrer">StackExchange Signal Processing Q32137 GitHub Repository</a> (Look at the <code>SignalProcessing\Q32137</code> folder).</p>
<h2>Remark: Why Do We Need to Apply <code>fftshift()</code> on the DFT of the Signal?</h2>
<p>Indeed, in the reconstruction part we use <code>fftshift()</code>. The shallow answer is easy: we also build the vector <code>vK</code> symmetric around zero.<br />
But there is a deeper reason. In the DFT, when we use uniform sampling in both the frequency domain and the time domain, the <em>magic</em> happens without us seeing it explicitly.</p>
<p>When we defined the term <span class="math-container">$ \frac{k}{ N {T}_{s} } n {T}_{s} $</span> we replaced <span class="math-container">$ n {T}_{s} $</span> with <span class="math-container">$ t $</span>, hence we prevented the term <span class="math-container">$ {T}_{s} $</span> from cancelling itself. Now setting <span class="math-container">$ {F}_{s} = \frac{1}{{T}_{s}} $</span> means that multiplying by <span class="math-container">$ k $</span> can produce frequencies beyond the Nyquist frequency.<br />
In most cases, when that happens, the modulo property of the exponent comes in and we get the correct negative frequency in the range <span class="math-container">$ \left[ -\pi, \pi \right] $</span>. Yet when <span class="math-container">$ t $</span> is arbitrary we can think of <span class="math-container">$ {F}_{s} $</span> as changing per sample, which means that once we go beyond <span class="math-container">$ \pi $</span> the modulo doesn't bring us to the correct answer.</p>
<p>First, as intuition, always think of the DFT as defined on the <span class="math-container">$ \left[ -\pi, \pi \right] $</span> interval and as continuous. So as long as you work in this range things work as intended. This intuition can come from the Fourier Series and the Discrete Fourier Series (DFS).</p>
<p>Let's try explaining it using a concrete example. Let's examine the exponent term from the derivation:</p>
<p><span class="math-container">$$ 2 \pi \frac{k}{N {T}_{s}} n {T}_{S} = 2 \pi \frac{k}{{F}_{S}} \frac{{F}_{s}}{N} n = 2 \pi \frac{k b}{{F}_{s}} n $$</span></p>
<p>Where <span class="math-container">$ b $</span> is the Bin Resolution in the Frequency domain. Now given the signal is:</p>
<p><span class="math-container">$$ x \left( t \right) = \cos \left( 2 \pi f t \right) \Rightarrow x \left( n {T}_{s} \right) = \cos \left( 2 \pi f {T}_{s} n \right) \Rightarrow x \left[ n \right] = \cos \left( 2 \pi \frac{f}{ {F}_{s} } n \right) $$</span></p>
<p>For <span class="math-container">$ {F}_{s} = 100 $</span> [Hz] and <span class="math-container">$ N = 100 $</span> (Which means <span class="math-container">$ b = 1 $</span>) we will have delta at <span class="math-container">$ k = 2 $</span> and <span class="math-container">$ k = 98 $</span>. For <span class="math-container">$ k = 98 $</span>:</p>
<p><span class="math-container">$$ 2 \pi \frac{98}{{F}_{s}} n $$</span></p>
<p>This is clearly above the Nyquist frequency (<span class="math-container">$ \frac{{F}_{s}}{2} $</span>), and only for <span class="math-container">$ {F}_{s} = 100 $</span> does its modulo equal <span class="math-container">$ -2 $</span>, which is correct. But in the model above, since we have arbitrary <span class="math-container">$ t $</span>, one could think we have a changing <span class="math-container">$ {F}_{s} $</span>, which means we don't get the correct value.</p>
<p>This means the actual equation should be:</p>
<p><span class="math-container">$$ x \left( t \right) = \frac{1}{N} \sum_{k = \left \lfloor - \frac{K}{2} \right \rfloor }^{ \left \lfloor \frac{K - 1}{2} \right \rfloor } X \left[ k \right] {e}^{ j 2 \pi k \frac{t}{T} } $$</span></p>
|
https://dsp.stackexchange.com/questions/32137/frequency-analysis-dft-fft-of-a-signal-without-a-constant-sampling-frequency
|
Question: <p>MATLAB has a <a href="http://www.mathworks.com/help/signal/ref/spectrogram.html" rel="nofollow">spectrogram</a> function for the time-frequency analysis of a single signal. It also has a <a href="http://www.mathworks.com/help/signal/ref/cpsd.html" rel="nofollow">cpsd</a> function for estimating the cross-frequency spectrum for two signals. However, cpsd averages across windows, collapsing the time axis into a single estimate. </p>
<p>Is there a function in this same family that does not average, but returns a time-frequency cross-spectrum instead? </p>
<p>Because I want to use this in a step-by-step tutorial for teaching purposes, I'd like to avoid the following two possible solutions:</p>
<ul>
<li>The <a href="http://chronux.org/Documentation/chronux/spectral_analysis/continuous/cohgramc.html" rel="nofollow">cohgramc</a> function from the Chronux toolbox (uses a multitaper spectral estimation method I don't want to get into)</li>
<li>The <a href="http://www.mathworks.com/help/wavelet/examples/wavelet-coherence.html" rel="nofollow">wcoher</a> function from the Wavelet toolbox (uses wavelets, idem)</li>
</ul>
<p>Both of these methods return nice time-frequency cross-spectra, but I would like just to have a basic version with the familiar windowing parameters that spectrogram and cpsd use. Does this exist?</p>
Answer: <p>I don't know of a function that does all of what you ask, but it is easy enough to write. The key step is to create short-time integrations (STIs) on which the FFTs may be performed. As you say, cpsd then averages all these STIs, but you can write your own version that skips that step.</p>
<p>The key function to do this is <code>y = buffer(x, fftlen, overlap)</code>. You input your FFT processing parameters, similar to cpsd, and get a matrix of overlapping time series. I apply a Hanning window to each STI before performing the FFT. Then you can simply apply <code>S = fft(y)</code> to get a matrix of overlapped spectra.</p>
<p>So it would look something like this:</p>
<pre><code>% Assuming x is longer than fftlen:
overlap = fftlen/2; % 50% overlap
win = hanning(fftlen);
X = buffer(x,fftlen,overlap,'nodelay'); % Matrix of overlapping STIs
numSTIs = size(X,2);
winX = X.*win(:,ones(1,numSTIs)); % Time-domain windowed STIs
S = fft(winX,fftlen,1)/fftlen; % Double-Sided Complex Spectrum Matrix
SdB = 20*log10(2*abs(S(1:fftlen/2+1,:))); % Log Scale Single-Sided Real Spectrum Matrix
</code></pre>
<p>You can then calculate your auto-spectrum using either of these matrices. Of course you would need an x2 to repeat the steps above and calculate a cross-spectrum matrix.</p>
<pre><code>for i = 1:numSTIs
    Sx1x2(:,i) = conj(S1(:,i)).*S2(:,i); % per-bin cross-spectrum of STI i
end
</code></pre>
<p>(If anyone knows of a way to do this without a for loop, please let me know!)</p>
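<p>For readers outside MATLAB, the same unaveraged time-frequency cross-spectrum can be sketched in Python with SciPy's <code>stft</code>; the two test tones below are made up for the example:</p>

```python
import numpy as np
from scipy.signal import stft

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
x1 = np.sin(2 * np.pi * 125 * t)
x2 = np.sin(2 * np.pi * 125 * t + 0.5)   # same tone, 0.5 rad ahead

# Per-frame spectra (Hann window by default), with no averaging over frames
f, frames, Z1 = stft(x1, fs, nperseg=256)
_, _, Z2 = stft(x2, fs, nperseg=256)

# Time-frequency cross-spectrum: one column per frame
Sx1x2 = np.conj(Z1) * Z2
```

<p>At the 125 Hz bin the phase of <code>Sx1x2</code> sits near the 0.5 rad offset between the signals, frame by frame, which is exactly the time-resolved information that cpsd averages away.</p>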
|
https://dsp.stackexchange.com/questions/11503/how-can-i-compute-a-time-frequency-cross-spectrum-in-matlab
|
Question: <p>I have some 64 channel EEG data sampled at 256Hz and I'm trying to conduct a time frequency analysis for each channel and plot a spectrogram.</p>
<p>The data is stored in a numpy 3d array, where one of the dimensions has length 256, each element containing a microvolt reading over all sampled time points (total length is 1 second for each channel of data)</p>
<p>To be clear: my 3D array is 64*256*913 (electrode * voltages * trial). Trial is just a single trial of an experiment. So what I want to do is take a single electrode, from a single trial, and the entire 1D voltage vector and creating a time-frequency spectrogram. So I want to create a spectrogram plot from data[0,:,0] for example.</p>
<p>For each electrode, I want a plot where the y axis is frequency, x axis is time, and colour/intensity is power</p>
<p>I have tried using this in python: </p>
<pre><code>from matplotlib.pyplot import specgram
#data = np.random.rand(256)
specgram(data, NFFT=256, Fs=256)
</code></pre>
<p>This gives me something that looks like this:</p>
<p><a href="https://i.sstatic.net/F5Ukt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F5Ukt.png" alt="enter image description here"></a></p>
<p>Right off the bat this looks incorrect to me because the axis ranges are incorrect</p>
<p>Furthermore, when I run the same code for all EEG channels, over all of my data, I end up with the exact same plot (even though I have verified that the data is different for each)</p>
<p>I'm pretty new to signal processing, is there somewhere that I went wrong in either how my data is laid out or how I used my function?</p>
Answer: <p>The idea of a spectrogram is to split your signal into a number of blocks or frames, which are potentially overlapping. After windowing, an FFT is calculated per frame. The outputs of these FFTs are collected as column vectors in your graph. Thus, the x-axis is related to time and the y-axis is related to frequency. Since the FFT of a real-valued signal is symmetric, only half the frequencies have to be plotted. Therefore, the y-axis goes from 0 to Fs/2 = 128. </p>
<p>Since you have chosen the FFT length (<code>NFFT=256</code>) equal to the length of your signal, you have only one full frame of data. </p>
<p>Potential solutions are to reduce your FFT length <code>NFFT</code> or increase your data length. For now, the easiest way seems to be in reducing <code>NFFT</code>. Note however, that you also need to adjust <code>noverlap</code> in that case. A widely used amount of overlap is <code>NFFT/2</code>. </p>
<p><strong>Example:</strong>
The following command will give you a spectrogram.</p>
<pre><code>specgram(data, NFFT=64, Fs=256, noverlap=32)
</code></pre>
<p>Alternative implementation:</p>
<pre><code>Nfft=64
specgram(data, NFFT=Nfft, Fs=256, noverlap=Nfft/2)
</code></pre>
<p>The FFT length parameter <code>NFFT</code> leads to a tradeoff between time and frequency resolution.</p>
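<p>The effect of the choice can be checked with plain frame-count arithmetic (written out by hand below; it mirrors how specgram hops by <code>NFFT - noverlap</code> samples per column, but is not taken from matplotlib's source):</p>

```python
import numpy as np  # only for consistency with the rest of the script

n, fs = 256, 256   # one second of data at Fs = 256, as in the question

def n_frames(nfft, noverlap):
    # the window slides by (nfft - noverlap) samples per spectrogram column
    return (n - noverlap) // (nfft - noverlap)

frames_full = n_frames(256, 128)       # NFFT equal to the signal length: 1 column
frames_short = n_frames(64, 32)        # NFFT=64, 50% overlap: several columns
df_full, df_short = fs / 256, fs / 64  # frequency resolution in Hz: 1 vs 4
```

So shrinking <code>NFFT</code> from 256 to 64 buys you multiple time slices at the cost of coarser (4 Hz instead of 1 Hz) frequency bins.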
<p><strong>EDIT</strong> To answer additional questions in the comments.</p>
<p>From the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.specgram" rel="nofollow">matplotlib.pyplot.specgram documentation</a>:</p>
<blockquote>
<h1>Returns the tuple (spectrum, freqs, t, im):</h1>
<h3>spectrum: 2-D array</h3>
<p>columns are the periodograms of successive segments</p>
<h3>freqs: 1-D array</h3>
<p>The frequencies corresponding to the rows in spectrum</p>
<h3>t: 1-D array</h3>
<p>The times corresponding to midpoints of segments (i.e. the columns in spectrum)</p>
<h3>im: instance of class AxesImage</h3>
<p>The image created by imshow containing the spectrogram</p>
</blockquote>
<p>Thus, the first element in the returned tuple contains the power values.</p>
|
https://dsp.stackexchange.com/questions/25115/python-time-frequency-spectrogram
|
Question: <p><strong>Explanation:</strong></p>
<p>I would like to analyse the data from an experiment, which investigates the performance of a mechanical component using sensors, that has generated <strong>2000 CSV</strong> files. Each file contains <strong>513 Rows</strong> x <strong>1220411 Cols</strong>, and they are in spectrogram format (columns are time and rows are frequency):</p>
<pre><code>| Time (s)| 0.0000 |0.000164|...
|:--------|--------|:-------|:-------
| 1.52kHz | 2747 | 350 |...
| 3.05kHz | 2996 | 420 |...
| 4.57kHz | 4078 | 300 |...
| ... | ... | ... |...
</code></pre>
<p>I have plotted my 3D chart of the first 100 rows using persp3D():</p>
<pre><code>persp3D(x,y,z, theta=45, phi=5, xlab="Frequency (kHz)", ylab="Time (s)", axes=TRUE, expand=0.5, shade=0.2)
</code></pre>
<p><a href="https://i.sstatic.net/xEGYt.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xEGYt.jpg" alt="enter image description here"></a>
I would like to extract 1-4 columns (1 sample of time for all frequencies) from each file to get a table of data with a total of 2000-8000 columns (for 2000 files with all the frequencies) and plot this to get the 3D plot of the experiment. </p>
<p><strong>Question: best data reducing method</strong></p>
<ul>
<li>I would like to know what is the best method to make sure that those 1-4 columns represent the total data set in each file?</li>
<li>Is averaging simply a good method in this case? What are the alternatives?</li>
</ul>
Answer: <p>So, first of all, CSV seems to me the least suitable format imaginable for this amount of data. It needs to be parsed, is memory-hungry, wastes precision, and isn't linearly addressable (i.e., to get to the 99999th element, you need to parse the preceding 99998 elements). </p>
<p>So I'd recommend keeping the structure from these files, but converting them to binary files, eg. numpy arrays, HDF5, whatever, but keep it binary, parse-free. Convert once and keep that data as what you load – it will reduce your file loading times immensely, and your software (whatever you're using) possibly doesn't even need to load all the file at once into memory, because when the data format has a fixed bit width per element (e.g. always 4-Byte integers and 8-Byte floating point numbers), it's trivial for the software to just load the part of the file into memory that you need. In fact, operating systems do that for you ("mmaped files").</p>
<p>Reducing your text file format to e.g. an array of single-precision floats or 32bit integers, your total data size would be a mere 8GB – and that fits multiple times in the RAM of every modern workstation, so I'd argue it's not really "big data" anymore! (Just to take the fear of handling so many numbers away)</p>
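<p>A numpy sketch of this convert-once, memory-mapped workflow (file name and column count are placeholders; a real file would keep the full 513 × 1220411 shape):</p>

```python
import os
import tempfile
import numpy as np

# hypothetical shrunken stand-in for one 513 x 1220411 CSV, as float32
rows, cols = 513, 1000
data = np.random.rand(rows, cols).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "trial.npy")
np.save(path, data)                   # convert once: binary, parse-free
view = np.load(path, mmap_mode="r")   # the OS pages in only what you touch
col = np.array(view[:, 42])           # pull one column without a full load
```

The fixed element width is what makes the memory map possible: the byte offset of any element is computable, so nothing has to be parsed.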
<p>I've ranted about this before, and people have answered that "CSV files are human-readable". I dare anyone who says so to read a 100k-column, 500-row CSV file and justify their impression of the data as being representative of the whole set.</p>
<hr>
<blockquote>
<p>I would like to know what is the best method to make sure that those 1-4 columns represent the total data set in each file?</p>
</blockquote>
<p>As always in engineering, there's no "best" without <strong>you</strong> defining a measure for good. If you want to highlight the differences among different frequencies, pick the column e.g. that has the highest sample variance. If you want to represent the whole measurement "fair", don't pick a single column, but average all columns. It's impossible to answer that question without being you – it's you that wants to show something. What you have is data – meaning is something that you humanly attribute (and, yes, you <strong>must</strong> be aware that the choice of data reduction method is <strong>your biased decision</strong>, and you should definitely communicate that). </p>
<blockquote>
<p>Is averaging simply a good method in this case?</p>
</blockquote>
<p>It's the first thing I'd try, to see whether I can still spot something interesting.
But note that this choice is somewhat arbitrary – I might as well just pick a random column!</p>
<blockquote>
<p>What are the alternatives?</p>
</blockquote>
<p>Impossible to tell, again – picking one column based on some metric, calculating the variance or the third statistical moment, or postulating a certain functional shape and fitting it to your data set … The sky is the limit, and what you want to find out is your only guideline.</p>
|
https://dsp.stackexchange.com/questions/36839/time-frequency-analysis-of-big-data-data-size-reduction-averaging-the-most-ap
|
Question: <p>Let us imagine an LTI system with a physically realizable input (ruling out fancy mathematical functions and the concomitant complexities and paradoxes) completely known from -$\infty$ to $\infty$. We want to calculate the output. We can analyse it in the time domain using the linear constant-coefficient differential equation, and in the frequency domain using the Fourier representation of the signal and the frequency response of the system. </p>
<p>The outputs calculated from both methods should be identical. In the time domain, we get the (transient + steady-state) response of the system. In the frequency domain, the sinusoids are eternal and hence transients are generally neglected. But since we are talking about the same system with the same input, the outputs calculated from both methods should be the same. </p>
<p>Will they be identical in spite of neglecting the transients in the frequency domain?</p>
Answer: <p>now, even a steady-state sinusoid can be thought of, in the limit, as a sum of weighted pulses, each with a beginning and some kinda end. how do all of these <em>"transient"</em> signals add up to a steady-state sinusoid? but they do:</p>
<p>$$\begin{align}
x(t) &= \lim_{T \to 0} \sum_{n=-\infty}^{+\infty} x(nT) \cdot \operatorname{rect}\left( \frac{t-nT}{T} \right) \\
&= \lim_{T \to 0} \sum_{n=-\infty}^{+\infty} x(nT) \cdot \frac{1}{T}\operatorname{rect}\left( \frac{t-nT}{T} \right) \cdot T \\
&= \int\limits_{-\infty}^{\infty} x(u) \cdot \delta(t-u) \cdot du
\end{align}$$</p>
<p>where $ \operatorname{rect}(u) \triangleq \begin{cases} 1 \quad |u|<\tfrac12 \\ 0 \quad |u| > \tfrac12 \end{cases} $</p>
<p>The Fourier integral models the entire time function as an infinitely dense and infinitely large set of sinusoids. in a similar manner, those steady-state functions can add to something with a transient.</p>
<p>$$ x(t) = \int\limits_{-\infty}^{+\infty} X(f) \, e^{j2\pi ft} \ df $$</p>
|
https://dsp.stackexchange.com/questions/31299/time-domain-and-frequency-domain-analysis-equivalence
|
Question: <p>Assume you have a signal, and within it, some pulses are present. A pulse is a simple tone. You know the pulses' duration and shape. (Let us assume that a pulse is made of a couple of cycles, and then to which all those cycles are multiplied by a hamming window. So the final pulse may look like the blue plot below: </p>
<p><a href="https://upload.wikimedia.org/wikipedia/commons/d/d7/Analytic.svg" rel="nofollow noreferrer"><img src="https://upload.wikimedia.org/wikipedia/commons/d/d7/Analytic.svg" alt="something like this"></a> </p>
<p>What we do not know are its frequency. (You know its frequency to within $\pm 100\textrm{ Hz}$). </p>
<p>The question is:</p>
<p>Does performing a match-filtering of a signals' absolute magnitude spectrogram with a 2-D version of your pulse in the <em>time-frequency domain</em>, confer upon you any advantages, versus performing a match-filtering of the signals' (shown in red as an example), against the known <em>envelope</em> of the pulse, in the time-domain? </p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/d/d7/Analytic.svg" alt="envelope"></p>
<p>For the TF-domain method, assume:</p>
<ul>
<li>STFT analysis. </li>
<li>I am using an analysis window equal to the expected pulse length.</li>
<li>Percent Overlap: Whatever you want, I do not think it matters for this case. </li>
</ul>
<p>I am really on the fence on this one because on the one hand, you cannot create information out of nothing, so taking your problem to the time-frequency space seems redundant, while on the other hand, going into the time-frequency space allows you to, perhaps, create 2-D filters that better match your pulse, and/or, ignore noise from other bands which are (perhaps?) not ignored in the time-domain match-filtering case? </p>
<p>My biggest point of confusion is that, inherent to going into the TF domain, we now have both time and frequency localization ambiguity, (based on our choice of the analysis window we use). In contrast, in the time domain, we are $100\%$ sure of our time localization. How - or why - would trading in $100\%$ time-localization unambiguity for some joint time-frequency ambiguity help? I am not seeing it.</p>
<p><strong>EDIT</strong>: </p>
<p>Another way to look at the problem is with this rephrase: <strong><em>When</em> would one want to do match filtering in <em>only</em> the time-domain ($0\%$ time ambiguity, $100\%$ frequency ambiguity), vs doing it in the joint TF-domain, (x% time ambiguity, (1-x)% frequency ambiguity).</strong> </p>
<p>I had a broader question but broke it down into this one first. </p>
Answer: <p>Think of the time - frequency ambiguity of your matched filter like so:</p>
<ul>
<li>Frequency ambiguity means it will respond to a range of frequencies</li>
<li>Time ambiguity means the response will be 'smeared' around its location in time.</li>
</ul>
<p>If you have 0% frequency ambiguity, the matched filter must look like a sine wave and go on forever, which in the frequency spectrum looks like a Dirac delta.</p>
<p>0% time ambiguity is a Dirac delta in the time domain.</p>
<p>So if you have a matched filter that is more than 1 sample wide in the time domain, then it already is ambiguous in both time and frequency domains.</p>
<p>If you are doing matched filtering of the envelope, then you are just looking at the modulating signal, and there is no need to look at the 2D time-frequency spectrogram.</p>
<p>If you want to match the envelope (modulating signal) and the base frequency, then you need a quadrature filter with a bandwidth around the range of frequencies you expect. A quadrature filter is required because it makes the response invariant to the phase of the base signal.</p>
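<p>That phase invariance is easy to demonstrate with a toy numpy sketch (the pulse length, carrier frequency, and Hann envelope below are made-up illustrations, not a prescribed design):</p>

```python
import numpy as np

fs = 1000
t = np.arange(200) / fs
f0 = 100.0                             # assumed carrier inside the pulse
env = np.hanning(len(t))               # known pulse envelope

# quadrature template: envelope times a complex carrier
template = env * np.exp(2j * np.pi * f0 * t)

def peak_response(phase):
    pulse = env * np.cos(2 * np.pi * f0 * t + phase)
    # matched filtering = correlation; numpy conjugates the second argument
    return np.abs(np.correlate(pulse, template, mode="same")).max()

p0 = peak_response(0.0)
p90 = peak_response(np.pi / 2)         # same pulse, shifted carrier phase
```

The peak magnitudes for the two carrier phases come out essentially equal, which is exactly what a real-valued (non-quadrature) template would fail to do.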
<p>If you don't know the base frequency, a 2D time-frequency spectrogram will be useful as it will show you what frequency is being modulated. Essentially, the spectrogram is response of the signal - time axis - to a bunch of different (centre) frequency quadrature filters - frequency axis. </p>
<p>TLDR:</p>
<p>The premise that the envelope-matched filter in the time domain is 100% localised is incorrect.</p>
|
https://dsp.stackexchange.com/questions/2400/match-filter-in-time-frequency-domain-instead-of-just-time-domain-redundant-or
|
Question: <p>The uncertainty principle states that there is a trade-off between time and frequency. So, finding frequency components at a specific time is impossible. However, the instantaneous frequency measures the frequency as a function of time. This means that, using the instantaneous frequency, the frequency components could be found for a signal at a specific time. How can you interpret this? Why don't we use the instantaneous frequency for time-frequency analysis? </p>
Answer: <p>The uncertainty principle works in the presence of (an uncertain amount of) noise or other signals (including possible harmonics), corrupting the exact phase, and thus the rate of change of phase of the signal of interest. If the phase is corrupt, or mixed with the phase of other signals, then deriving an instantaneous frequency from the 1st derivative of that phase might produce nonsense. Time-frequency analysis might be one way to (statistically?) separate information about the signal of interest out of these potentially existing "corrupting" influences.</p>
<p>Whereas the phase of a perfectly analytic signal including zero additive noise is better defined.</p>
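<p>A small numpy experiment along these lines (signal parameters are arbitrary): the instantaneous frequency taken from the phase derivative is essentially exact for a clean analytic signal, and degenerates into nonsense once noise corrupts the phase:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 1000, 50.0
t = np.arange(1000) / fs
clean = np.exp(2j * np.pi * f0 * t)    # perfectly analytic, zero noise
noisy = clean + 0.5 * (rng.standard_normal(t.size)
                       + 1j * rng.standard_normal(t.size))

def inst_freq(z):
    # instantaneous frequency = (1 / 2*pi) * d(phase)/dt
    return np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)

err_clean = np.abs(inst_freq(clean) - f0).max()   # ~ machine precision
err_noisy = np.abs(inst_freq(noisy) - f0).mean()  # tens of Hz of error
```

The 1st-derivative step amplifies the phase noise, which is why the estimate breaks down so quickly away from the noiseless ideal.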
|
https://dsp.stackexchange.com/questions/38970/does-the-instantaneous-frequency-contradict-the-uncertainty-principle
|
Question: <p>Given the acceleration response time history of a multi-story structure, how can I find natural frequencies using time-frequency analysis techniques?
If you just provide some references or articles, I truly appreciate it.</p>
Answer: <p>In general, you need the time history of the excitation in order to interpret the response, like Dan Boschen has said.</p>
<p>But under certain circumstances, the excitation of the system can be assumed to be minimum entropy (or quite similarly minimum energy), which allows one to identify an estimate of the system's transfer function from the response alone. Therefore, this method does not require knowledge of the precise excitation, rather an estimate of the excitation is obtained from the method as well. This is called <a href="https://en.wikipedia.org/wiki/Linear_prediction" rel="nofollow noreferrer">linear prediction</a>.</p>
<p>I think the most prominent example of linear prediction is the estimation of the transfer function of the vocal tract from a voice recording. In the GSM standard this is used as a means to compress speech information, which is called Linear Predictive Coding (LPC). Only the resonances of the vocal tract have to be transmitted then (together with the tone pitch), not the individual samples of the speech. The reason why this works is that the vocal tract is excited by the vocal cords, which clap together and cause delta-peak-like excitations, which are minimum entropy. Also, if the person is whispering, the excitation is white noise, which is also minimum entropy.</p>
<p>For a building, I don't know, but probably the random hits of an earthquake might serve as a minimum entropy excitation. But be aware that the total response in this case might result from the building response <em>and</em> the response of the earth's crust (reflections? refractions? the original excitation is several kilometers deep). So what enters the building foundation might already have been filtered by the earth. So don't take my word for it that you can apply this to your use case.</p>
<p>If it was, you could also look at the spectrum of a single earthquake hit and see where some single most intensive resonances are, and you could even estimate the damping coefficient of a resonance (related to the width of the "spectral line"), if it is sufficiently far away from other resonances. But of course, this is only a very rough image, and it is not very rigorous. Moreover, it gets complicated if you want to find the mode shapes of the building, which requires the analysis of a lot of measurement locations in the building. This is infeasible to do by just looking at some plots, and so you would need the full math apparatus of linear prediction (again: if applicable).</p>
|
https://dsp.stackexchange.com/questions/86797/natural-frequencies
|
Question: <p>A recent publication, <a href="https://doi.org/10.1038/s43588-021-00183-z" rel="nofollow noreferrer">The fast Continuous Wavelet Transform (fCWT)</a>, enables real-time, wide-band, and high-quality, wavelet-based time–frequency analysis on non-stationary noisy signals.</p>
<p>I'm a beginner with wavelet and I'm working on real-time wavelet implementation. Is this fCWT a novelty in wavelet concept or just an optimization on the digital CWT computation?</p>
Answer: <p>I've modestly reviewed the paper.</p>
<p>I'm skeptical of its speedups and implementation accuracy. It includes the time of sampling the wavelets in benchmarks, which is valid, but arguably the main use case is one where wavelets are pre-computed and reused. The paper also makes several dubious statements that suggest the authors don't really know what they're doing (especially regarding "resolution"), or wouldn't even know if they were wrong.</p>
<p>To test its correctness, one should pass in a unit impulse and compare the complex-valued output against known correct implementations:</p>
<pre><code>import numpy as np

N = 1024              # any test length
x = np.zeros(N)
x[N//2] = 1           # unit impulse
out0 = cwt0(x)        # reference CWT implementation
out1 = cwt1(x)        # implementation under test; compare complex outputs
</code></pre>
<p>I believe MATLAB is correct, but in Python I only know of one that's correct and has complex-valued outputs: <a href="https://github.com/OverLordGoldDragon/ssqueezepy" rel="nofollow noreferrer">ssqueezepy</a>, which I authored. SciPy and PyWavelets are <a href="https://dsp.stackexchange.com/q/70642/50076">not correct</a>.</p>
<p>Moreover, authors conveniently excluded ssqueezepy from their comparisons: they claim x34 speedup against PyWavelets, while ssqueezepy shows x10; this makes them only x3.4 faster than ssqueezepy (but to be fair, they aren't the same configurations).</p>
<p>I'm working on a CWT that should be, worst case, x2 faster than it currently is, and several times faster best case - but one doesn't necessarily need to wait; discussed <a href="https://dsp.stackexchange.com/a/83495/50076">here</a>.</p>
<blockquote>
<p>Is this fCWT a novelty in wavelet concept</p>
</blockquote>
<p>There's only one CWT. The only thing that can change is the wavelets or padding used, which isn't the subject of the paper, but the paper misleadingly suggests otherwise with "higher resolution" claims.</p>
|
https://dsp.stackexchange.com/questions/83469/does-fast-continuous-wavelet-transform-fcwt-have-theory-supported-novelty-or-j
|
Question: <p>This question is an extension to the question about WVD vs STFT originally posted <a href="https://dsp.stackexchange.com/questions/86211/wigner-ville-distribution-wvd-vs-stft-for-spectral-analysis/86287?noredirect=1#comment182690_86287">Here</a>. During the QA it was pointed out that the WVD only works for noiseless signals.</p>
<p>To test that out I created a simple chirp signal in MATLAB and compared WVD spectrograms at different SNRs.</p>
<p>Below is the time-domain signal on the left and the WVD corresponding WVD on the right for 25dB SNR:</p>
<p><a href="https://i.sstatic.net/6l3XY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6l3XY.png" alt="Time-Domain Chirp Signal and WVD at 25dB SNR" /></a></p>
<p>Below is for the 0dB SNR case:</p>
<p><a href="https://i.sstatic.net/N8vTD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N8vTD.png" alt="Time-Domain Chirp Signal and WVD at 0dB SNR" /></a></p>
<p>Even at 0dB SNR the presence of chirp is still visible in the WVD spectrogram, is MATLAB doing any other post-processing as well? and is the WVD really useless for time-frequency analysis of real-world signals?</p>
Answer:
|
https://dsp.stackexchange.com/questions/86297/comparison-of-wvd-vs-stft-spectral-analysis-in-the-presence-of-noise
|
Question: <p>I have a sum of periodic signals that I am trying to untangle using time-frequency analysis. I seem to get wildly different results depending on the window length and shape. This is a problem because I want to develop an automated, and hopefully sequential algorithm to do the job.</p>
Answer: <p>Window functions have an inherent tradeoff between two of their frequency-domain properties:</p>
<ul>
<li><p><strong>Main lobe width:</strong> Any tapered window function will cause some "smearing" in the frequency domain. This is visualized by the width of the center lobe in the window function's frequency response. The wider the main lobe, the more difficult it is to resolve two tones that are close in frequency (if they are closer to one another than the main lobe width, they will tend to smear together). So ideally, you would like to have a window function that has a very narrow main lobe.</p></li>
<li><p><strong>Maximum sidelobe height:</strong> Many window functions have frequency responses that consist of a single main lobe surrounded by repeated sidelobes that decay at some window-specific rate. The height of these sidelobes can make it difficult to resolve two tones that are separated in frequency, but differ greatly in amplitude. So ideally, you would like to have a window function that has very low sidelobes.</p></li>
</ul>
<p>The problem: if you decrease the main lobe width of a window function, the sidelobes will grow, and vice versa. So, you need to strike an application-specific balance when choosing a window, based upon the distances in frequency and amplitude that you expect between your signals of interest. Given specific parameters of your system, it's possible to choose a window that (hopefully) meets your requirements.</p>
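<p>Both properties can be read off numerically by zero-padding a window and inspecting its magnitude spectrum (a numpy sketch; the roughly −13 dB and −31 dB values are the well-known peak sidelobe levels of the rectangular and Hann windows):</p>

```python
import numpy as np

N = 64
pad = 16 * N  # heavy zero-padding just to render the lobes finely

def spectrum_db(w):
    W = np.abs(np.fft.rfft(w, pad))
    return 20 * np.log10(W / W.max() + 1e-12)

rect = spectrum_db(np.ones(N))
hann = spectrum_db(np.hanning(N))

# main lobe: rectangular's first null falls at bin pad/N; Hann's is ~2x wider
rect_sidelobe = rect[pad // N:].max()       # ~ -13 dB
hann_sidelobe = hann[2 * pad // N:].max()   # ~ -31 dB
```

The numbers show the trade directly: Hann buys ~18 dB of sidelobe suppression at the price of a main lobe about twice as wide as the rectangular window's.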
<p>As far as choosing the length of your window (which is equivalent to choosing the length of the DFT), you're best served with making your observation as long as possible within the constraints that your application might impose (e.g. latency requirements, how long the signals of interest can be considered stationary, computational resources, etc.). Your ability to resolve in frequency is directly proportional to the observation length (measured in time, not necessarily based on the FFT length, which can be zero-padded with no improvement in frequency resolution).</p>
|
https://dsp.stackexchange.com/questions/1618/how-critical-is-the-selection-of-the-window-function-in-stfts
|
Question: <p>I thought this was supposed to be an obvious question, until I finally set up my real time system.</p>
<p>So basically I have a transmitter that sends 128 samples/second to a receiver. The transmitted information is stored as an object in MATLAB and continuously updated.</p>
<p>When people talk about real time signal processing, I'm really confused as to what they mean by real time. </p>
<p>For example, say I want to extract the "mean" feature of this signal. Do I compute the mean when I receive one sample, two samples, all 128 samples, or ...what is this mean value?</p>
<p>More intriguing for me is the prospect of doing a real-time wavelet transform for joint time-frequency analysis. Again, the question of "real time" comes up. How many samples do I need to reliably compute the wavelet (or Fourier) coefficients so I can get a good view of the energy contained in this signal? </p>
<p>Can anyone who is knowledgeable on this topic please elaborate how when and under what condition do you compute "features" or perform frequency domain analysis for a real time system.</p>
<p>Thanks!</p>
Answer: <p>'Real time' is a concept from computer engineering. A real time system is one that is guaranteed, by design, to execute a function or routine in a certain time T, or less. For example, a real-time avionics system is proven to react to signals coming from certain instruments in a time below a given threshold.</p>
<p>In your case, a more precise description (IMHO) of what you want is a "streaming system". You want a receiver that can process a stream of incoming samples without "dropping" samples; in other words, without its buffer overflowing. The easiest way to achieve that is to provide large enough computing power that the probability of dropping samples is very small.</p>
<p>This property is largely orthogonal to the problem of estimating signal features. Since the incoming signal is random, its features are going to vary. You may need to calculate, or find by experiment, how many samples you need to process to have a useful feature estimate.</p>
<p>For example, these days most, if not all, transmitted signals have no DC component, so the mean will be close to zero all the time (barring imperfections in your analog front-end). I wouldn't worry about updating the mean estimate very often.</p>
<p>In modulation recognition, in contrast, you may need a few thousand samples, and the processing could take some time. You may decide to do something like gathering 5,000 samples (which in your case would take ~40 seconds), and it may take five seconds to process them, so you'll be updating that estimate every 45 seconds or so.</p>
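<p>The cadence in that example is plain arithmetic (the five-second processing time is the assumption from above, not a measured figure):</p>

```python
samples_needed = 5000
sample_rate = 128                            # samples/second from the transmitter
gather_time = samples_needed / sample_rate   # ~39 s to collect the batch
processing_time = 5.0                        # assumed processing budget
update_period = gather_time + processing_time  # ~44 s between estimate updates
```

Swapping in your own batch size and processing budget gives the update rate for any other feature estimate the same way.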
<p>As you see, it really varies by estimate, and you'll need to figure out the best number in each case, given your requirements and your processing resources.</p>
|
https://dsp.stackexchange.com/questions/18740/what-does-real-time-signal-processing-mean
|
Question: <p>I have a question related to wavelet transform: we know that while the Fourier transform is good for a spectral analysis or which frequency components occurred in signal, it will not give information about at which time it happens. That's why the wavelet transform is suitable for the time-frequency analysis. It is also good for signal denoising, but of course it has some disadvantages.</p>
<p>So I would like to know: what are the main advantages of the wavelet transform? Is it good for spectral estimation, like finding amplitudes, frequencies and phases, or does it just help us to find discontinuities and irregularities of a signal?</p>
<p>Thanks in advance</p>
Answer: <p>If you consider the whole set of potential wavelet transforms, then you have a lot of flexibility. </p>
<p>For instance, should you use 1D continuous complex wavelet transforms, by analyzing the modulus and the phase of the scalogram, and provided you use well-chosen wavelets (potentially different for the analysis and the synthesis), and a proper discretization, you can:</p>
<ul>
<li>find discontinuities and irregularities of a signal and its derivatives <a href="https://i.sstatic.net/jg016.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jg016.jpg" alt="enter image description here"></a></li>
<li>find break point location by wavelet ridge extrapolation</li>
<li>denoise</li>
<li>perform matched filtering based on templates (with <a href="http://arxiv.org/abs/1108.4674" rel="nofollow noreferrer">complex continuous</a> or <a href="http://arxiv.org/abs/1405.1081" rel="nofollow noreferrer">discrete dual-tree wavelet</a> frames)</li>
<li><a href="http://www.scholarpedia.org/article/Wavelet-based_multifractal_analysis" rel="nofollow noreferrer">analyse (multi-)fractalty</a></li>
<li>analyse frequencies (with Gabor wavelets for instance)</li>
</ul>
<p>Due to their redundancy, and to the quantity of available wavelets (the best choice differs by purpose), they can appear a little less efficient for the analysis of purely stationary and harmonic signals, for which Fourier is better suited.</p>
<p>The main drawbacks are:</p>
<ul>
<li>for fine analysis, it becomes computationally intensive </li>
<li>its discretization, the discrete wavelet transform (computationally efficient), is less flexible and less natural</li>
<li>it takes some investment in wavelets to become able to choose the proper ones for a specific purpose, and to implement them correctly.</li>
</ul>
|
https://dsp.stackexchange.com/questions/15148/disadvantages-of-wavelet-transform
|
Question: <p>I have some biomechanical data of a few subjects standing on a force plate. The center of pressure along the x and y axes was measured. The total time of measurement was 30s and the sampling frequency was 100Hz. </p>
<p>I want to observe if there is any reduction of the density P(t,ω) in higher frequencies as time passes.</p>
<p>I see that there are many time-frequency analysis methods and I was wondering which would be the best method to answer my question.</p>
<p>I have also another question. I inserted one signal after detrending the mean in the signal analyzer app in MATLAB but I am using it for the first time and it is like a black box for me. Which method does it use? Is it even suitable for my purpose? </p>
Answer:
|
https://dsp.stackexchange.com/questions/66338/calculating-the-spectrogram-of-the-center-of-pressure-time-series-in-human-stand
|
Question: <p>Is it possible to implement some sort of filter which adapts as a function of time?</p>
<p>Specifically, say I have a "noiseless" model of some signal which has the same frequency components at the same times as the signal I expect to measure, but with different phase and amplitudes (thus invalidating simple matched filtering).</p>
<p>So, in other words, the absolute value spectrograms (built by STFT, though I suppose any other time-frequency analysis such as CWT would be similar) of both signals will be similar in shape, though one of the acquisitions will be noisy, with different phase and amplitudes.</p>
<p>I guess I could do some filtering window by window, but I'm not entirely sure if this won't produce nasty artifacts when reconstructing in the time domain. Also, I wonder if there might be a smarter way of going about this problem.</p>
Answer:
|
https://dsp.stackexchange.com/questions/56313/bandpass-filtering-with-passband-changing-with-time
|
Question: <p>I am currently doing analysis on photoplethysmograph (PPG) data, and I want to know the frequency (heart rate) at every time point, so a windowed FFT might not be the best option. I am looking at wavelets to generate frequency and time information. I have been working with MATLAB example code; however, I have trouble determining the best wavelet for this application. I do not know much about wavelets. Is the wavelet to use dictated by the shape of my signal? In my case, the PPG signal looks like this:</p>
<p><img src="https://i.sstatic.net/L2VBx.gif" alt="http://home.lu.lv/~spigulis/PPG-bios02_files/image004.gif"></p>
<p>Or is some other consideration needed when choosing a wavelet? How critical is the choice of wavelet?</p>
<p>Thanks,
Kelvin</p>
Answer: <p>What information are you trying to extract from your signal?</p>
<blockquote>
<p>I want to know the frequency (heart rate) at every time point</p>
</blockquote>
<p>If this is the information you want, then any sort of frequency analysis is unlikely to be very useful. It will show you that you have a 1 or 2 Hz periodic signal with a particular frequency profile, but you already know all of this information, so that won't be especially illuminating.</p>
<p>You are probably better off with peak detection, or local minima/maxima detection to extrapolate beats.</p>
<p>If you can provide a sample of your data, and/or more details about the exact information you are trying to extract then it will help us to find a more precise solution to your problem.</p>
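As a rough sketch of the peak-detection route (the synthetic waveform, the 1.2 Hz rate, and the 0.4 s minimum peak spacing are illustrative assumptions, not values from the question):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                      # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)    # 10 s of synthetic data
hr_hz = 1.2                     # a 72 bpm "heart rate"
# crude PPG-like waveform: a dominant pulse plus one harmonic
ppg = np.sin(2 * np.pi * hr_hz * t) + 0.3 * np.sin(2 * np.pi * 2 * hr_hz * t)

# require peaks to be at least 0.4 s apart (caps the rate at ~150 bpm)
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))

# instantaneous rate from successive peak intervals
intervals = np.diff(peaks) / fs   # seconds per beat
bpm = 60.0 / intervals            # beats per minute, one value per interval
print(bpm.mean())                 # ≈ 72 bpm
```

The per-interval `bpm` values give a beat-by-beat rate estimate, which is exactly the "frequency at every time point" that a spectrogram would only smear across its window length.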
|
https://dsp.stackexchange.com/questions/18058/time-frequency-localization-using-wavelet-transform
|
Question: <p>In one of <a href="http://nptel.ac.in/" rel="nofollow noreferrer">NPTEL</a> courses about time-frequency analysis, the professor said that the duration bandwidth principle is $\sigma_t^2 \sigma_\omega^2 \ge \frac{1}{4}$.</p>
<p>He added that the formula making use of time resolution and frequency resolution is a false one. The time resolution corresponds to the time "distance" between two samples of the signal, and the frequency resolution corresponds to the "distance" between two successive samples in the frequency domain (two successive samples of the Fourier transform of the time-domain signal). Can anyone please clarify this, as I see the version using time and frequency resolutions in so many papers?</p>
<p>NB: Here is the link to the course I'm talking about <a href="http://nptel.ac.in/courses/103106114/" rel="nofollow noreferrer">http://nptel.ac.in/courses/103106114/</a></p>
Answer: <p>Defining the Fourier Transform:</p>
<p><span class="math-container">$$ \mathscr{F} \Big\{x(t)\Big\} \triangleq X(f) \triangleq \int\limits_{-\infty}^{+\infty} x(t) \ e^{-i 2 \pi f t} \ \mathrm{d}t $$</span></p>
<p>and inverse:</p>
<p><span class="math-container">$$ \mathscr{F}^{-1} \Big\{X(f)\Big\} \triangleq x(t) = \int\limits_{-\infty}^{+\infty} X(f) \ e^{i 2 \pi f t} \ \mathrm{d}f $$</span></p>
<p>This is the <em>"unitary"</em> definition so that the duality theorem exactly reverses the roles of <span class="math-container">$t$</span> and <span class="math-container">$f$</span> without scaling.</p>
<p>An important theorem, due to Weyl (1931), is the following:</p>
<p>If a function <span class="math-container">$x(t)$</span> and the related functions <span class="math-container">$\big(t\,x(t)\big)$</span> and <span class="math-container">$x'(t)$</span> are in <span class="math-container">$L^2$</span> (square integrable), with <span class="math-container">$\|\cdot\|$</span> denoting the <span class="math-container">$L^2$</span> norm, then:</p>
<p><span class="math-container">$$ \| x(t) \|^2 \le 2\| t\,x(t) \| \| x'(t) \| $$</span></p>
<p>Equality is attained when <span class="math-container">$x(t)$</span> is a modulated Gaussian/Gabor elementary function, characterized by:</p>
<p><span class="math-container">$$ x'(t) / x(t) \propto t $$</span></p>
<p>or practically as:</p>
<p><span class="math-container">$$x(t) = C \exp \big(-\alpha(t - \mu)^2 + i 2 \pi \nu (t - \mu) \ \big) \qquad \qquad \alpha, \nu, \mu \in \mathbb{R} \quad \alpha>0 $$</span></p>
<p>found by integration by parts plus the <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality" rel="nofollow noreferrer">Cauchy–Bunyakovsky–Schwarz</a> inequality.
If one defines the time or frequency location as a center of mass relative to the energy:</p>
<p><span class="math-container">$$ E = \int\limits_{-\infty}^{+\infty} |x(t)|^2 \ \mathrm{d}t = \int\limits_{-\infty}^{+\infty} |X(f)|^2 \ \mathrm{d}f$$</span></p>
<p>and</p>
<p><span class="math-container">$$ \overline{t} = \frac{1}{E} \int\limits_{-\infty}^{+\infty} t \ |x(t)|^2 \ \mathrm{d}t $$</span></p>
<p><span class="math-container">$$ \overline{f} = \frac{1}{E} \int\limits_{-\infty}^{+\infty} f \ |X(f)|^2 \ \mathrm{d}f $$</span></p>
<p>and energy dispersion as:</p>
<p><span class="math-container">$$\Delta t = \sqrt{ \frac{1}{E} \int\limits_{-\infty}^{+\infty} (t - \overline{t})^2 \ |x(t)|^2 \ \mathrm{d}t }$$</span>
<span class="math-container">$$\Delta f = \sqrt{ \frac{1}{E} \int\limits_{-\infty}^{+\infty} (f - \overline{f})^2 \ |X(f)|^2 \ \mathrm{d}f }$$</span></p>
<p>then for every finite-energy signal <span class="math-container">$x(t)$</span>, with <span class="math-container">$\Delta t$</span> and <span class="math-container">$\Delta f$</span> finite, one gets:</p>
<p><span class="math-container">$$\Delta t \, \Delta f \ge \frac{1}{4\pi}$$</span></p>
<p>The bound is attained for certain Gaussian variants. My interpretation is that the question is more about the term 'time/frequency dispersion' than about the fuzzy concept of 'time/frequency resolution'.</p>
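As a numerical sanity check (my own sketch): under the unitary convention above, the Gaussian x(t) = exp(-pi t^2) is its own Fourier transform, so Delta f equals Delta t and the product can be evaluated from the time side alone:

```python
import numpy as np

# Check that the Gaussian x(t) = exp(-pi t^2) attains
# dT * dF = 1/(4*pi).  With the unitary Fourier convention this
# x(t) is its own transform, so dF equals dT.
dt = 1e-3
t = np.arange(-10, 10, dt)
x = np.exp(-np.pi * t**2)

E = np.sum(np.abs(x)**2) * dt                                # total energy
t_bar = np.sum(t * np.abs(x)**2) * dt / E                    # center of mass
dT = np.sqrt(np.sum((t - t_bar)**2 * np.abs(x)**2) * dt / E) # time dispersion

print(dT * dT, 1 / (4 * np.pi))  # both ≈ 0.0796
```

Any other pulse shape substituted for `x` yields a strictly larger product, in line with the inequality.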
|
https://dsp.stackexchange.com/questions/42867/uncertainty-principle-duration-bandwidth-principle
|
Question: <p>I have a requirement to detect/reduce side-talk/background noise in real-time audio. I am stuck on how to detect this from time-frequency analysis of the audio. I am already getting the time-frequency data from an STFT (I am using Java, for easier integration with our project). Can I do this without any machine/deep learning algorithm? I do not have much idea about these, and whenever I read articles they mainly reach this point and then hand the data over to a machine/deep learning algorithm. But when I visualize the audio data as a spectrogram via WavePad, I can clearly identify the voice and the noise within the same or other frequency bands. What could be the algorithm behind this? How can I detect these from the time-frequency STFT data?
<a href="https://i.sstatic.net/bYeyu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bYeyu.png" alt="enter image description here" /></a>
Thanks in advance</p>
Answer: <p>After some R&amp;D in this area, I found only one good way to approach the problem: the Short-Time Fourier Transform <strong>(STFT)</strong>, since the mere RMS of an audio frame is the mixed energy of all frequencies present in it. It can give an idea, but it will fail in many cases. Through the STFT we get frequency bins (ranges) and the power of those specific frequencies, so I can detect voice above a certain threshold and detect noise/side-talk below that threshold. Now you can go two ways with this information.</p>
<p><strong>Number 1</strong> is the obvious one: complex scaling of the frequency-domain power and then reconstructing the audio in real time. But after trying this I failed horribly, as frequency-domain scaling has a great impact on the time domain. (I tried overlapping samples + windowing + zero-padding + scaling + overlap-save etc., and will continue exploring; let me know if you can help me as a newcomer.)</p>
<p>And then I thought of another, easier and more fun way <strong>(Number 2)</strong> for a minimally-to-moderately noisy environment. You can collect information in the frequency domain, such as a voice/non-voice probability from the threshold and the threshold pass/fail count within a single frame, and design an adaptive algorithm that calculates a "scale" value to apply in the time domain. This won't give 100% background-noise removal (the noise is still present during actual speech), but when you are not talking it can work like magic if you implement it correctly. So the background noise or side-talk is scaled down really low and won't bother others when you are not talking. Here is the result after I designed and applied my adaptive algorithm.</p>
<p><a href="https://i.sstatic.net/sRno5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sRno5.png" alt="enter image description here" /></a></p>
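A minimal sketch of the "Number 2" idea (the 300 Hz tone, the 100-1000 Hz band, and the power threshold are all illustrative assumptions, not the values of the actual implementation):

```python
import numpy as np

# Toy frame-wise gate: measure band power per frame via the FFT
# and scale the corresponding time-domain samples down when it
# falls below a "voice" threshold.
fs = 16000
rng = np.random.default_rng(0)
t = np.arange(fs) / fs
# "speech" (a 300 Hz tone) only in the second half; noise throughout
x = np.sin(2 * np.pi * 300 * t) * (t >= 0.5) + 0.02 * rng.standard_normal(fs)

frame = 512
freqs = np.fft.rfftfreq(frame, 1 / fs)
band = (freqs >= 100) & (freqs <= 1000)   # crude "voice" band
thresh = 10.0                             # tuned for this toy signal

y = x.copy()
for i in range(len(x) // frame):
    seg = x[i * frame:(i + 1) * frame]
    p = np.mean(np.abs(np.fft.rfft(seg)[band]) ** 2)  # mean band power
    y[i * frame:(i + 1) * frame] *= 1.0 if p > thresh else 0.1

# noise-only frames end up attenuated 10x; "speech" frames pass untouched
```

A real implementation would smooth the gate across frames (and cross-fade at frame edges) so the scaling does not click, which is where the adaptive part of the algorithm comes in.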
|
https://dsp.stackexchange.com/questions/81297/detecting-background-noise-from-audio-time-freq-domain-analysis
|
Question: <p>I collected some data for a practical application, where the signal represents force data obtained from an impact of a punch against a force plate attached to a quasi-rigid rig (it moves once the impact occurs). I have a few questions to understand how to deal with my data.</p>
<p>Features of the signal:</p>
<p>the signal (sampled at 1000 Hz) has a length of 2 seconds, but the impact lasts about 60 ms. The recording is 2 s for practical reasons, though. Therefore, before the impact a baseline is recorded (values close to zero), and after the impact vibrations occur until the data settle back to baseline levels. I need to figure out a reasonable cut-off frequency to use with a 4th-order Butterworth filter to properly filter the signal.</p>
<ol>
<li>When exploring the frequency content (FFT) of the signal, should I use the whole 2 s signal or only the 30 ms of interest? Given the sampling frequency, might the data points over 30 ms not be enough?</li>
<li>Vibrations occurring due to the impact are not of interest, but their frequency content is likely to partly overlap that of the portion of the signal of interest. How should I deal with this? I could select from just before the impact (arbitrarily) to when the signal returns to zero after the impact, and then use the data from that point onwards to explore the frequency content of the unwanted signal only?</li>
<li>There are at least 10 impacts per subject. When exploring the frequency content, should I average across all trials and all subjects to obtain an average frequency content, or should I perform the analysis for each trial and subject independently?</li>
</ol>
<p>Attached a figure of a typical signal (x = time; y = Newtons)</p>
<p><a href="https://i.sstatic.net/qV2vX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qV2vX.jpg" alt="Impact signal (first spike would be the portion of interest)"></a></p>
<p><a href="https://i.sstatic.net/CuG2Z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CuG2Z.jpg" alt="Power spectral density"></a></p>
<p><a href="https://i.sstatic.net/cJiia.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cJiia.jpg" alt="Amplitude spectogram"></a></p>
<p><a href="https://i.sstatic.net/QpWKU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QpWKU.jpg" alt="3D spectogram"></a></p>
<p><a href="https://i.sstatic.net/VF7ro.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VF7ro.jpg" alt="Zoom the the portion of interest filtered at 80Hz Low Pass"></a></p>
Answer: <ol>
<li><p>I think you will get almost the same frequency content in both cases (FFT of the 60 ms segment and FFT of the 2 s record). The time series indicates that there is vibration only from 0.6 to 1.2 s, but it is better to use the full signal when doing the FFT, as there is then no fear of losing any data. If you cut some portion from the time series, you have to be careful about choosing the window (use cosine tapering, not a rectangular window), as edge effects may be introduced into your frequency content. </p></li>
<li><p>Sorry, I didn't get the question. Could you please rephrase it?</p></li>
<li><p>I think that the time series will be different for each trial; therefore, it is not recommended to average the time series themselves. It is better to analyze all the time sequences and obtain their frequency contents. If the dominant frequencies of all the trials are similar, then an average frequency can be calculated. If not, you can try to find out the reason, e.g. inconsistency in the sample material.</p></li>
</ol>
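As a sketch of the cosine-tapering advice in point 1 (the segment position, the Gaussian stand-in for the impact, and the taper fraction are illustrative assumptions), using SciPy's Tukey window:

```python
import numpy as np
from scipy.signal import windows

fs = 1000                          # Hz, as in the question
t = np.arange(0, 2, 1 / fs)        # the full 2 s record
# crude stand-in for a ~60 ms impact centred at 0.7 s
x = 50 * np.exp(-((t - 0.7) / 0.01) ** 2)

# cut a 200 ms segment around the impact and cosine-taper its edges
seg = x[600:800]
win = windows.tukey(len(seg), alpha=0.25)  # flat middle, 12.5% tapered ends
tapered = seg * win

spec = np.abs(np.fft.rfft(tapered))
freqs = np.fft.rfftfreq(len(seg), 1 / fs)
# the taper suppresses the discontinuity a rectangular cut would create,
# so the spectrum reflects the impact rather than the cut points
```

The Tukey window keeps the impact itself untouched in the flat middle while smoothly forcing the segment ends to zero, which is exactly what "cosine tapering, not a rectangular window" asks for.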
|
https://dsp.stackexchange.com/questions/50611/frequency-analysis-to-determine-low-pass-cut-off-frequency
|