Question: <p>I have a device that, in the lab, takes voltage readings every 2 seconds. However, in real-life application the device would only wake every 10 minutes and then take 100 readings whilst awake. How do I change my collected data to mirror real-life application? Only then can I interpret whether my device is working as it should. I see from reading various posts that downsampling alone is not appropriate.</p>
Answer:
|
https://dsp.stackexchange.com/questions/61794/how-to-downsample-my-data-readings-from-0-5hz-to-0-001667hz-do-i-filter-and-dow
|
Question: <p>This question is a part of a more general question the answer of which I don't know -
<em>How to apply a filter in the freq domain and then convert the filtered signal back to the time domain?</em> Well, I partially googled the answer that I need to</p>
<ul>
<li>convert the signal in FFT</li>
<li>multiply by the filter</li>
<li>convert back to the time domain</li>
</ul>
<p>but I'm not entirely sure if I've applied this idea correctly to atmospheric absorption filtering (see below).</p>
<p>I'm also not sure whether I can do the filtering entirely in the time domain (as suggested <a href="https://dsp.stackexchange.com/questions/19505/how-to-convert-filter-into-frequency-domain-to-do-filtering">here</a> by convolution?) somehow alleviating the need to switch back and forth to the freq domain.</p>
<p>Input: sound wave (gunshot sound pressure subtracted atmospheric pressure) and atmospheric conditions. Output: the same sound wave attenuated with an atmospheric absorption filter.</p>
<p>I'm using <code>python-acoustics</code> to find the atmospheric attenuation coefficient as described in <a href="https://en.wikibooks.org/wiki/Engineering_Acoustics/Outdoor_Sound_Propagation" rel="nofollow noreferrer">Engineering Acoustics/Outdoor Sound Propagation</a>.</p>
<p>I've come up with the following code:</p>
<pre><code>import numpy as np

def atmosphericAttenuation(signal, distance, Fs, **kwargs):
    """
    Apply atmospheric absorption to the `signal` for all its FFT frequencies.
    It does not account for the geometrical attenuation.

    Parameters
    ----------
    signal - a pressure waveform (time domain)
    distance - the travelled distance, m
    Fs - sampling frequency of the `signal`, Hz
    kwargs - passed to the `Atmosphere` class

    Returns
    -------
    signal_attenuated - attenuated signal in the original time domain
    """
    # pip install acoustics
    from acoustics.atmosphere import Atmosphere
    atm = Atmosphere(**kwargs)
    signal_rfft = np.fft.rfft(signal)
    freq = np.fft.rfftfreq(n=len(signal), d=1. / Fs)
    # compute the attenuation coefficient for each frequency
    a_coef = atm.attenuation_coefficient(freq)
    # (option 2) signal_rfft *= 10 ** (-a_coef * distance / 20)
    signal_rfft *= np.exp(-a_coef * distance)
    signal_attenuated = np.fft.irfft(signal_rfft)
    return signal_attenuated
</code></pre>
<p>Am I doing it right? Which one is correct:</p>
<ul>
<li><code>signal_rfft *= np.exp(-a_coef * distance)</code> <- <span class="math-container">$P(r) = P(0) \exp (-\alpha r)$</span></li>
<li><code>signal_rfft *= 10 ** (-a_coef * distance / 20)</code> <- <span class="math-container">$A_a = -20 \log_{10} \frac{P(r)}{P(0)} = \alpha r$</span></li>
</ul>
<p>If neither, please describe how it should be done.
Thank you.</p>
Answer: <p>The FFT has a fixed frequency resolution, and I don't recommend doing such a frequency modification with the FFT. See more at this <a href="https://dsp.stackexchange.com/questions/6220/why-is-it-a-bad-idea-to-filter-by-zeroing-out-fft-bins">question</a>.</p>
<p>You may use <code>Atmosphere.impulse_response</code> to obtain the impulse response and then apply a time-domain convolution, which gives a more reasonable result.</p>
<blockquote>
<p>Which one is correct:</p>
<ul>
<li><code>signal_rfft *= np.exp(-a_coef * distance)</code></li>
<li><code>signal_rfft *= 10 ** (-a_coef * distance / 20)</code></li>
</ul>
</blockquote>
<p>According to the <a href="http://python-acoustics.github.io/python-acoustics/atmosphere.html" rel="nofollow noreferrer">documentation</a>, the attenuation coefficient <span class="math-container">$\alpha$</span> describes atmospheric absorption in dB/m as a function of frequency, so the second one is correct.</p>
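<p>To sanity-check the dB route numerically, here is a minimal, self-contained sketch in plain NumPy. The constant coefficient of 0.05 dB/m is made up for illustration; in the real code it would be the frequency-dependent array returned by <code>atm.attenuation_coefficient(freq)</code>. Over 100 m that is 5 dB, i.e. an amplitude ratio of <span class="math-container">$10^{-5/20} \approx 0.562$</span>:</p>

```python
import numpy as np

Fs = 8000
t = np.arange(1024) / Fs
signal = np.sin(2 * np.pi * 440 * t)

distance = 100.0
a_coef = 0.05  # dB/m; hypothetical constant standing in for the per-bin values

spectrum = np.fft.rfft(signal)
spectrum *= 10 ** (-a_coef * distance / 20)   # dB -> amplitude ratio per bin
attenuated = np.fft.irfft(spectrum, n=len(signal))

# With a constant coefficient every bin is scaled equally, so the whole
# waveform shrinks by exactly 10**(-5/20).
ratio = np.max(np.abs(attenuated)) / np.max(np.abs(signal))
```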
|
https://dsp.stackexchange.com/questions/76023/how-to-apply-an-atmospheric-attenuation-filter-in-the-freq-domain-and-then-conve
|
Question: <p>I have just implemented a discrete-time convolution with HRIR filters at a range of angles. I also implemented an Overlap-and-Add method with FFT and IFFT to compute the same convolution in the frequency domain.
The HRIRs are the same measurements, taken from a public database, for both the DT and Overlap-and-Add methods.</p>
<p>The input signal is an audio file and the output is supposed to be a 3D effect of producing a sound that moves in the horizontal and vertical planes for defined angle steps for both methods. The horizontal plane movement is a full circle at the level of the listener's ears from left to right and the vertical movement is a half circle from left ear to right ear at the level of the listener.</p>
<p>My questions are:</p>
<p>1) If I wish to compare the filtering done with each HRIR angle above in discrete time, is it useful to check the shape and magnitude of the HRIR at each angle and compare them across angles?</p>
<p>2) With regard to comparing the filtering done in the frequency domain, what parameters should I compare? </p>
<p>3) Does the FFT of the individual HRIR at an angle help to compare the results?</p>
<p>Note that I have only compared the quality of the sound in the output file in both DT and Frequency Domains.</p>
Answer: <p><a href="https://i.sstatic.net/L7aJK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L7aJK.png" alt="HRIR"></a></p>
<p><a href="https://i.sstatic.net/VTfxB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VTfxB.png" alt="HRTF"></a>
These are the figures of filtering at certain times in Discrete Time and Frequency Domains</p>
|
https://dsp.stackexchange.com/questions/56755/comparisons-of-fir-causal-filters-of-type-hrir-in-discrete-time-and-frequency-do
|
Question: <p>The l1 trend filtering objective uses <span class="math-container">$||Dx||_1$</span>, where D is the second difference matrix. Why is D taken of size (n-2)xn rather than as a circulant matrix of size nxn? I have implemented it both ways but there is not much difference in the results.</p>
Answer: <p>If you have a periodic signal then you naturally take a circulant matrix. However, in the non-periodic case the values at the beginning and end of the signal are unrelated, so it makes no sense to consider the wrap-around differences. Thus the convolution with the filter of length 3 gives <span class="math-container">$n-(3-1)=n-2$</span> complete linear combinations in the "inner" segment of the product.</p>
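<p>A quick way to see this in code: build the (n-2)xn second-difference matrix explicitly and check that a purely linear trend is annihilated by it (a minimal NumPy sketch, not the trend-filtering algorithm itself):</p>

```python
import numpy as np

def second_difference_matrix(n):
    """Non-periodic second-difference matrix D of shape (n-2, n):
    each row holds the stencil [1, -2, 1] shifted by one column."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

n = 10
D = second_difference_matrix(n)

# A purely linear trend has zero second differences, so ||Dx||_1 = 0
# and the l1 trend filter leaves it untouched.
x = 3.0 * np.arange(n) + 2.0
```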
|
https://dsp.stackexchange.com/questions/82499/why-is-the-second-difference-matrix-of-size-n-2-x-n-in-l1-trend-filtering
|
Question: <p>Suppose that I have two signals $x[n] = \left\{2,4,1\right\}$ and $p[n] = \left\{5,1,8\right\}$ and I want to multiply them.</p>
<ul>
<li>How do you do that?</li>
<li>How different is it from convolving two signals?</li>
</ul>
<p>I understand that multiplication in one domain is equal to convolution in other domain. How do you choose as to what to use : multiplication or convolution?</p>
Answer: <p>Yes, you are correct. Multiplication in the time domain means convolution in the frequency domain and vice versa. Writing $y[n]$ for your $p[n]$, multiplying your signals $x[n]$ and $y[n]$ will give an output:</p>
<p>\begin{align}
z[n]&=\{2\cdot 5, 4\cdot 1, 1\cdot 8\}\\ &= \{10, 4, 8\}\end{align} </p>
<p>Remember that this output is in time domain. When you convolve $x[n]$ and $y[n]$, you will get $z[n]$ in time domain as:</p>
<p>\begin{align}
z[n]&=\{5\cdot 2, 5\cdot 4+1\cdot 2, 5\cdot 1+1\cdot 4+8\cdot 2, 1\cdot 1+8\cdot 4, 8\cdot 1\}\\ &= \{10,22,25,33,8\}
\end{align}</p>
<p>Just flip one of the signals around zero and start moving right one place at a time. Multiply the corresponding points as you go along. The output has a larger sequence because convolution output has ${\rm length}(x)+{\rm length}(y)-1$ points. </p>
<p>To answer your query about where to use multiplication & convolution, assume you want to pass signal $x(n)$ through filter $y(n)$. The output of the filter $z(n)$ will be convolution of $x(n)$ and $y(n)$.<br>
Now assume that you first converted from time to frequency domain, i.e. $X(e^{i\omega})$ and $Y(e^{i\omega})$ are frequency domain representation of $x(n)$ and $y(n)$. Now, to find $Z(e^{i\omega})$(output in frequency domain), you have to multiply $X(e^{i\omega})$ and $Y(e^{i\omega})$. To get the output in time domain i.e. $z(n)$ you have to apply inverse transform. </p>
<p>So that's how you use convolution and multiplication.</p>
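<p>A minimal NumPy check of the statements above, using the signals from the question (<code>p[n]</code> is named <code>p</code> here):</p>

```python
import numpy as np

x = np.array([2, 4, 1])
p = np.array([5, 1, 8])

# Pointwise multiplication in the time domain:
product = x * p                      # [10, 4, 8]

# Linear convolution in the time domain, length(x) + length(p) - 1 = 5 points:
conv = np.convolve(x, p)             # [10, 22, 25, 33, 8]

# The same convolution via the frequency domain: zero-pad both signals
# to the output length, multiply their DFTs, and transform back.
N = len(x) + len(p) - 1
conv_fft = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(p, N), N)
```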
|
https://dsp.stackexchange.com/questions/10453/how-are-two-signals-multiplied-and-how-is-it-different-from-convolving-two-sign
|
Question: <p>I am trying to filter a square wave signal down to a limited band (1/4 or 1/8 of the original). I noticed a lot of ringing in the wave when I use my filter (elliptic); I also tried Butterworth and others (Matlab's fir1 and the classic IIR designs), but the only filter that seems to give no ringing is the Gaussian. So my question is: how should I go about designing an LPF with minimal ringing? (Preferred characteristics: minimal passband ripple, stopband of more than -50 dB, relatively fast roll-off.) Also, as I am trying to implement this on a DSP, low-order filters such as IIR types are preferred.</p>
<p>Thank you for any help.</p>
Answer: <p>Generally the amount of ringing that you get is a function of the steepness of the filter in the frequency domain, regardless of filter type.
At the same steepness an Elliptic will require a lower order but will have pretty much the same ringing as a Butterworth. </p>
<p>Choosing between a linear phase or minimum phase will change the character but not the extent of the ringing. </p>
<p>No matter how you slice it, a square wave contains a lot of high frequencies and once you filter those out, it's not a square wave any more.</p>
|
https://dsp.stackexchange.com/questions/23035/low-pass-filter-with-minimal-ringing
|
Question: <p>I'm confused because of filter <em>length</em>, whether (such) filters can be used to filter audio on a "per sample basis"?</p>
<p>By per sample basis I mean that I would like to filter audio one sample at a time, but vary the filter parameters even one sample at a time.</p>
<p>What then confuses me is, can the filter have any effect if only one sample is input to it? Or whether the filter needs a longer sample (e.g. that matches the filter length)?</p>
<hr>
<p>Since the filter type seems crucial, I'm specifically interested in the Parks-McClellan FIR, and the context is <strong>dynamic equalization</strong> (in which it seems that parameters such as gain would need to be varied on close to a per-sample basis). I.e., I'm interested in whether the PM algorithm can be utilized for dynamic equalization in such a way that one recomputes the PM design between some samples and then filters those samples, i.e. in a block-like fashion.</p>
Answer: <p>Obviously, filters need more than one sample of input – I mean, a single sample is a single number, and how would a single number have something like a frequency? Filtering is something you apply to a digital <em>signal</em>, and <em>signal</em> is defined by being a <em>changing entity</em> – i.e. <em>different samples</em>.</p>
<p>Hence, digital filters always need <em>sequences</em> of samples. The typical (non-rate-changing) filter works in such a manner that each time you "push in" a new sample, an output sample "pushes out" of the filter.</p>
<p>You should definitely brush up your knowledge on what a digital filter is, and especially how FIR filters work. The important point here is the understanding that application of a filter to a signal is a <em>discrete convolution</em>. </p>
<p>You're absolutely free to change the system with which you convolve any time – but you'd break the time-invariance that makes LTI (linear, time invariant) systems such as FIRs so "easy" to handle.</p>
<p><strong>EDIT:</strong> with your clarification: </p>
<p>Yes, any FIR will need "history" of samples. But also, yes, you can recompute the coefficients anytime, and "switch over" to a different FIR. However, you must make sure your new FIR's "delay"/memory elements were already filled with the previous input samples, otherwise you'll get inconsistencies.</p>
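<p>A minimal sketch of this idea (a hypothetical <code>StreamingFIR</code> class, plain NumPy): the delay line survives a coefficient swap, and with fixed taps the per-sample output matches an ordinary convolution:</p>

```python
import numpy as np

class StreamingFIR:
    """Per-sample FIR filter that keeps its own delay line, so the
    coefficients can be swapped at any sample without losing history."""
    def __init__(self, taps):
        self.taps = np.asarray(taps, dtype=float)
        self.history = np.zeros(len(taps))

    def set_taps(self, taps):
        # New taps reuse the existing, already-filled delay line
        # (same filter length assumed here for simplicity).
        self.taps = np.asarray(taps, dtype=float)

    def process(self, sample):
        self.history = np.roll(self.history, 1)
        self.history[0] = sample
        return float(np.dot(self.taps, self.history))

taps = [0.25, 0.5, 0.25]
fir = StreamingFIR(taps)
x = np.arange(8, dtype=float)
y = np.array([fir.process(s) for s in x])
```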
|
https://dsp.stackexchange.com/questions/32104/do-filters-work-on-a-per-sample-basis
|
Question: <p>I am learning about FIR filters and I'm confused.
I am trying to find out about the different types of FIR filters.</p>
<ol>
<li>Is direct form and n-tap FIR filter the same?</li>
<li>What does transposed FIR filter do?</li>
</ol>
Answer: <p>A finite impulse response (FIR) digital filter implements the following convolution sum</p>
<p>$$y(n)=\sum_{k=0}^{N-1}h(k)x(n-k)\tag{1}$$</p>
<p>for each output sample $y(n)$, where $x(n)$ is the discrete-time input signal, $h(n)$ is the filter's impulse response, and $N$ is the filter length. The values $h(n)$ are also called <em>filter taps</em>, and $N$ is then referred to as the number of taps. The filter described by Equation (1) is also called an $N$-tap filter.</p>
<p>Direct-form and transposed direct-form are just different implementations, i.e. different ways to compute the sum in (1). In theory they are identical, but when computed with finite precision, there can be differences between the different implementations. The direct-form FIR structure is also called <em>tapped delay line</em> or <em>transversal filter</em>.</p>
<p>The two realizations below are the direct-form structure (transversal filter, tapped delay-line) and the transposed structure (from Oppenheim and Schafer, <em>Discrete-time Signal Processing</em>):
<img src="https://i.sstatic.net/rr3dr.png" alt="enter image description here"></p>
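<p>For illustration, here is a minimal NumPy sketch of both structures; with exact arithmetic they produce identical outputs, as expected (the differences only appear under finite-precision, e.g. fixed-point, arithmetic):</p>

```python
import numpy as np

def fir_direct(h, x):
    """Direct form (tapped delay line): delay the input first,
    then multiply-accumulate against the taps."""
    state = np.zeros(len(h))
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        state = np.roll(state, 1)
        state[0] = xn
        y[n] = np.dot(h, state)
    return y

def fir_transposed(h, x):
    """Transposed form: scale the input by every tap first,
    then delay the partial sums toward the output."""
    state = np.zeros(len(h))               # state[0] holds the next output
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        state = state + h * xn             # broadcast multiply-accumulate
        y[n] = state[0]
        state = np.append(state[1:], 0.0)  # shift partial sums forward
    return y

h = np.array([1.0, -0.5, 0.25])
x = np.array([1.0, 2.0, 3.0, 4.0, 0.0, -1.0])
```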
|
https://dsp.stackexchange.com/questions/15412/fir-filters-direct-form-transposed-fir
|
Question: <p>I have a signal sampled at 256 Hz, which I want to filter with a 50 taps long FIR filter in real time.
Would it be a problem, if my data block size is only 32 samples?
And should I then concatenate 3 blocks, convolve them with the filter and output only the middle part (to avoid discontinuities)?</p>
<p>And more general questions: were can I read more about all real-time aspects of time-series filtering? In particular about block processing?</p>
<p>Many thanks for help.</p>
Answer: <p>Your data block can be as short as 1 sample (although longer is usually more efficient), as long as you save each full convolution response vector to overlap-add into all the following output blocks. Look up overlap-add/overlap-save fast convolution to see how this can be done using an FFT/IFFT plus some add/save buffering.</p>
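<p>As a minimal illustration of the block-processing idea (using <code>np.convolve</code> per block to stand in for the FFT/IFFT fast-convolution step), 32-sample blocks with a 50-tap filter reproduce the full convolution exactly once the block tails are overlap-added:</p>

```python
import numpy as np

def overlap_add(x, h, block_len):
    """Filter x with FIR h in blocks of block_len samples, carrying the
    convolution tail of each block over into the following output blocks."""
    M = len(h)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        # Full convolution of this block is len(block) + M - 1 samples long;
        # its trailing M - 1 samples overlap-add with the next block's output.
        y[start:start + len(block) + M - 1] += np.convolve(block, h)
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(256)     # e.g. one second of 256 Hz data
h = rng.standard_normal(50)      # a 50-tap FIR
y_blocks = overlap_add(x, h, block_len=32)
y_direct = np.convolve(x, h)
```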
|
https://dsp.stackexchange.com/questions/43488/buffering-block-and-filter-length-in-real-time-processing
|
Question: <p>The transfer function of a low pass filter is H(w). From this I want to develop a high pass filter. I read in <a href="http://books.google.co.in/books/about/Digital_Signal_Processing_Principles_Alg.html?id=CTw6GoBh-vkC" rel="nofollow">DSP by Proakis, Sec 4.5</a>, that the high pass filter can be obtained by translating H(w) by PI radians. I did not get how this forms a high pass filter; I think it should form a bandpass. </p>
Answer: <p>In previous sections of the book, the fact that a discrete-time signal's spectrum is periodic may have been mentioned. It can be described formally as follows:
$$X(e^{j\omega})=\frac{1}{T} \sum_{k=-\infty}^{\infty}X_C\left(j\left(\frac{\omega}{T}-\frac{2\pi k}{T}\right)\right)$$
where $X_C(j\omega)$ is the Fourier Transform of the continuous signal and $T=1/f_s$ is the sampling period. That expression says that the discrete-time spectrum can be computed from the continuous spectrum by expanding its width $T$ times and repeating it every $\frac{2\pi}{T}$. To keep things simple, we can normalize the frequency so that every signal's spectrum is periodic every $2\pi$, no matter how we sampled it. </p>
<p>Keeping this in mind, a low pass filter isn't just a "step" around $\omega=0$, but also around $\omega=2\pi,4\pi,...$ and, of course, their negative counterparts. I mean low frequencies repeat around $\omega=0, 2\pi,4\pi,...$, while high frequencies do around $\omega=-\pi,\pi,3\pi,...$
<img src="https://i.sstatic.net/xopkk.png" alt="enter image description here">
Shifting its frequency response by $\pi$, we get $H(e^{j(\omega-\pi)})$, which would look like the following:
<img src="https://i.sstatic.net/xE90M.png" alt="enter image description here">
Remember that $\omega=\pi$ corresponds to your Nyquist frequency (= half the sampling frequency).</p>
<p><em>Image credit: Oppenheim & Schafer, Discrete-time signal processing</em></p>
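<p>In code, the $\pi$-shift corresponds to multiplying the impulse response by $(-1)^n$. A small NumPy sketch with a windowed-sinc lowpass (the length and cutoff are made up for illustration): the resulting filter has (near) zero gain at DC and the original DC gain at Nyquist:</p>

```python
import numpy as np

# A small windowed-sinc lowpass FIR with normalized cutoff 0.25 (fc = Fs/8).
M = 41
n = np.arange(M)
fc = 0.25
h_lp = 2 * fc * np.sinc(2 * fc * (n - (M - 1) / 2)) * np.hamming(M)
h_lp /= h_lp.sum()                    # unity gain at DC

# Shifting the frequency response by pi <=> multiplying the taps by (-1)^n.
h_hp = h_lp * (-1.0) ** n

gain_dc_hp = abs(np.sum(h_hp))                   # H_hp at omega = 0
gain_nyq_hp = abs(np.sum(h_hp * (-1.0) ** n))    # H_hp at omega = pi
```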
|
https://dsp.stackexchange.com/questions/8279/frequency-response-of-low-pass-filter-and-high-pass-filter
|
Question: <p>Practically speaking, if one is interested in a frequency band well-separated from the line noise (say, for example, the 10-20 Hz band, with 60 Hz line noise), would it be advisable to notch out this line noise before bandpass filtering to the desired frequency range? It seems that, given of course an acceptable filter rolloff, it would be better not to perform an additional filtering step if it were not necessary; however, I'm wondering whether this may lead to some unforeseen problem.</p>
Answer: <p>A general rule of thumb is to filter as little as necessary, because every filter distorts your desired signal, even if just a little bit. Also: a notch filter is advised in a situation where narrow-band interference is present within your signal's band. The notch filter will reject the interference while having minimal effect (ideally) in your signal.</p>
<p>In the problem you describe, my preferred approach would be to go with a low-pass filter only, and increase its order to achieve sufficient rejection of the 60 Hz noise.</p>
|
https://dsp.stackexchange.com/questions/22451/notch-filtering-line-noise-outside-of-frequency-band-of-interest
|
Question: <p>Can you please explain in simple terms what do the input parameters indicate in the <a href="http://www.mathworks.com/help/images/ref/ordfilt2.html" rel="nofollow"><code>ordfilt2</code></a> function in matlab?</p>
<pre><code>B=ordfilt2(A,Order,Domain)
</code></pre>
<p>I have seen people use this function as <code>J = ordfilt2(I, 9, true(5))</code> or <code>J=ordfilt2(I,25,ones(3,3))</code></p>
<p>But I do not understand what each input does to the image <code>I</code> to give <code>J</code> as output..</p>
<p>Thanks a lot in advance</p>
Answer: <p>I'm pretty new to this myself, so please correct me if I get this wrong.</p>
<p>Using your example, <code>J = ordfilt2(I, 9, true(5))</code>:
<code>ordfilt2</code> slides a window the same size as <code>true(5)</code> over the 2-D array <code>I</code>. For each 5x5 neighbourhood, sort all the elements from smallest to largest; the corresponding pixel of <code>J</code> is then the 9th smallest element of that neighbourhood.</p>
<p>To use a smaller example, so I have room to type it:</p>
<pre><code>I = [ 1 2 4 5 ;
5 3 5 1 ;
0 3 5 2 ;
2 1 7 7 ];
J = ordfilt2(I,3, ones(2,2));
</code></pre>
<p>Now, let's go through a few blocks one at a time. The first <code>ones(2,2)</code> block is <code>[1 2; 5 3]</code> in the top left corner. If we sort these elements, we get <code>[1 2 3 5]</code>, and since we're looking for the 3rd smallest, we receive a value of 3 for the (1,1) position of <code>J</code>.</p>
<pre><code>J = [ 3 ? ? ? ;
? ? ? ? ;
? ? ? ? ;
? ? ? ? ];
</code></pre>
<p>Next up is the <code>I(1:2,2:3)</code> block. Ordering those elements gives <code>[2 3 4 5]</code>, so the third smallest is 4. Now we replace <code>J(1,2)</code> with a 4. </p>
<pre><code>J = [ 3 4 ? ? ;
? ? ? ? ;
? ? ? ? ;
? ? ? ? ];
</code></pre>
<p>Go ahead and run this command and compare I, J to understand what's happening. One other thing to know is that the input matrix is padded by default with zeros at the lower and right sides. So the <code>I(4:5,4:5)</code> block is <code>[ 7 0; 0 0]</code>.</p>
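<p>A rough Python re-implementation of this description (following the walkthrough above: the window is anchored at the current pixel, with zero padding at the lower and right edges; MATLAB's exact window centring for even-sized domains may differ):</p>

```python
import numpy as np

def ordfilt2_like(I, order, dom_shape):
    """Order-statistic filter mimicking the walkthrough above: the window
    for output pixel (r, c) is I[r:r+kh, c:c+kw], zero-padded at the lower
    and right edges; the output is the `order`-th smallest element (1-based)."""
    kh, kw = dom_shape
    I = np.asarray(I, dtype=float)
    padded = np.pad(I, ((0, kh - 1), (0, kw - 1)))  # zeros below and right
    J = np.empty_like(I)
    for r in range(J.shape[0]):
        for c in range(J.shape[1]):
            window = padded[r:r + kh, c:c + kw]
            J[r, c] = np.sort(window, axis=None)[order - 1]
    return J

I = [[1, 2, 4, 5],
     [5, 3, 5, 1],
     [0, 3, 5, 2],
     [2, 1, 7, 7]]
J = ordfilt2_like(I, 3, (2, 2))
```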
|
https://dsp.stackexchange.com/questions/12582/what-exactly-does-ordfilt2-do
|
Question: <p>I'm currently working on a project in which I use the coefficients of a 9th-order IIR elliptical digital filter and a voice signal recorded on my cellphone, but I'm having trouble finding the best Q format for the implementation. Any suggestions? So far I've been analysing the maximum and minimum values of the two sources in order to determine the largest and smallest possible results of their product. Is this a good approach? How does one determine the best Q format for a system?</p>
Answer: <p>You need to calculate the transfer function from your input to each individual state variable.</p>
<p>This depends A LOT on how you implement your filter: I strongly recommend splitting it into second order sections and using either Direct Form I or transposed Form II for each section. Section order and pole/zero pairing can also make a big difference. Unless you have a REALLY good reason not to, follow this recipe: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.zpk2sos.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.zpk2sos.html</a></p>
<p>Once you have your filter topology sorted out you need to calculate all the transfer functions from the input to the state variables and look at the frequency responses. At this point you can also play around with the individual section gains to make this as smooth as possible and manage outliers and peak frequencies.</p>
<p>Then you can pick the Q based on the maximum gain over all frequencies in each transfer function. Add another 3 dB of headroom for transients (sine waves are NOT the worst case signal).</p>
|
https://dsp.stackexchange.com/questions/66815/how-to-find-the-best-q-format-representation
|
Question: <p>I'm trying to build a dataset where one of the features is a signal which has originally been sampled at 500 Hz, while another feature is a signal which was sampled at 100 Hz. I want to downsample both of them at 10 Hz and then align them (they start at different times).</p>
<p>What should I do after low-pass filtering them at 5 Hz? And what would be the most correct filter to use?</p>
<p>e.g. I have two signals, <code>signal1</code> and <code>signal2</code>:</p>
<p><code>signal1</code>: Sampled at 500 Hz, the first sample is at time <code>signal1_t0</code>.
<code>signal2</code>: Sampled at 100 Hz, the first sample is at time <code>signal2_t0</code>.</p>
<p>Once I have filtered the data as described above, would it be OK to first set <code>t0 = max(signal1_t0, signal2_t0)</code> and then get, e.g., the closest sample to <code>t0*n + 0.1</code> from all the samples in the interval <code>(t0*n, t0*n + 0.1)</code> for each signal?</p>
Answer: <p>To align the signals precisely, fractional delay all-pass filters are used for time delay correction. This assumes that the different processing channels each have a different time delay, which in many cases won't conveniently be an integer number of samples at the final output rate. Fractional delay filters are resampling structures and include Farrow filters and polyphase filters.</p>
<p>There is more info on Farrow Filters here:</p>
<p><a href="https://dsp.stackexchange.com/questions/17501/coefficients-of-farrow-structure">Coefficients of Farrow structure?</a></p>
<p>And general fractional delay filters here:</p>
<p><a href="https://dsp.stackexchange.com/questions/9349/how-to-pick-coefficients-for-fractional-delay-filters">How to pick coefficients for Fractional Delay Filters?</a></p>
|
https://dsp.stackexchange.com/questions/76732/whats-the-correct-way-to-align-two-signals-downsampled-at-10-hz
|
Question: <p>I have two signals and want to multiply them, e.g. voltage and current are multiplied to get power. The result (e.g. power) shall be a filtered result, e.g. to see the average.</p>
<p>Is it better to filter (digitally) each signal separately before the multiplication or to filter after the multiplication?</p>
<p>Intuitively, I would have said that filtering before the multiplication is better for three reasons:</p>
<ol>
<li>Less numerical problems</li>
<li>Noise of both signals is added by the multiplication, thus filtering before multiplication improves SNR</li>
<li>Multiplication generates modulation products and if you filter out a frequency you do not want to see anyway it also does not cause modulation products.</li>
</ol>
<p>Am I on the right track here or is there something I am overlooking?</p>
<p>Related: <a href="https://dsp.stackexchange.com/questions/31440/snr-after-multiplying-two-noisy-signals/31443#31443">SNR After Multiplying Two Noisy Signals</a></p>
<p><strong>Edit 1</strong>: To answer the question of typical signal spectra: Think of a laboratory grade measurement device like a multimeter. It acquires two signals which can be any signal shape (in time domain) but are band-limited by the input circuitry to avoid anti-aliasing. Noise is typically white noise with a bit of 1/f noise added. SNR is usually 10's of dB.</p>
Answer: <p>A personal rule: in general, it can be better to <strong>perform non-linear operations before linear ones</strong>. One reason behind that is that a lot of practical concerns are related to outliers or suspect behavior, which can easily be smoothed out (and become indistinct from other signals) by linear filters.</p>
<p>Let me reformulate. If <span class="math-container">$f_i$</span> denote filters, and <span class="math-container">$s_i$</span> signals, should one do <span class="math-container">$f_0 \ast(s_1 .s_2)$</span> or <span class="math-container">$(f_1 \ast s_1)(f_2 \ast s_2)$</span>? Without better knowledge of the spectra of the signals, the nature of the noise and the numerical objective, I have no definite answer. The second one is apparently more flexible, because you can play on two filters. </p>
<p>If the result is some kind of average, I would filter the product, as averaging is filtering, and it commutes with filtering. Thus you have a better chance of preserving the average by filtering the product, if the noise is low. Consider two signals like <span class="math-container">$+1,-1,+1,-1,\ldots$</span>: any even-length averaging will turn each of them into <span class="math-container">$0$</span>, while their product is flat with average 1.</p>
<p>I cannot see why you would have fewer numerical problems.<br>
Noise is added "logarithmically", but the same happens to the signals. But yes, modulation can be troublesome; still, two zero-mean cosines will be multiplied into a non-zero-mean signal, and this is what you want to measure.</p>
<p>In some cases though, to preserve contrast in image, <a href="https://en.wikipedia.org/wiki/Homomorphic_filtering" rel="nofollow noreferrer">homomorphic filtering</a> convert a product into a sum to filter separately the two components (cf. "The second one is apparently more flexible"). Finally, you can use non-linear filters on the product as well, in case of complicated noises. A related question on the ordre of operations can be found in <a href="https://dsp.stackexchange.com/a/59246/15892">Is convolution distributive over multiplication?</a></p>
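<p>The $+1,-1,+1,-1,\ldots$ example above can be checked directly (plain NumPy, even-length moving average):</p>

```python
import numpy as np

s1 = np.tile([1.0, -1.0], 8)     # +1, -1, +1, -1, ...
s2 = np.tile([1.0, -1.0], 8)

kernel = np.ones(4) / 4          # even-length moving average

# Filter first, then multiply: each average is zero, so the product is zero.
pre = np.convolve(s1, kernel, mode='valid') * np.convolve(s2, kernel, mode='valid')

# Multiply first, then filter: the product is identically 1, and stays 1.
post = np.convolve(s1 * s2, kernel, mode='valid')
```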
|
https://dsp.stackexchange.com/questions/59425/filter-before-or-after-multiplication-of-two-signals
|
Question: <p><a href="http://homepages.inf.ed.ac.uk/rbf/HIPR2/filtops.htm" rel="nofollow">This link</a> contains the following statement:</p>
<blockquote>
<p>In contrast to the frequency domain, it is possible to implement non-linear filters in the spatial domain. In this case, the summations in the convolution function are replaced with some kind of non-linear operator.</p>
</blockquote>
<p>Why is it not possible to perform non-linear filtering in frequency domain ?</p>
Answer: <p>Linear filters in the spatial domain have a direct equivalent in the frequency domain, so you can transform your data and filter between spatial<->frequency domains and get equivalent results. However the transforms that we use for converting between spatial<->frequency domains are only valid for <em>linear systems</em> - a non-linear filter such as a median filter therefore has no frequency domain equivalent, so we can't perform e.g. median filtering in the frequency domain.</p>
<p>Note that this does not mean that there are no non-linear operations that we can perform in the frequency domain, e.g. we could reduce all frequency components below a certain threshold value to zero, it's just that there are no counterparts between the two domains for non-linear operations, so the frequency domain threshold operation described would have no equivalent operation in the spatial domain.</p>
|
https://dsp.stackexchange.com/questions/8995/is-it-not-possible-to-perform-non-linear-filtering-in-frequency-domain
|
Question: <p>I have noisy barometer data with known variance.<br />
I studied Kalman filter but I did not find an answer to this problem:<br />
My process model is: altitude is changed because of velocity that is changed because of acceleration that is normally distributed.</p>
<p><span class="math-container">$$ s[k+1]=s[k]+v[k]*dt+a[k]*dt*dt/2 $$</span></p>
<p>Is my process state just (altitude) or (altitude and velocity) or (altitude, velocity, acceleration)?<br />
When I use velocity and acceleration - how shall I fill the measurement matrix when measuring just altitude? In some examples, I have seen 0 as a measurement of unknown variables, but it makes no sense to me, because velocity and acceleration remain constant, because no correction is applied to them.<br />
My goal is to mainly compute velocity.</p>
<p>Is there some other recommended algorithm to estimate the vertical velocity?</p>
<p>UPDATE:
I will try to ask a more specific question about the Kalman filter :-)</p>
<ul>
<li><p>I have state X = (s, v, a) - trajectory, velocity, acceleration.
I have state transition model F = ((1 dt dt*dt/2)(<strong>0</strong> 1 dt)(<strong>0</strong> 0 1)), where dt is the interval between the last and current measurement.</p>
</li>
<li><p>I have one-dimensional measurement Z (I measure trajectory only)</p>
</li>
<li><p><strong>So observation model H is (1 0 0)?</strong></p>
</li>
</ul>
<p>I have seen examples where only acceleration was measured and H was (0 0 1), but when acceleration was corrected, velocity and trajectory was updated also in "a priori state estimate" because velocity is dependent on acceleration. But will the velocity and acceleration be updated when they are not dependent on trajectory? See bold zeros in F matrix.</p>
Answer: <p>You might get good results when you consider the acceleration as an input $u$, so <a href="https://en.wikipedia.org/wiki/Kalman_filter#Underlying_dynamical_system_model" rel="nofollow noreferrer">the model</a> could then be written as</p>
<p>$$
x[k+1] =
\begin{bmatrix}
1 & \Delta t \\
0 & 1
\end{bmatrix} x[k] +
\begin{bmatrix}
\frac{\Delta t^2}{2} \\ \Delta t
\end{bmatrix} u[k] + w[k]
$$</p>
<p>$$
y[k] =
\begin{bmatrix}
1 & 0
\end{bmatrix} x[k] + v[k]
$$</p>
<p>where $x[k]$ a vector with the position and velocity and $w[k],v[k]$ zero mean Gaussian white noise. Here $v[k]$ has a covariance $R$ equal to the variance of the barometer and $w[k]$ probably has the following covariance $Q$</p>
<p>$$
Q = \sigma_u^2\,
\begin{bmatrix}
\frac{\Delta t^2}{2} \\ \Delta t
\end{bmatrix}
\begin{bmatrix}
\frac{\Delta t^2}{2} \\ \Delta t
\end{bmatrix}^\top
= \sigma_u^2\,
\begin{bmatrix}
\frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} \\
\frac{\Delta t^3}{2} & \Delta t^2
\end{bmatrix}
$$</p>
<p>where $\sigma_u^2$ is the variance of the measured acceleration (this assumes that there are no disturbance forces acting on the system).</p>
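The predict/update equations for this model can be sketched in a few lines of NumPy (the function name, noise values and test scenario below are illustrative assumptions, not part of the answer):

```python
import numpy as np

def kalman_step(x, P, z, u, dt, sigma_u2, R):
    """One predict/update cycle for the position-velocity model with the
    measured acceleration u as input and barometer reading z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    Q = sigma_u2 * np.outer(B, B)          # process noise from the accel variance
    H = np.array([[1.0, 0.0]])             # we only observe altitude

    # Predict
    x = F @ x + B * u
    P = F @ P @ F.T + Q

    # Update with the altitude measurement
    S = H @ P @ H.T + R                    # innovation covariance
    K = (P @ H.T) / S                      # Kalman gain, shape (2, 1)
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed with altitude samples and accelerometer readings, `x[1]` is the estimated vertical velocity.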
|
https://dsp.stackexchange.com/questions/48911/how-to-use-kalman-filter-for-altitude-prediction-based-on-barometer-data
|
Question: <p>I was reading the answer to <a href="https://dsp.stackexchange.com/questions/46671/decimation-and-filtering-in-the-frequency-domain">this question</a> provided by Phil Karn.</p>
<p>In the answer, it has been said:</p>
<blockquote>
<p>Ensure that the impulse response of your lowpass filter is shifted to the front of your time domain buffer AND properly windowed to M samples before you take the forward FFT to get the frequency domain representation of your filter. This keeps the result from wrapping around in the time domain when you take the inverse FFT. (Remember you're actually doing circular convolution when you want linear convolution.)</p>
</blockquote>
<p>I know that windowing is important because of spectral leakage but I do not completely understand this paragraph.</p>
<p>I think in this paragraph some important points are mentioned which are worth explaining in more detail.</p>
<p>What does exactly "shifting the impulse response of a filter to the front of the time domain" mean?</p>
<p>It has been said that "This keeps the result from wrapping around in the time domain when you take the inverse FFT". Is this a result of "shifting the impulse response of a filter to the front of the time domain" or a result of "windowing"?</p>
Answer: <p>Well, as mentioned by the original answer, "I assume you already know the basic rules for fast convolution: the FFT length N is equal to the data blocksize L plus the length of the filter impulse response M minus 1. Each operation uses L samples of new data plus M-1 samples of data from the old block."</p>
<p>The OP was going to do fast convolution i.e. FFT convolution, by taking FFT of the signal and multiplying by the FFT of the lowpass filter. What you quoted and what I just quoted, taken together, is just saying that</p>
<ol>
<li>when taking FFT of the signal he needs to pad the signal with zeros at the end before FFT'ing.</li>
<li>when taking FFT of the lowpass filter he also needs to pad the signal with zeros at the end before FFT'ing.</li>
<li>Of course, (1) and (2) need to be padded to the same FFT length, for multiplication between them to be possible.</li>
<li>The total FFT length required is L+M-1, where L is the actual data length of (1) and M the actual data length of (2).</li>
<li>Now, if the lowpass filter impulse response were placed anywhere later than the very start of the buffer, leading zeros would precede its first meaningful sample; those leading zeros still count toward M, making M unnecessarily big. If the FFT length is then smaller than L+M-1, the multiplication and subsequent iFFT would create a convolution whose tail wraps around to the beginning, per circular convolution. That's why the impulse response should be pushed all the way to the beginning / left / front.</li>
<li>Finally, because lowpass filters are typically IIR filters, their impulse responses nominally go on forever, so you need to crop the response to M samples with a windowing function. The quality and length of the window function determine how much of the designed LPF's low-pass behavior actually makes its way through to the FFT multiplication.</li>
</ol>
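Points 1-4 amount to the following (a minimal linear-convolution-via-FFT sketch in Python; it handles a single block rather than the overlap bookkeeping of real fast convolution):

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via FFT: zero-pad both inputs to L + M - 1 so
    that the circular convolution performed by FFT multiplication
    equals the desired linear convolution."""
    L, M = len(x), len(h)
    n = L + M - 1
    X = np.fft.rfft(x, n)      # rfft(x, n) zero-pads x to length n
    H = np.fft.rfft(h, n)      # same padding for the filter
    return np.fft.irfft(X * H, n)
```

If n were chosen smaller than L + M - 1, the tail of the result would wrap around to the front, which is exactly the artifact the quoted paragraph warns about.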
<p>Hope that helps...</p>
|
https://dsp.stackexchange.com/questions/88074/what-is-the-necessity-of-shifting-the-impulse-response-of-a-filter-to-the-front
|
Question: <p>I am trying to implement a gaussian filter with matlab.Here is my implementation:
So far All I get as output is a black image.
Any hints?</p>
<pre><code>clear all;
close all;
I = double(imread('Put your path here'))/255;
I=rgb2gray(I);
sigma=1;
[M,N,s]=size(I);
f1=-fix(M/2):ceil(M/2)-1;
f2=-fix(N/2):ceil(N/2)-1;
[fx,fy]=meshgrid(f1,f2);
x=exp(-2*pi*pi*sigma*sigma*(fx/M.^2+fy/N.^2));
If=fft2(I);
If=fftshift(If);
If=If(:,:,1)*x;
If=ifftshift(If)
I=ifft2(If);
imshow(I)
</code></pre>
Answer: <p>Here is the working code,</p>
<pre><code>clear all; close all; clc;
I = double(imread('Cameraman.tif'))/255; % Divide only if image value in [0-255] range
figure,imshow(I);title('Cameraman original')
%I=rgb2gray(I); % NOTE: This is only for RGB images, commented otherwise
sigma=3;
[M,N,s]=size(I);
f1=-fix(M/2):ceil(M/2)-1;
f2=-fix(N/2):ceil(N/2)-1;
[fx,fy]=meshgrid(f1,f2);
X=exp(-2*pi*pi*sigma*sigma*((fx/M).^2+(fy/N).^2));
If=fft2(I);
If=fftshift(If);
If=If.*X;
If=ifftshift(If);
I=real(ifft2(If));
figure,imshow(I);title(['filtered image with \sigma =', num2str(sigma)])
</code></pre>
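For readers working in Python, a rough NumPy translation of the corrected script might look as follows (grayscale input assumed; the function name and test values are mine):

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Blur a 2-D grayscale image by multiplying its centred spectrum
    with a Gaussian, mirroring the MATLAB code above."""
    M, N = img.shape
    fx = np.arange(-(M // 2), -(M // 2) + M)   # same grid as -fix(M/2):ceil(M/2)-1
    fy = np.arange(-(N // 2), -(N // 2) + N)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    X = np.exp(-2 * np.pi**2 * sigma**2 * ((FX / M) ** 2 + (FY / N) ** 2))
    F = np.fft.fftshift(np.fft.fft2(img))      # centre the spectrum
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * X)))
```

Since the Gaussian equals 1 at DC, the image mean is preserved exactly.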
|
https://dsp.stackexchange.com/questions/44098/how-to-perform-a-gaussian-blur-using-fft
|
Question: <p>A low-passed signal, bandwidth limited to 4KHz is originally sampled 10 KHz. If I want to resample it at 20 KHz, I take these steps. Are these correct? Am I missing an step?</p>
<ol>
<li><p>First we need to filter the signal. Since the bandwidth is limited to 4KHz, we need a LPF with cutoff freq. of 4KHz.</p>
</li>
<li><p>Now we need to upsample it at 20KHz. We do this by simply inserting zeros between original samples.</p>
</li>
<li><p>I am skeptical here whether we need another LPF here or not. If so, should the cutoff freq. be 10KHz? Or 4 KHz suffices?</p>
</li>
</ol>
Answer: <blockquote>
<p>First we need to filter the signal. Since the bandwidth is limited to 4KHz, we need a LPF with cutoff freq. of 4KHz.</p>
</blockquote>
<p>If the signal is already bandlimited, there is no need for this filter.</p>
<blockquote>
<p>Now we need to upsample it at 20KHz. We do this by simply inserting zeros between original samples.</p>
</blockquote>
<p>Correct. That's the first step of upsampling.</p>
<blockquote>
<p>I am skeptical here whether we need another LPF here or not. If so, should the cutoff freq. be 10KHz? Or 4 KHz suffices?</p>
</blockquote>
<p>You do need a lowpass filter here. Inserting zeros results in a periodic repetition of the original spectrum. You need a filter to remove the mirror spectrum between 5 kHz and 10 kHz. The cutoff frequency needs to be at or above 4 kHz but below 5 kHz.</p>
<p>In practice, the choice of this lowpass filter is the most tricky problem. It requires a complicated trade off between passband ripple, stopband steepness and attenuation which in turn determines residual aliasing. There are also phase distortion, time domain ringing, causality and implementation considerations like memory, CPU, latency, real-time etc. All of these depend a lot on your data and the requirements of your specific application.</p>
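As an illustration only (the 101-tap length and 4.5 kHz cutoff are arbitrary choices within the constraints above), factor-2 upsampling with zero insertion followed by a windowed-sinc lowpass could be sketched like this:

```python
import numpy as np

def upsample2(x, num_taps=101):
    """Upsample by 2: insert zeros, then lowpass with a Hamming-windowed
    sinc whose cutoff is 4.5 kHz at the new 20 kHz rate (0.225 cycles
    per sample), i.e. above 4 kHz but below the 5 kHz mirror edge."""
    up = np.zeros(2 * len(x))
    up[::2] = x                               # zero insertion
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * 0.225 * n) * np.hamming(num_taps)
    h /= h.sum()                              # unit DC gain
    h *= 2.0                                  # restore amplitude lost to zero-stuffing
    return np.convolve(up, h, mode="same")
```

With a symmetric odd-length filter and mode="same", the group delay cancels and the output lines up in time with a directly sampled 20 kHz signal.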
|
https://dsp.stackexchange.com/questions/89492/upsampling-of-a-signal
|
Question: <p>I am pretty familiar with IIR and FIR filtering and I have implemented them in my several projects. However, recently i found something called zero phase filtering. I have tried to understand it but not going well. All i know is that zero phase filtering is filtering that do not produce any phase delay and distortion </p>
<ol>
<li>Is it right?</li>
</ol>
<p>I have understood that FIR have a constant delay for a group signal while IIR have a non linear delay based on the frequency, so it will distort the signal phase.so there is a zero phase IIR to overcome the cons</p>
<ol start="2">
<li>How does the zero phase IIR work? (Needs an overview and maybe some examples)</li>
</ol>
<p>And i have read about forward backward filtering that also remove the delay and distortion of filtering</p>
<ol start="3">
<li>How does it work?</li>
<li>Is it one of the zero phase filtering method?</li>
<li>It said that it can be done in offline filtering. What is offline filtering?</li>
</ol>
<p>Thank you</p>
Answer: <p>Zero-phase filtering is a non-causal procedure, so it cannot be done in real time, only offline for IIR filters or pseudo real-time, i.e., with a sufficient delay for FIR filters. A zero-phase filter needs to have a purely real-valued frequency response, and, consequently, it must have an impulse response that is even with respect to the time index <span class="math-container">$n=0$</span>, i.e., it is non-causal.</p>
<p>Zero phase filtering with IIR filters is achieved with forward-backward filtering, as implemented in Matlab's <a href="https://nl.mathworks.com/help/signal/ref/filtfilt.html" rel="nofollow noreferrer"><code>filtfilt</code></a> function. The resulting total frequency response is the squared magnitude of the original IIR filter's frequency response. Since the squared magnitude is real-valued, the resulting filter is a zero-phase filter. Of course there's a delay because you have to feed the time-reversed output of the first filter pass back to the input of the filter, after which you need to time-reverse the output. More details about forward-backward filtering can be found in <a href="https://dsp.stackexchange.com/a/9468/4298">this answer</a>.</p>
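A bare sketch of the forward-backward idea, using a one-pole lowpass and ignoring the careful edge handling that <code>filtfilt</code> performs:

```python
import numpy as np

def one_pole(x, a=0.9):
    """Causal one-pole lowpass: y[n] = (1 - a) * x[n] + a * y[n-1]."""
    y = np.empty(len(x))
    acc = 0.0
    for i, v in enumerate(x):
        acc = (1 - a) * v + a * acc
        y[i] = acc
    return y

def filtfilt_sketch(x, a=0.9):
    """Filter forward, reverse, filter again, reverse: the net response
    is |H|^2, which is real-valued, hence zero phase."""
    return one_pole(one_pole(x, a)[::-1], a)[::-1]
```

A symmetric pulse keeps its peak location under the forward-backward pass, while a single causal pass delays it.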
|
https://dsp.stackexchange.com/questions/54047/what-is-zero-phase-filtering-and-forward-backward-filtering
|
Question: <p>First of all thanks for your patience: it is the first time for me posting a question in this forum. I am not a DSP expert, but I should get by if you give me in depth explanation.</p>
<p>This is an example of my raw data </p>
<p><img src="https://i.sstatic.net/lqtUx.png" alt="enter image description here"></p>
<p>and <a href="http://www.dropbox.com/s/2uxv4h9bmm3nutm/acc.dat" rel="nofollow noreferrer">a link to the actual dataset (time=t0, acceleration=acc)</a>. </p>
<p>My problem is the following: I am measuring accelerometer data for an experiment, but I am having a hard time filtering out the raw acceleration data for double integration of velocity and position. The first thing that I do is to remove the dc component of the data via</p>
<pre><code>#python code
func = interp1d(t0,acc)
dt = 10./1000. # [secs]
ta = np.arange(t0[0],t0[-1],dt)
a = func(ta)
n = len(a)
freq = np.fft.fftfreq(n, d=dt)
freq[0]=1e-16
F = np.fft.fft(a)
F[0]=0
new_acc = np.real(np.fft.ifft(F))
</code></pre>
<p>In particular, I have the physical constraint that the integrated velocity should always be positive, or equal to zero where the acceleration is equal to zero.
However, when I integrate the <code>new_acc</code>, </p>
<pre><code>v = cumtrapz(new_acc)*dt
</code></pre>
<p>this is what I get:</p>
<p><img src="https://i.sstatic.net/OBEol.png" alt="enter image description here"></p>
<p>Leaving alone for a minute the position (which is ultimately really what I am interested in)<br>
<strong>my question is:</strong> </p>
<p><em>is there any way to design a filter such that the integral of the filtered signal is always >=0 ?</em> </p>
<p>Alternatively, if you believe that the one above is the wrong question to ask: </p>
<p><em>what is the best way to integrate (twice) this dataset?</em></p>
Answer: <p>Beware. A Gaussian random walk has a non-zero distribution width that increases with time. Thus the double integral of that random walk will likely very quickly zoom off the edge of your solution space. A linear filtering of that random walk won't behave any better. Whether or not mixed with the "real" acceleration signal.</p>
|
https://dsp.stackexchange.com/questions/10513/constraint-on-a-filter
|
Question: <p>Investigation of the method of suppression of random noise by coherent signal accumulation</p>
<p>Purpose: to identify what coherent accumulation can achieve for stationary and quasi-stationary signals.
Suppose that the input is a mixture of the observed signal and random white noise (i.e. noise with uniform spectral density). The signal is stationary and is described from sample to sample by a constant function (for example, a sinusoidal signal with a constant frequency and constant initial phase). In this case, the amplitude of the input noise is several times greater than the amplitude of the signal. By coherently accumulating the input mix over a number of samples it is possible to increase the signal-to-noise ratio.</p>
<ol>
<li>From the modeling results, plot:
a) the output signal-to-noise ratio versus the duration of accumulation, i.e. the number of accumulated samples, at a constant input signal-to-noise ratio (the number of accumulated samples varies);
b) the output signal-to-noise ratio versus the input signal-to-noise ratio for a fixed number of samples (M = 10, 25, 50) (the input SNR varies).</li>
<li>Repeat item "1" for the case of quasi-stationary signals.
As the useful signal, set a rectangular pulse of constant duration whose offset from the origin varies from sample to sample by a linear law.</li>
<li>To develop a functional diagram of a device that filters the signal by accumulation.</li>
</ol>
<p>Type of signal: The sum of two harmonic signals;
S/N ratio: 0.2;
The number of cycles of accumulation: up to 500;
Limits of change in the signal / noise ratio: 0.1-2.</p>
<p>THEORY
The accumulation method is applicable if the useful signal is constant during the reception time or is a periodic function. The method consists of repeating the signal many times and summing its individual realizations in the receiver.
Let the transmission of the desired signal be implemented in two levels.</p>
<p><img src="https://i.sstatic.net/TEibF.jpg" alt="enter image description here"></p>
<p>In the interval Tx the signal is constant.
During the observation interval Tx the sample values of the received signal are accumulated.</p>
<p>y1=x+r1
y2=x+r2
...............
yn=x+rn</p>
<p>and these values are summed.</p>
<p><img src="https://i.sstatic.net/kO4gL.jpg" alt="enter image description here"></p>
<p>We introduce two assumptions:
1) the interference samples ri are independent of each other;
2) the interference is stationary (its characteristics do not depend on time).
Let us find (Px/Pr)out at the output of the accumulator, i.e.</p>
<p><img src="https://i.sstatic.net/AKcnt.jpg" alt="enter image description here"></p>
<p>*вых -output</p>
Answer:
|
https://dsp.stackexchange.com/questions/10995/can-somebody-help-me-solve-this-signal-accumulation-problem
|
Question: <p>Given the signal shown below, what is the best way to remove the steps and local maximas it contains. The signal contains some steps which can last up to 100 Samples before they return to about the same value as before the step (Marked with red circles). There are also some peaks which last only for one sample which should also be removed.</p>
<p>First Dataset: On this set, the median Filter works as expected (not shown here). <img src="https://i.sstatic.net/32H4e.png" alt="Sample Signal, all peaks should be removed, also the ones wich are marked red."></p>
<p>On the following Dataset, the Medianfilter with the same settings fails. To many samples are replaced by the median value.
<img src="https://i.sstatic.net/REhYy.png" alt="Sample Signal, green: Median filter from one of the Answers"></p>
<p>What I've tried already:</p>
<ul>
<li>Work with a moving average. Advantage: Small peaks have less influence and are nicely removed. Disadvantage: Long lasting steps remain in the signal.</li>
<li>Get the first difference of the signal (<code>diff(X)</code>) and correct the value in X if a big positive change is followed by a negative change immediately. Advantages and disadvatages are the same as above.</li>
</ul>
<p>What are some good methods to remove/correct these unwanted values?</p>
Answer: <p>The problem with the moving average is that the average is not robust to the presence of the outliers - so you would need a very large window size to "dilute" the outliers.</p>
<p>Try a non-linear filter instead, like a median filter: Apply a median filter on your signal - you would need a window size of at least 300 samples. Compute the difference between the original signal and the median-filtered version. If the difference is above a threshold, replace the signal by the median-filtered version.</p>
<hr>
<p>Here is some <code>scilab</code> code that attempts to implement this suggestion. The results are plotted here; it seems to work nicely on the fabricated data.</p>
<p><img src="https://i.sstatic.net/kQyMD.png" alt="enter image description here"></p>
<pre><code>function sm = smooth(x,len)
sm = filter(ones(1,len)/len,1,x);
endfunction
N = 6000;
x = 0.1*rand(1,N,'normal');
y = cumsum(x);
y = smooth(y,100);
clf
subplot(211);
plot(y)
Njumps = 20;
jump_indices = round(rand(1,Njumps)*N);
jump_length = 0;
y2 = y;
for idx = jump_indices,
y2(min(N,idx:(idx+jump_length))) = y(min(N,idx:(idx+jump_length))) + 1;
//plot(min(N,idx:(idx+jump_length)),y2(min(N,idx:(idx+jump_length))),'r')
jump_length = jump_length + 1;
end
plot(y2,'g');
filter_length = 10;
y3 = y;
for k = 1:N,
y3(k) = median(y3(max(1,min(N,(k-filter_length/2):(k+filter_length/2)))));
end
plot(y3,'k')
subplot(212);
plot(y-y2);
plot(y-y3,'r');
</code></pre>
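A rough Python equivalent of the same median-replacement idea (window length and threshold are placeholders that would need tuning to the real data):

```python
import numpy as np

def despike(x, win=21, thresh=0.5):
    """Replace samples that deviate from a running median by more than
    `thresh` with the median value itself."""
    n = len(x)
    med = np.empty(n)
    half = win // 2
    for k in range(n):                      # running median with edge clipping
        lo, hi = max(0, k - half), min(n, k + half + 1)
        med[k] = np.median(x[lo:hi])
    out = x.copy()
    bad = np.abs(x - med) > thresh
    out[bad] = med[bad]                     # only outliers are touched
    return out
```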
|
https://dsp.stackexchange.com/questions/12672/detecting-removing-steps-in-signal
|
Question: <p>I'm reading a material where it says that a filter mask or kernel can be separable if the matrix of the filter mask has a rank 1. The two slides which describes this are as below:<img src="https://i.sstatic.net/vAIiA.png" alt="enter image description here"></p>
<p><img src="https://i.sstatic.net/Lv2AJ.png" alt="enter image description here"></p>
<p>Reading these slides it seems to me that it's trying to mean that the averaging filter can be separable, while Laplacian of Gaussian(LoG) is not. But it doesn't make sense to me, because LoG is the combination of two filters, laplacian and gaussian while in the contrary the averaging filter is just one filter, how can an averaging filter be separable? </p>
<p>I'm really confused on this matter. It would be helpful if you can make any sense out of this and explain me. Thanks.</p>
Answer: <p>Separable just means you can do it in the x-direction and then in the y-direction and have it come out the same as if you did it in both dimensions simultaneously to begin with. It's not too hard to see that this will work for an average filter. If the filter is averaging over a 3x3 grid then in the 2-d case you take an average of nine values. In the separable filter case you first take three averages of three values. Then you average those three averages together. In both cases you get the same answer.</p>
<p>The separable case is much faster because you get to reuse some of the work you did in the x-dimension when you are doing the y-direction. In other words, each average of three values you computed in the x-direction will be used multiple times when filtering in the y-direction. The filters that can be made separable are precisely those whose matrix rank is one. </p>
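This equivalence is easy to verify numerically; here is a small sketch for a 3x3 box average (illustrative code, not from the answer):

```python
import numpy as np

def avg2d_direct(img, k=3):
    """Direct 2-D k-by-k box average (valid region only)."""
    M, N = img.shape
    out = np.empty((M - k + 1, N - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].mean()
    return out

def avg2d_separable(img, k=3):
    """Same filter as two 1-D passes: average rows first, then columns."""
    ker = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, ker, "valid"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, "valid"), 0, rows)
```

The direct version costs k*k multiplies per output pixel; the separable one costs about 2k.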
|
https://dsp.stackexchange.com/questions/15850/role-of-the-rank-of-the-filter-mask-matrix-in-image-processing
|
Question: <p>I've made a simple first order IIR highpass filter with a zero at z = 1 and a pole at z = 0.9. Its frequency response looks like this:</p>
<p><img src="https://i.sstatic.net/bDtTx.png" alt="enter image description here" /></p>
<p>Now, I filter a DC signal using this filter. Here's the MATLAB code I use to do it:</p>
<pre><code>b = [1 -1]; % Zero at z = 1
a = [1 -0.9]; %Pole at z = 0.9
figure(1)
freqz(b, a)
t = 1:100;
x(1:length(t)) = 1; % Constant function
y = filter(b, a, x);
figure(2)
plot(t, x)
xlabel('Time');
ylabel('Input Signal');
figure(3)
plot(t, y)
xlabel('Time');
ylabel('Output Signal');
</code></pre>
<p>As my filter is highpass, I expect the DC to become zero, or atleast become severely attenuated. However, the output signal I get looks like this:</p>
<p><img src="https://i.sstatic.net/rdh2d.png" alt="Exponentially decaying output signal" /></p>
<p>From my understanding, this exponential output is a transient produced because I haven't set the initial conditions correctly. Sure enough, setting x[-1] = 1 solves the problem. However, this works only for this particular input DC signal. For any general input signal, how do I set the initial conditions so that transients aren't produced?</p>
<p>Edit : I'm aware that the filtfilt() function does forward-backward filtering with transient minimization, but I really want to port the filter to an embedded platform, so I need to understand how transient removal works. Thanks in advance for the help!</p>
<p>Edit 2 : As suggested by Kuba Ober, I tried setting x[-1] as the value that it actually should have been. It works fine for a DC input, but here's what happened for a sinusoidal input:</p>
<pre><code>clc; clear all;
p = 0.9
a = [1 -p]
b = [1 -1]
n = 1:100; % Samples
f = 0.2; % Frequency in Hz
Fs = 10; % Sampling rate in samples per second
t = n/Fs; % Time axis
x = sin(2*pi*f*t);
% Filter with the appropriate initial conditions
y = filter(b, a, x, filtic(b, a, [], [sin(2*pi*f*0)]));
figure(1)
plot(t, x)
xlabel('Time');
ylabel('Input Signal');
figure(2)
plot(t, y)
xlabel('Time');
ylabel('Output Signal');
</code></pre>
<p>Here's the input signal :</p>
<p><img src="https://i.sstatic.net/I3lfv.png" alt="enter image description here" /></p>
<p>And here's the output :</p>
<p><img src="https://i.sstatic.net/rxaFC.png" alt="enter image description here" /></p>
<p>The first peak is visibly smaller than the second, which indicates some transients being present. I'm not entirely sure about this, but I think the reason it doesn't work is because just setting x[-1] is not enough, I also need to set y[-1]. The problem here is that there's no way to find out what y[-1] actually should be.</p>
<p>Edit 3 : Let me provide a little more info on the problem I'm working on. I'm trying to use filters to remove noise from ECG (Electrocardiogram) signals in an embedded platform. Here's a typical ECG signal, after filtering:</p>
<p><img src="https://i.sstatic.net/poWOy.jpg" alt="enter image description here" /></p>
<p>Here's what an ECG signal looks before filtering:</p>
<p><img src="https://i.sstatic.net/zbfFC.jpg" alt="enter image description here" /></p>
<p>Note the DC offset in the signal before filtering. For filtering, I need a notch filter to remove high frequency power line noise and a highpass filter to remove the DC and the low frequency "drifting" of the signal.</p>
<p>The filters I use need to be linear phase, since the time domain morphology of an ECG signal is very important for diagnosis. However, my filter doesn't need to be real-time, as I'm doing the processing offline after acquiring the ECG signal from the patient. So, for implementing nonlinear phase IIR filters, I'm currently using forward-backward zero phase filtering.</p>
<p>One opinion that's shared by @Matt L. and @Royi is that transients are unavoidable in real-time filtering and that I should use a longer input signal and crop off the first few seconds of the filtered output instead. This is something I'd like to avoid, as acquiring long ECGs from a living patient is somewhat difficult. Also, I do not have to filter in real-time, so any technique of transient removal that hinges on knowing the entire signal in advance is perfectly all right. Any help is appreciated!</p>
Answer: <p>Your first order filter recursion for some real constants $a,b,c$ is </p>
<p>$$ y[n] = a x[n] + b x[n-1] - c y[n-1] $$</p>
<p>with the two initial memory states $x[-1]$ and $y[-1]$ at $n=0$. </p>
<p>Your "no transient" condition can be translated to $y[0]=0$ and a necessary second condition so that you can solve for both of your memory states. The second condition could be, that the discrete derivative of $y$ also vanishes at $n=0$, so $y[0]-y[-1]=0$. You can also take any other condition that seems sensible to you.</p>
<p>The two equations give you a unique solution for the two unknown memories, namely:
$$y[-1] = 0$$
and
$$x[-1] = -\frac{a}{b}x[0] $$</p>
<p>Alternatively, your conditions may better be chosen as $y[0]=0$ and $y[0]-y[-1]=x[0]-x[-1]$ in order to capture the initial slope of the input. The resulting recursion equation at $n=0$ is then
$$0=a x[0]+b x[-1]+c(x[0]-x[-1])$$
giving you the solution
$$x[-1]=-\frac{a+c}{b-c}x[0]$$
and
$$ y[-1]= -\left(1+\frac{a+c}{b-c}\right)x[0]$$</p>
<p>(Please check my calculations!)</p>
<p>But in general you cannot expect a simple initial condition to give you the same result as knowing the signal history. So you can only take this to a certain point and in general it would probably be better if you just discarded the transient response of your output.</p>
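As a quick sanity check of the first pair of conditions, here is a sketch for the question's highpass, which in the notation above has $a=1$, $b=-1$, $c=-0.9$ (the code and test values are mine):

```python
import numpy as np

def filter_with_ic(x, a=1.0, b=-1.0, c=-0.9):
    """Run y[n] = a*x[n] + b*x[n-1] - c*y[n-1] with the "no transient"
    initial conditions derived above: y[-1] = 0, x[-1] = -(a/b)*x[0]."""
    x_prev = -(a / b) * x[0]
    y_prev = 0.0
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        y[n] = a * xn + b * x_prev - c * y_prev
        x_prev, y_prev = xn, y[n]
    return y
```

For a DC input the output is identically zero, so the exponential transient from the question disappears.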
|
https://dsp.stackexchange.com/questions/16869/removing-transients-in-highpass-filtering-with-matlab
|
Question: <p>Consider the following black-and-white image. It depicts a freehand sketch.</p>
<p><img src="https://i.sstatic.net/TMV3q.png" alt="enter image description here"></p>
<p>I wish to characterize the "density" of sketch strokes. For e.g. the hair strokes are densely grouped together. So are strokes near the wrists and the necklace stone. Other strokes are somewhat scattered and "far", e.g. the nearly vertical strokes depicting the dress.</p>
<p>Is there a good measure (texture-like ?) which can be used to quantify the above notion ?</p>
Answer: <p>I ended up using the <a href="http://in.mathworks.com/help/images/ref/radon.html" rel="nofollow">radon transform</a> in 8 canonical directions. I then normalized the resulting distribution and computed its entropy. The numbers seem to correlate with level of texture in the image.</p>
|
https://dsp.stackexchange.com/questions/24295/texture-like-measures-for-quantifying-density-of-data-in-binary-images
|
Question: <p>I generate random spike data that represent the output from a rotary encoder. Here is the output of my algorithm:
<a href="https://i.sstatic.net/lJIYf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lJIYf.jpg" alt="enter image description here"></a>
The first plot is position, then velocity, then acceleration. Even though I downsample my position vector and pass it through a 20-point averaging filter, the derivatives are very noisy, as you can see. I checked the following question:
<a href="https://dsp.stackexchange.com/questions/9498/have-position-want-to-calculate-velocity-and-acceleration">have position, want to calculate velocity and acceleration</a>. A few options seem to be available, but I don't understand the answer: the Savitzky-Golay filter is just a smoothing function, and I don't see how it yields the velocity. Also, what are the other alternatives?</p>
Answer: <p>There are several ways to calculate the derivative of a stochastic signal. I will show you two methods.</p>
<p>First of all, adjust the output of a second order transfer function to your data: $${\omega_n^2}/({s^2+2\zeta\omega_n s+\omega_n^2})$$ Once you are satisfied with the fit, put an additional $s$ in the numerator and you will get the derivative of your signal: $${\omega_n^2\times s}/({s^2+2\zeta\omega_n s+\omega_n^2})$$ You can use a <em>bilinear transformation</em> and convert this solution into a discrete one, to implement in a software easily, if you like.</p>
<p>Another solution is to use the spectrum and calculate the <em>fft</em> of your signal. Then filter it vanishing the small values in the higher frequencies. Now, take the result and multiply it by $j\omega$. This is taken from the Fourier Transform Properties Table to calculate the derivative of a signal. Finally, redo the signal with <em>ifft</em>.</p>
<p>I added here a code to exemplify these aforementioned algorithms. The output picture is added as well at the end of the post. I hope it helps.</p>
<pre><code>% \brief: this code exemplifies two techniques suitable for
% calculating the first-order derivative of an
% stochastic signal.
%
% \algorithm 1: LTI transfer function integrated with Tustin.
% \algorithm 2: FFT derivative property.
%
% \author: Luciano Augusto Kruk
% \web: www.kruk.eng.br
% some constants:
fs = 1000; % [Hz] sample rate
T = 1/fs;
t = 0:T:0.5;
nT = length(t);
% signal + colored noise:
[B,A] = butter(10,250*T*2);
s = sin(2*pi*20*t)+filter(B,A,0.1*randn(1,nT));
figure(1); clf;
subplot(3,1,1);
plot(t,s);
title('signal s(t)');
grid on;
% first technique: second order transfer function
qsi = 0.7;
wn = 2*pi*60;
wnT2 = (wn*T)^2;
b = 2*T*wn*wn;
a1 = 4 + (4*qsi*wn*T) + wnT2;
a2 = (2*wnT2)-8;
a3 = wnT2 - (4*qsi*wn*T) + 4;
dsdt = zeros(size(s));
for i = 3:nT
dsdt(i) = (1/a1) * ...
((-a2*dsdt(i-1)) - (a3*dsdt(i-2)) + (b * (s(i) - s(i-2))));
end
subplot(3,1,2);
plot(t,dsdt, t,2*pi*20*cos(2*pi*20*t))
title('algorithm 1');
ylabel('ds/dt');
grid on;
set(gca, 'ylim', 150*[-1 1])
legend('algorithm', 'real')
% second technique: fft/ifft properties
s_fft = fft(s);
idx = 1:(nT/2);
f_fft = (idx-1) * (fs/2) * (1/length(idx));
idx = round((nT/2) + ((-nT/2.5):(nT/2.5)));
s_fft(idx) = 0;
w = 2*pi*(((-nT/2):((nT/2)-1))/nT)*fs;
dsdt = ifft(s_fft .* sqrt(-1) .* ifftshift(w));
subplot(3,1,3);
plot(t,dsdt, t,2*pi*20*cos(2*pi*20*t))
title('algorithm 2');
ylabel('ds/dt');
grid on;
set(gca, 'ylim', 150*[-1 1])
legend('algorithm', 'real')
</code></pre>
<p><a href="https://i.sstatic.net/zqTOK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zqTOK.jpg" alt="derivative estimate of stochastic signal"></a></p>
|
https://dsp.stackexchange.com/questions/26248/derive-velocity-and-acceleration-from-position
|
Question: <p>So I have generated a time-series with length on 2N samples, where the first N are generated by one auto-regressive system and the N+1 to 2N are generated by another, similar, auto-regressive system. </p>
<p>Here's a picture of the transition itself:<br>
<a href="https://i.sstatic.net/KrqYc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KrqYc.png" alt="Picture of transition between two systems with what I want marked in red"></a>
So, around time-point 250, I have a few of those small blue peaks, but what I want is a more-or-less smooth transition, as marked in red. </p>
<p>Any ideas on how I could achieve that, without damaging the rest of the signal?</p>
Answer:
|
https://dsp.stackexchange.com/questions/33805/how-do-i-smooth-over-transition-from-a-time-series-generated-by-one-system-to-a
|
Question: <p>I have some data from a position encoder, so naturally i want to estimate its speed. However, the data is very quantized, so it's difficult to smooth enough to differentiate easily: </p>
<p><img src="https://i.sstatic.net/eMRcm.png" alt="data"></p>
<p>Each step level is about 70-140 data points long on average, so my usual tricks of savitzky-golay filtering aren't up to it. I've played with piecewise splines as well, but not had much luck; there's a lot of ringing. </p>
<p>Is there a straightforward way to draw the obvious line through the trace? </p>
Answer: <p>Looks like your data is virtually free of noise. That, combined with a very high sampling frequency would mean that at the jumps the data is exactly at the threshold between two quantized values. Set up nodes at the middle points of the vertical jumps and construct splines that connect the nodes. The easiest is to just draw straight lines between successive nodes, which gives a piece-wise constant differential. I wonder if that is good enough, or if you already tried that. If you need the velocity real-time, this approach is problematic because occasionally you might have to wait for a new node for some time.</p>
<p>You can further low-pass filter the interpolated data. If you use a filter with an impulse response that is nowhere negative, such as a Gaussian function, then there will be no overshoot.</p>
<p>With linear interpolation, everywhere between successive nodes, the speed will be simply the position difference between the nodes divided by the time difference between the nodes. You can run the smoothing filter on that piece-wise constant speed data and the result will be the same as if you'd run it on the linear position ramps and then differentiated (associative property of convolution, as both differentiation and filtering are convolution).</p>
|
https://dsp.stackexchange.com/questions/33821/smoothing-a-staircase
|
Question: <p>I want to compare the performance of a Wiener Filter and the Kalman filer to estimate the value of a constant $d$ using mesurements corrupted by a white noise. That is, my measurements are of the form
$$x(n) = d + v(n)$$
where $v(n)$ have a normal distribution with mean $0$ and known variance $\sigma^2$. </p>
<p>Using the Kalman filter, I could put it on State Space form
$$d(n+1) = d(n)$$
$$x(n) = d(n) + v(n)$$
and solve the problem. But I am having difficulties setting up the problem so I can solve it with the Wiener filter. My desired signal is the constant signal $d$. My filter input is the measurements $x$. But I suspect I am doing something wrong, because my estimated signal is not a really good estimate of the desired response. I used the following MATLAB code:</p>
<pre><code>function test()
n = 0:511;
d = 10 * ones(1,512);
v = 0.5*randn(1,512);
x = d + v;
w = WienerFIRFilter(x, d, 12);
y = filter(w', 1, x);
plot(x)
hold on
plot(y, 'r')
end
</code></pre>
<p>The function WienerFIRFilter is defined as following</p>
<pre><code>function w=WienerFIRFilter(u,d,M)
aux = xcorr(d,u,'biased');
p = aux(1,(length(aux)+1)/2:((length(aux)+1)/2)+M-1);
[U, R] = corrmtx(u,M-1);
w=inv(R)*p';
end
</code></pre>
<p><a href="https://i.sstatic.net/j4AP0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j4AP0.png" alt="enter image description here"></a></p>
<p>Am I doing something wrong?</p>
Answer: <p>You can use this code</p>
<pre><code>function w = WienerFIRFilter(input, desired, M)
    % Unbiased autocorrelation of the input at lags 0..M
    auto_corr = xcorr(input, M, 'unbiased');
    r = auto_corr(M+1:end);          % non-negative lags only
    R = toeplitz(r);                 % autocorrelation matrix
    % Unbiased cross-correlation between input and desired signal
    cross_corr = xcorr(input, desired, M, 'unbiased');
    p = cross_corr(M+1:end);
    w = R \ p';                      % solve the Wiener-Hopf equations
end
</code></pre>
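<p>For anyone working in Python rather than MATLAB, roughly the same Wiener-Hopf solution can be sketched with NumPy/SciPy (my translation of the idea above, with unbiased correlation estimates at lags <code>0..M-1</code>; not a drop-in replacement for the MATLAB routine):</p>

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def wiener_fir(u, d, M):
    """M-tap FIR Wiener filter: solve the Wiener-Hopf equations R w = p,
    using unbiased auto-/cross-correlation estimates at lags 0..M-1."""
    N = len(u)
    # Unbiased autocorrelation of the input at non-negative lags
    r = np.array([np.dot(u[:N - k], u[k:]) / (N - k) for k in range(M)])
    # Unbiased cross-correlation between desired signal and input
    p = np.array([np.dot(d[:N - k], u[k:]) / (N - k) for k in range(M)])
    return solve(toeplitz(r), p)

# Constant d = 10 observed in white noise, as in the question
rng = np.random.default_rng(0)
d = np.full(512, 10.0)
x = d + 0.5 * rng.standard_normal(512)
w = wiener_fir(x, d, 12)
y = np.convolve(x, w)[:512]   # filtered estimate of the constant
```

The resulting taps are nearly uniform, so the filter behaves like a short averaging window, which is what you'd expect for estimating a constant in white noise.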
|
https://dsp.stackexchange.com/questions/34707/wiener-filter-to-estimate-constant-signal
|
Question: <p>I am planning the following.</p>
<p>First sample the 20 MHz WiFi channel (WiFi channel-1 in figure).</p>
<p>Put band pass filters (5 MHz wide) around each of the ZigBee center frequencies (11, 12, 13, 14).</p>
<p>Re-sample the chunks to 4 MHz.</p>
<p>Is there anything wrong with this approach ?</p>
<p><a href="https://i.sstatic.net/g1UU2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g1UU2.jpg" alt="enter image description here"></a></p>
Answer: <blockquote>
<p>Put band pass filters (5 MHz wide) … Re-sample the chunks to 4 MHz.</p>
</blockquote>
<p>Don't do that! If you reduce the sampling rate to 4 MS/s, you need to filter to 4 MHz bandwidth anyway. So you could instead just use 4 MHz wide filters and get rid of the resampling filter.</p>
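<p>An illustrative sketch of the simplified approach (my addition; the channel offsets assume WiFi channel 1 centred at 2412 MHz and ZigBee channels 11-14 at 2405/2410/2415/2420 MHz — adjust to your capture): mix each ZigBee centre down to 0 Hz, then low-pass to 4 MHz and decimate by 5 in a single step.</p>

```python
import numpy as np
from scipy import signal

fs = 20e6  # complex baseband capture rate of the 20 MHz WiFi channel

# Offsets of ZigBee channels 11-14 relative to WiFi channel 1 (assumption)
chan_offsets = [-7e6, -2e6, 3e6, 8e6]

def extract_channel(x, f_off, fs=fs, out_fs=4e6):
    n = np.arange(len(x))
    # Mix the ZigBee channel of interest down to 0 Hz
    baseband = x * np.exp(-2j * np.pi * f_off / fs * n)
    # Low-pass to 4 MHz bandwidth and decimate by 5 in one operation
    return signal.decimate(baseband, int(fs // out_fs), ftype="fir")
```

Each call gives one 4 MS/s stream; looping over <code>chan_offsets</code> yields all four channels.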
|
https://dsp.stackexchange.com/questions/37016/extracting-narrow-band-zigbee-signals4-mhz-from-a-wide-band-wifi-signal20-mhz
|
Question: <p>If I apply a digital Bessel filter to a perfect step function, I get something that looks like the following:</p>
<p><a href="https://i.sstatic.net/iMDYj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iMDYj.png" alt="enter image description here"></a></p>
<p>The green line is the input step response data sampled at 500kHz, the red line is obtained using the <code>scipy.signal.lfilter</code> routine with an 8-pole Bessel low-pass filter at 10kHz, and the blue line is the result of the same filter but using <code>scipy.signal.filtfilt</code>.</p>
<p>Given the signal (either red or blue) is there an analytical form for the response that would allow me to extract the cutoff frequency using a nonlinear fit? Both functions look like a sigmoidal of some kind - is there an analytical formula for it?</p>
<p>The form</p>
<p><a href="https://i.sstatic.net/CAA6U.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CAA6U.gif" alt="enter image description here"></a></p>
<p>seems to fit both cases quite closely, but not perfectly, and it's not clear what relation the time constant bears to the cutoff frequency.</p>
<p>If I fit that form to the <code>filtfilt</code> data, I get a linear relationship between the time constant for the fit, and the time constant for the filter (1/fc), with a slope of about 1/35. I imagine that slope will be a function of the filter order. Can anyone suggest a better analytical form for the fit function?</p>
<p><a href="https://i.sstatic.net/t0qaU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t0qaU.png" alt="enter image description here"></a></p>
Answer: <p>Yes, there is an analytical response, though it might not look like you would like to have it:</p>
<p>Your Bessel Filter can be written in Z-Domain as </p>
<p>$$H(z)=K\frac{\prod_{i=0}^7(z-z_i)}{\prod_{i=0}^7(z-p_i)}$$</p>
<p>where $z_i$ are the zeros and $p_i$ are the poles of the z-Transform. Now, to get the step response, you "simply" need to calculate </p>
<p>$$h[n]=\mathcal{Z}^{-1}\left(\frac{z}{z-1}H(z)\right).$$</p>
<p>i.e. you multiply the Z-Transform of the filter with the Z-Transform of a unit-step (which is $z/(z-1)$) and calculate the inverse Z-Transform. If you have access to Mathematica etc. it can do it for you. If not, you would need to perform partial fraction decomposition of the expression and then perform inverse Z-Transform of the several partial fractions.</p>
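<p>A numerical sketch of this recipe with SciPy (my illustration): expand z/(z-1)·H(z) into partial fractions with <code>residuez</code> and sum the resulting geometric sequences. Note that the OP's narrowband 8-pole design makes the polynomial root finding ill-conditioned, so this sketch uses a 4-pole filter at a higher relative cutoff; the recipe itself is unchanged.</p>

```python
import numpy as np
from scipy import signal

# Digital Bessel low-pass (4-pole, cutoff 0.2 x Nyquist for numerical comfort;
# the OP's 8-pole 10 kHz / 500 kHz case follows the same steps)
b, a = signal.bessel(4, 0.2)

n = 512
direct = signal.lfilter(b, a, np.ones(n))     # filter a unit step directly

# Multiply H(z) by the step's transform z/(z-1): in z^-1 notation this is a
# convolution of the denominator with (1 - z^-1). Partial fractions then give
# the step response as a sum of geometric sequences r_i * p_i^n.
r, p, _ = signal.residuez(b, np.convolve(a, [1.0, -1.0]))
k = np.arange(n)
analytic = np.real(sum(ri * pi**k for ri, pi in zip(r, p)))
```

The two curves agree to floating-point accuracy, and the closed-form sum of geometric terms is exactly the "analytical form" one could fit against.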
|
https://dsp.stackexchange.com/questions/37301/analytical-expression-for-step-response-of-digital-bessel-filter
|
Question: <p>I am rather new to the world of signal processing, and am struggling to understand a fundamental concept: How are filters actually implemented?</p>
<p>I have read a significant portion of <a href="http://www.dspguide.com/" rel="nofollow noreferrer" title="DSP for Scientists and Engineers">this online book</a>, and scrounged the internet, finding snippets of useful information here and there, but I cannot quite tie it all together. </p>
<p>In brief, I have a vibration profile (~200,000 samples) in the time domain, which I would like to analyze using a particular frequency weighting. This involves a four-step filter, using a (Butterworth) high-pass and low-pass, followed by an a-v transition and an upward step. The analog equations are shown below.</p>
<p><a href="https://i.sstatic.net/Vkgav.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vkgav.png" alt="Filter Req's"></a></p>
<p>Yet I am still struggling on how to implement them. I have seen mentions of the bilinear transform, used in programs in MATLAB/Python, but such implementations seem to omit the 's' or 'p' variable from the filters shown, typically creating the numerator 'B' and the denominator 'A' from the coefficients of the analog filters. </p>
<p>Other sources suggest taking the Fourier of the filter equations and the signal, multiplying, then applying the inverse Fourier to bring it back into the time domain. Yet I cannot discern how to take the Fourier of an equation.</p>
<p>My same problem occurs with using convolution in the time domain, as I have not understood how to convolve an equation yet. </p>
<p>I am sure I have come near the answer, and I know this is a basic principle, but I cannot seem to comprehend it, and my Googling is off the mark. Any help on this would be appreciated. General advice is also welcome, as I have a lot to learn.</p>
<p>** If important, I am currently working in Python, using the scipy.signal package, with access to MATLAB for testing and evaluation purposes.</p>
<hr>
<p>Currently I am looking at substituting j2πf for p, then simply running the data through the equations and taking the inverse Fourier to bring it back into the time domain. </p>
<p>Alternatively, my eyes have just noticed what was staring me in the face: the second equation given for each filter is in terms of 'f', which would suggest I am able to simply run the frequency-domain data through this equation before taking the inverse Fourier to recover the time-domain. If this is a correct realization, this question can be closed (or I can 'answer' my own, now self-evident, question).</p>
Answer: <p>There are a few different ways to proceed. They involve doing some algebra but it is more an issue of tedium. The notation of your analog filters is a little non standard, essentially your $p$ is $s$ in most books. </p>
<p>Traditionally, phase wasn't considered important in hearing so standards tended to be specified by magnitude, so there is some ambiguity with respect to the digital implementation's phase response. You have 2 basic simple choices, a digital IIR filter or FIR filter. FIR is likely to be easier but easy and better are typically not the same. FIR can be direct or FFT based. IIR has a number of choices such as biquads or direct. Floating point math has its own easy/better issues. </p>
<p>One path way is to focus on the terms inside the magnitude brackets, i.e. the complex analog transfer functions. The other path is to focus on the analog magnitude. </p>
<p>So, given the analog transfer functions, you can either use a table of Laplace transforms and deduce the impulse response of each component you listed, or use the bilinear transform to express each component that is in $s$ in terms of $z$. </p>
<p>If you derive the impulse responses, you can sample points of the impulse response and, provided a generally exponential decay, use those samples as coefficients of an FIR filter of sufficient length. The FIR filters are then applied sequentially. One nice property of ideal LTI filters is that the order in which you apply them is, ideally, interchangeable. </p>
<p>If you choose the bilinear transform and do the algebra to derive the discrete-time numerators and denominators, you need to be aware that the bilinear transform squeezes the response as it approaches the Nyquist frequency. One needs to plot the discrete-time response to assess how the corresponding analog response is distorted. If some modest tweaking results in an acceptable response, then the numerators and denominators need to be factored so that they are compatible with the IIR implementation, such as a cascade of biquads.</p>
<p>If you go with the magnitudes, the task becomes a direct sampling from the analog frequency domain to the discrete time periodic frequency domain. One can choose a linear phase response, and appropriately impose symmetry and define the frequency domain FIR filter. This might not sound like an analog implementation.</p>
<p>The essential challenge is mapping a frequency response in $-\infty < \Omega < \infty$ to $-\pi \le \omega \le \pi$ </p>
<p>There are other approaches like state variables, so this is not the entire story. </p>
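<p>To make the bilinear-transform path concrete, here is a hedged sketch with SciPy for a single first-order analog section (the coefficients are illustrative placeholders, not the actual weighting-standard values; the real filters would supply their own numerator and denominator in <code>s</code>):</p>

```python
import numpy as np
from scipy import signal

fs = 1000.0  # sampling rate of the vibration record (an assumption)

# Illustrative first-order analog section H(s) = s / (s + w1), a high-pass.
w1 = 2 * np.pi * 0.4          # 0.4 Hz corner frequency, placeholder value
b_s, a_s = [1.0, 0.0], [1.0, w1]

# Bilinear transform: substitute s = 2*fs*(z-1)/(z+1)
b_z, a_z = signal.bilinear(b_s, a_s, fs=fs)

# Apply to the time-domain record; further sections cascade the same way
x = np.random.default_rng(1).standard_normal(200_000)
y = signal.lfilter(b_z, a_z, x)
```

The four weighting stages would each be transformed this way and applied in sequence (order doesn't matter for ideal LTI filters, as noted above).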
|
https://dsp.stackexchange.com/questions/43143/filter-implementation
|
Question: <p>I am processing the signal from an MPU6050, applying an FIR filter in the frequency domain and then taking the IFFT to get the filtered signal, but I am getting some spikes in the output signal. I searched a lot about it and found that spectral leakage can cause this. I found some solutions like zero padding and windowing, but nothing is working.
Can someone take a look and let me know what I am doing wrong here?</p>
<p>Here is what I am getting
<a href="https://i.sstatic.net/SHiuQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SHiuQ.png" alt="Filtering result after IFFT"></a></p>
<p>Some zoom in
<a href="https://i.sstatic.net/c4b8D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c4b8D.png" alt="Zoom in graph"></a></p>
<p>More zoom in</p>
<p><a href="https://i.sstatic.net/oHBdq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oHBdq.png" alt="Zoom in graph"></a></p>
<p>Complete MATLAB code</p>
<pre><code>close all; clear all ;clc; delete(instrfindall);
arduino=serial('COM6','BAUD', 115200);
fopen(arduino);
java.lang.Thread.sleep(0.01); % in mysec!
samples = pow2(nextpow2(5000))
sig = []; out_sig =[0]; tic; G=0; trying=0; pingpong = 0;
while 1
idn = fscanf(arduino);
xx = str2double(idn);
if isnan(xx)
fclose(arduino);
java.lang.Thread.sleep(1); % in mysec!
fopen(arduino);
java.lang.Thread.sleep(1); % in mysec!
trying = trying + 1
continue
else
sig = [sig xx];
end
if length(sig)>samples - 1
break
end
end
fclose(arduino);
plot(sig,'g'); %---------------------------->Orginal signal <---------------------------
hold on
PG = plot(out_sig,'k'); %---------------------------->Output signal <---------------------------
PG.YDataSource = 'out_sig';
SegmentLength = pow2(nextpow2(1000)) % Transform length to next pow of 2
Fs = SegmentLength;
wiin = hann(64,'symmetric');
filtcoeff = fir1(63, 20/Fs, 'low', wiin, 'scale');
firfilterimpresp = impz(filtcoeff);
filterffte = fft(firfilterimpresp,SegmentLength); % FFT of impulse response of filter/system
ChunkStart = 1;
ChunkEnd = SegmentLength;
sigblock = [];
for t = 1:samples/SegmentLength
sigblock = sig(ChunkStart:ChunkEnd);
blockfft = fft(sigblock,SegmentLength);
out_sigg = real(ifft(blockfft.*filterffte'));
out_sig = [out_sig out_sigg];
refreshdata
drawnow
ChunkStart = ChunkEnd;
ChunkEnd = ChunkEnd + SegmentLength -1 ;
end
</code></pre>
Answer: <p>The sharp peaks are actually due to segmentation and concatenation; you need overlapping segments. The peaks occur mostly at the edges of segments. In the following figures I try to show what I mean; the blue curves are Hanning window coefficients. I am not sure why your application requires it, but if it is essential to process your signal in segmented mode, you should use overlapped segments. For your case each segment of the signal must have 25% overlap with the next one; in other words, assuming the segment length to be <span class="math-container">$N$</span>, if <span class="math-container">$x_{i-1}(t)$</span> is a segment of the signal you must have <span class="math-container">$x_i(t)$</span> where <span class="math-container">$x_{i-1}(0.75N : N) = x_i(0:0.25 N)$</span>. </p>
<p><a href="https://i.sstatic.net/yIBQS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yIBQS.png" alt="enter image description here"></a></p>
<p>The following shows a 50% overlapping mode,</p>
<p><a href="https://i.sstatic.net/lSyib.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lSyib.png" alt="enter image description here"></a></p>
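<p>As a side note (my addition, not part of the original answer): if true streaming isn't required and the goal is just block-wise FFT filtering, SciPy's <code>oaconvolve</code> performs FFT filtering with the overlap-add bookkeeping handled internally, and matches plain time-domain convolution:</p>

```python
import numpy as np
from scipy import signal

fs = 1024
# 64-tap Hann-windowed low-pass, in the same style as the question's fir1 call
h = signal.firwin(64, 20 / (fs / 2), window="hann")

x = np.random.default_rng(0).standard_normal(8192)

# FFT block filtering with overlap-add handled internally
y_oa = signal.oaconvolve(x, h, mode="full")

# Reference: plain time-domain convolution
y_ref = np.convolve(x, h)
```

No spikes appear at block boundaries because the partial convolution tails of adjacent blocks are added back in, which is exactly what the naive segment-and-concatenate loop omits.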
|
https://dsp.stackexchange.com/questions/59678/real-time-fft-ifft-with-low-pass-filter
|
Question: <p>Apologies in advance for asking something so basic, this is not my field at all so I'm a bit lost where else to find this info. I'm trying to understand this piece of code that detects sudden drops in a signal, it goes something like this:</p>
<pre><code>filter = [ones(1,10)*-0.05 ones(1,10)*0.05];
filterResponse = conv(exampleSignal, filter, "same");
</code></pre>
<p>Can anyone tell me what type of filter this is and why does it centers the signal at 0? Just anything to get me started onto more googling would be really helpful. Thanks in advance.</p>
<p><a href="https://i.sstatic.net/Kv9xA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kv9xA.jpg" alt="signal and filter response example" /></a></p>
Answer: <p>This is a moving difference filter which acts as a discrete time derivative and is less sensitive to high frequency noise. When all the samples in the waveform are the same (DC) the last ten samples will cancel out the first 10 samples for DC cancellation but when a change occurs within a 10 sample interval compared to 10 samples prior the difference will be maximized.</p>
<p>This filter is identical to a 10 sample moving average followed by a difference over 10 samples and provides a filtered estimate of the time derivative of the waveform as shown in the diagram below (<span class="math-container">$z^{-10}$</span> in the diagram refers to a 10 sample delay and MAF refers to a "Moving Average Filter" which is the sum of the prior 10 samples):</p>
<p><a href="https://i.sstatic.net/RIeo0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RIeo0.png" alt="equivalent filter" /></a></p>
<p>This equivalence form helps provide a more intuitive understanding of the filter with further details provided below:</p>
<p>The basic form of a differencing filter is <span class="math-container">$y[n] = x[n]-x[n-1]$</span>, with coefficients given as [1 -1] and is a discrete time approximation of a derivative (as the Forward Euler mapping of <span class="math-container">$s$</span> which is the Laplace Transform of a time derivative):</p>
<p><a href="https://i.sstatic.net/RnNLU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RnNLU.png" alt="differencing filter" /></a></p>
<p>Also consider how the time derivative is <span class="math-container">$\lim_{T \rightarrow 0}\frac{x(t+T)-x(T)}{T}$</span> in comparison to the form given above.</p>
<p>This filter has a simple high pass frequency response with the magnitude response shown below, given as the response from DC to the sampling rate where the frequency axis has been normalized by the sampling rate. Like the derivative which has a frequency response that increases as a function of f (given the Laplace transform of a time derivative is <span class="math-container">$s$</span>), this filter is most sensitive to the highest frequency components in the signal (at the Nyquist frequency at half the sampling rate given by the normalized frequency of <span class="math-container">$0.5$</span>):</p>
<p><a href="https://i.sstatic.net/LRnvs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRnvs.png" alt="freq response" /></a></p>
<p>If we insert 9 zeros to upsample the filter by 10, to get coefficients given as [1 0 0 0 0 0 0 0 0 0 -1], (which is the differencing stage in the first diagram provided above) such a zero insert creates periodicity in the frequency response, repeating the same frequency response ten times over the frequency range from DC to the sampling rate as shown in the magnitude response below (creating the classic "comb filter"):</p>
<p><a href="https://i.sstatic.net/98lyW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/98lyW.png" alt="interpolated response" /></a></p>
<p>The final response of the OP's filter is found from the convolution (or cascade) of the interpolated differencing filter with a 10 sample moving average filter (which removes the high frequency sensitivity as a low pass filter).<br />
A moving average filter over 10 samples with coefficients given as <code>[ones(1,10)]</code> would have a Dirichlet response which is an aliased Sinc function. This is clear when we consider the coefficients of the filter is the impulse response of the filter, and the Fourier Transform of the impulse response is the frequency response. The coefficients are a sampled pulse, and the Fourier Transform of a pulse is a Sinc function. The response of a filter with coefficients <code>[ones(1,10)]</code> is shown below:</p>
<p><a href="https://i.sstatic.net/XmDLv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XmDLv.png" alt="moving average" /></a></p>
<p>And the final form of the filter as the convolution of the coefficients of the interpolated differencing filter and the 10 sample moving average filter is shown with the response below (omitting the scaling by <span class="math-container">$0.05$</span>). Convolution in time is multiplication in frequency, and we see that the resulting response is the product of the responses given above:</p>
<p><a href="https://i.sstatic.net/PEhBS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PEhBS.png" alt="final frequency response" /></a></p>
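<p>The decomposition above is easy to verify numerically; this small check (mine) confirms that the 20-tap coefficients equal, up to a sign, the convolution of a 10-sample moving average with a 10-sample difference comb:</p>

```python
import numpy as np

# The filter from the question
filt = np.concatenate([-0.05 * np.ones(10), 0.05 * np.ones(10)])

# 10-sample moving average cascaded with a difference over 10 samples
maf = np.ones(10)
comb = np.zeros(11)
comb[0], comb[10] = 1.0, -1.0
# Sign differs because the question's kernel lists the -0.05 block first
# (and conv() time-reverses the kernel anyway)
cascade = 0.05 * np.convolve(maf, comb)
```

So the cascade and the original 20-tap kernel are the same filter apart from an overall sign, which doesn't change where drops are detected.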
|
https://dsp.stackexchange.com/questions/75095/help-identifying-filter
|
Question: <p>I'm filtering a raw PPG signal sampled at 100sps</p>
<p>After applying a bandpass filter my signal looks like this
<a href="https://i.sstatic.net/C25nN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C25nN.png" alt="PPG with trend" /></a></p>
<p>As you can see, it has a trend (a low-pass component which is not in the desired signal).
I tried applying a high-pass filter; it removes the trend but also changes the wave shape significantly. Is there a way to detrend this signal without losing the shape?</p>
<p>I'm using python to implement filters. This is only a part of the signal and, actual signal is a few mins long. I'm not worried about the speed as this doesn't have to be real-time.</p>
Answer: <p>I assume you want to remove the trend which behaves like that:</p>
<p><a href="https://i.sstatic.net/pnkx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pnkx7.png" alt="enter image description here" /></a></p>
<p>Probably some kind of a parametric model will do the work.<br />
Something as simple as a 2nd degree polynomial with regularization will estimate this pretty well.</p>
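<p>A minimal sketch of that idea in Python (plain least squares; regularization could be layered on top if the fit misbehaves at the edges, and the degree is an assumption to tune):</p>

```python
import numpy as np

def detrend_poly(x, degree=2):
    """Remove a slow polynomial trend without touching the pulse shape."""
    n = np.arange(len(x))
    coeffs = np.polyfit(n, x, degree)   # least-squares fit of the trend
    trend = np.polyval(coeffs, n)
    return x - trend, trend
```

Because the trend estimate is a smooth low-order polynomial, subtracting it leaves the PPG pulse morphology essentially untouched, unlike a high-pass filter's phase and shape distortion.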
|
https://dsp.stackexchange.com/questions/76067/detrending-ppg-signal
|
Question: <p>Is it possible, and how could it be done, to make an extension of filters that converts the signal in the filter's stop-band into noise with no information content or at least with zero correlation to the input signal in that frequency range, with only an arbitrarily small degradation of the signal in the pass-band?</p>
<p>To visualize this, here's a low-pass filter magnitude frequency response with equiripple stopband lobes peaking at -72.5 dB:</p>
<p><a href="https://i.sstatic.net/tCtXW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCtXW.png" alt="enter image description here" /></a></p>
<p>When a sum of two sinusoidally modulated frequency sweeps (a good test signal for trying to solve the problem) is filtered, a resulting spectrogram is:</p>
<p><a href="https://i.sstatic.net/IxRPM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IxRPM.png" alt="enter image description here" /></a></p>
<p>The best approach I can currently think of is to add to the input signal some noise. Naively adding triangular noise at 2.5 dB higher power than a sinusoid at the peak of a stop-band lobe still shows the sinusoids in the spectrogram:</p>
<p><a href="https://i.sstatic.net/3FBNW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3FBNW.png" alt="enter image description here" /></a></p>
<p>Cranking up the noise to peak sinusoid power at stopband + 18.5 dB almost fully hides the sinusoids over the stopband:</p>
<p><a href="https://i.sstatic.net/J3Pbz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J3Pbz.png" alt="enter image description here" /></a></p>
<p>But it's a lot of noise. I didn't filter the noise to only the stop-band, because it's extra computational effort, and because in some applications the signal would be decimated and the noise would end up in the pass band anyhow.</p>
<p>I have also been thinking about randomly switching the output between those of a filterbank. The switching would move the stop-band zeros around while causing minimal frequency response fluctuation in the pass-band. But this approach seems to be a dead end. If the input is a complex sinusoid within the stop band, then in order to kill the correlation between the input and the scrambler's output, the sum of the frequency responses of the filterbank filters would need to be zero at that frequency. If that was simultaneously possible, as we require, for all stop-band frequencies, it would also be possible to take a sum of the impulse responses of the filterbank filters to obtain a filter with a uniformly zero stop band frequency response. It's well known that that is not possible.</p>
<p>Maybe no ideal method exists.</p>
<p>Python code for the plots:</p>
<pre><code>import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
c0 = signal.remez(8, [0, 0.25], [1], maxiter=100)
c = np.zeros(c0.size*2-1)
c[0::2] = c0
c[c0.size-1] = 1
c = c*0.5
freq, response = signal.freqz(c)
plt.plot(range(-c0.size+1, c0.size, 1), c, 'x')
plt.grid(True)
plt.show()
plt.plot(freq/(2*np.pi), 20*np.log10(np.abs(response)))
plt.grid(True)
plt.show()
print("Stop band ripple (dB):")
print(20*np.log10(np.abs(response[-1])))
delay_c = np.concatenate([np.zeros((c.size // 2)), [1], np.zeros((c.size // 2))])
freq, response = signal.freqz(delay_c - c)
plt.plot(range(-c0.size+1, c0.size, 1), delay_c - c, 'x')
plt.grid(True)
plt.show()
plt.plot(freq/(2*np.pi), 20*np.log10(np.abs(response)))
plt.grid(True)
plt.show()
N = 65536
f = 2
x = np.sin((lambda x: 0.5*np.pi*x - 0.5*np.sin(np.pi*x/N*f)*N/f)(np.arange(N)))
f = 5
x += np.sin((lambda x: 0.5*np.pi*x - 0.5*np.sin(np.pi*x/N*f)*N/f)(np.arange(N)))
def get_spectrogram(x):
    return signal.spectrogram(x, window=('gaussian', 64), scaling='spectrum', nperseg=1024, noverlap=1024-64, nfft=1024, detrend=False)

def plot_spectrogram(x):
    f, t, Sxx = get_spectrogram(x)
    plt.figure(figsize=(10,5))
    plt.pcolormesh(t, f, 10*np.log10(Sxx/np.max(Sxx) + 10e-100), shading='gouraud', cmap='pink', vmin=-112, vmax=0)
    plt.show()
f, t, Sxx = get_spectrogram(x);
plot_spectrogram(x)
y = np.convolve(x, c)
plot_spectrogram(y);
epsilon = 10**(-70/20)*np.sqrt(3) #sqrt(3) = sqrt(1/2)/sqrt(1/6) = sine rms / triangular noise rms
r = (np.random.uniform(size=y.size)-np.random.uniform(size=y.size))*epsilon
z = y + r
plot_spectrogram(z);
epsilon = 10**(-54/20)*np.sqrt(3)
z = y + (np.random.uniform(size=y.size)-np.random.uniform(size=y.size))*epsilon
plot_spectrogram(z);
</code></pre>
Answer: <p>Well, the naive approach would of course be using the complementary of your band-pass filter (letting through the allowed information) as band-stop filter, and use that to shape uncorrelated noise to be where the stop-band of the original filter is. Then, add both.</p>
<p>If you don't want to do that, for example because you've read about dirty paper coding and realize the "attacker" can infer information about the stop-band noise from the transition width to recover part of the "whitened out" regions (this is a bit of speculation – I don't know your filter, your allowed signal / information etc), things get a bit more complicated.</p>
<p>One excellent information eraser is multiplication with a random variable; assuming your signal amplitude at every frequency was continuously distributed, the target distribution after multiplication would be normal, as that's the highest-entropy-per-variance-i.e.-power distribution.</p>
<p>So, problem: how do we multiply all frequencies outside passband with a (individually) white, uncorrelated noise, but not the passband? Well, we start by looking at things as point-wise multiplication in (discrete) frequency domain. Well, that does work (all practical issues of "how to get into frequency domain" aside – that's what OFDM or GFDM would solve, but it puts restrictions on the structure of your signals). So, the trick would be</p>
<ol>
<li>Generate white noise</li>
<li>filter it (time domain) or mask it (frequency domain) to affect the out-of-allowed-band region only</li>
<li>multiply it point-wise in frequency domain with the signal</li>
<li>(if necessary) transform back to time domain</li>
</ol>
<p>Interestingly, steps 2/3 suggest that you could, instead of doing the multiplication in frequency domain, convolve your signal with bandlimited white noise and preserve your signal of interest – which feels wrong. I need to think about this.</p>
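<p>For completeness, the naive additive variant from the first paragraph can be sketched like this (my illustration; the filter length and cutoff are placeholders): shape white noise with the complementary filter <code>delta - h</code> so it lands only in the stop band, then add it to the low-pass output.</p>

```python
import numpy as np
from scipy import signal

h = signal.firwin(101, 0.25)        # linear-phase low-pass (placeholder design)
delta = np.zeros(101)
delta[50] = 1.0                     # pure delay matching the filter's group delay
h_comp = delta - h                  # complementary filter: passes the stop band

rng = np.random.default_rng(0)
x = np.sin(0.1 * np.pi * np.arange(20000))        # in-band test tone
noise = rng.standard_normal(20100)

y = np.convolve(x, h)                              # low-pass filtered signal
masking = np.convolve(noise, h_comp)[:len(y)]      # noise shaped to the stop band
out = y + masking
```

This only masks stop-band content additively; as the answer notes, it does not destroy correlation with the input the way a multiplicative scrambler would.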
|
https://dsp.stackexchange.com/questions/77018/a-filter-removing-forbidden-information-in-the-stop-band
|
Question: <p>Update: the yellow area in the graph below can be ignored, it shows power produced. I'm only interested in the blue line, and how to separate power consumed by the heating system from the rest.</p>
<p>I have data for a signal (blue line) showing overall power consumption. The spikes come from a heating system. The image below shoes data for 24 hours.</p>
<p><a href="https://i.sstatic.net/3sQFJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3sQFJ.png" alt="signal" /></a></p>
<p>I would like to split the signal into two separate signals, to find out how much power is consumed by the heating system.</p>
<p>Are there any methods to achieve this?</p>
Answer: <blockquote>
<p>Are there any methods to achieve this?</p>
</blockquote>
<p>Not without some additional information. You may be able to partially separate this by carefully studying the spikes from the heating system. Many heating systems are binary: they are either on or off and during the "on state" the power consumed is constant.</p>
<p>In your case it looks like the value for "heat on" is about 9.5. You could separate the graphs by subtracting 9.5 from each value that's above 9.5. That would be the baseline load, and the subtracted spikes would be the heating system. Of course, that only works if the constant-power assumption is justified.</p>
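<p>A minimal sketch of that subtraction (the 9.5 "heat on" level is the eyeballed value from above and would need tuning against real data):</p>

```python
import numpy as np

def split_heating(power, heat_level=9.5):
    """Split total power into baseline and heating, assuming the heater
    is binary and draws a constant heat_level while on."""
    heating = np.where(power > heat_level, heat_level, 0.0)
    baseline = power - heating
    return baseline, heating
```

Total heating energy is then just the sum of the <code>heating</code> series times the sample interval.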
|
https://dsp.stackexchange.com/questions/85536/method-for-splitting-time-sampled-signal-into-two-signals
|
Question: <p>I know overlap save and overlap add are used for long data sequence filtering. Are there any other similar or better techniques like these? </p>
Answer: <p>The main alternative that I can think of is the hybrid method proposed by <a href="http://alumni.media.mit.edu/~billg/projects.html#conv" rel="nofollow">Bill Gardner</a> and <a href="http://www.freepatentsonline.com/6421697.html" rel="nofollow">patented by Lake DSP</a> (now part of Dolby). <a href="http://www.cs.ust.hk/mjg_lib/bibs/DPSu/DPSu.Files/Ga95.PDF" rel="nofollow">There appears to be a copy of Gardner's paper here</a>.</p>
|
https://dsp.stackexchange.com/questions/8771/other-techniques-like-overlap-save-overlap-add
|
Question: <p>I need to know what iteration and divergence are in the anisotropic diffusion filter technique.</p>
<blockquote>
<ul>
<li><p><strong>Isotropic diffusion</strong> $$\frac{\partial I(x, y, z)}{\partial t}={\rm div}\left[c\cdot \nabla I\left(x, y, z\right)\right], \quad
\text{where } c \text{ is the diffusion coefficient}$$</p></li>
<li><p><strong>Anisotropic diffusion</strong> $$\frac{\partial I(x, y, z)}{\partial t}={\rm div}\left[g\left(\left\| \nabla I\left(x, y, z\right)\right\|\right)\cdot \nabla I\left(x, y, z\right)\right], \quad \text{where } g \text{ is the anisotropic diffusion coefficient }\textbf{(Edge stopping function)}$$</p></li>
</ul>
</blockquote>
<ul>
<li>Here $t$ refers to iteration. What is this iteration, and how is it related to filtering? I know that iteration refers to a number of rounds, but how does that relate to filtering?</li>
<li>Also, please explain why we use the divergence in these equations; what is its purpose? </li>
</ul>
Answer: <p>This paper - <a href="https://www.scribd.com/document/406513369/The-Structure-of-Images" rel="nofollow noreferrer">The Structure of Images</a> - describes quite nicely how to get to the diffusion equations from the point of view of a filter. It's about the isotropic case, but anisotropic is really just an extension of that. For anisotropic diffusion see <a href="https://www.scribd.com/document/406513370/Scale-Space-and-Edge-Detection-Using-Anisotropic-Diffusion" rel="nofollow noreferrer">Scale Space and Edge Detection Using Anisotropic Diffusion</a>; as Deniz suggests, this one is quite good as it also includes discretised versions of the equations.</p>
<p>The basic idea is to solve the <a href="http://en.wikipedia.org/wiki/Heat_equation" rel="nofollow noreferrer">heat equation</a> where <span class="math-container">$t$</span> is the scale of the image, i.e. the amount of filtering to apply. For anisotropic diffusion the diffusion coefficient <span class="math-container">$c$</span> is replaced by <span class="math-container">$g$</span> which is dependent on the image gradient and therefore filters preferentially depending on the gradient. </p>
<p>The divergence here really just arises from the definition of the heat equation.</p>
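<p>A small Perona-Malik sketch (my illustration, using a standard discretisation rather than code from either paper) makes both points concrete: each iteration is one explicit time step of the PDE, and the divergence becomes a sum of edge-weighted fluxes toward the four neighbours:</p>

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Perona-Malik diffusion: each iteration is one explicit Euler step
    of dI/dt = div( g(|grad I|) * grad I )."""
    I = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):                   # t in the PDE = iteration count
        # Gradients toward the four neighbours (periodic boundary via roll)
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Discretised divergence: sum of edge-weighted fluxes
        I += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return I
```

More iterations mean more accumulated "diffusion time", i.e. stronger smoothing; where the gradient is large, <code>g</code> shrinks the flux and edges are preserved.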
|
https://dsp.stackexchange.com/questions/14658/anisotropic-diffusion-filter-intuition-behind-parameters
|
Question: <p>Let's say I have a 2 second data set taken at 220Hz sample rate and I would like to filter out the frequency bands associated with the EEG Spectrum:
$$\begin{align}
\Delta:& [1,3]\text{ Hz}\\
\theta:& [4,7]\text{ Hz}\\
\alpha_1:& [8,9]\text{ Hz}\\
\alpha_2:& [10,12]\text{ Hz}\\
\beta_1:& [13,17]\text{ Hz}\\
\beta_2:& [18,30]\text{ Hz}\\
\gamma_1:& [31,40]\text{ Hz}\\
\gamma_2:& [41,50]\text{ Hz}
\end{align}
$$
What would be the most simple approach to do this ?</p>
Answer: <blockquote>
<p>What would be the most simple approach to do this ?</p>
</blockquote>
<p>I'm a GNU Radio person. So the simplest approach was this GNU Radio companion-designed flowgraph:</p>
<p><a href="https://i.sstatic.net/LFQ7o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LFQ7o.png" alt="Bandpass flow graph" /></a></p>
<p>Compiling <a href="https://gist.githubusercontent.com/marcusmueller/8c09996dd1d3857fbd35fb3265dfceda/raw/444fa1e9ff94a3f551c1ac3f5e1b2ab9cfeb8175/banpdasses.grc" rel="nofollow noreferrer">this flowgraph file</a> generates a python program that uses GNU Radio to do the signal processing, on as many CPU cores as there are.</p>
<p>You can then run it as</p>
<pre><code>$ ./bandpass_filters.py -f {input file containing float32 samples, one after the other}
</code></pre>
<p>To decrease the complexity of all these bandpasses, I first used a low pass to reduce the sampling rate of the signal by half – that's Ok, since you are only interested in frequencies below half of half of your input sampling rate.</p>
<p>I tried this with a 200MB "dummy" file, creating eight 100MB output files – this, to and from a temp directory, took a whole 13 seconds.
I'm pretty hopeful it's fast enough! I benchmarked this a bit, and it seems the source is able to push through about 20 million samples per second; the slow part is the writing to the 8 files.</p>
<p>Note that this is a quick and easy solution – a proper, speed-optimized solution would probably use</p>
<ol>
<li>decimation, because your output sampling rate is still 110 Hz even though a band contains content up to at most 9 Hz (that's a waste of processing power, and hence, speed)</li>
<li>another pre-decimation by 2 through low-pass filtering to 27.5 Hz for the four filters below that rate</li>
</ol>
<p>However, really, at your modest amount of samples (16GB = 4 Billion 32bit floats?) this isn't really necessary, if you wanted to grab a coffee, anyways.</p>
<p>That flowgraph was really nice and easy to design. Here I am, shamelessly plugging GNU Radio's <a href="http://tutorials.gnuradio.org" rel="nofollow noreferrer">Guided tutorials</a> if you want to learn to design such signal processing flow graphs yourself; in fact, it's even more fun to use GNU Radio in live flowgraphs, i.e. in systems where the in- or output (or both) are physical devices (microphones, ecg sensors, sonars, radio frontends). Exactly the same flow graph can work with a sound card, if you replace the "file source" by an "audio source" (both come with GNU Radio) and specify a sound card-compatible sampling rate via the <code>-r</code> flag.</p>
<p>Now, since you probably know better than me how to work with Mathematica, here's what I'd recommend you implement:</p>
<ol>
<li>(first, one time) You use the GNU Radio companion to generate a python program that does the signal processing for you.</li>
<li>You use Mathematica's <code>BinaryWrite()</code> to write your raw samples to a file as 32 bit floating point numbers, e.g. <code>BinaryWrite("/tmp/samples.dat.f32", your_sample_vector, "Real32")</code>.</li>
<li>You use Mathematica's abilities to call executables on your machine to execute that python script. Now, I'm no Mathematica expert, so this is all Google-based-guessing: <code>RunProcess({"/path/of/python/executable", "/path/of/generated/python/program", "-f", "/tmp/samples.dat.f32"})</code></li>
<li>You read (<code>BinaryRead</code>) the resulting <code>/tmp/samples.dat.f32_1_3.dat</code> (and <code>…_4_7.dat</code> etc) in Mathematica.</li>
</ol>
<p>Optionally, you really consider doing less workflow in Mathematica and use the sheer mass of signal processing things that come with GNU Radio or its module ecosystem.</p>
|
https://dsp.stackexchange.com/questions/28965/filtering-frequency-bands-out-of-a-signal
|
Question: <p>Let`s say I have a signal </p>
<p>$m(t)=\cos(4\pi t) + \cos(6\pi t)$</p>
<p>so we can say the signal is containing frequencies $f_1= 2\mathrm{Hz}$ and $f_2= 3\mathrm{Hz}$. The cut-off frequency of the low pass filter is equal to $f_c= 3.5\mathrm{Hz}$, and the sampling frequency is $5\mathrm{Hz}$.</p>
<p>Is there an overlap (aliasing error) when receiving the original signal (After LPF)?</p>
Answer: <p>Given two signals $x_1(t) = \cos(2\pi 2t)$ and $x_2(t) = \cos(2\pi 3t)$, which have frequencies of 2 and 3 Hz respectively, if you sample them uniformly at a rate of 5 samples per second (a sampling period $T$ of $\frac {1}{5} = 0.2$ seconds) you will get the following two discrete-time signals $$x_1[n] = x_1(nT) = \cos(4\pi \frac {n}{5} )$$ and $$x_2[n] = x_2(nT) = \cos(6\pi \frac {n}{5})$$ </p>
<p>It can easily be seen with simple trigonometry (by using $\cos(x) = \cos(2\pi-x)$) that $x_1[n] = x_2[n]$, which is a manifestation of the aliasing that occurred on the 3 Hz signal: after sampling, the 3 Hz signal appears as 2 Hz.</p>
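This is easy to verify numerically; a small NumPy sketch:

```python
import numpy as np

T = 1 / 5  # sampling period for Fs = 5 Hz
n = np.arange(10)

x1 = np.cos(2 * np.pi * 2 * n * T)  # 2 Hz tone sampled at 5 Hz
x2 = np.cos(2 * np.pi * 3 * n * T)  # 3 Hz tone sampled at 5 Hz

print(np.allclose(x1, x2))  # True: the two sampled sequences are identical
```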
|
https://dsp.stackexchange.com/questions/29880/a-question-about-sampling-theory
|
Question: <p>I have a question about using the Dirichlet kernel as a filter. Let us suppose that I have samples of a continuous function sampled with frequency <span class="math-container">$F_s=10 \,\texttt{Hz}$</span>. The function is band-limited and the sampling frequency is well above the Nyquist frequency, but there is noise added to the data. I need a lowpass filter to remove all the high frequency components introduced by the noise above a certain frequency <span class="math-container">$B$</span>.</p>
<p>My understanding is that <span class="math-container">$\text{sinc}(2Bt)$</span> would represent the ideal lowpass filter because computing its Fourier transform (FT) gives a rectangular frequency response of single-sided width <span class="math-container">$B$</span>. However, this filter has an infinite duration in time and we need to truncate it. The abrupt truncation with a rectangular window causes unwanted Gibbs oscillations in the frequency domain and, consequently, we smooth the <span class="math-container">$\text{sinc}$</span> with an appropriate window instead to mitigate this problem.</p>
<p>However, because we implement all our calculations on machines, continuous functions are sampled and FT become Discrete Fourier transforms (DFT). Then, why don't we just use the Dirichlet kernel as the lowpass filter? Let's say that my filter has <span class="math-container">$N$</span> coefficients, with <span class="math-container">$N$</span> odd. The DFT will have spacing <span class="math-container">${\Delta}f=F_s/N$</span>. If I take <span class="math-container">$N_B$</span> as the nearest integer to <span class="math-container">$B/{\Delta}f$</span>, then the Dirichlet kernel <span class="math-container">$D_n=\sin[{\pi}(2N_B+1)n/N]/\sin({\pi}n/N)$</span> will have a rectangular DFT with single-sided bandwidth given by <span class="math-container">$N_B{\Delta}f$</span>. Of course, if I have <span class="math-container">$M$</span> samples of the input function, the output will have <span class="math-container">$M-(N-1)$</span> samples. It seems to me that <span class="math-container">$D_n$</span> is the ideal filter that we need. Am I wrong about this?</p>
<p>I know that <span class="math-container">$D_n$</span> tends to <span class="math-container">$\text{sinc}(2Bt)$</span> as <span class="math-container">$N$</span> goes to <span class="math-container">$\infty$</span>. So I guess an issue related to my question is, for what value of <span class="math-container">$N$</span> is the discrete approximation to the continuous case good enough that we don't have to worry about it anymore? On the other hand, when we are limited to relatively small values of <span class="math-container">$N$</span> (for instance, if we don't want to exacerbate the data loss <span class="math-container">$M-(N-1)$</span>), should we just do things as if everything were discrete or should we carry over results from the continuous case (like filtering with <span class="math-container">$\text{sinc}$</span> for example)?</p>
<p>Thank you in advance for your answers!</p>
Answer: <p>The Dirichlet Kernel is a "time-aliased sinc function". Using it as the coefficients of a low-pass filter avoids the truncation issues the OP correctly describes for the truncated sinc, but it leads to greater distortion in the frequency response at all frequencies that fall <em>between</em> the samples given by the DFT. (The frequency response is the DTFT of the time-domain samples: even though we are sampled in time, the response is defined on the continuous frequency axis.) The Dirichlet Kernel reproduces the exact sampled frequency values of a rectangular low-pass (such as [1, 1, 1, 0, 0, 0, ..., 0, 0, 1, 1] in the frequency domain), but the continuous frequency response has much more ripple (error) at all the frequencies in between those "bin centers" than what we would get by using samples of the truncated sinc function as the filter.</p>
<p>Below shows the two functions; the truncated Sinc labeled "IFT", and the Dirichlet Kernel labeled "IDFT"</p>
<p><a href="https://i.sstatic.net/VCoomgMt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCoomgMt.png" alt="Sinc vs Dirichlet Kernel" /></a></p>
<p>The Sinc in orange has its distortion due to being truncated from the ideal Sinc that extends to <span class="math-container">$\pm \infty$</span>, while the Dirichlet Kernel has its distortion as a deviation from the ideal Sinc, which can be shown to be mathematically equivalent to aliasing from the tails of the Sinc that extend beyond the boundary shown (hence the elevated values). This distortion appears as an error in the frequency response.</p>
<p>I detail the difference between the two approaches in DSP <a href="https://dsp.stackexchange.com/a/31909/21048">#31905,</a> with some bottom-line plots from that post copied below.</p>
<p>Below is the comparative frequency response for a 99-Tap FIR using coefficients of the Dirichlet Kernel (and labeled "Frequency Sampling") in Blue, compared to coefficients of a truncated Sinc (and labeled "Window" as it has been windowed with a rectangular window) in red. In the scale of the first plot we primarily see the difference in the stop-band, in that it has greater error which is attributed to the time domain aliasing of an ideal Sinc response in the time domain.</p>
<p><a href="https://i.sstatic.net/JfXqr0m2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfXqr0m2.png" alt="stop band" /></a></p>
<p>Below is a zoom in of the passband, again showing the greater error that results.</p>
<p><a href="https://i.sstatic.net/UD9LNsvE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UD9LNsvE.png" alt="pass band" /></a></p>
<p>The lesson from above is that both the truncated Sinc and the Dirichlet Kernel are non-ideal (neither is a brick-wall response), with the deviation due to either truncation error for the rectangular windowed Sinc, or time-domain aliasing for the Dirichlet Kernel. But importantly, the truncated Sinc out-performs the Dirichlet Kernel. We can continue to improve the truncated Sinc with higher performing windows, or we can continue to improve the Dirichlet Kernel by oversampling first in the frequency domain to reduce the time domain aliasing (effectively push the time duration out further), and then windowing that result. Oversampling the frequency domain to reduce aliasing for a rectangular frequency response just makes the Dirichlet Kernel approach the ideal Sinc!</p>
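The comparison can be reproduced numerically. A sketch with NumPy — the filter length and passband edge here are illustrative choices, not the exact design in the plots above:

```python
import numpy as np

N = 99                     # filter length (odd)
Nb = 10                    # passband edge in DFT bins (illustrative)

# Dirichlet kernel: inverse DFT of a sampled rectangular response
H = np.zeros(N)
H[:Nb + 1] = 1.0
H[-Nb:] = 1.0
h_dirichlet = np.fft.fftshift(np.fft.ifft(H).real)

# Truncated sinc: samples of the continuous ideal low-pass response
m = np.arange(N) - N // 2
fc = (Nb + 0.5) / N        # matching cutoff in cycles/sample
h_sinc = 2 * fc * np.sinc(2 * fc * m)

# Evaluate both on a dense frequency grid (the DTFT, not just DFT bins)
w = np.linspace(0, np.pi, 4096)
E = np.exp(-1j * np.outer(w, np.arange(N)))
stop = w > 2 * np.pi * (Nb + 2) / N
err_dirichlet = np.abs(E @ h_dirichlet)[stop].max()
err_sinc = np.abs(E @ h_sinc)[stop].max()
print(err_dirichlet, err_sinc)   # the windowed sinc ripples less in the stop band
```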
|
https://dsp.stackexchange.com/questions/93959/using-the-dirichlet-kernel-as-a-lowpass-filter
|
Question: <p>I can't find this answer anywhere. I have a couple satellite modem manuals and they refer to digital filtering functions that they do, but they say almost nothing about their sample rate. I always thought, without considering it too much, that all the modems I've worked with were only sampling at a rate high enough to extract the bits--something close to the symbol rate.
But if so, how then do they do digital filtering, and how would they be able to display the spectrum, like most of them will do? I believe you must have enough samples to re-create the waveform in order to do those things; I just didn't think all these modems were doing that.</p>
<p>And if all these normal modems implementing digital filtering do sample at the Nyquist rate+, I'm not really seeing the distinction between traditional IF and digital IF, since the first thing the modems are doing is sampling high enough to effectively have a digital IF.</p>
<p>Thanks. There's a lot I don't know, and wish I'd learned 20 years ago.</p>
Answer: <p>All modems sample at higher than the symbol rate up until timing recovery is resolved, at which point the received waveform can be down-sampled to 1 sample per symbol.</p>
<p>As for a digital IF: digital IF means the waveform is centered on some higher frequency, higher than its occupied bandwidth, and can be represented completely as a real signal. In contrast to this is a complex baseband signal, in which case it would be two datapaths that are sampled representing the in-phase (I) and quadrature (Q) components of the complex baseband waveform. When a digital IF is used, a digital down-conversion is required to translate the IF signal to the complex I and Q baseband signal. Both cases are sampled higher than the symbol rate in order for the receiver to resolve carrier and timing offsets and perform optimum matched filtering; meeting the Nyquist requirements as a minimum plus some additional margin for realizable filtering.</p>
|
https://dsp.stackexchange.com/questions/93664/sample-rate-of-digital-modems-how-do-they-do-digital-filtering-if-sampling-belo
|
Question: <p>I'm trying to convolve an input signal <span class="math-container">$x[n]$</span> with two FIR filters <span class="math-container">$h_1[n]$</span> and <span class="math-container">$h_2[n]$</span> in sequence, using block-based overlap-save FFT processing. I want my output to be:
<span class="math-container">$$
y[n]=(x[n]*h_1[n]) * h_2[n]
$$</span>
where <span class="math-container">$∗$</span> denotes linear convolution.</p>
<p>I'm hoping to avoid running the overlap-save method twice. My idea was to perform the overlap-save FFT convolution once, apply <span class="math-container">$h_1$</span> in the frequency domain, and then immediately apply <span class="math-container">$h_2$</span> to the result by multiplying again in the frequency domain, like this (with <span class="math-container">$X[k]$</span>, <span class="math-container">$H_1[k]$</span>, <span class="math-container">$H_2[k]$</span> the respective FFTs):</p>
<p><span class="math-container">$$Y_1[k] = X[k]⋅H_1[k]$$</span>
<span class="math-container">$$Y_2[k] = Y_1[k]⋅H_2[k]$$</span></p>
<p>and then inverse FFT to get the two output blocks (<span class="math-container">$y_1[n]$</span> and <span class="math-container">$y_2[n]$</span>).</p>
<p>However, when trying this with the overlap-save implementation, it does not produce the expected result. Am I missing something?</p>
<p>Here is the Python code that I used:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def overlap_save(signal, filter, filter2, block_size):
    filter_fft = np.fft.rfft(np.concatenate((filter, np.zeros(block_size))))  # filter packing
    filter_fft_2 = np.fft.rfft(np.concatenate((filter2, np.zeros(block_size))))  # filter packing
    num_blocks = len(signal) // block_size
    output = np.zeros(len(signal))
    output_2 = np.zeros(len(signal))
    input_state = np.zeros(2 * block_size)  # input packing buffer
    for i in range(num_blocks):
        start = i * block_size
        end = start + block_size
        block = signal[start:end]
        # input packing
        input_state[:block_size] = input_state[block_size:]
        input_state[block_size:] = block
        block_fft = np.fft.rfft(input_state)
        convolved_block_fft = block_fft * filter_fft
        convolved_block = np.fft.irfft(convolved_block_fft)
        convolved_block_2_fft = convolved_block_fft * filter_fft_2
        convolved_block_2 = np.fft.irfft(convolved_block_2_fft)
        output[start:end] += convolved_block[block_size:]  # output unpacking
        output_2[start:end] += convolved_block_2[block_size:]  # output unpacking
    return output, output_2

# Example code:
signal = np.random.randn(1000)
filter = np.random.randn(10) / 10
filter2 = np.random.randn(10) / 10
block_size = 10

# Perform overlap-save convolution
output, output_2 = overlap_save(signal, filter, filter2, block_size)

# True output for comparison
true_output = np.convolve(signal, filter, mode='full')[:len(signal)]
true_output_2 = np.convolve(true_output, filter2, mode='full')[:len(signal)]

# Assert the outputs are close
assert np.allclose(output, true_output)  # True
assert np.allclose(output_2, true_output_2)  # False
</code></pre>
Answer: <p>Convolution is associative (and commutative), i.e.
<span class="math-container">$$
y[n]=(x[n]*h_1[n]) * h_2[n] = x[n]*(h_1[n]*h_2[n])
$$</span></p>
<p>Hence you only need to convolve the two impulse responses once and then implement a single overlap-save with the resulting IR.</p>
<p><span class="math-container">$$h_3[n] = h_1[n]*h_2[n], \qquad y[n] = x[n]*h_3[n]$$</span></p>
<p>What you have implemented works as well, you just have to make the FFT size large enough. If the lengths of the filters are <span class="math-container">$N_1$</span> and <span class="math-container">$N_2$</span>, the FFT size has to be at least <span class="math-container">$N_{FFT} \ge 2\cdot(N_1+N_2)-1 $</span></p>
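The identity is easy to verify with plain convolutions; a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h1 = rng.standard_normal(10) / 10
h2 = rng.standard_normal(10) / 10

# Cascade: filter with h1, then with h2
cascade = np.convolve(np.convolve(x, h1), h2)

# Equivalent single filter: pre-convolve the two impulse responses
h3 = np.convolve(h1, h2)          # length N1 + N2 - 1 = 19
combined = np.convolve(x, h3)

print(np.allclose(cascade, combined))  # True
```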
|
https://dsp.stackexchange.com/questions/96583/how-to-apply-two-fir-filters-in-sequence-with-a-single-overlap-save-fft-step
|
Question: <p>I would like to filter some data in an online sense i.e.</p>
<p><span class="math-container">$$y(t) = a0 + a1*y(t-1) + a2*y(t-2) + ... $$</span></p>
<p>the order not important.</p>
<p>My understanding of the SG is that it is really a smoother - I have to take some point and use points around it, in this case I would end up with e.g.</p>
<p><span class="math-container">$$y(t-3) = a0 + a1*y(t-1) + b1*y(t+1) + ...$$</span></p>
<p>which is not ideal as my filter cannot 'see' into the future.</p>
<p><a href="https://gregstanleyandassociates.com/whitepapers/FaultDiagnosis/Filtering/LeastSquares-Filter/leastsquares-filter.htm" rel="nofollow noreferrer">This article here</a> suggests coefficients for the SG that match what I require, but I cannot find any other resource on the internet.</p>
<p>Can somebody please point out what I should be looking for and/or where the coefficients in that link come from?</p>
<p>Is it plausible to just mirror the data around y(t) and use traditional SG?</p>
Answer: <p>I found the answer from playing around. They are easily computed with <code>scipy.savgol_coeffs</code> e.g.</p>
<pre><code>In [0]: signal.savgol_coeffs(7, 1, pos=0, use='dot')
Out[1]: array([ 0.46428571, 0.35714286, 0.25 , 0.14285714, 0.03571429,
-0.07142857, -0.17857143])
</code></pre>
<p>Matches the values in the website.</p>
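These coefficients can then be applied causally with a plain dot product. A sketch assuming SciPy, with the 7-sample window ordered newest-first as in the linked article's convention:

```python
import numpy as np
from scipy.signal import savgol_coeffs

# pos=0 puts the polynomial's evaluation point at one end of the window,
# so the fit needs no future samples
coeffs = savgol_coeffs(7, 1, pos=0, use='dot')

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200) + 0.1 * rng.standard_normal(200)

# Online filtering is a dot product with the last 7 samples, newest first;
# for a linear input this filter reproduces the newest sample exactly
y = np.array([coeffs @ x[n - 6:n + 1][::-1] for n in range(6, len(x))])
print(len(y))  # one output per sample once the window is full
```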
|
https://dsp.stackexchange.com/questions/83038/savitzky-golay-filtering-not-smoothing-in-real-time
|
Question: <p>Has anyone attempted to use savitzky-golay filters in conjunction with interpolation of missing observations? It seems very logical to do so, but I was wondering if there are any good reasons for not doing it since my search has yielded no results.</p>
Answer: <p>Savitzky-Golay filters, based on polynomial models, lend themselves well to non-uniform or lacunary sampling, one way of treating missing observations. However, they will likely become more expensive with irregularities. Hints can be found in post
<a href="https://dsp.stackexchange.com/q/1676/15892">Savitzky-Golay smoothing filter for not equally spaced data</a>.</p>
<p>Other sources of interest:</p>
<ul>
<li><p><a href="https://github.com/scipy/scipy/issues/19812" rel="nofollow noreferrer">ENH: Savitsky-Golay filter for non uniformly spaced data</a></p>
</li>
<li><p><a href="https://fr.mathworks.com/help/wavelet/ug/smoothing-nonuniformly-sampled-data.html" rel="nofollow noreferrer">Smoothing Nonuniformly Sampled Data with the multiscale local polynomial transform (MLPT)</a></p>
</li>
</ul>
|
https://dsp.stackexchange.com/questions/95291/savitzky-golay-with-missing-observations
|
Question: <p>On <a href="http://www.cs.toronto.edu/%7Efidler/slides/2019/CSC420/lecture2.pdf" rel="nofollow noreferrer">page 22 of the slide</a>, the 1D array is [1, 1, 1, 1, 1], but the dots are uneven. Why, then, are their values the same?
What is the value of the point that the arrow is pointing at? The slide gives the moving average in 1D as [1, 1, 1, 1, 1]/5, but what exactly is that value?</p>
Answer: <p>The illustration is showing your data (labelled as "original") and the result of convolving your moving average filter with your data (labelled as "smoothed"). The filter/kernel is [1 1 1 1 1]/5 and gives each of the five samples equal weight. Effectively, it takes the average of the five samples it overlaps with during each step of the convolution process. It can also be written as [0.2 0.2 0.2 0.2 0.2] or [1/5 1/5 1/5 1/5 1/5]. The values in this filter are the same, but you can also choose different values depending on what you wish to accomplish. In the next slide, they give the filter [1 4 6 4 1]/16. This filter gives more weight or importance to the samples that are near the center of the filter.</p>
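As a concrete sketch of that averaging step (NumPy, with made-up sample values):

```python
import numpy as np

x = np.array([1.0, 2.0, 6.0, 2.0, 1.0, 1.0, 1.0])
kernel = np.ones(5) / 5            # [1 1 1 1 1]/5: equal weight per sample

# Each output sample is the mean of the 5 inputs the kernel overlaps
smoothed = np.convolve(x, kernel, mode='valid')
print(smoothed)  # [2.4 2.4 2.2]
```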
<p>I hope this helps.</p>
|
https://dsp.stackexchange.com/questions/69055/how-does-filter-works-in-noise-reduction
|
Question: <p>I'm working on a dsPIC33, using Audio Codec Board - Proto to read audio samples and for reproduction.</p>
<p>In order to implement a relatively simple signal processing algorithm, I'm reading audio samples and I'm supposed to process them and pass them to DAC for reproduction.</p>
<p>The dsPIC33 I'm using filters the signal using Q15 format, but the samples I'm getting from Codec board are signed 2's complement, 16-bit. Thus far, I managed to get the reading and send it to DAC, without any data manipulation, but now I need to filter the signal.</p>
<p>Is there a way to overcome this format barrier? I suppose that conversions from signed 2's complement to Q15 and vice versa are necessary.</p>
Answer: <blockquote>
<p>I suppose that conversions from signed 2's complement to Q15 and vice versa are necessary.</p>
</blockquote>
<p>Yes & No. Q15 (signed) uses 2's complement, so the bits are the same and no conversion is necessary. The difference between Q15 and Q0 isn't in the bits themselves but in your interpretation of them. If you multiply two 16-bit numbers you get a 31-bit result (one less than 32 because of a redundant sign bit). If you want to convert this to Q15 you take the top 16 bits; if you want Q0 you take the bottom 15 bits plus the sign bit.</p>
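A sketch of that reinterpretation using plain Python integers (illustrative only, not dsPIC code):

```python
# Two Q15 numbers: the bit pattern is ordinary two's complement; only the
# interpretation (an implicit scale factor of 2**-15) differs from int16
a_q15 = int(0.5 * 2**15)          # 0.5 in Q15  -> 16384
b_q15 = int(-0.25 * 2**15)        # -0.25 in Q15 -> -8192

prod = a_q15 * b_q15              # full-precision product is in Q30
prod_q15 = prod >> 15             # keep the "top" bits to return to Q15

print(prod_q15 / 2**15)  # -0.125
```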
<p>The problem is that fixed point audio processing is difficult and 16-bit audio processing is extremely difficult. You need to constantly manage head room and optimize signal to noise ratio for each processing step. Fixed point filters require careful management of section ordering, section gain, section topology, noise shaping, rounding strategies, limit-cycle prevention, etc.</p>
<p>If you can afford it, I strongly recommend to do this in 32-bit floating point, which is much, much easier. If you can't, consider 32-bit or at least 24-bit fixed point. If that's not possible either, at the very least use double precision (fixed point) accumulators and filter state variables. If that's also not an option, you have a very rough project ahead of you.</p>
|
https://dsp.stackexchange.com/questions/72661/dsp-signal-filtering-and-number-formats
|
Question: <p>I'm trying to do some simple filtering for an audio signal using a windowed-sinc low pass filter. Supposing that my buffer has M values and the filter kernel size is N, after convolving these two arrays I would obtain an output of M+N-1 values. Which part should I take into consideration? I'm asking this because, if I read an M-value array I should write an M-value array. I have tried to take the first M or the last M values for output, but the result is not too good. I was wondering if I have to use a circular buffer into which I put M+N-1 values each time, but read only the first M values. </p>
Answer: <ul>
<li>Output the first M samples of your convolution result</li>
<li>Keep the remaining samples and ADD those to the result next buffer</li>
</ul>
<p>Google "overlap add" for more information. While overlap add is a frequency domain method, it explains the framing and buffer handling well.</p>
|
https://dsp.stackexchange.com/questions/8380/how-to-apply-convolution-on-a-buffer
|
Question: <p>The question says it all. In typical (wavelet-like) decomposition of a signal, why is only the low pass component chosen for successive decomposition ?</p>
Answer: <p>Wavelet decomposition separates the details/fluctuations/high-pass information out from the image or signal. At each step, the details are separated out from the remainder of the signal. Further processing is therefore only applied to the coarse part, not to the part that already holds the details. </p>
|
https://dsp.stackexchange.com/questions/18850/why-is-successive-decomposition-of-a-signal-performed-only-on-low-pass-component
|
Question: <p>I am working on a mixer which lets me mix dry and wet signal in a user defined ratio. Wet signal is basically just the all pass filtered signal. The mixing takes place in frequency domain and so does the filtering. So the requirement here is, after mixing I have to compensate for amplitude loss, i.e. the output signal should have same peak value over a frame as input signal. </p>
<p>So basically </p>
<pre><code>output= x*wet +(1-x)*dry;
</code></pre>
<p>where dry is the input signal and wet is the output of filter, and all the signals are in frequency domain. x is a scalar ratio(0 to 1). </p>
<p>Is there any way to apply this compensation? I tried to multiply by the ratio of the RMS values of input and output,
i.e. <code>output = RMS(input)/RMS(output) * output</code> (again everything in the frequency domain),</p>
<p>but even this doesn't produce the desired result. (I am seeing meters in audacity after processing in aforementioned manner). </p>
Answer:
|
https://dsp.stackexchange.com/questions/23419/amplitude-compensation-after-filtering-and-mixing-audio
|
Question: <p>Why median filter is considered as good for removal of salt and pepper noise? What are the other filters used for the same?</p>
Answer: <p>The median filter is considered good because, unlike an averaging filter, which blurs the edges of an image while removing the noise, the median filter removes the noise without disturbing the edges.</p>
<p>The median filter is the standard choice for removing salt-and-pepper noise; other rank-order filters, such as the adaptive median filter, are also used for the same purpose.</p>
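The difference is easy to demonstrate on a flat 1D slice with isolated spikes (a NumPy sketch; `sliding_window_view` builds the 3-sample windows):

```python
import numpy as np

row = np.full(100, 10.0)                     # flat "image row"
noisy = row.copy()
noisy[[5, 40, 80]] = 0.0                     # "pepper" spikes
noisy[[20, 60]] = 255.0                      # "salt" spikes

win = np.lib.stride_tricks.sliding_window_view(noisy, 3)
median_out = np.median(win, axis=1)
mean_out = win.mean(axis=1)

# The median discards each isolated spike outright; the mean smears it
print(np.allclose(median_out, 10.0))  # True
print(np.allclose(mean_out, 10.0))    # False
```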
<p>Hope this helps:)</p>
<p>Thank you!!!</p>
|
https://dsp.stackexchange.com/questions/27147/median-filter-for-salt-and-pepper-noise-removal
|
Question: <p>I'm slightly confused about baseband pulse shaping.
Let's assume I have a complex data vector in an arbitrary complex constellation (QAM for example). I would like to pass this complex vector through a pulse shaping filter (RRC or RC).</p>
<p>As far as I know RC/RRC filters have a real impulse response, so I can generate a sampled version of the filter.
Am I correct to assume the filter is real? Do I have to represent the filter as an analytic signal (i.e. $h_{RC}(t)=h_{RC}+j\hat{h}_{RC}$) before convolving my vector with it?</p>
Answer: <p>Your complex baseband signal before modulation is given by</p>
<p>$$s(t)=\sum_{m=-\infty}^{\infty}a_mh(t-mT)\tag{1}$$</p>
<p>where $a_m$ are the complex symbols, $h(t)$ is the impulse response of the transmit filter, and $T$ is the symbol period. As you correctly assumed, $h(t)$ is usually real-valued, so you need to filter the real and imaginary parts of your symbols with the same filter $h(t)$ to get the complex baseband signal $(1)$:</p>
<p>$$s(t)=\sum_{m=-\infty}^{\infty}\text{Re}\{a_m\}h(t-mT)+j\sum_{m=-\infty}^{\infty}\text{Im}\{a_m\}h(t-mT)\tag{2}$$</p>
<p>There is no need to modify the real-valued impulse response $h(t)$.</p>
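A quick NumPy check of $(2)$: a real pulse-shaping filter applied directly to complex symbols gives the same result as filtering the real and imaginary parts separately. (A Hann-windowed sinc stands in here for a true RC/RRC response, purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
# QPSK symbols (complex), upsampled to 4 samples per symbol
a = (rng.choice([-1, 1], 50) + 1j * rng.choice([-1, 1], 50)) / np.sqrt(2)
sps = 4
upsampled = np.zeros(len(a) * sps, dtype=complex)
upsampled[::sps] = a

# A real-valued pulse-shaping impulse response (windowed sinc as stand-in)
h = np.sinc(np.arange(-8, 9) / sps) * np.hanning(17)
s = np.convolve(upsampled, h)

# Same result as filtering Re and Im separately with the same real filter
s_split = np.convolve(upsampled.real, h) + 1j * np.convolve(upsampled.imag, h)
print(np.allclose(s, s_split))  # True
```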
|
https://dsp.stackexchange.com/questions/28087/pulse-shaping-and-baseband-filtering
|
Question: <p>I have an input sequence $x=\{x_1, x_2 , ... x_n\}$ of reals, where $n=2^m$ for some $m$. I wish to calculate FFT of $x$.</p>
<p>$X=FFT(x)$</p>
<p>However, before I calculate the FFT, the signal $x$ gets corrupted with noise $\eta$, so $\hat{x}=x+\eta$, and calculated FFT, $\hat{X}$ is FFT of $\hat{x}$, rather than $x$.</p>
<p>$\hat{X}=FFT(\hat{x})$</p>
<p>I wish to recover actual or approximate FFT $X$ from $\hat{X}$ under following assumption:</p>
<ol>
<li>$x$ is band limited signal; FFT spectrum dies down quickly.</li>
<li>The noise $\eta$ is a shifted delta function, with unknown shift and magnitude, i.e. $\eta(n)= R\delta (n-l)$, where $R,\ l$ are real and integer respectively.</li>
</ol>
<p>How will I do it in a compute efficient way?</p>
Answer: <p>High-pass filter the input with a zero-phase filter to eliminate the signal of interest, leaving only the high-passed noise. If the signal is periodic, you can do this in the frequency domain and convert back to time domain by IFFT. Find the time of the peak and deduce the magnitude of the unfiltered delta function from the magnitude of the filtered delta function (the peak value or <a href="https://en.wikipedia.org/wiki/Root_mean_square" rel="nofollow">root mean square</a>). Knowing the delay and magnitude, subtract the noise from the original input signal. Do FFT.</p>
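A numerical sketch of that procedure (the 16-bin cutoff is an illustrative choice; the band-limited signal here occupies only bin 4):

```python
import numpy as np

N = 256
n = np.arange(N)
x = np.cos(2 * np.pi * 4 * n / N)     # band-limited signal (energy at bin 4)
R, l = 0.7, 100                       # spike magnitude and shift ("unknown")
x_hat = x.copy()
x_hat[l] += R                         # corrupted signal

# Zero-phase high-pass: zero the low bins where the signal lives, leaving
# only the delta's flat spectrum
X_hat = np.fft.fft(x_hat)
cutoff = 16
hp = X_hat.copy()
hp[:cutoff] = 0
hp[-(cutoff - 1):] = 0
residual = np.fft.ifft(hp).real

# The filtered delta peaks at the spike position; its height is scaled by
# the fraction of bins kept, so divide that back out to estimate R
l_est = int(np.argmax(np.abs(residual)))
kept = N - (2 * cutoff - 1)
R_est = residual[l_est] * N / kept

x_rec = x_hat.copy()
x_rec[l_est] -= R_est                 # subtract the estimated delta
print(l_est, round(R_est, 3))         # 100 0.7
```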
|
https://dsp.stackexchange.com/questions/28230/filtering-noise-from-fft-where-noise-is-known-to-be-shifted-delta-function
|
Question: <p>I am trying to understand what filter may be suitable for the following HMM:</p>
<p>The signal is a Wright-Fisher one-dimensional diffusion characterised by the SDE</p>
<p>$$dX_{t}=\frac{1}{2}\left(\alpha(1-X_{t})-\beta X_{t}\right)dt+\sqrt{X_{t}(1-X_{t})}\,dB_{t}$$</p>
<p>with unknown parameters $\alpha$ and $\beta$. This process has stationary distribution $\mathrm{Beta}(\alpha,\beta)$. At discrete times Binomial observations are sampled conditionally on the value of the process $X_{t}=x$, $f_{Y_{t}}(y;x)=\text{Bin}(y,N;x)$.</p>
Answer:
|
https://dsp.stackexchange.com/questions/41578/filtering-for-wright-fisher-hmm
|
Question: <p>I have a 192kHz IQ signal from an RF receiver, and i'm trying to remove signals in the negative (or positive) frequency spectrum.</p>
<p>I see that the negative frequency signals are -90 degrees phase shifted from I, where positive frequencies have the usual +90 degrees shift I vs Q.</p>
<p>Here's the problem:</p>
<p>If i set the center frequency to f, and i have two signals, one at f+1000, one at f-1000, they would both be audible on the same frequency, if i mix the whole thing into audio, either interfering with each other, or cancelling each other out.</p>
<p>I'm missing the step where i can decide whether i want to hear anything below or above center. </p>
<p>How is this usually being filtered?</p>
Answer: <p>I managed to do this now with a 3-step process after converting the I/Q data to complex numbers. This is far from me understanding what I'm doing, but it eliminates negative frequencies.</p>
<ol>
<li><p>I shift the frequencies by multiplying each I/Q sample with the complex output of an oscillator at frequency PI/2. The oscillator outputs a complex number with Real being sin(pi*0.5*t), and Imaginary being cos() of the same values. Then I multiply that complex number with each sample (t advances +1 with each sample).</p></li>
<li><p>Next step is to high-pass filter at a quarter sample rate. So if our sample rate is 192000, my high pass cutoff is at 48000.</p></li>
<li><p>Lastly, we shift the whole thing back by doing the same multiplication as in 1), except with PI*1.5.</p></li>
</ol>
<p>The result is I/Q data that's got everything in the negative spectrum filtered out.</p>
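The three steps can be sketched numerically (NumPy; the high-pass is done crudely here by zeroing FFT bins, whereas the answer used a time-domain high-pass filter):

```python
import numpy as np

fs = 192000
n = np.arange(4096)
# Two tones: one at +3 kHz (positive spectrum), one at -3 kHz (negative)
iq = np.exp(2j * np.pi * 3000 * n / fs) + np.exp(-2j * np.pi * 3000 * n / fs)

# 1) Shift the whole spectrum up by fs/4: multiply by exp(j*pi*n/2)
shifted = iq * np.exp(1j * np.pi * n / 2)

# 2) High-pass at a quarter of the sample rate (48 kHz here)
S = np.fft.fft(shifted)
f = np.fft.fftfreq(len(n), 1 / fs)
S[np.abs(f) < fs / 4] = 0
filtered = np.fft.ifft(S)

# 3) Shift everything back down by fs/4
out = filtered * np.exp(-1j * np.pi * n / 2)

# Only the +3 kHz tone survives
peak = f[np.argmax(np.abs(np.fft.fft(out)))]
print(peak)  # 3000.0
```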
<p>I read about Hilbert transforms etc etc, but got lost halfway through that. Maybe someone has an explanation of what i did or why this works? :)</p>
|
https://dsp.stackexchange.com/questions/46301/how-to-eliminate-negative-frequencies-from-iq-signal
|
Question: <p>I am new to signal processing. I am trying to simulate something similar to IIR/FIR filter with $k$ delays to imitate acoustic echo reflection. The difference equations for FIR and IIR respectively are as follows:</p>
<p>\begin{equation}
y(n) = x(n) + \sum_{D=1}^kA(n)x(n-D)+v(n)\;\;\;\;\; (1)
\end{equation}
\begin{equation}
y(n) = x(n) + Ay(n-D(n))+v(n)\;\;\;\;\;\;\;\;\;(2)
\end{equation}
where $D$ is a delay in samples, the coefficient $A(n)$ describes
the changing attenuation related to object reflection and
$v(n) ∼ N(0, 10^{−3})$ is the noise. </p>
<blockquote>
<p>Equations $(1)$ and $(2)$ can be found in section $VII$, second
paragraph of <a href="https://arxiv.org/pdf/1303.0140.pdf" rel="nofollow noreferrer">this</a> paper</p>
</blockquote>
<p>How can I implement this? I started by writing the following code in R.</p>
<h1>Edit after the answer</h1>
<pre><code>install.packages("signal")
t <- seq(0, 1, len = 100)
x <- rnorm(100) + rnorm(length(t),0,0.001)
y <- filter(Arma(b = 0.1, a = 0.1), x)
</code></pre>
<p>Unfortunately, the above approach does not allow me to have $a=0$ or $b=0$.</p>
<p><strong>Remark</strong></p>
<p>Maybe the following function generates equation $(1)$:</p>
<pre><code>fir1(39, 0.3)+rnorm(40,0,0.001)
</code></pre>
<p>The second equation is perhaps called a flanging IIR filter, where
the delay is not constant, but changing with time. This effect
imitates time stretching of the audio signal caused by moving
and changing objects in the room.</p>
<h1>Response to the answer below</h1>
<p>The ARMA equation</p>
<p>$$y(n)=\sum_{0}^Ma(m)x(n−m)+\sum_{k=1}^Kb(k)y(n−k)$$</p>
<p>i.e. the MA part is as follows:</p>
<p>$$\sum_{0}^Ma(m)x(n−m)$$</p>
<p>and the AR part is as follows:</p>
<p>$$\sum_{k=1}^Kb(k)y(n−k)$$</p>
<p>which don't match equations $(1)$ and $(2)$. In equation $(1)$ the coefficient $A$ depends on $n$. The coefficient updates at each iteration.</p>
<h1>Probably an answer</h1>
<p>Parameters</p>
<pre><code>t <- seq(1, 4000, by = 1)
x<- sin(2*pi*t*2.3)
A<- rnorm(4000)
v<- rnorm(4000,0,0.001)
k <- 20
</code></pre>
<p>Equation $(1)$</p>
<pre><code>for(i in 1:4000){
if (i>k){
y[i]<- x[i]+ A[i]*sum(x[(i-(k-1)):i]) + v[i]
}else{
y[i]<- x[i]+ A[i]*sum(x[1:i])
}
}
</code></pre>
<p>Equation $(2)$ </p>
<pre><code>for(i in 1:4000){
if (i>k){
y[i]<- x[i]+ A[i]*(y[i-i%%k]) + v[i]
}else{
y[i]<- x[i]+ A[i]*(y[1]) + v[i]
}
}
</code></pre>
<p>Does the solution make sense? Or do I need to generate $x(n)$ using FIR and IIR filter?</p>
Answer: <p>The function <code>filter()</code> in the <code>signal</code> package does exactly what you are asking for. Note that, by default, it uses <a href="https://en.wikibooks.org/wiki/Signal_Processing/Digital_Filters#ARMA_Filters" rel="nofollow noreferrer">ARMA filters</a> (a combination of AR and MA). Thanks to that, you can implement both FIR and IIR filters, because the difference equation for an ARMA model is:</p>
<p>$$y(n) = \sum_{m=0}^{M} a(m)x(n-m)+\sum_{k=1}^{K} b(k)y(n-k)$$</p>
<p>To achieve your FIR filter, note that it is enough to set all $b$ coefficients to $0$. For the IIR filter, you should set $a(0)=1$ and every other value of $a$ to $0$.</p>
<p>Another maybe obvious yet important observation is that you can add the noise $v(n)$ at the end of the script, because the noise is white (at least that's what I understand from your question). Thus, there is no correlation between past and present samples of the noise, letting you add it after applying the filter.</p>
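A rough Python equivalent of the same idea (a sketch assuming SciPy is available; the coefficient values are made up for illustration), showing how zeroing one set of coefficients yields the FIR case and the other the pure-feedback IIR case:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(100)

# FIR: y[n] = x[n] + 0.5*x[n-3]  (all feedback coefficients are zero)
b_fir = [1.0, 0.0, 0.0, 0.5]
y_fir = lfilter(b_fir, [1.0], x)

# IIR: y[n] = x[n] + 0.5*y[n-3]  (numerator reduced to a leading 1)
a_iir = [1.0, 0.0, 0.0, -0.5]
y_iir = lfilter([1.0], a_iir, x)

# White noise can be added after filtering, as the answer notes
v = rng.normal(0.0, 1e-3, size=x.shape)
y = y_fir + v
```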
|
https://dsp.stackexchange.com/questions/49105/implementing-fir-iir-alike-filter-in-r
|
Question: <p>I have a sequence of 2-dimensional MxN frames.
I have concatenated these frames to form a 3-dimensional MxNxT matrix.
Now i want to filter this 3D volume by 2 types of filters (a 2D log-Gabor in xy-direction, and a 1D gaussian in z-direction).</p>
<p>Here is my MATLAB implementation:</p>
<pre><code>[yGrid,xGrid] = ndgrid(1:size(D,1),1:size(D,2));
sigmaXY = 0.5;
sigmaT = 1;
numFilts = length(Scales);
w = sqrt(yGrid.^2 + xGrid.^2);
bpFilt = zeros(size(D,1),size(D,2),size(D,3),numFilts);
for sc = 1:numFilts
for t = 1:size(D,3)
w0 = 1.0/Scales(sc);
bpFilt(:,:,t,sc) = exp((-(log(w/w0)).^2) / (2 * log(sigmaXY)^2)) * ...
exp(-t.^2) / (2 * sigmaT^2);
end
end
DF = fftn(D);
DFfilt = bsxfun(@times, DF, bpFilt);
</code></pre>
<p>where <code>D</code> is my 3D data.
The output <code>DFfilt</code> is then inverse-FFT'd and plotted. But all frames look the same in the output!</p>
<p>Where am I mistaken?</p>
<p>Is there any better suggestion to implement the filtering?</p>
Answer:
|
https://dsp.stackexchange.com/questions/49155/how-to-filter-a-sequence-of-frames
|
Question: <p>Reading the ARMA model for the first time, and I'm confused.</p>
<p>Let's say I have a time series</p>
<pre><code>x = [1, 2.1, 2.9, 3, 4.1]
</code></pre>
<p>According to the ARMA model, <span class="math-container">$X_t$</span> is a linear combination of previous values and errors, something like</p>
<p><span class="math-container">$$\sum_i \phi_i X_{t-i} + \sum_i \theta_i \epsilon_{t-i}$$</span></p>
<p>But, what are <span class="math-container">$X_i$</span> and <span class="math-container">$\epsilon_i$</span>??</p>
<ul>
<li>is <span class="math-container">$X_i$</span> the actual value of the series at <span class="math-container">$t=i$</span>? E.g., in my example <span class="math-container">$X_2 = 2.1$</span> (with 1-based indexing)?</li>
<li>if so, what are the error terms?</li>
</ul>
<p>The same question for the simple moving-average model, where all the <span class="math-container">$X_i$</span> values remain unused.</p>
Answer: <p>In the context of discrete-time statistical signal processing, an ARMA-(p,q) random process is defined as (assuming zero initial conditions)</p>
<p><span class="math-container">$$ \sum_{k=0}^p a_k ~ x[n-k] = \sum_{k=0}^q b_k ~ v[n-k] $$</span> </p>
<p>or equivalently</p>
<p><span class="math-container">$$ x[n] = - \sum_{i=1}^p a_i ~ x[n-i] + \sum_{i=0}^q b_i ~ v[n-i] $$</span> </p>
<p>where <span class="math-container">$v[n]$</span> is a white-noise (WSS) random process with variance <span class="math-container">$\sigma_v^2$</span> and <span class="math-container">$x[n]$</span> represents the resulting ARMA process.</p>
<p>In this context your coefficients are related as <span class="math-container">$$\phi_i = - a_i $$</span> and <span class="math-container">$$\theta_i = b_i$$</span> The process regresses over its past values <span class="math-container">$x[n-i]$</span> (<span class="math-container">$X_{t-i}$</span> in your notation) and is also a moving average of the input noise <span class="math-container">$v[n-i]$</span> (<span class="math-container">$\epsilon_{t-i}$</span> in your notation).</p>
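To make this concrete, here is a hedged Python sketch (coefficients chosen arbitrarily) that generates an ARMA(2,1) process from white noise and verifies the recursion sample-by-sample:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
v = rng.standard_normal(500)            # white-noise input v[n]

# ARMA(2,1):  x[n] = -a1*x[n-1] - a2*x[n-2] + b0*v[n] + b1*v[n-1]
a = [1.0, -0.5, 0.25]                   # a0, a1, a2
b = [1.0, 0.4]                          # b0, b1
x = lfilter(b, a, v)                    # the resulting ARMA process
```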
|
https://dsp.stackexchange.com/questions/52563/arma-ma-methods-how-do-you-know-the-error-terms
|
Question: <p>Given an input sequence <span class="math-container">$x = \{x_1, \ldots, x_n\}$</span> and a filter with <span class="math-container">$l$</span> elements <span class="math-container">$h = [h_1, \ldots, h_l]$</span>, with <span class="math-container">$l < n$</span>. We want to filter the input sequence with the specified filter. </p>
<p>My first thought was to compute the convolution between <span class="math-container">$x$</span> and <span class="math-container">$h$</span>:</p>
<p><span class="math-container">$y = \sum_{k=0}^{l-1} x(k) h(n - k)$</span></p>
<p>For this to work I'll have to pad x with zeros. </p>
<p><code>x = [ zeros(1, l) x zeros(1, l - 1)];</code></p>
<p>And then </p>
<p><code>for i = 1:length(x) - 2
y(k) = h * x(k:k + l)';
end
</code></p>
<p>Is this the right approach for this problem ?</p>
Answer:
|
https://dsp.stackexchange.com/questions/55106/applying-a-filter-on-an-input-sequence-in-matlab
|
Question: <p>I recently came across an old recording from the 1930s. Unfortunately, not only is the quality low, it's a bunch of carpenters talking as they saw things and make all sorts of other noises. </p>
<p>I've tried:</p>
<ul>
<li>Various filters </li>
<li>Audacity's native Noise Reducer</li>
</ul>
<p>and yet no luck. I find that either what I'm doing is not working, or, when I get the speech to sound louder than the tools, the amplitude is too low to make out anything. </p>
<p>Is there any well defined technique for this? </p>
Answer:
|
https://dsp.stackexchange.com/questions/55828/noise-reduction-how-can-i-filter-out-saw-or-other-tool-noises
|
Question: <p>Given some difference equation for a filter like </p>
<p><span class="math-container">$y[k] = ax[k] + bx[k-1]+ cy[k-1]$</span>, </p>
<p>how would you initialize it? Since it needs an old output value (feedback) to calculate the new output value, it seems like the equation will keep on chasing its own tail. I would be tempted to just let </p>
<p><span class="math-container">$y[k-1] = x[k-1]$</span> </p>
<p>initially to calculate a value for <span class="math-container">$y[k]$</span> after which we could just use that for <span class="math-container">$y[k-1]$</span>. There might be a better technique for this problem or even a really obvious solution to it, but I am at a loss on how. Most of the digital filter books/papers that I have read have done a great job on explaining how the filter gets derived, but the practical implementation requirements seem to be absent. </p>
<p>This is also my first time posting on Stack Exchange, so any comments on etiquette or style would be appreciated. Just be gentle ;) </p>
Answer: <p>If there is no good reason to choose otherwise, you would initialize with zeros, so in your case <span class="math-container">$y[-1]=0$</span>. Other initializations may be useful in certain situations, e.g., when processing blocks of data to avoid transients between blocks.</p>
<p>Note that if the filter is stable (which it has to be in almost all useful applications), the influence of the initial condition on the output becomes negligible after a while.</p>
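As an illustration in Python (a sketch assuming SciPy; the coefficient values are made up), zero initialization versus a steady-state initialization of the kind used when processing blocks:

```python
import numpy as np
from scipy.signal import lfilter, lfilter_zi

# y[k] = a*x[k] + b*x[k-1] + c*y[k-1]  ->  numerator [a, b], denominator [1, -c]
b = [0.3, 0.3]
a = [1.0, -0.4]
x = np.ones(50)                       # a step input

# Default: zero initial conditions, i.e. y[-1] = 0 and x[-1] = 0
y_zero = lfilter(b, a, x)

# Alternative for block processing: start the filter in its step-response
# steady state, so there is no start-up transient at all
zi = lfilter_zi(b, a)
y_ss, _ = lfilter(b, a, x, zi=zi * x[0])
```

With zero initialization the output starts at 0.3 and converges to the DC gain of 1; with the steady-state initial conditions the transient is absent from the first sample.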
|
https://dsp.stackexchange.com/questions/56084/what-is-used-to-initialise-difference-equations-that-require-data-from-the-outpu
|
Question: <p>I am trying to implement basic homomorphic filtering but I can't seem to understand what happens in the frequency domain when you take the natural logarithm of the function.</p>
<p>In Matlab I'm getting not a number or infinity everywhere. </p>
Answer: <p>Plus is the simplest operation : <span class="math-container">$z = x+y$</span>. Fourier is inherently linear, and good at addressing it. However, most processes and data combination are nonlinear, and they should be dealt with. The second simplest operation is <strong>multiplication</strong>. Homomorphic filtering deals with <span class="math-container">$z = x\times y$</span>. It is more complicated, especially because of zeroes. </p>
<p>Logarithms were invented to linearize products. Because <span class="math-container">$\log z = \log x + \log y$</span>, and then we get back to linear. The problem is that logarithms are not defined everywhere in a simple way, especially with negative numbers. So, it is necessary to deal with that, and I do not know of a naturally sound method. But classically, one offsets and scales data, such as with a modified logarithm:</p>
<p><span class="math-container">$$ \mathrm{l}_m (x) = \mathrm{sign}(x) \frac{\log(1+a|x|)}{\log(1+a)} $$</span></p>
<p>related to <code>companding</code>. When <span class="math-container">$a$</span> and <span class="math-container">$|x|$</span> are small, this is close to <span class="math-container">$$\mathrm{sign}(x) \frac{\log(1+a|x|)}{\log(1+a)}\approx \mathrm{sign}(x) \frac{a|x|}{a}\approx x$$</span> thus back to the linear case.</p>
<p>You can also check: <a href="https://dsp.stackexchange.com/a/52773/15892">Why do we substract a background image and not divide it?</a>, talking about LIP (Logarithm Image Processing).</p>
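A small Python sketch of such a modified logarithm (the function name and parameter value are illustrative):

```python
import numpy as np

def modified_log(x, a=255.0):
    """Sign-preserving modified logarithm (mu-law-style companding)."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.log1p(a * np.abs(x)) / np.log1p(a)

# Small a and small |x|: the mapping is close to the identity (linear case)
x = np.linspace(-0.01, 0.01, 11)
y_small = modified_log(x, a=1e-3)
```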
|
https://dsp.stackexchange.com/questions/58781/how-does-the-frequency-spectrum-of-a-signal-change-in-the-logarithmic-space
|
Question: <p>I have implemented FIR filter using tapped delay line method. I start getting output as soon as first input sample is passed to it, I am wondering from which sample I will get the proper output(without transient outputs) from filter.</p>
<p>Does it depend on the location of the highest-magnitude tap, or is it just dependent on the length of the filter? The number of filter coefficients I am using is N (odd), and they are symmetric.
For verifying, I took the following filter taps as an example:</p>
<p><span class="math-container">$F1$</span> <span class="math-container">$=$</span> <span class="math-container">$[1 0 0 0 0 0 0];$</span></p>
<p><span class="math-container">$F2$</span> <span class="math-container">$=$</span> <span class="math-container">$[0 1 0 0 0 0 0];$</span></p>
<p><span class="math-container">$F3$</span> <span class="math-container">$=$</span> <span class="math-container">$[0 0 1 0 0 0 0];$</span><br>
and so on,if I pass sine wave using filter coefficients <span class="math-container">$F1$</span> then the first sample of the output is itself meaningful but if I use <span class="math-container">$F2$</span> and <span class="math-container">$F3$</span> then meaningful output is from second and third output samples respectively. I am also wondering how matlab command <span class="math-container">$filter()$</span> works.</p>
Answer: <p>If it's a <strong>linear phase</strong> FIR filter, then the inputs signals will be shifted (delayed) by an amount of <strong>group delay</strong> at the output.</p>
<p>For a linear phase FIR filter of length <span class="math-container">$L = 2K+1$</span> the group delay will be <span class="math-container">$N = K$</span> samples. For even length <span class="math-container">$L = 2K$</span> FIR filters it will be at <span class="math-container">$N = (L-1)/2$</span> ; half-sample position.</p>
<p>For non-linear phase FIR filters, group delay will be dependent on the specific frequency of the applied input signals. It should be computed from</p>
<p><span class="math-container">$$ \tau = - \frac{d \phi(\omega)}{d\omega} $$</span></p>
<p>where <span class="math-container">$\phi(\omega)$</span> is the phase response of the filter. </p>
<p>The matlab function <em>filter(b,a,x)</em> computes all output samples beginning from the very first input sample, up to the input signal length. So the first samples are transients.</p>
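A hedged Python check of this (assuming SciPy; the symmetric filter taps and tone frequency are arbitrary example values):

```python
import numpy as np
from scipy.signal import group_delay, lfilter

# Symmetric (linear-phase) FIR of odd length L = 7  ->  delay K = (L-1)/2 = 3
h = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])
h /= h.sum()

# Group delay is a flat 3 samples (evaluated away from the filter's zeros)
w, gd = group_delay((h, [1.0]), w=[0.1, 0.5, 1.0, 2.0])

# A sinusoid emerges delayed by 3 samples (scaled by |H| at its frequency);
# the first L-1 = 6 output samples are the transients
n = np.arange(200)
x = np.sin(2 * np.pi * 0.05 * n)
y = lfilter(h, [1.0], x)
gain = np.abs(np.sum(h * np.exp(-1j * 2 * np.pi * 0.05 * np.arange(7))))
```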
|
https://dsp.stackexchange.com/questions/62337/meaningful-output-of-fir-filter-output
|
Question: <p>I am going through digital filter design. But I really get confused by so many filter types: </p>
<ol>
<li><p><strong>Digital filters</strong> : </p>
<ul>
<li>IIR </li>
<li>FIR </li>
<li>Moving Average</li>
<li>Linear phase filter</li>
<li>Allpass filter</li>
<li>Comb filter </li>
</ul></li>
<li><p><strong>Analog filter:</strong> </p>
<ul>
<li>Buterworth </li>
<li>Chebychev </li>
<li>Elliptic</li>
</ul></li>
</ol>
<p>Is there any specific application where each filter is used? </p>
<p>I suppose biomedical signal sensing or temperature sensor value filtering would use digital filters (?).</p>
<p>Please suggest.</p>
Answer:
|
https://dsp.stackexchange.com/questions/42146/digital-filter-selection
|
Question: <p>If I want to remove the baseline drift in my ECG signal, which digital filter should be used to avoid distortion and shift in my filtered output?
What do I have to look at to choose the proper type of filter (like Chebyshev, Butterworth, etc.)?</p>
Answer: <p>The easiest approach would be to use a high-pass FIR filter, mostly because it has a linear phase shift (constant group delay). Chebyshev and Butterworth are IIR-type filters, so their phase response is (mostly) nonlinear and may distort the output.</p>
<p>One can also consider using wavelet decomposition to eliminate the low-frequency components. There is a paper called "Baseline Drift Removal and De-Noising of the ECG Signal using Wavelet Transform"; it can easily be found on Google. Using MATLAB or Python one can check this out with low effort.</p>
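As a hedged Python sketch of the FIR approach (the sampling rate, cutoff, and signals are invented for illustration; real ECG work needs clinically justified cutoffs). Using `filtfilt` applies the filter forward and backward, so the net phase shift is zero and the waveform is not shifted:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 360.0                                   # assumed ECG sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)

ecg_like = np.sin(2 * np.pi * 8 * t)         # stand-in for ECG content
drift = 0.8 * np.sin(2 * np.pi * 0.2 * t)    # slow baseline wander
x = ecg_like + drift

# Linear-phase high-pass FIR; forward-backward filtering cancels the delay
h = firwin(numtaps=1001, cutoff=1.0, fs=fs, pass_zero=False)
y = filtfilt(h, [1.0], x)
```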
|
https://dsp.stackexchange.com/questions/74874/digital-filters
|
Question: <p>I have an LM35 sensor which gives a $10\textrm{ mV}$ signal per degree rise of temperature. Now I have one question: which digital filter will be the best for this?</p>
<p>This sensor will be used in a very noisy industrial environment. Also, digital filters have some cutoff frequency; what should the cutoff frequency be set to for this filter? The signal coming from the sensor is an analog signal, not a periodic signal.</p>
<p>One I know is the FIR moving average filter. This takes the mean of previous samples. This I do not want to use, as I have used it before,
$$
\frac 1N\sum_{n=0}^{N-1}x[k-n]
$$ </p>
<p>If I am right then DFT is not required as I am not interested in the spectrum analysis of the signal coming from LM35 sensor.</p>
<p>Is there any other DSP digital filter which I can use ?</p>
Answer: <p>It depends a lot on the kind of noise and how fast you need the sensor to track the true temperature, and how fast you need to read the temp.</p>
<p>If you expect large outliers, a median filter is robust to those kinds of perturbations and they can have a relatively short effective window, and a corresponding fast response.</p>
<p>Following the median filter with a conventional low pass is used in some applications.</p>
<p>As was pointed out in the comments, the better you can define your requirements, the better you can make your measurements.</p>
<p>The nice thing about using a median is that it is a rank statistic, which is core to nonparametric statistics, where fewer assumptions are made about the underlying probability distribution and there is a large literature on how to design hypothesis tests. </p>
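A hedged Python sketch of the median-then-low-pass idea (all signal parameters are invented for illustration):

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(2)

# Slowly rising "true" temperature plus small sensor noise
temp = 25.0 + 0.001 * np.arange(1000)
x = temp + rng.normal(0.0, 0.05, size=temp.shape)
x[100] += 10.0                         # large industrial-noise outliers
x[500] -= 15.0

# A short median filter rejects isolated spikes outright...
y_med = medfilt(x, kernel_size=5)

# ...while a plain moving average only smears them out
w = np.ones(9) / 9
x_ma = np.convolve(x, w, mode="same")

# Median filter followed by a conventional low pass, as described above
y = np.convolve(y_med, w, mode="same")
```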
|
https://dsp.stackexchange.com/questions/42230/digital-filter-selection-for-sensor
|
Question: <p>I am a beginner studying the notion and properties of filters.</p>
<p>For a real digital filter
(here by "real filter" I mean that its impulse response is real-valued),</p>
<p>the following formula holds. But I have no idea how to prove it:</p>
<p>$$ |H(\pi +w )|=|H(\pi -w)|$$</p>
<p>What should $H(w)$ or $|H(w)|$ be for a general proof procedure? </p>
Answer: <p><strong>HINT:</strong></p>
<p>From the definition of the DTFT</p>
<p>$$H(\omega)=\sum_{n=-\infty}^{\infty}h[n]e^{-jn\omega}\tag{1}$$</p>
<p>derive the following facts:</p>
<ol>
<li>$H(\omega)=H(\omega+2\pi)$</li>
<li>$H(\omega)=H^*(-\omega)$ for real-valued $h[n]$</li>
</ol>
<p>where $*$ means complex conjugation. Combine these two results to prove the given equation.</p>
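A quick numerical sanity check of the identity and of the two hinted facts (a Python sketch with an arbitrary real impulse response):

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.standard_normal(8)            # arbitrary real-valued impulse response

def H(w):
    """DTFT of h evaluated at the frequencies in w."""
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * np.atleast_1d(w)[:, None] * n), axis=1)

w = np.linspace(0, np.pi, 100)
lhs = np.abs(H(np.pi + w))
rhs = np.abs(H(np.pi - w))
```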
|
https://dsp.stackexchange.com/questions/29659/real-digital-filter-property
|
Question: <p>Would upscaling from a DVD to 4k on a TV be considered Digital Filtering?
Would an AI trained to suppress noise and enhanced desired signal be considered a Digital Filter?</p>
<p>I'm trying to figure out what counts as Digital filtering and what does not.</p>
<blockquote>
<p>In signal processing, a digital filter is a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal.</p>
</blockquote>
<p>By this definition from Wikipedia I would think so but I am not sure if I am being too open in my interpretation.</p>
<p>Or does the original signal have to be analog?</p>
<p>I have found no real list on this. Just the definitions.</p>
Answer: <blockquote>
<p>Would upscaling from a DVD to 4k on a TV be considered Digital Filtering?</p>
</blockquote>
<p>No, that's scaling. It might involve filtering, especially for <em>anti-imaging</em>.</p>
<blockquote>
<p>Would an AI trained to suppress noise and enhanced desired signal be considered a Digital Filter?</p>
</blockquote>
<p>It might. Does it filter? That's a bit up to definition. For example, a lot of voice codecs are designed to suppress noise and enhance intelligibility of the encoded speech.</p>
<p>There's machine learning-designed codecs of that kind. A central part in many such efficient representations of audio is that they reproduce something that <em>sounds like it contains the same information</em> from the space-efficient representation, but is really not just a filtering (in <em>my</em> opinion!) because they effectively re-synthesize what was said – they don't take a "full" audio recording and just suppress some part and emphasize some other.</p>
<p>Putting a low-pass filter after a microphone to reduce the noise and make the voice of someone telling a story clearer is filtering. Using AI to write down what was said (e.g. as syllables and intonation), and sending the resulting book to another machine for that to "read out" the story isn't really filtering anymore.</p>
<p>Such definitions of "what constitutes a filter" are subjective. All human language is. You'll have to live with a bit of ambiguity. All you can do is be precise when defining what <em>you</em> mean with "filtering" when it really matters.</p>
<blockquote>
<p>Or does the original signal have to be analog?</p>
</blockquote>
<p>no.</p>
|
https://dsp.stackexchange.com/questions/82575/what-is-considered-a-digital-filter
|
Question: <p>I am given a notch digital filter with the <span class="math-container">$z$</span>-transform being:
<span class="math-container">$$W(z)=MF(z)F(z^{*})^{*}=M\frac{z-q}{z-p}\frac{z-q^{*}}{z-p^{*}}$$</span>
where <span class="math-container">$M$</span> is the normalisation factor, <span class="math-container">$q=e^{-i2\pi\frac{f_0}{f_s}}$</span>, <span class="math-container">$p=(1-\epsilon)q$</span> and <span class="math-container">$0< \epsilon \ll 1$</span>. It is easy to see the zeros of this system are on the unit circle while the poles are inside the unit circle.</p>
<p>But how do we determine its stability? I think we must know whether this system is causal or anti-causal. I am very new to this subject so please be clear.</p>
Answer: <p>You probably know that a LTI filter is stable if and only if the poles of its transfer function are inside the unit circle.</p>
<p>So I guess you ask for a direct way of showing it in the case of your filter.</p>
<p>The transfer function of a filter is the <span class="math-container">$z$</span>-transform of its impulse response. To show the filter is stable or unstable we have to show that its impulse response does or doesn't converge to zero over time.</p>
<p>Therefore we have to calculate the impulse response, given the transfer function of your filter.</p>
<p>To make things easier we first rewrite <span class="math-container">$W(z)$</span> as</p>
<p><span class="math-container">$$W(z)=M\frac{(z-q)(z-q^*)}{(z-p)(z-p^*)}=M\frac{(1-qz^{-1})(1-q^*z^{-1})}{(1-pz^{-1})(1-p^*z^{-1})}$$</span></p>
<p>and knowing that convolution in the time domain leads to multiplication in the <span class="math-container">$z$</span>-domain, we will just look at the filter with the transfer function</p>
<p><span class="math-container">$$H(z) = \frac{1}{(1-pz^{-1})(1-p^{*}z^{-1})}$$</span></p>
<p>setting</p>
<p><span class="math-container">$$F(z) = \frac{1}{1-pz^{-1}}\text{, }G(z) = \frac{1}{1-p^{*}z^{-1}}$$</span></p>
<p>we get</p>
<p><span class="math-container">$$H(z)=F(z)G(z)\text{.}$$</span></p>
<p>We use the geometric series and write</p>
<p><span class="math-container">$$\sum_{l=0}^{\infty}p^lz^{-l} = F(z)\text{ and } \sum_{l=0}^{\infty}(p^*)^lz^{-l} = G(z)\quad (|z|>|p|)\text{.}$$</span></p>
<p>Therefore the impulse response for the filter with transfer function <span class="math-container">$F$</span> is <span class="math-container">$f_n = p^n\quad (n\in\mathbb{N})$</span> and for the filter with transfer function <span class="math-container">$G$</span> it is <span class="math-container">$g_n = (p^*)^n\quad (n\in\mathbb{N})$</span>.</p>
<p>Now we calculate the impulse response of the filter with transfer function <span class="math-container">$H$</span> by <span class="math-container">$f\circledast g$</span>:</p>
<p><span class="math-container">$$(f\circledast g)_n = \sum_{l=0}^{n}f_lg_{n-l} = \sum_{l=0}^{n}p^l(p^*)^{n-l}$$</span></p>
<p>Using polar representation for <span class="math-container">$p = |p|e^{i\varphi}$</span> and a geometric series we can deduce</p>
<p><span class="math-container">$$(f\circledast g)_n
= \sum_{l=0}^{n}|p|^le^{il\varphi}|p|^{n-l}e^{-i(n-l)\varphi}
=|p|^ne^{-in\varphi}\sum_{l=0}^{n}e^{2il\varphi}
=\left\lbrace\begin{array}{lr}
|p|^n(n+1)\text{,} & \text{for } \varphi = 0 \\
|p|^ne^{-in\varphi}\frac{1-e^{2i(n+1)\varphi}}{1-e^{2i\varphi}} & \text{for } 0<\varphi <\pi\\
(-1)^n|p|^n(n+1)\text{,} & \text{for } \varphi = \pi
\end{array}\right\rbrace$$</span></p>
<p>which converges to <span class="math-container">$0$</span> for <span class="math-container">$t\to\infty$</span> if and only if <span class="math-container">$|p| < 1$</span>. So for <span class="math-container">$|p| < 1$</span> the filter is stable and for <span class="math-container">$|p| \ge 1$</span> the filter is unstable.</p>
<p>Now you can find the impulse response <span class="math-container">$(l_n)_{n=0}^\infty$</span> for the FIR filter <span class="math-container">$\mathcal{L}$</span> with transfer function <span class="math-container">$L(z) = M(1-qz^{-1})(1-q^*z^{-1})$</span> (which is not so hard) and convolve it with <span class="math-container">$f\circledast g$</span> and deduce some inequalities to show <span class="math-container">$(l\circledast (f\circledast g))$</span> converges to <span class="math-container">$0$</span> over time if and only if <span class="math-container">$|p| < 1$</span>.</p>
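A hedged numerical companion in Python (the notch frequency and epsilon are example values): build the notch from its conjugate zero/pole pairs and confirm that the impulse response of the causal filter decays to zero, i.e. that it is stable for |p| &lt; 1:

```python
import numpy as np
from scipy.signal import lfilter

f0, fs, eps = 50.0, 1000.0, 0.05     # example notch frequency and epsilon
q = np.exp(-2j * np.pi * f0 / fs)
p = (1 - eps) * q

# Real-coefficient numerator/denominator from the conjugate zero/pole pairs
b = np.real(np.poly([q, np.conj(q)]))     # zeros on the unit circle
a = np.real(np.poly([p, np.conj(p)]))     # poles at radius 1 - eps < 1

# Impulse response of the causal filter: must decay to zero if stable
impulse = np.zeros(2000)
impulse[0] = 1.0
hn = lfilter(b, a, impulse)
```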
|
https://dsp.stackexchange.com/questions/87124/notch-digital-filter
|
Question: <p>How to subtract a digital filter from another one, if their lengths are different. How to make the length equal.</p>
Answer: <p>If (and only if) the filters (either FIR or IIR) are linear, then you can subtract their coefficients term-wise, supposing they are correctly aligned (below, on the $0$ index), treating them as infinite series with trailing zeroes when the coefficients are not defined. For instance with:</p>
<p>$$h = [h_0, h_1, h_2]$$
and
$$g = [g_{-1},g_0, g_1, g_2,g_3]$$</p>
<p>since the convolution with $h$ yields the same result as the convolution with a zero-padded $h$ (and $g$)</p>
<p>$$h' = [\ldots,0,0,h_0, h_1, h_2,0,0,\ldots]$$
the difference filter $d$ will have (symbolically $h'-g'$):</p>
<p>$$d = [\ldots,0,-g_{-1},h_0-g_{0}, h_1-g_{1}, h_2-g_{2},-g_{3},0,\ldots]$$</p>
<p>For FIR filters (or truncated IIR), you can limit to a finite sequence bounded by the smallest and the biggest index of both filters.</p>
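A hedged Python sketch of the alignment-and-subtract procedure (the helper name and example coefficients are invented; each impulse response is described by its samples plus the index of its first sample):

```python
import numpy as np

def subtract_filters(h, h_start, g, g_start):
    """Term-wise difference of two FIR impulse responses with possibly
    different lengths and different starting indices (implicit zero padding)."""
    start = min(h_start, g_start)
    stop = max(h_start + len(h), g_start + len(g))
    d = np.zeros(stop - start)
    d[h_start - start : h_start - start + len(h)] += h
    d[g_start - start : g_start - start + len(g)] -= g
    return d, start

# The example above: h = [h0, h1, h2] starting at index 0,
# g = [g_{-1}, g0, g1, g2, g3] starting at index -1
h = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
d, d_start = subtract_filters(h, 0, g, -1)
```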
|
https://dsp.stackexchange.com/questions/43331/subtract-a-digital-filter-from-another-filter
|
Question: <p>First of all, I'm new to DSP so excuse my simplified words.
I'm testing the performance of a digital filter on a (partly) noncontinuous signal:</p>
<p><img src="https://i.sstatic.net/X5ECG.png" alt=""></p>
<p>As you can see, the signal is not continuous at some points (like it is stopping and starting over again) When I apply the digital filter to it, I get this:</p>
<p><a href="https://i.sstatic.net/4C7mj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4C7mj.png" alt=""></a></p>
<p>The filter shows ripples at the noncontinuous areas, then it starts to work again. Why does the filter show this ripple at the noncontinuous areas? How do I calculate it (to know the overshoot, etc.)?</p>
Answer: <p>It is not surprising that the filter output resembles the filter's step response at discontinuities of the input signal. It's like applying a (modulated) step at the input. Apparently the cut-off frequency of the high-pass filter is higher than the frequency of the sinusoid, so the output goes to zero after each discontinuity. However, at each point of a discontinuity, the input signal contains frequencies above the cut-off frequency of the high-pass filter and these frequencies are passed to the output.</p>
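A hedged Python reproduction of the effect (the filter order, cutoff, and tone frequency are example values, not the OP's setup): the tone below the cutoff is suppressed, while the discontinuity excites a transient that approximately follows the filter's step response scaled by the size of the jump:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1000.0
b, a = butter(2, 50.0, btype="highpass", fs=fs)   # cutoff above the tone

n = np.arange(2000)
x = np.sin(2 * np.pi * 5.0 * n / fs)   # 5 Hz tone, well below the 50 Hz cutoff
jump = x[1049]                          # input value just before the cut (~1)
x[1050:] = 0.0                          # abrupt discontinuity

y = lfilter(b, a, x)

# Right after the cut the output approximately follows -jump times the
# filter's step response, since the dominant input change is a downward step
step_resp = lfilter(b, a, np.ones(200))
```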
|
https://dsp.stackexchange.com/questions/9701/digital-filter-performance-with-noncontinuous-signal
|
Question: <p>I have the following digital signal, which has been obtained by sampling an analog signal with sampling period <span class="math-container">$T_s = 100\,\mu\mathrm{s}$</span></p>
<p><a href="https://i.sstatic.net/JfBcY72C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfBcY72C.png" alt="enter image description here" /></a></p>
<p>I have evaluated the frequency spectrum of this signal with the help of <em>Scilab</em> <code>fft</code> command</p>
<pre><code>// analyzed signal length
N = length(x);
// frequency spectrum
X = fft(x);
// amplitude spectrum
M = abs(X)/N;
// phase spectrum
Phi = [];
for i = 1:length(X)
Phi(i) = atan(imag(X(i)), real(X(i)));
end
// normalized frequency
f = (0:(N/2))/N;
k = size(f, '*');
// graphs
scf();
plot(x);
title('<span class="math-container">$x(k)$</span>');
xlabel('<span class="math-container">$k$</span>');
scf();
subplot(2, 1, 1);
plot(f, M(1:k), 'LineWidth', 2);
title('Amplitude');
xlabel('<span class="math-container">$\frac{f}{f_s}$</span>');
subplot(2, 1, 2);
plot(f, Phi(1:k), 'LineWidth', 2);
title('Phase');
xlabel('<span class="math-container">$\frac{f}{f_s}$</span>');
</code></pre>
<p><a href="https://i.sstatic.net/w4hoZjY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w4hoZjY8.png" alt="enter image description here" /></a></p>
<p>From the amplitude part of the spectrum it is obvious that there is a <span class="math-container">$300\,\mathrm{Hz}$</span> component in the signal. I would like to extract this frequency component from the original signal. So I have created a digital narrow band pass filter (based on the <a href="http://www.dspguide.com/" rel="nofollow noreferrer">book</a>). Namely, there are formulas for such a filter in Chapter 19. I have chosen <span class="math-container">$f=0.03$</span> (<span class="math-container">$300\,\mathrm{Hz}$</span> at <span class="math-container">$10^4\,\mathrm{Hz}$</span> sampling frequency) and <span class="math-container">$bw=0.001$</span> (the passing frequencies will be <span class="math-container">$\left<295, 305\right>\,\mathrm{Hz}$</span>). I have created the frequency response of the filter with the help of <em>Scilab</em> because I was curious what it looks like</p>
<pre><code>// primary design inputs
bw = 0.001; // desired bandwidth as a fraction of sampling frequency
f = 0.03; // center of the bandpass as a fraction of sampling frequency (f = 300 Hz, fs = 10^4 Hz)
// auxiliary variables for design
R = 1 - 3.0*bw;
K = (1 - 2*R*cos(2*%pi*f) + R^2)/(2 - 2*cos(2*%pi*f));
// difference equation coefficients
a0 = 1 - K;
a1 = 2*(K - R)*cos(2*%pi*f);
a2 = R^2 - K;
b1 = 2*R*cos(2*%pi*f);
b2 = -R^2;
// coefficients of the polynomial in the numerator of the transfer function
B = [a0, a1, a2];
// coefficients of the polynomial in the denominator of the transfer function
A = [1.0, -b1, -b2];
// Frequency response
// polynomial in the numerator of the transfer function in z^-1
num = poly(B, 'invz', 'c');
// polynomial in the denominator of the transfer function in z^-1
den = poly(A, 'invz', 'c');
// relative frequency - w*k*Ts = 2*pi*f/fs*k, maximum frequency f = fs/2
// fr_max = f_max/fs = 0.5
fr = (0:0.0001:0.5);
// complex frequency response (transfer function is in z^(-1))
// z = exp(s*T) = exp([sigma + j*omega]*T)
hf = freq(num, den, exp(-%i*2*%pi*fr));
// amplitude
magnitude = abs(hf);
// phase
hf_imag = imag(hf);
hf_real = real(hf);
phase = atand(hf_imag, hf_real);
scf();
plot(fr, magnitude, 'Linewidth', 2);
title('Magnitude');
xlabel('<span class="math-container">$f_r = \frac{f}{f_s}$</span>');
ylabel('<span class="math-container">$Mag(H(z))\,(\mathrm{-})$</span>');
xgrid;
scf();
plot(fr, phase, 'Linewidth', 2);
title('Phase');
xlabel('<span class="math-container">$f_r = \frac{f}{f_s}$</span>');
ylabel('<span class="math-container">$Arg(H(z))\,(\mathrm{^\circ})$</span>');
xgrid;
</code></pre>
<p><a href="https://i.sstatic.net/XSWRIQcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XSWRIQcg.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/M6QWwr5p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6QWwr5p.png" alt="enter image description here" /></a></p>
<p>Then I have tried to process the input signal via the above digital filter (again in <em>Scilab</em>)</p>
<pre><code>y = filter(B, A, x);
scf();
subplot(2, 1, 1)
plot(t, x, 'b', 'Linewidth', 2);
xlabel('<span class="math-container">$t\,(\mathrm{s})$</span>');
legend('<span class="math-container">$x(k)$</span>');
xgrid;
subplot(2, 1, 2)
plot(t, y, 'r', 'Linewidth', 2);
xlabel('<span class="math-container">$t\,(\mathrm{s})$</span>');
legend('<span class="math-container">$y(k)$</span>');
xgrid;
</code></pre>
<p><a href="https://i.sstatic.net/ejTEuGvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ejTEuGvI.png" alt="enter image description here" /></a></p>
<p>I am not able to explain for myself why the filter output doesn't have constant amplitude. May I ask you for help?</p>
<p><strong>Edit</strong>:</p>
<p>I ran an experiment in which I sent <span class="math-container">$x(k) = \sin(2\pi\cdot\frac{300}{10\,000}\cdot k)$</span> to the filter input, and the signal at the filter output is a <span class="math-container">$300\,\mathrm{Hz}$</span> sine with fixed amplitude (after the transient decays).</p>
<p><a href="https://i.sstatic.net/9SrsSFKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9SrsSFKN.png" alt="enter image description here" /></a></p>
<p>Based on that, it could be said that the original filtered signal contains a <span class="math-container">$300\,\mathrm{Hz}$</span> sine with variable amplitude.</p>
Answer: <blockquote>
<p>Based on that it could be said that the original filtered signal contains 300Hz sine with variable amplitude.</p>
</blockquote>
<p>Pretty much. The FFT can only give the total energy in that frequency bin over the entire signal. The energy at a specific frequency can definitely vary over time, but the FFT cannot resolve this since it has no time dimension. For that you would need a transform that is a function of both frequency and time. Examples are the STFT (short-time Fourier transform) and the wavelet transform.</p>
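As an illustrative sketch of this (the 2 Hz amplitude modulation, the window length, and the use of SciPy's <code>stft</code> are all assumptions, not from the question), the STFT resolves a time-varying 300 Hz amplitude that a single FFT of the whole record cannot:

```python
import numpy as np
from scipy.signal import stft

fs = 10_000                        # sampling frequency, as in the question
t = np.arange(0, 2.0, 1 / fs)

# 300 Hz sine whose amplitude slowly varies (2 Hz envelope)
envelope = 1 + 0.8 * np.sin(2 * np.pi * 2 * t)
x = envelope * np.sin(2 * np.pi * 300 * t)

# A plain FFT lumps all of this into one 300 Hz bin; the STFT keeps
# a time axis, so the varying amplitude becomes visible.
f, tt, Zxx = stft(x, fs=fs, nperseg=1000)   # 10 Hz frequency resolution

bin_300 = np.argmin(np.abs(f - 300))        # index of the 300 Hz bin
amp_over_time = np.abs(Zxx[bin_300, 2:-2])  # skip edge frames (zero padding)
print(amp_over_time.max() / amp_over_time.min())  # clearly > 1, i.e. time-varying
```

Plotting <code>amp_over_time</code> against <code>tt[2:-2]</code> would show the 2 Hz envelope directly.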
|
https://dsp.stackexchange.com/questions/95524/digital-filter-output-understanding
|
Question: <p>I was just studying an old circuit analysis textbook that described how to design a Butterworth filter, and that seemed easy enough. Then I started to wonder if I could take this analog filter and convert it into a digital filter. It's not really an exercise in the textbook; I was just curious how to convert the analog filter into a digital filter, just for fun, without the heavy DSP theory.</p>
<p>So I tried taking a toy Butterworth filter and doing just that. For example, let's suppose I had an analog filter:</p>
<p><span class="math-container">$$H_a(j\Omega) = \frac{1}{(1+j\Omega)(2+j\Omega)}$$</span></p>
<p>and I wanted to convert this into a digital filter with, say, a sampling period <span class="math-container">$T=200\pi$</span> rad/sec, and, neglecting the effects of aliasing, using this formula:</p>
<p><span class="math-container">$$H(e^{j\omega}) = H_a(j\Omega)\Big|_{\Omega = \omega/T}$$</span></p>
<p>What would <span class="math-container">$H(z)$</span> and <span class="math-container">$h[n]$</span> look like for the digital filter?</p>
Answer: <p>Converting the analog filter <span class="math-container">$H_a(s)$</span> into a digital filter <span class="math-container">$H_d(z)$</span> using the <strong>bilinear transform</strong> where T is the sampling period:</p>
<p><span class="math-container">$\Large H_d(z) = H_a(s)\bigg|_{s=\frac{2}{T}\frac{z-1}{z+1}}$</span></p>
<p><strong>Example</strong>:</p>
<p>Given a first order Butterworth filter</p>
<p><span class="math-container">$H_a(s) = \frac{1}{1+RCs}$</span></p>
<p><span class="math-container">$H_d(z) = H_a\bigg( \frac{2}{T} \frac{z-1}{z+1} \bigg)$</span></p>
<p><span class="math-container">$H_d(z) = \frac{1}{1+RC\Big(\frac{2}{T}\frac{z-1}{z+1} \Big)}$</span></p>
<p><span class="math-container">$H_d(z) = \frac{1+z}{(1-2RC/T)+(1+2RC/T)z}$</span></p>
<p><span class="math-container">$H_d(z) = \frac{1+z^{-1}}{(1+2RC/T)+(1-2RC/T)z^{-1}}$</span></p>
<p>The coefficients of the denominator are the 'feed-backward' coefficients and the coefficients of the numerator are the 'feed-forward' coefficients used to implement a real-time digital filter.</p>
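A quick numerical sanity check of the derivation (the RC and T values below are arbitrary assumptions; SciPy's <code>bilinear</code> applies the same substitution):

```python
import numpy as np
from scipy.signal import bilinear

# Assumed example values: RC = 1 ms, sampling period T = 0.1 ms
RC, T = 1e-3, 1e-4
k = 2 * RC / T

# Hand-derived coefficients from the last line above:
# H_d(z) = (1 + z^-1) / ((1 + 2RC/T) + (1 - 2RC/T) z^-1)
b_hand = np.array([1.0, 1.0]) / (1 + k)
a_hand = np.array([1.0, (1 - k) / (1 + k)])

# SciPy performs the same substitution s = (2/T)(z-1)/(z+1)
# on the analog prototype H_a(s) = 1 / (1 + RC s)
b_sp, a_sp = bilinear([1.0], [RC, 1.0], fs=1 / T)
b_sp, a_sp = b_sp / a_sp[0], a_sp / a_sp[0]   # normalize so a[0] = 1

print(np.allclose(b_hand, b_sp), np.allclose(a_hand, a_sp))  # True True
```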
|
https://dsp.stackexchange.com/questions/63538/converting-a-simple-analog-butterworth-filter-into-a-digital-filter
|
Question: <p>I'd like to find some book or books to get information about how digital filter is built depending on specifications. Like depending on price, or speed and etc. As I know it could be built by some D flip-flops and summation blocks, but there are more kind of them depending on specs, so if it's possible to get book or website about this it would be great.</p>
<p>Thanks in advance.</p>
Answer: <p>Digital filters as dedicated pieces of hardware are quite rare these days. Most filtering just happens on general-purpose CPUs such as ARM cores, and may be partially accelerated by specific instruction sets or a co-processor (e.g. NEON). Your smartphone and your laptop do a lot of digital filtering every day.</p>
<p>Dedicated filter hardware is typically part of a special-purpose SOC (system on a chip) that's designed for a specific application (like a cable modem or an ANC processor). The filter implementations are optimized for the specific application and "configurable".</p>
|
https://dsp.stackexchange.com/questions/70010/digital-filter-as-physical-device
|
Question: <p>Having only dealt with digital filter scarcely, the question dawned to me when I used the firls function in matlab to design an equalizer with a certain gain response. </p>
<p>In general, can we prescribe an arbitrary shape as the equalizer filter response and hope to obtain a FIR/IIR filter that matches this shape within a set accuracy? </p>
<p><a href="https://dsp.stackexchange.com/questions/37646/filter-order-rule-of-thumb">Filter Order Rule of Thumb</a> explains the relationship between filter order and the required ripple, rejection and transition bandwidth. But my question is slightly different. The equalizer I'm designing doesn't have a stringent requirement on rejection or transition bandwidth; what matters is the in-band amplitude/phase matching the prescription. For example, the prescribed shape could be predominantly a simple straight line with a slope (to equalize linear gain slope), or, more practically, on top of that it could have slight perturbations here and there of higher orders. I'm thinking that if a polynomial of a certain order can model the prescribed in-band shape, then the FIR would require the same order. Question is, has this been studied before?</p>
Answer:
|
https://dsp.stackexchange.com/questions/42816/digital-filter-design-accuracy
|
Question: <p>Why is it that we Z-transform a difference equation to get a the transfer function of an digital filter?</p>
<p>How come a digital filter is given in the Z-domain, and what is the Z-domain?</p>
<p>And for that sake, why do analog filters operate in the S-domain, and what is the S-domain?</p>
Answer: <p>The S-transform allows you to deal with differential equations in an algebraic manner - so they become easier to solve. Since continuous/analog filters consist of integrators and differentiators the S-transform is therefore a natural way to deal with these systems.</p>
<p>The z-transform provides an algebraic way of dealing with finite difference systems and therefore it is a natural way to deal with discrete-time systems, i.e. digital ones.</p>
<p>In the S-transform, setting $s=j\omega$ results in the Fourier transform. For continuous time systems we are interested in whether poles are on the right or left side of the $j\omega$ axis, because that determines the stability.</p>
<p>For the Z-transform, setting $z=e^{j\omega}$ results in the Discrete-Time Fourier transform. To determine stability, we are interested in whether the poles are inside or outside the unit circle.</p>
<p>So the S and Z domains are similar - they allows you solve continuous and discrete time systems, respectively, using algebra.</p>
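A small numerical illustration of the two stability criteria described above (the filter coefficients below are arbitrary example values, not from the question):

```python
import numpy as np

# Z-domain: poles of H(z) = 1 / (1 - 1.5 z^-1 + 0.7 z^-2)
# are the roots of z^2 - 1.5 z + 0.7
a_z = [1.0, -1.5, 0.7]
poles_z = np.roots(a_z)
print(np.abs(poles_z))                     # both ~0.837 -> inside the unit circle
stable_z = bool(np.all(np.abs(poles_z) < 1))

# S-domain: poles of H(s) = 1 / (s^2 + 3 s + 2) are the roots of s^2 + 3 s + 2
a_s = [1.0, 3.0, 2.0]
poles_s = np.roots(a_s)
print(poles_s.real)                        # -1 and -2 -> left half-plane
stable_s = bool(np.all(poles_s.real < 0))

print(stable_z, stable_s)                  # True True
```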
|
https://dsp.stackexchange.com/questions/13612/digital-filters-and-the-z-transform
|
Question: <p>I'm new to digital filters. So I'm trying to get things right and I can't find an explicit answer to my question on the internet.</p>
<p><strong><em>Question: Do digital filters only accept samples as input?</em></strong> I mean the input cannot be zeros or ones; it can only be samples. If that's the case, then the typical position for a digital filter would be after the receiver (I mean before converting to bits).</p>
<p>So receiver would be like: </p>
<p>$$
\mbox{antenna} \rightarrow \mbox{amplifier} \rightarrow \mbox{Sampler} \rightarrow \mbox{Digital Filter} \rightarrow \underbrace{\mbox{DAC}}_{\tiny \mbox{here I should have filtered samples}}
$$</p>
<p>Please correct me and kindly ignore any coding or LNAs and stuff like that.</p>
Answer: <p>You are almost right: digital filters do deal with samples, but a sample can be any numerical representation of a given signal value at a given instant (so in general, they may accept zeros or ones).</p>
<p>Moreover, a sample is usually represented by a binary word (e.g. 0001), so a digital filter actually deals with 0s and 1s.</p>
|
https://dsp.stackexchange.com/questions/30851/digital-filters-deal-only-with-samples-right
|
Question: <p><em>To introduce my situation:</em> I'm developing a digital synthesizer in the form of a C++ library, working with low-level APIs like WASAPI, ASIO, ALSA etc. It's probably not very practical and I'm mostly "reinventing the wheel", but my intention is to learn about digital synthesis in depth. So far I have successfully implemented basic concepts like oscillators and modulation of their properties. The next logical step is a filter.</p>
<p><em>So my question is:</em> <strong>How does a digital filter work on this low level?</strong> How exactly does it modify the individual samples?</p>
<p>I understand, that this involves a lot of math. That's not a problem for me. I only need a good starting point (some sources to learn) and an intuitive explanation as all I was able to find were either analogue explanations or just formulas explained with a lot of advanced terminology that I'm not familiar with.</p>
Answer: <p>Consider a moving average over N samples: this is a simple FIR filter where each new output is the average of the past N samples. It is easy to see how high-frequency noise can be filtered out (so it is a low-pass filter), and the longer the time duration we include in the averaging window, the lower the frequency cutoff will be (just compare a stock market 30-day moving average to a 1-day moving average).</p>
<p>A moving average is a poor low pass filter, having a frequency response that approaches a Sinc function, which rolls off relatively slowly in frequency. By doing a weighted moving average where different samples are given different weights in the averaging process, we can significantly improve the frequency response - and coming up with the correct weights is the science of digital filter design.</p>
<p>IIR filters are similar, except we are performing the average with previous outputs instead of past inputs.</p>
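A minimal sketch of the moving-average idea (the signal and window length are made-up example values):

```python
import numpy as np

def moving_average(x, n):
    """Simple n-tap FIR low-pass: each output is the mean of the last n inputs."""
    taps = np.ones(n) / n            # equal weights -> plain moving average
    return np.convolve(x, taps, mode="valid")

rng = np.random.default_rng(0)
t = np.arange(1000)
slow = np.sin(2 * np.pi * t / 200)                  # low-frequency content
noisy = slow + 0.5 * rng.standard_normal(t.size)    # plus high-frequency noise

smoothed = moving_average(noisy, 11)

# The averaging attenuates the noise: the residual error shrinks
err_before = np.std(noisy - slow)
err_after = np.std(smoothed - slow[5:-5])   # align: 'valid' trims (n-1)/2 each side
print(err_before, err_after)
```

Replacing `taps` with a non-uniform set of weights is exactly the "weighted moving average" mentioned above, and choosing those weights well is the filter-design problem.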
|
https://dsp.stackexchange.com/questions/51755/how-does-a-digital-filter-work
|
Question: <p>I am designing a High pass Digital filter. I calculated the filter coefficients using fdatool and got some negative filter values.</p>
<p>Now, I need to convert them into values from 0 to 255.</p>
<p>In case of low pass filter , I used </p>
<pre><code>value = value*255/sum(value)
</code></pre>
<p>I have heard that same needs to be done for negative coefficients as well by taking absolute values of all the coefficients, but that seems a little odd to me.</p>
<p>Can anyone please tell me how to convert my filter coefficients to values ranging from 0 to 255?</p>
<p>Thanks</p>
Answer:
|
https://dsp.stackexchange.com/questions/1427/normalizing-negative-filter-coefficients-for-digital-filter-design
|
Question: <ul>
<li>What tools to use for practicing elementary filter design?</li>
<li>Is MATLAB all there is?</li>
<li>Do I need some specific toolboxes?</li>
<li>What functions do I need?</li>
</ul>
<p>I'm starting from the ground up in digital filter design and I thought that I need to decide on a program that lets me experiment with different filter designs (not pre-built functions, but rather self-written). Make plots and perhaps <code>.wav</code> outputs and such.</p>
<p>I'm looking for something higher level than C++, because I think the "design language" should allow for more rapid prototyping than the "implementation language".</p>
<p>The tools should facilitate the evaluation of filter designs.</p>
Answer: <p>There are lots and lots of software that can aid you in designing digital filters. MATLAB is probably the most used software, at least in the university sector. The <a href="http://au.mathworks.com/products/dsp-system/features.html#single-rate-and-multirate-fir-and-iir-filter-design%2C-and-adaptive-filters" rel="nofollow">DSP toolbox</a> and the <a href="http://au.mathworks.com/products/signal/features.html#digital-and-analog-filters" rel="nofollow">signal processing toolbox</a> probably cover all of the well-known methods for digital filter design, as already mentioned.</p>
<p>Alternatives to MATLAB (that are free) include: <a href="http://julialang.org" rel="nofollow">Julia</a>, <a href="https://www.gnu.org/software/octave/" rel="nofollow">Octave</a>, <a href="http://www.scilab.org" rel="nofollow">Scilab</a>, and <a href="https://www.scipy.org" rel="nofollow">SciPy</a> (Python with libraries for technical computing). There are a lot of others, but these are the ones that I know have high quality libraries/methods for filter design.</p>
<p>For Julia, you can use the <a href="http://dspjl.readthedocs.io/en/latest/filters.html" rel="nofollow">Filters.jl</a> package, for Octave the <a href="http://octave.sourceforge.net/signal/overview.html" rel="nofollow">signal package</a>, for Scilab the <a href="https://help.scilab.org/docs/5.5.2/en_US/section_dbbac6be408104de3049eddefaf6b9c9.html" rel="nofollow">Signal Processing toolbox</a>, and for SciPy you have <a href="http://docs.scipy.org/doc/scipy/reference/signal.html" rel="nofollow">scipy.signal</a>. </p>
<p>A lot of filter design is done using optimization, and if you want to customize the cost-function or make other tweaks to e.g. the least-squares method or the Parks-McClellan method, you could have a look at a high-level optimization library such as <a href="http://cvxr.com/cvx/" rel="nofollow">CVX</a> (works with MATLAB and Julia ++), or <a href="http://www.juliaopt.org" rel="nofollow">JuMP</a> for Julia.</p>
<p>These are just the tools that I have a fair bit of familiarity with...</p>
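As a small taste of what these free tools offer, here is a SciPy sketch of an equiripple (Parks-McClellan) low-pass; the tap count and band edges are arbitrary example values:

```python
import numpy as np
from scipy import signal

# Parks-McClellan (equiripple) low-pass: passband up to 0.2, stopband from 0.3
# (frequencies given as a fraction of the sample rate, fs = 1)
taps = signal.remez(numtaps=45, bands=[0, 0.2, 0.3, 0.5],
                    desired=[1, 0], fs=1)

# Check the result against the specification
w, h = signal.freqz(taps, worN=2048, fs=1)
mag = np.abs(h)

passband = mag[w <= 0.19]
stopband = mag[w >= 0.31]
print(passband.min(), passband.max(), 20 * np.log10(stopband.max()))
```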
|
https://dsp.stackexchange.com/questions/31153/basic-tools-for-digital-filter-design
|
Question: <p>I have the following question: in digital filter design, what is the difference between the transformation methods bilinear vs. impulse invariance vs. Euler vs. step invariance?
Thank you!</p>
<hr />
<p><a href="https://i.sstatic.net/FwRTY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FwRTY.png" alt="enter image description here" /></a></p>
<p>Here is a picture of what I got.</p>
Answer:
|
https://dsp.stackexchange.com/questions/86536/transformation-methods-for-digital-filters
|
Question: <p>I am getting conflicting advise regarding how to clean my EEG data:</p>
<p>1) Manually remove artefacts first and then apply digital filters</p>
<p>OR</p>
<p>2) Apply digital filters first and then manually remove artefacts</p>
<p>The reason given for 1) is because artefacts are more visible and avoids accidentally accepting artefacts as EEG data.</p>
<p>The reason given for 2) is because it avoids accidentally rejecting EEG data.</p>
<p>Both reasoning makes sense to me and I am a little confused as to what I should do: 1) or 2). Anyone can advise? Thank you very much.</p>
Answer: <p>When it comes to preprocessing of EEG data, the first step is to filter the signal. Filtering is done such that it preserves information across different bands of the EEG data (alpha, beta, gamma, delta). Typically, many researchers use 0.5 - 50 Hz band in the first step (removes DC components). Note that the filter should be zero-phase such that no delay or phase changes are introduced in the EEG data on using the filter. One simple way is to use <code>filtfilt</code> command in MATLAB.</p>
<p>In addition, a notch filter is also designed to remove 60 Hz. You might be wondering why we need a notch filter at 60 Hz if we are filtering the original signal using a band-pass filter of 0.5 - 50 Hz. To answer this, we need to first understand the structure of the band-pass filter. If the band-pass filter is of lower order, then the transition bands are not very steep (or sharp). This in turn introduces some part of the 60 Hz interference into the band-pass filtered signal (which typically dominates in EEG data). So, to suppress the strong 60 Hz interference, a notch filter is also used.</p>
<p>After filtering the EEG data using a band-pass and a notch filter, you might want to address different types of artifacts that show up in the EEG data. Some of the them include EKG artifact, movement artifact, sweat artifact, etc. Each artifact removal requires a sophisticated approach such that the EEG data is preserved but the artifact is removed completely. </p>
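A hedged SciPy sketch of the band-pass-plus-notch step described above (the synthetic signal, sample rate, filter orders and notch Q below are assumptions; <code>sosfiltfilt</code>/<code>filtfilt</code> play the role of MATLAB's <code>filtfilt</code>, and 60 Hz mains is used as in the answer):

```python
import numpy as np
from scipy import signal

fs = 250                        # assumed EEG sample rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)           # stand-in for alpha-band activity
mains = 0.5 * np.sin(2 * np.pi * 60 * t)   # 60 Hz interference
x = eeg + mains

# Zero-phase band-pass 0.5-50 Hz (forward-backward filtering: no phase shift)
sos = signal.butter(4, [0.5, 50], btype="bandpass", fs=fs, output="sos")
y = signal.sosfiltfilt(sos, x)

# Notch at 60 Hz to suppress what leaks through the band-pass transition band
b_notch, a_notch = signal.iirnotch(60, Q=30, fs=fs)
y = signal.filtfilt(b_notch, a_notch, y)

spec = np.abs(np.fft.rfft(y))
f = np.fft.rfftfreq(y.size, 1 / fs)
# Residual 60 Hz relative to the preserved 10 Hz component (should be tiny)
print(spec[np.argmin(np.abs(f - 60))] / spec[np.argmin(np.abs(f - 10))])
```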
|
https://dsp.stackexchange.com/questions/52050/manual-filter-or-digital-filter-first-for-eeg
|
Question: <p>I have 2 simultaneous signals that both are designed to measure eye movements.
They are sampled at 250 Hz.
We have 12 subject recordings.
For 3 stable periods in each subject, we choose 256 points and did an FFT.
Prior to the FFT, the data were mean-centered and detrended with a 2nd
order polynomial. They were also windowed with a Hann window.
We are focused on the magnitude spectra plots.<br>
We have a total of 12 X 3 = 36 magnitude spectra, which we average.
These averages are shown in the attached figure. </p>
<p><a href="https://i.sstatic.net/BVdnY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BVdnY.png" alt="enter image description here"></a></p>
<p>My hypothesis is that the second signal is a low-pass filtered version of
the first. The filter has ringing in the passband.</p>
<p>I want to design a digital filter that I would apply to signal 1 that,
after fft, analyses would produce a magnitude spectra like that of
signal 2.</p>
<p>How would I go about this?</p>
Answer: <p>Signal 1 was an average of a left and right signal and signal 2 was a binocular signal.</p>
<p>I followed the suggestion of MBaz, and computed the frequency response of the filter for each segment (N = 3) for each subject (N=12). Then I averaged the frequency responses of the filters. Here is the result:
<a href="https://i.sstatic.net/3NhKQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3NhKQ.png" alt="enter image description here"></a></p>
<p>MBaz and Matt: Thank you so much for solving my problem.
Matt, it looks close to your filter.
Lee</p>
|
https://dsp.stackexchange.com/questions/60098/reverse-engineering-a-digital-filter
|
Question: <p>I want to design a digital filter for pulse shaping. The pulses have a 100 us fall time, the sampling rate is 100 MegaSamples/sec, and the shaping time is 5 us. What should my coefficients be, and how can I obtain them using MATLAB or any other related software?</p>
Answer:
|
https://dsp.stackexchange.com/questions/2760/finding-the-coefficients-of-the-digital-filter
|
Question: <p>By negative frequency, I refer to Fourier transform. Often, the frequency response of a digital filter is only displayed for positive frequencies. For a linear IIR digital filter, what happens for negative frequencies? Are frequency response for negative frequencies mirror images of what happens for positive frequencies?</p>
Answer: <p>It may be easier to start with analog signals. If the filter's impulse response $h(t)$ is real, then its frequency response $H(f)$ is conjugate symmetric: $H(-f)=H^*(f)$, where $(\cdot)^*$ indicates complex conjugate. Since $H(f)=|H(f)|e^{j\angle H(f)}$, then conjugate symmetry implies that $|H(f)|=|H(-f)|$ and $\angle H(f) = - \angle H(-f)$. This is true for any kind of filter, FIR or IIR.</p>
<p>So, if the filter input $x(t)$ is $$x(t)=A_0\cos(2\pi f_0t+\phi_0)=\frac{A_0}{2}\left(e^{j2\pi f_0t}e^{j\phi_0}+e^{-j2\pi f_0t}e^{-j\phi_0}\right),$$ then the filter output is </p>
<p>$\begin{align*}
y(t) &= \frac{|H(f_0)|A_0}{2}\left(e^{j2\pi f_0t}e^{j(\phi_0+\angle H(f_0))}+e^{-j2\pi f_0t}e^{-j(\phi_0+\angle H(f_0))}\right) \\
&= |H(f_0)|A_0\cos\left(2\pi f_0t+\phi_0+\angle H(f_0)\right).
\end{align*}$</p>
<p>As you can see, the amplitude of negative frequencies are affected exactly the same as positive frequencies. The phases are affected in a similar fashion, but with phase of opposite sign for negative frequencies. Note that this is necessary to preserve the symmetries that real signals have in the frequency domain.</p>
<p>In the case of discrete signals and filters, the conclusion regarding negative frequencies is the same. Finding the actual gain and phase at a particular frequency is slightly more difficult, though. The reason is that the DFT of the filter's impulse response defines the filter's gain and phase only at a finite number of frequencies. In general, the filter's gain and phase at an arbitrary frequency $f_0$ can be found by evaluating the filter's transfer function at $z=e^{j2\pi f_0}\,$.</p>
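The conjugate-symmetry statement is easy to verify numerically for an arbitrary real-coefficient digital filter (the coefficients below are made-up example values):

```python
import numpy as np

# Any digital filter with real coefficients; values here are arbitrary examples
b = np.array([0.2, 0.5, 0.2])      # numerator (feed-forward)
a = np.array([1.0, -0.4, 0.1])     # denominator (feed-backward)

def H(w):
    """Evaluate the transfer function at z = exp(j*w)."""
    n = np.arange(len(b))
    zinv = np.exp(-1j * w * n)      # powers of z^-1
    return (b @ zinv) / (a @ zinv)

for w in [0.1, 1.0, 2.5]:
    # Conjugate symmetry H(-w) = H(w)* for real b, a ...
    assert np.isclose(H(-w), np.conj(H(w)))
    # ... hence equal magnitude and opposite phase
    print(abs(H(w)), abs(H(-w)), np.angle(H(w)), np.angle(H(-w)))
```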
|
https://dsp.stackexchange.com/questions/24386/for-linear-iir-digital-filter-what-happens-for-negative-frequencies
|
Question: <p>I am reading a chapter on designing digital filters from analog filter designs using difference equations. The first thing they do is map <span class="math-container">$s$</span> (the Laplace variable) to <span class="math-container">$z$</span> (the <span class="math-container">$z$</span>-transform variable) by the following relation,
<span class="math-container">$$z = \frac{1}{1-sT}$$</span> where <span class="math-container">$T$</span> is the sampling period. Then there is a complete paragraph on how a derivative in continuous time can be approximated by a difference equation in discrete time. I have two doubts. The first doubt is regarding the mapping done above. If I replace <span class="math-container">$s=j\Omega$</span>, where <span class="math-container">$\Omega$</span> is the continuous-time angular frequency, I get <span class="math-container">$$z=\frac{1}{1-j\Omega T}$$</span>. The book says that this can be further simplified to <span class="math-container">$$z=\frac{1}{2}(1+e^{j2\tan^{-1}(\Omega T)})$$</span> That's fine, and I have no problem with the simplification, but now it says that this is not the unit circle but a circle with center at <span class="math-container">$z=1/2$</span> and radius equal to <span class="math-container">$1/2$</span>. I didn't understand this.</p>
<p>My second doubt is regarding this statement from the book.</p>
<blockquote>
<p>If a bandlimited analog <strong>signal</strong> is sampled at the Nyquist rate, then the spectrum is non-zero over the entire unit cicle. If sampling period <span class="math-container">$T$</span> is sufficiently small, then the response of the digital filter will be concentrated on the small circle in the vicinity of <span class="math-container">$z=1$</span>.</p>
</blockquote>
<p>I am unable to prove this statement mathematically and I am even unable to understand it intuitively. The second line of the quoted text could be proved easily by the same mapping equation, but the first line is still not clear. Please help.</p>
Answer: <p>Let me show you a quick and easy way to see that <span class="math-container">$z=\frac12+\frac12 e^{j\phi}$</span> is a circle in the complex plane with center <span class="math-container">$\frac12$</span> and radius <span class="math-container">$\frac12$</span>. First, note that <span class="math-container">$z=e^{j\phi}$</span> is the unit circle, i.e., a circle centered at <span class="math-container">$z=0$</span> with radius <span class="math-container">$r=1$</span>. Changing the radius is easy, just multiply the expression for the unit circle with some positive number <span class="math-container">$r$</span>: <span class="math-container">$z=re^{j\phi}$</span>. If you want to change the center of the circle just shift it by adding a complex number <span class="math-container">$z_0$</span>; this number is the new center: <span class="math-container">$z=z_0+re^{j\phi}$</span>. Comparing this general equation for a circle in the complex plane with the given expression gives <span class="math-container">$z_0=\frac12$</span> (i.e., the circle is centered at <span class="math-container">$z_0=\frac12$</span>), and <span class="math-container">$r=\frac12$</span>, i.e., the circle has a radius of <span class="math-container">$\frac12$</span>.</p>
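This is also easy to confirm numerically: every image point of the mapping z = 1/(1 - jΩT) lies on that circle (the T value below is an arbitrary assumption):

```python
import numpy as np

# The backward-Euler mapping z = 1/(1 - s*T) evaluated on the j-Omega axis
T = 1e-3                                   # assumed sampling period
Omega = np.linspace(-1e5, 1e5, 10001)      # continuous-time frequencies
z = 1 / (1 - 1j * Omega * T)

# Every image point lies on the circle |z - 1/2| = 1/2;
# Omega = 0 maps to z = 1, and Omega -> +/-inf approaches z = 0.
print(np.max(np.abs(np.abs(z - 0.5) - 0.5)))   # ~0 up to rounding
```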
<p>Concerning the second part of your question, I think you should not try to directly relate that statement to the given mapping (that's maybe where your confusion comes from). The spectrum <span class="math-container">$X_d(e^{j\omega})$</span> of a discrete-time signal <span class="math-container">$x_d[n]$</span> that is obtained from sampling a continuous-time signal <span class="math-container">$x(t)$</span> is given by the aliased spectrum <span class="math-container">$X(\omega)$</span> of the continuous-time signal:</p>
<p><span class="math-container">$$X_d(e^{j\omega})=\frac{1}{T}\sum_kX\left(\omega-\frac{2\pi k}{T}\right)\tag{1}$$</span></p>
<p>where <span class="math-container">$T$</span> is the sampling interval (i.e., the inverse of the sampling frequency <span class="math-container">$f_s$</span>). Now if <span class="math-container">$x(t)$</span> is sampled at the Nyquist rate, which is the minimum rate that avoids aliasing, there will be no gap between the shifted spectra in the sum of Eq. <span class="math-container">$(1)$</span>, i.e., the discrete-time frequency axis will be completely filled with images of the continuous-time signal. On the other hand, if the sampling frequency is higher than the Nyquist rate, there will be gaps between the images. If the sampling frequency is very high compared to the upper frequency limit of the continuous-time signal then the gaps between the images will be very large, and the spectrum of the discrete-time signal will be concentrated around frequency zero, i.e., around <span class="math-container">$z=1$</span>.</p>
|
https://dsp.stackexchange.com/questions/53218/digital-filter-design-using-difference-equations
|
Question: <p>Of the four classic analog filter types: Butterworth, Chebyshev, Elliptic and Bessel- are any of these relegated to obsolescence for purposes of digital filter design in comparison to optimized algorithms such as least squares (<code>firls</code>), Parks-McClellan (<code>firpm</code> or <code>remez</code>), maximally flat (<code>maxflat</code>), etc?</p>
<p><em>(NOTE: This question is not about the common and useful application of simple IIR structures for loop filters, leaky accumulators, notch filters, or about using optimized IIR structures as direct digital designs. My question is specific to the approach of designing higher-performance low-pass, high-pass or band-pass structures specifically by copying the analog classics -- there may be actual utility in doing this beyond my current narrow view).</em></p>
<p>I have been taught (fred harris and others) to avoid the trap of "copying the analog" given those techniques with the classic types are limited to what we can feasibly do with a relatively low number of inductors and capacitors, while in the digital world we have the full power of the underlying mathematics and scalability with simple delays and multiplies (and non-linear commutators for multi-rate design resulting in very efficient FIR structures). My use of the mapping from s to z (as was common prior to the late 1960's given the wealth of knowledge in analog filter design) is mostly limited to simulation and modelling of existing analog filters, but not for the creation of new digital filters for common low pass, high pass and band pass structures.</p>
<p>That said I could be missing succinct and good practical applications beyond modelling and simulation where "copying the analog" would result in the better solution.</p>
<p>The best answer will list out applications for common low pass, high pass and band pass filters designs (not notch filters where an IIR would certainly rule) that the optimized algorithms specific to FIR filters (including optimized multi-rate structures) cannot possibly surpass in performance, for any of the class filter types (or prove why the optimized algorithms are always preferred if that is the case). A second best answer if it's only an assumed statement or suspicion is to at least demonstrate a specific case of such a mapped filter to allow for testing against an "optimized" direct digital solution.</p>
Answer: <blockquote>
<p>and simulation where "copying the analog" would result in the better solution.</p>
</blockquote>
<p>That's missing the point a bit. It's not that one cares much about matching or copying the "analog", but that digital IIR filters have some very nice and useful properties.</p>
<p>For example in audio, IIR filter are very common place and I use Butterworths on a daily basis.</p>
<p>The most straightforward reason is simply filter length. To do anything meaningful at 40 Hz when your sample rate is 48 kHz, an FIR filter needs to be thousands of taps long. In some cases you can use multi-rate filters to implement this very efficiently, but I'm not sure if this is universally applicable. Another possible option is FFT-based filtering (overlap-add, etc.), but that presents a non-trivial trade-off between latency and efficiency.</p>
<p>If you stick with linear-phase filters, the latency quickly gets prohibitively large and you lose causality. Technically you can turn any linear-phase FIR into a minimum-phase FIR by simply inverting the zeros that are outside the unit circle, but that's a numerically awkward procedure at high orders.</p>
<p>The human ear is also fairly insensitive to monaural phase distortions (e.g. minimum phase) as long as its causal. However, it's very sensitive to pre-ringing, so linear phase filters can be perceptually problematic.</p>
<p>A more "philosophical" reason is that the human auditory system uses a more or less logarithmic frequency axis, and IIR filters are a much more natural fit for this. For example, an IIR octave bandpass filter at 8 kHz looks exactly the same as one at 125 Hz (other than the center frequency). The FIR filters would be dramatically different.</p>
<p>IIRs are great for crossovers: odd-order Butterworth low-pass and high-pass filters sum to a flat frequency response (although the sum is typically an all-pass). For even orders, you can use Linkwitz-Riley filters (which are based on Butterworths). You could use linear-phase FIRs instead, but again you run into latency and causality problems.</p>
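The "sums to flat" property is easy to check numerically (the cutoff and sample rate below are arbitrary example values; SciPy's <code>butter</code> designs the same Butterworth filters via the bilinear transform):

```python
import numpy as np
from scipy import signal

fs = 48_000
fc = 1_000        # crossover frequency (example value)

# Odd-order Butterworth low-pass and high-pass at the same cutoff
sos_lp = signal.butter(3, fc, btype="lowpass", fs=fs, output="sos")
sos_hp = signal.butter(3, fc, btype="highpass", fs=fs, output="sos")

w, h_lp = signal.sosfreqz(sos_lp, worN=4096, fs=fs)
_, h_hp = signal.sosfreqz(sos_hp, worN=4096, fs=fs)

# The sum is an all-pass: flat magnitude (the phase, however, is not linear)
print(np.max(np.abs(np.abs(h_lp + h_hp) - 1)))   # ~0
```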
<p>It's also very easy to do things like real time adjustable high and low pass filters: It's very easy to design Butterworth filters on the fly or to simply table up the pole locations and interpolate. Filter order always stays constant and you are guaranteed to have the same slope & shape.</p>
<p>In other words: turn on any loudspeaker in your home and you will be hearing plenty of Butterworth & friends in action :-)</p>
<h1>Example 1</h1>
<p>This is just a very simple highpass you would find in any garden-variety active speaker: 40 Hz sampled at 48 kHz. I designed both a Butterworth highpass and a "matching" FIR using <code>firls()</code>. Matching was done manually so they look roughly the same. I needed about 6000 taps on the FIR filter to get it in the ballpark.</p>
<pre><code>% IIR
fs = 48000;
[z,p,k] = butter(6,40*2/fs,'high');
% FIR ("matching" response, tuned by hand)
h = firls(6000,[0 10 65 fs/2]*2/fs,[0 0 1 1])';
</code></pre>
<p>The transfer functions look like this.
<a href="https://i.sstatic.net/nTnOH.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nTnOH.jpg" alt="enter image description here" /></a></p>
<p>To me the FIR seems very much inferior in terms of latency and memory footprint. A 6th-order Butterworth has only 13 coefficients and 6-8 state variables. You can reduce this even further by leveraging the fact that the zeros are all at <span class="math-container">$z = 1$</span>.</p>
<h1>Example 2</h1>
<p>This one expands on the previous example by making the cutoff frequency of the high-pass adjustable in real time. This creates a "sliding high-pass" filter, which is commonly found in sealed smart speakers. I tried to make the requirements realistic, as you would find them in real products.</p>
<p>The Butterworth allows an extremely efficient implementation with no audible artifacts, no inherent latency, very low memory footprint and very low CPU consumption.</p>
<pre><code>%% Script to implement a sliding high pass filter, that can be adjusted on the fly
% This type of "sliding highpass" is typically used in smart speakers with a
% "closed box" topology to control trasnducer excursion at high output
% volume
%
% Requirements:
% - sample rate: 48 kHz
% - frame size: 128 samples
% - cutoff frequency varies from 40 Hz to 100 Hz
% - highpass slope: 36 dB/octave. In other words level at half the cutoff
% frequency should be <= -36 dB
% - Level at cutoff frequency not less than -3dB
% - passband ripple: < .01 dB above 200 Hz (where spectral perception is
% more sensitive)
% - cutoff frequency is updated once per frame
% - no additional latency
% - smooth updating filter, no pops or clicks
%
% Implementation
% - 6th order butterworth
% - pole location and filter gain as a function of frequency are
% approximated as a polynomial fit. At these low frequencies, a linear
% fit (1st-order polynomial) works perfectly.
% - The filter coefficients are calculated once per frame based on the
% current cutoff frequency
% - Filter is implemented as cascaded second order sections in Direct Form
% I. For Direct Form I, the state variables are simply the inputs and
% outputs, so updating the filter doesn't create a discontinuity in the
% state variables.
%
% Test
% - input signal is a 50 Hz sine wave, low frequency sine waves are very
% sensitive to artifacts
% - cutoff frequency input is a mixture of step function, up and down sweeps
% and uniformly distributed random numbers between 40Hz and 100Hz
%
%% Table up the poles and the gain of the filters, perform polynomial fit
ord = 6;
fs = 48000;
fr = (40:.1:100)';
nfr = length(fr);
p0 = cell(nfr,1); % pole locations
k0 = zeros(nfr,1); % filter gains
for i = 1:nfr
[z,p,k] = butter(ord,2*fr(i)/fs,'high');
p0{i} = p(imag(p)>0);
k0(i) = k;
end
p0 = [p0{:}].'; % convert cell array to regular array
%% do a simple linear fit for the poles and gains
% we fit the real part and the gain in (1-x) since they are very close to 1
ppPoles = cell(3,2);
for i = 1:3
ppPoles{i,1} = polyfit(fr,1-real(p0(:,i)),1); % 1 minus real part
ppPoles{i,2} = polyfit(fr,imag(p0(:,i)),1); % imaginary part
end
% and the gains in 1-k
ppGain = polyfit(fr,1-k0,2);
%% test the polyfit, calculate the resulting transfer functions at a few
% frequencies
frTest = 40:10:100;
nfrTest = length(frTest);
nFFT = 16384*2;
d0 = zeros(nFFT,1); d0(1) = 1;
fy = zeros(nFFT,nfrTest);
sos = zeros(ord/2,6);
sos(:,1) = 1; sos(:,2) = -2; sos(:,3) = 1;
for i = 1:nfrTest
f = frTest(i);
for ip = 1:3
x = 1-polyval(ppPoles{ip,1},f); % real part
y = polyval(ppPoles{ip,2},f); % imaginary part
sos(ip,4:6) = [1 -2*x x^2+y^2];
end
k = 1-polyval(ppGain,f);
fy(:,i) = fft(k*sosfilt(sos,d0));
end
% this checks out fine
%% Real time part, preparation
frameSize = 128; % frame size
frameRate = fs/frameSize; % number of frames per second
% let's do 10 seconds but an integer number of frames
nx = 10*fs;
numFrames = floor(nx/frameSize);
nx = numFrames*frameSize;
% build a test signal: 50 hz sine wave
xin = sqrt(.5)*sin(2*pi*(0:nx-1)'*50/fs); % 50 Hz sine wave at -3 dB
% build control frequency signal, one frequency per frame
frInput = 40*ones(numFrames,1);
% let's do a bit of rectangular switching plus some random stuff
t = (1:frameRate);
frInput(frameRate+t) = 100;
frInput(3*frameRate+t) = 70;
% up and down sweep
frInput(5*frameRate+t) = linspace(40,100,frameRate);
frInput(6*frameRate+t) = flip(linspace(40,100,frameRate));
% some random numbers for good measure
frInput(7*frameRate+1:end) = 40+60*rand(3*frameRate,1);
%
% in order to smooth the very abrupt frequency transitions in the test
% vector, we smooth the frequency input with a time constant of 30 ms
timeConstant = 0.03;
frSmooth = exp(-1./(timeConstant*frameRate));
frCurrent =40;
%% Real time over all frames
% we implement this as direct form I so the states are always guaranteed to
% be continuous
signalState = zeros(2,4); % filter state, we need total of 8
freqState = frInput(1); % cutoff frequency states
xout = 0*xin; % initialize output
t = 1:frameSize; % time vector
t0 = 0; % current time
yy = [xout xout xout];
for iFrame = 1:numFrames
% get frequency input and apply smoothing
frCurrent = frSmooth*frCurrent + (1-frSmooth)*frInput(iFrame);
% calculate filter gain, grab input and scale it
k = 1-polyval(ppGain,frCurrent);
y = k*xin(t0+t);
% over all biquads
x1 = signalState(1,1); % grab input state for the first biquad
x2 = signalState(2,1);
for iPole = 1:3
% calculate the filter coefficients
pReal = 1-polyval(ppPoles{iPole,1},frCurrent); % real part
pImag = polyval(ppPoles{iPole,2},frCurrent); % imaginary part
a1 = -2*pReal; % filter coefficient "a1"
a2 = pReal*pReal+pImag*pImag; % biquad coefficient "a2"
% grab the output state
y1 = signalState(1,iPole+1);
y2 = signalState(2,iPole+1);
% inner loop. Here efficiency is the most important
for i = t
x0 = y(i); % get input sample
y0 = x0-2*x1+x2-a1*y1-a2*y2; % DF1 Butterworth stage
% update state
x2 = x1; x1 = x0;
y2 = y1; y1 = y0;
y(i) = y0; % write output
end % end sample loop
% save the output state of this stage
signalState(1,iPole) = x1;
signalState(2,iPole) = x2;
% grab the input state for the next stage
x1 = signalState(1,iPole+1);
x2 = signalState(2,iPole+1);
% now write the output state
signalState(1,iPole+1) = y1;
signalState(2,iPole+1) = y2;
yy(t0+t,iPole) = y;
end % end poles/stages
% write output
xout(t0+t) = y;
t0 = t0 + frameSize;
end % end frame
plot(xout);
</code></pre>
|
https://dsp.stackexchange.com/questions/79400/mapping-of-classic-filters-for-digital-filter-design
|
Question: <p>I want learn digital filter design. My knowledge of math is at high school level. I can learn math through the Internet. Then, what fields of math do I have to learn? </p>
Answer: <p>If you have the balls to learn math by yourself, the two fields of mathematics that you need to master in order to do filter design are functional analysis and convex optimization.
Pretty much every filter design is the result of an optimization problem, like:
find the set of $N$ numbers such that the absolute value of the Fourier transform in these frequency regions has the following shape (between these two limits when the frequency is 0 Hz to 320 Hz, and between these other two when the frequency is greater than 340 Hz).
Or: what is the set of $N$ numbers such that applying the discrete convolution of that sequence to this signal $x(n)$ yields this signal $y(n)$?
And there are many other ways of defining them. </p>
<p>And you will need functional analysis in order to understand how to model a signal, how to model a system, and how to model the interactions and operations between signals (transforms, convolutions, etc).</p>
<p>Hope it Helps.</p>
|
https://dsp.stackexchange.com/questions/26582/fields-of-math-needed-for-digital-filter-design
|
Question: <p>I have a system that performs wireless sampling of data about every 7.5ms (133Hz). Due to it being wireless, I get occasional data drop out. I want to construct a LP butter filter with cut-off frequency of 10Hz using Python's scipy butter method and then downsample everything to a lower frequency. </p>
<p>One of the options that it asks for is whether I want to use analog or digital filter. Isn't running this filter offline in python automatically assume that it's digital? Or does the fact that my samples don't necessarily come in at regular intervals mean that I should stick with analog filter? </p>
<p>I understand how a physical analog filter differs from a digital one, but does this difference apply in the same way to a python-based butter filter?</p>
Answer: <blockquote>
<p>Isn't running this filter offline in python automatically assume that it's digital?</p>
</blockquote>
<p><a href="https://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.signal.butter.html" rel="nofollow noreferrer"><code>butter()</code></a> doesn't filter your signal, it just designs the filter. It can design an analog filter or a digital filter.</p>
<p><a href="https://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.signal.lfilter.html" rel="nofollow noreferrer"><code>lfilter()</code></a> is what actually filters your signal, using the filter you designed, and as you can see it is only digital filtering. It doesn't make sense to filter a digital signal with an analog filter.</p>
<p>If your data is regularly sampled and you want to process it in a computer, then you need a digital filter. However it sounds like it isn't:</p>
<blockquote>
<p><strong>about</strong> every 7.5ms<br>
...<br>
my samples don't necessarily come in at regular intervals</p>
</blockquote>
<p>Do you have a timestamp of when each sample was taken? If you do, then do <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html" rel="nofollow noreferrer">interpolation</a> first:
</p>
<pre><code>import numpy as np
from scipy import interpolate
f = interpolate.interp1d(timestamps, measurements)
new_timestamps = np.linspace(min(timestamps), max(timestamps), len(timestamps)*3)
new_measurements = f(new_timestamps)
</code></pre>
<p>and then digitally filter the interpolated signal. (I'm just picking 3x oversampling arbitrarily.)</p>
<p>Your data is probably not bandlimited, so you need to plot the interpolation and decide which type of interpolation is the most realistic fit for the underlying data. </p>
<p>See <a href="https://en.wikipedia.org/wiki/Nonuniform_sampling" rel="nofollow noreferrer">Wikipedia: Nonuniform sampling</a> and <a href="https://dsp.stackexchange.com/q/8488/29">What is an algorithm to re-sample from a variable rate to a fixed rate?</a></p>
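<p>A minimal sketch of the whole pipeline described above, in one place (the jittered timestamps, the 133 Hz target rate and the 10 Hz cutoff are assumptions taken from the question; the fake data and filter order are arbitrary):</p>

```python
import numpy as np
from scipy import interpolate, signal

# Fake irregularly sampled data: ~7.5 ms spacing with jitter, a 2 Hz tone plus noise
rng = np.random.default_rng(0)
timestamps = np.cumsum(rng.uniform(0.006, 0.009, 1000))
measurements = np.sin(2 * np.pi * 2 * timestamps) + 0.1 * rng.standard_normal(1000)

# 1) Interpolate onto a uniform 133 Hz grid
fs = 133.0
t_uniform = np.arange(timestamps[0], timestamps[-1], 1 / fs)
x_uniform = interpolate.interp1d(timestamps, measurements)(t_uniform)

# 2) Design a *digital* Butterworth (analog=False is the default) and
#    apply it with lfilter
b, a = signal.butter(4, 10, btype='low', fs=fs)
y = signal.lfilter(b, a, x_uniform)
```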
|
https://dsp.stackexchange.com/questions/40697/python-butter-filter-choosing-between-analog-and-digital-filter-types
|
Question: <p>I am very new in signal processing and using digital filters. I have to use a low-pass filter to analyze my data in LabVIEW and have a question about it. Any help and advice is appreciated.</p>
<p>I am trying to simplify my problem here:</p>
<p>Let’s say there is a digital sine wave (made by LabVIEW) with $V_{offset}=1 \ \mathrm{V}$, $V_{peak}=0.1 \ \mathrm{V}$, $f=10 \ \mathrm{kHz}$, $N=2000$ (number of samples), and sampling rate $f_s=200 \ \mathrm{kHz}$. Now, if I pass this signal through a low-pass filter with cutoff frequency $f_c=1 \ \mathrm{kHz}$, then the output should be a constant number equals the DC offset (here $1 \ \mathrm{V}$), is it true? </p>
<p>Another question is the concept of “cutoff freq” and “sampling freq” as the inputs of the filters in LabVIEW. Cutoff frequency as an input of a filter makes sense to me but what is that “sampling freq” ? Can anyone explain to me please? I am very confused. Is it the same rate at which the sine wave is created? For example in the attached code, what is the real cutoff frequency (with $f_l=200000$ and $f_l=1000$)?</p>
<p>I have attached the screenshots of the Front panel and Block diagram of my simple vi.</p>
<p><a href="https://i.sstatic.net/1Zy8v.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Zy8v.jpg" alt="blockdiagram"></a></p>
<p><a href="https://i.sstatic.net/MhVTY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MhVTY.jpg" alt="frontpanel"></a></p>
Answer: <p>Let me answer your two questions in turn:</p>
<p>For your first question, generally, yes, that is correct; if you filter a 10 kHz sine wave that has a DC offset with a filter whose cutoff frequency is below the frequency of the sine wave, then the sine wave will be rejected. The amount of rejection depends on the performance of the filter, but given your 1 kHz cutoff frequency, the sine wave is significantly higher in frequency and therefore strongly rejected.</p>
<p>I see in your plot that the order of the filter is 5, which for the Butterworth filter shown gives a rejection of 20 dB/decade × 5 (where 5 is the order of your filter), or 100 dB per decade. Depending on other factors such as your digital dynamic range, this suggests that you would be able to attenuate your 10 kHz sine wave by up to 100 dB (10 kHz is a decade above the cutoff frequency). The DC signal, which is below the cutoff frequency, would pass through to the output, unless something in your system blocked DC or introduced other DC offsets (which is possible). The filter itself can also have gain or loss, so the actual DC output level can be modified by this gain or loss accordingly.</p>
<p>For your second question, the sampling frequency is the sampling rate of the signals passing through this digital filter implementation. From the figure, you are using a sampling rate of 200 kHz, and yes, this is also the rate at which the sine wave is created.</p>
<p>I hope this helped to clear up some of your questions.</p>
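<p>The first point is easy to verify numerically. The sketch below (Python/SciPy rather than LabVIEW) reproduces the setup from the question: a 10 kHz sine with a 1 V offset through a 5th-order 1 kHz Butterworth low-pass at 200 kHz sampling; the output sits at the 1 V DC level. (A longer record and zero-phase filtering are used here just to sidestep start-up transients.)</p>

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000                    # sampling rate (Hz)
t = np.arange(20_000) / fs      # 0.1 s record
x = 1.0 + 0.1 * np.sin(2 * np.pi * 10_000 * t)  # 1 V offset, 0.1 V peak, 10 kHz

b, a = butter(5, 1_000, btype='low', fs=fs)     # 5th order, 1 kHz cutoff
y = filtfilt(b, a, x)           # zero-phase filtering (offline)

mid = y[5_000:15_000]           # ignore the record edges
```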
|
https://dsp.stackexchange.com/questions/37591/digital-filters-in-labview
|
Question: <p>I am new to DSP and I am trying to find the cutoff frequency of a HP digital filter. I know the equation that describes the system, its frequency response: <span class="math-container">$H(e^{jω})= 1 - \frac{e^{jω}}{2} - \frac{1}{2e^{jω}}$</span> and of course amplitude response / phase diagrams.</p>
<p>I have <a href="https://www.analog.com/media/en/technical-documentation/dsp-book/dsp_book_Ch14.pdf" rel="nofollow noreferrer">found</a> (p. 8) that for analog filters I can calculate <span class="math-container">$\frac{1}{\sqrt{2}} |H(e^{jω})|$</span>. However I don't know how to do this for digital filters. Reading <a href="https://dsp.stackexchange.com/questions/45224/how-can-i-know-the-type-of-filter-from-its-cutoff-frequency">this</a> makes it look like it's the same procedure, although this seems (to me) like a contradiction to </p>
<blockquote>
<p>Digital filters are less standardized, and it is common to see 99%, 90%, 70.7%, and 50% amplitude levels defined to be the cutoff frequency.</p>
</blockquote>
<p>of the first source. Am I confusing something? Please keep in mind that I am new to DSP. Thank you.</p>
Answer: <p>Concerning the cut-off frequency, there's really not much of a difference between analog and digital filters. <span class="math-container">$3\textrm{ dB}$</span> is common, but any other value is fine, as long as you specify it and people know what you're talking about.</p>
<p>In the case of the given filter, it's quite straightforward to compute any cut-off frequency. Let's choose the <span class="math-container">$3\textrm{ dB}$</span> cut-off frequency:</p>
<p><span class="math-container">$$H(e^{j\omega})=1-\frac12\left(e^{j\omega}+e^{-j\omega}\right)=1-\cos(\omega)\tag{1}$$</span></p>
<p>Since <span class="math-container">$H(e^{j\omega})$</span> is real-valued and since <span class="math-container">$H(e^{j\omega})\ge 0$</span>, we can simply solve the equation</p>
<p><span class="math-container">$$1-\cos(\omega_c)=\frac{2}{\sqrt{2}}=\sqrt{2}\tag{2}$$</span></p>
<p>Note that the maximum of <span class="math-container">$H(e^{j\omega})$</span> is <span class="math-container">$2$</span>, so the value of <span class="math-container">$H(e^{j\omega})$</span> at the <span class="math-container">$3\textrm{ dB}$</span> cut-off frequency <span class="math-container">$\omega_c$</span> is <span class="math-container">$\sqrt{2}$</span>.</p>
<p>From <span class="math-container">$(2)$</span> we get <span class="math-container">$\omega_c\approx 1.9979$</span>. The actual cut-off frequency in Hertz is given by</p>
<p><span class="math-container">$$f_c=\frac{\omega_c}{2\pi}f_s\tag{3}$$</span></p>
<p>where <span class="math-container">$f_s$</span> is the sampling frequency.</p>
|
https://dsp.stackexchange.com/questions/67670/calculating-the-cutoff-frequency-of-a-highpass-digital-filter
|
Question: <p>I am new to the world of digital filters and am educating myself with the book <em>Introduction to Digital Filters by J.O Smith III,</em>. The author derives the frequency response of a very simplistic filter:</p>
<p><span class="math-container">$$y(n) = x(n) + x(n-1)$$</span>
<span class="math-container">$$H\left(e^{j\omega T}\right) = 1+e^{-j\omega T}$$</span></p>
<p>where the phase response is <span class="math-container">$\Theta(\omega)=-\omega T/2$</span>, which varies linearly with <span class="math-container">$\omega$</span>. He then claims this phase response gives rise to a <strong>constant time delay</strong> <em>irrespective of the signal frequency</em>.</p>
<p>Could someone kindly shed light on why time delay is constant? Isn't the time delay the same as phase shift as reflected by the phase response? It looks like the time delay is the derivative of the phase response but I don't know why.</p>
Answer: <p>To help your intuition, consider a sinusoidal signal with frequency <span class="math-container">$\omega_0$</span> and some arbitrary but constant phase <span class="math-container">$\phi$</span>:</p>
<p><span class="math-container">$$x[n]=A\sin(\omega_0n+\phi)\tag{1}$$</span></p>
<p>Delaying the signal <span class="math-container">$x[n]$</span> by <span class="math-container">$n_0$</span> samples gives</p>
<p><span class="math-container">$$\begin{align}x[n-n_0]&=A\sin\big(\omega_0(n-n_0)+\phi\big)\\&=A\sin\big(\omega_0n-\omega_0n_0+\phi\big)\\&=A\sin\big(\omega_0n+\varphi(\omega_0,n_0)+\phi\big)\tag{2}\end{align}$$</span></p>
<p>with the additional phase term</p>
<p><span class="math-container">$$\varphi(\omega_0,n_0)=-n_0\omega_0\tag{3}$$</span></p>
<p>Consequently, for the delay <span class="math-container">$n_0$</span> to be independent of the sinusoid's frequency, the additional phase must be a linear function of frequency. Note that a linear time-invariant (LTI) system introduces exactly such an additional phase term to a sinusoidal input signal.</p>
<p>For general signals and general LTI systems, very little can be said about the time delay introduced by the system. There are a few special cases, however, for which something useful can be said:</p>
<p><strong>1.</strong> Systems with linear phase: <span class="math-container">$H(e^{j\omega})=A(\omega)e^{-jn_0\omega}$</span></p>
<p>Apart from amplitude scaling by <span class="math-container">$A(\omega)$</span>, each frequency component of the input signal is delayed by <span class="math-container">$n_0$</span> samples (assuming that <span class="math-container">$n_0$</span> is an integer).</p>
<p><strong>2.</strong> Sinusoidal input signals are delayed by the system's phase delay evaluated at the input frequency: <span class="math-container">$$\tau=-\frac{\phi(\omega_0)}{\omega_0}$$</span></p>
<p><strong>3.</strong> For narrow-band input signals, the delay of the signal's envelope is approximately given by the group delay of the system at the input signal's center frequency: <span class="math-container">$$\tau=-\left.\frac{d\phi(\omega)}{d\omega}\right|_{\omega=\omega_0}$$</span></p>
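<p>For the specific filter from the question, <span class="math-container">$y(n)=x(n)+x(n-1)$</span>, this is easy to check numerically: the linear phase <span class="math-container">$-\omega T/2$</span> corresponds to a group delay of exactly half a sample at every frequency, which is the constant time delay Smith refers to (a sketch using SciPy):</p>

```python
import numpy as np
from scipy.signal import group_delay

# FIR h = [1, 1], i.e. y(n) = x(n) + x(n-1); phase is -omega/2 (linear)
w, gd = group_delay(([1, 1], [1]), w=512)
```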
|
https://dsp.stackexchange.com/questions/80795/issue-understanding-time-delay-of-a-digital-filter
|
Question: <p>One of the known methods for discretizing analog filters is impulse response invariant. We get the impulse response in time domain, discretize it and then get the Z transform.</p>
<p>What I am trying to understand is why the frequency response of the resulting digital filter has its magnitude scaled by 1/T (T: sampling period).</p>
<p>MATLAB's <code>c2d</code> command compensates for this by multiplying by T so that the frequency response is similar to the analog filter, but this is not the result of the Z-transform I described earlier.</p>
Answer: <p>This is just as it turns out when you do the math. The discrete-time Fourier transform (DTFT) of the sampled continuous-time impulse response <span class="math-container">$h(t)$</span> is</p>
<p><span class="math-container">$$H_d(e^{j\omega T})=\sum_nh(nT)e^{-jn\omega T}\tag{1}$$</span></p>
<p>With</p>
<p><span class="math-container">$$h(nT)e^{-jn\omega T}=\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\delta(t-nT)dt\tag{2}$$</span></p>
<p>this can be written as</p>
<p><span class="math-container">$$\begin{align}H_d(e^{j\omega T})&=\sum_n\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\delta(t-nT)dt\\&=\int_{-\infty}^{\infty}\left[h(t)\sum_n\delta(t-nT)\right]e^{-j\omega t}dt\\&=\mathcal{F}\left\{h(t)\sum_n\delta(t-nT)\right\}\\&=\frac{1}{2\pi}H(\omega)\star\frac{2\pi}{T}\sum_k\delta\left(\omega-\frac{2\pi k}{T}\right)\\&=\frac{1}{T}\sum_kH\left(\omega-\frac{2\pi k}{T}\right)\tag{3}\end{align}$$</span></p>
<p>where <span class="math-container">$H(\omega)$</span> is the Fourier transform of <span class="math-container">$h(t)$</span>, and <span class="math-container">$\star$</span> denotes convolution. From <span class="math-container">$(3)$</span> we see that the DTFT of the sampled impulse response equals the sum of shifted spectra of <span class="math-container">$h(t)$</span>, scaled by <span class="math-container">$1/T$</span>.</p>
<p>If we assume that <span class="math-container">$H(\omega)$</span> is approximately band-limited and that <span class="math-container">$T$</span> is chosen sufficiently small such that aliasing becomes negligible, we obtain the approximation</p>
<p><span class="math-container">$$H_d(e^{j\omega T})\approx\frac{1}{T}H(\omega),\qquad |\omega|<\frac{\pi}{T}\tag{4}$$</span></p>
<p>For the step-invariance method, we use samples of the step response instead of samples of the impulse response, and we obtain a relation analogous to <span class="math-container">$(3)$</span> between the DTFT <span class="math-container">$G_d(e^{j\omega T})$</span> of the step response of the discrete-time system, and the Fourier transform <span class="math-container">$G(\omega)$</span> of the continuous-time step response:</p>
<p><span class="math-container">$$G_d(e^{j\omega T})=\frac{1}{T}\sum_kG\left(\omega-\frac{2\pi k}{T}\right)\tag{5}$$</span></p>
<p>In order to obtain the frequency response <span class="math-container">$H_d(e^{j\omega T})$</span> we multiply <span class="math-container">$(5)$</span> by <span class="math-container">$1-e^{-j\omega T}$</span>, because the impulse response is obtained by computing a first-order difference of the step response:</p>
<p><span class="math-container">$$H_d(e^{j\omega T})=\left(1-e^{-j\omega T}\right)G_d(e^{j\omega T})=\frac{1-e^{-j\omega T}}{T}\sum_kG\left(\omega-\frac{2\pi k}{T}\right)\tag{6}$$</span></p>
<p>For frequencies that are small compared to the sampling frequency, i.e., for <span class="math-container">$|\omega T|\ll 1$</span> we obtain from <span class="math-container">$(6)$</span></p>
<p><span class="math-container">$$\begin{align}H_d(e^{j\omega T})&\approx\frac{1-(1-j\omega T)}{T}\sum_kG\left(\omega-\frac{2\pi k}{T}\right)\\&=j\omega \sum_kG\left(\omega-\frac{2\pi k}{T}\right)\tag{7}\end{align}$$</span></p>
<p>If we again assume that aliasing can be neglected, we arrive at</p>
<p><span class="math-container">$$H_d(e^{j\omega T})\approx j\omega G(\omega)=H(\omega),\qquad |\omega|<\frac{\pi}{T}\tag{8}$$</span></p>
<p>From <span class="math-container">$(8)$</span> we see that, unlike for the impulse invariance method, the step invariance method doesn't involve a scaling of the continuous-time frequency response.</p>
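<p>The <span class="math-container">$1/T$</span> scaling in <span class="math-container">$(4)$</span> is easy to see on a one-pole example (a sketch; the pole and sampling rate are arbitrary values, chosen so that aliasing is negligible). For <span class="math-container">$h(t)=e^{-at}u(t)$</span> we have <span class="math-container">$H(0)=1/a$</span>, while the DC gain of the impulse-invariant digital filter is a geometric series:</p>

```python
import numpy as np

a = 10.0   # analog pole; h(t) = exp(-a*t), so H(0) = 1/a
T = 1e-4   # sampling period (10 kHz); a*T << 1, so aliasing is negligible

H0_analog = 1.0 / a                        # continuous-time DC gain
H0_digital = 1.0 / (1.0 - np.exp(-a * T))  # sum of exp(-a*n*T) over n >= 0

ratio = H0_digital / H0_analog             # should be close to 1/T, per Eq. (4)
```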
|
https://dsp.stackexchange.com/questions/68903/impulse-invariant-method-for-digital-filter-design
|
Question: <p>Beginning from the basic definition of decibel that expresses the ratio of two amplitudes as <span class="math-container">$20\log_{10}(A_{2}/A_{1}) $</span>, how do we arrive at the expression <span class="math-container">$-20\log_{10}(\delta_{s})$</span> measured in dB, for the stopband deviation of a digital filter?</p>
<p><a href="https://i.sstatic.net/w8ZNE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w8ZNE.png" alt="enter image description here" /></a></p>
Answer: <p>The stop band ripple, <span class="math-container">$\delta_s$</span> is measured with respect to the pass band amplitude (usually 1).</p>
<p>That means:</p>
<p><span class="math-container">$$
20 \log_{10}\left (\frac{\delta_s }{ 1 } \right) = 20 \log_{10}\left (\delta_s \right)
$$</span></p>
<p>but this is the stop band <strong>gain</strong> and most people think about the stop band in terms of <strong>attenuation</strong>. So it's more usual to have:</p>
<p><span class="math-container">$$
-20 \log_{10}\left (\delta_s \right)
$$</span>
Because, for a good filter, <span class="math-container">$\delta_s \ll 1$</span>, <span class="math-container">$\log_{10}(\delta_s)$</span> is negative, so the minus sign just makes the number positive (and an attenuation rather than a gain).</p>
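<p>As a concrete number: a stop-band deviation of <span class="math-container">$\delta_s = 0.001$</span> corresponds to a stop-band gain of −60 dB, i.e., 60 dB of attenuation:</p>

```python
import math

delta_s = 0.001                       # stop-band ripple, relative to a passband of 1
gain_db = 20 * math.log10(delta_s)    # stop-band *gain* in dB (negative)
atten_db = -20 * math.log10(delta_s)  # stop-band *attenuation* in dB (positive)
```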
|
https://dsp.stackexchange.com/questions/91648/expression-for-stopband-deviation-of-a-digital-filter
|
Question: <p>In an ideal design, a digital filter has a target gain in the passband and a zero gain (−∞ dB) in the stopband. In a real implementation, a finite transition region between the passband and the stopband, which is known as the transition band, always exists. The gain of the filter in the transition band is unspecified. The gain usually changes gradually through the transition band from 1 (0 dB) in the passband to 0 (−∞ dB) in the stopband <a href="http://zone.ni.com/reference/en-XX/help/371325F-01/lvdfdtconcepts/dfd_filter_spec/" rel="nofollow noreferrer">http://zone.ni.com/reference/en-XX/help/371325F-01/lvdfdtconcepts/dfd_filter_spec/</a>.</p>
<p>Question:
If an ideal design, a digital filter has a target gain in the passband and a zero gain (−∞ dB) in the stopband.
It has a role in the appearance of the ripples in some way in the stopband and passband?</p>
<p>Why the transition band always exists? does this is because the gradually through the transition band from the passband to the stopband?</p>
<p>In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input to the output port.by adding energy converted from some power supply to the signal <a href="https://en.wikipedia.org/wiki/Gain_(electronics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Gain_(electronics)</a>. I want to know how this definition is applied in the design of digital filters.</p>
Answer: <p>In an actual design you need to allow for a smooth transition from the passband to the stopband because the magnitude response of a realizable (i.e., causal and stable) filter is smooth; it can't jump. Of course you can try to approximate a jump in the magnitude, but you'll always get a smooth magnitude response (cf. Gibbs phenomenon). Defining a "don't care" transition band with no specification will decrease the approximation error in the bands of interest.</p>
<p>I don't understand your question about the ripples in the passband and stopband. Maybe you can clarify this and I'll edit my answer.</p>
<p>The passband gain of a filter is simply the amplification factor for signal components that are in the filter's passband.</p>
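<p>The effect of the "don't care" transition band is easy to demonstrate: for a fixed filter length, widening it lets an equiripple design trade unusable steepness for a smaller approximation error in the bands of interest. A sketch (the band edges and tap count are arbitrary example values):</p>

```python
import numpy as np
from scipy.signal import remez, freqz

def passband_ripple(edges, numtaps=45):
    """Peak passband deviation of an equiripple low-pass with the given
    band edges (frequencies normalized to fs = 1)."""
    taps = remez(numtaps, edges, [1, 0], fs=1.0)
    w, h = freqz(taps, worN=4096, fs=1.0)
    return np.max(np.abs(np.abs(h[w <= edges[1]]) - 1.0))

narrow = passband_ripple([0, 0.20, 0.22, 0.5])  # narrow transition band
wide   = passband_ripple([0, 0.20, 0.30, 0.5])  # wider "don't care" region
```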
|
https://dsp.stackexchange.com/questions/46454/transition-bands-and-passband-gain-in-digital-filter-design
|
Question: <p>I am strugling with a question that I hope someone can help me with.</p>
<p>I am recording single molecule events which I detect is picoampere square deflections.</p>
<p>I wish to use as gentle low-pass bessel filtering as possible.</p>
<p>The lowest filter settings my amplifier allow are 10 kHz and 100 kHz, and my digitizer have a maximal sampling rate of 500 kHz. I am afraid of corrupting my signal to much, but do not have the intuitive understanding of sampling and filtering to know if I am doing something wrong. Here is what I do:</p>
<p>I filter the signal with a 100 kHz bessel filter and digitize it with a 500 kHz sampling rate.
I then wish to filter my digitized data with a 35 kHz digital filter.</p>
<p>Would this mess up my data?
I hear people say that I am on safe ground if i sample at appropximatly 10x my filter settings, but I get to this 'safe zone' only when I do the post-sampling digital filtering. So I guess what I realy do not understand is if the order of filtering, sampling, filtering does something nasty to the data.</p>
<p>I hope I was able to communicate my question clear enough.</p>
<p>Thank you very much,
Best regards,
Michael</p>
Answer: <p>An ideal square pulse - which I assume is a model for your up-and-down deflections - would have an infinite bandwidth, but the bulk of the energy is within a bandwidth of <span class="math-container">$1/T$</span>, where <span class="math-container">$T$</span> is the pulse duration. Roughly speaking, therefore, the 100 kHz Bessel filter will thus allow you to detect pulses of duration <span class="math-container">$10 \mu s$</span> and above. It will also limit the sharpness of the up-and-down transition.</p>
<p>You want the Bessel filter to prevent aliasing, which means that it needs to attenuate signals in the 500 kHz +/- 100 kHz range, which would alias back into the passband of the filter. By my calculations, the attenuation of a 4th order Bessel filter would be about 23 dB in this range (but check your datasheet), which isn't great but may be good enough in some applications. It depends on how high of a signal-to-noise ratio you need, how noisy your signal is within this range, and how much spectral content your deflections have in this range (which will depend on their width and the sharpness of their transitions).</p>
<p>The rule-of-thumb you mentioned about sampling 10x above your filter settings may be specific to a particular filter. In general, you only need to sample at twice the bandwidth of your signal, which is the well-known Nyquist criterion. In practice, we usually sample at higher rates because it allows us to use a more realistic, lower-cost filter, such as your 4th-order Bessel filter. The required sampling rate, the filter specifications, and the spectral properties of the signal you are sampling are all related.</p>
<p>Having sampled at 500 kHz, you would want to filter your signal with a 35 kHz filter if you are only interested in spectral content less than 35 kHz. This also allows you to reduce your sample rate to some value greater than 70 kHz, which could reduce the computational burden of whatever processing you are doing. But you would not be concerned with the 10x rule-of-thumb anymore, because any aliasing due to the original sampling has already occurred. You can implement a digital filter that is considerably tighter than the Bessel filter.</p>
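<p>A sketch of that post-sampling stage (the pulse width, filter order and decimation factor are example values, not recommendations): filter the 500 kHz record with a 35 kHz digital low-pass, then reduce the rate, since anything above 70 kHz satisfies Nyquist for the 35 kHz-bandlimited result.</p>

```python
import numpy as np
from scipy import signal

fs = 500_000                                    # digitizer rate (Hz)
t = np.arange(0, 0.01, 1 / fs)                  # 10 ms record
x = (np.abs(t - 0.005) < 0.0005).astype(float)  # a 1 ms square deflection

# 35 kHz digital low-pass (a 4th-order Bessel as an example;
# second-order sections for numerical robustness)
sos = signal.bessel(4, 35_000, btype='low', fs=fs, output='sos')
y = signal.sosfilt(sos, x)

# Rate reduction by 5 -> 100 kHz, still comfortably above 2 x 35 kHz
y_dec = signal.decimate(y, 5)
```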
|
https://dsp.stackexchange.com/questions/55053/sample-rate-filtering-digital-filtering-and-aliasing
|
Question: <p>I am making a 9th order digital bandpass filter with lower and upper corners of 200 kHz and 40 MHz respectively. I am using this filter to filter a 1D time domain signal which is 64k samples long sampled at a frequency of 150MHz. </p>
<p>I have done some digital filtering before in university, so I know what to expect, but it's been a while. </p>
<p>I have used this site: <a href="http://www-users.cs.york.ac.uk/~fisher/mkfilter/trad.html" rel="nofollow">http://www-users.cs.york.ac.uk/~fisher/mkfilter/trad.html</a>
to generate the filter coefficients and the gain, and I can see how the code works, my question is this:</p>
<p>I start with the value of X[0], then to calculate the value of Y[0] I need values for Y[-1] ... Y[-18] and x[-1] ... X[-18]. </p>
<p>I know these do not exist, so I think (from what I remember from university) that I pad with zeros; however, doing a bit of reading, I heard it mentioned that padding with zeros changes the sampling frequency. </p>
<p>So how do I go about calculating this new sampling frequency (if indeed required..)? </p>
Answer: <p>Zero padding your input signal does not change your sampling frequency, it just changes (in this case probably very slightly) the time duration of your input signal. An example would be that whether you filter one second's worth of data or two second's worth of data doesn't have any impact on the sampling frequency of the data.</p>
<p>One thing to keep in mind is that there will be a start-up transient when zero-padding, because your input signal instantaneously goes from zero to the value of the signal, essentially applying a step function on top of your signal of interest. Generally speaking, you'll want to make sure you have some amount of data before any feature you're really interested in, so that the start-up effects have sufficient time to decay. The website you mention gives the step response of the generated filter, so you can check what effect it will have on your signal of interest.</p>
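<p>In SciPy terms, the transient is what you see when <code>lfilter</code> starts from an all-zero state; if your data rides on a known level, you can also pre-load the state with <code>lfilter_zi</code> instead of relying on lead-in data (a sketch; the 4th-order low-pass here is just an example, not the poster's 9th-order bandpass):</p>

```python
import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.1)  # example low-pass (normalized cutoff 0.1)
x = np.ones(200)              # a signal that "switches on" at sample 0

# Zero initial state: a visible start-up transient
y_cold = signal.lfilter(b, a, x)

# State pre-loaded for a step of height x[0]: no transient at all
zi = signal.lfilter_zi(b, a)
y_warm, _ = signal.lfilter(b, a, x, zi=zi * x[0])
```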
|
https://dsp.stackexchange.com/questions/18323/implementing-digital-filter-by-padding-with-zeros
|