Question: <p>Given I have coefficients a0, a1, a2, b1, and b2, defining the difference equation for a digital filter as:</p> <p><code>y[n] = a0 * x[n] + a1 * x[n - 1] + a2 * x[n - 2] - b1 * y[n - 1] - b2 * y[n - 2]</code></p> <p>Which defines a low-pass filter with a particular cutoff frequency, how can I obtain the coefficients A0, A1, A2, B1, B2, which similarly define a high-pass filter with the same cutoff frequency? I'm aware there are so-called &quot;bandform transformations&quot; for converting a prototype low-pass into a high-pass, but to my knowledge these are not directly applicable to discrete-time/digital filters, so I am unaware of any way to apply them to this problem.</p> <p>If these coefficients are derived from a complex-conjugate pair of zeros and/or poles, given that the discrete transfer function for the low-pass filter would be:</p> <p><code>H(z) = (z - Zero[0]) * (z - Zero[1]) / [(z - Pole[0]) * (z - Pole[1])]</code></p> <p>Is there then a way to transform this function into the corresponding high-pass filter, so that I can get the poles and zeros from the new transfer function?</p> Answer: <p>You can apply a so-called all-pass transformation to a discrete-time low-pass prototype filter in order to convert it to other standard filters (such as high-pass, band-pass, and band-stop). This is accomplished by transforming the complex variable <span class="math-container">$z$</span> in the transfer function of the prototype filter by a function <span class="math-container">$G(z)$</span> which satisfies <span class="math-container">$|G(e^{j\omega})|=1$</span>, i.e., <span class="math-container">$G(z)$</span> is an all-pass function. 
This makes sure that the transformation maps the unit circle onto itself, i.e., the frequency response of the new filter is just a shifted and/or warped version of the prototype frequency response.</p> <p>The most straightforward way to transform a discrete-time low-pass filter to a high-pass filter is to use the trivial transform <span class="math-container">$G(z)=-z$</span>, i.e.,</p> <p><span class="math-container">$$H_{HP}(z)=H_{LP}(-z)\tag{1}$$</span></p> <p>If <span class="math-container">$H_{LP}(z)$</span> is given by</p> <p><span class="math-container">$$H_{LP}(z)=\frac{\displaystyle\sum_{k=0}^Nb[k]z^{-k}}{\displaystyle 1+\sum_{k=1}^{N}a[k]z^{-k}}\tag{2}$$</span></p> <p>then <span class="math-container">$H_{HP}(z)$</span> becomes</p> <p><span class="math-container">$$H_{HP}(z)=\frac{\displaystyle\sum_{k=0}^N(-1)^kb[k]z^{-k}}{\displaystyle 1+\sum_{k=1}^{N}(-1)^ka[k]z^{-k}}\tag{3}$$</span></p> <p>This transformation shifts the low-pass frequency response by <span class="math-container">$\pi$</span> (i.e., by half the sampling frequency). 
Consequently, if <span class="math-container">$\omega_c$</span> is the cut-off frequency of the low-pass prototype filter, the cut-off frequency of the resulting high-pass filter is given by <span class="math-container">$\omega'_c=\pi-\omega_c$</span>.</p> <p>Other cut-off frequencies can be obtained by applying the more general lowpass-to-highpass transformation</p> <p><span class="math-container">$$G(z)=-\frac{z+\alpha}{1+\alpha z},\qquad |\alpha|&lt;1\tag{4}$$</span></p> <p>The simple transformation shown above is obtained from <span class="math-container">$(4)$</span> by the choice <span class="math-container">$\alpha=0$</span>.</p> <p>This and other frequency transformations applicable to discrete-time filters are treated in some detail in Chapter <span class="math-container">$7.4$</span> of the third edition of Oppenheim and Schafer's <em>Discrete-Time Signal Processing</em>.</p> <p>Also take a look at these related questions and their answers: <a href="https://dsp.stackexchange.com/q/69293/4298">Q1</a>, <a href="https://dsp.stackexchange.com/q/53622/4298">Q2</a>.</p>
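For the second-order section in the question, Eq. (3) amounts to negating the odd-indexed coefficients. A minimal sketch (Python/NumPy; the example biquad coefficients are hypothetical, for illustration only):

```python
import numpy as np

def lowpass_to_highpass(b, a):
    """Apply H_HP(z) = H_LP(-z): negate the odd-indexed coefficients (Eq. 3)."""
    sb = (-1.0) ** np.arange(len(b))
    sa = (-1.0) ** np.arange(len(a))
    return np.asarray(b) * sb, np.asarray(a) * sa

# Hypothetical low-pass biquad coefficients, for illustration only
b = [0.2, 0.4, 0.2]      # a0, a1, a2 in the question's notation
a = [1.0, -0.3, 0.2]     # 1, b1, b2 in the question's notation
bh, ah = lowpass_to_highpass(b, a)
```

Because the transformation is H_HP(z) = H_LP(-z), the magnitude response is mirrored about &pi;/2, i.e., |H_HP(e^{j&omega;})| = |H_LP(e^{j(&pi;-&omega;)})|, which is easy to check with freqz.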
https://dsp.stackexchange.com/questions/69493/digital-filter-coefficients-from-low-pass-to-high-pass
Question: <p>A digital low-pass Butterworth filter, designed using the bilinear transformation, has a pole at $z=0.6$. It is also known that the filter's attenuation at the digital frequency $\omega = 1.2$ is about $44$ dB. Find the filter order. Give at least one other pole of the digital filter (in the z-domain).</p> <p><img src="https://i.sstatic.net/BMDxd.png" alt="enter image description here"></p> <p><img src="https://i.sstatic.net/O9PKe.png" alt="enter image description here"></p> Answer: <p>I'll try to give you some hints to get you started. First of all, you should know that the bilinear transform is given by</p> <p>$$s=k\frac{z-1}{z+1}\tag{1}$$</p> <p>If the analog prototype filter is normalized such that its cut-off frequency is $\Omega_c=1$, then the constant $k$ in (1) is given by</p> <p>$$k=\frac{1}{\tan\left(\frac{\omega_0}{2}\right)}\tag{2}$$</p> <p>where $\omega_0$ is the desired cut-off frequency of the discrete-time filter. Some more background on this is given in <a href="https://dsp.stackexchange.com/questions/22726/my-butterworth-lowpass-formulas-do-not-agree-with-fisher-webpage/22770#22770">this answer</a>.</p> <p>You should also know that the poles of the normalized analog filter lie on a circle with radius $1$ centered at $s=0$ (of course the poles only lie on the left half of the circle). If the filter order is odd as in your example (why?), there must be a pole at $s=-1$. Now you can try to figure out how that pole is transformed to the $z$-plane, i.e.</p> <p>$$\frac{1}{1+s}{\huge|}_{s=k\frac{z-1}{z+1}}=\ldots\tag{3}$$</p> <p>From (3), knowing that this pole is transformed to a pole at $z=0.6$, you can determine the constant $k$, and from that, via Eq. (2), you can compute $\omega_0$. 
Now you know that the filter's attenuation is $3\text{ dB}$ at $\omega_0$, and it is $44\text{ dB}$ at $\omega=1.2$.</p> <p>The last thing you should know is that with a Butterworth lowpass filter of order $N$, you get approximately $6N\text{ dB}$ attenuation per octave, at least in the frequency range considered here. Now figure out how many octaves there are between $\omega_0$ and $\omega=1.2$, figure out how many dB's difference in attenuation there must be between those two frequencies, and from this compute an estimate of the filter order $N$.</p>
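Working these hints through numerically (a sketch; the intermediate values below are my own, not given in the problem statement):

```python
import math

# The analog pole at s = -1 maps to z = (k - 1)/(k + 1) under s = k(z-1)/(z+1).
# Solving (k - 1)/(k + 1) = 0.6 for k:
k = (1 + 0.6) / (1 - 0.6)            # k = 4
w0 = 2 * math.atan(1 / k)            # invert Eq. (2): cut-off ~0.49 rad
octaves = math.log2(1.2 / w0)        # octaves between w0 and w = 1.2
N_est = (44 - 3) / (6 * octaves)     # ~6N dB/octave rule -> about 5.3
# Rounding to the nearest odd order (a pole at s = -1 requires odd N) gives N = 5.
```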
https://dsp.stackexchange.com/questions/23137/digital-low-pass-butterworth-filter
Question: <p>I want to design a digital filter with the following phase response in MATLAB: at 1 kHz the phase response should be 9 degrees, at 2 kHz it should be 18 degrees, at 3 kHz 27 degrees, at 4 kHz 36 degrees, and so on up to 8 kHz. How can I design such filters with a desired phase response?</p> Answer: <p>What you describe is a linear phase. Assuming that your sample rate is 48 kHz, you can implement this simply with a delay of -1.2 samples.</p> <p>The tricky parts here are that the delay is negative, i.e., the filter is non-causal, and that the delay is fractional (not an integer number of samples).</p> <p>This can all be done, but needs to be carefully tailored to the specific requirements of your application.</p> <p>Here is an excellent article on the topic: <a href="http://home.agh.edu.pl/%7Eturcza/sr/Splitting%20the%20Unit%20Delay.pdf" rel="nofollow noreferrer">http://home.agh.edu.pl/~turcza/sr/Splitting%20the%20Unit%20Delay.pdf</a></p>
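To see where the -1.2 samples comes from, assume fs = 48 kHz as in the answer: a linear phase of +9 degrees per kHz corresponds to a delay tau = -phi/omega, i.e., a phase lead. A sketch that also builds a causal windowed-sinc approximation by folding the fractional part into an integer bulk delay (the 17-tap length and 8-sample bulk delay are my own choices):

```python
import numpy as np

fs = 48_000.0
phi_deg, f = 9.0, 1_000.0
tau = -(phi_deg / 360.0) / f        # seconds of delay implied by the phase spec
delay_samples = tau * fs            # -1.2 samples (negative: non-causal)

# Causal realization: absorb the -1.2 samples into a known integer latency.
D = 8.0 + delay_samples             # total delay of the FIR, in samples
n = np.arange(17)
h = np.sinc(n - D) * np.hamming(17) # windowed-sinc fractional-delay FIR
h /= h.sum()                        # unity DC gain
```

Relative to the 8-sample bulk latency, the filter realizes the desired -1.2-sample (phase-leading) characteristic over the low end of the band; accuracy degrades toward Nyquist, which is exactly what the linked "Splitting the Unit Delay" article discusses.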
https://dsp.stackexchange.com/questions/76933/design-of-digital-filter-with-desired-phase-response
Question: <p>I recently designed the LPF of the IQ demodulator using the Butterworth LPF, referring to <a href="https://dspillustrations.com/pages/posts/misc/baseband-up-and-downconversion-and-iq-modulation.html" rel="nofollow noreferrer">https://dspillustrations.com/pages/posts/misc/baseband-up-and-downconversion-and-iq-modulation.html</a>. But I have a question. If you look at the block in the picture below, you can see the baseband block after the A/D conversion. I implemented an IIR digital filter, but the composition of the picture below shows that the LPF is in the analog domain. So I'm suddenly very curious: is this LPF an analog filter or a digital filter?</p> <p><a href="https://i.sstatic.net/LlnJn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LlnJn.png" alt="enter image description here" /></a></p> <p>I designed the low-pass filter corresponding to the LP block using the Butterworth IIR filter design method in the IQ demodulator block diagram above. I first calculated the analog Butterworth filter coefficients, and then I calculated the IIR filter coefficients b, a in the z-domain using the bilinear transform. With these, I computed the waveform through the LPF using MATLAB's filter(b,a,x) function. So I got the spectra for the in-phase and quadrature components as below. But in this process, I have a question. Obviously, I designed a digital filter in the z-domain, and I wonder why a digital filter would exist in the analog domain, since it comes before the ADC in the block diagram. <a href="https://i.sstatic.net/Z5gkI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z5gkI.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/KLmxG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KLmxG.png" alt="enter image description here" /></a></p> Answer: <p>The A/D can be placed as a single real A/D before the multipliers, OR, as shown in the diagram, as two A/Ds, one after each multiplier, to sample the I and Q channels. 
In either case, an analog filter is required before any A/D conversion as an anti-alias filter. This can be a bandpass filter or a low-pass filter, depending on which image in the analog domain's frequency spectrum is desired as the signal of interest. Even if there is no actual additional signal in the image bands, with no filter (and gain in the receiver) the noise figure will be significantly degraded due to the accumulated noise floor from each of the images. An analog filter prior to any sampling is important.</p> <p>So in this case, given the placement of the A/D converter, the filters as shown would be the analog anti-alias filter that I describe.</p> <p>If the reason for image rejection is confusing, please refer to this post, which details how aliasing occurs in the digital sampling process: <a href="https://dsp.stackexchange.com/questions/54423/higher-order-harmonics-during-sampling/54432#54432">Higher order harmonics during sampling</a></p> <p>So in summary, for the implementation of an IQ demodulator, there could be two A/D converters if I and Q are to be sampled separately: one for the I channel and one for the Q channel, resulting in a complex representation of the baseband signal. Alternatively, there can be a single A/D converter prior to the multipliers, and then the multiplier and filter structures as shown can be implemented all digitally (referred to as a Digital Down-Converter or DDC). 
In this case, a single analog anti-alias filter (band-pass or low-pass, as desired for the implementation) is still needed in front of the A/D converter, in addition to the digital low-pass filters after the multipliers to select the low-frequency component.</p> <p>The block diagram as shown in the question does not make clear that <span class="math-container">$v(t)$</span>, <span class="math-container">$y(t)$</span> and <span class="math-container">$\hat d[k]$</span> are complex signals (although this is clearer if you read through the linked reference).</p>
https://dsp.stackexchange.com/questions/74501/lpf-in-the-stage-of-iq-demodulator-is-it-a-analgor-filter-or-digital-filter
Question: <p>I designed a digital filter using the fdatool of MATLAB and obtained the filter coefficients from the tool.</p> <p>The problem is that I designed a 4th-order filter. This gave me 5 filter values </p> <pre><code>h[] = {0.1930,0.2035,0.2071,0.2035,0.1930} x[k] = Discrete time input signal </code></pre> <p>Now on using the formula</p> <pre><code>Output = h[k]*x[n-k]; </code></pre> <p>Output represents the final filtered value. Although the results are coming out fine, I am not able to find out how those coefficients are obtained by MATLAB and how mere multiplication (convolution) gives the final filtered response for any sample.</p> <p>Any link or explanation will do. I wish to know the complete back-end working of filter coefficient calculation.</p> <p>Please comment if my question is unclear somewhere.</p> <p>Thanks :)</p> Answer: <p>We can try a very short introduction:</p> <ol> <li>Every filter represents a Linear Time-Invariant (LTI) system</li> <li>Every LTI system can be completely described by its transfer function or its impulse response. The two can be converted into each other by the Fourier Transform</li> <li>Filter coefficients are derived from the impulse response or transfer function</li> <li>The exact nature of the filter coefficients depends on the algorithm (there are quite a few of those)</li> <li>In the case of the simplest algorithm, the direct convolution FIR (Finite Impulse Response) filter, the filter coefficients are simply the impulse response of the LTI system.</li> <li>In most other algorithms the relationship is much more complicated and textbook study is indeed required.</li> <li>The whole subject of LTI systems, transfer functions, Fourier Transforms, amplitude responses, phase responses, etc. is probably another textbook's worth of stuff</li> </ol>
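Point 5 is the asker's case: the coefficients from fdatool are the impulse response, and filtering is the convolution sum y[n] = sum_k h[k] x[n-k]. A minimal sketch with the coefficients from the question:

```python
h = [0.1930, 0.2035, 0.2071, 0.2035, 0.1930]

def fir_filter(h, x):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k], zero initial state."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]
        y.append(acc)
    return y

# Feeding an impulse returns the coefficients themselves (point 5), and a
# constant input settles to sum(h) ~ 1.0001, i.e., roughly unity DC gain,
# which is what a low-pass filter should do.
y = fir_filter(h, [1.0] * 10)
```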
https://dsp.stackexchange.com/questions/1243/what-do-the-filter-coefficients-in-a-digital-filter-represent
Question: <p>My objective is to build a noise shaping filter from a given transfer function (in one case) and from a given PSD (for another case). Checking my previous questions you can see that this topic has been keeping me busy for a long time. You can check <a href="https://dsp.stackexchange.com/questions/52657/noise-shape-filter-to-obtain-a-given-psd">this</a> for my first attempt. First, I have noticed that <span class="math-container">$SD = \sqrt{PSD}$</span>, and not the amplitude spectral density. Secondly, I have tried the simple approach of feeding white noise into a filter to get a shaped spectrum. My questions are: Is what I used as a source real white noise? I have used the Simulink block Band-Limited White Noise, which asks for: noise power, sample time, seed. I filled these in by trial and error to get an average power spectral density of 1</p> <pre><code> prop = 1; sample = prop*0.1; pwr_white = prop; % autotuning to perform pwr == 1 unit^2/Hz seed = randi([1 23341]); </code></pre> <p>Applying these parameters to the block in Simulink and feeding the output of the Band-Limited White Noise block into an Averaging Power Spectral Density block, the average PSD value is <span class="math-container">$\pi$</span> times smaller than the noise power <code>pwr_white</code>. This is exactly how it should work. So from this information I suppose that my noise actually has PSD = 1. I have tried a similar procedure in MATLAB. Using <code>wgn(t_sim/sample,1,0)</code> I get completely different results. But <code>var(wgn(t_sim/sample,1,0))</code> is 1 and <code>var(band limited white noise)</code> is 10. I don't know where I am wrong.</p> <p>Will using <span class="math-container">$H(s)$</span> as a filter produce the right noise shape in the continuous domain? Looking at the answer cited before, I think yes. 
Taking the simulated noise in the continuous domain, I then need to obtain the PSD (and then the SD) to compare with the noise characteristic that I want to reproduce. I came to the conclusion that my initial approach was wrong: I was trying to use a noise shaping filter simulated in continuous time and then compute a Welch approximation. In this approach I used the MATLAB function <code>pwelch</code></p> <pre><code>for i =[1e-5, 1e-4, 1e-3, 1e-2, 1e-1 1] f_range = linspace(i,i*10,1000) [pxx_noise_t] = pwelch(noise_t,[],[],f_range); loglog(f_range,sqrt(pxx_noise_t),'r','LineWidth',1.2) hold on end </code></pre> <p>The Welch approximation is used here with a Hamming window (the default) and 50% overlap. Giving f_range as a vector makes the approximation be computed over the specified range of frequencies. The for loop is used to put as many points as possible in the approximation; the code computes one Welch approximation per decade. Actually, I am not sure about the division by <span class="math-container">$2\pi$</span> of noise_t. I have some doubts about the units of the PSD. Are they by default in (unit/rad/s)^2?</p> <p>However, the approach used seems not to be suitable for the Welch approximation, or rather for the implementation of a noise shaping filter. One main reason is that I am not working in a discrete domain. So I need to build a filter in the discrete domain. This is something that I have never done before, but of course I have never used pwelch before either.</p> <p>So I need this digital filter. But even if the basic idea of a filter is the same for the continuous and discrete domains, I need some help, even with the definition of white Gaussian noise (WGN). Actually, checking <a href="https://dsp.stackexchange.com/questions/8629/variance-of-white-gaussian-noise/8632#8632">this answer and the comment</a> I have noticed some problems in applying the continuous-domain approach to the discrete case. 
Now I am starting by understanding FIR and IIR filters, but I am not sure which one I should use and how I should convert my H(s) to the z-domain.</p> Answer:
https://dsp.stackexchange.com/questions/53226/noise-shape-digital-filter
Question: <p>I am currently working on digital filters that can predict my input signal (assume that the input signal is bandlimited). In other words, I want my filter to have a flat magnitude response in the bandwidth of interest (let's say <span class="math-container">$0$</span> to <span class="math-container">$\pi/4$</span>), as well as a constant and negative group delay in the bandwidth of interest. Since I have to implement it, the order of the filter should be as low as possible.</p> <p>I noticed that <span class="math-container">$H(s)=1+s\tau$</span> meets all my requirements. However, transferring it to a digital filter using the bilinear transform will destroy its group delay response.</p> <p>I wonder, is there any methodology or something like a MATLAB toolbox to help me design such a filter? Or could anyone provide a prototype?</p> Answer: <p>Here is another version.</p> <p>This is a tricky problem. Allpass filters clearly don't work here, since they have a strictly monotonically decreasing phase, so the group delay is always positive. That means the best we can do is optimize over a limited frequency region.</p> <p>A better choice are probably minimum-phase filters. These can indeed have negative group delay. Here is why: the inverse of a stable minimum-phase filter is also stable and minimum phase. The inverse has the negative group delay of the original filter, i.e.,</p> <p><span class="math-container">$$\tau_g\left(\frac{1}{H[z]}\right) = -\tau_g(H[z])$$</span></p> <p>The group delay tends to be negative on an upwards slope. An interesting choice are low-shelf filters. Here is an example of a low shelf with a gain of -6 dB, a center frequency of 4 kHz and Q of 1, sampled at 48 kHz. You get a group delay of about -1.1 samples and a fairly flat amplitude response up to about 1 kHz or thereabouts.</p> <p><a href="https://i.sstatic.net/KzT0I.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KzT0I.jpg" alt="enter image description here" /></a></p>
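The sign flip of the group delay under inversion is easy to verify numerically. A toy example of my own (not the lowshelf from the answer): the minimum-phase FIR H(z) = 1 - 0.5 z^-1 has negative group delay near DC, about -0.97 samples at omega = 0.1, and its stable inverse has the opposite sign:

```python
import numpy as np

def group_delay_at(b, a, w):
    """Group delay in samples: -d(phase)/d(omega), by central finite difference."""
    def phase(c, w):
        return np.angle(np.dot(c, np.exp(-1j * w * np.arange(len(c)))))
    dw = 1e-6
    dnum = phase(b, w + dw) - phase(b, w - dw)
    dden = phase(a, w + dw) - phase(a, w - dw)
    return -(dnum - dden) / (2 * dw)

b, a = [1.0, -0.5], [1.0]             # minimum phase: single zero at z = 0.5
gd_fwd = group_delay_at(b, a, 0.1)    # negative near DC
gd_inv = group_delay_at(a, b, 0.1)    # stable inverse: sign flipped
```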
https://dsp.stackexchange.com/questions/90236/design-of-digital-filters-with-negative-group-delay
Question: <p>I have been researching Wave Digital Filters and looking for one of the foundational papers by Alfred Fettweis (1971)</p> <p>A. Fettweis &quot;Digital filter structures related to classical filter networks&quot;, Archiv für Elektronik und Übertragungstechnik, 25, 79-89 (1971)</p> <p>This is self-referenced in some of his early papers, and seems to be referenced everywhere by much of the subsequent work on this topic, even in very recent work.</p> <p>I can't seem to be able to find it though. Any one has a copy or can post a link?</p> <p>Many thanks F</p> Answer: <p>After some further research I managed to find a reprint of the paper on Internet Archive.</p> <p><a href="https://archive.org/details/selectedpapersin0000unse_k6m6/" rel="nofollow noreferrer">IEEE Acoustics, Speech, and Signal Processing Society. Digital Signal Processing Committee; Selected papers in digital signal processing, II, page 475</a></p>
https://dsp.stackexchange.com/questions/95993/where-to-get-fettweis-1971-digital-filter-structures-related-to-classical-fil
Question: <p>First the question(s):</p> <blockquote> <p>How should I write unit tests for a digital filter (band-pass/band-stop) in software? What should I be testing? Is there any sort of <em>canonical test suite</em> for filtering?</p> <p>How to select test inputs, generate expected outputs, and define &quot;conformance&quot; in a way that I can say the actual output <em>conforms</em> to expected output?</p> </blockquote> <p>Now the context:</p> <p>The application I am developing (electromyographic signal acquisition and analysis) needs to use digital filtering, mostly band-pass and band-stop filtering (C#/.Net in Visual Studio).</p> <p>The previous version of our application has these filters implemented with some legacy code we could use, but we are not sure how mathematically correct it is, since we don't have unit-tests for them.</p> <p>Besides that we are also evaluating <a href="http://filtering.mathdotnet.com/" rel="noreferrer">Mathnet.Filtering</a>, but their unit test suite doesn't include subclasses of <code>OnlineFilter</code> yet.</p> <p>We are not sure how to evaluate one filtering library over the other, and the closest we got is to filter some sine waves to eyeball the differences between them. 
That is not a good approach regarding unit tests either, which is something we would like to automate (instead of running scripts and evaluating the results elsewhere, even visually).</p> <p>I imagine a good test suite should test something like the following:</p> <ul> <li>Linearity and Time-Invariance: how should I write an automated test (with a boolean, &quot;pass or fail&quot; assertion) for that?</li> <li>Impulse response: feeding an impulse to the filter, taking its output, and checking if it &quot;conforms to expected&quot;, and in that case: <ul> <li>How would I define the <em>expected</em> response?</li> <li>How would I define <em>conformance</em>?</li> </ul> </li> <li>Amplitude response of sinusoidal input;</li> <li>Amplitude response of step / constant-offset input;</li> <li>Frequency response (including half-power, cut-off, slope, etc.)</li> </ul> <p>I could not be considered an expert in programming or DSP (far from it!) and that's exactly why I am cautious about filters that &quot;seem&quot; to work well. It has been common for us to have clients questioning our filtering algorithms (because they need to publish research where data was captured with our systems), and I would like to have <em>formal proof</em> that the filters are working as expected.</p> Answer: <p>Some thoughts on it at least.</p> <p>First, if you can show linearity and time-invariance, and you know that the filter has the correct impulse response, you are home, given that the filter is stable. So in this case there is no need to run various other input signals, and the frequency response follows from the impulse response.</p> <p>Checking the impulse response is obviously quite simple. Linearity and time-invariance, I guess, are actually really complicated (any specific input signal you test may just work out of coincidence, but others may not). 
However, as the filter code in most cases shouldn't be that complicated, it should be possible to eyeball the code and see that it does what is expected (no non-linear operators, no if-cases based on signal values, etc.).</p> <p>That leaves stability. For IIR filters this is a really complicated issue and I will not go into that discussion. FIR filters will always be stable (unless they are implemented using some recursive algorithm). However, you may run into numerical issues relating to overflow (most likely not underflow) and round-off noise. You can find formal proofs that an FIR filter will not overflow (the maximum output is the sum of the absolute values of the impulse response coefficients times the maximum input magnitude). For round-off errors you can find statistical expressions, but they are not of much use in formally proving anything (rather, they serve as an argument that the expected output with "infinite precision" and the actual output still conform).</p> <p>Also note that all impulse responses etc. should be evaluated using the actual (possibly rounded) coefficients, not the (possibly higher-precision) ones obtained from the filter design tool. This is because the filter will always realize the transfer function given by the coefficients that it uses, no matter how they were derived. This may still not be the impulse response you get at the output, though, as there may be round-off issues. An easy way to see that is to use a scaled impulse (i.e., with amplitude different from one) and note that typically not only the magnitude of the frequency response will change, but also the shape. The filter will still implement the same transfer function, but the round-off noise added to the two different input signals is different.</p> <p>Hope this gives you a bit more insight.</p>
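As a concrete starting point for the impulse-response and linearity checks discussed above, a pure-Python sketch (here `my_filter` is a hypothetical stand-in for the routine under test, and `TOL` is a tolerance you must budget from your own round-off analysis):

```python
def my_filter(x):
    # Stand-in for the implementation under test: a direct-form FIR.
    h = [0.25, 0.5, 0.25]
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

REFERENCE_H = [0.25, 0.5, 0.25]   # the actual (rounded) design coefficients
TOL = 1e-9                        # error budget from round-off analysis

def test_impulse_response():
    impulse = [1.0] + [0.0] * (len(REFERENCE_H) - 1)
    y = my_filter(impulse)
    assert all(abs(yi - hi) < TOL for yi, hi in zip(y, REFERENCE_H))

def test_linearity_spot_check():
    # Spot-check superposition: filter(2*x1 + 3*x2) == 2*filter(x1) + 3*filter(x2)
    x1, x2 = [1.0, -2.0, 3.0, 0.5], [0.0, 1.0, -1.0, 2.0]
    lhs = my_filter([2 * a + 3 * b for a, b in zip(x1, x2)])
    rhs = [2 * a + 3 * b for a, b in zip(my_filter(x1), my_filter(x2))]
    assert all(abs(u - v) < TOL for u, v in zip(lhs, rhs))

test_impulse_response()
test_linearity_spot_check()
```

As the answer notes, a passing spot-check is evidence, not proof; the formal argument for linearity comes from inspecting the code for non-linear operations.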
https://dsp.stackexchange.com/questions/24819/how-to-test-digital-filters
Question: <p>A normal data acquisition chain consists of:</p> <ol> <li>Analog anti-aliasing filter (sampling frequency: $5\textrm{ kHz}$)</li> <li>ADC - digital filter (sampling: 200k samples/sec)</li> <li>Digital low-pass filters</li> <li>DAC</li> </ol> <p>Questions:</p> <ol> <li><p>My question is why an analog anti-aliasing filter is used when there is already a digital low-pass filter after the ADC to prevent aliasing.</p></li> <li><p>If the analog anti-aliasing filter has a sampling frequency of $5\textrm{ kHz}$, the system will not take frequencies greater than $2.5\textrm{ kHz}$,</p> <ul> <li>then why is the ADC frequency $200\textrm{ kHz}$?</li> <li>Doesn't the analog anti-aliasing filter's sampling frequency limit the overall system frequency?</li> </ul></li> </ol> Answer: <p>Question 1: The anti-aliasing filter before the ADC is there exactly for the purpose of rejecting high frequencies that will become lower frequencies (i.e., aliasing) after the ADC. The digital lowpass after the ADC cannot help here, as the aliasing has already happened. Consider this example:</p> <ul> <li>Your ADC has a sampling frequency of Fs=100kHz.</li> <li>Your input signal is a sum of two sine waves, with frequencies 10kHz and 220kHz.</li> <li>After the ADC you would find two sine waves: one at 10kHz, one at 20kHz (220kHz-2*Fs).</li> <li>Hence, aliasing has occurred, and no digital lowpass can remove it.</li> </ul> <p>Question 2: Without more information on the system this cannot be answered. However, here are some thoughts:</p> <ul> <li>a filter of 5kHz requires a sampling frequency of at least 10kHz (ideally). You state you only need 2.5kHz. I think you are mixing something up here.</li> <li>in reality, no anti-aliasing filter is a perfect low-pass, hence its cutoff frequency does not mean that higher frequencies are perfectly blocked. Instead, they are more and more attenuated. To cope with non-ideal anti-aliasing filters, the sampling frequency should be higher than 2 times the cutoff. 
However, the 200kHz from your example still seems quite high to me.</li> </ul>
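The example in the answer can be checked numerically: sampled at Fs = 100 kHz, a 220 kHz sine produces exactly the same samples as a 20 kHz sine, so no digital filter applied afterwards can undo the aliasing (a sketch):

```python
import numpy as np

fs = 100e3
n = np.arange(64)
x_hf = np.sin(2 * np.pi * 220e3 * n / fs)    # 220 kHz input to the sampler
x_alias = np.sin(2 * np.pi * 20e3 * n / fs)  # 20 kHz (= 220 kHz - 2*Fs)
# The two sequences are sample-for-sample identical: the aliasing happens
# at the sampler, before any digital filter can act.
```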
https://dsp.stackexchange.com/questions/35562/why-analog-anti-aliasing-filter-is-used-before-analog-to-digital-converter-when
Question: <p>I have experience with the design of FIR and IIR digital filters. I also know about the Kalman filter, but I am not skilled at using it. Consider the case of recovering a low-frequency signal from discrete samples, where the signal is corrupted by high-frequency noise. It seems a digital low-pass filter and a Kalman filter are two ways of removing the high-frequency noise. When is it best to use a digital low-pass filter, and when is it best to use a Kalman filter?</p> <p><strong>* EDIT *</strong> More specifically, it seems an FIR filter with linear phase or an IIR filter with nearly linear phase might be a better estimator than a Kalman filter in some cases. This might be true when the desired signal is low-frequency and the noise is limited to the upper frequencies. A Kalman filter is designed for Gaussian noise, and I described a case where a linear-phase digital low-pass filter would work very well.</p> Answer:
https://dsp.stackexchange.com/questions/25518/digital-low-pass-filter-vs-kalman-filter
Question: <p>In one system there is a maximum sampling limit of 500 Hz. And in the analog signal, there are waves with a frequency in the range up to 1600 Hz. With 500 Hz sampling, is it possible to remove frequencies higher than 200 Hz using a digital filter so that the aliasing does not occur? Or is an analog low-pass filter necessary? Thanks.</p> Answer: <p>The aliasing occurs when the analog signal is sampled. Therefore, you need to make sure that the analog signal does not contain frequencies higher than Nyquist <strong>before</strong> sampling. In your case, the Nyquist frequency is <span class="math-container">$250 \, \texttt{Hz}$</span>.</p> <p>So you <strong>need an analog filter</strong>. The cut-off should be set lower than Nyquist, depending on how much aliasing your application can accept.</p> <p>You can use a digital filter once the signal is sampled with whichever cut-off you want.</p>
https://dsp.stackexchange.com/questions/93973/digital-filter-response-at-frequencies-higher-than-the-nyquist-frequency
Question: <p><strong>Preamble:</strong><br> I'm designing a sound model for my small submarine game. The model runs on the server, and I want to present the client with a mono-channel wav-stream from his hydrophone (20kHz discretization should suffice; I target the 20Hz-10kHz band). I want that signal to be relatively realistic. Specifically, I need to account for the fact that water dissipates high frequencies much faster. </p> <p><strong>Sound model:</strong><br> I plan to pre-generate (with the help of some RNG and an IFFT) a raw propeller time-domain sound sample from its intensity spectrum. Then I plan to amplitude-modulate it with a shifted shaft rpm x propeller blade count sine wave to get that propeller beat. At this point I need to materialize that signal on the player's sensor. This will probably involve two filters plus a gain: </p> <ul> <li><strong>Some filter</strong> (which is the filter in question) which will account for water dissipating the signal's energy.</li> <li>Bandpass filter, which corresponds to this particular sensor's sensitivity band.</li> <li>Simple amplitude multiplication to account for range, sensor directivity, etc.</li> </ul> <p>According to R.J. Urick's "Principles of Underwater Sound", at a distance of <strong>r</strong> meters from the target, the band level (dB) in frequency bin <strong>f</strong> of such a sound can be approximated using the following formula:</p> <p>[1]: ResultBL = SourceBL - 10 * log10(r * r) - r * F(f)<br> or without the wave-front expansion term (which is trivial):<br> [2]: ResultBL = SourceBL - r * F(f)</p> <p>where F(f) is some smooth monotonic function (a big radical with multiple squared frequencies and constants) of frequency.</p> <p><strong>The questions (all about this one filter, or lack thereof):</strong></p> <ol> <li>Is it possible to design a time-domain digital filter for a given range <strong>r</strong> that implements/approximates the signal distortion [2]? </li> <li>What type of filter would you recommend? 
The problem is not real-time, but calculations should be fast. I'm not limited to causal filters, so I'm thinking of something like a non-causal IIR? </li> <li>What algorithms are applicable to this problem? </li> <li>If such an algorithm involves a transfer function, how should I transform/approximate the F(f) term in it? </li> <li>Will it be fast? What factors affect its speed?</li> <li>Will such a filter work well in the whole band (20Hz-10kHz)?</li> <li>Will such a filter work well on large intensity level variances, e.g. on both very weak and very strong signals? </li> <li>Is such synthesis computationally expensive? </li> </ol> <p>Thank you.</p> Answer: <p>I wouldn't bother much with the precise shape of the filter, because any real passive sonar will boost the higher frequencies at the receiver, and any realistic source is going to be around 70 years old, most likely a double screw, and aspect dependent. Cavitation is depth dependent. A single-pole low-pass filter with a corner around 10 Hz is probably ok.</p> <p>Fidelity is one of those things that can become an obsession. There is ambient noise and self-noise associated with hydroacoustic flow. Higher frequencies exhibit hull shading. No one band covers all the signal types of interest. </p> <p>If you want to be obsessive, look at Mike Porter's Ocean Acoustics web site.</p> <p><a href="http://oalib.hlsresearch.com/" rel="nofollow noreferrer">http://oalib.hlsresearch.com/</a></p> <p>I'm partial to RAM PE, and there at least was a broadband (decomposition of narrowband) demo for the MATLAB version. </p> <p>There was a video game that came out around 20 years ago that collaborated with a company named SONOLYST. They actually got an Oscar for the sound effects for the Hunt for Red October movie. </p> <p>I like Urick. I actually met him a few times and took a week-long class from him, but you might find Ross a bit more helpful.</p> <p>Mechanics of Underwater Noise, Ross, D. 
Elsevier Science, 2013, ISBN 9781483160467, <a href="https://books.google.com/books?id=sdwgBQAAQBAJ" rel="nofollow noreferrer">https://books.google.com/books?id=sdwgBQAAQBAJ</a> </p>
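The r * F(f) attenuation term of formula [2] maps naturally onto a linear-phase FIR designed by sampling the desired magnitude response. A minimal Python sketch; note the F(f) below is a made-up smooth monotonic placeholder, not Urick's actual absorption formula, and the range r is an arbitrary example value:

```python
import numpy as np
from scipy.signal import firwin2, freqz

fs = 20000.0          # 20 kHz sampling rate from the question
r = 5000.0            # range to target in meters (example value)

def F(f):
    # Placeholder absorption coefficient in dB per meter; substitute the
    # actual formula from Urick here. Smooth and monotonic in f.
    f_khz = f / 1000.0
    return 1e-4 * f_khz ** 1.5

# Sample the desired magnitude response on a grid from DC to Nyquist
freqs = np.linspace(0.0, fs / 2.0, 256)
atten_db = r * F(freqs)                  # the r * F(f) term of formula [2]
gains = 10.0 ** (-atten_db / 20.0)       # dB loss -> linear gain

taps = firwin2(257, freqs, gains, fs=fs) # linear-phase FIR approximation

# Check: the realized response should track the desired gains
w, h = freqz(taps, worN=[100.0, 9000.0], fs=fs)
print(np.abs(h))
```

The filter would then be applied to each signal block with, e.g., `scipy.signal.lfilter(taps, 1.0, x)`; one such FIR per range bin can be precomputed and cached.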
https://dsp.stackexchange.com/questions/47454/digital-filter-simulating-hydroacoustic-signal-distortion
Question: <p>I'm using digital filters to apply spectral mangling-type special effects to audio.</p> <p>When using a digital filter (VSTs/standalone DSP programs/outboard digital filters, etc.), especially when using narrow transition bands/brickwall filters, are there any effective ways to remove ringing artifacts introduced by the filter? Please bear in mind I am new to DSP and my level of understanding is very basic.</p> Answer: <p>Sticking with linear systems, removing the ringing is nearly the same as adding back some of the spectral content that your really steep transition filters removed. Why use some crazy scheme to add back the stuff in the "softer" transitions that your hard-edged filters cut out? Just use a more reasonable total filter response in the first place.</p> <p>Going to non-linear systems, you could use some sort of AI pattern matching to determine what kind of sound waveforms would be perceived by a human as ringing, and just gate those waveforms out. But that might just add weird-sounding artifacts of its own (as well as also spreading spectral content outside of your really steep filters).</p>
https://dsp.stackexchange.com/questions/2182/what-methods-can-be-used-to-remove-ringing-artifacts-in-the-output-of-a-digital
Question: <p>I am trying to implement a Chebyshev type I low-pass IIR digital filter in C. I have got the SOS matrix and scale values from Matlab. </p> <p>What is the direct equation or algorithm to implement such a filter?</p> Answer: <p>okay, this is, or can be, stuff straight outa a textbook. by "SOS", you mean "2nd-order sections"? i usually call those "biquads". maybe that's not the best term for it in the s-plane. i dunno.</p> <p>anyway, depending on your passband ripple, you should have the resonant frequency and Q for each LPF biquad. you can use those two parameters and get digital biquads directly out of that using the Audio EQ Cookbook. google will find it fast for you.</p> <p>otherwise convert your Tchebyshev from s-plane to z-plane however you want. the Cookbook uses the bilinear transform, which means every bump or feature in the analog frequency response will have a corresponding bump or feature in the digital frequency response. if you're more interested in matching the impulse response, you would transform H(s) -> H(z) using the Impulse Invariant method.</p> <p>someone will probably vote me down for not answering the question completely (they've done it in the past). big deal.</p>
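For reference, running an SOS matrix is just the biquad difference equation applied section by section. A Python sketch of the algorithm (direct form II transposed; scipy is used only to generate example coefficients and to cross-check the result, and the same loop ports directly to C):

```python
import numpy as np
from scipy.signal import cheby1, sosfilt

def run_sos(sos, x):
    """Cascade of direct-form-II-transposed biquads.
    sos rows are [b0, b1, b2, a0, a1, a2] with a0 == 1."""
    y = np.asarray(x, dtype=float)
    for b0, b1, b2, a0, a1, a2 in sos:
        s1 = s2 = 0.0                     # two state variables per section
        out = np.empty_like(y)
        for n, xn in enumerate(y):
            yn = b0 * xn + s1
            s1 = b1 * xn - a1 * yn + s2
            s2 = b2 * xn - a2 * yn
            out[n] = yn
        y = out                           # output of one section feeds the next
    return y

# Example: 4th-order Chebyshev type I lowpass, 1 dB ripple, cutoff 0.3*Nyquist
sos = cheby1(4, 1, 0.3, output='sos')
x = np.random.default_rng(0).standard_normal(64)
print(np.allclose(run_sos(sos, x), sosfilt(sos, x)))
```

MATLAB's scale values can be folded in by multiplying each section's b coefficients by its scale factor (or multiplying the final output by their product).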
https://dsp.stackexchange.com/questions/13204/algorithm-for-implementing-an-iir-digital-filter-chebyshev-type-i-low-pass
Question: <p>Can oversampling decrease the delay of a digital IIR filter? Imagine there is some digital signal going into a processor that applies a low-pass filter. Let's say it's a 1 kHz sample rate and the filter is a second-order Gaussian lowpass with the -3 dB point at 100 Hz.</p> <p>The output of this digital filter will be delayed, and this delay will vary with frequency. Can this delay, in the band of our interest, which is 0 - 500 Hz for a 1 kHz sample rate, be decreased if we increase the sample rate of the incoming signal?</p> <p>What if the incoming signal had a 2 kHz sample rate (2x oversampling): would it decrease the digital filter delay?</p> <p>I have read that the delay can be decreased by increasing the bandwidth or by putting the -3 dB cut-off point higher in frequency. Is that true? I know making the low-pass cutoff higher in frequency decreases the delay, but what about the bandwidth?</p> <p>If we increase the sample rate, we increase the bandwidth, but in the case of a low-pass filter it will be the bandwidth that's being attenuated. I have a feeling that the author who wrote about the bandwidth meant the passband bandwidth, and in that case it's the same thing as making the low-pass cutoff higher.</p> <p>I tried two filter-simulating programs, Iowa Hills IIR and MicroModeler.com, and got conflicting results. Iowa Hills showed that the delay in the band of interest is increased slightly by oversampling, while MicroModeler showed a small decrease.</p> <p>Does oversampling increase, decrease, or not change the delay?</p> Answer: <p>Note that a given digital filter impulse response $h[n]$ has a frequency response $H(e^{j\omega})$ that corresponds to an equivalent analog frequency response through the <strong>sampling relations</strong>, i.e., with the given sampling rate Fs.</p> <p>This means that if you change the sampling rate of the input of this filter, then the effective analog filter response will also be changed, and probably that's not what you want to have. 
So you have to <strong>redesign</strong> the digital filter to recreate the desired analog filter response.</p> <p>In particular, when you increase the sampling rate by two, then the digital filter $h[n]$ should be redesigned such that its digital frequency response cutoff frequency is <strong>halved</strong>, so that the associated analog cutoff frequency would remain the same. </p> <p>That means that, since the digital cutoff frequency is halved, the filter impulse response is doubled in length, the delay is doubled too. But don't worry, as the sampling period is also halved, the associated analog delay remains the same.</p> <p>Of course, delay here refers to that associated with the decay time of the digital IIR impulse response $h[n]$.</p>
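This is easy to check numerically; a sketch (hedged: a Butterworth lowpass stands in for the Gaussian filter of the question, and group delay near DC stands in for "the delay"):

```python
import numpy as np
from scipy.signal import butter, group_delay

# Same analog corner (100 Hz), two sampling rates
b1, a1 = butter(2, 100, fs=1000)   # digital cutoff 0.2 * fs
b2, a2 = butter(2, 100, fs=2000)   # digital cutoff 0.1 * fs, i.e. "halved"

# Group delay (in samples) near DC, converted to seconds
_, gd1 = group_delay((b1, a1), w=[5.0], fs=1000)
_, gd2 = group_delay((b2, a2), w=[5.0], fs=2000)
print(gd1[0] / 1000, gd2[0] / 2000)   # roughly equal delay in seconds
```

The delay in samples roughly doubles at the doubled rate, but the delay in seconds stays about the same, which is the point of the answer.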
https://dsp.stackexchange.com/questions/51498/digital-iir-filter-delay-and-oversampling
Question: <p>I'm trying to implement a digital filter, which is given by the following transfer function:</p> <p><span class="math-container">$$ 1+2V K \frac{K+c_m+2Kz^{-1}+(K-c_m)z^{-2}}{1+2Kc_m+K^2+(2K^2-2)z^{-1}+(1-2Kc_m+K^2)z^{-2}} $$</span> <span class="math-container">$$ +V^2K^2\frac{1+2z^{-1}+z^{-2}}{1+2Kc_m+K^2+(2K^2-2)z^{-1}+(1-2Kc_m+K^2)z^{-2}} $$</span></p> <p>My interpretation of the coefficients is this:</p> <p><span class="math-container">$a0_1 = 1+2V K$</span><br/> <span class="math-container">$a1_1 = 2Kc_m+K^2+(2K^2-2)$</span><br/> <span class="math-container">$a2_1 = (1-2Kc_m+K^2)$</span><br/> <span class="math-container">$b0_1 = K+c_m$</span><br/> <span class="math-container">$b1_1 = 2K$</span><br/> <span class="math-container">$b2_1 = K-c_m$</span><br/><br/></p> <p><span class="math-container">$a0_2 = V^2K^2$</span><br/> <span class="math-container">$a1_2 = 2Kc_m+K^2+(2K^2-2)$</span><br/> <span class="math-container">$a2_2 = (1-2Kc_m+K^2)$</span><br/> <span class="math-container">$b0_2 = V^2K^2$</span><br/> <span class="math-container">$b1_2 = 2*(V^2K^2)$</span><br/> <span class="math-container">$b2_2 = V^2K^2$</span><br/><br/></p> <p>After dividing all the coefficients by a0, I try implementing them using a standard biquad:</p> <p><span class="math-container">$y[n] = b0*x[n] + b1 * x[n-1] + b2 * x[n-2] - a1 * y[n-1] - a2 * y[n-2]$</span></p> <p>in parallel, with their outputs summed.</p> <p>I haven't managed to get it working. <br/> Am I on the right path? Is there anything I'm doing terribly wrong, or not doing? 
Thanks very much for any help!</p> Answer: <p>You seem to do a few things wrong</p> <ol> <li>Start with just transcribing each fraction, ignore the factors in front of the fraction and &quot;1&quot; in front</li> <li>Make sure you sort by powers of <span class="math-container">$z$</span></li> <li>Normalize all coefficients to <span class="math-container">$a_0$</span></li> <li>Multiply all <span class="math-container">$b$</span> coefficients with the factor for the fraction.</li> <li>The &quot;1&quot; is a third filter</li> </ol> <p>Example <span class="math-container">$$a0_1 = 1+2Kc_m+K^2 \\ a1_1 = 2K^2-2$$</span></p>
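Following those steps, the parallel decomposition can be checked against the printed transfer function evaluated directly on the unit circle; a sketch with arbitrary example values for $K$, $c_m$ and $V$:

```python
import numpy as np
from scipy.signal import freqz

K, cm, V = 0.5, 0.3, 0.2   # arbitrary example parameter values

# Shared denominator, sorted by powers of z^-1 (steps 1-2)
a = np.array([1 + 2*K*cm + K**2, 2*K**2 - 2, 1 - 2*K*cm + K**2])

# Numerators, each multiplied by the factor in front of its fraction (step 4)
b1 = 2*V*K * np.array([K + cm, 2*K, K - cm])
b2 = V**2 * K**2 * np.array([1.0, 2.0, 1.0])

w = np.linspace(0.01, np.pi - 0.01, 64)
_, H1 = freqz(b1, a, worN=w)
_, H2 = freqz(b2, a, worN=w)
H_parallel = 1.0 + H1 + H2           # the leading "1" is the third branch (step 5)

# Direct evaluation of the transfer function as printed in the question
zi = np.exp(-1j * w)                 # zi stands for z^-1 on the unit circle
num1 = (K + cm) + 2*K*zi + (K - cm)*zi**2
num2 = 1 + 2*zi + zi**2
den = (1 + 2*K*cm + K**2) + (2*K**2 - 2)*zi + (1 - 2*K*cm + K**2)*zi**2
H_direct = 1 + 2*V*K*num1/den + V**2*K**2*num2/den
print(np.allclose(H_parallel, H_direct))
```

For the actual biquad implementations you would additionally divide each b and a array by `a[0]` (step 3); `freqz` does not care about the normalization, so the check above works either way.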
https://dsp.stackexchange.com/questions/70912/help-implementing-digital-filter-from-tansfer-function
Question: <p>I'm currently attempting to study up on adaptive digital filters. My book presents the diagram I've included below and I'm having trouble understanding conceptually what it's indicating. The problem deals with noise cancelation. The idea is that someone is driving and makes a phone call. The <em>x(k)</em> is their voice input. There's a reference mic at <em>v(k)</em> which picks up road noise. I know that the ultimate goal is to filter road noise from our transmitted voice signal.</p> <p>The desired output is obviously:</p> <p>$$ d(k)=x(k)+v(k) $$</p> <p>The error in this case is:</p> <p>$$ e(k)=x(k)+v(k)-y(k) $$</p> <p>Taking a quote from my book, it states</p> <blockquote> <p>If the speech <em>x(k)</em> and the additive road noise <em>v(k)</em> are uncorrelated with one another, then the minimum possible value for $e^2(k)$ occurs when <em>y(k) = v(k)</em>, which corresponds to the road noise being removed completely from the transmitted speech signal e(k).</p> </blockquote> <p>I don't understand how <em>e(k)</em> is our output of the system though. It seems to me that if we minimize our error, then it approaches zero. This means that $d(k)-y(k) = e(k)=0$ Consequently if our output is the error and we've minimized it, it seems like we're outputting 0 not a transmitted signal <em>e(k)</em> with the road noise removed!? I guess I'm asking why our desired output <em>d(k)</em> isn't our output....why is the error the output?</p> <p>Can somebody help me understand this conceptually? Thank you for your help! Please let me know if I need to clarify anything.</p> <p><img src="https://i.sstatic.net/Jggp9.png" alt="enter image description here"></p> Answer: <p>Judging from the figure, the situation is slightly different from your explanation in the question. The noise $v(k)$ is the actual noise in the signal, not the noise picked up by the reference microphone. So the noisy signal is $d(k)=x(k)+v(k)$. 
If you knew $v(k)$ you could simply subtract it from $x(k)$ without the need to use an adaptive filter. What you have is another noise signal $r(k)$, which is a filtered version of the noise $v(k)$. This unknown filter is depicted by the "black box" in the figure. It is filtered because the transfer function from the noise source (road, tires, etc.) to the reference microphone is different from the transfer function to the microphone recording the speech. What the adaptive filter is trying to do is estimate the noise in the speech signal from the reference noise, i.e. it tries to invert the unknown filter in the black box. This can be achieved by minimizing the power of the error signal $e(k)$. The reason is that if you assume that speech and noise are uncorrelated, the output of the adaptive filter can only reduce the noise component in $d(k)$, not the speech component. So the power of $e(k)$ becomes a minimum if the output of the adaptive filter $y(k)$ equals $v(k)$. You don't need to worry that the speech signal is removed because $y(k)$ cannot model the speech signal at all, because noise and speech are assumed to be uncorrelated. So ideally the error signal $e(k)$ contains only clean speech. Note that due to noise and speech being uncorrelated, the power of $e(k)$ can never become zero. Neither can the error signal itself become zero (for all $k$).</p>
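A toy LMS simulation illustrates why the error is the output. Hedged: for simplicity the adaptive filter here models the path from the reference noise to the primary microphone (the reverse of the black box in the figure), which exhibits the same behaviour, and the "speech" is just a tone:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
x = np.sin(2 * np.pi * 0.01 * np.arange(N))   # "speech" x(k) (a tone here)
ref = rng.standard_normal(N)                  # reference noise input
path = np.array([0.9, 0.4, -0.2])             # unknown noise path ("black box")
v = np.convolve(ref, path)[:N]                # noise reaching the speech mic
d = x + v                                     # noisy recording d(k)

L, mu = 5, 0.01                               # filter length and LMS step size
w = np.zeros(L)
e = np.zeros(N)
for n in range(L, N):
    r_vec = ref[n - L + 1:n + 1][::-1]        # most recent L reference samples
    y = w @ r_vec                             # adaptive filter output y(k)
    e[n] = d[n] - y                           # error = cleaned-speech estimate
    w += mu * e[n] * r_vec                    # LMS weight update

# After convergence e(k) is close to x(k): the noise is cancelled, but
# e(k) itself is NOT driven to zero, because y(k) cannot model the speech.
residual = np.mean((e[-1000:] - x[-1000:]) ** 2)
print(residual, np.mean(v ** 2))
```

The residual noise power in `e` ends up far below the original noise power, while the speech component passes through untouched, which is exactly the point of the answer.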
https://dsp.stackexchange.com/questions/22325/adaptive-digital-filter-block-diagram-question
Question: <p><strong>Short question</strong><br> What are the main steps in calculating the <a href="http://en.wikipedia.org/wiki/Frequency_response" rel="nofollow noreferrer">frequency response</a> of a digital filter from its structure?</p> <p><strong>Detailed question</strong><br> Let's suppose there is a discrete FIR filter with a known structure, implemented in a programming language (for instance, with the structure shown in the picture):</p> <pre><code> +-----+ +-----+ | | | | x (k) &gt;---+---| T |---| T | | | | | | | +-----+ +-----+ | | | | +-----+ | | | | | | | x 2 | | | | | | | +-----+ | | | | | +----------------+ +-----+ | | ___ | | 1 | \---| \ |---| x - |---&gt; y (k) | /__ | | 4 | +----------------+ +-----+ </code></pre> <p>It is possible to pass an input signal into the filter (as a vector of integers describing the magnitude), like this:</p> <pre><code>in [0, 0, 0, 0, 4, 0, 0, 0, 0, 0] </code></pre> <p>which describes a signal with the following magnitude:</p> <pre><code> | x (k) | | . . . . . | . . . . k 9 8 7 6 5 4 3 2 1 0 </code></pre> <p>and get the corresponding output signal, like this:</p> <pre><code>out [0, 0, 0, 0, 1, 2, 1, 0, 0, 0] </code></pre> <p>which describes a signal with the following magnitude:</p> <pre><code>y (k) | . . . | | | . . . . k 9 8 7 6 5 4 3 2 1 0 </code></pre> <p>The question is: how does one calculate the frequency response of the filter (in stages)?</p> <p><strong>Notes</strong><br> With known input and output signals <a href="https://dsp.stackexchange.com/questions/16671/output-of-a-system-given-its-transfer-function-and-input-beginner">we can calculate</a> the transfer function of the filter from their transform images (the z-transform for discrete signals):<br> 1. $$X(z) = \mathcal{L} \{x(k)\}$$<br> 2. $$Y(z) = \mathcal{L} \{y(k)\}$$<br> 3. $$H(z) = Y(z) / X(z)$$<br> And if we then take the Fourier transform of the transfer function, we will get the frequency response (AFC):<br> 4. 
$$FR = \mathcal{F} \{H(z)\}$$<br> Is it correct?</p> Answer: <p>For the given system you can write down the input-output relation as</p> <p>$$y[k]=\frac14\left(x[k]+2x[k-1]+x[k-2]\right)\tag{1}$$</p> <p>because $T$ (or $z^{-1}$) denotes a delay element, which delays its input by one sample interval. The $\mathcal{Z}$-transform of (1) is (assuming zero initial conditions)</p> <p>$$Y(z)=\frac14\left(X(z)+2X(z)z^{-1}+X(z)z^{-2}\right)=\frac{X(z)}{4}\left(1+2z^{-1}+z^{-2}\right)\tag{2}$$</p> <p>From (2) you get the system's transfer function</p> <p>$$H(z)=\frac{Y(z)}{X(z)}=\frac14\left(1+2z^{-1}+z^{-2}\right)\tag{3}$$</p> <p>Since the system is stable (any FIR filter is), the frequency response can be obtained by evaluating the transfer function on the unit circle $z=e^{j\omega}$:</p> <p>$$H(e^{j\omega})=\frac14\left(1+2e^{-j\omega}+e^{-2j\omega}\right)\tag{4}$$</p> <p>You also could have arrived at (4) by writing down the impulse response from (1)</p> <p>$$h[k]=\frac14\left(\delta[k]+2\delta[k-1]+\delta[k-2]\right)\tag{5}$$</p> <p>and taking the (discrete-time) Fourier transform, which also results in (4).</p> <p>Finally, a few notes concerning misconceptions in your question:</p> <ul> <li><p>$\mathcal{L}$ usually denotes the Laplace transform, which is defined for continuous functions. $X(z)$ is the $\mathcal{Z}$-transform of the sequence $x[k]$, which can be written as $X(z)=\mathcal{Z}\{x[k]\}$.</p></li> <li><p>The frequency response of a linear time-invariant (LTI) system is the Fourier transform of the system's impulse response: $H(e^{j\omega})=\mathcal{F}\{h[k]\}$, not the Fourier transform of $H(z)$ (which you seem to think, judging from the last equation in your question).</p></li> </ul>
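Both the example input/output pair from the question and the frequency response (4) can be confirmed numerically, e.g. in Python:

```python
import numpy as np
from scipy.signal import lfilter, freqz

b = [0.25, 0.5, 0.25]          # h[k] from (5): (1/4)(1, 2, 1)

# Reproduce the example: input 4*delta produces the (1, 2, 1) output
x = [0, 0, 0, 0, 4, 0, 0, 0, 0, 0]
print(lfilter(b, [1.0], x))    # matches the "out" vector from the question

# Frequency response (4), evaluated at omega = 0 and omega = pi
w, H = freqz(b, worN=[0.0, np.pi])
print(abs(H))                  # gain 1 at DC, 0 at Nyquist: a lowpass
```

Evaluating `freqz` on a dense grid of frequencies gives the full magnitude and phase response of (4).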
https://dsp.stackexchange.com/questions/21806/calculation-frequency-response-of-digital-filter-with-known-structure
Question: <p>I am having trouble wrapping my head around digital filters with different orders of numerator and denominator. Let me know if any of these points is wrong:</p> <ol> <li>All (digital or analog) transfer functions have the same number of poles and zeros, <em>if</em> you include the ones at infinity. So $H(s) = 1/s$ has a pole at the origin and a zero at infinity, which is important for visualizing the amplitude response surface in the S plane.</li> <li>But usually when we say "number of poles" or zeros we mean <em>finite</em> poles or zeros. So $H(s) = 1/s$ is considered to have one pole and no zeros.</li> <li>Digital filters designed by bilinear transform from analog filters always have the same number of poles and zeros (and none are ever at infinity). But digital filters can also be made with different orders of numerator and denominator.</li> <li>To find the poles and zeros of a digital transfer function in the general case, you first must express it as positive powers of z ("controls engineer format") and then find the roots of the numerator and denominator, same way as an analog filter. So for example, a single-sample delay in "DSP engineer format" $H(z) = z^{−1}$ is rewritten as $H(z) = 1/z$, which shows that it has a pole at the origin and a zero at infinity, so it's considered to have "one pole and no (finite) zeros".</li> </ol> <p>So then I become confused:</p> <ol> <li>FIR filters are described as "all-zero filters". <a href="http://www.mathworks.com/help/signal/ref/filtfilt.html" rel="noreferrer">1</a> <a href="http://www.vyssotski.ch/BasicsOfInstrumentation/SpikeSorting/Design_of_FIR_Filters.pdf#page=4" rel="noreferrer">2</a> They can be represented as a transfer function like $$H(z) = b_0 + b_1 z^{-1} + b_2 z^{-2}$$ But if you convert to positive powers of z to find the poles and zeros, you get: $$H(z) = \frac{b_0 {z}^{2} + b_1 z + b_2}{z^2}$$ which has just as many poles at the origin as there are zeros. 
What is the significance of these poles? These are each the $H(z) = z^{−1}$ delay elements used to produce the feedforward signals? Seems like the poles end up at the origin if the delay elements are not fed back to the input? So FIR filters are not actually all-zero filters?</li> <li>Similarly, maxflat filters (Selesnick-Burrus generalized Butterworth) are described as having "more zeros than poles", which is supposed to be computationally advantageous. (Why?) The <a href="http://www.mathworks.com/help/signal/ug/iir-filter-design.html" rel="noreferrer">Matlab example</a> produces <code>b = [0.0950 0.2849 0.2849 0.0950]</code> and <code>a = [1.0000 -0.2402]</code>. I think this is "negative powers of z" format, so this would represent a transfer function $$H(z) = \frac {0.0950 + 0.2849 z^{-1} + 0.2849 z^{-2} + 0.0950 z^{-3}} {1.0000 - 0.2402 z^{-1}} $$</li> <li>But again, if you convert to positive powers of z, you get: $$H(z) = \frac {0.0950 z^3 + 0.2849 z^2 + 0.2849 z + 0.0950} {1.0000 z^3 - 0.2402 z^2} $$ which has 3 poles again, 2 of which are at the origin. So the number of poles and zeros is again the same, and they're all finite, too.</li> </ol> <p>If adding a zero to a digital transfer function always also adds a finite pole, I don't understand how a filter with "more zeros than poles" can exist, or be advantageous. Might as well use those poles to affect the frequency response if you have them?</p> <p>Is there some convention where poles at the origin are not included in the tally, like the way poles at infinity are not included? If so, why?</p> Answer: <p>If you consider the transfer function of a causal IIR filter</p> <p>$$H(z)=\frac{B(z)}{A(z)}=\frac{\sum_{m=0}^M b_mz^{-m}}{\sum_{n=0}^N a_nz^{-n}},\quad a_0=1$$</p> <p>then you always get the same number of poles and zeros, regardless of the choice of $M$ and $N$ (as already pointed out by Robert). 
However, what is meant by a system with "more zeros than poles", is a system with a numerator degree $M$ greater than the degree $N$ of the denominator. In this case "poles" refers to the poles away from the origin. It is only these poles that have to be implemented. So a system with some poles at the origin is cheaper to implement than a system with all its poles away from the origin. So your argument that you "might as well use those poles ... if you have them" does not really hold, because poles at the origin come without any implementation costs.</p> <p>The remaining question is why such filters with more zeros than poles (away from the origin) can be useful. Let me just give two examples:</p> <ol> <li><p>In implementation as well as in the design process, poles can give a lot of trouble (instability, noise enhancement, local optima in the design process, etc.). On the other hand, FIR filters with good frequency selective properties must often have a very large degree (resulting in a large delay and in high implementation costs). Adding just a few poles can give us the best of both worlds: better filter behavior at a greatly reduced (numerator) degree $M$, and only little trouble because there are only very few poles.</p></li> <li><p>For frequency selective filters with a small desired phase distortion in the passbands, IIR systems with $M&gt;N$ are very useful, because the poles ideally contribute only to the passbands, whereas the zeros must contribute to the passbands (for phase equalization) as well as to the stopbands (zeros on or close to the unit circle). Consequently, we need poles only at angles corresponding to passband frequencies, but we need zeros everywhere.</p></li> </ol> <p>The figure below shows such a system (lowpass filter with approximately linear passband phase response, $M=12, N=6$): <img src="https://i.sstatic.net/xK8sx.png" alt="enter image description here"></p>
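The pole bookkeeping is easy to verify numerically with the maxflat coefficients quoted in the question:

```python
import numpy as np

b = [0.0950, 0.2849, 0.2849, 0.0950]
a = [1.0000, -0.2402]

# Express H(z) in positive powers of z: pad the shorter polynomial with zeros
n = max(len(b), len(a))
zeros = np.roots(b + [0.0] * (n - len(b)))
poles = np.roots(a + [0.0] * (n - len(a)))
print(poles)   # one pole at 0.2402, plus two "free" poles at z = 0
```

Three zeros and three poles in total, but only the single pole away from the origin costs anything to implement, consistent with the point of the answer.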
https://dsp.stackexchange.com/questions/14739/digital-filters-with-more-zeros-than-poles
Question: <p>Hi, I'm a beginner in signal processing. I want to know: what are the passband ripple and stopband attenuation of a digital filter? Thanks.</p> Answer: <p>I hope the plot below helps answer your question. Typically I have seen the &quot;passband ripple&quot; and &quot;stopband attenuation&quot; expressed in dB as shown in the picture, translating the magnitude of the ripples to dB using <span class="math-container">$20\log_{10}$</span> as shown. So the passband ripple is the amount of variation in the amplitude within the designated passband of the filter, and the stopband attenuation is the minimum attenuation level within the designated rejection band of the filter. The frequencies are given as normalized frequencies in units of cycles/sample, where the sampling rate = 1.</p> <p><a href="https://i.sstatic.net/SLjMr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SLjMr.png" alt="Filter specification" /></a></p> <p>Here is a design example showing proper use of the ripple and rejection, along with common techniques used to get a first estimate of the number of taps (in an FIR) that will be needed to achieve the desired specifications. These estimators have been detailed in other posts under the topic of &quot;How many taps do I need...&quot;.</p> <p><a href="https://i.sstatic.net/inS4M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/inS4M.png" alt="Example" /></a></p>
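Both quantities can also be measured directly from a computed frequency response; a sketch using a windowed-sinc FIR as the example filter:

```python
import numpy as np
from scipy.signal import firwin, freqz

taps = firwin(101, 0.25)                 # lowpass, cutoff 0.25*Nyquist (Hamming)
w, H = freqz(taps, worN=4096)
f = w / np.pi                            # normalized so that Nyquist = 1
mag = np.abs(H)

# Designated bands, leaving a transition region around the 0.25 cutoff
passband = mag[f < 0.20]
stopband = mag[f > 0.30]

ripple_db = 20 * np.log10(passband.max() / passband.min())
atten_db = -20 * np.log10(stopband.max())
print(ripple_db, atten_db)               # tiny ripple, roughly 50 dB rejection
```

The band edges chosen here are part of the specification: tighter transition bands demand more taps for the same ripple and attenuation.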
https://dsp.stackexchange.com/questions/38564/whats-the-pass-band-ripple-and-stop-band-attenuation-of-a-digital-filter
Question: <p>Suppose I have a signal <span class="math-container">$\mathbf{x}\in \mathbb{C}^{N}$</span> and a digital filter with impulse response <span class="math-container">$\mathbf{h}\in\mathbb{C}^L$</span>, where <span class="math-container">$L&lt;N$</span>. If we pass the signal through the filter, the output will be <span class="math-container">$\mathbf{y}\in\mathbb{C}^{N+L-1}$</span>. My question is: if I want to truncate the output vector to be the same length as <span class="math-container">$\mathbf{x}$</span>, which elements should I discard? The first <span class="math-container">$L$</span>? And why?</p> Answer: <p>Truncation will always result in an error. Which truncation scheme is best depends on the specific filter, your signal, and what classes of error your application is more or less sensitive to.</p> <p>As a rough rule of thumb: for a minimum-phase filter you most likely want to truncate the end; for a linear-phase filter you may want to split it 50/50 between the beginning and the end; for a channel equalizer it really depends on how the equalizer was designed.</p>
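For the linear-phase case, the 50/50 split is exactly what "same-size" convolution computes; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 32, 5
x = rng.standard_normal(N)
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # symmetric (linear-phase) example

full = np.convolve(x, h)                   # length N + L - 1
same = np.convolve(x, h, mode='same')      # length N, trimmed (L-1)/2 per side

start = (L - 1) // 2
print(np.allclose(same, full[start:start + N]))
```

The centered trim discards the filter's group delay at the start and the tail ring-out at the end, which is why it is the natural choice for symmetric impulse responses.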
https://dsp.stackexchange.com/questions/89904/truncating-the-output-of-a-digital-filter-which-part-to-discard
Question: <p>Although I have a solid experience in designing audio engines and such, I am fairly new to the realm of Digital Filter Design, particularly IIR and FIR filters. In other words, I'm trying to learn as much as I can on how to design filters and derive their difference equations. I'm starting from the basics, so please bear with me, like I said, I'm trying to learn.</p> <p>Herein is my question: </p> <p>Say I want to design a low-pass filter with a particular cutoff - say 300 Hz. What would be the best way of mathematically deriving the transfer function and then deriving the particular difference equation to implement the filter in Direct Form I and Direct Form II (or only DF-1 for now...)? </p> <p>I have some understanding of transfer functions and how they relate to difference equations from some excellent material on the web, unfortunately some of it assumes a good bit of prior knowledge, so it is more confusing than helpful in my quest. So I guess I need a more step-by-step example which will help me connect the dots.</p> <p>So I'm basically looking for help with a breakdown of the process starting from choosing the cutoff frequency up to deriving the difference equation.</p> <p>Any help will be much appreciated. I am familiar with a lot of the concepts - impulse response, DFT's, the math behind it, I guess what I need more help with is the concept of the z-transform and the pole/zero use to design the filter's transfer function and then how does the cutoff freq. play in all this to finally derive the difference equation.</p> <p>Since I tend to learn best from example, I thought I'd ask here. Thanks a lot to anyone who finds the time to help me out.</p> Answer: <p>Digital filter design is a very large and mature topic and - as you've mentioned in your question - there is a lot of material available. What I want to try here is to get you started and to make the existing material more accessible. 
Instead of digital filters I should actually be talking about discrete-time filters because I will not consider coefficient and signal quantization here. You know already about FIR and IIR filters, and you also know some filter structures like DF I and II. Nevertheless, let me start with some basics:</p> <p>A non-recursive linear time-invariant (LTI) filter can be described by the following difference equation</p> <p>$$y(n)=h_0x(n)+h_1x(n-1)+\ldots +h_{N-1}x(n-N+1)=\sum_{k=0}^{N-1}h_kx(n-k)\tag{1}$$</p> <p>where $y(n)$ is the output sequence, $x(n)$ is the input sequence, $n$ is the time index, $h_k$ are the filter coefficients, and $N$ is the filter length (the number of taps). The filter taps $h_k$ are also the impulse response of the filter because if the input signal is an impulse, i.e. $x(n)=\delta(n)$, then $y(n)=h_n$ (if the filter's memory has been initialized with zeros). Equation (1) describes a linear time-invariant finite impulse response (FIR) system. The sum on the right-hand side of (1) is a convolution sum, i.e. the output signal is obtained by convolving the input signal with the impulse response. This is always true, but for IIR filters we cannot explicitly compute the convolution sum because the impulse response is infinitely long, i.e. there are infinitely many coefficients $h_k$. One important advantage of FIR filters is that they are always stable, i.e. for a bounded input sequence, the output sequence is always bounded. Another advantage is that FIR filters can always be realized with an exactly linear phase, i.e. they will not add any phase distortion apart from a pure delay. 
Furthermore, the design problem is usually easier, as we will see later.</p> <p>A recursive LTI filter is described by the following difference equation:</p> <p>$$y(n)=b_0x(n)+b_1x(n-1)+\ldots+b_Mx(n-M)-\\ -a_1y(n-1)-\ldots-a_Ny(n-N)\tag{2}$$</p> <p>Equation (2) shows that the output is not only composed of weighted and delayed input samples, but also of weighted past output samples. In general, the impulse response of such a system is infinitely long, i.e. the corresponding system is an IIR system. However, there are special cases of recursive filters with a finite impulse response. Note that the impulse response is not anymore given by either the coefficients $b_k$ or $a_k$ as in the case of FIR filters. One advantage of IIR filters is that steep filters with high stopband attenuation can be realized with much fewer coefficients (and delays) than in the FIR case, i.e. they are computationally more efficient. However, one needs to be careful with the choice of the coefficients $a_k$ because IIR filter can be unstable, i.e. their output sequence can be unbounded, even with a bounded input sequence.</p> <p>Filters can be designed according to specifications either in the time (sample) domain or in the frequency domain, or both. Since you've mentioned a cut-off frequency in your question, I assume you're more interested in specifications in the frequency domain. In this case you need to have a look at the frequency responses of FIR and IIR systems. The frequency response of a system is the Fourier transform of its impulse response, assuming that it exists (which is the case for stable systems). The frequency response of an FIR filter is</p> <p>$$H(e^{j\theta})=\sum_{k=0}^{N-1}h_ke^{-jk\theta}\tag{3}$$</p> <p>where $\theta$ is the discrete-time frequency variable:</p> <p>$$\theta=\frac{2\pi f}{f_s}$$</p> <p>with the actual frequency $f$ and the sampling frequency $f_s$. 
From (3) you can see that approximating a desired frequency response by an FIR system is basically a problem of polynomial approximation. For recursive systems we have</p> <p>$$H(e^{j\theta})=\frac{\sum_{k=0}^Mb_ke^{-jk\theta}}{1+\sum_{k=1}^Na_ke^{-jk\theta}}\tag{4}$$</p> <p>and you get a rational approximation problem, which is usually much more difficult than the polynomial approximation problem in the case of FIR filters. From (3) and (4) you can see that the frequency response of an FIR filter is of course only a special case of the response of a recursive filter with coefficients $a_k=0$, $k=1,\dots,N$.</p> <p>Let's now take a quick look at filter design methods. For FIR filters you could take an inverse Fourier transform of the desired frequency response to get the impulse response of the filter, which directly corresponds to the filter coefficients. Since you approximate the desired response by a finite length impulse response you should apply a smooth window to the obtained impulse response to minimize oscillations in the actual frequency response due to Gibbs' phenomenon. This method is called the frequency-sampling method.</p> <p>For simple standard filters like ideal lowpass, highpass, bandpass or bandstop filters (and a few others), you could even analytically calculate the exact impulse response by taking the inverse Fourier transform of the ideal desired response:</p> <p>$$h_k=\frac{1}{2\pi}\int_{-\pi}^{\pi}H(e^{j\theta})e^{jk\theta}d\theta$$</p> <p>This integral is easy to evaluate for piecewise constant desired responses, as is the case for ideal frequency-selective filters. This will give you an infinitely long, non-causal impulse response, which needs to be windowed and shifted to make it finite and causal. This method is known as the window design method.</p> <p>There are of course many other FIR filter design methods. 
One important numerical method is the famous Parks-McClellan exchange algorithm, which designs optimal filters with constant passband and stopband ripples. It is a numerical approximation method and there are many software implementations available, e.g. in Matlab and Octave.</p> <p>The most common IIR design method for frequency selective filters is the bilinear transformation method. This method simply uses analytical formulas for the design of optimal analog filters (such as Butterworth, Chebyshev, Cauer/elliptic, and Bessel filters), and transforms them to the discrete-time domain by applying a bilinear transformation to the complex variable $s$ (analog domain) which maps the (imaginary) frequency axis of the complex $s$-plane to the unit circle in the complex $z$-plane (discrete-time domain). Don't worry if you do not yet know much about complex transfer functions in the analog or discrete-time domain, because there are good implementations of the bilinear transform method available, e.g. in Matlab or Octave.</p> <p>There are of course many more interesting and useful methods, depending on the type of specifications you have, but I hope that this will get you started and will make any material you come across more understandable. A very good (and free) book covering some basic filter design methods (and a lot more) is <a href="http://www.ece.rutgers.edu/~orfanidi/intro2sp/orfanidis-i2sp.pdf">Introduction to Signal Processing</a> by Orfanidis. You can find several design examples there. Another great classic book is <a href="http://rads.stackoverflow.com/amzn/click/0471828963">Digital Filter Design</a> by Parks and Burrus.</p>
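As a concrete end-to-end example for the 300 Hz lowpass asked about in the question, the bilinear-transform route is a few lines with scipy (a sketch; the returned b and a arrays are directly the coefficients of the Direct Form I difference equation):

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 44100
b, a = butter(2, 300, fs=fs)   # 2nd-order Butterworth lowpass via bilinear transform

# Difference equation (Direct Form I):
# y[n] = b[0]x[n] + b[1]x[n-1] + b[2]x[n-2] - a[1]y[n-1] - a[2]y[n-2]
print(b, a)

w, H = freqz(b, a, worN=[300.0], fs=fs)
print(20 * np.log10(abs(H[0])))  # about -3 dB at the cutoff
```

The cutoff is pre-warped internally, so the -3 dB point of the digital filter lands exactly at 300 Hz.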
https://dsp.stackexchange.com/questions/9541/digital-filter-design-basic-principles-iir-fir
Question: <p>I have a second-order analogue high-pass transfer function (unity gain at infinity). Its magnitude response hits the -20 decibel line at a frequency of 5706 Hz (the corner frequency is 18000 and the sample rate is 44100). When I convert this analogue filter to a digital IIR via the BLT method, the digital filter's magnitude response hits the -20 line at 11149 Hz. This will happen with the standard <a href="https://www.w3.org/TR/audio-eq-cookbook/" rel="nofollow noreferrer">RBJ audio EQ cookbook</a> formula, i.e. it's not a problem with my maths. I know that digital filters suffer from cramping and that's what I'm interested in. Is there a formula to predict what an analogue frequency will be cramped to when the digital filter is realized, given the analogue filter's frequency? i.e. something that will tell me 11149 given 5706. Just to be super clear, I'm not asking how to do pre-warping using: <span class="math-container">$ \tan(\frac{\omega_c}{2}) $</span>.</p> Answer: <p>I've come up with a solution.</p> <p>Let $ f_a = $ analogue frequency, $ f_d $ = digital frequency, $ f_s = $ sampling rate and $ f_c = $ corner frequency, with $ \omega_c = \frac{\pi f_c}{f_s} $ and $ \omega_a = \frac{\pi f_a}{f_s} $; then:</p> <p>$$ c = \frac{\omega_c}{\tan(\omega_c)} $$</p> <p>$$ \omega_d = \arctan\left(\frac{\omega_a}{c}\right) $$</p> <p>Giving:</p> <p>$$ f_d = \frac{\omega_d f_s}{\pi} $$ </p>
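A direct Python transcription of these formulas (a sketch; the function name is mine). Note that the mapping is the identity at the corner frequency itself, since the bilinear transform is prewarped to match exactly there.

```python
import math

def cramped_frequency(f_a, f_c, f_s):
    """Predict the digital frequency that analog frequency f_a maps to,
    for a BLT filter whose corner f_c was prewarped to match exactly."""
    w_c = math.pi * f_c / f_s
    w_a = math.pi * f_a / f_s
    c = w_c / math.tan(w_c)
    w_d = math.atan(w_a / c)
    return w_d * f_s / math.pi

# At the corner frequency the analog and digital responses coincide:
same = cramped_frequency(18000.0, 18000.0, 44100.0)  # ~18000 (identity at the corner)
fd = cramped_frequency(5706.0, 18000.0, 44100.0)
```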
https://dsp.stackexchange.com/questions/22071/how-to-predict-the-cramped-frequency-of-a-digital-filter-based-on-an-analogue-fr
Question: <p>I am looking into designing a Bandpass Butterworth filter in python, but, I was not sure I am designing my filter correctly. What I have are the following:</p> <ul> <li>High cutoff frequency = 200Hz</li> <li>Low cutoff frequency = 10Hz</li> <li>Sampling frequency = 1000Hz</li> <li>for my data, I used Filter order = 6</li> </ul> <p>My code definitions are below:</p> <pre><code># section of my imports:
from scipy.signal import find_peaks, find_peaks_cwt, argrelextrema, welch, lfilter, butter, savgol_filter, medfilt, freqz, filtfilt
from scipy.signal import argrelextrema, filtfilt, butter, lfilter

def butter_bandpass(lowcut, highcut, fs, order):
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    b, a = butter(order, [low, high], btype='bandpass', output='ba')
    # sos = butter(order, [low, high], btype='bandpass', output='sos')
    return b, a
    # return sos

def butter_bandpass_filter(data, lowcut, highcut, fs, order):
    # sos = butter_bandpass(lowcut, highcut, fs, order=order)
    # y = signal.sosfilt(sos=sos, x=data)
    # y = signal.sosfiltfilt(sos=sos, x=data)
    b, a = butter_bandpass(lowcut, highcut, fs, order=order)
    y = filtfilt(b=b, a=a, x=data)
    # y = lfilter(b, a, data)
    return y
</code></pre> <p>How can I get the passband and stopband attenuation, also, where can I find the required equations to use in order for me to get my Butterworth filter design equation |H(w)|? Similar to the following link: (<a href="https://www.globalspec.com/reference/81796/203279/8-33-bandpass-and-bandstop-filter-design-examples" rel="nofollow noreferrer">Bandpass and Bandstop Filter Design</a>). 
I calculated the digital frequencies in radians per second:</p> <ul> <li>wh = 400π rad/sec</li> <li>wl = 20π rad/sec</li> <li>w(ah) ≈ 21.93 rad/sec</li> <li>w(al) ≈ 1.096 rad/sec</li> <li><strong>W</strong> ≈ 20.84 rad/sec</li> <li>w^2 ≈ 578.53</li> </ul> <p>The last steps, the lowpass-to-bandpass prototype transformation and the bilinear transformation (BLT) that yields the digital filter, are missing. So, what equation do I need to get the digital filter?</p> Answer: <p>In python the direct command is scipy.signal.butter. This will return the filter coefficients (numerator and denominator) based on an array of critical frequencies as described here:</p> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.butter.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.butter.html</a></p> <p>Once you have the numerator and denominator coefficients you can use <code>scipy.signal.freqz</code> to evaluate the frequency response:</p> <pre><code>import scipy.signal as sig

w, h = sig.freqz(num, den)
</code></pre> <p>Then simply plot w vs h, typically as <span class="math-container">$20\log_{10}(|h|)$</span> to view the magnitude in dB, along with the angle of h to view the phase.</p> <p><code>freqz</code> simply evaluates the frequency response of the filter (returning <span class="math-container">$H(z)$</span> at <span class="math-container">$z=e^{j\omega}$</span>, the unit circle on the z-plane), from which we can see the magnitude and phase versus frequency. To use the filter coefficients to filter a time domain signal <span class="math-container">$x$</span>, use <code>scipy.signal.lfilter</code>, which will provide the convolution of the filter coefficients with the signal to return the filtered result. 
<code>scipy.signal.filtfilt</code> is a &quot;zero-phase&quot; filter which will pass the signal through the filter implementation in both the forward and reverse directions, eliminating the phase component through cancellation in the time reversal, but also squaring the magnitude response (doubling it on a dB scale). <code>filtfilt</code> is a non-causal filter that is useful in post-processing applications when we want the output and input to be perfectly in alignment without having to compensate for filter delay between the input and output, but it is not a filter that can be implemented in real time, since zero phase and zero delay would require a non-causal system.</p> <p>Note that I am of the opinion that digital filters mapped from analog prototypes such as this are typically inferior to direct digital designs with FIR filters using optimized algorithms (such as those provided by <code>scipy.signal.firls</code> and <code>scipy.signal.remez</code>), other than being a useful exercise for educational purposes or when modelling an analog system. This point may be my own personal myth, so I posted that specifically as another question <a href="https://dsp.stackexchange.com/questions/79400/mapping-of-classic-filters-for-digital-filter-design?noredirect=1#79400">here</a>.</p>
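To complement the answer, a hedged sketch of the question's 10–200 Hz, order-6 design using second-order sections (the commented-out <code>sos</code> path in the question's code), which are numerically better behaved than high-order <code>(b, a)</code> polynomials:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 1000.0                    # values from the question
lowcut, highcut = 10.0, 200.0
order = 6

# Second-order sections avoid the numerical issues of high-order (b, a) forms.
sos = butter(order, [lowcut, highcut], btype='bandpass', fs=fs, output='sos')

# Evaluate the magnitude response in dB to read off pass/stopband attenuation.
w, h = sosfreqz(sos, worN=8192, fs=fs)
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
```

Reading attenuation values directly off `mag_db` at the frequencies of interest answers the "how do I get the passband and stopband attenuation" part without deriving |H(w)| by hand.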
https://dsp.stackexchange.com/questions/79394/how-to-design-a-digital-butterworth-bandpass-filter
Question: <p>I'm trying to understand how diode circuits are implemented in wave digital filters, particularly for clippers. The research papers and other sources I've looked at use the equation</p> <p><span class="math-container">$$I(V) = 2I_s \sinh\left(\frac{V}{V_t}\right)$$</span></p> <p>for two reverse-polarity diodes in parallel, then transform to the wave domain using</p> <p><span class="math-container">\begin{align} V &amp;= \frac{a+b}{2}\\ I &amp;= \frac{a-b}{2R_p} \end{align}</span></p> <p>Where <span class="math-container">$R_p$</span> is the port resistance, <span class="math-container">$a$</span> is the incident wave and <span class="math-container">$b$</span> is the reflected wave:</p> <p><span class="math-container">$$ \frac{a-b}{2R_p} = 2I_s\sinh\left(\frac{a+b}{2V_t}\right)$$</span></p> <p>I'm not the best with WDFs, but my understanding is that the incident wave <em>a</em> is propagated from the rest of the circuit up to the root (the nonlinearity, in this case the diodes), and then <em>b</em> is computed and reflected back down through the circuit. So <em>a</em> is the given in the above equation. But this is still not a trivial equation to solve, and has multiple solutions if I'm not mistaken.</p> <p>None of the resources I've found actually show the solution to this equation, and instead just express it as <strong>b = f(a)</strong>. <a href="https://aaltodoc.aalto.fi/bitstream/handle/123456789/14420/article4.pdf?sequence=7" rel="nofollow noreferrer">This paper</a> briefly mentions using a Lambert function?</p> <p><a href="https://people.eecs.berkeley.edu/%7Echua/papers/Meerkotter89.pdf" rel="nofollow noreferrer">This paper</a> is the closest I've found to being helpful, but even here the conditions seem rather arbitrary? 
(What are G0, Gv or Gn for a diode??)</p> <p>The last source I've looked at is <a href="https://www.ntnu.edu/documents/1001201110/0/DAFx-2015-jos-keynote2part2.pdf/b6dbef08-f552-4d8b-8a29-eeaee0b14b99" rel="nofollow noreferrer">this one</a> which has some MATLAB code for a circuit with a singular diode. Don't know if anybody knows more about this stuff than I do, but if you have any pointers it would help a lot.</p> <p><strong>EDIT:</strong> after re-visiting the MATLAB code it looks like they're using a combination of the relationship between <em>b</em> and <em>a</em> for linear resistances:</p> <p><strong>b = a(R-Rp)/(R+Rp)</strong></p> <p>and substituting R for the nonlinear diode resistance, which according to <a href="http://www.learningaboutelectronics.com/Articles/Diode-resistance.php" rel="nofollow noreferrer">this source</a> is</p> <p><strong>R = Vt/I</strong></p> <p>I'm not 100% sure that's what's happening there, but it's the closest I've found to a tangible solution thus far</p> Answer:
https://dsp.stackexchange.com/questions/73135/wave-digital-filter-diode-equation
Question: <p>Doing some work at the minute on digital filters in matlab, I have a file with artificial noise added (sine wave added at a specific frequency). The goal is to filter the signal and get it as close as possible to the clean signal provided.</p> <p>I've done an FFT and plotted the results and found a very large spike at 29.3Hz which is not present in the clean signal.</p> <p>I've tried using a notch filter, which I thought would work since it operates at such a specific frequency, however it just seems to attenuate the signal and remove some power but not block it completely. I then added a bandstop filter to try and block any signals in that region and it simply attenuated the signal also. Does anyone have any thoughts? I just seem to be lowering power of the entire signal and not actually removing anything, getting the basic shape of the clean signal but still a lot of noise present after both filters. Thanks!</p> <p><a href="https://i.sstatic.net/quKvb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/quKvb.png" alt="enter image description here"></a></p> <pre><code>[b1,a1] = iirnotch((29.3*(2/fs)),0.99999);
IIR1 = filter(b1,a1,ecg58_DC_removed);
FFT_resultFilter1 = (1/length(t))*fft(IIR1);
f=(0:1024)/1024*(200/2);
figure(4)
stem(f, 2*abs(FFT_resultFilter1(1:1025)));
xlabel ('Frequency (Hz)');
ylabel ('Spectral Magnitude');
title('First filter')
grid on

[a2,b2] = butter(2,[29.2 29.4]*2/fs, 'stop');
IIR2 = filter(a2,b2,IIR1);
FFT_resultFilter2 = (1/length(t))*fft(IIR2);
f=(0:1024)/1024*(200/2);
figure(5)
stem(f, 2*abs(FFT_resultFilter2(1:1025)));
xlabel ('Frequency (Hz)');
ylabel ('Spectral Magnitude');
title('First filter')
grid on

figure (6)
plot(t(1:1000), IIR2(1:1000));
xlabel('time (s)')
ylabel('amplitude')
title('two filters');
</code></pre> <p>b1 =</p> <p>1.0e-04 *</p> <pre><code>0.1571 -0.1902 0.1571 </code></pre> <p>a1 =</p> <pre><code>1.0000 -0.0000 -1.0000 </code></pre> Answer: <p>As mentioned in my comment, the filter returned by 
<code>iirnotch</code> is useless. From your filter coefficients you can see that the filter is only marginally stable due to two poles on the unit circle at DC and at Nyquist. Furthermore, even though the filter has a notch, it also attenuates all other frequencies quite strongly (apart from DC and Nyquist). The reason for that behavior is the extremely large bandwidth in your specification.</p> <p>The figure below shows the magnitude responses of the filter you designed (top) and of a notch filter with a bandwidth <code>BW = w0/35</code> (bottom) (note that the extremely large values very close to DC and Nyquist due to the poles are not shown in the top figure):</p> <p><a href="https://i.sstatic.net/u6PIZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u6PIZ.png" alt="enter image description here"></a></p> <p>In any case, the bottom figure is what a notch filter should look like. If you tried that filter and it didn't do what you expected it to do, then the reason might be that your estimation of the noise frequency is wrong. Could it be that you got it wrong by a factor of $2$ (i.e., it would be a 60Hz hum)? [Also, doesn't the file name <code>ecg58...</code> suggest a disturbance at $2\cdot 29=58$Hz?]</p> <p>So there might be several problems in your approach, but one is definitely the design of the notch filter, and if I may guess I would say that the other is the estimation of the noise frequency.</p>
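To make the answer concrete, a hedged Python analogue of a properly specified notch, using SciPy's <code>iirnotch</code> (whose second argument is a quality factor <code>Q = f0/BW</code> rather than MATLAB's bandwidth-style second argument); <code>fs = 200</code> Hz is inferred from the question's frequency axis:

```python
import numpy as np
from scipy.signal import iirnotch, freqz

fs = 200.0   # sampling rate implied by the question's 0-100 Hz frequency axis
f0 = 29.3    # frequency to remove
Q = 35.0     # quality factor -> narrow -3 dB bandwidth BW = f0/Q (~0.84 Hz)

b, a = iirnotch(f0, Q, fs=fs)

# The response should be ~0 dB everywhere except a deep, narrow null at f0.
w, h = freqz(b, a, worN=8192, fs=fs)
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
```

This is what the answer's bottom figure looks like: unity gain away from the notch, with only the narrow band around 29.3 Hz removed.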
https://dsp.stackexchange.com/questions/36041/digital-filter-not-removing-noise-at-specific-frequency-matlab
Question: <p>In a <a href="https://dsp.stackexchange.com/questions/70960/given-a-3db-octave-filter-that-makes-pink-noise-how-can-i-make-a-3db-octave">related question</a> a probable solution was given to build a first-order digital filter and then cascade three of them in order to turn white noise into pink. I have applied the C++ as follows but still the signal sounds and looks like white noise.</p> <p>I would like to know what is wrong with my implementation and why I don't hear pink noise. For reference, the poles and zeros come from Robert Bristow-Johnson's work <a href="https://www.firstpr.com.au/dsp/pink-noise/" rel="nofollow noreferrer">here</a>.</p> <p>Header:</p> <p><code>float *state = nullptr;</code></p> <p>Implementation:</p> <p><code>state = new float[0.0];</code> in Constructor.</p> <p>Then in the loop, <code>for (int i=0; i &lt; numSamples; i++)</code></p> <pre><code>float first = first_order_filter(whiteNoise, 0.99572754, 0.98443604, state);
float second = first_order_filter(first, 0.94790649, 0.83392334, state);
float third = first_order_filter(second, 0.53567505, 0.07568359, state);
out1 = third;
</code></pre> <p>Where first_order_filter is defined as in Robert's answer here:</p> <p><a href="https://dsp.stackexchange.com/a/70963/11391">https://dsp.stackexchange.com/a/70963/11391</a></p> <p>I would love to know if this code is approximately correct/ where the problem lies.</p> Answer: <p>My late night brain made foolish mistakes; for the record, if anyone needs this, the working code is as follows:</p> <p>Header:</p> <pre><code>float state1;
float state2;
float state3;
</code></pre> <p>Implementation:</p> <p>In Constructor:</p> <pre><code>state1 = 0;
state2 = 0;
state3 = 0;
</code></pre> <p>Robert's function:</p> <pre><code>// this processes one sample
float first_order_filter(float input, float pole, float zero, float *state)
{
    float new_state = input + pole*(*state);
    float output = new_state - zero*(*state);
    *state = new_state;
    return output;
}
</code></pre> <p>Then in the loop, <code>for (int i=0; i &lt; numSamples; i++)</code></p> <pre><code>float first = first_order_filter(whiteNoise, 0.99572754, 0.98443604, &amp;state1);
float second = first_order_filter(first, 0.94790649, 0.83392334, &amp;state2);
float third = first_order_filter(second, 0.53567505, 0.07568359, &amp;state3);
out1 = third;
</code></pre> <p>Of course this works like a charm, producing Pink Noise, and if the poles and zeros are swapped it produces Blue/Azure Noise.</p>
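A quick way to sanity-check such a cascade without listening to it is to evaluate its transfer function on the unit circle; each `first_order_filter` section realizes $H(z) = (1 - z_0 z^{-1})/(1 - p\,z^{-1})$. A hedged Python check of my own (not part of the original answer):

```python
import numpy as np

# (pole, zero) pairs from the answer's cascade
sections = [(0.99572754, 0.98443604),
            (0.94790649, 0.83392334),
            (0.53567505, 0.07568359)]

# Each section implements H(z) = (1 - zero*z^-1) / (1 - pole*z^-1);
# evaluate the product on the unit circle z = exp(j*w).
w = np.linspace(1e-3, np.pi, 2048)
z_inv = np.exp(-1j * w)
H = np.ones_like(w, dtype=complex)
for pole, zero in sections:
    H *= (1 - zero * z_inv) / (1 - pole * z_inv)
mag_db = 20 * np.log10(np.abs(H))
```

Since each pole sits closer to the unit circle than its paired zero, every section is a low shelf, and the overall magnitude falls monotonically with frequency, which is the pink-noise tilt (white noise would give a flat `mag_db`).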
https://dsp.stackexchange.com/questions/70969/cascading-first-order-digital-filters-in-c
Question: <p>I am reading Introduction to Digital Filters by J.O. Smith III, which is an amazing book. The part for which I have a question is quoted below.</p> <blockquote> <p>By virtue of Euler's relation and the linearity of the filter, setting the input to <span class="math-container">$ x(n) = e^{j\omega nT}$</span> is physically equivalent to putting <span class="math-container">$ \cos(\omega nT)$</span> into one copy of the filter and <span class="math-container">$ \sin(\omega nT)$</span> into a separate copy of the same filter. The signal path where the cosine goes in is the real part of the signal, and the other signal path is simply called the imaginary part. Thus, a complex signal in real life is implemented as two real signals processed in parallel; in particular, a complex sinusoid is implemented as two real sinusoids, side by side, one-quarter cycle out of phase. <strong>When the filter itself is real, two copies of it suffice to process a complex signal. If the filter is complex, we must implement complex multiplies between the complex signal samples and filter coefficients.</strong></p> </blockquote> <p>I am having difficulty understanding the part in bold. Could someone kindly shed light on this? For example, why do we need two copies of the filter?</p> Answer: <p>A real operation on a complex input can be implemented as two real operations in parallel, rather than a truly complex operation.</p> <p>An example would be multiplying a complex number with a real number. Rather than a full complex multiply, you get away with two real multiplies, one for the real part of the input, another for the imaginary part.</p> <p>-k</p>
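A small Python check of the statement in bold, for the real-filter case (a sketch; the particular filter here is an arbitrary example of mine): filtering the real and imaginary parts through two copies of the same real filter gives exactly the same result as filtering the complex signal directly.

```python
import numpy as np
from scipy.signal import butter, lfilter

b, a = butter(2, 0.3)                 # an arbitrary real filter
n = np.arange(256)
x = np.exp(1j * 0.2 * np.pi * n)      # complex sinusoid: cos + j*sin

# One complex signal path...
y_complex = lfilter(b, a, x)

# ...versus two real copies of the same filter, run in parallel
y_two_copies = lfilter(b, a, x.real) + 1j * lfilter(b, a, x.imag)
```

If the filter coefficients were complex instead, the real/imaginary paths would couple and this split would no longer work with plain real multiplies — that is the case where full complex multiplies are needed.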
https://dsp.stackexchange.com/questions/80740/issue-understanding-implementing-digital-filters-in-practice
Question: <p>Suppose I have a digital filter implemented in Direct Form II. How do I initialize the state of the filter as if the input <span class="math-container">$x[n]$</span> had a fixed value <span class="math-container">$x_0$</span> for all <span class="math-container">$n&lt;0$</span>?</p> <p><a href="https://i.sstatic.net/IrDlp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IrDlp.png" alt="Direct Form II Filter Topology"></a></p> Answer: <p>The difference equations for this filter are:</p> <pre><code>y[n] = b0 w[n] + b1 w[n-1] + b2 w[n-2]
w[n] = x[n] - a1 w[n-1] - a2 w[n-2]
</code></pre> <p>To achieve steady-state, <code>w[n] == w[n-1] == w[n-2]</code>. Call this value <code>w</code>. Solving the second difference equation, we find <code>w = x/(1 + a1 + a2)</code>, where x is the steady-state input value. Initialize both delay elements to this value; the corresponding steady-state output is then <code>y = (b0 + b1 + b2) w</code>.</p>
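A hedged Python sketch of this initialization (the filter coefficients are an arbitrary example of mine): with both Direct Form II delay elements preset to <code>w</code>, a constant input produces the steady-state output from the very first sample, with no start-up transient.

```python
import numpy as np
from scipy.signal import butter

b, a = butter(2, 0.2)        # example second-order filter; a[0] == 1
b0, b1, b2 = b
_, a1, a2 = a
x0 = 1.0                     # input value assumed for all n < 0

# Steady-state internal state: w = x0 / (1 + a1 + a2)
w1 = w2 = x0 / (1 + a1 + a2)

# Run the Direct Form II recursion on a constant input.
out = []
for x in np.full(32, x0):
    w = x - a1 * w1 - a2 * w2          # w[n] = x[n] - a1 w[n-1] - a2 w[n-2]
    out.append(b0 * w + b1 * w1 + b2 * w2)
    w2, w1 = w1, w
out = np.array(out)

dc_gain = (b0 + b1 + b2) / (1 + a1 + a2)
```

Every output sample equals `dc_gain * x0`, confirming the filter starts already settled.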
https://dsp.stackexchange.com/questions/56476/how-do-i-initialize-the-state-of-a-digital-filter-in-direct-form-ii
Question: <p>I can't find this answer anywhere. I have a couple satellite modem manuals and they refer to digital filtering functions that they do, but they say almost nothing about their sample rate. I always thought, without considering it too much, that all the modems I've worked with were only sampling at a rate high enough to extract the bits--something close to the symbol rate. But if so, how then do they do digital filtering, and how would they be able to display the spectrum, like most of them will do? I believe you must have enough samples to re-create the waveform in order to do those things; I just didn't think all these modems were doing that.</p> <p>And if all these normal modems implementing digital filtering do sample at the Nyquist rate+, I'm not really seeing the distinction between traditional IF and digital IF, since the first thing the modems are doing is sampling high enough to effectively have a digital IF.</p> <p>Thanks. There's a lot I don't know, and wish I'd learned 20 years ago.</p> Answer: <p>All modems sample at higher than the symbol rate up until timing recovery is resolved, at which point the received waveform can be down-sampled to 1 sample per symbol.</p> <p>As for a digital IF: digital IF means the waveform is centered on some higher frequency, higher than its occupied bandwidth, and can be represented completely as a real signal. In contrast to this is a complex baseband signal, in which case two sampled datapaths represent the in-phase (I) and quadrature (Q) components of the complex baseband waveform. When a digital IF is used, a digital down-conversion is required to translate the IF signal to the complex I and Q baseband signal. Both cases are sampled higher than the symbol rate in order for the receiver to resolve carrier and timing offsets and perform optimum matched filtering; meeting the Nyquist requirements as a minimum plus some additional margin for realizable filtering.</p>
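The digital down-conversion the answer mentions can be sketched in a few lines of Python (all numbers here are illustrative choices of mine, not from the answer): mix the real IF signal against a complex local oscillator at the IF center, then lowpass-filter to keep only the complex baseband component and reject the image.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 48000.0     # sample rate (example value)
f_if = 12000.0   # digital IF center frequency (example value)
f_off = 1000.0   # tone offset from the IF center

n = np.arange(4096)
x = np.cos(2 * np.pi * (f_if + f_off) * n / fs)   # real IF signal

# Mix down: the component of interest lands at +f_off; its image lands far
# away and is removed by the lowpass.
lo = np.exp(-2j * np.pi * f_if * n / fs)
taps = firwin(129, 4000.0, fs=fs)                 # lowpass, ~4 kHz cutoff
baseband = lfilter(taps, 1.0, x * lo)             # complex I/Q samples
```

The real and imaginary parts of `baseband` are the I and Q datapaths the answer describes; a spectrum of `baseband` shows a single tone at the 1 kHz offset.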
https://dsp.stackexchange.com/questions/93664/sample-rate-of-digital-modems-how-do-they-do-digital-filtering-if-sampling-belo
Question: <p>I am trying to implement a digital filter on a uC (it doesn't really matter which filter and which microcontroller, because I'm looking to learn how to do it in the future with different filters and different microcontrollers). I've been told that you can design, implement and debug a digital filter in python and when everything is ready you can port the code to C without changing anything. How can I do that? I've been searching for a while and I cannot find how to do this.</p> <p>Also, is there a way to plot the transfer function of an implemented filter? I mean the actual filter, a function made by me, that takes the input values and performs the calculations. I don't want to plot the transfer function of a filter made with functions like <code>scipy.signal.butter</code>. I want to plot the transfer function of a filter made with a for, some multiplications and sums.</p> <p>I would really appreciate any help or information that you can provide me.</p> Answer: <h3>Filter representation and design</h3> <p>A DTLTI IIR filter is characterized by its transfer function <span class="math-container">$ H(z) = \frac{Y(z)}{X(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_{P} z^{-P}}{a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_{Q} z^{-Q}} $</span>. 
The transfer function is closely related to the difference equation: <span class="math-container">$$ y[n] = \frac{1}{a_0} \left( \sum_{i=0}^{P} b_i x[n-i] - \sum_{j=1}^{Q} a_j y[n-j] \right) $$</span> (<span class="math-container">$x[n]$</span> is the input at time step <span class="math-container">$n$</span>, <span class="math-container">$y[n]$</span> is the output at time step <span class="math-container">$n$</span>).<br> This is equivalent to <span class="math-container">$$ \sum_{i=0}^{P} b_i x[n-i] = \sum_{j=0}^{Q} a_j y[n-j] $$</span> By applying the Z transform to both sides, using the time shift property, and rearranging some factors, you arrive at the formula for the transfer function <span class="math-container">$H(z)$</span> mentioned previously.</p> <p>When you design a Butterworth filter (or any IIR filter) using SciPy, it returns the <span class="math-container">$b_i$</span> and <span class="math-container">$a_j$</span> coefficients. These coefficients determine the transfer function and therefore the frequency response of the filter. 
You can use the <code>freqz</code> function to calculate this response (essentially, it evaluates <span class="math-container">$H(z)$</span> along the unit circle <span class="math-container">$z=e^{j\omega}$</span>).</p> <p>For example, using Python:</p> <pre><code>from scipy.signal import butter, freqz
import matplotlib.pyplot as plt
from math import pi
import numpy as np

f_s = 360    # Sample frequency in Hz
f_c = 45     # Cut-off frequency in Hz
order = 4    # Order of the butterworth filter

omega_c = 2 * pi * f_c        # Cut-off angular frequency
omega_c_d = omega_c / f_s     # Normalized cut-off frequency (digital)

# Design the digital Butterworth filter
b, a = butter(order, omega_c_d / pi)
print('Coefficients')
print("b =", b)               # Print the coefficients
print("a =", a)

w, H = freqz(b, a, 4096)      # Calculate the frequency response
w *= f_s / (2 * pi)           # Convert from rad/sample to Hz

# Plot the amplitude response
plt.subplot(2, 1, 1)
plt.suptitle('Bode Plot')
H_dB = 20 * np.log10(abs(H))  # Convert modulus of H to dB
plt.plot(w, H_dB)
plt.ylabel('Magnitude [dB]')
plt.xlim(0, f_s / 2)
plt.ylim(-80, 6)
plt.axvline(f_c, color='red')
plt.axhline(-3, linewidth=0.8, color='black', linestyle=':')

# Plot the phase response
plt.subplot(2, 1, 2)
phi = np.angle(H)             # Argument of H
phi = np.unwrap(phi)          # Remove discontinuities
phi *= 180 / pi               # and convert to degrees
plt.plot(w, phi)
plt.xlabel('Frequency [Hz]')
plt.ylabel('Phase [°]')
plt.xlim(0, f_s / 2)
plt.ylim(-360, 0)
plt.yticks([-360, -270, -180, -90, 0])
plt.axvline(f_c, color='red')

plt.show()
</code></pre> <p>The coefficients can be calculated manually, as explained <a href="https://tttapa.github.io/Pages/Mathematics/Systems-and-Control-Theory/Digital-filters/Discretization/Discretization-of-a-fourth-order-Butterworth-filter.html" rel="nofollow noreferrer">here</a> (do note that the indices of the coefficients are flipped compared to the formulas above), but it's much easier to use filter design tools like SciPy to calculate them.</p> 
<h3>Implementation</h3> <p>The difference equation can be used directly to implement the filter. Just loop over the previous inputs and outputs, multiply everything with the respective coefficients, and sum all terms.</p> <p>This means that once you have designed your filter in Python, you just need to copy the <span class="math-container">$b$</span> and <span class="math-container">$a$</span> coefficients to your microcontroller to use the filter.</p> <p>A possible C++ implementation could be:</p> <pre><code>class IIRFilter {
  public:
    template &lt;size_t B, size_t A&gt;
    IIRFilter(const double (&amp;b)[B], const double (&amp;_a)[A])
        : lenB(B), lenA(A-1) {
        x = new double[lenB]();
        y = new double[lenA]();
        coeff_b = new double[2*lenB-1];
        coeff_a = new double[2*lenA-1];
        double a0 = _a[0];
        const double *a = &amp;_a[1];
        for (uint8_t i = 0; i &lt; 2*lenB-1; i++) {
            coeff_b[i] = b[(2*lenB - 1 - i) % lenB] / a0;
        }
        for (uint8_t i = 0; i &lt; 2*lenA-1; i++) {
            coeff_a[i] = a[(2*lenA - 2 - i) % lenA] / a0;
        }
    }
    ~IIRFilter() {
        delete[] x;
        delete[] y;
        delete[] coeff_a;
        delete[] coeff_b;
    }
    double filter(double value) {
        x[i_b] = value;
        double b_terms = 0;
        double *b_shift = &amp;coeff_b[lenB - i_b - 1];
        for (uint8_t i = 0; i &lt; lenB; i++) {
            b_terms += x[i] * b_shift[i];
        }
        double a_terms = 0;
        double *a_shift = &amp;coeff_a[lenA - i_a - 1];
        for (uint8_t i = 0; i &lt; lenA; i++) {
            a_terms += y[i] * a_shift[i];
        }
        double filtered = b_terms - a_terms;
        y[i_a] = filtered;
        i_b++;
        if(i_b == lenB) i_b = 0;
        i_a++;
        if(i_a == lenA) i_a = 0;
        return filtered;
    }
  private:
    const uint8_t lenB, lenA;
    uint8_t i_b = 0, i_a = 0;
    double *x;
    double *y;
    double *coeff_b;
    double *coeff_a;
};
</code></pre> <p>Circular buffers are used to keep track of the previous inputs and outputs. 
The coefficients are duplicated in a circular fashion as well, in order to simplify the indices in the loop.<br> If you're not allowed to use dynamic memory, you could easily create a generic class with stack allocated arrays.</p> <p>For higher order filters, numerical issues may arise, as a small rounding/quantization error on one of the coefficients affects the location of all of the zeros/poles of the filter. This is especially problematic for poles that lie very close to the unit circle.<br> A solution is to factor the transfer function into a product of second order sections. The entire filter is then implemented as a cascade of biquad filters, each implementing one of the sections.<br> SciPy can produce the coefficients of these sections as well, just pass the <code>output='sos'</code> parameter to the <code>butter</code> function.</p> <h3>Checking the result of the implementation</h3> <p>Numerically calculating the frequency response from a given filter implementation is not straightforward. However, you can compare the impulse response of your implementation with the impulse response produced by SciPy (using <code>scipy.signal.dimpulse</code>).<br> The transfer function is the Z transform of the impulse response, so if the impulse responses match, the frequency characteristics will match as well.</p>
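As a quick check of the second-order-sections suggestion above, a short sketch comparing the two SciPy output forms at a modest order, where both are still numerically accurate (at high orders, only the `sos` form stays well behaved):

```python
import numpy as np
from scipy.signal import butter, lfilter, sosfilt

# Same 4th-order Butterworth in both representations.
b, a = butter(4, 0.25, output='ba')
sos = butter(4, 0.25, output='sos')

# Filter the same impulse through both forms; the impulse responses should
# agree closely, since the two forms describe the same transfer function.
x = np.zeros(128)
x[0] = 1.0
h_ba = lfilter(b, a, x)
h_sos = sosfilt(sos, x)
```

Matching impulse responses imply matching frequency characteristics, which is exactly the verification strategy the answer describes.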
https://dsp.stackexchange.com/questions/59688/how-to-design-a-digital-filter-in-python-that-will-run-over-an-uc
Question: <p>I am currently working with vibration measurements in structures. In the Netherlands there is a guideline for verifying vibration measurements for damage to machinery. This is the so-called "SBR Trillingsrichtlijn". In this guideline a frequency weighting function is specified to modify the time series from vibration measurements:</p> <p><a href="https://i.sstatic.net/5sjye.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5sjye.png" alt="Frequency weight function"></a></p> <p>with small f being the frequency in Hz, f_0 a constant frequency of 5.6 Hz and v_0 a predefined constant velocity of 1 mm/s. To illustrate the filter behaviour I show it graphically here as well (axes are linear):</p> <p><a href="https://i.sstatic.net/RIF4j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RIF4j.png" alt="filter response with linear axes"></a></p> <p>In post-processing this is fine, as I can perform an FFT on the time series data, apply this weighting function in the frequency domain (through multiplication with the FFT-transformed time series) and then do an inverse FFT to get the adjusted time signal data. </p> <p>My question is the following: the equipment we use to measure vibrations does this weighting internally using digital filtering on the time series in real time. To verify the data I would like to design a digital filter corresponding to the weighting function above; can anyone point me to a resource on how to approach this? From what I have looked up so far on IIR / FIR filter design, the examples usually start from a transfer function in the s-domain or z-domain (which correspond to a Laplace transform in continuous time or a z-transform in discrete time), but now I only have a frequency response in the Fourier / frequency domain. </p> <p>EDIT: Based on the suggestion of @Hilmar it appears to be a simple first-order high-pass function. I made a python implementation to check the method of FFT and the highpass filter, shown in the graph here. 
Strangely, the frequency response of the highpass filter and the SBR-specified response match exactly, but the implementation using the highpass has some artefacts whereby non-existent frequencies are generated with non-zero amplitudes:</p> <p><a href="https://i.sstatic.net/JO0vf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JO0vf.png" alt="graph showing comparison of implementation using fft and highpass filtering in python."></a></p> <p>This can be seen in the green dotted line, which is a mock signal filtered in python using the scipy package. </p> Answer: <p>This is <em>the</em> classical problem of filter design: </p> <blockquote> <p>You've got a frequency response that you want; how do you implement it using an FIR?</p> </blockquote> <p>I'm not going to lay out all the theory here, because it can easily be found by looking for <em>Fourier approximation methods for FIR design</em> or <em>windowing methods</em>, but the idea is easy enough:</p> <ul> <li>Take your frequency response. Sample it at many points.</li> <li>Take the inverse FT. You get something long, which, when convolved with a signal, would have the sampled frequency response (because of the convolution theorem of the Fourier Transform).</li> <li>Because the result is a very long filter and exhibits undesirable fringe behaviour, apply a window function to cut off some of the coefficients. The result is your filter taps.</li> </ul> <p>There are iterative methods for increasing the match between desired and actual response given the limited amount of freedom offered by a limited number of taps; but this would exceed the scope of the question.</p>
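The three steps above map directly onto SciPy's <code>firwin2</code>, which samples a gain curve and applies the inverse-FT-plus-window procedure internally. A hedged sketch: the exact SBR weighting formula lives in the question's image, so the first-order high-pass magnitude below (with f_0 = 5.6 Hz, as stated in the question) is an assumption for illustration, as are the sample rate and tap count.

```python
import numpy as np
from scipy.signal import firwin2, freqz

# ASSUMPTION: first-order highpass magnitude |H(f)| = (f/f0)/sqrt(1+(f/f0)^2),
# standing in for the SBR weighting curve shown in the question's image.
fs = 200.0       # example sample rate (Hz)
f0 = 5.6         # corner frequency from the question (Hz)
numtaps = 501    # example filter length

f = np.linspace(0.0, fs / 2, 256)   # sample the desired response...
gain = (f / f0) / np.sqrt(1 + (f / f0) ** 2)

taps = firwin2(numtaps, f, gain, fs=fs)   # ...inverse-FT + window internally

w, h = freqz(taps, worN=4096, fs=fs)      # check the realized response
```

The realized response should track the sampled target closely; the long tap count is the price of resolving a 5.6 Hz corner at this sample rate.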
https://dsp.stackexchange.com/questions/55303/digital-filter-design-of-time-series-for-specified-frequency-response-function
Question: <p>In 'Digital Filters' by Hamming there is a cryptic section where he describes how the Gibbs phenomenon can be viewed as the displacement between the centers of two functions as they are convolved together. This is on pages 112 - 113 of the 3rd edition.</p> <p>In the process of this he shows that truncating the Fourier Series is the same as multiplying the Fourier coefficients with a rectangle function. He then goes on to show that the function that has 2N+1 coefficients which are 1 is: </p> <p>$h(\theta) = \frac{\sin\left(\frac{2N+1}{2}\theta\right)}{\sin(\theta/2)}$</p> <p>I'm confused: I thought the frequency response of a rectangle was a sinc function, and on another page he shows this (when he derives the Lanczos smoothing factors). </p> <p>Could anyone please clear this up for me?</p> Answer: <p>I think the main issue is you are jumping ahead of yourself. You probably remember or read somewhere that the Fourier transform of a $rect$ function is a $sinc$ function. This is true; however, nowhere in this section does he mention the Fourier transform! In fact, what he is doing is not a Fourier transform. </p> <p>What he does in this section is to represent any periodic function as a Fourier series:</p> <p>(1) $f(\theta)=\frac{a_0}{2}+\sum_{k=1}^\infty{(a_k \cos k\theta + b_k \sin k\theta)}$</p> <p>The key here is that this function doesn't contain every frequency. It contains frequency 0 (the DC) and $\frac{k}{2\pi}$ where $k=1,2,3,...\infty$. This is actually <em>very</em> important. This function is always periodic with a period of $2\pi$. The coefficients $a_k$ and $b_k$ are almost sampled values of the Fourier transform (but not quite, why? homework exercise! :p). You have countably many of these coefficients. Later in the book, you will learn that when you sample data in one space (frequency or time) it necessarily makes the counterpart in the other space (time or frequency) periodic. 
</p> <p>In the first case, he does the Lanczos smoothing derivation where he averages the function by running a rectangular window through it (convolving with rect). What he shows is, not surprisingly, that the coefficients get multiplied by this term:</p> <p>(2) $\frac{\sin(k\pi/N)}{k\pi/N}$</p> <p>which should look very familiar to you, of course, because it is the $sinc$ function. However, what you are missing is that $k$ is discrete! It is actually a sampled version of the $sinc$ function.</p> <p>Effectively, he convolves a function with a $rect$ and shows that the coefficients of the resulting Fourier series (read loosely as: sampled Fourier transform) form a sampled $sinc$ function. No surprise there. The convolution theorem says convolution in time turns into multiplication in frequency. The Fourier transform of $rect$, as you know, is $sinc$, so convolution by $rect$ is multiplication by $sinc$ in frequency space. </p> <p>In the next section, he does something different. He takes the Fourier series (read: <em>sampled</em> Fourier transform) and removes all the higher frequency coefficients. In effect, he is taking the Fourier transform, multiplying by a $rect$, and then sampling it. For simplicity, he sets all the Fourier coefficients that did not get discarded to $1$.</p> <p>What he's left with is this:</p> <p>(3) $h(\theta)=\frac{\sin\left(\left(N+\frac{1}{2}\right)\theta\right)}{\sin(\theta/2)}$</p> <p>And you ask, why isn't this a $sinc$ function? Can you answer it now?</p> <p>The quick answer is <strong>because what is applied in the frequency domain is not just a truncation, it's a truncation and a sampling operator</strong>. What you know is that when you truncate (i.e. multiply by rect) in the frequency domain, the time domain gets convolved with a $sinc$ (by the convolution theorem and the Fourier transform of $rect$), but this is <strong>without sampling</strong>. </p> <p>As for why the formula looks the way it does, there are two ways to look at it. 
The first, which he shows, is that you can just sum the Fourier series from $-N$ to $N$, and that's what you get.</p> <p>The second, which is more profound and may come up later, is that (*) when you sample a function in one space (say frequency), the corresponding function in the other space (say time) becomes the sum of shifted versions of itself. In fact, it probably won't come up exactly like that. The typical scenario is that sampling time domain creates periodic replication in the frequency domain (btw: this is the reason for what people call aliasing). However, you can apply the duality property of Fourier transform to get (*).</p> <p>Does equation (3) make sense now? It is periodic. The closer you get to 0, the more it looks like a $sinc$ function.</p> <p>So, an exercise for you is to derive equation (3) by sampling and truncating in frequency space and applying inverse transform.</p>
https://dsp.stackexchange.com/questions/7605/gibbs-phenomenon-in-hammings-digital-filters
Question: <p>(I am trying to create an IIR audio filter that adds reverb to an initial sample)</p> <p>Say I designed an analog filter to model acoustic attenuation based on the following mathematical model:</p> <p><span class="math-container">$$ I = I_0 e^{pt}, $$</span></p> <p>Where <span class="math-container">$p$</span> is some constant such that <span class="math-container">$-1 \leq p &lt; 0$</span>.</p> <p>In the Laplace domain, this is simply</p> <p><span class="math-container">$$ \mathfrak{L}\{I\} = \dfrac{1}{s-p} $$</span></p> <p>Using the impulse invariant transform, I get the digital equivalent</p> <p><span class="math-container">$$ Z\{I\} = \dfrac{z}{z-e^{pT_s}}, $$</span></p> <p>which inherently has one pole.</p> <p><strong>How do I add more resolution to this filter?</strong> Implementing this as a digital filter makes my audio sample sound like garbage. Would adding arbitrary taps keep it from modeling the original analog decay?</p> <p>Using experimental attenuation data and frequency sampling for an FIR equivalent, I can easily obtain 50000 taps and create a very clear reverb-adding filter.</p> <p>Both IIR and FIR methods were tested in MATLAB, with the FIR using MATLAB's built-in church impulse response.</p> <p>(Sorry if the question is unclear, I'm a bit new at this.)</p> Answer: <p>Doing a decent sounding audio reverb with IIR filters is difficult. You need way more poles than you can generate with a normal IIR structure, and you need a way to do it efficiently.</p> <ol> <li>There is a significant trade off between sound quality versus MIPS, memory, latency &amp; controllability</li> <li>A good starting point is the original Schroeder reverb, which uses comb filters and warped allpass filters. See for example: <a href="https://ccrma.stanford.edu/~jos/pasp/Example_Schroeder_Reverberators.html" rel="nofollow noreferrer">https://ccrma.stanford.edu/~jos/pasp/Example_Schroeder_Reverberators.html</a></li> <li>You want to get lots of poles cheaply. 
For example, wrapping a feedback loop around a long delay will create lots of poles with just a single multiply + add.</li> <li>Ideally the poles are randomly distributed and have no regularity to them. The simple Schroeder algorithm doesn't quite do this, so it tends to sound "ringy" or "metallic".</li> <li>A better alternative is feedback delay networks. See for example <a href="https://ccrma.stanford.edu/~jos/pasp/FDN_Reverberation.html" rel="nofollow noreferrer">https://ccrma.stanford.edu/~jos/pasp/FDN_Reverberation.html</a>. These work by having multiple delay lines of different lengths with a feedback matrix: i.e. each delay is fed back into all delays. This creates a very large number of relatively low Q poles.</li> <li>Finally, you can dial in all the frequency dependencies and also add some early reflections as sparse FIR filters (with IIR frequency shaping on the sparse taps)</li> </ol>
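Point 3 above can be made concrete. A small numpy sketch (the delay length and gain are arbitrary illustrative choices, not from the answer) shows that one feedback multiply around an M-sample delay yields M poles:

```python
import numpy as np

# Feedback comb filter y[n] = x[n] + g * y[n - M]: a single multiply-add
# per sample, but the transfer function H(z) = 1 / (1 - g z^-M) has M poles,
# spread evenly on a circle of radius g**(1/M).
M, g = 8, 0.5
a = np.zeros(M + 1)
a[0], a[M] = 1.0, -g          # denominator polynomial z^M - g, i.e. 1 - g z^-M
poles = np.roots(a)
radii = np.abs(poles)
```

A reverb delay of thousands of samples therefore buys thousands of poles for one multiply, which is why long delays (rather than high-order direct-form filters) are the standard building block.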
https://dsp.stackexchange.com/questions/53669/how-do-i-add-more-resolution-taps-to-an-analog-digital-filter
Question: <p>Please elaborate on why this mathematical transform can help analyzing as well as designing any type of digital filter.</p> Answer: <p>The Z Transform plays for discrete-time (digital) signals precisely the same role that the Laplace Transform plays for continuous-time (analog) signals.</p> <p>Linear Time-Invariant (LTI) Systems (a.k.a. "filters") are made up of signal-processing elements that fall into 3 fundamental classes:</p> <ol> <li>adders (devices that add two signals).</li> <li>scalers (devices that scale a signal by a constant).</li> <li>"reactive" elements (devices that are able to discriminate w.r.t. frequency).</li> </ol> <p>Element classes 1. and 2. are essentially the same for analog or digital filters. They are sometimes called <em>"memoryless"</em> devices or elements.</p> <p>For an analog filter (or "analog LTI system"), those reactive elements would be capacitors or inductors. They integrate (<span class="math-container">$s^{-1}$</span>) one signal to become another. That turns a sine signal into a cosine signal or shifts the phase by <span class="math-container">$\pm$</span> 90°.</p> <p><span class="math-container">$$\cos(\Omega t) = \sin(\Omega t + \tfrac{\pi}{2})$$</span></p> <p>For digital filters, the reactive elements are delay elements. A unit delay (a delay of exactly one sample period <span class="math-container">$T$</span>) will delay any signal, including a sinusoid, by 1 sample or <span class="math-container">$T$</span> units of time (<span class="math-container">$z^{-1}$</span>). 
That shifts the phase by an amount that is dependent on frequency</p> <p><span class="math-container">$$\sin(\Omega (t-T) ) = \sin(\Omega t - \Omega T)$$</span></p> <p>or</p> <p><span class="math-container">$$\sin(\omega (n-1) ) = \sin(\omega n - \omega )$$</span></p> <p>Any LTI system that acts as a <em>"filter"</em>, a device to filter out some frequency components and leave others, <strong>must</strong> have reactive elements (or <em>"non-memoryless"</em> elements or components having memory) in order to discriminate one frequency from another. And such a filter will shift phase which will normally be different for different frequencies. But a memoryless LTI system (which is just a scaler) will not discriminate between frequencies nor will shift phase, except for possibly by 180°, which is just a polarity reversal or scaling by a negative constant.</p>
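The frequency-dependent phase shift of the unit delay can be illustrated numerically. A sketch (not part of the original answer):

```python
import numpy as np

# The unit delay z^-1 has frequency response e^{-j w}: unit magnitude at
# every frequency, but a phase lag of w radians that grows linearly with
# frequency. Combined with adders and scalers, this frequency-dependent
# phase is what lets a filter discriminate between frequencies.
w = np.linspace(0.01, np.pi - 0.01, 256)   # digital frequency in rad/sample
H = np.exp(-1j * w)                        # response of z^-1 on the unit circle

mag = np.abs(H)
phase = np.angle(H)
```

The magnitude is identically 1 (a pure delay attenuates nothing), while the phase is exactly −ω, unlike a memoryless scaler whose phase is constant.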
https://dsp.stackexchange.com/questions/55043/why-is-the-z-transform-so-important-in-digital-filters-analysis-and-design
Question: <p>In Digital Filter Design by Parks and Burrus, p. 19.</p> <hr> <p>The transfer function of an FIR filter is given by the $\mathcal Z$-transform of $h(n)$ as:</p> <p>$$H(z)=\sum_{n=0}^{N-1}h(n)z^{-n}$$</p> <p>(where $h$ is the filter)</p> <p>The frequency response of a filter is defined as</p> <p>$$H(\omega)=\sum_{n=0}^{N-1} h(n)e^{-j\omega n}$$</p> <p>where $\omega$ is frequency in $\textrm{rad/sec}$.</p> <p>Then the text proceeds to show that $H(\omega)$ is periodic with period $2\pi$:</p> <p>\begin{align} H(\omega + 2 \pi)&amp;= \sum_{n=0}^{N-1} h(n) e^{-j(\omega+2\pi)n}\\ &amp;= \sum_{n=0}^{N-1} h(n) e^{- j \omega n} \color{red}{e^{-j2\pi n}}\\ &amp;=H(\omega) \end{align} Could someone clarify how is that equal to $H(\omega)$ when there's the extra term $\color{red}{e^{-j2 \pi n}}$?</p> Answer: <p>$$\sum_{n=0}^{N-1}h(n)e^{-j\omega n}e^{-j2\pi n}=\sum_{n=0}^{N-1}h(n)e^{-j\omega n}\cdot1$$</p> <p>Since \begin{align} e^{-j2\pi n}&amp;=\cos(-2\pi n) + j\sin(-2\pi n)\\ &amp;=\cos(2\pi n) - j\sin(2\pi n)\\ &amp;=1-0\\ &amp;=1 \end{align}</p>
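The identity $e^{-j2\pi n}=1$ for integer $n$ is also easy to confirm numerically. A small sketch (the FIR coefficients are invented for illustration):

```python
import numpy as np

# H(w) and H(w + 2*pi) coincide for an FIR filter because the extra factor
# e^{-j 2 pi n} equals 1 at every integer sample index n.
h = np.array([0.5, 1.0, 0.25, -0.3])      # arbitrary FIR coefficients
n = np.arange(len(h))
w = np.linspace(0.0, np.pi, 64)

H = (h * np.exp(-1j * np.outer(w, n))).sum(axis=1)
H_shift = (h * np.exp(-1j * np.outer(w + 2 * np.pi, n))).sum(axis=1)

max_diff = np.max(np.abs(H - H_shift))
```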
https://dsp.stackexchange.com/questions/31155/periodicity-of-transfer-function-of-fir-filter-proof-parks-and-burrus-digital
Question: <p>Practical <a href="https://en.wikipedia.org/wiki/Infinite_impulse_response" rel="nofollow noreferrer">infinite impulse response</a> (IIR) filters are usually based upon analogue equivalents (Butterworth, Chebyshev, etc.) using a transformation known as the <a href="https://en.wikipedia.org/wiki/Bilinear_transform" rel="nofollow noreferrer">bilinear transform</a> which maps the <span class="math-container">$s$</span>-plane poles and zeros of the analogue filter into the <span class="math-container">$z$</span>-plane. However, it is quite possible to design an IIR filter without any reference to analogue designs, for example, by choosing appropriate locations for the poles and zeroes. Can somebody please explain the latter design of digital IIR filters (i.e., without any reference to analogue design) for the following simple example?</p> <p>For a digital system with sampling frequency of 60 MHz, design a digital IIR filter with two complex conjugate poles at 23 MHz, and one zero at 18 MHz.</p> <p>This is basically an equalizer for a lossy channel. The filter is flat at lower frequencies (DC attenuation), with a peaking at higher frequency and then drops rapidly. For that, only knowing the poles and zero locations should be enough which defines the DC attenuation, bandwidth, and boost (peaking) of the filter. The amount of boost or DC attenuation does not matter as they can be tweaked by changing the poles and zeros locations. So I don't think any further information is required here. But if so, simply make an assumption.</p> Answer: <p>An approach to design IIR filters without mapping from classical analog designs is the least squares method where the poles and zeros are selected within a constraint of filter order and targets for the magnitude and phase of the frequency response. This can result in non-causal solutions, so some experience is necessary to do this properly. 
MattL, who frequently posts here, has provided an excellent example of this in <a href="https://dsp.stackexchange.com/a/15016">DSP.SE #15007</a> and, for the narrower case of all-pass filters, with more detail as to the algorithm at his <a href="https://mattsdsp.blogspot.com/2022/10/design-of-iir-allpass-filters-least.html" rel="nofollow noreferrer">blog post</a>.</p> <p>Similar to this is <a href="https://dsp.stackexchange.com/a/18207">Greg Berchin's FDLS Algorithm</a>.</p>
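For the concrete example in the question, direct pole/zero placement needs no analog prototype at all. A hedged numpy sketch (the pole radius 0.9 is an arbitrary choice that sets the Q, and the single 18 MHz zero is made a conjugate pair so the coefficients stay real):

```python
import numpy as np

# fs = 60 MHz; resonant conjugate pole pair at 23 MHz, conjugate zero pair
# at 18 MHz on the unit circle. The coefficients come straight from the roots.
fs = 60e6
wp = 2 * np.pi * 23e6 / fs            # pole angle in rad/sample
wz = 2 * np.pi * 18e6 / fs            # zero angle in rad/sample
rp = 0.9                              # pole radius (assumption: moderate Q)

b = np.poly([np.exp(1j * wz), np.exp(-1j * wz)]).real
a = np.poly([rp * np.exp(1j * wp), rp * np.exp(-1j * wp)]).real

# magnitude response on a grid up to Nyquist
f = np.linspace(1e5, fs / 2 - 1e5, 4000)
z = np.exp(1j * 2 * np.pi * f / fs)
Hmag = np.abs(np.polyval(b, z) / np.polyval(a, z))
f_peak = f[np.argmax(Hmag)]
```

The response is exactly zero at 18 MHz and peaks near 23 MHz; moving `rp` toward 1 sharpens the peak, and an overall gain factor can be appended to set the DC attenuation.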
https://dsp.stackexchange.com/questions/72729/how-to-design-iir-digital-filters
Question: <p>As far as I have seen, almost all theoretical filter design occurs in Laplace or Z-space. Also, there is a pervasive connection to real-life analog filters in the design. If one is thinking of a purely mathematical construct (or something that could be implemented digitally), why wouldn't one filter signals in Fourier space?</p> <p>Isn't multiplying the Fourier Transform of a certain function by a unit step up - step down function, and then taking the inverse transform of the result, a &quot;band pass filter&quot;? Why should one use a Butterworth filter or similar to make a digital filter?</p> Answer: <p>As far as I know and have experienced, filtering in Fourier space has the advantage of modifying frequency components directly in the frequency domain. Say you have a frequency component at 50 Hz: you can remove it manually, even more precisely than a Butterworth filter would. That being said, a filter may also modify the phase response, introducing phase distortion; the practical effect is a modification of the group delay. In other terms, you are delaying the signal and, moreover, giving different magnitudes or weights to the frequencies of your signal. This is a nightmare in a physical implementation.</p> <p>In this context, the use of filters with a unit step up-step down response (similar to a digital anti-aliasing filter) can maintain a linear phase response.</p> <p>I hope this helps and I have not confused you!</p>
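A minimal illustration of Fourier-space filtering and its zero-phase property (a sketch with invented test frequencies, not from the answer):

```python
import numpy as np

# Brick-wall low-pass applied directly in the frequency domain: zero the
# FFT bins above 20 Hz. With an integer number of cycles in the window,
# the 5 Hz component survives with no amplitude or phase error at all.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / fs)
X[f > 20] = 0.0                     # the "unit step up - step down" mask
y = np.fft.irfft(X, n=len(x))

residual = np.max(np.abs(y - np.sin(2 * np.pi * 5 * t)))
```

The catch, as the answer hints, is that this operates on a whole block of data at once: a real-time or physical implementation cannot see the entire signal, which is where causal designs such as Butterworth filters come in.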
https://dsp.stackexchange.com/questions/70754/why-is-fourier-space-not-adequate-for-theoretical-or-digital-filters
Question: <p>I have the numerator and denominator of a lowpass digital elliptic filter. I know how to create a minimum-phase filter with the same magnitude response using cepstrum technique. But I came across <a href="https://www.dsprelated.com/freebooks/filters/Linear_Phase_Really_Ideal.html" rel="nofollow noreferrer">this</a> from Julius Smith III website.</p> <pre><code>dosounds = 1; N = 8; % filter order Rp = 0.5; % passband ripple (dB) Rs = 60; % stopband ripple (-dB) Fs = 8192; % default sampling rate (Windows Matlab) Fp = 2000; % passband end Fc = 2200; % stopband begins [gives order 8] Ns = 4096; % number of samples in impulse responses [B,A] = nellip(Rp, Rs, Fp/(0.5*Fs), Fc/(0.5*Fs)); % Octave % [B,A] = ellip(N, Rp, Rs, Fp/(0.5*Fs)); % Matlab % Minimum phase case: imp = [1,zeros(1,Ns/2-1)]; % or 'h1=impz(B,A,Ns/2-1)' h1 = filter(B,A,imp); % min-phase impulse response hmp = filter(B,A,[h1,zeros(1,Ns/2)]); % apply twice % Zero phase case: h1r = fliplr(h1); % maximum-phase impulse response hzp = filter(B,A,[h1r,zeros(1,Ns/2)]); % min*max=zp % hzp = fliplr(hzp); % not needed here since symmetric elliptplots; % plot impulse- and amplitude-responses % Let's hear them! while(dosounds) sound(hmp,Fs); pause(0.5); sound(hzp,Fs); pause(1); end </code></pre> <p>I tried to check minimum phase properties for <code>hmp</code> but <code>hmp</code> did not satisfy them. For example, I compare the cumulative energy of impulse responses <code>hmp</code> and the impulse response of the elliptic filter. Would someone please shed some light why <code>hmp</code> is a minimum phase.</p> Answer: <p>The impulse response <code>hmp</code> is computed by convolving the impulse response of the elliptic filter with itself. We know that the elliptic filter is marginally minimum-phase, i.e., it has no zeros outside the unit circle. Convolving the impulse response with itself corresponds to squaring the transfer function. 
This doubles the multiplicity of all poles and zeros but it doesn't change their location. Hence, the impulse response <code>hmp</code> must also be marginally minimum-phase.</p> <p>A more direct way to compute <code>hmp</code> would be</p> <p><code>hmp = impz( conv(B,B), conv(A,A), Ns );</code></p> <p>But note that <code>hmp</code> is of course only a finite-length approximation of the ideal infinitely long impulse response. Even though it looks virtually identical to the ideal impulse response, its properties are different. Take as an example the pole-zero plot. The ideal impulse response has <span class="math-container">$2N$</span> poles and zeros (with <span class="math-container">$N$</span> being the order of the original elliptic filter). All poles have a radius greater than zero (and less than <span class="math-container">$1$</span>), and all zeros lie on the unit circle. On the other hand, <code>hmp</code> has several thousand poles and zeros, and all its poles lie at the origin (because it is a causal FIR filter).</p>
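The equivalence between filtering twice and squaring the transfer function is easy to check numerically. A Python sketch (with a small invented filter standing in for the elliptic design):

```python
import numpy as np

def impulse_response(b, a, n):
    # direct-form difference equation driven by a unit impulse
    x = np.zeros(n); x[0] = 1.0
    y = np.zeros(n)
    for i in range(n):
        acc = sum(b[k] * x[i - k] for k in range(len(b)) if i - k >= 0)
        acc -= sum(a[k] * y[i - k] for k in range(1, len(a)) if i - k >= 0)
        y[i] = acc / a[0]
    return y

# a small stable filter (poles at z = 0.3 and z = 0.2), not the elliptic design
b = np.array([0.2, 0.1])
a = np.array([1.0, -0.5, 0.06])

n = 128
h = impulse_response(b, a, n)
h_twice = np.convolve(h, h)[:n]                         # filter the impulse response again
h_sq = impulse_response(np.convolve(b, b), np.convolve(a, a), n)
```

Passing the impulse response through the filter a second time and running the squared-coefficient filter `(conv(b,b), conv(a,a))` produce the same sequence, which is exactly what the `impz(conv(B,B), conv(A,A), Ns)` one-liner exploits.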
https://dsp.stackexchange.com/questions/94048/create-a-minimum-phase-filter-from-an-elliptic-digital-filter
Question: <p>I want to A-weight a time series with arbitrary sample rate. </p> <p>An analog A-weighting filter is defined exactly by IEC 61672-1. But there's no definition for a digital filter. One method is to use the bilinear transform (BLT) to convert the analog filter to the digital filter (as done here <a href="https://dsp.stackexchange.com/questions/410/applying-a-weighting">Applying A-weighting</a>). However this method suffers from extreme warping near Nyquist (even when the analog poles/zeros are pre-warped):</p> <p><a href="https://i.sstatic.net/Fb527.png" rel="noreferrer"><img src="https://i.sstatic.net/Fb527.png" alt="enter image description here"></a></p> <p>Figure 1: A-weighting frequency response comparison where the sample rate is $25600\textrm{ Hz}$. </p> <p>Instead I'm thinking of using an algorithm that can design a digital IIR filter with arbitrary frequency response and plugging in the frequency response of the analog A-weighting filter. </p> <ul> <li>Is this a good approach? </li> <li>If so, is there a particular algorithm that would be well suited for this? </li> </ul> <p>I've looked into MATLAB's <a href="http://www.mathworks.com/help/signal/ref/yulewalk.html" rel="noreferrer"><code>yulewalk</code></a> but I would need a corresponding Python implementation to try out. I've also come across Berchin's FDLS method in a few places, like <a href="https://dsp.stackexchange.com/questions/10428/berchins-fdls-arbitrary-filter-design-algorithm?rq=1">this question</a> for instance, but all of the links appear to be broken.</p> Answer: <p>It's a common misconception that the approximation of an analog filter by a digital filter must be bad close to Nyquist. This idea might come from the ubiquity of the bilinear transform, for which this is usually indeed the case. 
Of course, there are certain constraints on the frequency response of discrete-time filters at Nyquist, but they do not necessarily need to result in a bad approximation of an analog filter in that frequency range. The quality of the approximation close to Nyquist depends on several factors, among which are the properties of the frequency response of the analog filter, and whether only the magnitude or also the phase of the analog filter needs to be approximated.</p> <p>I've designed a 6th order IIR filter approximating the frequency response of an analog A-weighting filter, as defined <a href="https://en.wikipedia.org/wiki/A-weighting#Transfer_function_equivalent" rel="nofollow noreferrer">here</a>:</p> <p><span class="math-container">$$H(s)=\frac{k\cdot s^4}{(s+129.4)^2(s+676.7)(s+4636)(s+76655)^2}\tag{1}$$</span></p> <p>with <span class="math-container">$k=7.39705×10^9$</span>.</p> <p>I chose a sampling frequency of <span class="math-container">$48$</span> kHz. The design procedure is a heuristic iterative procedure I came up with some time ago. It's a least squares approximation based on the equation error method, and I might write up all the details some day.</p> <p>Below is a plot of the design result. Note that both plots go up to Nyquist (<span class="math-container">$24$</span> kHz). The left figure shows that one can't see any difference between the logarithmic plots of the analog and the digital frequency responses. The right-hand figure shows the approximation error, defined as the absolute value of the difference of the magnitude responses. The error shows a typical least squares behavior.</p> <p><a href="https://i.sstatic.net/umJo2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/umJo2.png" alt="enter image description here" /></a></p> <p>You can check out the filter yourself. 
Here are the coefficients:</p> <pre> b = 0.169994948147430 0.280415310498794 -1.120574766348363 0.131562559965936 0.974153561246036 -0.282740857326553 -0.152810756202003 a = 1.00000000000000000 -2.12979364760736134 0.42996125885751674 1.62132698199721426 -0.96669962900852902 0.00121015844426781 0.04400300696788968 </pre> <hr> <p>Below is the Octave/Matlab code I used to design above filter. The function <code>eqnerror.m</code> can be found <a href="https://gist.github.com/mattdsp/7617379e9920c5cbb016" rel="nofollow noreferrer">here</a>. Note that the iteration step is purely heuristic and it might not work well for other specificiations.</p> <pre> fs = 48000; nw = 500; w = logspace( -7, pi, nw ); w = w(:); s = 1i * w * fs; % analog filter % from https://en.wikipedia.org/wiki/A-weighting#Transfer_function_equivalent ka = 7.39705e9; Ha = ka * s.^4 ./ ( (s+129.4).^2 .* (s+676.7) .* (s+4636) .* (s+76655).^2 ); W = ones( nw, 1 ); H = Ha; Imax = 50; N = 6; for k = 1:Imax D = abs( Ha ) .* exp( 1i * angle(H) ); [b,a] = eqnerror( N, N, w, D, W, 30 ); H = freqz( b, a, w ); end % map any poles outside the unit circle into the circle p = roots(a); g = 1; for k = 1 : N, if abs( p(k) ) > 1, g = g * abs( p(k) ); p(k) = 1 / conj( p(k) ); end end a = g * poly(p); a = a(:); sc = a(1); a = a / sc; b = b / sc; </pre>
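As a sanity check on the analog prototype (1), the A-weighting curve is normalized to 0 dB at 1 kHz, which a quick numpy evaluation of $H(s)$ on the imaginary axis confirms (a side check, not part of the answer's design procedure):

```python
import numpy as np

# Evaluate the analog A-weighting transfer function H(s) from (1) at s = j*2*pi*f
# and confirm the 0 dB normalization at 1 kHz implied by k = 7.39705e9.
k = 7.39705e9

def H(f):
    s = 1j * 2 * np.pi * f
    return k * s**4 / ((s + 129.4)**2 * (s + 676.7) * (s + 4636) * (s + 76655)**2)

gain_1k_db = 20 * np.log10(np.abs(H(1000.0)))
```

The same helper evaluated on a log-spaced grid reproduces the analog curve that the digital design above is matched against.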
https://dsp.stackexchange.com/questions/36077/design-of-a-digital-a-weighting-filter-with-arbitrary-sample-rate
Question: <p>I have read some articles on Allan deviation and understand that the slope of the <span class="math-container">$\sigma(\tau)$</span> diagram corresponds to the exponent of power-law noise:</p> <p><span class="math-container">$$S_y(f)\sim f^\alpha \implies \sigma(\tau) \sim \tau ^{-\frac{\alpha + 1}{2}}$$</span></p> <p>(taken from <a href="https://liquidinstruments.com/white-papers/understanding-and-performing-allan-variance-measurements/" rel="nofollow noreferrer">understanding and performing allan-variance measurements</a>)</p> <h2>Question</h2> <ul> <li><p>what do <span class="math-container">$\tau_w$</span> and <span class="math-container">$\sigma(\tau_w):\ \sigma'(\tau_w)=-1/2$</span> tell me about the properties of the white noise in the signal?</p> </li> <li><p>(how) can I exploit that information to design a digital filter that optimally suppresses e.g. the signal's white noise?</p> </li> </ul> <p><strong>Addendum:</strong><br /> out of curiosity I passed the question to MS Copilot:</p> <blockquote> <p><em>I have read some articles on Allan deviation and understand that the slope of the σ(τ) diagram corresponds to the exponent of power-law noise: S_y(f) ∼ f^α ⟹ σ(τ) ∼ τ^(−(α+1)/2) (taken from understanding and performing allan-variance measurements) Question what do τw and σ(τw): σ′(τw)=−1/2 tell me about the properties of the white noise in the signal? (how) can I exploit that information to design a digital filter that optimally suppresses e.g. the signal's white noise?</em></p> </blockquote> <p>and here is the generated answer:</p> <blockquote> <p><em>Great question! The relationship between Allan deviation and white noise properties can be quite insightful for filter design.</em></p> <p><em>Understanding τw and σ(τw):</em></p> <p><em>When σ′(τw) = -1/2, it indicates that the noise follows a white frequency noise pattern. 
This means that the power spectral density (PSD) of the noise is proportional to 1/f², which corresponds to a random walk in phase.</em></p> <p><em>The Allan deviation σ(τ) for white frequency noise decreases as τ^(-1/2), meaning that averaging over longer time intervals reduces the noise variance.</em></p> <p><em>Exploiting this for filter design:</em></p> <p><em>Since white noise is characterized by a flat PSD, an averaging filter (such as a moving average or low-pass filter) can effectively suppress it.</em></p> <p><em>The optimal filter depends on the application, but a finite impulse response (FIR) low-pass filter or a Kalman filter can be useful for minimizing white noise while preserving signal integrity.</em></p> <p><em>The Allan deviation plot can help determine the best averaging time τw to minimize noise without introducing excessive lag.</em></p> <p><em>For practical implementation, you might want to explore overlapping Allan variance techniques, which improve confidence in noise characterization. If you're working with MEMS sensors, there are specific guidelines on how to apply Allan variance for noise suppression. You can also check out discussions on using Allan deviation to determine optimal averaging time.</em></p> <p><em>Would you like help designing a specific filter for your signal? I can suggest some MATLAB or Python implementations!</em></p> </blockquote> <p>What a difference to the human-generated answer!</p> Answer: <p>The Allan Deviation is a powerful tool for assessing the stationarity of noise processes. However, it is not suited for designing frequency-selective filters, which generally assume stationary signals. 
For filter design, what's more useful is the power spectral density (PSD) of the signal - understanding which portions of the spectrum contain desired signal versus noise helps determine how to shape the filter response.</p> <p>Importantly, the mapping from PSD to Allan Deviation is many-to-one, meaning multiple different PSDs can produce the same Allan Deviation curve. As a result, the transformation is not reversible: you cannot uniquely recover the full spectral content of a signal from its Allan Deviation alone. This underscores a key distinction: while the PSD provides detailed frequency resolution, the Allan Deviation captures broader statistical trends over time. Each has its role, but they are not interchangeable.</p> <p>This is why the Allan Deviation (and its associated Sigma-Tau diagrams) is widely used in fields where non-stationary noise is prevalent, such as atomic clock development (which I am currently involved in) and gyroscope drift analysis. In these applications, the signal's statistical behavior evolves over time, making assumptions of stationarity invalid over longer time intervals.</p> <p>One of the most valuable utilities of the Allan Deviation is that it provides a practical way to determine the maximum time duration over which the assumption of stationarity is valid. All real-world systems eventually exhibit drift, so while terms like &quot;white noise&quot; and &quot;stationary&quot; are often convenient approximations, they must be applied judiciously. 
The Allan Deviation allows us to quantify how long the assumption of stationarity can be applied, something a traditional PSD analysis does not easily reveal.</p> <p>Rather than restate everything here, I've discussed the utility of the Allan Deviation in more depth in the following answers:</p> <p><a href="https://dsp.stackexchange.com/questions/79121/is-allan-variance-still-relevant/79124#79124">Is Allan variance still relevant?</a></p> <p><a href="https://dsp.stackexchange.com/questions/53970/how-to-interpret-allan-deviation-plot-for-gyroscope/53993#53993">How to interpret Allan Deviation plot for gyroscope?</a></p> <p><a href="https://dsp.stackexchange.com/questions/87402/allan-deviation-to-determine-averaging-time/87403#87403">Allan deviation to determine averaging time</a></p> <p><a href="https://dsp.stackexchange.com/questions/87466/usefulness-of-allan-deviation-with-dc-signals/87468#87468">Usefulness of Allan deviation with DC signals</a></p> <p><a href="https://dsp.stackexchange.com/questions/88879/1-f-noise-why-does-the-allan-deviation-remain-constant-while-standard-error-of/88892#88892">1/f noise: Why does the Allan Deviation remain constant, while standard error of mean keeps decreasing for long averages?</a></p> <p>Finally, while ideal white noise doesn't exist in physical systems, stationary white noise processes are well-defined in discrete-time signal processing. For such a process, each sample is uncorrelated with the next, and the PSD is flat over the complete Nyquist band. In this case, the optimal filter for estimating the average of a constant signal corrupted by white noise is simply a uniformly weighted moving average filter - a direct and effective solution when the assumption of stationarity holds.</p>
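The closing claim, that a uniform moving average is optimal for a constant signal in additive white noise, implies the familiar variance reduction by the averaging length. A quick seeded simulation (a sketch, not from the answer; M and the record length are arbitrary):

```python
import numpy as np

# For a constant signal in additive white noise, a uniformly weighted
# moving average of length M reduces the noise variance by a factor of M.
rng = np.random.default_rng(0)
M, n = 16, 200000
x = 1.0 + rng.standard_normal(n)              # constant + unit-variance white noise

y = np.convolve(x, np.ones(M) / M, mode="valid")
var_ratio = np.var(x) / np.var(y)             # should be close to M
```

This τ^(-1/2) behavior of the deviation under averaging is precisely the -1/2 slope region of a Sigma-Tau diagram; once the slope departs from -1/2, longer averaging no longer helps, which is how the Allan Deviation pins down the useful averaging time.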
https://dsp.stackexchange.com/questions/97781/designing-digital-filters-on-basis-of-sigma-tau-diagrams
Question: <p>I'm trying to self learn the art of signal processing whilst moving through my third year pure maths degree. </p> <p>Sorry if my terminology is incorrect however I hope I am understandable!</p> <p>I am looking at data which is coming from an accelerometer, distance data from a separate sensor and time data incrementing by 0.01 seconds e.g. to be clear in case my terminology is incorrect I have a dataset which has a row for each 0.01 seconds with the row having data from the accelerometer and distance sensors. I believe this means the data is sampled at 100 Hz.</p> <p>Please can someone confirm that my choice of using a digital filter is correct?</p> <p>My reasoning is that the data is not analogue and is digital and as such I should not use a standard Butterworth (or other) filter and should look for a digital version. Is this reasoning correct?</p> <p>I want to use the data to compare the second derivative with respect to time of the distance with the RSS of acceleration and before I do this I want to 'clean' up the data as much as possible and it is my understanding that filtering will give me this.</p> <p>I am using Octave to perform the maths and have various pieces of code to filter the data, however I do not feel like I understand the filter settings I should use! Before I start to try and understand the filter settings are all my assumptions and reasoning reasonable?</p> <p>My Octave code for my filter is as follows:</p> <pre><code>%I use Octave, however I believe Matlab will be very similar if not identical % I believe that my sample frequency is 100 Hz. mysamf = 100; % Nyquist frequency. I believe this is set to half the sample frequency Fnyq = mysamf/2; % I do not understand the cut-off frequency, however I set it to 45 % I can see through plotting the results that it does make an impact. mycutf=45; % Here I create a 1st order Butterworth filter, using the above restrictions. 
[b,a]=butter(2, mycutf/Fnyq); % I pass my dataset that contains the displacements. output=filter(b,a,v_dist); </code></pre> <p>Edit:</p> <p>I realize I should have explained this at the beginning however I didn't think that the source of the data would influence the filtering approach (digital/analogue).</p> <p>My Sensor data is coming from a Pogo stick - the acceleometer mounted at the base of the stick and a displacement sensor measuring the movement of the stick and spring assembly.</p> <p>The Pogo stick is being used by a Gazelle on steroids so is going wild, however is always bouncing around on and off axis, big jumps small hops, soft ground hard ground.</p> <p>Thanks,</p> <p>Sam.</p> Answer:
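The question's settings can be sanity-checked without the Signal package. A hedged Python sketch (my own re-derivation, not from the thread) of what `butter(2, mycutf/Fnyq)` computes: a bilinear transform of the 2nd-order analog Butterworth prototype with a pre-warped cutoff.

```python
import numpy as np

# 2nd-order digital Butterworth low-pass via the bilinear transform of
# H(s) = 1 / (s^2 + sqrt(2) s + 1), with the cutoff pre-warped so the
# -3 dB point lands exactly at fc.
def butter2_lowpass(fc, fs):
    wc = np.tan(np.pi * fc / fs)               # pre-warped analog cutoff
    k = 1.0 / (1 + np.sqrt(2) * wc + wc**2)
    b = k * wc**2 * np.array([1.0, 2.0, 1.0])
    a = np.array([1.0,
                  2 * (wc**2 - 1) * k,
                  (1 - np.sqrt(2) * wc + wc**2) * k])
    return b, a

def gain(b, a, f, fs):
    # |H(e^{j 2 pi f / fs})| for coefficients in z^-1 form
    zinv = np.exp(-1j * 2 * np.pi * f / fs)
    return abs(np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv))

fs, fc = 100.0, 45.0                           # the question's sample rate and cutoff
b, a = butter2_lowpass(fc, fs)
```

Note that `butter(2, ...)` is a 2nd-order design (the comment in the question says 1st order), and a 45 Hz cutoff against a 50 Hz Nyquist leaves almost no stopband, so this filter barely attenuates anything; a lower `mycutf` would be needed for visible smoothing.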
https://dsp.stackexchange.com/questions/52920/digital-or-analogue-filtering
Question: <p>I'm trying to implement a digital filter that has the frequency response shape equal to the image below:</p> <p><a href="https://i.sstatic.net/rYXJ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rYXJ2.png" alt="enter image description here" /></a></p> <p>Where i will use equation (11) to implement it with a sampling frequency of 48kHz. <a href="https://i.sstatic.net/fSbQ0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fSbQ0.png" alt="enter image description here" /></a></p> <p>The filter coefficients can be found in the same document: Where each w' follows the formula above.</p> <p><a href="https://i.sstatic.net/83Cal.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/83Cal.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/nffaE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nffaE.png" alt="enter image description here" /></a></p> <p>So i put everything into matlab:</p> <pre><code>fs = 48000; f1 = 20.598997; f2 = 107.65265; f3 = 737.86223; f4 = 12194.217; w1 = 2*tan(pi*(f1/fs)); w2 = 2*tan(pi*(f2/fs)); w3 = 2*tan(pi*(f3/fs)); w4 = 2*tan(pi*(f4/fs)); %testeb2 = 2*tan(pi*(250/1000))*2*tan(pi*(250/1000))*(1/sqrt(2)) % Filter coefficients for the A weighting filter a0 = 64 + (16*w2*w1*w3) + (4*w2*w1*w1*w3) + (32*w2*w1*w4) + (8*w2*w1*w1*w4) + (32* w1 * w3 * w4) + (16 *w2 * w3 * w4) + (64 *w1) + (32 *w2) + (32 *w3) + (64 *w4) + (32 *w2 * w1) + (8 *w2 * w1*w1) + (16 *w1*w1) + (16 *w2 * w1 * w3 * w4) + (4 *w2 * w1*w1 * w3 * w4) + (32 *w1 * w3) + (16 *w2 * w3) + (8 *w1*w1 * w3) + (64 *w1 * w4) + (32 *w2 * w4) + (32 *w3 * w4) + (16 *w1*w1 * w4) + (8 *w1*w1 * w3 * w4) + (16 *w4*w4) + (16 *w4*w4 * w1) + (4 *w4*w4 * w1*w1) + (4 *w4*w4 * w1 * w2 * w3) + (w4*w4 * w1*w1 * w2 * w3) + (8 *w4*w4 * w1 * w2) + (2 *w4*w4 * w1*w1 * w2) + (8 *w4*w4 * w2) + (8 *w4*w4 * w3) + (8 *w4*w4 * w1 * w3) + (2 *w4*w4 * w1*w1 * w3) + (4 *w4*w4 * w2 * w3) a1 = -128 + (64 *w2 * w1 * w3) + (24* w2 * 
w1*w1 * w3) + (128 *w2 * w1 * w4) + (48 *w2 * w1*w1 * w4) + (128 *w1 * w3 * w4) + (64 *w2 * w3 * w4) + (64 *w2 * w1) + (32 *w2 * w1*w1) + (32 *w1*w1) + (96 *w2 * w1 * w3 * w4) + (32 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) + (32 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) + (64 *w1*w1 * w4) + (48 *w1*w1 * w3 * w4) + (32 *w4*w4) + (64 *w4*w4 * w1) + (24 *w4*w4 * w1*w1) + (32 *w4*w4 * w1 * w2 * w3) + (10 *w4*w4 * w1*w1 * w2 * w3) + (48 *w4*w4 * w1 * w2) + (16 *w4*w4 * w1*w1 * w2) + (32 *w4*w4 * w2) + (32 *w4*w4 * w3) + (48 *w4*w4 * w1 * w3) + (16 *w4*w4 * w1*w1 * w3) + (24 *w4*w4 * w2 * w3) a2 = -192 + (48 *w2 * w1 * w3) + (52 *w2 * w1*w1 * w3) + (96 *w2 * w1 * w4) + (104 *w2 * w1*w1 * w4) + (96 *w1 * w3 * w4) + (48 *w2 * w3 * w4) - (320 *w1) - (160 *w2) - (160 *w3) - (320 *w4) - (96 *w2 * w1) + (24 *w2 * w1*w1) - (48 *w1*w1) + (208 *w2 * w1 * w3 * w4) + (108 *w2 * w1*w1 * w3 * w4) - (96 *w1 * w3) - (48 *w2 * w3) + (24 *w1*w1 * w3) - (192 *w1 * w4) - (96 *w2 * w4) - (96 *w3 * w4) + (48 *w1*w1 * w4) + (104 *w1*w1 * w3 * w4) - (48 *w4*w4) + (48 *w4*w4 * w1) + (52 *w4*w4 * w1*w1) + (108 *w4*w4 * w1 * w2 * w3) + (45 *w4*w4 * w1*w1 * w2 * w3) + (104 *w4*w4 * w1 * w2) + (54 *w4*w4 * w1*w1 * w2) + (24 *w4*w4 * w2) + (24 *w4*w4 * w3) + (104 *w4*w4 * w1 * w3) + (54 *w4*w4 * w1*w1 * w3) + (52 *w4*w4 * w2 * w3) a3 = 512 - (128 *w2 * w1 * w3) + (32 *w2 * w1*w1 * w3) - (256 *w2 * w1 * w4) + (64 *w2 * w1*w1 * w4) - (256 *w1 * w3 * w4) - (128 *w2 * w3 * w4) - (256 *w2 * w1) - (64 *w2 * w1*w1) - (128 *w1*w1) + (128 *w2 * w1 * w3 * w4) + (192* w2 * w1*w1 * w3 * w4) - (256 *w1 * w3) - (128 *w2 * w3) - (64 *w1*w1 * w3) - (512 *w1 * w4) - (256 *w2 * w4) - (256 *w3 * w4) - (128 *w1*w1 * w4) + (64 *w1*w1 * w3 * w4) - (128 *w4*w4) - (128 *w4*w4 * w1) + (32 *w4*w4 * w1*w1) + (192 *w4*w4 * w1 * w2 * w3) + (120 *w4*w4 * w1*w1 * w2 * w3) + (64 *w4*w4 * w1 * w2) + (96 *w4*w4 * w1*w1 * w2) - (64 *w4*w4 * w2) - (64 *w4*w4 * w3) + (64 *w4*w4 * w1 * w3) + (96 *w4*w4 
* w1*w1 * w3) + (32 *w4*w4 * w2 * w3) a4 = 128 - (224 *w2 * w1 * w3) - (56 *w2 * w1*w1 * w3) - (448 *w2 * w1 * w4) - (112 *w2 * w1*w1 * w4) - (448 *w1 * w3 * w4) - (224 *w2 * w3 * w4) + (640 *w1) + (320 *w2) + (320 *w3) + (640 *w4) + (64* w2 * w1) - (112 *w2 * w1*w1) + (32 *w1*w1) - (224 *w2 * w1 * w3 * w4) + (168 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) - (112 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) - (224 *w1*w1 * w4) - (112 *w1*w1 * w3 * w4) + (32 *w4*w4) - (224 *w4*w4 * w1) - (56 *w4*w4 * w1*w1) + (168 *w4*w4 * w1 * w2 * w3) + (210 *w4*w4 * w1*w1 * w2 * w3) - (112 *w4*w4 * w1 * w2) + (84 *w4*w4 * w1*w1 * w2) - (112 *w4*w4 * w2) - (112 *w4*w4 * w3) - (112 *w4*w4 * w1 * w3) + (84 *w4*w4 * w1*w1 * w3) - (56 *w4*w4 * w2 * w3) a5 = - (448 *w2 * w1 * w3 * w4) - (224 *w1*w1 * w3 * w4) + (384 *w3 * w4) - (112 *w2 * w1*w1 * w3) - (112 *w4*w4 * w1*w1) + (384 *w1 * w3) - (224 *w4*w4 * w1 * w3) + (192 *w2 * w3) - (224 *w2 * w1*w1 * w4) + (192 *w1*w1) + (252 *w4*w4 * w1*w1 * w2 * w3) + (384 *w2 * w1) - (768) - (224 *w4*w4 * w1 * w2) - (112 *w4*w4 * w2 * w3) + (384 *w2 * w4) + (192 *w4*w4) + (768 *w1 * w4) a6 = 128 + (224 *w2 * w1 * w3) - (56 *w2 * w1*w1 * w3) + (448 *w2 * w1 * w4) - (112 *w2 * w1*w1 * w4) + (448* w1 * w3 * w4) + (224 *w2 * w3 * w4) - (640 *w1) - (320 *w2) - (320 *w3) - (640 *w4) + (64 *w2 * w1) + (112 *w2 * w1*w1) + (32 *w1*w1) - (224 *w2 * w1 * w3 * w4) - (168 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) + (112 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) + (224 *w1*w1 * w4) - (112 *w1*w1 * w3 * w4) + (32 *w4*w4) + (224 *w4*w4 * w1) - (56 *w4*w4 * w1*w1) - (168 *w4*w4 * w1 * w2 * w3) + (210 *w4*w4 * w1*w1 * w2 * w3) - (112 *w4*w4 * w1 * w2) - (84 *w4*w4 * w1*w1 * w2) + (112 *w4*w4 * w2) + (112 *w4*w4 * w3) - (112 *w4*w4 * w1 * w3) - (84 *w4*w4 * w1*w1 * w3) - (56 *w4*w4 * w2 * w3) a7 = 512 + (128 *w2 * w1 * w3) + (32 *w2 * w1*w1 * w3) + (256 *w2 * w1 * w4) + (64 *w2 * w1*w1 * w4) + (256 *w1 * 
w3 * w4) + (128 *w2 * w3 * w4) - (256 *w2 * w1) + (64 *w2 * w1*w1) - (128 *w1*w1) + (128 *w2 * w1 * w3 * w4) - (192 *w2 * w1*w1 * w3 * w4) - (256 *w1 * w3) - (128 *w2 * w3) + (64 *w1*w1 * w3) - (512 *w1 * w4) - (256 *w2 * w4) - (256 *w3 * w4) + (128 *w1*w1 * w4) + (64 *w1*w1 * w3 * w4) - (128 *w4*w4) + (128 *w4*w4 * w1) + (32 *w4*w4 * w1*w1) - (192 *w4*w4 * w1 * w2 * w3) + (120 *w4*w4 * w1*w1 * w2 * w3) + (64 *w4*w4 * w1 * w2) - (96 *w4*w4 * w1*w1 * w2) + (64 *w4*w4 * w2) + (64 *w4*w4 * w3) + (64 *w4*w4 * w1 * w3) - (96 *w4*w4 * w1*w1 * w3) + (32 *w4*w4 * w2 * w3) a8 = -192 - (48* w2 * w1 * w3) + (52 *w2 * w1*w1 * w3) - (96 *w2 * w1 * w4) + (104 *w2 * w1*w1 * w4) - (96 *w1 * w3 * w4) - (48 *w2 * w3 * w4) + (320 *w1) + (160 *w2) + (160* w3) + (320 *w4) - (96 *w2 * w1) - (24 *w2 * w1*w1) - (48 *w1*w1) + (208 *w2 * w1 * w3 * w4) - (108 *w2 * w1*w1 * w3 * w4) - (96 *w1 * w3) - (48 *w2 * w3) - (24 *w1*w1 * w3) - (192* w1 * w4) - (96 *w2 * w4) - (96* w3 * w4) - (48 *w1*w1 * w4) + (104 *w1*w1 * w3 * w4) - (48 *w4*w4) - (48 *w4*w4 * w1) + (52 *w4*w4 * w1*w1) - (108 *w4*w4 * w1 * w2 * w3) + (45 *w4*w4 * w1*w1 * w2 * w3) + (104 *w4*w4 * w1 * w2) - (54 *w4*w4 * w1*w1 * w2) - (24 *w4*w4 * w2) - (24 *w4*w4 * w3) + (104 *w4*w4 * w1 * w3) - (54 *w4*w4 * w1*w1 * w3) + (52 *w4*w4 * w2 * w3) a9 = -128 - (64* w2 * w1 * w3) + (24 *w2 * w1*w1 * w3) - (128 *w2 * w1 * w4) + (48 *w2 * w1*w1 * w4) - (128 *w1 * w3 * w4) - (64 *w2 * w3 * w4) + (64 *w2 * w1) - (32 *w2 * w1*w1) + (32 *w1*w1) + (96 *w2 * w1 * w3 * w4) - (32 *w2 * w1*w1 * w3 * w4) + (64 *w1 * w3) + (32 *w2 * w3) - (32 *w1*w1 * w3) + (128 *w1 * w4) + (64 *w2 * w4) + (64 *w3 * w4) - (64 *w1*w1 * w4) + (48* w1*w1 * w3 * w4) + (32 *w4*w4) - (64 *w4*w4 * w1) + (24 *w4*w4 * w1*w1) - (32 *w4*w4 * w1 * w2 * w3) + (10 *w4*w4 * w1*w1 * w2 * w3) + (48 *w4*w4 * w1 * w2) - (16 *w4*w4 * w1*w1 * w2) - (32 *w4*w4 * w2) - (32 *w4*w4 * w3) + (48 *w4*w4 * w1 * w3) - (16 *w4*w4 * w1*w1 * w3) + (24 *w4*w4 * w2 * w3) a10 = 64 - (16 *w2 * w1 * w3) + 
(4 *w2 * w1*w1 * w3) - (32 *w2 * w1 * w4) + (8 *w2 * w1*w1 * w4) - (32 *w1 * w3 * w4) - (16 *w2 * w3 * w4) - (64 *w1) - (32 *w2) - (32 *w3) - (64 *w4) + (32 *w2 * w1) - (8 *w2 * w1*w1) + (16 *w1*w1) + (16 *w2 * w1 * w3 * w4) - (4 *w2 * w1*w1 * w3 * w4) + (32 *w1 * w3) + (16 *w2 * w3) - (8 *w1*w1 * w3) + (64 *w1 * w4) + (32 *w2 * w4) + (32 *w3 * w4) - (16* w1*w1 * w4) + (8 *w1*w1 * w3 * w4) + (16 *w4*w4) - (16 *w4*w4 * w1) + (4 *w4*w4 * w1*w1) - (4 *w4*w4 * w1 * w2 * w3) + (w4*w4 * w1*w1 * w2 * w3) + (8 *w4*w4 * w1 * w2) - (2 *w4*w4 * w1*w1 * w2) - (8 *w4*w4 * w2) - (8 *w4*w4 * w3) + (8 *w4*w4 * w1 * w3) - (2 *w4*w4 * w1*w1 * w3) + (4 *w4*w4 * w2 * w3) b0 = 16*w4*w4 b1 = 32*w4*w4 b2 = -48*w4*w4 b3 = -128*w4*w4 b4 = 32*w4*w4 b5 = 192*w4*w4 b6 = 32*w4*w4 b7 = -128*w4*w4 b8 = -48*w4*w4 b9 = 32*w4*w4 b10 = 16*w4*w4 Ga = 10^(2/20) %teste x=[0 0 0 0 0 0 0 0 0 0 0]; y=[0 0 0 0 0 0 0 0 0 0 0]; t = linspace(0,1,48000); yy = zeros(1, 48000); for c= 1:48000 x(1) = sin(2*pi*100*t(c)); y(1) = (1/a0)*(b0*x(1) + b1*x(2) + b2*x(3) + b3*x(4) + b4*x(5) + b5*x(6) + b6*x(7) + b7*x(8) + b8*x(9) + b9*x(10) + b10*x(11) + a1*y(2) + a2*y(3) + a3*y(4) + a4*y(5) + a5*y(6) + a6*y(7) + a7*y(8) + a8*y(9) + a9*y(10) + a10*y(11)); yy(c) = y(1); % update x and y data vectors for i = 10:-1:1 x(i+1) = x(i); % store xi y(i+1) = y(i); % store yi end end plot(t,yy) </code></pre> <p>And nothing but disaster happens when testing with a sine wave with 100Hz:</p> <p><a href="https://i.sstatic.net/vWhZN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vWhZN.png" alt="enter image description here" /></a></p> <p>Signal just gets huge, same happens with other frequencies. 
What am I doing wrong?</p> <p>Edit:</p> <pre><code>sys = tf(10^(2/20).*[b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 b10],[a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 a10],1/fs); P = pole(sys) zer=0 zplane(zer,P) P = -1.0001 + 0.0001i -1.0001 - 0.0001i -0.9999 + 0.0001i -0.9999 - 0.0001i 0.9973 + 0.0000i 0.9973 - 0.0000i 0.9860 + 0.0000i 0.9078 + 0.0000i -0.0127 + 0.0000i -0.0127 - 0.0000i </code></pre> <p><a href="https://i.sstatic.net/uCGS8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uCGS8.png" alt="enter image description here" /></a></p> <p>The poles seem to be almost in the unstable region. Do the poles with real part equal to 1.0001 ruin everything? How can I fix them?</p> Answer: <ol> <li>IIR filters should, almost exactly universally* be broken down into first- and second-order sections and cascaded**. The sensitivity of filter pole locations to coefficient values goes up with filter order; even the slightest rounding error will screw up a 10th-order filter.</li> <li>If you have the signal processing toolbox, you should use the built-in IIR filter function. <ol> <li>If you don't, you should still vectorize a and b</li> </ol> </li> </ol> <p>My second-choice recommendation is to vectorize a and b and use a polynomial root-finder to verify that the poles and zeros are in sensible locations, then try to find the bug that's making them wrong.</p> <blockquote> <p>The poles seem to be almost in the unstable region.</p> </blockquote> <p>No, the poles <em>are</em> in the unstable region. Anything outside the unit circle (<span class="math-container">$|z| &gt; 1$</span>) is unstable.</p> <blockquote> <p>Do the poles with real part equal to 1.0001 ruin everything?</p> </blockquote> <p>No. They are diagnostic of the fact that <em>everything is ruined</em>. You have some underlying trouble that needs to be fixed.</p> <blockquote> <p>How can I fix them?</p> </blockquote> <ul> <li>Sensibly, take my first-choice recommendation, below, or do what I'd be tempted to do myself.
That's why I'm recommending them. <ul> <li>Note that <strong>someone has already volunteered you a link to a solution</strong>.</li> </ul> </li> <li>If you must persist, try a math package that has arbitrarily high precision (there <strong>may</strong> be a Matlab extension that does this), and try the root-finding at higher precision. But be aware that you're going down a rabbit-hole, and in a world where people publish designs for this sort of thing <em>for free</em> and <em>because it's fun</em>, it's a pointless rabbit-hole.</li> </ul> <p>My first-choice recommendation is to try to find a paper that gives you the poles and zeros of a cascade of 2nd-order filters.</p> <p>What <em>I'd</em> be tempted to do if my first choice didn't work out would be to fit my own IIR filters to the recommended filter shape. Even if I did have that first-choice reference, I may do it anyway and compare which looks better.</p> <p>* unless you're an <strong>absolute freaking expert</strong> <em>and</em> willing to argue with your colleagues <em>and</em> expect to win, you should read this as &quot;absolutely universally&quot;.</p> <p>** or, rarely, cascade-parallel -- this would apply if you have a wide notch filter or similar.</p>
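The cascade structure recommended in point 1 is easy to sketch. Below is a minimal Python illustration (stdlib only; the two biquad sections use made-up stable coefficients, NOT the A-weighting coefficients from the question), showing a cascade of second-order sections whose recursion stays numerically tame:

```python
# Cascade of second-order sections (biquads), as recommended in the answer.
# Coefficients below are illustrative placeholders (two stable sections),
# NOT the A-weighting filter from the question.

def biquad(x, b, a):
    """Direct Form II transposed biquad; b = (b0, b1, b2), a = (1, a1, a2)."""
    b0, b1, b2 = b
    _, a1, a2 = a
    s1 = s2 = 0.0
    y = []
    for v in x:
        out = b0 * v + s1
        s1 = b1 * v - a1 * out + s2
        s2 = b2 * v - a2 * out
        y.append(out)
    return y

def sosfilt(sos, x):
    """Run the signal through each second-order section in turn."""
    for b, a in sos:
        x = biquad(x, b, a)
    return x

# Poles of both sections are well inside the unit circle, so the cascade
# is stable and its impulse response decays.
sos = [((0.2, 0.4, 0.2), (1.0, -0.5, 0.1)),
       ((0.3, 0.3, 0.0), (1.0, -0.2, 0.05))]

impulse = [1.0] + [0.0] * 63
h = sosfilt(sos, impulse)
print(max(abs(v) for v in h[-8:]))  # tail of the impulse response, ~0
```

With the toolbox, MATLAB's `tf2sos` does the factoring into sections for you; the point here is only that each section's pole pair can be represented accurately, where a single 10th-order polynomial cannot.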
https://dsp.stackexchange.com/questions/85733/trying-to-implement-a-digital-a-frequency-filter
Question: <p>I have posted this question on &quot;Electrical Engineering&quot;, but this seems a more appropriate place. I am trying to model a bireciprocal Cauer filter in LTspice but I don't get the expected results. More precisely, using this formula for the coefficients</p> <p><span class="math-container">$\gamma=\frac{re(p_i)−1}{re(p_i)+1}$</span></p> <p>where <span class="math-container">$re(p_i)$</span> is the real part of the pole, gives this result:</p> <p><a href="https://i.sstatic.net/dOE8W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dOE8W.png" alt="normal way" /></a></p> <p>At this point it doesn't really matter what settings were used in the beginning; the &quot;why&quot; is in the following. Among the few references I found online, one that gives a numerical example is a thesis, <i>Design and Realization Methods for IIR Multiple Notch Filters and High Speed Narrow-band and Wide-band Filters, L. Barbara Dai</i>, and, simply by looking at the numbers and comparing them with what I had, it seemed as if the poles need to be &quot;normalized&quot; to the single real pole, <span class="math-container">$p_{\frac{N+1}{2}}$</span>.
And so I tried:</p> <p><span class="math-container">$\gamma=\frac{\frac{re(p_i)}{re(p_{\frac{N+1}{2}})}-1}{\frac{re(p_i)}{re(p_{\frac{N+1}{2}})}+1}$</span></p> <p>and, even if the numerical values still differed, though not as much as before, I got this result:</p> <p><a href="https://i.sstatic.net/X1Yqz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X1Yqz.png" alt="&quot;normalized&quot; coefficients" /></a></p> <p>The example used here is not the one used in the thesis, but I seem to get good results (I cannot verify them) with either stop-band or transition-band optimization and for any (odd) order.</p> <p>So, my question is: are the <span class="math-container">$A_i$</span> terms from the formula <span class="math-container">$\gamma=\frac{A_i-2}{A_i+2}$</span> calculated as <span class="math-container">$A_i=2\sigma_i$</span>, where <span class="math-container">$\sigma_i$</span> is the real part of the complex <i>s-domain</i> pole <span class="math-container">$s_i=-\sigma_i\pm j \omega_i$</span>, or somehow else? If else, how?</p> <hr /> <p>Just for the sake of comparison, the following is a test using the same settings as in the thesis: <span class="math-container">$A_s=68 =&gt; A_p, \omega_s=\frac{2}{3} =&gt; \omega_p , f_0=2$</span>. The order is calculated based on these four parameters, which will result in a stop-band attenuation optimization, rather than a transition-band or a pass-band optimization.
I say this because I don't know what approach Barbara Dai has.</p> <p>The first simulation is with the raw values from the thesis for <span class="math-container">$\gamma_i$</span> (black trace) and the quantized values (blue trace) (surprisingly, the quantized values seem to get a better result):</p> <p><img src="https://s27.postimg.org/txv5djaw3/thesis.png" alt="thesis" /></p> <p>If I calculate the values for <span class="math-container">$\gamma_i$</span> according to the equation from p.26 of the thesis, I get these values:</p> <p><span class="math-container">$\gamma_1=−0.098365443613057, \gamma_2=−0.34760115224764, \gamma_3=−0.7329991130665$</span></p> <p>where the values for the real part(s) of the poles, <span class="math-container">$\sigma_i$</span>, are:</p> <p><span class="math-container">$\sigma_1=0.15406868065906, \sigma_2=0.48411864791316, \sigma_3=0.82088758493805$</span></p> <p>The result of the simulation with LTspice is this:</p> <p><img src="https://s29.postimg.org/8h40dsfnr/test1.png" alt="my results" /></p> <p>where the black trace is with the above coefficients and the blue trace is with Barbara Dai's unquantized ones.</p> <p>Seeing this, I tried to transform back the values for <span class="math-container">$\gamma_{1,2,3}$</span> from the thesis, to see what values the poles had originally and compare them against my results:</p> <p><span class="math-container">$\sigma_1^{BD}=0.15537305159045, \sigma_2^{BD}=0.48869842253758, \sigma_3^{BD}=0.83127957288835$</span></p> <p>which are different from mine. However, at a glance, it seemed that I could try to divide each real pole from my calculations by the value of the single real pole at <span class="math-container">$s_4=\sigma_4+j0 , \sigma_4=0.98572364533093$</span>, in order to calculate the values for the lattice coefficients (which is how the 2nd eq.
from the beginning appeared), and I got these values:</p> <p><span class="math-container">$\gamma_1=−0.091240471459003, \gamma_2=−0.34126450145251, \gamma_3=−0.72965482018797 $</span></p> <p>and the result of the simulation is this:</p> <p><img src="https://s16.postimg.org/bs9pdbupx/test2.png" alt="surprise!" /></p> <p>with the blue trace being this result and the black trace Barbara Dai's unquantized, which seems even better, even if the lobes in the stop-band aren't quite equiripple:</p> <p><img src="https://s22.postimg.org/vwrlg7ykh/lobes.png" alt="lobes" /></p> <hr /> <p>[edit]</p> <p>The case of the BLWDF implies that, given the stop-band attenuation and frequency, the pass-band attenuation and frequency can be deduced, or vice versa. For this case, I'll impose <span class="math-container">$A_s$</span> and <span class="math-container">$\omega_s$</span> and deduce <span class="math-container">$A_p=-10 \log_{10}(1-10^{-\frac{A_s}{10}})$</span> (eq. 2.51 in the above thesis) and <span class="math-container">$f_p=\frac{f_0}{2}-f_s$</span> (in the analog domain) or <span class="math-container">$\omega_p=\frac{1}{\omega_s}$</span> (in the digital domain, eq. 2.52a,b).</p> <p>The example at p.27 gives <span class="math-container">$A_s=68, \omega_s=\frac{16}{48}kHz=\frac{2}{3}$</span> (normalized to <span class="math-container">$\frac{f_0}{2}=1$</span>). From these: <span class="math-container">$A_p=6.8831e-7$</span> and <span class="math-container">$f_p=\frac{1}{3}$</span>, or <span class="math-container">$\omega_p=\frac{1}{1.732}=0.57735$</span>. Using these to find the poles would imply several approaches, due to the complexity of Cauer filters. I don't know what approach the thesis uses but, whichever the case, it shouldn't yield such differences as the ones shown in picture#3.
For my case, I'll use stop-band optimization, obtained by imposing <span class="math-container">$A_s, A_p, \omega_s$</span> and <span class="math-container">$\omega_p$</span> and determining the order. The poles (zeroes are not needed here) are <span class="math-container">$\sigma_{1,2,3}$</span> below picture#3 and the result is picture#4.</p> <p>If I try to reverse Barbara Dai's process, to determine what poles were used to calculate her version of <span class="math-container">$\gamma_i$</span>, I get the values of <span class="math-container">$\sigma_{1,2,3}^{BD}$</span> below picture#4, which are slightly different from mine.</p> <p>At this point, back when I obtained them, it seemed to me that I <i>could try</i> to divide each pole by <span class="math-container">$\sigma_4$</span>, the real, single pole, and so I did (second formula from above), which gave the results in picture#5; this can't be the normal way of doing it, it was a whim tried at the moment, which gave quite an unexpected pleasant surprise. But now I'm left with the question: how are the poles derived in order to calculate the values for <span class="math-container">$\gamma_i$</span>? Because it's not meant to be any different from any other Cauer filter design, with the differences in symmetry due to the bireciprocal nature.</p> <hr /> <p>Ultra-short-summary:</p> <ul> <li><p>using the coefficients calculated as <span class="math-container">$\gamma_i=\frac{A_i-2}{A_i+2}$</span>, where <span class="math-container">$A_i=2\sigma_i$</span> (s-domain <span class="math-container">$s_i=\sigma_i+j \omega_i$</span>), is not working (see picture#1, #4 - black trace)</p> </li> <li><p>since only odd orders are valid, there is an extra single, real pole.
By sheer ogling, dividing <span class="math-container">$\sigma_i$</span> by <span class="math-container">$\sigma_{\frac{N+1}{2}}$</span> gives the results in pictures #2 and #5 - black trace.</p> </li> </ul> <p>Question: Are the terms <span class="math-container">$A_i$</span> calculated as <span class="math-container">$2 \sigma_i$</span> or somehow else? If else, how?</p> <hr /> <p>I don't know how to explain better at this time. If there are any English errors, my apologies, it's not my native language.</p> Answer: <p>Let's try to sort of answer this from the BLWDF point of view (without much of the WDF theory, since this can to a large extent be skipped as you know which structure you want).</p> <p>Starting from a second-order BLWDF allpass section (based on symmetric two-port adaptors without any negations in the feedback), the transfer function is $$\frac{z^{-2}-a}{1-a z^{-2}} = \frac{1-a z^2}{z^2-a},$$ where $a$ is the adaptor coefficient (taking the port connected to the negative side of the subtractor as the input). This has poles at $z=\pm \sqrt{a}$. Hence, the poles can be either on the real or imaginary axis in the $z$-domain. Typically, you would like to map them to the imaginary axis. This clearly holds for the standard approximations such as Cauer/Elliptic filters.</p> <p>So, one approach is to design your filter directly in the $z$-domain, making sure that the poles end up on the imaginary axis, and then take every other pole pair and position it in every other branch.</p> <p>As you mention, for this to happen, you need an anti-symmetric power-complementary filter, so it should meet Feldtkeller's equation $$ |H(e^{j\omega})|^2 + |H_C(e^{j\omega})|^2 = 1,$$ where $H_C$ is the complementary filter (in the case of parallel allpass filters, $H_C$ is the sum/difference if the original filter is obtained by subtracting/adding the branches).
This gives that</p> <p>$$(1-\delta_c)^2 + \delta_s^2 = 1 \Rightarrow \delta_s^2 = 2\delta_c - \delta_c^2 \approx 2\delta_c \Rightarrow \delta_c \approx \frac{\delta_s^2}{2},$$ where $\delta_c$ and $\delta_s$ are the passband and stopband ripples, respectively, leading to $A_p = -20 \log_{10} (1-\delta_c) $ and $A_s = -20 \log_{10} (\delta_s)$. In addition, the passband and stopband edges should be related as $\omega_c = \pi - \omega_s$. However, the trick here is to know exactly how to select your specification so that you get a specification without any overdesign. If you manage that, you are home.</p> <p>The same type of problem arises when you go from an analog filter. You need to have a filter such that the specification becomes one which can be mapped to a BLWDF. Now, the relation is quite straightforward to compute, but you will need to find a spec where all four parameters (passband/stopband ripple/edge) result in an odd-order filter without any overdesign. </p> <p>While LWDF (and all filters constructed of allpass branches in parallel) are very sensitive to coefficient quantization, the quantized results in your first comparison figure are really better since they come from a mini-max solution and are not quantized that hard. Your values are, as you've noticed, probably not computed the right way. I tend to believe the reason is that your analog filter is overdesigned in one way or the other, meaning that it is not actually suitable for a BLWDF, but rather an LWDF, i.e., the poles do not end up exactly on the imaginary axis after the transform. Reading your text again, I think I can confirm that this is the reason: </p> <blockquote> <p>The order is calculated based on these four parameters, which will result in a stop-band attenuation optimization, rather than a transition-band or a pass-band optimization.
I say this because I don't know what approach Barbara Dai has.</p> </blockquote> <p>Hence, you need to adjust the specification such that there is no "optimization" in the design process.</p> <p>I can extend the answer where required, but please point out where it is needed (I will, e.g., not go into the bilinear transform for time-constraint reasons right now).</p>
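The ripple relation quoted in the answer can be checked numerically against the figures in the thread. A short Python sketch (stdlib only) reproduces the pass-band attenuation the question quotes for A_s = 68 dB:

```python
import math

# Feldtkeller relation for a power-complementary pair (see the answer):
# (1 - delta_c)^2 + delta_s^2 = 1, with
# A_p = -20*log10(1 - delta_c) and A_s = -20*log10(delta_s).
A_s = 68.0                                 # stop-band attenuation, dB
delta_s = 10 ** (-A_s / 20)                # stop-band ripple
delta_c = 1 - math.sqrt(1 - delta_s ** 2)  # exact pass-band ripple
A_p = -20 * math.log10(1 - delta_c)        # implied pass-band attenuation

print(A_p)  # ~6.883e-7 dB, matching the 6.8831e-7 quoted in the question
```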
https://dsp.stackexchange.com/questions/15112/bireciprocal-lattice-wave-digital-filter
Question: <p>Consider a set of dense, but generally irregularly-spaced frequency response measurements of some real low-pass analog filter. Denote the maximum frequency for which a frequency response measurement is available as F_a_max.</p> <p>I would like to create a digital filter model for this analog filter. The sample rate of this filter F_s needs to be upsampled with respect to the rate implied by the measurements, i.e., F_s > 2 * F_a_max. I have used Matlab's invfreqz function to obtain the transfer function of the digital filter from the frequency response measurements. In principle, I can associate an arbitrarily high sample rate with the frequency response measurements. However, I'm concerned about what the response of the digital filter will look like for frequencies greater than F_a_max. While I don't know the exact desired response for such high frequencies (since I don't have frequency measurements for such high frequencies), I don't want the response to go "crazy" in that high frequency band. </p> <p>An alternative may be using invfreqs to obtain coefficients of an analog transfer function and then use, e.g., a bilinear transform at the desired sample rate to obtain a digital filter. However, the same concern exists.</p> <p>Are there standard approaches to this problem? I have seem some recommend Lagrange interpolation of the invfreqz output where F_s is first set to 2*F_a_max, and then Lagrange interpolation would resample the response at the higher rate.</p> Answer: <p>You mention <span class="math-container">$2F_{a_{max}}$</span>, which makes me think that you're trying to invoke the Nyquist-Shannon sampling theorem. But that theorem doesn't deal with any "highest measured values". 
It deals with the process of getting a signal from the continuous-time domain (or some higher-rate sampled domain) to a sampled-time domain.</p> <p>I think that if you have an existing filter you want to measure and duplicate, your best bet would be to reverse-engineer the analog filter transfer function, and then use any of the numerous filter design methods to design a digital filter that replicates whatever part of the filter behavior you want to replicate.</p> <p>Note that unless you sample really fast compared to your highest frequency of interest, you're going to have trouble getting (or will simply fail to get) both the phase and the amplitude response to match.</p>
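Once a candidate digital model has been fitted, the concern about behavior above F_a_max can be checked directly by evaluating the model's frequency response both inside and above the measured band. A minimal Python sketch (the one-pole model and the probe frequencies are made up for illustration, not fitted to any real measurements):

```python
import cmath
import math

def freq_response(b, a, f, fs):
    """Evaluate H(e^{j 2 pi f / fs}) for numerator b and denominator a
    (coefficients of z^0, z^-1, ...)."""
    z = cmath.exp(2j * math.pi * f / fs)
    num = sum(bk * z ** (-k) for k, bk in enumerate(b))
    den = sum(ak * z ** (-k) for k, ak in enumerate(a))
    return num / den

# Hypothetical one-pole low-pass model (not fitted to real measurements).
b, a = [0.1], [1.0, -0.9]
fs = 1000.0
# Probe the model inside a hypothetical measured band and well above it,
# to confirm the response does not do anything wild past F_a_max.
gains = [abs(freq_response(b, a, f, fs)) for f in (0.0, 50.0, 400.0)]
print(gains)  # DC gain ~1.0, then monotonically decreasing
```

The same loop, run over the frequencies where measurements do exist, gives a fitting error to judge an `invfreqz`-style model before trusting its out-of-band behavior.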
https://dsp.stackexchange.com/questions/64207/obtaining-a-high-sample-rate-digital-approximation-to-an-analog-filter-from-lowe
Question: <p>I'm trying to create a digital filter from a first order analog filter with transfer function $$H(s)=\frac{1}{1+\tau s}$$ with time constant $\tau=.1\text{s}$, and sampling rate $f_s=1000\text{Hz}$.</p> <p>Applying the bilinear transform in Matlab however appears to yield a filter with the a different 3dB point than expected. I expect the 3dB point to be at $\frac{1}{\tau}=10\text{Hz}$, but it appears to be around $1.6\text{Hz}$. Any idea what I could be doing incorrectly? <img src="https://i.sstatic.net/mXRXo.png" alt="frequency response of low pass filter"></p> <p>Matlab code:</p> <pre><code>fs = 1000; tau = .1; num = 1; den = [tau, 1]; [numd,dend]=bilinear(num,den,fs); [h, f] = freqz(numd,dend,4096, fs); figure(1); clf(); subplot(211); semilogx(f,20*log10(abs(h))); hold on plot([.1, 1000], [-3 -3],'r'); grid on; ylim([-40,1]); ylabel('gain (db)'); xlim([.1, fs/2]); subplot(212); semilogx(f, angle(h)*180/pi); grid on; ylabel('phase(rad)'); xlim([.1, fs/2]); xlabel('frequency(Hz)'); </code></pre> Answer: <p>The expected 3dB frequency is wrong because of radian conversion. With $s=j\omega$, $$H(j\omega) = \frac{1}{1+\tau j\omega} $$ and $20\log_{10}|H(j\omega)|\approx-3$ when $\omega=\omega_{3dB}=\frac{1}{\tau}$. However $\omega=2\pi f$, so $$f_{3dB}=\frac{\omega_{3dB}}{2\pi}=\frac{1}{2\pi\tau}.$$ With $\tau=0.1\text{s}$, $\ f_{3dB}=1.59\text{Hz}$ as on the original plots.</p>
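The same conclusion can be verified without the toolbox by carrying out the bilinear substitution s = 2*fs*(1 - z^-1)/(1 + z^-1) by hand for this first-order filter and evaluating the result near 1/(2*pi*tau) ~ 1.59 Hz. A short Python sketch (stdlib only):

```python
import cmath
import math

# Bilinear transform of H(s) = 1/(1 + tau*s), done by hand: substituting
# s = 2*fs*(1 - z^-1)/(1 + z^-1) gives
#   H(z) = (1 + z^-1) / ((1 + c) + (1 - c)*z^-1),   c = 2*tau*fs
fs, tau = 1000.0, 0.1
c = 2 * tau * fs
b = [1.0, 1.0]
a = [1.0 + c, 1.0 - c]

def gain(f):
    z = cmath.exp(2j * math.pi * f / fs)
    return abs((b[0] + b[1] / z) / (a[0] + a[1] / z))

# The analog -3 dB point at 1/tau rad/s, i.e. 1/(2*pi*tau) Hz, maps under
# the bilinear transform to 2*atan(1/(2*fs*tau)) rad/sample.
f3 = fs * 2 * math.atan(1 / (2 * fs * tau)) / (2 * math.pi)
print(f3, 20 * math.log10(gain(f3)))  # ~1.5915 Hz, ~-3.01 dB
```

At the exact warped frequency the match to the analog prototype is exact; at this sample rate the warping is negligible, so the -3 dB point sits at 1.59 Hz, just as the answer derives.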
https://dsp.stackexchange.com/questions/11345/digital-implementation-of-first-order-analog-filter-using-bilinear-transformatio
Question: <p>I have an analog filter with its frequency response curve in dB described by the following expression: $$ N_{dB}=20log_{10}\omega t_1 \sqrt{\frac{1+(\omega t_2)^2}{1+(\omega t_1)^2}} $$ This expression is derived from the series connection of two lowpass filters each associated with the following RC circuit:<br> <img src="https://dl.dropboxusercontent.com/u/617319/3.jpg" alt="RC Circuit"><br> where, for each circuit, the <em>time constant</em> $t_i=RC$ and of course $\omega = 2\pi f$, where $f$ is the frequency (this is actually the equalization curve for magnetic tape recording/playback, see <a href="http://www.richardhess.com/tape/history/NAB/NAB_Reel_Tape_Standard_1965_searchable.pdf" rel="nofollow">Annex B, page 14 of this document</a>).</p> <p>I would like to obtain an approximation of this frequency response using a digital filter. I don't know if there is a method to exploit our knowledge of the analog frequency response or if I should design the filter myself from scratch. The end goal is to obtain the impulse response and save it as a a .wav file (I know how to do this last part). I just took a basic DSP course at my Uni but we didn't work with analog filters so I am a little bit lost.</p> Answer: <p>From the given magnitude response (and from what is written in the document), the transfer function of your system is</p> <p><span class="math-container">$$H(s)=\frac{st_1(1+st_2)}{1+st_1}\tag{1}$$</span></p> <p>Note that this system is not stable. I suppose that this response is only to be approximated in a certain frequency range, which means that a stable filter can approximate this transfer function well enough in that frequency range.</p> <p>There are several methods for transforming an analog transfer function to the digital domain. You could use the <a href="http://en.wikipedia.org/wiki/Bilinear_transform" rel="nofollow noreferrer">bilinear transform</a>, which will give you a recursive digital filter. 
This filter will have a pole at <span class="math-container">$z=-1$</span> (i.e. it will not be stable), but you could move this pole away from the unit circle (e.g. to <span class="math-container">$z=-.98$</span>, just try a few values), and this will probably result in a useful system. Another method is frequency sampling. You replace <span class="math-container">$s$</span> in (1) by <span class="math-container">$j\omega$</span> and evaluate <span class="math-container">$H(j\omega)$</span> on an equidistant frequency grid in the range <span class="math-container">$[0,f_s/2]$</span> where <span class="math-container">$f_s$</span> is your sampling frequency. Then you extend your desired frequency response to the range <span class="math-container">$[0, f_s]$</span> by taking the conjugate symmetry property of the DFT into account. Then you simply obtain the impulse response by applying an inverse FFT to the desired frequency response.</p>
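The frequency-sampling method described above can be sketched in a few lines. The following Python illustration (stdlib only; N is kept small so a naive inverse DFT suffices, and t1, t2 are example time constants of the order of the NAB values, not authoritative) builds a conjugate-symmetric sampled response and recovers a real impulse response:

```python
import cmath
import math

# Frequency-sampling sketch of the method in the answer: sample H(j*omega)
# on a uniform grid up to fs/2, extend by conjugate symmetry, then take an
# inverse DFT (naive O(N^2) version, stdlib only). t1 and t2 are example
# time constants of the order of the NAB values; substitute your own.
t1, t2 = 3.18e-3, 50e-6
fs, N = 48000.0, 64

def H(w):
    s = 1j * w
    return s * t1 * (1 + s * t2) / (1 + s * t1)

# Bins 0..N/2 cover [0, fs/2]; mirror with X[N-k] = conj(X[k]) so that the
# inverse DFT comes out real.
X = [H(2 * math.pi * k * fs / N) for k in range(N // 2 + 1)]
X[N // 2] = complex(X[N // 2].real, 0.0)  # force the Nyquist bin real (crude)
X += [X[N - k].conjugate() for k in range(N // 2 + 1, N)]

h = [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]
imag_leak = max(abs(v.imag) for v in h)
print(imag_leak)  # ~0: the recovered impulse response is numerically real
```

In practice N would be much larger and the inverse transform done with an FFT; the real parts of `h` are then the FIR taps to write out to the .wav file.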
https://dsp.stackexchange.com/questions/15134/approximate-the-magnitude-response-of-an-analog-filter-with-a-digital-filter-st
Question: <p>I am trying to design a digital Butterworth filter for the given specifications.</p> <pre><code>rp=3; rs=15; FS=1; wp=0.5*pi; ws=0.75*pi; pwp=2*FS*tan(wp/2); pws=2*FS*tan(ws/2); [n,wn]=buttord(pwp,pws,rp,rs,'s') [b,a]=butter(n,wn,'s'); [bn,an]=bilinear(b,a,FS);%error freqz(bn,an,512,FS); </code></pre> <p>If I design the filter manually, I get different values for <code>bn</code> and <code>an</code> than Matlab gives. Can anybody please help me?</p> Answer: <p>I cannot run your code because I don't have Matlab on this PC, but I can try to give you some advice.</p> <p>The first thing I would check is the way you defined the cut-off frequencies. The Matlab function uses normalized frequencies (look at the examples here <a href="http://it.mathworks.com/help/signal/ref/buttord.html" rel="nofollow">http://it.mathworks.com/help/signal/ref/buttord.html</a>), which means you have to define them in this way: let's assume you want to cut your signal at 5 Hz and your sampling frequency is $Fs$</p> <pre><code>cut_freq = 5; Wn = cut_freq/(Fs/2); </code></pre> <p>Of course, if you have a band-pass filter you have to define two cut-off frequencies, and therefore $Wn$ will be a vector of 2 elements.</p> <p>Hope this helps. As soon as I get my laptop I will try to run your code and see what happens.</p>
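One way to see that the question's prewarping step is at least self-consistent: the bilinear transform maps the prewarped analog edge frequencies back to exactly the original digital edges. A small Python check (stdlib only) using the same numbers as the question:

```python
import math

# The prewarping in the question and the bilinear transform are inverses:
# prewarp a digital band edge to an analog frequency, and the bilinear
# transform maps it back to exactly the same digital edge.
fs = 1.0
wp = 0.5 * math.pi    # digital pass-band edge, rad/sample
ws = 0.75 * math.pi   # digital stop-band edge, rad/sample

pwp = 2 * fs * math.tan(wp / 2)  # prewarped analog edge, rad/s
pws = 2 * fs * math.tan(ws / 2)

wp_back = 2 * math.atan(pwp / (2 * fs))  # what the bilinear transform returns
ws_back = 2 * math.atan(pws / (2 * fs))
print(pwp, pws)                               # 2.0 and ~4.83 rad/s
print(wp_back / math.pi, ws_back / math.pi)   # 0.5 and 0.75 again
```

This suggests the prewarping itself is not where the discrepancy comes from, which is consistent with the answer's advice to double-check how the cut-off frequencies are specified (analog rad/s with the `'s'` flag versus the normalized `Wn` of a digital design).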
https://dsp.stackexchange.com/questions/19279/digital-butterworth-filter-design-error
Question: <p>A reviewer has asked me to re-filter my data to remove baseline drift.</p> <p>Each sweep is 170 ms sampled at 1000 Hz, and the reviewer wants it high-pass filtered at 0.5 Hz. The original bandpass settings on the hardware were 0.15 to 1000 Hz.</p> <p>Although this is easy to code in MATLAB, the epoch is very short compared with the period of the cutoff frequency (2 s at 0.5 Hz). How does the epoch length affect the filter order selection, and is there any point attempting it at all?</p> Answer:
https://dsp.stackexchange.com/questions/17246/digital-filter-and-epoch-length
Question: <p>I found this digital filter in code I am working on. It is a low pass filter. In the code it is called an "alpha filter", but it is not the same as the <a href="https://en.wikipedia.org/wiki/Alpha_beta_filter#Alpha_filter" rel="nofollow noreferrer">alpha filter mentioned here</a>.</p> <p>I post the relevant code below:</p> <pre> float alpha = 0.7; float prev = 0.0; float filter(float sample) { float filtered = (sample * alpha) + (prev * (1.0 - alpha)); prev = sample; return filtered; } </pre> <p>It looks like alpha should be in range $[0, 1]$. By increasing <code>alpha</code>, the filter tracks the measurement with less delay, but lets more noise through.</p> <p>I am tasked with writing library functions, and I would like to give this thing its proper name, and hopefully link to a Wikipedia article in the doc tags.</p> Answer: <p>I'm new, so I can't add this comment to Matt L.'s answer.</p> <p>It is not an exponential filter, the equation is actually:</p> <p>$$ y[n] \ = \ \alpha \, x[n] \ + \ ( 1 - \alpha ) \, x[n-1] $$</p> <p>So it is a very short FIR filter, not an IIR filter. I'm not expert enough to know a specific name.</p> <p>Ced</p> <p>============================================= Followup:</p> <p>I want to thank everyone for their upvotes. Yes, I've contributed to comp.dsp and I am a blogger at dsprelated.com.</p> <p>Like all of you, I suspect that the intent of the function was to be an exponential filter and was coded erroneously.</p> <p>If I were to name the filter as coded, I would call it a "Linear Interpolation Filter", or perhaps a "Subsamplesize Time Shift filter" when $\alpha$ is between zero and one. It would make more sense as such if the $\alpha$ and $(1-\alpha)$ coefficients were reversed.</p>
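Cedron's point is easy to verify numerically: the function as posted stores the raw input in `prev`, so its impulse response is just two taps, while the presumably intended exponential smoother stores the previous *output* and decays forever. A small Python translation of the C code (my own):

```python
alpha = 0.7

def filter_as_posted(samples):
    """Literal translation of the C function: `prev` stores the raw
    input, not the previous output, so this is a 2-tap FIR filter."""
    prev = 0.0
    out = []
    for x in samples:
        out.append(x * alpha + prev * (1.0 - alpha))
        prev = x
    return out

def exponential_filter(samples):
    """What was probably intended: `prev` holds the previous output,
    giving a first-order IIR (exponential) smoother."""
    prev = 0.0
    out = []
    for x in samples:
        prev = x * alpha + prev * (1.0 - alpha)
        out.append(prev)
    return out

impulse = [1.0, 0.0, 0.0, 0.0]
fir_resp = filter_as_posted(impulse)     # dies after two samples
iir_resp = exponential_filter(impulse)   # decays geometrically
```

Feeding an impulse through both makes the difference concrete: the posted filter's response is zero from the third sample on, while the exponential filter's tail never reaches exactly zero.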
https://dsp.stackexchange.com/questions/46278/what-is-the-name-of-this-digital-low-pass-filter
Question: <p>I have an <a href="https://github.com/bcrowell/kcals" rel="nofollow noreferrer">open-source software project</a> whose purpose is to analyze a GPS track, or a similar track made by an application such as google maps, and estimate the physical exertion required to hike or run that route. Traditionally, people have gotten a gut feeling for this sort of thing by specifying the horizontal distance and the vertical gain (i.e., the amount of climbing, ignoring any descents). My software tries to do better than that, but in any case, all such measures are extraordinarily sensitive to small, spurious fluctuations in the elevation data. For instance, I could carry a GPS receiver with me on a totally flat run along a straight road in Kansas, and if the elevation is constantly fluctuating up and down by a few meters, due to GPS errors, it could show up as if I have a huge amount of vertical gain. Similar things happen with map-tracing methods, because the digital elevation models (DEMs) have fairly large errors. GPS elevations can be extremely accurate when the position of the satellites in the sky is favorable, but they can be incredibly far off in unfavorable cases (I've seen errors of thousands of meters).</p> <p>Can anyone recommend a suitable method for filtering, so that I'm not reinventing the wheel, and doing it badly?</p> <p>The crude method I've been doing so far is the following. I do an initial iteration of the data analysis to find the integrated horizontal distance $h$ for each point on the track. That gives me the coordinates $x(h)$, $y(h)$, and $z(h)$ as functions of distance. Then I convolve the elevation $z(h)$ with a rectangular window 500 meters wide horizontally. This gives me what appear to be accurate and reproducible estimates of the total gain. However, I also find that my initial estimate of $h$ is quite crude, because the track has a sort of fractal structure, some of which may be real and some of which is probably measurement errors. 
So I probably want to do some kind of low-pass filtering on $x$ and $y$ as well, maybe cutting out anything with a wavelength less than about 100 m, and then do a second iteration on the integration of $h$.</p> <p>Googling turns up a lot of material on Kalman filters. Am I correct in understanding that a Kalman filter is not really the right tool for this job, since it's meant for an object like a missile or a helicopter, which has a lot of inertia? Also, my data is a track in Euclidean space, not a time series.</p> <p>I'm looking for a method that is fairly robust, and can be easily implemented and played with using open-source software on linux. My code is written in ruby, but I would be OK with shelling out to something like scipy, as long as it has a fast startup time. (E.g., I don't want to use Julia because of the slow startup time.) I would like to implement this using a library that is well tested, has a large user base, and is likely to be around and well maintained for a long time.</p> Answer:
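The question's own crude method — convolving $z(h)$ with a rectangular window — can be sketched in a few lines. This is an illustration, not a recommendation; all names and numbers below are mine, and $z$ is assumed already resampled to uniform spacing in $h$. On the "flat road in Kansas" example it already removes most of the spurious gain:

```python
def moving_average(z, half_width):
    """Rectangular-window smoothing of an elevation profile z that is
    assumed already resampled to uniform horizontal spacing."""
    n = len(z)
    out = []
    for i in range(n):
        lo = max(0, i - half_width)
        hi = min(n, i + half_width + 1)
        out.append(sum(z[lo:hi]) / (hi - lo))
    return out

def total_gain(z):
    """Sum of positive elevation increments, ignoring descents."""
    return sum(max(0.0, b - a) for a, b in zip(z, z[1:]))

# A flat road with +/-2 m of alternating GPS jitter: the raw profile
# reports hundreds of meters of fake climbing, the smoothed one little.
flat_noisy = [2.0 if i % 2 == 0 else -2.0 for i in range(200)]
raw_gain = total_gain(flat_noisy)
smooth_gain = total_gain(moving_average(flat_noisy, 10))
```

The residual gain scales inversely with the window width, which is why the 500 m window gives reproducible totals while leaving genuine long-wavelength climbing intact.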
https://dsp.stackexchange.com/questions/37197/appropriate-digital-filter-for-gps-tracks
Question: <p>I am working on the demodulation of digital signals; I am following <a href="https://pysdr.org/_images/sync-diagram.svg" rel="nofollow noreferrer">this</a> block diagram.</p> <p><img src="https://pysdr.org/_images/sync-diagram.svg" alt="block diagram"><br> <sup>Source: <a href="https://pysdr.org/content/sync.html" rel="nofollow noreferrer">Synchronization — PySDR: A Guide to SDR and DSP using Python</a> </sup></p> <p>The problem is, I don't know what kind of filter tx is using, so what kind of matched filter do I have to use? And is it necessary to use the matched filter for proper demodulation?</p> Answer: <blockquote> <p>The problem is, I don't know what kind of filter tx is using, so what kind of matched filter do I have to use?</p> </blockquote> <p>You need to estimate the transmit filter, then, and also, if anything, what filtering the &quot;Wireless channel&quot; in your block diagram adds to that.</p> <p>The job of estimating the pulse shape is fulfilled by the <em>equalizer</em> in your receiver.</p> <p>So, if you don't know the transmit pulse shape, you can't just have a block that's called &quot;matched filter&quot;; you need a different block &quot;equalizer&quot;.</p> <blockquote> <p>And is it necessary to use the matched filter for proper demodulation?</p> </blockquote> <p>I'm sure you've read about the advantages of matched filtering. Absence of matched filtering thus just means you don't get these advantages. It depends on your channel and modulation, channel code and decoder algorithm and the robustness of your synchronization whether your system would work without one. Can't give you a general answer.</p>
https://dsp.stackexchange.com/questions/96513/matched-filter-in-digital-demod
Question: <p>I'm studying the IIR filter design that is described in the book: <a href="http://www.nt.tuwien.ac.at/fileadmin/users/gerhard/diss_Lang.pdf" rel="nofollow noreferrer">Algorithms for the constrained design of digital filters with arbitrary phase and magnitude responses</a>. </p> <p>You can get the code at page 171 (at least the main function), and here is an example of a filter design:</p> <pre><code>M=4; N=4; tau=5; om=pi*[linspace(0,0.2,20),linspace(0.4,1,60)]; D=[exp(-1i*om(1:20)*tau),zeros(1,60)]; W=ones(1,80); [b,a,e]=mpiir_l2(M,N,om,D,W,0.98); </code></pre> <p>This is the design of a lowpass filter with linear phase in the passband.</p> <p><img src="https://i.sstatic.net/bj26y.jpg" alt="enter image description here"></p> <p>There is something I don't understand in this code: <code>exp(-1i*om(1:20)*tau)</code> is a way to create regularly spaced points with constant phase change and magnitude = 1. But I don't understand the parameter <code>tau</code>. I tried to change that parameter and I got totally different results: like a notch instead of a lowpass.</p> <ul> <li>How do you choose that number? </li> <li>I have to design my filters according to the user's desired response, but how do I guess that parameter? </li> <li>And also, how do you choose the phase angle shift?</li> </ul> <p>Jeff</p> Answer: <p>The variable <code>tau</code> was chosen so that the phase of <code>D</code> at $om = 0.2\pi$ is $-\pi$.</p> <p>It is easier to understand what is going on if you plot <code>D</code>, which is the desired magnitude/phase response of the filter. 
Add the following code to help you visualize what is going on:</p> <pre><code>figure(1); subplot(2,1,1); plot(om/pi,10*log10(abs(D)+realmin)); title('Magnitude Response'); xlabel('Frequency (normalized)'); ylabel('Magnitude (dB)'); axis([0, 1, -80, 10]); subplot(2,1,2); plot(om/pi,angle(D)*180/pi); title('Phase Response'); xlabel('Frequency (normalized)'); ylabel('Phase (degrees)'); axis([0, 1, -180, 180]); </code></pre> <p>When you look at the resulting plot, you can see that the current value of tau gives a phase response which dips down to $-180^\circ$ at the cutoff frequency. Notice that the phase will wrap back around from $-180^\circ$ to $180^\circ$. This will almost always lead to undesired results in a system.</p>
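The claim about `tau` is easy to verify numerically: with `tau = 5`, the desired response `D = exp(-1i*om*tau)` has linear phase of slope $-\tau$ (a constant group delay of 5 samples) and lands exactly on $-\pi$ at the passband edge $om = 0.2\pi$. A quick plain-Python check:

```python
import cmath, math

tau = 5.0
om_edge = 0.2 * math.pi          # last passband frequency in the om grid
D_edge = cmath.exp(-1j * om_edge * tau)

# phase is -om*tau: linear in om, i.e. a constant group delay of 5 samples
phase_at_edge = cmath.phase(D_edge)
```

A different `tau` moves the point where the phase hits $\pm\pi$; if the wrap falls inside the passband, the optimizer is asked to fit a discontinuous target, which is consistent with the notch-like results the questioner observed.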
https://dsp.stackexchange.com/questions/10455/least-squares-digital-iir-filter-design-with-arbitrary-responses
Question: <p>In the process of applying a lowpass Bessel filter to my signal, I realized that besself function does not support the design of digital Bessel filters and the bilinear function can be used to convert an analog filter into a digital form, except for Bessel filters. The digital equivalent for Bessel filters is the Thiran filter. The only thing I know for my filter is that it should have less than 5GHz bandwidth ( let's say 3GHz bandwidth). I would appreciate it if someone could help me write the code in Matlab. The code which I have so far is:</p> <pre><code>%lowpass filter sig = MY SIGNAL; sig_length = 5000001; % my signal length fs = 10000e9 % sampling rate fc = 3e9; % cut-off frequency order = 4; wo = 2*pi*fc; [z,p,k] = besself(order, wo,'low'); % zero, pole and gain form % Convert to digital fileter [zd,pd,kd] = bilinear(z,p,k,fs); % z-domain zero/pole/gain [sos,g] = zp2sos(zd,pd,kd); % convert to second order section filteredSignal = filtfilt(sos, g, sig); </code></pre> Answer: <p>I don't have Matlab but the coefficients for the Thiran filter are given by the Gaussian hypergeometric function. If you have <a href="https://wxmaxima-developers.github.io/wxmaxima/" rel="nofollow noreferrer">wxMaxima</a> there already is a built-in function. If you run these lines (the first one is only needed once), you'll get the denominator:</p> <pre><code>expand_hypergeometric : true$ N : 4$ D : 1$ hypergeometric([-N, 2*D], [2*D + N + 1], z); </code></pre> <p><code>N</code> is the order and <code>D</code> the delay, in seconds. 
The above will show:</p> <pre><code>z^4/42-(4*z^3)/21+(9*z^2)/14-(8*z)/7+1 </code></pre> <p>If Matlab doesn't have that function, you could implement it as:</p> <p><span class="math-container">$$H(z)=\dfrac{(2N)!}{N!}\dfrac{1}{\prod_{i=N+1}^{2N}{(2D+i)}}\dfrac{1}{\sum_{k=0}^N{(-1)^k\binom{N}{k}\left[\prod_{i=0}^N{\dfrac{2D+i}{2D+k+i}}\right]z^{-k}}} \tag{1}$$</span></p> <p>The numerator will be the sum of the coefficients, for unity gain. It will be more numerically stable to split the denominator into 2nd order sections.</p> <p>Also, as it is, the filter has very poor frequency attenuation, but if you add a numerator in <span class="math-container">$z$</span>, you may get better results. See <a href="https://dsp.stackexchange.com/q/82395/17189">this</a> for example, where the two zeroes can give a much better attenuation, at the cost of some extra delay. Or, if only a zero at Nyquist is fine, just add <span class="math-container">$1+z^{-1}$</span>.</p>
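If neither Maxima nor a hypergeometric routine is at hand, the denominator can also be computed directly from the closed-form product inside the sum of equation (1) — a plain-Python sketch (function name mine). Exact rationals are used so the wxMaxima output above can be reproduced verbatim; integer `D` is shown for exactness, ordinary floats work for fractional delays:

```python
from fractions import Fraction
from math import comb

def thiran_denominator(N, D):
    """Denominator coefficients a[0..N] (a[0] = 1) of the Thiran
    all-pole fractional-delay approximation:
        a_k = (-1)^k * C(N, k) * prod_{i=0}^{N} (2D+i)/(2D+k+i)."""
    a = []
    for k in range(N + 1):
        prod = Fraction(1)
        for i in range(N + 1):
            prod *= Fraction(2 * D + i, 2 * D + k + i)
        a.append((-1) ** k * comb(N, k) * prod)
    return a

a = thiran_denominator(4, 1)   # same case as the wxMaxima run above
```

The result matches the polynomial `z^4/42-(4*z^3)/21+(9*z^2)/14-(8*z)/7+1` coefficient for coefficient.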
https://dsp.stackexchange.com/questions/82411/matlab-how-to-design-digital-equivalent-for-a-lowpass-bessel-filter-thiran-fil
Question: <p>I'm pretty well versed in statistics, but not really digital signal filtering. I have a data scenario where I expected to be able to pretty easily filter out some noise (human pulse) that's at a known frequency band, but I'm having a lot of trouble using the standard tools in the scipy.signal library and think I must be misunderstanding how to design digital filters. I have a <a href="https://mybinder.org/v2/gh/mike-lawrence/filter_play/master" rel="nofollow noreferrer">notebook here</a> that walks through my explorations thus far, but the gist is that the standard scipy filters seem to cause large distortions at the start and end of my signal, with the precise behaviour dependent on the phase of the noise signal I'm trying to subtract. Just in case the above binder link goes down, I'll include some of the key points below as well:</p> <p>First generating some synthetic data that's similar to my real data:</p> <pre><code>#generate time vector samples_per_sec = 10.0 total_time = 40.0 time = np.linspace(0, total_time, int(total_time*samples_per_sec)) #generate the pulse signal pulse_hz = 1.0 pulse_phase = np.radians(0) pulse = np.sin(time*(2*np.pi)*pulse_hz - pulse_phase) #generate the BOLD signal (just something that goes up then down) dist = stats.beta(2, 2) bold = dist.pdf((time-10)/20) / 10.0 # division by 10 to make bold a small signal #combine pulse_plus_bold = pulse+bold plt.plot(time, pulse_plus_bold); </code></pre> <p><a href="https://i.sstatic.net/4oh5r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4oh5r.png" alt="combined signals" /></a></p> <p>Try a 1st order butterworth:</p> <pre><code>#1st order butterworth filter in ba mode ba1 = signal.butter( output = 'ba' , N = 1 #needs to be low if using output='ba', else use output='sos' and sosfiltfilt , Wn = [0.5,1.5] , btype = 'bandstop' , fs = samples_per_sec ) filtered_ba1_nopad = signal.filtfilt( b = ba1[0] , a = ba1[1] , x = pulse_plus_bold , padtype = None ) plt.plot(time, 
filtered_ba1_nopad, 'b'); plt.plot(time, bold, 'r--'); plt.legend(['Filtered', 'Expected'], loc=(1.04,.5)); </code></pre> <p><a href="https://i.sstatic.net/6frvR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6frvR.png" alt="enter image description here" /></a></p> <p>First-order butterworth with even padding:</p> <pre><code>filtered_ba1_pad_even = signal.filtfilt( b = ba1[0] , a = ba1[1] , x = pulse_plus_bold , method = 'pad' , padtype = 'even' ) plt.plot(time, filtered_ba1_pad_even, 'b'); plt.plot(time, bold, 'r--'); plt.legend(['Filtered', 'Expected'], loc=(1.04,.5)); </code></pre> <p><a href="https://i.sstatic.net/0LoFJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0LoFJ.png" alt="enter image description here" /></a></p> <p>First-order butterworth with odd padding:</p> <pre><code>filtered_ba1_pad_odd = signal.filtfilt( b = ba1[0] , a = ba1[1] , x = pulse_plus_bold , method = 'pad' , padtype = 'odd' ) plt.plot(time, filtered_ba1_pad_odd, 'b'); plt.plot(time, bold, 'r--'); plt.legend(['Filtered', 'Expected'], loc=(1.04,.5)); </code></pre> <p><a href="https://i.sstatic.net/5GCg5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5GCg5.png" alt="enter image description here" /></a></p> <p>This latter looks really good! But after playing around I discovered that whether odd or even (or either) padding works better seems to be contingent on the phase of the signal being filtered-out. 
As an example, while the above obtained excellent filtering with odd-padding, here's the same scenario but with a phase-shift added to the pulse signal that yields edge artifacts in both odd and even:</p> <pre><code>phase = np.radians(45) pulse_shifted = np.sin(time*(2*np.pi)*pulse_hz - phase) pulse_shifted_plus_bold = pulse_shifted+bold filtered_shifted_ba1_pad_odd = signal.filtfilt( b = ba1[0] , a = ba1[1] , x = pulse_shifted_plus_bold , method = 'pad' , padtype = 'odd' ) filtered_shifted_ba1_pad_even = signal.filtfilt( b = ba1[0] , a = ba1[1] , x = pulse_shifted_plus_bold , method = 'pad' , padtype = 'even' ) fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(5, 3)) axes[0].plot(time, filtered_shifted_ba1_pad_odd, 'b') axes[0].plot(time, bold, 'r--') axes[1].plot(time, filtered_shifted_ba1_pad_even, 'b') axes[1].plot(time, bold, 'r--') fig.tight_layout() plt.title('Odd (left) and Even (right)') plt.legend(['Filtered', 'Expected'], loc=(1.04,.5)); </code></pre> <p><a href="https://i.sstatic.net/kE73Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kE73Q.png" alt="enter image description here" /></a></p> <p>I've also tried the 'gust' padding method as well as higher order filters (using sos of course), and observe the same phase-dependent edge artifacts in everything I've tried. Any tips?</p> Answer: <p>Your basic problem is that filtfilt (and most other linear filtering routines) take filters that are designed for infinitely long time expanses, and apply them to a chunk of data as if the data were extended infinitely in both directions with zeros.</p> <p>So you have a legitimate bandpass filter, and it's &quot;seeing&quot; a legitimate jump in the signal at the starting point of <em>your</em> signal.</p> <p>There's three basic approaches you can take; the first two are ad-hoc and easy, the third one is difficult if you're starting from first principles. 
It's certainly been solved out there someplace, but a brief search here on &quot;filter finite-length data&quot; didn't find me joy.</p> <p>Approach 1: window the input data</p> <p>Take your input data, and multiply it by something that'll make it taper off at the ends. I.e. a ramp from 0 to 1 over 10 samples at each end, or <span class="math-container">$\frac{1}{2}\left (1 - \cos \frac{\pi n}{N} \right)$</span> for N samples at each end (suitably reversed on the left end). You'll have some artifacts (a rising sine wave isn't the same as a steady one, after all), but they'll be attenuated. Here's python code implementing a cosine edge attenuation with the ability to customize what central % of the signal is kept as 1:</p> <pre><code>def attenuate_edges(signal,time,edge_attenuation_percent): start = int(np.floor(len(time)*edge_attenuation_percent)) end = int(len(time)-start) ramp = (1-np.cos(np.pi*(np.arange(start)/start)))/2 edge_attenuator = np.ones(len(time)) edge_attenuator[0:start] = ramp edge_attenuator[end:len(time)] = np.flip(ramp) return(signal*edge_attenuator) </code></pre> <p>Approach 2: Trim the output data</p> <p>Do what you're doing now, and lop off the nastiness at the ends. This is probably the easiest, and if you can just collect a bit more data, doesn't lose you anything.</p> <p>Approach 3: Do a proper estimate of the interfering signal, and subtract it out</p> <p>This will be fun-fun if you love math and have the time. Basically you'll use the fact that the value of your interfering signal at time <span class="math-container">$n$</span> correlates in a specific way with the values of your interfering signal at time <span class="math-container">$k$</span> for all values of <span class="math-container">$n$</span> and <span class="math-container">$k$</span> in your data set. You'll probably end up with something that looks a lot like a Wiener or a Kalman filter, that takes the end-effects into account. 
Your estimate will be worse at the ends, but this will show up as a bit of noise on the ends -- not as those honkin' big pulses.</p> <p>If I couldn't figure out the search terms for this, it would take me a day to do and another day to verify, and I'm supposedly an expert. OTOH, Gauss or Laplace probably invented it in the 19th century, and may even have thought it important enough to write down somewhere. So I'm sure the method exists.</p>
https://dsp.stackexchange.com/questions/69643/advice-on-designing-a-digital-filter-that-doesnt-have-phase-sensitive-edge-arti
Question: <p>I'm interested in composing filters for realtime audio processing on a microcontroller (MCU). Ideal frequency response is unity as a default, with deviations up and down at specific frequency-domain points according to scalers, and some type of smooth transition between these points. This is conceptually similar to an equalizer.</p> <p>For example, you might want to scale the 800Hz response by 1.2, and the 1600Hz response by 0.8. It would then taper to unity everywhere else.</p> <p>Normally you could sort this out by designing a filter with proper parameters on a PC (e.g. Scipy), but I don't know if this is viable on an MCU. MCU libraries are available that process signals if you already have coefficients, but don't create coefficients. Could you compose coefficients from various pre-built coefficients?</p> <p>I've had some success with averaging weighted coefficients, as well as convolving them. Adding seems to work better, but the response still isn't ideal. How would you approach this problem?</p> <p>Perhaps this could be done by composing weighted overlapping bandpass FIR (or IIR?) kernels across the entire spectrum of interest. Convolve the kernels together, or perhaps add them. (For IIR, would you convolve bandpass filters and add bandstop?)</p> Answer: <p>One canonical solution is described by an old Motorola Application Note on how to design a 10-band equalizer for the 56000 DSP chip. Occasionally recomputing the IIR coefficients usually requires far fewer ops than running the filters.</p>
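A common concrete realization of "scale 800 Hz by 1.2, 1600 Hz by 0.8, unity elsewhere" is a cascade of peaking biquads whose coefficients come from Robert Bristow-Johnson's widely used Audio EQ Cookbook formulas — cheap enough to recompute on an MCU whenever a control changes. A sketch (the sample rate and Q values are my assumptions):

```python
import math, cmath

def peaking_biquad(fs, f0, gain_linear, Q=1.0):
    """RBJ Audio-EQ-Cookbook peaking filter; returns normalized (b, a).
    gain_linear is the desired amplitude gain at f0 (e.g. 1.2 or 0.8)."""
    A = math.sqrt(gain_linear)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    a0 = 1 + alpha / A
    b = [(1 + alpha * A) / a0, -2 * math.cos(w0) / a0, (1 - alpha * A) / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha / A) / a0]
    return b, a

def gain_at(b, a, w):
    z1 = cmath.exp(-1j * w)
    return abs((b[0] + b[1] * z1 + b[2] * z1 ** 2) /
               (a[0] + a[1] * z1 + a[2] * z1 ** 2))

fs = 48000.0                                   # assumed sample rate
b1, a1 = peaking_biquad(fs, 800.0, 1.2)        # boost 800 Hz by 1.2
b2, a2 = peaking_biquad(fs, 1600.0, 0.8)       # cut 1600 Hz to 0.8
gain_800 = gain_at(b1, a1, 2 * math.pi * 800.0 / fs)
gain_1600 = gain_at(b2, a2, 2 * math.pi * 1600.0 / fs)
```

Cascading the two sections (feed the 800 Hz biquad's output into the 1600 Hz one) composes the responses multiplicatively, which is how multi-band equalizers are normally built — rather than by convolving or adding coefficient sets directly.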
https://dsp.stackexchange.com/questions/79091/composing-digital-filters
Question: <p>I'm trying to create a digital filter in code (C), but any language is fine. Now I've got an analogue filter that I have represented by an equation in the Laplace domain and I want to try and implement it digitally. </p> <p>So my filter has this form in the Laplace domain: $$\frac{as+b}{cs^2+ds}$$</p> <p>I then use MATLAB's <code>c2d</code> command which uses the zero order hold transformation (I have a really poor grasp on this, so this might be wrong) and it gives me this formula:</p> <p>$$\frac{\left(5\cdot 10^5\right)z-67}{z^2-z}$$</p> <p>I tried following an <a href="http://liquidsdr.org/blog/pll-howto/" rel="nofollow">example</a> that I found that used Tustin's method, though when I use the <code>c2d</code> function in MATLAB with Tustin it gives me an error.</p> <p>My attempt has been</p> <p>$$\frac{hz-i}{jz^2-kz}$$</p> <p>$b_0=-i, b_1=h, b_2=0, a_0=0, a_1=-k, a_2=j$</p> <p>Then from this I've tried (which is wrong) \begin{align} \text{output}&amp;=z_0 b_0+z_1b_1+z_2b_2\\ z_2&amp;=z_1\\ z_1&amp;=z_0\\ z_0&amp;=\text{input}-a_0z_0-a_1z_1-a_2z_2 \end{align}</p> Answer: <p>The example I looked at used a Tustin (bilinear) conversion, not a zero-order hold (the default for Matlab's "c2d" command). So this is more an answer to what I wanted to do rather than the question that I asked above.</p> <p>I solved the problem of converting the s-domain function into code by taking the s-domain function $$\frac{as+b}{cs^2+ds}$$</p> <p>and putting it into Matlab (command "g=tf([a b],[c d 0])"). Then I performed the bilinear conversion with the Matlab command "c2d(g,Ts,'tustin')", where g was my transfer function and Ts my sampling period. 
This produced the output</p> <p>$$\frac{ez^2+fz+g}{iz^2+jz+k}$$</p> <p>The a and b coefficients can then be taken from this equation such that (if $i \neq 1$, the equation needs to be multiplied through by the inverse of $i$): $b0=e$ $b1=f$ $b2=g$ $a0=i$ $a1=j$ $a2=k$</p> <p>This can then be converted to code by setting the initial states; for simplicity let $$z0=z1=z2=0$$</p> <p>then set up a loop that repeats the following algorithm (the new state $z0$ must be computed first, so the output uses the current input; the shifts come last)</p> <p>$$z0=input-a1*z1-a2*z2$$ $$output=z0*b0+z1*b1+z2*b2$$ $$z2=z1$$ $$z1=z0$$</p> <p>For anyone else that got lost like me, this is known as an IIR filter and googling IIR filter design helped sooo much. </p>
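The per-sample loop above, written out in Python with a quick check against a filter whose impulse response is known in closed form — $H(z)=1/(1-0.5z^{-1})$ gives $1, 0.5, 0.25, \dots$:

```python
def biquad_df2(x, b, a):
    """Direct-form II biquad, b = [b0, b1, b2], a = [1, a1, a2]
    (already normalized).  The new state z0 is computed *before* the
    output so that y[n] depends on the current input."""
    b0, b1, b2 = b
    _, a1, a2 = a
    z1 = z2 = 0.0
    y = []
    for xn in x:
        z0 = xn - a1 * z1 - a2 * z2
        y.append(b0 * z0 + b1 * z1 + b2 * z2)
        z2, z1 = z1, z0
    return y

# H(z) = 1 / (1 - 0.5 z^-1) has impulse response 1, 0.5, 0.25, ...
h = biquad_df2([1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, -0.5, 0.0])
```

With `a1 = a2 = 0` the same function degenerates to a plain 3-tap FIR, which is a convenient smoke test before porting the loop to C.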
https://dsp.stackexchange.com/questions/18329/creating-a-digital-filter-from-laplace-to-mathcal-z-transform-zero-order-ho
Question: <p>I'm currently using MATLAB's fdatool for filter design. Using that tool, I can easily design different kinds of filters. For example, let's take a band-pass FIR filter with 10-40 Hz passband, and 5-10 Hz and 40-45 Hz transition bands. Usually, I design the filter with the selection "least-squares", which, if I understand correctly, uses the aforementioned method to find the best impulse response according to filter specifications. To actually filter the signal, I use the command <em>filtfilt</em>, which does zero-phase FIR filtering.</p> <p>Now, an alternative way to implement the filter would be to take the FFT of my signal, set frequencies outside the range 10-40 Hz as zeros, and then take the IDFT.</p> <p>Is there <em>any</em> practical/theoretical difference between these two approaches? Will the frequency responses (magnitude and phase) be the same?</p> Answer: <p>Filtering can be done in the frequency domain which is actually a very efficient technique (and it can very well be that Matlab does this internally). However, for very long signals it's not as straightforward as "taking the FFT and applying a frequency response". You can read up on overlap-add filtering which is one way to filter in the frequency domain. However, this is mainly done because it can be faster.</p> <p>However, this question is independent of filter design, which is a completely different thing. Every FIR filter has a corresponding frequency response and it does not matter whether you convolve in time domain or apply the frequency response in frequency domain. So your approach becomes a question of filter design and the consequences it has.</p> <p>Just setting unwanted frequencies to zero might completely eliminate those frequencies but it usually comes at the price of significant ringing in the time domain which is usually unwanted (<a href="http://en.wikipedia.org/wiki/Ringing_artifacts" rel="nofollow">Wikipedia on ringing</a>). 
So in fact, what you are proposing is just one way to design a filter and frankly not a good one in most cases.</p> <p>By allowing a transition band in which the frequency response can gradually go from passband to stopband, degrees of freedom are gained that can be used to improve other properties of the resulting filter (for example eliminate the ringing or obtain a shorter impulse response). That's why Matlab implements so many different filter types, they all have different properties and selecting the most suited one is actually part of designing your signal processing system.</p> <p>This topic is actually quite complicated and I suggest to read up on filter design. Whole books can be filled with this.</p>
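The overlap-add scheme mentioned in the answer is easy to sketch: cut the signal into fixed-size blocks, convolve each block with the impulse response, and add the overlapping tails back together. Plain convolution is used here so the sketch stays self-contained; in a real implementation each per-block convolution is done with FFTs for speed. The output is the same filter as one long convolution:

```python
def conv(x, h):
    """Plain direct convolution, for reference."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def overlap_add(x, h, block_len):
    """Filter x with h block by block, adding the overlapping tails.
    In practice each per-block convolution is replaced by FFT products."""
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        for k, v in enumerate(conv(x[start:start + block_len], h)):
            y[start + k] += v
    return y

x = [float(i % 7) for i in range(50)]
h = [0.25, 0.5, 0.25]
direct = conv(x, h)
ola = overlap_add(x, h, 8)
```

Because the two computations agree exactly, the choice between time-domain and frequency-domain implementation is purely about speed, as the answer says — the filter design question is separate.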
https://dsp.stackexchange.com/questions/10056/about-designing-digital-filters
Question: <p>@Jazzmaniac has a good answer to the question of how to design an alias-free digital nonlinear time-invariant filter here: <a href="https://dsp.stackexchange.com/a/28787/18276">https://dsp.stackexchange.com/a/28787/18276</a></p> <p>Basically, according to that answer, a digital nonlinear time-invariant filter is alias free if and only if it commutes with subsample translations. Meaning that it doesn't matter whether you filter and then translate, or translate and then filter. Sinc interpolation is required for perfect subsample translations, but of course you can always use a finite interpolator that is good enough.</p> <p>This question is to elaborate:</p> <ol> <li><p>How can we see the link between subsample translation invariance and aliasing?</p></li> <li><p>Is there any easy way to see what these filters look like?</p></li> <li><p>Do the filters have some standard form they can be put in?</p></li> <li><p>Do we know what the alias-free version of the monomials look like? (i.e. the alias-free version of $y[t]=x[t]^n$ for some positive natural number $n$)</p></li> <li><p>Are there any good references or published works on the topic of alias-free nonlinear filter design?</p></li> </ol> Answer: <p>First, allow me to address the subsample shift property in relation to non-linear signal mappings. It is fairly straightforward to see time shifting is not as simple a property as for linear systems. Consider a discrete time signal given by the sequence $$\dots ,1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, \dots$$ This sequence is invariant under memoryless nonlinearities of the form $x\mapsto x^n$ for natural $n$. 
If the above sequence is shifted by a fraction of the sampling interval, the sample values will not be $0$ and $1$ anymore, and the resulting sequence will lose its invariance under the nonlinear map given.</p> <p>Enforcing this invariance and allowing only nonlinear maps that preserve it is equivalent to removing aliasing, as I will be showing here when I find a little time to expand upon my answer.</p> <p>Edit: Some more details.</p> <p>For simplicity, we will look at a discrete time signal centred around $t=0$ with a sampling interval of $T=1$. The discrete signal can be expanded using a power series $$x(t) = \sum_{n=0}^{2N} D_{N}^{(n)} \frac{t^n}{n!}$$ which describes $x[t]$ in the interval $t\in [-N,N]$. The coefficients $D_{N}^{(n)}$ are chosen so that the polynomial is the minimal interpolating polynomial on this interval. They coincide with the $n$-th discrete derivatives of $x[t]$ at $t=0$, also on this interval. The interpolating nature of this expansion makes it a natural linear homomorphism to the space of continuous time signals. It is worth noting, that in the limiting case $N\to\infty$, the interpolation approaches the bandlimited interpolation of the $\mathrm{sinc}$ kernel.</p> <p>Applying a differential time shift $dt$ to a smooth continuous time signal $x(t)$ can be written as $$x(t-dt)=x(t) - dt \frac{\partial}{\partial t}x(t)=(1-dt \frac{\partial}{\partial t})x(t)$$ For finite shifts $\delta t$, we can concatenate many smaller shifts and take the limit $$x(t-\delta t)=\lim_{n\to\infty}\left(1-\frac{\delta t}{n}\frac{\partial}{\partial t}\right)^n \,x(t)=\exp\left(-\delta t \frac{\partial}{\partial t}\right)\,x(t)$$ Therefore the linear operator $S(\delta t)=\exp\left(-\delta t\frac{\partial}{\partial t}\right)$ shifts smooth continuous time signals. 
</p> <p>The interpolation polynomial above is also an expansion of the $\exp$ function into a power series, with the discrete derivative taking the place of the partial derivative of the shift operator. For finite orders of the interpolation polynomial, the continuous time shift operator therefore approximates shifts of the interpolating function. This approximation becomes exact in the limit of infinite order.</p> <p>With this understanding, we can calculate an (approximately) alias free memoryless nonlinearity $f$ acting on $x[t]$. We only need to evaluate $f(x[0])$, all other times follow by integer sample shifts. </p> <p>With the interpolated continuous-time signal $x(t)$ approximating a band-limited reconstruction of $x[t]$, we can apply the nonlinearity in continuous time and then band-limit the result using an approximation of a band-limiting kernel $b(t)$ to avoid aliasing. A sufficient condition for a feasible symbolic evaluation of the band-limited result is that $f$ and $b$ be polynomials. Then $f(x(t))$ is a polynomial and we can directly calculate</p> <p>$$y(0) = \int_{-N}^{N} f\left(x(t)\right) b(t) dt$$</p> <p>which, in the limit of large orders $N$, achieves both full translation invariance and perfect alias rejection. This is not a proof that both are equivalent, but a good starting point to understand how these two properties are linked.</p> <p>Suitable choices for $b$ include polynomial expansions of the $\mathrm{sinc}$ function. For example </p> <p>$$b_N(t)=\frac{(N^2-t^2) \prod_{n=1}^{2N}(t^2-n^2) }{N^2 \prod_{n=1}^{2N}n^2}$$</p> <p>for an approximately bandlimited kernel on the interval $t \in [-N,N]$.</p> <p>This much must suffice as theoretical motivation for now. </p> <p><strong>Example</strong></p> <p>The simplest, non-trivial example is that with the minimal neighbourhood involvement and a crude approximation to a band-limited kernel. Do not expect good anti-aliasing properties from it. 
It's only here to demonstrate the general procedure of creating an anti-aliased memoryless nonlinearity.</p> <p>We use the lowest possible order, $N=1$, and arrive at the expansion</p> <p>$$x(t) = D_1^{(0)} t^0 + D_1^{(1)} t^1 + D_1^{(2)} \frac{t^2}{2}$$</p> <p>where </p> <p>$$ D_1^{(0)} = x[0]\\ D_1^{(1)}=\frac{1}{2}(x[1] - x[-1])\\ D_1^{(2)}=x[1]-2x[0]+x[-1]$$</p> <p>and as a single expression</p> <p>$$x(t)= x[0] + \frac{1}{2}(x[1]-x[-1])\, t + \frac{1}{2}(x[1]-2x[0]+x[-1])\,t^2$$</p> <p>The nonlinearity is assumed to be a monomial $x\mapsto x^k$ and we take the $b_1$ from above and get</p> <p>$$y_k[0] = \int_{-1}^{+1} \left(x[0] + \frac{1}{2}(x[1]-x[-1])\, t + \frac{1}{2}(x[1]-2x[0]+x[-1])\,t^2\right)^k \, \cdot \frac{1}{4}(4-t^2)(t^2-1)^2 dt$$</p> <p>We can evaluate this expression for $k=2$ and also remove the $t=0$ simplification made earlier to arrive at a nonlinear filter</p> <p>$$y_2[n] = \frac{4}{3465}(688 x[n]^2 + 40 x[n-1]^2 + 40 x[n+1]^2 - 41 x[n-1] x[n+1] + 82 x[n](x[n+1]+x[n-1]))$$</p> <p><strong>Generalising for non-smooth nonlinearities</strong></p> <p>The argument so far has required that the non-linearity can be well approximated by a polynomial. If that is not the case, then the integral will generally be harder to evaluate and the existence of a closed form solution is not even guaranteed. This is where the equivalence of commutativity of the filter with sub-sample shifts comes in. </p> <p>The most general form of a nonlinear filter on a neighborhood $[-N,N]$ around $t=0$ is that of a nonlinear map</p> <p>$$ y[0] = F( x[-N],x[-N+1],\dots,x[N-1],x[N] )$$</p> <p>For a memoryless non-linearity, we want to map constant input signals to constant output signals, according to the non-linear transfer function $f$. 
If the constant input is $X$, then we have the condition</p> <p>$$F(X,X,\dots,X) = f(X)$$</p> <p>and from the shift-invariance we have the condition</p> <p>$$ F( S(\delta t) x[-N], S(\delta t) x[-N+1],\dots, S(\delta t)x[N-1],S(\delta t) x[N]) = S(\delta t)F( x[-N],x[-N+1],\dots,x[N-1],x[N] )$$ for all $\delta t \in \mathbb{R}$</p> <p>In general, there is no symbolic solution for this problem. It is, however, approachable with numerical optimization methods. In many cases, the free parameters can be reduced further by restricting the form of $F$.</p> <p>I believe I have answered all your questions with the exception of the request for literature pointers. I am not aware of any. I don't know if this theory has ever been presented, but I believe the idea is not too difficult to come up with, so it has probably been done before. If you find something, please do let me know.</p>
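For reference, the closed-form $y_2[n]$ filter derived in the example above can be implemented directly; a minimal numpy sketch (edge padding at the boundaries is my own assumption, not part of the derivation). Note that a constant input $X$ maps to $(3564/3465)\,X^2 = (36/35)\,X^2$, not exactly $X^2$, because the crude kernel $b_1$ is only approximately normalized:

```python
import numpy as np

def y2(x):
    """Closed-form k=2 anti-aliased squarer from the derivation above.

    x is a 1-D array; boundaries are handled by edge padding (my own
    choice, not specified in the derivation).
    """
    xp = np.pad(x, 1, mode="edge")
    xm1, x0, x1 = xp[:-2], xp[1:-1], xp[2:]
    return (4.0 / 3465.0) * (688 * x0**2 + 40 * xm1**2 + 40 * x1**2
                             - 41 * xm1 * x1 + 82 * x0 * (x1 + xm1))

# Constant input: gain is (688+40+40-41+164)*4/3465 = 3564/3465 = 36/35
X = 0.5
out = y2(np.full(16, X))
```

The formula is symmetric in $x[n-1]$ and $x[n+1]$, so the filter commutes with time reversal, as a memoryless nonlinearity should.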
https://dsp.stackexchange.com/questions/51533/alias-free-digital-nonlinear-filter-design
Question: <p>I am working on something where I should use an upsampling filter. I have decided to use a Nyquist filter (Lth-band filter). I know that there are two constraints. The first is that the frequency vector values must mirror each other in pairs around $\pi/2$. The second is that the amplitude vector values must mirror each other in pairs around a magnitude of 0.50. What I am looking for are references on how to actually design these types of filters. I can't find any references that show how to implement them.</p> <p>So does anyone have a decent reference on how to design these filters? Does anyone have a reference on implementing these in Verilog?</p> Answer: <p>This Mathworks documentation gives a good overview about the different parameters: <a href="http://uk.mathworks.com/help/dsp/examples/fir-nyquist-l-th-band-filter-design.html?requestedDomain=uk.mathworks.com" rel="nofollow">http://uk.mathworks.com/help/dsp/examples/fir-nyquist-l-th-band-filter-design.html?requestedDomain=uk.mathworks.com</a></p> <p>This tutorial goes into more detail: <a href="http://www.analog.com/media/en/training-seminars/tutorials/MT-002.pdf" rel="nofollow">http://www.analog.com/media/en/training-seminars/tutorials/MT-002.pdf</a></p> <p>And this one may be useful to clarify any doubt in the concepts presented before: <a href="http://www.lumerink.com/courses/ECE697A/docs/Matlab%20Filter%20Design%20and%20Implementation.pdf" rel="nofollow">http://www.lumerink.com/courses/ECE697A/docs/Matlab%20Filter%20Design%20and%20Implementation.pdf</a></p> <p>Hope this helps.</p>
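As a quick numerical illustration of the Nyquist (Lth-band) property for L = 2 (a half-band filter; this sketch is my own, not from the references above): designing with scipy's `firwin` and placing the cutoff exactly at half the Nyquist frequency forces every second tap, except the center one, to zero, which is exactly what makes polyphase interpolation by 2 cheap:

```python
import numpy as np
from scipy.signal import firwin

# Half-band (L = 2 Nyquist) FIR: cutoff at half of Nyquist
numtaps = 31                 # odd length; center tap at index 15
h = firwin(numtaps, 0.5)     # firwin normalizes so that 1.0 = Nyquist
center = numtaps // 2

# Taps at even offsets from the center (+-2, +-4, ...) should all be
# (numerically) zero for an Lth-band filter with L = 2.
idx = np.arange(numtaps)
zero_taps = h[(np.abs(idx - center) % 2 == 0) & (idx != center)]
```

The center tap comes out near 0.5, and the zero taps mean only about half the multiplies are needed in an interpolate-by-2 implementation.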
https://dsp.stackexchange.com/questions/29386/nyquist-nth-digital-filters
Question: <p>I want a highly damped highpass filter (damping of at least $2$), with a cutoff somewhere around $1\textrm{ Hz}$ (sampling frequency $f_s = 51.2\textrm{ Hz}$)</p> <ul> <li>How do I go about designing the filter with adequate control over the damping?</li> </ul> <p>My best guess was to use a standard second order response and frequency transform it, giving </p> <p>$$ \frac{s^2 \omega_0}{\omega _0^2 s^2 + 2\zeta\omega_0 ^2 + 1} $$ This gives me the required damping (after using MATLAB to transform it to the discrete domain using Tustin's method with <code>tustin</code>), but the frequency response is terrible (the $3\:\textrm{dB}$ point is much, much higher than it should be). </p> <ul> <li>Is there any way to solve this? </li> <li>Can I design an FIR filter with control over the damping? </li> <li>Is there a higher order $s$-domain function that still gives me control over the damping?</li> </ul> <p>Notes: filter order and phase delay are not overly important to my design.</p> Answer: <p>Your second-order high pass transfer function is wrong. You should use </p> <p>$$H(s)=\frac{s^2}{s^2+2\zeta\omega_0 s+\omega_0^2}$$</p> <p>But - as mentioned in the comments - a high damping and a sharp cut-off are incompatible requirements.</p>
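A sketch of the corrected transfer function discretized with Tustin's method, using scipy in place of MATLAB (the values $\zeta = 2$, $f_0 = 1$ Hz, $f_s = 51.2$ Hz are taken from the question). The bilinear transform maps $s \to \infty$ to $z = -1$, so the discrete high-pass keeps unity gain at Nyquist and a null at DC:

```python
import numpy as np
from scipy.signal import bilinear

fs = 51.2            # sampling frequency in Hz (from the question)
f0 = 1.0             # cutoff frequency in Hz
zeta = 2.0           # damping ratio
w0 = 2 * np.pi * f0

# Corrected high-pass: H(s) = s^2 / (s^2 + 2*zeta*w0*s + w0^2)
b_s = [1.0, 0.0, 0.0]
a_s = [1.0, 2 * zeta * w0, w0**2]

# Tustin (bilinear) discretization
b_z, a_z = bilinear(b_s, a_s, fs)

def H(z):
    """Evaluate the discrete transfer function at a point z."""
    return np.polyval(b_z, z) / np.polyval(a_z, z)
```

Evaluating `H(1.0)` (DC) and `H(-1.0)` (Nyquist) confirms the high-pass shape; the heavy damping shows up as a very gradual transition between those two extremes.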
https://dsp.stackexchange.com/questions/37089/highly-damped-iir-fir-digital-filter
Question: <p>In context of transition width vs. stopband attenuation, different windows (Blackman, Hamming, etc) are compared in terms of <em>tradeoffs</em> between the two, always noting that one cannot perfect both.</p> <p>Why not? Make it long enough - problem solved. We're working with <em>finite</em> frequency and amplitude resolution, not infinitely granular as in continuous. If not per ADC, then float precision's dynamic range inherently limits a signal's very sample values, and any filtering thereafter - and it doesn't end there<span class="math-container">$^1$</span>.</p> <p>Then the <em>actual</em> limitations on frequency/amplitude filtering stem from ADC/DAC/float precision, <em>not</em> filter design. Of course, in practice, we also care for <em>performance</em> - and a trillion-sample filter just won't do, so tradeoffs matter. That said, isn't below the more accurate formulation?</p> <p><strong>For a given filter length, each design has tradeoffs. If length is limited, none resolve a signal perfectly - otherwise, the differences between filters vanish within ADC/float dynamic range.</strong></p> <hr> <p>1: The Universe itself, per standard model, is finitely-resolved. Electron energy levels, Planck length, etc - limit the smallest possible frequency/amplitude increment of a signal. (... probably. Not an expert.)</p> <hr> <p><strong>Note1</strong>: assume &quot;the signal&quot; here is post-ADC, rather than the analog input, which would then add ADC itself to the question. So take a &quot;perfect&quot; or &quot;good enough to not distort/lose anything&quot; ADC. 
With short filters, we can quantify transition bands, stopband attenuations, ripples, etc, and their deviations from &quot;ideal&quot;, <span class="math-container">$\Delta$</span>.</p> <p>&quot;Perfect&quot;, then, is defined as <span class="math-container">$\Delta$</span> being so small, that even with a mathematical function describing the signal (perfect resolution), and mathematically convolving with the filter (literal perfection), the resulting (discretely sampled) signal would be <em>identical</em> as if convolved discretely with a finite filter.</p> <p><strong>Note2</strong>: I'm not concerned with eliminating noise from the signal, or any other parameter besides what's explicitly named; consider noise as part of &quot;the signal&quot;.</p> <p><strong>Note 3</strong>: Mathematically: Suppose we have the function for the input, <span class="math-container">$i(t)$</span>, and for the filter, <span class="math-container">$f(t)$</span>. Then, we can convolve mathematically ('ideally'); call the result <span class="math-container">$g(t)$</span>. Now discretize: <span class="math-container">$\rightarrow g[n]$</span>. Next, do all this digitally, with <span class="math-container">$i[n]$</span> and the finite filter <span class="math-container">$f[n]$</span>; call the result <span class="math-container">$h[n]$</span>. Then, the filter is &quot;perfect&quot; if <span class="math-container">$h[n] = g[n]$</span>.</p> <hr> <p><strong>Re: Dan's answer</strong> -- Many practical points, but not quite addressing the question. See Note 3, <em>in context</em> of Note 1; former asks for much more than latter, implying <em>any</em> <span class="math-container">$f(t)$</span>. The context here is <em>frequency filtering</em>, i.e. low-pass, band-pass, etc, example being windowed sinc.</p> <p>To answer this question, one must show one or the other:</p> <ol> <li><strong>No perfect filter</strong>: there is no <span class="math-container">$f[n]$</span> satisfying Note 3. 
Approximation error, plus add/multiply float(64) error, exceed float's <em>representation error</em> (i.e. <span class="math-container">$g(t) \rightarrow g[n]$</span>).</li> <li><strong>Yes perfect filter</strong>: present <span class="math-container">$i(t),\ f(t),\ g(t),\ g[n],\ i[n],\ f[n],\ h[n]$</span>, and code for generating and computing the latter four, and <span class="math-container">$\text{MAE}(g[n] - h[n])$</span>. If insufficient RAM, show that it'd work with more RAM.</li> </ol> <p>By these criteria, 1 will obviously win, as float is notorious for add/multiply imprecisions. But that's not meaningful, nor useful; if add/multiply is the bottleneck, we can use better than float64 to compensate. Thus, the condition for &quot;perfection&quot; is a &quot;negligible&quot; <span class="math-container">$\text{MAE}$</span>, or &quot;very close&quot; to float roundoff. The <em>representation</em> error is also a matter of <a href="https://stackoverflow.com/q/64001537/10133797">granularity</a>.</p> Answer: <p>Yes Virginia, There is a perfect digital filter.</p> <p>I assume the OP means by &quot;perfect filter&quot; what we would typically call an &quot;ideal filter&quot;: which is a filter that passes a finite block of frequencies with no alteration and completely removes all other frequencies, which is referred to as a &quot;brick wall filter&quot;. 
Otherwise, if the OP simply means a filter whose distortion is less than our &quot;increment of concern&quot;, then all properly designed filters will do this, achieving sufficient rejection, minimum passband distortion and minimum transition bandwidth so as not to degrade our requirements (often summarized, for communication waveforms, in an SNR metric on the waveform). I would prefer to refer to these as &quot;sufficient filters&quot;, as calling them perfect filters would confuse most readers with the brickwall filter previously mentioned - which, like Santa Claus, &quot;exists in our hearts and minds as certainly as love and generosity and devotion exist&quot;.</p> <p>That said, let me elaborate on the challenges of achieving the &quot;perfect filter&quot;. The transition band requirement is often the most challenging, especially since the OP has clarified that performance is limited by ADC technology, and there is no ADC available that would surpass the precision available in the remaining digital system (meaning for any available ADC we can easily design a digital system with passband ripple or stop band rejection that is less than the ADC quantization). For transition band requirements this is not the case, as it is not the amplitude quantization or the time quantization (sampling rate) that limits the transition band. What limits the achievable transition band is specifically the time duration of the filter's impulse response, which applies equally to digital and analog filters. At a given sampling rate, the number of samples is directly proportional to the time duration, but focusing on time provides more direct insight into the restriction. This is the time-frequency duality: in order to have infinitely small frequency resolution (a brick-wall filter) we need an infinitely long time duration. By choosing the number of samples in the filter, we are choosing the time duration, which then drives filter complexity for any given sample rate.
Further, for real-time filters there will be an inevitable delay due to causality that is proportional to this time duration: filters with steeper rejection (higher selectivity) must have longer delay. Added delay is a concern in many applications and is far from &quot;perfect&quot;.</p> <p>As for the OP's last statement, that the filter is not to eliminate noise from the signal and that the signal represents everything the ADC presents: if the &quot;filter&quot; is not rejecting any other noise (where noise can be interference, other signals we aren't interested in, quantization and thermal noise, etc.), then this is not a &quot;filter&quot; at all in any traditional sense of the word, and the simple answer for a &quot;perfect filter&quot; that would not change the ADC output in any way is a one-tap FIR filter with coefficient = 1 (the unit-sample function). I don't think this is the question, so that last statement and this trivial answer don't really make sense together.</p> <p>If the OP assumes that all noise is introduced only by quantization, this is not typically true in a well designed system, since we are interested in measuring, or being limited by, the noise that is in the original waveform that was sampled. We would typically choose quantization so that we are observing the actual noise in the signal (rather than drowning that out with more noise that we artificially add) - so it is not necessarily the quantization that allows for a &quot;perfect filter&quot;, since regardless of quantization we still filter to reject the noise components in the signal itself that the quantized samples are representing.</p> <p>For example, if we had a continuous-time sine-wave with an SNR of 20 dB, I would typically choose a quantization such that the additional noise added is at least 10 dB lower in the final filtered signal (limiting the SNR degradation to 0.4 dB), so for a full-scale sine-wave this would be a quantization of approximately 5 bits.
Thus the noise that we would observe in this case is NOT the quantization noise that is 30 dB down, but the noise in the original waveform itself that is 20 dB down. Any less quantization would simply degrade the SNR further.</p> <p>So given a filter intended to pass frequencies up to <span class="math-container">$f_1$</span>, the signal will have noise components at <span class="math-container">$f_1+\Delta$</span> that the ideal filter would need to remove, but for all practical filters there will exist a <span class="math-container">$\Delta$</span> that falls in the inevitable &quot;transition band&quot; of the filter. Thus we have the trade-off of filter complexity, sampling rate and frequency planning considerations in our digital filter designs.</p> <p>This graphic may help illustrate what occurs in the digital filter design and how &quot;windowing&quot; can help improve rejection at the expense of widening the transition band:</p> <p><a href="https://i.sstatic.net/waqPh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/waqPh.png" alt="Rectangular Window" /></a></p> <p>In the upper left we see the ideal filter with a &quot;brick-wall&quot; frequency response. To realize such a response, the filter would need to have a Sinc function as its impulse response (the inverse Fourier Transform of the desired frequency response is the impulse response). The Sinc function on its own is non-causal, extending to <span class="math-container">$\pm \infty$</span>, so we need to both delay the Sinc function in time and truncate it in length to be realizable. This step alone amounts to delaying and then multiplying the desired Sinc function with a rectangular window (in time).
The discrete version of this window is <span class="math-container">$N$</span> samples long, and the product in the time domain results in a convolution of our desired brickwall filter in the frequency domain with the Dirichlet Kernel (which is the Fourier Transform of the rectangular window, basically an aliased Sinc function in the frequency domain; for very large <span class="math-container">$N$</span> the Dirichlet Kernel approaches a Sinc function). The main lobe of the Dirichlet Kernel has a first null at <span class="math-container">$2\pi/N$</span> in frequency, where the sampling rate is <span class="math-container">$2\pi$</span> radians/sample. Thus, because of the windowing, our perfect brickwall filter will now have a transition band that extends <span class="math-container">$2\pi/N$</span> to the first null in frequency. It will also have significant sidelobes in the stop band and ripple in the passband due to this convolution. Improved windows (Kaiser, Hanning, Blackman-Harris, etc.) significantly reduce the sidelobes and passband ripple, but in all cases they produce an even wider transition band! The transition band is usually what limits the performance or drives the complexity of the filter and is typically a design consideration at the system level.</p> <p>This result with the rectangular window, where the transition to the first null is at <span class="math-container">$\Delta \omega = 2\pi/N$</span>, is not coincidental: it matches the estimates for the number of taps needed to realize a digital filter with a certain transition band requirement, as detailed here: <a href="https://dsp.stackexchange.com/questions/31066/how-many-taps-does-an-fir-filter-need/31210#31210">How many taps does an FIR filter need?</a> when you make the frequency axis normalized radian frequency.
With a rectangular window we get <span class="math-container">$N = 2\pi/(\Delta \omega)$</span> (which works out to be <span class="math-container">$\Delta F = 1/T$</span> in continuous time), while with fred harris' estimate (for windowed and least squares designs) we get:</p> <p><span class="math-container">$$N \approx \frac{A}{22}\frac{2\pi}{\Delta \omega}$$</span></p> <p>where <span class="math-container">$A$</span> is the stopband attenuation needed in dB, <span class="math-container">$\Delta \omega$</span> is the fractional radian frequency of the transition band, and <span class="math-container">$N$</span> is the number of taps needed to realize this rejection within this frequency distance from the passband.</p> <p>This is detailed further at that post, which also contains &quot;Kaiser's Formula&quot;; it too has the <span class="math-container">$\Delta \omega/(2\pi)$</span> factor, but includes the effects of passband and stopband ripple explicitly. These are estimators, and the typical approach is to use them as starting points and then iterate the number of taps once the performance of the filter with a given number of taps is reviewed against the target requirements.</p> <p>Next, as MBaz has suggested in a comment below the question, the precision of the coefficients themselves will limit our ability to achieve a filter that provides rejection beyond the dynamic range of the precision of those coefficients. But as I stated, if we are limited by ADC technology then achieving this is trivial, and failure here would be a result of poor design rather than limits of technology. However, if &quot;perfect&quot; means providing rejection beyond the noise floor of the precision of the number system, this too is not achievable.</p> <p>The typical guideline is to use 2 more bits of quantization for the coefficients over the datapath.
The rejection is limited by coefficient precision by a typical factor of 5 to 6 dB/bit (5 dB/bit due to correlation in the coefficients, as fred harris points out; 6 dB/bit is what would be expected for uncorrelated samples). So if we limited the coefficients to 8 bits (for example), the rejection of the filter would be degraded to 40 to 48 dB even if we had designed the filter for more (such as in the graphic below, which in this case came out closer to 6 dB/bit). An 8-bit datapath can otherwise provide 50 dB SNR for a sine-wave, so a filter with 8-bit coefficients would fall far short of perfect. The same argument would apply to a filter with a double-precision floating point datapath and coefficients, if &quot;perfect&quot; means we wish the filter rejection to exceed this.</p> <p><a href="https://i.sstatic.net/e9obN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e9obN.png" alt="Coefficient quantization effects" /></a></p>
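The tap-count estimators mentioned above are easy to compare numerically. A sketch contrasting fred harris' rule of thumb with scipy's `kaiserord` (which implements Kaiser's formula); the specific numbers, $A = 60$ dB and a $0.1\pi$ rad/sample transition band, are my own example:

```python
import numpy as np
from scipy.signal import kaiserord

A = 60.0                  # required stopband attenuation in dB
delta_w = 0.1 * np.pi     # transition width in rad/sample

# fred harris' estimate: N ~ (A / 22) * (2*pi / delta_w)
n_harris = (A / 22.0) * (2 * np.pi / delta_w)

# Kaiser's formula via scipy; width is normalized so 1.0 = pi rad/sample
n_kaiser, beta = kaiserord(A, delta_w / np.pi)
```

Both are starting-point estimates of the same order of magnitude; as the answer says, the final tap count is found by iterating the design against the actual requirements.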
https://dsp.stackexchange.com/questions/70462/is-there-really-no-perfect-digital-filter
Question: <p>It would be greatly appreciated if the usage of the <code>analog=True</code> argument of scipy's filter design functions (e.g. <code>butter</code>) could be explained. I don't understand what is meant by this (any signal being processed by scipy in Python on a computer is discrete and will always be digital?). I am pretty familiar with DSP but have little knowledge of analog signals.</p> <p>This seems like a simple question, but I have scoured the docs and online forums and not found anything addressing this directly; apologies if I have missed something!</p> Answer: <p>The function <code>butter()</code> doesn't do any signal processing. It is a routine to <em><strong>design</strong></em> a filter, either digital or analog. I.e., it computes the filter coefficients.</p> <p>I use Matlab/Octave where you have basically the same function. The command</p> <pre><code>[b,a] = butter(2,1,'s'); </code></pre> <p>designs an <em><strong>analog</strong></em> (<code>'s'</code>) second-order filter with angular cut-off frequency <span class="math-container">$\omega_c=1$</span>. The coefficients are <code>b = 1</code> and <code>a = [1,sqrt(2),1]</code>, i.e., the transfer function is</p> <p><span class="math-container">$$H(s)=\frac{1}{s^2+\sqrt{2}s+1}$$</span></p>
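The same design can be reproduced in scipy itself, which is what the question asked about; a minimal sketch confirming the analog coefficients quoted in the answer:

```python
import numpy as np
from scipy.signal import butter

# analog=True: design the continuous-time (s-domain) prototype,
# with the cutoff given as an angular frequency (here wc = 1 rad/s)
b, a = butter(2, 1, btype="low", analog=True)

# Resulting transfer function: H(s) = 1 / (s^2 + sqrt(2)*s + 1)
```

With `analog=False` (the default), the same call would instead return z-domain coefficients for a digital filter, with the cutoff interpreted relative to the Nyquist frequency.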
https://dsp.stackexchange.com/questions/87005/scipy-filter-analog-vs-digital
Question: <p>I am working on a board that has no anti-aliasing filter at the input of the ADC. I have the option to implement my own filter using an RC + op-amp circuit. But is it also possible to implement an anti-aliasing filter after sampling by the ADC, processing in the digital domain: a digital anti-aliasing filter? </p> Answer: <p>Just to support Matt's answer and provide a few more details:</p> <p>Most modern ADCs do most of the hard anti-aliasing job in the digital domain. The reason is that digital filters tend to produce fewer by-products at a much lower cost. The actual chain is:</p> <ul> <li>Analog Input.</li> <li>Analog Anti-aliasing filter.</li> <li>Oversampling (eg, at 8x).</li> <li>Digital Anti-Aliasing Filter.</li> <li>Decimating (reduction to 1x).</li> <li>Digital Output.</li> </ul> <p>To further illustrate, consider the following:</p> <ul> <li>The audio is sampled at 44100Hz.</li> <li>This provides a Nyquist frequency of 22050 Hz.</li> <li>Any frequencies above 24100 Hz will alias back to the audible range (below 20kHz).</li> <li>20000Hz to 24100Hz is about a quarter of an octave.</li> <li>Even with a steep 80dB/8ve filter you will only be reducing the aliasing frequencies by 20dB.</li> </ul> <p>But with 8x oversampling:</p> <ul> <li>The audio is sampled at 352.8kHz (44.1kHz x 8).</li> <li>Nyquist is 176.4 kHz.</li> <li>Only frequencies above 332.8kHz will mirror to the audible range.</li> <li>That's about 4 octaves.</li> <li>So you can apply a 24dB/8ve analog filter to reduce aliasing frequencies by 96dB.</li> <li>Then oversample.</li> <li>Then apply a linear phase digital filter between 20kHz and 24.1kHz.</li> </ul> <p>The <a href="http://rads.stackoverflow.com/amzn/click/141960001X">following book</a> is an excellent, clear resource for these sort of things.</p>
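To see why some analog pre-filtering is always needed, the folding itself is easy to reproduce; a small numpy sketch (the 30 kHz tone and 44.1 kHz rate are my own example): once a tone above Nyquist has been sampled, it lands at $f_s - f$, and no digital filter applied afterwards can distinguish it from a genuine in-band component at that frequency.

```python
import numpy as np

fs = 44100.0           # sample rate in Hz
f_tone = 30000.0       # tone above Nyquist (22050 Hz)
n = np.arange(4096)
x = np.sin(2 * np.pi * f_tone * n / fs)

# Locate the spectral peak of the sampled tone
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f_peak = np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spectrum)]
# The peak appears at the alias frequency fs - f_tone = 14100 Hz,
# indistinguishable from a real 14.1 kHz tone
```

This is exactly why the chain above puts an analog filter before the sampler and saves the sharp, cheap filtering for the oversampled digital domain.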
https://dsp.stackexchange.com/questions/9205/can-we-have-a-digital-anti-aliasing-filter
Question: <p>I came across this paper entitled "Design of Efficient Digital Interpolation Filters and Sigma-Delta Modulator for Audio DAC" where the author oversamples an input frequency, fsig = 1kHz with ratio L = 128 and update frequency, fsi = 64kHz. The interpolation filter specification is given by:</p> <ul> <li>passband ripple = 0.001dB for frequency &lt;0.45*fsi and </li> <li>stopband attenuation = 174dB for frequency > 0.55*fsi.</li> </ul> <p>For two-stage system using IIR Butterworth: </p> <ul> <li>L1 = 2 and L2 = 64, how to get filter order of N1 = 77 and N2 = 21 respectively?</li> <li>L1 = 16 and L2 = 8, how to get filter order of N1 = 120 and N2 = 7 respectively?</li> </ul> Answer: <p>Butterworth low-pass filters are not going to work for this, as demonstrated in the following. You'd need to use other types of filters, for example linear-phase finite impulse response (FIR).</p> <p>The magnitude frequency response of an order <span class="math-container">$N$</span> discrete-time Butterworth low-pass filter can be approximated by that of an analog prototype (<a href="https://en.wikipedia.org/wiki/Butterworth_filter" rel="nofollow noreferrer">from Wikipedia</a>, with a small notation change):</p> <blockquote> <p><span class="math-container">$$G(\omega) = {\frac{1}{\sqrt{1+{\omega}^{2N}}}},$$</span></p> </blockquote> <p>with normalized frequency <span class="math-container">$\omega$</span>. The Butterworth filter magnitude frequency response is strictly decreasing. For a pass-band edge gain of <span class="math-container">$a = 10^{-0.001/20}$</span>, that is, -0.001 dB, we'd employ a further gain factor of 0.001 dB to have a gain of 0.001 dB, that is, <span class="math-container">$\frac{1}{a}$</span> at 0 Hz. With this gain factor the magnitude frequency response is <span class="math-container">$\frac{1}{a}G(\omega)$</span>. 
Then the transition band begins at the frequency <span class="math-container">$\omega_0$</span> found by solving:</p> <p><span class="math-container">$$\frac{1}{a}G(\omega_0) = a$$</span> <span class="math-container">$$\Rightarrow \omega_0 = \left(\frac{1 - a^4}{a^4}\right)^{\textstyle\frac{1}{2N}}$$</span></p> <p>Then we plot the magnitude frequency response at the beginning of the stop band (the end of the transition band) at frequency <span class="math-container">$\omega_1 = \frac{0.55}{0.45}\omega_0$</span> as function of filter order <span class="math-container">$N$</span>:</p> <p><a href="https://i.sstatic.net/2DhNC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2DhNC.png" alt="enter image description here"></a><br><em>Figure 1. <span class="math-container">$\frac{1}{a}G(\omega_1)$</span> as function of <span class="math-container">$N$</span>.</em></p> <p>Based on the graph, -174 dB of stop band corner gain would require a filter order of about 120. I don't think such a high order is feasible for a recursive filter as it will lead to numerical problems. Filter types other than Butterworth will be able to meet the specification with a lower filter order.</p>
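The conclusion can be reproduced numerically from the formulas above; a short sketch using the same specification (0.001 dB passband ripple, 174 dB stopband attenuation, band edges at 0.45 and 0.55 of $f_{si}$):

```python
import numpy as np

a = 10 ** (-0.001 / 20)      # pass-band edge gain (-0.001 dB)

def stopband_gain_db(N):
    """Gain of the order-N Butterworth prototype at the stop-band edge,
    including the 1/a gain factor described above."""
    w0 = ((1 - a**4) / a**4) ** (1 / (2 * N))   # end of the pass band
    w1 = (0.55 / 0.45) * w0                     # start of the stop band
    G = 1 / np.sqrt(1 + w1 ** (2 * N))
    return 20 * np.log10(G / a)

# Order 100 falls well short of -174 dB; roughly order 120 is needed,
# matching the plotted curve.
```

Evaluating `stopband_gain_db` at a few orders reproduces Figure 1: around order 100 the stop-band corner gain is only about -141 dB, and the -174 dB target is first met near order 120.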
https://dsp.stackexchange.com/questions/59929/design-of-efficient-digital-interpolation-filter
Question: <p>I am using an RLS adaptive filter for noise cancellation. My sampling frequency is 500 Hz, but I am interested only in frequencies of up to 60 Hz. I filter the input and the reference signal to the desired frequency range and then apply the adaptive filter. The adaptive filter does a good job at removing the noise. But since I do not need such a high sampling frequency when my highest frequency of interest is 60 Hz, I decimate by a factor of 4 (after running an anti-aliasing filter) before applying the adaptive filter. Even though the input and the reference signal still display high correlation at the decimated rate, the adaptive filter does not perform well and produces spikes in the output. I wonder what the reason could be? Would it help to condition the input, i.e. add some white noise before adaptive filtering (I am thinking in terms of an eigenvalue analogy, with the smallest eigenvalue corresponding to white noise)?</p> <p>Thanks in advance.</p> Answer:
https://dsp.stackexchange.com/questions/10017/rls-adaptive-filter
Question: <p>How often do problems arise that let you use adaptive filters? Unless I am understanding something incorrectly it seems the requirement that the input signal be stationary(or even WSS) is too strong for most places I would want to use adaptive filters.</p> <p>Am I wrong? How often do adaptive filters come up in communications and control? </p> Answer:
https://dsp.stackexchange.com/questions/52380/how-general-are-adaptive-filtering-techniques
Question: <p>I'm trying to code the algorithm described in <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ICASSP01makurt.pdf" rel="nofollow noreferrer">Speech dereverberation via maximum-kurtosis subband adaptive filtering</a> by <em>Gillespie, Malvar and Florencio</em>, and the signal looks cleaner when I plot it. However, there are 2 aspects of the results that are worrying me:</p> <ol> <li>The sound level seems lower than the original when played.</li> <li>And the sound comes out muffled.</li> </ol> <p>I'm quite new to speech signal processing, so I was wondering whether such issues occur commonly or is it just my bad coding?</p> <p>Here is the part of the code that corresponds to the adaptive filter. I believe it should be the problematic part.</p> <hr> <pre><code>for n=1:500
    %Sig is the sum of the product of the FFT of the kurtosis gradient and
    %the complex conjugate of the FFT of the LP residual of the reverberant
    %signal
    sig=0;
    ii=1;
    while ii&lt;length(F)-L
        sig=sig+sum(F(ii:ii+L).*Yconj(ii:ii+L));
        ii=ii+L;
    end
    %H is the frequency domain representation of the filter and the
    %following are the update equations
    %Hpr is G' in the paper
    Hpr(n+1)=H(n)+(mu/M)*sig;
    if Hpr(n)==0 || isnan(Hpr(n))==1
        H(n+1)=0;
    else
        H(n+1)=Hpr(n)/abs(Hpr(n));
    end
    %getting the optimized signal
    Zt=Yleftres.*H;
    zt=ifft(Zt);
    %updating the value of the kurtosis gradient
    q=1:length(zt);
    while q&lt;length(zt);
        secmoment=beta*secmoment+(1-beta)*zt(q:q+881).^2;
        fourthmoment=beta*fourthmoment+(1-beta)*zt(q:q+881).^4;
        f(q:q+881)=4*(secmoment*(zt(q:q+881).^3)-fourthmoment*zt(q:q+881))/(secmoment.^3);
        q=q+881;
        cleaner=isNaN(f(q:q+881));
        cleaner=cleaner-1;
        cleaner=abs(cleaner);
        f(q:q+881)=cleaner.*f(q:q+881);
    end
    F=fft(f);
    % Hpr(n+1)=H(n)+(mu/M)*sig;
end
</code></pre> <p><a href="https://i.sstatic.net/fk9ti.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fk9ti.png" alt="enter image description here"></a></p> Answer: <p>As was advised to me by A_A in
the comment section, I will close this thread with a "self-answer". The code present in the question is code for a speech-signal dereverberation adaptive filter based on the kurtosis of the signal's LP residual. The original idea isn't mine (the reference is in the question). But the problem I faced was mainly due to the fact that my filter did not converge to the desired result. I drastically increased the number of iterations the adaptive filter runs and the result converged (from n=500 to n=2000). There are other parameters that can be tweaked too (like mu), but I haven't had the time recently to thoroughly evaluate their effects. </p> <p>Here's a link to download the code; hope it helps. (expires by April 18th)</p> <p><a href="https://mysnu-my.sharepoint.com/:u:/g/personal/chemalielias_seoul_ac_kr/EQf85jBkW3JMhkcbTgIBQBEBzzteflcaQ0yCfkDNL8VRZA?e=UZn2XU" rel="nofollow noreferrer">Code download link</a></p> <p>Here's the link for the RIR code I used; I did not write the code, though. I downloaded the file from the MATLAB File Exchange. The original author is Stephen G. McGovern, who also wrote a paper with the theoretical explanations for the RIR filter.</p> <p><a href="https://mysnu-my.sharepoint.com/:u:/g/personal/chemalielias_seoul_ac_kr/ETERzOu0BepOv40rKYu1vLABVIEU_j-BTCX1ecu0ZQyOsA?e=NUl4SL" rel="nofollow noreferrer">RIR code</a></p>
https://dsp.stackexchange.com/questions/45344/speech-dereverbaration-via-maximum-kurtosis-adaptive-filtering
Question: <p>I am using a series-cascade of multiple NLMS adaptive filters, each with step size 0.0040, leakage factor 1.0, and 100 filter taps. My signal gains magnitude at each step of the filtering: say the peak magnitude increases from 0.2 originally to 2.5 after using the first adaptive filter, to 12.5 after using the second adaptive filter on the output of the first one, and finally 30 after using the third adaptive filter on the output of the second one. Why is this happening?</p> <p>I have tweaked the step size and leakage factor but that did not help. Isn't my filter supposed to reduce the magnitude of my signal?</p> Answer: <p>I don’t have the specific details for your filter, but with digital filters in general it is typical for the filter to grow the signal in band, in contrast to analog filters that shrink the signal out of band. It is all just a matter of scaling. Consider the simple case of a moving average FIR filter consisting of the summation of the previous N samples; such a filter will grow any low frequency signal within its passband, specifically growing a DC component by a factor of N. (We could then divide the result by N to get the actual average of the samples, but the primary filter itself here is growing the signal.) </p> <p>This approach of considering digital filters as "growing a signal in band" is particularly important for noise considerations in fixed point design, where you should typically avoid scaling the signal prior to the filter or scaling the filter's coefficients in order to normalize the result, but always allow the filter to grow the signal and then scale afterward. This is the reason for extended precision accumulators, and the reason is rather straightforward: if you scale the signal prior to the filter in a fixed point design you are effectively increasing the quantization noise.
This quantization noise is typically modeled as independent white noise from sample to sample so in the filter will increase at a rate of <span class="math-container">$10Log_{10}(N)$</span> for the <span class="math-container">$N$</span> taps in the filter. If you scale after you are only adding this higher quantization noise level once. Scaling the coefficients can be shown to accumulate quantization noise contributions in a similar way. This is also a reason why equalizers and adaptive filters should not be any longer than they need to be (to avoid noise enhancement). </p>
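<p>The "growth by N" claim above is easy to verify numerically. Below is a minimal sketch in Python/NumPy (the filter length N = 8 and signal length are arbitrary illustration values, not from the answer):</p>

```python
import numpy as np

N = 8                    # taps in the moving-sum filter (arbitrary illustration)
h = np.ones(N)           # unscaled moving average: sum of the last N samples
x = np.ones(100)         # DC input, i.e. a signal inside the low-pass passband

y = np.convolve(x, h)    # the filter grows the DC component by a factor of N
dc_gain = y[N:90].max()  # steady-state output level equals N

# Scale once, after the filter, to recover the true average without
# amplifying quantization noise inside the filter:
avg = y / N
```

<p>Scaling after the accumulation, as in the last line, is exactly the extended-precision-accumulator discipline the answer recommends.</p>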
https://dsp.stackexchange.com/questions/65901/why-does-my-signal-magnitude-increase-after-adaptive-filtering
Question: <p>I have designed an adaptive filter for noise cancellation. Is there any standard way of testing adaptive filters?</p> Answer: <p>It is usually evaluated using the Mean Square Error:</p> <p>$$ e(n) = \frac{\displaystyle\sum_{i=1}^{N}(d_{i}(n) - y_{i}(n))^2}{N} $$</p> <p>where $ d(n) $ are the values of the samples used to train your filter, and $ y(n) $ are the samples of the filter output. So you train your filter $ N $ times, so that for each iteration $ n $ of the training you have $ N $ values. Then you just compute the equation above, the mean over the $ N $ realizations of training, and plot the error $ e $ against the iteration $ n $.</p>
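<p>A sketch of that evaluation procedure in Python/NumPy: run the same training N times on independent data, square the per-iteration error, and average across runs to get the learning curve. The identified system, step size, and ensemble size below are made-up illustration values:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_iter, taps, mu = 50, 400, 4, 0.05
h_true = np.array([0.8, -0.4, 0.2, 0.1])   # system generating d(n) (assumed)

sq_err = np.zeros((n_runs, n_iter))
for run in range(n_runs):                  # N independent training realizations
    w = np.zeros(taps)
    x = rng.standard_normal(n_iter + taps)
    for i in range(n_iter):
        u = x[i:i + taps][::-1]            # tap-delay input vector
        d = h_true @ u                     # desired sample d_i(n)
        e = d - w @ u                      # error against filter output y_i(n)
        w += mu * e * u                    # LMS update
        sq_err[run, i] = e**2

mse = sq_err.mean(axis=0)                  # learning curve e(n), averaged over runs
```

<p>Plotting <code>mse</code> against the iteration index gives the standard learning curve: it starts near the power of the desired signal and decays toward the steady-state error.</p>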
https://dsp.stackexchange.com/questions/28054/performance-of-adaptive-filter
Question: <p>When studying neural networks from Neural Networks and Learning Machines, by Simon Haykin, the author highlights the close similarity between adaptive filtering and neural networks.</p> <p>From a scalar-valued signal, if we put a tapped-delay input along with an ADALINE (Adaptive Linear Neuron), do we have an adaptive filter? From this point of view, both areas become closely related.</p> <p>PS: A good reference is <a href="https://www.mathworks.com/help/deeplearning/ug/adaptive-neural-network-filters.html#bss4gqu-3" rel="nofollow noreferrer">MathWorks - Adaptive Neural Network Filters</a></p> Answer: <p>I think an adaptive filter and a single-layer perceptron with an MSE error criterion are equivalent.</p>
https://dsp.stackexchange.com/questions/87750/tapped-delay-line-adaline-adaptive-filter
Question: <p>In the audio domain, I am currently attempting to use MATLAB to distil:</p> <ol> <li><p>$\textrm{signal}$ from $\textrm{noise + signal}$</p></li> <li><p>$\textrm{noise}$ from $\textrm{noise + signal}$ using two adaptive filters $\rightarrow$ two results. </p></li> </ol> <p>I get the first answer quite effectively using:</p> <pre><code>hFDAF1 = adaptfilt.fdaf(AdaptFiltLength,StepSize,Leakage,Delta,Lambda) [Errors1, Adapt_Out_Audio] = filter(hFDAF1, NoisePlusAudio, Audio) </code></pre> <p>where <code>NoisePlusAudio</code> is the signal from a microphone in the room, and <code>Audio</code> is music playing through speakers in the room.</p> <p>How do I get the adaptive filter to remove the <code>Audio</code> from <code>NoisePlusAudio</code>, just leaving the background noise of the room, much like an echo canceller? The following does not work; it just lets all the sound through.</p> <pre><code>[Errors2, Adapt_Out_Noise] = filter(hFDAF2, Audio, NoisePlusAudio) </code></pre> <p>I'd love to know the answer ....</p> Answer: <p>The reason might be the continuous adaptation, even in non-speech regions. You can try a Voice Activity Detector (VAD) to determine the exact regions in which to adapt the filter, with processing done frame-wise (20 ms).</p>
https://dsp.stackexchange.com/questions/28672/matlab-adaptive-filters
Question: <p>I'm trying to find the optimum filter length for an Adaptive Filtering, using RLS Algorithm.</p> <p>I'm using this design: <a href="https://i.sstatic.net/UG9w9.png" rel="noreferrer"><img src="https://i.sstatic.net/UG9w9.png" alt=""></a></p> <p>So the "error" signal is the signal without noise (and that's the signal that I want).</p> <p>If I have $e(n) = d(n)-y(n)$ but $d(n)$ is my desired signal, I need that $e(n) \rightarrow 0$ so I find the optimum filter length (and the delay) using the MSE criterion, but now I have the signal that I want as the error, so I don't know how to find the optimum filter length, because I have NO idea what MSE do I have to get at the output!</p> <p>Can anyone tell me what should I do?</p> <p>Thanks!</p> Answer: <p>In order to be able to choose an optimal value for the delay $\Delta$ it's important to understand how the system works. The purpose of the delay is to decorrelate the desired signal $s(n)$ and the signal component $s(n-\Delta)$ at the input of the adaptive filter. This means that $\Delta$ must be chosen such that the autocorrelation $R_{ss}(k)$ of $s(n)$ is (close to) zero for lags greater than $\Delta$:</p> <p>$$R_{ss}(k)\approx 0,\qquad |k|&gt;\Delta$$</p> <p>However, we cannot choose $\Delta$ arbitrarily large because the delayed interference at the input of the filter must be correlated with the interference added to the signal, i.e., the autocorrelation $R_{rr}(k)$ of the interference must still be significant at a lag of $\Delta$, otherwise the adaptive filter cannot predict the interference. If we can assume that $r(n)$ is narrow-band compared to $s(n)$, it's always possible to find an appropriate value for $\Delta$.</p> <p>With an appropriate value for $\Delta$, the adaptive filter will try to predict the interference, i.e., it will try to undo the effect of the delay in the frequency band where the interference has significant frequency components. 
So the output of the filter will approximate $r(n)$: $y(n)\approx r(n)$. Consequently, the error signal will approximate the desired signal: $e(n)\approx s(n)$.</p> <p>After having chosen a value for $\Delta$ based on the autocorrelation of $s(n)$, the filter length must be chosen by trial and error. A long filter will give a better suppression at the cost of slower convergence.</p>
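<p>The delay-selection rule above can be sketched numerically: estimate both autocorrelations and pick the smallest lag at which the broadband signal $s(n)$ has decorrelated while the narrow-band interference $r(n)$ is still strongly correlated. The signals, frequency, and the 0.05/0.5 thresholds below are assumptions for illustration (Python/NumPy):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
s = rng.standard_normal(n)                     # broadband desired signal (assumed)
r = np.cos(2 * np.pi * 0.01 * np.arange(n))    # narrow-band interference (assumed)

def autocorr(x, max_lag):
    """Biased, normalized autocorrelation estimate R_xx(k) / R_xx(0)."""
    x = x - x.mean()
    v = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / len(x)
                     for k in range(max_lag)]) / v

Rss = autocorr(s, 40)
Rrr = autocorr(r, 40)

# Smallest delay where the signal is decorrelated but the interference
# is still strongly correlated (both thresholds are design choices):
delta = next(k for k in range(1, 40) if abs(Rss[k]) < 0.05 and abs(Rrr[k]) > 0.5)
```

<p>After this, the filter length itself is still chosen by trial and error, as the answer notes.</p>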
https://dsp.stackexchange.com/questions/37902/adaptive-filtering-optimum-filter-length-and-delay
Question: <p>Does anyone know about different adaptive filtering implementations (LMS, RLS ...) in 2D or even 3D ? I have sequences of 2D images and 3D volumes with repeating patterns but small differences. I was thinking of using one as my reference input and extracting differences between the pair (a simple subtraction doesn't work, as every little random difference is magnified in the result). I cannot find any Matlab implementation, and using 1D on columns or rows of the images doesn't seem to work. Thought perhaps a 2D version using a 2D neighbourhood would do a better job. The noise I am trying to remove is not white noise but rather coherent noise. New images are produced every second. The differences between two successive ones are small but get larger over time.</p> <p>Thanks in advance </p> <p><img src="https://i.sstatic.net/N0Eut.png" alt="enter image description here"></p> Answer: <p><a href="http://en.wikipedia.org/wiki/Unsharp_masking#Local_contrast_enhancement" rel="nofollow noreferrer">Local contrast enhancement</a> a.k.a. unsharp masking is a simple, fast method for modeling, then removing, smooth (low-frequency) background noise. In a nutshell,</p> <ol> <li>extract a smooth background image with a wide-radius lowpass filter</li> <li>sharper_image = image + c * (image - background), c ~ 10 % or so: highpass</li> </ol> <p>Using <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#smoothing-filters" rel="nofollow noreferrer">scipy.ndimage</a>, this is:</p> <pre><code>from scipy import ndimage

def sharpen( image, radius, howfar, background ):
    """ in: greyscale image, a 2d, 3d ... numpy array
        out: extrapolated highpass
    background: lowpass the image, in time ~ Npixel * (2 radius + 1) * ndim
    then highpass: background ---&gt; image ---&gt; sharpened image, in time ~ Npixel
    howfar: -1 0 .5 ...
    """
    sigma = int( radius / 4. + .5 )  # r = int( 4 * sigma + .5 )
    ndimage.gaussian_filter( image, output=background, sigma=sigma, mode="nearest" )
    return image + howfar * (image - background)  # clip
</code></pre> <p>Some notes:</p> <p>Of course you'll have to experiment with <code>radius</code> and <code>howfar</code> for your data.</p> <p>Calculate the smoothing filter (1d) outside the loop, then do <code>convolve</code> or <code>convolve_1d</code> for each frame. If the background changes slowly, update only 1/2 or 1/10 of it on each frame. For example, alternate <code>convolve_1d</code> ( horizontal lines, vertical lines, horizontal ... )<br> or ( every 5 th H line, every 5 th V line, next 5 th H ... ).</p> <p>Experts may know of smarter ways of tracking <code>background</code> only where it's changing.<br> (As I understand it, that's your original question, but LMS seems to me, non-expert, overkill for that; here we have a fast simple inner loop.)</p> <p>Color: you don't want to interpolate colors in RGB space, much less extrapolate, because "between" gets screwy colors.</p> <p>(Some follow-up questions, maybe enough for a wiki:<br> What C++ image libraries have fast 2d / 3d gaussian_filter / fast extrapolation<br> &nbsp;&nbsp;&nbsp;&nbsp; <em>and</em> reasonable doc, clean, small, opensource, bindings for Python ... ?<br> Is there a constant-time 2d / 3d gaussian filter, independent of radius ?<br> Color: RGB -> Lab or YIQ -> sharpen luma only, leave color as-is ?)</p> <p>See also:<br> <a href="https://stackoverflow.com/questions/2938162/how-does-an-unsharp-mask-work">how-does-an-unsharp-mask-work</a> on SO<br> Haeberli and Voorhies, <a href="http://www.graficaobscura.com/interp" rel="nofollow noreferrer">Image Processing By Interp and Extrapolation</a>, 1994, 3p.</p>
https://dsp.stackexchange.com/questions/10482/2d-adaptive-filters
Question: <p>I am confused as to the difference between neural networks and adaptive filters: As far as I understand it, &quot;neural networks&quot; are largely used for solving inverse problems, where an unknown system is to be identified by the neural network in order to, for example, predict some output. The same is true for linear and nonlinear adaptive filtering algorithms for solving inverse problems such as system identification and predicting the output.</p> <p>Q: Is there a difference between a linear/nonlinear adaptive filter (chain) that approximates an unknown system and a &quot;neural network&quot; that does the same, or is it called a neural network because it is &quot;fancy&quot;?</p> Answer: <p>An adaptive filter is a special case of a neural network (NN). They have in common that they multiply an input x[n] with weights w[n]; the result y[n] = x[n]*w[n] is compared to the target t[n] (e.g. the system to be identified or the prediction to be made). The resulting error e[n] = t[n] - y[n] is used to adapt the weights w[n], e.g. with the algorithm w[n+1] = w[n] + µ*e[n], where µ is the learning rate, i.e. the speed at which the adaptation of the weights takes place. The error function can also be non-linear, e.g. e[n] = sign(t[n] - y[n]). So far, these are the things adaptive filters and NNs have in common. NNs can have multiple neurons and hidden layers, which is a difference from adaptive filters. Therefore adaptive filters are a subset of NNs.</p>
https://dsp.stackexchange.com/questions/78687/is-a-neural-network-an-adaptive-filter
Question: <p>Can somebody please provide an intuitive answer or reference for the following questions?</p> <p><strong>Q1: Dependence of estimation performance on number of data points</strong> -- I could not find any information on whether the estimation performance of adaptive filters such as Least Mean Square (LMS), the Constant Modulus Algorithm (CMA), and Kalman filters depends on the number of data points or not. Is there any information on whether the mean square error between the actual and estimated parameters reduces with an increase in the number of data points or not? </p> <p><strong>Q2: Dependence of convergence on number of data points</strong> -- For instance, information such as whether convergence of these adaptive filters (or in general) depends on the number of data points, i.e., whether these require a large number of data points to have good estimation performance. </p> Answer: <blockquote> <p>Q1: Dependence of estimation performance on number of data points</p> </blockquote> <p>Since LMS and RLS are <strong>adaptive</strong> filters, their estimation performance improves as the number of their <strong>iterations</strong> increases. Hence more data points will make their outputs closer to the expected performance, until <strong>convergence</strong> is achieved (this requires either WSS data or statistically slowly varying nonstationary data, so that the filter can <strong>track</strong> its statistical character). Once the filter is operating in convergence conditions, there won't be any improvement in its output, providing what is called the (minimum) <em>steady-state error</em>. This steady-state error depends on a number of factors but not on the data length.</p> <p>But until convergence is achieved, the estimation error decreases as more data points are processed. 
However, most typical adaptive filter users would be interested in the steady-state (convergent) results rather than the transient response.</p> <p>Note that increased data points can come either from a <strong>longer</strong> observation or from a <strong>higher sampling rate</strong>, the results of which can be different. For example, for a Kalman filter (in its extended mode) with mechanical applications, an increase in sampling rate can actually <strong>improve</strong> the steady-state error as well.</p> <blockquote> <p>Q2: Dependence of convergence on number of data points</p> </blockquote> <p>For LMS, RLS, and Kalman filters, convergence primarily depends on the number of data points being processed, i.e., the number of iterations. However, this <strong>convergence rate</strong> is different for different filter types, reflecting their complexity and/or sophistication. The simple LMS filter has the slowest rate of convergence (roughly 10 to 20 times the filter tap weight length), whereas the RLS and Kalman filters display a convergence rate of roughly $2$ times the filter length (Haykin, <em>Adaptive Filter Theory</em>) for WSS inputs.</p>
https://dsp.stackexchange.com/questions/43255/performance-of-adaptive-filters
Question: <p>I have general theoretical questions: </p> <ul> <li>Is it true that an adaptive filter with two inputs (one normal and one delayed by the single time increment) can completely get rid of any <strong>single frequency</strong> noise? </li> <li>Is it then true that a three-input adaptive filter can (completely) get rid of a noise consisting of two harmonics, etc.? </li> </ul> <p>Maybe there is a theorem for this, any pointers would be appreciated.</p> Answer: <p>In general case, to fully filter out a noise consisting of $N$ (arbitrary) harmonics, one needs an adaptive filter with length (number of taps) of at least $2N$.</p>
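<p>A quick numerical check of the 2N-taps claim for N = 1: a two-tap filter driven by a sinusoidal reference and its one-sample delay can synthesize any amplitude and phase at that frequency, so the optimal (least-squares/Wiener) weights cancel a single-frequency noise exactly. Sketch in Python/NumPy with a made-up frequency, amplitude, and phase:</p>

```python
import numpy as np

n = np.arange(500)
w0 = 2 * np.pi * 0.07                      # noise frequency (assumed)
noise = 1.3 * np.cos(w0 * n + 0.4)         # single-frequency noise, arbitrary amp/phase

ref = np.cos(w0 * n)                       # reference input
ref_delayed = np.cos(w0 * (n - 1))         # one-sample-delayed reference
X = np.column_stack([ref, ref_delayed])    # 2 taps for N = 1 harmonic

w, *_ = np.linalg.lstsq(X, noise, rcond=None)  # optimal two-tap weights
residual = noise - X @ w                   # cancellation is exact (to round-off)
```

<p>The two columns are linearly independent whenever sin(w0) is nonzero, so their span contains every sinusoid at w0; adding two taps per extra harmonic extends the same argument to N harmonics.</p>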
https://dsp.stackexchange.com/questions/31608/adaptive-filter-with-two-inputs
Question: <p>I just have a question about using a least-mean-squares algorithm adaptive filter for system identification. Consider the following <a href="https://i.sstatic.net/homVX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/homVX.png" alt="enter image description here"></a></p> <p>I am told that as the error converges to a small value, the adaptive filter coefficients w[k] will indeed represent the unknown system h[k]. Now, that doesn't make sense to me, since the noise n[k] is being used in the error calculation.</p> <p><strong>Won't the adaptive filter coefficients w[k] now represent the unknown system coefficients h[k] AND the noise n[k]?</strong></p> Answer: <p>Because an LMS estimator will, over time, "average out" uncorrelated zero-mean noise. It's pretty much in the name.</p> <p>But yes, you're right, there <em>is</em> a noise component in the estimate; one of the qualities of an estimator is how little the noise variance influences the estimate variance after a given length of observation.</p> <p>That's the case, however, for <em>all</em> estimators: you measure signal + noise, and you estimate parameters from that. The parameters <em>must</em> be somewhat noisy; otherwise, there's something broken with your noise model.</p>
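<p>This averaging-out behavior can be sketched numerically: run LMS system identification with additive zero-mean noise in the desired signal, and the converged weights land close to h[k], with only a small residual variance (the misadjustment). The system, step size, and noise level below are arbitrary illustration values (Python/NumPy):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
h = np.array([1.0, 0.5, -0.25])            # unknown system h[k] (made up)
taps, mu, n_iter = 3, 0.02, 20000
sigma_v = 0.1                              # additive measurement noise n[k]

w = np.zeros(taps)                         # adaptive filter weights w[k]
x = rng.standard_normal(n_iter + taps)
for i in range(n_iter):
    u = x[i:i + taps][::-1]                # tap-delay input vector
    d = h @ u + sigma_v * rng.standard_normal()  # noisy desired signal
    e = d - w @ u
    w += mu * e * u                        # LMS update

# Zero-mean noise averages out of the mean of w, leaving only a small
# variance (misadjustment) around the true h:
weight_error = np.abs(w - h).max()
```

<p>Shrinking the step size mu trades slower convergence for a smaller residual weight variance, which is the noise-variance-versus-observation-length trade-off the answer describes.</p>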
https://dsp.stackexchange.com/questions/53924/system-identification-using-lms-adaptive-filter
Question: <p>The quadratic performance surface of an adaptive filter is a paraboloid. Its minimum can be found wherever the gradient is zero. However, since there are two types of paraboloids (elliptical and hyperbolic), is there a way to tell if the minimum detected is a global minimum or just a saddle point?</p> Answer: <p>The quadratic surface is determined by the autocorrelation matrix of the data, which is always positive definite or positive semi-definite. This means that any stationary point is always a minimum. In the worst case, this minimum is not unique if the matrix is singular, but it can never be a saddle point.</p>
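<p>The positive semi-definiteness is structural: estimating R as an average of outer products of tap-delay vectors guarantees non-negative eigenvalues, so the quadratic surface can never be the saddle-shaped (hyperbolic) paraboloid. A quick numerical check (Python/NumPy; the white-noise input and dimensions are arbitrary illustration values):</p>

```python
import numpy as np

rng = np.random.default_rng(4)
taps, n = 6, 5000
x = rng.standard_normal(n + taps)          # arbitrary input data

# R as the average outer product of tap-delay vectors: PSD by construction,
# since v' R v = mean((U v)^2) >= 0 for any direction v.
U = np.array([x[k:k + taps][::-1] for k in range(n)])
R = U.T @ U / n

eigvals = np.linalg.eigvalsh(R)            # all >= 0 (up to round-off)
min_eig = eigvals.min()
```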
https://dsp.stackexchange.com/questions/23143/adaptive-filter-gradient-descent
Question: <p>I have 3 sensor inputs: $a(t)$, $b(t)$ and $c(t)$. I want to design a filter such that the weighted linear combination of the three is always a constant. Kind of like:</p> <p>$$w_1(t)a(t) + w_2(t)b(t) + w_3(t)c(t) = k$$</p> <p>So from my undergrad modules I think I need an adaptive filter. I can perform training to find $k$ and the filter weights at initialization. </p> <p>May I know what is the best filter to implement in my case? $a(t)$, $b(t)$ and $c(t)$ are not independent.</p> <p>Kelvin </p> Answer: <p>What I was getting at in the comments above is that the linear system $w_1(t)a(t) + w_2(t)b(t) + w_3(t)c(t) = k$ has an infinite number of solutions, so you need to state some criterion that allows you to choose a unique solution. I think you have pointed out a constraint that is worth examining.</p> <p>The idea: note that the linear equation I gave above is the <a href="http://en.wikipedia.org/wiki/Plane_%28geometry%29#Point-normal_form_and_general_form_of_the_equation_of_a_plane" rel="nofollow">equation for a plane in $\mathbb{R}^3$</a>:</p> <p>$$w_1(t)a(t) + w_2(t)b(t) + w_3(t)c(t) = k$$</p> <ul> <li><p>This defines a plane in three-dimensional Euclidean space. </p></li> <li><p>$\left[a(t), b(t), c(t) \right]$ is a vector that is normal to the plane. 
</p></li> <li><p>The collection of all vectors in $\mathbb{R}^3$ of the form $\left[w_1(t),\ w_2(t),\ w_3(t)\right]$that satisfy the above equation are points on the plane.</p></li> <li><p>Since you said that $w_1(t), w_2(t), w_3(t)$ don't change much over a short time duration, then you can assume that the point $\left[w_1(t+\Delta t),\ w_2(t+\Delta t),\ w_3(t+\Delta t)\right]$ should be geometrically close to the point at the previous time instant, $\left[w_1(t),\ w_2(t),\ w_3(t)\right]$.</p></li> </ul> <p>So, the algorithm would look something like this:</p> <ul> <li><p>Initialize your algorithm by finding $k$, which you said you can do.</p></li> <li><p>Solve for the initial weights $w_1(t), w_2(t), w_3(t)$, which you said you can do.</p></li> <li><p>On subsequent time steps, measure $a(t)$, $b(t)$, and $c(t)$. This defines the plane in $\mathbb{R}^3$ that the filter weight vector can possibly lie upon.</p></li> <li><p>Find the point on the plane that is closest to the filter weights from the previous iteration. Use this point as the new vector of filter weights.</p></li> <li><p>Repeat.</p></li> </ul> <p>I'm not sure if this will give the desired effect or not (as your inquiry is light on details), but it has an intuitive geometric explanation. It might be worth a try.</p>
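<p>The "find the closest point on the plane" step of the algorithm has a closed form: project the previous weight vector onto the plane $a \cdot w = k$ along the plane's normal $a$. A minimal sketch of one update in Python/NumPy (the sensor readings and $k$ are made-up numbers):</p>

```python
import numpy as np

def update_weights(w_prev, a, k):
    """Closest point on the plane a . w = k to w_prev: an orthogonal
    projection of the previous weights along the plane's normal a."""
    a = np.asarray(a, dtype=float)
    w_prev = np.asarray(w_prev, dtype=float)
    return w_prev + (k - a @ w_prev) / (a @ a) * a

# One update step with made-up sensor readings a(t) = [a, b, c]:
w_prev = np.array([0.2, 0.5, 0.3])
a_t = np.array([1.5, -0.4, 2.1])
k = 1.0
w_new = update_weights(w_prev, a_t, k)   # the constraint a_t . w_new = k holds again
```

<p>Because the correction is purely along the normal direction, this is the minimum-distance move onto the new plane, matching the "weights change slowly" assumption.</p>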
https://dsp.stackexchange.com/questions/17617/adaptive-filter-weight-adjustment
Question: <p>I studied a bit about adaptive filter on internet and found that its a special filter which keep on updating its filter value as soon as it proceeds. It finds out the difference between input and output and using the error function and previous coefficients finds out the new filter coefficients.</p> <p>But this doesn't make any sense. It always tries to minimize the difference between input and output. So, how is it of any use, if it tries to pass all the signals as it is.</p> <p>Can anyone explain me how it is being used in real day applications.</p> <p>It will also be good if you can help me through some links which could help me in implementation of adaptive digital filter.</p> <p>please comment if I am unclear in expressing my doubt ! </p> Answer: <p>The key concept that you are missing is that you are not just minimising the difference between input and output signals. The error is often calculated from a 2nd input. Just look at the <a href="http://en.wikipedia.org/wiki/Adaptive_filter#Example_application" rel="noreferrer">Wikipedia example related to the ECG</a>.</p> <p>The filter coefficients in this example are recalculated to change the notch frequency of a notch filter according to the frequency extracted from the mains signal. One could use a static notch filter, but you would have to reject a wider range of frequencies to accommodate the variability in the mains frequency. The adaptive filter follows the mains frequency and so the stop band can be much more narrow, thus retaining more of the useful ECG information.</p> <p>EDIT:</p> <p>I have looked at this again and I think I understand your question a little better. The LMS algorithm needs an error term in order to update the filter coefficients. In the ECG example that I paraphrase above, I give the error term as a second input from a mains voltage. Now I'm guessing that you are thinking, "Why not just subtract the noise from the signal-plus-noise to leave the signal?" 
This would work fine in a simple <em>linear</em> system. Even worse, most examples given online tell you (correctly but confusingly) that the error term is calculated from the difference between the desired signal and the output of the adaptive filter. This leaves any reasonable person thinking "If you already have the desired signal, why bother doing any of this!?". This can leave the reader lacking motivation to read and comprehend the mathematical descriptions of adaptive filters. However, the key is in <a href="http://www.ece.mcmaster.ca/faculty/reilly/coe4tl4/adaptive%20filters%20Scot%20Douglas.PDF" rel="noreferrer">section 18.4 of Digital Signal Processing Handbook</a>, Ed. Vijay K. Madisetti and Douglas B. William. </p> <p>where: </p> <ul> <li>x=input signal, </li> <li>y=output from filter, </li> <li>W=the filter coefficients, </li> <li>d=desired output, </li> <li>e=error</li> </ul> <blockquote> <p>In practice, the quantity of interest is not always d. Our desire may be to represent in y a certain component of d that is contained in x, or it may be to isolate a component of d within the error e that is not contained in x. Alternatively, we may be solely interested in the values of the parameters in W and have no concern about x, y, or d themselves. Practical examples of each of these scenarios are provided later in this chapter.</p> <p>There are situations in which d is not available at all times. In such situations, adaptation typically occurs only when d is available. When d is unavailable, we typically use our most-recent parameter estimates to compute y in an attempt to estimate the desired response signal d.</p> <p>There are real-world situations in which d is never available. In such cases, one can use additional information about the characteristics of a “hypothetical” d, such as its predicted statistical behavior or amplitude characteristics, to form suitable estimates of d from the signals available to the adaptive filter. 
Such methods are collectively called blind adaptation algorithms. The fact that such schemes even work is a tribute both to the ingenuity of the developers of the algorithms and to the technological maturity of the adaptive filtering field.</p> </blockquote> <p>I will keep building on this answer when I get the time, in an attempt to improve the ECG example. </p> <p>I found this set of lecture notes to be particularly good also: <a href="http://www.commsp.ee.ic.ac.uk/~mandic/ASP_Slides/ASP_Lecture_7_Adaptive_Filters_2015.pdf" rel="noreferrer">Advanced Signal Processing Adaptive Estimation and Adaptive Filters</a> - Danilo Mandic</p>
https://dsp.stackexchange.com/questions/1572/what-does-an-adaptive-filter-do
Question: <p>I want to mention upfront that I'm not very experienced in this field.</p> <p>I have a signal <span class="math-container">$u(k)$</span> that I get from a black-box simulation (sampled irregularly). The signal looks like this:</p> <p><a href="https://i.sstatic.net/miL0y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/miL0y.png" alt="enter image description here" /></a></p> <p>The blue signal has small high-frequency oscillations that I want to remove. The orange curve was obtained by fitting a polynomial on the whole dataset.</p> <p>My goal is to obtain a smooth estimate of the blue signal in real time (streaming data) and then compute its derivative. Is it possible to obtain something as smooth as the orange curve?</p> <p>If so, can you suggest some commonly used methods?</p> Answer: <p>You may have a few options:</p> <ol> <li>Online Least Squares<br /> You may use <a href="https://dsp.stackexchange.com/a/56670/128">Sequential Least Squares</a> (MATLAB code available in the link).<br /> This will give you exactly what you got using the polynomial model.<br /> Since it is a polynomial model, you will be able to calculate the derivative online as well.</li> <li><a href="https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter" rel="nofollow noreferrer">Savitzky-Golay Filter</a><br /> These filters with pre-defined coefficients approximate the polynomial least-squares solution locally.<br /> They also have a variant to calculate the derivative online.<br /> Since they are online, they are even more sensitive to outliers (locally) than the global least squares.</li> <li><a href="https://dsp.stackexchange.com/questions/21598">Kalman Filter</a><br /> You may use the Kalman filter to have an online estimation of both the smooth curve and its local derivative.</li> </ol>
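<p>As a concrete illustration of option 2, here is a Savitzky-Golay smoothing-plus-derivative sketch in Python/SciPy. The trend, oscillation, window length, and polynomial order are all made-up illustration values; for truly streaming data, the same local polynomial fit would be applied causally to a trailing window:</p>

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 10, 1001)
clean = 0.05 * t**3 - 0.4 * t**2 + t       # slow trend to recover (made up)
rng = np.random.default_rng(5)
noisy = clean + 0.05 * np.sin(40 * t) + 0.02 * rng.standard_normal(t.size)

win, order = 101, 3                        # window length and polynomial order
smooth = savgol_filter(noisy, win, order)  # local cubic least-squares fit
deriv = savgol_filter(noisy, win, order, deriv=1, delta=t[1] - t[0])
```

<p>A key property: any polynomial of degree up to <code>order</code> passes through the filter unchanged, so the slow trend is preserved while the fast oscillation and noise are attenuated.</p>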
https://dsp.stackexchange.com/questions/88750/adaptive-filtering
Question: <p>I'm having some confusion learning about the LMS Adaptive Filter. I understand that the whole model of adaptive filters relies on the fact that we give it a reference signal against which it keeps comparing the filtered input, and the filter coefficients keep changing until the error between input and reference is zero.</p> <p>Let's say I have a telephone conversation with a time-varying sinusoid added on top that changes frequencies every so often. How exactly do I give it a reference signal? I mean, if I had a reference signal of what I wanted the input to become, I would have just used it? And similarly, if I had a reference signal for my unwanted sinusoid, I could have just subtracted it from my original sound.</p> <p>What am I missing here?</p> Answer: <p>The LMS algorithm and many of the variants of adaptive filters (in the linear-system context) work in the following settings (intuitively):</p> <ol> <li>You have access to 2 signals.</li> <li>One signal is the result of the other one when a linear system is applied.</li> </ol> <p>This sounds really limiting, yet in practice it is powerful and flexible.</p> <p>In the settings you mentioned, the best-known similar problem is the echo-cancellation model.</p> <p>Pay attention that your model can't be formed as a linear system: the connection between the clean signal and the corrupted signal can't be described by a linear system.</p>
https://dsp.stackexchange.com/questions/53427/modelling-unwanted-signal-in-a-lms-adaptive-filter
Question: <p>I got a problem when I was trying to denoise a signal. Actually, it is a simple simulation. The signal is the addition of a step signal (the info I wish to get) and a sinusoidal one (the noise I wish to remove). See below<img src="https://i.sstatic.net/NcyNr.jpg" alt="(a) The noise (b) The signal and (c) Signal + the noise"> However, whichever parameters I tried for the adaptive filter, it simply cannot filter out the sinusoidal noise from the step signal. See a figure like this.<img src="https://i.sstatic.net/REaIX.jpg" alt="Using adaptive filter"></p> <p>Any suggestions will be greatly appreciated! </p> <p>Below is the MATLAB code</p> <pre><code>clear all
close all
%% walking induced noise
t = [1:5000]*1e-2;
f = 0.1;
WalkNoise = 1*sin(2*pi*f.*t); %+1.5*cos(3*pi*f.*t);
WN = WalkNoise + 0.05*randn(size(WalkNoise));
figure
subplot(3,1,1)
plot(t,WN);
title('Noise');
%% signal
h1 = 14;    % height of the signal 1
h2 = 18;    % height of the signal 2
L = 5000;   % total length of the signal
bp = 2500;  % location of break point
x1 = h1*ones(1,bp);
x2 = h2*ones(1,L-bp);
Sig = [x1,x2];
Sig = Sig + 0.1*randn(size(Sig));
subplot(3,1,2)
plot(t,Sig);
title('Signal');
%% walking-induced-noise + signal
NoisySig = Sig + WN;
subplot(3,1,3)
plot(t,NoisySig);
hold on
title('Signal + noise');
%% adaptive filtering
figure
plot(t,NoisySig);
hold on
title('Signal + noise');
mu = 0.001;  % LMS step size.
ha = adaptfilt.lms(20,mu);
[y,e] = filter(ha,WN,NoisySig);
plot(t,e,'r');
legend('Signal+noise','Filtered using Adaptive filter');
</code></pre> Answer: <p>I tried your code: change adaptfilt.lms to adaptfilt.nlms and also decrease the step size to 0.0001. These conditions gave me better results. NLMS is better than LMS, as there is stability in learning the filter coefficients; the LMS algorithm can change the filter coefficients drastically.</p>
https://dsp.stackexchange.com/questions/23229/why-adaptive-filter-does-not-work-in-my-application
Question: <p>I’m trying to understand how the function <a href="http://www.mathworks.com/help/dsp/ref/maxstep.html" rel="nofollow noreferrer">maxstep</a> works.</p> <p>This function is built on the function <code>firwiener</code>, whose input parameters are the length of the adaptive filter and samples of the input signal, and which returns dLam and kurt,</p> <p>and then the step size is calculated as:</p> <pre><code>mumaxmse = 2/(max(dLam)*(kurt+2)+sum(dLam));
</code></pre> <p>However, with some types of input data this function works correctly ("x" in Example 2), and with other types ("x" in Example 1) it does not.</p> <ol> <li>Why does this happen?</li> <li>How can mumaxmse be estimated in Example 1?</li> </ol> <p>Example 1 </p> <pre><code>D = 16;                     % Number of delay samples
b = exp(1j*pi/4)*[-0.7 1];  % Numerator coefficients
a = [1 -0.7];               % Denominator coefficients
ntr = 1000;                 % Number of iterations
s = sign(randn(1,ntr+D)) + 1j*sign(randn(1,ntr+D));  % QPSK signal
n = 0.1*(randn(1,ntr+D) + 1j*randn(1,ntr+D));        % Noise signal
r = filter(b,a,s)+n;        % Received signal
x = r(1+D:ntr+D);           % Input signal (received signal)
L = 32;                     % filter length
[~,~,~,~,~,dLam,kurt] = firwiener(32-1,x,x);  % Third input is 'dummy'
mumaxmse = 2/(max(dLam)*(kurt+2)+sum(dLam));  % Compute MSE Step size bound
</code></pre> <p>Here mumaxmse is NaN because kurt is NaN.</p> <p>====================================== Example 2 </p> <pre><code>x = randn(2000,1)+sqrt(-1)*randn(2000,1);
d = x;
obj = fdesign.lowpass('n,fc',31,0.5);
hd = design(obj,'window');         % FIR filter to be identified.
coef = cell2mat(hd.coefficients);  % Convert cell array to matrix.
x(:,1) = filter(sqrt(0.75),[1 -0.5],sign(randn(size(x,1),1)));
[~,~,~,~,~,dLam,kurt] = firwiener(32-1,x,x);  % Third input is 'dummy'
mumaxmse = 2/(max(dLam)*(kurt+2)+sum(dLam));  % Compute MSE Step size bound
</code></pre> <p>And here mumaxmse is computed correctly (kurt is not NaN).</p> <pre><code>+++++From MATLAB++++++++++++
%MAXSTEP Maximum step size for adaptive filter convergence.
%
%   MUMAX = MAXSTEP(H,X) predicts a bound on the step size to provide
%   convergence of the mean values of the adaptive filter coefficients.
%
%   The columns of the matrix X contain individual input signal sequences.
%   The signal set is assumed to have zero mean or nearly so.
%
%   [MUMAX,MUMAXMSE] = MAXSTEP(H,X) predicts a bound on the adaptive
%   filter step size to provide convergence of the adaptive filter
%   coefficients in mean square.
%
%   See also MSEPRED, MSESIM, FILTER.

%   Author(s): S.C. Douglas
%   Copyright 1999-2009 The MathWorks, Inc.
%   $Revision: 1.6.4.2 $  $Date: 2009/10/16 04:52:21 $

error(nargchk(2,2,nargin,'struct'));

xt = x(:);  % Stack input sequences into one vector

% Compute Step size bound for convergence in the mean
L = length(h.Coefficients);  % Length of coefficient vector
mumax = 2/(mean(xt.*xt)*L);  % Calculate sufficient Step size bound

if (nargout &gt; 1)
    [~,~,~,~,~,dLam,kurt] = firwiener(L-1,x,x);   % Third input is 'dummy'
    mumaxmse = 2/(max(dLam)*(kurt+2)+sum(dLam));  % Compute MSE Step size bound
    if (h.StepSize &gt; mumaxmse/2) || (h.StepSize &lt;= 0)
        % Test h.StepSize and warn if outside reasonable limits
        warning(generatemsgid('InvalidStepSize'), ...
            ['Step size is not in the range ',...
             '0 &lt; mu &lt; mumaxmse/2: \n',...
             'Erratic behavior might result.']);
    end
end
+++From MATLAB++++++++++++++++++++++++++++++++
function [W,R,P,V,Lam,dLam,kurt] = firwiener(N,x,y)
%FIRWIENER Optimal FIR Wiener filter.
%   B = FIRWIENER(N,X,Y) computes the optimal FIR Wiener filter of order N,
%   given two (stationary) random signals in column vectors X and Y.
%
%   B = FIRWIENER(N,X,Y) where X and Y are matrices, averages over the
%   columns of X and Y when computing the Wiener filter.

%   Author(s): Scott C. Douglas
%   Copyright 1999-2009 The MathWorks, Inc.
%   $Revision: 1.1.4.3 $  $Date: 2009/09/03 04:50:31 $

[ntr,L] = size(x);
r = zeros(2*(N+1)-1,1);  % Initial autocorrelation vector
p = r;                   % Initial cross correlation vector
for k=1:L
    r = r + xcorr(x(:,k),N);         % Calculate (k)th autocorrelation and accumulate
    p = p + xcorr(y(:,k),x(:,k),N);  % Calculate (k)th cross correlation and accumulate
end
R = toeplitz(r(N+1:2*(N+1)-1))/(L*ntr);  % (L x L) input autocorrelation matrix
P = p(N+1:2*(N+1)-1).'/(L*ntr);          % (1 x L) cross correlation vector
W = P/R;
if nargout &gt; 3,
    [V,Lam] = eig(R);  % Find eigenvalue decomposition of R
    dLam = diag(Lam);  % Specify eigenvalue vector
    if nargout &gt; 6,
        kurt = 0;      % Initial kurtosis value
        for i=1:N
            for k=1:L
                xv = filter(V(:,i),1,x(:,k));  % Calculate (k)th eigenvector filtered signal
                kurt = kurt + mean(xv.^4)/mean(xv.^2)^2 - 3;  % Estimate kurtosis value
            end
        end
        kurt = kurt/(L*N);  % Average kurtosis value of eigenvector filtered signals
    end
end
</code></pre> Answer:
https://dsp.stackexchange.com/questions/3554/maximum-step-size-for-adaptive-filter-convergence
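The step-size bounds discussed in the question above reduce to two short formulas. Below is a hypothetical sketch (plain Python, not MathWorks code; function names are mine) of both bounds, taking the input power, filter length, autocorrelation eigenvalues, and averaged excess kurtosis as inputs; it also shows how a NaN kurtosis estimate, the failure seen in Example 1, propagates straight into the bound:

```python
import math

# Hedged sketch of the two bounds that maxstep computes:
#   mean-convergence bound: mumax    = 2 / (E[x^2] * L)
#   MSE bound:              mumaxmse = 2 / (max(dLam)*(kurt+2) + sum(dLam))

def mumax_mean(power, L):
    """Bound for convergence of the coefficient means."""
    return 2.0 / (power * L)

def mumax_mse(eigenvalues, kurt):
    """Bound for convergence of the coefficients in mean square."""
    return 2.0 / (max(eigenvalues) * (kurt + 2.0) + sum(eigenvalues))

# White unit-power input and a 4-tap filter: all eigenvalues are 1, and a
# Gaussian input has excess kurtosis 0, so the MSE bound is 2 / (1*2 + 4) = 1/3.
lam = [1.0, 1.0, 1.0, 1.0]
print(mumax_mean(1.0, 4))            # 0.5
print(mumax_mse(lam, 0.0))           # 0.333...
print(mumax_mse(lam, float("nan")))  # nan -- a NaN kurt poisons the bound
```

The last line mirrors the symptom in the question: any NaN in the kurtosis estimate returned by firwiener makes mumaxmse NaN as well.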
Question: <p>I'm trying to understand how to specify the "desired signal" in adaptive LMS filters such as the following: <a href="http://www.mrtc.mdh.se/projects/wcet/wcet_bench/lms/lms.c" rel="nofollow noreferrer">This one</a>, or <a href="http://read.pudn.com/downloads158/ebook/707037/10%20DSP%20applications%20using%20C%20and%20the%20TMS320C6X%20DSK/Chp07.pdf" rel="nofollow noreferrer">this one</a> page 7, or <a href="http://gplib.sourceforge.net/classgplib_1_1LMSCanceller.html#a6398f704207d3184f5548a23a728dda7" rel="nofollow noreferrer">this one</a>.</p> <ul> <li>It seems like the example there uses it for noise suppression? </li> <li>But why is the desired signal input + noise? </li> <li>Isn't it what we want to get rid of, rather than desire?</li> </ul> <p>I'm trying to use LMS for "arbitrary magnitude response" filtering (as suggested <a href="https://dsp.stackexchange.com/a/31887/16003">here</a>).</p> Answer:
https://dsp.stackexchange.com/questions/31892/desired-signal-in-lms-adaptive-filters
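The structure asked about above becomes clearer in code. In the noise-cancellation configuration, the "desired" input d is the noisy measurement (signal + noise), while the filter input is a noise <em>reference</em> correlated only with the noise; the filter learns to reproduce the noise, so the error e = d − y is the cleaned signal. A self-contained sketch (pure Python; the noise path [0.8, −0.4] and all constants are illustrative assumptions):

```python
import math
import random
random.seed(0)

N, M, mu = 4000, 8, 0.01
signal = [math.sin(2 * math.pi * 5 * 0.002 * k) for k in range(N)]  # wanted tone
ref = [random.gauss(0.0, 1.0) for _ in range(N)]                    # noise reference (filter input)
# The reference reaches the primary sensor through an unknown path [0.8, -0.4]:
noise = [0.8 * ref[k] + (-0.4 * ref[k - 1] if k > 0 else 0.0) for k in range(N)]
d = [signal[k] + noise[k] for k in range(N)]     # "desired" input = signal + noise

w = [0.0] * M
clean = []
for k in range(M, N):
    x = ref[k - M + 1 : k + 1][::-1]             # regressor, most recent sample first
    y = sum(wi * xi for wi, xi in zip(w, x))     # filter's estimate of the noise
    e = d[k] - y                                 # error = cleaned-signal estimate
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    clean.append(e)

# The filter identifies the noise path (w[0] -> 0.8, w[1] -> -0.4) while the
# sine survives in e: the "desired" signal is what the noise estimate is
# subtracted FROM, not what we want to remove.
print(w[0], w[1])
```

This is why the "desired" input is signal + noise: it is the only place the wanted signal appears, and it comes out in the error.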
Question: <p>What is the advantage of Variable step size LMS over Leaky-LMS adaptive filter algorithm? Which one has a better performance?</p> Answer: <p>Variable step size LMS is generally used to improve the speed of convergence or decrease steady-state error. Leaky adaptation is used to combat problems like the potential instability of the filter in a finite-precision implementation. It is closely related to the L2 norm regularization technique and results in continuous downscaling of filter coefficients (hence smoothing the extracted filter). So comparing these two techniques is not that meaningful. In fact, you can easily use both techniques together to achieve all the above advantages together.</p>
https://dsp.stackexchange.com/questions/36664/variable-step-size-lms-vs-leaky-lms-adaptive-filter-algorithm
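To make the comparison in the answer above concrete, here is a hypothetical sketch of the two mechanisms (pure Python; the notation and constants are assumptions, not from a specific paper): leaky LMS adds an L2 shrinkage term to the weight update, while a variable step size rescales mu, shown here in the NLMS style of dividing by the instantaneous input power:

```python
def leaky_lms_step(w, x, e, mu, gamma):
    # w <- (1 - mu*gamma)*w + mu*e*x : continuous downscaling of the taps
    return [(1.0 - mu * gamma) * wi + mu * e * xi for wi, xi in zip(w, x)]

def variable_step(mu, x, eps=1e-8):
    # effective step mu / (eps + ||x||^2) adapts to the input power
    return mu / (eps + sum(xi * xi for xi in x))

# With zero input and zero error, only the leakage acts and the taps shrink:
w = leaky_lms_step([1.0, -1.0], [0.0, 0.0], 0.0, mu=0.1, gamma=0.5)
print(w)  # approximately [0.95, -0.95]
```

The two are orthogonal, matching the answer's point: passing variable_step(mu, x) as the mu argument of leaky_lms_step combines both behaviours.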
Question: <p>I want to create an adaptive filter. Its coefficients have this general shape:</p> <p><img src="https://i.sstatic.net/ZeXvs.jpg" alt="enter image description here"></p> <p>When the input signal for the filter is a sine wave, the filter behaves the desired way if the look-back window is set to a length equal to 1/4th the period of the sine wave. I can somewhat understand why this is so, because a sinusoid is essentially 4 copies of the same piece of data, flipped and mirrored: this becomes obvious if you split it at multiples of pi/2:</p> <p><img src="https://i.sstatic.net/m0Uxk.jpg" alt="enter image description here"></p> <p>When I say "the filter behaves the desired way" I mean that it turns together with the sinusoid:</p> <p><img src="https://i.sstatic.net/tRqdL.jpg" alt="enter image description here"></p> <p>Now the question is: when the adaptive filter is placed on a sinusoid of unknown frequency, what error criterion should it use so that it automatically selects a window length of 1/4th the period of the sinusoid, and behaves in the same nice way as the image above? No matter where the starting point of the calculation is (so also if it does not start calculating at those "pretty" pi/2 multiples).</p> <p>The goal would be to find the right length of the window. To do this, you need to go through lengths of, e.g., 2 to 100, and always calculate the same error function, but on a longer and longer input window (going back from the current period to the left: further and further "back in time").</p> <p>So we take the filter coefficients, multiply them with a piece of sinusoid, and then on this we calculate the error function: for 2 periods, for 3, for 4, etc., all the way up to, e.g., 100.</p> <p>What is the ideal error function to use for this? Some sort of sinusoidal mean squared error type of thing. All help appreciated.</p> Answer:
https://dsp.stackexchange.com/questions/21677/non-standard-error-function-for-adaptive-filter
Question: <p>I am working on a project which requires me to implement an adaptive filter as a predictor. I have just started on adaptive filters and I intend to use the least mean square algorithm for weight adjustment.</p> <p>How can I predict future values from this system?</p> <p>Any help would be appreciated. Thanks. <img src="https://i.sstatic.net/eUmiV.png" alt="Adaptive filter prediction model"></p> Answer: <p>Yes, you can predict future temperatures, based on past temperatures, using adaptive filtering as well.</p> <p>The optimal linear estimation of a WSS random process from its past values, which is known as linear prediction, is given by a Wiener filter structure where the desired response to be estimated is the current sample of the input (the current temperature in your case) and the filter input is the <span class="math-container">$N$</span> past samples of the input (assuming one-step forward prediction of order <span class="math-container">$N$</span>).</p> <p>The LMS adaptive filtering algorithm simply approaches these optimal Wiener predictor coefficients for WSS signals, and for non-WSS signals it tries to remain optimal by tracking them.</p> <p>This prediction mechanism does not depend on the physical origin of the signals but on their statistical characterisation. As long as your temperature data possesses a reasonable degree of correlation within it, the filter will do its best to predict it.</p>
https://dsp.stackexchange.com/questions/54955/can-temperature-data-be-predicted-using-adaptive-filter-such-as-lms-algorithm
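The predictor structure described in the answer above can be sketched in a few lines (pure Python; the AR(1) "temperature" model and all constants are assumptions for illustration). The desired response is the current sample and the filter input is the past sample, so the LMS weight should approach the AR coefficient:

```python
import random
random.seed(1)

# Synthetic correlated data: u[k] = 0.9*u[k-1] + v[k] (an AR(1) process,
# standing in for temperature readings).  The optimal one-step, order-1
# linear predictor weight for this process is exactly 0.9.
u = [0.0]
for _ in range(5000):
    u.append(0.9 * u[-1] + random.gauss(0.0, 0.2))

mu, w = 0.02, 0.0
for k in range(1, len(u)):
    pred = w * u[k - 1]      # predict the current sample from the past one
    e = u[k] - pred          # prediction error drives the adaptation
    w += mu * e * u[k - 1]   # LMS update

print(w)  # should settle near the AR coefficient 0.9
```

Predicting the "future" value at time k then just means evaluating w * u[k] before u[k+1] arrives; higher orders use a vector of past samples instead of a scalar.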
Question: <p>I often run into equations containing the trace of the covariance matrix of an adaptive filter input, but it is not really clear to me what it is.</p> <p>For example, in <a href="http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=902113" rel="nofollow noreferrer">this paper</a> the input covariance matrix is</p> <p>$$\textbf{R} = E\left(\textbf{u}_i^*\textbf{u}_i \right)$$</p> <p>where:</p> <ul> <li><p>$\textbf{u}_i$ is a row vector of the input (it seems to be the regression vector at a given time)</p></li> <li><p>$^*$ denotes Hermitian conjugation (complex conjugation for scalars)</p></li> <li><p>$E$ is not explained, but I would expect it is the expected value</p></li> </ul> <p>Questions:</p> <ol> <li>What does it mean? </li> <li>What does the matrix look like; what is its shape? </li> <li>How do I estimate it from input data $\textbf{U}$, a matrix where one row represents one input vector?</li> </ol> Answer: <p>The covariance matrix is commonly defined as </p> <p>$$\mathbf Q = E\left[ (\mathbf x -\mathbf\mu_{x})(\mathbf x -\mathbf\mu_{x})^*\right]$$ with $\mu$ denoting the mean value, i.e. $\mu_{x}=E\left[\mathbf x\right]$, and $\mathbf x$ being column vectors. The fact that you define the covariance matrix as</p> <p>$$\mathbf{R}_i = E\left[\textbf{u}_i^*\textbf{u}_i \right]$$</p> <p>indicates that your $\mathbf u_i$ have zero mean and are row vectors; that might be a result of the signal model you use, but it's pretty non-standard.</p> <p>Furthermore, the product of a column vector with its Hermitian transpose is inherently Hermitian (symmetric if real).</p> <p>Now, on to estimating that expectation value from multiple observations: Wikipedia <a href="https://en.wikipedia.org/wiki/Estimation_of_covariance_matrices" rel="noreferrer">has an article on estimating covariance matrices</a>, so here's just the gist: the most intuitive estimator is the <em>sample covariance matrix</em>; adapted to your formula, that'd be</p> <p>$$\hat{\mathbf R} = \frac1{N-1}\sum\limits_{i=1}^N \mathbf u_i^*\mathbf u_i$$</p>
https://dsp.stackexchange.com/questions/38161/covariance-matrix-of-an-adaptive-filter-input
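The estimator at the end of the answer above is easy to write out. A minimal sketch (pure Python, real-valued data so conjugation is a no-op) of the sample covariance matrix for row vectors u_i stacked as the rows of U:

```python
def sample_covariance(U):
    # R_hat = 1/(N-1) * sum_i u_i^* u_i, with each u_i a (1 x L) row vector,
    # so each term of the sum is an (L x L) outer product.
    N, L = len(U), len(U[0])
    R = [[0.0] * L for _ in range(L)]
    for u in U:
        for a in range(L):
            for b in range(L):
                R[a][b] += u[a] * u[b]
    return [[R[a][b] / (N - 1) for b in range(L)] for a in range(L)]

U = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three observed (1 x 2) row vectors
print(sample_covariance(U))                # [[1.0, 0.5], [0.5, 1.0]]
```

The result is L x L and symmetric (Hermitian in the complex case), answering the shape question: its dimension is the regressor length, not the number of observations.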
Question: <p>I am trying to make a frequency-domain adaptive filter in Matlab. It uses Matlab's adaptfilt.fdaf to create the filter parameters like step size and to initialize the initial filter weight values. I have then tried to implement the overlap-save frequency-domain adaptive filter algorithm from the paper "<a href="http://users.isy.liu.se/en/rt/fredrik/spcourse/multirate.pdf" rel="nofollow">Frequency-domain and multirate adaptive filtering</a>" by J. J. Shynk. It is also based on the source code of the thisfilter.m script that can be found in "%MATLAB_FOLDER%\R2009b\toolbox\filterdesign\flterdesign\@adaptfilt\@fdaf\thisfilter.m". I used a sine wave as the desired signal and </p> <pre><code>noisy = sinewave.*(rand(1, length(t) ).') * 0.1; </code></pre> <p>as the noisy input. At the end, I plot the difference <code>(output - desired)</code> to see how close the filter output comes to the desired signal. For the above noisy input, if I play around with the step size and other filter parameters, I can make this difference go closer to 0 (although it never actually becomes 0, but sort of oscillates around 0) as I process more and more blocks of the input signal. However, if I then change the input to </p> <p><code>noisy = sinewave + (rand(1, length(t) ).') * 0.1;</code> </p> <p>I just can't make the difference converge towards 0 no matter what I do. Either the difference increases to a very high magnitude (around the range x$10^{50}$) if the step size is too high, or the difference oscillates in a roughly sine-wave pattern, but never seems to go towards 0. Can anyone provide any ideas on how this can be fixed? </p> <p>A few facts that I should point out here:<br> When the noisy signal is <code>noisy = sinewave.*(rand(1, length(t) ).') * 0.1;</code>, the plot of the difference <code>(output - desired)</code> is in the range +0.3 to -0.3 for a good value of step size, that is, one that makes this plot converge and not diverge.
When the noisy signal is <code>noisy = sinewave + (rand(1, length(t) ).') * 0.1;</code> this difference is in the range +4 to -4 when the output does not diverge. </p> <p>If I get some good suggestions on how I can improve this code I will post further details about the source code and output.<br> Here is the current version of the Matlab code. It still has some lines of code that I used for my own troubleshooting, so it's not exactly like Shynk's paper right now, but this version can be compiled. It uses the <code>noisy = sinewave.*(rand(1, length(t) ).') * 0.1;</code> version of the noisy signal. If I get some good suggestions from here I will update the source code as well for further discussion. </p> <pre><code>clc
clear all
close all

% % % Arguments to fdaf % % %
% H = ADAPTFILT.FDAF(L,STEP,LEAKAGE,DELTA,LAMBDA,BLOCKLEN,OFFSET,COEFFS,STATES)
L = 128;        % number of filter coefficients ( = n for best results; N = Block length; default = 10)
                % L is the number of samples by which shifting takes place during every iteration
STEP = 0.6;     % mu; default = 1; corresponds to 2*mu in paper
LEAKAGE = 1;    % filter leakage, default = 1 --&gt; No leakage
DELTA = 1;      % initialize and assign FFT input signal powers; should be 1 if its effect is to be ignored; default = 1
LAMBDA = 1;     % assign averaging factor; should be 1 if its effect is to be ignored; default = 1
BLOCKLEN = 128; % length of block, number of samples read from the input to be filtered at one iteration --&gt; N; should equal L for best efficiency
OFFSET = 0;     % should be 0 to remove its effect; default = 0
COEFFS = zeros(L, 1); % probably initial value for filter coefficients; should be 0; default = 0 array of length L
STATES = zeros(L, 1); % the very first "previous" value for input noisy signal

% % % % % % % % % % Define input signal % % % % % % % % % %
%% Time specifications:
Fs = 8000;               % samples per second
dt = 1/Fs;               % seconds per sample
StopTime = 1.6;          % seconds
t = (0:dt:StopTime-dt)'; % seconds

%% Sine wave:
Fc = 60; % hertz
sinewave = cos(2*pi*Fc*t);
desired = sinewave;
% load rand_seq.dat
rand_seq = (rand(1, length(t) ).') * 0.1;
noisy = sinewave.*rand_seq; % noisy signal

% % % % % % % % % % Creating filter % % % % % % % % % %
h = adaptfilt.fdaf(L,STEP,LEAKAGE,DELTA,LAMBDA,BLOCKLEN,OFFSET,COEFFS,STATES);
x = noisy;
d = desired;

% % % % % % % % % % Applying filter % % % % % % % % % %
ntr = length(x);            % temporary number of iterations
N = h.BlockLength;          % block length
L = h.FilterLength;         % number of coefficients
ntrB = floor(length(x)/N);  % temporary number of block iterations; --&gt; has to be exactly divisible in the original function
y = zeros(size(x));         % initialize output signal vector
e = y;                      % initialize error signal vector
X = zeros(L+N,1);           % initialize temporary input signal buffer
E = zeros(L+N,1);           % initialize temporary error signal buffer
Ef = zeros(L+N,1);          % initialize temporary error signal buffer
nnL = 1:L;                  % index variable used for input signal buffer
nnLpN = N+1:N+L;            % index variable used for input signal buffer
nnNpL = L+1:L+N;            % index variable used for input signal buffer
nnLpNr = L+N:-1:N+1;        % index variable used for coefficient updates
FFTW = h.FFTCoefficients.'; % initialize and assign frequency domain Coefficients
normFFTX = h.Power;         % initialize and assign FFT input signal powers
X(nnLpNr) = h.FFTStates;    % assign input signal buffer
mu = h.StepSize;            % assign step size
ZN = zeros(N,1);            % assign N-dimensional zero vector
mu_k = zeros(L+N, 1);       % use separate mu value for each frequency bin in the fft domain
                            % mu gets multiplied to values in the FFT domain, but each element of mu is a scalar
Pest = zeros(L+N, 1);
lambda = 0.8;
alpha = 1 - lambda;
FFTW_sample_index = zeros(ntrB, 1);

for n=1:ntrB,
    nn = ((n-1)*N+1):(n*N); % index for current signal blocks
    X(nnL) = X(nnLpN);      % shift temporary input signal buffer up
    X(nnNpL) = x(nn);       % assign current input signal vector
    FFTX = fft(X);          % compute FFT of input signal vector; FFTX is the result of an L+N point FFT
    % if n == 1
    %     Pest = (abs(FFTX)).^2;
    % else
    %     Pest = lambda * Pest + alpha * ( (abs(FFTX)).^2 );
    % end
    % mu_k = mu ./ Pest;
    % if mod(n, 30) == 0
    %     % mu = 0.009;
    %     FFTW(:, 1) = 0;
    % end
    mu_k = mu ./( lambda + (abs(FFTX)).^2 );
    Y = ifft(FFTW.*FFTX);   % compute current output signal vector
    y(nn) = Y(nnNpL);       % assign current output signal block
    e(nn) = d(nn) - y(nn);  % assign current error signal block
    % E(nnNpL) = mu*e(nn);  % assign current error signal vector
    E(nnNpL) = e(nn);
    FFTE = fft(E);          % compute FFT of error signal vector
    % normFFTX = bet*normFFTX + ombet*real(FFTX.*conj(FFTX)); % update FFT input signal powers
    % G = ifft(FFTE.*conj(FFTX)./(normFFTX + Offset));        % compute gradient vector
    FFTE_multConjFFTX = FFTE.*conj(FFTX);
    G = mu_k .* FFTE_multConjFFTX;
    G = ifft(G);
    % G = ifft(FFTE.*conj(FFTX));
    G(nnNpL) = ZN;          % impose gradient constraint
    % FFTW = lam*FFTW + mu * fft(G); % update frequency domain coefficients
    % FFTW = FFTW + mu * fft(G);     % modified to FFTW = FFTW + mu_k .* fft(G);
    % FFTW = FFTW + mu_k .* fft(G);
    FFTW = FFTW + fft(G);   % Use this equation if mu_k .* FFTE_multConjFFTX has been used above
    FFTW_sample = abs(FFTW(25));
    % invFFTW_index(n) = sum( invFFTW )/ntrB;
    FFTW_sample_index(n) = FFTW_sample;
    % if n &lt; 20
    %     FFTW = FFTW + fft(G); % Use this equation if mu_k .* FFTE_multConjFFTX has been used above
    % end
end

if isreal(x) &amp;&amp; isreal(d),
    y = real(y); % constrain output signal to be real-valued
    e = real(e); % constrain error signal to be real-valued
end

diff_here = y - desired;
figure(1);
plot(diff_here);
figure(2);
plot(1:ntrB, FFTW_sample_index);
% figure(2);
% plot(1:length(noisy), noisy, 'b', 1:length(y), y/4, 'r');
</code></pre> Answer:
https://dsp.stackexchange.com/questions/9529/adaptive-filter-does-not-converge-for-all-inputs
Question: <p>I have received a review of my work saying that my work (an adaptive filter variant) should be analyzed in transient and steady state before claiming it improves performance.</p> <p>I have done what is (in my opinion) a common analysis:</p> <ol> <li>prediction of a linear system</li> <li>prediction of a non-linear system</li> <li>prediction of real measured non-stationary data</li> </ol> <p>How can I extend my analysis with transient and steady-state experiments? What should the experimental data look like?</p> <p>Thank you in advance for any hint, note, reference or example. I am pretty sure that I am missing something basic.</p> Answer:
https://dsp.stackexchange.com/questions/37100/transient-and-steady-state-analysis-for-adaptive-filter
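One standard way to run the transient/steady-state experiment asked about above is a Monte-Carlo learning curve: average the squared error over many independent runs to estimate J[k] = E[e²(k)], then read convergence speed off the initial decay (transient) and the error floor off the tail (steady state). A hypothetical sketch with an assumed 4-tap system-identification setup (pure Python; all constants are mine):

```python
import random

def lms_run(steps, M=4, mu=0.05):
    h = [0.5, -0.3, 0.2, 0.1]   # unknown system (assumed for the demo)
    w, x = [0.0] * M, [0.0] * M
    sq_err = []
    for _ in range(steps):
        x = [random.gauss(0.0, 1.0)] + x[:-1]          # white excitation
        d = sum(hi * xi for hi, xi in zip(h, x)) + random.gauss(0.0, 0.01)
        e = d - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        sq_err.append(e * e)
    return sq_err

random.seed(2)
runs = [lms_run(400) for _ in range(50)]                       # independent realizations
J = [sum(r[k] for r in runs) / len(runs) for k in range(400)]  # learning curve E[e^2]
transient = sum(J[:50]) / 50    # initial decay: convergence speed lives here
steady = sum(J[-50:]) / 50      # tail: steady-state error floor / misadjustment
print(transient, steady)        # the curve should drop by orders of magnitude
```

Reviewers typically want exactly these two numbers (or the full curve) compared against the baseline algorithm: faster transient decay, and a lower or equal steady-state floor.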
Question: <p>For adaptive filtering, both finite and infinite impulse response (FIR/IIR) filters can be utilized. As an advantage of FIR filters in this context, guaranteed stability is often mentioned, while IIR filters do not share this property (see <a href="https://dsp.stackexchange.com/questions/32129/whats-the-advantage-of-adaptive-iir-filter-against-fir">this related question</a> and its answer).</p> <p>I understand IIR filters are unstable when their impulse response diverges, meaning a transfer function pole lies outside the unit circle (has a radius <span class="math-container">$r&gt;1$</span>).</p> <p>My intuition is that since, in the example of system identification, only stable systems are practical to identify, it would make sense that estimates of such systems' transfer functions were stable as well.</p> <p>My question is:</p> <ul> <li>Why is this not the case? How can the adaptation of IIR filters result in unstable solutions?</li> </ul> <p>Please note that I am not asking about measures to ensure stability in IIR adaptive filtering.</p> Answer: <p>The IIR filter doesn't have to be unstable, but it has the potential of being so; unlike the FIR case, which doesn't even have the potential.</p> <p>One reason for the (potential) instability of an IIR (adaptive) filter is numerical issues due to coefficient quantization. When the poles are close to the unit circle this will be critical. This is especially important if you are using coarse quantization (as in a legacy 8-bit system) or you are pushing the limits of your numerical precision. </p> <p>Furthermore, during the adaptation process, the chaotic behaviour of the input (which is possibly nonstationary) may lead to unbounded updates of the coefficients. When the errors get quite large during a transient stage, the resulting update of the coefficients can also be quite large, driving the filter (depending on the adaptation algorithm) into an unstable region...</p>
https://dsp.stackexchange.com/questions/61159/why-can-adaptive-iir-filters-result-in-unstable-solutions
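A toy illustration of the answer above (an assumed first-order filter, not any specific adaptation algorithm): the recursion y[k] = a·y[k−1] + x[k] has its pole at z = a, so an error-driven update that pushes a past 1 makes the output grow without bound, while an FIR filter's output is always a finite sum of bounded inputs:

```python
def run_iir(a, x):
    # First-order recursive filter y[k] = a*y[k-1] + x[k]; its pole sits at z = a.
    y, out = 0.0, []
    for xk in x:
        y = a * y + xk
        out.append(y)
    return out

step = [1.0] * 30
stable = run_iir(0.9, step)    # |pole| < 1: bounded, approaches 1/(1-0.9) = 10
unstable = run_iir(1.1, step)  # |pole| > 1: geometric blow-up
print(max(stable), max(unstable))
```

Nothing in a plain gradient update constrains a to stay inside the unit circle, which is exactly why the potential for instability exists even when the system being identified is stable.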
Question: <p>I'm trying to filter some motion noise from an ECG signal. To do that I'll try to implement an adaptive filter using the LMS algorithm.</p> <p>I think I have to calculate the MSE of this:</p> <p><code>E[e^2 ] = E[(s + n )^2 ]+ 2E[(s + n)X ]W^T + WE[X^T X ]W^T</code></p> <p>in which <code>s+n</code> is the noised <code>ECG</code>,</p> <p><code>X (n) = [ 1, Accx (n), Accy (n), Accz (n)]</code> are the values from the accelerometer,</p> <p><code>W = [w0, w1, w2, w3]</code> the coefficients.</p> <p><code>Wt +1 = Wt + 2αeX</code>,</p> <p>I'm trying to understand how to calculate <code>alfa</code> and minimise the MSE.</p> <p><a href="http://www.mathworks.com/help/dsp/ref/adaptfilt.lms.html" rel="nofollow">http://www.mathworks.com/help/dsp/ref/adaptfilt.lms.html</a></p> <p>can anyone give me a simpler example? I'm kind of lost here.</p> Answer: <p>This <a href="http://www.scilab.org/" rel="nofollow">scilab</a> script implements a simple LMS adaptive filter.</p> <pre><code>M = 50;
LMS = zeros(M,N);
LMS(:,1) = zeros(M,1);
ERR = zeros(1,N);
y = zeros(1,N);
mu = 0.0005;

for t=M:N,
    Uwindowed = u(t - [0:M-1]');
    y(t) = LMS(:,t)'*Uwindowed.';
    ERR(t) = d(t) - y(t);
    LMS(:,t+1) = LMS(:,t) + mu*Uwindowed.'*ERR(t);
end;
</code></pre> <p>The inputs to it are <code>u(t)</code>, the noisy signal and <code>d(t)</code> the desired signal. For the standard LMS algorithm to work, you need to have an idea of what the noiseless signal looks like.</p> <p>From your equations:</p> <ul> <li>$\alpha$ appears to be similar to <code>mu</code> (or <code>mu/2</code>).</li> <li><code>u</code> is your <code>s+n</code>.</li> <li>I am not sure what your <code>d</code> is.</li> <li>Your equation for the error is, er, erroneous. :-)</li> </ul>
https://dsp.stackexchange.com/questions/9784/mse-in-adaptative-filter
Question: <p>I'm looking to implement a feedback cancellation filter using Wiener Filtering, where an adaptive Wiener filter is used to cancel the feedback occurring in the path between a loudspeaker and a mic (assume PA system). The idea is essentially from this paper: </p> <blockquote> <p>Spriet, Ann, et al. "Adaptive feedback cancellation in hearing aids with linear prediction of the desired signal." IEEE Transactions on signal processing 53.10 (2005): 3749-3763.</p> </blockquote> <p>According to the paper:</p> <ol> <li>Signal delivered at the mic is <span class="math-container">$x[k]$</span>.</li> <li>Signal driving the loudspeaker is <span class="math-container">$u[k]$</span>.</li> <li>The transfer function of the feedback path is <span class="math-container">$F[q]$</span>, so the signal captured at the microphone is <span class="math-container">$y[k] = x[k] + u[k]F[q]$</span>.</li> <li>The feedforward transfer function from microphone to the loudspeaker is <span class="math-container">$G[q]$</span>.</li> </ol> <p>Here is the block diagram that helps put it all together:</p> <p><a href="https://i.sstatic.net/xBrjv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xBrjv.png" alt="Block Diagram from paper."></a></p> <p>In the description, <span class="math-container">$k$</span> is the discrete time index, and <span class="math-container">$q^{-1}$</span> is the unit delay operator (I know these to conventionally be <span class="math-container">$n$</span> and <span class="math-container">$z^{-1}$</span>).</p> <p>The idea is to introduce an adaptive filter <span class="math-container">$\hat{F}[q]$</span> that estimates the feedback path and cancels it from <span class="math-container">$y[k]$</span>. 
There are a couple of things being talked about, but essentially, since <span class="math-container">$x[k]$</span> and <span class="math-container">$u[k]$</span> are correlated, they talk of adding a probe signal <span class="math-container">$r[k]$</span> to the loudspeaker input <span class="math-container">$u[k]$</span>, which helps identify <span class="math-container">$\hat{F}[q]$</span>. <span class="math-container">$r[k]$</span> is apparently usually noise, which I assume helps decorrelate <span class="math-container">$x[k]$</span> and <span class="math-container">$u[k]$</span>.</p> <blockquote> <p>My first problem is: Wouldn't the noise also be output by the loudspeaker? Or is it added to a copy of the signal, not affecting what is fed to the loudspeaker?</p> </blockquote> <p>Alternately, the paper also says that many audio signals can be closely approximated as a low-order AR process:</p> <p><span class="math-container">$x[n] = h[n]*w[n]$</span>, where <span class="math-container">$w[n]$</span> is white noise. This condition is not satisfied for voiced speech or music, in which case a pulse train is suggested.</p> <blockquote> <p>So my bigger question is, why is this noise (white noise) so important in AR modeling, or adaptive filtering? It seems to defeat the purpose to add noise to the signal.</p> </blockquote> <p>Any insight is appreciated.</p> Answer: <p>I believe the point of feeding white noise into the system is for the filter to adapt its coefficients before actually generating the signal <span class="math-container">$x[k]$</span>.
This would mean there are two "operating modes" for the system: coefficient adapting mode (in which white noise, a broadband signal, is used to adapt the filter to the feedback path), and performing mode (where the signal <span class="math-container">$x[k]$</span> is input into the system with the filter already adapted).</p> <p>So the answer to your first question would be that the noise is output by the loudspeaker and follows the whole signal path. The system adapts its coefficients to be similar to <span class="math-container">$F(q)$</span>, and once that process has ended the white noise can be turned off. Afterwards, when generating <span class="math-container">$x[k]$</span> the system will already reduce feedback using the <span class="math-container">$\hat{F}_0(q)$</span> transfer function that was adapted previously, and the coefficients will only change when the forward path <span class="math-container">$G(q)$</span> is modified.</p> <p>The purpose of using white noise (or a pulse train) is to have a broadband signal that has frequency components throughout the whole audible spectrum so that the adapted filter works for any kind of input signal <span class="math-container">$x[k]$</span>. If the system were used without first feeding the broadband signal into it, every time <span class="math-container">$x[k]$</span> presents new frequency components, the coefficients may need to adapt during the performance/speech, which may have audible effects depending on the convergence speed of the algorithm.</p>
https://dsp.stackexchange.com/questions/58956/why-is-white-noise-so-important-in-system-identification-or-adaptive-filters
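The broadband argument in the answer above can be demonstrated with a toy system-identification sketch (pure Python; the 3-tap path F, the probe frequency, and all constants are assumptions, not the paper's system). White-noise excitation recovers every tap of the path; a single sinusoid only constrains the response at one frequency, so LMS converges to taps that cancel the probe but do not match F tap-for-tap:

```python
import math
import random
random.seed(4)

F = [0.6, -0.3, 0.2]  # "true" feedback path (assumed for the demo)

def identify(probe, steps=5000, mu=0.05):
    # LMS identification of F driven by the given probe signal probe(k).
    w = [0.0, 0.0, 0.0]
    x = [0.0, 0.0, 0.0]
    for k in range(steps):
        x = [probe(k)] + x[:2]                    # regressor of past probe samples
        d = sum(fi * xi for fi, xi in zip(F, x))  # mic picks up the filtered probe
        e = d - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

w_noise = identify(lambda k: random.gauss(0.0, 1.0))  # broadband probe
w_sine = identify(lambda k: math.sin(0.3 * k))        # single-frequency probe
print(w_noise)  # all three taps close to F
print(w_sine)   # cancels the 0.3 rad/sample tone, but the taps differ from F
```

With the sinusoid, the input autocorrelation matrix is rank-deficient, so a whole family of tap vectors gives zero error at that frequency; only a broadband probe pins down the full transfer function, which is the point of the white-noise (or pulse-train) adaptation phase.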
Question: <p>Are least square filters, or filters that minimize error energy, the same as least mean square adaptive filters?</p> Answer: <p><strong>TL;DR:</strong> No, they are not necessarily the same.</p> <hr> <p><strong>Gory Details</strong></p> <p>Least squares is just an optimization technique. It is used in a variety of ways.</p> <p>For filter <strong>design</strong> it is used to select that realizable filter $H_r(e^{j\omega})$ that most closely matches, in the least squares sense, the ideal required filter response $H_i(e^{j\omega})$: $$ H_r(e^{j\omega}) = \arg \min \parallel H_r - H_i \parallel_2 $$ where $\parallel \cdot \parallel_2$ is the 2-norm or least-squares norm.</p> <p>This sort of filter $H_r$ is not adaptive. That is, it doesn't change once it has been designed.</p> <p>Adaptive filters may also use the least squares criterion, but in a different way: as part of the <strong>adaptation step</strong>.</p> <p>Adaptive filters start off with initial filter coefficients $\vec{w}_o[0]$ and then use an update: $$ \vec{w}_o[n] =\vec{w}_o[n-1] + \mu g[n-1] $$ where $\mu$ is the step-size and $g$ is the gradient of the least squared error surface in the direction of the minimum (from our current "location" of $\vec{w}_o[n-1]$).</p> <p>Here, $g$ is determined by our error criterion: least squares. This means: $$ \parallel \vec{w}_{\tt opt} - \vec{w}_o \parallel_2 $$ where $ \vec{w}_{\tt opt}$ is the unknown optimal (minimizing) solution.</p>
https://dsp.stackexchange.com/questions/42192/are-all-least-square-filters-adaptive
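The distinction drawn in the answer above can be shown side by side. A hedged sketch (pure Python, a scalar "filter" and made-up data): a one-shot least-squares fit solves the normal equation once and never changes, while LMS iterates a stochastic-gradient step toward the same least-squares minimizer:

```python
import random
random.seed(3)

n = 2000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
d = [2.0 * xi + random.gauss(0.0, 0.1) for xi in x]   # true scalar weight w* = 2

# Non-adaptive least squares: solve the scalar normal equation once.
w_ls = sum(xi * di for xi, di in zip(x, d)) / sum(xi * xi for xi in x)

# Adaptive LMS: descend the same squared-error surface sample by sample.
w_lms, mu = 0.0, 0.01
for xi, di in zip(x, d):
    e = di - w_lms * xi
    w_lms += mu * e * xi

print(w_ls, w_lms)   # both near 2; only w_lms would track a CHANGING w*
```

Both minimize the same least-squares criterion, but only the second is an adaptive filter: if the true weight drifted mid-stream, w_ls would stay frozen while w_lms would follow it.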
Question: <p>In <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1261952&amp;tag=1" rel="nofollow">this paper</a> it says:</p> <blockquote> <p>The derivation and analysis of NLMS rest upon the usual independence assumptions.</p> </blockquote> <p>It has a footnote:</p> <blockquote> <p>The independence assumptions used in the analysis of adaptive filters are:</p> <ol> <li>sequences $x(k)$ and $w(k)$ are zero mean, stationary, jointly normal, and with finite moments</li> <li>the successive increments of tap weights are independent of one another; and</li> <li>the error and $x(k)$ sequences are statistically independent of one another</li> </ol> </blockquote> <p>I have several problems with that:</p> <ol> <li><p>Who chose these assumptions, and when? Is it something general, or does it just appear in the first publication on this topic?</p></li> <li><p>I think in practice it is not possible to work just with stationary inputs $x(k)$. Also, the input and the adaptive weights will never be zero mean (especially during online usage, when it is impossible to normalize the input data). Also, the error and input are strongly correlated, so how can they be statistically independent? </p></li> <li><p>Is there some other theory dealing with analyzing those more real-life applications where those assumptions cannot be fulfilled?</p></li> </ol> <p>If it is possible to answer, then please answer. If my questions are too stupid, please correct me. Thanks in advance.</p> Answer: <p>I think there is an error in your referenced independence assumption. $w(k)$ should be the update part $\Delta w(k)$, i.e.</p> <p>$w(k+1)=w(k)+\mu \Delta w(k) =w(k)+\mu \frac{x(k)*e(k)}{c+x(k)^Hx(k)}$ </p> <p>The error after convergence is uncorrelated and zero mean. </p> <p>The two other assumptions are correct. </p> <p>If your filter is fast enough you may use it on nonstationary systems, as long as you are able to track the variations.</p>
https://dsp.stackexchange.com/questions/33771/what-is-usual-independence-assumptions-on-adaptive-filters
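For reference, the NLMS update quoted in the answer above looks like this in code (a pure-Python sketch; the regularization constant c and all the values are assumptions for illustration):

```python
def nlms_step(w, x, d, mu=0.5, c=1e-6):
    # w(k+1) = w(k) + mu * e(k) * x(k) / (c + x(k)^H x(k)), real-valued case
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    g = mu * e / (c + sum(xi * xi for xi in x))
    return [wi + g * xi for wi, xi in zip(w, x)], e

# One step with w = 0, x = [1, 0], d = 1: e = 1, update = mu*x/(c + ||x||^2).
w, e = nlms_step([0.0, 0.0], [1.0, 0.0], 1.0)
print(w, e)   # w[0] just under 0.5, w[1] = 0.0, e = 1.0
```

The normalization by the instantaneous input power is what makes NLMS less sensitive to the input scaling that the stationarity assumption worries about: the effective step size shrinks automatically when the input gets loud.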
Question: <p>What is meant by leakage in the case of digital filters? My specific question is about the frequency domain adaptive filter function provided in the Matlab DSP toolkit, accessed as adaptfilt.fdaf. It has a parameter called LEAKAGE, but I am not sure what exactly it represents or how it affects the filter response.<br> The filter created as h = adaptfilt.fdaf can be used as<br> [y, e] = filter(h, x, d)<br> which filters the data in x. Studying the source code of the filter function provided shows how leakage has been used, but the theory on which this filter function is based does not include the leakage component. What does it do and why is it there, if the paper on which this function is based does not have it?</p> Answer: <p>In adaptive filtering, leakage is a stabilization method which may be useful if the covariance matrix is close to singular (i.e. at least one of the eigenvalues is very small), or if there are finite-precision effects in the implementation of the adaptive filter. Leakage changes the update formula such that not only the mean squared error but also the norm of the filter taps is minimized. This prevents unbounded growth of the filter coefficients in cases of numerical ill-conditioning.</p> <p>For you this simply means that you initially use a leakage factor of 1 (i.e. no leakage) if the FDAF works properly with your input signals. If you encounter coefficient drift (large fluctuation about the optimum solution), you can start by slightly decreasing the leakage factor until the coefficient fluctuation becomes sufficiently small. Note that leakage achieves stabilization at the expense of performance degradation, because due to the changed update formula it introduces some bias in the filter taps.</p>
https://dsp.stackexchange.com/questions/9441/what-is-leakage-in-frequency-domain-adaptive-filters
Question: <p>Note: This post was made to aid with adaptive equalizer design.</p> <p>Adapting an FIR filter using algorithms like LMS, RLS, etc. will generally result in updates that are non-symmetric and therefore non-linear phase.</p> <p>When the adaptive FIR filter taps are linear phase, one can synchronize a &quot;desired&quot; signal with the output of the adaptive FIR filter (let's call this output the &quot;actual&quot; signal) by delaying the &quot;desired&quot; signal by the adaptive FIR filter's group delay. An error is calculated as the difference between the &quot;desired&quot; and &quot;actual&quot; signals and fed into an adaptive algorithm to determine the next filter update.</p> <p>My questions are:</p> <ul> <li>What do I do when the adaptive FIR filter taps are non-symmetric?</li> <li>How should the &quot;desired&quot; signal's delay change as the adaptive FIR filter's group delay changes?</li> </ul> <p>Thanks in advance :)</p> Answer:
https://dsp.stackexchange.com/questions/95984/training-a-non-linear-phase-adaptive-filter