spectral analysis
When to use symmetric vs asymmetric (periodic) window functions?
https://dsp.stackexchange.com/questions/95448/when-to-use-symmetric-vs-asymmetric-periodic-window-functions
<p>Libraries like <a href="https://docs.scipy.org/doc/scipy/reference/signal.windows.html" rel="nofollow noreferrer">scipy</a> typically offer constructing window functions in a symmetric or asymmetric flavor. I'm aware of the rule of thumb:</p> <ul> <li>Use symmetric for filter analysis.</li> <li>Use asymmetric for spectral analyses like STFT.</li> </ul> <p>I have never really seen a good explanation of this rule of thumb, and I would be interested where exactly this is coming from.</p> <p>To thoroughly understand it, it may be interesting to address related questions:</p> <ul> <li>What happens if one would do it the other way around?</li> <li>Regarding STFT, does it actually depend on the window length? I.e., is the recommendation &quot;use asymmetric&quot; just a result of using almost exclusively <em>even</em> window lengths (for performance, FFT friendly), and would the recommendation flip when actually using an <em>odd</em> window length?</li> </ul> <hr /> <p>Here are a few examples of where I had seen this &quot;rule of thumb&quot; — perhaps an exaggeration after all:</p> <ul> <li>In the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.hann.html#scipy.signal.windows.hann" rel="nofollow noreferrer">docstrings in scipy</a> for the individual window functions: <em>When True (default), generates a symmetric window, for use in filter design. When False, generates a periodic window, for use in spectral analysis.</em></li> <li>In <a href="https://www.mathworks.com/help/signal/ref/hann.html" rel="nofollow noreferrer">Matlab's equivalent (see the <code>sflag</code> documentation)</a>: <em>&quot;symmetric&quot; — Use this option when using windows for filter design. 
&quot;periodic&quot; — This option is useful for spectral analysis [...]</em></li> <li>In <a href="https://pytorch.org/docs/stable/generated/torch.hann_window.html" rel="nofollow noreferrer">torch's docstring</a>: <em><code>periodic</code> flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like <code>torch.stft()</code>.</em></li> </ul>
<p>It’s <em>very</em> nitpicky, but I’m assuming it has to do with the following considerations:</p> <ul> <li><p>The <em>window method</em> for filter design consists of windowing an ideal infinite-duration filter (such as a <span class="math-container">$\tt{sinc}$</span>) to make it realizable. Most of the time, linear phase is a requirement, and that’s made possible by coefficients that are symmetric about the filter’s midpoint.</p> </li> <li><p>The DFT assumes periodicity. When doing spectral analysis, a periodic window “wraps” around, which means that the window function aligns the beginning and end points of the windowed segment, avoiding discontinuities when the signal is treated as periodic.</p> </li> </ul> <blockquote> <p>What happens if one would do it the other way around?</p> </blockquote> <p>Try a few experiments! I don’t think you’ll see much difference in the spectral analysis case. For filter design, you would see a difference, depending on the characteristics of the impulse response (especially for very short filters).</p>
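A quick way to see the relationship between the two flavors, sketched with scipy (the library mentioned in the question): an N-point periodic window is just the (N+1)-point symmetric window with the duplicate endpoint dropped, so an N-point DFT sees exactly one period.

```python
import numpy as np
from scipy.signal import windows

N = 8
w_periodic = windows.hann(N, sym=False)       # "periodic" flavor, for spectral analysis
w_symmetric = windows.hann(N + 1, sym=True)   # "symmetric" flavor, for filter design

# The periodic window is the symmetric (N+1)-point window with the duplicate
# endpoint removed, so an N-point DFT sees exactly one period of it.
print(np.allclose(w_periodic, w_symmetric[:-1]))  # True
```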
534
spectral analysis
Can spectral density be a complex quantity?
https://dsp.stackexchange.com/questions/54573/can-spectral-density-be-a-complex-quantity
<p>I have a signal (<span class="math-container">$S(t)$</span>) which is the product of a Gaussian (<span class="math-container">$G(t)$</span>) and a random phase function (<span class="math-container">$e^{i\theta(t)}$</span>, here <span class="math-container">$\theta(t)$</span> is a random function), as shown below</p> <p><span class="math-container">$S(t)=G(t)\cdot e^{i\theta(t)}$</span></p> <p>If I calculate the autocorrelation of such a signal (<span class="math-container">$E[S^*(t)S(t-\tau)]$</span>), it turns out to be a complex quantity, and the same goes for the spectral density (the Fourier transform of the autocorrelation function). My questions are the following:</p> <ol> <li>Is the above analysis valid, given that the process is not wide-sense stationary?</li> <li>If the analysis is not valid, is there some way to handle this kind of situation?</li> <li>If the analysis is valid, what does the complex spectral density signify?</li> </ol>
<blockquote> <p>and the same goes for the spectral density (the Fourier transform of the autocorrelation function).</p> </blockquote> <p>No, that's not the case.</p> <p>Since the autocorrelation is a Hermitian-symmetric function for <em>any</em> <span class="math-container">$S$</span>, its Fourier transform is always real.</p> <blockquote> <p>Is the above analysis valid, given that the process is not wide-sense stationary?</p> </blockquote> <p>If the process is not WSS, then you can't just proclaim <span class="math-container">$E[S^*(t)S(t-\tau)]$</span> to be dependent on only one variable (usually, <span class="math-container">$\tau$</span>), and hence, a (1D) Fourier transform doesn't make much sense.</p> <blockquote> <p>If the analysis is not valid, is there some way to handle this kind of situation?</p> </blockquote> <p>Depends! You might want to define/find <em>coherency times</em> and do Short-Time Fourier Transforms within those.</p> <p>Your system, in fact, is just a phase-shifted impulse response – as such, a phase-delay representation, which might be derived from a Frequency Shift-Delay plane, might be more helpful in analyzing things. You'll find such a Frequency Shift-Delay plane in what is called the <em>scattering function</em> in wireless communications, representing the Doppler and path coefficients of a wireless channel in motion.</p> <p>But in your case: is trying to understand the PSD or PSD equivalents really useful? Don't you just want to build a parametric estimator for <span class="math-container">$\mu_G$</span>, <span class="math-container">$\sigma_G^2$</span> and <span class="math-container">$\theta$</span> instead?</p>
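A numerical sanity check of the first point, sketched with numpy (the Gaussian envelope and uniform random phase are stand-ins for the question's $G(t)$ and $\theta(t)$): the circular autocorrelation of any complex sequence is Hermitian symmetric, and its DFT is therefore real (and non-negative).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
t = np.arange(N)

# stand-in for S(t) = G(t) * exp(i*theta(t)): Gaussian envelope, random phase
s = np.exp(-0.5 * ((t - N / 2) / 20) ** 2) * np.exp(1j * rng.uniform(0, 2 * np.pi, N))

# circular autocorrelation r[k] = sum_n conj(s[n]) s[n+k], computed via the FFT
S = np.fft.fft(s)
r = np.fft.ifft(np.abs(S) ** 2)

# r is complex, but Hermitian symmetric: r[N-k] = conj(r[k]) ...
print(np.allclose(r[1:], np.conj(r[1:][::-1])))  # True

# ... so its Fourier transform (the PSD) is real and non-negative
psd = np.fft.fft(r)
print(np.max(np.abs(psd.imag)) < 1e-8)  # True
```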
535
spectral analysis
What is the best way to represent audio visually? (x-post from UX)
https://dsp.stackexchange.com/questions/2744/what-is-the-best-way-to-represent-audio-visually-x-post-from-ux
<p>Original Question: <a href="https://ux.stackexchange.com/q/23040/16006">https://ux.stackexchange.com/q/23040/16006</a></p> <p>I've only taken some basic signal analysis courses, so I might be missing some things.</p> <p><strong>Purely theoretical question:</strong></p> <p>What methods exist for representing audio?</p> <p>What methods could be made for representing audio, more specifically <em>musical audio</em>?</p> <p>So far, I'm aware of:</p> <ul> <li><p>Viewing the waveform (Soundcloud does this), mostly useless except for seeing "loudness"</p></li> <li><p>Spectral analysis (<a href="http://www.youtube.com/watch?v=62_0KIuoqK8" rel="nofollow noreferrer" title="Example">Example</a>), good for seeing frequency and "loudness"</p></li> </ul> <p>Essentially I'm wondering if there is a way one could "see" the notes, beats, and so on of a song, visually.</p> <p>Right off the top of my head I can think of displaying 3 differently colored waves over time representing treble, mid, bass in a soundcloud-like container with the section playing (or moused-over) being magnified, with the surrounding waveforms being compressed into the corners (like a wide-angle lens effect).</p> <p><strong>EDIT:</strong> I don't know where this could be used, this was just born out of my frustration with current audio visualizing technology.</p> <p>I imagine having a 3d graph of a spectral analysis over time (<strong>Ninja Edit:</strong> apparently known as spectrogram) would be the "best" solution since you see everything but it might not be the most elegant and it might not be portable to places like soundcloud.</p> <p>Even current spectrum analysis is hard to decipher (Too low level for images):</p> <p><a href="https://i.sstatic.net/rgPeD.jpg" rel="nofollow noreferrer">FL Studio wave editor</a></p> <p>I'm essentially wondering what might work for casual users, and for people wondering ahead of time how the song will play out.</p>
<p>What a human (or their ear-brain system) perceives in sound is a psychoacoustic phenomenon, and may or may not be exactly related to the actual audio as recorded. E.g., the exact notes, beats and instruments that a human "hears" may be influenced by visual cues, memory of other similar music, and the musical context around the actual sound of the note in question.</p>
536
spectral analysis
Spectral entropy and moments and non stationary signal processing
https://dsp.stackexchange.com/questions/49809/spectral-entropy-and-moments-and-non-stationary-signal-processing
<p>What are spectral entropy and spectral moments? I know what the normal entropy of a signal is. Also, what are some good time-frequency features for the analysis of non-stationary signals?</p>
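Spectral entropy is commonly defined as the Shannon entropy of the power spectrum normalized to sum to one; a minimal sketch (the `spectral_entropy` helper and its parameters are my own illustration, not from a specific library):

```python
import numpy as np
from scipy.signal import welch

# Spectral entropy: Shannon entropy of the PSD normalized to a probability
# distribution. A flat (noise-like) spectrum gives high entropy; a pure
# tone, whose power is concentrated in a few bins, gives low entropy.
def spectral_entropy(x, fs=1.0):
    f, pxx = welch(x, fs=fs, nperseg=256)
    p = pxx / pxx.sum()
    p = p[p > 0]                       # avoid log(0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
n = np.arange(8192)
noise = rng.standard_normal(8192)
tone = np.sin(2 * np.pi * 0.1 * n)

h_noise = spectral_entropy(noise)
h_tone = spectral_entropy(tone)
print(h_noise > h_tone)  # True: noise spreads power across all bins
```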
537
spectral analysis
Hamming window for LPC
https://dsp.stackexchange.com/questions/25747/hamming-window-for-lpc
<p>I am working on a library for generating LPC for speech synthesis. I am currently using a Hamming window for the spectral analysis, which goes in 200 ms blocks over the signal and does the a-to-k conversion.</p> <p>I have read some things online about a technique of overlapping these windows, so it would process the signal in groups like [1-200, 150-350, 300-500, etc.].</p> <p>My question is: how exactly is this done, and more importantly, is there a benefit to overlapping the windows? Will I get better results with my analysis?</p>
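A sketch of the overlapped segmentation described in the question (frame length 200 samples, hop 150, matching the [1-200, 150-350, 300-500] pattern; the numbers are purely illustrative):

```python
import numpy as np

def frames(x, frame_len=200, hop=150):
    """Split x into overlapping analysis frames: frame_len samples, advancing by hop."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

x = np.arange(500)
f = frames(x)
print(f.shape)             # (3, 200): frames start at samples 0, 150, 300
print(f[1][0], f[1][-1])   # 150 349
```

The usual motivation for overlap with a tapered window like the Hamming is that the window de-emphasizes the frame edges, so overlapping frames keeps every part of the signal well represented in at least one analysis frame.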
538
spectral analysis
FFT frequency bin center in R
https://dsp.stackexchange.com/questions/61029/fft-freqency-bin-center-in-r
<p>I'm trying to do a spectral analysis in R. I learned it in Python from Allen Downey's ThinkDSP book.</p> <p>What is the R equivalent of the Python numpy function, numpy.fft.fftfreq?</p> <p>If you provide a window length and spacing, that function returns the frequency bin centers. I've been hunting through R package documentation and StackOverflow. Is there an equivalent function in R?</p>
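For reference, `numpy.fft.fftfreq(n, d)` just evaluates a closed-form mapping, so it is easy to reproduce in R (or any language); a sketch of the formula, checked against numpy:

```python
import numpy as np

def fftfreq_manual(n, d=1.0):
    # Bin k maps to frequency k/(n*d) for the lower half of the spectrum,
    # and to the negative frequency (k - n)/(n*d) for the upper half.
    # This one-liner logic ports directly to R.
    k = np.arange(n)
    k[k >= (n + 1) // 2] -= n
    return k / (n * d)

print(np.allclose(fftfreq_manual(8, 0.1), np.fft.fftfreq(8, 0.1)))  # True
print(np.allclose(fftfreq_manual(7, 0.1), np.fft.fftfreq(7, 0.1)))  # True
```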
539
spectral analysis
Using Fast Fourier Transform to determine musical notes
https://dsp.stackexchange.com/questions/54072/using-fast-fourier-transform-to-determine-musical-notes
<p>Hi guys, I am taking a course in Digital Filters and Spectral Analysis. We were given a piece of coursework/homework, and I have absolutely no idea what to do with it. I come from a Maths background, have never done any signal processing before, and since I am new to the university I don't really have anyone to ask.</p> <p>Problem:</p> <p>In this exercise you are required to use spectral analysis techniques to determine the musical notes played within a short audio sample (with sampling frequency 44.1 kHz). The sample will comprise a short sequence of 5 chords, each comprising 3 or 4 different musical notes played concurrently. Each note comprises a fundamental plus a series of harmonics at multiples of the fundamental frequency. All of the notes in this exercise belong to the 12-note equal temperament scale.</p> <p>Here is some code I have scrambled together so far:</p> <pre><code>[x,fs] = audioread('sample1_va18535.wav');
fs                    % fs is the sampling frequency, usually 44.1 kHz
sound(x,fs);          % Play audio
N  = 4410;            % 0.1 seconds at 44.1 kHz
N1 = 2205;            % 0.05 seconds
n  = N1+1:N1+N;       % 0.05 - 0.15 seconds
xn = x(n,1);          % Select left channel of short clip
sound(xn,fs);         % Play short clip
t = n/fs;             % Time index
plot(t,xn); xlabel('Time (s)'); ylabel('Amplitude');
window = hamming(N);  % Create window
wxn = xn .* window;   % Apply window
Xk = fft(wxn);        % DFT
k = 0:N-1;
f = k*fs/N;           % Frequency in Hz
pause;
plot(f,abs(Xk)); ylabel('Magnitude'); xlabel('Frequency (Hz)');
</code></pre> <p>Now I am pretty sure this is far from done (I am not even sure this code is correct). Would anyone be so kind as to explain how I can determine those frequencies? I know I am required to create a vector of fundamental frequencies and ignore any harmonics.</p> <p>Here is the file: <a href="https://ufile.io/lyge5" rel="nofollow noreferrer">https://ufile.io/lyge5</a></p>
<p>As Robert B.J. has already indicated, a bare-bones FFT analysis is not the recommended method for a professional audio harmonic inspection. Nevertheless it can be very useful in certain cases, one of which is, I think, this one. Be also warned that, as hotpaw2 indicated, with this simplistic approach, false positives might be detected.</p> <p>From your provided file (45000+ samples, 1 second of duration, taken at 16 bits, 44100 Hz), by first plotting it you will notice that there are 5 (almost) equal-length pieces, each about 9000 samples long. These 5 pieces correspond to the 5 chords played and defined in the question, as shown in the plot below:</p> <p><a href="https://i.sstatic.net/moSE2.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/moSE2.gif" alt="enter image description here"></a></p> <p>Now, for simplicity of the analysis, I've only taken the first chord block of 9000 samples and computed its periodogram (<span class="math-container">$\frac{1}{N}|X(\omega)|^2$</span>) plotted against frequency in hertz. The result is the following figure.</p> <p><a href="https://i.sstatic.net/SqsLy.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SqsLy.gif" alt="enter image description here"></a></p> <p>From this spectral plot, you can clearly see the harmonic spikes. What you need to do is find the frequencies corresponding to each of those peaks; i.e., find the abscissa corresponding to each peak. You can use a number of algorithms for (precisely) finding those peak frequencies, but for a basic analysis you don't need scientific accuracy. Here is the set of (approximate) frequencies I've found to exist in this spectral plot:</p> <p><span class="math-container">$$f = \{176, 221, 262, 350, 441, 525, 660, 700, 786, 874, 881, 1048, 1101, 1223, 1309, 1321, 1398, 1541, 1571, 1747, 1762, 1832, 1832, 1981, 2094, 2201, 2356, 2617 \} $$</span> </p> <p>(these are not precise! allow headroom for a few hertz of deviation).
Now you have to fit them into some harmonic families. Probably you will assume that the lowest frequencies belong to the <strong>fundamentals</strong>. After a bit of searching, you may say that one possible organization of this chord is <em>F major</em>, with these <strong>three</strong> notes in it:</p> <ul> <li>F at 174 Hz</li> <li>A at 220 Hz</li> <li>C at 262 Hz</li> </ul> <p>I hope you can see their upper harmonics and can individually discriminate which harmonic belongs to which note. Also note that certain upper harmonics of different fundamentals can fall quite close in frequency.</p> <p>You can continue this analysis for the remaining four chords. The code is below:</p> <pre><code>clc; clear all; close all;

[x,Fs,Nb] = wavread('C:\PathToFile\sample1.wav',[1, 45000]);
figure, plot(x); title('signal x[n] sampled at 44100 Hz, 16 bits');

y = reshape( x, 9000, 5 );   % split into the 5 chord blocks
Y = fft(y,Fs);

figure, plot((1/9000)*abs(Y(:,1)).^2);
title('Periodogram of the first chord block |X_1[k]|^2');
</code></pre> <p>Note that I have found the (approximate) peak frequencies by simple visual inspection. This was partly justified by the basic type of the spectrum.</p>
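To go from measured peak frequencies to note names on the 12-tone equal-temperament scale, you can snap each frequency to the nearest semitone relative to A4 = 440 Hz; a small sketch (the `freq_to_note` helper is my own illustration, not part of the answer's code):

```python
import numpy as np

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def freq_to_note(f, a4=440.0):
    """Snap a frequency in Hz to the nearest 12-TET note name."""
    semitones = int(round(12 * np.log2(f / a4)))  # semitones relative to A4
    midi = 69 + semitones                         # MIDI note number (A4 = 69)
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# the three lowest peaks identified above
print([freq_to_note(f) for f in (176, 221, 262)])  # ['F3', 'A3', 'C4']
```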
540
spectral analysis
Spectral centroid manipulations
https://dsp.stackexchange.com/questions/42452/spectral-centroid-manipulations
<p>So, I've created a simple sound analysis application and one of the features I've implemented is the spectral centroid (as explained here <a href="https://en.wikipedia.org/wiki/Spectral_centroid" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spectral_centroid</a>). In order to get reliable results, we need to set a threshold on our spectral centroid data because the FFT always returns some energy in bins that really don't have any energy.</p> <p>Now, I would like to be able to manipulate the spectral centroid (e.g push the values up or down, which to some extent resembles pitch shifting) and then use an inverse FFT operation to convert the values to waveform data. This way, we'd be able to hear the changes but I'm not sure how to do this.</p> <p>The spectral centroid is calculated by using magnitudes, not raw FFT data. The magnitudes are calculated as given below</p> <pre><code>mag[i] = 10*log10(sqrt((pFFTLReal[i] * pFFTLReal[i] + pFFTLImg[i] * pFFTLImg[i])/fftNorm));//fftNorm is basically a correction factor that depends on a type of window (blackman, hamming etc) </code></pre> <p>So, given all this, how would I change the spectral centroid by, say, 10 percent and see that change in the generated (manipulated) waveform?</p> <p>Is this even possible?</p> <p>Please, if this is not clear enough, I'll try to provide some more information.</p> <p>Thank you</p>
<p>It's been some time since I asked this question, and after some work done on this subject, I think it's time to revisit it. It should be noted that while the spectral centroid, pretty much like any other spectral feature, is calculated using the FFT magnitudes and not raw FFT data, we don't have to manipulate the magnitudes directly. Instead, we modify the FFT data, which in turn affects the magnitudes. This makes the process a lot easier than having to manipulate the magnitudes.</p> <p>The spectral centroid is a single value (per FFT frame), so in the case of the STFT we have as many centroids as we have STFT frames. The process of manipulating the spectral centroid is pretty straightforward. If, for instance, the spectral centroid is 600 Hz and we wish to increase its value by 10% so that the new value is 660 Hz, we would simply rearrange our frequency bins by a certain amount (a bin step). The actual calculation is quite simple:</p> <pre><code>// frequency delta in % (e.g. 10%, 20%, -10%, -20% etc.)
const int binStep = frameLenHalf - frameLenHalf/(1 + (float)freqDelta/100);

float *pCfftLReal = new float[frameLen];
float *pCfftLImg  = new float[frameLen];

// for increasing frequency
for (int j = frameLenHalf-1; j &gt; binStep; j--) {
    pCfftLReal[j] = pFFTReal0[j-binStep];
    pCfftLImg[j]  = pFFTImg0[j-binStep];
}

// for decreasing frequency
for (int j = 0; j &lt; frameLenHalf-1-binStep; j++) {
    pCfftLReal[j] = pFFTReal0[j+binStep];
    pCfftLImg[j]  = pFFTImg0[j+binStep];
}
</code></pre> <p>Needless to say, this process suffers from rounding errors (primarily in calculating the binStep); in general, the coarser the bin resolution (i.e., the wider the bins), the larger the errors. This can be alleviated by using a greater frame length (though for the STFT this is usually undesirable).
Other, more complex approaches are also possible, but I haven't investigated any of these.</p> <p>Also, sometimes the results can be skewed if the amount of frequency shift exceeds the Nyquist frequency while we keep the original sample rate (upsampling would help here).</p> <p>Before we can actually listen to our manipulated sound, an inverse FFT should be carried out on our modified data.</p> <p>What I've described here is just a rough, naive approach to spectral centroid manipulation. It should also be said that this kind of manipulation doesn't retain the original harmonics ratio, given that the frequency delta shift is applied equally to every bin. Keeping the harmonics ratio intact would require calculating a new bin step for every frequency, which is only slightly more complex than the algorithm given above.</p> <p>I've tried both approaches (with a constant bin step and with a changing bin step), and perceptually, the first doesn't really change the original sound (I guess this has to do with higher frequencies affecting timbre more than lower ones, and the first approach changes the higher frequencies less than it does the lower ones).</p> <p>I'm not done with this yet, so I will update this post as soon as I have something interesting to share.</p>
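A minimal numpy sketch of the constant-bin-step idea (my own illustration, not the answer's C++ code): compute the centroid from the magnitudes, shift the complex bins up by a fixed step, and confirm the centroid rises before resynthesizing with the inverse FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = rng.standard_normal(n)       # stand-in signal
X = np.fft.rfft(x)

freqs = np.fft.rfftfreq(n, d=1 / 44100.0)
mag = np.abs(X)

def centroid(freqs, mag):
    return np.sum(freqs * mag) / np.sum(mag)

c0 = centroid(freqs, mag)

# shift every bin up by a constant step, as in the first approach above
step = 5
X_shifted = np.zeros_like(X)
X_shifted[step:] = X[:-step]     # bins move up; the lowest bins become empty
c1 = centroid(freqs, np.abs(X_shifted))

print(c1 > c0)                   # True: pushing energy upward raises the centroid
x_shifted = np.fft.irfft(X_shifted)  # back to a waveform via the inverse FFT
```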
541
spectral analysis
Signal Plus Weakly Stationary Noise
https://dsp.stackexchange.com/questions/17698/signal-plus-weakly-stationary-noise
<p>I was reading the book "Spectral Analysis of Time Series" by Herman Koopmans. On <a href="http://books.google.de/books?id=F09lhyXw4mcC&amp;lpg=PP1&amp;dq=herman%20koopman%20time%20series&amp;pg=PA55#v=onepage&amp;q=A%20Nonstationary%20Process%20with%20a%20Wiener%20Spectrum&amp;f=false" rel="nofollow">Page 55</a>, he explains that a specific type of non-stationary signal, which is the result of adding weakly stationary ergodic noise to a deterministic signal, can be decomposed into Wiener spectra. I wonder why there is a need for the signal to be ergodic, and why weak stationarity is not enough to derive the spectral summation formula for the noise and deterministic parts?</p> <p>More specifically, based on those assumptions he shows that one has: $$C_X(\tau) = \lim\limits_{T\to\infty}\frac{1}{2T}\int^{T}_{-T} X(t + \tau)X(t)\, dt = C_S(\tau) + C_N(\tau), \text{ almost surely}.$$</p> <p>And then $$F_X(A)=F_S(A)+F_N(A)$$ where $F_Z$ is the spectral distribution of the stochastic process $Z(t)$, and $$X(t)=N(t)+S(t).$$</p>
<p>For ergodic processes, time averages (defined by integrals over time) and ensemble averages (defined by expectations with respect to probability distributions) are identical. This means that the autocovariance is the same, no matter if defined by a time integral or by an expectation:</p> <p>$$C_X(\tau)=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{T}X(t+\tau)X(t)dt= E\{X(t+\tau)X(t)\}$$</p>
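A quick numerical illustration (using white Gaussian noise as a stand-in ergodic process): the time average over one long realization matches the ensemble values $C_X(0)=\sigma^2=1$ and $C_X(1)=0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# one long realization of an ergodic process (unit-variance white noise)
x = rng.standard_normal(200_000)

c0 = np.mean(x[:-1] * x[:-1])   # time-average estimate of C_X(0)
c1 = np.mean(x[1:]  * x[:-1])   # time-average estimate of C_X(1)

# both match the ensemble averages (1 and 0) to within sampling error
print(abs(c0 - 1.0) < 0.05, abs(c1) < 0.05)  # True True
```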
542
spectral analysis
Spectrum Analysis using Windowed FFTs
https://dsp.stackexchange.com/questions/1357/spectrum-analysis-using-windowed-ffts
<p>I have a couple of questions regarding windowed FFTs:</p> <ol> <li><p>Why is the noise floor higher with windowed FFTs (according to Wikipedia's spectral leakage page, anyway), when the whole point of windowing is to reduce side lobes?</p></li> <li><p>I realize that different windows are better for different things, but is there a window that is considered to be the best all-around window for spectrum analysis? Alternatively, is there a better way to do spectrum analysis than windowed FFTs? It would have to be a DSP approach (i.e. I can't do an array of analog filters), but within that constraint I am game for different solutions.</p></li> </ol> <p>Thanks for your time in reading this.</p>
<p>A non-rectangular window will remove "noise" from distant bins at the cost of adding more "noise" to the bins immediately adjacent to a narrow-band spectrum peak. The sum of both these spectral leakage effects is greater than zero for a non-rectangular window. So if you count the raised total level of all adjacent bins as noise, then the S/N ratio is lowered.</p> <p>Some people don't care about the bins immediately adjacent to a spectrum peak (their spectral peaks are a priori assumed to be widely spaced; and/or they interpret, interpolate, or phase-vocoder adjust the energy out of those adjacent bins back into the central peak bin), so for those purposes, the reduced far-side-lobe energy means less noise.</p> <p>Another reason for a lower S/N ratio is that windowing of quantized data is an informationally lossy process, and these (re)quantization losses can also be considered a form of noise.</p> <p>"Best" is relative to some weighting of quality metrics, and different users may have very different weightings. If you don't have a set of prioritized design goals for which to optimize a window, then you may not have a strong reason not to just use a von Hann window.</p> <p>Depending on your data source and your needs, using just some windowed FFTs may not even be a good form of spectrum analysis, much less the best possible. Or the opposite.</p>
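The trade-off in the first paragraph can be seen numerically; a sketch with numpy/scipy comparing a rectangular window and a Hann window on a single tone (the bin numbers are purely illustrative):

```python
import numpy as np
from scipy.signal import windows

N = 256
n = np.arange(N)
w = windows.hann(N, sym=False)

# Tone exactly on bin 20: the rectangular window leaks nothing, but the Hann
# window spreads energy into the immediately adjacent bins 19 and 21.
x_on = np.cos(2 * np.pi * 20 * n / N)
rect_on = np.abs(np.fft.rfft(x_on))
hann_on = np.abs(np.fft.rfft(x_on * w))
print(rect_on[21] / rect_on[20] < 1e-10)  # True: no adjacent-bin leakage
print(hann_on[21] / hann_on[20] > 0.4)    # True: ~half the peak level leaks next door

# Tone between bins (20.5 bins): the rectangular window now leaks everywhere,
# while the Hann window keeps distant bins far lower.
x_off = np.cos(2 * np.pi * 20.5 * n / N)
rect_off = np.abs(np.fft.rfft(x_off))
hann_off = np.abs(np.fft.rfft(x_off * w))
print(hann_off[60] / hann_off.max() < rect_off[60] / rect_off.max())  # True
```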
543
spectral analysis
Difference of doing a PSD estimate of data and logarithmic transformed data?
https://dsp.stackexchange.com/questions/60931/difference-of-doing-a-psd-estimate-of-data-and-logarithmic-transformed-data
<p>What is the difference between doing a PSD estimate with data and the same data but which is logarithmically transformed before the estimate? Does it make the data more sinusoidal in nature?</p> <p>An exercise in a book asks this question:</p> <p>"For the lynx data, compare your spectral analysis results from the original data, and the data transformed first by taking the logarithm of each sample and then by subtracting the sample mean of this logarithmic data. Does the logarithmic transformation make the data more sinusoidal in nature?"</p>
<p>The exercise is instructive. Since this is an exercise, I will not do the full problem; I leave that to you.</p> <p>Grabbing the lynx data from</p> <p><a href="https://www.encyclopediaofmath.org/index.php/Canadian_lynx_data" rel="nofollow noreferrer">https://www.encyclopediaofmath.org/index.php/Canadian_lynx_data</a>.</p> <p>The data is counts, and there are no years with a zero count.</p> <p><a href="https://i.sstatic.net/8mvsQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8mvsQ.jpg" alt="enter image description here"></a></p> <p>Putting it into Matlab and taking the log and subtracting the mean of the log:</p> <p><a href="https://i.sstatic.net/MVJ60.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MVJ60.jpg" alt="enter image description here"></a></p> <p>Would you say that the data looks sine-like?</p> <p>This technique works on certain kinds of data. Think of it as a tool; it is covered in a number of basic stats books.</p>
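A minimal sketch of the transformation the exercise asks for (log, then subtract the sample mean), using a hypothetical stand-in for the counts since the real lynx values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(114)  # the lynx series has 114 annual samples

# hypothetical stand-in for the counts: a multiplicative (log-scale) cycle
# loosely mimicking the ~10-year lynx cycle -- NOT the real data
counts = np.exp(6 + 1.5 * np.sin(2 * np.pi * t / 10) + 0.2 * rng.standard_normal(114))

y = np.log(counts)   # log-transform (valid here: no zero counts)
y = y - y.mean()     # then subtract the sample mean

print(abs(y.mean()) < 1e-12)                                   # True: zero-mean
print(np.corrcoef(y, np.sin(2 * np.pi * t / 10))[0, 1] > 0.9)  # True: nearly sinusoidal
```

The point of the transform is that multiplicative fluctuations become additive on the log scale, which is what can make cyclic count data look more sinusoidal.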
544
spectral analysis
Why look at power spectral density for stochastic processes?
https://dsp.stackexchange.com/questions/47740/why-look-at-power-spectral-density-for-stochastic-processes
<p>I have been told that for deterministic signals, it makes sense to look at their respective Fourier transforms/spectra.</p> <p>For stochastic processes, on the other hand, I am supposed to work with the power spectral density for qualitative analysis.</p> <p>Why?</p>
<p>Because a stochastic process itself doesn't <em>have</em> a Fourier transform.</p> <p>That's really all there is to it.</p> <p>You can only transform signals (i.e. functions over a field isomorphic to $\mathbb R$, for example, functions of time). You can't transform a random variable whose individual realizations are such functions!</p>
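What you can do with a single realization is estimate the PSD, whose integral (the process power) is well-defined even though a Fourier transform of the process is not; a sketch using scipy's Welch estimator on white noise:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
x = rng.standard_normal(65536)   # one realization of a unit-variance white process

# averaged-periodogram (Welch) estimate of the one-sided PSD
f, pxx = welch(x, fs=1.0, nperseg=1024)

# the PSD integrates to the process power (here, variance = 1)
total_power = np.sum(pxx) * (f[1] - f[0])
print(abs(total_power - 1.0) < 0.05)  # True
```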
545
spectral analysis
White Gaussian noise analysis deduction
https://dsp.stackexchange.com/questions/71395/white-gaussian-noise-analysis-deduction
<p>I'm stuck in the derivation of the variance of a Gaussian white noise signal in an &quot;integrate-and-dump detector&quot; of a baseband data transmission receiver, where <span class="math-container">$n(t)$</span> is white noise with double-sided power spectral density <span class="math-container">$N_0/2$</span> [W/Hz].</p> <p><a href="https://i.sstatic.net/t4rRW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t4rRW.png" alt="first picture" /></a></p> <p><a href="https://i.sstatic.net/DxGNb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DxGNb.jpg" alt="second picture" /></a></p> <p>I can understand all the steps except when they deduce <a href="https://i.sstatic.net/tU05T.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tU05T.jpg" alt="third image" /></a></p> <p>How do you get to this last expression? Thank you.</p>
<p>They say that <span class="math-container">$n(t)$</span> is white noise with a double-sided power spectral density (PSD) of <span class="math-container">$N_0/2$</span>, i.e., the PSD is given by</p> <p><span class="math-container">$$S_n(f)=\frac{N_0}{2}\tag{1}$$</span></p> <p>The auto-correlation function is the inverse Fourier transform of <span class="math-container">$(1)$</span>, which is</p> <p><span class="math-container">$$R_n(\tau)=E\big\{n(t)n(t+\tau)\big\}=\mathcal{F}^{-1}\big\{S_n(f)\big\}=\frac{N_0}{2}\delta(\tau)\tag{2}$$</span></p>
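As a numerical cross-check of what $(2)$ implies for the integrate-and-dump output: the variance of $\int_0^T n(t)\,dt$ is $N_0 T/2$, which a discrete simulation reproduces (giving each sample the variance $(N_0/2)/\Delta t$ is the standard discrete stand-in for delta-correlated noise):

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 2.0          # so the double-sided PSD N0/2 = 1 W/Hz
T = 1.0           # integration time
dt = 1e-3
n_steps = int(T / dt)

# discrete stand-in for white noise with PSD N0/2: per-sample variance (N0/2)/dt
trials = 2000
noise = rng.standard_normal((trials, n_steps)) * np.sqrt((N0 / 2) / dt)
integrals = noise.sum(axis=1) * dt      # integrate-and-dump output, one per trial

var_est = integrals.var()
var_theory = N0 * T / 2                 # follows from R_n(tau) = (N0/2) delta(tau)
print(abs(var_est - var_theory) / var_theory < 0.1)  # True
```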
546
spectral analysis
Anyone know what algorithm the Spice AC Noise Analysis uses?
https://dsp.stackexchange.com/questions/42488/anyone-know-what-algorithm-the-spice-ac-noise-analysis-uses
<p>Anyone know what algorithm the Spice AC Noise Analysis uses?</p> <p><a href="http://vision.lakeheadu.ca/eng4136/spice/noise_analysis.html" rel="nofollow noreferrer">http://vision.lakeheadu.ca/eng4136/spice/noise_analysis.html</a></p> <p>Is it some spectral modeling synthesis? I.e. that it estimates the main signal using peak detection and subtracts those from the signal in order to get the noise?</p> <p><a href="https://en.wikipedia.org/wiki/Spectral_modeling_synthesis" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spectral_modeling_synthesis</a></p>
<p>Noise analysis in Spice (Berkeley Spice) is done by summing up the power spectral density from every noise source in the circuit. There are a couple of caveats.</p> <ol> <li>The circuit is assumed to be linear. In other words, the circuit is first solved for a specific DC operating point, and then each component's equivalent linear noise sources are modeled as thermal noise sources. Each component's internal resistance produces thermal noise whose frequency spectrum is shaped by the LINEAR response of the circuit.</li> <li>Each source is assumed to be uncorrelated with every other. That means that the power spectral density of the resulting noise is the sum of the power spectral densities of each source individually. This is not necessarily the case when you have matched pairs of transistors that are closely coupled and share phonic coupling.</li> <li>The 1/f noise is a complex parameter model. 1/f noise is difficult to model, but there have been several standard models developed. If you stay away from low frequencies in the analysis, you can avoid dealing with this issue.</li> </ol> <p>See google for more info on noise models, such as <a href="http://web.mit.edu/klund/www/papers/UNP_noise.pdf" rel="nofollow noreferrer">Noise Sources in Bulk CMOS.</a></p>
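Caveat 2 is easy to verify numerically: for uncorrelated sources, powers (variances/PSDs) add, while fully correlated sources add in amplitude, so their power quadruples rather than doubles. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# two independent (uncorrelated) noise sources, e.g. thermal noise of two
# resistors shaped by the linear circuit -- powers add, amplitudes don't
n1 = 0.5 * rng.standard_normal(N)
n2 = 1.5 * rng.standard_normal(N)

total = n1 + n2
print(abs(total.var() - (n1.var() + n2.var())) < 0.05)  # True: variances add

coherent = n1 + n1   # fully correlated: amplitudes add, so power quadruples
print(abs(coherent.var() - 4 * n1.var()) < 1e-9)        # True: exactly 4x
```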
547
spectral analysis
How to increase the spectral resolution?
https://dsp.stackexchange.com/questions/49291/how-to-increase-the-spectral-resolution
<p>My question is about the spectral resolution of a discrete signal. Each analysis block of my signal is made up of 2^n frames (samples) taken at 44.1 kHz.</p> <p>So, when I want to know the spectral resolution, I calculate 44100/number_of_frames. With 2048 frames, my spectral resolution is around 20 Hz. But when I take a look at the bands filtered by an equalizer, the bandwidth in the low frequencies is only around 5 Hz (16 Hz -> 20 Hz -> 25 Hz -> 32 Hz...). How is this possible?</p> <p>I thought about using &quot;zero padding&quot;, but even if it helps me get a better location for each peak of the spectrum analysis, this method doesn't magically increase the spectral resolution.</p> <p>I also thought about increasing the number of frames analyzed. But to get a spectral resolution of 5 Hz with a signal sampled at 44.1 kHz, I would need 8192 frames, and that would represent 185 ms. That's very far from pseudo real-time analysis, and a singer listening to his voice after this analysis while he is singing would hear this &quot;delay&quot;.</p> <p>So, what is the solution?</p> <p>Thank you for all your replies</p>
<p>You seem to have a good grasp of the tradeoffs. When using short-time Fourier analysis like you are, there is a version of the <a href="https://en.wikipedia.org/wiki/Uncertainty_principle" rel="nofollow noreferrer">uncertainty principle</a> at play. Increasing your time resolution (in your case, using a shorter DFT) results in coarser frequency resolution, and vice versa. That is, the <em>time-bandwidth product</em> is a constant.</p> <p>The way to increase your STFT's spectral resolution is to increase the duration of time that the transform covers, as you noted. If you truly need to be able to resolve frequencies that precisely (within a few Hz of one another), then you need to observe them for a long enough period of time to discern them. If you know <em>a priori</em> some characteristics of your signal, and conditions are favorable (i.e. SNR is high enough), then you might be able to get the job done with coarser resolution (and therefore a shorter transform). </p> <p>For instance, if you know that your signal is likely to be a single tone somewhere in a particular band, and you want to know its frequency precisely, then you don't necessarily need to use a really long DFT. Instead, you can use a shorter DFT, then <a href="https://dsp.stackexchange.com/questions/35112/how-to-calculate-a-delay-correlation-peak-between-two-signals-with-a-precision">use peak interpolation techniques to give a sub-bin estimate of where the peak actually lies</a>.</p>
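The peak-interpolation idea in the last paragraph takes only a few lines; a common variant fits a parabola to the dB magnitudes of the three bins around the peak (the window choice and tone placement here are illustrative):

```python
import numpy as np
from scipy.signal import windows

N = 256
true_bin = 20.3                 # tone frequency, in bins (not on a bin center)
n = np.arange(N)
x = np.exp(2j * np.pi * true_bin * n / N) * windows.hann(N, sym=False)

mag = np.abs(np.fft.fft(x))
k = int(np.argmax(mag))         # coarse estimate: the nearest bin

# parabolic interpolation on the dB magnitudes around the peak
a, b, c = 20 * np.log10(mag[k - 1 : k + 2])
delta = 0.5 * (a - c) / (a - 2 * b + c)   # sub-bin offset, in (-0.5, 0.5)
est_bin = k + delta

print(abs(est_bin - true_bin) < 0.05)     # True: well under one bin of error
```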
548
spectral analysis
FMCW radar signal processing: FFT with nonuniform sampling points
https://dsp.stackexchange.com/questions/96225/fmcw-radar-signal-processing-fft-with-nonuniform-sampling-points
<p>I have a behavioral model for a PLL that generates a chirp signal for an FMCW radar. To improve efficiency, the model outputs only the zero-crossing points at the negative edge.</p> <p>From this zero-crossing data, I need to compute the RMS frequency error over a specific frequency range. However, since the sampling points are non-uniform, a straightforward FFT may not be suitable.</p> <p>What would be an effective approach to estimate the RMS frequency error in this case? Are there specific interpolation methods, resampling techniques, or alternative spectral analysis approaches that would work best?</p>
<p>Regardless of where your data comes from, if you have a non-uniformly sampled signal and wish to look at its spectral characteristics, there are a couple of ways to do it:</p> <ul> <li><p>Compute the non-uniform discrete Fourier transform directly (not very efficient): <span class="math-container">$X[f_k] = \sum_{n=0}^{N-1} x(t_n)\cdot e^{-j2\pi t_n f_k}$</span>, where <span class="math-container">$t_n$</span> contains the sample times in seconds, and <span class="math-container">$f_k$</span> is the frequency of interest in Hz.</p> <p>Note that for a single frequency <span class="math-container">$f_k$</span>, this amounts to computing the dot product between your signal and a complex exponential of frequency <span class="math-container">$f_k$</span> sampled at the same times as your signal. If your signal has a large component of frequency <span class="math-container">$f_k$</span>, this dot product will be large. This is straightforward, but possibly too computationally expensive depending on your context.</p> </li> <li><p>Compute the non-uniform discrete Fourier transform using an efficient function such as MATLAB's <a href="https://www.mathworks.com/help/matlab/ref/double.nufft.html" rel="nofollow noreferrer">nufft</a>, or Python's <a href="https://finufft.readthedocs.io/en/latest/" rel="nofollow noreferrer">FINUFFT</a> or <a href="https://github.com/pynufft/pynufft" rel="nofollow noreferrer">pyNUFFT</a> (there are probably others as well).</p> </li> <li><p>Compute the <a href="https://en.wikipedia.org/wiki/Least-squares_spectral_analysis#The_Lomb.E2.80.93Scargle_periodogram" rel="nofollow noreferrer">Lomb-Scargle periodogram</a>. 
MATLAB has the <a href="https://www.mathworks.com/help/signal/ref/plomb.html" rel="nofollow noreferrer"><code>plomb</code> function</a> for this, and Scipy (Python) has <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.lombscargle.html" rel="nofollow noreferrer"><code>lombscargle</code></a></p> </li> <li><p>Interpolate the data to a uniform time grid before computing the FFT.</p> </li> </ul> <p>No doubt there are other methods, but these are the ones I have seen. Which one you choose probably depends on the nature of your data.</p>
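The first option (the direct non-uniform DFT) is short enough to sketch in plain numpy. This toy example is mine, not from the answer; the 40 Hz tone and the random sample times are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 unevenly spaced sample times in [0, 1] s, and a 40 Hz tone
t = np.sort(rng.uniform(0.0, 1.0, 1000))
f0 = 40.0
x = np.cos(2 * np.pi * f0 * t)

# Direct non-uniform DFT: X[f_k] = sum_n x(t_n) * exp(-j 2 pi t_n f_k)
f_k = np.arange(1.0, 100.0, 0.5)               # frequencies of interest, Hz
X = np.exp(-2j * np.pi * np.outer(f_k, t)) @ x

f_peak = f_k[np.argmax(np.abs(X))]             # lands near 40 Hz
```

Each row of the matrix-vector product is exactly the dot product with a complex exponential that the bullet describes; the O(NK) cost is what the NUFFT libraries avoid.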
549
spectral analysis
Analyzing the quality of a music track
https://dsp.stackexchange.com/questions/48027/analyzing-the-quality-of-a-music-track
<p>I have a library of music tracks that I use to DJ with. It's currently about 3000 tracks that I have gathered over the years. Some of them are low-quality rips that I want to get rid of. </p> <p>Currently I am writing a script that will look at the time/size = compression rate. Anything that is below 300kb/s I want to delete. </p> <p>I am wondering if that is a good heuristic to use, or whether I could do something fancier, like looking at a spectral analysis to determine what is low vs. high quality. </p>
<blockquote> <p>Currently I am writing a script that will look at the time/size = compression rate. Anything that is below 300kb/s I want to delete.</p> </blockquote> <p>That's a very bad idea.</p> <p>Just because something compresses well, because it fits the signal model of the compressor well, doesn't mean it's bad quality. Usually, quite the contrary.</p> <p>You give an example in your comments yourself:</p> <blockquote> <p>Im talking about tracks that have lot of clipping</p> </blockquote> <p>Clipping is a nonlinearity, introducing a multitude of tones into the spectrum that shouldn't be there. Since psychoacoustic models and thus lossy compressors usually need to quantize the spectrum in some way, having clipping in your audio means harder-to-compress audio, meaning a larger file size.</p> <p>Generally, if something is hard to compress, a variable-rate codec will increase the file size. If something is easy to compress without much loss, then the file size will stay down. I'm pretty sure you can record a pretty perfect tuning fork tone in high quality and compress it very nicely. </p> <p>Also, unless we're talking early MP3 encoders and such, modern codecs above ca. 160 kb/s are <em>really</em> hard for a human to tell from the uncompressed audio. That's the whole idea of having a good lossy compressor: just lose the information that's irrelevant.</p> <p>Note that I'm not saying you can't generally say that, for example, an MP3 with 64 kb/s will sound bad in almost all cases. It's just that you don't need any intelligence to do that, but can often directly read that from the file's metadata.</p> <blockquote> <p>Like looking at a spectral analysis to determine what is low vs high quality.</p> </blockquote> <p>Nope, you're looking at the reconstructed signal after decompression. Unless you have extensive knowledge of the spectrum of your original audio, there's little information about audible losses in that.
Again, musicians, sound engineers, and audio codecs all strive for the same thing: producing sound that sounds good to the human ear.</p> <p>Also, remember that <em>clipping</em> (or soft clipping, and other similar nonlinearities), as the example you chose, is actually a pretty commonly used <em>tool</em> in the production of music these days. Who are you to tell someone that the recording of their neighbor's cat walking on a keyboard is quality-wise better than, let's say, a Kraftwerk song that uses clipping intentionally? How is a crisp-sounding KPop-girlband song with strongly emphasized treble quality-wise superior to the muddy sound of the 1968 original release of Jimi Hendrix' <em>All along the watchtower</em>? There are whole musical genres that depend on sound mechanics that would signify a lack of recording quality in others. Even within the same genre, quality can't simply be deduced from a rigid description of the signal – compare Shostakovich's <em>The Bolt</em> to other ballet suites of that era.</p> <p>"Good audio quality" in the end is an <em>extremely</em> subjective matter, and I'm afraid without a large-scale database of manually tagged "good" and "bad" examples <em>according to your perception</em>, you won't be able to implement any classifier, be it through neural nets or through more classic statistical approaches.</p>
550
spectral analysis
Example of non-equivalence of the two PSD definitions
https://dsp.stackexchange.com/questions/55449/example-of-non-equivalence-of-the-two-psd-definitions
<p>According to the book <em>Introduction to Spectral Analysis</em> by P. Stoica and R. Moses, the power spectral density (PSD) <span class="math-container">$P(\omega)$</span> can either be defined as the discrete-time Fourier transform (DTFT) of the covariance sequence <span class="math-container">$r(k)$</span>, i.e., <span class="math-container">\begin{align} P(\omega)=\sum_{k=-\infty}^{\infty}r(k)e^{-j\omega k}, \end{align}</span> or alternatively as <span class="math-container">\begin{align} P(\omega)=\lim_{N\to \infty}E\Bigg\{\frac{1}{N}\Bigg|\sum_{n=0}^{N-1}x(n)e^{-j\omega n}\Bigg|^2\Bigg\}, \end{align}</span> and these two definitions are equivalent under the assumption that <span class="math-container">\begin{align} \lim_{N\to \infty}\frac{1}{N}\sum_{k=-(N-1)}^{N-1}|k||r(k)|=0. \end{align}</span> What is an example of a signal for which these two definitions yield different results, i.e., for which the above assumption does not hold?</p>
<p><strong>Work in progress:</strong> wait till I am done before reading (and throwing brickbats!)</p> <p>This question is difficult to answer without getting into a lot of details about basic signal analysis and Fourier transform theory.</p> <p>Because of the way my brain works, I will discuss only real-valued <em>continuous-time</em> <em>deterministic</em> signals, and will get into the <em>stochastic</em> and discrete-time stuff later. The classical Fourier transform theory considers a <em>finite-energy</em> signal <span class="math-container">$x(t)$</span> and defines its Fourier transform as <span class="math-container">$$X(f) = \int_{-\infty}^\infty x(t) \exp(-j2\pi ft)\, \mathrm dt.\tag{1}$$</span> Note that <span class="math-container">$x(t)$</span> necessarily decays away to <span class="math-container">$0$</span> as <span class="math-container">$t \to \pm\infty$</span>; absent this property, the signal cannot have finite energy as we have assumed. </p> <p>The <em>energy spectral density</em> of <span class="math-container">$x(t)$</span> is defined to be <span class="math-container">$|X(f)|^2$</span>, which happens to be the Fourier transform of the <em>autocorrelation</em> function <span class="math-container">$$r_x(t) = \int_{-\infty}^\infty x(\tau)x(\tau+t)\, \mathrm d\tau.\tag{2}$$</span> Note that <span class="math-container">$r_x(0)$</span> equals the finite energy of the signal.</p> <p>Now, all of this is fine and dandy but it doesn't work for <em>power</em> signals, which are signals that have infinite energy but finite (average) <em>power</em>, that is, <span class="math-container">$$\mathcal P_T = \frac{1}{2T}\int_{-T}^T |x(t)|^2\, \mathrm dt, \tag{3}$$</span> which will be recognized as the average power delivered by <span class="math-container">$x(t)$</span> (into a <span class="math-container">$1\Omega$</span> resistor) over the <span class="math-container">$2T$</span>-second interval <span class="math-container">$[-T,T]$</span>, approaches a finite
limit <span class="math-container">$\bar{\mathcal P}$</span> as <span class="math-container">$T \to \infty$</span>. The power signals that everyone is familiar with are <em>periodic</em> signals (generally represented by Fourier <em>series</em>) and to accommodate these, we include Dirac deltas or <em>impulses</em> into our theory and <em>pretend</em> that the right side of Eq. <span class="math-container">$(1)$</span> "converges" to an impulse or a sum of impulses. </p> <blockquote> <p>The Fourier transform of a periodic signal with fundamental frequency <span class="math-container">$f_0$</span> consists of <em>impulses</em> at the <em>harmonics</em> <span class="math-container">$nf_0, n \in \mathbb Z$</span>, of the fundamental frequency. Specifically, if the periodic signal <span class="math-container">$x(t)$</span> has Fourier <em>series</em> <span class="math-container">$\displaystyle\sum_{n=-\infty}^\infty c_n \exp(j2\pi nf_0 t)$</span>, then the Fourier transform <span class="math-container">$X(f)$</span> of <span class="math-container">$x(t)$</span> is <em>defined</em> to be <span class="math-container">$$X(f) = \sum_{n=-\infty}^\infty c_n \delta(f-nf_0),$$</span> where <span class="math-container">$\delta(\cdot)$</span> denotes an impulse or Dirac delta.</p> </blockquote> <p>This has become second nature to us by now and we even incorporate impulses into tables of Fourier transforms and the like, and get into arguments about whether the Fourier transform of an impulse <em>train</em> is another impulse <em>train</em> or not. 
Note that the <em>power spectral density</em> of <span class="math-container">$x(t)$</span> is defined to be <span class="math-container">$S_x(f) = |X(f)|^2$</span> and for the case of periodic <span class="math-container">$x(t)$</span>, we have that <span class="math-container">$$x(t) = \sum_{n=-\infty}^\infty c_n \exp(j2\pi nf_0 t) \implies X(f) = \sum_{n=-\infty}^\infty c_n \delta(f-nf_0)\\ ~\text{and}~ \\|X(f)|^2 = \sum_{n=-\infty}^\infty |c_n|^2 \delta(f-nf_0).$$</span> If you choose to "square" the sum <span class="math-container">$X(f)$</span> to arrive at <span class="math-container">$|X(f)|^2$</span>, please remember that <span class="math-container">$\delta(f-nf_0)\delta(f-mf_0)$</span> is <span class="math-container">$\delta(f-nf_0)$</span> if <span class="math-container">$n$</span> equals <span class="math-container">$m$</span> and <span class="math-container">$0$</span> if <span class="math-container">$n\neq m$</span>. The autocorrelation function of <span class="math-container">$x(t)$</span> is the inverse Fourier transform of the power spectral density and is thus <span class="math-container">$$r_x(t) = \sum_{n=-\infty}^\infty |c_n|^2 \exp(j2\pi nf_0 t)$$</span> which is also a periodic function.</p> <hr> <p>But what if <span class="math-container">$x(t)$</span> is a power signal but is <em>not</em> a periodic signal and so we can't brute-force use Eq. <span class="math-container">$(1)$</span> and hand-wave our way to a power spectral density consisting of impulses only? Well, one way to proceed is to re-consider Eq. 
<span class="math-container">$(3)$</span> and note that <span class="math-container">$\mathcal P_T$</span> can be expressed as <span class="math-container">$$\mathcal P_T = \frac{1}{2T}\int_{-T}^T |x_T(t)|^2\, \mathrm dt = \frac{1}{2T}\int_{-\infty}^\infty |x_T(t)|^2\, \mathrm dt\tag{4}$$</span> where <span class="math-container">$x_T(t)$</span> is defined as <span class="math-container">$$x_T(t) = \begin{cases}x(t), &amp; -T \leq t \leq T,\\0, &amp;\text{otherwise}.\end{cases}\tag{5}$$</span> Note that <span class="math-container">$x_T(t)$</span> is a <em>finite-energy</em> signal no matter how large <span class="math-container">$T$</span> is (as long as <span class="math-container">$T$</span> is finite) and thus it has a Fourier transform <span class="math-container">$X_T(f)$</span>. Note also that <span class="math-container">$x_T(t)$</span> has <em>finite</em> support <span class="math-container">$[-T,T]$</span> while the support of <span class="math-container">$X_T(f)$</span> is the entire frequency axis <span class="math-container">$-\infty &lt; f &lt; \infty$</span>. 
Next, note that Parseval's relation lets us re-write Eq.<span class="math-container">$(4)$</span> as <span class="math-container">$$\mathcal P_T = \frac{1}{2T}\int_{-\infty}^\infty |X_T(f)|^2\, \mathrm df\tag{6}$$</span> leading to <span class="math-container">\begin{align} \bar{\mathcal P} &amp;= \lim_{T \to \infty}\mathcal P_T\\ &amp;= \lim_{T \to \infty} \frac{1}{2T}\int_{-\infty}^\infty |X_T(f)|^2\, \mathrm df\\ &amp;= \int_{-\infty}^\infty \lim_{T \to \infty} \frac{1}{2T}|X_T(f)|^2\, \mathrm df &amp;{\scriptstyle{\text{since}~\bar{\mathcal P}~\text{is finite by assumption.}}}\tag{7} \end{align}</span> But the average power is just the area under the power spectral density curve, that is, <span class="math-container">$$\bar{\mathcal P} = \int_{-\infty}^\infty S_x(f)\, \mathrm df$$</span> and thus the power spectral density of this finite-power signal is <em>defined</em> as <span class="math-container">\begin{align}S_x(f) &amp;= \lim_{T \to \infty} \frac{1}{2T}|X_T(f)|^2\tag{8}\\ &amp;= \lim_{T \to \infty} \frac{1}{2T}\left|\int_{-\infty}^\infty x_T(t)\exp(-j2\pi ft) \,\mathrm dt\right|^2\tag{9}\\ &amp;= \lim_{T \to \infty} \frac{1}{2T}\left|\int_{-T}^T x(t)\exp(-j2\pi ft) \,\mathrm dt\right|^2\tag{10} \end{align}</span> which (except for the expectation operator -- not needed because everything is deterministic here) looks a lot like the second definition of the power spectral density in the OP's question.</p> <p>The autocorrelation function of <span class="math-container">$x(t)$</span>, which is the inverse Fourier transform of the power spectral density <span class="math-container">$S_x(f)$</span> in Eq. 
<span class="math-container">$(8)$</span> is <span class="math-container">\begin{align}r_x(t) &amp;= \lim_{T \to \infty} \frac{1}{2T}\int_{-\infty}^\infty x_T(\tau)x_T(t+\tau) \,\mathrm d\tau\\ &amp;= \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^T x_T(\tau)x_T(t+\tau) \,\mathrm d\tau &amp;{\scriptstyle{\text{since}~x_T(\tau)=0~\text{when }~|\tau|&gt;T}}\\ &amp;= \lim_{T \to \infty} \frac{1}{2T}r_{x_T}(t).\tag{11} \end{align}</span> Note that <span class="math-container">$r_{x_T}(t)$</span> has support <span class="math-container">$[-2T,2T]$</span> but is a poor approximation to <span class="math-container">$r_x(t)$</span> when <span class="math-container">$|t| &gt; T$</span> because the overlap between the support <span class="math-container">$[-T, T]$</span> of <span class="math-container">$x_T(\tau)$</span> and the support of <span class="math-container">$x_T(t+\tau)$</span> is small. </p> <hr> <p>OK, but what about random processes which is what the OP was asking about? Well, the problem is our model for a random process is a collection of random variables <span class="math-container">$\{\mathscr X(t)\colon t\in \mathbb R\}$</span> whereas what we observe with our spectrum analyzers and oscilloscopes is a finite segment, say, <span class="math-container">$x_T(t)$</span> of one sample path <span class="math-container">$x(t)$</span> among the many sample paths of the process. So, for each <span class="math-container">$t \in [-T,T]$</span>, we know the value that the random variable <span class="math-container">$\mathscr X(t)$</span> took on for the one outcome <span class="math-container">$\omega$</span> in the underlying sample space <span class="math-container">$\Omega$</span> but it is quite difficult to get much information about even the basic properties of <span class="math-container">$x_T(t)$</span> (e.g. is <span class="math-container">$x_T(t)$</span> a continuous function? does it have a Fourier transform? etc.) 
from the bald description of the random process as "a collection of random variables". So, let's assume that the random process is <em>wide-sense-stationary</em> and that it satisfies various <em>ergodic theorems</em> that allow for estimation of the power spectral density and the autocorrelation function of the process from the segment <span class="math-container">$x_T(t)$</span> (and its Fourier transform <span class="math-container">$X_T(f)$</span>) which is the only part of the single sample path <span class="math-container">$x(t)$</span> available to us. The <em>Wiener-Khinchin</em> theorem says the power spectral density <span class="math-container">$S_{\mathscr X}(f)$</span> of the process <span class="math-container">$\{\mathscr{X}(t)\}$</span> is given by <span class="math-container">\begin{align} S_{\mathscr X}(f) &amp;= E\left[\lim_{T \to \infty} \frac{1}{2T}\left|\int_{-T}^T \mathscr{X}(t)\exp(-j2\pi ft) \,\mathrm dt\right|^2\right]\tag{12} \end{align}</span> and we can <em>approximate</em> this by <span class="math-container">$$\frac{1}{2T}\left|\int_{-T}^T x(t)\exp(-j2\pi ft) \,\mathrm dt\right|^2 = \frac{1}{2T}\left|\int_{-T}^T x_T(t)\exp(-j2\pi ft) \,\mathrm dt\right|^2.$$</span> It would appear reasonable that we should be able to express the power spectral density <span class="math-container">$S_{\mathscr X}(f)$</span> as the Fourier transform of the process autocorrelation function <span class="math-container">$R_{\mathscr X}(t)$</span> which we can approximate by <span class="math-container">$r_{x_T}(t)$</span>??:</p> <p><span class="math-container">$$S_{\mathscr X}(f) \approx \frac{1}{2T}\int_{-\infty}^\infty r_{x_T}(t)\exp(-j2\pi ft) \, \mathrm dt?? 
\tag{13}$$</span> Well, <span class="math-container">$r_{x_T}(t)$</span> is known only for <span class="math-container">$|t| \leq 2T$</span> and might not be all that great an estimate of the value of <span class="math-container">$R_{\mathscr X}(t)$</span> when <span class="math-container">$|t| \geq T$</span>. So, the Wiener-Khinchin theorem also says that the </p>
551
spectral analysis
Good Continuing Education Course in the Basics of Frequency Analysis
https://dsp.stackexchange.com/questions/55853/good-continuing-education-course-in-the-basics-of-frequency-analysis
<p>All,</p> <p>Are there any good continuing education courses of length 2-3 days to give an engineer a good background in Frequency Analysis. B&amp;K used to teach one, but I don't think that they teach it anymore. I am looking for something that teaches sampling, aliasing, continuous vs. discrete signals, DFT, PSD, windowing, spectral leakage, filtering, etc. </p> <p>Thank you.</p>
552
spectral analysis
Units of a Fast Fourier Transform (FFT) and Spectrogram
https://dsp.stackexchange.com/questions/78188/units-of-a-fast-fourier-transform-fft-and-spectrogram
<p>What are the units of the FFT when doing spectral analysis of a signal?</p> <ol> <li><p>For the above question, the answer could be V or V/Hz for a voltage signal. Which one is right? I would expect the result to be V·s (i.e. V/Hz) because of the dt.</p> </li> <li><p>I used <a href="https://www.mathworks.com/help/signal/ref/pspectrum.html" rel="nofollow noreferrer">the <code>pspectrum</code> function in MATLAB</a> to create a spectrogram image with power spectrum and dB magnitude.</p> </li> </ol> <p>In general, the spectrogram is obtained as the square of the absolute value of the short-time Fourier transform. Also, the power spectral density of a signal is often studied as |FFT|^2. Since it is a density function, does integrating |FFT|^2 over frequency (with the window function applied) express the power of the signal in dB? I have seen different interpretations of power spectrum and power spectral density. <a href="https://people.math.harvard.edu/%7Eknill/teaching/math22b2019/handouts/lecture31.pdf" rel="nofollow noreferrer">Parseval's theorem</a> also means that the energy in the frequency domain and the time domain should be the same. If the dB-scale power spectrum is obtained by integrating |FFT|^2 multiplied by the window function, is this also a power spectrum?</p>
<p>A few things to note here</p> <ol> <li>There are four different types of Fourier Transforms and they all work somewhat differently</li> <li>The FFT is an implementation of the Discrete Fourier Transform (DFT), not the continuous Fourier Transform (FT). The DFT uses sums, the FT uses integrals</li> <li>If your signal is in Volts, the units of the DFT will also be Volts. The units of the FT would be <span class="math-container">$V/Hz$</span>.</li> <li>In order for Parseval's theorem to hold for the DFT, you need to adopt a scaling of <span class="math-container">$1/\sqrt{N}$</span> for both forward and backward transform.</li> <li>The spectral bin power of the FFT is given by the magnitude squared and has the units of <span class="math-container">$V^2$</span></li> <li>The spectral power density is given by the spectral bin power divided by the bin bandwidth and has units of <span class="math-container">$V^2/Hz$</span></li> <li>The physical interpretation is always somewhat complicated. None of the units discussed so far represent actual physical power, intensity or energy (in <span class="math-container">$W$</span>, <span class="math-container">$W/m^2$</span>, or <span class="math-container">$J$</span>). In any real physical situation there is always a second quantity and/or an impedance in play that determines the actual power, etc.</li> </ol>
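Point 4 is easy to check numerically. A quick sketch (not part of the original answer) using numpy's unscaled FFT with the 1/sqrt(N) factor applied by hand:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
x = rng.standard_normal(N)          # "voltage" samples

# Unitary DFT: scale numpy's unscaled forward transform by 1/sqrt(N)
X = np.fft.fft(x) / np.sqrt(N)

# Parseval: total "energy" is then the same in both domains
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2)
```

Without the scaling, `np.fft.fft` puts the whole 1/N factor into the inverse transform, so the two sums would differ by a factor of N.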
553
spectral analysis
Cross Power Spectral Density of Three Signals?
https://dsp.stackexchange.com/questions/94802/cross-power-spectral-density-of-three-signals
<p>I am performing a method of data analysis that requires estimating the CPSD between two measured signals. I actually have three signals, so I typically sum two of them and cross them with the third. Is there an analogous concept to the traditional cross power spectral density but for three signals at once? Or is there a more appropriate way to estimate the cross power spectrum between three individual signals?</p>
<p>Didn't have the chance to write out an answer until now, but wanted to at least provide some mathematical explanation for your options.</p> <p>Let's say you have signals <span class="math-container">$x_{1}(t),x_{2}(t),x_{3}(t)$</span>. What you are describing is computing the cross PSD between <span class="math-container">$x_{1}(t)+x_{2}(t)$</span> and <span class="math-container">$x_{3}(t)$</span>. Because the cross-correlation is linear in its arguments, you are summing the cross PSDs <span class="math-container">$\phi_{13}(\omega)$</span> and <span class="math-container">$\phi_{23}(\omega)$</span>, so you are not getting information about <span class="math-container">$\phi_{12}(\omega)$</span>. The proof is pretty easy: <span class="math-container">\begin{equation} \int_{-\infty}^{\infty}\left[x_{1}(t)+x_{2}(t)\right]x_{3}^{*}(t-\tau)dt = \int_{-\infty}^{\infty}x_{1}(t)x_{3}^{*}(t-\tau)dt + \int_{-\infty}^{\infty}x_{2}(t)x_{3}^{*}(t-\tau)dt \end{equation}</span> Taking the Fourier transform of this cross-correlation over <span class="math-container">$\tau$</span> then gives the cross PSD <span class="math-container">\begin{align} \phi(\omega) &amp;= \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}\left[x_{1}(t)+x_{2}(t)\right]x_{3}^{*}(t-\tau)dt\right]e^{-j\omega \tau}d\tau \\ &amp;= \phi_{13}(\omega) + \phi_{23}(\omega) \end{align}</span></p> <p>To retain information about <span class="math-container">$\phi_{12}(\omega)$</span>, you would have to compute its cross-PSD separately. Another option is to compute a coherence spectrum for each pair, for example, <span class="math-container">\begin{equation} \psi_{12}(\omega) = \frac{\phi_{12}(\omega)}{\sqrt{\phi_{11}(\omega)\phi_{22}(\omega)}} \end{equation}</span> which computes a normalized correlation coefficient.</p> <p>If you want to compute a single transformation encapsulating all three signals, there is something called the bispectral density. 
This is computed as the 2D-FFT of the third cumulant, defined as <span class="math-container">\begin{equation} \Phi(\omega_{1},\omega_{2}) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}x_{1}(\tau_{1})x_{2}(\tau_{2})x_{3}^{*}(\tau_{1}+\tau_{2})e^{-j\omega_{1}\tau_{1}}e^{-j\omega_{2}\tau_{2}}d\tau_{1}d\tau_{2} \end{equation}</span></p> <p>Whether or not this would contain the information you are looking for is application dependent. You might only need the two cross-PSDs, or you might need more. My personal recommendation would be to start with the three individual cross-PSDs, though.</p>
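The linearity point above can be checked numerically. The following sketch is mine, not from the answer; it uses scipy's Welch-based <code>csd</code> estimator, and keeping identical segmentation parameters on every call makes the identity exact to floating-point precision:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 1000.0
n = 4096
x1, x2, x3 = rng.standard_normal((3, n))

kw = dict(fs=fs, nperseg=256)   # identical segmentation for every estimate

# Cross-PSD of the summed signal against x3 ...
f, p_sum = signal.csd(x1 + x2, x3, **kw)

# ... equals the sum of the two pairwise cross-PSDs (linearity of the DFT)
_, p13 = signal.csd(x1, x3, **kw)
_, p23 = signal.csd(x2, x3, **kw)

# phi_12 is not contained in p_sum and must be estimated separately,
# e.g. via the normalized coherence spectrum of the remaining pair
_, c12 = signal.coherence(x1, x2, **kw)
```

Here `p_sum` matches `p13 + p23` bin for bin, which is exactly why the sum-then-cross approach discards the information in the 1-2 pair.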
554
spectral analysis
Cross Power Spectral Density of Unevenly Sampled Data
https://dsp.stackexchange.com/questions/8825/cross-power-spectral-density-of-unevenly-sampled-data
<p>Here's my problem. The input signals $x$ and $y$ have their time values aligned with each other. However, the data are not evenly sampled. I would like to calculate the CPSD of both signals.</p> <p>The solution that comes to my mind is as follows:</p> <ol> <li>$R_{xy}$ = cross correlation of $x$ and $y$ (I'm not sure how to do it with unevenly sampled data)</li> <li>Use the Lomb periodogram to estimate the CPSD from $R_{xy}$</li> </ol> <p>Is there anything I missed? And what is the right way to do it? </p> <p><strong>Update</strong></p> <p>Sam Maloney suggested using interpolation to fill the gaps and produce evenly sampled data. This solution is good. </p> <p>However, the data I obtained may have some missing data / long gaps. Interpolating the data may "predict" the data wrongly and produce an undesired end result. </p> <p>As quoted in <a href="http://www.mpi-hd.mpg.de/astrophysik/HEA/internal/Numerical_Recipes/f13-8.pdf" rel="nofollow" title="Numerical Recipes Chapter 13.8">Numerical Recipes (Spectral Analysis of Unevenly Sampled Data)</a>:</p> <blockquote> <p>However, the experience of practitioners of such interpolation techniques is not reassuring. Generally speaking, such techniques perform poorly. Long gaps in the data, for example, often produce a spurious bulge of power at low frequencies (wavelengths comparable to gaps)</p> </blockquote> <p>This is the reason why I chose the Lomb periodogram over the FFT in <strong>Step 2</strong> without interpolating the data. However, I'm unable to figure out how to solve <strong>Step 1</strong>. Is there any way to calculate the cross correlation of 2 signals without interpolating them? </p>
<p>One way would be to interpolate the signals to produce evenly-spaced samples and then calculate the CPSD as normal.</p>
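A sketch of that approach (mine, not the answerer's): resample both signals onto a common uniform grid with <code>np.interp</code>, then estimate the CPSD with <code>scipy.signal.csd</code>. The grid rate, Welch parameters, and the shared 5 Hz component are illustrative assumptions:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)

# Unevenly sampled but time-aligned signals x and y sharing a 5 Hz component
t = np.sort(rng.uniform(0.0, 10.0, 2000))
x = np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 5.0 * t + 0.7) + 0.1 * rng.standard_normal(t.size)

# 1) Interpolate onto a uniform grid (rate chosen to cover the band of interest)
fs = 100.0
tu = np.arange(t[0], t[-1], 1.0 / fs)
xu = np.interp(tu, t, x)
yu = np.interp(tu, t, y)

# 2) Estimate the CPSD of the now-uniform series as usual (Welch averaging)
f, pxy = signal.csd(xu, yu, fs=fs, nperseg=256)
f_peak = f[np.argmax(np.abs(pxy))]   # lands near the shared 5 Hz component
```

As the question's update warns, this can behave badly across long gaps, where interpolation tends to inject spurious low-frequency power; the Lomb-Scargle route avoids that failure mode.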
555
spectral analysis
Analyze and reproduce sonic screwdriver sound
https://dsp.stackexchange.com/questions/58961/analyze-and-reproduce-sonic-screwdriver-sound
<p>Do you know about Doctor Who and its screwdriver?</p> <p>Well, I'm trying to understand how to replicate <a href="https://web.archive.org/web/20060112064455/http://www.bbc.co.uk/doctorwho/sounds/sonicscrewdriver.mp3" rel="nofollow noreferrer">this sound</a>, but the spectral analysis is way too complicated; just see its spectrogram.</p> <p>I was trying to figure out what kind of sine waves or partials it is made of, but probably I don't have enough experience in this.</p> <p>I'm writing it in Python, but for now I have nothing more than a signal with two sine waves, just to make tests.</p> <p>How can I understand how this sound is made? There is a frequency modulation component which I would not consider right now, because it is probably a steady-state sound which was modulated later.</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np
import sys

from scipy.signal import get_window

import sounddevice as sd
import soundfile as sf

timeLength = 1.0  # seconds
fs = 44100
t = np.arange(0, timeLength, 1.0/fs)
ww = get_window("hann", t.size)

A = .01
phi = np.pi  # phase, radians.
T = 1.0/fs

df = 20
t_circ = 0.1
t_mid = 0.2
f_mid = 3700

f_circ = df*(1.0 - ((t_circ - t_mid)/t_mid)**2.0)**0.5
phase1 = np.cumsum((f_mid + f_circ)*T*np.ones(t.size))
phase2 = np.cumsum((f_mid - f_circ)*T*np.ones(t.size))
circ = np.sin(2.0*np.pi*phase1) + np.sin(2.0*np.pi*phase2)

ss = []
#brown_noise = np.random.wald(1, 0.002, len(t))
for i in range(10):
    f = f_mid + 20*i
    s = A * np.cos(2 * np.pi * f * t + circ)
    ss.append(s)

ss = sum(ss)
ss = ss/max(ss)

plt.plot(ss)
plt.show()

sf.write('out.wav', ss, fs)
</code></pre>
<p><strong>An approximation</strong></p> <p>Actually, the signal is not that complicated in my opinion. However, I am not a sounds guy... Just playing around with it a bit.</p> <p>Interestingly, I have produced a very similar signal (see attached sound file and spectrograms). The spectrogram above is my signal (somewhat tuned to your signal's parameters) and below is the sonic screwdriver (of which Doctor, by the way? ;) )</p> <p>I cannot post the code here, since as I said the parameters are a bit tuned to your signal. What I basically did is to create a signal that is...</p> <ol> <li>A sum of 10 signals. Center frequency f_mid = 3.7 kHz + 20Hz*N, N = 0, 1, ..., 9, just to have some kind of a wobbly jitter in there</li> <li><p>The center frequency is modulated by a "circle" (df: maximum change of the center frequency, t_circ: duration of a circle, t_mid: starting point of the circle in time) with df = 20 Hz, t_circ = 0.1s. The phase of the signal is generated from the constant frequency plus the dynamic modulation and a very simple numerical integration (phase multiplied by an increasing dt-vector and cumsumming over it). To get to a signal's frequency, you would differentiate its phase with respect to time, so this is the inverse process.</p> <pre><code>T = 1.0/44100.0
f_circ = df*(1.0 - ((t_circ - t_mid)/t_mid)**2.0)**0.5
phase1 = np.cumsum((f_mid + f_circ)*T*np.ones(t_circ.shape))
phase2 = np.cumsum((f_mid - f_circ)*T*np.ones(t_circ.shape))
circ = np.sin(2.0*np.pi*phase1) + np.sin(2.0*np.pi*phase2)
</code></pre></li> <li><p>Added some somewhat "Brownian" noise, however it is much below the noise in your signal.</p></li> </ol> <p><strong>Things to explore</strong></p> <ul> <li><p>The "duration" of the "circles" seems to vary slightly. Probably sinusoidal variation is also fine, so the period could be jittered. </p></li> <li><p>Apparently, there is additionally a very broad noise-band. Maybe try some uniform noise with a time-varying windowing? 
</p></li> <li><p>Subtract a bunch of short-term, constant frequency signals from your signal. There is a nearly some kind of a grid visible in the spectrogram.</p></li> </ul> <p><strong>Some code samples</strong></p> <p>I am sorry, in my previous description, I apparently have missed some details. Please find here a code snipped that should be able to produce a basic signal that exhibits the basic "circular" frequency change:</p> <pre><code>import matplotlib matplotlib.use('MacOSX') import matplotlib.pyplot as pp import numpy as np import scipy.io.wavfile import scipy.signal # Export function for convenience def export_wave2(t, signal_l, signal_r, name = "test_sp.wav"): sample_rate = 44100 data = np.stack((signal_l, signal_r)) data = np.transpose(data) scipy.io.wavfile.write(name, sample_rate, data.astype("float32")) # Sampling frequency fs = 44100 T = 1.0/fs # Circle parameters t_circ_max = 0.2 # duration of one "circle" t_circ = np.arange(0, t_circ_max, 1.0/fs) t_mid = t_circ.max()/2.0 f_mid = 3700 # middle frequency of the sound signal df = 400 # amplitude of the frequency modulation, ie frequency will vary between f_mid - fd and f_mid + fd # Create ONE circle f_circ = df*(1.0 - ((t_circ - t_mid)/t_mid)**2.0)**0.5 phase1 = np.cumsum((f_mid + f_circ)*T*np.ones(t_circ.size)) phase2 = np.cumsum((f_mid - f_circ)*T*np.ones(t_circ.size)) circ = np.sin(2.0*np.pi*phase1) + np.sin(2.0*np.pi*phase2) # Repeate the circle N times, ie the signal will be N*t_circ_max long N = 10 circs = np.hstack([circ for k in range(0,N)]) circs = circs / np.abs(circs).max() circs_time = np.arange(0, N*t_circ_max, 1/fs) pp.plot(circs_time, circs) pp.show() export_wave2(circs_time, circs, circs, name = "test_sp.wav") </code></pre> <p>The frequency behaviour is of course not visible directly in the plots, but when a spectrogram is calculated from <em>circs</em>. 
I have loaded the test_sp.wav into Audacity for this.</p> <p><a href="https://filebin.net/qrg1ken28bpm2tay/screwdriver.mp3?t=yzkr1o6k" rel="nofollow noreferrer">Sound example</a></p> <p><a href="https://i.sstatic.net/LFY1g.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LFY1g.jpg" alt="Spectrograms"></a></p>
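The "Things to explore" list above suggests jittering the circle duration. A minimal numpy-only sketch of that idea follows; all parameter values here are assumptions picked to mimic the answer's settings, not measurements from the actual recording:

```python
import numpy as np

fs = 44100                  # sample rate (Hz)
f_mid = 3700.0              # assumed centre frequency (Hz)
df = 400.0                  # assumed modulation depth (Hz)
rng = np.random.default_rng(0)

def one_circle(t_circ_max):
    """One 'circle': two tones tracing the upper/lower halves of a circle
    around f_mid, as in the answer's snippet."""
    t = np.arange(0.0, t_circ_max, 1.0 / fs)
    t_mid = t_circ_max / 2.0
    f_circ = df * np.sqrt(np.clip(1.0 - ((t - t_mid) / t_mid) ** 2, 0.0, None))
    phase1 = np.cumsum((f_mid + f_circ) / fs)
    phase2 = np.cumsum((f_mid - f_circ) / fs)
    return np.sin(2 * np.pi * phase1) + np.sin(2 * np.pi * phase2)

# Jitter each circle's duration by +/-10% around 0.2 s
durations = 0.2 * (1.0 + 0.1 * rng.uniform(-1.0, 1.0, size=10))
signal = np.hstack([one_circle(d) for d in durations])
signal /= np.abs(signal).max()
```

Each repetition now has a slightly different period, which smears the otherwise perfectly periodic pattern the way the spectrogram of the original seems to show.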
556
spectral analysis
use wavelet for improving spectral resolution
https://dsp.stackexchange.com/questions/15587/use-wavelet-for-improving-spectral-resolution
<p>Let us consider the following code:</p> <pre><code>function [sca_1,sca_2,sca_3,sca_4]=calc_wavelet(y,wname,scales,freq,fs)
%y     - input signal
%wname - wavelet basis name
%freq  - test frequencies
%fs    - sampling rate
TAB_Sca2Frq = scal2frq(scales,wname,1/fs);
[~,idxSca_1] = min(abs(TAB_Sca2Frq-freq(1)));
sca_1 = scales(idxSca_1);
[~,idxSca_2] = min(abs(TAB_Sca2Frq-freq(2)));
sca_2 = scales(idxSca_2);
[~,idxSca_3] = min(abs(TAB_Sca2Frq-freq(3)));
sca_3 = scales(idxSca_3);
[~,idxSca_4] = min(abs(TAB_Sca2Frq-freq(4)));
sca_4 = scales(idxSca_4);

coefs = cwt(y,scales,wname);
clf;
wscalogram('image',coefs,'scales',scales,'ydata',y);
hold on
plot([1 size(coefs,2)],[sca_1 sca_1],'Color','m','LineWidth',2);
plot([1 size(coefs,2)],[sca_2 sca_2],'Color','m','LineWidth',2);
plot([1 size(coefs,2)],[sca_3 sca_3],'Color','m','LineWidth',2);
plot([1 size(coefs,2)],[sca_4 sca_4],'Color','m','LineWidth',2);
</code></pre> <p>I took the following data:</p> <pre><code>&gt;&gt; wname = 'morl';
&gt;&gt; scales = 1:1:128;
&gt;&gt; fs = 100;
&gt;&gt; freq = [13.7 10.5 29.9 31];
</code></pre> <p>When I run the following code</p> <pre><code>[scal_1,scal_2,scal_3,scal_4]=calc_wavelet(B,wname,scales,freq,fs);
</code></pre> <p>I get this result:</p> <p><img src="https://i.sstatic.net/lQLsd.png" alt="enter image description here"></p> <p>One frequency is lost. I have also tried the following bases:</p> <pre><code>&gt;&gt; wname = 'mexh';
&gt;&gt; wname = 'morl';
&gt;&gt; wname = 'haar';
&gt;&gt; wname = 'gaus4';
</code></pre> <p>Please note that my model is the following:</p> <p><a href="https://dsp.stackexchange.com/questions/15559/understanding-1d-wavelet-analysis">https://dsp.stackexchange.com/questions/15559/understanding-1d-wavelet-analysis</a></p> <p>So which wavelet basis should I choose for good spectral resolution? I want to use wavelets for their ability to distinguish signals that are closely spaced, as in my case:</p> <p>freq=[13.7 10.5 29.9 31];</p> <p>They could be even more closely spaced.</p>
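For intuition about why one of the marked frequencies disappears, the `scal2frq` mapping can be reproduced in a few lines. This is a numpy sketch; the Morlet centre frequency of 0.8125 Hz is the value MATLAB reports for `centfrq('morl')` and is an assumption here:

```python
import numpy as np

# Assumed Morlet centre frequency: MATLAB's centfrq('morl') reports 0.8125 Hz
f_c = 0.8125
fs = 100.0
dt = 1.0 / fs
scales = np.arange(1, 129)

# scal2frq: pseudo-frequency associated with each integer scale
freqs = f_c / (scales * dt)

targets = [13.7, 10.5, 29.9, 31.0]
nearest = [int(scales[np.argmin(np.abs(freqs - f))]) for f in targets]
# 29.9 Hz and 31 Hz both land on the SAME integer scale, so one of
# the two marker lines is inevitably drawn on top of the other.
```

With integer scales the frequency grid is very coarse at high frequencies (scale 2 maps to about 40.6 Hz, scale 3 to about 27.1 Hz), so 29.9 Hz and 31 Hz cannot be separated regardless of the wavelet basis; finer (fractional) scales or a longer wavelet are needed in that region.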
557
spectral analysis
Parameters for signal analysis
https://dsp.stackexchange.com/questions/16910/parameters-for-signal-analysis
<p>I am extremely new to signal analysis, and before posting I did a lot of reading on signal analysis, FFT, and windowing. I am working on my thesis, which involves the comparison of speech signals - let's say about 100 speech samples for a given sentence. I have the recordings and all the data. I have a few questions, in order of what I think I should do.</p> <ol> <li><p>I need to be able to separate noise from the signal. What parameters should I use for that? If I have to use a window function, then what sort of window should I apply: Hann, Kaiser, or rectangular?</p></li> <li><p>What parameters should I look for to find out the similarities and the differences in the speech signals? Should I look at the spectral densities, or the amplitudes and intensities?</p></li> </ol> <p>Sorry for being so naive, I'm a real noob here. I hope you can help me out and bear with me patiently.</p> <p>For software I am using PRAAT, and for noise removal I think Audacity would be good. I used a Sony voice recorder for recording the speech samples.</p>
<p><strong>Noise removal</strong></p> <p>You should use a Gaussian convolution filter.</p> <p><strong>Similarities in signal</strong></p> <p>Generally this is done by spectrum analysis, like a Fourier transform. Get the DTFT of, say, every second or half-second (you will need to experiment with the window size to get the best results) and then match that to a database of Fourier transforms for your reference sounds. You'll probably want to pull out the frequencies with the highest amplitudes and make a histogram, then query your database to find the closest histogram.</p> <p>Sure, you could do parts of this with Audacity or other software, but if you want to learn more about DSP I suggest you use Matlab or Python+numpy/scipy to code the processing yourself! It will be a lot more flexible, maybe frustrating at times, but I highly recommend it.</p>
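The Gaussian convolution filter suggested above can be sketched with plain numpy (the test signal is a hypothetical toy; in practice `scipy.ndimage.gaussian_filter1d` is the ready-made equivalent):

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Gaussian convolution filter: build a normalised Gaussian kernel
    and convolve it with the signal (numpy-only sketch)."""
    radius = int(4 * sigma)
    k = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (k / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(x, kernel, mode="same")

# Toy test: a slow tone buried in noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 5.0 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
smoothed = gaussian_smooth(noisy, sigma=10)
```

The `sigma` (in samples) sets the cutoff: larger values smooth more aggressively, so for speech you would keep it small enough not to blur the formant structure you later want to compare.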
558
spectral analysis
Estimating Average HR from PPG sensor
https://dsp.stackexchange.com/questions/75243/estimating-average-hr-from-ppg-sensor
<p>I am reading <a href="https://stm.sciencemag.org/content/10/431/eaap8674" rel="nofollow noreferrer">Smartphone based Blood Pressure Monitoring via the Oscillometric Finger Pressing Method</a>, which is trying to estimate blood pressure from a PPG sensor and a small applied-force finger sensor. I am not an engineer, so I'm unfamiliar with a lot of the terminology.</p> <p>In this paper, they describe using the PPG sensor measurements to estimate an average heart rate. The single sentence that describes this process is <em>&quot;The average heart rate is determined from the blood volume waveform based on its spectral peaks within the frequency range of 0.5 to 3 Hz&quot;.</em></p> <p>I have a PPG sensor, so I can take a time-series measurement of the blood volume, but frankly, I have no idea what this sentence means other than it involves some kind of spectral analysis. I'm unable to find any similar descriptions of this algorithm in Google Scholar. I would really appreciate it if someone could interpret and translate this sentence into either algorithmic or statistical terminology, with which I am much more familiar.</p>
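The quoted sentence amounts to: estimate a spectrum of the blood-volume waveform, find its dominant peak between 0.5 and 3 Hz (which corresponds to 30-180 beats per minute), and read the heart rate off that peak. A hedged numpy sketch on a synthetic PPG trace (the sample rate and signal model are assumptions, not from the paper):

```python
import numpy as np

# Hypothetical PPG trace: 70 bpm (~1.17 Hz) pulse + baseline wander + noise
fs = 50.0                            # assumed sensor sample rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)   # one minute of data
rng = np.random.default_rng(2)
ppg = (np.sin(2 * np.pi * (70.0 / 60.0) * t)
       + 0.3 * np.sin(2 * np.pi * 0.1 * t)     # slow baseline drift
       + 0.2 * rng.standard_normal(t.size))

# "Spectral peaks within 0.5 to 3 Hz": windowed periodogram, argmax in band
x = ppg - ppg.mean()
spec = np.abs(np.fft.rfft(x * np.hanning(x.size))) ** 2
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
band = (freqs >= 0.5) & (freqs <= 3.0)
f_peak = freqs[band][np.argmax(spec[band])]
hr_estimate = 60.0 * f_peak          # beats per minute
```

Restricting the search to 0.5-3 Hz is what makes the estimate robust: baseline drift (below 0.5 Hz) and high-frequency sensor noise are simply excluded from the peak search.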
559
spectral analysis
Find highest frequency of a very manky signal
https://dsp.stackexchange.com/questions/35028/find-highest-frequency-of-a-very-manky-signal
<p>I'm trying to compute the highest frequency (as can be sampled) in some pretty manky-looking discrete time-dependent signals. My current method - a discrete Fourier analysis - fails for some pretty awful-looking but clearly oscillating signals (with discernible highest frequencies).</p> <p>My current method is to compute the discrete Fourier transform, locate the local maxima, and perform weighted averages around these peaks (to undo the 'smearing' of the DFT across the discrete sample frequency bins). Here it is in MATLAB.</p> <pre><code>% compute spectrum
dft = abs(fft(weight));        % real response
dft = dft(1:floor(end/2) + 1); % keep only pos freqs
dft = dft/sum(dft);            % normalise

% find present modes
mode = zeros(0, 2);
[pks, locs, widths, ~] = findpeaks(dft);
for k=1:length(locs)
    % only consider significant peaks
    if pks(k) &gt; 0.1
        av_width = min([floor(peak_width_factor*widths(k)), locs(k)-1, length(dft)-locs(k)]);
        inds = (locs(k)-av_width):(locs(k)+av_width);
        av_ind = (inds * dft(inds)) / sum(dft(inds));
        av_freq = (av_ind - 1)/T;
        mode = [mode; av_freq, pks(k)];
    end
end
</code></pre> <p>This produces a matrix of present modes (with spectral significance above 0.1), where each row is the frequency and significance of the mode.</p> <p>This works great for signals like this (time signal left, DFT right with the frequencies of detected modes labeled):</p> <p><a href="https://i.sstatic.net/lE0uG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lE0uG.png" alt="enter image description here"></a></p> <p>but fails for signals like these (where we see and expect frequencies close to 2; not an order of magnitude smaller!)</p> <p><a href="https://i.sstatic.net/jmIdD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jmIdD.png" alt="enter image description here"></a></p> <p>Often I expect a certain frequency (e.g. f = 2.6 in the above signals) and can judge for myself that the mode is present from the average period between the local maxima.
I tried codifying this - computing the average time between local maxima - in MATLAB, but it was pretty unreliable:</p> <pre><code>[~, peak_inds] = findpeaks(weight);
cycle_periods = diff(peak_inds) * dt;
av_cycle_period = sum(cycle_periods)/length(cycle_periods);
av_freq = 1/av_cycle_period;
</code></pre> <p>My signals are well enough sampled (around 15 to 20 values per observable manky period) to resolve these manky modes visually. I've studied time series and random processes at an undergraduate level, but we never really went too deep into spectral analysis. So:</p> <ul> <li><p>How can I reliably compute these highest-frequency modes in my very 'un-Fourier' signals?</p></li> <li><p>Why does my current DFT analysis incorrectly deduce very low frequency modes in these manky signals?</p></li> <li><p>Why does my naive average-distance-between-local-maxima method fail to extract the highest frequency when multiple present modes have dissimilar frequencies?</p></li> </ul>
<p>I think most of your questions can be solved by subtracting the mean from the signal. Namely, all sinusoidal waves with a nonzero frequency have a mean of zero, so when the mean of a signal is not close to zero, it will show up at 0 Hz (you can look at this as $\cos(0\,t)=1$).</p> <p>After a closer look and some testing of my own, it seems that only removing the mean would not fix your problem. Your signal seems to be contaminated by white noise which has probably been integrated twice (so low frequencies are much more present than high frequencies). You could try to subtract a higher-order polynomial fit, linear or even quadratic. These fits can easily be obtained using some <a href="http://global.oup.com/booksites/content/0199268010/samplesec3" rel="nofollow">linear algebra</a>; for example, for a quadratic fit:</p> <pre><code>X = [ones(N, 1), t', t'.^2];
b = (X' * X) \ X' * y';
z = y - (X * b)';
</code></pre> <p>Or you could also look at the biggest peaks above $f^{-2}$, with $f$ the frequency vector. However, if you have a signal with normal white noise you should probably just stick with removing the mean, since my proposed methods try to make use of the fact that the present noise is white noise which has been integrated twice.</p>
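The MATLAB least-squares snippet above translates almost directly to numpy. Here is a hypothetical end-to-end illustration (signal parameters invented) showing the DFT peak moving from near 0 Hz to the oscillation frequency once the quadratic trend is removed:

```python
import numpy as np

N = 1000
t = np.linspace(0.0, 10.0, N)
# An oscillation near f = 2.6 Hz buried under a strong quadratic drift
y = 0.3 * np.sin(2 * np.pi * 2.6 * t) + 5.0 + 2.0 * t - 0.3 * t**2

# Quadratic least-squares fit (the numpy analogue of the MATLAB backslash solve)
coeffs = np.polyfit(t, y, deg=2)
detrended = y - np.polyval(coeffs, t)

# The DFT peak now lands near 2.6 Hz instead of near 0 Hz
spec = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(N, d=t[1] - t[0])
f_peak = freqs[np.argmax(spec)]
```

Without the detrending step, the DC and lowest bins dominate the normalised spectrum and the peak-picking logic in the question locks onto them, which is exactly the "very low frequency modes" failure described.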
560
spectral analysis
Effect of DC component on the whole signal - comparison between normalised and non normalised
https://dsp.stackexchange.com/questions/34101/effect-of-dc-component-on-the-whole-signal-comparison-between-normalised-and-n
<p>I have a Fourier-analysed signal as in the attached picture, where red represents the FFT of the hand movement of a stroke subject and blue that of a healthy subject.</p> <p>I am doing an analysis called <strong>Spectral Arc Length</strong>, where I calculate the spectral arc length to quantify the smoothness of movement (check out the paper "On the analysis of movement smoothness" by Sivakumar Balasubramanian). In this metric, the longer the spectral arc length, the less smooth the movement is.</p> <p>The first image below is the original data, where we can see that the DC component of the stroke subject is higher than that of the healthy subject, and the amplitude of the frequency signal is higher, causing the arc length of the signal to be larger, which signifies less smooth movement.</p> <p>MATLAB CODE:</p> <pre><code>y = variable;
Ts = 1/Fs;
L = size(y,2);              % Length of signal
NFFT = 2^(ceil(log2(L))+4); % Next power of 2 from length of y
Y = fft( y, NFFT )/L;
f = Fs/2*linspace(0,1,NFFT/2+1);
</code></pre> <p>However, I have a problem with my signal: the metric suggests that we should normalise the signal to its DC component (<code>Y = Y/max(Y)</code>). By doing this, my signal turns out like the second picture, and when I calculate the spectral arc length metric, it turns out that the stroke subject 'apparently' has smoother movement than the healthy one (due to a shorter spectral arc length), which I am pretty sure shouldn't be the case.</p> <ul> <li>My question is, does the normalization to the DC component make sense? Does the DC component have any effect on the rest of the signal, <strong><em>where a larger DC causes a larger amplitude of the signal?</em></strong></li> <li>Another option for me is to calculate the arc length after the DC part of the signal (starting from the black marker put on the signal here).</li> <li>I would also try some wavelet analysis if Fourier analysis doesn't work for my calculation...</li> </ul> <p><a href="https://i.sstatic.net/NfpuI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NfpuI.jpg" alt="Non-normalised"></a> <strong>Non-normalised data</strong></p> <p><a href="https://i.sstatic.net/wtFad.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wtFad.jpg" alt="Normalised data Y= Y/max(Y)"></a> <strong>Normalised data Y = Y/max(Y)</strong></p>
<p>As I understand from the description, you are not just using the DC information but also information at other frequencies.</p> <p><strong>If this is the case:</strong></p> <p>Generally, normalizing a signal means subtracting the mean value of the signal from the data:</p> <pre><code>Y = Y - mean(Y)   .......(1)
</code></pre> <p>The Fourier transform is a linear transform, so any constant scaling proportionately scales the transform:</p> <pre><code>Y = Y/max(Y)      .......(2)
</code></pre> <p>If <code>(2)</code> is performed, then the transform is scaled by <code>max(Y)</code>, but the scaling varies with the data; this can be clearly seen in the 2nd plot of your description.</p>
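The two operations can be checked numerically. This numpy sketch (on a toy signal, not the OP's data) shows that mean removal only zeroes the DC bin, while dividing by the signal's maximum rescales every bin by the same constant - which is why that normalisation rescales an arc-length metric wholesale:

```python
import numpy as np

rng = np.random.default_rng(4)
n = np.arange(256)
y = 2.0 + np.sin(2 * np.pi * 3 * n / 256) + 0.1 * rng.standard_normal(256)

Y = np.fft.rfft(y)

# (1) Mean removal: only the DC bin changes; all other bins are untouched
Y_demeaned = np.fft.rfft(y - y.mean())

# (2) Dividing by the maximum: every bin is scaled by the same constant
#     (linearity of the Fourier transform)
Y_scaled = np.fft.rfft(y / np.abs(y).max())
```

So if the goal is to remove the DC offset's influence, subtracting the mean (or simply dropping bin 0) is the targeted fix; dividing by a data-dependent maximum changes the scale of the whole spectrum and therefore of any metric computed from it.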
561
spectral analysis
Spectral plot shows more notes than there really should be
https://dsp.stackexchange.com/questions/31151/spectral-plot-shows-more-notes-than-there-really-should-be
<p>I was experimenting with sound analysis lately, and what I see when I plot the spectral data of an audio file is that apart from the notes that were actually picked, there are some other notes with quite high local amplitude.<br> For example, I have a sample where a D major chord is played with some nasty distortion. Looking at the spectral plot I can observe the notes D, F# and A, as expected. But going higher, some other notes have amplitudes above the threshold; one of those is E.<br> I tried the same algorithm on a recorded guitar chord without distortion, but it sounds like there are some other effects like chorus or flanger, and again an additional note was found.<br> So, the question is: why can I see additional notes when analysing the spectrum even though they were not picked during the recording?</p> <p>Note: The note E has some harmonics further up the spectrum, so it cannot be treated as noise.</p>
<p>Since the notes from a guitar are not pure sinusoids, you should expect to see some harmonics, even when analyzing the dry signal without effects. E.g., the note E is the perfect fifth of the note A, so it appears in A's harmonic series: the third harmonic of A is an E, an octave and a fifth above the fundamental.</p> <p>If you use distortion or modulation effects (chorus, flanger, and phaser) you get even more additional frequencies due to the non-linearity (distortion) or due to time-varying filtering (modulation effects).</p> <p>So in sum there are three sources of additional frequency components different from the fundamental pitches:</p> <ol> <li>harmonics (due to the non-sinusoidal signal)</li> <li>non-linearities (e.g., overdrive/distortion)</li> <li>time-varying filtering (e.g., modulation effects)</li> </ol> <p>And in your example you get all three of them.</p>
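Point 2 (non-linearities) is easy to demonstrate: clipping a pure sine - a crude stand-in for a distortion pedal - creates harmonics that are completely absent from the clean tone. A small numpy sketch:

```python
import numpy as np

N = 4096
n = np.arange(N)
k0 = 64                                 # fundamental exactly on bin 64
x = np.sin(2 * np.pi * k0 * n / N)      # clean tone

# Hard clipping: a memoryless non-linearity (crude distortion stand-in)
clipped = np.clip(x, -0.5, 0.5)

spec = np.abs(np.fft.rfft(clipped)) / (N / 2)   # per-component amplitude
fundamental = spec[k0]
third_harmonic = spec[3 * k0]                   # created by the clipping
clean_third = np.abs(np.fft.rfft(x))[3 * k0] / (N / 2)
```

Because hard clipping is an odd-symmetric non-linearity, it generates odd harmonics (3f, 5f, ...); an asymmetric non-linearity, like many overdrive circuits, would add even harmonics too.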
562
spectral analysis
Deriving the impulse response of an ideal low-pass filter
https://dsp.stackexchange.com/questions/84934/deriving-the-impulse-response-of-an-ideal-low-pass-filter
<p>The impulse response of an ideal low-pass filter can be determined by setting <span class="math-container">$H(\omega)=1$</span> in the Fourier representation <span class="math-container">$$h(n) = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} H(\omega)e^{j\omega n}d\omega$$</span></p> <p>The solution will be a function of the form <span class="math-container">$\sin(\omega_c n)/(\pi n)$</span>. Now, the motivation behind the substitution above is the desire to amplify each frequency component below <span class="math-container">$\omega_c$</span> by an equal amount. Hence the term ideal response. Also, the fact that for some <span class="math-container">$\omega_0$</span> there is a corresponding amplitude <span class="math-container">$\vert H(\omega_0)\vert$</span> is not (as far as I know) interesting while performing spectral analysis. What matters are the amplitude relations between some <span class="math-container">$\omega_0$</span> and <span class="math-container">$\omega_1$</span>, the frequency components of a signal perhaps.</p> <p>Therefore, the substitution <span class="math-container">$H(\omega) = C$</span>, for any constant <span class="math-container">$C$</span>, should be equally valid, right?</p>
<p>An additional gain (or attenuation) doesn't change the characteristic of a filter. So yes, any constant is fine. After all, it's matter of definition; there might be people who say that an ideal lowpass filter has unity gain. But that's quite a moot point in my opinion.</p> <p>I do remember a case where a paper was submitted to a journal in which the author called a filter with a constant gain <span class="math-container">$\neq 1$</span> an &quot;allpass filter&quot;. One of the reviewers, who is a very famous professor, known to everybody who has ever heard the term &quot;filter bank&quot;, wrote in his review that an allpass filter only deserves its name when it has a gain of unity, and the author was strongly advised to make appropriate changes. So, some people take gain constants quite seriously.</p>
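For completeness, the claim that a constant gain only scales the impulse response can be checked numerically against the closed form <span class="math-container">$h(n)=\sin(\omega_c n)/(\pi n)$</span>. This is a numpy sketch with an arbitrarily chosen cutoff:

```python
import numpy as np

wc = 0.3 * np.pi                  # arbitrary cutoff, rad/sample
n = np.arange(-50, 51)

# Closed form from the integral with H(w) = 1 on [-wc, wc]:
#   h(n) = sin(wc*n)/(pi*n),  with h(0) = wc/pi
den = np.where(n == 0, 1, n)
h = np.where(n == 0, wc / np.pi, np.sin(wc * n) / (np.pi * den))

# Same integral with H(w) = C, evaluated numerically: the result is C*h(n)
C = 2.5
w = np.linspace(-wc, wc, 20001)
dw = w[1] - w[0]

def trapezoid(y, dx):
    # simple trapezoidal rule, kept self-contained on purpose
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# e^{jwn} integrates to its cosine part over the symmetric interval
h_C = np.array([trapezoid(C * np.cos(w * k), dw) / (2 * np.pi) for k in n])
```

The numerically integrated response with gain C matches C times the unit-gain sinc, sample for sample, which is all "equally valid" means here: the filter shape is unchanged, only its overall gain differs.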
563
spectral analysis
What kind of signal reppresents this Power Spectral Density?
https://dsp.stackexchange.com/questions/51538/what-kind-of-signal-reppresents-this-power-spectral-density
<p>I'm sampling 8 bioelectric signals with an embedded board which uses an 8-channel ADC (<a href="http://www.analog.com/media/en/technical-documentation/data-sheets/AD7175-8.pdf" rel="nofollow noreferrer">AD7175-8</a>). My sampling rate for every single channel is about 6250 Hz. When I move the analysis to the frequency domain, what I obtain is the following power spectral density</p> <p><a href="https://i.sstatic.net/txn8z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/txn8z.jpg" alt="enter image description here"></a></p> <p>instead of the (almost) flat one I expect. It seems that there is a square-modulated interfering signal. I want to understand the nature of this signal, and for this reason I'd like to know if you can tell me what kind of signal generates this particular power spectral density.</p>
564
spectral analysis
Preparing audio data for FFT
https://dsp.stackexchange.com/questions/14546/preparing-audio-data-for-fft
<p>I would like to experiment with some input from a microphone and am receiving a 2-channel, 512-sample buffer in real time.</p> <p>I know the signal could be passed through a low-pass filter and then windowed before the FFT.</p> <p>The sample rate is 44100 Hz. What low-pass filter is needed for this? Does the analog-to-digital conversion not run the signal through a low-pass filter before converting it?</p> <p>How exactly does a window apply to real-time sample input of 512 samples? Is a window not the same as a filter?</p> <p>I have also read that the signal can be run through the FFT twice. Why exactly is this done, and is it necessary for this data?</p> <p>I would like to slowly build this up to do spectral analysis etc. Is this done directly on the FFT bin data?</p>
<blockquote> <p>I know the signal could be passed through a low pass filter, and then windowed before the FFT.</p> </blockquote> <p>Yes, it <em>could</em> be, and it's usually advisable to use a window, but these are not just things you do blindly because you heard about them. Low-passing the data will alter it in ways you might not want. It might be useful, or it might not -- it depends on your application. It will reduce the amount of high frequencies represented in your FFT results. Is that what you want? Maybe, maybe not. Windowing is useful because it reduces (but does not eliminate) frequency-domain distortion caused by separating your data into chunks. Usually you want that, but there are trade-offs (my explanation is an over-simplification).</p> <blockquote> <p>The sample rate is 44100.00, what low pass filter is needed for this, does the analog to digital conversion not run it through a low pass filter before converting it?</p> </blockquote> <p>The input must be filtered before the digital conversion (and usually this is done for you). There is no way to filter it afterwards to achieve the same effect, so even if it's not taken care of for you, you might as well forget about it. Chances are, your A/D converter does a halfway decent job of this, though.</p> <blockquote> <p>How exactly does a window apply to real time sample input of 512 samples? Is a window not the same as a filter?</p> </blockquote> <p>A window and a filter are different, as explained above.</p> <blockquote> <p>I have also read that the signal can be run through the FFT twice, why exactly is this done and is it necessary for this data?</p> </blockquote> <p>You haven't really told us anything about what you want to do with this data, but no, it's highly unusual to do two FFTs.</p> <blockquote> <p>I would like to slowly build this up to do spectral analysis etc. Is this done directly on the FFT bin data?</p> </blockquote> <p>More or less.
I suggest you read up on filtering, windowing, and Fourier analysis on Wikipedia. <a href="http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html" rel="nofollow">This blog post</a> might also help.</p>
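To make the window-vs-filter distinction concrete, here is a numpy sketch of what windowing a 512-sample buffer actually buys you: it reduces spectral leakage from a tone that does not fall exactly on an FFT bin (the tone frequency here is chosen arbitrarily):

```python
import numpy as np

fs = 44100.0
N = 512
n = np.arange(N)
f_tone = 1000.3                      # deliberately NOT on an FFT bin
x = np.sin(2 * np.pi * f_tone * n / fs)

rect = np.abs(np.fft.rfft(x))                  # no window (rectangular)
hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # Hann-windowed

# Compare leakage well away from the tone's bin
peak = int(round(f_tone / fs * N))
far = np.ones(rect.size, dtype=bool)
far[max(0, peak - 20):peak + 21] = False
rect_leak = rect[far].max()
hann_leak = hann[far].max()
```

The window is applied per buffer (a pointwise multiply, not a convolution over time like a filter), and the Hann-windowed spectrum shows far less energy smeared into bins away from the tone.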
565
spectral analysis
How to interpret these different Fourier analysis of this audio signal?
https://dsp.stackexchange.com/questions/24635/how-to-interpret-these-different-fourier-analysis-of-this-audio-signal
<p>This is my first dive into DSP. I would like to familiarize myself with frequency analysis. I have two audio tracks which should be digitized at 16bit-44.1kHz and 24bit-192kHz (music, presented as a 24bit-192kHz sample) respectively.</p> <p>I wanted to identify the effect of the low-pass filter around the Nyquist frequency (22.05kHz and 96kHz respectively).</p> <p><strong>Edit:</strong> I completely reworked the question.</p> <hr> <h2><em>Software used:</em></h2> <p>I basically estimated the power spectral density using Welch's method as implemented by <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.welch.html#scipy.signal.welch" rel="nofollow noreferrer"><code>scipy.signal.welch</code></a> in the <code>Scipy</code> library of the <code>Python</code> programming language.</p> <p>Basically, I used a script equivalent to:</p> <pre><code>import numpy as np
from matplotlib import pyplot as plt
from scipy import signal
from waveio import readwav

# Load data from one channel (#0) for each sample file
wav192 = readwav("24b-192khz.wav")[:,0]
wav44 = readwav("16b-44khz.wav")[:,0]

# DoE: 2 sample sizes and two window types
chunks = [256, 4096]
windows = ["hanning", "boxcar"]  # boxcar is rectangular

# Prepare a figure
plt.figure()

# Calculate density spectra and plot
for N in chunks:
    for w in windows:
        f, Pxx44 = signal.welch(wav44, fs=44100, window=w, nperseg=N,
                                nfft=2*N, scaling="density")
        plt.semilogy(f, Pxx44)

plt.legend(["chunk=%d; window=%s"%(c, w) for c in chunks for w in windows])
plt.xlabel("Frequency (Hz)")
plt.ylabel("Density (I$^2$/Hz)")
</code></pre> <hr> <h2><em>The power spectral density of the <strong>44.1kHz</strong> audio sample:</em></h2> <p><img src="https://i.sstatic.net/xm4Q7.png" alt="44100Hz 16bit"></p> <p>Which is basically just as expected:</p> <ul> <li>The chunk size, i.e.
the number of samples per fft-transform segment in the real domain, does not change the density a lot if a Hann window is used.</li> <li>The chunk size effect is clearly visible with the boxcar (rectangular) window. From what I understand, this is because of spectral leakage which diminishes as the chunk size increases. Is that correct?</li> <li>The low-pass filter effect at the Nyquist frequency (22.05kHz) </li> </ul> <p>So far, so good.</p> <hr> <h2><em>The power spectral density of the <strong>192kHz</strong> audio sample:</em></h2> <p><img src="https://i.sstatic.net/l7gZj.png" alt="192kHz 24bit"></p> <p>Good point:</p> <ul> <li>Same behaviour in regard to the chunk size and the window. Is spectral leakage really that strong? That's pretty impressive.</li> </ul> <p>Oddities:</p> <ul> <li>What the heck is happening? </li> <li>Where is the low-pass filter near the Nyquist frequency?</li> <li>Why are very-high frequencies even <em>increasing</em>? Could that be related to the choice of the windowing function?</li> </ul> <p>From my interpretation, there is no low-pass filter visible because basically no audio system would go above 192kHz and generally, the software/hardware creator are smart enough to apply a low-pass filter designed with regard to the actual output bandwidth of the audio system.</p> <p>As for the increasing audio signal above 57kHz, I really can't explain it: the original audio sample is some classical music. I wouldn't expect any instrument to generate louder sounds in that range or frequencies. Any idea? Could this be an example of upsampling?</p>
<p>If you look at the <a href="https://en.wikipedia.org/wiki/Window_function#Rectangular_window" rel="nofollow">Rectangular window</a> the best its rejection gets is about 40 dB. So that behavior, especially obvious in the bottom plot, for the rectangular window is to be expected.</p> <p>I don't know for sure if this explains everything but look at the level between your peak signal and the high-frequency components. There is almost 60 dB of rejection there. I've always heard that a good rule of thumb is to get 60 dB of rejection from your filters. I know that's not the full 96 dB offered by 16-bit music, but I bet you'd be hard pressed to actually hear that. Of course, there's the silliness of having music at that sample rate. Humans just can't hear anything above around 20 kHz, give or take. <a href="http://xiph.org/~xiphmont/demo/neil-young.html" rel="nofollow">This article</a> gives a good summary of the issues. It also brings up a good point, that that sample rate has the potential to pick up harmonics and other high-frequency distortion caused by equipment and electronics, despite the fact that we can't hear it. Perhaps something like that is going on? </p>
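The window behaviour quoted above can be measured directly. This numpy sketch estimates the first-sidelobe level of the rectangular and Hann windows from a zero-padded FFT; the textbook values are about -13 dB (rectangular) and about -31.5 dB (Hann), with the rectangular window's far sidelobes eventually reaching the ~40 dB figure mentioned:

```python
import numpy as np

def first_sidelobe_db(window, pad=64):
    """Peak of the first sidelobe of a window's magnitude response, in dB
    relative to the main lobe (zero-padded FFT for frequency detail)."""
    W = np.abs(np.fft.rfft(window, pad * window.size))
    W /= W.max()
    # find where the main lobe stops falling, then take the max after that
    first_null = np.argmax(np.diff(W) > 0)
    return 20 * np.log10(W[first_null:].max())

N = 256
rect_sll = first_sidelobe_db(np.ones(N))     # expected near -13 dB
hann_sll = first_sidelobe_db(np.hanning(N))  # expected near -31.5 dB
```

This is why the boxcar curves in the plots sit so high between spectral features: leakage through these sidelobes sets a floor well above the true density.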
566
spectral analysis
What Fourier analysis would be appropriate for analyzing servo position error as a function of frequency?
https://dsp.stackexchange.com/questions/96070/what-fourier-analysis-would-be-appropriate-for-analyzing-servo-position-error-as
<p>I'm looking for a Fourier analysis method that will help me with a servo position tracking problem. I'll give some background:</p> <p>Imagine I have a control system that attempts to control a linear actuator to a submicron position. I have a following error signal in units of nanometers that I monitor. In an ideal world, this following error signal would be zero because my control system is perfect. Unfortunately, my system is not perfect and this signal dithers. In order to better understand the following error signal, I want to perform some Fourier analysis on the signal to see how the position error amplitudes map to the frequency domain. Ideally, I would be able to look at the analysis and make a statement: &quot;For my 1000nm peak following error in the time domain, 800nm correspond to frequency range X, 100 nm frequency range Y, and the remaining 100nm correspond to the remaining range&quot;.</p> <p>However, I'm not sure what spectral analysis would actually be able to answer this. Here are my current thoughts and questions:</p> <ul> <li>If I were to perform a basic DFT over a range of data, I could apply amplitude normalization. However, because the signal is random, the number of frequency bins will influence the amplitude calculation (i.e. a longer signal will decrease the FFT amplitudes). Now this might be okay, since I could integrate the bins over a specific frequency range.</li> <li>Ideally, integrating over the entire frequency spectrum would yield the peak error value of the time-domain signal. However, the phases of each frequency component would influence how &quot;additive&quot; two frequencies are.</li> <li>I've been studying the usage of the PSD, which seems like it could be useful since this is a random signal; however, I don't care about the power of the signal. I truly just care about the amplitude of the signal, since it corresponds to position error.</li> </ul> <p>Ultimately I think this is a scaling problem.
Of course it would be trivial to see what frequency components are influencing my system's performance, but more importantly, I want to account for every nanometer of that peak following error in the frequency spectrum.</p> <p>Edit for further clarification on what I'm ultimately interested in: When taking the FFT of my error signal and amplitude-scaling it (i.e. dividing the magnitude by the number of sample points), I notice that most of the error is centered around 60 Hz but is distributed over a 10 Hz bandwidth window. In other words, most of the error frequency content is found between 55 Hz and 65 Hz. My natural next question would be: &quot;If I could clean up that random noise around 60 Hz, clearing up most of that 10 Hz noise window, how much would that improve my following error?&quot;. I can't simply integrate over that 10 Hz bandwidth per @TimWestcott's answer below, so what method would allow me to quantify the error attributed to that 10 Hz range?</p>
<p>Possibly the reason this question has lain fallow is because it contains a factual error, and cannot be answered as stated.</p> <blockquote> <p>&quot;For my 1000nm peak following error in the time domain, 800nm correspond to frequency range X, 100 nm frequency range Y, and the remaining 100nm correspond to the remaining range&quot;</p> </blockquote> <p>In general, you can't do that, because except under very specific conditions, the <em>amplitudes</em> of signals of different frequencies do not add, or integrate. Rather, the power (or at least the signal amplitude squared) will add or integrate.</p> <p>So you could say that out of a total <span class="math-container">$1 \cdot 10^{-12} \mathrm{m^2}$</span> of mean-squared error, <span class="math-container">$8 \cdot 10^{-13} \mathrm{m^2}$</span> is in frequency range X, <span class="math-container">$1 \cdot 10^{-13} \mathrm{m^2}$</span> is in frequency range Y, and the remaining <span class="math-container">$1 \cdot 10^{-13} \mathrm{m^2}$</span> is elsewhere.</p> <blockquote> <p>however, I don't care about the power of the signal. I truly just care about the amplitude of the signal since it corresponds to position error.</p> </blockquote> <p>Unfortunately, signal power is what you have to work with, just like all the rest of us.</p> <p>You <em>can</em> talk about the RMS value of the signal in your frequency ranges: just take the square roots of the power values. The RMS values won't add up to the total signal RMS value, but you'll at least be speaking in a sensible way that people can understand.</p>
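The mean-square bookkeeping described in this answer can be done with a plain DFT and Parseval's theorem. A numpy sketch on a synthetic error signal (parameters invented to mimic the question's 60 Hz component):

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1000.0
N = 10000
t = np.arange(N) / fs
# Synthetic following-error: an 800 nm-amplitude 60 Hz line plus broadband noise
err = 800e-9 * np.sin(2 * np.pi * 60.0 * t) + 100e-9 * rng.standard_normal(N)

X = np.fft.rfft(err)
# One-sided power per bin, normalised so the bins sum to the mean square (Parseval)
power = np.abs(X) ** 2 / N**2
power[1:-1] *= 2.0

freqs = np.fft.rfftfreq(N, d=1 / fs)
band = (freqs >= 55.0) & (freqs <= 65.0)
band_ms = power[band].sum()      # mean-square error attributable to 55-65 Hz
total_ms = power.sum()           # equals np.mean(err**2) by Parseval
band_rms = np.sqrt(band_ms)      # RMS error you could hope to remove
```

`band_rms` answers the "how much would cleaning up 55-65 Hz help" question in the units the OP cares about: eliminating that band reduces the mean-square error by `band_ms`, and the residual RMS becomes `sqrt(total_ms - band_ms)`.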
567
spectral analysis
What is theorem under this formula?
https://dsp.stackexchange.com/questions/86160/what-is-theorem-under-this-formula
<p>I'm new to DSP. While reading the textbook, I cannot understand the formula <span class="math-container">$X_{s}(f)=\frac{1}{T}\sum_{n=-\infty}^{\infty} X(f-nf_{s})$</span>. Could you please give me some keywords so I can learn the underlying theorem and understand it?</p> <blockquote> <p>From spectral analysis, the original spectrum (frequency components) <span class="math-container">$X(f)$</span> and the sampled signal spectrum <span class="math-container">$X_s(f)$</span> in terms of Hz are related as <span class="math-container">$X_{s}(f)=\frac{1}{T}\sum_{n=-\infty}^{\infty} X(f-nf_{s})$</span> where <span class="math-container">$X(f)$</span> is assumed to be the original baseband spectrum, while <span class="math-container">$X_s(f)$</span> is its sampled signal spectrum, consisting of the original baseband spectrum <span class="math-container">$X(f)$</span> and its replicas <span class="math-container">$X(f-nf_s)$</span>. Since Equation (2.2) is a well-known formula, the derivation is omitted here and can be found in well-known texts (Ahmed and Natarajan, 1983; Alkin, 1993; Ambardar, 1999; Oppenheim and Schafer, 1975; Proakis and Manolakis, 1996).</p> </blockquote>
<p>It's not a <em>theorem</em>, but a <em>result</em> that is part of the <a href="https://en.m.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem" rel="nofollow noreferrer">sampling theorem</a>, and that shows the <strong>sampling operation in the frequency domain:</strong></p> <p>The sampling operation with frequency <span class="math-container">$f_s = \dfrac{1}{T}$</span> can be defined as: <span class="math-container">$$x_s(t) = x(t)\sum_{n=-\infty}^{\infty}\delta(t-nT) = x(t) \frac{1}{T}\sum_{k=-\infty}^{\infty}e^{jk\omega_st} = \frac{1}{T}\sum_{k=-\infty}^{\infty}x(t)e^{jk\omega_st}$$</span> where:</p> <ul> <li><span class="math-container">$\omega_s = 2\pi/T$</span></li> <li><span class="math-container">$\sum_{n=-\infty}^{\infty}\delta(t-nT)$</span> is an <em>impulse train</em> with period <span class="math-container">$T$</span> (i.e. <em>sampling frequency</em> <span class="math-container">$f_s =\tfrac{1}{T}$</span>), and since it's periodic with period <span class="math-container">$T$</span>, we can use its <strong>Fourier series</strong> <span class="math-container">$\frac{1}{T}\sum_{k=-\infty}^{\infty}e^{jk\omega_st}$</span></li> </ul> <p>In the frequency domain, taking the Fourier transform of <span class="math-container">$x_s(t)$</span> (you can prove this using the <a href="https://www.tutorialspoint.com/linearity-and-frequency-shifting-property-of-fourier-transform" rel="nofollow noreferrer">shifting property</a>), we get: <span class="math-container">$$X_s(\omega) = \frac{1}{T}\sum_{k=-\infty}^{\infty}X(\omega - k\omega_s)$$</span> Note that <span class="math-container">$\omega = 2\pi f$</span>, so you can replace the dependency on <span class="math-container">$\omega$</span> by <span class="math-container">$f$</span>: <span class="math-container">$$X_s(f) = \frac{1}{T}\sum_{k=-\infty}^{\infty}X(f- kf_s)$$</span></p>
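The replica structure is easy to observe numerically: emulate continuous time with a dense grid, multiply by an impulse train, and look at the magnitude spectrum. A toy sketch (grid size and tone frequency are arbitrary):

```python
import numpy as np

G = 1000                   # dense grid: 1000 points over 1 s -> 1 Hz FFT bins
t = np.arange(G) / G
x = np.cos(2 * np.pi * 20 * t)          # "analog" 20 Hz tone

# sample at fs = 100 Hz: multiply by an impulse train (keep every 10th point)
xs = np.zeros_like(x)
xs[::10] = x[::10]

X = np.abs(np.fft.fft(xs))
# replicas of the 20 Hz line appear at |k*100 +/- 20| Hz: 20, 80, 120, 180, ...
print(X[20], X[120], X[220])            # all equal
```

The equal-magnitude copies spaced by the 100 Hz sampling rate are exactly the shifted terms in the sum above.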
568
spectral analysis
How to get fft bins of an audio signal to approach a BarkScale?
https://dsp.stackexchange.com/questions/96208/how-to-get-fft-bins-of-an-audio-signal-to-approach-a-barkscale
<p>I have an audio signal and I would like to do spectral analysis/processing on it. I am interested in having frequency bins that approach a Bark scale rather than being equidistant.</p> <p>First, I should mention that I am an amateur in DSP, please take it into account when answering.</p> <p>I did some research and saw that we could warp the signal in the frequency domain in order to approach a Bark scale using a properly configured allpass filter <a href="https://citeseerx.ist.psu.edu/document?doi=0539fda11dbc892df75bfba35c8197ae17bf7351&amp;repid=rep1&amp;type=pdf" rel="nofollow noreferrer">https://citeseerx.ist.psu.edu/document?doi=0539fda11dbc892df75bfba35c8197ae17bf7351&amp;repid=rep1&amp;type=pdf</a>, however I am not sure I understood it correctly or whether it achieves what I want to do.</p> <p>I did a quick implementation of:</p> <pre><code>one pole allpass -&gt; FFT -&gt; eq a bin -&gt; IFFT </code></pre> <p>By eq-ing I mean changing the magnitude.</p> <p>I was expecting the frequency EQed to change if I enable/disable the allpass, but I don't see anything happening. This seems to confirm that I didn't understand the paper, or maybe my implementation is wrong.</p>
<p>No, that's not at all &quot;frequency-warping to a Bark scale&quot;. That's just a frequency-domain equalizer (and a bad one, because you ignore the cyclic nature of the convolution theorem of the FFT; compare <a href="https://dsp.stackexchange.com/questions/6220/why-is-it-a-bad-idea-to-filter-by-zeroing-out-fft-bins">this</a> question to understand why applying frequency-domain effects on non-overlapped FFTs is usually a bad idea), with a useless single-pole IIR upfront.</p> <p>It's really not quite clear why you think a single-pole all-pass filter in front of an FFT would do anything acoustically observable to your signal. This <em>might</em> indicate that you haven't understood what an all-pass filter is (and that would be a prerequisite for understanding your papers on frequency-warped filters). What it definitely indicates is that you don't understand what the all-pass system in the paper you cite gets applied to!</p> <p>You're warping the <em>frequencies of a filter</em> (or a filter bank). That's the goal. You're not &quot;warping the signal&quot; (as in your previous question); you're replacing the mathematical concept of &quot;unit delay&quot; <span class="math-container">$z^{-1}$</span> in the description of a filter with something frequency-dependent, thereby transforming the scale at which that filter operates (&quot;warping&quot;).</p> <p>So, your single-pole IIR never appears on its own in the kind of filter you are trying to build.
<em>You mentally replace the element of &quot;delay by 1&quot; with a mathematical operation that is, if it stood on its own, a single-pole IIR</em>; but a single-pole IIR in itself doesn't do any warping of frequencies – that's plain impossible: it's a linear filter, and as such cannot shift energy from one frequency to another; it can only weigh the energy that's there at every frequency individually.</p> <p>So, this might mean you have a more basic misunderstanding of what the involved filtering <em>should</em> be doing for you. When we speak of &quot;Bark scale filtering&quot;, we typically mean that we have a bank of filters, each covering an equal width on the Bark scale. To achieve that, you can either design separate filters for each band, or you can, and that's what the authors assume you do, modify the same filter in a way that makes it become wider (according to Barkhausen's formula) with increasing frequency. See references <code>[24],[25]</code> in your paper!</p>
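For reference, "equal width on the Bark scale" can be illustrated by computing band edges that are uniform in Bark. This sketch uses the Traunmüller approximation of the Bark scale, which is an assumption on my part (the paper's references [24],[25] may use a different critical-band formula):

```python
import numpy as np

def hz_to_bark(f):
    """Traunmueller's approximation of the Bark scale (an assumption here)."""
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_to_hz(z):
    """Inverse of the approximation above."""
    return 1960.0 * (z + 0.53) / (26.28 - z)

fs = 16000.0
z_max = hz_to_bark(fs / 2)
edges_bark = np.linspace(0.0, z_max, 25)   # 24 bands, equal width in Bark
edges_hz = bark_to_hz(edges_bark)          # band edges in Hz, widening upward
print(np.round(edges_hz).astype(int))
```

The printed edges show what the filter bank is supposed to achieve: narrow bands at low frequencies and progressively wider ones toward Nyquist.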
569
spectral analysis
Inconsistency between the units of power spectral density and the definition that people often give
https://dsp.stackexchange.com/questions/65963/inconsistency-between-the-units-of-power-spectral-density-and-the-definition-tha
<p>Perhaps someone can help me resolve something - this is my understanding:</p> <p>In deterministic signal analysis, for a continuous signal <span class="math-container">$x(t)$</span> the <a href="https://en.wikipedia.org/wiki/Energy_(signal_processing)" rel="nofollow noreferrer">signal energy</a> is defined by <span class="math-container">$$E_{\textrm{s}} = \int^{+\infty}_{-\infty} |x(t)|^2\mathrm dt \hspace{1cm} \textrm{Units:}\hspace{0.3cm}[\textrm{signal}^2\cdot \textrm{time}]$$</span> where the subscript <span class="math-container">$s$</span> is to indicate explicitly that we are talking about &quot;signal energy&quot;, and not real physical energy (which would be in units of <em>Joules</em> if you were to divide signal energy by some load impedance).</p> <p>Similarly, the average power of a signal is defined by <span class="math-container">$$P_{\textrm{s}}^\textrm{ av} = \lim_{T\to\infty}\frac{1}{T} \int^{+T/2}_{-T/2} |x(t)|^2\mathrm dt \hspace{1cm} \textrm{Units:}\hspace{0.3cm}[\textrm{signal}^2]$$</span></p> <p>This makes sense because it is the same unit as the <em>rate of signal energy transferred</em>, which is signal power.</p> <p>Therefore, the units of <em>power spectral density</em> should be [<em>signal power per frequency band</em>], or <span class="math-container">$[\textrm{signal}^2 / \textrm{Hz}]$</span>.</p> <p>My problem is that I have seen many times now people who seem to know what they are talking about saying that the power spectral density is given by</p> <p><span class="math-container">$$ S_{xx}(f) = |X(f)|^2 $$</span></p> <p>where <span class="math-container">$X(f)$</span> is the Fourier transform of <span class="math-container">$x(t)$</span>. BUT, the units of this quantity are <strong>not correct</strong>. 
Since the units of the Fourier transform <span class="math-container">$X(f)$</span> are <span class="math-container">$[\textrm{signal}\cdot \textrm{time}]$</span>, then the units of <span class="math-container">$S_{xx}(f)$</span> written above are <span class="math-container">$[\textrm{signal}^2\cdot \textrm{time}^2] = [\textrm{signal}^2\cdot \textrm{time} /\textrm{Hz}]$</span>, which are the units of <em>energy</em> spectral density, not <em>power</em> spectral density. Am I missing something fundamental here? Why do people often write this simple definition of <span class="math-container">$S_{xx}(f)$</span>?</p> <p>See these answers for some examples:</p> <ul> <li><p><a href="https://dsp.stackexchange.com/questions/2341/difference-between-power-spectral-density-spectral-power-and-power-ratios/2342#2342">Dilip Sarwate's answer to <em>Difference between power spectral density, spectral power and power ratios</em></a></p> </li> <li><p><a href="https://dsp.stackexchange.com/questions/59372/power-spectral-density-why-this-2-method-are-equal/59376#59376">Florian's answer to <em>Power spectral density: Why are these two methods equal?</em></a></p> </li> <li><p><a href="https://dsp.stackexchange.com/questions/64833/help-with-obatining-power-spectral-density-of-a-simple-continuous-cosine-using/64837#64837">Hilmar's answer to <em>Help with obatining power spectral density of a simple continuous cosine (using both forms of the definition for PSD)</em></a></p> </li> </ul>
<ul> <li>The OP is correct in their dimensional analysis</li> <li><span class="math-container">$|X(f)|^2$</span> is NOT the power spectral density, despite what other authors might claim. Other authors probably call this the power spectral density because it is close to right and it captures most of the important features without having to delve into technicalities.</li> </ul> <p>Power has dimensions of <span class="math-container">$[\text{signal}^2]$</span>. Energy has dimensions of <span class="math-container">$[\text{power}\cdot\text{time}] = [\text{signal}^2\cdot\text{time}]$</span>. The spectral density of anything has dimensions of <span class="math-container">$[\text{thing}\cdot \text{frequency}^{-1}]$</span>. Thus, power spectral density has dimensions of <span class="math-container">$[\text{signal}^2 \cdot \text{frequency}^{-1}] = [\text{signal}^2\cdot \text{time}]$</span>. Note that it is coincidental that power spectral density has the same dimensions as energy and it should be understood that power spectral density is power per frequency. Also note that the Fourier transform of anything always has dimensions of <span class="math-container">$[\text{thing}\cdot\text{frequency}^{-1}]$</span>.</p> <p>The power spectral density is more nicely defined as follows. 
We define the windowed signal</p> <p><span class="math-container">$$ x_{\Delta t}(t) = \begin{cases} x(t) \text{ for } |t|&lt; \frac{\Delta t}{2}\\ 0 \text{ for } |t| \ge \frac{\Delta t}{2} \end{cases} $$</span></p> <p>The windowed Fourier transform is then</p> <p><span class="math-container">$$ X_{\Delta t}(f) = \int_{t=-\infty}^{+\infty} x_{\Delta t}(t) e^{-i2\pi f t} dt = \int_{t=-\frac{\Delta t}{2}}^{\frac{\Delta t}{2}} x(t) e^{-i2\pi f t} dt $$</span></p> <p>The power spectral density is then defined by</p> <p><span class="math-container">$$ S_{xx}(f) = \lim_{\Delta t\rightarrow \infty} \frac{1}{\Delta t} |X_{\Delta t}(f)|^2 $$</span></p> <p>More properly when dealing with random signals one might take an expectation value of the squared windowed transform.</p> <p>This can be expressed another way. We can define a window function</p> <p><span class="math-container">$$ w_{\Delta t}(t) = \frac{1}{\sqrt{\Delta t}} \theta\left(t-\frac{\Delta t}{2}\right)\theta\left(\frac{\Delta t}{2} - t\right) $$</span></p> <p>Here <span class="math-container">$\theta$</span> is the Heaviside function. And a windowed version of <span class="math-container">$x(t)$</span> given by</p> <p><span class="math-container">$$ x_{w_{\Delta t}}(t) = x(t)w_{\Delta t}(t) $$</span></p> <p>Note that this is the exact same as the windowed function defined above but with a factor of <span class="math-container">$\frac{1}{\sqrt{\Delta t}}$</span> built in. 
The Power spectral density can then be defined equivalently as</p> <p><span class="math-container">$$ S_{xx}(f) = \lim_{\Delta t \rightarrow \infty} |X_{w_{\Delta t}}(f)|^2 $$</span></p> <p>The reason we must work with <span class="math-container">$x_{w_{\Delta t}}(t)$</span> rather than <span class="math-container">$x(t)$</span> is that, if <span class="math-container">$x(t)$</span> has constant power or at least finite power for infinite time, then <span class="math-container">$x(t)$</span> has infinite energy. However, even if <span class="math-container">$x(t)$</span> has infinite energy, <span class="math-container">$x_{w_{\Delta t}}(t)$</span> has finite energy. Note that the window function is not dimensionless but acts so that the finite quantity <span class="math-container">$\int |x_{w_{\Delta t}}(t)|^2 dt$</span> is in fact the average power of <span class="math-container">$x(t)$</span>.</p> <p>We also have the fact that infinite-length signals do not have well-behaved Fourier transforms; for example, the Fourier transform of a pure tone <span class="math-container">$e^{+i2\pi f_0 t}$</span> is a Dirac delta function, i.e. not well behaved. The windowed version of this will have a well-behaved Fourier transform.</p> <p>@Dan Boschen expresses some confusion about reconciling the dimensions of <span class="math-container">$S_{xx}(f)$</span> with the Fourier transform of the autocorrelation function. There is no need for confusion. The units agree.</p> <p><span class="math-container">$$ S_{XX}(f) = \tilde{R}_{xx}(f) = \int R_{xx}(t) e^{-i2\pi ft} dt = \int \langle x(t)x(0)\rangle e^{-i2\pi ft}dt $$</span></p> <p>The expression on the right has dimensions of <span class="math-container">$[\text{signal}^2\cdot \text{time}]$</span> which is the same as the units of power spectral density expressed above. 
This should hint that the Fourier transform of the auto-correlation function is NOT given by <span class="math-container">$|X(f)|^2$</span>...</p> <p><span class="math-container">$R_{xx}(t)$</span> (for stationary <span class="math-container">$x(t)$</span>) is defined as</p> <p>ensemble average: <span class="math-container">\begin{align} R_{xx}(t) = \langle x(t)x(0) \rangle = \int yz f_{x(t),x(0)}(y,z) dy dz \end{align}</span></p> <p><span class="math-container">$f_{x(t),x(0)}(y,z)$</span> is the joint probability density function for the random variables <span class="math-container">$x(t)$</span> and <span class="math-container">$x(0)$</span> so it has dimensions of <span class="math-container">$[\text{signal}^{-2}]$</span>.</p> <p>time average: <span class="math-container">\begin{align} R_{xx}(t) = \langle x(t)x(0) \rangle = \lim_{\Delta t \rightarrow \infty} \frac{1}{\Delta t} \int_{t'=-\frac{\Delta t}{2}}^{\frac{\Delta t}{2}} x(t'+t)x(t') dt' \end{align}</span></p>
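The limiting definition above has a direct discrete analogue: scale the squared DFT by the observation time, and the resulting PSD integrates to the mean power. This makes the units easy to check in code; a small sketch (rectangular window, arbitrary sample rate):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500.0                     # Hz (arbitrary)
N = 2**16
x = rng.standard_normal(N)     # finite-power signal, unit variance

X = np.fft.rfft(x)
# periodogram scaled by 1/(N*fs) = 1/duration, i.e. the 1/(Delta t) above
Pxx = np.abs(X) ** 2 / (fs * N)        # units: signal^2 / Hz
Pxx[1:-1] *= 2                         # one-sided: fold in negative frequencies
df = fs / N

mean_power = np.sum(Pxx) * df          # integral of the PSD over frequency
print(mean_power, np.mean(x ** 2))     # equal (Parseval): power, not energy
```

Without the division by the duration, the same sum would grow with the record length, i.e. it would be an energy spectral density, which is exactly the OP's dimensional complaint.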
570
spectral analysis
How to calculate time domain SNR using known sequence
https://dsp.stackexchange.com/questions/69380/how-to-calculate-time-domain-snr-using-known-sequence
<p>I am using the following formula to calculate SNR of a real world complex baseband signal sampled at 1x Nyquist.</p> <pre><code>SNR = Rxy(tm)^2 / [ Px*Py - Rxy(tm)^2 ] SNR (dB) = 10*log10(SNR) </code></pre> <p>where</p> <pre><code>Rxy(tm) = peak of the cross correlation at time delay, tm Px = power in reference signal Py = power in received signal </code></pre> <p>I verified proper implementation of the formula using simulated real-valued and complex-valued signals with and without noise.</p> <p>On real data the SNR estimates using the above formula are too low (by 10.0+ dB). I manually verified the actual SNR a few different ways. I used spectral analysis to visually measure the signal power to the noise floor. I also measured the signal power to noise power (when signal is off), and both of those techniques give me an answer closer to what I expect.</p> <p>I am flummoxed as to why this equation is not working on real-world signals. Do I need to take the DC bias (mean of data) into account and add that back to the SNR estimate? If I do that then I get values closer to what I expect.</p> <p>Reference: Formula came from Principles of Communications (Tranter, Ziemer) textbook</p>
<p>The peak of the cross correlation should be the transmit signal power times the channel's attenuation.</p> <p>Realizing that, it's really just</p> <p><span class="math-container">\begin{align} \text{SNR} &amp;= \frac{P_\text{signal}}{P_\text{noise}}\\ &amp;= \frac{P_\text{signal}}{P_\text{received} - P_\text{signal}}\\ &amp;= \frac{P_\text{tx}\cdot a_\text{channel}}{P_\text{received} - P_\text{tx}\cdot a_\text{channel}}\\ &amp;= \frac{P_{crosscorr,max}}{P_\text{received}-P_{crosscorr,max}}, \end{align}</span></p> <p>which is</p> <pre><code>=Rxy(tm)^2 / (Py-Rxy(tm)^2) </code></pre> <p>in your notation.</p> <p>So, either that book is wrong or you're missing something there.</p>
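As a sanity check of the algebra, here is a quick simulation with made-up gain and noise values. Note that with a unit-power reference (Px = 1), the book's denominator Px·Py − Rxy² and the one above coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
x = rng.choice([-1.0, 1.0], size=N)   # known reference sequence, Px = 1
a = 0.5                               # hypothetical channel gain
noise = 0.25 * rng.standard_normal(N)
y = a * x + noise                     # received signal (delay already removed)

Px = np.mean(x ** 2)
Py = np.mean(y ** 2)
Rxy = np.mean(x * y)                  # peak of the normalized cross-correlation

snr_book = Rxy ** 2 / (Px * Py - Rxy ** 2)   # Tranter/Ziemer form
snr_ans = Rxy ** 2 / (Py - Rxy ** 2)         # form above; equal here since Px = 1
snr_true = a ** 2 * Px / np.mean(noise ** 2)
print(snr_book, snr_ans, snr_true)           # all close to 4
```

If the real-world estimate is still biased low, DC offsets or imperfect time/phase alignment of the correlation peak are the usual suspects, since both reduce Rxy without reducing Py.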
571
spectral analysis
Purpose of using polyphase filter bank (PFB)
https://dsp.stackexchange.com/questions/24166/purpose-of-using-polyphase-filter-bank-pfb
<p>What is the advantage of using a polyphase filter bank (PFB) for spectral analysis over just using the FFT? In the standard <a href="http://cnx.org/contents/3dea9cf9-32b6-4bf2-940b-cf8d251a0a84@15/Uniformally_Modulated_%28DFT%29_Fi" rel="nofollow">"critically sampled" uniform DFT filterbank</a>, the polyphase decimation/filtering is followed by an $M$-point DFT block, which implements the last step of the PFB in a computationally efficient manner using the FFT. </p> <p>If you have to do an FFT anyways, why bother with the PFB? Is the reason that I can choose a custom prototype low-pass filter on the front-end? Are there some computational savings I'm missing out on?</p> <p>EDIT: If comparing this to a bank of quadrature downconverters, what is the point of using a PFB if the FFT is the same mathematically? It can't be the delay of $N$ samples requried to fill up an FFT block because the decimated branches have $1/N$ the rate, which means you will be waiting the same amount of time on average with either approach. What am I missing?</p>
<p>One advantage of a polyphase filterbank approach is, as you guessed, that you can control the frequency response of each channel. When using a DFT alone, you have limited control over the frequency band covered by each bin (characterized by a <a href="https://en.wikipedia.org/wiki/Dirichlet_kernel" rel="nofollow">Dirichlet kernel</a> in the unwindowed case, or by the frequency response of whichever window function you select). This is sufficient for a lot of applications.</p> <p>In some other applications, however, you might want tighter control over the per-channel frequency response. Say you wanted to construct a DFT-based spectrum analyzer with very particular specifications (e.g. -3 dB response at the midpoint between output bins, -80 dB response at a one-bin spacing from center). You can utilize the polyphase filterbank structure to implement whatever filter is needed to achieve the specified level of performance.</p> <p>Another application is in cases where reconstruction is needed after the channelizer: if you're careful in the design of the polyphase filter, you can actually perform straightforward spectral modification (similar to the "ideal filtering" that many signal processing beginners attempt using the DFT) in the frequency domain, then use a synthesis filterbank to take the composite signal back into the time domain. This structure, with cascaded analysis and synthesis stages, is known as a <a href="http://www.ece.mcgill.ca/~pkabal/papers/1990/Ramachandran1990.pdf" rel="nofollow">transmultiplexer</a>.</p>
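To make the "polyphase filtering followed by an M-point DFT" structure concrete, here is a minimal critically sampled analysis channelizer sketch. The prototype design, the sizes, and the omitted commutator/phase-alignment details are simplifying assumptions, not a production design:

```python
import numpy as np

def pfb_channelize(x, M=8, K=4):
    """Critically sampled M-channel analysis PFB: weight, fold, M-point DFT."""
    N = M * K                                  # prototype filter length
    n = np.arange(N)
    # windowed-sinc prototype low-pass, cutoff around 1/(2M) cycles/sample
    h = np.sinc((n - (N - 1) / 2) / M) * np.hamming(N)
    nblk = (len(x) - N) // M + 1
    out = np.empty((nblk, M), dtype=complex)
    for b in range(nblk):
        seg = x[b * M : b * M + N] * h                       # polyphase weighting
        out[b] = np.fft.fft(seg.reshape(K, M).sum(axis=0))   # fold + DFT
    return out

# a tone at the center of channel 3 lands almost entirely in that channel
n = np.arange(4096)
tone = np.exp(2j * np.pi * (3 / 8) * n)
power = np.mean(np.abs(pfb_channelize(tone)) ** 2, axis=0)
print(np.argmax(power))
```

Changing the prototype taps changes each channel's frequency response; with a plain windowed FFT you only get to pick the window, which is the whole point of the paragraph above.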
572
spectral analysis
Filter length in maximally decimated polyphase channelizer
https://dsp.stackexchange.com/questions/91992/filter-length-in-maximally-decimated-polyphase-channelizer
<p>I would like to know whether there is a condition in selecting the number of channels <span class="math-container">$M$</span> and filter length <span class="math-container">$N$</span>. As of now I am trying to design a channelizer where the data length <span class="math-container">$M$</span> is much less than the FIR filter length <span class="math-container">$N$</span>.</p> <p>Right now I am referring to the paper &quot;<a href="https://ucsdwcsng.github.io/channelizer" rel="nofollow noreferrer"><em>High Resolution Spectral Analysis and Signal Segregation Using the Polyphase Channelizer</em></a>&quot; for this implementation. The paper provides a solution to the problem by stacking the filter coefficients such that the the number of taps per channel will increase (where filter length <span class="math-container">$N=M \cdot K$</span> and finally the number of filter taps per channel is <span class="math-container">$K$</span>).</p> <p>Finally, what I understood from the paper is to do an element-wise multiplication with the input data with each of the filter taps of length (<span class="math-container">$K$</span> in this example) and do an average. I am not sure my understanding is correct. So if someone can give a clarity on the implementation it would be helpful.</p>
<blockquote> <p>The paper provides a solution to the problem by stacking the filter coefficients such that the the number of taps per channel will increase (where filter length N=M⋅K and finally the number of filter taps per channel is K).</p> </blockquote> <p>That's not specific to the paper, that's just how polyphase systems generally work: Take a prototype filter of length <span class="math-container">$N$</span>, and deinterleave its input or its output (or both, for rational resamplers). So, I think this will be a pretty steep learning curve to learn from the paper alone!</p> <p>You'll want some textbook to explain polyphase channelizers as used in the paper. Luckily, one of the authors of that paper basically invented the stuff and wrote a classic text book on it; it's good for your needs! You'll want to read Harris, Fred. <em>Multirate Signal Processing for Communication Systems</em>; I think the 2004 version is cheap on the used book market. There, you need chapters 1, 2, 3.1 and 6, and you'll get the theory (and quite a bit of practice) of understanding and designing polyphase channelizers.</p> <blockquote> <p>Finally, what I understood from the paper is to do an element-wise multiplication with the input data with each of the filter taps of length (K in this example) and do an average</p> </blockquote> <p>Can't read the paper, but: If that's the case, that's not a polyphase channelizer. You process the input samples in a round-robin fashion through the <span class="math-container">$M$</span> paths (each of length <span class="math-container">$N/K$</span>) of your polyphase filter; to calculate the <span class="math-container">$M$</span> output channels, you typically have to add a DFT (in the implementation of an FFT) to the end.</p>
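The tap stacking described above is the standard polyphase decomposition, and the identity it relies on — filtering then decimating by M equals the sum of M branch filters of K taps each, running at the decimated rate — can be checked numerically. A sketch with arbitrary sizes:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
M, K = 4, 8
N = M * K
h = rng.standard_normal(N)       # prototype filter of length N = M*K
x = rng.standard_normal(1000)

# direct form: filter at the input rate, then keep every M-th output sample
y_direct = lfilter(h, 1, x)[::M]

# polyphase form: path p gets the K taps h[p], h[p+M], ... and the input
# delayed by p samples and decimated by M; the branch outputs are summed
y_poly = np.zeros_like(y_direct)
for p in range(M):
    e_p = h[p::M]                                        # K taps of path p
    x_p = np.concatenate([np.zeros(p), x])[: len(x)][::M]
    y_poly += lfilter(e_p, 1, x_p)

print(np.allclose(y_direct, y_poly))
```

The averaging the question mentions is not part of this identity; each path is a genuine K-tap convolution, and the channelizer then applies an M-point DFT across the path outputs.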
573
spectral analysis
Detect repetitive units within signals
https://dsp.stackexchange.com/questions/22306/detect-repetitive-units-within-signals
<p>I have several signals that consist of repetitive units. In the figure you'll clearly see the variability of the signals, that increases top down. The first signal is super repetitive and units are indicated with green lines. In the third, you'll see peaks and in the middle a little insertion that I know consists of rather diverging units, which still are units however (red lines). The remaining three signals display the variation more.</p> <p>Which signal processing / machine learning tool should I use in order to detect these units? With thresholding it works to find the significant peaks, but once signals get funky it's really difficult to accurately detect unit positions.</p> <p><img src="https://i.sstatic.net/4Zdqe.png" alt="enter image description here"> Edit: </p> <p>I made some progress with spectral analysis and plotted a filtered signal over the initial signals. Indicated by the blue arrow is a region variable repeats. Here, the amplitude of the original and filtered signal do not match. Same for the red arrow. Similar effects for signal further down.<img src="https://i.sstatic.net/xpqPM.png" alt="enter image description here"> </p>
<p>Your repetitive signal seems to sit at lower frequencies than the noise. I would run an FFT on all 6 sequences, filter out (zero) the frequencies that are clearly attributable to noise, and then run an inverse FFT. If you know the repetition rate, you might select the one frequency that carries the correct information and use its phase to locate the desired peak. I would try to include exactly 20 cycles if possible (or 16 or 32).</p>
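A rough sketch of this suggestion (the toy signal contains exactly 20 cycles, as suggested; the cutoff bin is chosen by eye and is an assumption): zero the FFT bins attributed to noise and invert.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t = np.arange(n)
# toy trace: exactly 20 cycles of a repetitive unit, buried in wideband noise
clean = np.sin(2 * np.pi * 20 * t / n)
x = clean + 0.5 * rng.standard_normal(n)

X = np.fft.rfft(x)
X[60:] = 0                       # zero bins clearly attributable to noise
filtered = np.fft.irfft(X, n)    # low-rate repetitive component only

# unit boundaries can now be read off e.g. from upward zero crossings
print(np.corrcoef(filtered, clean)[0, 1])   # close to 1
```

For the messier traces in the figure, the cutoff would need to be set per signal, or replaced by keeping only the bin at the known repetition rate and its neighbors.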
574
spectral analysis
How to eliminate a cyclic spectrum estimation window artifact?
https://dsp.stackexchange.com/questions/83096/how-to-eliminate-a-cyclic-spectrum-estimation-window-artifact
<p>I am using an implementation of the averaged cyclic periodogram (section 3.2.4 in Antoni, Jérôme. &quot;Cyclic spectral analysis in practice.&quot; Mechanical Systems and Signal Processing 21.2 (2007): 597-630.). The cyclic spectrum is estimated from windowed DFTs as <span class="math-container">$$\hat{P}_{YX}^{(W)}\left(f,\alpha;L\right)=\frac{1}{K\Delta}\sum_{k=0}^{K-1}X_{N_w}^{k}\left(f+0.5\alpha \right)X_{N_w}^{k}\left(f-0.5\alpha \right)^*$$</span> where <span class="math-container">$$X_{N_w}^{k}\left(f\right)=\Delta\sum_{n=kR}^{n=kR+N_w-1}w_k[n]y[n]e^{-j2\pi fn\Delta}$$</span></p> <p>This implementation is based on the attached <a href="https://github.com/scipy/scipy/pull/15519" rel="nofollow noreferrer">scipy pull request</a>.</p> <p>Now, if I apply this implementation with the Hann window to normal noise, I get an artifact: an elevation at the modulation frequency corresponding to half the window length (<span class="math-container">$\frac{2F_s}{N_w}$</span>). A boxcar window seems to eliminate this problem. 
Is it the only and the best solution?</p> <pre><code>import numpy as np
from scipy.signal import csd, windows
import matplotlib.pyplot as plt

fs = 20000
s = np.random.normal(size=fs)
alpha = np.arange(5, 200)
# window = windows.boxcar(256)
window = windows.hann(256)

BiSpectrum = []
for alpha1 in alpha:
    x = s * np.exp(-1j * np.pi * (alpha1 / fs) * np.arange(s.shape[-1]))
    y = s * np.exp(1j * np.pi * (alpha1 / fs) * np.arange(s.shape[-1]))
    f, Pxy = csd(x, y, fs=fs, window=window)
    Pxy = Pxy[f &gt; 0]
    f = f[f &gt; 0]
    BiSpectrum.append(Pxy)

BiSpectrum = np.abs(BiSpectrum).T

plt.figure(figsize=[19, 9])
plt.pcolormesh(alpha, f, BiSpectrum, vmax=np.percentile(BiSpectrum, 99.99))
plt.xlabel('Modulation frequency $\\alpha$ [Hz]')
plt.ylabel('Carrier Frequency $f$ [Hz]')
plt.colorbar()
plt.show()
</code></pre> <p><a href="https://i.sstatic.net/gbluZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gbluZ.png" alt="enter image description here" /></a></p>
<p>Answer found in Boustany, Roger, and Jérôme Antoni. &quot;Cyclic spectral analysis from the averaged cyclic periodogram.&quot; IFAC Proceedings Volumes 38.1 (2005): 166-171.</p> <p>The article states that the artifact is due to leakage and that it is solved by increasing the overlap between the CSD windows. A sensitivity test is presented via the envelope spectrum, obtained by summing the bispectrum over the carrier frequency. Simulation results are in agreement with the article.</p> <p><a href="https://i.sstatic.net/0hrJ6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0hrJ6.png" alt="enter image description here" /></a></p>
575
spectral analysis
How to convert SPL [dB] in PSD [Watt/Hz]?
https://dsp.stackexchange.com/questions/96486/how-to-convert-spl-db-in-psd-watt-hz
<p>I want to calculate vibro-acoustic analysis of plate in program Ansys workbench Mechanical.</p> <p>Tell me please how to convert Sound Pressure Level [dB] in Power Spectral Density [Watt/Hz] ?</p>
<p>Tricky.</p> <p>Assuming a level of <span class="math-container">$L_{SPL}$</span> in dB and that your application uses the standard sound pressure reference of <span class="math-container">$p_{ref} = 20\mu Pa$</span>, we can calculate the RMS sound pressure as</p> <p><span class="math-container">$$p = p_{ref} \cdot 10^{L_{SPL}/20} \tag{1}$$</span></p> <p>The next step would be to calculate the intensity, which is the product of the pressure and the particle velocity. This will require a fair number of assumptions and an analysis of your specific application. For example, if the sound pressure was measured in free air reasonably far away from any surface or source, you can approximate the particle velocity as <span class="math-container">$v = \frac{p}{Z_{air}}$</span> where <span class="math-container">$Z_{air} = c_{air} \cdot \rho_{air}= 420 \frac{Pa \; s}{m}$</span> is the free field acoustic impedance of air. This impedance is the product of the speed of sound and the density of the medium, so you can calculate it for other media as well (provided they are reasonably homogeneous and fluid).</p> <p>So now we have the intensity</p> <p><span class="math-container">$$I = \frac{p^2}{Z}\tag{2} \left[ \frac{W}{m^2}\right]$$</span></p> <p>Intensity is defined at any specific point in the sound field. Power, however, is an integral property. You need to integrate the intensity over a closed surface. This will either require a whole lot more measurements or some aggressive assumptions about the shape of your sound field (free-field, cylindrical, diffuse field, plane wave, etc.). The integration surface will also define WHAT power you are going to measure. 
Could be radiated power of the source, average power in the sound field, reflected power, etc.</p> <p>So you'll need something like</p> <p><span class="math-container">$$P = \unicode{x222F} I(x,y,z) dA \left[ W \right]$$</span></p> <p>where the intensity is a function of the 3 spatial coordinates x, y &amp; z and you integrate over a closed surface. This will finally be a power in Watts.</p> <p>To get to the Power Spectral Density you would need to do a spectral analysis BEFORE step 1, i.e. you would need a frequency-dependent SPL and not just a single number. One common form is to represent the SPL in different frequency bands (octave, third octave, etc.). In this case you can then do the calculation in each band separately and just divide the final band power by the width of the band (which may change with frequency). This has the added advantage that you can adjust your power integration with frequency. The shape of a sound field tends to change a lot with frequency (and wavelength).</p>
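Steps (1) and (2) translate directly into code; everything past that depends on your field assumptions. A sketch for the plane-wave case, using the impedance value from the answer, a hypothetical 1 m² integration surface, and an approximate third-octave bandwidth (all assumptions, not part of any standard workflow):

```python
P_REF = 20e-6     # Pa, standard SPL reference pressure
Z_AIR = 420.0     # Pa*s/m, free-field impedance of air (value from the answer)

def spl_to_pressure(spl_db):
    """Eq. (1): RMS pressure in Pa from SPL in dB re 20 uPa."""
    return P_REF * 10 ** (spl_db / 20)

def pressure_to_intensity(p_rms, z=Z_AIR):
    """Eq. (2): plane-wave intensity in W/m^2."""
    return p_rms ** 2 / z

# hypothetical numbers: 94 dB SPL in a third-octave band centered at 1 kHz,
# integrated over an assumed 1 m^2 surface in a plane-wave field
p = spl_to_pressure(94.0)           # ~1 Pa
i = pressure_to_intensity(p)        # ~2.4e-3 W/m^2
area = 1.0                          # m^2, hypothetical integration surface
bandwidth = 0.232 * 1000.0          # Hz, approximate third-octave width at 1 kHz
psd = i * area / bandwidth          # W/Hz for that band
print(p, i, psd)
```

The 94 dB case is a convenient check, since 94 dB SPL corresponds almost exactly to 1 Pa RMS.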
576
spectral analysis
Forecasting with ARMA models, from a filter point of view
https://dsp.stackexchange.com/questions/23606/forecasting-with-arma-models-from-a-filter-point-of-view
<p><a href="https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average" rel="nofollow">ARMA</a> models are afaik just filters with transfer function $ {MA(z) \over AR(z)} \equiv {FIR(z) \over IIR(z)} $ .<br> However forecasters of stock prices, market trends ...<br> seem to be mainly statisticians, with their own vocabulary and culture.<br> For example, "signal-to-noise ratio" is rarely mentioned; for another, differencing must increase noise. Can anyone suggest either</p> <ul> <li>textbooks or introductory courses on ARMA forecasting from a filter or signal processing point of view</li> <li>websites with real time series and running code to ARMA-model them ?</li> </ul> <p>(I'm interested in ARMA models for prediction, not in spectral analysis as such. ARMA models may well be wrong for prediction from short, noisy data -- what will the economy do next year ? -- hence the need for real examples.)</p> <p><hr> Added: Some 40 years ago, R.W. Hamming wrote in <a href="https://books.google.com/books?isbn=0486319245" rel="nofollow">Digital Filters</a>:</p> <blockquote> <p>... We have a predicting filter without finding, or even talking about, the transfer function. Statisticians often do this, and neglect to examine the corresponding transfer function of the formula, which can often shed some light on the whole system.</p> </blockquote>
<p>ARMA divides the signal into two parts and models each part. </p> <p>Financial time series are corrupted by different types of correlated and uncorrelated noises: some have definite functional forms that allow modeling, while others are more difficult and require approximations. In addition, financial or economic time series may exhibit long-memory processes (black noise) or may have other long-range dependencies or other non-stationarity issues. It is for this reason that, beyond the MA part, it is hard to tell whether a series is stochastic or deterministically chaotic, or a mix of the two.</p> <p>The two terms of ARMA include an autoregressive term and a moving-average term. The deterministic part of a time series is modeled with the ARIMA, ARFIMA, etc. family of models and the stochastic part using the ARCH and GARCH family of models (volatility is usually modeled this way, as a large portion of the stochastic part is white noise, which cannot be forecasted, though its variance can). </p> <p>Differencing, as you mention, may eliminate some of the signal; however, it is necessary for regression in many cases to make a time series weak-sense stationary, which is a requirement for a meaningful and valid regression. </p> <p>Signal-to-noise ratio is hard to guess for any unknown noise+signal composition. However, there are some ways to approximate it by determining what noises affect your signal and what weight of noise corresponds to each component.</p> <p>I would suggest, from a signal processing point of view, googling / taking a look at "financial / non-stationary time series signal processing" papers. There are many methods and ways to extract information from a signal. 
</p> <p>I would suggest, from a modeling point of view, that you look at ARIMA or GARCH presentations from academia or papers from leading economics schools (LSE etc.).</p> <p>I would also suggest taking a look at / googling data preprocessing for regression (including the terms unbalanced data set, stationarity, normality, linearity, additivity, serial correlation, homoscedasticity, spurious regression, cointegration, and transformation of variables for financial time series). </p>
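To connect this back to the filter view in the question: an ARMA model is the filter $MA(z)/AR(z)$ driven by white noise, and its transfer function can be inspected directly, as Hamming suggests. A sketch with made-up, stable coefficients (not fitted to any data):

```python
import numpy as np
from scipy import signal

# Hypothetical ARMA(2, 1) coefficients; polynomials in z^-1.
ar = np.array([1.0, -0.75, 0.25])   # AR(z) = 1 - 0.75 z^-1 + 0.25 z^-2
ma = np.array([1.0, 0.4])           # MA(z) = 1 + 0.4 z^-1

# Transfer function H(z) = MA(z)/AR(z) evaluated on the unit circle.
w, h = signal.freqz(ma, ar, worN=512)
psd_shape = np.abs(h) ** 2          # implied (unnormalized) process spectrum

# The same model viewed as a filter: shape white noise into the process.
rng = np.random.default_rng(0)
x = signal.lfilter(ma, ar, rng.standard_normal(1000))
```

Plotting `np.abs(h)` is exactly the "examine the transfer function" step that Hamming says statisticians often skip.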
577
spectral analysis
Extraction of non-sinusoidal repetition rates
https://dsp.stackexchange.com/questions/2119/extraction-of-non-sinusiodal-repetition-rates
<p>I have an auto-correlation function that was generated from a signal, and I am trying to extract its 'repetition rate' in order to calculate the dominant frequency of the pulse, but I am not exactly clear on how to do this. </p> <p>Here are two cases, labelled 'good' and 'bad' to mark best/worst case scenarios. What methodologies exist that I can use here? Since the repetitions are not sinusoidal, the spectrum does not seem to yield good information. </p> <p>Here is the original 'good' case:</p> <p><img src="https://i.sstatic.net/yQPxR.jpg" alt="enter image description here"></p> <p>followed by its auto-correlation. <img src="https://i.sstatic.net/SUzfn.jpg" alt="enter image description here"></p> <p>Similarly, here is the original 'bad' case:</p> <p><img src="https://i.sstatic.net/n4QZf.jpg" alt="enter image description here"></p> <p>again followed by its auto-correlation:</p> <p><img src="https://i.sstatic.net/l70Z9.jpg" alt="enter image description here"></p> <p><strong>EDIT: Raw data files as requested:</strong></p> <p><a href="http://dl.dropbox.com/u/4724281/raw1.mat" rel="nofollow noreferrer">Raw1</a></p> <p><a href="http://dl.dropbox.com/u/4724281/raw2.mat" rel="nofollow noreferrer">Raw2</a></p> <p><strong>EDIT On Feedback:</strong></p> <p>Spectral analysis was suggested; however, I did not have much luck with it, as the spectrum does not look too sharp around the region where you would expect it to. I think this may be due to the fact that the signal is not a repetitive sinusoid, and so projecting it onto sinusoidal bases does not do too well. </p>
<blockquote> <p>You are right that the repetition is around 650, but how exactly do I compute that automatically? Seems like a peak-picking problem to me? Or is there some other method that can be used?</p> </blockquote> <p>Yes, it's just peak-picking. Your period is the x value of the first strong peak:</p> <p><img src="https://i.sstatic.net/q9VVL.jpg" alt="enter image description here"></p> <p>Your peaks are all similar in height, probably because you're doing the autocorrelation using the FFT? So it's a circular autocorrelation. You can </p> <ol> <li>make it non-circular by zero-padding before doing the FFT, or </li> <li>skew the plot you currently have by adding a linear function that emphasizes the peaks close to 0 and reduces the height of the farther ones.</li> </ol> <p>It depends on what you're specifically looking for. In either case you then have to pick the highest peak that's not at 0 lag. </p> <p>I adapted my script from <a href="https://gist.github.com/255291" rel="nofollow noreferrer">here</a> and it (surprisingly) worked without any tweaking:</p> <p><img src="https://i.sstatic.net/r8EV9.png" alt="enter image description here"></p> <p>raw1 x = 709.23</p> <p><img src="https://i.sstatic.net/0DGSu.png" alt="enter image description here"></p> <p>raw2 x = 710.77</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from scipy.signal import fftconvolve

#sig = scipy.io.loadmat('raw1.mat')['raw1'][0]
sig = scipy.io.loadmat('raw2.mat')['raw2'][0]

# Calculate autocorrelation (same thing as convolution, but with
# one input reversed in time)
corr = fftconvolve(sig, sig[::-1], mode='full')

# throw away the negative lags, so x = 0 means 0 lag
corr = corr[len(corr)//2:]

# Find the first low point from the derivative
d = np.diff(corr)
start = np.nonzero(d &gt; 0)[0][0]

# Starting at the first low point, find the highest peak
peak = np.argmax(corr[start:]) + start

# Fit a parabola to estimate a more precise location of the peak
# (parabolic() is defined in the gist linked above)
px, py = parabolic(corr, peak)

plt.plot(corr)
plt.axvline(px, color='r')
print(px)
</code></pre> <p>The variable <code>peak</code> is the integer x value at the peak (709). The <a href="https://gist.github.com/255291#file_parabolic.py" rel="nofollow noreferrer">parabolic function</a> then fits a parabola to the peak and tries to get a more precise estimate (709.232...), which may or may not be relevant to what you're doing.</p> <p>The Python <code>fftconvolve()</code> function does zero-padding internally to produce a linear convolution, <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow noreferrer">like <code>convolve()</code> does</a>.</p>
578
spectral analysis
Paper replication: Validating the proper way to pass .wav audio breathing data through a bandpass filter
https://dsp.stackexchange.com/questions/76605/paper-replication-validating-the-proper-way-to-pass-wav-audio-breathing-data-t
<p>I am working on trying to apply a low and high pass filter to an audio file that contains a set of exhalations over a microphone. The inhalations have been cut out of the file, and the exhalations are stitched together in the file. I am attempting to replicate this <a href="https://jamanetwork.com/journals/jamaotolaryngology/fullarticle/409837" rel="nofollow noreferrer">paper</a> where they have set 10 and 150 Hz cutoffs on their microphone data. I have attached the relevant portion of the paper at the end of the post.</p> <p>I am currently following the code linked at <a href="https://dsp.stackexchange.com/questions/56604/bandpass-filter-for-audio-wav-file">this</a> DSP stack exchange post, but am unclear on what an appropriate measure for the 'order' parameter would be when using the Butterworth bandpass filter. Currently, with an order of 5, it seems to filter out all of the audio from the .wav file... what is the effect of the order on the input data?</p> <p>The end goal is to perform a spectral analysis on the filtered data. The unfiltered file can be found <a href="https://drive.google.com/file/d/16QP9Jk4-fRqNHg4ZPupwk_7OHnl5G1Zy/view?usp=sharing" rel="nofollow noreferrer">here</a>.</p> <p>Thank you in advance for any help. <a href="https://i.sstatic.net/R9zoj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R9zoj.png" alt="enter image description here" /></a></p>
<p>From the paper</p> <blockquote> <p>we used a microphone that has a low-pass filtered with a cutoff frequency at 10 Hz and a high-pass filtered at 150 Hz and is amplified by 20 dB</p> </blockquote> <p>This makes no sense whatsoever. If you lowpass filter audio at 10 Hz, you have nothing left. I'm guessing it's a typo. Probably it's supposed to be 10 kHz.</p> <p>A bandpass from 10Hz to 150 Hz also makes no sense since the analysis frequencies are much higher. Example:</p> <blockquote> <p>The nasal sounds were calculated for each of the nasal cavities and a 2000- to 4000-Hz frequency interval was used for evaluation.</p> </blockquote>
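On the order question the OP raised: one plausible reason an order-5 bandpass "filters out all of the audio" is numerical fragility of the polynomial (b, a) filter form when the band edges are tiny fractions of the Nyquist frequency. A hedged sketch (the 44.1 kHz sample rate is an assumption) showing the more robust second-order-sections form:

```python
import numpy as np
from scipy import signal

fs = 44100.0           # assumed audio sample rate
lo, hi = 10.0, 150.0   # band from the paper, as printed

# Polynomial (b, a) form: with band edges this far below Nyquist,
# higher orders are prone to numerical trouble; check the pole radii.
b, a = signal.butter(5, [lo / (fs / 2), hi / (fs / 2)], btype='band')
max_pole_radius = np.max(np.abs(np.roots(a)))   # > 1 means unstable

# Second-order-sections form is numerically robust:
sos = signal.butter(5, [lo, hi], btype='band', fs=fs, output='sos')
x = np.random.default_rng(0).standard_normal(int(fs))   # 1 s of noise
y = signal.sosfiltfilt(sos, x)                          # zero-phase filtering
```

Higher order means a steeper transition between passband and stopband, not more attenuation inside the passband, so a stable order-5 filter should not silence the audio by itself.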
579
spectral analysis
Kullback-Leibler Distance of Spectral Data
https://dsp.stackexchange.com/questions/37279/kullback-leibler-distance-of-spectral-data
<p>I am currently reading through <a href="https://www.cs.cmu.edu/~rbd/papers/dannenberg-goto-structure-2009.pdf" rel="nofollow noreferrer">Music Structure and Analysis from Acoustic Signals</a> and am having some difficulty in understanding how the modified Kullback-Leibler distance is calculated. (I am just very recently starting to get into audio analysis and back into shape regarding more applied math, so this could just be a case of me getting confused by some symbols!)</p> <p>I'm specifically having some trouble understanding the <strong><em>Segmentation by Clustering</em></strong> section. The formula for the proposed KL distance is:</p> <p>$$KL2(A, B) = KL(A, B) + KL(B, A)$$</p> <p>Expanded out, this yields:</p> <p>$$KL2(A,B) = \left(\frac{\mathrm{Cov}[A]}{\mathrm{Cov}[B]}\right) + \left(\frac{\mathrm{Cov}[B]}{\mathrm{Cov}[A]}\right) + (\mu_A - \mu_B) \cdot \left(\frac{1}{\mathrm{Cov}[A]} + \frac{1}{\mathrm{Cov}[B]}\right)$$</p> <p>Here's where I'm running into issues:</p> <ol> <li><p>This formula is expected to be used between two matrices of spectral data. So, basically, how do I compute the covariance? It seems I do it between each feature vector. That's easy enough, but how do I combine them so that they're usable in this formula? It shouldn't be a vector of covariances, right? How could they be divided, then?</p></li> <li><p>Just so I know I'm correct: the dot product means that the means have to be vectors, right? If that's the case, then what happens if the segments of audio that I'm looking at happen to be different lengths? Will the resulting vector from that subtraction just be the length of the longer vector, with the tail end just being either positive or negative entries based on whether A or B had more entries?</p></li> </ol> <p>Apologies if this is confusing! Any help is appreciated.</p>
<p>The expression in your question seems to have been written for univariate Gaussians. For multivariate Gaussians, $KL(A,B) = \frac{1}{2}\left[\log\frac{|\Sigma_B|}{|\Sigma_A|} - d + Tr(\Sigma_B^{-1}\Sigma_A) + (\mu_B - \mu_A)^T \Sigma_B^{-1}(\mu_B - \mu_A)\right]$ where $d$ is the dimensionality of the feature vectors. </p> <p>(<a href="https://stats.stackexchange.com/questions/60680/kl-divergence-between-two-multivariate-gaussians">https://stats.stackexchange.com/questions/60680/kl-divergence-between-two-multivariate-gaussians</a>)</p> <p>So </p> <p>\begin{eqnarray*} KL2(A,B) &amp;:=&amp; KL(A,B)+KL(B,A)\\ &amp;=&amp; \frac{1}{2}\left[ - 2d + Tr(\Sigma_B^{-1}\Sigma_A) + Tr(\Sigma_A^{-1}\Sigma_B) + (\mu_B - \mu_A)^T (\Sigma_A^{-1}+\Sigma_B^{-1})(\mu_B - \mu_A)\right]. \end{eqnarray*}</p> <p>I don't think you compute this for the spectrograms themselves but for feature vectors (some of the features may be derived from the spectrogram).</p>
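A sketch of this symmetrized formula in NumPy, assuming each segment is summarized by the sample mean and covariance of its feature vectors. Note that the two segments may contain different numbers of frames (which also answers question 2); the small diagonal regularization is my addition for numerical safety:

```python
import numpy as np

def kl2(seg_a, seg_b, reg=1e-9):
    """Symmetrized KL distance between Gaussian fits of two segments.

    seg_a, seg_b: (n_frames, d) arrays of feature vectors; the frame
    counts may differ, since only the fitted mean and covariance enter.
    """
    d = seg_a.shape[1]
    mu_a, mu_b = seg_a.mean(axis=0), seg_b.mean(axis=0)
    cov_a = np.cov(seg_a, rowvar=False) + reg * np.eye(d)
    cov_b = np.cov(seg_b, rowvar=False) + reg * np.eye(d)
    inv_a, inv_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    dm = mu_b - mu_a
    return 0.5 * (np.trace(inv_b @ cov_a) + np.trace(inv_a @ cov_b)
                  - 2 * d + dm @ (inv_a + inv_b) @ dm)

rng = np.random.default_rng(0)
seg_a = rng.standard_normal((200, 5))         # 200 frames of 5-dim features
seg_b = rng.standard_normal((300, 5)) + 1.0   # a different length is fine
d_ab = kl2(seg_a, seg_b)
```

The distance is zero for identical fits, symmetric in its arguments, and grows with the mean/covariance mismatch.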
580
spectral analysis
Why is the Power Spectral Density (PSD) used over an analysis with the Fast Fourier Transform (FFT)?
https://dsp.stackexchange.com/questions/70103/why-is-it-used-the-power-density-spectrum-psd-over-an-anylisis-with-the-fast-f
<p>I'm currently working with physiological signals (PPG and GSR) for emotion recognition but, from my research, I've found out that almost everyone in that area uses a PSD analysis over an FFT analysis. I've been reading about them and found out that the PSD helps give a clearer view of the spectrum regardless of the amount of data that you have, as described in this blog <a href="https://blog.endaq.com/why-the-power-spectral-density-psd-is-the-gold-standard-of-vibration-analysis" rel="nofollow noreferrer">https://blog.endaq.com/why-the-power-spectral-density-psd-is-the-gold-standard-of-vibration-analysis</a>, and also because it is supposed to be used with random signals; GSR and PPG are signals that have some random nature within them. Despite this, I still can't grasp the intuition for why a PSD is used over an FFT analysis.</p>
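One way to build the missing intuition numerically: for a random signal, a single full-length periodogram has bin-to-bin variance that does not shrink as you collect more data, while Welch's averaged PSD does. A toy sketch (white noise standing in for a random signal, not real PPG/GSR data):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0
x = rng.standard_normal(100_000)   # white-noise stand-in for a random signal

# One full-length periodogram: roughly unbiased, but each bin keeps
# ~100% relative standard deviation no matter how long the record is.
f1, p1 = signal.periodogram(x, fs=fs)

# Welch: average many short periodograms -> far lower variance per bin.
f2, p2 = signal.welch(x, fs=fs, nperseg=1024)

spread_fft = np.std(p1[1:]) / np.mean(p1[1:])
spread_welch = np.std(p2[1:]) / np.mean(p2[1:])
```

The trade for the smoother estimate is coarser frequency resolution, which is usually acceptable for broadband physiological signals.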
581
spectral analysis
FFT of signal data with windowing, overlapping and averaging
https://dsp.stackexchange.com/questions/85303/fft-of-signal-data-with-windowing-overlapping-and-averaging
<p>This is my first ever question here, so the help is really appreciated.</p> <p>I am performing an FFT on a signal. I want to apply windowing, 50% overlap, and averaging to the signal. There is a function <code>scipy.signal.welch</code> to perform this automatically, but the output is a power spectral density. I want the output as both magnitude and phase shift, but from a power spectral density only the magnitude is achievable. Is there a way to compute the phase shift from the power spectral density, or a simple way to do this analysis in the form of an FFT rather than a power spectral density?</p> <p>I know how to apply windowing in python but I do not know how to do the overlapping and averaging manually.</p> <p>Below is my code:</p> <pre><code>import numpy as np
from numpy.fft import fft, ifft, fftshift, fftfreq
import pandas as pd
import matplotlib.pyplot as plt
from scipy import signal
import scipy.fft

data = pd.read_csv('lucid_1p34g_1024fps_5mins.csv')
ref = data.loc[:,&quot;Input 0&quot;]
sensor1x = data.loc[:,&quot;Input 1&quot;]
sensor1y = data.loc[:,&quot;Input 2&quot;]
sensor1z = data.loc[:,&quot;Input 3&quot;]

fs = 1024
blockSize = 1024

f, Pxx = signal.welch(sensor1z, 1024, window='hann', nperseg=blockSize, noverlap=512)
plt.plot(f, Pxx)  # power spectral density plot
plt.show()

&quot;&quot;&quot;Manual Calculation&quot;&quot;&quot;
N = len(sensor1z)
n = np.arange(N)
T = N/fs
freq = n/T

window = np.hanning(N)

f1z = fft(sensor1z)  # fft transform of input 3
plt.plot(freq, np.abs(f1z))
plt.show()
</code></pre>
<blockquote> <p>I want to perform windowing, 50% overlapping and averaging to the signal</p> </blockquote> <p>This makes sense for magnitude, but not for phase.</p> <blockquote> <p>Is there a way to compute phase shift from power spectral density</p> </blockquote> <p>No. The PSD is computed by averaging magnitude spectra together, so there is no phase information.</p> <blockquote> <p>or a simple way to do this analysis in the form of FFT rather than in power spectral density</p> </blockquote> <p>Yes, what you want is simply the phase of <code>f1z</code>:</p> <pre><code>plt.plot(freq, np.angle(f1z)) </code></pre> <hr> <p>Depending on what information you want from the phase, you might want to unwrap / normalize the phase:</p> <pre><code>plt.plot(freq, np.unwrap(np.angle(f1z)))
# or, in degrees (unwrap in radians first, then convert):
plt.plot(freq, np.unwrap(np.angle(f1z)) * 180 / np.pi)
</code></pre> <p><strong>EDIT per the OP's request</strong></p> <p>What you are asking is a coding question: you already know how to perform an fft and get magnitude and phase. What you want is to do this on overlapping segments of your signal, then average.</p> <p>Again, I'm not going to write the code for you, but here is an approach:</p> <ol> <li><p>Define segment length <span class="math-container">$N$</span>. To do this, figure out what frequency resolution <span class="math-container">$d_f$</span> you're happy with, and compute <span class="math-container">$N = \text{ceil}(f_s/d_f)$</span></p> </li> <li><p>Define overlap <span class="math-container">$R$</span>: you can experiment with this, common ones are <span class="math-container">$R = 1/2$</span> or <span class="math-container">$R = 1/4$</span> for example.</p> </li> <li><p>Define a window function of length <span class="math-container">$N$</span>. A <a href="https://en.wikipedia.org/wiki/Hann_function" rel="nofollow noreferrer">Hann</a> window for example. 
Or if you just want a rectangular window, go on to step 4 and disregard the <code>np.multiply</code> operation.</p> </li> <li><p>Now here comes the coding part:</p> <ul> <li>do fft on <code>np.multiply(data(0:N-1), window(0:N-1))</code>.</li> <li>extract magnitude and phase and store both somewhere</li> <li>advance by the hop size <code>M = R*N</code>: do fft on <code>np.multiply(data(M:N-1+M), window(0:N-1))</code></li> <li>extract magnitude and phase and store both</li> <li>advance by another <code>M</code>: do fft on <code>np.multiply(data(2M:N-1+2M), window(0:N-1))</code></li> <li>extract magnitude and phase and store both</li> <li>etc</li> </ul> <p>This can be done in a loop. Once you have all your magnitude and phase arrays, just average them together.</p> </li> </ol> <p>FYI, this is just a naive implementation of the Short Time Fourier Transform that @EricCanton mentions in his <a href="https://dsp.stackexchange.com/a/85308/63763">answer</a>.</p>
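A minimal sketch of the loop enumerated in step 4, assuming `data` is a 1-D NumPy array (and, as noted at the top of the answer, averaging magnitudes is meaningful while averaging phases generally is not):

```python
import numpy as np

def averaged_spectrum(data, N=1024, overlap=0.5):
    """Average FFT magnitude (and phase, with the caveat above) over
    overlapping Hann-windowed segments of `data`."""
    hop = int(N * (1 - overlap))            # advance per segment
    window = np.hanning(N)
    mags, phases = [], []
    for start in range(0, len(data) - N + 1, hop):
        X = np.fft.fft(data[start:start + N] * window)
        mags.append(np.abs(X))
        phases.append(np.angle(X))
    return np.mean(mags, axis=0), np.mean(phases, axis=0)

# Demo: a tone sitting exactly on bin 50 of a 1024-point FFT.
n = np.arange(8192)
tone = np.sin(2 * np.pi * 50 * n / 1024)
mag, ph = averaged_spectrum(tone)
```

The averaged magnitude has a clean peak at bin 50; the averaged phase is only meaningful if the segments are phase-aligned, which in general they are not.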
582
spectral analysis
Basic questions about spectral leakage
https://dsp.stackexchange.com/questions/96427/basic-questions-about-spectral-leakage
<p>Let <span class="math-container">$x(t)$</span> be a periodic signal with period <span class="math-container">$T&gt;0$</span>. Suppose we sample <span class="math-container">$x(t)$</span> with sample rate <span class="math-container">$f_s\in\mathbb N$</span> in the interval <span class="math-container">$[0,T)$</span>. Hence, the sample interval is <span class="math-container">$T_s=1/f_s$</span> and the number of sample points is <span class="math-container">$N=\lfloor Tf_s\rfloor$</span> (where <span class="math-container">$\lfloor\cdot\rfloor$</span> denotes the floor function). Let <span class="math-container">$\mathcal{F}_x[k]$</span> be the DFT of the sampled signal <span class="math-container">$x[n]$</span>.</p> <p>By the IDFT formula (for a real signal) we get that <span class="math-container">$$ x[n]=\frac{\mathcal{F}_x[0]}{N}+\sum_{k=1}^{N-1} \frac{|\mathcal{F}_x[k]|}{N}\cos\left(2\pi\cdot\frac{k}{N}\cdot n+\mathrm{Arg}(\mathcal{F}_x[k])\right). $$</span><br /> By recalling that <span class="math-container">$x[n]=x(nT_s)$</span> we deduce that <span class="math-container">$$ x(nT_s)=\frac{\mathcal{F}_x[0]}{N}+\sum_{k=1}^{N-1} \frac{|\mathcal{F}_x[k]|}{N}\cos\left(2\pi\frac{kf_s}{N}(nT_s)+\mathrm{Arg}(\mathcal{F}_x[k])\right). $$</span> On the other hand, by the Fourier Theorem we may write <span class="math-container">$x(t)$</span> as the following sum<br /> <span class="math-container">$$ x(t)=a_0+\sum_{k=1}^{\infty}\textstyle a_k\cos\left(2\pi f_k t+\phi_k\right). 
$$</span> Therefore, the frequencies that appear in <span class="math-container">$x(t)$</span> should be among the numbers <span class="math-container">$$ \frac{kf_s}{N}=\frac{kf_s}{\lfloor Tf_s\rfloor}\approx \frac{k}{T} $$</span><br /> with relatively high amplitude <span class="math-container">$\frac{|\mathcal{F}_x[k]|}{N}$</span>.</p> <p><strong>My questions</strong></p> <p>(a) The above sum for <span class="math-container">$x[n]$</span> has <span class="math-container">$N$</span> summands, so <span class="math-container">$x[n]$</span> contains at most <span class="math-container">$N$</span> frequencies. As I understand it, if one of those frequencies does not appear in <span class="math-container">$x(t)$</span>, then we get <em>spectral leakage</em>. Is that analysis correct?</p> <p>(b) In what cases is there no spectral leakage at all? Is that the case when <span class="math-container">$x(t)$</span> contains only a finite number of frequencies?</p>
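Question (b) can be probed numerically: leakage vanishes exactly when every tone in the record completes an integer number of cycles over the $N$ samples, i.e. lands on a DFT bin. A toy demonstration:

```python
import numpy as np

N = 1000
n = np.arange(N)

# Tone exactly on bin 50: an integer number of periods fits the record,
# so all energy lands in the two conjugate DFT coefficients.
X_exact = np.abs(np.fft.fft(np.cos(2 * np.pi * 50 * n / N)))

# Tone between bins 50 and 51: energy leaks into every bin.
X_leaky = np.abs(np.fft.fft(np.cos(2 * np.pi * 50.5 * n / N)))

bins_exact = int(np.sum(X_exact > 0.01 * X_exact.max()))
bins_leaky = int(np.sum(X_leaky > 0.01 * X_leaky.max()))
```

So a finite number of tones is not enough by itself; each tone's frequency must be an exact multiple of $f_s/N$ for the leakage to disappear.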
583
spectral analysis
Adding noise to an ECG signal
https://dsp.stackexchange.com/questions/6103/adding-noise-to-an-ecg-signal
<p>I am doing a project on ECG arrhythmia analysis using matlab.</p> <ol> <li><p>I have designed a notch filter for removing 50 Hz noise but don't know how to add 50 Hz powerline interference noise to a clean ECG signal. </p></li> <li><p>Also, I want to check whether noise is reduced in the filtered signal. Will the power spectral density using the modified Welch periodogram indicate whether noise is filtered or not?</p></li> <li><p>How can I compare which wavelet (e.g. db6) is best suited for ECG analysis?</p></li> </ol>
<p>1) Create a 50 Hz sinusoid and then simply add it to your ECG signal. You can control the power of the 50 Hz noise by multiplying the sinusoid by some gain factor (can be less than or more than 1) before you add it to the ECG.</p> <p>2) I'm not familiar with the Welch periodogram, but if it displays the power spectral density then it should do fine. I would just do an FFT myself.</p>
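A sketch of point 1 in Python (the sample rate, duration, and the sine stand-in for a clean ECG are illustrative assumptions; the gain is chosen to hit a target SNR):

```python
import numpy as np

fs = 360.0                     # assumed ECG sample rate
t = np.arange(0, 10, 1 / fs)

# Stand-in for a clean ECG trace; load your own recording in practice.
ecg = np.sin(2 * np.pi * 1.0 * t)

# 50 Hz interference scaled for a target SNR (here 10 dB).
snr_db = 10.0
sig_power = np.mean(ecg ** 2)
noise_power = sig_power / 10 ** (snr_db / 10)
gain = np.sqrt(2 * noise_power)        # sinusoid amplitude for that power
hum = gain * np.sin(2 * np.pi * 50.0 * t)

noisy_ecg = ecg + hum
```

Comparing the PSD of `noisy_ecg` before and after the notch filter should then show the 50 Hz spike disappearing, which answers point 2.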
584
spectral analysis
Why Cram&#233;r spectral representation and not DTFT for stochastic process
https://dsp.stackexchange.com/questions/68936/why-cram%c3%a9r-spectral-representation-and-not-dtft-for-stochastic-process
<p>In a lot of time-series analysis references I find (written by mathematicians or statisticians rather than engineers), I find the following signal decomposition for a stochastic process, termed the &quot;Cramér representation&quot; (e.g. eqn 8.11 of this <a href="https://www.stat.tamu.edu/%7Esuhasini/teaching673/chapter8.pdf" rel="noreferrer">reference</a>): <span class="math-container">$$ X[n] = \int_{\langle 2\pi \rangle} e^{-j\omega n} d Z(\omega) $$</span></p> <p>The factor <span class="math-container">$dZ(\omega)$</span> is referred to as a spectral increment. I found another reference (<a href="https://projecteuclid.org/euclid.bsmsp/1200512593" rel="noreferrer">ref</a>, eqn 77) that said that the spectral increments are orthogonal (w.r.t. the expectation operator) if the process is stationary.</p> <p>Compare this to the inverse discrete-time Fourier Transform (IDTFT), non-normalized, angular frequency convention (eqn 4.2.28 of Proakis &amp; Manolakis, <em>Digital Signal Processing</em>, 4th ed): <span class="math-container">$$ X[n] = \frac{1}{2\pi} \int_{\langle 2\pi \rangle} e^{j\omega n} X(\omega) d\omega $$</span></p> <p>With the exception of trivial differences in convention (minus sign in the exponent, normalization factor), the two representations appear to be the same. Ignoring the minus sign convention on <span class="math-container">$\omega$</span> for now, I am tempted to just conclude: <span class="math-container">$$ dZ(\omega) = \frac{1}{2\pi} X(\omega) d\omega $$</span> but I suspect there is a deeper mathematical reason why this would be wrong and that the statistics literature uses spectral increments instead.</p> <p>Why do statisticians prefer the Cramér representation? Are there any computational or theoretical advantages to using it?</p> <p>Does it have something to do with the convergence (or existence) of some type of integral? 
Or some issue relating to the fact that <span class="math-container">$X[n]$</span> is explicitly a stochastic process in the Cramér representation whereas the DTFT might rely on the signal being deterministic.</p> <p>I wonder this because engineering education (at least mine was this way) tends to abuse notation or gloss over certain mathematical difficulties because those nuances wouldn't matter for the situations in which an engineer would be using said mathematical tools. For instance, as an undergrad I never had to learn what a Lebesgue integral was, even though I was implicitly computing Lebesgue integrals in my probability course.</p>
<p>I will introduce some terminology and intuition that will be helpful when reading other references. It will be neither complete nor completely rigorous.</p> <hr> The measures that we first encounter in real analysis assign <i>sizes</i> (non-negative real numbers) to <i>measurable</i> subsets of <span class="math-container">$\mathbb{R}$</span>; Lebesgue measure is the measure that agrees with the intuition we build in calculus (the measure of the interval <span class="math-container">$[a,b]$</span> is <span class="math-container">$b-a$</span>, <i>etc</i>).<br> <br> <span class="math-container">$Z$</span> is a measure, but it is a <i>stochastic measure</i>†. It does <b>not</b> assign <i>numbers</i> to measurable subsets of <span class="math-container">$[0,2\pi]$</span>. Rather, it assigns a <i>random variable</i> to each such subset: <span class="math-container">\begin{equation} X_A = \int_{A}dZ(\omega). \end{equation}</span> The convergence of the integral on the right-hand side is an issue that I would rather not try to explain (to you or to myself).<br> <br> In particular, the <span class="math-container">$Z$</span> used for WSS processes is an <i>orthogonal stochastic measure</i>. One result is that random variables assigned to non-overlapping sets are uncorrelated with one another.<br> <br> If <span class="math-container">$A$</span> is a Lebesgue-measurable set, then <span class="math-container">$Z(A)$</span> is a random variable, and the expectation of <span class="math-container">$\left|Z(A)\right|^2$</span> is <span class="math-container">\begin{equation} \mathsf{E}[\left|Z(A)\right|^2] = \textrm{Lebesgue measure of $A$}. 
\end{equation}</span> Hence, Lebesgue measure is "under the hood" even if we stick with the notation <span class="math-container">$dZ(\omega)$</span>.<br> <br> Just as we can use Lebesgue measure to integrate functions over subsets of <span class="math-container">$\mathbb{R}$</span>, we can use <span class="math-container">$Z$</span> to integrate functions over subsets of <span class="math-container">$[0,2\pi]$</span> (such as all of <span class="math-container">$[0,2\pi]$</span>). <hr> Let <span class="math-container">$\mu$</span> be Lebesgue measure on <span class="math-container">$\mathbb{R}$</span>, and let <span class="math-container">$\nu$</span> be another measure on <span class="math-container">$\mathbb{R}$</span>. <span class="math-container">$\nu$</span> is said to be <a href="https://en.wikipedia.org/wiki/Absolute_continuity#Absolute_continuity_of_measures" rel="nofollow noreferrer"><i>absolutely continuous</i></a> with respect to Lebesgue measure if there is a function <span class="math-container">$f$</span> such that <span class="math-container">$d\nu = fd\mu$</span>, or the measure <span class="math-container">$\nu(A)$</span> of <span class="math-container">$A$</span> is equal to <span class="math-container">\begin{equation} \nu(A) = \int_{A}f(x)d\mu(x). \end{equation}</span> The function <span class="math-container">$f$</span> is called the <a href="https://en.wikipedia.org/wiki/Radon%E2%80%93Nikodym_theorem" rel="nofollow noreferrer">Radon–Nikodym derivative</a> of <span class="math-container">$\nu$</span> with respect to <span class="math-container">$\mu$</span>.<br> <br> <b>Not all measures are absolutely continuous with respect to Lebesgue measure</b>. The example most familiar to electrical engineers is Dirac measure. Lebesgue measure assigns measure zero to any set consisting of a single point, and a measure that is absolutely continuous with respect to Lebesgue measure must do the same. 
But the Dirac measure <span class="math-container">$\delta_0$</span> assigns measure 1 to the set <span class="math-container">$\{0\}$</span> and to any set that contains <span class="math-container">$0$</span>. Since <span class="math-container">$\delta_0$</span> is not absolutely continuous with respect to Lebesgue measure, <span class="math-container">$d\delta_0$</span> <b>cannot</b> be written as <span class="math-container">$fd\mu$</span>.<br> <br> There are also <a href="https://en.wikipedia.org/wiki/Singular_distribution" rel="nofollow noreferrer">more exotic measures</a> that are not absolutely continuous with respect to the Lebesgue measure. <hr> <b>I have found no evidence of the notion of absolute continuity of stochastic measures.</b><br> <br> <b>EDIT</b>: While theoretical results about spectral representations of WSS processes are crucial for applications, the <span class="math-container">$dZ$</span> notation may be off-putting and perhaps even doubt-inducing. I suspect that writing <span class="math-container">$Y(\omega)d\omega$</span> for <span class="math-container">$dZ(\omega)$</span> is a useful abuse of notation that allows the user to manipulate symbols as though some analogue of the Radon-Nikodym derivative existed. Rigor can be added after the fact.<br> <br> Note that rigor might arrive decades after the fact. Plenty of ideas seem to work just fine without complete mathematical rigor.
585
spectral analysis
When do phases not exist for spectrograms?
https://dsp.stackexchange.com/questions/58680/when-do-phases-not-exist-for-spectrograms
<p>I have been reading a paper on the <a href="https://ieeexplore.ieee.org/document/7251907" rel="nofollow noreferrer">"Single pass spectrogram inversion"</a> </p> <p>and I came across this in the Introduction part.</p> <blockquote> <p>In many applications, the analysis and modification of the Short-Time Fourier Transform (STFT) and the Short-Time Fourier Transform Magnitude (STFTM) of speech and audiosignals are necessary. These applications include, but are not limited to audio enhancement, reverberation analysis, time and pitch modification, and noise cancellation. <strong>Phases are either lost, become meaningless as the spectral representations are manipulated, or simply do not exist for artificially constructed spectrograms.</strong> The objective, then, is to use these spectral representations to generate a real-valued signal that corresponds as closely as possible to the original spectrograms.</p> </blockquote> <p>I do not understand why phases do not exist for artificially constructed spectrograms. A spectrogram simply represents what frequencies exist for a certain duration of time, right? So technically, we could just have zero phase for all the frequencies involved and it would be a realizable signal.</p> <p>Any help is appreciated</p>
586
spectral analysis
Analysing DAC Spectra: Transient Noise Analysis
https://dsp.stackexchange.com/questions/73872/analysing-dac-spectra-transient-noise-analysis
<p>I am working with a new Digital-to-Analog Converter (DAC) design in simulation and I'm trying to analyse the output. The device takes in an ideal 14-bit digital representation of a sine wave and outputs it through an ideal Butterworth filter (a̶n̶t̶i̶-̶a̶l̶i̶a̶s̶i̶n̶g̶ anti-imaging).</p> <p>In the simple analyses, I have been setting my input sinusoidal frequency such that it is coherent, and my sample capture length is set up to collect 8196 points. I'm then running this through a simple MATLAB script that windows the simulation data and calculates the Welch periodogram using pwelch(), to reduce spectral leakage and variance respectively. Once I've done that, I'm measuring the characteristics of the device such as the SNR, SINAD (SNDR), THD and so on.</p> <p>The next step in my analysis is slightly more complicated. I have run 5 transient noise simulations which result in 5 sets of output data, the only differences being the noise seen on the transient signal. How can I combine these spectra to get a more accurate picture of where the noise floor is in the device?</p> <p>Assuming the noise is random, can I approach the situation similarly to Bartlett's method, averaging across all 5 FFTs and creating a periodogram of the result?</p>
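The Bartlett-style combination the question proposes can be sketched as follows. Since the five runs are independent noise realizations, averaging their spectra lowers the variance of the noise-floor estimate (random data stands in for the five captures; the sample rate and lengths are placeholders):

```python
import numpy as np
from scipy import signal

fs = 1.0e6       # placeholder simulation sample rate
rng = np.random.default_rng(1)
runs = rng.standard_normal((5, 8192))   # stand-in for the 5 noisy captures

# Welch PSD of each transient-noise run, then average across runs
# (Bartlett-style: independent realizations, so the variance drops).
psds = []
for run in runs:
    f, p = signal.welch(run, fs=fs, window='blackmanharris', nperseg=2048)
    psds.append(p)
avg_psd = np.mean(psds, axis=0)
```

One caveat: average the periodograms (power), not the complex FFTs; averaging complex spectra of uncorrelated noise would cancel toward zero instead of converging to the noise floor.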
587
spectral analysis
Complex Spectral Phase Evolution (CSPE) Performance depending on signal windowing?
https://dsp.stackexchange.com/questions/57905/complex-spectral-phase-evolution-cspe-performance-depending-on-signal-windowin
<p>I am looking into CSPE: "<a href="http://jssunderlin.pbworks.com/f/13449.pdf" rel="nofollow noreferrer">Signal Analysis Using the Complex Spectral Phase Evolution (CSPE) Method</a>".</p> <p>The method is simple. It compares the FFT of the original signal with the FFT of a shifted copy in the phase domain to get an estimate of frequency. The original purpose of the paper is to improve frequency accuracy. However, I am wondering if it can be used to detect whether there is a tone around a certain FFT bin. </p> <p>One way to do this is to compute the <span class="math-container">$\delta$</span> value for each FFT bin; <span class="math-container">$|\delta| &lt; .5 $</span> indicates a potential tone around that frequency. One simulation is to run it on pure noise. However, I found that the results depend largely on the window applied to the signal. Here is my code:</p> <pre><code>#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

nfft = 512
nsamples = 513

noise = np.random.randn(nsamples) + 1j * np.random.randn(nsamples)
noise = np.sqrt(.5) * noise
SNR = 10
noise = noise * 10 ** (-SNR/20)
recv = noise  # pure noise

s0 = recv[:nsamples-1]
s1 = recv[1:]
S0 = np.fft.fft(s0 * signal.chebwin(len(s0), at=80), nfft)
S1 = np.fft.fft(s1 * signal.chebwin(len(s1), at=80), nfft)
#S0 = np.fft.fft(s0, nfft)  # rectangular window
#S1 = np.fft.fft(s1, nfft)  # rectangular window

SS = np.conj(S0) * S1
aSS = np.angle(SS)
idx = np.where(aSS &lt; 0)
aSS[idx] = aSS[idx] + 2 * np.pi
cSS = aSS * (nfft/2/np.pi)
bSS = cSS - np.arange(nfft)
print(np.sum(np.abs(bSS) &lt; .5))  # estimate of the number of potential tones
</code></pre> <p>Some results:</p> <ol> <li>If chebwin is used, I usually get about 300 potential tones, which is bad.</li> <li>If I use a rectangular window, I usually get 20+, which is not bad.</li> <li>If I reduce chebwin's attenuation, the number also goes down.</li> </ol> <p>I can't figure out how this relates to the window function.</p> <p>Thanks</p> <p>Yan</p>
588
spectral analysis
Auto-correlation of time signals
https://dsp.stackexchange.com/questions/43529/auto-correlation-of-time-signals
<p>I'm interested in papers about auto-correlations of periodic <strong>time series</strong> signals. All relevant papers and applications are interesting to me, as I am studying the properties of the auto-correlation of periodic, digital time signals.</p> <p>The reason I am asking for this help is that, through my own search, I am well aware of the magnitude of work on the analysis of spectral properties of cyclostationary time series, and this makes it difficult for me to locate the important scientists who introduced and pushed the analysis of periodic/cyclostationary features of time series. Thank you!</p>
589
spectral analysis
Issues with ML Pattern Recognition After Bandpass Filtering
https://dsp.stackexchange.com/questions/88988/issues-with-ml-pattern-recognition-after-bandpass-filtering
<p>We've been working on a machine learning project for pattern recognition, using time-domain features such as kurtosis, mean, standard deviation, variance, skewness, and peak-to-peak values.</p> <p>Background:</p> <p>Initially, we trained our data after applying a high-pass filter at 1 kHz. The results were satisfactory. Upon performing a spectral analysis last week, we discovered that our region of interest lay between 1 kHz and 3 kHz. Issue: When testing our pattern recognition system this week, the model's performance deteriorated significantly. Analyzing the data revealed a strong signal component at 8 kHz.</p> <p>Steps Taken:</p> <p>We decided to apply a bandpass filter between 1 kHz and 3 kHz to focus on our identified region of interest, expecting our time-domain features to be more relevant. We trained a new model using the bandpass-filtered data. However, the model's performance in recognizing patterns was not up to par. As an additional experiment:</p> <p>We applied the 1 kHz to 3 kHz bandpass filter on the dataset originally trained with 1 kHz high-pass filtering. Yet again, we faced recognition performance issues. We're somewhat puzzled as to why our ML system is underperforming after these filtering operations. Any insights or suggestions would be highly appreciated.</p>
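One thing worth checking is how the listed time-domain features change under the band-pass: removing out-of-band energy shrinks variance and reshapes kurtosis/skewness, so a model trained on high-pass-filtered features will not transfer to band-pass-filtered data. A sketch of the filtering-plus-features pipeline (sampling rate and filter order are assumptions, not from the question):

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
fs = 25000                                   # hypothetical sampling rate
x = rng.standard_normal(fs)                  # stand-in for one recorded frame

# Zero-phase band-pass, 1-3 kHz (SOS form avoids numerical trouble)
sos = signal.butter(4, [1000, 3000], btype="bandpass", fs=fs, output="sos")
y = signal.sosfiltfilt(sos, x)

# The same time-domain features, computed after filtering
features = {
    "mean": np.mean(y),
    "std": np.std(y),
    "var": np.var(y),
    "skew": stats.skew(y),
    "kurtosis": stats.kurtosis(y),
    "peak_to_peak": np.ptp(y),
}
```

Whatever filter is chosen, train and test data must go through the identical pipeline; mixing a high-pass-trained model with band-pass-filtered inputs (the last experiment described) is expected to fail.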
590
spectral analysis
Finding the best principle component
https://dsp.stackexchange.com/questions/16432/finding-the-best-principle-component
<p>The title might be unclear, but the problem is this. I have a signal sampled 1500 times at a rate of 60/s, and a sensor array 512 units large. There is a lot of noise, echo, and other frequencies being picked up, but I am interested in only one. First I do spike removal, then a bandpass filter (Butterworth) around the frequency range where I suspect the signal is hidden. I then do a PCA to find which of the 512 sensors picks up any systematic variation. Then I search for the best-fit sine wave in each of the top sensors (say 5 out of 512). </p> <p>So the main question: how do I determine which of the principal components is the one picking up the true signal, without knowing exactly what the frequency of the signal is? Second question: do the above steps seem reasonable? I am no expert in this, but experiments seem to indicate that with high-SNR objects of measurement (say a pendulum) it is clearly visible which component fits best (amplitude, Hz, residuals of fit), but with low SNR (say a heartbeat), it is not so clear.</p> <p>Sorry for the lengthy text. Thanks a lot for any answers!</p> <p>edit: Running spectral analysis (using different methods) gives different results depending on which PC I am analysing. Might there be a spectral method that handles multivariate samples?</p> <p>edit 2: While waiting for answers, I have chosen to filter out principal components based on the correlation coefficient from a sine-wave fit (r&gt;0.5) <em>and</em> % of data explained (p&gt;0.15). So in other words, the signal I am looking for has to be present in at least 15 out of the 512 sensors (if I understand PCA correctly), and the best-fit sine wave has to explain at least 25% of the variation. This works well with test setups in a noisy environment. <em>Question:</em> Is my approach sensible? Should or shouldn't I forego FFT and spectral estimation? Besides visualization, does frequency estimation based on the FFT gain anything over an iterative sine-wave fitting, given that I know the signal is sinusoidal?</p> <p>Thanks a lot</p>
<p>By doing PCA, the principal components you get will not correspond to a single recording, but rather to a mix of them. PCA is a feature extraction method, whereas what you are looking for seems to me to be a feature selection problem.</p> <p>Also, if you have so many simultaneous recordings, all affected by the same sources of noise, why not perform active noise cancellation?</p>
591
spectral analysis
the demonstration that states that the FT of the ACF function is the square of the DTFT of the signal
https://dsp.stackexchange.com/questions/59380/the-demonstration-that-states-that-the-ft-of-the-acf-function-is-the-square-of-t
<p>I am following the book The Intuitive Guide to Fourier Analysis &amp; Spectral Estimation with MATLAB, trying to teach myself Fourier analysis in MATLAB. I got lost in one passage of the demonstration that states that the FT of the ACF is the square of the DTFT of the signal. I have attached it here: <a href="https://i.sstatic.net/5HF5U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5HF5U.png" alt="enter image description here"></a> In the passage that I labeled 1, a <span class="math-container">$d\tau$</span> is missing, in my opinion. Can you confirm that? More importantly, in the passage labeled 2, I think there should be a <span class="math-container">$\tau$</span> and not a <span class="math-container">$t$</span>. So I don't see how this formula is demonstrated, because only if there is a <span class="math-container">$t$</span> in passage 2 can the demonstration be completed. I hope you can help me.</p>
<p>You are right, the derivation is full of typos. The first equation below Eq. <span class="math-container">$(8.39)$</span> should read</p> <p><span class="math-container">$$\int_{-\infty}^{\infty}x(t+\tau)e^{\color{red}{-}j\omega\tau}d\tau=X(\omega)e^{j\omega \color{red}{t}}\tag{1}$$</span></p> <p>Substituting into <span class="math-container">$(8.39)$</span> gives</p> <p><span class="math-container">$$\begin{align}\mathcal{F}\big\{R(\tau)\big\}&amp;=\int_{-\infty}^{\infty}x(t)X(\omega)e^{j\omega t}dt\\&amp;=X(\omega)X(-\omega)=|X(\omega)|^2\end{align}\tag{2}$$</span></p> <p>where the last equality is only true for real-valued <span class="math-container">$x(t)$</span>. However, the overall result is also true for complex-valued <span class="math-container">$x(t)$</span> because in that case the ACF is defined differently:</p> <p><span class="math-container">$$\mathcal{F}\big\{R(\tau)\big\}=\int_{-\infty}^{\infty}x^*(t)x(t+\tau)dt\tag{3}$$</span></p>
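The continuous-time result has an exact discrete counterpart (circular autocorrelation and the DFT standing in for the continuous transforms) that is easy to verify numerically; a quick NumPy sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)               # real-valued test signal

# Circular autocorrelation, computed directly from its definition:
# R[k] = sum_n x[n] * x[(n+k) mod N]
R = np.array([np.sum(x * np.roll(x, -k)) for k in range(N)])

X = np.fft.fft(x)
lhs = np.fft.fft(R)                      # F{R}
rhs = np.abs(X) ** 2                     # |X|^2
print(np.max(np.abs(lhs - rhs)))         # ~0: the two sides agree
```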
592
spectral analysis
Strong vs weak COLA (constant overlap-add)
https://dsp.stackexchange.com/questions/81687/strong-vs-weak-cola-constant-overlap-add
<p>My question is on the aliasing cancellation of the OLA method when spectral modification is involved. The book related to this question is given by <a href="https://ccrma.stanford.edu/%7Ejos/sasp/Constant_Overlap_Add_COLA_Cases.html" rel="nofollow noreferrer">this link</a>.</p> <p>As stated by the webpage, for the weak COLA condition, the aliasing cancellation is disturbed by spectral modifications. I am not quite clear about this statement.</p> <p>In my opinion, spectral modification can NOT disturb aliasing cancellation if the reconstruction method is OLA. If sufficient zero-padding is applied to the analysis window and the frame data, time-domain aliasing due to circular convolution with the impulse response of the spectral modification is cancelled. When the weak COLA condition is satisfied, we can recover the filtered signal in the time domain using the overlap-add method. Such an operation is independent of whatever spectral modifications are made, as long as sufficient zero-padding is used.</p> <p>For example, suppose an M-point periodic Hamming window is used and the hop size between adjacent frames of data is M/2. In this case, the weak COLA condition is met but the strong COLA condition is not. Whatever spectral modifications are made to the DFT of the frame data, such modifications can be interpreted as a circular convolution with the frame data. Though the frequency responses of the channel filters are heavily aliased from the perspective of a downsampled filter bank, the original signal can be perfectly reconstructed using the OLA method, which is a time-domain method.</p> <p>What am I missing? Hope you can clarify.</p>
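For reference, the window-sum premise of the Hamming example can be checked directly; note this sketch only verifies the weak COLA (constant overlap-add) condition itself, not what happens under spectral modification:

```python
import numpy as np
from scipy import signal

M, hop = 512, 256                              # M-point window, hop M/2 as in the example
w = signal.windows.hamming(M, sym=False)       # periodic Hamming

# Direct check: overlapped copies of the window sum to a constant
ola = np.zeros(6 * M)
for start in range(0, len(ola) - M + 1, hop):
    ola[start:start + M] += w
steady = ola[M:-M]                             # drop the ramp-up/ramp-down edges
print(steady.min(), steady.max())              # both 1.08: constant overlap-add

print(signal.check_COLA(w, M, M - hop))        # scipy agrees: True
```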
593
spectral analysis
$i^{\text{th}}$-dimensional autocorrelation function
https://dsp.stackexchange.com/questions/91730/i-textth-dimensional-autocorrelation-function
<p>I am referring to the work of Stephen A. Billings on &quot;<a href="https://eprints.whiterose.ac.uk/87212/1/acse%20research%20report%2056.pdf" rel="nofollow noreferrer">Identification of a class of nonlinear systems using correlation analysis</a>&quot; from the year 1978, where it is mentioned that the <span class="math-container">$i^{\text{th}}$</span> dimensional autocorrelation function for a zero mean white Gaussian process <span class="math-container">$x(t)$</span> with spectral density of 1 W/cycle is as follows: <span class="math-container">\begin{equation} \overline{x(t_1)x(t_2)\dots x(t_i)} = \begin{cases}0,&amp; \text{odd } i\\ \sum_i{ \prod_{n\ne m} \delta(t_n-t_m)},&amp; \text{even } i\\ \end{cases} \end{equation}</span></p> <p>I would like to use this result in my work and check if this is true. How do I create a signal with prescribed spectral density in MATLAB? and how do I perform autocorrelation for <span class="math-container">$i &gt; 2$</span> in MATLAB?</p>
<p>The statement is provably true:</p> <ol> <li><span class="math-container">$x(t)$</span> is zero mean and Gaussian with variance 1.</li> <li><span class="math-container">$$E[x(t_i)x(t_j)] = \begin{cases} 0 \mbox{ for } t_i \not= t_j \\ 1 \mbox{ for } t_i = t_j\end{cases}$$</span></li> </ol> <p>For <span class="math-container">$$E[ x(t_1) x(t_2) \ldots x(t_N)]$$</span></p> <ol start="3"> <li><p>For <span class="math-container">$N$</span> odd: There will always be a &quot;left over&quot; <span class="math-container">$E[x(t_N)] = 0$</span> multiplying the result, so it's always zero.</p> </li> <li><p>For <span class="math-container">$N$</span> even: There will be a non-zero value only if <span class="math-container">$t_i = t_j$</span> for distinct <span class="math-container">$i$</span> and <span class="math-container">$j$</span>. <a href="https://math.stackexchange.com/a/1917666/5965">For the specific case where all times are equal</a>,</p> </li> </ol> <p><span class="math-container">$$E[X^{2n}]=(2n-1)!!\sigma^{2n}$$</span></p> <p>where !! represents the <a href="https://en.wikipedia.org/wiki/Double_factorial" rel="nofollow noreferrer">double factorial function</a> (not two applications of the factorial). For each pair of <span class="math-container">$t_i$</span>, <span class="math-container">$t_j$</span> that are equal, it will generate another 1 (since <span class="math-container">$\sigma = 1$</span> here).</p> <p>Note that the paper uses time averages and the above uses ensemble averages. The paper assumes ergodicity of the time series, so the two are equivalent.</p> <hr /> <p>Much as it goes against my better judgement, here's some code that does this. 
The output validates the &quot;proof&quot;:</p> <pre><code>For 2: should be 1 Calculated: 0.9946066112024137
For 3: should be 0 Calculated: 0.014338673370010435
For 4: should be 3 Calculated: 3.0088602833619134
For 5: should be 0 Calculated: 0.12289469288233784
For 6: should be 15 Calculated: 15.20500109670608
For 7: should be 0 Calculated: 1.6572818702690346
</code></pre> <hr /> <h3>Code Below</h3> <pre class="lang-py prettyprint-override"><code>import numpy as np
import math
from matplotlib import pyplot as plt

N = 1000
Nruns = 100000

def do_simulation(M):
    indices = 11*np.ones(M, dtype=int)
    sum_product = 0
    for idx in np.arange(Nruns):
        x = np.random.normal(size=N)
        sum_product = sum_product + math.prod(x[indices])
    return sum_product/Nruns

def number_should_be(M):
    if (M % 2 == 0):
        return math.prod(range(M-1, 0, -2))
    else:
        return 0

print(&quot;For 2: should be &quot; + str(number_should_be(2)) + &quot; Calculated: &quot; + str(do_simulation(2)))
print(&quot;For 3: should be &quot; + str(number_should_be(3)) + &quot; Calculated: &quot; + str(do_simulation(3)))
print(&quot;For 4: should be &quot; + str(number_should_be(4)) + &quot; Calculated: &quot; + str(do_simulation(4)))
print(&quot;For 5: should be &quot; + str(number_should_be(5)) + &quot; Calculated: &quot; + str(do_simulation(5)))
print(&quot;For 6: should be &quot; + str(number_should_be(6)) + &quot; Calculated: &quot; + str(do_simulation(6)))
print(&quot;For 7: should be &quot; + str(number_should_be(7)) + &quot; Calculated: &quot; + str(do_simulation(7)))
</code></pre>
594
spectral analysis
Questions on Cepstral Analysis
https://dsp.stackexchange.com/questions/89115/questions-on-cepstral-analysis
<p>I have a few questions regarding cepstral analysis that the numerous articles and papers I've read on the topic didn't answer.</p> <p><strong>What I understood:</strong> The cepstrum captures the periodicity of harmonics in a spectrum.</p> <p><strong>My questions:</strong> In articles treating fault detection in gearboxes, they say that the presence of sidebands around the tooth-mesh (TM) frequency (corresponding to +- both shafts' rotating frequencies) in the spectral domain leads to two peaks at the corresponding quefrencies for both of these rotating frequencies in the cepstrum.</p> <ul> <li>Are there also peaks in the cepstrum corresponding to the presence of tooth-mesh harmonics? (This is never addressed.) They show TM, 2xTM, 3xTM frequencies on the spectrum and I would expect to see a peak at the TM quefrency in the cepstrum, but this is never shown.</li> <li>I struggle to understand why the sidebands are &quot;used&quot; by the cepstrum when they are not harmonically spaced (say you have sidebands at +-85Hz and +-15Hz and TM=1000Hz; then for example TM-85Hz=915Hz and 2xTM-85Hz=1915Hz, but 1915/915 is not an integer ratio). I would expect to also have proper peaks on the spectrum at 85Hz and 15Hz, together with some harmonics of these frequencies. Is that periodicity also used by the cepstrum? Does it add to the periodicity found in the repetition of sidebands?</li> </ul> <p>For applications linked to speech recognition, they say that the use of the log in the cepstrum formula is very interesting, as it allows separating the source excitation from the vocal-tract transfer function.</p> <ul> <li>How is that true? I get that log(axb)=log(a)+log(b), but I don't see how that is leveraged to separate the two contributions. The inverse FFT is applied to the entire log.</li> </ul> <p>Sorry if I used terms incorrectly. Also, my thoughts on the topic are a bit fuzzy, to say the least. 
Many thanks in advance for your help.</p> <p>Cheers</p> <p>Antoine</p>
<p>A cepstrum not only captures the periodicity of harmonics within a spectrum, but also the much wider envelope covering the total width of the harmonics across the spectrum (consider that to be a window on an infinite harmonic train). The envelope is usually represented by a much lower set of quefrencies in the cepstrum. This looks disjoint from the set of quefrencies representing harmonic trains. So even though there is only one inverse FFT, one usually ends up with at least two separate blobs in a plot of quefrency results, one for the envelope or transfer function, and one (or more) for the harmonic trains.</p>
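A toy sketch of the "two blobs" (all parameters made up; a synthetic pitch-like signal rather than a gearbox recording): the harmonic train, spaced f0 in frequency, produces a rahmonic peak at quefrency fs/f0, while the smooth envelope stays at low quefrencies.

```python
import numpy as np

fs, N, f0 = 8000, 4000, 200          # hypothetical: 200 Hz fundamental, coherent in N
t = np.arange(N) / fs
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 11))  # 10 harmonics

# Real cepstrum; the small floor keeps log() away from -inf between harmonics
log_mag = np.log(np.abs(np.fft.fft(x)) + 1e-3)
cep = np.fft.ifft(log_mag).real

# Low quefrencies hold the envelope; the harmonic train shows up at fs/f0
q = np.argmax(cep[20:60]) + 20
print(q, fs // f0)                   # both 40: the pitch quefrency
```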
595
spectral analysis
Feature extraction for sound recognition and classification
https://dsp.stackexchange.com/questions/21961/feature-extraction-for-sound-recognition-and-classification
<p>I am building an application that would "listen" to the microphone input, analyse it, and compare the analysis to a pre-analysed and pre-classified sound bank (small - maximum 20 sounds). It will then show the user what sound it was.</p> <p>Now, I have a vague idea on how to implement this. I would like to choose a set of features that would best represent the sounds. The issue is that the sounds in the sound bank could be whatever the user recorded. From strong onsets and short sounds, to long onsets and long sounds.</p> <p>The current features I'm thinking of are:</p> <ul> <li>Spectral Centroid</li> <li>Spectral Flux</li> <li>Spectral Rolloff</li> </ul> <p>What do you think? Would these be sufficient to properly classify the sound? Also, as these features output a single value for specific sound buffer, how would you go about handling the feature vector that represents the whole sound? I am using kNN for classification, and was wondering what's the best way to compare two feature vectors? would cross-correlation be a feasible technique?</p> <p>Thanks a lot!</p> <p>P.S I have seen that a similar question was asked <a href="https://dsp.stackexchange.com/questions/16994/feature-extraction-for-sound-classification">here</a>, but it doesn't fully answer my issues.</p>
<p>Have a look @ <a href="https://github.com/bmcfee/librosa/" rel="nofollow noreferrer">librosa</a>, a simple python library for audio analysis, implementing common features. Here is a <a href="https://github.com/librosa/librosa/blob/main/examples/LibROSA%20demo.ipynb" rel="nofollow noreferrer">great introduction and example notebook</a>.</p>
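For reference, two of the features mentioned in the question can be computed in a few lines of NumPy (a sketch with a pure-tone sanity check; these are not necessarily librosa's exact definitions):

```python
import numpy as np

fs = 22050
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)          # pure 1 kHz tone as a sanity check

mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Spectral centroid: magnitude-weighted mean frequency
centroid = np.sum(freqs * mag) / np.sum(mag)

# Spectral rolloff: frequency below which 85% of the magnitude lies
cum = np.cumsum(mag)
rolloff = freqs[np.searchsorted(cum, 0.85 * cum[-1])]

print(centroid, rolloff)                  # both land near 1000 Hz for a pure tone
```

Computing these per frame yields one feature vector per frame; summarizing them over the whole sound (e.g. mean and standard deviation per feature) gives a fixed-length vector that kNN can compare directly, which is usually simpler than cross-correlating variable-length feature sequences.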
596
spectral analysis
Help with denoising signal and periodogram analysis resources
https://dsp.stackexchange.com/questions/71917/help-with-denoising-signal-and-periodogram-analysis-resources
<p>This is a cross-posting from the Cross Validated Stack Exchange, as I thought this may be a better forum to ask.</p> <p>I have a dataset consisting of respiratory time series signals of different lengths obtained from different groups of patients. I want to either classify or cluster the patients using these time series by using the commonalities of the time series of each group. However, I have no experience in DSP.</p> <ol> <li><p>Firstly, I am confused if I am supposed to filter my signals to get rid of any frequencies above the Nyquist frequency. My sampling frequency is 32Hz and my time series is somewhat noisy and has some artifacts. I am also unsure of which filter to select for this.</p> </li> <li><p>Secondly, I wanted to look at the periodogram and the average power spectral density at each frequency within a group - but I am not sure if I understand the periodogram very well - if I have different time series lengths then my periodogram length will vary too, so I am not sure how this comparison can be made.</p> </li> </ol> <p>Being from Pure Math, I know Fourier analysis purely from the perspective of functions and using Fourier transforms to obtain the coefficients that describe the projection of these functions onto an orthonormal system. With periodograms, however, I noticed that the x-axis represents sample frequencies. I am confused by the distinction between sampling frequencies vs. underlying frequencies of the generating function (say I have <span class="math-container">$\sin(2\pi x)$</span> sampled at 10Hz; does the periodogram characterize the 1Hz underlying frequency of the function?)</p> <p>Any resources on understanding how to analyze and remove noisy components of time signals from a machine learning perspective would be much appreciated! Due to time constraints, I have shied away from long textbooks on digital signal processing. Thanks a lot.</p>
<blockquote> <p>Firstly, I am confused if I am supposed to filter my signals to get rid of any frequencies above the Nyquist frequency. My sampling frequency is 32Hz and my time series is somewhat noisy and has some artifacts. I am also unsure of which filter to select for this.</p> </blockquote> <p>That ship has sailed.</p> <p>Let <span class="math-container">$S=\left\{\left.\alpha e ^{i(\omega t+\varphi)}\right|\alpha &gt; 0, 0\le \varphi &lt;2\pi, \omega \in \mathbb R\right\}$</span>, i.e. the set of all distinct complex sinusoid with an amplitude, frequency and phase.</p> <p>Then <span class="math-container">$(S,\cdot)$</span> is a commutative semigroup (proof trivial).</p> <p>Introducing the equivalence relation <span class="math-container">$\sim: a\sim b \iff a\left(\frac{n}{r}\right)= b\left(\frac{n}{r}\right) \forall n\in\mathbb Z$</span> (&quot;two signals are identical after sampling with rate <span class="math-container">$r$</span>&quot;), we see that signals <span class="math-container">$s_l=\alpha e^{i(\omega_l t + \varphi)}, l=1,2,\ldots$</span> are <span class="math-container">$s_1\sim s_2$</span> if <span class="math-container">$\frac{\omega_1-\omega_2}{2\pi}=nr, n\in\mathbb Z$</span>, i.e. we can't tell signals apart after sampling if their frequency differed by a multiple of the sampling rate.</p> <p>Let's formalize this: <span class="math-container">$T=\left\{\alpha e^{i([ft+\phi]\mod 2\pi)}\right\}$</span> is a quotient semigroup of <span class="math-container">$S$</span>, i.e. 
<span class="math-container">$T\preceq S$</span>, and each of the elements is a leader of a <span class="math-container">$\sim$</span>-equivalence class – and this homomorphism <span class="math-container">$S\mapsto T$</span> is in fact sampling, which, as we can see above, is not bijective.</p> <p>Hence, all the original frequency components from <span class="math-container">$S$</span> got mapped to some component from <span class="math-container">$T$</span> with frequency normalized to the sampling frequency. That mapping is called <em>aliasing</em>.</p> <p>What an anti-alias filter <span class="math-container">$h$</span> does is</p> <p><span class="math-container">$$h(s): S\mapsto S, h=\begin{cases} s, &amp; f&lt; f_\text{nyquist}\\ 0 &amp; \text{else,} \end{cases} $$</span></p> <p>and as you'll figure out when inserting <span class="math-container">$h(s)$</span> above, this yields only the elements that are not aliased to a different frequency.</p> <p>Thus, everything that &quot;survives&quot; <span class="math-container">$h$</span> will also &quot;survive&quot; aliasing without undergoing a change in frequency.</p> <p>So, if you needed an anti-aliasing filter, it's now too late. Go and do your recording again.</p> <blockquote> <p>Secondly, I wanted to look at the periodogram and the average power spectral density at each frequency within a group - but I am not sure if I understand the periodogram very well - if I have different time series lengths then my periodogram length will vary too, so I am not sure how this comparison can be made.</p> </blockquote> <p>In that case, the periodogram is not a useful mapping on its own – you'll need to add something like a truncation / padding operation to bring all signals to the same duration, for example. At which point the periodogram doesn't seem to be a sensible approach anymore.</p>
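The non-injectivity of sampling is easy to see numerically: two sinusoids whose frequencies differ by exactly the sampling rate produce identical samples (hypothetical 5 Hz vs. 37 Hz components at the question's 32 Hz rate):

```python
import numpy as np

fs = 32.0                                    # the question's sampling rate
t = np.arange(64) / fs

x1 = np.cos(2 * np.pi * 5.0 * t)             # 5 Hz component
x2 = np.cos(2 * np.pi * (5.0 + fs) * t)      # 37 Hz: differs by exactly fs

print(np.max(np.abs(x1 - x2)))               # ~0: indistinguishable after sampling
```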
597
spectral analysis
Right algorithm for fourier transform on physical heights
https://dsp.stackexchange.com/questions/10207/right-algorithm-for-fourier-transform-on-physical-heights
<p>I have data from a LIDAR unit that I would like to get the spectral density of. Unfortunately, the only thing I remember from my Fourier analysis class are the methods that I know will not work.</p> <p>The data comes from a 1D LIDAR scan of a (mostly flat) surface, which returns radial distance at evenly spaced $d\theta$'s. I can convert this data to x-y data which looks distinct depending on the type of surface (e.g. grass looks a lot choppier than concrete). I think Fourier analysis would be a good way to distinguish between the different types. Unfortunately the data has some problems that makes it difficult to analyze:</p> <ul> <li>It is unevenly spaced in x.</li> <li>It has some missing values.</li> <li>It is not strictly a function in x (if there is some overhang, an earlier part of the sweep might hit under the overhang to get a distant x value, and a later part could strike the overhang, to get a closer value.)</li> </ul> <p>It has occurred to me to do a Fourier analysis on the original $r-\theta$ data, but since I am concerned with the deviations around perfectly flat ground, I am not sure how I could use it.</p>
<p>Your question seems to have a couple of nested issues: First off, the computation of the Power-Spectral-Density is as straightforward as the computation of the signal's <a href="http://en.wikipedia.org/wiki/Discrete_Fourier_transform" rel="nofollow">Discrete Fourier Transform</a> (DFT), followed by its absolute magnitude squared. The computation of the $O(N^2)$ DFT is accomplished very efficiently using the $O(N \log N)$ <a href="http://en.wikipedia.org/wiki/Fast_Fourier_transform" rel="nofollow">FFT</a> algorithm, which comes in canned form from many different libraries. </p> <p>Thus, if $x[n]$ is your original domain signal (sampled in space, time, or whatever other quantity), the PSD is given by:</p> <p>$$ PSD[k] = |\sum_{n=0}^{N-1} x[n] \ e^{\frac{-j \ 2 \pi n k}{N}}|^2 = |X[k]|^2 $$</p> <p>(Where $X[k]$ is the Discrete Fourier Transform (DFT) of $x[n]$). (Also note, the above formula is for same-length DFTs).</p> <p>The second aspect of your quandary seems to rest on:</p> <ol> <li>What domain to best represent your original <a href="http://en.wikipedia.org/wiki/Lidar" rel="nofollow">LIDAR</a> measurements in.</li> <li>How to deal with non-evenly spaced samples.</li> <li>How to deal with missing samples.</li> </ol> <p><strong>Regarding (1):</strong> I do not see why you cannot retain the data in its original $r - d \theta$ form. You are after all after radial distance per spatial sample, and this is exactly what LIDARs do for you. You want to then use the DFT as a transformation that will ostensibly magnify and separate different surfaces for you, by virtue of the repetitiveness in the spatial domain, corresponding to particular deltas in the Fourier domain. (Grass with high spatial repetitivity, vs. concrete with low repetitivity). </p> <p><strong>Regarding (2) &amp; (3):</strong> Both those problems can easily be solved using linear regression, and/or interpolation, whichever one suits your fancy. 
That is, you would collect all your $r- d \theta$ data, replete with missing samples and non-even ones. What you seek however is a uniformly spaced sampling on a $d \theta$ grid. You can do a simple polynomial fit via <a href="http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29" rel="nofollow">Least-Squares-Estimation</a>, (LSE). That is, your objective will be to decipher the co-efficients of a best-fit polynomial (of degree constrained by the LIDAR physics and spatial bandwidth of the target surface), that best fits your data. </p> <p>For example, let $d \theta$ be the LIDAR independent variable. You have elected to use a degree-2 polynomial ($D = 2$) to best fit your data in the Least Squared (LS) sense. You have collected a bunch of points from your LIDAR, at some given $d \theta$'s and corresponding radial distances $r$. You would like to find the co-efficients $p_i$, that give you the line of best fit. That is, you would like to find the $p_i$'s in:</p> <p>$$ r = p_0 + p_1 \ (d \theta) + p_2 \ (d \theta)^2 $$</p> <p>Thus let $\bf{r}$ be a $1$ x $N$ vector of recorded radial distances to the scanned surface. Let $\boldsymbol{d \Theta}$ be a $(D+1)$ x $N$ data matrix, such that the first row is all $1$s, the second row is composed of each $d \theta$ for each value of $n$, the third row is composed of each $(d \theta)^2$ for each value of $n$, etc. $N$ of course, is the total number of data points from the LIDAR scan of the surface. Finally, let $\bf{p}$ be the $(D+1)$ x $1$ vector of co-efficients to be solved for. Then, the LSE solution is:</p> <p>$$ \bf{p} = (\boldsymbol{d \Theta} \ \boldsymbol{d \Theta}^T)^{-1} \boldsymbol{d \Theta} \ \bf{r}^T $$</p> <p>With the coefficient vector $\bf{p}$ in hand, you can now go back and solve for $r$'s at any $d \theta$ grid. That is, simply construct a uniformly spaced $d \theta$ grid, and solve for corresponding $r$'s. 
Then, you will be ready to transform this regressed result into the Fourier domain, and look for characteristic spatial periodicity corresponding to different materials. </p>
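The normal-equation solution above can be sketched in a few lines of NumPy (the surface, noise level, and grid size below are made-up test values, not LIDAR data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy, unevenly spaced (theta, r) samples from a made-up degree-2 surface
theta = np.sort(rng.uniform(0.0, 1.0, 200))
p_true = np.array([2.0, -0.5, 0.3])                        # r = 2 - 0.5*th + 0.3*th^2
r = p_true[0] + p_true[1]*theta + p_true[2]*theta**2 + 0.01*rng.standard_normal(200)

# The answer's normal equations: p = (dTheta dTheta^T)^{-1} dTheta r^T
D = 2
dTheta = np.vstack([theta ** d for d in range(D + 1)])     # (D+1) x N data matrix
p = np.linalg.solve(dTheta @ dTheta.T, dTheta @ r)

# With p in hand, resample r on a uniformly spaced theta grid, ready for the FFT
grid = np.linspace(0.0, 1.0, 256)
r_uniform = p[0] + p[1]*grid + p[2]*grid**2
```

In practice `np.linalg.lstsq` (or `np.polyfit`) is numerically preferable to forming the normal equations explicitly, but the explicit form mirrors the derivation above.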
598
spectral analysis
Detecting a frequency swept sinusoid and its parameters?
https://dsp.stackexchange.com/questions/18950/detecting-a-frequency-swept-sinusoid-and-its-parameters
<p>Given a (FFT-sized) frame of data, and detection of a spectral component statistically above the noise floor in the FFT of this window, what characteristics or signal analysis could be used to determine that this spectral component is more likely to be a linearly swept sinusoid, rather than one that is stationary across the frame? </p> <p>And, assuming the dF/dt sweep across the data window is small (from a fraction of an FFT bin to a couple bins), how can one estimate the sweep parameter (but, beyond answer offered to this <a href="https://dsp.stackexchange.com/questions/3245/how-can-the-fft-be-used-for-estimating-linear-chirp-parameters">question</a>, assuming this is an estimation of a detected signal in noise.)</p> <p>One offered solution seems to be to segment the FFT frame into several shorter subframes, do shorter STFTs, and look for a linear best fit among the resulting set of subframe FFT peak magnitude frequency estimates (which all have poorer frequency resolution due to the shorter subframes). Are there any other or better options for detection and estimation?</p>
<p>You could compute the instantaneous frequency of the signal in the frame. This can be done as outlined in <a href="https://library.seg.org/doi/abs/10.1190/1.1443220?journalCode=gpysa7" rel="nofollow noreferrer">this paper</a> (e.g. Eq.(9)). You need to compute the analytic signal using a Hilbert transformer:</p> <p>$$x_a[n]=x[n]+jx_h[n]\tag{1}$$</p> <p>where $x_h[n]$ is the Hilbert transform of $x[n]$. The instantaneous frequency is given by the (discrete-time) derivative of the phase of (1). If you use a first order difference for approximating the discrete-time derivative you'll get the formula given in Eq. (9) of the paper cited above. Given an estimate of the instantaneous frequency, you could simply fit a line to estimate the sweep parameter.</p> <p>This little Matlab/Octave script shows how this approach would work with a toy problem:</p> <pre><code>% create chirp signal N = 256; n=(0:N-1)'; fx=(0.1+0.01*n/N); x=sin(2*pi*fx.*n); % compute analytic signal xh=hilbert(x); xr=real(xh);xi=imag(xh); % compute instanteneous frequency (could be done over fewer points) tmp1 = xr(1:N-1).*xi(2:N) - xr(2:N).*xi(1:N-1); tmp2 = xr(1:N-1).*xr(2:N) + xi(1:N-1).*xi(2:N); f=atan2(tmp1,tmp2)/(2*pi); % (biased) estimate of instanteneous frequency % linear regression over center values of f skip = floor(N/4); % skip unreliable points at beginning and end of f na = skip:(N-skip); A=[ones(length(na),1),na(:)]; u=A\f(na); f0=u(1); k=u(2)/2; % remove bias [f0,k] % reconstructed linear frequency fr = f0 + k*n; plot(n,fx,(2:N),f-k*(2:N)',n,fr,':') legend('exact','estimated','reconstructed') axis([0,N,f0*.9,(f0+k*N)*1.1]) </code></pre> <p>The plot shows the actual linear frequency of the signal (blue), the estimate from the analytic signal (green), and the reconstructed frequency obtained from fitting a line through the estimated frequency. 
Note that the estimate is unreliable at both ends, so the line is fitted to the center values of the frequency estimate.</p> <p><img src="https://i.sstatic.net/f0Aqf.png" alt="enter image description here"></p>
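<p>For readers working in Python, the same idea can be sketched with SciPy's <code>hilbert</code>. This is a rough port of the Octave script above (not part of the original answer); <code>np.polyfit</code> replaces the explicit least-squares solve:</p>

```python
import numpy as np
from scipy.signal import hilbert

# Toy chirp, mirroring the Octave script: true instantaneous frequency
# is 0.1 + 0.02*n/N cycles/sample (derivative of the phase term below)
N = 256
n = np.arange(N)
fx = 0.1 + 0.01 * n / N
x = np.sin(2 * np.pi * fx * n)

xa = hilbert(x)                          # analytic signal x + j*Hilbert(x)
phase = np.unwrap(np.angle(xa))
f_inst = np.diff(phase) / (2 * np.pi)    # first-difference frequency estimate

# Fit a line through the centre samples (the ends of the estimate are unreliable)
skip = N // 4
na = np.arange(skip, N - skip)
k, f0 = np.polyfit(na, f_inst[na], 1)    # slope, intercept
sweep = k / 2                            # remove the same bias as in the Octave script
```

As in the Octave version, halving the fitted slope recovers the sweep parameter of the phase term rather than of the instantaneous frequency.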
599
filtering
Can I use BRISQUE to compare different filtering techniques for the same acquired image?
https://dsp.stackexchange.com/questions/71033/can-i-use-brisque-to-compare-different-filtering-techniques-for-the-same-acquire
<p>BRISQUE compares your image with a pre-learned model based on opinion scores. I am not sure about this, but would image resolution affect the result of BRISQUE?</p> <p>Moreover, if I have 2 filters and I would like to compare the results of using either filter, can I use BRISQUE to quantify which filter is better?</p> <p>Additionally, if I have a 2-step filter, should I compare the result of each step with the previous image using PSNR (that is, step 1 against the original and step 2 against step 1), or should I just feed the final result into BRISQUE?</p> <p>Thanks to anyone who can help me understand this better!</p>
600
filtering
LPF - signal values unaffected at specific times
https://dsp.stackexchange.com/questions/21774/lpf-signal-values-unaffected-at-specific-times
<p>Is it possible to design an LPF that has an output identical to the input at specific points in time domain (the rest of the input waveform can get filtered/distorted)? Is there a general name/technique for this kind of thing (assuming it is possible), so that I can search for more information on this topic?</p>
<p>What you are probably looking for are Nyquist/$M$th-band filters. The impulse responses of these have the following property (assuming a non-causal impulse response centered around tap 0 for ease of exposition):</p> <p>$h_n = \left\{ \begin{matrix} \frac{1}{M} &amp; n=0\\ 0 &amp; n = kM, \quad k=\pm 1, \pm 2, \dots\end{matrix} \right.$</p> <p>This means that every $M$th sample is untouched (the $1/M$ term is for scaling purposes). Often it is said that these filters have zero inter-symbol interference, assuming that a symbol arrives every $M$th sample.</p>
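<p>A quick numerical illustration of that property (a Python/NumPy sketch, not part of the answer above): a windowed-sinc lowpass with cutoff $\pi/M$ is an $M$th-band filter, and its taps vanish exactly at the nonzero multiples of $M$:</p>

```python
import numpy as np

M = 4                              # every 4th sample is to pass through untouched
half = 16
n = np.arange(-half, half + 1)     # odd-length impulse response centred on tap 0

# Windowed-sinc lowpass with cutoff pi/M (np.sinc is the normalized sinc,
# so sinc(n/M) is exactly zero at n = +/-M, +/-2M, ...)
h = np.sinc(n / M) / M * np.hamming(len(n))

center = half                      # index of tap 0
```

Windowing does not disturb the Nyquist property here, because the window only scales taps that are already zero at the multiples of $M$.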
601
filtering
Shock filtering using structure tensor
https://dsp.stackexchange.com/questions/32409/shock-filtering-using-structure-tensor
<p>My name is Niladri; I am new to image processing (this is actually my first code). I want to implement a shock filter using the structure tensor. I have a rough idea of what the structure tensor is and have implemented it in MATLAB. But to design a shock filter I need to calculate the sign using the dominating eigenvector. As far as I understand, the dominating eigenvector will be a number (real or complex) for any matrix, and the structure tensor is basically a 2x2 matrix for each point (x,y). Please correct me if my understanding is wrong. My code so far is below. Edit (11/08/16): I have completed the code but am still not getting the correct response; I am unable to find my misconception.</p> <pre><code>clc; clear all;

I = imread('C:\Users\Niladri\Desktop\miramarp2gs.bmp');
[m,n] = size(I);

struct_scale = 3;
%inte_scale = 5;

x = -2*struct_scale:2*struct_scale;
g = exp(-0.5*(x/struct_scale).^2);
g = g/sum(g);
gd = -x.*g/(struct_scale^2);

Ix = conv2(g', conv2(gd, double(I)));
Iy = conv2(g, conv2(gd', double(I)));
sx = Ix^2;
sxy = Ix*Iy;
sy = Iy^2;

%fileID = fopen('D:\IP\test.txt','w');
fileID1 = fopen('D:\IP\test2.txt','w');
for i = 1:512
    for j = 1:512
        s = [sx(i,j) sxy(i,j); sxy(i,j) sy(i,j)];
        %[v ,d] = sort(eigs(s));
        [v1, d1] = sort(diag(real(eig(s))));
        %[u , s1, d] = svd(s);
        v_perp = [0 -1; 1 0]*v1(:,1);
        t = abs(dot(v1(:,1),v_perp));
        if v1(dot(v_perp, u(:,1)))&gt;1E-9
            disp(t);
        end
        c = [v1(:,1),v_perp];
        temp = det(c);
        if temp &gt; 0
            res = 1;
        elseif temp &lt; 0
            res = 0;
        end
        fprintf(fileID1,'%1d',res);
    end
    fprintf(fileID1,'\n');
end
fclose(fileID1);
</code></pre>
<p>I think that the question here is "how is the dominant direction of the local slope actually estimated?" with the outlook of using it in a shock filter.</p> <p><a href="http://www.eurasip.org/Proceedings/Eusipco/Eusipco2012/Conference/papers/1569582949.pdf" rel="nofollow">A Shock Filter is applied iteratively</a> and each time it propagates grayscale values <strong>along</strong> the direction of an <em>edge</em> but <strong>not across</strong> the <em>edge</em>; therefore, progressively, the edge is preserved but the area towards the broad direction of the structure is "filled". This is similar to applying <a href="http://homepages.inf.ed.ac.uk/rbf/HIPR2/dilate.htm" rel="nofollow">dilation</a> or <a href="http://homepages.inf.ed.ac.uk/rbf/HIPR2/erode.htm" rel="nofollow">erosion</a> but taking into account the local contrast gradient. The <em>edge</em> can be defined at various scale levels, which results in the average direction of an edge defined over a bigger area surrounding a pixel of interest $(x,y)$.</p> <p>The code seems alright, in that it sets up the normalised Gaussian <code>g</code> and then derives its first-order derivative in <code>gd</code>. It uses that to obtain the gradients <code>Ix,Iy</code>, but <code>sx, sxy</code> should be pointwise, rather than matrix, operations. Otherwise, <code>Ix, Iy</code> "filter" each other. By controlling the variance parameter of <code>g</code> (and by extension, <code>gd</code> too), we control how big the area around the pixel of interest $(x,y)$ is (or, in terms of dilation and erosion, how big the disk, or <a href="http://homepages.inf.ed.ac.uk/rbf/HIPR2/strctel.htm" rel="nofollow">structuring element</a>, is).</p> <p>At this point, <code>Ix, Iy</code> could be used to estimate the local contrast gradient, but if we come across areas with roughly opposite slopes, then averaging their components would result in 0 and the (false) perception that the area is flat. 
Instead, the <a href="https://en.wikipedia.org/wiki/Structure_tensor" rel="nofollow">Structure Tensor</a> is used and more specifically the eigenvector that corresponds to the largest eigenvalue (and there is one of those for each $(x,y)$ pair). For more information please see <a href="http://ami.dis.ulpgc.es/biblio/bibliography/documentos/weickert_dagm03.pdf" rel="nofollow">this link</a> and specifically sections 2, 3 and 4.</p> <p>So, with that vector indicating the direction of the gradient of an <em>edge</em>, we find the perpendicular vector (with that sign flipping) and propagate values in <strong>that</strong> direction.</p> <p>Hope this helps.</p>
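<p>A minimal numerical sketch of the structure-tensor eigen-analysis described above (Python/NumPy with <code>scipy.ndimage</code>; the synthetic edge and smoothing scale are made up for illustration). Note the tensor components are smoothed <em>pointwise</em>, exactly the fix suggested for the Matlab code:</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic image: dark left half, bright right half (a vertical edge)
img = np.zeros((32, 32))
img[:, 16:] = 1.0

Iy, Ix = np.gradient(img)            # np.gradient returns d/drow, d/dcol

# Structure tensor components, smoothed POINTWISE (not matrix products)
sigma = 2.0
Jxx = gaussian_filter(Ix * Ix, sigma)
Jxy = gaussian_filter(Ix * Iy, sigma)
Jyy = gaussian_filter(Iy * Iy, sigma)

# Dominant eigenvector at a pixel on the edge
r, c = 16, 16
J = np.array([[Jxx[r, c], Jxy[r, c]],
              [Jxy[r, c], Jyy[r, c]]])
w, v = np.linalg.eigh(J)             # eigh returns ascending eigenvalues
dominant = v[:, -1]                  # eigenvector of the largest eigenvalue
```

For this vertical edge the dominant eigenvector points horizontally (across the edge, i.e. along the gradient); the perpendicular vector, used for propagation, then points along the edge.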
602
filtering
How is Bayesian Estimation related to filtering?
https://dsp.stackexchange.com/questions/35526/how-is-bayesian-estimation-related-to-filtering
<p>I am reading about <a href="http://rads.stackoverflow.com/amzn/click/0133457117" rel="nofollow noreferrer">estimation theory</a>, including topics like Bayesian estimation (e.g. Wiener filtering).</p> <p>It seems that we usually define a filter in terms of its frequency response (e.g. high-pass, low-pass). On the other hand, the Wiener filter works by filtering out noise without specific reference to the frequency domain.</p> <p>How are the two related? Or are they just two different approaches to the filtering problem?</p>
<p>In Wiener filtering, you filter a noisy signal to more closely resemble a <em>desired</em> signal that you have access to. In Bayesian estimation, you take <em>prior knowledge</em> into account to estimate some state given noisy measurements. In frequency filtering, you just remove frequency content from a signal. </p>
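<p>To make the contrast concrete, here is a minimal Python sketch (an idealized construction, not from the answer above) of a Wiener-style frequency weighting $H(\omega)=S(\omega)/(S(\omega)+N(\omega))$: the "filter" is derived from signal and noise statistics, and only incidentally ends up looking like a frequency-selective filter:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
t = np.arange(N)

s = np.sin(2 * np.pi * 205 / N * t)   # "desired" signal (exactly on an FFT bin)
x = s + rng.normal(size=N)            # noisy observation, unit-variance noise

# Wiener-style weighting built from the (here: known) spectra.  In practice
# S and the noise PSD come from models or training data, i.e. the
# "prior knowledge" that Bayesian estimation relies on.
S = np.abs(np.fft.rfft(s)) ** 2
Npsd = np.full_like(S, float(N))      # E|FFT|^2 of unit-variance white noise
H = S / (S + Npsd)

s_hat = np.fft.irfft(H * np.fft.rfft(x), n=N)

mse_before = np.mean((x - s) ** 2)
mse_after = np.mean((s_hat - s) ** 2)
```

The resulting $H$ is near 1 where the signal dominates and near 0 elsewhere, so the statistically derived estimator behaves like a narrow band-pass, which is exactly the connection the answer describes.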
603
filtering
How to filter key clicks?
https://dsp.stackexchange.com/questions/36589/how-to-filter-key-clicks
<p>I am the author of an amateur radio application that produces a waterfall display across a range of frequencies. Time is on the $x$-axis, frequency is on the $y$-axis, and the relative strength of each signal is depicted by the intensity of color.</p> <p>Some of the signals have, what are called, <em>"key clicks"</em>. That is, at the start and end of each signal, the bandwidth of the signal is wider than the steady-state bandwidth. An example is shown below:</p> <p><a href="http://www.kkn.net/~n2ic/key_clicks.jpg" rel="nofollow noreferrer">[1] http://www.kkn.net/~n2ic/key_clicks.jpg</a></p> <p>These are actually present in the signal - They are not artifacts of my DSP processing. I would appreciate suggestions on how to filter out the key clicks.</p>
<p>Assuming from your screenshot that the frequency magnitudes of "key clicks" are always higher (brighter) than the normal signal, you could just employ a basic <a href="https://en.wikipedia.org/wiki/Limiter" rel="nofollow noreferrer">Limiter</a>? In other words, if a frequency magnitude value exceeds a certain threshold, set that frequency value to its last (valid) value. If you wanted to keep the plot from looking unnatural, you could simulate the data where the clicks are or use data from previous samples. </p>
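<p>A minimal sketch of that idea in Python/NumPy (the threshold value and the "hold the last valid value" policy are assumptions for illustration, not something from the question):</p>

```python
import numpy as np

def limit_frames(mag, threshold):
    """Clamp spectrogram magnitudes: wherever a bin exceeds `threshold`,
    hold that bin's last non-exceeding value instead.
    `mag` is a (frames, bins) array of magnitude values."""
    out = mag.copy()
    for i in range(1, out.shape[0]):
        clicked = out[i] > threshold
        out[i, clicked] = out[i - 1, clicked]   # reuse the previous frame's value
    return out

mag = np.array([[0.20, 0.30],
                [0.90, 0.25],    # key click in bin 0
                [0.21, 0.95]])   # key click in bin 1
clean = limit_frames(mag, 0.5)
```

In a real application the same clamp would run per STFT frame before display, and the threshold could be made adaptive (e.g. a multiple of a running median per bin).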
604
filtering
How can i create a low pass filter in matlab
https://dsp.stackexchange.com/questions/41619/how-can-i-create-a-low-pass-filter-in-matlab
<p>I am trying to create a low-pass filter in MATLAB for a project in my signals and systems class, but I couldn't manage it. The cutoff frequency is 500 Hz, the sampling frequency is 10000 Hz, and the low-pass filter's bandwidth is 150 Hz. If you can help me I would be very appreciative.</p>
<p><code>filterDesigner</code> launches the Filter Designer, a Graphical User Interface (GUI) that allows you to design or import, and analyze, digital FIR and IIR filters. If the DSP System Toolbox is installed, Filter Designer seamlessly integrates advanced filter design methods and the ability to quantize filters.</p> <pre><code>% Example:
% Launch Filter Designer.
filterDesigner;
</code></pre> <p>See also <code>fvtool</code> and <code>signalAnalyzer</code>, and the reference page for <code>signal/filterDesigner</code>.</p>
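<p>If you would rather script the design than use the GUI, the same specification can be met with a windowed-FIR design. Here is a rough Python/SciPy equivalent (reading the question's 150 Hz "bandwidth" as the transition width, which is an assumption, and picking 60 dB of attenuation arbitrarily):</p>

```python
import numpy as np
from scipy import signal

fs = 10_000   # sampling frequency (Hz), from the question
fc = 500      # cutoff frequency (Hz)

# Kaiser-window design: 60 dB attenuation, 150 Hz transition width
# (kaiserord expects the width normalized to the Nyquist frequency)
numtaps, beta = signal.kaiserord(ripple=60, width=150 / (fs / 2))
taps = signal.firwin(numtaps, fc, window=('kaiser', beta), fs=fs)

w, h = signal.freqz(taps, worN=4096, fs=fs)
passband = np.abs(h[w < 400]).min()   # should stay near unity gain
stopband = np.abs(h[w > 650]).max()   # should be attenuated by ~60 dB
```

The transition band is centred on the cutoff, so the passband edge sits near 425 Hz and the stopband edge near 575 Hz.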
605
filtering
Recommendation for studying filters
https://dsp.stackexchange.com/questions/45193/recommendation-for-studying-filters
<p>I have a project to do about what happens to a periodic function when we pass it through a low-pass, high-pass and band-pass filter. I have no expressions for the filters or the function, I just have to analyse graphics. I already concluded that the low-pass filter passes the zones of the graphic that have a low variation, and that the high pass filter passes the zones of the graphic that have a high variation. In the case of the band-pass filter the resulting wave consists on various sinusoids (it just passes certain intervals of variations). Ok, I now have the general idea, but I was looking for a text that provided me more rigorous explanation in terms of the frequencies, and their relation to the variation of a function. Does anyone here knows a book, or a paper that covers this specific topic of signals filtering? Thanks! </p>
<p>If you have some numerical methods background, Richard Hamming's <em>Digital Filters</em> is a good place to start. The book is from 1977 but the math hasn't changed. It has also been released by Dover, so it has a very low price, unlike Oppenheim and Schafer's book, and, also unlike Oppenheim and Schafer, it is compact. The math level isn't advanced.</p> <p>Hamming was a pioneer of coding theory; he essentially invented error-correcting (Hamming) codes. He was also a president of the ACM.</p>
606
filtering
filter multiple sources from each other
https://dsp.stackexchange.com/questions/53871/filter-multiple-sources-from-each-other
<p>I am trying to supply a solution for recording a courtroom and then using some smart algorithms to automatically convert the speech to text.</p> <p>To do that I have three boom microphones: one near the judge, the second on the A side and the third on the B side. This way I get 3 different sound sources to convert to text. The problem is that each microphone also picks up the other sides (quietly, but they are still there).</p> <p>I mean that the judge's microphone can hear the A and B sides a little as well. This of course messes with my voice-to-text algorithm.</p> <p>Is there any way to subtract the A-side and B-side tracks from the judge's microphone in order to clean it and hear only the judge?</p> <p>Then I can apply the same approach to the A side and the B side as well. Does anyone know how to do it?</p>
<p>That's called <em>source separation</em>, and higher-end conference table systems already do that<sup>citation needed</sup>.</p> <p>You'll be able to find quite a bit of literature if you search for that term, but an easy approach would be to assume that things are linear (sadly, in audio, that's not really often the case), and that you can simply:</p> <p>Calibrate the system by only speaking (better: feeding white noise) into one microphone, then calculating the auto- and crosscorrelations between the signals. The cross-correlation functions will directly tell you with what you need to convolve the signal of that one microphone before subtracting it from the signals of the others.</p> <p>But maybe the solution to that problem is simpler: Microphones come with a directivity; use such that only record what's coming from straight in front of them.</p>
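<p>A toy version of that calibration step in Python/NumPy, with the leakage path simplified to a single delay and gain (a strong assumption; real rooms need a full impulse-response estimate, which is what the cross-correlation functions give you in general):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
judge = rng.normal(size=n)              # white-noise calibration signal

delay, gain = 7, 0.3                    # "unknown" leakage path into mic A
mic_a = np.zeros(n)
mic_a[delay:] = gain * judge[:-delay]   # what mic A picks up of the judge
mic_a += 0.01 * rng.normal(size=n)      # a little sensor noise

# Estimate the path from the cross-correlation with the clean reference
xc = np.correlate(mic_a, judge, mode='full')
lag = int(xc.argmax()) - (n - 1)
g_est = xc.max() / np.dot(judge, judge)

# Subtract the estimated leakage
cleaned = mic_a.copy()
cleaned[lag:] -= g_est * judge[:-lag]
```

After subtraction, only the sensor noise remains in <code>cleaned</code>; with speech instead of noise the same template-subtraction idea applies, just with a poorer-conditioned estimate.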
607
filtering
Can filter &quot;depth&quot; be adjusted by mixing dry and wet signals?
https://dsp.stackexchange.com/questions/24831/can-filter-depth-be-adjusted-by-mixing-dry-and-wet-signals
<p>Can filter "depth" be adjusted by mixing dry and wet signals? </p> <p>I.e. can I simulate e.g. a +6dB bandshelf/peak filter at 1kHz by mixing in some of the dry unequalized signal and some of a wet signal that has been bandpass filtered at 1kHz and the filter has around the same shape as the bandshelf/peak. </p> <p>Can it be theoretically the same?</p>
<p>adding the input to the output of a scaled 2nd-order bandpass IIR <strong>will</strong> get you the classic peak/cut EQ curve.</p> <p>adding the input to the output of a scaled 1st-order LPF or HPF <strong>will</strong> get you a 1st-order low-shelf or high-shelf EQ.</p> <p>adding the input to the output of a scaled 2nd-order LPF or HPF will get you a <em>particular</em> form of a 2nd-order low or high shelving EQ, but there are many others. doing it this way will often leave an unintended null or lump in the frequency response. if you want a symmetric (in log frequency) and perfectly monotonic gain vs. frequency curve, i might suggest referring to the <a href="https://www.w3.org/TR/audio-eq-cookbook/" rel="nofollow noreferrer">Audio EQ Cookbook</a>.</p>
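<p>The first point can be checked numerically. Below is a Python/SciPy sketch (using <code>scipy.signal.iirpeak</code> as the unity-gain 2nd-order resonant bandpass, an implementation choice rather than something prescribed above): mixing the dry input with a scaled wet bandpass gives the classic +6 dB peaking curve at the centre frequency and roughly unity gain far away:</p>

```python
import numpy as np
from scipy import signal

fs = 48_000
f0 = 1_000                 # centre of the boost (Hz)
g = 10 ** (6 / 20) - 1     # mix amount so that 1 + g corresponds to +6 dB

# unity-peak-gain 2nd-order resonant bandpass at f0
b, a = signal.iirpeak(f0, Q=2.0, fs=fs)

# dry + g*wet: overall H(z) = 1 + g*B(z)/A(z) = (A(z) + g*B(z)) / A(z)
b_total = a + g * b

w, h = signal.freqz(b_total, a, worN=[100.0, 1000.0, 10000.0], fs=fs)
gains_db = 20 * np.log10(np.abs(h))
```

The residual deviation away from the peak (a fraction of a dB here) is the "unintended lump" effect mentioned for the shelving case; for the bandpass-based peak it is benign.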
608
filtering
How to design a phonographic sound filter in python?
https://dsp.stackexchange.com/questions/84182/how-to-design-a-phonographic-sound-filter-in-python
<p>I'm making a phonographic filter or simulator to make one of my songs sound as if it was recorded on a phonographic cylinder.</p> <p>Two important things about phonographic recording are that sound gets more treble when it's louder, and that the wave is recorded top to bottom, so the negative part of the wave will be quieter than the positive part: the deeper the stylus goes into the wax, the higher the resistance of the wax, which also results in some AC shift.</p> <p>Also, the wax of the recording medium isn't uniform, having varying height and hardness, which generates the characteristic sound of silence. For noise generation I used this image of cracked soap <a href="https://thumbs.dreamstime.com/b/close-up-cracked-dry-soap-surface-background-dehydrated-dry-skin-concept-close-up-cracked-dry-soap-surface-background-134542817.jpg" rel="nofollow noreferrer">https://thumbs.dreamstime.com/b/close-up-cracked-dry-soap-surface-background-dehydrated-dry-skin-concept-close-up-cracked-dry-soap-surface-background-134542817.jpg</a>, blurred it a little (because it's too cracked), and converted it to grayscale. 
I could use an actual recording of gramophone silence, though.</p> <p>Having the noise and the input signal, the next step is to shift the speed of the input. I'm not sure about the relations in the equations here, but I made the speed at any point inversely proportional to the sum of the signal and the noise (the signal makes the stylus go deeper into the wax, and the noise generates drag resistance as a measure of wax hardness, so it slows the electric motor). Then I AC-shifted the signal to the negative peak (-1), multiplied it by a factor of the noise (resistance to the stylus plunge), and then shifted it back to 0.</p> <p>I obtained an acceptable sound (for the purpose) after post-processing the resulting signal with some noise gating, bandpass filtering, overdrive and echo.</p> <p>So how can I improve my algorithm, and why did I need the post-processing? What are the physical processes implied in the noise gating, bandpassing, and the other alterations of the phonographic sound?</p>
609
filtering
Adding channel effects to a signal
https://dsp.stackexchange.com/questions/25501/adding-channel-effects-to-a-signal
<p>I have a channel matrix "H" that is circulant. I have data blocks. I want to add the channel effects to the signal.</p> <p>When H was only a vector of channel coefficients I would say:</p> <pre><code>%Going Through The Channel After_channel= filter(H,1,Data); </code></pre> <p>but now that H is a matrix the line above wouldn't work.</p> <p>I'm not sure what to do </p>
<p>This is a small example of how to implement this in Matlab:</p> <pre><code>h=[0.4070; 0.8150; 0.4070]; % Channel: Proakis A
d=[1; -1; 1; 1];            % Data
H=convmtx(h,4);             % Channel Matrix
y1=H*d;                     % Matrix approach
y2=conv(h,d);               % Convolution
y1-y2
</code></pre>
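<p>The same check in Python, using <code>scipy.linalg.convolution_matrix</code> (available in SciPy ≥ 1.5) in place of Matlab's <code>convmtx</code>:</p>

```python
import numpy as np
from scipy.linalg import convolution_matrix

h = np.array([0.4070, 0.8150, 0.4070])   # channel: Proakis A
d = np.array([1.0, -1.0, 1.0, 1.0])      # data

H = convolution_matrix(h, len(d), mode='full')   # 6x4 channel matrix
y1 = H @ d                                        # matrix approach
y2 = np.convolve(h, d)                            # direct convolution
```

Both paths produce the same received block, which is the whole point of building the channel matrix.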
610
filtering
An intuitive explanation as to why a matched filter is time reversed?
https://dsp.stackexchange.com/questions/70173/an-intuitive-explanation-as-to-why-a-matched-filter-is-time-reversed
<p>It is easy enough to study correlation and matched filters. But the challenge I see unmet anywhere to date, and which I struggle to meet myself, is a simpler presentation; a more difficult task, I suspect.</p> <p>Can you explain, in lay terms, to satisfy the intuitions of a listener, why the matched filter is a time-reversed copy of the signal we hope to detect? It is not a mathematical proof or explanation I'm after (they abound), but some other way of explaining it (to lay listeners) that generates an &quot;a ha&quot; experience: a feeling that it makes sense, that I get why we time-reverse the signal for a matched filter ...</p> <p>This is something I have yet to find, read or master myself.</p>
<p>Picture the transmitted signal as a «signature». You want to find some process that maximize the probability of detecting that signature even when there is noise.</p> <p>What do you do to find some signature buried in noise? You make a template that exactly match the known signature, and you slide it back or forth in time, noting how much the actual signal deviates from the ideal template in any one spot. The time-shift that gives a «large enough» correspondence is your assumed signal location.</p> <p>The time-reversal of the template is just about reversing the reversal of that argument in the convolution operation. Think of it as non-reversed correlation if you prefer (leaving out complex numbers right now).</p> <p>-k</p>
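<p>The «reversal of the reversal» can be made concrete in a few lines of Python: convolving with the time-reversed template is literally the same arithmetic as sliding the (un-reversed) template along the signal and correlating:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # the "signature"

x = 0.1 * rng.normal(size=100)                   # noise...
x[40:45] += template                             # ...with the signature at offset 40

# Convolution with the time-REVERSED template ...
y = np.convolve(x, template[::-1], mode='valid')
# ... is identical to sliding correlation with the template itself
y_corr = np.correlate(x, template, mode='valid')

found = int(y.argmax())                          # location of the best match
```

The convolution flips its second argument internally, so pre-flipping the template undoes that, and what slides along the signal is the signature the right way round.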
611
filtering
The result if order of two filter are reversed
https://dsp.stackexchange.com/questions/73111/the-result-if-order-of-two-filter-are-reversed
<ol> <li>Apply the Composite Laplacian Filter first, then apply the gaussian filter.</li> <li>Apply the gaussian filter first, then apply the composite laplacian filter.</li> </ol> <p>My work as below: Here we assume the original image is function <span class="math-container">$f(x,y)$</span> <span class="math-container">$$ (1): (f-\nabla^2f)\ast G = f \ast G - \nabla^2f \ast G$$</span> <span class="math-container">$$ (2): (f \ast G) - \nabla^2(f \ast G) = f \ast G - f \ast \nabla^2G $$</span></p> <p>I am wondering the statement <span class="math-container">$\nabla^2f \ast G = f \ast \nabla^2G$</span> is true or not.</p>
<p>Formally, (linear) derivatives and convolutions commute, as explained on <a href="https://en.wikipedia.org/wiki/Convolution#Differentiation" rel="nofollow noreferrer">Wikipedia-Convolution/Properties/Differentiation</a>. This is a major &quot;operation-saving&quot; property: if one wants to differentiate many images and convolve them with a fixed kernel (here a Gaussian), one can instead convolve the images with the differentiated kernel.</p> <p>You don't have to differentiate all the images, just the kernel, once.</p> <p><em>Caveat</em>: what is written above should be taken with care, notably when derivatives don't exist theoretically, or with subtleties in implementing derivatives.</p>
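<p>The identity is easy to sanity-check numerically in 1-D (a Python sketch; full convolutions are used so that it holds exactly, whereas "same"-size boundary handling, one of the implementation subtleties mentioned above, can break it at the edges):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=64)                            # a signal / image row
g = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)   # Gaussian kernel G
g /= g.sum()

lap = np.array([1.0, -2.0, 1.0])                   # 1-D discrete Laplacian

lhs = np.convolve(np.convolve(f, lap), g)          # (lap * f) * G
rhs = np.convolve(f, np.convolve(lap, g))          # f * (lap * G)
```

Both sides agree to floating-point precision, which is just associativity of (full) convolution: $\nabla^2 f \ast G = f \ast \nabla^2 G$ in its discrete form.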
612
filtering
Does a filter add oscillations to a signal?
https://dsp.stackexchange.com/questions/38171/does-a-filter-add-oscillations-to-a-signal
<p>This is a more general version of a <a href="https://dsp.stackexchange.com/q/38108/4298">question which I asked previously</a>.</p> <p>From what I understand, the purpose of a filter is to change the amplitude of a specific frequency band.</p> <p>As I went through the analytic calculation of a filtered (2nd order) signal I observed that the output signal contains an oscillating component (a sinusoid, with the frequency being the imaginary part of the complex conjugate pole pair).</p> <p>A complex pole (in the left half plane) does correspond to a damped oscillation. I really cannot understand how the addition of a specific frequency to the signal conforms with the job of a filter. Thanks for your insights!</p> <p>EDIT: </p> <p>The filter I am considering is a second-order low-pass Bessel filter. The input signal is an exponential decay. I tried to calculate the output signal and observed there was an exponentially decaying cosine involved, with its frequency being the imaginary part of the two poles. It really surprised me since I did not expect an oscillation in the filtered signal, which the signal did not express before.</p> <p>I do understand that this sinusoid was already present in the signal before filtering; it just struck me that it appears in the filtered signal as a single term.</p>
<p>There are two unrelated phenomena that need to be understood in this context. First of all, as pointed out in the answers by <a href="https://dsp.stackexchange.com/a/38177/4298">hotpaw2</a> and by <a href="https://dsp.stackexchange.com/a/38172/4298">MBaz</a>, an LTI system cannot add any frequency components to an input signal. This is obvious from the input-output relation in the frequency domain:</p> <p>$$Y(\omega)=X(\omega)H(\omega)\tag{1}$$</p> <p>where $Y(\omega)$ is the output spectrum, $X(\omega)$ is the input spectrum, and $H(\omega)$ is the filter's frequency response. Clearly, frequencies not contained in the input signal (i.e., frequencies for which $X(\omega)=0$ holds), cannot appear at the output because if $X(\omega_0)=0$ it follows that $Y(\omega_0)=0$ (assuming finite $H(\omega)$). However, this statement must be understood correctly, as explained below.</p> <p>As you've observed, there can be oscillations in the output signal that do not appear to be present in the input signal. These are caused by one of two phenomena. The first one can be observed for systems with rational transfer functions having complex conjugate poles. A stable system with poles at $s_{\infty}=-\alpha\pm j\omega_0$ ($\alpha&gt;0$) will generally contain an exponentially damped output term with frequency $\omega_0$, even if $X(\omega_0)=0$! Note that this is no contradiction with $(1)$ because a damped oscillation at $\omega_0$ in the output signal can even occur if $Y(\omega_0)=0$. The equation $Y(\omega_0)=0$ only means that a sinusoidal signal with frequency $\omega_0$ extending from $t=-\infty$ to $t=\infty$ cannot occur in the output signal. It does not say that there cannot be a right-sided <em>exponentially damped</em> sinusoidal signal at that frequency. 
The part of the output signal related to the system's poles (even with zero initial conditions) is called <em>natural response</em> (in contrast to <em>forced response</em>, the shape of which is determined by the input signal). If you want to read more (than you might want to know) about natural and forced response, check out <a href="https://dsp.stackexchange.com/a/29743/4298">this answer</a>.</p> <p>The second phenomenon is the <a href="https://en.wikipedia.org/wiki/Gibbs_phenomenon" rel="nofollow noreferrer">Gibbs phenomenon</a>, which is most clearly observed with ideal (low pass) filters that completely suppress certain higher frequencies of the input signal, and in this way cause oscillations that were seemingly not present in the input signal. However, those oscillations actually don't occur because you add frequencies but because you <em>remove</em> frequencies from the input signal. Think about the ripples in the impulse response of an ideal low pass filter (a sinc function), which are often clearly visible in the output signal, especially if the input signal is wideband, such as an impulse or an ideal step. A nice figure of the step response of an ideal low pass filter (a sine integral) can be seen <a href="https://en.wikipedia.org/wiki/Gibbs_phenomenon#Signal_processing_explanation" rel="nofollow noreferrer">here</a>.</p>
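<p>The pole-related ringing described above is easy to reproduce numerically: feed a step (which certainly contains no damped tone) into a 2nd-order lowpass with complex-conjugate poles and watch the oscillation appear in the output. The sketch below uses a Butterworth section for a clearly visible overshoot; a 2nd-order Bessel filter, as in the question, behaves the same way but with heavier damping:</p>

```python
import numpy as np
from scipy import signal

# 2nd-order analog lowpass, cutoff 1 rad/s: poles at -0.707 +/- 0.707j
b, a = signal.butter(2, 1.0, analog=True)
t, y = signal.step((b, a), T=np.linspace(0, 20, 2000))

overshoot = y.max() - 1.0     # ringing above the final value (~4% for Butterworth)
final_err = abs(y[-1] - 1.0)  # the natural response has died out by t = 20
```

The overshoot and the subsequent ripples oscillate at the poles' imaginary part, even though the step input has no such component: that is the natural response at work.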
613
filtering
Remove high resolution spurious peaks from sinusoidal signal
https://dsp.stackexchange.com/questions/46281/remove-high-resolution-spurious-peaks-from-sinusoidal-signal
<p>I have the current from a LEM transducer measured. The measurement is taken on the output of a transformer. The signal is a 50 Hz signal, measured at 100 kHz. When the demand of the system is increased the current increases, and therefore the amplitude of the sinusoid gets bigger. I am looking for the maximum currents and the period over which they occurred. </p> <p>Unfortunately the data is not the best quality. In some files the transducer connections appear to be bad: the signal sometimes jumps and causes a spike, but the spike has many data points, so using a median filter or similar is not helping me rid the data of the spikes. </p> <p>I had initially applied a moving RMS filter to the signals (I have 50 different test files) and then taken the maximum of the signal to get the value and time where it occurred. However, the spikes throw this off. </p> <p>I thought about downsampling the signal to reduce the resolution and hence the number of samples in the spikes and then doing a moving RMS, but I would prefer to keep the resolution of the original dataset. </p> <p>Does anyone have any suggestions that I could try? </p>
<p>Since the spikes are regions of bad data, you want to figure out how to identify where they are and exclude them from your analysis. You certainly have more than enough sampling points to get a good read on the signal parameters in the regions that are good.</p> <p>You indicate that you are looking for the amplitude. What resolution is your goal, every cycle? Are there harmonics?</p> <p>I would set up a DFT frame that is two or three, maybe four cycles long. Let's say four, for argument's sake. That will be 8000 sample points. Your DFT doesn't have to be nearly that dense. A 64-point FFT should do the trick. Just use every 8000 / 64 = 125th sample. If you have a spikeless signal, only the bins that are multiples of four should have any significant magnitude and the bins in between will be near zero. You can then be confident you have a clean read and find the peaks in the time domain.</p> <p>If there is a spike in your DFT frame, then the in-between bins will have larger values. A little experimentation should allow you to find good threshold values. It is important for you to frame the DFT on a whole number of cycles. The easiest way to do this is to look for zero crossings.</p> <p>Ced</p>
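<p>A Python sketch of the framing idea above (the spike shape and the detection ratio are made up for illustration): with the frame locked to four whole cycles, only every 4th bin of the 64-point DFT carries signal, and a spike lights up the in-between bins:</p>

```python
import numpy as np

fs = 100_000                 # 100 kHz sampling, 50 Hz signal (from the question)
f_sig = 50
N_frame = 4 * fs // f_sig    # 8000 samples = exactly 4 cycles
decim = N_frame // 64        # keep every 125th sample -> 64-point DFT

def inbetween_ratio(frame):
    """Ratio of the largest in-between bin to the largest multiple-of-4 bin."""
    X = np.abs(np.fft.rfft(frame[::decim]))
    harm = X[::4]                                  # bins 0, 4, 8, ...
    rest = np.delete(X, np.arange(0, X.size, 4))   # everything in between
    return rest.max() / harm.max()

n = np.arange(N_frame)
x = np.sin(2 * np.pi * f_sig * n / fs)

spiky = x.copy()
spiky[3000:3400] += 2.0      # a stretch of bad data riding on the signal

clean_ratio = inbetween_ratio(x)
dirty_ratio = inbetween_ratio(spiky)
```

A clean frame gives a ratio near machine precision; the spiky frame pushes it up by orders of magnitude, so a simple threshold separates the two cases.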
614
filtering
Filtering Overlapping Frequencies in sound file
https://dsp.stackexchange.com/questions/19485/filtering-overlapping-frequencies-in-sound-file
<p>I have been given a sound file of a plane passing over a rain forest filled with birds. I am supposed to filter out the sound of the plane as it flies over. I've accomplished this with various types of filters in MATLAB, but I always run into one problem. I can either cut out all of the plane and lose some of the background rain forest noise, or leave (almost) all of the rainforest noise and leave a decent amount of airplane noise. This is the frequency spectrum of the original sound file: <img src="https://i.sstatic.net/BqTQ2.png" alt="Frequency Spectrum"> (Sorry for the crappy image.) Note the different colors represent left and right channels. From the plot, the majority of the plane's frequency is in the 0-1000 Hz range. The plane is clearly heard however until I use a high pass filter to remove everything from around 1700 Hz to 0 like so: <img src="https://i.sstatic.net/HoMXN.png" alt="enter image description here"> I suspect that the plane is causing that spike around 1700 Hz but evidently the rain forest is still producing lower frequency noise than that. </p> <p>What would be the best method to remove that spike while keeping all that extra rain forest frequency?</p> <p>EDIT:</p> <p>Here is the plot of the time domain. Note how the plane gradually gets louder than fades away. <img src="https://i.sstatic.net/TtUqo.png" alt="enter image description here"></p> <p>The sampling frequency is 22050 samples/sec. </p> <p>And the spectrogram. <img src="https://i.sstatic.net/YzOu3.jpg" alt="enter image description here"></p>
<p>In my opinion, the best approach for this kind of spectrally-overlapping noise is to use blind source separation techniques, like independent component analysis (ICA) or time-frequency masking, or sparse source decomposition. Look up the work of Emmanuel Vincent, Kostas Kokkinakis, or Philipos Loizou. These methods work best when some statistics of the noise are time-independent, and the SNR is small or negative, as appears to be the case in your example. Vincent has some MATLAB code available on the web.</p>
615
filtering
Filtering out a specific frequency in an analog signal
https://dsp.stackexchange.com/questions/24782/filtering-out-a-specific-frequency-in-an-analog-signal
<p>I'm wondering how I would go about filtering out a regular sine-wave signal with a constant frequency and voltage.</p> <p>I'm really new to the whole DSP-area but imagine I would have a regular sinewave at a specific frequency (let's say 30Hz), and on top of that signal I have a bunch of other small peaks (which are the ones I would want to sample and read). What method would I choose?</p> <p>Here is a visualization of what I meant:</p> <p><img src="https://i.sstatic.net/hrQD4.png" alt="enter image description here"></p> <p>Excuse my terrible Paint-skills but how would I filter out that sinewave, and <em>just</em> keep the small spikes in voltage? What might be worth sharing is that the time between the spikes does have an impact on my measurement of the spikes, or to rephrase: The amount of spikes within x-units of time does have an impact.</p> <p>Any guidance to the right direction would really be appreciated.</p> <p>Thank you.</p>
<p>A notch filter is a good approach in general for removing an unwanted frequency. But if you are certain of the <strong>exact</strong> frequency, phase and amplitude of the unwanted sine wave, then you may also consider adding the noisy signal to a sine wave of the same frequency and amplitude but 180 degrees out of phase with the unwanted sine wave.</p> <hr> <p>To illustrate the notch filter example, the plot below shows an attempt to generate a sine wave with some odd impulses (black), and then filter it with a notch filter knowing the exact frequency (red).</p> <p><a href="https://i.sstatic.net/DHzKw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DHzKw.png" alt="enter image description here"></a></p> <p>R code for implementing it below.</p> <hr> <pre><code># 24782
# install.packages('signal')

T &lt;- 1024
t &lt;- 0:(T-1)
fs &lt;- 1000
omega &lt;- 2*pi*30/1000
phi &lt;- 0.9279835

x &lt;- sin(omega*t + phi)
changed &lt;- sample(1:T, 10)
y &lt;- x
y[changed] &lt;- y[changed] + 3

alpha &lt;- 0.9
num &lt;- c(1, -2*cos(omega), 1)
den &lt;- c(1, -2*alpha*cos(omega), alpha*alpha)
yf &lt;- signal::filter(num, den, y)

plot(t, y, type="l", lwd=10)
lines(t, yf, col="red", lwd=4)
legend(850, 4, c("Original", "Filtered"),
       lwd=c(2.5,2.5), col=c("black","red"))
</code></pre>
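<p>For reference, a comparable notch can be obtained in Python with <code>scipy.signal.iirnotch</code> (the Q value below is an arbitrary choice giving a narrow notch; the R script's pole radius of 0.9 corresponds to a somewhat wider one):</p>

```python
import numpy as np
from scipy import signal

fs = 1000
f0 = 30                          # unwanted sine frequency (Hz)
b, a = signal.iirnotch(f0, Q=10, fs=fs)

t = np.arange(1024) / fs
x = np.sin(2 * np.pi * f0 * t)   # the pure unwanted component
y = signal.lfilter(b, a, x)

residual = np.abs(y[-256:]).max()    # after the filter transient dies out
```

After the start-up transient, the 30 Hz component is suppressed essentially to zero, while content away from the notch (the spikes in the example above) would pass through largely unchanged.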
616
filtering
Use of an impulse-response system to model sediment transport along a river reach
https://dsp.stackexchange.com/questions/46302/use-of-an-impulse-response-system-to-model-sediment-transport-along-a-river-reac
<p>I am new to signal processing and have come into the subject from the study of rivers and basic geophysics. I am trying to test an idea previously put forward that the sediment transport response of rivers can be modeled as a linear impulse-response system. Transport along a river reach changes as the upstream supply of water or sediment changes. Working from an impulse response convolution, I derived an algebraic equation for the sediment transport (flux) of a river reach as measured at the downstream end of the reach. Here is the derivation of the algebraic equation (Equation 27 below), and I apologize in advance because I prepared the write-up snippet as an image from the master LaTex file:</p> <p><a href="https://i.sstatic.net/arp8D.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/arp8D.jpg" alt="My write-up of the derivation used to describe the response of a river as a linear impulse-response system"></a></p> <p>When I use Equation 27 to model the measured sediment transport at the downstream end of a river reach I observe that when the system is perturbed by an increase in the supply of water and sediment at the upstream end (elapsed time ~2100-2400 and 4050-4310 minutes in the Figure that follows), Equation 27 predicts the mirror image in behavior of what I measure. This is depicted in the following figure:</p> <p><a href="https://i.sstatic.net/NK5Vk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NK5Vk.jpg" alt="Measured and modeled sediment transport at the downstream end of a river reach"></a></p> <p>In the Figure the measured transport is the heavy dark line, the modeled response with Equation 27 is the light blue dotted line, and the inverse of Equation 27 for periods of supply perturbation is the dark gray dashed line. Note that to calculate the responses shown in the Figure I reset the elapsed time when the upstream supply changes - but here I am plotting the results against the running elapsed time. 
It can't be a coincidence that the piece-wise modeled-inverse curve (dark gray dashed line) does a pretty good job of matching the measured transport. However, I have no idea why nor do I know if I can arrive there mathematically from the starting point of Equation 20. Any assistance or guidance on this would be greatly appreciated.</p>
617
filtering
How the zero - phase filter without filtered signal truncation at the end can be implemented?
https://dsp.stackexchange.com/questions/55372/how-the-zero-phase-filter-without-filtered-signal-truncation-at-the-end-can-be
<p>I mean that for a windowed FIR filter, the filtered signal is truncated at the end because the impulse response of the filter is symmetric. For example, the code from here <a href="https://github.com/tmk/tmk_keyboard/blob/master/tmk_core/tool/mbed/mbed-sdk/workspace_tools/dev/dsp_fir.py" rel="nofollow noreferrer">(in Python)</a> (lowpass FIR filter) gives the following results:</p> <p>color meanings:<br/> <br/> blue - input signal<br/> green - shifted filtered signal<br/> red arrow - the end of the filtered signal</p> <p><a href="https://i.sstatic.net/g98ay.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g98ay.png" alt="enter image description here"></a></p> <p>The red arrow shows where the filtered signal ends. How can I implement a zero-phase or nearly zero-phase filter so that the filtered signal ends at the same point in time as the original one? The end of the filtered signal is of most significance.</p> <p>Thanks.</p>
<p>You can either gather more data at the end than you need for output, or zero pad past the end of your data with at least half the length of your zero-phase filter. </p> <p>If you can’t get data past the “ends”, and don’t like the artifacts from zero-padding, then you might try other reasonable assumptions (circularity, continuity, etc.) depending on your model.</p>
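In Python terms (a sketch, assuming SciPy is available): `scipy.signal.filtfilt` implements exactly this idea, running the filter forward and backward for zero phase, and its `padtype`/`padlen` arguments control how the signal is extended past the ends so the output is not truncated:

```python
import numpy as np
from scipy import signal

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

# Symmetric (linear-phase) low-pass FIR, 101 taps, 20 Hz cutoff.
h = signal.firwin(101, 20, fs=fs)

# filtfilt runs the filter forward and backward (zero phase) and extends
# the signal at both ends (odd reflection by default), so the output has
# the same length as the input: nothing is truncated at the end.
y = signal.filtfilt(h, [1.0], x, padlen=3 * len(h))
```

Away from the very edges, `y` tracks the 5 Hz component with the 60 Hz component removed and no phase shift.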
618
filtering
Accelerometer drift: What hardware-level signal conditioning operations are being performed under the hood of a MEMS accelerometer chip?
https://dsp.stackexchange.com/questions/72732/accelerometer-drift-what-hardware-level-signal-conditioning-operations-are-bein
<p>I'm in a debate with a peer who says that filtering accelerometer signals at the chip level has nothing to do with mitigating the problem of drift.</p> <p>Often, additional software-level filtering is employed to smooth noise, depending on the application. But my question pertains to any filtering or processing that is happening on the chip, specifically as it pertains to drift (and not so much to smoothness of the output).</p> <p>My assertion is that noise due to thermal, electrical, vibrational, and sampling-jitter effects will invariably contribute to drift, and that this type of noise is filtered on the chip to help reduce the problem. By reading an Invensense product PDF, we know there's &quot;signal conditioning&quot; happening on the chip. But the documentation makes no real mention of what's going on in there. It seems to be proprietary, and reasonably so.</p> <p>I don't need any proprietary information, but if anyone could speak as to whether noise filtering reduces drift, that would be tremendously helpful. It seems obvious to me, but perhaps I am mistaken.</p>
619
filtering
Strange noises when filtering audio signal
https://dsp.stackexchange.com/questions/8272/strange-noises-when-filtering-audio-signal
<p>I am using the Naudio open source library and I am trying to do some simple filtering. The problem is that I hear some "clicks", not too loud. The library offers me the possibility to use at least two buffers, so the computing time doesn't introduce a delay between them. Because most of the time I am dealing with a stereo signal, I have split it into two arrays and I process each one independently. I would like to know if there is something special that I have to do with a filter when I use it on a buffer. I have first used a low-pass biquad filter like the one below:</p> <pre><code>// generate coeff
// sincerely, I don't know what's up with q
// I have taken into consideration some values
// to see if the noise disappears
double w0 = 2 * Math.PI * cutoffFrequency / _sampleRate;
double cosw0 = Math.Cos(w0);
double alpha = Math.Sin(w0) / (2 * q);

_b0 = (1 - cosw0) / 2;
_b1 = 1 - cosw0;
_b2 = (1 - cosw0) / 2;
_a0 = 1 + alpha;
_a1 = -2 * cosw0;
_a2 = 1 - alpha;

for (int i = 2; i &lt; length; i++)
{
    output[i] = (float)((_b0 / _a0) * input[i]
                      + (_b1 / _a0) * input[i - 1]
                      + (_b2 / _a0) * input[i - 2]
                      - (_a1 / _a0) * output[i - 1]
                      - (_a2 / _a0) * output[i - 2]);
}

output[1] = (float)((_b0 / _a0) * input[1]
                  + (_b1 / _a0) * input[0]
                  + (_b2 / _a0) * input[0]
                  - (_a1 / _a0) * output[0]
                  - (_a2 / _a0) * output[0]);

output[0] = (float)((_b0 / _a0) * input[0]
                  + (_b1 / _a0) * 0
                  + (_b2 / _a0) * 0
                  - (_a1 / _a0) * 0
                  - (_a2 / _a0) * 0);
</code></pre> <p>I thought that all my problems came from the first two samples (output 0:1); I've tried all combinations: output[-1]=0, output[-1]=output[0], but nothing worked.
What values should output[i-1] and output[i-2] have when "i" is 0 or 1?</p> <p>I have encountered the same noise (clicks) when I used a low-pass windowed-sinc filter, just like this:</p> <p>//calculate coeff</p> <pre><code>int i;
int m = length;
double PI = Math.PI;
length = 101;
for (i = 0; i &lt; length; i++)
{
    if (i - m / 2 == 0)
    {
        _h[i] = 2 * PI * _cutOffFrecv;
    }
    else // != 0
    {
        _h[i] = Math.Sin(2 * PI * _cutOffFrecv * (i - m / 2)) / (i - m / 2);
    }
    _h[i] = _h[i] * (0.54 - 0.46 * Math.Cos(2 * PI * i / m));
}

// normalize the low-pass filter kernel for unity gain at DC
double s = 0;
for (i = 0; i &lt; m; i++)
{
    s = s + _h[i];
}
for (i = 0; i &lt; m; i++)
{
    _h[i] = _h[i] / s;
}

// convolve the input &amp; kernel
// _kernelSize = 101
// most often length is 6615 or 6614 for each channel
// in these examples I compute only one channel
for (j = 0; j &lt; length; j++)
{
    output[j] = 0;
    for (i = 0; i &lt; _kernelSize; i++)
    {
        if (j &gt;= i)
        {
            output[j] = (float)(output[j] + _h[i] * input[j - i]);
        }
    }
}
</code></pre> <p>The problem surely is not from splitting the signal, or from combining channels, because I have tested this without any filter and everything is OK. I have also tried to simulate some delays created by a processing algorithm (but without changing the signal) and nothing went wrong. I am very sure that the problem comes from filtering. Everything I wrote is used on a buffer.</p>
<p>The context in which you use these functions is not clear, but it seems to me that your problem is "edge effects".</p> <p>When you are evaluating the convolution or the biquad, you need to access samples which are outside the current buffer. Your two implementations evaluate these samples as zero. This is incorrect. For example, for the biquad, everytime you process a block of audio, you need to store the last 2 values of the <code>input[]</code> and <code>output[]</code>; and reuse them in place of <code>input[-1]</code>, <code>input[-2]</code>, <code>output[-1]</code>, <code>output[-2]</code>. Even if the data you process comes in small chunks, you must process it as if it came in one single stream; so the state variables of your filters must not be reset to zero at the boundaries of each buffer.</p>
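A minimal Python illustration of this point (names are mine; any biquad works): filtering in chunks matches one-pass filtering only when the filter state is carried across buffer boundaries, which is what the `zi` argument of `scipy.signal.lfilter` is for:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)

b, a = signal.butter(2, 0.1)   # an arbitrary example biquad

# Reference: one pass over the whole stream.
y_ref = signal.lfilter(b, a, x)

# Chunked processing with the state carried over -> identical output.
zi = np.zeros(max(len(a), len(b)) - 1)
chunks = []
for start in range(0, len(x), 512):
    yk, zi = signal.lfilter(b, a, x[start:start + 512], zi=zi)
    chunks.append(yk)
y_chunked = np.concatenate(chunks)

# Resetting the state on every chunk produces discontinuities ("clicks").
y_reset = np.concatenate([signal.lfilter(b, a, x[s:s + 512])
                          for s in range(0, len(x), 512)])
```

`y_chunked` equals `y_ref`, while `y_reset` jumps at every 512-sample boundary, which is exactly the clicking artifact described in the question.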
620
filtering
Blurring transfer function of image
https://dsp.stackexchange.com/questions/16457/blurring-transfer-function-of-image
<p>I need help solving the following blurring function question. Assume an image $f(x,y)$ is moving in front of a camera so that $𝑥_0(𝑡)$ and $𝑦_0(𝑡)$ are the time-varying components of motion in the x- and y-directions respectively. The camera’s shutter opens at 𝑡 = 0 and closes at 𝑡 = 𝑇. Assume that the shutter opening and closing operations are instantaneous. The blurred image 𝑔(𝑥, 𝑦) captured by the camera is calculated as:</p> <p>$$ g(x,y) = \int_{0}^{T} f( x - x_0(t), y - y_0(t)) dt $$</p> <p>Calculate the blurring transfer function $H(u,v) = G(u,v) / F(u, v)$ in the frequency domain with respect to $𝑥_0(𝑡)$ and $𝑦_0(𝑡)$, where $𝐺(𝑢,𝑣)$ and $𝐹(𝑢,𝑣)$ are the Fourier transforms of $𝑔(𝑥,𝑦)$ and $𝑓(𝑥,𝑦)$ respectively, and then calculate the blurring transfer function for the case where the image $𝑓(𝑥, 𝑦)$ moves only in the x-direction with a constant speed $\frac{a}{T}$.</p> <p><strong>Attempt</strong>: I really have no idea how to set up this blurring function. It is clear that the speed, $v$, is</p> <p>$$ v = x_0(t)i + y_0(t)j $$</p> <p>and also that</p> <p>$$ \mathcal F \{ f(x - x_0, y - y_0) \} = e^{-2\pi i (x_0u + y_0v)} F(u,v) $$</p> <p>$$ G(u,v) = \mathcal F \{ g(x,y) \} = \mathcal F \left\{\int_{0}^{T} f( x - x_0(t), y - y_0(t))\, dt\right\} $$</p> <p>but I don't know how to proceed from there. Any help is appreciated.</p>
<p>Your start looks right, although I think your speed equation is wrong. To continue, the FT has no time dependence, so you can take the integral out.</p> <p>$$G(u, v) = \int_0^T \mathcal F\{f(x-x_0, y-y_0)\} dt $$</p> <p>$$ = F(u,v) \int_0^T e^{-2\pi i(x_0u + y_0v)} dt$$</p> <p>Divide by $F$ to get the transfer function.</p> <p>If I understand the question correctly, $x_0(t)$ and $y_0(t)$ are positions such that $x_0(t) = x_0(0) + v_x t$, and similarly for $y$. You can then put in an appropriate version of the given speed and do the integration.</p>
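For the constant-speed part of the question, with $x_0(t) = at/T$ and $y_0(t) = 0$, the integration can be carried out explicitly (a sketch of the final step):

```latex
H(u,v) = \int_0^T e^{-2\pi i u a t/T}\,dt
       = \frac{T\left(1 - e^{-2\pi i u a}\right)}{2\pi i u a}
       = T\,\frac{\sin(\pi u a)}{\pi u a}\,e^{-i\pi u a}
```

which is the classic linear-motion-blur transfer function: a sinc magnitude with a linear phase term.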
621
filtering
Extract signal from big gaussian noise
https://dsp.stackexchange.com/questions/16511/extract-signal-from-big-gaussian-noise
<p>I have a periodic signal with amplitude 10 and a frequency of about 10 kHz. It is hidden in Gaussian noise with standard deviation 100. Any ideas on how to extract the periodic signal? Thanks in advance.</p>
<p>Rearrange the signal into a matrix containing one period in each row and average down the columns. </p>
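A sketch of that recipe in Python (assuming the period length in samples is known exactly and the record holds an integer number of periods):

```python
import numpy as np

P = 100      # samples per period (assumed known exactly)
K = 400      # number of periods in the record
rng = np.random.default_rng(1)

n = np.arange(P * K)
clean = 10 * np.sin(2 * np.pi * n / P)          # periodic signal, amplitude 10
x = clean + rng.normal(0, 100, size=n.shape)    # buried in sigma = 100 noise

# One period per row, then average down the columns: the noise standard
# deviation drops by sqrt(K) (here 100/sqrt(400) = 5), so the amplitude-10
# sine emerges from noise ten times larger than it.
est = x.reshape(K, P).mean(axis=0)
```

The more periods you average, the better the estimate; the requirement is that the period be known (or estimated) accurately enough that the rows stay aligned.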
622
filtering
How to design a filter with a certain magnitude response
https://dsp.stackexchange.com/questions/16781/how-to-design-a-filter-with-a-certain-magnitude-response
<p>I am trying to design a filter whose magnitude is the same as that of a given signal. The given signal is wind turbine noise, so it has significant low-frequency content. After designing the filter, I want to filter white Gaussian noise so as to create a model of wind turbine noise. The two signals, that is the original and the filtered noise should sound similar.</p> <p>I am using arbitrary magnitude filter design in Matlab for that (FIR, Order: 900, Single rate, 1-band, response specified by amplitudes, Sample rate 44100 Hz, Design method: firls). The problem is that, although I design the filter using the values from the original signal's magnitude, the filter magnitude fails to follow the magnitude at higher frequencies. Could you please help me with that?</p> <p>The idea is that I am using a polynomial curve, to fit the shape of the spectrum of the original sound. Then, I filter the magnitude of the generated noise, by multiplying the extracted polynomial curve with the generated noise magnitude. 
Finally, after calculating the new magnitude and phase, I get back to the time domain.</p> <pre><code>[x,fs] = audioread('cotton_0115-0145.wav'); % original noise sample
x = x(:,1); % extract one channel
x = x.';
N = length(x);

% fft of original signal
Y = fft(x,N)/N;
fy = (0:N-1)'*fs/N;

% half-bandwidth selection for original signal
mag = abs(Y(1:N/2));
fmag = fy(1:N/2);

% polynomial fitting
degreesOfFreedom = 40;
tempMag = 20*log10(mag)'; % desired magnitude in dB
tempFmag = fmag;
figure(1)
plot(tempFmag,tempMag,'b');
title('Spectrum of original signal-Polynomial fitting')
xlabel('Frequency (Hz)');
ylabel('20log10(abs(fft(x)))');
axis([tempFmag(1) tempFmag(end) min(tempMag) 0]);
hold on
p = [];
for i=1:4
    p = polyfit(tempFmag,tempMag,degreesOfFreedom);
    pmag = polyval(p,tempFmag);
    plot(tempFmag,pmag,'r');
    pause
    above = pmag &lt; tempMag;
    abovemag = tempMag(above);
    abovefmag = tempFmag(above);
    tempMag = abovemag;
    tempFmag = abovefmag;
end
hold on
legend('signal magnitude','polynomial');
% loc1 = find(fmag == 0);
loc2 = find(fmag == 22050);
Nmag = length(mag);
M = ((Nmag-1)*max(tempFmag))/(tempFmag(end)-tempFmag(1));
freqFinal = tempFmag(1):max(tempFmag)/M:max(tempFmag);
freqFinal = tempFmag(1):max(tempFmag)/length(mag):max(tempFmag);
magnitudesFinal = polyval(p,freqFinal);
figure(2)
plot(fmag,20*log10(mag)');
hold on;
plot(freqFinal,magnitudesFinal,'g');
title('Spectrum of original signal-Choice of polynomial curve')
xlabel('Frequency (Hz)');
ylabel('abs(fft(x))');
axis([freqFinal(1) freqFinal(end) min(magnitudesFinal) max(magnitudesFinal)]);

%%
% noise generation
Nn = N;
noise = wgn(1,Nn,0);
noise = noise/(max(abs(noise)));
Ynoise = fft(noise,Nn)/Nn;
fn = (0:Nn-1)'*fs/Nn;

% polynomial for whole f range
newmagA = 10.^(magnitudesFinal/20);
newmagB = fliplr(newmagA);
newmagB(end+1) = newmagB(end);
newmag = [newmagA newmagB];

% filtering
Ynoisenew = newmag .* Ynoise;
figure(3)
magnoise = 20*log10(abs(Ynoisenew(1:Nn/2)));
fnoise = fn(1:Nn/2);
plot(fnoise, magnoise);

% magnitude and phase of filtered noise
magn = abs(Ynoisenew);
phn = unwrap(angle(Ynoisenew));

% back to time domain
sig_new = real(ifft(magn.*exp(1i*phn)));
figure(4)
sig_new = sig_new / max(abs(sig_new));
plot(t, sig_new);
Ysignew = fft(sig_new,Nn)/Nn;
fn = (0:Nn-1)'*fs/Nn;
figure(5);
plot(fn(1:Nn/2),20*log10(abs(Ysignew(1:Nn/2))));
</code></pre>
<p>If you are only interested in designing a signal representing the wind turbine noise, you could just generate it from your magnitude. E.g., with your magnitude $A$ over frequency $f$, you can generate a random phase $\phi$ for each frequency and then sum the sinusoids over time $t$. Here is a small code snippet for illustration (I tried to use your variables; however, you need to check orientation etc.):</p> <pre><code>phi = rand(size(fmag))*2*pi;
x = (mag*sin(2*pi*fmag*t+repmat(phi,1,N)));
</code></pre> <p>Hope this helps.</p>
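The same idea in Python, with made-up sizes and a hypothetical falling spectrum standing in for the fitted wind-turbine curve (everything here except the random-phase trick itself is an assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 44100
N = 4096
t = np.arange(N) / fs

# Hypothetical target: magnitude falling off with frequency, standing in
# for the polynomial curve fitted to the wind-turbine spectrum.
fmag = np.linspace(20.0, 2000.0, 64)
mag = 1.0 / np.sqrt(fmag)

# One random phase per frequency, then sum the sinusoids.
phi = rng.uniform(0, 2 * np.pi, size=fmag.shape)
x = (mag[:, None] * np.sin(2 * np.pi * fmag[:, None] * t + phi[:, None])).sum(axis=0)
```

Regenerating with a different seed gives a new realization with the same magnitude spectrum, which is the point: the spectrum fixes the timbre, the random phase makes it noise-like.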
623
filtering
difference between mean filter and order statistic filter
https://dsp.stackexchange.com/questions/24064/difference-between-mean-filter-and-order-statistic-filter
<p>I am working on my paper comparing a mean filter with an order-statistics filter. The mean filter is the contraharmonic mean filter and the order-statistics filter is the alpha-trimmed mean filter. So, I compare the contraharmonic mean filter and the alpha-trimmed mean filter.</p> <p>I've read a slide presentation explaining that the mean filter is good for removing some kinds of Gaussian noise (uniform noise) and the order-statistics filter is good for removing some kinds of exponential noise and salt &amp; pepper noise (Rayleigh noise).</p> <p>Can anybody here show me the book source of that statement? Is it in Gonzalez's book? Thank you so much, your answers will help me a lot, and sorry for my poor English :(</p>
<h3>Quick Answer</h3> <p>The main difference between these filters is how they perform their operations.</p> <h2>Mean Filter</h2> <h3>Brief Description</h3> <p>Mean filtering is a spatial filter, and it's a simple, intuitive and easy-to-implement method of smoothing images, i.e. reducing the amount of intensity variation between one pixel and the next.</p> <h3>How It Works</h3> <p>The idea of mean filtering is simply to replace each pixel value in an image with the mean ('average') value of its neighbors, including itself. Mean filtering is usually thought of as a convolution filter and it's based around a kernel, which represents the shape and size of the neighborhood to be sampled when calculating the mean. Often a 3×3 square kernel is used, like this: $$ K = \left[ {\begin{array}{ccc} \frac{1}{9} &amp; \frac{1}{9} &amp; \frac{1}{9}\\ \frac{1}{9} &amp; \frac{1}{9} &amp; \frac{1}{9}\\ \frac{1}{9} &amp; \frac{1}{9} &amp; \frac{1}{9}\\ \end{array} } \right] $$</p> <p>Note: <em>Check this link for more details on <a href="http://homepages.inf.ed.ac.uk/rbf/HIPR2/mean.htm" rel="nofollow">mean filtering</a>.</em></p> <h2>Order Statistics Filter</h2> <h3>Brief Description</h3> <p>This type of filter is based on estimators and on the notion of <em>"order"</em>; the sense of order is about quantities like $\operatorname{min}$ <em>(first order statistic)</em>, $\operatorname{max}$ <em>(largest order statistic)</em> and so on.</p> <h3>How It Works</h3> <p>Given $N$ observations $ X_{1}, X_{2}, X_{3}, \dots X_{N} $ of a random variable $X$, the order statistics are obtained by sorting the $\{X_{i}\}$ in ascending order. This produces $\{X_{(i)}\}$ satisfying: </p> <p>$$X_{(1)} \leq X_{(2)} \leq X_{(3)} \dots \leq X_{(N)}$$</p> <p>where the $\{X_{(i)}\}$ are the order statistics of the $N$ observations.
So, an order statistic filter (OSF) is an estimator $ F(X_{1}, X_{2}, X_{3}, \dots X_{N})$.</p> <p>Some common filters which fit the order statistic filter framework are:</p> <ul> <li><p>The <strong>linear average</strong>, which has coefficients: $$ \alpha_{i} = \frac{1}{N} $$</p></li> <li><p>The <strong>median filter</strong>, which has coefficients: $$ \alpha_{i} = \left\{ \begin{array}{ll} 1 &amp; i = (N+1)/2\\ 0 &amp; \text{otherwise} \end{array} \right.$$</p></li> <li><p>The <strong>trimmed mean filter</strong>, which has coefficients: $$ \alpha_{i} = \left\{ \begin{array}{ll} 1/M &amp; (N - M + 1)/2 \leq i \leq (N + M + 1)/2 \\ 0 &amp; \text{otherwise} \end{array} \right.$$ </p></li> </ul> <p>Please check the <a href="http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/VELDHUIZEN/node13.html" rel="nofollow">Todd Veldhuizen</a> web page in the references for more details.</p>
624
filtering
If a signal is added to another in the freq. domain, how to filter it out?
https://dsp.stackexchange.com/questions/24870/if-a-signal-is-added-to-another-in-the-freq-domain-how-to-filter-it-out
<p>I have two signals, $x$ and $y$. I know that $F(x)=F(s)+F(n)$ and $F(y)=F(n)$, where $s$ is the 'clean' signal, $n$ is the added noise and $F$ denotes the Fourier transform.</p> <p>To obtain the clean signal, I am trying the following: $s=F^{-1}[F(x)-F(y)]$. Is there any better way that I should try?</p>
625
filtering
In fourier space, how to apply transfer function with n frequencies to input data with m&gt;&gt;n frequencies
https://dsp.stackexchange.com/questions/29783/in-fourier-space-how-to-apply-transfer-function-with-n-frequencies-to-input-dat
<p>I have a transfer function in Fourier space with $N=2028$ frequencies $\left(0, \frac{1}{N\cdot dx}, \frac{2}{N\cdot dx}, \dots\right)$,</p> <p>where $dx = 0.1\,\mathrm{m}$. </p> <p>I need to apply this transfer function to a signal with 20000 samples (also $dx=0.1\,\mathrm{m}$). When I transform this signal to Fourier space I get 20000 frequencies. So, the sizes don't match. What do I need to do to be able to apply the transfer function (i.e. to multiply the Fourier transforms)? I guess I could just remove the first $20000-2028=17952$ frequencies from the Fourier transform of the input signal, since they are not present in the transfer function.</p> <p>But is that correct?</p>
<p>That is not correct. The frequency range is still the same with 20000 bins; the bins are just closer together. What you need to do is:</p> <ul> <li>Apply a window function with length 2028 (such as a Hamming window) with an appropriate overlap between windows.</li> <li>Fourier-transform each window.</li> <li>Multiply each window by the transfer function.</li> <li>Inverse-Fourier-transform each window.</li> <li>Overlap-add the windows again to obtain the filtered signal.</li> </ul>
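A hedged shortcut in Python: since multiplying block spectra by the 2028-point transfer function is equivalent to convolving with its 2028-tap impulse response, `scipy.signal.oaconvolve` can do the overlap-add bookkeeping for you (the transfer function below is a stand-in, since the real one isn't given):

```python
import numpy as np
from scipy import signal

N = 2028       # length of the given transfer function
M = 20000      # length of the input signal
rng = np.random.default_rng(3)

x = rng.standard_normal(M)

# Stand-in for the given transfer function: spectrum of an N-tap low-pass.
H = np.fft.rfft(signal.firwin(N, 0.2))

# Recover the N-tap impulse response, then filter the long signal with
# overlap-add convolution; this is the block-spectrum multiplication
# described above, with the windowing/overlap handled by SciPy.
h = np.fft.irfft(H, n=N)
y = signal.oaconvolve(x, h, mode="full")
```

The result is identical (to floating-point precision) to a single direct or FFT convolution of the full-length signal with `h`.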
626
filtering
Practical examples of ARMA model
https://dsp.stackexchange.com/questions/35159/practical-examples-of-arma-model
<p>I am studying the Kalman filter and its basic implementation, and I was asked to use the filter to estimate a signal observed in noise $$y(n) = x(n) + v(n)$$ where $v(n) \sim \mathcal{N}(0, \sigma^2)$ and $x(n)$ is modeled as an ARMA(2,2) process $$x(n) = b_0 u(n) + b_1 u(n-1) - a_1 x(n-1) - a_2 x(n-2).$$ </p> <p>The exercise is conceptually interesting, but I was wondering if there is a direct application for this case. I mean, which processes, in real life, can be modeled as an ARMA(2,2), and in which context is the Kalman filter used, not as a predictor, for a signal like $s(n)$?</p> <p>Thanks!</p>
<p>ARMA models are useful when you need to model a signal-plus-noise situation where the signal is an AR process and the noise models sensor noise. The overall model is an ARMA model.</p> <p>See H. L. Van Trees, Detection, Estimation, and Modulation Theory, vol. 4, Array Processing. He gives an example of a spatial AR process sensed by noisy sensors; the overall model is ARMA.</p> <p>ARMA models are also useful in situations where you have strong nulls in your spectrum, such as multipath, and a noise-like signal.</p>
627
filtering
Understanding the lowpass filtering of a digital signal
https://dsp.stackexchange.com/questions/36661/understanding-the-lowpass-filtering-of-a-digital-signal
<p>I do not have any background in Electrical Engineering and I have recently started a project which involves signals captured from sensors.</p> <p>Lowpass filtering: The way I understand it is: the values below the <code>CUTOFF</code> are allowed to pass through; those above the <code>CUTOFF</code> are simply filtered out.</p> <p>So far so good. Now, how does this work in the real world?</p> <p>The data I have is captured 100 times per second at equidistant intervals, hence my sampling frequency is 100 Hz. This is also understandable. </p> <p>But now, my <code>CUTOFF</code> is 20 Hz. What does it mean? Because all I have is some digits. None of them is below 20. When I read about filtering it makes me think of the <code>CUTOFF</code> as some value from the range of the signal itself. I have given multiple explanations to myself. </p> <ol> <li><p>Somehow, from the signal, find out the frequency of each data point and if that frequency is below 20 Hz, don't do anything. Otherwise put 20 Hz in the output. But I have no idea how to find out the frequency of a single data point. After all, it's just a number, right?</p></li> <li><p>It has to work on multiple values at a time. That is, from one data point to the other, if there is some <em>above average peak</em>, just smooth it out. And this 20 Hz is essentially a limit on how sharp the peak can be.</p></li> </ol> <p>Please help me understand in the simplest terms. I asked the same question to my guide and he explained it using Electrical Engineering terminology and some more alien stuff, which was leaps and bounds beyond my brain. </p>
<p>Any signal can be constructed from the sum of sinusoids of different frequencies (to within an arbitrarily small error). This reconstruction is unique and can be calculated with the Fourier transform.</p> <p>When we talk about the various frequencies that comprise a signal, we are talking about these sinusoids of various frequencies. In fact, in signal processing, we are used to thinking about the <em>same</em> signal in the time domain -- as a function that gives a magnitude for every instant in time, or in the frequency domain -- as a function that gives a magnitude and phase for every frequency.</p> <p>When you apply a low-pass filter, you are modifying the frequency domain view of the signal so that all frequencies higher than the cutoff have their magnitudes set to pretty much zero.</p> <p>Another way of thinking about it is, you take all the sinusoids that make up the original signal, and throw away the ones with frequency higher than your cutoff.</p> <p>The secret to how this actually works is called the "convolution theorem": <a href="https://en.wikipedia.org/wiki/Convolution_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Convolution_theorem</a></p> <p>The convolution theorem shows that convolving two signals together is the same as <em>multiplying</em> their frequency-domain representations together. A "low pass" filter is just a signal that has a constant magnitude for frequencies below the cutoff, and near zero magnitude for frequencies above the cutoff.</p> <p>When the filter is convolved with the input signal, the input signal's high-frequency components are multiplied by zero, effectively removing them.</p>
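This frequency-domain picture can be demonstrated directly in a few lines of Python: transform, zero the bins above the cutoff, transform back (an illustrative brick-wall version, not how a causal real-time filter is implemented):

```python
import numpy as np

fs = 100                         # samples per second, as in the question
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 30 * t)

X = np.fft.rfft(x)               # frequency-domain view of the signal
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

X[freqs > 20] = 0                # 20 Hz cutoff: discard the 30 Hz sinusoid
y = np.fft.irfft(X, n=len(x))    # back to the time domain
```

The cutoff is a frequency, not a sample value: no single number in the recording is "above 20 Hz"; only the sinusoidal components that make up the whole recording are.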
628
filtering
How do I eliminate line noise from USB microphone
https://dsp.stackexchange.com/questions/36725/how-do-i-eliminate-line-noise-from-usb-microphone
<p>I'm trying to record voice from a microphone into my laptop. Unfortunately, the laptop does not have a microphone jack, so I have to use the USB input. This doesn't present a problem when I record voice from my gaming headset, but when I record from the condenser microphone over USB I get a very pronounced line hum. This occurs even when the laptop is unplugged from power and on its battery. </p> <p>The hum consists mainly of 55 Hz noise, but with some 110 and 512 Hz components. I can eliminate this using a notch filter in "post production", as it were, but I'd rather not have to do this. This noise is not the fault of the microphone, by the way (at least not directly), since when I run the microphone signal into my digital voice recorder (using a 3.5mm plug) there is no background hum at all.</p> <p>Is there some way to process or filter the output from the microphone so that this noise never makes it to the laptop?</p>
<p>I would suggest adding a device like the Xenyx 302USB (~$50) to your recording path.</p> <p><strong>If it's just voice recording in question</strong> (and a Windows-based computer), then you could install <a href="https://sourceforge.net/projects/equalizerapo/" rel="nofollow noreferrer">EqualizerAPO</a> and prepare suitable filters to fix the issue. A steep HPF (something like 72 dB/oct or even steeper) to cut frequencies below 75-80 Hz, or a bandpass filter (per <a href="https://en.wikipedia.org/wiki/Voice_frequency" rel="nofollow noreferrer">this article</a>), could work; but by setting a few notch-type filters at the hum frequencies, your hum issue can certainly be fixed. EqualizerAPO allows you to set the filters for the recording device as well as for the playback device.</p>
629
filtering
Digital Lowpass butterworth filter with cut off 500Hz and sampling rate 1.25MSPS
https://dsp.stackexchange.com/questions/38120/digital-lowpass-butterworth-filter-with-cut-off-500hz-and-sampling-rate-1-25msps
<p>I am trying to simulate a distributed sensing system and I need to keep only the frequencies below 500 Hz (low-pass filter) from a signal acquired at a sample rate of 1,250,000 samples/sec, using the program below:</p> <pre><code>% Time specifications:
Fs = 1250000;            % samples per second
dt = 1/Fs;               % seconds per sample
StopTime = 0.25;         % seconds
t = (dt:dt:StopTime-dt); % seconds

% Input sine wave:
Fc = 300;                % hertz
dataIn = cos(2*pi*Fc*t);

% Plot the signal versus time:
figure;
plot(t,dataIn);
xlabel('time (in seconds)');
title('Input signal');
zoom xon;

wc = 500;                % cut-off frequency
Wn = wc/(Fs/2);          % normalized frequency
[b,a] = butter(6,Wn,'low');
figure;
freqz(b,a)

% Plot filtered output
dataOut = filter(b,a,dataIn);
figure;
plot(t,dataOut)
title('dataout')
zoom xon;
</code></pre> <hr> <p>However, I am unable to filter the signal as desired, because the (normalized) cutoff frequency depends on the sampling rate. Kindly suggest what to do, as I am new to MATLAB.</p> <p>Thanks in advance.</p>
<p><strong>Solution</strong>: Use a lower-order Butterworth filter or a lower sampling frequency.</p> <p><strong>Reason</strong>: As discussed <a href="https://www.kvraudio.com/forum/viewtopic.php?t=311413" rel="nofollow noreferrer">here</a>, a high-order Butterworth filter with a low (relative) cutoff frequency may be numerically unstable due to quantisation noise.</p> <p><strong>Explanation with figures</strong>:</p> <p>I changed your frequency plot to include the region of interest:</p> <pre><code>freqz(b,a, logspace(1, 5, 1000), Fs)
ax = findall(gcf, 'Type', 'axes');
set(ax, 'XScale', 'log');
</code></pre> <p>The frequency response is: <img src="https://i.sstatic.net/mAVye.png" alt="Frequency response of 6th order Butterworth filter"></p> <p>which seems very noisy at low frequencies, indicating an unstable filter.</p> <p>This results in an extremely high output (blue = input, red = output): <a href="https://i.sstatic.net/uIXi7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uIXi7.png" alt="enter image description here"></a></p> <p>If I change the Butterworth order to 4, I get: <a href="https://i.sstatic.net/uIXi7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RZfEM.png" alt="Frequency response of 4th order Butterworth filter"></a> <a href="https://i.sstatic.net/uIXi7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ah9rS.png" alt="Input (blue) and output (red) of the 4th order Butterworth filter"></a></p> <p>The output (red) is a shifted version of the input (blue), as can be expected from the frequency response.</p> <p>Keeping a 6th-order Butterworth filter but lowering the sampling frequency also solves your problem.</p>
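A common fix in Python/SciPy, sketched here, is to design the filter as second-order sections (`output='sos'`), which remain numerically well-behaved even for a 6th-order Butterworth at this extreme cutoff-to-sample-rate ratio:

```python
import numpy as np
from scipy import signal

fs = 1_250_000                    # 1.25 MSPS, as in the question
fc = 500                          # cutoff, Hz
t = np.arange(0, 0.02, 1 / fs)
x = np.cos(2 * np.pi * 300 * t)   # 300 Hz tone, inside the passband

# Second-order sections avoid the coefficient-quantisation blow-up that a
# single (b, a) transfer function can suffer when fc/fs is this small.
sos = signal.butter(6, fc, btype="low", fs=fs, output="sos")
y = signal.sosfilt(sos, x)
```

After the filter transient dies out, the 300 Hz passband tone comes through at roughly unit amplitude, with no numerical blow-up.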
630
filtering
Do integration/differentiation processes work as simple filters?
https://dsp.stackexchange.com/questions/48584/do-integration-differentiation-processes-work-as-simple-filters
<p>How are these processes different from simple first-order IIR filtering and from FIR filters in terms of amplitude and phase characteristics?</p>
<p>Yes, integration and differentiation can be implemented as linear filters. You can start from the <code>Laplace</code> properties that say:</p> <p>$ \int_{0}^{t} {x(t)dt} \longrightarrow \frac{X(s)}{s} \\ \frac{d}{dt}x(t) \longrightarrow sX(s) $</p> <p>So you can find the <code>transfer functions</code> of integration and differentiation:</p> <p>$ H_{INT}(s) = \frac{1}{s} \\ H_{DIFF}(s)=s $</p> <p>You can convert these transfer functions into digital <code>IIR</code> filters, for example by the <code>bilinear transform</code> or other digitization techniques. However, you should notice that $ H_{DIFF}(s) $ is not causal, so you must add a <code>pole</code> to the transfer function, far from the <code>useful</code> signal frequencies, and it becomes:</p> <p>$ H_{DIFF_{causal}}(s)= \frac{s}{\alpha s + 1}$ where $\alpha$, the <code>time constant of the derivative filter</code>, is a small <code>real</code> number $&gt;0$.</p> <p>Using the <code>bilinear transform</code>, $H_{INT}(z)$ becomes a <code>trapezoidal integrator</code>; using the <code>Euler transform</code>, $H_{INT}(z)$ becomes a <code>rectangular integrator</code>. You can see this difference and the digitization in the <a href="https://it.mathworks.com/help/control/ref/pid.html" rel="nofollow noreferrer">MATLAB PID page</a>.</p> <p>I wrote a <a href="https://github.com/andrea993/SimplePID" rel="nofollow noreferrer">simple <code>PID</code> program</a> in <code>C</code> that computes these operations using the <code>Euler transform</code>; a <code>PID</code> in a closed loop doesn't need to be accurate, so <code>Euler</code> works well.</p> <p>In the literature there are many ways to implement <code>derivative</code> filters (<code>IIR</code> and <code>FIR</code>) that work around the causality problem in an elegant way; however, in many situations you can simply digitize the analog transfer functions to make IIRs and FIRs.</p>
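As a sketch of the digitization step (assuming SciPy; `alpha` is an arbitrary choice here): `scipy.signal.bilinear` discretizes $1/s$ into the trapezoidal integrator and $s/(\alpha s+1)$ into a causal differentiator:

```python
import numpy as np
from scipy import signal

fs = 1000.0     # sample rate, Hz
alpha = 1e-3    # time constant of the derivative filter (arbitrary here)

# Integrator 1/s: the bilinear transform yields the trapezoidal rule,
# H(z) = (1/(2*fs)) * (1 + z^-1) / (1 - z^-1).
b_int, a_int = signal.bilinear([1.0], [1.0, 0.0], fs=fs)

# Causal differentiator s / (alpha*s + 1).
b_diff, a_diff = signal.bilinear([1.0, 0.0], [alpha, 1.0], fs=fs)
```

The resulting `(b, a)` pairs can be used directly with `scipy.signal.lfilter`; the integrator's pole at $z=1$ is the discrete counterpart of the analog pole at $s=0$.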
631
filtering
What do poles do for a filter?
https://dsp.stackexchange.com/questions/49830/what-do-poles-do-for-a-filter
<p>What are the disadvantages of having too many poles?</p> <p>Thanks.</p>
<p>Poles in a filter come from recursions. Consider a discrete filter</p> <p>$$y(k) = x(k) + \alpha y(k-1)$$</p> <p>where $y(k)$ is the output of the system, $x(k)$ the input, and $y(k-1)$ the output of the system one sample earlier.</p> <p>The recursive part $\alpha y(k-1)$ feeds the previous output back into the system, which can cause problems depending on the $\alpha$ you choose. If $|\alpha| \geq 1$, the impulse response no longer decays and bounded inputs can produce unbounded outputs, which makes the system <em>unstable</em>. </p> <p>The more complex the system gets, the more things you have to factor into its design, as pointed out in Stanley's answer. Problems related to poles do not stem from their count, however.</p> <p>Also, poles generally increase the magnitude of the system's transfer function, causing certain frequency components of signals to be amplified by the system. </p>
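The stability argument is easy to check numerically. A short sketch (the values of $\alpha$ are chosen purely for illustration) implements the recursion above and compares the impulse response for a pole inside versus outside the unit circle:

```python
import numpy as np

def one_pole(x, alpha):
    """y[k] = x[k] + alpha * y[k-1]: a single-pole recursive filter."""
    y = np.empty(len(x))
    prev = 0.0
    for k, xk in enumerate(x):
        prev = xk + alpha * prev  # feed the previous output back in
        y[k] = prev
    return y

impulse = np.zeros(50)
impulse[0] = 1.0

stable = one_pole(impulse, 0.9)    # impulse response decays as 0.9**k
unstable = one_pole(impulse, 1.1)  # impulse response grows as 1.1**k
```

The decaying response corresponds to a pole at $z = 0.9$ (inside the unit circle); with the pole at $z = 1.1$ the output of a single unit impulse already exceeds 100 after 50 samples.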
632
filtering
How to apply discrete filters to a signal
https://dsp.stackexchange.com/questions/59843/how-to-apply-discrete-filters-to-a-signal
<p>Let's say I have a signal like x[k] = [-20 -50 -30 50 30 -60 60 -60 60 10 5 10 5 5], and I want to apply a lowpass and a highpass filter to this signal (separately). For example, the impulse responses of the filters are as follows:</p> <p>Lowpass: h[k] = [-1 2 6 2 -1], k = -1,0,...,3</p> <p>Highpass: g[k] = [-1 2 -1]; k = -1, 0, 1</p> <p>How can I calculate the first four signal values after applying these filters?</p>
<p>Another simple way to get your result for this kind of problem (where <span class="math-container">$h[n]$</span> is very short) is the following method:</p> <p>Let the output of the discrete convolution sum be <span class="math-container">$y[n]$</span> : <span class="math-container">$$ y[n] = x[n] \star h[n] $$</span></p> <p>Then by expanding <span class="math-container">$h[n]$</span> into impulses, the convolution distributes over addition ( using the <em>highpass filter</em> {-1,2,-1}; k = -1,0,1; to demonstrate ) :</p> <p><span class="math-container">$$ y[n] = x[n] \star \{ -\delta[n+1] + 2 \delta[n] - \delta[n-1] \} $$</span></p> <p><span class="math-container">$$ y[n] = - x[n+1] + 2 x[n] -x[n-1] $$</span></p> <p>By reasoning about the nonzero ranges of the convolution, it can be seen that <span class="math-container">$y[n]$</span> starts at the index <span class="math-container">$n=-1$</span>, hence the first sample of the output is:</p> <p><span class="math-container">$$ y[-1] = -x[0] + 2 x[-1] - x[-2] = -x[0] = 20 $$</span></p> <p>Note that <span class="math-container">$x[n]=0$</span> for <span class="math-container">$n&lt;0$</span>. The second sample of <span class="math-container">$y[n]$</span> is <span class="math-container">$y[0]$</span>, which is:</p> <p><span class="math-container">$$ y[0] = -x[1] + 2 x[0] - x[-1] = 50 - 40 = 10 $$</span></p> <p>and so on... You can apply the same procedure for your other filter and the remaining output samples.</p>
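The hand computation above can be checked with `np.convolve`, which evaluates exactly this convolution sum. A minimal sketch (reproducing the first two highpass outputs, 20 and 10, derived in the answer):

```python
import numpy as np

x = np.array([-20, -50, -30, 50, 30, -60, 60, -60, 60, 10, 5, 10, 5, 5])
h = np.array([-1, 2, 6, 2, -1])   # lowpass, k = -1, 0, ..., 3
g = np.array([-1, 2, -1])         # highpass, k = -1, 0, 1

# 'full' convolution; the first output sample corresponds to n = -1,
# the sum of the two starting indices: 0 + (-1)
y_lp = np.convolve(x, h)
y_hp = np.convolve(x, g)

print(y_hp[:4])   # y[-1], y[0], y[1], y[2] of the highpass output
```

Because both impulse responses here are symmetric, flipping the kernel inside `np.convolve` changes nothing, and the printed values match the difference equation $y[n] = -x[n+1] + 2x[n] - x[n-1]$ term by term.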
633