category | title | question_link | question_body | answer_html | __index_level_0__
|---|---|---|---|---|---|
signal denoising
|
Removing periodic spike noise from ECG signal
|
https://dsp.stackexchange.com/questions/23681/removing-periodic-spike-noise-from-ecg-signal
|
<p>The signal shown in the following figure was collected from an ECG sensor. The spike noise observed with a periodicity of 30 seconds was traced to the periodic blip of the LED as it draws current.</p>
<p><img src="https://i.sstatic.net/qsV5D.png" alt="enter image description here"> </p>
<p>The following is one instance of the spike noise after zooming in,</p>
<p><img src="https://i.sstatic.net/Pt8uc.png" alt="enter image description here"></p>
<p>Frequency-domain filters are not effective in this case, because the frequency-domain characteristics of the LED blip artifact are similar to those of the heartbeat (QRS) complex. </p>
<p>Is there any effective way to remove this artifact? I was thinking of using the blip as a mother wavelet and performing a multi-scale correlation with the ECG signal to detect the spikes. Is this the procedure for wavelet denoising? Is there any other effective way of denoising in this case?</p>
|
<p>If the spike lasts only a few samples (1 or 2), you can use a median filter (<a href="http://en.wikipedia.org/wiki/Median_filter" rel="nofollow">wiki</a>) of size 3 or 5. But be careful: if the useful signal has features of the same duration, you will lose information.</p>
<p>P.S.<br>
As Fat32 wrote above, I also suggest "looking for some electrical methods of avoiding those spikes".</p>
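A minimal sketch of the median-filter suggestion above. The sample rate, spike amplitude, and ECG stand-in are all hypothetical, purely for illustration; `scipy.signal.medfilt` does the work:

```python
import numpy as np
from scipy.signal import medfilt

fs = 250                                 # assumed sample rate, Hz
t = np.arange(0, 90, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)        # stand-in for the ECG waveform
noisy = ecg.copy()
noisy[::30 * fs] += 5.0                  # one-sample blips every 30 s

# Kernel size 3 removes isolated 1-sample spikes; use 5 for 2-sample ones
clean = medfilt(noisy, kernel_size=3)
```

As the answer warns, a kernel wide enough to swallow a spike will also flatten any genuine feature of the same width, so keep the kernel as small as the spike duration allows.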
| 334
|
signal denoising
|
Denoise EEG signal by using Daubechies function
|
https://dsp.stackexchange.com/questions/13636/denoise-eeg-signal-by-using-daubechies-function
|
<p>I have an EEG signal that contains eye-blink artifacts. I have read some references and know that it is possible to detect eye blinks and remove them using wavelet transforms, but I don't know how to do it. </p>
<p>How do I detect eye blinks? After transforming the EEG signal into wavelet coefficients, what should I do, and which level of a Daubechies wavelet should be used?</p>
|
<p>You might simply use soft/hard thresholding. This operation is quite standard and is called wavelet denoising.
Here are some resources:</p>
<p><a href="http://www.stthomas.edu/mathematics/pdfs/MSAD/Denoising%20via%20Wavele.pdf" rel="nofollow">http://www.stthomas.edu/mathematics/pdfs/MSAD/Denoising%20via%20Wavele.pdf</a>
<a href="http://scholar.lib.vt.edu/theses/available/etd-12062002-152858/unrestricted/Chapter4.pdf" rel="nofollow">http://scholar.lib.vt.edu/theses/available/etd-12062002-152858/unrestricted/Chapter4.pdf</a>
<a href="http://www.ee.iitb.ac.in/~icvgip/PAPERS/202.pdf" rel="nofollow">http://www.ee.iitb.ac.in/~icvgip/PAPERS/202.pdf</a></p>
<p>An overview of noise removal for EEG:
<a href="http://www.ijarcce.com/upload/february/3_A%20survey%20on%20different.pdf" rel="nofollow">http://www.ijarcce.com/upload/february/3_A%20survey%20on%20different.pdf</a></p>
<p>Some source code on image denoising via VisuShrink:
<a href="http://www.mathworks.com/matlabcentral/fileexchange/43996-image-denoising-using-visushrink" rel="nofollow">http://www.mathworks.com/matlabcentral/fileexchange/43996-image-denoising-using-visushrink</a></p>
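To make the thresholding concrete, here is a minimal hand-rolled sketch: a multilevel Haar DWT with soft thresholding at the VisuShrink universal threshold. All parameters are illustrative, and real code would normally use a wavelet library and a higher-order wavelet (e.g. a Daubechies one, as the question asks about):

```python
import numpy as np

def soft(x, t):
    """Soft threshold: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(x, levels=4):
    """Multilevel Haar DWT, soft-threshold the detail coefficients with
    the universal (VisuShrink) threshold, then reconstruct.
    Assumes len(x) is divisible by 2**levels."""
    approx = np.asarray(x, dtype=float)
    n = len(approx)
    details = []
    for _ in range(levels):                       # analysis
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        details.append(d)
        approx = a
    sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(n))             # universal threshold
    details = [soft(d, thr) for d in details]
    for d in reversed(details):                   # synthesis
        a = np.empty(2 * len(approx))
        a[0::2] = (approx + d) / np.sqrt(2)
        a[1::2] = (approx - d) / np.sqrt(2)
        approx = a
    return approx
```

The same recipe (decompose, threshold details, reconstruct) carries over directly to the library routines linked above.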
| 335
|
signal denoising
|
Tracking period of quasi-periodic transient type signal in strong non-stationary interference
|
https://dsp.stackexchange.com/questions/90256/tracking-period-of-quasi-periodic-transient-type-signal-in-strong-non-stationary
|
<p>Consider a desired (pink) signal as well as its observation in heavy, non-stationary interference (green, this is the desired signal plus interference). As seen in the plot, the interference can also display quasi-periodic behavior.
Initial filtering removed frequency bands where the interference dominated. The post-filtering desired and interfering signals share the same frequency band. Given the observations, I wish to track in real-time the period associated with the desired signal.</p>
<p>Given the frequency overlap, I don't see how additional linear filtering can help further mitigate the interference. Also, most of the wavelet-denoising approaches I have found in the literature don't deal with such a poor signal-to-interference ratio. Other methods in the literature rely on Empirical Mode Decomposition for denoising, which seems computationally expensive and perhaps not ideally suited to real-time use.</p>
<p>Was wondering if anyone here has some ideas for how to handle this admittedly difficult interference mitigation problem. Thanks!</p>
<p><a href="https://i.sstatic.net/naq30.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/naq30.png" alt="enter image description here" /></a></p>
| 336
|
|
signal denoising
|
Calculating Shannon-like entropy function of a 1D signal with random noise
|
https://dsp.stackexchange.com/questions/94589/calculating-shannon-like-entropy-function-of-a-1d-signal-with-random-noise
|
<p>I have been searching for a measure of Shannon's entropy <span class="math-container">$\ H $</span> or other entropy-like formulae that vary smoothly with noise for real 1D signals. MATLAB has built in functions for image entropy. The ultimate goal is to use that function for denoising with chi-square (<span class="math-container">$\chi^2 $</span>) as a constraint. While the objective function and minimization are secondary concerns, the primary challenge lies in dealing with negative values introduced by noise.</p>
<p>The key question that no denoising paper explicitly mentions is: <em>How do researchers in this area deal with negative values due to noise?</em> The reason I ask is that many authors seem to use the raw values of noisy signals in their pseudo-"entropy" calculations for denoising, specifically employing the following formula (see <a href="https://mathoverflow.net/questions/475196/numerical-implemenation-of-denoising-data-using-maximum-entropy">this MathOverflow question</a> for details):</p>
<p><span class="math-container">$$ H = - \sum_i x_i \log(x_i)$$</span></p>
<p>where <span class="math-container">$\ x_i $</span> represents the signal values. However, the actual Shannon entropy formula for a probability distribution <span class="math-container">$P$</span> is:</p>
<p><span class="math-container">$$ H(P) = - \sum_i p_i \log(p_i)$$</span></p>
<p>where <span class="math-container">$p_i$</span> are the probabilities that must satisfy <span class="math-container">$\ p_i \geq 0$</span> and <span class="math-container">$\sum_i p_i = 1$</span>.</p>
<p>The true Shannon entropy formula deals with probabilities, and these are typically calculated using the histogram method. However, it seems that the histogram method can show erratic behavior as a function of noise for discrete data, which raises further concerns about the reliability of entropy calculations under noisy conditions.</p>
<p>Given that noise can introduce negative values in <span class="math-container">$x_i$</span>, how are these negative values handled in entropy-based denoising approaches, specifically when people use the "raw" values? How do researchers ensure that the entropy calculation remains valid and meaningful when the signal values can be negative due to noise? Are there any standard preprocessing steps, transformations, or specific adaptations of the entropy formula used to address the issue of random negative values? How does the chi-square constraint come into play in such scenarios? Are there alternative methods to the histogram approach that mitigate erratic behavior?
<a href="https://i.sstatic.net/1Ks87pf3.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Ks87pf3.jpg" alt="Image" /></a></p>
|
<p>You're a bit missing the point here: when you use the formula <span class="math-container">$ H = - \sum_i x_i \log(x_i)$</span> instead of <span class="math-container">$ H(P) = - \sum_i p_i \log(p_i)$</span> for anything where <span class="math-container">$P(x_i) \ne x_i$</span>, then that's where your entropy estimate becomes wrong.</p>
<p>The noise (assuming it's stochastically independent from the signal) actually adds <em>precisely</em> its own entropy to the "clean" signal's entropy. That's exactly the motivation for Shannon's noise formula – independent sources contribute entropy linearly.</p>
<p>So, a correct estimator would correctly see the entropy being increased by noise, exactly by the amount of entropy in the noise.</p>
<p>Your proposed one doesn't do that, aside from <em>very</em> strange combinations of signal and noise probability density functions, so it doesn't seem fit for the purpose.</p>
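As a side note on estimation: a plug-in (histogram) estimator works from bin probabilities, so negative sample values pose no problem. A minimal sketch (the bin count is an arbitrary choice):

```python
import numpy as np

def hist_entropy(x, bins=64):
    """Plug-in (histogram) estimate of Shannon entropy, in nats.
    Uses bin probabilities p_i >= 0 with sum(p_i) = 1, so negative
    sample values are unproblematic."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(p))
```

A constant signal gives zero entropy, and a near-uniform one approaches `log(bins)`; how smoothly the estimate varies with noise still depends on the binning, as the question notes.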
| 337
|
signal denoising
|
How to perform filtering through optimization?
|
https://dsp.stackexchange.com/questions/81725/how-to-perform-filtering-through-optimization
|
<p>I have an objective function for estimating a signal, say <span class="math-container">$\parallel X_{cap}-X\parallel_2^2 +$</span> a denoising term. Can someone suggest techniques for incorporating a filter such as a Kalman filter into this objective function in place of the denoising term? The difficulty is that the output of the filter is a vector, while the objective function contains only scalar terms (since we take norms).</p>
| 338
|
|
signal denoising
|
Poisson noise and curve fitting - denoise first?
|
https://dsp.stackexchange.com/questions/15967/poisson-noise-and-curve-fitting-denoise-first
|
<p>If I have an image that is severely corrupted by Poisson noise, and I want to fit a function to the image, is it "better" to attempt to denoise the signal first before fitting, or should I move straight to the fitting stage?</p>
<p>In the example below, a 2D Gaussian function has been corrupted by Poisson noise. Should I fit a 2D Gaussian to the noisy data, or to a denoised version?</p>
<p>Denoising images is often good for qualitative reasons, but I'm curious to know about the quantitative case, for example where the volume of the Gaussian is important?</p>
<p>By "denoising", I'm thinking along the lines of the techniques such as <a href="http://josephsalmon.eu/papers/ICASSP12_SDWH.pdf" rel="nofollow noreferrer">Non-local PCA</a> rather than median filtering etc.</p>
<p><img src="https://i.sstatic.net/DCZFc.png" alt="Noisy Gaussian image"></p>
|
<p>If the example images you've given are at all representative of your application, you may want to consider thinking about the problem a little differently. Instead of thinking of the image as "corrupted by Poisson noise", think of the observed data as a limited number of photons sampled at random from the latent image intensity map. The photon counts you get are providing <em>incomplete</em> information about the latent image, not <em>corrupt</em> or <em>noisy</em> information.</p>
<p>From this perspective, if you are truly fitting a curve or formula of few parameters to your data, this is a classical parametric statistical estimation problem (lots of samples, few parameters). Such problems have a good estimation theory which guarantees you will accurately learn the fit parameters without needing too many sample points. If you apply a denoising step before parametric fitting, the theoretical guarantees will no longer apply because the error in the denoised image is some complicated composite of residual fit variance and bias from the denoiser's implicit signal model.</p>
<p>So there's a theoretical justification for fitting the function straight to the data. Just make sure that you use the correct Poissonian likelihood function when you do so, or use the EM algorithm if you don't want to worry about such things.</p>
<p>EDIT: actually you don't even need Poissonian likelihood here (unless your data is quantized/binned). Just take the approach described in <a href="http://en.wikipedia.org/wiki/Maximum_likelihood" rel="nofollow">http://en.wikipedia.org/wiki/Maximum_likelihood</a>. Here the $x_1, x_2, ... x_n$ are a collection of $n$ vectors giving the 2-D coordinates of your photon observations, and $\theta$ is the parameter vector for the function you want to fit. In the case of a Gaussian you have $\theta = (\mu,\Sigma)$ where $\mu$ is the mean (centroid in 2D) and $\Sigma$ is the covariance matrix. Then in the notation of the wiki article, your likelihood is</p>
<p>$$ \prod_{i=1}^n f(x_i | \theta) = C \prod_{i=1}^n \exp( -\tfrac{1}{2} (x_i - \mu)^T \Sigma^{-1} (x_i - \mu)) $$</p>
<p>where $C$ is some constant with square roots of pi in it and such :) If your data is quantized/binned then you consider the count value in each bin as a Poisson random variable and go from there. Weirdly, the Poisson and the $n$ observation approaches are nearly equivalent from an optimization standpoint.</p>
<p>I don't have a great reference for you offhand because I learned this stuff in lab classes or wherever years ago. But the wiki article and refs therein seem good. This paper seems ok too: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.671&rep=rep1&type=pdf" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.671&rep=rep1&type=pdf</a></p>
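For the 2-D Gaussian case described above, the maximum-likelihood fit is closed-form: the sample mean and the (1/n) sample covariance. A sketch with simulated photon coordinates (the true parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = np.array([3.0, -1.0])
cov_true = np.array([[2.0, 0.6],
                     [0.6, 1.0]])

# Each photon is one 2-D sample drawn from the latent Gaussian intensity map
photons = rng.multivariate_normal(mu_true, cov_true, size=20000)

# Maximum-likelihood estimates (closed form for a Gaussian)
mu_hat = photons.mean(axis=0)
cov_hat = np.cov(photons, rowvar=False, bias=True)   # bias=True -> MLE's 1/n
```

No denoising step is needed: the estimator consumes the raw photon positions directly, exactly as the answer argues.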
| 339
|
signal denoising
|
Why NMSE is the same as output SNR but with an opposite sign?
|
https://dsp.stackexchange.com/questions/81699/why-nmse-is-the-same-as-output-snr-but-with-an-opposite-sign
|
<p>I am denoising a signal, and as performance measures I use the Normalized Mean Squared Error (NMSE) and the output SNR between the original/clean and denoised signals.
However, in several cases the NMSE and the output SNR are the same number but with opposite signs. Is that okay?</p>
<p>Input SNR for clean signal = 15 dB
O-SNR = 19.166491 dB
NMSE = -19.166491 dB</p>
|
<p>For fixed signal power, SNR is the signal power divided by the noise power, while NMSE is the noise power divided by the signal power, i.e. exactly the reciprocal. In decibels, a reciprocal is just a sign change.</p>
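A quick numerical check of the reciprocal relationship, with a hypothetical clean signal and residual error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 4000))   # clean signal
x_hat = x + 0.05 * rng.standard_normal(x.size)     # denoised signal

err_pow = np.sum((x_hat - x) ** 2)
sig_pow = np.sum(x ** 2)
osnr_db = 10 * np.log10(sig_pow / err_pow)   # output SNR in dB
nmse_db = 10 * np.log10(err_pow / sig_pow)   # NMSE in dB
# The two differ only in sign, as in the question's numbers.
```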
| 340
|
signal denoising
|
Total Variation of a Signal - Is It Proportional to Signal Energy?
|
https://dsp.stackexchange.com/questions/49108/total-variation-of-a-signal-is-it-proportional-to-signal-energy
|
<p>In an audio application, I found it very useful to measure the <strong>total variation of a signal <span class="math-container">$y[n]$</span></strong></p>
<p><span class="math-container">$$\sum_{n=n_0}^{n_0+N} |y[n]-y[n-1]|$$</span></p>
<p>over a window of time length <span class="math-container">$N$</span> (discrete analogous to <a href="https://en.wikipedia.org/wiki/Total_variation#The_form_of_the_total_variation_of_a_differentiable_function_of_one_variable" rel="nofollow noreferrer">total variation of a function</a>).</p>
<p>I've noticed that:</p>
<ul>
<li><p>during "background noise only" parts of the signal, this total variation is low</p>
</li>
<li><p>during "background noise + musical sound" parts of the signal, the total variation is strictly higher.</p>
</li>
</ul>
<p>Thus, it worked well in my application for envelope detection, etc. After doing my application, I heard about <a href="https://en.wikipedia.org/wiki/Total_variation_denoising" rel="nofollow noreferrer">total variation denoising</a>, and it seems to confirm why it works:</p>
<blockquote>
<p>It is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the absolute gradient of the signal is high.</p>
<p>This noise removal technique has advantages [...] total variation denoising is remarkably effective at simultaneously preserving edges whilst smoothing away noise in flat regions, even at low signal-to-noise ratios</p>
</blockquote>
<p>The total variation of the signal over a time-window is in fact <strong>the distance traveled on the y-axis by the 1D-curve <span class="math-container">$n \mapsto y[n]$</span></strong>, so we understand why it works:</p>
<ul>
<li><p>when the signal is noise only, the waveform of the signal "travels" at a nearly-constant rate (see left of the following image)</p>
</li>
<li><p>when the signal is noise + musical sound, the waveform of the signal "travels" more! (see right of the image)</p>
</li>
</ul>
<p><a href="https://i.sstatic.net/kNCgD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kNCgD.jpg" alt="enter image description here" /></a></p>
<p>Now the question:</p>
<p><strong>Question: It seems that the total variation of the signal over a time-window is more or less proportional to the energy present in the signal during this window. Is this true?</strong> At least for signals with a zero mean?</p>
|
<p>No, it is not.<br />
Total variation measures the amount of change in the signal.<br />
Although change requires energy, that does not make the two proportional.</p>
<p>For instance, imagine that during a window we observe a constant signal of high value.<br />
This is clearly a high-energy signal (unless by energy you mean the variance; usually energy is the 2nd moment), yet its total variation is zero.</p>
<p>For instance:</p>
<p><a href="https://i.sstatic.net/fFDhL.png" rel="noreferrer"><img src="https://i.sstatic.net/fFDhL.png" alt="enter image description here" /></a></p>
<p>While the green signal has low TV but high energy, the red signal has higher TV yet smaller energy.</p>
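The counterexample in a few lines of code (a constant, high-valued signal versus a small but rapidly alternating one; values are arbitrary):

```python
import numpy as np

n = 1000
green = 5.0 * np.ones(n)                  # constant, high value
red = 0.5 * (-1.0) ** np.arange(n)        # small, rapidly alternating

tv = lambda y: np.sum(np.abs(np.diff(y)))   # discrete total variation
energy = lambda y: np.sum(y ** 2)           # energy as the 2nd moment
# green: zero TV, large energy; red: large TV, small energy
```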
| 341
|
signal denoising
|
extract trend correctly, including most recent values
|
https://dsp.stackexchange.com/questions/50114/extract-trend-correctly-including-most-recent-values
|
<p>I'm looking to extract the trend of a signal.</p>
<p>I've tried two methods so far: polynomial regression and wavelet denoising.</p>
<p>Neither method keeps the most recent values consistent: the last values computed will not be the same if we compute a longer buffer containing new values.</p>
<p>Is there a way, excluding FIR/IIR filters, to extract a trend that stays consistent across all values in time?</p>
<p>MATLAB code is welcome if possible.</p>
<p>Here, with MATLAB denoising, we can see the most recent values aren't the same:
<a href="https://i.sstatic.net/52rRQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/52rRQ.jpg" alt="enter image description here"></a></p>
<p>Thanks</p>
| 342
|
|
signal denoising
|
How to preprocess such signals?
|
https://dsp.stackexchange.com/questions/34334/how-to-preprocess-such-signals
|
<p>I am interested in denoising acceleration measurements recorded in ambient vibration tests. Such tests consist of recording the vibrations of a mechanical structure, say a table for example. So say I put an accelerometer on the table and measure the signal without touching the table. The objective is to retrieve the dynamical properties of the table, i.e. its dominant natural frequencies.</p>
<p>The signal is a combination of the table dynamics response under a random excitation, plus some measurement noise. The sought frequencies correspond to peaks in the PSD, but for some other reason I would like to denoise the measurements.</p>
<p>The typical range of frequency can be estimated for a structure. Let's assume it is [10,20]Hz in the present case.
The first thing I do is bandpass the signal with a bandpass of [10,20]Hz. Then, I decimate the measurements down to 50Hz, so that 20Hz is below the Nyquist frequency. But this does not suffice for my needs, so I am wondering how I could improve the preprocessing / denoising.</p>
<p>I am considering using wavelets to denoise the signals (i.e. to get nice peaks in the |DFT| vs frequency plot), but I am not sure it's a good idea. It probably would be if the input were highly non-stationary, but that's not very likely. Another possibility could be using correlation with some sines of appropriate frequencies, maybe... So the question to the experienced DSP people is: can you think of anything I could use to improve the denoising?</p>
<p><em>Please do not hesitate do ask for clarifications. I tried to make my question simple enough to not discourage readers, but had to remove some details.</em></p>
| 343
|
|
signal denoising
|
Convex Optimization in Signal and Image Processing
|
https://dsp.stackexchange.com/questions/24890/convex-optimization-in-signal-and-image-processing
|
<p>In signal processing, convex optimization plays a useful role in problems such as sparse signal recovery and filter design. What other places does convex optimization appear?</p>
<p>For example, in compressed sensing the Basis pursuit Denoising problem, the LASSO problem and the Dantzig selector can be posed as:</p>
<p>\begin{eqnarray}
\min_{x} \ell(Ax-b)+r(x)
\end{eqnarray}</p>
<p>where $\ell(\cdot)$ and $r(\cdot)$ are appropriate loss and regularization terms, respectively. Moreover, the design of a filter subject to time and frequency constraints often yields a convex formulation.</p>
|
<p>There's a whole area of signal processing dedicated to optimal filtering. In pretty much every case I've seen the filtering problem is formulated with a convex cost function. </p>
<p>Here's a freely available book on the subject - <a href="http://www.ece.rutgers.edu/~orfanidi/osp2e/osp2e.pdf" rel="nofollow noreferrer">Sophocles J. Orfanidis - Optimum
Signal Processing</a>.</p>
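For the generic problem in the question, with $\ell = \tfrac{1}{2}\|\cdot\|_2^2$ and $r = \lambda\|\cdot\|_1$, one standard convex solver is ISTA (iterative soft-thresholding). A minimal sketch; the step size and iteration count are illustrative:

```python
import numpy as np

def ista(A, b, lam, n_iter=2000):
    """Iterative soft-thresholding for
    min_x 0.5 * ||A x - b||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L      # gradient step on the smooth loss
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox of l1
    return x
```

Each iteration only needs two matrix-vector products and a shrinkage, which is why this family of methods scales to the compressed-sensing problems mentioned in the question.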
| 344
|
signal denoising
|
How detect and filter noise from signal
|
https://dsp.stackexchange.com/questions/13119/how-detect-and-filter-noise-from-signal
|
<p>This is the spectrum of the signal:<img src="https://i.sstatic.net/KyDB1.png" alt="Welch method of SPD"></p>
<p>And this is the signal:</p>
<p><img src="https://i.sstatic.net/w17nt.png" alt="Original sig"></p>
<p>On the FFT spectrum you can notice the high amplitude peaks and I think these frequencies are the main ones in the signal:
<img src="https://i.sstatic.net/JKQMY.png" alt="FFT spectrum"> </p>
<p>I think that the "low" harmonic must be clean, but I have the problem that low-amplitude components are present in every frequency interval. The main problem is that I haven't got any model of the signal for a hypothesis about the noise, and as a result, I must detect the noise components. After this step, I must design a filter or use another method of filtering for denoising the signal. Which methods can I apply for (1) detecting noise and (2) filtering? </p>
|
<p>Just "eyeballing" your signal, it looks like the interesting things are all down below 1 Hz. The peaks in your signal-browser diagram seem to be spaced every 5 to 10 seconds, which would be 0.1 to 0.2 Hz. Your spectrum plot goes from zero to 25 Hz and doesn't show anything really interesting.</p>
<p>Try zooming in on the spectrum and see what lies below 1 Hz. You may well find that the signal itself lives down in the very low frequencies, and that separating the noise from the signal is as simple as using a low-pass filter.</p>
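A sketch of that low-pass approach. The sample rate, cutoff, and test signal are assumptions chosen to match the plots (content below ~1 Hz, peaks every few seconds); `filtfilt` gives a zero-phase result for offline data:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                                        # assumed sample rate, Hz
b, a = butter(4, 1.0 / (fs / 2), btype="low")    # keep content below ~1 Hz

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 1.0 / fs)
slow = np.sin(2 * np.pi * 0.15 * t)              # peaks every ~7 s
noisy = slow + 0.5 * rng.standard_normal(t.size)

smooth = filtfilt(b, a, noisy)                   # zero-phase low-pass
```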
| 345
|
signal denoising
|
Avoiding latency distortion at high denoise levels with DWT
|
https://dsp.stackexchange.com/questions/89516/avoiding-latency-distortion-at-high-denoise-levels-with-dwt
|
<p>I am denoising biological signals using the DWT, and for UI reasons would prefer the smoother waveform afforded by denoise level 5. However, higher denoise levels seem to distort the latency of salient features (peaks). I've played around with different parametrizations (which converge to a function call of MATLAB's <code>wdencmp</code>):</p>
<ul>
<li>threshold methods (rigrsure, heursure)</li>
<li>soft vs hard thresholds</li>
<li>different mother wavelets</li>
</ul>
<p>So far I've settled on threshold method 'rigrsure', a soft threshold, and the sym8 wavelet. I've also been playing around with padding/extending the front end of the signal, which sometimes [over]corrects the distortion. That said, I don't have a great intuition for why padding length affects signal morphology the way it does (the RMSE between the raw and denoised signals oscillates as padding length increases).</p>
<p>Given the continued presence of latency distortion, I was wondering what other parametrizations and preprocessing steps I could explore to preserve feature latency in my signal. I understand, of course, that this is domain specific, but any pointers to techniques would be appreciated.</p>
<p><a href="https://i.sstatic.net/ewQv7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ewQv7.png" alt="enter image description here" /></a></p>
<p>[Raw vs denoised signals - peak latencies are offset, especially trough #2 Red and blue are denoised.]</p>
| 346
|
|
signal denoising
|
What are the pros and cons of wavelet for filtering compared to conventional filters?
|
https://dsp.stackexchange.com/questions/43951/what-are-the-pros-and-cons-of-wavelet-for-filtering-compared-to-conventional-fil
|
<p>Wavelets are now widely used for denoising or for extracting a specific frequency band of a signal. However, the same can be done with conventional filters (e.g. Butterworth, Chebyshev). So what are the pros and cons of these two methods for filtering? </p>
|
<p>Let us first consider the orthogonal DWT: this transform is built with several constraints: orthogonality (and invertibility) of course, and the discretization of some continuous wavelet transform, with a specific dyadic structure, yielding a frequency decomposition whose cuts are approximately:</p>
<p>$$[0\;1/2^L\;1/2^{L-1}\ldots 1/2 \;1]$$</p>
<p>Discrete wavelets can be implemented as a bank of filters (a filterbank), each dedicated to one of the above intervals, followed by the appropriate subsampling. And only some specific filters (albeit an infinite number of them) obey the above constraints. </p>
<p>Hence, with the DWT, only dyadic boundaries are available (*), only a subset of filters can be used, and the outputs are decimated. And because of orthogonality, arbitrary scales are not available at all.</p>
<p>Whereas with standard filters, you only have linearity constraints: invertibility is not required, you can specify the bounds, ripples and decay you need (at the price of design issues).</p>
<p><strong>So for mere filtering, I cannot see an advantage of using discrete and critical wavelet schemes</strong>. However, DWT can do some crude band-pass filtering while being used for other processing needs.</p>
<p><strong>Wavelets now play a more interesting role for data sparsification</strong>, along with non-linear analysis, for compression, restoration, which standard filters generally cannot achieve.</p>
<p>As a side note, if you allow redundancy in the transformation, choosing a wavelet with a sufficient number of oscillations in a sufficiently long "window" could do a good job, but that would be overkill.</p>
<p>(*) with $M$-band wavelets and wavelet packets, you could get more generic $M$-ary integers $k/M^n$, but not all fractions or reals</p>
| 347
|
signal denoising
|
Kalman filtering in practice: biomedical processing
|
https://dsp.stackexchange.com/questions/96519/kalman-filtering-in-practice-biomedical-processing
|
<p>I'm working with a Lead II ECG signal sampled at 500 Hz that contains noise and artifacts.
I do not have access to an accelerometer or reference signals — only the ECG itself.</p>
<p>I'm new to Kalman filters and I am trying to denoise ECG signals. I have read that Kalman Filters can be useful for this, but I am confused about how to set up the filter (model the noise sources and so on).</p>
<p>Can someone help guide me through the basic steps of setting up a Kalman Filter for denoising in this context?</p>
| 348
|
|
signal denoising
|
Filtering artefacts and filtering short signals
|
https://dsp.stackexchange.com/questions/69532/filtering-artefacts-and-filtering-short-signals
|
<p>I have a signal from EEG sensors, and I am trying to remove mains (AC) interference from it. For that reason, I estimated the PSD of my signal and found that 50 Hz and 100 Hz are likely to represent noise. I constructed a Butterworth filter of order four and got a much clearer signal, but at the start (the [0:150] segment) there is even more distortion. Why is that? If it helps, I use <code>lfilter</code> from <code>scipy.signal</code>.</p>
<p><a href="https://i.sstatic.net/TnZUY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TnZUY.png" alt="enter image description here" /></a></p>
<p>Besides, in the future I want to break a signal into smaller pieces (say, length of 100). I have tried denoising them already and it seems like such filters do not work on short segments. What can I do with this?</p>
<p><a href="https://i.sstatic.net/NMkgP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NMkgP.png" alt="enter image description here" /></a></p>
|
<p>All real-time filtering (as opposed to post-processing) with FIR and IIR filters will have start-up transients that depend on the state of the filter at start-up. For optimum rejection of AC noise, instead of a Butterworth filter consider using a 2nd-order IIR notch filter with the notch set at your AC frequency (such as 50 Hz). A design for this is explained further on this site; it is simple and effective for this purpose, and happens to be demonstrated with a notch at 50 Hz (<a href="https://dsp.stackexchange.com/questions/31028/transfer-function-of-second-order-notch-filter/31030#31030">Transfer function of second order notch filter</a>).</p>
<p>To get shorter segments, partition into segments at the output of the filter, so that the filter's memory can be maximized to meet the filtering requirement as designed (a filter's transition bandwidth between passing a frequency of interest and rejecting it depends on the span of its memory; for tight rejection, a long memory is required).</p>
<p>If the application is for very short durations at completely different instances in time, that can make no use of the prior immediate time domain signal, then such traditional FIR and IIR filtering techniques will not be suitable beyond what they are capable of for the given time span. That said, the IIR nulling approach with its rejection optimized at the interference frequency may be sufficient and simple.</p>
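A sketch of the suggested 50 Hz notch using `scipy.signal.iirnotch` to design the 2nd-order section (the sample rate, Q, and test signal are assumptions for illustration):

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 500.0                               # assumed EEG sample rate, Hz
b, a = iirnotch(50.0, Q=30.0, fs=fs)     # narrow notch at the mains frequency

rng = np.random.default_rng(0)
t = np.arange(0.0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10.0 * t)             # stand-in 10 Hz rhythm
mains = 0.8 * np.sin(2 * np.pi * 50.0 * t)     # interference
cleaned = lfilter(b, a, eeg + mains)
```

Note the start-up transient the answer describes: the early output samples are distorted while the filter's internal state settles, so any quality assessment should skip the first chunk.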
| 349
|
signal denoising
|
Why should wavelet re-synthesis produce an output when the main component is suppressed and what does this mean for denoising?
|
https://dsp.stackexchange.com/questions/69441/why-should-wavelet-re-synthesis-produce-an-output-when-the-main-component-is-sup
|
<p>I understand that aliasing occurs in the DWPT when the wavelet used is of low order, since the "filters" are not perfect and the combination of downsampling and overlap between bands causes aliasing.
I am using a low-order wavelet because I aim to implement the process in real time and have limitations on the number of filter taps.
Doing some simulations in MATLAB, I saw aliasing in two cases:</p>
<ol>
<li>In one case, I decomposed a signal into a tree and plotted the
FFT of a certain node; aliasing was visible there, but when I reconstructed the tree I noticed
that the aliasing was canceled (I did not alter the coefficients; I only decomposed, viewed, and reconstructed).
<a href="https://i.sstatic.net/8rJqw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8rJqw.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/CEShG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CEShG.png" alt="enter image description here" /></a></li>
<li>Doing the same thing as above, but thresholding the node or compressing the coefficients to zero, the aliasing was still present after reconstruction.
<a href="https://i.sstatic.net/mifwK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mifwK.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/yiJps.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yiJps.png" alt="enter image description here" /></a>
I compressed level 22 coefficients to zero.
<a href="https://i.sstatic.net/CCVIS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CCVIS.png" alt="enter image description here" /></a>
So can I conclude that aliasing will always occur if I am using a low-order wavelet, say 'db4', and I manipulate the coefficients?
If so, how is the wavelet such a famous denoising tool if aliasing can be this bad?</li>
</ol>
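<p>The alias cancellation seen in case 1 can be reproduced with a minimal two-channel filter bank. The sketch below uses a Haar bank in polyphase form (a simpler stand-in for the db-wavelet tree above, not the questioner's setup): reconstruction is exact when the coefficients are untouched, but zeroing the detail band leaves a residual error because the alias terms of the two branches no longer cancel:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# Haar analysis in polyphase form: approximation a, detail d
a = (x[0::2] + x[1::2]) / np.sqrt(2)
d = (x[0::2] - x[1::2]) / np.sqrt(2)

def synthesize(a, d):
    """Haar synthesis: interleave the two reconstructed polyphase components."""
    y = np.empty(2 * len(a))
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

perfect = synthesize(a, d)                       # untouched coefficients
lowpass_only = synthesize(a, np.zeros_like(d))   # detail band zeroed

print(np.max(np.abs(perfect - x)))       # ~machine epsilon: aliasing cancels
print(np.max(np.abs(lowpass_only - x)))  # clearly nonzero: cancellation broken
```

<p>Each branch on its own is aliased (the subbands overlap); only the sum of the two synthesis branches cancels the alias terms, which is why any manipulation of the coefficients, thresholding included, breaks the cancellation.</p>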
| 350
|
|
signal denoising
|
Removing low frequency vibrations from measured signal
|
https://dsp.stackexchange.com/questions/30690/removing-low-frequency-vibrations-from-measured-signal
|
<p>Suppose I have two sensors $s_1$ and $s_2$. $s_1$ measures the desired heart signal (with most of the frequency content below 100 Hz), and the other is a reference sensor picking mainly background noise. </p>
<p>Due to the pumping of the blood in the heart, there are also low-frequency vibrations, mainly below 10 Hz, that contaminate both sensors. The sensors are spatially separated, making direct cancellation unfeasible. </p>
<p>The figure below shows the frequency content of both sensors ($s_1$ is blue and $s_2$ is red), following discrete Fourier Transform. The visible peak is the 50 Hz mains hum. <a href="https://i.sstatic.net/ahDa4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ahDa4.png" alt="enter image description here"></a></p>
<p>Nonlinear denoising is effective in removing mains hum without compromising the signal. However, it seems to be ineffective in removing the low frequency vibrations. The figure below shows the effect of applying it to $s_1$ <a href="https://i.sstatic.net/N1DwG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N1DwG.png" alt="enter image description here"></a> </p>
<p>My question is how can these vibrations be removed from $s_1$, without causing too much distortion to the desired signal?</p>
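<p>Since a reference sensor is available, one classical candidate is adaptive noise cancellation with an LMS filter, which learns the unknown transfer path from $s_2$ to the vibration component of $s_1$. A minimal sketch under simplified assumptions (the vibration reaches $s_1$ as a short FIR-filtered version of what $s_2$ records; all signals and parameters below are illustrative):</p>

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n = 6000
ref = rng.standard_normal(n)                     # s2: vibration reference
vib = lfilter([0.5, 0.3, 0.2], [1.0], ref)       # vibration as it arrives at s1
heart = np.sin(2 * np.pi * 0.02 * np.arange(n))  # stand-in for the desired signal
s1 = heart + vib

# LMS adaptive canceller: predict the vibration in s1 from the reference
M, mu = 8, 0.01
w = np.zeros(M)
e = np.zeros(n)
for i in range(M, n):
    u = ref[i - M + 1:i + 1][::-1]   # most recent M reference samples
    y_hat = w @ u                    # estimated vibration at s1
    e[i] = s1[i] - y_hat             # error = cleaned-signal estimate
    w += 2 * mu * e[i] * u

resid = e[3000:] - heart[3000:]      # vibration left over after convergence
print(np.mean(vib[3000:] ** 2), np.mean(resid ** 2))
```

<p>Because the sensors are spatially separated, the unknown path is exactly what makes direct subtraction unfeasible; the adaptive filter estimates that path from the data.</p>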
| 351
|
|
signal denoising
|
How can I use interpolation to align audio
|
https://dsp.stackexchange.com/questions/89231/how-can-i-use-interpolation-to-align-audio
|
<p>I notice that when I connect two audio clips, there is a jump at the junction.</p>
<p><a href="https://i.sstatic.net/k6uLU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k6uLU.png" alt="enter image description here" /></a>
How do I use interpolation to connect these two pieces of audio?</p>
<p>Thanks for the answers, I've got an answer from my other question:
<a href="https://dsp.stackexchange.com/questions/89206/how-to-merge-audio-better/89235?noredirect=1#comment191632_89235">How to merge audio better?</a></p>
<p>Some related information:</p>
<p>1. The noise is the sound of aircraft propellers. 2. I want to cut the noisy audio into 0.5 s segments, feed them into the algorithm to denoise, and then connect the denoised segments.
The denoising algorithm is CMGAN; the link is here: github.com/ruizhecao96/CMGAN</p>
<p>a plot of both signals I am trying to connect:
<a href="https://i.sstatic.net/HlFg0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlFg0.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/6DYQA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6DYQA.png" alt="enter image description here" /></a></p>
|
<p>A straightforward(ish) approach is cross-fading: apply a fade-out at the end of the first clip and a fade-in at the beginning of the second. Overlap the two sections that fade in/out, which will gradually blend the two together.</p>
<p>You can experiment with linear/logarithmic fade curves, and of course with the fade duration.</p>
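<p>A minimal sketch of a linear cross-fade (the clips and the overlap length here are illustrative):</p>

```python
import numpy as np

def crossfade(a, b, overlap):
    """Blend the tail of clip a into the head of clip b over `overlap` samples."""
    fade = np.linspace(0.0, 1.0, overlap)
    mixed = a[-overlap:] * (1.0 - fade) + b[:overlap] * fade
    return np.concatenate([a[:-overlap], mixed, b[overlap:]])

# toy clips ending/starting at different levels (the "jump" in the question)
t = np.linspace(0, 1, 1000)
clip1 = np.sin(2 * np.pi * 5 * t)
clip2 = 0.5 + np.sin(2 * np.pi * 5 * t)

out = crossfade(clip1, clip2, overlap=200)
print(len(out))   # len(clip1) + len(clip2) - overlap = 1800
```

<p>With a 200-sample overlap, the step at the junction is spread across the fade instead of appearing as a discontinuity.</p>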
| 352
|
signal denoising
|
Recommended resources for noise reduction
|
https://dsp.stackexchange.com/questions/51927/recommended-resources-for-noise-reduction
|
<p>I know the question <a href="https://dsp.stackexchange.com/q/427/36868">What Resources Are Recommended for an Introduction to Signal Processing (DSP)?</a> and I have already read some general DSP books, such as Rick Lyons's "Understanding DSP," and also have browsed through Oppenheim's.</p>
<p>More specifically, <strong>what (printed) book would you recommend for denoising techniques</strong> in general? </p>
<p>Including techniques adapted for audio, spectral subtraction, Wiener filtering, etc. and if possible adapted to <em>discrete</em> signal processing with an algorithm/engineering point of view
(i.e. rather <code>x[k]</code> than continuous <code>x(t)</code> and integrals...)</p>
<p>I just ordered a used version of <a href="https://www.wiley.com/en-us/Advanced+Digital+Signal+Processing+and+Noise+Reduction%2C+4th+Edition-p-9780470754061" rel="nofollow noreferrer"><em>Advanced digital signal processing and noise reduction</em>, Vaseghi</a>, but I'm looking for other recommendations.</p>
|
<p>Signal denoising is a well-studied technique in signal processing. It began with simple techniques such as filtering, where the emphasis is on designing filters that perform denoising fast and efficiently.</p>
<p>Later, when wavelet theory was developed, researchers used wavelets to denoise signals. Essentially, one applies thresholding (soft or hard) to the wavelet coefficients; reconstructing from the dominant coefficients denoises the signal.</p>
<p>In the early 2000s, sparsity-based techniques were introduced, where the input signal is represented using an overcomplete dictionary. In the reconstruction step, sparsity-inducing norms are used to extract only the dominant columns of the dictionary. Popular approaches include LASSO and total-variation denoising.</p>
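<p>The soft-thresholding rule mentioned above is a one-liner in NumPy; large coefficients shrink by the threshold and small ones go to zero:</p>

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Shrink coefficients toward zero; values with magnitude below lam vanish."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

c = np.array([3.0, -2.0, 0.5, -0.2])
print(soft_threshold(c, 1.0))   # large entries shrink by 1, small ones -> 0
```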
<p><strong>Reference</strong></p>
<ul>
<li>Useful reference with MATLAB codes <a href="http://www.numerical-tours.com/matlab/" rel="nofollow noreferrer">http://www.numerical-tours.com/matlab/</a></li>
</ul>
| 353
|
signal denoising
|
Expanding piecewise polynomial using Daubechies wavelet
|
https://dsp.stackexchange.com/questions/29706/expanding-piecewise-polynomial-using-daubechies-wavelet
|
<p>What is the best Daubechies wavelet (i.e. which number of vanishing moments) to expand a signal $\boldsymbol{x} \in \mathbb{R}^n$?
$\boldsymbol{x}$ consists of $m$ polynomial pieces of degree $d$. The criterion is to make the DWT of the signal as sparse as possible.</p>
<p>Update:
The goal of sparsifying the signal in wavelet domain is denoising. Let $\boldsymbol{W}$ denote DWT.
$$
\boldsymbol{y} = \boldsymbol{Wx}
$$
Apply a soft-thresholding to $\boldsymbol{y}$,
$$
\hat{\boldsymbol{y}} = \text{sign}(\boldsymbol{y})\max(|\boldsymbol{y}|-\lambda,0)
$$
Choose $\lambda=\sqrt{2\log n}$ according to <a href="http://biomet.oxfordjournals.org/content/81/3/425.short" rel="nofollow">Ideal spatial adaptation by wavelet shrinkage</a>.
The sparsity is defined by $\| \hat{\boldsymbol{y}} \|_0$.</p>
|
<p>According to your formula, you also apply soft-thresholding to the approximation coefficients, which is not standard. Aside, your operator $W$ does not seem to specify the number of wavelet levels used. Finally, your class of signals does not seem to address the regularity at piecewise junctions. </p>
<p>I believe it is very unlikely in this case that, in a discrete implementation and without a further relation between $m$ and $n$, you can theoretically find a best wavelet in all cases, because $\|\hat{y}\|_0$ is a quite sensitive index (and it is not a norm). Of course, a Daubechies wavelet with $d$ vanishing moments would seem appropriate.</p>
<p>But since the DWT is quite fast, in your context, you could just find a "generally best wavelet", by simulating random signals, and iterating over some levels, and each Daubechies wavelet with moments in $[d-2,\ldots,d+2]$ for instance.</p>
| 354
|
signal denoising
|
How to acquire physical noise samples
|
https://dsp.stackexchange.com/questions/32278/how-to-acquire-physical-noise-samples
|
<p>I need to obtain some real (simulated) data to indicate noise in communication system.</p>
<p>How do I go about it? I need it for my thesis.</p>
<p>I don't know whether denoising a set of signals would help me obtain the noise data samples.</p>
<p>I just need the noise data samples.</p>
|
<p>One easy way to grab a set of samples of physical noise is by acquiring a cheap SDR, such as an <a href="http://www.rtl-sdr.com/" rel="nofollow">RTL</a>, tuning it to an empty band, and just reading and saving the samples. This should give you a nice set of mostly white noise samples.</p>
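<p>If simulated noise is acceptable for the thesis, complex additive white Gaussian noise, the standard receiver-noise model in communication systems, can also be generated directly (the unit-variance normalization here is a convention, not a requirement):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# complex AWGN with total variance 1 (0.5 per real/imaginary component)
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

print(np.var(noise))       # close to 1
print(abs(np.mean(noise))) # close to 0
```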
| 355
|
signal denoising
|
Detecting and Removing Noise from Signal using Python
|
https://dsp.stackexchange.com/questions/85013/detecting-and-removing-noise-from-signal-using-python
|
<p>Through this platform, I want to ask: how can I remove unwanted noise from a signal when I do not have much information about the frequencies at which it appears? The data is collected from an inductive sensor and the sampling frequency is 30000 Hz. There is a lot of electrical noise in the signal; the easily distinguishable components have already been removed using a notch filter. However, some electrical noises are not easily distinguishable from the other parts of the signal. I tested the following approaches:</p>
<ul>
<li>Removing easily visible electrical noises using Notch Filter</li>
<li>Taking FFT and applying binning to visualize other noisy parts</li>
<li>Using find_peaks() function to detect and remove noisy parts</li>
</ul>
<p>However, when I applied the inverse FFT I could not recover the filtered original time-series data.</p>
<p>This is the Original Time Series Signal:</p>
<p><a href="https://i.sstatic.net/8Fg8T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Fg8T.png" alt="Original Time Series Signal" /></a></p>
<p>This is the complex Fourier Transform before binning and find_peaks():</p>
<p><a href="https://i.sstatic.net/845m6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/845m6.png" alt="enter image description here" /></a></p>
<p>And this is the fourier transform after binning:</p>
<p><a href="https://i.sstatic.net/EA1CU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EA1CU.png" alt="enter image description here" /></a></p>
<p>And this is the result obtained after find_peaks():</p>
<p><a href="https://i.sstatic.net/hMFAu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hMFAu.png" alt="enter image description here" /></a></p>
<p>And this is the inverse FFT result that I obtained after the processing described above (FFT, binning, the find_peaks() technique):</p>
<p><a href="https://i.sstatic.net/oODbE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oODbE.png" alt="enter image description here" /></a></p>
<p>Can anyone please help me understand where I went wrong?</p>
<p>This is reference data obtained using an eddy current sensor. I suppose my data should also look like this after denoising.</p>
<p><a href="https://i.sstatic.net/lORPf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lORPf.png" alt="enter image description here" /></a></p>
|
<p>The train moves at speed <span class="math-container">$v_t$</span>. This (absolute) speed reasonably will have a supremum <span class="math-container">$v_{t\max}$</span>. It obviously has an infimum of <span class="math-container">$0$</span>, but there is nothing to measure with a train standing around. So we declare some minimum speed we are interested in and call that <span class="math-container">$v_{t\min}$</span>.</p>
<p>The rail fasteners are spaced at a distance of <span class="math-container">$d$</span>, which can be assumed as constant for these purposes. The transitions at the "heart" points can with some reason be assumed to be in the same order of magnitude as <span class="math-container">$d$</span>.</p>
<p>From this, we can derive the part of the spectrum we are interested in. For the lower corner frequency, we take <span class="math-container">$v_{t\min}$</span> and add some margin, say, a decade, below. That gives us
<span class="math-container">$$f_l = 0.1\frac{v_{t\min}}{d}$$</span>
We do the same for the upper corner frequency and set the margin again to a decade, above this time of course:
<span class="math-container">$$f_u = 10\frac{v_{t\max}}{d}$$</span>
With these two corner frequencies, design a bandpass to filter the signal. You should be able to see a lot more than before. From there further steps can be taken.</p>
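<p>With illustrative numbers plugged in, say $v_{t\min}=1$ m/s, $v_{t\max}=50$ m/s and $d=0.6$ m (assumptions, not values from the question), the corners come out to roughly 0.17 Hz and 833 Hz, and the bandpass can be designed and applied with SciPy:</p>

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, sosfreqz

fs = 30000.0               # sampling rate (from the question)
d = 0.6                    # fastener spacing in m (assumed)
v_min, v_max = 1.0, 50.0   # train speed limits in m/s (assumed)

f_l = 0.1 * v_min / d      # lower corner, a decade below v_min/d
f_u = 10.0 * v_max / d     # upper corner, a decade above v_max/d

sos = butter(4, [f_l, f_u], btype="bandpass", fs=fs, output="sos")

# zero-phase filtering so events in the trace are not shifted in time
x = np.random.default_rng(0).standard_normal(int(fs))  # stand-in data
y = sosfiltfilt(sos, x)

# sanity check: ~unity gain in the passband, strong attenuation far outside
w, h = sosfreqz(sos, worN=np.array([30.0, 10000.0]), fs=fs)
print(np.abs(h))
```

<p>Using second-order sections keeps the filter numerically well behaved even with the very low lower corner relative to the sampling rate.</p>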
<p>Remark on your attempts so far:
The operations you performed on the spectrum were <em>nonlinear</em>, so the inverse FFT was bound to produce rubbish.</p>
| 356
|
signal denoising
|
Is this better detection than Matched Filter and Gaussian noise cancellation technique for SONAR data?
|
https://dsp.stackexchange.com/questions/93291/is-this-better-detection-than-matched-filter-and-gaussian-noise-cancellation-tec
|
<p>The code below generates one sinusoid, which is taken as the known signal. One needs to find the locations of the signals under additive Gaussian noise.</p>
<p>The known signal comprises sinusoids with random phase and amplitude, placed at random locations in time.</p>
<p>As I understand it, the matched filter is the standard option. The algorithm below detects the locations of the signals and compares the results with the matched filter output.</p>
<p>Blue is the matched filter output and red is the new algorithm's output. As you can see, the inputs to the algorithm (the function Nfilter) have unit norm. Is such an algorithm useful?</p>
<p><strong>EDIT</strong></p>
<p><strong>As one can see, the Nfilter.m code completely ignores the energy information of the signals. Both the received signal and the matched filter are normalized by their norms and fed to Nfilter. Further, no energy information is considered in the plots. The measure introduced here is not energy.</strong></p>
<p>I have shown only one plot.</p>
<p>I have added a measure to quantify the noise-removal capability of the new algorithm compared to the matched filter. It shows higher noise reduction, and thus adds information. In the measure, only the energy at the signal's time locations is considered desired signal, and all other samples are noise. Ideally all other samples would be zero and the signal locations would be 1, which is not achievable. Therefore the measure is the ratio of the sum over the time samples containing signal (3 time samples in this case) to the sum over all samples. This measure is much higher for the new algorithm.</p>
<p>No nonlinear function can achieve the same results. This is because the matched filter output for the smaller-amplitude signal is lower than the sidelobe of the large-amplitude sinusoid, whereas with the new algorithm all of the sinusoid amplitudes are greater than the noise portion and the sidelobes.
EDIT: Additional information and a plot for 10 dB noise.</p>
<pre><code>ans =
Columns 1 through 7
28.8068 26.3796 34.7214 29.7934 17.1480 28.6398 19.4703
Columns 8 through 10
32.6842 32.8646 36.7861
</code></pre>
<p><a href="https://i.sstatic.net/rnNIG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rnNIG.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/wTBSo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wTBSo.png" alt="enter image description here" /></a></p>
<pre><code>close all
clear all
% clc
counter=1;
for k4=1:10
for dB=-3:-2:-3
t=.01:.01:.5;
L=length(t);
% Random frequencies)
f1=5;
%a1,a2 a3 are the amplitudes of the sinusoids at three different time locations
a1=5;
a2=1;
a3=.5;
% Sinusoid with random phase
matF=sin(2*f1*t+rand(1,1));
sM=max(max(abs(matF)));
% amp is the amplitude of the noise.
amp=sM*1/(10^(dB/20));
% Random number of time samples between interested signals
n1=ceil(200*rand(1,1))+20;
n2=ceil(200*rand(1,1));
n3=ceil(200*rand(1,1));
n4=ceil(200*rand(1,1));
n5=ceil(200*rand(1,1));
n6=ceil(200*rand(1,1));
% 3 locations of the correct sample number
correctSamples=[n1+ceil(L/2)+1,n1+ceil(L/2)+1+n2+n3+L,n1+ceil(L/2)+1+n2+n3+L+n4+n5+L];
% Simulated signals with noise
y=[amp*randn(1,n1),a3*matF+amp*randn(1,length(t)),amp*randn(1,n2),amp*randn(1,n3),a1*matF+amp*randn(1,length(t)),amp*randn(1,n4),amp*randn(1,n5),a2*matF+amp*randn(1,length(t)),amp*randn(1,n6)];
% Matched filter output
y1=conv(y,flip(matF));
timeStamp=linspace(1,length(y),length(y));
for k=ceil(L/2):length(y)-ceil(L/2)
% Matched filter output, same as y1, written as a function
uF(ceil(L/2)+k)=matfilter(y(k-ceil(L/2)+1:k+ceil(L/2)),matF);
temp1=y(k-ceil(L/2)+1:k+ceil(L/2))/norm(y(k-ceil(L/2)+1:k+ceil(L/2)));
temp2=matF/norm(matF);
% Output of the filter. Notice that the input to the function (Nfilter) has
% unity norm
[a,b,c,d,e]=Nfilter(temp1,temp2);
uNew(ceil(L/2)+k)=a;
uNew1(ceil(L/2)+k)=b;
uNew2(ceil(L/2)+k)=c;
uNew3(ceil(L/2)+k)=d;
uNew4(ceil(L/2)+k)=e;
end
normalizeduNew(L/2:length(y1)-ceil(L))=uNew(L/2:length(y1)-ceil(L));
ufNormalized(L/2:length(y1)-ceil(L))=uF(L/2:length(y1)-ceil(L));
% plots
figure
plot(20*log(abs(y1(L/2:length(y1)-ceil(L)))),'g');
hold on
hold on
plot(20*log(abs(ufNormalized(L/2:length(y1)-ceil(L)))),'b');
plot(20*log(abs(normalizeduNew(L/2:length(y1)-ceil(L)))),'r');
signalMatchedFilter(counter)=0;
signalFilter(counter)=0;
for kk=1:3
plot(correctSamples(kk),20*log(abs(uF(correctSamples(kk)+L/2-1))),'.','MarkerSize',50)%/max(abs(uF(L/2:length(y1)-ceil(L))))
signalMatchedFilter(counter)= signalMatchedFilter(counter)+abs(ufNormalized(correctSamples(kk)+L/2-1));
signalFilter(counter)= signalFilter(counter)+abs(normalizeduNew(correctSamples(kk)+L/2-1));
plot(correctSamples(kk),20*log(abs(uNew(correctSamples(kk)+L/2-1))),'.','MarkerSize',30)%/max(abs(uNew(L/2:length(y1)-ceil(L))))
end
snrMat(counter)=signalMatchedFilter(counter)/sum(abs(ufNormalized(L/2:length(y1)-ceil(L))));
snrFil(counter)=signalFilter (counter)/sum(abs(normalizeduNew(L/2:length(y1)-ceil(L))));
counter=counter+1;
grid on
str = sprintf('Comparison %d dB',dB);
title(str)
ylabel(" Filtered output")
xlabel("Time samples")
figure(30)
plot(uNew1(L/2:length(y1)-ceil(L)))
hold on
% plot(uNew2(L/2:length(y1)-ceil(L)))
% plot(uNew3 (L/2:length(y1)-ceil(L)))
% plot(uNew4 (L/2:length(y1)-ceil(L)))
end
end
snrFil./snrMat
%
ans =
Columns 1 through 7
8.0114 13.5867 17.2081 20.3147 6.8262 14.5398 15.6664
Columns 8 through 10
8.4308 2.4626 2.9837
Published with MATLAB® R2020b
</code></pre>
<p>The same algorithm can be used to denoise sonar signals. Consider a 26-element (transducer/sensor) linear array receiving simultaneous returns from 3 targets in three different directions under additive Gaussian noise. The first plot shows the result of the usual Fourier transform beamforming after denoising, and the second plot is also Fourier beamforming but with <strong>no</strong> denoising.
<a href="https://i.sstatic.net/mto2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mto2n.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/mpWvp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mpWvp.png" alt="enter image description here" /></a></p>
<p>With more processing we get:
<a href="https://i.sstatic.net/R1G7J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R1G7J.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/uVz2G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uVz2G.png" alt="enter image description here" /></a></p>
<p><strong>EDIT</strong>: The above plots were for a narrowband signal. The following plots are for signals of 10 kHz bandwidth. It should be clear that a higher signal bandwidth gives a sharper peak at the detection time; on the other hand, there is no noise reduction during the noise-only intervals. There had been a suggestion that higher bandwidth would also denoise the noise itself.<a href="https://i.sstatic.net/K2wu7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K2wu7.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/6Isrj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Isrj.png" alt=" " /></a></p>
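<p>For reference, the matched-filter baseline being compared against can be sketched in a few lines; the parameters below are illustrative and not those of the MATLAB listing:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0
t = np.arange(0, 0.5, 1 / fs)
template = np.sin(2 * np.pi * 5 * t)        # known signal shape (50 samples)

y = 0.5 * rng.standard_normal(600)          # noise-only background
true_loc = 250
y[true_loc:true_loc + len(template)] += 2.0 * template

# matched filter = correlation of the data with the known template
out = np.correlate(y, template, mode="valid")
est_loc = int(np.argmax(np.abs(out)))
print(est_loc)   # lands at, or right next to, true_loc
```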
| 357
|
|
signal denoising
|
How well can discrete wavelet packet transform reduce noises that are similar to the input signal in the same frequency band?
|
https://dsp.stackexchange.com/questions/69345/how-well-can-discrete-wavelet-packet-transform-reduce-noises-that-are-similar-to
|
<p>If I had 50Hz noise coming from power line, and signals in the same frequency range (EEG for example 0.1Hz to 100Hz). If my sampling frequency is 30kHz but I downsample my signal to 937kHz and use the discrete wavelet packet transform (with Daubechies wavelet) for denoising purposes.</p>
<p>My frequency content now is 0-937kHz in level 0 and divided to the power of 2 with respect to the level (as in the binary tree).</p>
<hr />
<h2><a href="https://i.sstatic.net/oXaaC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oXaaC.png" alt="enter image description here" /></a></h2>
<p>I have two questions:</p>
<ol>
<li>If my 50 Hz noise is high in amplitude compared to my signal of interest, looking at the node covering the "43.94 Hz-59.59 Hz" frequency band at level 6: if I perform thresholding, the noise would still be present in the signal since it has high amplitude, correct? To remove it completely I would need to zero out all coefficients in that band.</li>
<li>If my 50Hz is smaller in amplitude than my signal of interest, performing thresholding could theoretically remove the noise while keeping my signal of interest, correct? (theoretically because it depends on the threshold level and method)</li>
</ol>
<p>Please correct me if I am wrong along the process. I am still trying to understand the benefits and drawbacks of wavelets.</p>
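<p>Both points can be checked with a toy calculation: thresholding keeps whichever coefficients are large, regardless of whether they belong to the signal or the interferer, which is exactly the asymmetry between cases 1 and 2 (illustrative numbers, not EEG data):</p>

```python
import numpy as np

def soft(c, lam):
    """Soft thresholding: shrink toward zero, remove magnitudes below lam."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

# coefficients in the node containing 50 Hz: [EEG component, mains hum]
case1 = np.array([1.0, 5.0])   # mains dominates -> it survives thresholding
case2 = np.array([5.0, 1.0])   # signal dominates -> mains is removed

lam = 2.0
print(soft(case1, lam))   # [0., 3.]  the interferer is kept, the signal lost
print(soft(case2, lam))   # [3., 0.]  the signal is kept, the interferer gone
```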
| 358
|
|
signal denoising
|
Detect the beginning of an increasing signal
|
https://dsp.stackexchange.com/questions/48996/detect-the-beginning-of-an-increasing-signal
|
<p>After denoising and cleaning, I get amplitude signals like this (y-axis: dB):</p>
<p><img src="https://i.sstatic.net/Pl6ob.png" width="300">
<img src="https://i.sstatic.net/UlIpe.png" width="300">
<img src="https://i.sstatic.net/6BLFB.png" width="300"></p>
<p>On bottom left of each of these 3 graphs, you can see a noise floor (nearly "horizontal line"). This noise floor fluctuates in a range < +-1 dB.</p>
<p><strong>How to detect when the real signal begins, i.e. when the signal starts increasing?</strong></p>
<p>Here is what I've tried:</p>
<p>Let $t_i$ be the first time the signal goes higher than $noisefloor + i \ dB$. For example $t_4$ is the first time the signal goes 4 dB more than the noise floor.</p>
<p>Let's do a quadratic or cubic interpolation $f$ of the curve going through $(t_2, floor+2)$, $(t_4, floor+4)$, $(t_6, floor+6)$, etc.</p>
<p>Then let's check when the polynomial $f$ crosses the horizontal line $y = floor$.</p>
<p>It sometimes works, and better than all I've tried so far. But it's not perfect:</p>
<ol>
<li>Here the place where the green interpolation crosses $y = floor$ (dashed line) <strong>for the first time</strong> is more or less what I'm looking for, but there are 2 crossings, how to select the best? Sometimes it's the first to choose, sometimes it's the second..., it's quite random.</li>
</ol>
<p><a href="https://i.sstatic.net/iwLpR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iwLpR.png" alt="enter image description here"></a></p>
<ol start="2">
<li>Here the place where the green interpolation crosses $y = floor$ (dashed line) does not exist! so it doesn't work at all:</li>
</ol>
<p><a href="https://i.sstatic.net/F7VqS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F7VqS.png" alt="enter image description here"></a></p>
<p><strong>How to do better than this?</strong></p>
<p>Note: a simple threshold-detection of when the signal goes higher than $noisefloor + 0.5 \ dB$ as "the time when the signal starts" is not precise enough for my application.</p>
|
<p>It is not entirely clear what sort of signal we are dealing with here, apart from the use of the "audio" tag. If the signal had a wider bandwidth then this would be closer to <a href="https://www.google.co.uk/search?q=onset+detection" rel="nofollow noreferrer">onset detection</a>. But this is not what we are dealing with here. What we are dealing with here is a slow varying waveform that is considered "on", when it has emerged from some kind of background activity. This viewpoint would make it closer to <a href="https://www.google.co.uk/search?q=anomaly+detection" rel="nofollow noreferrer">Anomaly Detection</a> or "Outlier Detection" type of problems.</p>
<p>There are a couple of approaches you can take here. One is to fit a model that includes an "activation" parameter and then try to see when does that parameter transits to activation and take that point as the beginning of the onset. If you set off down the path of model fitting, eventually you are going to have to train your model on a number of these curves so that it learns all the different possible "avenues to activation" there might be. For example, the third trace shows background activity, transiting to an intermediate plateau, transiting to full on activation and even in the "activated" region its slope might show further variation.</p>
<p>So, before you start looking at those techniques, maybe you can try the plain old technique of detecting outliers through the statistics of the signal.</p>
<p>As a human being (?) you seem to have a good idea of when you would like to consider this curve as being "on". Therefore, collect all samples of your signal within the "background" region and use something like a <a href="https://en.wikipedia.org/wiki/Box_plot" rel="nofollow noreferrer">boxplot</a> or fit a distribution to this data. The simplest example of that distribution would be a Gaussian that has a mean and standard deviation. This models your "normality" region. Any value that could have emerged from that distribution is dubious as to whether it belongs to the "background" or "activated" segments. But that is not true for all values because soon enough (as time evolves towards the right), the curve will start pushing towards extremal points of the distribution where the probability of generating such a value becomes smaller and smaller. </p>
<p>Putting a hard threshold there would give you an estimate of where the "activation" region starts.</p>
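<p>A minimal sketch of that idea on a synthetic trace (the 4-sigma threshold and the run-length requirement, which guards against single noisy excursions, are illustrative choices):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
floor_len = 300
background = rng.normal(0.0, 0.3, floor_len)           # noise floor, ~+-1 dB
ramp = np.linspace(0.0, 10.0, 200) + rng.normal(0.0, 0.3, 200)
trace = np.concatenate([background, ramp])

# model "normality" from the background segment
mu, sd = background.mean(), background.std()
thresh = mu + 4.0 * sd

# onset = first sample of the first run of k consecutive samples above thresh
k = 5
above = trace > thresh
onset = None
for i in range(len(trace) - k):
    if above[i:i + k].all():
        onset = i
        break
print(onset)   # a little after sample 300, where the ramp crosses 4 sigma
```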
<p>Hope this helps.</p>
<p><strong>EDIT:</strong></p>
<p>After more information about the problem was shared, I am more inclined to suggest one of the onset detection techniques which will work directly on the audio data.</p>
<p>In any case, the following (cave) illustrations might help a bit more with the earlier suggestions in this post.</p>
<p><a href="https://i.sstatic.net/JBkeN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JBkeN.png" alt="enter image description here"></a></p>
<p>The "Human being" comment comes into play when determining the point of earliest transition from "background" to "key on" (rather than doing it automatically). You use the data in the "background" part of the waveform to estimate the statistics of what it means for a sample to be coming from the "background" part and use that to determine a threshold beyond which the samples are now more likely to belong to the "key on" part.</p>
<p>Alternatively:
<a href="https://i.sstatic.net/AHk3a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AHk3a.png" alt="enter image description here"></a></p>
<p>Combine many takes at similar settings by aligning them on the "Key on" slope and summarise all of this data with a series of boxplots, each one telling you the sort of value limits you can expect at each time instance. Use that information then to choose the point in time when there is deviation from the background.</p>
<p><em>(Inset images of boxplot and distribution from <a href="https://en.wikipedia.org/wiki/Box_plot" rel="nofollow noreferrer">this</a> and <a href="https://en.wikipedia.org/wiki/Normal_distribution" rel="nofollow noreferrer">this</a> wikipedia articles respectively.)</em></p>
| 359
|
signal denoising
|
Filtering a square signal with a median filtering to preserve the edges
|
https://dsp.stackexchange.com/questions/69867/filtering-a-square-signal-with-a-median-filtering-to-preserve-the-edges
|
<p>If needed, you can find my first post for this problem <a href="https://dsp.stackexchange.com/questions/69850/applying-a-lowpass-filter-to-a-noisy-square-signal-leads-to-a-shift-of-the-signa/69851?noredirect=1#comment144072_69851">here</a>.
I am trying to clean the following signal:</p>
<p><a href="https://i.sstatic.net/2tQgl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2tQgl.png" alt="Raw signals" /></a></p>
<p>As proposed in the comment, I tried to use this <a href="https://dsp.stackexchange.com/questions/20143/duration-of-unknown-rectangular-pulse-with-additive-white-gaussian-noise/20146#20146">post</a> proposing 2 methods: <em>median filtering</em> and <em>total variation denoising</em>. I'm a python user, and I didn't find a good implementation of the second, thus I only kept the median filtering, implemented with <code>scipy</code>.</p>
<pre><code>from scipy.ndimage import median_filter
window_size = 10
data_filtered = median_filter(data, size=window_size, mode='nearest')
</code></pre>
<p>The result is, at first look, not too bad:</p>
<p><a href="https://i.sstatic.net/XxNGU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XxNGU.png" alt="filtered signals" /></a></p>
<p>However, the ramp is not completely preserved. The first one is slightly shifted to the right; while the second remains at the correct position. If I zoom on the low amplitude signal, I get:</p>
<p><a href="https://i.sstatic.net/t5yQB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t5yQB.png" alt="Zoom on ramps of low amplitude signal" /></a></p>
<p>My goal is to measure the width of the square. This is directly impacting my measure. <strong>What is causing this phenomenon, why is it not symmetric and can it be fixed?</strong></p>
<p><em><strong>N.B:</strong></em> This is impacting my measure because I would like to measure the width at a µs scale. Obviously, as those first measurements were done with a sampling frequency <code>fs = 1e6</code>, this resolution is not possible. Would this problem be solved if I increased the sampling frequency? (Max is <code>1e9</code>.) Which sampling frequency would you recommend?</p>
<p>Thank you for the guidance,
Mathieu</p>
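<p>One thing worth checking: <code>size=10</code> is an even window, and an even-length median window cannot be centred on a sample, so the implementation has to favour one side, which produces exactly this kind of one-sided shift (<code>scipy.signal.medfilt</code> only accepts odd kernel sizes for this reason, while <code>scipy.ndimage.median_filter</code> places an even window off-centre). With an odd window, an ideal step survives a median filter completely unchanged, as a quick check shows (synthetic square pulse, not the measured data):</p>

```python
import numpy as np
from scipy.signal import medfilt

# ideal square pulse: rising edge at sample 50, falling edge at sample 150
x = np.zeros(200)
x[50:150] = 1.0

y_odd = medfilt(x, kernel_size=11)   # odd, symmetric window
print(np.array_equal(y_odd, x))      # True: both edges stay exactly in place
```

<p>With the asymmetry removed, the remaining limit on edge precision is the sampling interval itself, which is where a higher sampling rate would help.</p>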
| 360
|
|
signal denoising
|
Classification of very noisy EMG signals
|
https://dsp.stackexchange.com/questions/76389/classification-of-very-noisy-emg-signals
|
<p>I'm an absolute newbie to signal processing. I'm trying to classify EMG signals which are very noisy (around -70 dB in some cases). After applying an EMD technique these values improve to -30 dB to -40 dB.</p>
<p>My question is :</p>
<p>I want to classify these EMG signals. It's a binary classification problem. If these noisy signals are fed to a complex classification algorithm like a CNN, it will learn features from the noisy signals only, and apply that knowledge to classify unknown noisy signals. <strong>As all the samples in my dataset are affected by the same kinds of noise, will the noise significantly affect the final validation accuracy? Or should I focus on denoising further?</strong></p>
|
<p>-30 dB is still very noisy.</p>
<p>If you've had success with EMD, I'd try an inspired transform that's improved on it: <a href="https://github.com/OverLordGoldDragon/ssqueezepy" rel="nofollow noreferrer">synchrosqueezing</a>. Whether it's best to denoise before classifying depends on the amount of available data: most denoising will throw away some valuable information, but also make the task easier for a classifier. If there's <em>lots</em> of data, don't denoise.</p>
<p>I'd also try scattering which'll yield timeshift-invariant features (<a href="https://youtu.be/4eyUReyIPXg" rel="nofollow noreferrer">lecture</a>); with this much noise one can't hope for precise temporal localization anyway.</p>
<p>Finally I'd look for architectures pretrained on a similar noise profile (or sufficiently powerful to generalize) and attempt transfer learning. But the most promising approach would be data-oriented: find less noisy samples.</p>
| 361
|
signal denoising
|
How does signal subtraction affect frequency response?
|
https://dsp.stackexchange.com/questions/83067/how-does-signal-subtraction-affect-frequency-response
|
<p>I had a noisy signal which I denoised using machine learning. Now assuming the noise was additive I am subtracting the denoised signal from the noisy signal to get the noise part. I just did time domain subtraction, but when I plotted the frequency response of the noisy signal, the denoised signal and the noise, I noticed that some harmonics are actually of higher amplitude in the noise signal, like the harmonic at 3.3 MHz.
<a href="https://i.sstatic.net/ITLrk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ITLrk.jpg" alt="enter image description here" /></a></p>
<p>Can somebody please tell what I am doing wrong?</p>
<p>Thanks</p>
|
<p>Well, two options:</p>
<ol>
<li>Your channel doesn't only <em>add</em> noise, but has some nonlinearity, or the noise is not just additive, so: your model is wrong, or</li>
<li>your denoiser is not perfect and doesn't only remove noise, but maybe adds some interference at some frequencies. So: your denoiser can't denoise perfectly, and has some unexpected side effects.</li>
</ol>
<p>Option 2 seems more likely. Also note that it is mathematically in general impossible to remove <em>all</em> noise. From an information-theory point of view, whenever you add noise with a PDF that has support wider than half the minimum distance between signal points, you simply <em>eradicate</em> some information. It's mathematically impossible to perfectly denoise.</p>
<p>Note that learned denoisers are trained towards one objective only – minimizing some objective function ("loss function"). I'd expect that function to be something like a euclidean distance between the denoised and the transmit sequence at the symbol instants – but that doesn't mean it's the only possible loss function for this task, nor does it mean your denoiser actually converged against a maximum estimator that's uncorrelated to your data.</p>
<p>First of all, unless you can prove the opposite, you have converged (best case) to a local optimum, not to the global optimum, and:</p>
<p>Denoising might, depending on your signal level, simply have learned to set specific samples to specific values. For example, in a filtered/shaped pulse train you'd expect "0" midway between symbol instants, on average. Learning to always set that value to zero, even when the filter doesn't fulfill the Nyquist criterion, doesn't make the loss function described above any worse, and might train a bit of a zero-forcing equalizer for the transmit filter, which might be amplifying your signal harmonics. So, I'd even call this a bit expected!</p>
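<p>A minimal synthetic sketch of this effect (all signals below are stand-ins, not the poster's data): a denoiser that shaves even 10% off the signal leaves a copy of the signal's harmonic in the subtracted "noise".</p>

```python
import numpy as np

# Synthetic check of the additive model: an imperfect denoiser that
# attenuates the signal leaks that signal into the residual.
rng = np.random.default_rng(0)
fs = 10e6
t = np.arange(4096) / fs
clean = np.sin(2 * np.pi * (fs / 8) * t)             # tone at 1.25 MHz
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = 0.9 * clean + 0.05 * rng.standard_normal(t.size)

residual = noisy - denoised                          # time-domain subtraction

# The residual spectrum peaks at the signal frequency: the "noise"
# contains leaked signal, exactly the symptom in the plots above.
f = np.fft.rfftfreq(t.size, 1 / fs)
R = np.abs(np.fft.rfft(residual))
print(f[np.argmax(R)])                               # 1250000.0
```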
| 362
|
signal denoising
|
Interpretation of Histogram in Statistical Image Processing
|
https://dsp.stackexchange.com/questions/35015/interpretation-of-histogram-in-statistical-image-processing
|
<p>I am learning statistical image processing by myself. In papers and books, it always show the histogram of original images and gradients as the following image shows. The histograms of images vary significantly while histograms of image gradients show some similarity. Does it assume that each pixel in images obey the same probability distribution for the histograms of images? Does the histogram of any image gradient obey the same probability distribution?</p>
<p><a href="https://i.sstatic.net/t93wQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t93wQ.png" alt="enter image description here"></a></p>
<p>In the paper <em>Image Denoising Using Scale Mixtures of Gaussians in the Wavelet Domain</em> by Javier Portilla, Vasily Strela, Martin J. Wainwright, and Eero P. Simoncelli there is one paragraph</p>
<p><em>Contemporary models of image statistics are rooted in the
television engineering of the 1950s (see [4] for review), which
relied on a characterization of the autocovariance function for
purposes of optimal signal representation and transmission. This
work, and nearly all work since, assumes that image statistics are
spatially homogeneous (i.e., strict-sense stationary). Another
common assumption in image modeling is that the statistics
are invariant, when suitably normalized, to changes in spatial
scale. The translation- and scale-invariance assumptions, coupled with an assumption of Gaussianity, provides the baseline
model found throughout the engineering literature: images are
samples of a Gaussian random field, with variance falling as
in the frequency domain. In the context of denoising, if one
assumes the noise is additive and independent of the signal, and
is also a Gaussian sample, then the optimal estimator is linear.</em></p>
<p><strong>image statistics are spatially homogeneous</strong> What does it mean? Does image statistics means the histogram?</p>
<p><strong>an assumption of Gaussianity</strong> What is Gaussian?</p>
<p><strong>images are samples of a Gaussian random field</strong> If one image is considered as a random field, can histograms be used? The assumption that each pixel obeys the same probability distribution will not hold.</p>
|
<blockquote>
<p>Does it assume that each pixel in images obey the same probability distribution for the histograms of images? </p>
</blockquote>
<p>Images of different scenes will definitely not obey the same probability distribution of the pixel values. </p>
<p>Histograms are one way that people use to do dimensionality reduction: move from a 2D image to a 1D signal.</p>
<blockquote>
<p>Does the histogram of any image gradient obey the same probability distribution?</p>
</blockquote>
<p>What you are seeing in the image gradient is the "diffs" in the image. Because images are generally low-pass in nature, this means you are picking out the places where they change. There will be (at least) two components to this change: how the scene being imaged changes and how the sensor capturing the image perturbs the "true" pixel values.</p>
<p>For the same camera taking the images, this second component should be very similar across all images.</p>
<blockquote>
<p><strong>image statistics are spatially homogeneous</strong> What does it mean? Does image statistics means the histogram?</p>
</blockquote>
<p>It means that the image statistics are very similar regardless of where in the image you look. One way the statistics show up would be in the histogram, yes.</p>
<blockquote>
<p><strong>an assumption of Gaussianity</strong> What is Gaussian?</p>
</blockquote>
<p>Gaussian means that the noise (random fluctuations in the image) follows a normal or Gaussian distribution.</p>
<blockquote>
<p><strong>images are samples of a Gaussian random field</strong> If one image is considered as a random field, can histograms be used? The assumption that each pixel obeys the same probability distribution will not hold.</p>
</blockquote>
<p>If the images are not random, then they will follow some well-defined deterministic rule.</p>
<p>Certainly, histograms can be used.</p>
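<p>A small numpy sketch of the point about "diffs" (the image here is synthetic, purely for illustration): the pixel histogram depends on the scene, while the gradient histogram of a smooth scene concentrates near zero.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
# A synthetic "smooth scene": low-frequency ramp plus mild sensor noise.
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
img = 0.5 * x + 0.3 * y + 0.02 * rng.standard_normal((128, 128))

grad_x = np.diff(img, axis=1).ravel()        # horizontal "diffs"

pix_hist, _ = np.histogram(img.ravel(), bins=50)
grad_hist, edges = np.histogram(grad_x, bins=50)

# The gradient histogram is sharply peaked around zero: most diffs are tiny.
peak_bin = np.argmax(grad_hist)
peak_center = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
print(peak_center)
```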
| 363
|
signal denoising
|
removing noise using Scipy.signal.butter:
|
https://dsp.stackexchange.com/questions/64008/removing-noise-using-scipy-signal-butter
|
<p>I am going to remove the noise from a recorded brain signal. It was a continuous recording, digitized at a sample rate of 30 kHz, so it is now a digital signal. I have written the code below for denoising this signal, and I include two figures (the red trace is the denoised one): one overview figure and a second, zoomed-in one. Below is the code.</p>
<pre><code>from scipy import signal
import matplotlib.pyplot as plt

wn = 0.01   # normalized cutoff (fraction of Nyquist)
n = 4       # filter order
b, a = signal.butter(n, wn, btype='low', analog=False, output='ba')
filtered = signal.lfilter(b, a, data_raw)
plt.plot(data_raw, 'b', filtered, 'r')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/8Q01S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Q01S.png" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/W4wSg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W4wSg.png" alt="enter image description here"></a></p>
<p>Now I have several questions please (sorry if they are easy as I am a beginner):</p>
<p>1) Here we only determined the normalized frequency (wn). So if I want to know the low frequency of the filter, what is the default amount of sampling frequency for the <strong>Signal.butter</strong> function, please?</p>
<p>2) If I want to determine the sampling frequency of the butter filter by my self, is it the same sampling frequency as the data was digitized with? I mean the 30 kHz? If not what is the amount for this frequency, please? How should I determine it?</p>
<p>3) Do you have any other recommendations to filter this signal better and with less noise? Thanks a lot</p>
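<p>Regarding questions 1 and 2, a short sketch of how SciPy interprets <code>wn</code> (assuming SciPy ≥ 1.2, which added the <code>fs</code> parameter): without <code>fs</code>, <code>Wn</code> is a fraction of the Nyquist frequency, i.e. of half the 30 kHz rate the data was digitized with.</p>

```python
from scipy import signal

fs = 30_000                  # sample rate the data was digitized with, in Hz
wn = 0.01                    # normalized cutoff from the question

# Without fs, Wn is a fraction of the Nyquist frequency (fs/2):
cutoff_hz = wn * fs / 2
print(cutoff_hz)             # 150.0 (Hz)

# The two designs below are therefore identical filters:
b1, a1 = signal.butter(4, wn, btype='low')
b2, a2 = signal.butter(4, cutoff_hz, btype='low', fs=fs)
```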
| 364
|
|
signal denoising
|
denoising a given signal using wavelets
|
https://dsp.stackexchange.com/questions/15525/denosing-given-signal-using-wavelet
|
<p>let us suppose that we have given following model</p>
<p>$y(t)=A_1 \sin(\omega_1*t+\phi_1) + A_2 \sin(\omega_2*t+\phi_2) + A_3 \sin(\omega_3*t+\phi_3)+ \ldots +A_p \sin(\omega_p*t+\phi_p)+z(t)$</p>
<p>where $z(t)$ is white noise. Everything is unknown: $p, A_i, \omega_i, \phi_i$. I want to remove the noise using wavelets; I know it is possible using the discrete wavelet transform. Which mother wavelet is good for deterministic and periodic components? Suppose we have data of length $N=294$: which mother wavelet should I choose, and at which scales? Please help me with a practical example, because wavelets are new to me and I want to see the steps for denoising. There is a link about denoising here:</p>
<p><a href="http://eeweb.poly.edu/iselesni/DoubleSoftware/signal.html" rel="nofollow">http://eeweb.poly.edu/iselesni/DoubleSoftware/signal.html</a></p>
<p>It includes MATLAB code, but before I start using some method I want to know: is it relevant for such a model? As far as I know, wavelets are good for transient signals; are they OK for steady signals? For periodic ones? If so, which steps are necessary? Thanks in advance.</p>
|
<p>If you have reason to believe your signal is sparse in the frequency domain, then you should be denoising in that domain: use FFTs, not wavelets. Look at the magnitude spectrum and see if it's very spiky; that will be your cue. If so, attenuate the frequencies with a small response.</p>
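<p>A hedged sketch of what that FFT-domain denoising could look like (the data is synthetic and the median-based threshold is only illustrative, not a tuned choice):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
N = 294                                      # data length from the question
t = np.arange(N)
clean = 2.0 * np.sin(2 * np.pi * 10 * t / N) + 1.0 * np.sin(2 * np.pi * 37 * t / N)
noisy = clean + 0.5 * rng.standard_normal(N)

X = np.fft.rfft(noisy)
mag = np.abs(X)
threshold = 4 * np.median(mag)               # crude, data-driven threshold
X[mag < threshold] = 0                       # zero the low-magnitude bins
denoised = np.fft.irfft(X, n=N)

# Mean-squared error before and after: the spiky bins carry the signal.
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```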
| 365
|
signal denoising
|
What is the name of this very simple spectral subtraction technique?
|
https://dsp.stackexchange.com/questions/51724/what-is-the-name-of-this-very-simple-spectral-subtraction-technique
|
<p>Let $S = X + N$ be the sum of two audio signals $X$ and $N$ which are both stationnary (let's think X is a constant volume 440 Hz sinusoid and N is constant volume noise).</p>
<p>If the sum S has a -20 dB volume and N has a volume of -30 dB, <strong>what is the volume of X?</strong> (could be RMS or peak volume, it doesn't matter here).</p>
<p>The answer is</p>
<pre><code>20*log10(10^(-20/20)-10^(-30/20)) ~ -23.3
</code></pre>
<p><strong>i.e. X has a peak volume of -23.3 dB.</strong></p>
<p>(of course this is not true if X and N have phase-cancellation, but except this case, it works).</p>
<p><a href="https://i.sstatic.net/KMurI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KMurI.jpg" alt="enter image description here"></a></p>
<p><strong>Question: what is the name of this computation? i.e. find the volume of X given the volume of N and X + N?</strong></p>
<hr>
<p>I'm using an application of this for STFT denoising (with a noise template which is N):</p>
<ul>
<li>if a FFT bin has -20 dB amplitude (signal S), and the noise has amplitude -30 dB amplitude for the same bin (signal N), what would be the amplitude of the denoised signal X ? Answer: -23.3 dB, <strong>thus this FFT bin should be lowered by 3.3 dB.</strong></li>
</ul>
<p>It works quite well for my noise reduction application (again here X is nearly a constant sinusoid and N constant noise), but I haven't found a name for this simple technique. What would be the name?</p>
|
<p>Estimating the power of a sum of signals depends on coherence or incoherence assumptions. Details are given for instance on <a href="http://www.sengpielaudio.com/calculator-leveladding.htm" rel="nofollow noreferrer">incoherent signal summing</a> or <a href="http://www.sengpielaudio.com/calculator-leveladding.htm" rel="nofollow noreferrer">coherent signal summing</a>. Those can provide an estimate of the noise level. They can be called power sums or voltage sums. Other details in <a href="https://community.keysight.com/community/keysight-blogs/rf-test/blog/2017/04/10/voltage-and-power-add-differently-understanding-signal-power-measurement" rel="nofollow noreferrer">Voltage and power add differently</a>. I'll be digging for other names.</p>
<p>The expectation of the energy of a sum $v$ of variables $v_1$ and $v_2$ is:</p>
<p>$$ E((v_1+v_2)^2) = E(v_1^2)+E(v_2^2)+2E(v_1 v_2)\,.$$</p>
<p>Three specific cases. If $v_1=v_2$, you get $+10\log 4 = +6 $ dB. If sources are uncorrelated, $E(v_1 v_2)=0$, so you get $+10\log 2 = +3 $ dB. If sources are in opposite phase, they cancel each other, and the total energy becomes $-\infty$ dB.</p>
<p>Then, subtracting the noise level from the observed signal is an instance of (scalar) hard thresholding, called <a href="https://sound.eti.pg.gda.pl/denoise/noise.html" rel="nofollow noreferrer">spectral subtraction</a> in the Fourier domain. As alluded to in your <a href="https://dsp.stackexchange.com/q/51737/15892">Spectrogram with square or non-square magnitude of STFT: power vs. magnitude</a> question, scalar spectral subtraction on the observed STFT ${S}(\tau,\omega)$ can be performed with a power law, as:</p>
<p>$$|\overline{X}(\tau,\omega)|^\alpha=\max\left(|{S}(\tau,\omega)|^\alpha - \lambda|\overline{N}(\tau,\omega)|^\alpha,0\right)\,.$$</p>
<p>The recovery of the noiseless phase is a full different world.</p>
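<p>A minimal numpy sketch of the power-law spectral subtraction formula above (the function name and array shapes are illustrative); with $\alpha=1$ it reproduces the hand computation from the question:</p>

```python
import numpy as np

def spectral_subtract(S_mag, N_mag, alpha=2.0, lam=1.0):
    """Denoised magnitude |X| from noisy |S| and a noise estimate |N|."""
    diff = np.maximum(S_mag ** alpha - lam * N_mag ** alpha, 0.0)
    return diff ** (1.0 / alpha)

# The -20 dB bin with a -30 dB noise estimate, using alpha = 1 (magnitudes):
S = np.array([10 ** (-20 / 20)])
N = np.array([10 ** (-30 / 20)])
X = spectral_subtract(S, N, alpha=1.0)
print(20 * np.log10(X[0]))        # ~ -23.3 dB, matching the hand computation
```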
| 366
|
signal denoising
|
Spectral analysis of positive signals
|
https://dsp.stackexchange.com/questions/9220/spectral-analysis-of-positive-signals
|
<p>Suppose that I have a sensor that can acquire samples $X[k]$ of the Fourier transform of an unknown signal $Y[t]$. An example is MRI, where the acquired data is in $k-$space. Now suppose that the unknown signal $Y[t]$ is known to be real and non-negative. My question is: is there a principled way to incorporate this knowledge into the spectral analysis algorithm that will estimate $Y[t]$ from $X[k]$, in order to produce an estimate with less bias or variance? I am thinking of non-parametric spectral estimation algorithms. A naive way of course would be to take the real part of $Y[t]$ and clip the negative values, but this does not seem to be optimal. I am looking for some sort of Cadzow's denoising method for spectral data. </p>
|
<p>To give a complete answer to this question you're going to need to provide more details about the kind of models you're considering in the first place. But yes, in many cases you can augment those models with <em>a priori</em> constraints on $Y[t]$, such as $0 \leq Y[t] \leq 1$. </p>
<p>For example, if the standard model has some sort of least-squares structure, then adding constraints of that type turns the problem into a bound-constrained least squares problem. There are a variety of approaches to solving such problems, and while they are more expensive than standard least squares, they are quite tractable. And it's very likely that such constraints will produce a better reconstruction.</p>
<p>Even without knowing more, though, I will say this: if your modeling approach does not produce real signals <em>naturally</em>, then you are almost certainly using the wrong modeling approach. It concerns me that you are even proposing taking the real part of the output of some other model. You should be searching the space of real signals if you know that to be the underlying structure.</p>
| 367
|
signal denoising
|
multi-frame image restoration
|
https://dsp.stackexchange.com/questions/70972/multi-frame-image-restoration
|
<p>Suppose we have a sequence of still images each of which has been contaminated by some particles(ex, dust/sand/smoke) making the images very poor in certain areas.</p>
<p>What approach would be best to teach image regeneration using multiple frames? The simplest technique is to simply find a way to detect what parts of the image are contaminated and uncontaminated and pull uncontaminated sections from each frame.</p>
<p>For spec based contamination you could probably use one of the classic denoising techniques but I'm thinking of having whole sections be contaminated making these single image local approaches not work great.</p>
<p>I am thinking some form of deep learning restoration(advice on architectures would be great), but I am open to more classic signal processing techniques.</p>
| 368
|
|
signal denoising
|
Additive White Gaussian Noise (AWGN) and Undecimated DWT
|
https://dsp.stackexchange.com/questions/37932/additive-white-gaussian-noise-awgn-and-undecimated-dwt
|
<p>One of the benefits of DWT is that it is an orthonormal transform. </p>
<p>There are statements that the energy of noise component mainly concentrates on the high-frequency (detail) part and distributes homogeneously. The energy of noise component is included in more wavelet coefficients with smaller amplitudes, while the energy of the useful signal concentrates on fewer wavelet coefficients with bigger amplitudes. See <a href="https://journals.sagepub.com/doi/pdf/10.1260/0144-5987.28.2.87" rel="noreferrer">Wavelet Denoising of Well Logs and its Geological Performance</a>.</p>
<p>As far as I understand, that is due to the orthogonality property of DWT.</p>
<p>Does this behavior of white Gaussian noise also hold true:</p>
<ul>
<li>for undecimated DWT (like SWT or MODWT).</li>
<li>for less redundant forms of DWT (comparing to undecimated version) - DTCWT, for example.</li>
</ul>
<p>As far as I understand both of them aren't orthogonal transforms.</p>
|
<blockquote>
<p>One of the benefits of DWT is that it is an orthonormal transform</p>
</blockquote>
<p>Well, not quite. Some standard DWTs are orthonormal, but not all of them. The others used in practice are biorthogonal, which makes computations more difficult. However, for close-enough-to-orthogonal wavelet transforms, the application of orthogonal results to non-strictly-orthogonal transforms tends to work in practice. Real-world noise is rarely exactly white. But let us start from the noise.</p>
<p>Undecimated DWT or DTCWT belong to frames, a set of generating vectors subject to some bounds: for all $x$ (I am skipping technical conditions) transformed into coefficients $X$, there are $A>0 $ and $B<\infty$ such that:</p>
<p>$$ A\|x\| ^2 \le \|X\| ^2 \le B\|x\| ^2 $$</p>
<p>of which $\|X\| ^2 = \|x\| ^2 $ (orthonormality) is a special case. The case $A=B$ corresponds to tight frames, the closest "redundant" equivalent to orthonormality. In this (close-to) tight frame case, things are generally manageable. So for the noise part, the noise coefficients are not white in general, as some correlation appears with redundancy or non-orthogonality. </p>
<p>However, not all is lost with the noise:</p>
<ul>
<li>wavelet frames generally keep some noise whitening properties,</li>
<li>sometimes, a SWT can be implemented as a union of orthogonal bases, which can be processed separately, then recombined (e.g. with cycle-spinning), but this is a bit suboptimal, as happens with some other redundant transforms: scalar thresholding is common, but suboptimal,</li>
<li>some technical results still can be obtained with the SWT, see for instance <a href="https://cavs.msstate.edu/publications/docs/2005/09/4199Fow2005a.pdf" rel="nofollow noreferrer">The redundant discrete wavelet transform and additive noise</a>, 2005, J. Fowler</li>
<li>for DTCWT, you are much less redundant, and can even get a tight frame. The good news is that, due to special features of the Hilbert transform on primal/dual wavelets (and their cross-correlation), you can express the noise covariance very precisely, see for instance <a href="http://arxiv.org/abs/1108.5395" rel="nofollow noreferrer">Noise covariance properties in Dual-Tree Wavelet Decomposition</a>, C. Chaux et al., 2007. This helps in designing good block-thresholding algorithms, like in <a href="http://arxiv.org/abs/0712.2317" rel="nofollow noreferrer">A Nonlinear Stein Based Estimator for Multichannel Image Denoising</a>, C. Chaux et al., 2008. </li>
</ul>
<p>So more or less, indeed, </p>
<blockquote>
<p>the energy of noise component is included in more wavelet coefficients with smaller amplitudes</p>
</blockquote>
<p>Now, let us focus on the signal. Orthogonality places a lot of constraints on a basis: the first vector has $N$ degrees of freedom, the second $N-1$, etc. Thus, orthobasis vectors may be less able to nicely match, and hence concentrate, <em>structured signals or images</em>. If one relaxes orthogonality, one enhances the diversity of projection vectors, and tends to have an increased sparsity, so more or less:</p>
<blockquote>
<p>the energy of the useful signal concentrates on fewer wavelet coefficients with bigger amplitudes</p>
</blockquote>
<p>But wait, in the transformed domain only, which can be redundant, and correlates noise. </p>
<p><strong>However, all in all, with a little well-managed redundancy (tight or almost tight-frame), and clever thresholding, non-critical wavelet transforms are often beneficial with respect to critically sampled DWT.</strong> This also happens with more generic filter banks, see for instance <a href="https://arxiv.org/abs/0907.3654" rel="nofollow noreferrer">Optimization of Synthesis Oversampled Complex Filter Banks</a>, 2009, J. Gauthier et al.</p>
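<p>As a small numerical illustration of the orthonormal baseline (a hand-rolled one-level Haar DWT, not a library call): white-noise energy is exactly preserved, which is the property that tight frames ($A=B$) reproduce up to the frame bound.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(1024)          # white Gaussian noise

s = np.sqrt(0.5)
approx = s * (x[0::2] + x[1::2])       # lowpass (scaling) coefficients
detail = s * (x[0::2] - x[1::2])       # highpass (wavelet) coefficients

e_time = np.sum(x ** 2)
e_wave = np.sum(approx ** 2) + np.sum(detail ** 2)
print(np.allclose(e_time, e_wave))     # True: ||X||^2 == ||x||^2
```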
| 369
|
signal denoising
|
If I know the RMS noise/the variance of a DC measurement, can I simply subtract it from the measurement?
|
https://dsp.stackexchange.com/questions/84149/if-i-know-the-rms-noise-the-variance-of-a-dc-measurement-can-i-simply-subtract
|
<p>Let's say I have an electronic system that's taking a measurement. It provides a simple bipolar excitation current to a resistive load (bipolar square wave so as to cancel out thermal emf), puts it through an analog front end and some anti-aliasing, into an ADC.</p>
<p>What I'm thinking is if during calibration we can just take the readings of the ADC over a long period of time on a known stable load and subtract out the DC component, all we're left with is noise, which we can calculate the variance of and store in memory. Can't we then digitally generate noise with the same variance and subtract it from our input signal when actually in use to get a signal with significantly reduced noise? From a signals/statistics standpoint, does this make any sense or is that not how noise works at all? Should I look into more complex schemes like wavelet denoising?</p>
|
<p>Yes. Your intuition is correct, you need some sort of a statistical method since you don't have a way to measure the noise separately from your noisy measurement synchronously (in which case what you propose would work for an offline process).<br />
What you do have is a measurement of the noise taken as a separate process, at a different time, which gives you a good starting point IF you assume the statistics of the noise shouldn't change much between measurements.</p>
<p>With these considerations in mind, you can use the noise reference statistics and do this sort of de-noising in the frequency domain, either through <strong>Wiener filtering</strong> approaches or <strong>Spectral Subtraction</strong> (there are others, but those are the most common approaches).</p>
<ul>
<li><p>A naive approach could be to use your long noise-only measurement, and from that get a good estimate of the noise statistics through a Power Spectral Density, and perform Spectral Subtraction on your subsequent noisy measurements.</p>
</li>
<li><p>As for Wiener methods, here is <a href="https://hal.inria.fr/inria-00450766/document" rel="nofollow noreferrer">a good reference paper</a> and <a href="https://www.mathworks.com/matlabcentral/fileexchange/24462-wiener-filter-for-noise-reduction-and-speech-enhancement?s_tid=srchtitle" rel="nofollow noreferrer">the author's MATLAB implementation</a></p>
</li>
</ul>
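<p>A naive sketch of the first bullet (variable names, signal levels, and parameters are illustrative, not a tested design): estimate the noise PSD from the calibration record with Welch's method, then form a Wiener-like per-bin gain for later measurements.</p>

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs = 1000.0
noise_cal = 0.2 * rng.standard_normal(10_000)          # calibration: noise only

f, Pnn = signal.welch(noise_cal, fs=fs, nperseg=256, detrend=False)

t = np.arange(2048) / fs
measurement = 1.0 + 0.05 * np.sin(2 * np.pi * 3 * t)   # slowly varying "DC" level
noisy = measurement + 0.2 * rng.standard_normal(t.size)

_, Pyy = signal.welch(noisy, fs=fs, nperseg=256, detrend=False)
gain = np.maximum(1.0 - Pnn / Pyy, 0.0)                # Wiener-like per-bin gain
print(gain[0])                                         # near 1 where signal dominates
```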
| 370
|
signal denoising
|
Sampling frequency after aggregations
|
https://dsp.stackexchange.com/questions/76845/sampling-frequency-after-aggregations
|
<p>I have accelerometer signal, which is preprocessed by the actigraph on-device. Original sampling rate is 32 Hz, but activity count is summed for every minute, so I have a signal with 1 measurement per minute.</p>
<p>For denoising and to analyze long-term dependencies (my data spans several days), especially related to circadian rhythm, I aggregate this data for each hour, taking the mean. After this I have a signal with 1 measurement per hour.</p>
<p>Libraries for analysis of frequency features, e.g. Scipy or TSFEL require sampling rate as a parameter, e.g. for <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.periodogram.html" rel="nofollow noreferrer">calculating periodogram</a>.</p>
<p>What should be my sampling rate after all this processing?</p>
<ul>
<li>32 Hz, since original measurements were gathered with this frequency?</li>
<li>1 Hz, since as a part of very basic preprocessing (and since I don't care about minor movements and analysis of short term effects) a total activity per minute was calculated?</li>
<li>1/3600 Hz, since after aggregating by hour I have one measurement per hour?</li>
<li>it depends on purpose and results and no single answer is totally right or wrong?</li>
</ul>
|
<p>It depends on <em>which</em> signal you want to do frequency analysis with. If you are doing frequency analysis on the 1/3600 Hz signal, then that is the sample rate the function probably expects. If you are doing frequency analysis on the 1 Hz signal, then use 1 Hz as the sample rate parameter in the function you are using.</p>
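<p>A small sketch with synthetic hourly data (the circadian amplitude and noise level are made up): passing <code>fs = 1/3600</code> for the hourly series puts a 24 h rhythm at 1/86400 Hz.</p>

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
n_hours = 24 * 30                                   # 30 days of hourly means
t = np.arange(n_hours)                              # hours
activity = 5 + 3 * np.sin(2 * np.pi * t / 24) + rng.standard_normal(n_hours)

f, Pxx = signal.periodogram(activity, fs=1 / 3600)  # fs of the aggregated series
peak_f = f[np.argmax(Pxx[1:]) + 1]                  # skip the DC bin
print(1 / peak_f / 3600)                            # ~24 (hours): circadian peak
```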
| 371
|
signal denoising
|
Algorithms for removing oscillations?
|
https://dsp.stackexchange.com/questions/15171/algorithms-for-removing-oscillations
|
<p>I am interested in removing oscillations from a signal to capture the lower-frequency variations, similar to the objective of <a href="https://dsp.stackexchange.com/questions/9671/how-to-remove-the-periodic-oscillations-from-a-signal">this problem</a>. The oscillations vary in frequency in the time domain, so wavelet shrinkage seems to be a reasonable option, but most of the literature on wavelet shrinkage is applied to denoising, where the noise to be shrunk is i.i.d. Gaussian. Low-frequency oscillations are generally autocorrelated.</p>
<p>Is it still technically sound to apply wavelet shrinkage for removing higher frequency components even though they are not noise? Is there a better method? I am not aware of too many spatially-adaptive low-pass filters which have a convenient interpretation as wavelets.</p>
<p>Below are examples of two of such signals. As you can see, the oscillations occur with different frequencies along the space/time domain (x-axis). I have tried Fourier smoothing, but it does not seem to leave any relevant features. (I have also tried truncated SVD, but it also does a horrible job removing the oscillations.)</p>
<p><img src="https://i.sstatic.net/5K2Vh.png" alt="enter image description here"></p>
| 372
|
|
signal denoising
|
Filter away sinusoidal noise properly
|
https://dsp.stackexchange.com/questions/46033/filter-away-sinusoidal-noise-properly
|
<p>I have a stereo music signal corrupted by strong sinusoidal noise that varies over time. Here is the spectrogram of Left channel I plotted with Matlab.
As you can see there are 3 or 4 strong harmonics with frequency that varies over time.</p>
<p><a href="https://i.sstatic.net/noNzA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/noNzA.png" alt="Left Channel Spectrogram"></a></p>
<p>As a first try I computed the difference of left and right channels and it seems most of the harmonic noise disappears, so my guess is the noise in left and right are almost identical. Here is the difference spectrogram to prove this.</p>
<p><a href="https://i.sstatic.net/Eaez0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Eaez0.png" alt="Left - Right"></a></p>
<p>Which technique could I use now to filter away all the interfering harmonics while preserving the original music spectrum as well as possible?</p>
<ul>
<li>Notch filters?</li>
<li>Tracking the frequencies over time and adding a sinusoid with reversed phase? </li>
</ul>
<p>How do I estimate and use the phase information? I think phase is important to have good denoising, but would not know how to proceed.</p>
<p>Thanks a lot! :)</p>
|
<p>Definitely, try to reconstruct the sine and then subtract it. A notch filter will create artifacts as the desired signal changes, with notes stopping and starting, and percussion. Even if those notes are at pitches of frequencies outside the range of the notch filter.</p>
<p>To extract the unwanted sine, use a bandpass filter. While this has the same problem as the notch filter, with notes stopping and starting, you do know that the sine is steady (or slowly varying) over time, while the desired signal is always changing. Or may include some steadily droning elements - bagpipes, anyone? Hope that's outside the passband of the filter. You also have found how the sine and the signal are related between left and right, so make use of that to help extract the sine. The electronics engineer part of my brain wants to send the extracted sine to a PLL, to filter out any accidentally included signal, and provide smooth amplitude and phase data. Some sort of mathematical flywheel to steady the cycles.</p>
<p>The exact methods for extracting and purifying the sine depend on if you need to do this in real time, or near real time with a fraction of a second delay acceptable, or if you have the whole span of signal from beginning to end all at once such as with an .mp3 file.</p>
<p>I know this is the best way to do it, because Cassini's NAC camera had a similar problem. You have the audio version of what I had to fix years ago.</p>
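<p>A hedged sketch of the reconstruct-and-subtract idea for one short block, assuming the tone frequency has already been tracked from the spectrogram: fitting <code>a*cos + b*sin</code> at that frequency is linear least squares, and the fit supplies the amplitude and phase to subtract.</p>

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 8000.0
t = np.arange(4096) / fs
f0 = 440.0                                   # tracked interferer frequency (assumed known)
music = 0.3 * rng.standard_normal(t.size)    # stand-in for the wanted signal
corrupted = music + 0.8 * np.sin(2 * np.pi * f0 * t + 0.7)

# a*cos(2*pi*f0*t) + b*sin(2*pi*f0*t) is linear in (a, b): plain least squares.
A = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
coef = np.linalg.lstsq(A, corrupted, rcond=None)[0]
tone = A @ coef                              # reconstructed sine (amplitude + phase)
restored = corrupted - tone

print(np.std(restored - music))              # small: the tone is gone
```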
| 373
|
signal denoising
|
Size filtering on binary images
|
https://dsp.stackexchange.com/questions/59583/size-filtering-on-binary-images
|
<p>I have noisy images from which I extract the contours using OpencCV's <code>findCountours</code>, which performs binarization internally. This results in innumerable small outlines made of three or four pixels, which I would like to avoid.</p>
<p>I want to discard these, either before extracting the contours or during that step. I could binarize the image myself and use a <em>size filter</em> (i.e. erase the blobs that are tiny in area), or I could use a variant of the contouring function that implicitly discards very short outlines.</p>
<p>But I can't find OpenCV functions that look like this. Any hint ?</p>
<hr>
<p>Note that I don't want to use any denoising function, as it will degrade the signal. I don't want to erode either, for the same reason.</p>
<hr>
<p>Example of outlines to be discarded (don't mind the annotations):</p>
<p><a href="https://i.sstatic.net/rny0Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rny0Y.png" alt="enter image description here"></a></p>
<p>I am not asking to get rid of all noisy blobs, but of the smallest ones.</p>
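<p>One way to sketch such a size filter without eroding, using <code>scipy.ndimage</code> connected-component labelling as a stand-in (with OpenCV one could equivalently test <code>cv2.contourArea</code> per contour); the toy image and the 5-px threshold are illustrative only:</p>

```python
import numpy as np
from scipy import ndimage

# Label connected components of the binary image and blank those below
# an area threshold; no erosion, so surviving blobs are untouched.
binary = np.zeros((64, 64), dtype=bool)
binary[10:30, 10:30] = True        # a large blob to keep (400 px)
binary[50, 50] = True              # a 1-px speck
binary[5, 60:62] = True            # a 2-px speck

labels, n = ndimage.label(binary)
areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
keep = np.isin(labels, np.flatnonzero(areas >= 5) + 1)

print(n, int(keep.sum()))          # 3 components found, only the 400-px blob kept
```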
| 374
|
|
signal denoising
|
Using Signal to Noise Ratio (SNR) formula for machine learning metric evaluation
|
https://dsp.stackexchange.com/questions/94229/using-signal-to-noise-ratio-snr-formula-for-machine-learning-metric-evaluation
|
<p>As the SNR function is not commutative (different argument positions lead to different results), I am confused about how to use it as an evaluation metric.</p>
<p>I have this 3 signal; those are X_test, y_test, and y_pred. Those are common naming conventions for supervised learning models. So what are they for common people?</p>
<ul>
<li><strong>X_test</strong> is basically <strong>input</strong> of an ML model, which is noisy signal.</li>
<li><strong>y_test</strong> is basically a denoised signal, which is <strong>the ground truth of clean signal</strong>.</li>
<li><strong>y_pred</strong> is basically a denoised signal, which is a <strong>predicted clean signal by model</strong>.</li>
</ul>
<p><a href="https://i.sstatic.net/V09UUvth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V09UUvth.png" alt="enter image description here" /></a></p>
<p>The question is, how do I know how much the signal has been denoised?
There are some combinations, but I don't know which one.
Is it:</p>
<ol>
<li>SNR(X, y)</li>
<li>SNR(y, X)</li>
<li>SNR(y_test, y_pred)</li>
<li>SNR(y_pred, y_test)</li>
</ol>
<p>Sorry, I didn't define the SNR function because I am unsure which SNR formula I should use.</p>
<p>If you are wondering which SNR I am using (it might be wrong), here it is:</p>
<pre class="lang-py prettyprint-override"><code>def SNR(a, b):
# At this point I don't know what is a or b? And which one is denoised and noised.
# Calculate the power of the true signal
true_signal_power = np.sum(np.square(a), axis=1)
# Calculate the power of the noise (difference between true and predicted signals)
noise_power = np.sum(np.square(a - b), axis=1)
# Calculate SNR in decibels
snr = 10 * np.log10(true_signal_power / noise_power)
return snr
</code></pre>
|
<p>The remaining noise is simply the difference between the noisy signal and the clean signal. You can define 3 different SNRs here.</p>
<p>The SNR of your original signal is</p>
<p><span class="math-container">$$SNR_{original} = 10\log_{10}\frac{\sum y_{test}^2}{\sum (x-y_{test})^2}$$</span></p>
<p>After the denoising the SNR is</p>
<p><span class="math-container">$$SNR_{denoised} = 10\log_{10}\frac{\sum y_{test}^2}{\sum (y_{pred}-y_{test})^2}$$</span></p>
<p>The SNR improvement is then simply the difference of the two</p>
<p><span class="math-container">$$SNR_{improvement} = SNR_{denoised} - SNR_{original} = 10\log_{10} \frac{\sum (x-y_{test})^2}{\sum (y_{pred}-y_{test})^2}$$</span></p>
| 375
|
signal denoising
|
Wavelet transform, scalogram, detail and approximation coefficients
|
https://dsp.stackexchange.com/questions/93387/wavelet-transform-scalogram-detail-and-approximation-coefficients
|
<p>I understand tha the wavelet transform is about computing the coefficients to assign to scaled and translated versions of the chosen mother wavelet. The coefficients measure the correlation between the signal and shifted/scaled wavelet. That said, I know that the CWT is used to compute the scalogram while the DWT is used for other tasks (denoising and decomposition) and is implemented as a bank of low-pass and high pass filters...there is also a father wavelet, beside the mother wavelet, involved in the DWT...I don't think the CWT involves a father wavelet...Both the CWT and the DWT are discrete transforms where the scale and translation parameters are discretized in different ways...</p>
<p>I am confused about the difference between the CWT and the DWT. In the case of the DWT, are all the outputs from the low-pass and high-pass filters in the filter bank simply different low-frequency and high-frequency versions of the original input time-signal x(t)? The filter output signals are called approximation and detail coefficients cA and cD...I guess cA and cD are not time signals, correct? Can we add all the filter outputs, cA and cD, to get the original signal x(t)?</p>
<p>I understand that some mother wavelets can form orthogonal sets, some non-orthogonal sets, and some biorthogonal sets...</p>
<p>Thank you!</p>
| 376
|
|
signal denoising
|
Need to learn wavelet, suggest steps and resources
|
https://dsp.stackexchange.com/questions/14109/need-to-learn-wavelet-suggest-steps-and-resources
|
<p>I am looking for a good introduction to wavelets and wavelet transforms.</p>
<p>It should cover the following: Vector Spaces – Properties – Dot Product – Basis – Dimension, Orthogonality and Orthonormality – Relationship Between Vectors and Signals – Signal Spaces – Concept of Convergence – Hilbert Spaces for Energy Signals – Fourier Theory: Fourier series expansion, Fourier transform, Short-time Fourier transform, Time-frequency analysis.</p>
<pre><code> MULTI RESOLUTION ANALYSIS
</code></pre>
<p>Definition of Multi Resolution Analysis (MRA) – Haar Basis – Construction of General Orthonormal
MRA – Wavelet Basis for MRA – Continuous Time MRA Interpretation for the DTWT – Discrete
Time MRA – Basis Functions for the DTWT – PRQMF Filter Banks<br>
CONTINUOUS WAVELET TRANSFORMS </p>
<p>Wavelet Transform – Definition and Properties – Concept of Scale and its Relation with Frequency
– Continuous Wavelet Transform (CWT) – Scaling Function and Wavelet Functions (Daubechies
Coiflet, Mexican Hat, Sinc, Gaussian, Bi Orthogonal)– Tiling of Time – Scale Plane for CWT.</p>
<pre><code> DISCRETE WAVELET TRANSFORM
</code></pre>
<p>Filter Bank and Sub Band Coding Principles – Wavelet Filters – Inverse DWT
Computation by Filter Banks – Basic Properties of Filter Coefficients – Choice of Wavelet
Function Coefficients – Derivations of Daubechies Wavelets – Mallat's Algorithm for DWT – Multi
Band Wavelet Transforms Lifting Scheme- Wavelet Transform Using Polyphase Matrix
Factorization – Geometrical Foundations of Lifting Scheme – Lifting Scheme in Z –Domain.</p>
<pre><code> APPLICATIONS
</code></pre>
<p>Wavelet methods for signal processing- Image Compression Techniques: EZW–SPHIT Coding –
Image Denoising Techniques: Noise Estimation – Shrinkage Rules – Shrinkage Functions –
Edge Detection and Object Isolation, Image Fusion, and Object Detection.</p>
<p>Please suggest the steps,resources and materials to do the same.</p>
<p>Thanks. DeeRam</p>
|
<p>For wavelets I would recommend this book:
<a href="http://www.conceptualwavelets.com/book.html" rel="nofollow">http://www.conceptualwavelets.com/book.html</a></p>
<p>It does not include too much mathematics, yet it is in-depth.</p>
| 377
|
signal denoising
|
Detecting and removing interferences from a signal
|
https://dsp.stackexchange.com/questions/74625/detecting-and-removing-interferences-from-a-signal
|
<p>I am using MATLAB in order to denoise and remove interferences on a signal.</p>
<p>I used <code>wdenoise</code> to denoise my signal, which works by setting a threshold (for example SURE) for each scale and setting all coefficients below this threshold to zero (these coefficients represent noise). It works pretty well.</p>
<p>I also wanted to detect interferences, which in my case look like spikes (most of the time with high amplitude compared to the useful signal), and remove them. I tried to use wavelet decomposition (<code>wavedec</code>) to detect these spikes, setting a threshold for each scale: if a coefficient is greater than this threshold, I consider it an interference coefficient and set it to zero. Consider a decomposition of 8 levels: because of downsampling, thresholding D{8}, for example, gives me a signal with only 10 coefficients, so it is very hard to tell which coefficient belongs to the interference.</p>
<p>With this technique, I don't have great results. Any idea how to reconstruct the signal with the spikes removed? Maybe other techniques? Maybe use <code>swt</code> or <code>modwt</code> instead of <code>wavedec</code>?</p>
|
<p>I'd recommend using a <a href="https://en.wikipedia.org/wiki/Median_filter" rel="nofollow noreferrer">median filter</a> to smooth your signal. A median filter will get rid of the outliers, i.e. the spikes in your signal. The length of the median filter will have to be determined by how frequently the spikes appear in your signal. A sliding median filter is available in Python as <code>scipy.signal.medfilt</code> (note that <code>numpy.median</code> computes a single median value, not a sliding filter).</p>
<p>If you want to retrieve the spikes (the interference signal), you can simply subtract the filtered signal from the noisy signal to get the interference.</p>
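<p>A minimal sketch of both steps with <code>scipy.signal.medfilt</code> (the kernel size of 5 is an assumption that should be tuned to the spike width):</p>

```python
import numpy as np
from scipy.signal import medfilt

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)         # slowly varying useful signal
noisy = clean.copy()
noisy[50] += 10.0                         # isolated 1-sample spikes
noisy[300] -= 8.0

smoothed = medfilt(noisy, kernel_size=5)  # sliding median suppresses outliers
spikes = noisy - smoothed                 # residual ~ the interference alone
```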
| 378
|
signal denoising
|
Implementation of Cepstrum in Python
|
https://dsp.stackexchange.com/questions/69576/implementation-of-cepstrum-in-python
|
<p>Actually I want to denoise a signal. I know how to implement the FFT in Python to denoise it. This is the implementation which I use (from <a href="https://www.kaggle.com/theoviel/fast-fourier-transform-denoising" rel="nofollow noreferrer">this</a> Kaggle notebook):
<a href="https://i.sstatic.net/qhN9h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qhN9h.png" alt="enter image description here" /></a></p>
<p>But I don't know :</p>
<ol>
<li><p>How can I use cepstrum like FFT to denoise signal?</p>
</li>
<li><p>Are there any implementations in python like FFT code?</p>
</li>
</ol>
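<p>For reference, the real cepstrum itself is a short computation. A sketch only; how to threshold or lifter it for denoising is application-dependent, and the small floor inside the log is an assumption to avoid log(0):</p>

```python
import numpy as np

def real_cepstrum(x):
    # Real cepstrum: inverse FFT of the log-magnitude spectrum.
    spectrum = np.fft.fft(x)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # small floor avoids log(0)
    return np.real(np.fft.ifft(log_mag))

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 120 * t)
c = real_cepstrum(x)
```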
| 379
|
|
signal denoising
|
Correlated signals separation with reference
|
https://dsp.stackexchange.com/questions/47667/correlated-signals-separation-with-reference
|
<p>I have a signal S, which needs to be split into two components Sx and Sy.</p>
<p>And I have a signal X, which is a reference signal corresponding to Sx. </p>
<p>I need to perform this split of S and check that resulting Sy is ~Y and Sx ~X (I can use X in the process of filtering\separation, but not Y). </p>
<p>This sounds like a typical denoising task that can be accomplished with LMS\RLS filters, but in my case the signals <strong>X and Y are correlated and their bandwidths overlap</strong>. </p>
<p>They are also non-stationary:<br>
- sometimes X amplitude can decrease to almost 0, then Sx is also 0 and Sy should be estimated as simply S.<br>
- sometimes both X and Y are 0 for a short period of time.<br>
- most of the time X and Y are approx sinusoidal signals with similar central frequencies, one might be slightly shifted w.r.t another. </p>
<p>I tried regular LMS\RLS approaches - assume Sx is noise -> S = Sy + noise, but due to the cross-correlation between S, X and Y the algorithm's best guess is S = Sy.</p>
<p>1) How would you try to solve this?
2) What if we can use Y as well, at least for the beginning? Would it make it simpler?</p>
<p>3) More specific question. Now I have S,X,Y amplitudes in arbitrary units (adc counts). Is it better to scale them? Otherwise, I assume, choice of Sx and Sy will be dependent on amplitude ratios of the signals, or not?</p>
|
<p>I followed the link from your more recent version. I think your problem is better stated here.</p>
<p>1) I don't think you can solve it as stated. You will have a one unknown parameter family of solutions.</p>
<p>2) If you can use a portion of Y, and the coefficients remain the same, you can easily solve it.</p>
<p>$$ \vec S = a \vec X + b \vec Y $$</p>
<p>Dot this equation with $ \vec X $ and $ \vec Y $:</p>
<p>$$ ( \vec S \cdot \vec X ) = a ( \vec X \cdot \vec X ) + b ( \vec Y \cdot \vec X ) $$
$$ ( \vec S \cdot \vec Y ) = a ( \vec X \cdot \vec Y ) + b ( \vec Y \cdot \vec Y ) $$</p>
<p>This is a solvable system of two equations, two unknowns.</p>
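<p>A numpy sketch of solving that 2×2 system (the signals here are synthetic stand-ins, with $a=2$ and $b=-3$ chosen arbitrarily to verify the recovery):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.standard_normal(n)
Y = 0.5 * X + rng.standard_normal(n)   # deliberately correlated with X
S = 2.0 * X - 3.0 * Y                  # a = 2, b = -3 (to be recovered)

# Dot the model S = aX + bY with X and with Y (the two equations above).
G = np.array([[X @ X, Y @ X],
              [X @ Y, Y @ Y]])
rhs = np.array([S @ X, S @ Y])
a, b = np.linalg.solve(G, rhs)
```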
<p>3) Rescaling will not affect the values of $a$ and $b$.</p>
<p>Hope this helps.</p>
<p>Ced</p>
<hr>
<p>Followup</p>
<p>If you expect that $a$ and $b$ may be changing, then I think this problem is basically intractable. At least this approach would be. </p>
<p>These vector equations will apply to any subset of your signal. You will get the best values for $a$ in sections where you guess that $\vec Y$ is small, and the best values for $b \vec Y$ where $\vec X$ is small (as you already stated). Also, if you can determine sections where $ \vec X \cdot \vec Y = 0 $ (completely uncorrelated), you will also get good readings on $a$ and $b \vec Y$. In sections where they are highly correlated, you will get unreliable values for $a$ and $b \vec Y$.</p>
<p>If you have a rough idea of what $a$ is, you can find uncorrelated sections by looking for intervals where $ ( \vec S - a \vec X ) \cdot \vec X \approx 0 $, but this won't get you a better value for $a$ because that's what you started with.</p>
<p>There is no way to separate $b$ from $ b \vec Y $ unless you know something about $ \vec Y $.</p>
<p>Any kind of linear transform is going to look like this:</p>
<p>$$ T( \vec S ) = a T( \vec X ) + b T( \vec Y ) $$</p>
<p>I'm not sure if that could be helpful, I doubt it.</p>
| 380
|
signal denoising
|
Noisy signal filtering MATLAB
|
https://dsp.stackexchange.com/questions/30444/noisy-signal-filtering-matlab
|
<p>I'm currently working on rectifying the noisy respiratory signal shown below:</p>
<p><a href="https://i.sstatic.net/l90XF.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l90XF.jpg" alt="enter image description here"></a></p>
<p>I've already tried to filter the noise as you can see in the image below (full <a href="https://drive.google.com/file/d/0B6dMKbuezqN7UTJNOWh6VUlSSXM/view?usp=sharing" rel="nofollow noreferrer">image</a>):</p>
<p><a href="https://i.sstatic.net/xwOLd.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xwOLd.jpg" alt="enter image description here"></a></p>
<p>The red one is the noisy signal whereas the blue one is the one obtained after applying the non-linear detrending:</p>
<pre><code>dt_ecgl = detrend(ecgl);
opol = 6;
[p,s,mu] = polyfit(t,ecgnl,opol);
f_y = polyval(p,t,[],mu);
dt_ecgnl = ecgnl - f_y;
</code></pre>
<p>And the green one is the reference signal (the correct one) which I'm aiming to have as the result of filtering. So my question is: how can I achieve a better result? My goal is to have the blue and the green ones combined (the same exact shape); in other words, how can I denoise such a deformed signal in MATLAB or on any other platform?</p>
| 381
|
|
signal denoising
|
FFT/PSD/IFFT analysis on single axis piezoelectric accelerometer signals for curb impacts
|
https://dsp.stackexchange.com/questions/74449/fft-psd-ifft-analysis-on-single-axis-piezoelectric-accelerometer-signals-for-cur
|
<p>I'm trying to denoise the signal by performing PSD analysis and followed by IFFT. Ultimately, I want to generate Force and Displacement plots from the denoised acceleration signal.</p>
<p>Noisy Acceleration Signal(<span class="math-container">$a_z$</span> vs t):</p>
<p><a href="https://i.sstatic.net/nNBVe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNBVe.png" alt="enter image description here" /></a></p>
<p>PSD analysis of the signal:</p>
<p><a href="https://i.sstatic.net/F63SY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F63SY.png" alt="enter image description here" /></a></p>
<p>Setting a PSD > 0.001 in the code to filter out frequencies having less power than 0.001.</p>
<p>After IFFT(<span class="math-container">$a_z$</span> vs t):</p>
<p><a href="https://i.sstatic.net/O2S7h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O2S7h.png" alt="enter image description here" /></a></p>
<p>The denoised signal makes sense since I'm recording z-acceleration on curb impacts which comes out to be a series of impulses.</p>
<p>I'm a novice in signal processing and I don't know whether windowing would have given a better result.</p>
<p>Further questions: Is it possible to find the force distribution from the acceleration signal? I've searched for answers but none have given me a good idea.</p>
|
<p>Filtering by zeroing out bins is not a recommended approach as it will introduce significantly more time domain ringing, as detailed here</p>
<p><a href="https://dsp.stackexchange.com/questions/6220/why-is-it-a-bad-idea-to-filter-by-zeroing-out-fft-bins">Why is it a bad idea to filter by zeroing out FFT bins?</a></p>
<p>Consider using the <code>firls</code> function available in MATLAB / Octave or Python scipy.signal to design optimized multiband filters around your frequencies of interest.</p>
<p>However the frequency content will be driven by repetition in the data; if the actual application will repeat (or not) at unknown and variable intervals, any such filtering techniques will not be useful.</p>
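<p>A sketch of such a multiband design with <code>scipy.signal.firls</code> (the band edges and sampling rate below are placeholders to adapt to the actual data):</p>

```python
import numpy as np
from scipy.signal import firls, freqz

fs = 1000.0       # assumed sampling rate
numtaps = 101     # firls requires an odd number of taps
# Pass 50-150 Hz, stop elsewhere, with generous transition bands.
bands   = [0, 20, 50, 150, 180, fs / 2]
desired = [0, 0, 1, 1, 0, 0]
taps = firls(numtaps, bands, desired, fs=fs)

# Inspect the achieved magnitude response.
w, h = freqz(taps, worN=2048, fs=fs)
passband_gain = np.abs(h)[(w > 60) & (w < 140)]
stopband_gain = np.abs(h)[w < 10]
```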
| 382
|
signal denoising
|
Smoothing for damped wave signal (fixed variance noise, but changing SNR)
|
https://dsp.stackexchange.com/questions/34143/smoothing-for-damped-wave-signal-fixed-variance-noise-but-changing-snr
|
<p>My colleagues and I are arguing about how to smooth a damped wave signal.</p>
<p>This signal is corrupted by white noise of a steady magnitude.</p>
<p>However the signal damps out as it goes along. So, the SNR and the noise as a standard dev of the signal range, both decrease.</p>
<p>My colleagues say I can denoise it the same way the entire time, because the magnitude of the noise is the same. I say I have to adapt, because the SNR is the guiding factor. </p>
<p>Who's right?</p>
| 383
|
|
signal denoising
|
Intro Question to Signal Processing (Low-Pass Filter)
|
https://dsp.stackexchange.com/questions/14897/intro-question-to-signal-processing-low-pass-filter
|
<p>I have a noisy signal file in Matlab and I have to denoise the polluted signal using a discrete Fourier transform.</p>
<p>I'm asked to perform the fourier transform, then take its absolute value. Then study/examine the absolute values to then implement a low-pass filter for the actual sound (and corresponding high-pass filter for the background noise) in Matlab. </p>
<p>Any ideas on general approaches? This is an intro course so suggestions shouldn't be too formal. Also this is homework so only hints please.</p>
|
<p>Based on your description it sounds like the idea is to use the FFT to figure out where your signal energy is in terms of frequencies. Once you know that you know what the passband of your filter should be, and from that you can decide on a reasonable cutoff frequency. You want to cutoff as much of the noise energy as possible which means you want to have the cutoff frequency as close to the passband frequency as possible, but you need some space for the transition band to make the filter implementable with a reasonable number of taps.</p>
<p>By the way, I often like to look at the signal energy in the log scale because it helps to get a better understanding of the weaker portions of the signal. In Matlab you do it like so:</p>
<pre><code>plot(20*log10(abs(fft(signal))))
</code></pre>
| 384
|
signal denoising
|
Understanding noise removal method using wavelets
|
https://dsp.stackexchange.com/questions/71212/understanding-noise-removal-method-using-wavelets
|
<p>I am trying to understand how the wavelet transform can be used to denoise a time series or signal and how to plot the scalogram image. My signal has a lot of fluctuations and as such I am finding it difficult to denoise. Moreover, to plot the scalogram I need to know the frequency. I don't know the frequency for this particular kind of time series, obtained from a dynamical system of the form of the logistic map, given by:
<span class="math-container">$$x[n] = 4\big(x[n-1]\big)\big(1-x[n-1]\big)$$</span>
Systems similar to this type of dynamical systems are the Lorenz, Mackey-Glass. Can somebody please help:</p>
<ol>
<li><p>How to properly denoise the signal? As observed, from the plot the denoised signal <code>denoised</code> does not look exactly the same as the clean signal <code>x</code> (black dotted line), so what are other parameters or wavelet types that I could use and how to decide which ones to use. Is there a rule of thumb?</p>
</li>
<li><p>What is the sampling and Nyquist frequency for this kind of signal, and</p>
</li>
<li><p>how to plot the scalogram image: I used <code>wt()</code> to obtain the wavelet coefficients. After that how to plot the image of scalogram so that X axis is time and Y axis is Frequency?</p>
<pre><code> x(1) = 0.1; % initial condition (can be anything from 0 to 1)
M = 50; %number of data points (length of the time series)
for n = 2:M, % iterate
x(n) = 4*x(n-1)*(1-x(n-1));
end
%add noise
x_noise = awgn(x,10,'measured');
%denoise using wavelet
denoised = wdenoise(x_noise, 3,'Wavelet','db3',...
'DenoisingMethod','Bayes',...
'ThresholdRule','Median',...
'NoiseEstimate','LevelIndependent');
figure
plot(x_noise)
axis tight
hold on
plot(denoised,'r')
fb = cwtfilterbank('SignalLength',M);
[cfs,frq] = wt(fb,denoised);
</code></pre>
</li>
</ol>
<p><a href="https://i.sstatic.net/BmPAv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BmPAv.png" alt="plot" /></a></p>
|
<p>Your signal (with initial point x0 = 0.1) is already noise-like and high frequency. It will be hard to distinguish it from the added white noise... One thing you can do is to interpolate (resample) the time series by a large enough factor and then later add the white noise. This will artificially help to separate the noise spectrum from your signal spectrum, but the signal lengths will also be increased. Whether this is what you should do is up to you!</p>
<p>The following modification apparently improves the noise removal, but fundamentally it is separating the noise spectrum from the signal. So whether this is a viable option is up to your application:</p>
<pre><code> M = 50; % number of data points (length of the time series)
x(1) = 0.5; % initial condition (can be anything from 0 to 1)
for n = 2:M, % iterate
x(n) = 4*x(n-1)*(1-x(n-1));
end
U = 10; % interpolation factor
xU = resample(x,U,1); % just interpolate the obtained sequence
% add noise onto the interpolated sequence xU
x_noise = awgn(xU , 10 , 'measured');
%denoise using wavelet
denoised = wdenoise(x_noise, 3,'Wavelet','db3',...
'DenoisingMethod','Bayes',...
'ThresholdRule','Median',...
'NoiseEstimate','LevelIndependent');
denoised = resample(denoised,1,U); % downsample de-noised sequence back
figure
plot(x_noise(1:10:end)) % down-sample noisy sequence on the fly for display
axis tight
hold on
plot(denoised,'r')
plot(x,'c--');
legend('noisy','denoised','clean');
fb = cwtfilterbank('SignalLength',M);
[cfs,frq] = wt(fb,denoised);
</code></pre>
<p>The result looks like :</p>
<p><a href="https://i.sstatic.net/A8EVg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A8EVg.png" alt="enter image description here" /></a></p>
| 385
|
signal denoising
|
Unable to remove audio noise using weak signal power calculated from FFT as threshold
|
https://dsp.stackexchange.com/questions/94622/unable-to-remove-audio-noise-using-weak-signal-power-calculated-from-fft-as-thre
|
<h1>Question</h1>
<p>Please help me understand the cause of, and a solution to, the problem of being unable to remove the audio noise by using the signal power as a filter threshold. If the approach is not correct, please advise on better or correct ways.</p>
<h1>Background</h1>
<p>Try to remove the audio noises in the <a href="https://github.com/openai/whisper/blob/main/tests/jfk.flac" rel="nofollow noreferrer">JFK speech</a>. For instance, <strong>... my fellow Americans <code><noise></code> ask not what your country can do for you <code><noise></code> ...</strong>, which are the red rectangles in the waveform.</p>
<p><a href="https://i.sstatic.net/CbhG6N2rl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbhG6N2rl.jpg" alt="enter image description here" /></a></p>
<p>Ideally I am aiming to get something similar to the mel spectrogram below, where the noise is removed in between phrases.</p>
<p><a href="https://i.sstatic.net/V00zmd7tl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V00zmd7tl.png" alt="enter image description here" /></a></p>
<h2>Data</h2>
<pre><code># https://github.com/openai/whisper/blob/main/tests/jfk.flac
wget -P ./data https://github.com/openai/whisper/blob/main/tests/jfk.flac
</code></pre>
<h2>Idea</h2>
<p>The idea is to use the power of the signal to remove noise, as Professor Steve Brunton showed in <a href="https://youtu.be/s2K1JfNR7Sc?t=220" rel="nofollow noreferrer">Denoising Data with FFT</a>. The code excerpt below is from CH02_SEC02_2_Denoise.ipynb in <a href="http://databookuw.com/CODE_PYTHON.zip" rel="nofollow noreferrer">CODE.zip + DATA.zip (PYTHON CODE by Daniel Dylewsky, unzip into same folder)</a>:</p>
<pre><code>fhat = np.fft.fft(f,n) # Compute the FFT
PSD = fhat * np.conj(fhat) / n # Power spectrum (power per freq)
indices = PSD > 100 # Find all freqs with large power
fhat = indices * fhat # Zero out small Fourier coeffs. in Y
ffilt = np.fft.ifft(fhat) # Inverse FFT for filtered time signal
</code></pre>
<p><a href="https://i.sstatic.net/TBrCRJjWl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TBrCRJjWl.png" alt="enter image description here" /></a></p>
<h1>Problem</h1>
<p>I implemented the code, but it could not remove the noise between phrases, seen as the white cloudy parts in the mel spectrogram. I am not sure why the weak-power (white cloudy) parts cannot be removed.</p>
<p><a href="https://i.sstatic.net/LRlJPtRdl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRlJPtRdl.png" alt="enter image description here" /></a></p>
<h2>Code</h2>
<p>First, find the power range where the human voice (<code>300 - 3000 Hz</code>) exists. From the power/frequency diagram, I should keep everything above <code>-100 dB</code>; hence this is the <code>power threshold</code> to filter with.</p>
<pre><code>import librosa
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
data, original_sampling_rate = librosa.load("./data/jfk.flac", sr=None)
sampling_rate = original_sampling_rate
N = num_total_samples = data.shape[0]
# --------------------------------------------------------------------------------
# Power per Frequency
# --------------------------------------------------------------------------------
dft = np.fft.rfft(data, norm="forward")
amplitude = 2 * np.abs(dft)
db = 20 * np.log10(amplitude)
frequency = np.fft.rfftfreq(n=len(data), d=1/sampling_rate)
plt.figure().set_figwidth(8)
plt.plot(frequency, db)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Power (dB)")
plt.grid(visible=True, which='major')
plt.grid(visible=True, which='minor', linestyle='--', alpha=0.5)
plt.xscale("log")
# --------------------------------------------------------------------------------
# Mel Spectrogram
# --------------------------------------------------------------------------------
S = librosa.feature.melspectrogram(
y=data, sr=sampling_rate, n_mels=128, fmax=sampling_rate/2, n_fft=1024, hop_length=512
)
S_dB = librosa.power_to_db(S, ref=1, top_db=None, amin=1e-10)
plt.figure().set_figwidth(8)
librosa.display.specshow(S_dB, x_axis="time", y_axis="mel", sr=sampling_rate, fmax=sampling_rate/2)
plt.colorbar()
</code></pre>
<p><a href="https://i.sstatic.net/547fdlHOl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/547fdlHOl.jpg" alt="enter image description here" /></a></p>
<p>Extract the area surrounded by the threshold <code>-100 dB</code> and the <code>300-3000</code> band.</p>
<pre><code># --------------------------------------------------------------------------------
# Cut low/high frequencies with scipy.signal.butterworth
# --------------------------------------------------------------------------------
HIGHCUT = 3000
LOWCUT = 300
ORDER = 10
bandpassed = butter_bandpass_filter(signal=data, lowcut=LOWCUT, highcut=HIGHCUT, sampling_rate=sampling_rate, order=ORDER)
# --------------------------------------------------------------------------------
# Cut low power < -100 dB
# --------------------------------------------------------------------------------
THRESHOLD = -100
fhat = np.fft.rfft(a=bandpassed, norm="forward", axis=-1)
fhat_amplitude = 2 * np.abs(fhat)
fhat_power = 20 * np.log10(fhat_amplitude)
# zero out < -100 dB
fhat_filter_indices = fhat_power > THRESHOLD
fhat_power_filtered = fhat * fhat_filter_indices
plt.figure().set_figwidth(8)
plt.plot(frequency, 20 * np.log10(2 * np.abs(fhat_power_filtered)))
plt.grid(visible=True, which='major')
plt.grid(visible=True, which='minor', linestyle='--', alpha=0.5)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Power (dB)")
plt.grid()
plt.xscale("log")
</code></pre>
<p><a href="https://i.sstatic.net/MguM6UpBl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MguM6UpBl.png" alt="enter image description here" /></a></p>
<p>However, the noise in between the phrases was not removed.</p>
<pre><code>data_filtered = np.fft.irfft(fhat_power_filtered, norm="forward")
plt.figure().set_figwidth(8)
plt.grid()
librosa.display.waveshow(y=data_filtered, sr=sampling_rate)
S_filtered = librosa.feature.melspectrogram(y=data_filtered, sr=sampling_rate, n_mels=128, fmax=sampling_rate/2)
S_filtered_dB = librosa.power_to_db(S_filtered)
plt.figure().set_figwidth(8)
librosa.display.specshow(S_filtered_dB, x_axis="time", y_axis="mel", sr=sampling_rate, fmax=sampling_rate/2)
plt.colorbar()
</code></pre>
<p><a href="https://i.sstatic.net/TpNOBjFJl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpNOBjFJl.jpg" alt="enter image description here" /></a></p>
<hr />
<h1>Workaround</h1>
<p>If I cut the sound data by its amplitude, the resulting mel spectrogram looks like what is needed.</p>
<pre><code>data, original_sampling_rate = librosa.load("./data/jfk.flac", sr=None)
sampling_rate = original_sampling_rate
data = data * (data > 0.1) # <--- Cut the small amplitude data
# Same code as above from here.
</code></pre>
<p><a href="https://i.sstatic.net/fwHd9B6tl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fwHd9B6tl.png" alt="enter image description here" /></a></p>
<p>However, the sound has high-pitched noise and the voice sounds like that of someone with a stuffed nose. Also, if power is formed from amplitude as in <code>power = (amplitude ** 2) / 2</code>, why does cutting by amplitude remove the noise in between, while cutting by power does not?</p>
<pre><code>from IPython.display import (
Audio,
display
)
display(Audio(data=data_filtered, rate=sampling_rate))
</code></pre>
<hr />
<h2>Butterworth filter</h2>
<pre><code>def butter_bandpass(
lowcut,
highcut,
sampling_rate: int,
order: int = 5
):
"""
Args:
signal: signal data to filter
lowcut: low cut-off frequency
highcut: high cut-off frequency
sampling_rate: sampling rate used to sample the signal
order: filter order
Returns: filtered output
"""
nyquist = 0.5 * sampling_rate
low = lowcut / nyquist
high = highcut / nyquist
sos = sp.signal.butter(N=order, Wn=[low, high], btype='band', analog=False, output='sos')
return sos
def butter_bandpass_filter(
signal: np.ndarray,
lowcut,
highcut,
sampling_rate: int,
order: int = 5
):
"""Butterworth bandpath filter
Args:
signal: signal data to filter
lowcut: low cut-off frequency
highcut: high cut-off frequency
sampling_rate: sampling rate used to sample the signal
order: filter order
"""
sos = butter_bandpass(
lowcut=lowcut,
highcut=highcut,
sampling_rate=sampling_rate, order=order)
y = sp.signal.sosfilt(sos=sos, x=signal)
return y
</code></pre>
<hr />
<h1>Update</h1>
<p>It looks like zeroing bins in the frequency domain is not a good idea, according to <a href="https://dsp.stackexchange.com/a/6224/73490">Why is it a bad idea to filter by zeroing out FFT bins?</a>, but I have no idea what the explanation means.</p>
<blockquote>
<p>Zeroing bins in the frequency domain is the same as multiplying by a rectangular window in the frequency domain. Multiplying by a window in the frequency domain is the same as circular convolution by the transform of that window in the time domain. The transform of a rectangular window is the Sinc function (<span class="math-container">$\sin(\omega t)/\omega t$</span>). Note that the Sinc function has lots of large ripples and ripples that extend the full width of time domain aperture. If a time-domain filter that can output all those ripples (ringing) is a "bad idea", then so is zeroing bins.</p>
</blockquote>
|
<p>I still do not completely understand the exact mechanism, but using a threshold to zero out bins in the frequency domain was the wrong way, as explained in <a href="https://dsp.stackexchange.com/a/6224/73490">Why is it a bad idea to filter by zeroing out FFT bins?</a>, which is further broken down in <a href="https://dsp.stackexchange.com/a/94624/73490">Explanation to layman on the effect of zero-ing in frequency domain</a></p>
<blockquote>
<p><strong>Zeroing out a bin is the same as subtracting out a perfect sine wave of a specific bin centered frequency</strong>.</p>
<p>Most (or all) real world signals are imperfect. If the signal you are trying to filter out is slightly different from an exactly perfect sinewave of that bin frequency, the subtraction will leave behind a lot of "junk".</p>
<p>Signals that are "between bins" are different from any (or any small subset of) basis vectors of an FFT, thus can't be removed by subtracting a basis vector, which are all perfect sine waves of bin centered frequencies, e.g. frequencies whose period is an exact integer sub-multiple of the FFT's length. The subtraction will leave behind, often nasty, "junk".</p>
</blockquote>
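The "junk" described in the quote can be seen in a few lines of NumPy (my own illustration, not taken from the linked answers): zeroing the bin of an exactly bin-centered sine removes it completely, while a half-bin-offset sine leaves a large residual even after its two nearest bins are zeroed.

```python
import numpy as np

N = 256
n = np.arange(N)
on_bin = np.sin(2 * np.pi * 10 * n / N)     # exactly on bin 10
off_bin = np.sin(2 * np.pi * 10.5 * n / N)  # halfway between bins 10 and 11

def zero_bins(x, bins):
    """'Filter' by zeroing FFT bins, then transform back."""
    X = np.fft.rfft(x)
    X[list(bins)] = 0
    return np.fft.irfft(X, n=len(x))

resid_on = zero_bins(on_bin, [10])        # the bin-centered sine vanishes
resid_off = zero_bins(off_bin, [10, 11])  # the off-bin sine leaves leakage
```

The off-bin residual is the leakage spread across all the other bins, which no small set of zeroed bins can remove.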
<hr />
<h1>Weiner Filter</h1>
<p>As suggested by Jdip, I experimented with it and got a better result, with the background noise in the speech reduced.</p>
<pre><code>wienered = sp.signal.wiener(im=data, mysize=28)  # <--- 28 was found by tweaking the mysize parameter.
bandpassed = butter_bandpass_filter(signal=wienered, lowcut=LOWCUT, highcut=HIGHCUT, sampling_rate=sampling_rate, order=ORDER)
S = librosa.feature.melspectrogram(y=bandpassed, sr=sampling_rate, n_mels=128, fmax=sampling_rate/2)
S_dB = librosa.power_to_db(S)
plt.figure().set_figwidth(12)
librosa.display.specshow(S_dB, x_axis="time", y_axis="mel", sr=sampling_rate, fmax=sampling_rate/4)
plt.colorbar()
</code></pre>
<p><a href="https://i.sstatic.net/51nrSMgH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51nrSMgH.png" alt="enter image description here" /></a></p>
| 386
|
signal denoising
|
wavelet reconstruction
|
https://dsp.stackexchange.com/questions/48191/wavelet-reconstruction
|
<p>I am doing an M.Tech project now, working on the discrete wavelet transform. I have completed the wavelet decomposition, but I am stuck on reconstructing the original signal. Please help me out.</p>
<p>code--</p>
<pre><code>f=10;
fs=200;
amp_x = 14; % amplitude for sinusoid 1
amp_y = 9;  % amplitude for sinusoid 2
% Time vector
t=(0:1:5000);
% Create a sine wave
x = amp_x * (sin(2*pi*f/fs*t));
%filter coefficients
LO_D =[ 0.7071 0.7071];
HI_D =[-0.7071 0.7071];
C = conv(LO_D,x);
k = 2;
CA= C(2:k:length(C));
D = conv(HI_D,x);
g=2;
CD= D(2:g:length(D));
</code></pre>
<p>CA and CD are my approximation and detail coefficients.
Next I square the CD coefficients so as to remove artefacts.</p>
<pre><code>for i=1:length(CD)
cd1(i)=CD(i).^2;
end
</code></pre>
<p>After squaring, I apply a threshold and remove the values greater than it, so I get a denoised signal.</p>
<p>Now I need to reconstruct the denoised signal back to the original. How do I do this without using built-in commands? Please answer me.</p>
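For the Haar pair used above, the inverse can be written explicitly without toolbox commands: each pair of input samples is recovered from one (CA, CD) pair. A NumPy sketch of the question's analysis step followed by that reconstruction (the signal values are just an example):

```python
import numpy as np

s = 1 / np.sqrt(2)
x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])  # example signal

# analysis, mirroring the MATLAB code: convolve, keep every 2nd sample
CA = np.convolve([s, s], x)[1::2][:len(x) // 2]
CD = np.convolve([-s, s], x)[1::2][:len(x) // 2]

# synthesis: for Haar, each pair of input samples comes from one (CA, CD) pair
x_rec = np.empty_like(x)
x_rec[0::2] = (CA + CD) * s
x_rec[1::2] = (CA - CD) * s
```

In MATLAB terms this is upsampling CA and CD by 2, convolving with the reconstruction filters [0.7071 0.7071] and [0.7071 -0.7071], and summing; after thresholding, feed the modified CD into the same step.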
| 387
|
|
signal denoising
|
Adding multiple noise sources with target SNR with ECG data using the MIT BIH Noise Stress Test Database
|
https://dsp.stackexchange.com/questions/85700/adding-multiple-noise-sources-with-target-snr-with-ecg-data-using-the-mit-bih-no
|
<p>I'm new to signal processing, and I believe I understand what additive noise is. However, while reading several ECG denoising papers, I've noticed that some combine multiple noise sources from the <a href="https://physionet.org/content/nstdb/1.0.0/" rel="nofollow noreferrer">MIT BIH Noise Stress Test database</a>. For example, <a href="http://www.cs.newpaltz.edu/%7Elik/publications/Jilong-Wang-NC-2019.pdf" rel="nofollow noreferrer">here</a> and <a href="https://www.semanticscholar.org/paper/Noise-Reduction-in-ECG-Signals-Using-Fully-Chiang-Hsieh/1161e5997e5b4e974e17ffac6ccf0fab7d9a5565" rel="nofollow noreferrer">here</a>.</p>
<p>I'm also aware that applying even a single noise source to ECG data isn't straightforward. Luckily, Physionet provides a tool, <code>nst</code>, whose documentation describes how we can <a href="https://physionet.org/physiotools/wag/nst-1.htm" rel="nofollow noreferrer">apply a single noise source</a></p>
<p>The <code>nst</code> tool allows specifying a target SNR and takes as input a single noise source and an ECG record. My questions here is:</p>
<p><em>How can I apply multiple noise sources to reach a target SNR?</em>
In every paper I've read so far where the authors applied multiple noise sources, they simply mention that they "combined the noise sources" and compared their methods using the same SNR as with a single noise source.</p>
<p>One thing I would like to rule out, is that the authors just applied the <code>nst</code> tool with the same SNR target for each noise source. As in, rather than checking whether the combined noise sources amount to the target SNR they applied the <code>nst</code> tool once for each noise source additively, where at each step they input the same target SNR.</p>
<p>For example, let record <code>100</code> be the noise-free record and 10 dB our target SNR. Then,</p>
<ol>
<li>Apply <code>nst</code> with target 10dB with noise source <code>EM</code>, outputs <code>100_em</code></li>
<li>Apply <code>nst</code> with target 10dB with noise source <code>BW</code>, outputs <code>100_em_bw</code></li>
<li>Apply <code>nst</code> with target 10dB with noise source <code>ma</code> outputs, <code>100_em_bw_ma</code></li>
</ol>
<p>I'd greatly appreciate it if anyone could also point me to the appropriate theory; it feels like I'm missing something really fundamental, since authors collectively do not seem to believe it is important to mention how exactly the noise sources are combined.</p>
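One plausible reading of "combined the noise sources" (an assumption on my part, not something stated in the papers): scale each source to equal power, sum them, then scale the combined noise once against the clean record so the mixture hits the target SNR. A NumPy sketch with synthetic stand-ins for the em/bw/ma records:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 3600)
clean = np.sin(2 * np.pi * 1.0 * t)                   # stand-in for the ECG record
noises = [rng.normal(size=t.size) for _ in range(3)]  # stand-ins for em/bw/ma

target_snr_db = 10.0
# mix the sources at equal power, then scale the *combined* noise once
combined = sum(n / np.sqrt(np.mean(n**2)) for n in noises)
gain = np.sqrt(np.mean(clean**2) /
               (np.mean(combined**2) * 10**(target_snr_db / 10)))
noisy = clean + gain * combined

achieved = 10 * np.log10(np.mean(clean**2) / np.mean((gain * combined)**2))
```

Scaling the combined noise, rather than running the `nst` step once per source at the same target, guarantees the final mixture is at the stated SNR.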
| 388
|
|
signal denoising
|
What do I measure in a Sound Sample Buffer to remove noise from an audio file using the Kalman Filter?
|
https://dsp.stackexchange.com/questions/81547/what-do-i-measure-in-a-sound-sample-buffer-to-remove-noise-from-an-audio-file-us
|
<p>I am developing a computer program that removes or reduces the background noise from an audio file using the Simple Kalman Filter. I have implemented the Kalman Filter and a way of obtaining the "sample buffer" for the audio file.</p>
<p>I understand how the Kalman Filter works in terms of the purpose of each of the variables. This is my first time attempting Digital Signal Processing, however, so I am not sure what my measurements are supposed to be in order to use the Kalman Filter correctly.</p>
<p>I've looked at lots of research papers and articles on <a href="https://ggbaker.ca/data-science/content/filtering.html" rel="nofollow noreferrer">Noise Filtering</a>, but they have not been helpful.</p>
<p>I'm getting the sense that I need to determine the frequency of the noise or determine the noise wave, and remove it, perhaps by adding the inverse of the noise wave. And that I need to estimate the signal using the signal+noise input, or estimate the noise? Is this correct?</p>
<p>How do I model the Kalman Filter in this particular application in order to perform background noise removal?</p>
<p>I am trying to achieve a similar output to <a href="https://audiodenoise.com/" rel="nofollow noreferrer">https://audiodenoise.com/</a></p>
<p><strong>UPDATE 2/15/2022</strong></p>
<p>From my additional research, it seems the simple Kalman Filter deals with white noise, and I need to estimate the signal. The Kalman Gain should be higher for the samples that contain speech and low for the samples that do not contain speech.</p>
<p>I still don't understand, though, <em>what</em> I am measuring. Even if I measured every sample in the audio file individually as a one-dimensional state, then what would I do with this value?</p>
<p>Currently, I have the individual samples as measurements and replace my current sample with the calculated estimate for that iteration. It results in a decrease in amplitude for the entire file, which when re-amplified, reveals the noise again.</p>
<p><strong>Research Papers dealing with Kalman Filter for Audio Denoising</strong></p>
<ul>
<li><a href="https://www.researchgate.net/publication/325622133_Speech_Background_Noise_Removal_Using_Different_Linear_Filtering_Techniques" rel="nofollow noreferrer">https://www.researchgate.net/publication/325622133_Speech_Background_Noise_Removal_Using_Different_Linear_Filtering_Techniques</a></li>
<li><a href="https://www.researchgate.net/publication/261356618_Speech_enhancement_using_Kalman_Filter_for_white_random_and_color_noise" rel="nofollow noreferrer">https://www.researchgate.net/publication/261356618_Speech_enhancement_using_Kalman_Filter_for_white_random_and_color_noise</a></li>
<li><a href="https://github.com/mahdimohajeri/Speech-Enhancement-Kalman-Filter" rel="nofollow noreferrer">https://github.com/mahdimohajeri/Speech-Enhancement-Kalman-Filter</a></li>
<li><a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.417.7052&rep=rep1&type=pdf" rel="nofollow noreferrer">https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.417.7052&rep=rep1&type=pdf</a></li>
</ul>
<p><strong>UPDATE 2/21/2022</strong></p>
<p>I am currently looking into <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.417.7052&rep=rep1&type=pdf" rel="nofollow noreferrer">another research paper</a> that seems to be much more specific in how I can implement speech enhancement for the Kalman Filter</p>
|
<p>Using the <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.417.7052&rep=rep1&type=pdf" rel="nofollow noreferrer">AMS-based modulation-domain Kalman Filtering framework</a>, this can be done.</p>
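Before the full modulation-domain framework, it may help to see the bare predict/update cycle on a one-dimensional state. This minimal sketch is my own illustration (a random-walk state model with assumed variances q and r), showing what is "measured" (each noisy sample) and what the gain does:

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a random-walk state: x_k = x_{k-1} + w_k,
    z_k = x_k + v_k, with assumed process/measurement variances q and r."""
    x, p = 0.0, 1.0            # initial state estimate and its variance
    out = []
    for z in measurements:
        p = p + q              # predict: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update: correct by the innovation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```

Applied directly to raw audio samples this simply low-passes the waveform, which matches the amplitude loss the questioner describes; the cited papers instead run such filters on speech-model or modulation-domain states.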
| 389
|
signal denoising
|
custom raw compression
|
https://dsp.stackexchange.com/questions/81432/custom-raw-compression
|
<p>I'm planning to acquire between 50k and 200k images per day with a 50MPixels (or 68MPixels or 130MPixels) sensor; I'll be acquiring the raw data (10 or 12 or 14 bits) from the sensor through SLVS-EC and creating a raw file of my own design. The raw bitrate from the sensor may go up to 75.2 Gbps.</p>
<p>I may have to store 50k-250k images per day (e.g., 17.5TB if 250k images are 70MB-50MPixels each). I need to keep high-quality images (in particular, colors must remain accurate and textures fully detailed, hence the lossless or only lightly lossy compression and nothing below 10 bits per channel), and flexibility in editing (hence the raw format).</p>
<p>Also, consecutive images will share a lot of content, since I may have a 2-24Hz frame rate at capture; a first processing pass will also drop (delete) between 10% and 50% of the images, so keyframe-based compression may not be suitable.</p>
<p>Since I need to keep the storage cost as low as possible without compressing too hard (maybe go below 30-50MB per raw image), I'm planning to allow compression within this raw file; the compression can be lossless or lightly lossy. I'm thinking about wavelets and auto-learnt dictionaries (patches and sparse coding) for the compression, but this is not a requirement.</p>
<p>I will not release any SDK or raw image, so there are no requirements on the standards and adoption side. I'll very likely use an FPGA for signal processing (up to 75.2Gbps from the sensor), since I need very high IO and fast signal processing; the whole package will be embedded, as compact as possible, and reasonably light (say less than 1-2kg).</p>
<p>About the images: it will be a natural environment with natural daylight; it may include shadows and sky with sun, and hence high dynamic range, but also rich (high-frequency) textures which must be preserved. So I likely won't add further denoising, but I want to keep the flexibility in color processing: in particular the ability to change the signal amplification and the white/black balance.</p>
<p>Do you have thoughts and pieces of advice about the compression strategy for this raw format ? In particular do you think video compression algorithms (eg., HEVC) could be adapted to raw bayered data ?</p>
|
<p>Raw files are (ideally) the raw readout of a sensor. Suitable for research, or if you want to eke out all possible information from a sensor using fancy offline processing, now or in 10 years. In some cases, you might not need all of the information contained in a full raw image, but be satisfied with having maximum exposure freedom, i.e. avoiding the tonemapping/tone curves baked into something like JPEG preprocessing.</p>
<p>Do you specifically need to store it in Bayer format, or could you do debayer and use some off the shelf YCbCr 4:2:0 compression? What kind of compute platform do you have between sensor and storage (a PC?)</p>
<p>If file size is a major concern, something like x264 with >8 bits and high-ish bitrate is going to be hard to beat with home-grown tools in terms of quality per bit or quality per cycle unless you have very specific requirements or a lot of skill and time.</p>
<p>Edit:
Responding to some of the comments below.
I would borrow a nice camera, take two snapshots of the scene in question, read the raw files using dcraw or some similar tool and import into matlab/python.</p>
<p>Then you can play with debayer, fixed whitebalance (?), whitepoint and blackpoint and gamma (I think that h26x tends to be limited to 10 bits, but note that this is usually nonlinear quantization - more resolution in the blacks where it matters more). Finally, do a 3x3 matrix to a pseudo YCbCr format, save and pass it to a lossy encoder. Observe the file size of the first frame (intra) and the second (inter). That tells you a lot about how compressible the stream will be.</p>
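The gamma-then-3x3-matrix step described above can be sketched as follows (BT.709-style coefficients chosen as an example; substitute whatever matrix fits the sensor):

```python
import numpy as np

def to_ycbcr(rgb, gamma=1 / 2.2):
    """Apply gamma (nonlinear quantization), then a BT.709-style 3x3 matrix
    mapping linear-ish RGB in [0, 1] to a pseudo-YCbCr triple."""
    rgbg = np.clip(rgb, 0.0, 1.0) ** gamma
    M = np.array([[ 0.2126,  0.7152,  0.0722],   # Y
                  [-0.1146, -0.3854,  0.5000],   # Cb
                  [ 0.5000, -0.4542, -0.0458]])  # Cr
    return rgbg @ M.T
```

Carefully reversing the steps (inverse matrix, inverse gamma) lets you compare the decoded output against the original raw values.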
<p>Then check the output file, carefully reversing the steps above. Check if quality is sufficient for your needs. Be prepared to do some fiddling until the stars align.</p>
| 390
|
signal denoising
|
What is the best to add accelerometer noise to PPG signals?
|
https://dsp.stackexchange.com/questions/93172/what-is-the-best-to-add-accelerometer-noise-to-ppg-signals
|
<p>I have the BIDMC PPG signals. I'm trying to add accelerometer data as noise to the PPG signals and then denoise them using deep generative models like GANs, VAEs, and diffusion models. So, what is the best way to add the noise so that it doesn't distort the signal so much that we lose all the original information?</p>
|
<p>I assume that your question refers to the stochastic noise sources of the sensor, and not to the deterministic ones (misalignment, scale factors, non-orthogonalities, etc.)</p>
<p>Accelerometers tend to have three different stochastic noise sources:</p>
<ul>
<li>White Noise</li>
<li>Flicker noise (bias instability)</li>
<li>Random walk</li>
</ul>
<p>You can find a detailed explanation of each of these stochastic noise sources in these papers <a href="https://ieeexplore.ieee.org/document/9955423" rel="nofollow noreferrer">[1]</a> and <a href="https://www.sciencedirect.com/science/article/abs/pii/S026322411730578X" rel="nofollow noreferrer">[2]</a>.</p>
<p>The applicability of each noise source depends on several factors, but most importantly the quality of your sensor. A MEMS sensor will have bigger noise levels than a navigation-grade sensor. Overall though, over short acquisition times (less than a few minutes for a MEMS sensor, a few hours for a high-end one), the assumption of white noise only can hold.</p>
<p>The noise values you can expect can be taken directly from your sensor datasheet or derived from an Allan variance curve.</p>
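A minimal way to synthesize the first and third of these sources for corrupting a PPG record (flicker noise omitted; the standard deviations are placeholder values that would come from a datasheet or Allan variance curve):

```python
import numpy as np

def accel_noise(n, fs, wn_std=0.02, rw_std=0.001, seed=0):
    """White noise plus random walk (integrated white noise). Flicker noise
    is omitted here, since it matters mostly over long acquisitions."""
    rng = np.random.default_rng(seed)
    white = wn_std * rng.normal(size=n)
    walk = np.cumsum(rw_std * rng.normal(size=n) / np.sqrt(fs))
    return white + walk
```

Adding `accel_noise(len(ppg), fs)` to a PPG trace then gives a corrupted/clean pair for training the generative denoiser, with the distortion controlled by the two standard deviations.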
| 391
|
signal denoising
|
How does noise reduction for speech recognition differ from noise reduction that is supposed to make speech more "intelligible" for humans?
|
https://dsp.stackexchange.com/questions/42422/how-does-noise-reduction-for-speech-recognition-differ-from-noise-reduction-that
|
<p>This is a question that has interested me for some time now, mainly because I'm working on noise reduction for an existing speech recognition system myself.</p>
<p>Most papers on noise reduction techniques seem to focus on how to make speech more intelligible for humans, or how to improve vague terms like "speech quality". </p>
<p>I'm sure that, using criteria like these, you can identify filters that make noisy speech signals easier to listen to for humans.
However, I am not sure that these criteria can simply be adapted when trying to evaluate speech signals that have been denoised to improve the accuracy of speech recognition system. </p>
<p>I don't really find papers that discuss this difference. Do speech intelligibility and speech quality correlate with the accuracy of speech recognition systems? Are there objective measures that can evaluate how "good" a denoised speech signal will be for a speech recognition system, for example if also given the original clean speech? Or is the only way to find out how good your noise reduction technique is, to train the speech recognition system on the denoised data and look at the accuracy?</p>
<p>I'd be happy if someone could point me into the right direction, or maybe give some papers that discuss this. Thanks in advance!</p>
|
<blockquote>
<p>I don't really find papers that discuss this difference.</p>
</blockquote>
<p>There are whole books on the subject:</p>
<p><a href="https://www.microsoft.com/en-us/research/publication/robust-automatic-speech-recognition-a-bridge-to-practical-applications-1st-edition-306-pages/" rel="nofollow noreferrer">Robust Automatic Speech Recognition 1st Edition</a></p>
<blockquote>
<p>Do speech intelligibility and speech quality correlate with the accuracy of speech recognition systems? </p>
</blockquote>
<p>Usually no, usually noise reduction corrupts features in unpredictable way and reduces speech recognition accuracy.</p>
<blockquote>
<p>Are there objective measures that can evaluate how "good" a denoised speech signal will be for a speech recognition system, for example if also given the original clean speech? Or is the only way to find out how good your noise reduction technique is, to train the speech recognition system on the denoised data and look at the accuracy?</p>
</blockquote>
<p>Second. Moreover feature-based noise reduction actually removes important information from the spectrum altogether so you can not repair an accuracy of the clean system. For that reason modern approach is to perform multi-style training on noisy data instead of using noise reduction algorithm beforehand. It ends in more accurate recognition.</p>
| 392
|
signal denoising
|
Why adaptive filter does not work in my application
|
https://dsp.stackexchange.com/questions/23229/why-adaptive-filter-does-not-work-in-my-application
|
<p>I ran into a problem when trying to denoise a signal. Actually, it is a simple simulation. The signal is the sum of a step signal (the info I wish to get) and a sinusoidal one (the noise I wish to remove). See below.<img src="https://i.sstatic.net/NcyNr.jpg" alt="(a) The noise (b) The signal and (c) Signal + the noise">
However, whatever parameters I try for the adaptive filter, it simply cannot filter out the sinusoidal noise from the step signal. See the figure.<img src="https://i.sstatic.net/REaIX.jpg" alt="Using adaptive filter"></p>
<p>Any suggestions will be greatly appreciated! </p>
<p>Below is the matlab code</p>
<pre><code>clear all
close all
%% walking induced noise
t = [1:5000]*1e-2;
f = 0.1;
WalkNoise = 1*sin(2*pi*f.*t);%+1.5*cos(3*pi*f.*t);
WN = WalkNoise + 0.05*randn(size(WalkNoise));
figure
subplot(3,1,1)
plot(t,WN);
title('Noise');
%% signal
h1 = 14; % height of the signal 1
h2 = 18; % height of the signal 2
L = 5000; % total length of the signal
bp =2500; % location of break point
x1 = h1*ones(1,bp);
x2 = h2*ones(1,L-bp);
Sig = [x1,x2];
Sig = Sig + 0.1*randn(size(Sig));
subplot(3,1,2)
plot(t,Sig);
title('Signal');
%% walking-induced-noise + signal
NoisySig = Sig + WN;
subplot(3,1,3)
plot(t,NoisySig); hold on
title('Signal + noise');
%% adaptive filtering
figure
plot(t,NoisySig); hold on
title('Signal + noise');
mu = 0.001; % LMS step size.
ha = adaptfilt.lms(20,mu);
[y,e] = filter(ha,WN,NoisySig);
plot(t,e,'r');
legend('Signal+noise','Filtered using Adaptive filter');
</code></pre>
|
<p>I tried your code, changing adaptfilt.lms to adaptfilt.nlms<br>
and decreasing the step size to 0.0001.<br>
These conditions gave me better results.<br>
NLMS is better than LMS because there is stability in learning the filter coefficients; the LMS algorithm can change the filter coefficients drastically.</p>
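The normalization that makes NLMS more stable can be made concrete in a few lines (a NumPy sketch of an NLMS noise canceller rather than the toolbox object; the cancellation setup and the toy noise path in the usage below are illustrative assumptions):

```python
import numpy as np

def nlms_cancel(ref, noisy, taps=20, mu=0.5, eps=1e-6):
    """NLMS noise canceller: adapt FIR weights so the filtered reference
    tracks the noise component of `noisy`; the error e is the cleaned signal."""
    w = np.zeros(taps)
    e = np.zeros(len(noisy))
    for n in range(taps - 1, len(noisy)):
        u = ref[n - taps + 1:n + 1][::-1]   # most recent reference samples
        e[n] = noisy[n] - w @ u
        w += mu * e[n] * u / (eps + u @ u)  # step normalized by input power
    return e
```

The division by `u @ u` is the normalization: the effective step adapts to the reference power instead of being fixed. With the question's large DC step in the desired signal, a small `mu` (or removing the mean first) is still advisable.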
| 393
|
signal denoising
|
Instantaneous velocity and displacement from acceleration signal using a proper filtering method
|
https://dsp.stackexchange.com/questions/48105/instantaneous-velocity-and-displacement-from-acceleration-signal-using-a-proper
|
<p>First I need to mention that I'm new to signal processing. Here is the situation:
I have an acceleration time series derived from an accelerometer.</p>
<p><a href="https://i.sstatic.net/rZjYW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rZjYW.jpg" alt="Acceleration time series"></a></p>
<p>I wanted to apply a filtering method like a high-pass filter to denoise the signal. Using the Fast Fourier Transform, I first need to decide on a cutoff frequency value.
After looking at fft(acc) I thought the cutoff frequency should be 5 Hz.
I must note that for the x-axis (frequency) I just inverted time (Fr = 1/t); I hope that's correct.</p>
<a href="https://i.sstatic.net/DUSWM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DUSWM.jpg" alt="fft of acceleration signal"></a></p>
<p>The estimated FFT values are complex (not real numbers), so I forced the code to work on the real part only, not the imaginary part. Here is the code I used to filter the acceleration signal noise with an FFT high-pass filter:</p>
<pre><code> % Matlab_code
% Filtering signals by fft-HighPass
%%
acc=xlsread('accel_signals.xlsx',1,'B2:B4036');
t=xlsread('accel_signals.xlsx',1,'A2:A4036');
figure(1)
plot(t,acc,'b')
xlabel('Time(s)');ylabel('Acceleration (m/s^2)');
hold on
%%
Ts=mean(diff(t)); % Sampling rate
Fs=1/Ts; % Sampling Frequency
Fc=5; % Cutoff frequency = 5 hertz
fft_aac=real(fft(acc));
signal_temp=[acc,1./t];
signal=signal_temp;
for i=1:length(signal_temp)
if signal_temp(i,2)<5
signal(i,1)=0;
else
signal(i,1)=signal_temp(i,1);
end
end
filtered_acc=real(ifft(signal(:,1)));
plot(t,filtered_acc,'r')
%
</code></pre>
<p>Now the graph below is the result. The blue line is the noisy signal and the red line is the filtered one. The filtered acceleration isn't even in the range of the acceleration data; it's simply a noisy straight line.</p>
<p>Here I listed my questions:</p>
<ol>
<li>Why is that happening? The filtering didn't work well!</li>
<li>Do I need to use a high-pass or a low-pass filter, by the way?</li>
<li>What is the right way to choose the cutoff frequency?</li>
</ol>
<p>Please also help me by commenting on the code and on the way I approached filtering the signal noise.</p>
<p>And after having a good filtering, do I need to use numerical integration (like the trapezoidal integration method) to measure the instantaneous velocity and position?
Thanks in advance.</p>
<p><a href="https://i.sstatic.net/jlS5g.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jlS5g.jpg" alt="enter image description here"></a></p>
|
<p>There are a number of conceptual problems with your code. The first has to do with filtering by zeroing out FFT bins. This is covered in:</p>
<p><a href="https://dsp.stackexchange.com/questions/6220/why-is-it-a-bad-idea-to-filter-by-zeroing-out-fft-bins">Why is it a bad idea to filter by zeroing out FFT bins?</a></p>
<p>The second has to do with just taking the real part of the fft. I don't understand why you did this but this is the wrong way to do the wrong thing. </p>
<p>If your near-term priority is to process your data, I suggest you discard most of your code and take an entirely different approach: use the built-in MATLAB timeseries class and one of the built-in filter methods on the object you create. The plot methods are also easy to use, and you can set units on the object you create. This is the most direct way to a "correct" result with your data.</p>
<p>If you want to understand how to write your own correct code, this will take more time. There is more than one way to filter data and choosing which takes some time to learn. You can look at:</p>
<p><a href="https://dsp.stackexchange.com/questions/427/what-resources-are-recommended-for-an-introduction-to-dsp">What resources are recommended for an introduction to DSP?</a></p>
<p>Essentially, I'm pointing you at a number of books that are several hundreds of pages of study, which is the long term answer to your problem. You will know what is correct. </p>
<p>If you do some Googling, you can probably find something in the middle that suits your needs but I can't recommend anything because I don't know what you know and don't know. </p>
<p>If you'll permit the analogy: if one sees someone drowning, do you throw them a flotation device or try to teach them to swim?</p>
<p>Someone else here may provide a suitable solution. </p>
<p>It is to your credit that you have shown genuine effort.</p>
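A minimal sketch of the "middle ground" approach hinted at above: a zero-phase Butterworth high-pass in place of zeroing FFT bins, followed by trapezoidal integration to velocity. The sampling rate, cutoff, and toy drift term are assumptions for illustration, not values from the question's data:

```python
import numpy as np
from scipy import signal, integrate

fs = 100.0                                       # assumed sampling rate
t = np.arange(0.0, 10.0, 1.0 / fs)
acc = np.sin(2 * np.pi * 8.0 * t) + 0.02 * t     # toy: 8 Hz motion plus slow drift

# zero-phase Butterworth high-pass instead of zeroing FFT bins
sos = signal.butter(4, 5.0, btype="highpass", fs=fs, output="sos")
acc_hp = signal.sosfiltfilt(sos, acc)

# trapezoidal numerical integration to get velocity
vel = integrate.cumulative_trapezoid(acc_hp, t, initial=0.0)
```

A second integration of `vel` would give displacement; in practice any residual drift should be removed again between integration steps.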
| 394
|
signal denoising
|
How can I apply a Gabor filter to a sine waveform?
|
https://dsp.stackexchange.com/questions/14258/how-can-i-apply-a-gabor-filter-to-a-sine-waveform
|
<p>I am an ECE student, and I am doing my project on underwater communication.</p>
<p>The main concept of my project is to denoise the signal by using UFB and WPD algorithms, and to give that output to a matched filter. I have already generated the sine wave, added noise, and given it to the matched filter.</p>
<p>How do I write the code for UFB (uniform filter bank)?</p>
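I can't speak to the exact UFB variant the project's references use, but a basic uniform (equal-bandwidth) filter bank can be sketched like this (Python/SciPy rather than MATLAB, with Butterworth bands as an assumed design):

```python
import numpy as np
from scipy import signal

def uniform_filter_bank(x, fs, n_bands=4, order=4):
    """Split [0, fs/2] into n_bands equal-width bands, applied zero-phase.
    Edge bands use low-/high-pass; interior bands use band-pass filters."""
    edges = np.linspace(0.0, fs / 2, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0:
            sos = signal.butter(order, hi, btype="lowpass", fs=fs, output="sos")
        elif hi >= fs / 2:
            sos = signal.butter(order, lo, btype="highpass", fs=fs, output="sos")
        else:
            sos = signal.butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(signal.sosfiltfilt(sos, x))
    return np.array(bands)
```

Each row of the output is one subband; denoising is then applied per band and the bands are summed to resynthesize.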
| 395
|
|
signal denoising
|
Am I generating audio signal with a given particular SNR value correctly?
|
https://dsp.stackexchange.com/questions/94774/am-i-generating-audio-signal-with-a-given-particular-snr-value-correctly
|
<p>I am using this as my reference:
<a href="https://www.mathworks.com/help/deeplearning/ug/denoise-speech-using-deep-learning-networks.html" rel="nofollow noreferrer">https://www.mathworks.com/help/deeplearning/ug/denoise-speech-using-deep-learning-networks.html</a></p>
<blockquote>
<p>Add washing machine noise to the speech signal. Set the noise power
such that the signal-to-noise ratio (SNR) is zero dB.</p>
<pre><code>noise = audioread("WashingMachine-16-8-mono-1000secs.mp3");

% Extract a noise segment from a random location in the noise file
ind = randi(numel(noise) - numel(cleanAudio) + 1,1,1);
noiseSegment = noise(ind:ind + numel(cleanAudio) - 1);

speechPower = sum(cleanAudio.^2);
noisePower = sum(noiseSegment.^2);
noisyAudio = cleanAudio + sqrt(speechPower/noisePower)*noiseSegment;
</code></pre>
</blockquote>
<p>If I understand the method correctly, assuming both the signal and noise values are within the range +/-1.0, it should not really matter what the average levels of the signal and noise are: the combined signal will have an SNR of 0 dB. I want to construct an audio file that contains a sine tone as the signal and some recorded noise wav file as the noise. This is an outline of how I am doing it so far (in MATLAB):</p>
<pre><code>clear; clc; close all
Fs = 48000;
td = 5; % seconds
ns = td * Fs;
T = 1/Fs;
F = 1000; % sine wave frequency
t = (0:ns -1)*T;
Amp = 0.05;
signal = Amp * sin(2*pi*F*t);
signal = signal'; % match array dimension with noise file
[noise_file, Fs] = audioread('noisy.wav');
speechPower = sum(signal.^2);
noisePower = sum(noise_file.^2);
x_snr = 251.1886; % multiplier = 10^(desired_snr/10), eg: 10^(24/10) = 251.1886
noisyAudio_xdB = signal + sqrt(speechPower/(x_snr * noisePower) ).*noise_file;
audiowrite('noisy_file.wav', noisyAudio_xdB, Fs);
</code></pre>
<ol>
<li>Is my thinking correct so far? Will this give me an audio file with 24 dB SNR?</li>
<li>Is matlab's <code>R = snr(X, Fs, N)</code> function a good way to verify this? According to matlab help,</li>
</ol>
<blockquote>
<pre><code>R = snr(X, Fs, N) computes the signal to noise ratio (snr) in dBc, of
the real sinusoidal input signal, X, with sampling rate, Fs, and number
of harmonics, N, to exclude from computation when computing snr. The
default value of Fs is 1. The default value of N is 6 and includes the
fundamental frequency.
</code></pre>
</blockquote>
<p>For the 0 dB and 10 dB SNR files that I created, this function gave me SNR values of -0.3111 and 9.7008 respectively, which are close, but since these are noise files generated by calculation I was hoping the SNR would be even closer than this. Is this an acceptable margin of error for this snr function?</p>
<ol start="3">
<li>Is the way I calculated the multiplier above [<code>multiplier = 10^(desired_snr/10)</code>] correct? I think this is the correct way since I am calculating SNR using the power (or maybe more correctly, energy) of the digital audio signals [<code>sum(signal.^2);</code>], but I just want to make sure it's not multiplier = 10^(desired_snr/20).</li>
</ol>
|
<p>It appears to be correct, assuming we define SNR as the total power in the signal relative to the total power in the noise, although I would proceed a little differently. To note briefly first, the SNR of actual concern may be quite different from this depending on the bandwidth of interest and the noise density within that bandwidth (for example, the noise file may have strong noise components at frequencies well away from the area of concern that could be filtered out with no effect on the signal, although we would, perhaps unfairly, measure them as noise).</p>
<p>Here are my thoughts assuming we stick with the simpler first definition:</p>
<p>"Signal" as speech is being represented as a tone, and the level is manually set. The assumption with the subsequent scaling that the OP proceeds to do is that the speech power and the noise power are the same, but this isn't clear that's the case. That said, I would omit setting Amp and simply normalize the two as follows, allowing for the later introduction of any arbitrary speech file:</p>
<pre><code>speech_scale = std(signal)
noise_scale = std(noise_file)
</code></pre>
<p>We can then ensure the two are the same power as follows (and this can then be combined as one line with setting the SNR as done next; I did it separately to make the operations clearer):</p>
<pre><code>noise = speech_scale/noise_scale * noise_file
</code></pre>
<p>This has scaled the rms magnitude (as the square root of the power quantity). If we want or need the power, it would not be the sum of all the samples squared (that is the energy) but the average of that (the mean sum of squares). The results would be proportionally the same, but using the mean sum of squares keeps large waveforms from growing to unmanageable levels. Still, there is no need, as we'll see next, to actually compute the power.</p>
<p>To set the SNR, we want the speech signal to be 24 dB higher. There is no need for us to convert anything to power since we can work with magnitude quantities directly in the conversion to dB levels (using 20Log10(magnitude ratio) instead of 10Log10(power ratio)). Therefore, we increase the signal magnitude by 24 dB as follows:</p>
<pre><code>signal = signal * 10^(24/20)
</code></pre>
<p>And the noisy audio waveform would then be:</p>
<pre><code>noisyAudio_24dB = signal + noise
</code></pre>
<p>The estimates for the SNR are reasonable and will depend on the total number of samples involved. As another option to measure SNR, consider using the correlation coefficient, given that you have a known reference waveform. I detail the relationship between SNR and correlation coefficient in <a href="https://dsp.stackexchange.com/a/30854/21048">this post</a>. What is useful about doing it this way, rather than the sine-wave-specific estimate, is that it works with any known reference waveform as the signal.</p>
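The recipe above can be checked numerically (a NumPy sketch; the 1 kHz tone and Gaussian noise are stand-ins for the speech and noise files):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
noise = rng.normal(size=fs)

# step 1: equalize the rms magnitudes of the two components
noise = noise * (np.std(tone) / np.std(noise))

# step 2: lift the signal by the target SNR in magnitude terms (20, not 10)
target_db = 24.0
tone_scaled = tone * 10**(target_db / 20)
noisy = tone_scaled + noise

achieved = 20 * np.log10(np.std(tone_scaled) / np.std(noise))
```

If the result is written to a fixed-range wav file, scale the mixture down afterwards so it stays within +/-1.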
| 396
|
signal denoising
|
Noise sensitivity of the (classical) Empirical Mode Decomposition routine
|
https://dsp.stackexchange.com/questions/61692/noise-sensitivity-of-the-classical-empirical-mode-decomposition-routine
|
<p>I tried to apply a MATLAB Empirical Mode Decomposition routine to denoise a signal, basically retaining only the last IMFs, with a criterion based on the mode energy.</p>
<p>To validate the routine, I have built a synthetic signal with added Gaussian noise + a sinusoidal disturbance. I noticed that the EMD routine (at least mine) seems very sensitive to the noise. In fact, if I launch it twice, the generated noise is of course different and the IMFs are also quite different.</p>
<p>Do you have any suggestions on how to "stabilize" the routine?</p>
|
<p>Indeed, at least in my experience, computing IMFs can be sensitive to borders, impulse signals and noise realizations. As you are interested in wavelets, note that in <a href="https://doi.org/10.1109/LSP.2003.821662" rel="nofollow noreferrer">Empirical mode decomposition as a filter bank</a> a link is made with DWT:</p>
<blockquote>
<p>we report here on numerical experiments based on fractional Gaussian
noise. In such a case, it turns out that EMD acts essentially as a
dyadic filter bank resembling those involved in wavelet decompositions</p>
</blockquote>
<p>Finally, there are regularized EMD schemes that apparently stabilize the EMD decomposition; see for instance: <a href="http://perso.ens-lyon.fr/nelly.pustelnik/pdf/eusipco2012a.pdf" rel="nofollow noreferrer">A multicomponent proximal algorithm for empirical mode decomposition</a> and related papers by this team.</p>
| 397
|
signal denoising
|
Some questions about the intuition of the DWT
|
https://dsp.stackexchange.com/questions/68852/some-questions-about-the-intuition-of-the-dwt
|
<p>Assuming a DWT of a signal of length 8 with Haar filter taps. At the lowest level, I end up with a3 and d3 both of length 1, d2 of length 2 and d1 of length 4 which is the same number of coefficients of the original signal and which I can plot on a dyadic grid.</p>
<p>In contrast to the WPT in the DWT there is only basis representation for a given decomposition level ([a3,d3,d2,d1] for the 3 level decomposition in the example above). From this sequence one can choose to recover the original signal or zero out some coefficients and then reconstruct for denoising purposes and the likes.</p>
<p>The dyadic grid representation is analogous to the CWT and so displays signal energy over a non-uniformly discretized 2-dimensional grid of time and scale/frequency.</p>
<p>I often see in books a representation in which the coefficients are just concatenated and plotted on a 1-dimensional frequency axis just next to the original signal axis.</p>
<p><a href="https://i.sstatic.net/mzOoa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mzOoa.png" alt="enter image description here" /></a></p>
<p>I find this representation quite confusing since it suggests to directly compare both graphs although they are displayed over time (figure a) and frequency (figure b) respectively.</p>
<p>Now my questions:</p>
<ol>
<li><p>What is the intuition behind the stacked frequency plot of the coefficients, i.e. what is the best way to read it? Going from right to left there is a decreasing time-resolution but increasing freq resolution. In reference to the signal above, I would have the 4 d1 coefficients on the most right followed by the 2 d2 coefficients and so on. Is the correct way of interpreting this to say, that in the spectrum of [pi/2T, pi/T] I have frequency information of the original signal at just 4 different points in time (due to downsampling with a factor of 2) for [pi/4T, pi/2T] one has freq information at only 2 points in time but being narrowed down to half the frequency interval than before and so on?</p>
</li>
<li><p>Does this mean, that the signal representation I described above has now been transformed from a time-series (the original signal) to a pseudo tf-representation and only becomes a pure time-signal again by sending it again through a synthesis filter bank? Can both representations still be considered equivalent? If one would not have the axis description, how would it be possible to recognize, if one is represented the original signal or a decomposition?</p>
</li>
<li><p>As for a WPT there are now a large number of possible signal representations and one has to keep track of the chosen one, if a later reconstruction is desired. Limiting ourselves to a level basis of level 3 in the above example, what is now the meaning of the approx and detail coefficients in every freq subband? For the DWT one can argue, that d3 are the signal details at the coarsest scale that are necessary to go from a3 to a2 and so forth. So the details capture the missed-out fine-grained signal details that were neglected along the decomposition.
Is there a similar meaning to a WPT decomposition? I have a hard time transferring this intuition to the more general signal deconstruction of a WPT</p>
</li>
<li><p>In all these transforms, usually boundary problems arise whenever using a filter that has a larger number of taps than 2. So decomposing a length 8 signal with dB2 filters already yields more than 4 values both for a1 and d1 at the first decomposition level. If I were to plot the signal at this level, would I have to cut off the additionally introduced values somehow or would the signal at this level just naturally comprise then more values than the original time signal?</p>
</li>
</ol>
<p>Thanks a lot for any help to deepen my understanding of this complex matter</p>
|
<p>First, one should be cautious about processing short signals like these. I am unsure about the length of 3 for <code>a3</code> and <code>d3</code>. Now, the many questions:</p>
<ol>
<li>I would not call them "frequency plots", but stacked subband plots. They are just illustrations of the behavior of the wavelet coefficients. For each subband, different scalings may be used. Another rendering is stacking the intensity of coefficients in an image. As for the interpretation of the points: the coefficients in each subband are located in time and scale, so the interpretation in <span class="math-container">$[\pi/2T, \pi/T]$</span>, <span class="math-container">$[\pi/4T, \pi/2T]$</span> is only approximate (because of aliasing, subsampling, etc.).</li>
<li>No, one is concatenating chunks of coefficients in different subbands and at different rates</li>
<li>Yes, subbands of a given WPT can generally be interpreted in dyadic portions of the time-scale axis</li>
<li>Haar wavelets (2-tap filters) have the rare property (among standard wavelet filters) that they don't overlap at a given scale. For longer filters with 2-band wavelets, those filters overlap and cause trouble at the extremities of a finite-length signal. There is a lot of work on non-expansive expansions, relying on the symmetry/antisymmetry of the wavelet.</li>
</ol>
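<p>The coefficient counts quoted in the question can be reproduced with a bare-bones orthonormal Haar analysis step (a minimal pure-NumPy sketch, not a full DWT library; the test signal is arbitrary):</p>

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar analysis filter bank."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (lowpass + downsample)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (highpass + downsample)
    return a, d

x = np.array([4.0, 6, 10, 12, 8, 6, 5, 5])   # length-8 test signal
a1, d1 = haar_step(x)     # lengths 4, 4
a2, d2 = haar_step(a1)    # lengths 2, 2
a3, d3 = haar_step(a2)    # lengths 1, 1

coeffs = [a3, d3, d2, d1]
print([len(c) for c in coeffs])   # [1, 1, 2, 4] -> 8 coefficients in total

# The transform is orthonormal, so energy is preserved across the dyadic grid:
energy = sum(float(np.sum(c**2)) for c in coeffs)
assert np.isclose(energy, np.sum(x**2))
```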
| 398
|
signal denoising
|
What is the best method to filter a signal where baseline and the signal of interest have overlapping frequency range?
|
https://dsp.stackexchange.com/questions/92054/what-is-the-best-method-to-filter-a-signal-where-baseline-and-the-signal-of-inte
|
<p>I am reading out the movement of a motor arm using a Hall sensor and a magnet pair. The hall sensor measures the distance between the sensor and the magnet.
The motor arm is being moved with a band-limited Gaussian white-noise signal (0-300 Hz). Due to the movement being very small, the stimulus being a white-noise, and the inherent noise of the Hall sensor, I have a terrible Signal to Noise Ratio. I am trying to improve the SNR by filtering. But the problem is that, the frequencies in the sensor output in the absence of any movement of the motor (baseline noise, in blue) hugely overlap with the frequencies of the actual movement (0-600 Hz) and noise (stimulus, in orange).</p>
<p>The figure below shows the generated white-noise signal that I use to actuate the motor (in black). Grey background marks the presence of a movement stimulus to the motor arm (this signal is in orange in rest of the figures). Notice the 0 baseline outside the grey region in subplot 1. The subplot below it shows the unfiltered hall sensor output. Notice the high baseline noise in the absence of any actual movement. So, this "baseline noise" is the inherent noise of the Hall sensor.</p>
<p><a href="https://i.sstatic.net/dbV2T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dbV2T.png" alt="Generated signal and hall sensor output" /></a></p>
<p>Without filtering, the baseline and stimulus are indistinguishable.
<a href="https://i.sstatic.net/OOWg0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OOWg0.png" alt="Raw data from Hall sensor" /></a></p>
<p>The powers are slightly different for frequencies below 150 Hz, but the noise and the signal have the same power above 150 Hz. I do need better SNR in this range.</p>
<p><a href="https://i.sstatic.net/Duwva.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Duwva.png" alt="PSD of unfiltered baseline noise and stimulus with noise" /></a></p>
<p>I tried filtering the signal with 10th order Butterworth with cut-off at 600 Hz (Because I need 300 Hz to be represented properly). I can now distinguish baseline from stimulus but SNR is still very bad, as visible in the PSD.</p>
<p><a href="https://i.sstatic.net/FEvy1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FEvy1.png" alt="Filtered Hall sensor output" /></a>
<a href="https://i.sstatic.net/BxuZy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BxuZy.png" alt="PSD of filtered baseline noise and stimulus with noise" /></a></p>
<p>I want to use the noise in the baseline to denoise the stimulus. How should I do it?</p>
|
<p>This would be a good application for using the Cross Spectral Density between the input and output. The Cross Spectral density (which is Coherence when normalized) provides the relative magnitude and phase for the system transfer function based on the correlated components between the input noise-like signal and the resulting output noise-like signal while attenuating the independent noise components present regardless of input. Please see these other posts for further explanation of this as well as further details in computing the CSD:</p>
<p><a href="https://dsp.stackexchange.com/a/85712/21048">https://dsp.stackexchange.com/a/85712/21048</a></p>
<p><a href="https://dsp.stackexchange.com/questions/66219/intuitive-explanation-of-coherence/66237#66237">Intuitive explanation of coherence</a></p>
<p>The result will have best fidelity where the input signal has energy in the frequency domain, so for this purpose a white noise input is ideal. The stronger the input can be while still in the linear range of the system, the better the resulting estimate will be (in terms of SNR).</p>
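<p>A minimal sketch of the H1 transfer-function estimate <code>Pxy / Pxx</code> (the segment counts and the toy plant, a gain of 0.5 buried in independent sensor noise, are illustrative assumptions; in practice <code>scipy.signal.csd</code> and <code>welch</code> do this averaging with proper windowing and overlap):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
nseg, nfft = 128, 512
x = rng.standard_normal(nseg * nfft)         # white-noise drive (the stimulus)
y = 0.5 * x + rng.standard_normal(x.size)    # output: true gain 0.5, poor SNR

# Average cross- and auto-spectra over segments (a bare-bones Welch/CSD).
X = np.fft.rfft(x.reshape(nseg, nfft), axis=1)
Y = np.fft.rfft(y.reshape(nseg, nfft), axis=1)
pxy = np.mean(np.conj(X) * Y, axis=0)
pxx = np.mean(np.abs(X) ** 2, axis=0)

# The independent sensor noise averages out of the cross spectrum because
# it is uncorrelated with the input, leaving the transfer function.
h_est = pxy / pxx
print(np.mean(np.abs(h_est)))                # close to the true gain 0.5
```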
| 399
|
convolution
|
Convolutions with changes to the argument
|
https://dsp.stackexchange.com/questions/70523/convolutions-with-changes-to-the-argument
|
<p>I think I understand what happens when I shift the argument, but I'm not sure what should happen when the signal is compressed or expanded. In particular I'm trying to figure out what happens when the convolution where <span class="math-container">$y(t)=x(t)*h(t)$</span> is changed to <span class="math-container">$x(t/k)*h(t/k)$</span> or <span class="math-container">$x(t/k)*h(t)$</span>. Supposedly the first should be <span class="math-container">$y(kt)/k$</span> and the second should be unable to be done but I don't know how to get there. When I work the first one out, I get <span class="math-container">$\int{x(\frac{\tau}{k})h(\frac{t-\tau}{k})}$</span>. This doesn't look like something I can get by changing the argument of <span class="math-container">$y$</span>, so I must be missing something. Any help on how to go about doing these convolutions with arguments other than t would be helpful.</p>
|
<p>Let me show you how to manipulate the convolution integral.</p>
<p>Let</p>
<p><span class="math-container">$$y(t) = x(t) \star h(t) = \int_{-\infty}^{\infty} x(\tau) h(t-\tau) d\tau \tag{1} $$</span></p>
<p>Let <span class="math-container">$a > 1$</span> so that we have a <em>compressed</em> signal <span class="math-container">$y(at)$</span>;</p>
<p><span class="math-container">$$y(at) = \int_{\tau=-\infty}^{\infty} x(\tau) h(at-\tau) d\tau \tag{2} $$</span></p>
<p>Substitute <span class="math-container">$\tau' = \tau/a$</span> so that <span class="math-container">$\tau = a \tau'$</span> and <span class="math-container">$d\tau = a \cdot d\tau'$</span> into Eq.2 :</p>
<p><span class="math-container">\begin{align}
y(at) &= \int_{\tau'=-\infty}^{\infty} x(a \tau')~ h(at-a\tau') ~a ~d\tau' \tag{3}\\ \\
&= a \int_{\tau'=-\infty}^{\infty} x(a \tau') ~ h(a(t-\tau')) ~d\tau' \tag{4}\\ \\
&= a \int_{\tau=-\infty}^{\infty} x(a \tau) ~ h(a(t-\tau)) ~d\tau = a \cdot \big( x(at) \star h(at) \big) \tag{5}
\end{align}</span></p>
<p>Eq.5 shows that the compressed output <span class="math-container">$y(at)$</span> is obtained by the convolution of the compressed input <span class="math-container">$x(at)$</span> and the compressed impulse response <span class="math-container">$h(at)$</span>, with the result also scaled by <span class="math-container">$a$</span>.</p>
<p>Note that <em>convolution</em> operator assumes an LTI (linear time-invariant) framework. However, the <em>compressor</em> is not an LTI system and it does not have an impulse response <span class="math-container">$h_c(t)$</span>, so that <span class="math-container">$y(at) = y(t) \star h_c(t)$</span> can not be true.</p>
<p>Assume for a moment that the compressor is an LTI system with impulse response <span class="math-container">$h_c(t)$</span>. Then we can consider the generation of the compressed output in two (serial) LTI stages:</p>
<p><span class="math-container">$$ x(t) \longrightarrow \boxed{ h(t) } \longrightarrow \boxed{ h_c(t)} \longrightarrow y(at) \tag{6}$$</span></p>
<p>From LTI theory we know that the cascade of two LTI blocks is also LTI with an equivalent impulse response given by <span class="math-container">$h_e(t) = h(t) \star h_c(t)$</span> hence Eq.6 can be written as :</p>
<p><span class="math-container">$$ y(at) = x(t) \star h_e(t) = x(t) \star ( h(t) \star h_c(t)) = x(t) \star h(at)\tag{7}$$</span></p>
<p>This contradicts the result we obtained in Eq.5; hence the assumption that the compressor could be represented with an impulse response was wrong.</p>
<p>Another implication of this is also observed in the <em>time-reversal</em> of the output <span class="math-container">$y(t)$</span> :
<span class="math-container">$$y(-t) = x(-t) \star h(-t) \neq x(-t) \star h(t) \neq x(t) \star h(-t) $$</span></p>
<p>The reason of the unequal signs is again about <em>reversal</em> operation not being an LTI system.</p>
<p>On the other hand, if you want to apply an LTI operation on the output such as <span class="math-container">$y_d(t) = y(t-d)$</span>, then it can simply be reflected in the convolution as:</p>
<p><span class="math-container">$$ y(t-d) = y(t) \star \delta(t-d) = (x(t) \star h(t)) \star \delta(t-d) = x(t) \star h(t-d)$$</span></p>
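<p>The Eq. 5 identity can also be checked numerically by approximating the continuous convolution with a Riemann sum (the Gaussian test functions, grid spacing, and <code>a = 2</code> below are arbitrary choices for illustration):</p>

```python
import numpy as np

dt = 0.005
t = np.arange(-10, 10, dt)
a = 2.0

x = np.exp(-t**2)        # arbitrary smooth test input
h = np.exp(-2 * t**2)    # arbitrary smooth "impulse response"

# y(t) = (x * h)(t): Riemann-sum approximation of the convolution integral.
y = np.convolve(x, h) * dt
t_full = np.arange(len(y)) * dt + 2 * t[0]   # grid of the full convolution

# Right-hand side of Eq. 5: a * ( x(at) * h(at) ), on the same grid.
rhs = a * np.convolve(np.exp(-(a * t) ** 2), np.exp(-2 * (a * t) ** 2)) * dt

# Left-hand side: the compressed output y(at), by resampling y at a*t.
lhs = np.interp(a * t_full, t_full, y)

err = np.max(np.abs(lhs - rhs))
assert err < 1e-3                            # the two sides agree
```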
| 400
|
convolution
|
generator matrix coefficient in convolutional code
|
https://dsp.stackexchange.com/questions/72067/generator-matrix-coefficient-in-convolutional-code
|
<p>I can't determine the type of the following code:</p>
<pre><code>G_1=1 // g_1=1
G_2=11 // g_2=x+1
</code></pre>
<p>According to the description, it is a convolutional code, but I don't understand the type (code rate).</p>
|
<p>It is a <em>systematic</em> rate-<span class="math-container">$\frac 12$</span> convolutional with constraint length <span class="math-container">$2$</span>.</p>
| 401
|
convolution
|
Verifying the computation of a convolution
|
https://dsp.stackexchange.com/questions/3529/verifying-the-computation-of-a-convolution
|
<p>I have an input signal $$x(n)=\left(3,-5,4,3,-1,-2,6,8\right), n=-3,..,4$$ and impulse response $$h(n)=(1,-1,1,-1,1), n=-1,...,3.$$</p>
<p>The convolution between $x(n)$ and $h(n)$ is </p>
<p>$$x(n)*h(n)=\sum_{-\infty}^\infty x(k)h(n-k)$$
If I'm not mistaken, I can reduce this to the finite sum
$$=x(-3)h(n+3)+x(-2)h(n+2)+x(-1)h(n+1)+x(0)h(n)+x(1)h(n-1)+x(2)h(n-2)+x(3)h(n-3)+x(4)h(n-4)$$</p>
<p>So, if I were to calculate the sum by hand, I would proceed by first evaluating</p>
<p>$x(-3)h(n+3)=3h(n+3) = (3,-3,3,-3,3)$ for $n=-4,...,0$
$x(-2)h(n+2)=-5h(n+2) = (-5,+5,-5,+5,-5)$ for $n=-3,...,1$<br>
etc... </p>
<p>Then I would only add up the terms whose position $n$ aligns (that is, add all terms at n=-4 together, all terms at n=-3 together, etc...).</p>
<p>Is my approach to evaluating the convolution correct?</p>
|
<p>I would have liked to give a longer answer, but yes, your approach is correct. I don't see any problems with what you laid out in your question.</p>
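<p>As a quick numerical check of the procedure (NumPy sketch; the first output sample corresponds to <span class="math-container">$n=-4$</span>, the sum of the smallest start indices, and the last to <span class="math-container">$n=7$</span>):</p>

```python
import numpy as np

x = np.array([3, -5, 4, 3, -1, -2, 6, 8])   # n = -3..4
h = np.array([1, -1, 1, -1, 1])             # n = -1..3

# Full convolution; the output runs over n = (-3)+(-1) .. 4+3 = -4..7.
y = np.convolve(x, h)

# Same result built as a sum of shifted, scaled copies of h (the approach
# described above): each x[k] contributes x[k]*h aligned at offset k.
y_manual = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y_manual[k:k + len(h)] += xk * h

assert np.array_equal(y, y_manual)
print(y)   # 12 samples, indexed n = -4..7
```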
| 402
|
convolution
|
Breaking a convolution into smaller pieces
|
https://dsp.stackexchange.com/questions/6298/breaking-a-convolution-into-smaller-pieces
|
<p>For a project I need to do convolution, and I use a GPU for the calculations. Sometimes I have to deal with kernel sizes of 50x50, and kernels of this size choke the GPU (not enough memory available on the GPU). I need to find a way to break the kernel into smaller sizes (8x8 or similar) and do the convolution that way (i.e., piece by piece, stitching the results together later) so that I can still benefit from the GPU. What is the best way to break these kernels into smaller pieces? (I have no prior knowledge of the kernel size, so the solution must be something I can handle at run time.)</p>
|
<p>Yes, you can split them up. Convolution is a linear process, which means that <a href="http://en.wikipedia.org/wiki/Superposition_principle" rel="nofollow">superposition</a> holds. Thus, you can break up any convolution kernel $k$ into multiple parts ($k_1, k_2, ... k_N$) such that $k = \sum k_i$.</p>
<p>For example, if you had a convolution kernel that looked like [1 2 3 4] (granted this is a silly kernel, but does fine for purposes of illustration), you could break it up into the following kernels, [1 2 0 0] and [0 0 3 4]. Now, you could simply do the convolutions like normal and then add the results together. The sum would be equal to the convolution product of the original kernel.</p>
<p>Doing that would be inefficient, though, both in terms of computations and memory. Instead, you could simply drop the 0's at the ends of the "sub-kernels", and do the convolutions using [1 2] and [3 4]. The one tricky part to this is that you have to offset the results before adding them together to get the correct answer.</p>
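<p>A 1-D sketch of this split, offset, and add bookkeeping (toy sizes; the same idea applies along each dimension of a 2-D kernel):</p>

```python
import numpy as np

x = np.arange(1, 11, dtype=float)   # some input
k = np.array([1.0, 2, 3, 4])        # full kernel
k1, k2 = k[:2], k[2:]               # sub-kernels [1 2] and [3 4]

full = np.convolve(x, k)

# Convolve with each sub-kernel, then add with the proper offset:
# k2 starts 2 taps later than k1, so its result is delayed by 2 samples.
partial = np.zeros(len(x) + len(k) - 1)
partial[:len(x) + len(k1) - 1] += np.convolve(x, k1)
partial[2:2 + len(x) + len(k2) - 1] += np.convolve(x, k2)

assert np.allclose(full, partial)   # superposition recovers the full result
```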
| 403
|
convolution
|
Is it meaningful to find linear convolution of just two random sequences?
|
https://dsp.stackexchange.com/questions/11012/is-it-meaningful-to-find-linear-convolution-of-just-two-random-sequences
|
<p>Could it bring any meaning? Is this kind of convolution be useful is solving anything?</p>
|
<p>No, if the two sequences are random then convolving them is not useful.</p>
| 404
|
convolution
|
Convolution that results in an all-zero sequence
|
https://dsp.stackexchange.com/questions/11518/convolution-that-results-in-an-all-zero-sequence
|
<p>I am asked to find a pair of sequences, each one of which contains three distinct values and where the convolution is an all-zero sequence.</p>
<p>I've come to the conclusion that one of the sequences must be infinite, but I just can't think of any examples...</p>
|
<p>Thanks for the input. I found a simple solution:</p>
<p>sequence A: {...1,0,-1,0,1,0,-1,0...} (i.e. 1,0,-1,0 periodic) </p>
<p>and</p>
<p>sequence B: {0,x,y,x,y,0}</p>
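<p>A numerical check of this solution, with <code>x = 1</code> and <code>y = 2</code> chosen arbitrarily (sequence A must be truncated to finitely many periods in code, so only the interior of the result, away from the truncation edges, is meaningful):</p>

```python
import numpy as np

A = np.tile([1, 0, -1, 0], 50)        # truncation of the periodic sequence A
x_val, y_val = 1, 2                   # any distinct nonzero values
B = np.array([0, x_val, y_val, x_val, y_val, 0])

y = np.convolve(A, B)

# Away from the truncation edges, every output sample is zero, because
# A(n) + A(n-2) = 0 for the period-4 sequence {1, 0, -1, 0}.
assert np.all(y[len(B):-len(B)] == 0)
```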
| 405
|
convolution
|
Convolution from bottom right
|
https://dsp.stackexchange.com/questions/16232/convolution-from-bottom-right
|
<p>I want to do a convolution from the bottom right and not, as usual, from the top left. I think MATLAB's <code>conv2</code> only goes from the top left.</p>
<p>How can I do a convolution in Matlab from the bottom right?</p>
<p>Thank you very much for the answers.</p>
|
<p>I hope I understand your question correctly, in that you're trying to produce a mirror image of the convolution kernel (filter) and then convolve. Flip your convolution kernel. In MATLAB, you can use the <code>flip</code> command. If you flip it left-to-right, this should do it.</p>
<p>However, if you're simply saying that you want to use the same kernel but start the convolution process from a different point, this would not give you a different result back at all. This is one of the reasons you can do FFT based convolution.</p>
| 406
|
convolution
|
Convolution equivalent to matrix multiplication?
|
https://dsp.stackexchange.com/questions/26176/convolution-equivalent-to-matrix-multiplication
|
<p>Is it possible to write the full convolution between the image and the filter as a matrix multiplication operation? If so, can someone give a simple example of how that works?</p>
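<p>For the 1-D case, here is a sketch of the kind of construction meant: a Toeplitz-structured matrix built from the filter so that a matrix-vector product reproduces the convolution (the signal and filter values are arbitrary; images work the same way per row/column with a doubly block Toeplitz matrix):</p>

```python
import numpy as np

x = np.array([1.0, 2, 3, 4, 5])   # signal
h = np.array([1.0, -1, 2])        # filter

# Each column of H holds a shifted copy of the filter, so H @ x is the
# full linear convolution h * x (H has Toeplitz structure).
m = len(x) + len(h) - 1
H = np.zeros((m, len(x)))
for i in range(len(x)):
    H[i:i + len(h), i] = h

assert np.allclose(H @ x, np.convolve(h, x))
```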
| 407
|
|
convolution
|
Can you present the convolution of a sinusoid with itself?
|
https://dsp.stackexchange.com/questions/27154/can-you-present-the-convolution-of-sinusoidal-with-itself
|
<p>Ladies, Gentlemen,
Because I am homeless (in France) and get internet access only in public libraries, with many restrictions on timing etc., I cannot work out even a simple convolution myself. So I ask you to post here the convolution of a sinusoidal function with itself. </p>
<p>For example: 0.951056516295, 0.587785252292, -0.587785252292, -0.951056516295, 0. </p>
<p>Regards </p>
|
<p>For your data points I get:</p>
<blockquote>
<p>-2.467162e-16 -9.045085e-01 -1.118034e+00 7.725425e-01 2.500000e+00 7.725425e-01 -1.118034e+00 -9.045085e-01 -1.480297e-16</p>
</blockquote>
<p>from</p>
<pre><code>#27154
data <- c(0.951056516295, 0.587785252292, -0.587785252292, -0.951056516295, 0.)
output <- convolve(data,data, type ="open")
</code></pre>
| 408
|
convolution
|
Is the output of the convolution of a sinusoid with itself also sinusoidal?
|
https://dsp.stackexchange.com/questions/27161/is-output-of-convolution-of-sinusoidal-with-itself-also-sinusoidal
|
<p>Ladies, Gentlemen, </p>
<p>In my last question I asked you to present the convolution of a sinusoid with itself. I accepted Mr Peter K.'s answer. Now my question is much more important: is the output of the convolution of a sinusoid with itself also sinusoidal? </p>
<p>Regards </p>
| 409
|
|
convolution
|
What is the meaning behind convolution?
|
https://dsp.stackexchange.com/questions/29594/what-does-convolution-has-the-meaning
|
<p>As I know, if we want to know the LTI system output, then we do a convolution between the input x[n] and the impulse response h[n]. But in this question, I actually want to know the meaning behind convolution.</p>
<p>Why do we do convolution (a sum of products) rather than just using an adder or a multiplier on the input signal and the impulse response?</p>
|
<p>I explained this in <a href="https://en.wikipedia.org/wiki/Talk:Convolution#Why_the_time_inversion.3F" rel="nofollow noreferrer">"Why the time inversion?"</a></p>
<p><a href="https://i.sstatic.net/WdcEZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WdcEZ.png" alt="enter image description here"></a></p>
<p>Red is a reference delta impulse (of height 1), whereas green is a typical response to that impulse, denoted by the function h(t). In an LTI system, the output is proportional to the input; that is, if we have a delta impulse of height $a$ at the input at time 0, the output at time $t$ will be $a\,h(t)$. Now, instead of a single impulse at the origin, you apply a series of input impulses at various times. What will be the output? Say there was an input impulse of height $a_1$ at $T_1$. Since the current time is $t$, the impulse occurred $t-T_1$ seconds ago and its contribution to the current output y(t) is $a_1 h(t - T_1)$. There is another contribution from another impulse, which occurred at $T_2$. Its contribution is $a_2 h(t-T_2)$. So, $y(t) = a_1 h(t-T_1) + a_2 h(t-T_2)$. You simply add up the contributions because of LTI linearity. </p>
<p>In general, you have $y(t) = \sum_{i=0}^t a_i h(t-i)$. That is the convolution formula. </p>
<p>It also appears when you multiply two polynomials $(a_0 + a_1 z + a_2 z^2 + ...)(b_0 + b_1 z + b_2 z^2 + ...) =\sum_0^\infty {c_n z^n} $ where $c_n = \sum_i a_i b_{n-i}$. That is why you tend to represent series as z-transforms: in this case you can simply multiply them and have the convolution happen in the background. </p>
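<p>This polynomial link can be checked directly in NumPy (the coefficients are arbitrary; <code>np.polymul</code> orders coefficients highest degree first, but since convolution is what it computes, passing the same sequences to both gives identical results):</p>

```python
import numpy as np

a = [1, 2, 3]   # polynomial coefficients a_0, a_1, a_2
b = [4, 5]      # polynomial coefficients b_0, b_1

# Multiplying the polynomials and convolving the coefficient sequences
# give the same c_n = sum_i a_i * b_{n-i}:
assert np.array_equal(np.polymul(a, b), np.convolve(a, b))
print(np.convolve(a, b))   # [ 4 13 22 15]
```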
| 410
|
convolution
|
Minimal number of additions in convolution
|
https://dsp.stackexchange.com/questions/42618/minimal-number-of-additions-in-convolution
|
<p>The Winograd algorithm can be used to reduce the number of multiplications in convolution. Is there a known method of reducing the number of additions in convolution?</p>
|
<p>One method which attracted my attention recently is <a href="http://www.rle.mit.edu/dspg/documents/Discrete-TimeRandom.pdf" rel="nofollow noreferrer">Discrete Time Random Sampling</a>.</p>
<p>Which is an approximation method useful to reduce both multiplications and additions in filtering. </p>
<p>Filtering an $N$-length signal with an $L$-length filter can be performed in $\mathcal{O}(LM)$ instead of $\mathcal{O}(LN)$, where $M\ll N$.
This is done by picking samples at random in a way described in the paper.</p>
<p>See also <a href="http://www.rle.mit.edu/dspg/documents/rsICASSPSourav2007.pdf" rel="nofollow noreferrer">Frequency-Shaped Randomized Sampling</a> for a more sophisticated approach which allows you to shape the spectrum of the error.</p>
| 411
|
convolution
|
Impulse response time period in circular convolution
|
https://dsp.stackexchange.com/questions/44584/impulse-response-time-period-in-circular-convolution
|
<p>While considering an input to be periodic with period N, can the impulse response not be periodic with a period greater than N? If it can be, how can one compute its convolution?</p>
|
<p>Circular convolution assumes that all signals ($x[n]$, $h[n]$ and $y[n]$) are periodic with the same period $L$. When any of the signals is shorter than $L$, it is padded with enough zeros to make it periodic with $L$. When $x$ or $h$ has a period larger than $L$, then there will be <strong>aliasing</strong> in the computed output $y[n]$, which is still periodic with $L$.</p>
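<p>This time aliasing can be demonstrated numerically: an $L$-point circular convolution computed via the DFT equals the linear convolution wrapped modulo $L$ (toy sequences, with $L=6$ deliberately shorter than the $5+5-1=9$ samples of the linear result):</p>

```python
import numpy as np

x = np.array([1.0, 2, 3, 4, 5])
h = np.array([1.0, -1, 2, 1, 3])
L = 6                                # too short: aliasing will occur

# Circular convolution of length L via the DFT.
circ = np.real(np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)))

# Linear convolution, then wrapped (time-aliased) onto L samples.
lin = np.convolve(x, h)              # length 9
wrapped = np.zeros(L)
for n, v in enumerate(lin):
    wrapped[n % L] += v

assert np.allclose(circ, wrapped)    # circular conv = aliased linear conv
```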
| 412
|
convolution
|
Where does convolution fit into tracking fluorescent photons using MC models?
|
https://dsp.stackexchange.com/questions/45175/where-does-convolution-fit-into-tracking-fluorescent-photons-using-mc-models
|
<p>I have been reading this old paper by Steven Jacques, a titan in the world of using Monte-Carlo methods for photon propagation and distributions.</p>
<p><a href="https://www.osapublishing.org/ao/abstract.cfm?uri=ao-28-20-4286" rel="nofollow noreferrer">https://www.osapublishing.org/ao/abstract.cfm?uri=ao-28-20-4286</a></p>
<p>For the moment, I have a pretty solid grasp of how to continue with my research. However, there is one element that has me perplexed. The author states this powerful formula:</p>
<p>$$
F(\lambda,r) = \int_0^D\int_0^{2\pi}\int_0^\infty \psi(r',z')\,\beta(\lambda,z') \circledast E\!\left(\lambda,\sqrt{r^2+r'^2-2rr'\cos\theta'}\right) dr'\,d\theta'\,dz'
$$
Which shows up again and again in more recent papers.
<a href="https://www.ncbi.nlm.nih.gov/pubmed/9203387" rel="nofollow noreferrer">https://www.ncbi.nlm.nih.gov/pubmed/9203387</a></p>
<p><a href="https://www.ncbi.nlm.nih.gov/pubmed/24307289" rel="nofollow noreferrer">https://www.ncbi.nlm.nih.gov/pubmed/24307289</a></p>
<p><a href="https://www.ncbi.nlm.nih.gov/pubmed/21945055" rel="nofollow noreferrer">https://www.ncbi.nlm.nih.gov/pubmed/21945055</a></p>
<p>And it seems to work well enough to last the ages, so I would like to implement it myself. However, I get confused by the convolution operator presented above. $\psi$ is just in terms of the depth of the laser and corresponds to the excitation laser being launched. This has spatial dependence hence r and z. So we can approximate this as a 3D matrix. $\beta$ is by the layer, so it depends on how many layers we want to simulate. In skin, we could have 4 layers. E is the escape function, which means how many photons reach the surface of the medium when launched isotropically from a depth in the medium. E and $\beta$ also depend on the wavelength and are experimentally validated. My BIG question, though, is how to do the convolution to both sides, and what would that actually represent. </p>
<p>To summarize: we have $\psi$ as a 3D matrix, $\beta$ as a vector of equivalent depth, and E as a 3D matrix the same size as $\psi$.</p>
| 413
|
|
convolution
|
Linear convolution of discrete signals with defined lengths
|
https://dsp.stackexchange.com/questions/45503/linear-convolution-of-discrete-signals-with-defined-lengths
|
<p>What is the maximum number of non-zero elements that a linear convolution of discrete signals of lengths 5 and 7 can have?</p>
<p>When I label the length of signal $x[n]$ as $M$, and the length of
signal $h[n]$ as $N$, then the length of their convolution $y[n]$ is $M+N-1$. However, this is not the thing I was looking for. So how can I find the maximum number of non-zero elements?</p>
|
<p>It seems like you have already the correct answer, but try to visualize what's going on</p>
<p>First understand that signals of length $n_0$ are really infinite length, but have nonzero values only between $n = 0$ and $n = n_0 - 1$. The values in between can be anything, but for the purposes of this problem take them to be nonzero as well. </p>
<p>Now perform the discrete convolution by literally shifting the length-5 signal and dot multiplying it with the length-7 signal. Your result will also be an infinite length signal with nonzero values <em>only</em> where the two signals overlap (when they don't overlap, you should find the convolution to be zero). In this case, there are 11 shifts where the signals actually overlap. </p>
<p>If some parts within the signal are zero, it is possible that you get fewer nonzero values in the result. However, in the max case where the full signal is nonzero you get this max, $11 = 7 + 5 - 1$ samples</p>
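<p>This worst case is easy to confirm (all-ones sequences make every overlap nonzero):</p>

```python
import numpy as np

# Convolve a length-5 with a length-7 all-ones sequence: every overlap
# position contributes a nonzero sum, giving M + N - 1 = 11 samples.
y = np.convolve(np.ones(5), np.ones(7))
print(len(y), np.count_nonzero(y))   # 11 11
```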
| 414
|
convolution
|
Convolution Sum
|
https://dsp.stackexchange.com/questions/49519/convolution-sum
|
<p>I understand that convolution is a linear combination of delayed impulse responses, weighted by the samples of the decomposed signal.</p>
<p>$$\int_{-\infty}^{+\infty} x(\tau)h(t-\tau)\mathrm{d}\tau = g(t)$$</p>
<p>I want to know about these decomposed signals.</p>
<p>If I have my $g(t)$, can I decompose it in different ways?
If yes, what are some of those decomposed signals?
If not, then I assume we decompose in only one way, but I am not able to think in the reverse direction.</p>
<p>Please guide me as to whether this way of thinking is correct.</p>
<p>I am novice in Signal Processing.</p>
<p>Thanks.</p>
|
<p>I'm afraid you <em>don't understand convolution</em></p>
<p>Let,<br>
$x(t)$ : <code>[1 2 3 4]</code><br>
$h(t)$ : <code>[1 2]</code></p>
<p>Your <code>decomposed signals</code> are now<br>
<code>[1 0 0 0]</code><br>
<code>[0 2 0 0]</code><br>
<code>[0 0 3 0]</code><br>
<code>[0 0 0 4]</code> </p>
<p>If you were to individually apply each to the system, you'd end up with<br>
<code>[1 2]</code><br>
<code>[0 2 4]</code><br>
<code>[0 0 3 6]</code><br>
<code>[0 0 0 4 8]</code> </p>
<p>Now a <em>linear combination</em> of these (which here is element-wise addition) yields<br>
<code>[1 4 7 10 8]</code></p>
<p>This is what your convolution equation gives you as well.</p>
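<p>The worked example above can be verified numerically (NumPy sketch):</p>

```python
import numpy as np

x = np.array([1, 2, 3, 4])
h = np.array([1, 2])

# Decompose x into scaled, shifted unit impulses and push each one
# through the system individually:
outputs = []
for k, xk in enumerate(x):
    impulse = np.zeros(len(x), dtype=int)
    impulse[k] = xk                          # e.g. [0 2 0 0]
    outputs.append(np.convolve(impulse, h))  # e.g. [0 2 4 0 0]

# The element-wise sum of the individual responses is the convolution:
assert np.array_equal(sum(outputs), np.convolve(x, h))
print(np.convolve(x, h))   # [ 1  4  7 10  8]
```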
<p><strong>If I have my $g(t)$, can I decompose it in different ways?</strong><br>
Any way you like !<br>
Could we write $g(t)$ above as a convolution of<br>
$x(t)$ : <code>[1 4 7 10 8]</code>, with<br>
$h(t)$ : <code>[1]</code> ?<br>
If yes, do you see how your decomposed signals are now different ?<br>
Fourier transform does something similar as well, but, it decomposes signals into sinusoids.</p>
<p><strong>If yes what are those few decomposed signal</strong><br>
Try to work out the answer above.</p>
<p>Being helpful (since you're starting off with DSP): there is usually good depth in the definitions and quite some detail in the wording. Try to understand each in its appropriate context to get the complete picture!
Follow this <a href="https://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/lecture-notes/MITRES_6_007S11_lec04.pdf" rel="nofollow noreferrer">OCW</a> and good luck!</p>
<p><em>Disclaimer : I've liberally used terms like <code>combination</code>, <code>decompose</code> and <code>linear</code>. Like @Stanley points, they are very broad and only a narrow definition is used here.</em></p>
| 415
|
convolution
|
Graphical method for convolution?
|
https://dsp.stackexchange.com/questions/52414/graphical-method-for-convolution
|
<p>Is the graphical method the best way to solve convolution problems, whether discrete or continuous?</p>
<p>I was given a question:</p>
<p><span class="math-container">$$x[n]=1$$</span> <span class="math-container">$$0\leq n \leq 4 $$</span></p>
<p><span class="math-container">$$h[n]=\alpha^n$$</span> <span class="math-container">$$0\leq n \leq 6$$</span></p>
<p>for all other values <span class="math-container">$n$</span> is <span class="math-container">$0$</span>.</p>
<p>I solved this question using graphical method and I was successful but when I solved the following question using graphical method, I was unable to:</p>
<p><span class="math-container">$$x[n]=2^n u[-n]$$</span>
<span class="math-container">$$h[n]=u[n]$$</span> </p>
<p>Please tell me how can I solve this question?</p>
|
<p>Graphical evaluation of convolution (flip and drag) is a very useful and indispensable method which aids in quick visual anticipation of the output in terms of the input sequences. Indeed, even if you don't specifically use the graphical method, you would still benefit from drawing a plot of the input sequences and a rough sketch of the expected output in any case. Yet no method is the best for all types of problems. </p>
<p>For this purpose let me solve this problem without graphical method. Given the input sequences <span class="math-container">$$x[n] = 2^n u[-n] ~~~, ~~~\text{ for } -\infty <n \leq 0 $$</span> and <span class="math-container">$$h[n] = u[n] ~~~, ~~~\text{ for } 0 \leq n < \infty$$</span> then the convolution sum is :
<span class="math-container">$$y[n] = x[n] \star h[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k]$$</span></p>
<p>First, observe (graphically) that output will extend from <span class="math-container">$n=-\infty$</span> to <span class="math-container">$n=\infty$</span>. This is because a right sided sequence is convolved with a left sided sequence of both semi infinite lengths.</p>
<p>Then, for each <span class="math-container">$n$</span>, look for the valid range of summing index <span class="math-container">$k$</span> that yields nonzero signal values according to their arguments. It can be seen from the definitions of nonzero ranges of <span class="math-container">$x[n]$</span> and <span class="math-container">$h[n]$</span> that the summing index <span class="math-container">$k$</span> should satisfy the following two relations due to <span class="math-container">$x[k]$</span> and <span class="math-container">$h[n-k]$</span> as:</p>
<p><span class="math-container">$$ \{-\infty \leq k \leq 0 \} \cap \{0 \leq n-k < \infty \} $$</span></p>
<p>or equivalently
<span class="math-container">$$ \{-\infty < k \leq 0 \} \cap \{-\infty < k \leq n \} $$</span></p>
<p>the intersection of which is:
<span class="math-container">$$ \max\{ -\infty, -\infty \} < k \leq \min \{0 , n \} $$</span>
<span class="math-container">$$ -\infty < k \leq \min \{0 , n \} ~~~,~~~\text{ for all } ~~~ n$$</span></p>
<p>then the convolution sum becomes:
<span class="math-container">$$y[n] = \sum_{k=-\infty}^{\min\{0,n\}} 2^k$$</span></p>
<p>using the summation formula yields:
<span class="math-container">$$y[n] = \frac{ 2^{-\infty} - 2^{\min\{0,n\}+1} }{1-2} = 2^{\min \{0,n\} + 1}~~~,~~~\text{ for all } ~~~ n $$</span></p>
<p>then according to whether <span class="math-container">$n<0$</span> or <span class="math-container">$n \geq 0$</span> the output <span class="math-container">$y[n]$</span> becomes:
<span class="math-container">$$ y[n] = \begin{cases} 2^{n+1} ~~~ &, n < 0 \\ 2 ~~~&, n \geq 0\\ \end{cases}$$</span></p>
<p>Note that the graphical method should provide this answer in a less number of steps (due to visual aids that manipulate the indexing without hassle)...</p>
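<p>The closed form can be sanity-checked numerically by truncating the two semi-infinite sequences (a numpy sketch; the truncation length <code>K</code> is an arbitrary choice):</p>

```python
import numpy as np

K = 30                          # truncate the semi-infinite sequences at +/- K
n_x = np.arange(-K, 1)
x = 2.0 ** n_x                  # x[n] = 2^n u[-n], support n <= 0
h = np.ones(K + 1)              # h[n] = u[n],      support n >= 0

y = np.convolve(x, h)           # output support runs from n = -K to n = +K
n_y = np.arange(-K, K + 1)

# Closed form derived above: y[n] = 2^(min{0, n} + 1)
expected = 2.0 ** (np.minimum(0, n_y) + 1)

# Agreement is excellent away from the truncation edges
mid = np.abs(n_y) <= 10
print(np.allclose(y[mid], expected[mid], atol=1e-3))  # True
```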
| 416
|
convolution
|
Circular convolution of a non causal signal
|
https://dsp.stackexchange.com/questions/53955/circular-convolution-of-a-non-causal-signal
|
<p>I know how we compute the <span class="math-container">$N$</span> point circular convolution of two causal signals, but what about a signal such as <span class="math-container">$\{1,-1,2,1\}$</span> where the position of 2 is the <span class="math-container">$0^{th}$</span> index and the other sequence is <span class="math-container">$\{2, -1\}$</span> which we can assume to be causal? What is the 4-point circular convolution? According to me it is </p>
<p><span class="math-container">$$\begin{bmatrix}1&1&2&-1 \\-1&1&1&2\\2&-1&1&1\\1&2&-1&1 \end{bmatrix} \begin{bmatrix} 2\\-1\\0\\0\end{bmatrix} =\begin{bmatrix} 1\\-3\\5\\0\end{bmatrix}$$</span>
With the position of 5 being the zeroth index because only then the 2 from the first signal got multiplied with the 2 of the second signal, giving off the zero position. But now I am confused, as to how to arrange the other indices. Can anyone help me out?</p>
|
<p>For an <span class="math-container">$N$</span>-point circular convolution you can think of each signal as being periodically extended with period <span class="math-container">$N$</span>. For your example with <span class="math-container">$N=4$</span> that would mean that the two sequences are</p>
<p><code>2 1 1 -1</code> and <code>2 -1 0 0</code></p>
<p>where both now start at index <span class="math-container">$n=0$</span>. The result of the cyclic convolution is</p>
<p><code>5 0 1 -3</code></p>
<p>which is just a cyclic shift (by <span class="math-container">$2$</span>) of the (correct) result that you obtained.</p>
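<p>A quick numpy check of this, computing the circular convolution through the DFT (circular convolution in time corresponds to pointwise multiplication of the DFTs):</p>

```python
import numpy as np

# Periodic (mod-4) extensions of the two sequences, both indexed from n = 0:
a = np.array([2, 1, 1, -1])   # {1, -1, 2, 1} with the 2 at index 0, wrapped mod 4
b = np.array([2, -1, 0, 0])

# N-point circular convolution via the DFT
w = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real.round().astype(int)
print(w)  # [ 5  0  1 -3]
```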
| 417
|
convolution
|
Impulse response convolution and normalization2
|
https://dsp.stackexchange.com/questions/58896/impulse-response-convolution-and-normalization2
|
<p>When I take the inverse Laplace transform of a system transfer function,</p>
<p>let's say an LPF whose TF is</p>
<p><span class="math-container">$$\frac{Y(s)}{X(s)} \triangleq H(s) = \frac{W}{s+W} $$</span></p>
<p>the inverse Laplace/impulse response is </p>
<p><span class="math-container">$$h(t) = We^{-Wt}u(t) $$</span></p>
<p>where <span class="math-container">$u(t)$</span> is the Heaviside unit step function:</p>
<p><span class="math-container">$$ u(t) \triangleq \begin{cases}
1 \qquad & t \ge 0 \\
0 \qquad & t < 0 \\
\end{cases} $$</span></p>
<p>Now to see the system response to a square wave <span class="math-container">$x(t)$</span> with <span class="math-container">$|x(t)| = 1$</span>, I need to convolve </p>
<p><span class="math-container">$$ y(t) = h(t) \star x(t) $$</span></p>
<p>Now if you look at <span class="math-container">$h(t)$</span>, the maximum amplitude of <span class="math-container">$h(t)$</span> is </p>
<p><span class="math-container">$$\max{|h(t)|} = W $$</span></p>
<p>Then <span class="math-container">$y(t)$</span> is amplified by <span class="math-container">$W$</span> times. </p>
<p>So what is happening? how do I normalize this?</p>
<ol>
<li>Should I normalize this by <span class="math-container">$\max{|h(t)|}$</span> or</li>
<li>Should I normalize this with </li>
</ol>
<p><span class="math-container">$$W_{z_1} W_{z_2} \cdots W_{z_n}/(W_{p_1} W_{p_2} \cdots W_{p_n})$$</span></p>
<p>(product of zeroes)/(product of poles) of transfer function?</p>
<p>Why is this even happening?</p>
|
<p>I think you're likely forgetting how the anti-derivative of <span class="math-container">$h(t)$</span> affects the gain of the convolution operation. Recall that somewhere in your convolution integral you'll be taking an integral of the form <span class="math-container">$\int We^{-W\tau} d\tau$</span>. Integrating the exponential (the chain rule in reverse) produces a <span class="math-container">$1/W$</span> factor. This factor cancels the <span class="math-container">$W$</span> multiplier in <span class="math-container">$h(t)$</span>, giving unity gain at DC.</p>
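<p>A quick numeric illustration of this cancellation (a discrete approximation; the value of <code>W</code> and the step size are arbitrary choices):</p>

```python
import numpy as np

W = 5.0
dt = 1e-3
t = np.arange(0, 10 / W, dt)       # long enough for e^{-Wt} to decay away
h = W * np.exp(-W * t)             # impulse response; note max |h| = W

# Discrete approximation of the step response: convolve h with a unit step
step = np.ones_like(t)
y = np.convolve(h, step)[:len(t)] * dt   # dt factor approximates the integral

print(h.max())   # W = 5.0 : the peak of h grows with W ...
print(y[-1])     # ~ 1.0   : ... but the 1/W from the anti-derivative cancels it
```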
| 418
|
convolution
|
Convolution of $f(2x)$ and $g(3x)$
|
https://dsp.stackexchange.com/questions/35479/convolution-of-f2x-and-g3x
|
<p>As I know, convolution is defined as $f(x)*g(x) = \int_{-\infty}^{+\infty}f(\tau)g(x-\tau)d_{\tau}$, but what if we want to convolve $f(2x)$ and $g(3x)$? It should be like $f(2x)*g(3x) = \int_{-\infty}^{+\infty}f(2\tau)g(3x-\tau)d_{\tau}$ or $f(2x)*g(3x) = \int_{-\infty}^{+\infty}f(2\tau)g(3x-3\tau)d_{\tau}$ or anything else?</p>
|
<p>Replace $x$ by $x - \tau$ in the argument of $g$, so it's the second option, i.e. $g(3(x-\tau)) = g(3x-3\tau)$.</p>
| 419
|
convolution
|
How to take the linear convolution of these two signals?
|
https://dsp.stackexchange.com/questions/35736/how-to-take-the-linear-convolution-of-these-two-signals
|
<p>How do I perform the linear convolution of the following two signals? I am having trouble relating $x[n]$ to a series of points, like was given by $h[n]$ below.</p>
<p>$$x[n] = e^{j\pi n}\left\{{u[n]}-u[n-8]\right\}\quad\text{and}\quad h[n] = (-1)^{n}\left\{{u[n]}-u[n-4]\right\}$$</p>
<p>$x[n]$ is a finite-length sinewave of length $L=8$, and $h[n]$ is a causal filter of length $M=4$, expressed as $h[n]=\{1,-1,1,-1\}$.</p>
<p>The solution is:
$y[n]=\{1,-2,3,-4,4,-4,4,-4,3,-2,1\}$</p>
|
<p>For $n = 0,\ldots,7$
$$x[n] = e^{j\pi n}\{{u[n]}-u[n-8]\} = (-1)^{n}$$
and for $n = 0,\ldots,3$
$$h[n] = (-1)^n$$
Otherwise $x[n] = 0$ (for $n < 0$ or $n > 7$) and $h[n] = 0$ (for $n < 0$ or $n > 3$). Using the definition of convolution,
$$y[k] =(h * x)[k] = \sum\limits_{m = 0}^3 h[m]x[k-m] ~,\quad k = 0,\ldots,10$$
For $k = 0$
$$y[0] =(h * x)[0] = h[0]x[0] = 1$$</p>
<p>For $k = 1$
$$y[1] =(h * x)[1] = h[0]x[1] + h[1]x[0] = -1 - 1 = -2$$
$\vdots$</p>
<p>For $k = 4$
$$y[4] =(h * x)[4] = h[0]x[4] + h[1]x[3] + h[2]x[2] + h[3]x[1] = 1 + 1 + 1 + 1 = 4$$
$\vdots$</p>
<p>For $k = 7$
$$y[7] =(h * x)[7] = h[0]x[7] + h[1]x[6] + h[2]x[5] + h[3]x[4] = -1 - 1 - 1 - 1 = -4$$</p>
<p>$\vdots$</p>
<p>For $k = 10$
$$y[10] =(h * x)[10] = h[3]x[7] = (-1)(-1) = 1$$</p>
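<p>As a numerical cross-check (a small numpy sketch), the full linear convolution reproduces the stated solution:</p>

```python
import numpy as np

x = (-1.0) ** np.arange(8)   # e^{j*pi*n}(u[n] - u[n-8]) = (-1)^n for n = 0..7
h = (-1.0) ** np.arange(4)   # {1, -1, 1, -1}

y = np.convolve(x, h).astype(int)
print(y)  # [ 1 -2  3 -4  4 -4  4 -4  3 -2  1]
```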
| 420
|
convolution
|
Variance of zero-mean signal after convolution for SIR computation
|
https://dsp.stackexchange.com/questions/62619/variance-of-zero-mean-signal-after-convolution-for-sir-computation
|
<p>my goal is to scale desired, interfering signal at the receiver in order to achieve desired SIR (signal to interference ratio) for beamforming (source separation) application.</p>
<p>Let be:</p>
<ul>
<li><span class="math-container">$s(t)$</span> a known speech signal with zero mean <span class="math-container">$\mu_s = 0$</span> and known variance <span class="math-container">$\sigma^2_s$</span>. </li>
<li><span class="math-container">$q(t)$</span> a known speech signal with variance <span class="math-container">$\sigma^2_q$</span>.</li>
<li><span class="math-container">$h_s(t)$</span> and <span class="math-container">$h_q(t)$</span> two known <em>deterministic</em> acoustic impulse responses.</li>
<li><span class="math-container">$y(t) = (h_s \ast s)(t) + (h_q \ast q)(t)$</span> the reverberant speech signal.</li>
</ul>
<p>Suppose we have access only to the <em>premix</em> and the <em>mix</em>, that means to
<span class="math-container">$x_s(t) = (h_s \ast s)(t)$</span>, <span class="math-container">$x_q(t) = (h_q \ast q)(t)$</span> and <span class="math-container">$y(t)$</span>.</p>
<p>To achieve desired SIR at the receiver, I can simply make <span class="math-container">$x_s$</span> of unit variance, and scale <span class="math-container">$x_q$</span> and then <em>mix</em> again the two quantity accordingly.
However now, how do I scale <span class="math-container">$s(t)$</span> and <span class="math-container">$q(t)$</span> accordingly.</p>
<hr>
<p>I think a more general approach is then, given the deterministic LTI filter <span class="math-container">$h(t)$</span>, what is the variance of <span class="math-container">$y(t) = (h \ast s)(t)$</span>?</p>
<p><span class="math-container">$$ \sigma^2_y = \mathbb{E}[ (y - \mathbb{E}[y])^2 ] =
\mathbb{E}[(h \ast s - \mathbb{E}[h \ast s])^2]$$</span></p>
<p>Since <span class="math-container">$s$</span> is zero-mean periodic speech signal and <span class="math-container">$h$</span> is a deterministic filter, then <span class="math-container">$\mathbb{E}[h \ast s] = 0$</span>.</p>
<p>It follows that <span class="math-container">$$ \sigma^2_y = \mathbb{E}[ (h \ast s)^2 ]$$</span>
Using Parseval's theorem and the convolution theorem, I can write
<span class="math-container">$$ \sigma^2_y = \mathbb{E}[ (H X)^2 ]$$</span></p>
<p>However if I write everything directly in the frequency domain, as
<span class="math-container">$$\mathcal{P}(Y) = \mathbb{E}[ H^2 X^2 ] = | H |^2 \mathbb{E}[X^2] = | H |^2 \mathcal{P}(X)$$</span>
where <span class="math-container">$H$</span> and <span class="math-container">$X$</span> are the DFT of <span class="math-container">$h$</span> and <span class="math-container">$s$</span> respectively, while <span class="math-container">$\mathcal{P}(\cdot)$</span> is the PSD operator. </p>
<p>And here I am not sure how to continue, since the PSD is defined over frequencies, while the variance of the signal is a scalar.</p>
<p>Thanks</p>
| 421
|
|
convolution
|
What does shift and multiply-accumulate mean in terms of Convolutional Neural Networks?
|
https://dsp.stackexchange.com/questions/78079/what-does-shift-and-multiply-accumulate-mean-in-terms-of-convolutional-neural-ne
|
<p>While reading this <a href="https://arxiv.org/pdf/1811.08383.pdf" rel="nofollow noreferrer">paper</a>, I came across the following paragraph -</p>
<p>"Our intuition is: the convolution operation consists of shift and multiply-accumulate.
We shift in the time dimension by ±1 and fold the multiply-accumulate from time dimension to channel dimension."</p>
<p>Can someone please explain what these terms mean?</p>
| 422
|
|
convolution
|
Convolution problem
|
https://dsp.stackexchange.com/questions/32447/convolution-problem
|
<p>This will be maybe quite easy fore somebody but I am not sure how to solve it. If I have a signal which is equal to</p>
<p>$$
y(n)=x(n)\star g(n), \quad n\in[0,1,...,N]
$$
where $\star$ is convolution operator, how do I get expression for taking every $K^{\textrm{th}}$ sample of $y(n)$, i.e., $y(Kn)$?</p>
|
<blockquote>
<p>how do I get expression for taking every Kth sample of y(n), i.e., y(Kn)?</p>
</blockquote>
<p>What I understand from this is you want a <strong>notation</strong> that represents $y[Kn]$ as a <strong>convolution</strong> operator. That doesn't exist, or at least I can't remember one.</p>
<p>For example the notation: $$y[Kn] = x[Kn]*g[n]$$ is <strong>wrong</strong>.</p>
<p>The following is <strong>wrong</strong> either: $$y[Kn] = x[Kn]*g[Kn]$$ </p>
<p>Fundamentally $y[Kn]$ is defined as the <strong>samples</strong> of $y[n]$: $$y[Kn] = y[n]\big|_{n=Kn} = \sum_{k}{x[k]g[Kn-k]}$$</p>
<p>This last sum <strong>cannot</strong> be defined in a compact and simple manner by using the base signals $x[n]$,$g[n]$ and <strong>convolution</strong> operator alone. Rather the preferred way is to define $y[n] = x[n]*g[n]$ and indicate $y[Kn]$ to be used explicitly.</p>
<p>I strongly suggest you look at books and papers on <strong>multirate signal processing</strong>, in which such operations are abundant and some dedicated notation may have been introduced. </p>
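<p>A small numpy sketch of the point above, with made-up example data: decimating the output of the convolution is not the same as convolving decimated inputs.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
g = rng.standard_normal(8)
K = 2

# The safe route: compute the full convolution, then take every K-th sample
y = np.convolve(x, g)
y_decimated = y[::K]                       # this is y[Kn]

# Convolving the decimated inputs does NOT give the same thing
wrong = np.convolve(x[::K], g[::K])
print(np.allclose(y_decimated[:len(wrong)], wrong))  # False
```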
| 423
|
convolution
|
the sub-range of circular and linear convolution
|
https://dsp.stackexchange.com/questions/37107/the-sub-range-of-circular-and-linear-convolution
|
<p>circular convolution $x_{_3p}[n]$ = $x_1[n]~\circledast_N ~x_2[n]$</p>
<p>is a periodic version of the linear convolution $x_{_3}[n]=x_1[n] * x_2[n]$ </p>
<p>The length of $x_1[n]$ and $x_2[n]$ are $L$ ($n\in[0,\ldots,L-1]$) and $P$ ($n\in[0,\ldots,P-1]$) points, respectively.</p>
<p>The condition on $N$ that makes $x_{_3p}[n] = x_{_3}[n]$ for $n \in [0,\ldots N-1]$
is $N\geq L+P-1$, right?</p>
<p>My question is: If $N=L$, what is the sub-range of $[0,\ldots,N-1]$ that $x_{_3p}[n]=x_{_3}[n]$?</p>
|
<p>Given an <span class="math-container">$L_x$</span>-point discrete-time sequence <span class="math-container">$x[n]$</span>, nonzero for the range <span class="math-container">$0 \leq n < L_x$</span>, and <span class="math-container">$L_y$</span>-point sequence <span class="math-container">$y[n]$</span>, nonzero for the range <span class="math-container">$0 \leq n < L_y$</span>, their <strong>linear</strong> convolution
<span class="math-container">$$ z[n] = x[n] \star y[n] = \sum_{k=-\infty}^{\infty} x[k]y[n-k] ~~~,~~~0 \leq n < L_z$$</span></p>
<p>will have <span class="math-container">$L_z = L_x + L_y - 1$</span> samples.</p>
<p>Also their <span class="math-container">$N$</span>-point <strong>circular</strong> convolution is defined as:
<span class="math-container">$\newcommand{\circled}[1]{ \require{enclose}
\enclose{circle}{#1} }$</span></p>
<p><span class="math-container">$$w[n] = x[n] \circled{N} y[n] = \sum_{k \in <N>} x[(k)_N]y[(n-k)_N] ~~~,~~ 0 \leq n < N.$$</span></p>
<p>which uses modulo-<span class="math-container">$N$</span> arguments to effectively interpret the sequences as periodic with <span class="math-container">$N$</span>.</p>
<p>Since, most typically, the circular convolution is used to implement a linear convolution, using the DFT property: <span class="math-container">$$x[n] \circled{N} y[n] \longleftrightarrow X[k]Y[k] $$</span> where <span class="math-container">$X[k]$</span> and <span class="math-container">$Y[k]$</span> are the <span class="math-container">$N$</span>-point DFTs of <span class="math-container">$x[n]$</span> and <span class="math-container">$y[n]$</span>, then we are interested in the relation between <span class="math-container">$z[n]$</span> and <span class="math-container">$w[n]$</span>; i.e., what's the range of <span class="math-container">$n$</span> for which they are the same ?</p>
<p>The answer depends on <span class="math-container">$L_z$</span> and <span class="math-container">$N$</span>:</p>
<ul>
<li>if <span class="math-container">$~~ L_z \leq N ~~~ $</span> then <span class="math-container">$w[n] = \begin{cases} { z[n] ~~~,~~~ 0 \leq n < L_z \\ ~0~ ~~~~~,~~~~ L_z \leq n < N }\end{cases} $</span></li>
</ul>
<p><span class="math-container">$$\\\\$$</span></p>
<ul>
<li>if <span class="math-container">$~~ N < L_z ~~~ $</span> then <span class="math-container">$w[n] = \begin{cases} { \text{aliased} ~~~,~~~ 0 \leq n < L_z-N \\ z[n]~ ~~~,~~~~ L_z-N \leq n < N }\end{cases}$</span></li>
</ul>
<p><span class="math-container">$$\\\\$$</span></p>
<p>In the second case, if <span class="math-container">$L_z - N \geq N$</span> or <span class="math-container">$ 2N \leq L_z$</span> there will be no matching samples between <span class="math-container">$w[n]$</span> and <span class="math-container">$z[n]$</span>.</p>
<p>The following Matlab stem-plot shows the matching and unmatching samples (forced to zero for clarity of display) between linear and circular convolutions of sequences of length <span class="math-container">$L_x = 32$</span> and <span class="math-container">$L_y=10$</span>, with modulus <span class="math-container">$N=25$</span>. It also plots the full extended result of the linear convolution <span class="math-container">$z[n]$</span> of length <span class="math-container">$L_z = 41$</span> samples.</p>
<p><a href="https://i.sstatic.net/oRD2H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oRD2H.png" alt="enter image description here" /></a></p>
<p>Since <span class="math-container">$N < L_z$</span>, the first <span class="math-container">$L_z-N = 16$</span> samples of <span class="math-container">$w[n]$</span> in the range <span class="math-container">$0 \leq n < 16$</span> will be aliased, and only the remaining <span class="math-container">$N-(L_z-N) = 2N-L_z = 9$</span> samples in the range <span class="math-container">$ 16 \leq n < 25$</span> will be equal to <span class="math-container">$z[n]$</span>.</p>
<p>Note, the circular convolution just has <span class="math-container">$N=25$</span> samples only, which is periodically extending. I've set the initial <span class="math-container">$16$</span> aliased samples of <span class="math-container">$w[n]$</span> to zero for clarity of display. Also plotted those 16 aliased sample locations on together with the last 16 sample of linear convolution which has a length of <span class="math-container">$41$</span>. Hence the last plotted <span class="math-container">$16$</span> forced-zeros of the circular convolution actually belong to the first <span class="math-container">$16$</span> samples of the next period of the periodic result of the circular convolution...</p>
| 424
|
convolution
|
Why does convolution reverb work?
|
https://dsp.stackexchange.com/questions/79604/why-does-convolution-reverb-work
|
<p>I've just begun learning about signal processing on my own, and after reading about convolution I'm curious about <em>why</em> convolution reverb works. That is given a recorded impulse <span class="math-container">$\hat{f}$</span> and an audio signal <span class="math-container">$g$</span>, why does the convolution <span class="math-container">$$h = \hat{f} \circledast g$$</span> produce an audio signal which sounds like the signal <span class="math-container">$g$</span> was recorded in the environment <span class="math-container">$\hat{f}$</span> was recorded (based on <span class="math-container">$\hat{f}$</span>)? If this question is better suited for sound design/physics stack exchange, feel free to redirect me!</p>
|
<p>A room consists of many hard surfaces. When you generate a wideband click sound in that room («perturbations about the mean pressure»), those waves will travel into the room, be reflected by surfaces, travel once more, be re-reflected etc. As time goes by, the wave tends to diminish due to spherical expansion, and because of losses in reflections (and in the air).</p>
<p>For any observer in the room, some set of reflected waves will hit his ears. This is the reverb as a function of space, time(-shift) and an impulse input. Because the function is close enough to linear, you can generalize to any input by convolving with the impulse response. Ie treating the input as lots of little impulses and sum a scaled and shifted set of impulse responses tracking the input.</p>
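<p>A minimal numpy sketch of this linearity argument, using a made-up (synthetic) impulse response rather than a measured room response: the reverb of a sum of sources equals the sum of their individual reverbs, which is exactly what licenses convolution reverb.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 8000
t = np.arange(0, 0.5, 1 / fs)

# Toy room impulse response: a direct path plus decaying random "reflections"
ir = np.exp(-6 * t) * rng.standard_normal(len(t))
ir[0] = 1.0

dry1 = np.sin(2 * np.pi * 440 * t)     # two dry sources
dry2 = np.sin(2 * np.pi * 660 * t)

# Linearity: reverb(a + b) == reverb(a) + reverb(b)
wet_sum = np.convolve(dry1 + dry2, ir)
sum_wet = np.convolve(dry1, ir) + np.convolve(dry2, ir)
print(np.allclose(wet_sum, sum_wet))   # True
```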
| 425
|
convolution
|
Plot the sum frequency generation spectrum using convolution MATLAB
|
https://dsp.stackexchange.com/questions/80989/plot-the-sum-frequency-generation-spectrum-using-convolution-matlab
|
<p>I am attempting to calculate the spectrum of a pulse that has undergone sum-frequency generation (in this case it is a gaussian, so it is correct to also say frequency doubling/Second harmonic generation). The SHG signal in the frequency domain is given as,</p>
<p><span class="math-container">$$E_{SHG}(2\omega) = E_1(\omega)*E_2(\omega)$$</span></p>
<p>Therefore a signals SHG spectrum is just an autoconvolution of the original spectrum.</p>
<p>However, I am unfamiliar with the practical use of discrete convolution and do not know how to transform the new x-axis into a suitable vector for plotting.</p>
<pre><code>clear all; close all;
dt = 0.01;
x = 200:dt:1000; %Frequency axis (THz)
%Generate Stokes Profile
width_stokes = 20;
% center frequency
f = 500;
Es = exp(-(x-f).^2/width_stokes^2);
Es=Es./max(Es);
plot(x,Es);
title("Stokes spectrum");
SHG = conv(Es,Es,'same');
SHG = SHG./max(SHG);
figure
% New x-axis for SHG plot
x1 = (1:length(SHG));
plot(x1,SHG)
xlabel('frequency (A.U.)')
</code></pre>
|
<p>You have a gaussian centered at 500 THz. We would expect the convolution to have a single gaussian centered at 1000 THz.</p>
<p>A linear convolution of two sequences of N points each will have a length of 2*N-1 samples. You have the added complication that your frequency vectors don't start at 0Hz. One way to fix this would be to have them simply start at zero</p>
<pre><code>dt = 0.01;
x = 0:dt:1000; %Frequency axis (THz) starting at 0
N = length(x);
freqAxisConvolution = dt*(0:2*N-2);
</code></pre>
<p>Alternatively, you can just calculate the offset. If both vectors were unit impulses (starting at 200 THz) the convolution would be a unit impulse (starting at 400 THz), so the offset is simply the sum of the individual offsets. In other words, if each vector spans from 200 THz to 1000 THz, the convolution will span from 400 THz to 2000 THz.</p>
<p>Here is the full thing</p>
<pre><code>%%
close all
dt = 0.01;
x = 200:dt:1000; %Frequency axis (THz)
%Generate Stokes Profile
width_stokes = 20;
% center frequency
f = 500;
Es = exp(-(x-f).^2/width_stokes^2);
Es=Es./max(Es);
plot(x,Es);
title("Stokes spectrum");
% discrete convolution produces 2*N-1 output samples
SHG = conv(Es,Es,'full');
SHG = SHG./max(SHG);
figure
% X axis: spans the sum of the original axes
N = length(Es);
freqAxis = 2*x(1)+(0:2*N-2)*dt;
plot(freqAxis,SHG)
grid on
xlabel('frequency in THz');
</code></pre>
| 426
|
convolution
|
The proof of the dual convolution/multiplication properties?
|
https://dsp.stackexchange.com/questions/53921/the-proof-of-the-dual-convolution-multiplication-properties
|
<p>I've been trying to find a rigorous proof of the dual convolution / multiplication, but I found nothing, can you give me a hand with this?</p>
<p><span class="math-container">\begin{align}
f(t) * g(t) &\overset{\mathcal F}{\iff} F(j\omega)G(j\omega)\\
f(t)g(t) &\overset{\mathcal F}{\iff}\frac1{2\pi} F(j\omega) * G(j\omega)\\
\end{align}</span></p>
|
<p>Just do the double integration:</p>
<p><span class="math-container">$$\begin{align*}\mathscr{F}\left\{f(t) * g(t)\right\} &= \mathscr{F}\left\{\int_{-\infty}^\infty f(\tau)g(t-\tau)d\tau\right\} \\
\\
&= \int_{-\infty}^\infty\left[\int_{-\infty}^\infty f(\tau)g(t-\tau)d\tau\right]e^{-j\omega t}dt\\
\\
&= \int_{-\infty}^\infty f(\tau)\left[\int_{-\infty}^\infty g(t-\tau)e^{-j\omega t}dt\right]d\tau\\
\\
&= \int_{-\infty}^\infty f(\tau)e^{-j\omega \tau}G(\omega)d\tau\\
\\
&= F(\omega)G(\omega)\\
\end{align*}$$</span></p>
<p>The above derivation used Fubini's Theorem to switch the order of integration and the Fourier Transform Shift Theorem.</p>
<p>The proof for convolution in the frequency domain is analogous to the one above.</p>
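<p>A quick numeric sanity check of the theorem (the discrete analogue, with zero-padding to the full linear-convolution length so that no circular aliasing occurs):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal(16)
g = rng.standard_normal(16)

# Zero-pad to length N = len(f) + len(g) - 1 so the DFT product
# corresponds to the *linear* convolution
N = len(f) + len(g) - 1
lhs = np.fft.fft(np.convolve(f, g))          # F{f * g}
rhs = np.fft.fft(f, N) * np.fft.fft(g, N)    # F{f} . F{g}
print(np.allclose(lhs, rhs))  # True
```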
| 427
|
convolution
|
Convolving two signals
|
https://dsp.stackexchange.com/questions/3045/convolving-two-signals
|
<p>I saw a video where this guy used a program to do a frequency analysis on a voice signal and a sawtooth wave (I'm assuming this was FFT). Then he saved the plots as images and combined them pixel by pixel through multiplication using photoshop. He then put this picture back into the program and it did the inverse transform, turning it back into a sound. He said that this was an implementation of a vocoder but the low quality made it hard for me to tell if he was correct. </p>
<p>If so, then couldn't one simply implement a vocoder as a convolution operation? You would just have your two signals then select a window from each of these of the same width at the same position, then perform the convolution operation on these two windows (and probably use a window function as well, such as Hanning). You would, of course, have to do this for every sample, so you would be doing this as many times as you have samples in your tracks (and the windows would sometimes lie partially outside of a track, so they would have to be zero padded). </p>
<p>This seems like it might work because the convolution theorem says convolution in the time domain is itemwise multiplication in the frequency domain, so if it doesn't implement a vocoder, it at least implements exactly what the man in the video was doing (at higher quality). And, I'm not just asking this question blindly, I actually tried it. I get a very cool voice effect but I'm not sure it's the same as a vocoder. In fact, it sounds nothing like a vocoder. If some kind individual could tell me exactly what is happening here, I would be very grateful.</p>
|
<p>From your description, here is what is happening in the video:</p>
<ul>
<li>The short-term Fourier transform (aka spectrogram) of the signals is computed. The output of this operation is a matrix of complex values, which cannot be represented as images. Thus, the magnitude or the square of the magnitude is extracted to yield a single positive real value converted into a pixel intensity.</li>
<li>The magnitudes (pixel intensities) are multiplied.</li>
<li>The inverse short-term Fourier transform of the product is synthesized, presumably with made-up phases (or with the original phase of the carrier).</li>
</ul>
<p>There are so many reasons why this has nothing to do with a convolution. I'm sure other members will point a few more:</p>
<ul>
<li>What is multiplied are magnitudes or their squares; but not the actual complex values produced by the Fourier analysis.</li>
<li>Even if the quantities that were multiplied were the actual complex values, keep in mind that when dealing with discrete signals of finite length, the inverse discrete Fourier transform of the product of the discrete Fourier transforms is the <a href="http://en.wikipedia.org/wiki/Circular_convolution" rel="nofollow">Circular Convolution</a>, not the convolution.</li>
<li>And still... If you split two signals $x$ and $y$ into blocks of length N, and compute the pair-wise circular-convolution of length N of these blocks, you'll get something very different from $x * y$.</li>
</ul>
<p>Convolving two audio signals is a rather meaningless operation. One usually convolves an audio signal with the impulse response of a system; and when convolution is performed in the frequency domain, the details are trickier than just pair-wise multiplication of FFTs (there are blocking issues, plus the necessary overlap-add or overlap-save).</p>
<p>What you saw would be more accurately described as spectral cross-synthesis. And there is one major reason why this is a very different beast from a vocoder. The goal of a vocoder is to apply the <em>spectral envelope</em> of the modulator onto the carrier. I emphasize spectral envelope, because when you use a vocoder to make a stack of sawtooth waves say "hello", you apply the formants and overall loudness envelope of the original speech signal to the saw waves, but the last thing you want is the pitch information of the speech signal to get involved. The analysis filter bank of the vocoder should be designed to abstract away the individual spectral peaks in the modulator signal - what matters is the rough spectral envelope - the location of the bumps (formants). This is why a vocoder doesn't need more than 12-30 channels - too few channels and it doesn't capture the formants, too many channels and it starts capturing pitch-dependent fine spectral peaks. Just like the features used in speech recognition...</p>
<p>Let me give you an example. Let us say you have a modulator speech signal with a 200 Hz f0 ; and formants at 1kHz and 1.5kHz - its spectrum is a sequence of narrow peaks at 200 Hz, 400 Hz, 600Hz, 800 Hz, 1000 Hz, 1200 Hz, 1400 Hz, 1600 Hz, 1800 Hz ; with the peaks at 1000 Hz and 1400 / 1600 Hz emphasized (formants). Let's say your carrier signal is a 140 Hz sawtooth - the spectrum is made of narrow peaks at 140 Hz, 280 Hz, 420 Hz... with a decreasing $1/n$ amplitude. If you go with what you described in your question (product of STFTs), you wouldn't have much left because the two spectra have very little overlap - the first common frequency in the sequence would be at 1400 Hz! What you want to do is somehow capture that the modulator spectrum has a bump at 1kHz and 1.5kHz, and use this information to boost the carrier spectrum in this frequency area. That's how vocoders work.</p>
<p>To do so, and if you really want to go the STFT route, an option would be to smooth the modulator spectrum with a rather wide kernel before doing the multiplication - so that what you are really doing is applying the spectral envelope of one signal onto the other one. This would be akin to applying a motion-blur on the Y axis in Photoshop using this silly image manipulation presentation...</p>
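<p>A rough numpy sketch of that smoothing idea (all signal parameters here are made up for illustration): smoothing the magnitude spectrum with a wide kernel keeps the formant bump while blurring away the pitch-dependent narrow harmonic peaks.</p>

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.1, 1 / fs)

# Toy "modulator": a 200 Hz harmonic series whose amplitudes have
# a bump (a formant) around 1 kHz
mod = sum(np.exp(-((k * 200 - 1000) / 400) ** 2) * np.sin(2 * np.pi * k * 200 * t)
          for k in range(1, 15))
mag = np.abs(np.fft.rfft(mod))

# Smooth the magnitude spectrum with a wide (~400 Hz) kernel
# to recover a rough spectral envelope
width = 40                     # bins; ~400 Hz at this FFT resolution
kernel = np.hanning(width)
kernel /= kernel.sum()
envelope = np.convolve(mag, kernel, mode='same')
print(len(envelope) == len(mag), envelope.min() >= 0)  # True True
```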
| 428
|
convolution
|
What type of circuit is responsible for convolution in the classic analog telephone?
|
https://dsp.stackexchange.com/questions/2891/what-type-of-circuit-is-responsible-for-convolution-in-the-classic-analog-teleph
|
<p>I'm interested in learning how telephones work, so I did a little bit of reading about signal processing. When I came up with the word convolution, I quickly realized the importance of this term.</p>
<p>To begin with, I want to know how classic analog telephones worked. The apparent simplicity of their design appeals to me.</p>
<p>What kind of circuit was responsible for convolving the microphone signal in a classic telephone receiver, so that this signal could be transmitted over wires?</p>
|
<p>Convolution is a mathematical abstraction describing how a linear, time-invariant system affects a signal going through it.</p>
<p>Sometimes one explicitly designs a system to convolve a signal by a predefined impulse response (for example when building a digital filter); but more often than not, convolution is used to <em>model</em> various physical processes involved in a system. These physical processes can be transmission delays or dispersions, the limited bandwidth of an amplifier, transmission medium or electro-mechanical system, the explicit use of passive R, L, C network to achieve some filtering, etc... Thus, it might not be out of place to find convolution used in the description of some elements of a telephone system, but it's a modeling tool, not an actual process.</p>
<p>Here is an analogy: the trajectory of a cannonball is a parabola, but it makes little sense to ask which mechanical device in a cannon is responsible for computing $y = ax^2 + b$.</p>
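To make the "modeling tool" point concrete, here is a small sketch (component values chosen arbitrarily) of how an RC low-pass - one of the passive networks mentioned above - can be <em>modeled</em> by convolution, even though no circuit element "computes" anything:

```python
import numpy as np

# A passive RC low-pass has impulse response h(t) = (1/RC) * exp(-t/RC).
# Convolving an input with a sampled h predicts the circuit's output.
fs = 100_000                          # sample rate, Hz
RC = 1e-3                             # time constant, s
t = np.arange(0, 10 * RC, 1 / fs)
h = (1 / RC) * np.exp(-t / RC) / fs   # /fs: the dt factor of the Riemann sum

x = np.ones_like(t)                   # unit step input
y = np.convolve(x, h)[:len(t)]        # modeled step response

# Analytically the step response is 1 - exp(-t/RC); compare at t = RC.
assert abs(y[int(RC * fs)] - (1 - np.exp(-1))) < 0.02
```

The convolution here is a description of what the resistor and capacitor do physically, not a computation any part of the circuit performs - which is exactly the cannonball point.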
| 429
|
convolution
|
Can two nonzero signals $x[n]$ and $y[n]$ give a zero convolution
|
https://dsp.stackexchange.com/questions/11262/can-two-nonzero-signals-xn-and-yn-give-a-zero-convolution
|
<p>Suppose $x[n]$ and $y[n]$ are two nonzero signals (i.e., $x[n] \neq 0$ for at least one value of $n$, and similarly for $y[n]$). Can the convolution of $x[n]$ and $y[n]$ result in an identically zero signal? In other words, is it possible that $\displaystyle\sum_{k = -\infty}^{k = +\infty}x[k]y[n-k] = 0$ for all $n$?</p>
|
<p>Yes, for example let</p>
<p>$$x[k]=1$$</p>
<p>for all $k$ and</p>
<p>$$y[k] = \begin{cases}1 & k=0\\-1 & k=1\\0 & \textrm{otherwise} \end{cases}$$</p>
<p>It is easy to see that the convolution is zero for all values of $n$: for every $n$, the sum picks up $y$'s $+1$ and $-1$ exactly once each (both multiplied by the constant $x$), so the two terms cancel.</p>
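A quick numerical check of this example (using a finite slice of the constant signal, so only the two truncation edges are nonzero):

```python
import numpy as np

x = np.ones(10)          # a finite slice of the all-ones signal x[k] = 1
y = np.array([1, -1])    # y[0] = 1, y[1] = -1, zero elsewhere

c = np.convolve(x, y)    # full linear convolution
# Wherever y fully overlaps the (truncated) constant signal, the +1 and -1
# cancel; the nonzero first and last samples are only edge effects.
assert c[0] == 1 and c[-1] == -1
assert np.all(c[1:-1] == 0)
```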
| 430
|
convolution
|
Basic question: Why is the output of a system the convolution between the impulse response and the input?
|
https://dsp.stackexchange.com/questions/20455/basic-question-why-is-the-output-of-a-system-the-convolution-between-the-impuls
|
<p>I forgot a very simple fact and I am now struggling to find a reference that proves this basic property.</p>
<p>How would you prove that, for a single-input single-output system, the system output is the impulse response convolved with the input?</p>
|
<p>Because the impulse response is, by definition, the response of the system when a unit impulse (delta function) is applied to its input. Any input signal can be written as a sum of scaled, shifted impulses; so if you scale and shift a copy of the impulse response for each input sample and then add them all up, you get the overall output of the system - which is exactly the convolution sum. This is only true if the system is linear and time-invariant, i.e. it satisfies homogeneity, additivity, and shift invariance. For more details and to understand the mathematics behind this, you can go through this link: <a href="http://www.dspguide.com/ch6.htm" rel="nofollow">http://www.dspguide.com/ch6.htm</a></p>
<p>I tried to post this as a comment instead of an answer, since it's not detailed enough; however, due to my lack of reputation, I could not comment. Anyway, the link I have provided is very useful. Hope it helps!</p>
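A tiny numerical sketch of the superposition argument above (the impulse response and input values are arbitrary):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])   # impulse response of some LTI system
x = np.array([2.0, -1.0, 3.0])   # arbitrary input samples

# Superposition: every input sample x[k] launches a copy of h,
# scaled by x[k] and delayed by k samples; the output is their sum.
y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y[k:k + len(h)] += xk * h

assert np.allclose(y, np.convolve(x, h))  # exactly the convolution sum
```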
| 431
|
convolution
|
How can I calculate the cyclic (periodic) convolution?
|
https://dsp.stackexchange.com/questions/7979/how-can-i-calculate-the-cyclic-periodic-convolution
|
<p>I'd like to understand how to calculate the cyclic convolution as well as understand what that means exactly. How should I go about finding the output for various periods for a system?</p>
<p>I have an example: </p>
<p>$$
x(n) =
\begin{cases}
n & \textrm{for} \quad 1 \leq n \leq 3\\ 0 &\textrm{otherwise}\end{cases}
$$</p>
<p>$$
h(n) =
\begin{cases}
n & \textrm{for} \quad 1 \leq n \leq 2\\ 0 & \textrm{otherwise}
\end{cases}
$$</p>
<p>If I perform the convolution, then I get the following values for $y(n)$:</p>
<p>$$y(2) = 1;\quad y(3) = 4;\quad y(4) = 7;\quad y(5) = 6$$</p>
<p>Now, if I want a period = 3, then:</p>
<p>$$x(n) = x(n+3k) \quad\textrm{and}\quad h(n) = h(n+3k)$$</p>
<p>At this point, I'm unsure of what to do to get values that correspond to a period.</p>
|
<p>People generally define the cyclic convolution of <em>periodic</em> sequences
$x$ and $h$ of period $N$ as
$$y[n] = \sum_{m=0}^{N-1}x[m]h[n-m], n = 0, 1, \ldots, N-1.\tag{1}$$
Note that the above expression consists of $N$ different sums that
you have to compute, and if while computing any particular sum,
the value of $n-m$ is not in the range of numbers for which you
are given $h[\cdot]$, then you use the periodicity ($h[n-m] = h[n-m+N]$
or $h[n-m] = h[n-m-N]$) to get the argument into the range for which
you know the value of $h[\cdot]$.
Also, note that $(1)$ holds for <em>all</em> integers $n$, but we don't
need to <em>calculate</em> more than $N$ sums like $(1)$ because
$y[n]$ is <em>also</em> a periodic sequence of period $N$ and so we have
for any integer
$M$ that $y[M] = y[M \bmod N]$ where, of course, $0 \leq M \bmod N \leq N-1$.</p>
<p>Exercise: <strong>write out</strong> the above formula <strong>explicitly</strong>, meaning
no summations, for $n = 0, 1, 2$ and proceed from there. Go on;
you can do it. There are only three sums of three terms each.</p>
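Carrying out the exercise numerically for the signals in the question (indexing assumption: one period is tabulated at $n = 0, 1, 2$ using the periodic extensions, so $x(0) = x(3) = 3$ and $h(0) = h(3) = 0$):

```python
import numpy as np

N = 3
x = np.array([3.0, 1.0, 2.0])   # x(0)=x(3)=3, x(1)=1, x(2)=2
h = np.array([0.0, 1.0, 2.0])   # h(0)=h(3)=0, h(1)=1, h(2)=2

# Direct cyclic convolution: fold the index n - m back into 0..N-1.
y = np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])
print(y)  # [4. 7. 7.]

# Cross-check: cyclic convolution <=> pointwise product of DFTs.
y_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
assert np.allclose(y, y_dft)
```

This agrees with aliasing the question's linear convolution modulo 3: $y(3)=4$ folds to $n=0$, $y(4)=7$ to $n=1$, and $y(2)+y(5)=1+6=7$ to $n=2$.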
| 432
|
convolution
|
Applications or physical interpretation of auto-convolution?
|
https://dsp.stackexchange.com/questions/22463/applications-or-physical-interpretation-of-auto-convolution
|
<p>I wonder if anyone has any experience with auto-convolution. In particular, I'm interested in understanding its physical interpretation.
I understand what convolution, correlation, and auto-correlation are; I'm also aware that the definition of auto-convolution is something like
$$f\ast f = \int_{-\infty}^{\infty} f(\tau)f(t-\tau)d\tau $$
but I still don't get its implications or meaning. I've been looking for a while and so far I haven't found any good or detailed explanation (in contrast with auto-correlation).
So, if anyone has experience dealing with this topic or has an intuitive interpretation to share, I'd appreciate it. Thanks.</p>
|
<p>Autoconvolution is used in signal detection, but the way you've written it is not correct. Suppose you're trying to detect a signal $f(t)$ by filtering with $h(t)$.</p>
<p>$y(t) = f(t) \ast h(t)$</p>
<p>You want to maximize your response to the signal $f(t)$. We can do this by maximizing the correlation coefficient between $f$ and $h$. Here the correlation is time-varying, so we'll maximize the <em>average</em> correlation coefficient. We'll assume the $f$ and $h$ signals have DC values of zero for simplicity. I'll use $\mu_y$ to denote the average value of the response, $y$.</p>
<p>$h = \arg\max_{h} \;\; \mu_y\big(E_f E_h \big)^{-\frac{1}{2}}$</p>
<p>Let's take a differential of our performance $J \propto \mu_y\big(E_f E_h \big)^{-\frac{1}{2}}$ with respect to $h$:</p>
<p>$\partial_{h(\tau)} J \propto \partial_{h(\tau)} \bigg(\mu_y\big(E_f E_h \big)^{-\frac{1}{2}}\bigg)$</p>
<p>$\ \ \ \ \ \ \ \ \ = \bigg(\partial_{h(\tau)}\mu_y\bigg)\big(E_fE_h\big)^{-\frac{1}{2}} -\frac{1}{2}\mu_yE_f^{-\frac{1}{2}}E_h^{-\frac{3}{2}}\bigg(\partial_{h(\tau)}E_h\bigg)$</p>
<p>$\ \ \ \ \ \ \ \ \ \propto \bigg(\partial_{h(\tau)}\frac{1}{|dom(f)|}\int_{-\infty}^{\infty}f(t-\tau)h(\tau)d\tau \bigg)\big(E_fE_h\big)^{-\frac{1}{2}} -\frac{1}{2}\mu_yE_f^{-\frac{1}{2}}E_h^{-\frac{3}{2}}\bigg(\partial_{h(\tau)}E_h\bigg)$</p>
<p>$\ \ \ \ \ \ \ \ \ = \bigg(\frac{1}{|dom(f)|}\int_{-\infty}^{\infty} f(t-\tau)d\tau \bigg)\big(E_fE_h\big)^{-\frac{1}{2}} -\frac{1}{2}\mu_yE_f^{-\frac{1}{2}}E_h^{-\frac{3}{2}}\bigg(\partial_{h(\tau)}E_h\bigg)$</p>
<p>$\ \ \ \ \ \ \ \ \ = \big(E_fE_h\big)^{-\frac{1}{2}} \frac{1}{|dom(f)|}\int_{-\infty}^{\infty} f(t-\tau)d\tau-\frac{1}{2}\mu_yE_f^{-\frac{1}{2}}E_h^{-\frac{3}{2}}\bigg(\partial_{h(\tau)}\int_{-\infty}^{\infty} h^2(\tau)d\tau \bigg)$</p>
<p>$\ \ \ \ \ \ \ \ \ = \frac{1}{|dom(f)|}\big(E_fE_h\big)^{-\frac{1}{2}} \int_{-\infty}^{\infty} f(t-\tau)d\tau-\mu_yE_f^{-\frac{1}{2}}E_h^{-\frac{3}{2}}\int_{-\infty}^{\infty} h(\tau)d\tau$</p>
<p>$\ \ \ \ \ \ \ \ \ \propto \frac{1}{|dom(f)|}\int_{-\infty}^{\infty} f(t-\tau)d\tau-\mu_yE_h^{-1}\int_{-\infty}^{\infty} h(\tau)d\tau$</p>
<p>$\ \ \ \ \ \ \ \ \ = \int_{-\infty}^{\infty} \bigg(\frac{1}{|dom(f)|}f(t-\tau)-h(\tau)\mu_yE_h^{-1}\bigg)d\tau$</p>
<p>At the extremum of $J$ we shouldn't settle for the trivial solution $h(\tau) = 0$; instead we'll enforce $$\frac{1}{|dom(f)|}f(t-\tau)-h(\tau)\mu_yE_h^{-1} = 0$$</p>
<p>and pretty readily you get </p>
<p>$$h(\tau) = \frac{E_h}{\mu_y|dom(f)|}f(t-\tau)$$</p>
<p>Or most importantly</p>
<p>$$h(\tau) \propto f(t-\tau) \quad \text{where } \tau \text{ is time and } t \text{ is the delay}$$</p>
<p>or, using perhaps better "variable names"</p>
<p>$$h(t) \propto f(t_d-t) \quad \text{where } t \text{ is time and } t_d \text{ is the delay}$$</p>
<p>That is, if we want to maximize the correlation between our signal of interest and our detector with impulse response $h(t)$, we had better pick $h(t)$ to be a time-reversed and time-shifted version of that signal. In practice $t_d$ would probably be set to zero, as it just represents whenever the part of $f$ that you're looking for finally arrives.</p>
<p>Under this chosen $h(t) \propto f(t_d - t)$, your original question makes more sense. The autocorrelation signal becomes proportional to the accumulated energy of $f(t)$ that is seen by your filter.</p>
<p>$y(t) = f(t) \ast h(t)$</p>
<p>$y(t) = \int f(\tau) h(t-\tau)d\tau$</p>
<p>$y(t) \propto \int f(\tau) f(t_d-(t-\tau))d\tau$</p>
<p>$y(t) \propto \int f(\tau) f(\tau + t_d - t)d\tau$</p>
<p>$y(t) \propto R_f(t_d - t)$, the autocorrelation of $f$ at lag $t_d - t$ - which at $t = t_d$ is just the energy $\int f^2(\tau)\,d\tau$ seen by the filter.</p>
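A quick matched-filter sketch of this conclusion (the template shape, noise level, and delay below are made-up values): filtering with the time-reversed template makes the output peak where the template occurs in a noisy input.

```python
import numpy as np

rng = np.random.default_rng(0)

f = np.sin(2 * np.pi * np.arange(64) / 16) * np.hanning(64)  # template f(t)
x = rng.normal(0, 0.2, 500)                                   # background noise
delay = 200
x[delay:delay + len(f)] += f                                  # bury f in the noise

h = f[::-1]                          # h(t) = f(-t): time-reversed template
y = np.convolve(x, h, mode='valid')  # filter output = running correlation with f

peak = int(np.argmax(y))             # lands at (or right next to) `delay`
```

The convolution with the reversed template is exactly the running correlation of the input against $f$, so its maximum marks where the accumulated template energy is seen by the filter.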
| 433
|