wavelet transform
Why is a wavelet transform implemented as a filter bank?
https://dsp.stackexchange.com/questions/25408/why-is-a-wavelet-transform-implemented-as-a-filter-bank
<p>The mother wavelet function $\psi(t)$ must satisfy the following admissibility and zero-mean conditions:</p> <p>$$\int\limits_{-\infty}^{+\infty} \frac{|\hat\psi(\omega)|^2}{|\omega|} d \omega &lt; +\infty,$$ $$\hat\psi ( \omega ) \bigg|_{ \omega =0} =0,$$ and $$ \int\limits_{-\infty}^{+\infty} \psi(t) \ dt = 0,$$</p> <p>where $\hat\psi(\omega)$ is the Fourier transform of $\psi(t)$. These let it serve as the wavelet basis for the wavelet transform $$ \gamma (s, \tau ) = \int\limits_{-\infty}^{+\infty} f(t) \ \psi_{s, \tau }(t) \ dt,$$</p> <p>where $ \psi_{s, \tau }(t) \triangleq \psi\left( \frac{t-\tau}{s} \right)$.</p> <p>I understand that the wavelet must be an oscillatory function with no frequency content at $\omega=0$, so that its spectrum effectively resembles a band-pass filter. But from the equation of the wavelet series or wavelet transform, can you tell me why the wavelet transform is implemented as a filter bank? What is the intuition behind this, and what makes it possible?</p> <p>I am asking because if, in practice, the DWT is implemented as a filter bank, then it seems it is not a DWT anymore; it is just a set of low-pass and high-pass filters. It is mind-boggling.</p>
<p>First of all, the basic idea of the wavelet transform lies in multi-resolution analysis: the signal is examined at different scales.</p> <p>It is probably easier to understand this with images (which are 2D signals). Multi-resolution is like zooming in and out of a reference signal and comparing it with windows of the image whose size depends on the amount of zoom.</p> <p>Similarly, in 1D you are seeing how the mother wavelet at different scales (think of compressing and stretching it along the time axis) compares with the signal at different delays (at different points along the discrete axis).</p> <p>Now, when you look at a given set of discrete points you want to transform with a mother wavelet, it is easy to see the scaled wavelets as filters applied at different scales: each scaled, translated comparison is a convolution, i.e. a filtering operation.</p> <p>For a better understanding of wavelets and the DWT I suggest you read Polikar's tutorial: <a href="https://users.rowan.edu/%7Epolikar/WTtutorial.html" rel="nofollow noreferrer">Wavelet Tutorial</a></p>
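The cascade view can be made concrete in a few lines. Below is a minimal pure-Python sketch (the function name and test signal are invented for illustration) of one DWT level as a two-channel filter bank, using the Haar pair for brevity:

```python
import math

def haar_dwt_level(x):
    """One DWT level as a two-channel filter bank: convolve with the
    Haar low-pass/high-pass pair, then downsample by 2."""
    s = 1 / math.sqrt(2)
    lo = [(x[i] + x[i + 1]) * s for i in range(0, len(x) - 1, 2)]  # approximation
    hi = [(x[i] - x[i + 1]) * s for i in range(0, len(x) - 1, 2)]  # detail
    return lo, hi

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_dwt_level(x)

# The bank is orthogonal: energy is conserved across the two channels.
e_in = sum(v * v for v in x)
e_out = sum(v * v for v in approx) + sum(v * v for v in detail)
assert abs(e_in - e_out) < 1e-12

# Cascading the low-pass branch yields the coarser scales of the DWT.
approx2, detail2 = haar_dwt_level(approx)
```

Iterating only the low-pass branch is exactly the dyadic scaling the transform equation prescribes: each level halves the bandwidth and the sample rate.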
wavelet transform
Discrete Wavelet Transform: Specifics of Filter Bank
https://dsp.stackexchange.com/questions/53106/discrete-wavelet-transform-specifics-of-filter-bank
<p>So I have been given to understand that the discrete wavelet transform is able to provide both time and frequency resolution in ways that the classic Fourier transform, and even the short-time Fourier transform, cannot. By carrying out discrete convolutions of the wavelet at different scaling factors, one can perform this transform. Now, I have been reading about the implementation, and it seems this transformation is often carried out by convolving quadrature mirror filters with the signal, downsampling by a factor of two, then performing the same operation on the downsampled version to determine lower-frequency components.</p> <p>I am very confused about the particulars of why this works. It would appear that a particular QMF filter bank, a binomial filter bank designed by Ali Akansu, is able to perform the same transformation as the Daubechies wavelet: <a href="https://en.wikipedia.org/wiki/Binomial_QMF" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Binomial_QMF</a></p> <p>I am wondering if the same can be said for other wavelets. Are there binomial QMF filter banks designed to perform the discrete wavelet transforms for different families of basis wavelets? What are the mathematics behind this alternate approach to calculating discrete wavelet transforms, and what are the advantages of using this filter-bank technique (computing resources, etc.)?</p>
<p>The paper by H. Caglar and A. Akansu shows that other known wavelet families can also be designed using Bernstein polynomial approximation. Their examples in H. Caglar and A. N. Akansu, &quot;A Generalized Parametric PR-QMF Design Technique Based on Bernstein Polynomial Approximation,&quot; IEEE Transactions on Signal Processing, vol. 41, no. 7, pp. 2314-2321, July 1993, include most regular and coiflet wavelets in addition to Daubechies wavelets. See <a href="https://web.njit.edu/%7Eakansu/PAPERS/CaglarAkansuBernstein.pdf" rel="nofollow noreferrer">https://web.njit.edu/~akansu/PAPERS/CaglarAkansuBernstein.pdf</a></p>
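The perfect-reconstruction QMF conditions such designs must satisfy are easy to check numerically. A sketch for the standard 4-tap Daubechies filter (db2), with the high-pass built by the usual alternating-flip QMF relation:

```python
import math

r3 = math.sqrt(3)
# db2 (4-tap Daubechies) low-pass filter coefficients.
h = [(1 + r3) / (4 * math.sqrt(2)), (3 + r3) / (4 * math.sqrt(2)),
     (3 - r3) / (4 * math.sqrt(2)), (1 - r3) / (4 * math.sqrt(2))]
# Quadrature-mirror high-pass: alternating flip of the low-pass.
g = [(-1) ** n * h[len(h) - 1 - n] for n in range(len(h))]

unit_norm = sum(c * c for c in h)      # orthonormality: should be 1
dc_gain   = sum(h)                     # low-pass DC gain: should be sqrt(2)
shift2    = h[0] * h[2] + h[1] * h[3]  # double-shift orthogonality: should be 0
hp_dc     = sum(g)                     # high-pass kills DC: should be 0

assert abs(unit_norm - 1) < 1e-12
assert abs(dc_gain - math.sqrt(2)) < 1e-12
assert abs(shift2) < 1e-12
assert abs(hp_dc) < 1e-12
```

Any filter pair passing these checks (plus the vanishing-moment constraints) yields a two-channel bank that computes an orthogonal DWT, which is the sense in which the binomial QMF and db2 coincide.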
wavelet transform
What is the difference between the families of Discrete Wavelet Transforms?
https://dsp.stackexchange.com/questions/30870/what-the-difference-between-the-family-of-discrete-wavelet-transform
<p>When I used <em>Mathematica</em>, I found many transforms in the list of discrete wavelet transforms.</p> <p>For example:</p> <ol> <li>discrete wavelet transform (DWT)</li> <li>stationary wavelet transform (SWT)</li> <li>lifting wavelet transform (LWT)</li> <li>discrete wavelet packet transform (DWPT) </li> <li>stationary wavelet packet transform (SWPT)</li> </ol> <p>I always use the DWT, and found it can be used in many cases such as de-noising, filtering data, embedding watermarks, detecting discontinuities, and so on.</p> <p>Do different cases call for different transforms? If so, how do I know which to use?</p>
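One concrete difference among the listed transforms is decimation. A minimal pure-Python Haar sketch (helper names invented for illustration) contrasting the decimated DWT with the stationary (undecimated) SWT, whose shift-invariance is why it is often preferred for de-noising:

```python
import math

S = 1 / math.sqrt(2)

def haar_dwt(x):
    # Decimated: filter, then keep every other sample -> half-length outputs.
    lo = [(x[i] + x[i + 1]) * S for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) * S for i in range(0, len(x), 2)]
    return lo, hi

def haar_swt(x):
    # Stationary: no downsampling (circular boundary); outputs keep the
    # input length, which makes the transform shift-invariant.
    n = len(x)
    lo = [(x[i] + x[(i + 1) % n]) * S for i in range(n)]
    hi = [(x[i] - x[(i + 1) % n]) * S for i in range(n)]
    return lo, hi

x = [1.0, 3.0, 2.0, 5.0, 4.0, 0.0, 1.0, 2.0]
a_dwt, d_dwt = haar_dwt(x)
a_swt, d_swt = haar_swt(x)
assert len(a_dwt) == len(x) // 2 and len(a_swt) == len(x)

# Shift-invariance: shifting the input merely shifts the SWT coefficients,
# whereas DWT coefficients change entirely (the even/odd pairing moves).
xs = x[1:] + x[:1]
a_swt_s, _ = haar_swt(xs)
assert all(abs(a_swt_s[i] - a_swt[(i + 1) % len(x)]) < 1e-12
           for i in range(len(x)))
```

Roughly: DWT is the compact, non-redundant choice (compression, fast analysis); SWT trades redundancy for shift-invariance (de-noising, detection); LWT is an efficient factorized implementation of the DWT; and the packet variants also split the detail branches for finer frequency coverage.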
wavelet transform
Using continuous versus discrete wavelet transforms in digital applications
https://dsp.stackexchange.com/questions/8009/using-continuous-verses-discrete-wavelet-transform-in-digital-applications
<p>I am familiar with much of the mathematical background behind wavelets. However, when implementing algorithms on a computer with wavelets, I am less certain about whether I should be using continuous or discrete wavelets. In reality everything on a computer is discrete, of course, so it seems obvious that discrete wavelets are the right choice for digital signal processing. However, <a href="http://en.wikipedia.org/wiki/Continuous_wavelet_transform">according to Wikipedia</a>, it is the continuous wavelet transform that is primarily used in (digital) image compression as well as a large number of other digital data processing activities. What are the pros and cons to consider when deciding whether to use an (approximate) continuous wavelet transform instead of an (exact) discrete wavelet transform for digital image or signal processing?</p> <p>P.S. (Checking an assumption here.) I am assuming continuous wavelet transforms are used in digital processing by simply taking the value of the continuous wavelet at evenly spaced points and using the resulting sequence for wavelet computations. Is this correct?</p> <p>P.P.S. Usually Wikipedia is pretty precise about mathematics, so I am assuming that the applications in the article on the Continuous Wavelet Transform are in fact applications of the Continuous Wavelet Transform. Certainly it mentions some that are specifically CWT, so there is clearly some use of the CWT in digital applications.</p>
<p>As Mohammad stated already, the terms Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT) are a little misleading. They relate approximately as the (continuous) Fourier transform (the mathematical integral transform) relates to the DFT (Discrete Fourier Transform).</p> <p>In order to understand the details it helps to see the historical context. The wavelet transform was originally introduced in geophysics by Morlet, and was basically a Gabor transform with a window that grows and shrinks together with the selected scale/frequency. Later Daubechies (a physicist from Belgium) realized that by choosing special orthogonal wavelet bases, the infinitely redundant CWT can be critically sampled on a dyadic grid. From the resulting DWT the corresponding full CWT can be obtained by convolving the DWT with the reproducing kernel of the respective wavelet. The reproducing kernel is the CWT of the wavelet itself.</p> <p>Daubechies' findings gave a big boost to wavelet theory in the early 1980s. The next big result was that the DWT can be computed very efficiently (it is sometimes called the FWT [fast wavelet transform] for this reason) using techniques from the theory of filter banks, namely quadrature mirror filters (QMFs) together with downsampling filter banks. By constructing special QMFs, the corresponding DWT can be computed via filtering and downsampling, which is the state-of-the-art algorithm for computing DWTs today. You do not need the scaling function to compute the DWT; it is just an implementation detail of the FWT process.</p> <p>Concerning the application side, the CWT is the better candidate for signal or time-series analysis due to its finer-grained resolution, and is usually chosen for such tasks (e.g. singularity detection). The DWT is more of interest in the context of fast, non-redundant transforms. The DWT has very good energy compaction and is thus a good candidate for lossy compression and signal transmission.</p> <hr> <p>Hope that clarified things.</p>
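The redundancy gap described above is easy to quantify: a CWT evaluated on $S$ scales stores $S \times N$ coefficients, while the critically sampled DWT stores exactly $N$. A pure-Python Haar sketch (the scale count is an arbitrary illustration):

```python
import math

def haar_dwt_multilevel(x):
    """Full decimated Haar DWT: total coefficient count equals len(x)."""
    s = 1 / math.sqrt(2)
    coeffs = []
    a = list(x)
    while len(a) > 1:
        d = [(a[i] - a[i + 1]) * s for i in range(0, len(a), 2)]
        a = [(a[i] + a[i + 1]) * s for i in range(0, len(a), 2)]
        coeffs.extend(d)          # details at this dyadic scale
    coeffs.extend(a)              # final approximation
    return coeffs

N = 64
x = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]

dwt_count = len(haar_dwt_multilevel(x))   # critically sampled: exactly N
num_scales = 32                           # e.g. a linear grid of CWT scales
cwt_count = num_scales * N                # redundant: S * N coefficients
assert dwt_count == N and cwt_count == 32 * N
```

This is the practical trade-off: the DWT is invertible from $N$ numbers (good for compression and transmission), while the CWT's $S \times N$ redundant coefficients give the dense scale coverage that analysis tasks benefit from.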
wavelet transform
Selecting the number of cycles for wavelet transform
https://dsp.stackexchange.com/questions/18722/selecting-the-number-of-cycles-for-wavelet-transform
<p>I'm trying some wavelet analysis of EEG signals, using the phase-locking measures from <a href="http://brainimaging.waisman.wisc.edu/~lutz/LeVanQuyen_et_all_JNM_2001.pdf" rel="nofollow">[1]</a>, specifically the S-PLV measure.</p> <p>In order to calculate that, we perform a wavelet transform on the signals, and one of the parameters we have to set is the number of cycles for the wavelet.</p> <p>The authors suggest that the number of cycles is</p> <p>$n_{co} = 6 f \sigma$</p> <p>where $f$ is the frequency at which we are evaluating the wavelet transform and $\sigma$ is proportional to the inverse of $f$.</p> <p>The authors mention that in most of their studies they used $n_{co}$ between 3 and 8, but I was wondering:</p> <p><strong>Is there any benefit in varying the number of cycles for different frequencies?</strong></p> <p>In my case I am calculating the wavelet transform for a number of bands ranging from 1 Hz to 100 Hz, so setting the number to a constant seems like a bad idea given the wide span of frequencies.</p> <p>[1]: <a href="http://brainimaging.waisman.wisc.edu/~lutz/LeVanQuyen_et_all_JNM_2001.pdf" rel="nofollow">Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony</a></p>
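For intuition on the trade-off: with $n_{co} = 6 f \sigma$ held fixed, the wavelet's time support shrinks as $1/f$ (constant-Q) while its absolute bandwidth grows with $f$. A small sketch under that formula (function names invented for illustration):

```python
import math

def morlet_sigma(f, n_cycles=6.0):
    # From n_co = 6 * f * sigma  =>  sigma = n_co / (6 * f)
    return n_cycles / (6.0 * f)

# With a fixed cycle count, the time support scales as 1/f (constant-Q):
assert abs(morlet_sigma(20.0) - morlet_sigma(10.0) / 2) < 1e-12

# Spectral width of the Gaussian envelope ~ 1/(2*pi*sigma): it grows with f,
# so absolute frequency resolution worsens at high f unless n_cycles grows.
bw = lambda f, n=6.0: 1.0 / (2 * math.pi * morlet_sigma(f, n))
assert abs(bw(100.0) - 10 * bw(10.0)) < 1e-9

# Increasing n_cycles at high f trades time resolution back for frequency
# resolution, which is one argument for a frequency-dependent cycle count.
assert bw(100.0, n=12.0) < bw(100.0, n=6.0)
```

So a constant $n_{co}$ over 1-100 Hz gives constant *relative* bandwidth; if you instead want roughly constant *absolute* bandwidth (e.g. to separate nearby gamma bands), letting $n_{co}$ grow with $f$ is the usual remedy.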
wavelet transform
Wavelet transform in MATLAB
https://dsp.stackexchange.com/questions/30871/wavelet-transform-in-matlab
<p>Suppose I have a wave with $20 \textrm{ kHz}$, $100 \textrm{ kHz}$ and $300 \textrm{ kHz}$ components. The sampling frequency used is $1000 \textrm{ kHz}$. I apply the discrete wavelet transform to the wave like <code>dwt(wave,'db2')</code>. I will get one level of approximation and detail coefficients. According to the basics, the detail coefficients will contain the $300\rm k$ component and the approximation coefficients will contain the $100\rm k$ and $20\rm k$ components.</p> <p>But when I did <code>fft</code> on the output (on approximation and detail), I didn't get what I expected.</p> <ul> <li>Could anyone post some MATLAB code with which I can verify it?</li> <li>Could anyone also tell me whether I am doing the procedure correctly?</li> <li>Could anyone also explain the practical side of this tool?</li> </ul>
<p>What you might be forgetting is that <code>dwt</code> downsamples. After filtering (low-pass or high-pass), the filtered signal is subsampled by two in order to keep the total number of samples unchanged. So aliasing may occur, depending on the quality of the filters.</p> <p>And <code>db2</code> is quite a poor filter. So if you <code>idwt</code> either the approximation or the details, replacing the other coefficients with zeros, you almost get your components in order: at the top, the original signal; the two low frequencies in the second plot; the high frequencies in the third plot. Of course you have some aliasing, due to the poor resolution.</p> <p><a href="https://i.sstatic.net/QSIQz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSIQz.png" alt="DWT db2"></a></p> <p>Try the attached code with <code>db24</code>; you will get much less artifact.</p> <p><a href="https://i.sstatic.net/B8P3o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B8P3o.png" alt="DWT db24"></a></p> <p>What happens now with the approximation and the details? My interpretation is that the approximation is now sampled at <span class="math-container">$500$</span> kHz, and the details too, except that they are now aliased to the base band:</p> <p><a href="https://i.sstatic.net/yfCf2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yfCf2.png" alt="DWT coefficients"></a></p> <p>and they need to be unmixed by the reciprocal filters of <code>idwt</code>.</p> <p>The <a href="http://www.laurent-duval.eu/Documents-Common/FFTR.m" rel="nofollow noreferrer"><code>FFTR.m</code></a> code should be downloaded.</p> <p>But why study sines with discrete wavelets?</p> <pre><code>f = [20/1000 100/1000 300/1000];
nSample = 1024;
time = (1:nSample)';
data = cos(2*pi*time*f);
dataSum = sum(data,2);
waveName = 'db2';
[a,d] = dwt(dataSum,waveName);
z = zeros(size(a));
dataSumD = idwt(z,d,waveName);
dataSumA = idwt(a,z,waveName);
timeAD = 1:2*length(a);
timeAD = timeAD(1:2:end);

figure(1)
subplot(3,2,1); plot(time,dataSum);   axis tight
subplot(3,2,2); plot(FFTR(dataSum));  axis tight
subplot(3,2,3); plot(time,dataSumA);  axis tight
subplot(3,2,4); plot(FFTR(dataSumA)); axis tight
subplot(3,2,5); plot(time,dataSumD);  axis tight
subplot(3,2,6); plot(FFTR(dataSumD)); axis tight

figure(2)
subplot(2,2,1); plot(timeAD,a);       axis tight
subplot(2,2,2); plot(FFTR(a,2));      axis tight
subplot(2,2,3); plot(timeAD,d);       axis tight
subplot(2,2,4); plot(FFTR(d,2));      axis tight
</code></pre> <p>So you have the code; I do not know whether you did the procedure correctly. Wavelets are quite useful if used wisely, on signals that are worth it; sines typically are not on the list.</p>
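The folding described in the answer can be reproduced without toolboxes. A hedged pure-Python sketch (Haar stands in for db2, and the normalized frequency is chosen to land on exact DFT bins) showing a high-band tone appearing at an aliased bin of the half-rate detail signal:

```python
import cmath
import math

N = 512
f = 160 / N                      # 0.3125 cycles/sample, in the detail band
x = [math.sin(2 * math.pi * f * n) for n in range(N)]

# One DWT level: high-pass, then downsample by 2 (Haar stands in for db2).
s = 1 / math.sqrt(2)
detail = [(x[2 * i] - x[2 * i + 1]) * s for i in range(N // 2)]

def dft_mag(v):
    """Naive DFT magnitude over the non-negative frequency bins."""
    n = len(v)
    return [abs(sum(v[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2 + 1)]

mag = dft_mag(detail)
peak = max(range(len(mag)), key=mag.__getitem__)
# At the halved rate the tone sits at 2f = 0.625 cycles/sample, which folds
# to 1 - 0.625 = 0.375 -> bin 0.375 * 256 = 96.
assert peak == 96
```

This is the same effect as the MATLAB experiment: the 300 kHz component does not show up "at 300 kHz" in the FFT of the detail coefficients, because the downsampled detail channel lives at half the original rate and its content is frequency-reversed into the base band.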
wavelet transform
Power/Energy from Continuous Wavelet Transform
https://dsp.stackexchange.com/questions/86181/power-energy-from-continuous-wavelet-transform
<p>How can power or energy be computed from the Continuous Wavelet Transform? Is it just <span class="math-container">$\sum |\text{CWT}(x)|^2$</span>, or are there other considerations, particularly if one is interested in a subset of frequencies? Should the results be interpreted differently from those computed from the DFT?</p>
<p>CWT power is tricky. We must distinguish between <strong>energy of transform</strong> and <strong>energy of signal</strong>, and the two can be very different. As energy is the more fundamental and less conceptually intricate quantity, I develop this answer for energy, then relate it to power. The answer applies to the CWT <strong>and STFT</strong>, with differences described near the bottom.</p> <p>The CWT here is defined &quot;semi-discrete&quot; (continuous domain, finite number of wavelets) for notation, but all of it generalizes to fully continuous and fully discrete where applicable.</p> <h3>The two compared, briefly</h3> <img src="https://i.sstatic.net/nhN4x.png" width="700"> <p>Each function above is used to compute energy. Again, very different.</p> <h3>TL;DR?</h3> <p>Read &quot;Which to use?&quot; and the TL;DR in &quot;Conditions on <span class="math-container">$\psi$</span>&quot;, and adjust &quot;Code example&quot; to your use, but there are no shortcuts to avoiding mistakes.</p> <h3>Energy of transform</h3> <p>There is a &quot;Parseval's equivalent&quot; for each; for the transform, we have</p> <p><span class="math-container">$$ E(\text{CWT}_\psi(x)) = \sum_i \|x * \psi_i\|^2, \tag{1} $$</span></p> <p>which is the sum of energies of the coefficients. Note this is simply the Parseval-Plancherel theorem applied over the entire transform; for the output of any one wavelet, e.g. <span class="math-container">$\psi_4$</span>, the convolution in the frequency domain is <span class="math-container">$\hat x(\omega) \hat \psi_4(\omega)$</span>, so by the theorem its energy <span class="math-container">$\|x * \psi_4\|^2$</span> equals <span class="math-container">$\|\hat x(\omega) \hat \psi_4(\omega)\|^2$</span>, and since <span class="math-container">$|AB| = |A||B|$</span>, that's <span class="math-container">$\int |\hat x(\omega)|^2 |\hat \psi_4(\omega)|^2 d\omega$</span> (we can't factor the aggregation too, i.e. it's not <span class="math-container">$\|x\|^2 \|\psi\|^2$</span>).</p> <p>As the expression is factorable,</p> <p><span class="math-container">$$ \int_{-\infty}^{\infty} |\hat x(\omega)|^2 \sum_i |\hat\psi_i(\omega)|^2 \, d\omega, \tag{2} $$</span></p> <p>we have the &quot;energy transfer function&quot; for the transform:</p> <p><span class="math-container">$$ E_{\psi, \text{CWT}}(\omega) = \sum_i |\hat\psi_i(\omega)|^2, \tag{3} $$</span></p> <p>the <em>sum of energies of wavelets, pointwise</em>, known as the Littlewood-Paley sum. For a subset of frequencies, just have <span class="math-container">$i$</span> correspondingly range from <span class="math-container">$\omega_\min$</span> to <span class="math-container">$\omega_\max$</span>.</p> <h3>Energy of signal</h3> <p>The CWT is a decomposition: the total is the combination of its parts. Conveniently, the &quot;combination&quot; here is simply a sum of its coefficients, for each timestep - the <a href="https://dsp.stackexchange.com/a/76239/50076">one-integral inverse</a>, <code>x == sum(CWT(x), axis=0)</code>. For a frequency subset of interest, then, &quot;signal energy&quot; is the energy of that subset's signal, e.g. <span class="math-container">$\|x * \psi_4 + x * \psi_5\|^2$</span>, which is <span class="math-container">$\|\hat x \hat\psi_4 + \hat x \hat\psi_5\|^2$</span>. Repeating the above logic, the expression for the entire signal is now</p> <p><span class="math-container">$$ E(x_\text{inv}) = \left\| \sum_i \hat x(\omega) \hat \psi_i(\omega) \right\|^2, \tag{4} $$</span></p> <p>which factors as</p> <p><span class="math-container">$$ \int_{-\infty}^{\infty} |\hat x(\omega)|^2 \left| \sum_i \hat\psi_i(\omega) \right|^2 d\omega, \tag{5} $$</span></p> <p>so the energy transfer function is</p> <p><span class="math-container">$$ E_{\psi, x_\text{inv}}(\omega) = \left| \sum_i \hat\psi_i(\omega) \right|^2, \tag{6} $$</span></p> <p>the <em>energy of sum of wavelets, pointwise</em>. 
Compared, we have <span class="math-container">$\sum \|\psi \|^2$</span> vs <span class="math-container">$\|\sum \psi \|^2$</span>: not the same! (Consider <span class="math-container">$|2|^2 + |3|^2$</span> vs <span class="math-container">$|2 + 3|^2$</span>.) For a subset of frequencies, just have <span class="math-container">$i$</span> correspondingly range from <span class="math-container">$\omega_\min$</span> to <span class="math-container">$\omega_\max$</span>.</p> <h3>Which to use?</h3> <p><strong>TL;DR</strong> If you're <em>asked to</em> find energy/power, then almost certainly ET. Else, it depends:</p> <ul> <li><strong>Energy of transform</strong> (ET) is the energy of the coefficients of the transform.</li> <li><strong>Energy of signal</strong> (ES) is the energy of the signal that's obtained by inverting the transform.</li> <li>Each can be applied over a part of the transform, or the whole.</li> </ul> <p>Which to use depends on the application. Use ET if the transform modulus (scalogram) is of interest. Use ES if the signal underlying the transform is of interest. Examples:</p> <ul> <li>Feeding a scalogram to a neural net: ET</li> <li>Synthesizing a signal after modifying the CWT: ES</li> </ul> <p>There's certainly more I'm forgetting. Important notes; ES and ET have different properties:</p> <ol> <li>ET is <strong>non-decreasing</strong>: including more coefficients never decreases energy, unlike ES.</li> <li>ET is <strong>separable</strong>: energies of different bands can be analyzed independently. ES can't do this, per point 1, which also means we can't plot &quot;Energy vs Frequency&quot; for ES.</li> <li>ES <strong>lacks linearity</strong> (w.r.t. input freqs), which just restates point 2: <span class="math-container">$\text{ES(5Hz to 50Hz) + ES(50Hz to 100Hz)}\ \neq \text{ES(5Hz to 100Hz)}$</span>.</li> </ol> <p>Again there's more I'm missing; just compare the respective functions. Lastly, a small note on ES: I made it up, because it makes sense. 
Maybe it's already a thing, but case in point: most sources mean ET when they say &quot;energy&quot;.</p> <h3>Conditions on <span class="math-container">$\psi$</span></h3> <p><strong>TL;DR</strong> use filters that are real-valued in frequency, are L1-normalized, and are either real-valued in time or strictly analytic (negative frequencies = 0), and use <code>hop_size=1</code>. Also, &quot;more wavelets&quot; is better (see &quot;Normalization&quot;).</p> <p>The following <a href="https://dsp.stackexchange.com/a/76239/50076">one-integral inverse conditions</a> (strict analyticity, unit stride) apply:</p> <ol> <li><p>ET &amp; ES: (A) strictly analytic (no <code>-</code> freqs) -- (B) anti-analytic (no <code>+</code> freqs) -- (C) real-valued in time</p> <ul> <li>Real-valued <span class="math-container">$x$</span>: (A) <em>OR</em> (B) <em>OR</em> (C)</li> <li>Complex-valued <span class="math-container">$x$</span>: (A) <em>AND</em> (B), <em>OR</em> (C)</li> </ul> </li> <li><p>ET &amp; ES: Real-valued in frequency (zero-phase). More precisely, the sum of all filters in the frequency domain must not have an imaginary part, but that's unlikely to hold if individual filters have imaginary parts.</p> </li> <li><p>ES: Unit stride (<code>hop_size=1</code>)</p> </li> <li><p>ES: filters are L1-normalized (note this isn't in the linked post).</p> </li> </ol> <p>Ignoring these conditions:</p> <ul> <li>ET: still yields the energy of the transform, but the interpretation changes, as the transform itself misrepresents the signal.</li> <li>ES: no longer yields the energy of the inverted signal, though it may be close.</li> </ul> <p>Lastly, for calculations to match <em>exactly</em>, no unpadding. This is necessary, as the padded filters' and signal's DFTs aren't the same as the unpadded or non-padded ones. However, we should still pad, and the results should be quite close. 
Subject of another post.</p> <h3>Normalization</h3> <p>As ET and ES functions can differ significantly in shape and norm, it's usually impossible to design filterbanks that satisfy both perfectly. The most important trait to optimize for is <strong>flatness</strong> in frequency domain - flat in ET is at least approximately flat in ES, and vice versa, and makes ET and ES within scalar multiples of each other. For this, more wavelets = better.</p> <p>Whether these adjustments are &quot;valid&quot; depends on use.</p> <ul> <li>If a result (e.g. convolving along frequency of modulus of CWT) expects the CWT to have certain energy, then our adjustments of said energy must also be done on the coefficients.</li> <li>If we're simply inspecting, or aiming for CWT that's a sampled version of the continuous result, then the coefficients shouldn't be adjusted, and filters shouldn't be rescaled to satisfy energy measures.</li> <li>If we're trying to compare different signals, or compare ET to ES in a scaling-agnostic manner, or obtain certain normalized measures, then it makes sense to adjust both ET and ES.</li> <li>If we seek ET &amp; ES that's comparable to the signal (e.g. physical quantities), then it's <strong>mandatory.</strong></li> <li>For operating on the scalogram, I doubt it's ever valid to adjust L2-normed CWT except for scalar scaling.</li> </ul> <p>A scalar multiple adjustment is done in code examples below so our transform-based results don't contradict the original signal (i.e. 
exceeding its energy/power).</p> <h3>Energy conservation</h3> <p>ET provides a simple two-number measure of energy conservation; if</p> <p><span class="math-container">$$ A \leq \sum_i |\hat\psi_i(\omega)|^2 \leq B $$</span></p> <p>then multiplying by <span class="math-container">$|\hat x(\omega)|^2$</span> and applying Parseval-Plancherel's theorem</p> <p><span class="math-container">$$ A \|x\|^2 \leq \| \text{CWT}(x) \|^2 \leq B \|x\|^2 $$</span></p> <p>bounds worst-case energy losses/gains by <span class="math-container">$A, B$</span>.</p> <ul> <li><span class="math-container">$A = B = 1$</span> is a <strong>tight frame</strong>. This might imply ET=ES, I'm unsure (it is the case for Morlets).</li> <li><span class="math-container">$A = 0$</span> means the transform is <strong>lossy</strong> (non-invertible).</li> <li><span class="math-container">$B &gt; 1$</span> means the transform is <strong>expansive</strong>. <span class="math-container">$B &lt; 1$</span> makes it contractive.</li> <li>The DC bin is excluded from compute as it makes the measure useless (all wavelets are zero-mean; finite CWT discards signal offset).</li> </ul> <h3>STFT?</h3> <p>STFT is convolution with windowed complex sinusoids. With <code>hop_size=1</code>, all logic here translates smoothly. With <code>hop_size &gt; 1</code>, this answer is already too long, but one should mind that the <em>spectrogram</em> is <a href="https://dsp.stackexchange.com/a/80920/50076">always aliased</a> (to different extents, sometimes negligibly), so ET won't yield consistent results, and ES is either done differently or not at all.</p> <h3>ET vs ES, when should I really care?</h3> <p>When ET and ES plots disagree in flatness. In provided codes below, plot <code>ET_tfn</code> and <code>ES_tfn</code> (done at top of this post).</p> <h3>Physical/instantaneous energy/power?</h3> <p>See <a href="https://dsp.stackexchange.com/a/86132/50076">this answer</a>. 
In the general case, <code>|cwt(x)|^2</code> is already instantaneous power by definition, but if physical units make it instantaneous energy instead, apply that answer to differentiate the ET or ES that lacks the temporal aggregation step, along with accounting for sampling period and duration, and normalization (this answer).</p> <p>Note, the complex modulus heavily shifts energy toward lower frequencies, meaning differentiating CWT/STFT modulus rows suffers minimal aliasing in most cases. This yields excellent estimation, at the expense of a loss in temporal resolution - so the result shouldn't quite be treated as &quot;instantaneous&quot;.</p> <p>This also summarizes the case <strong>vs DFT</strong>: time-frequency results are <em>localized</em>, in both time and frequency. However, with simple-spread wavelets like Morlet and high time-frequency resolution, the difference along frequency may be inconsequential; here, <a href="https://dsp.stackexchange.com/a/71399/50076">synchrosqueezing</a> can offer significant enhancement.</p> <h3>Code example</h3> <p>Available on GitHub, in <a href="https://github.com/OverLordGoldDragon/StackExchangeAnswers/blob/main/SignalProcessing/Q86181%20-%20CWT%20Power%20and%20Energy/cwt_power_example.py" rel="nofollow noreferrer">Python</a> and <a href="https://github.com/OverLordGoldDragon/StackExchangeAnswers/blob/main/SignalProcessing/Q86181%20-%20CWT%20Power%20and%20Energy/cwt_power_example.m" rel="nofollow noreferrer">MATLAB</a>. <span class="math-container">$f_s = 400\ \text{Hz}$</span>, <span class="math-container">$T = 5\ \text{sec}$</span>. 
Output (Python):</p> <pre><code>Between 50 and 150 Hz, DISCRETE:
970.433    -- energy (transform)
883.466    -- energy (signal)
0.485217   -- mean power (transform)
0.441733   -- mean power (signal)

Between 50 and 150 Hz, PHYSICAL (via Riemann integration):
2.42608 Joules  -- energy (transform)
2.20867 Joules  -- energy (signal)
0.485217 Watts  -- mean power (transform)
0.441733 Watts  -- mean power (signal)

Original signal:
1913.8     -- energy (discrete)
0.956901   -- mean power (discrete)
4.7845 Joules   -- energy (physical)
0.956901 Watts  -- mean power (physical)
</code></pre> <p>and, as a sanity check, including the full transform:</p> <pre><code>Between -1 and 201 Hz, DISCRETE:
1895.69    -- energy (transform)
1878.75    -- energy (signal)
0.947844   -- mean power (transform)
0.939374   -- mean power (signal)

Between -1 and 201 Hz, PHYSICAL (via Riemann integration):
4.73922 Joules  -- energy (transform)
4.69687 Joules  -- energy (signal)
0.947844 Watts  -- mean power (transform)
0.939374 Watts  -- mean power (signal)
</code></pre> <h3>Code validation</h3> <p>Demonstrating that ET and ES work as described. Available on GitHub <a href="https://github.com/OverLordGoldDragon/StackExchangeAnswers/blob/main/SignalProcessing/Q86181%20-%20CWT%20Power%20and%20Energy/cwt_power_validation.py" rel="nofollow noreferrer">in Python</a>.</p> <h3>Addendum: shortcomings: unpadding, stride</h3> <p>Not discussed is the need to account for padding and non-unit stride. That's its own topic, but the major points on unpadding:</p> <ul> <li><strong>ET does not conserve energy</strong>! With padding, energy is conserved <em>while padded</em>, but then contracted upon unpadding. The effect is most severe with zero-padding, and hard to quantify with other schemes, but I found <code>'reflect'</code> to work best for aiding conservation. For the worst case, consider <code>x = zeros(N); x[0] = 1</code>; half of ET is gone.</li> <li><strong>ES conserves energy</strong>! 
The sum of a tight frame, zero-phase filterbank, is an all-pass filter. It's actually the unit impulse. That gives it perfect temporal localization, hence unpadded energy exactly matches the signal's energy. <ul> <li><em>Caveat</em>: for a subset of the transform, the subset must fully capture a &quot;component&quot; (AM/FM, or intended signal) for results to be meaningful. Example, unit impulse: energy should be zero everywhere but at one time instant, but this won't happen unless all frequencies are included.</li> </ul> </li> <li>Unpadding is aliasing, making <a href="https://dsp.stackexchange.com/q/83082/50076">exact correction for ET impossible</a>.</li> <li><em>Not</em> padding circumvents the problem, at expense of feature quality.</li> </ul> <p>On stride - with large strides, the exact unpad index may be fractional. This means we either under-unpad, or over-unpad. Either can be accounted for as follows:</p> <ul> <li>Suppose input length is 64, and stride is 8. Then, <code>len(out) = 8</code>. Suppose input length is 65, and stride is 8. Then, <code>len(out) = 9</code>, yet the exact length is <code>65/8 = 8.125</code>, which is unachievable. Hence, because of just one sample, we have an up to <code>9/8</code> energy difference relative to the first case - or, to be exact, <code>9/8.125</code>. Make up for it with <code>coef *= sqrt(8.125/9)</code>.</li> <li>This assumes a uniform energy extension of the missing continuum, i.e. that it's accurately predicted by the mean of the rest. Not flawless but pretty good in unaliased convolution settings.</li> <li>Besides inexact unpadding, the basic adjustment is <code>coeffs *= sqrt(hop_size)</code>, via Parseval's, assuming energy-unaliased convolutions. 
&quot;Energy-unaliased&quot; here means the result doesn't fold on itself (<a href="https://dsp.stackexchange.com/a/74734/50076">subsampling in time <span class="math-container">$\Leftrightarrow$</span> folding in Fourier</a>) over non-zero regions (which changes the shape of the Fourier modulus), so aliasing that's just a frequency shift is fine.</li> </ul> <h3>Full derivations</h3> <p>Note, again, <span class="math-container">$|AB| = |A||B|$</span> and <span class="math-container">$|AB|^2 = (|AB|)^2 = (|A||B|)^2 = |A|^2|B|^2$</span>. We derive &quot;transfer functions&quot; in the sense of</p> <p><span class="math-container">$$ \texttt{OUT} = \int_{-\infty}^{\infty} \texttt{IN}(x) \cdot \texttt{TRANSFER}\ (x)\ dx $$</span></p> <p><strong>Energy of transform</strong> (via Parseval's, unitary convention, with <span class="math-container">$\star$</span> denoting convolution):</p> <p><span class="math-container">$$ \begin{align} E(\text{CWT}_\psi(x)) &amp;= \sum_i \|x \star \psi_i\|^2 \\ &amp;= \sum_i \int_{-\infty}^{\infty} |\hat x(\omega) \hat \psi_i(\omega)|^2 d\omega \\ &amp;= \sum_i \int_{-\infty}^{\infty} |\hat x(\omega)|^2 |\hat \psi_i(\omega)|^2 d\omega \\ &amp;= \int_{-\infty}^{\infty} |\hat x(\omega)|^2 |\hat \psi_0(\omega)|^2 d\omega + \int_{-\infty}^{\infty} |\hat x(\omega)|^2 |\hat \psi_1(\omega)|^2 d\omega\ +\ ... \\ &amp;= \int_{-\infty}^{\infty} |\hat x(\omega)|^2 \left(|\hat \psi_0(\omega)|^2 + |\hat \psi_1(\omega)|^2 +\ ...\right) d\omega \\ &amp;= \int_{-\infty}^{\infty} |\hat x(\omega)|^2 \sum_i |\hat \psi_i(\omega)|^2 d\omega \\ &amp;= \int_{-\infty}^{\infty} |\hat x(\omega)|^2 E_{\psi, \text{CWT}}(\omega) d\omega \\ \end{align} $$</span></p> <p><strong>Energy of signal</strong>:</p> <p><span class="math-container">$$ \begin{align} E(x_\text{inv}) &amp;= \left\| \sum_i \hat x(\omega) \hat \psi_i(\omega) \right\|^2 \\ &amp;= \int_{-\infty}^{\infty} \left| \sum_i \hat x(\omega) \hat \psi_i(\omega) \right|^2 d\omega \\ &amp;= \int_{-\infty}^{\infty} \left| \hat x(\omega) \sum_i \hat \psi_i(\omega) \right|^2 d\omega \\ &amp;= \int_{-\infty}^{\infty} \left| \hat x(\omega) \right|^2 \left| \sum_i \hat \psi_i(\omega) \right|^2 d\omega \\ &amp;= \int_{-\infty}^{\infty} \left| \hat x(\omega) \right|^2 E_{\psi, x_\text{inv}}(\omega) d\omega \\ \end{align} $$</span></p> <h3>Notation</h3> <ul> <li><span class="math-container">$\| f(t) \| = \sqrt{\int_{-\infty}^{\infty} |f(t)|^2 dt}$</span>, the L2 norm</li> <li><span class="math-container">$E(f(t)) = \text{Energy}(f(t)) = \|f(t)\|^2$</span></li> <li><span class="math-container">$\hat f(\omega) = \mathcal{F}(f(t))$</span>, Fourier transform</li> <li><span class="math-container">$\star$</span>, convolution</li> </ul>
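The <code>coeffs *= sqrt(hop_size)</code> adjustment can be sanity-checked on a toy case where strided subsampling is exactly energy-unaliased: a pure tone, whose modulus is constant (a minimal sketch, not part of any particular filterbank):

```python
import numpy as np

# Pure complex exponential: constant modulus, so strided subsampling is energy-unaliased
N, hop = 64, 8
n = np.arange(N)
x = np.exp(2j * np.pi * 3 * n / N)

full_energy = np.sum(np.abs(x) ** 2)        # == N
strided = x[::hop]                          # "coefficients" kept at stride 8
adjusted = strided * np.sqrt(hop)           # coeffs *= sqrt(hop_size)
sub_energy = np.sum(np.abs(adjusted) ** 2)  # recovers full_energy exactly here
```

For real filterbank outputs the match is approximate, to the degree the convolutions are energy-unaliased.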
240
wavelet transform
Get spectral picture from a wavelet transform
https://dsp.stackexchange.com/questions/13535/get-spectral-picture-from-a-wavelet-transform
<p>According to <a href="https://dsp.stackexchange.com/questions/13527/spectral-structure-of-sinusoidal-model">my previous question</a>, I have changed the generate command to: </p> <pre><code>y=generate1(100,1000,1); </code></pre> <p>and got the following picture:</p> <p><img src="https://i.sstatic.net/2KsED.png" alt="enter image description here"></p> <p>Now I want to test a wavelet transform on the same signal, which of course exists in Matlab, but I have one thing which I should clarify: generally, as I know, large scale values correspond to small frequencies and small scale values to large frequencies. So, how should I determine if the given frequencies are small or large? Also, which wavelet transform should I use?</p> <p>For example:</p> <pre><code>m=cwt(y,1:100,'sym2'); </code></pre> <p>and plot it</p> <pre><code>plot(m) </code></pre> <p><img src="https://i.sstatic.net/Pkyis.png" alt="enter image description here"></p> <p>How should I read the second picture? What kind of information can I get?</p>
<p>Try this one:</p> <pre><code>m=cwt(y,1:100,'sym2','plot'); colormap(pink) </code></pre> <p>Result (btw, I cannot reproduce your curve in time domain with the function <code>generate1</code>):</p> <p><img src="https://i.sstatic.net/OQDSs.jpg" alt="enter image description here"></p> <p>Low scale values compress the wavelet and correlate better with high frequencies. While high scale values stretch the wavelet and correlate better with the low frequency content of the signal.</p>
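The scale-to-frequency correspondence described above can be made explicit. A common convention is the pseudo-frequency <span class="math-container">$f_a = f_c/(a\,\Delta t)$</span>, where <span class="math-container">$a$</span> is the scale, <span class="math-container">$\Delta t$</span> the sampling period, and <span class="math-container">$f_c$</span> the wavelet's center frequency. A minimal sketch (the <code>fc</code> and <code>dt</code> values below are placeholders, not the exact values for 'sym2' or this signal):

```python
import numpy as np

fc = 0.67     # assumed wavelet center frequency (placeholder; look it up for your wavelet)
dt = 0.001    # assumed sampling period, here 1 kHz
scales = np.arange(1, 101, dtype=float)

# Pseudo-frequency associated with each scale:
pseudo_freqs = fc / (scales * dt)
# Small scales (compressed wavelet) -> high frequencies; large scales -> low frequencies
```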
241
wavelet transform
What is &quot;modified frequency slice wavelet transform&quot;?
https://dsp.stackexchange.com/questions/85519/what-is-modified-frequency-slice-wavelet-transform
<p>What is the &quot;modified frequency slice wavelet transform&quot;? It is mentioned in an article, but it does not seem to be described anywhere on the web.</p>
242
wavelet transform
Estimate Power spectral density using Discrete wavelet transform form pycwt
https://dsp.stackexchange.com/questions/84522/estimate-power-spectral-density-using-discrete-wavelet-transform-form-pycwt
<p>I want to estimate the power spectral density using the discrete wavelet transform and a Morlet wavelet. Below you can find the function I am using. Any comments or suggestions on whether or not the following code is correct?</p> <pre><code>import numpy as np
import pycwt as wavelet

mother_wave_dict = {
    'gaussian': wavelet.DOG(),
    'paul': wavelet.Paul(),
    'mexican_hat': wavelet.MexicanHat()
}

def trace_PSD_wavelet(x, dt, dj, mother_wave='morlet'):
    &quot;&quot;&quot;
    Calculate the power spectral density using the wavelet method.

    Parameters
    ----------
    x : array-like
        the component of the field to apply the wavelet transform to
    dt : float
        the sampling time of the timeseries
    dj : float
        determines how many scales are used to estimate the wavelet
        coefficients (e.g., for dj=1 -&gt; 2**numb_scales)
    mother_wave : str
        the mother wavelet used to transform the data; available waves
        are 'gaussian', 'paul' and 'mexican_hat' ('morlet' is the default)

    Returns
    -------
    db_x : array-like
        coefficients of the wavelet transform
    freqs : array-like
        frequency of the corresponding PSD points
    PSD : array-like
        power spectral density of the signal
    scales : array-like
        the scales at which the wavelet was estimated
    &quot;&quot;&quot;
    if mother_wave in mother_wave_dict.keys():
        mother_morlet = mother_wave_dict[mother_wave]
    else:
        mother_morlet = wavelet.Morlet()

    db_x, _, freqs, _, _, _ = wavelet.cwt(x, dt, dj, wavelet=mother_morlet)

    # Estimate trace power spectral density
    PSD = np.nanmean(np.abs(db_x)**2, axis=1) * (2 * dt)

    # Also estimate the scales, to use later
    scales = (1 / freqs) / dt

    return db_x, freqs, PSD, scales
</code></pre>
243
wavelet transform
Edge map based on a Haar Wavelet Transform
https://dsp.stackexchange.com/questions/38183/edge-map-based-on-a-haar-wavelet-transform
<p>I have been implementing the paper <a href="https://www.cs.cmu.edu/~htong/pdf/ICME04_tong.pdf" rel="nofollow noreferrer">Blur Detection for Digital Images Using Wavelet Transform</a> and was asking myself how the following formula could reconstruct the edges given a Haar Wavelet transformed image :</p> <p>$$ \sqrt{LH_i^2 + HL_i^2 + HH_i^2} $$</p> <p>This formula is on page 2, Algorithm 1, step 2.</p> <p>I looked at the reference given, but couldn't find where this was explained.</p>
<p>One single level of a standard separable 2-channel wavelet transform, denoted by $i$, uses a low-pass $l$ and a high-pass $h$ filter (followed by downsampling). Traditionally, one applies $l$ and $h$ on the rows of the image, putting the downsampled low-passed coefficients on the left half, and the downsampled high-passed coefficients on the right half. Then the resulting image is processed in the same way column-wise. </p> <p>This results in a set of four separable 2D filters:</p> <ul> <li>$l$ on rows, $l$ on columns (LL)</li> <li>$l$ on rows, $h$ on columns (LH)</li> <li>$h$ on rows, $l$ on columns (HL)</li> <li>$h$ on rows, $h$ on columns (HH)</li> </ul> <p>producing the arrangement below. This is iterated over levels: first on the image, then on the low-pass/low-pass image (LL).</p> <p><a href="https://i.sstatic.net/oSNvp.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oSNvp.gif" alt="Wavelet decomposition"></a></p> <p>Among those filters, three (LH, HL, HH) can be considered as edge detectors ($h$ is similar to a 2-point gradient) in different directions: more horizontal (HL), more vertical (LH), somewhat diagonal (HH).</p> <p>The corresponding coefficients, or at least their combined energy, can be interpreted as a measure of edge strength, a map computed for each level.</p> <p>Remember that for a continuous 2D field modeling an image $f(x,y)$, the <a href="https://en.wikipedia.org/wiki/Image_gradient" rel="nofollow noreferrer">image gradient</a></p> <p>$$\nabla f =\left[\frac{\partial f(x,y)}{\partial x} ,\frac{\partial f(x,y)}{\partial y} \right]^T$$</p> <p>represents the local directional change in image intensities, whose norm is a measure of gradient "magnitude". 
You can compute several norms, the most common being the Euclidean, the taxicab, or the max norm.</p> <p>The Haar filter $[1\,-1]$ is one possible discretization of the 1D gradient; the three 2D Haar wavelets compute a horizontal, a vertical and a somewhat diagonal gradient, hence their combined energy (Euclidean norm) is an estimate of the true gradient magnitude.</p>
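A minimal NumPy sketch of the paper's step 2 at level 1, with the Haar transform implemented directly on 2×2 blocks (sign and LH/HL naming conventions vary between references):

```python
import numpy as np

def haar_level1(X):
    """One level of the separable 2D Haar transform, computed on 2x2 blocks.
    Each filter has entries +-1/2, so the transform is orthonormal."""
    a = X[0::2, 0::2]; b = X[0::2, 1::2]
    c = X[1::2, 0::2]; d = X[1::2, 1::2]
    LL = (a + b + c + d) / 2
    LH = (a - b + c - d) / 2   # difference across columns: responds to vertical edges
    HL = (a + b - c - d) / 2   # difference across rows: responds to horizontal edges
    HH = (a - b - c + d) / 2
    return LL, LH, HL, HH

# Image with a single vertical step edge between columns 2 and 3
X = np.zeros((8, 8)); X[:, 3:] = 1.0
LL, LH, HL, HH = haar_level1(X)
emap = np.sqrt(LH**2 + HL**2 + HH**2)   # edge map: nonzero only at the edge blocks
```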
244
wavelet transform
Apply wavelet transform to analyse EEG signal
https://dsp.stackexchange.com/questions/970/apply-wavelet-transform-to-analyse-eeg-signal
<p>I would like to apply the Morlet wavelet transform to analyse my EEG signals. I have many short signals, each only 1 min long, and they were all recorded at 30 Hz. I have two questions:</p> <ol> <li>In the Morlet wavelet, what is the best scale (alpha) to use in my case?</li> <li>About the edge effect: how can I know / compute what portion of my data will be corrupted due to the wavelet "edge effect"?</li> </ol>
<p>I'm not sure whether you're talking about the Discrete Wavelet Transform (DWT) or the Continuous Wavelet Transform (CWT). Both can be used on discrete signals, similarly to the DFT and DTFT; I'm not sure if anyone calls it the Discrete Time Wavelet Transform instead. In any case, the <a href="http://www.ece.rice.edu/~yyue/research/waveDiff/node4.html" rel="nofollow">dyadic wavelet transform</a> is the non-redundant one (number of samples in $=$ number of coefficients out), and that's what we usually assume by talking about DWT.</p> <p>It looks like you need to do a bit more reading on the wavelet transform in general, because of your first question. The wavelet transform doesn't work like the Fourier Transform; rather, it's akin to the <a href="http://en.wikipedia.org/wiki/Short-time_Fourier_transform" rel="nofollow">Short Time Fourier Transform</a>, which is a function of <em>two</em> variables: frequency and time. Similarly, the wavelet transform is a function of <em>scale</em> and time. This means that you don't pick a scale and stick to it. You go through many scales (in the dyadic case they increment in powers of two), and calculate the transform for each one.</p> <p>To address your second question, edge effects are usually not that easy to deal with and there's a plethora of papers on the topic. If you simply want to know what portion of the transformed signal will be affected by them, and it looks like that's what you're asking for, <a href="http://yang.gmu.edu/eos754/paper/bams_79_01_0061_wavelet_highlight.pdf" rel="nofollow">this paper</a> has a good discussion. One thing to keep in mind is that the DWT, like the Discrete Fourier Transform, has circular wrapping symmetry, so dealing with edges is undesirable if you're hoping to get a perfect inverse transform as well. 
You can look at <a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=2&amp;sqi=2&amp;ved=0CCwQFjAB&amp;url=http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.695&amp;rep=rep1&amp;type=pdf&amp;ei=26brTqrhAtCTtwfq2t3tCQ&amp;usg=AFQjCNEpgtx3Xrpix9LONfuXk_jble0zPg&amp;sig2=RQsanQAeyOejycDte8JoUA" rel="nofollow">this paper</a> for a discussion of that, and also a way to eliminate edge artifacts.</p>
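On the second question specifically, the Torrence &amp; Compo paper linked above quantifies the edge-affected region via the cone of influence: for the Morlet wavelet, edge effects at scale <span class="math-container">$s$</span> decay with an e-folding time of <span class="math-container">$\sqrt{2}\,s$</span>, so roughly that many seconds at each end of the record are suspect at that scale. A rough sketch (the scale grid below is illustrative, not prescribed):

```python
import numpy as np

fs = 30.0                          # sampling rate from the question, Hz
n_samples = int(60 * fs)           # 1 minute of data
scales = 2.0 ** np.arange(8) / fs  # illustrative dyadic scales, in seconds

coi = np.sqrt(2) * scales                       # e-folding time at each scale (Morlet)
edge_samples = np.ceil(coi * fs).astype(int)    # samples affected at EACH edge
fraction_lost = 2 * edge_samples / n_samples    # both edges combined, per scale
```

The larger the scale, the wider the corrupted region, which is why the cone narrows toward high frequencies in scalogram plots.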
245
wavelet transform
How to test wavelet transforms?
https://dsp.stackexchange.com/questions/70669/how-to-test-wavelet-transforms
<p>One pertinent attribute is <em>normalization</em>, which measures performance in describing signal spectral amplitude and energy, like <a href="https://dsp.stackexchange.com/a/70643/50076">here</a>. Others are robustness to noise, time vs frequency resolution. Anything else? And are there examples of each being computed (coded, discrete) on test signals?</p> <p>The goal's to compare Continuous Wavelet Transform implementations, but I don't have much for comparing except norms.</p>
<p>There are many options to test a wavelet transform, or compare them.</p> <p>You can first compare them on their most basic properties. One can check whether it is invertible, orthogonal, biorthogonal, critically sampled or redundant (and the oversampling ratio), separable or not, (close to) invariant to transformations of the signal (symmetry, rotation, scaling, shear). These properties take little account of the data itself, <em>because they are often tested against &quot;all data&quot;, or signals random enough (like several noise realizations)</em>.</p> <p>And at the end of the processing workflow (denoising, compression, restoration), you can compare the outcome of some objective or subjective measures (SNR, PSNR, SSIM, other norm-related metrics).</p> <p>In between, one can estimate vector features on wavelet coefficients. Such measures can be computed and evaluated via standard statistics (mean, standard deviation, median, median absolute deviation) and across data models. But you can also think of higher order moments: absolute, centered, ratios of moments. From now on, I will call coefficients from the same scale a &quot;subband&quot; (aka voices for continuous wavelet transforms), and use the discrete wavelet transform formalism. Beware: subbands can be subsampled in different manners, and scaled accordingly.</p> <p>Typically, one computes a vector made of statistics of each subband, or a gathering of subbands. As wavelet subbands are normally zero-mean, coefficients are typically norms (non-centered moments) of subband coefficients. Suppose a three-level decomposition, with <span class="math-container">$A_3$</span>, <span class="math-container">$D_3$</span>, <span class="math-container">$D_2$</span> and <span class="math-container">$D_1$</span> approximation and detail coefficients. 
A feature vector could be made of the norms of <span class="math-container">$D_3$</span>, <span class="math-container">$D_2$</span> and <span class="math-container">$D_1$</span>, and maybe something for <span class="math-container">$A_3$</span> (though for standard wavelets, the approximation coefficients could be close).</p> <p>Sparsity is another interesting feature: how well are wavelet coefficients concentrated with respect to the original signal? A 2009 paper, <a href="https://arxiv.org/abs/0811.4706" rel="nofollow noreferrer">Comparing Measures of Sparsity</a>, provided axioms to compare statistical distributions, among them: Gini indices, norm ratios, kurtosis, etc.</p> <p>As they have been recently considered for quality metrics, stopping criteria or robust sparsity estimators, I would like to pinpoint two of our recent adventures into using regularized ratios of norms (<span class="math-container">$\ell_1/\ell_2$</span> in <a href="https://arxiv.org/abs/1407.5465" rel="nofollow noreferrer">Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization</a>) or <a href="https://arxiv.org/abs/2001.08496" rel="nofollow noreferrer">SPOQ ℓp-Over-ℓq Regularization for Sparse Signal Recovery applied to Mass Spectrometry</a> with its <a href="https://www.mathworks.com/matlabcentral/fileexchange/88897-spoq-smooth-sparse-lp-over-lq-ratio-regularization-toolbox" rel="nofollow noreferrer">Matlab toolbox</a>.</p> <p>Norm or quasi-norm/norm ratios can be quite efficient while dealing with sparsifying transformations.</p>
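As an illustration of the norm-ratio sparsity measures mentioned above, the <span class="math-container">$\ell_1/\ell_2$</span> ratio is a one-liner per subband; it attains its minimum of 1 for a single spike and <span class="math-container">$\sqrt{n}$</span> for a flat vector (a sketch on synthetic "subbands"):

```python
import numpy as np

def l1_over_l2(c):
    """l1/l2 norm ratio: lower means sparser (1 for a single spike, sqrt(n) for flat)."""
    c = np.abs(np.asarray(c, dtype=float))
    return c.sum() / np.sqrt((c ** 2).sum())

spike = np.zeros(64); spike[10] = 5.0   # ideally sparse "subband": ratio = 1
flat = np.ones(64)                      # energy spread evenly: ratio = sqrt(64) = 8
```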
246
wavelet transform
Relationship between Wavelet transform and Fourier Power Spectral Density
https://dsp.stackexchange.com/questions/38543/relationship-between-wavelet-transform-and-fourier-power-spectral-density
<p>Is there any way to obtain the Fourier Power Spectral Density from a <a href="http://paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf" rel="nofollow noreferrer">wavelet transform</a> of a time series?</p> <p>I am particularly interested in this problem because I was wondering if there is any possibility to obtain the local Power Spectral Density from the wavelet transform.</p> <p>If I am not wrong, according to Torrence and Compo, the average of all the local wavelet spectra tends to approach the Fourier spectrum of the time series.</p> <p>However, when I compute the wavelet spectrum the result is much wider than the one given by the Fourier transform.</p> <p>Here there is an example:</p> <ul> <li>I made an oscillatory time series with some noise (first panel)</li> <li>I compute the continuous wavelet transform and then the wavelet power spectrum (second panel) using a Morlet wavelet.</li> <li>According to Torrence and Compo, we can compute the Fourier periods with the formulas they provide, and also the Fourier frequencies. They said that a vertical cut at any time is the local Fourier spectrum, and if we average all the local wavelet spectra we will obtain the Fourier spectrum.</li> <li>However, when I do this the result obtained from the wavelet is much wider than the power spectrum obtained from Fourier:</li> </ul> <p><a href="https://i.sstatic.net/2BSPb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2BSPb.png" alt="enter image description here"></a></p> <p>The main frequencies seem to be OK, but I don't understand why the spectrum from the wavelets is so wide.</p> <p>To rephrase the question again, I would like to know if I can approximate the Fourier spectrum with the wavelet transform without this wideness. 
And again: I am particularly interested in this problem because I was wondering if there is any possibility to obtain the local Power Spectral Density from the wavelet transform.</p> <p>Thank you very much.</p>
247
wavelet transform
Properties of a custom wavelet family for continuous wavelet transform
https://dsp.stackexchange.com/questions/23971/properties-of-a-custom-wavelet-family-for-continuous-wavelet-transform
<p>First off, I'm new to signal processing. I have a signal which is a linear composition of several basis signals, where the same basis signal can occur several times (that is, translated but not scaled). These basis signals look very similar to wavelets. I'm thinking of using a continuous wavelet transform to get the decomposition of the signal with respect to the basis signals. However, I assume that a set of basis signals needs to fulfill certain properties to be considered a wavelet family. Could you please point out to me what these may be?</p>
<p>Your question is quite central to the development of wavelet theory. </p> <p>Indeed, the word wavelet has an early history. It was coined in "<a href="http://dx.doi.org/10.1190/1.1441816" rel="nofollow">The form and nature of seismic waves and the structure of seismograms</a>", Norman Ricker, 1940, Geophysics. Here, the wavelet is quite close to your problem, and the scale factor is not an issue. Later, following Jean Morlet, wavelets became scale-related transformations.</p> <p>As a result, following @Batman's advice, I would relate your question to matched filtering, basis pursuit or deconvolution/source separation. </p> <p>Yet, you can perform matched filtering of basis signal templates in a wavelet domain, which can help in the case of noise, and may simplify the local estimation of adapted matched filters. An example in seismic signal processing is given in <a href="http://arxiv.org/abs/1108.4674" rel="nofollow">Adaptive multiple subtraction with wavelet-based complex unary Wiener filters, 2012</a> with continuous complex wavelet frames, or in a more complete optimization paradigm with bounds on sparsity in <a href="http://arxiv.org/abs/1405.1081" rel="nofollow">A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal, 2014</a>.</p> <p>What you should make clearer is what you really want to do with your signal.</p>
248
wavelet transform
inverse continuous wavelet transform and [Parm] in cwtft
https://dsp.stackexchange.com/questions/10593/inverse-continuous-wavelet-transform-and-parm-in-cwtft
<p>What does '<code>parm</code>' mean when you set the name of the wavelet function in <code>cwtft</code> or <code>icwtft</code>, as in <code>wave = {wname,[7.6]}</code>? Also, can I change Fb and Fc when I use the '<code>morl</code>' function in the <code>cwtft</code> or <code>icwtft</code> transform? If not, how can I reconstruct my signal with the <code>cwt</code> transform? <code>cwt</code> lets me select optional values for fb and fc (<code>cmorfb-fc</code>), but Matlab doesn't have a direct function for the inverse wavelet transform.</p> <pre><code>N = 1024;
t = linspace(0,1,N);
y = sin(2*pi*8*t).*(t&lt;=0.5)+sin(2*pi*16*t).*(t&gt;0.5);
dt = 0.05; s0 = 2*dt; ds = 0.4875; NbSc = 20;
wname = 'morl'; sig = {y,dt}; sca = {s0,ds,NbSc};
wave = {wname,[7.6]};
cwtsig = cwtft(sig,'scales',sca,'wavelet',wave);
sigrec = icwtft(cwtsig,'signal',sig,'plot');
</code></pre>
<p><code>cwtft</code> and <code>icwtft</code> use the Fourier transform of the wavelet function to reconstruct the signal. The '<code>morl</code>' in <code>wname</code> is the analytic Morlet function, so it is effectively a <strong>complex Morlet</strong> and will give you phase and magnitude information about the signal. The '<code>parm</code>' in <code>wave={'morl',[parm]}</code> is <span class="math-container">$\omega_0 = 2\pi f_c$</span>, so it corresponds to the center frequency. The default value of '<code>parm</code>' is 6, so <span class="math-container">$f_c = 6/(2\pi)$</span>. The Morlet wavelet function is <span class="math-container">$\psi(t, f_c) = e^{j 2\pi f_c t}\, e^{-t^2/2}$</span> and its Fourier transform is <span class="math-container">$\hat\psi(k) = \sqrt{2\pi}\, e^{-\frac{1}{2}(2\pi k - \omega_0)^2}$</span>, with <span class="math-container">$\omega_0 = \texttt{parm} = 2\pi f_c$</span>. So you can configure <span class="math-container">$f_c$</span> of the Morlet wavelet by changing <code>parm</code>. </p>
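The relation between <code>parm</code> and the center frequency is a one-liner either way; a small numeric sketch:

```python
import numpy as np

parm = 6.0                   # cwtft's default omega0 for 'morl'
fc = parm / (2 * np.pi)      # implied center frequency, about 0.955 cycles per unit time

# Going the other way: pick parm to hit a desired center frequency
desired_fc = 1.5
needed_parm = 2 * np.pi * desired_fc
```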
249
wavelet transform
Is online Continuous Wavelet Transform possible?
https://dsp.stackexchange.com/questions/83480/is-online-continuous-wavelet-transform-possible
<p>I have recently created a <a href="https://dsp.stackexchange.com/a/83477/62730">real-time STFT</a> with 50% overlap.</p> <p><a href="https://i.sstatic.net/WwjV1.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WwjV1.gif" alt="enter image description here" /></a></p> <p>I wanted to know if this window-based approach is possible for a scalogram, especially the continuous wavelet transform. I haven't found anyone implementing it in real time.</p> <p>Also, if it is possible, does one need to compute the CWT of the whole window signal or just a portion of it, in a recursive way? I started learning wavelets and I look forward to any hints.</p>
<p>Whether it's &quot;real-time&quot; depends on sampling rate. What's true is that most implementations fall short for realistic <span class="math-container">$f_s$</span>.</p> <p>I am currently working on CWT that will be faster than any other I know of, best case by several times. Though, it may be a while until it's open-sourced.</p> <p>An often overlooked point is, CWT has <code>hop_size</code> just like STFT. Especially if using a scalogram (modulus), there's very little to lose from <code>hop_size=2</code> or <code>4</code>, and sometimes we can do <code>100+</code>. This is implemented easily with <a href="https://dsp.stackexchange.com/a/74734/50076">Fourier-domain subsampling</a>:</p> <pre><code>ifft(a * b)[::2] == ifft((a * b).reshape(2, -1).mean(axis=0)) </code></pre> <p>which reduces FFT size by <code>hop_size</code>. The relevant question is, what <code>hop_size</code> is safe? This can be measured directly, as discussed in <a href="https://dsp.stackexchange.com/a/80920/50076">this answer</a>. Note, perfect inversion is possible for some CWT with complex-valued outputs and <code>hop_size&gt;1</code>, but max permissible <code>hop_size</code> is fairly small and the inversion algorithm not straightforward. However, we don't need complex for inversion; inversion is <a href="https://dsp.stackexchange.com/a/78531/50076">still possible</a> within a global phase-shift.</p> <p>Another major slowdown is in low-support wavelets, which are better off with <a href="https://www.wikiwand.com/en/Overlap%E2%80%93add_method" rel="nofollow noreferrer">overlap-add/overlap-save</a>.</p> <p>Putting all these together elevates max supported <span class="math-container">$f_s$</span> significantly.</p>
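The Fourier-domain subsampling identity above can be checked directly (here for <code>hop_size=2</code>; it holds exactly for any hop dividing the length):

```python
import numpy as np

rng = np.random.default_rng(0)
N, hop = 64, 2
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # e.g. signal spectrum
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # e.g. wavelet frequency response

direct = np.fft.ifft(a * b)[::hop]                            # full iFFT, then stride
folded = np.fft.ifft((a * b).reshape(hop, -1).mean(axis=0))   # iFFT of length N/hop
```

The second form is what reduces FFT size by <code>hop_size</code>: folding the spectrum before the inverse transform is equivalent to subsampling after it.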
250
wavelet transform
Is there a reason why wavelet in continuous wavelet transform are symmetric?
https://dsp.stackexchange.com/questions/89420/is-there-a-reason-why-wavelet-in-continuous-wavelet-transform-are-symmetric
<p>I have looked at several packages to do continuous wavelet transform (cwt), and the usable wavelet families that are available are always symmetric. Is there a reason for that? The kind of signals I am studying can be highly asymmetric sometimes, so I would like to do cwt with asymmetric wavelets.</p>
<p>Continuous wavelets do not need to be symmetric. Due to their high redundancy, the mathematical conditions for a function to be an admissible wavelet are quite mild. In a nutshell, integrability, zero mean and fast decay in the frequency domain are &quot;enough&quot;.</p> <p>Issues arise when one wants non-redundant, critically discretized wavelets. For two-band discrete wavelets, finite support, orthogonality, real values and symmetry cannot be met simultaneously, except for the Haar wavelet. Hence, most standard orthogonal discrete wavelets are not symmetric.</p> <p>Let us remember that in standard signal/image processing, symmetry is common in windows, filters and transforms. Therefore, in continuous wavelets as well, symmetry is commonly used. Symmetry limits dependence on the arrow of time (mirror invariance), and eases the interpretation of wavelet coefficients.</p> <p>Therefore, you can choose non-symmetric continuous wavelets to study non-symmetric signals. However, the combination of asymmetries is no guarantee of better results.</p>
251
wavelet transform
how does wavelet transform detect pulses from a signal
https://dsp.stackexchange.com/questions/76327/how-does-wavelet-transform-detect-pulses-from-a-signal
<p>I am learning the wavelet transform. The image below is an example that uses the Haar wavelet to decompose a simple Haar-wavelet-like signal.</p> <p>I know that the coefficients at each level are the results of the convolution of the signal and the wavelet; the magnitude represents the similarity between them.</p> <p>I am not certain how the wavelet transform detects pulses. At level 1, the detail coefficients are zero, while there are two positive pulses at level 2. I don't understand why we have constant zero at level 1, since the signal is similar to the Haar function. At each level, the length of each signal is cut in half. Why do we need this practice?</p> <p><a href="https://i.sstatic.net/GSysu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GSysu.png" alt="a wavelet decomposition" /></a></p>
<p>At level 1, the discrete Haar wavelet transform reduces to a 2-tap discrete derivative, downsampled by a factor of two. This downsampling is the main reason for the wavelet shift-variance, and for missed &quot;pulses&quot; or discontinuities. For instance, if your signal is:</p> <p><span class="math-container">$$ [0,\,0,\,1,\,1,\,0,\,0,\,0,\,0]$$</span></p> <p>the two-tap derivative will yield:</p> <p><span class="math-container">$$ [0,\,1,\,0,\,-1,\,0,\,0,\,0]$$</span></p> <p>so apparently, the discontinuities are detected. Yet, after a downsampling by 2, one obtains:</p> <p><span class="math-container">$$ [0,\,.,\,0,\,.,\,0,\,.,\,0]$$</span></p> <p>The pulses are not apparent (here) anymore. If the same signal were shifted by one:</p> <p><span class="math-container">$$ [0,\,1,\,1,\,0,\,0,\,0,\,0,\,0]$$</span> you would have obtained: <span class="math-container">$$ [1,\,.,\,-1,\,.,\,0,\,.,\,0]$$</span> instead.</p> <p>Note that with longer wavelet filters, and further decomposition levels, the pulses will somehow reappear with higher amplitudes.</p> <p>From this observation, you can guess that discrete wavelets don't detect pulses directly, but that some processing and statistical selection is required over several detail levels. Pulse detection is often easier with continuous wavelet transforms.</p>
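The shift-variance described above is easy to reproduce with a two-line Haar detail stage (a simplified sketch; library sign and scaling conventions may differ):

```python
import numpy as np

def haar_details(x):
    """Level-1 Haar details: 2-tap difference over adjacent pairs, downsampled by 2."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2)

d_aligned = haar_details([0, 0, 1, 1, 0, 0, 0, 0])   # pulse aligned with the pairing
d_shifted = haar_details([0, 1, 1, 0, 0, 0, 0, 0])   # same pulse, shifted by one
# d_aligned is all zeros: both edges fall *between* pairs and are missed;
# d_shifted has nonzero entries at the pulse boundaries.
```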
252
wavelet transform
Subsampling property of wavelet transform
https://dsp.stackexchange.com/questions/30551/subsampling-property-of-wavelet-transform
<p>One of the properties I have seen for isotropic tight wavelet frames is</p> <p>$$\sum_{i\in\mathbb{Z}} \left|h(2^i\omega)\right|^2 = 1$$</p> <p>where $h(\omega)$ is the frequency spectrum of the original wavelet. See page 8 in <a href="http://bigwww.epfl.ch/preprints/unser1202p.pdf" rel="nofollow">A unifying parametric framework for steerable wavelet transforms.</a></p> <p>I have always assumed that this enables the downsampling often seen in wavelet decompositions, which start off with a wavelet satisfying $h(\omega) = 1\, \forall \omega &gt; \pi/2$, meaning that the top half of the spectrum is covered and thus subsampling of the low pass remainder is possible because there are no longer any frequencies above $\pi/2$.</p> <ul> <li>Is this correct? </li> <li>Or can you perform downsampling still having frequencies above $\pi/2$?</li> <li>If one does not need downsampling or wants to have different partitions, is the following condition sufficient, $$\sum_{i=1}^{I} \left|h_i(\omega)\right|^2 = 1$$ where $\{h_i(\omega)\}$ are the frequency responses of the wavelets in the frame?</li> </ul>
<p>I am not completely confident with the full theory of wavelet frames, however I will share some pointers and my beliefs; do not take any of the following for a truth. The property you show is (partly) a characterization of an orthogonal system. In Hernandez &amp; Weiss, A first course on wavelets, 1996, you find Theorem 2.12:</p> <p>If $\psi\in L^2(\mathbb{R})$ is a band-limited function such that $\hat{\psi}$ is zero in a neighborhood of the origin and $\{ \psi_{j,k}: j,k\in \mathbb{Z}\}$ is an orthonormal system, then this system is complete if and only if $$\sum_{j\in \mathbb{Z}} |\hat{\psi}(2^j \xi)|^2=1 \quad \text{a.e. on }\mathbb{R}-\{0\}$$ and $$\sum_{j=0}^{\infty} |\hat{\psi}(2^j \xi)\hat{\psi}(2^j (\xi+2k\pi))| =0\quad \text{a.e. on }\mathbb{R},\quad k\in 2\mathbb{Z}+1$$ with the system defined as: $$ \psi_{j,k}(x)= 2^{\frac{j}{2}}\psi(2^j x-k)\,.$$ So the conditions are a little bit stricter for an orthonormal system. It seems to me rather a characterization in 1D (of band-limited wavelets), that can help define radial 2D wavelets, than a characterization of radial wavelets <em>per se</em>. So if I understand your first question well, this implies the possibility of discrete wavelets and downsampling.</p> <p>Since it is a characterization, I (very unsure) would say that you can downsample, but you have no guarantee about the completeness. I do not understand your condition $$h(\omega)=1, \forall \omega&gt;\pi/2,$$ but I can add that in Chapter 3.4 of the above book, you have a characterization of wavelets with support in $[-\frac{8}{3}\pi,-\frac{2}{3}\pi] \cup [\frac{2}{3}\pi,\frac{8}{3}\pi] $, so I would say yes, as $\frac{8}{3}\pi&gt; \frac{\pi}{2}$. 
I would say that the limit on $\pi/2$ is not an issue because of the dilations of the mother wavelet.</p> <p>For the last question, without further information on the $h_i$, it is unlikely that you generate a wavelet system (possibly because of refinement equations), but as long as you cover the whole spectrum, who really cares? You get a decomposition that is energy preserving, but I am not sure it suffices to define a tight frame.</p> <p>Additional lectures:</p> <ul> <li><a href="https://www.math.tu-berlin.de/fileadmin/i26_fg-kutyniok/Kutyniok/Papers/Relationwavelets.pdf" rel="nofollow">Affine Density, Frame Bounds, and the Admissibility Condition for Wavelet Frames</a>, 2007, Gitta Kutyniok, with results on frames with generalized samplings $ \psi_{j,k}(x)= a^{\frac{j}{2}}\psi(a^j x-b)$</li> <li>A first course on wavelets, 1996, Eugenio Hernandez and Guido L. Weiss</li> <li>Ten Lectures on Wavelets, 1992, I. Daubechies, esp. Chapter 3</li> <li><a href="http://alpha.math.uga.edu/~mjlai/papers/Nam_Kyunglim_200508_phd.pdf" rel="nofollow">Tight Wavelet Frame Construction and its application for Image Processing</a>, 2005, Kyunglim Nam</li> <li>Frames and bases. An introductory course, 2008, Christensen, O.</li> </ul>
253
wavelet transform
discrete wavelet transform matrix for vectorized image
https://dsp.stackexchange.com/questions/59389/discrete-wavelet-transform-matrix-for-vectorized-image
<p>Yesterday I asked how to extract the 2D DFT matrix for a vectorized image. Today my question is how I can extract the 2D DWT matrix for a vectorized image. The Fourier transform has the property that the rows of the image are transformed first, then the columns. Is there a similar property for the discrete wavelet transform, such that a 1D DWT matrix can be utilized for the 2D DWT? </p>
<p>For only one level in both directions, the classical DWT schemes are applied separately (rows then columns, or columns then rows). </p> <p>For more levels, there exist two schemes:</p> <ul> <li>one interleaving decompositions: row, column, row, column, etc. (also called non-separable/square/non-standard/isotropic/Mallat),</li> <li>one on all rows, then all columns (also called separable/rectangle/standard/anisotropic/tensor). </li> </ul> <p>Only the second one truly deserves the "fully separable" label.</p> <p>References to related questions:</p> <ul> <li><a href="https://dsp.stackexchange.com/a/25601/15892">2D DWT Image Issue</a></li> <li><a href="https://dsp.stackexchange.com/questions/58843/what-is-the-correct-order-of-operations-for-a-2d-haar-wavelet-decomposition/58864#58864">What is the correct order of operations for a 2D Haar wavelet decomposition?</a></li> <li><a href="https://dsp.stackexchange.com/a/37092/15892">2D DWT computation order</a></li> </ul>
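For one level, "rows then columns" with a single 1D analysis matrix <span class="math-container">$A$</span> is exactly the matrix product <span class="math-container">$AXA^T$</span>, which answers the question of reusing a 1D DWT matrix in 2D. A small sketch with the orthonormal length-4 Haar matrix:

```python
import numpy as np

s = 1 / np.sqrt(2)
# One-level 1D Haar analysis matrix (averages on top, differences below)
A = np.array([[s,  s, 0,  0],
              [0,  0, s,  s],
              [s, -s, 0,  0],
              [0,  0, s, -s]])

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))

rows = np.stack([A @ r for r in X])          # transform each row with the 1D matrix
both = np.stack([A @ c for c in rows.T]).T   # then each column
direct = A @ X @ A.T                         # the same separable 2D transform, at once
```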
254
wavelet transform
Obtaining normalized matrix for the Haar Wavelet Transform
https://dsp.stackexchange.com/questions/34687/obtaining-normalized-matrix-for-the-haar-wavelet-transform
<p>I've been reading this article: <a href="http://aix1.uottawa.ca/~jkhoury/haar.htm" rel="nofollow">http://aix1.uottawa.ca/~jkhoury/haar.htm</a> which explains the Haar Wavelet Transform. </p> <p>At a certain point, the author says:</p> <blockquote> <p>... Since the transformation matrix $W$ is the product of three other matrices, one can normalize $W$ by normalizing each of the three matrices. The normalized version of $W$ is ... </p> </blockquote> <p>and provides the final matrix.</p> <p>But he doesn't provide the exact procedure for matrix normalization he used. Probably someone with a deep understanding of the Haar Wavelet Transform can explain to me how the matrices were normalized so that the final result was obtained.</p> <p>I've been searching the Internet for matrix normalization and couldn't find anything suitable. I also contacted the author of the article but didn't get any response so far.</p>
<p>After a few quick calculations, it seems to me that the trouble comes from poor notations for the root in your reference. If you read, in the final normalized matrix, $\sqrt{8/64}$ and $\sqrt{2/4}$ instead of $\sqrt{8}/64$ and $\sqrt{2}/4$ (along with the $\pm$ signs), then the final result is correct.</p> <p>The matrices $V_i$ are orthogonal. To normalize them, you only need to multiply them by a diagonal matrix $D_i$ made from the inverse of the norm of each column. </p> <p>For $V_1$, the norms are all equal to $\sqrt{1/2^2+1/2^2}=\sqrt{1/2}$. Hence, you can multiply $V_1$ by the diagonal matrix $$D_1= \operatorname{Diag} \{\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2}\}\,.$$ and get an orthonormal matrix. Similarly, you get that: $$D_2= \operatorname{Diag} \{\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,1,\,1,\,1,\,1\}\,,$$ and $$D_3= \operatorname{Diag} \{\sqrt{2},\,\sqrt{2},\,1,\,1,\,1,\,1,\,1,\,1\}\,.$$</p> <p>Finally, because of commutativity, you get a matrix $D=D_1 D_2 D_3$:</p> <p>$$D= \operatorname{Diag} \{2\sqrt{2},\,2\sqrt{2},\,2,\,2,\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2}\}\,.$$</p> <p>If you now multiply the matrix $W$ (the one with the $1/8$)</p> <p><a href="https://i.sstatic.net/XTAgd.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XTAgd.gif" alt="enter image description here"></a></p> <p>with that matrix $D$, you get a final matrix $W$:</p> <p><a href="https://i.sstatic.net/704Or.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/704Or.gif" alt="enter image description here"></a></p> <p>provided the notation interpretation for the square root. For instance, you can recover $1/8\times 2\sqrt{2} = \frac{\sqrt{8}}{\sqrt{8^2}} =\sqrt{8/64} $.</p>
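One way to double-check the result is to build the normalized Haar matrix directly (here via a standard recursive construction rather than the article's $V_1 V_2 V_3$ factorization, so only for illustration) and verify that it is orthonormal:

```python
import numpy as np

def haar_matrix(n):
    # orthonormal Haar transform matrix, n a power of two
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                    # averaging rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # differencing rows
    return np.vstack([top, bottom]) / np.sqrt(2)

W = haar_matrix(8)
# orthonormal: the inverse transform is just the transpose
assert np.allclose(W @ W.T, np.eye(8))
# the first (averaging) row is 1/sqrt(8), i.e. sqrt(8/64), matching the text
assert np.allclose(W[0], 1 / np.sqrt(8))
```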
255
wavelet transform
Difference between Gabor filtering and Discrete Wavelet Transform
https://dsp.stackexchange.com/questions/74389/difference-between-gabor-filtering-and-discrete-wavelet-transform
<p>Both Gabor filtering and the discrete wavelet transform (DWT) analyze the image in both spatial and frequency domains, unlike the Fourier transform, which analyzes the image only in the frequency domain. What is the difference between DWT and Gabor filtering?</p>
<p><em>Per se</em>, a Gabor filter in image processing is one linear filter at a certain scale and 2D frequency used for orientation filtering and texture analysis.</p> <p>It would be easier to compare Gabor representations and discrete wavelet transforms. Both are related to a linear decomposition of possibly multidimensional data at different scales with somehow oscillating functions. Main differences are:</p> <ul> <li>Discrete wavelets: critical or non-redundant scheme (stable, invertible), discretize some families of wavelets, more scale-based though wavelets range from weakly oscillating (Haar) to strongly oscillating (Shannon), often applied separably across dimensions so weakly directional, a lot of statistical and approximation properties (moments, regularity). Their redundant counterparts: discretized continuous wavelets, shift-invariant or stationary wavelets.</li> </ul> <p><a href="https://i.sstatic.net/CIUhO.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CIUhO.gif" alt="Different types of 1D discrete Daubechies wavelets" /></a></p> <ul> <li>Gabor transforms: (highly) redundant decomposition, mostly one shape: a modulated Gaussian, more frequency-based though computed at a couple of scales (Gaussian spread), inherently non-separable or directional. Their non-redundant counterparts: modulated or orthogonal lapped transforms, Malvar wavelets.</li> </ul> <p><a href="https://i.sstatic.net/yB4w8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yB4w8.png" alt="Gabor atoms" /></a></p>
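To make the "one shape" point concrete, here is a minimal numpy sketch of a single real Gabor atom (the function name and parameters are illustrative; `freq` is in cycles per pixel, `theta` the orientation in radians):

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=21):
    # real Gabor atom: isotropic Gaussian envelope times an oriented cosine
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * x_theta)

g = gabor_kernel(freq=0.1, theta=0.0, sigma=4.0)
```

A Gabor representation convolves the image with a bank of such atoms over several orientations and frequencies (hence the redundancy); a separable DWT instead iterates a single low-pass/high-pass pair along each axis.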
256
wavelet transform
Reconstruct a signal via continuous wavelet transform (CWT)
https://dsp.stackexchange.com/questions/67037/reconstruct-a-signal-via-continuous-wavelet-transform-cwt
<p>I want to use the <a href="https://en.wikipedia.org/wiki/Continuous_wavelet_transform" rel="nofollow noreferrer">continuous wavelet transform</a> (CWT) to modify the amplitude of a signal in the frequency domain (in a specific frequency band) and then reconstruct the signal. I want to do it by scaling the wavelet coefficients in that band. Is that the right way?</p> <p>Thanks in advance for your help.</p>
257
wavelet transform
Is there a reason why with symmetric padding, the inverse wavelet transform is not the adjoint of the wavelet transform?
https://dsp.stackexchange.com/questions/55840/is-there-a-reason-why-with-symmetric-padding-the-inverse-wavelet-transform-is-n
<p>I recently stumbled upon a bothering fact when using the <code>pywavelet</code> library in Python. When we use the default <code>"symmetric"</code> padding, the inverse wavelet transform is not the adjoint of the wavelet transform (and it's not the mathematical inverse of the wavelet transform, "only in one direction").</p> <p>However, if you use the <code>"zero"</code> padding the adjoint property is verified (not the inverse property). And if you use the <code>"periodization"</code> padding then all the properties are verified.</p> <p>I wanted to know if there was a theoretical reason behind why the padding has such a big influence on these properties or if it was down to implementation.</p> <p>You can find a code below highlighting the said issue:</p> <pre><code> import numpy as np import pywt wavelet_type = "db4" level = 4 mode = "symmetric" coeffs_tpl = pywt.wavedecn(data=np.zeros((512, 512)), wavelet=wavelet_type, mode=mode, level=level) coeffs_1d, coeff_slices, coeff_shapes = pywt.ravel_coeffs(coeffs_tpl) coeffs_tpl_rec = pywt.unravel_coeffs(coeffs_1d, coeff_slices, coeff_shapes) _ = pywt.waverecn(coeffs_tpl_rec, wavelet=wavelet_type, mode=mode) def py_W(im): alpha = pywt.wavedecn(data=im, wavelet=wavelet_type, mode=mode, level=level) alpha, _, _ = pywt.ravel_coeffs(alpha) return alpha def py_Ws(alpha): coeffs = pywt.unravel_coeffs(alpha, coeff_slices, coeff_shapes) im = pywt.waverecn(coeffs, wavelet=wavelet_type, mode=mode) return im x_example = np.random.rand(*coeffs_1d.shape) y_example = np.random.rand(512, 512) print("Adjoint:") x_Tadj_y = np.dot(x_example, np.conjugate(py_W(y_example))) T_x_y = np.dot(py_Ws(x_example).flatten(), np.conjugate(y_example.flatten())) print(np.allclose(x_Tadj_y, T_x_y)) print("\n Inverse from image to image:") print(np.allclose(py_Ws(py_W(y_example)), y_example)) print("\n Inverse from coefficients to coefficients:") print(np.allclose(py_W(py_Ws(x_example)), x_example)) </code></pre> <hr> <p>EDIT: When reading the <a 
href="https://pywavelets.readthedocs.io/en/v0.4.0/ref/signal-extension-modes.html" rel="nofollow noreferrer">documentation about the different paddings</a>, I could see that the problematic paddings compute a redundant representation of the original image (for example you have more coefficients than pixels in your original image). Therefore, it's normal that the inverse property is not verified from coefficients to coefficients (if taken at random without structure). However, this doesn't explain why the adjoint property is not verified.</p> <p>Also I don't know whether this redundancy is theoretical to ensure perfect reconstruction or is an implementation choice.</p>
<p>Actually the redundancy perfectly explains why the adjoint property is not verified. The reconstruction operator is not orthogonal anymore and therefore the inverse is not necessarily its adjoint.</p> <p>You can compute the adjoint efficiently according to <a href="https://arxiv.org/pdf/1707.02018.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1707.02018.pdf</a>.</p> <p>Regarding the necessity of redundancy, according <a href="https://github.com/PyWavelets/pywt/issues/472#issuecomment-470129454" rel="nofollow noreferrer">to this person</a> it is theoretically needed, although he couldn't provide a resource or a proof of that. But since it's a separate issue, I am going to ask another question (you can <a href="https://dsp.stackexchange.com/questions/55863/for-discrete-wavelet-transforms-is-redundancy-needed-to-ensure-perfect-reconstr">check it here</a>).</p>
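The orthogonal case can be checked without any library: with periodic boundary handling, one level of an orthogonal DWT is a square orthogonal matrix, so its inverse coincides with its adjoint (transpose). A numpy sketch for Haar (illustrative, not the pywt implementation):

```python
import numpy as np

n = 8
W = np.zeros((n, n))
for k in range(n // 2):
    W[k, 2*k] = W[k, 2*k + 1] = 1 / np.sqrt(2)     # approximation rows
    W[n//2 + k, 2*k] = 1 / np.sqrt(2)              # detail rows
    W[n//2 + k, 2*k + 1] = -1 / np.sqrt(2)

# square orthogonal analysis operator: inverse == adjoint (transpose)
assert np.allclose(W.T @ W, np.eye(n))
```

With symmetric padding the analysis operator gains extra rows (more coefficients than samples); it is then rectangular, and the reconstruction operator is a left inverse that no longer equals the transpose, matching the observed behavior.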
258
wavelet transform
How do I implement a nonorthogonal quadratic spline wavelet into discrete wavelet transform?
https://dsp.stackexchange.com/questions/67473/how-do-i-implement-a-nonorthogonal-quadratic-spline-wavelet-into-discrete-wavele
<p>I am trying to use the discrete wavelet transform for signal processing (time series data from a plant). I would like to use a mother wavelet that is not in Matlab. Mallat and Zhong 1992 described a nonorthogonal quadratic spline wavelet I would like to use. They name some coefficients and impulse responses, but I'm not sure how to use these (reference below). To detect and treat abnormal sudden changes, the wavelet must satisfy a very particular property. We call a smoothing function any function θ(x) whose integral is equal to 1 and that converges to 0 at infinity. This function must be twice differentiable; define ψ_1 as the first-order derivative and ψ_2 as the second-order derivative.</p> <p>ψ_1=(dθ(x))/dx and ψ_2=(d^2 θ(x))/(dx^2 )</p> <p>The wavelet transform is computed by convolving the signal function f with a wavelet function (dilated at a scale s). </p> <p>W_s^1 f(x) = f ∗ ψ_(1,s)(x) and W_s^2 f(x) = f ∗ ψ_(2,s)(x)</p> <p>The extrema points of the first wavelet transform (WT) give us the zero-crossing points of the second WT. The extrema of W_s^1 f(x) and the zero-crossings of W_s^2 f(x) are the inflection points or the sharp variation points of f(x) smoothed by θ(x). Those edges in the signal can only be found if the wavelet is the second derivative of a smooth function. Furthermore, the first- and second-order WT are used in the detection of steady-state periods. </p> <p>So I would like to code a quadratic spline wavelet function and a cubic spline scaling function like those in the Mallat and Zhong (1992) article. Could anyone help me? </p> <p><a href="https://i.sstatic.net/zFcd4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zFcd4.png" alt="enter image description here"></a></p> <p>Mallat, S., &amp; Zhong, S. (1992). Characterization of Signals From Multiscale Edges. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(7), 710-732. doi:10.1109/34.142909</p>
<p>I would think it is available in the Matlab codes from the <a href="https://statweb.stanford.edu/~wavelab/" rel="nofollow noreferrer">Wavelab</a> toolbox. It hosts codes for papers and books. Especially, there is a directory <code>\Wavelab\Books\WaveTour\</code> for Stéphane Mallat's book: <a href="https://www.elsevier.com/books/a-wavelet-tour-of-signal-processing/mallat/978-0-12-374370-1" rel="nofollow noreferrer">A Wavelet Tour of Signal Processing</a>. You can check images chapter by chapter from <a href="https://wavelet-tour.github.io/figures/" rel="nofollow noreferrer">A Wavelet Tour/Images</a>. Those images can be regenerated. For instance, the Wavelab codes for Chapter 5 in <code>\Matlab_Wavelab\Books\WaveTour\WTCh05</code> say, in <code>wt05fig04.m</code>:</p> <blockquote> <p>disp('The dyadic wavelet is a quadratic spline with one vanishing moment.') disp('These graphs display the modulus square of its Fourier transform') disp('dilated by 2^j for 0</p> </blockquote> <p>which may suit your needs.</p>
259
wavelet transform
What&#39;s the similarities and differences between Wigner transform and wavelet transform?
https://dsp.stackexchange.com/questions/11425/whats-the-similarities-and-differences-between-wigner-transform-and-wavelet-tra
<p>Wigner transform and continuous wavelet transform are both some kind of time-frequency representation of a signal.</p> <p>What are the similarities and differences between them? Could you give some comparison between them? Let's restrict ourselves in 1D signal at the moment.</p>
<p>There is a nice tutorial: <a href="http://tftb.nongnu.org/tutorial.pdf" rel="nofollow noreferrer">Time-Frequency Toolbox For Use with MATLAB</a> that compares many transformations, notably the continuous wavelet transform and the Wigner distribution and its avatars.</p> <p>Basically, the CWT belongs to atomic linear representations (a decomposition onto wavelet atoms), while Wigner-Ville is an energy distribution (to split the energy of signals over time and frequency).</p> <p>Since the tutorial comes with a toolbox and many examples, you can easily reproduce it on your signals.</p>
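To see the "energy distribution vs. linear atomic decomposition" difference in code, here is a minimal numpy sketch of a discrete (pseudo) Wigner-Ville distribution, a sketch rather than the toolbox implementation. Note that it is quadratic in the signal, and the usual discrete-WVD convention makes a tone at normalized frequency $f_0$ show up at frequency bin $2 f_0$:

```python
import numpy as np

def wigner_ville(x):
    # quadratic time-frequency distribution: FFT over lag of the
    # instantaneous autocorrelation x[t + tau] * conj(x[t - tau])
    n = len(x)
    w = np.zeros((n, n), dtype=complex)
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        w[t, tau % n] = x[t + tau] * np.conj(x[t - tau])
    return np.real(np.fft.fft(w, axis=1))

# analytic tone at bin 8 of 64: the WVD concentrates at bin 16 (2 * 8)
x = np.exp(2j * np.pi * 8 * np.arange(64) / 64)
tfr = wigner_ville(x)
```

A CWT of the same tone would instead return linear inner products with wavelet atoms, spread over the scales whose pass-bands cover the tone.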
260
wavelet transform
Difference between Discrete Wavelet Transform and convolution
https://dsp.stackexchange.com/questions/82121/difference-between-discrete-wavelet-transform-and-convolution
<p>Sorry in advance if my question is too dumb. I'm going through the book of Mallat, and from what I understand, the approximation/wavelet coefficients <span class="math-container">$a_j[n] = &lt;f, \phi_{j,n}&gt;$</span> and <span class="math-container">$d_j[n] = &lt;f, \psi_{j,n}&gt;$</span> can either be computed by convolving the signal with the appropriate function, or in a discrete setting using a cascade algorithm.</p> <p>But when I compare the results using <code>pywt</code>, I get very different results, and I do not understand the values returned by the Discrete Wavelet Transform.</p> <p>Could somebody help me with what I'm missing here? Thanks!</p> <p>My code:</p> <pre><code>import pywt import numpy as np if __name__ == '__main__': def f(x): return 2 * x - 1 # Signal x = np.arange(0, 1, 2**-8) y = f(x) # Using pywt coeffs = pywt.wavedec(y, 'haar') # print level 0 a_p0, d_p0 = coeffs[0], coeffs[1] print(&quot;DWT: &quot;, a_p0, d_p0) # Hand-made convolution at level 0 def phi(x): return 1 if 0 &lt;= x and x &lt; 1 else 0 def psi(x): return 1 if 0 &lt;= x and x &lt; 0.5 else ( -1 if 0.5 &lt;= x and x &lt; 1 else 0 ) a_c0, d_c0 = 0., 0. dx = 1.e-5 x_c = np.arange(0, 1, dx) for i in range(x_c.size): xi = x_c[i] a_c0 += dx * f(xi) * phi(xi) d_c0 += dx * f(xi) * psi(xi) print(&quot;Convolution: &quot;, a_c0, d_c0) </code></pre> <p>Output:</p> <pre><code>DWT: -0.06249999999999911 -8.000000000000004 Convolution: -1.0000000000026593e-05 -0.5000000000000003 </code></pre>
261
wavelet transform
Inverse of Wavelet Transforms - Background and Noise Removal
https://dsp.stackexchange.com/questions/14058/inverse-of-wavelet-transforms-background-and-noise-removal
<p><strong>Main Problem: How can you inverse Wavelet Transforms</strong></p> <p><strong>(Using the data given by <code>signal.scipy.cwt</code>)</strong></p> <p>I was wondering if anyone understands the <code>scipy.signal.cwt()</code> function well enough to use it to remove backgrounds and noise from data. I have seen where Matlab has an inverse continuous wavelet transform function which will return the original form of the data by inputting the wavelet transform, although you can filter out the slices you don't want.</p> <p><a href="http://www.mathworks.com/help/wavelet/ref/icwtlin.html" rel="nofollow noreferrer">MATALAB inverse cwt funciton</a></p> <p>Since scipy doesn't appear to have the same function, I have been trying to figure out how to get the data back in the same form, while removing the noise and background. How do I do this? I noticed that the wavelet has negative values, when the original data is made of Gaussian-like feautures, so I tried squaring it, but this gives me values way to large and not quite right. I tried reading about the transforms but everything is very complicated which was difficult to parse in order to figure out what I needed to do.</p> <p>Here is what I have been trying:</p> <pre><code># Compute the wavelet transform # I can't figure out what the width is or does? widths = range(1,11) # Ricker is 2nd derivative of Gaussian # (*close* to what *most* of the features are in my data) # (They're actually Lorentzians and Breit-Wigner-Fano lines) cwtmatr = signal.cwt(xy['y'], signal.ricker, widths) # Maybe we multiple by the original data? and square? 
WT_to_original_data = (xy['y'] * cwtmatr)**2 </code></pre> <p>And here is a fully compilable short script to show you the type of data I am trying to get and what I have etc.:</p> <pre><code>import numpy as np from scipy import signal import matplotlib.pyplot as plt # Make some random data with peaks and noise def make_peaks(x): bkg_peaks = np.array(np.zeros(len(x))) desired_peaks = np.array(np.zeros(len(x))) # Make peaks which contain the data desired # (Mid range/frequency peaks) for i in range(0,10): center = x[-1] * np.random.random() - x[0] amp = 60 * np.random.random() + 10 width = 10 * np.random.random() + 5 desired_peaks += amp * np.e**(-(x-center)**2/(2*width**2)) # Also make background peaks (not desired) for i in range(0,3): center = x[-1] * np.random.random() - x[0] amp = 40 * np.random.random() + 10 width = 100 * np.random.random() + 100 bkg_peaks += amp * np.e**(-(x-center)**2/(2*width**2)) return bkg_peaks, desired_peaks x = np.array(range(0, 1000)) bkg_peaks, desired_peaks = make_peaks(x) y_noise = np.random.normal(loc=30, scale=10, size=len(x)) y = bkg_peaks + desired_peaks + y_noise xy = np.array( zip(x,y), dtype=[('x',float), ('y',float)]) # Compute the wavelet transform # I can't figure out what the width is or does? widths = range(1,11) # Ricker is 2nd derivative of Gaussian # (*close* to what *most* of the features are in my data) # (They're actually Lorentzians and Breit-Wigner-Fano lines) cwtmatr = signal.cwt(xy['y'], signal.ricker, widths) # Maybe we multiple by the original data? and square? 
WT = (xy['y'] * cwtmatr)**2 # plot the data and results fig = plt.figure() ax_raw_data = fig.add_subplot(4,3,1) ax = {} for i in range(0, 11): ax[i] = fig.add_subplot(4,3, i+2) ax_desired_transformed_data = fig.add_subplot(4,3,12) ax_raw_data.plot(xy['x'], xy['y'], 'g-') for i in range(0,10): ax[i].plot(xy['x'], WT[i]) ax_desired_transformed_data.plot(xy['x'], desired_peaks, 'k-') fig.tight_layout() plt.show() </code></pre> <p>This script will output this image:</p> <p><img src="https://i.sstatic.net/HLVXu.jpg" alt="output" /></p> <p>Where the first plot is the raw data, the middle plots are the wavelet transforms and the last plot is what I want to get out as the processed (background and noise removed) data.</p> <p>Does anyone have any suggestions? Thank you so much for the help.</p>
<p>I ended up finding a package which provides an inverse wavelet transform function called <code>mlpy</code>. The function is <code>mlpy.wavelet.uwt</code>. This is the snippet which may interest people if they are trying to do noise or background removal:</p> <pre><code># Make 2**n amount of data new_y, bool_y = wave.pad(xy['y']) orig_mask = np.where(bool_y==True) # wavelet transform parameters levels = 8 wf = 'h' k = 2 # Remove Noise first # Wave transform wt = wave.uwt(new_y, wf, k, levels) # Matrix of the difference between each wavelet level and the original data diff_array = np.array([(wave.iuwt(wt[i:i+1], wf, k)-new_y) for i in range(len(wt))]) # Index of the level which is most similar to original data (to obtain smoothed data) indx = np.argmin(np.sum(diff_array**2, axis=1)) # Use the wavelet levels around this region noise_wt = wt[indx:indx+1] # smoothed data in 2^n length new_y = wave.iuwt(noise_wt, wf, k) # Background Removal error = 10000 errdiff = 100 i = -1 iter_y_dict = {0:np.copy(new_y)} bkg_approx_dict = {0:np.array([])} while abs(errdiff)&gt;=1*10**-24: i += 1 # Wave transform wt = wave.uwt(iter_y_dict[i], wf, k, levels) # Assume last slice is lowest frequency (background approximation) bkg_wt = wt[-3:-1] bkg_approx_dict[i] = wave.iuwt(bkg_wt, wf, k) # Get the error errdiff = error - sum(iter_y_dict[i] - bkg_approx_dict[i])**2 error = sum(iter_y_dict[i] - bkg_approx_dict[i])**2 # Make every peak higher than bkg_wt diff = (new_y - bkg_approx_dict[i]) peak_idxs_to_remove = np.where(diff&gt;0.)[0] iter_y_dict[i+1] = np.copy(new_y) iter_y_dict[i+1][peak_idxs_to_remove] = np.copy(bkg_approx_dict[i])[peak_idxs_to_remove] # new data without noise and background new_y = new_y[orig_mask] bkg_approx = bkg_approx_dict[len(bkg_approx_dict.keys())-1][orig_mask] new_data = diff[orig_mask] </code></pre> <p>Here a complete compilable script which can produce output:</p> <pre><code>import numpy as np from scipy import signal import matplotlib.pyplot as plt import 
mlpy.wavelet as wave # Make some random data with peaks and noise ############################################################ def gen_data(): def make_peaks(x): bkg_peaks = np.array(np.zeros(len(x))) desired_peaks = np.array(np.zeros(len(x))) # Make peaks which contain the data desired # (Mid range/frequency peaks) for i in range(0,10): center = x[-1] * np.random.random() - x[0] amp = 100 * np.random.random() + 10 width = 10 * np.random.random() + 5 desired_peaks += amp * np.e**(-(x-center)**2/(2*width**2)) # Also make background peaks (not desired) for i in range(0,3): center = x[-1] * np.random.random() - x[0] amp = 80 * np.random.random() + 10 width = 100 * np.random.random() + 100 bkg_peaks += amp * np.e**(-(x-center)**2/(2*width**2)) return bkg_peaks, desired_peaks # make x axis x = np.array(range(0, 1000)) bkg_peaks, desired_peaks = make_peaks(x) avg_noise_level = 30 std_dev_noise = 10 size = len(x) scattering_noise_amp = 100 scat_center = 100 scat_width = 15 scat_std_dev_noise = 100 y_scattering_noise = np.random.normal(scattering_noise_amp, scat_std_dev_noise, size) * np.e**(-(x-scat_center)**2/(2*scat_width**2)) y_noise = np.random.normal(avg_noise_level, std_dev_noise, size) + y_scattering_noise y = bkg_peaks + desired_peaks + y_noise xy = np.array( zip(x,y), dtype=[('x',float), ('y',float)]) return xy # Random data Generated ############################################################# xy = gen_data() # Make 2**n amount of data new_y, bool_y = wave.pad(xy['y']) orig_mask = np.where(bool_y==True) # wavelet transform parameters levels = 8 wf = 'h' k = 2 # Remove Noise first # Wave transform wt = wave.uwt(new_y, wf, k, levels) # Matrix of the difference between each wavelet level and the original data diff_array = np.array([(wave.iuwt(wt[i:i+1], wf, k)-new_y) for i in range(len(wt))]) # Index of the level which is most similar to original data (to obtain smoothed data) indx = np.argmin(np.sum(diff_array**2, axis=1)) # Use the wavelet levels around this 
region noise_wt = wt[indx:indx+1] # smoothed data in 2^n length new_y = wave.iuwt(noise_wt, wf, k) # Background Removal error = 10000 errdiff = 100 i = -1 iter_y_dict = {0:np.copy(new_y)} bkg_approx_dict = {0:np.array([])} while abs(errdiff)&gt;=1*10**-24: i += 1 # Wave transform wt = wave.uwt(iter_y_dict[i], wf, k, levels) # Assume last slice is lowest frequency (background approximation) bkg_wt = wt[-3:-1] bkg_approx_dict[i] = wave.iuwt(bkg_wt, wf, k) # Get the error errdiff = error - sum(iter_y_dict[i] - bkg_approx_dict[i])**2 error = sum(iter_y_dict[i] - bkg_approx_dict[i])**2 # Make every peak higher than bkg_wt diff = (new_y - bkg_approx_dict[i]) peak_idxs_to_remove = np.where(diff&gt;0.)[0] iter_y_dict[i+1] = np.copy(new_y) iter_y_dict[i+1][peak_idxs_to_remove] = np.copy(bkg_approx_dict[i])[peak_idxs_to_remove] # new data without noise and background new_y = new_y[orig_mask] bkg_approx = bkg_approx_dict[len(bkg_approx_dict.keys())-1][orig_mask] new_data = diff[orig_mask] ############################################################## # plot the data and results fig = plt.figure() ax_raw_data = fig.add_subplot(121) ax_WT = fig.add_subplot(122) ax_raw_data.plot(xy['x'], xy['y'], 'g') for bkg in bkg_approx_dict.values(): ax_raw_data.plot(xy['x'], bkg[orig_mask], 'k') ax_WT.plot(xy['x'], new_data, 'y') fig.tight_layout() plt.show() </code></pre> <p>And here is the output I am getting now: <img src="https://i.sstatic.net/krkZH.jpg" alt="Shifting Background Removal"> As you can see, there is still a problem with the background removal (it shifts to the right after each iteration), but <a href="https://dsp.stackexchange.com/questions/14086/shifting-of-shift-invariant-wavelet-transforms">it is a different question which I will address here</a>.</p>
262
wavelet transform
Steganography using wavelet transform — not every wavelet recovering the hidden message efficiently
https://dsp.stackexchange.com/questions/8202/steganography-using-wavelet-transform-not-every-wavelet-recovering-the-hidden
<p>I'm working on a project to hide a simple string inside a 255x255 image using the wavelet transform. The problem is that, </p> <p>When I used the <code>Haar</code> wavelet the result was successful. This is what I've done:</p> <ul> <li>Decompose the image into its approximation and details (Vertical, Horizontal, Diagonal)</li> <li>then hiding the message in the details of the image,</li> <li>Reconstructing the image with the hidden message,</li> <li>Applying the same wavelet on the reconstructed image.</li> <li>Decoding the algorithms applied for the encoding of the message.</li> </ul> <p>but when I tried these wavelets,</p> <pre><code>Coiflets 'coif1', ... , 'coif5' Symlets'sym2', ... , 'sym8', ... ,'sym45' Discrete Meyer'dmey' Biorthogonal'bior1.1', 'bior1.3', 'bior1.5' 'bior2.2', 'bior2.4', 'bior2.6', 'bior2.8' 'bior3.1', 'bior3.3', 'bior3.5', 'bior3.7' 'bior3.9', 'bior4.4', 'bior5.5', 'bior6.8' Reverse Biorthogonal'rbio1.1', 'rbio1.3', 'rbio1.5' 'rbio2.2', 'rbio2.4', 'rbio2.6', 'rbio2.8' 'rbio3.1', 'rbio3.3', 'rbio3.5', 'rbio3.7' 'rbio3.9', 'rbio4.4', 'rbio5.5', 'rbio6.8' </code></pre> <p>I am getting weird characters rather than my original hidden message. What does that mean? How would I know which wavelet I am supposed to use when the message is totally random? </p>
263
wavelet transform
Question about the Haar Wavelet transform
https://dsp.stackexchange.com/questions/8469/question-about-the-haar-wavelet-transform
<p>I'm trying to detect the presence of a sinusoid in 1/f noise conditions using the STFT and the DWT with Haar wavelets. I find this interesting phenomenon that I'm not able to explain (see plot at <a href="https://i.sstatic.net/x3UQ9.png" rel="nofollow noreferrer">1</a>)</p> <p>My sampling rate is 1200 Hz and the graphs show the one sided response (0-600 Hz). I am using 2 scales for the Wavelet Transform.</p> <p>My question is, why in the FFT method can I clearly see the blips in about the correct frequency as my simulated data (180 Hz), while in the WT method, the increase in energy is spread visible across all three bands. Shouldn't I see a bright line only in the highpass segment of the second scale?</p>
<p>The FFT is much better at detecting a sinusoid than a DWT. The FFT is approximating a periodic signal with a series of periodic signals. The coefficients of the FFT will be maximum in the frequency bins (could be a single bin with sufficient resolution) where the component of the FFT series best matches the periodic signal to be detected. The noise will distribute roughly evenly in all bins, so the signal to be detected will show up readily.</p> <p>The DWT is representing the signal to be detected (in this case a periodic signal) with a set of non-periodic functions, so this requires lots of components (bands as you refer to them – but DWT really works with scale not frequency) to follow the periodic nature of the signal to be detected. The DWT is better suited to detect / analyze non-periodic signals.</p>
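This is easy to reproduce with the question's parameters (fs = 1200 Hz, a 180 Hz tone in noise): the FFT peak lands on the right bin, while a hand-rolled Haar DWT spreads the tone's energy over several detail levels because the Haar high-pass filter has a very poor stopband. A numpy sketch, no toolbox needed:

```python
import numpy as np

np.random.seed(0)
fs, n = 1200, 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 180 * t) + 0.5 * np.random.randn(n)

# FFT: the tone concentrates in one bin near 180 Hz
f = np.fft.rfftfreq(n, 1 / fs)
peak_hz = f[np.argmax(np.abs(np.fft.rfft(x))[1:]) + 1]  # skip the DC bin

# Haar DWT: detail energy per scale
# (level 1 spans roughly 300-600 Hz, level 2 spans 150-300 Hz)
a, detail_energy = x.copy(), []
for _ in range(3):
    a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
    detail_energy.append(np.sum(d**2))
```

Although the 180 Hz tone belongs to the level-2 band, a large share of its energy leaks into the level-1 detail, because the Haar high-pass response is far from zero at 180 Hz; that leakage is exactly the spread seen across the wavelet bands.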
264
wavelet transform
Inverse Continuous Wavelet Transform off by constant factors
https://dsp.stackexchange.com/questions/83724/inverse-continuous-wavelet-transform-off-by-constant-factors
<p>I am implementing a continuous wavelet transform and its inverse using morlet wavelets. When I compute the inverse, the resulting signal is off by some constant factor (but otherwise correct). Depending on which frequencies of wavelet I use for the transforms, the resulting signal is off by a different constant factor.</p> <p><a href="https://i.sstatic.net/ZLRKi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLRKi.png" alt="enter image description here" /></a></p> <p>I am using the formula above for my inverse transform, but I am missing the 1/C before the double integral. I think this 1/C is the source of my issues, but I cannot find a definition that I can understand or implement, so any help in solving my situation would be appreciated.</p> <p>Thanks!</p>
265
wavelet transform
Estimate the type of wavelet transform in receiver side
https://dsp.stackexchange.com/questions/15761/estimate-the-type-of-wavelet-transform-in-receiver-side
<p>Suppose that a signal is decomposed by using the discrete wavelet transform (DWT) and transmitted. Is it possible for the receiver to find which type of wavelet was applied on the transmitter side?</p> <p>I mean, if the signal is decomposed by Haar or DB and then reconstructed, is there any way to know the type of wavelet (like Haar) that was used? </p>
266
wavelet transform
What does the normalization step of the Haar wavelet transform represent?
https://dsp.stackexchange.com/questions/1739/what-does-the-normalization-step-of-the-haar-wavelet-transform-represent
<p>When you perform the Haar wavelet transform, you take the sums and differences, then at each stage, you multiply the entire signal by $\small\sqrt2$.</p> <p>When taking the inverse transform, you multiply the signal by $\frac{1}{\sqrt2}$ for each iteration.</p> <p>What does this "normalization" really represent?</p>
<p>As I understand it, the normalization is because the Haar wavelet conserves the energy of the signal. That is, when you take a signal from one domain to another, you aren't supposed to <em>add</em> energy to it (although conceivably you might lose energy).</p> <p>The normalization is just a way to ensure that your Haar-transformed signal in the Haar domain has exactly the same energy as your signal in the original domain. </p> <p>Intuitively speaking, Haar, Fourier, etc, are all just basis transformations, which just means that you are looking at the signal in a different way (technically, through a different set of bases). Therefore, if all you are doing is looking at a signal differently, its energy cannot/should not change.</p>
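A quick numerical check of this (one Haar level written out by hand, just for illustration):

```python
import numpy as np

np.random.seed(1)
x = np.random.randn(64)

# one Haar level: pairwise sums and differences scaled by 1/sqrt(2)
a = (x[0::2] + x[1::2]) / np.sqrt(2)
d = (x[0::2] - x[1::2]) / np.sqrt(2)

# the 1/sqrt(2) normalization is exactly what keeps the energy unchanged
assert np.isclose(np.sum(x**2), np.sum(a**2) + np.sum(d**2))
```

Without the scaling, $(p+q)^2 + (p-q)^2 = 2(p^2+q^2)$, so every level would double the energy; dividing by $\sqrt{2}$ cancels that factor of 2.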
267
wavelet transform
wavelet packet transform and lifting scheme?
https://dsp.stackexchange.com/questions/68149/wavelet-packet-transform-and-lifting-scheme
<p>so the <a href="https://en.wikipedia.org/wiki/Generalized_lifting" rel="nofollow noreferrer">lifting</a> scheme is basically an alternative to performing the discrete wavelet transform with several advantages. </p> <p>But here are three questions which I did not find an answer to:</p> <ol> <li>is it possible to use lifting also for calculation of the wavelet packet transform or does it just apply to the normal wavelet transform?</li> <li>is the wavelet packet transform also critically sampled like the dwt or is it redundant?</li> <li>if using a level basis in the packet transform, i.e. a representation which uses only elements from the same decomposition level, the time-frequency plot is regularly discretized. How does this representation then differ, if so, from a STFT which also exhibits constant Heisenberg boxes in the time-frequency plane.</li> </ol> <p>Thank you for helping in advance</p>
<p>A lifting scheme is a method for splitting a sequence of discrete samples into downsampled subsequences, so that you can predict one subsequence from the other, and then update the other one, using possibly nonlinear predict and update operators. <strong>This corresponds to a generalization of one level of the discrete wavelet transform</strong>, usually obtained fully linearly by convolution (first-generation wavelets).</p> <p>Lifting wavelets were called second-generation wavelets, and proved to be a generalization of first-generation wavelets.</p> <p>This being said, for <strong>question 1:</strong></p> <ul> <li>yes, since a wavelet packet transform is just iterations of several <strong>one-level</strong> DWTs on the other subbands as well. </li> </ul> <p>For <strong>question 2:</strong></p> <ul> <li>A lifting discrete wavelet packet can be critically sampled, like the DWT, but it can also be undecimated or oversampled if needed. And there are interesting questions about what kind of stationary lifted wavelets can be built.</li> </ul> <p>For <strong>question 3:</strong></p> <p>Generally, the Heisenberg boxes are considered to be of constant area. With the STFT, tiles have the same shape, whereas with wavelets they are dilated/stretched in each direction. With a uniform wavelet packet decomposition (where the time-scale plot is regularly discretized), the time-scale bins can look similar to those of the STFT. If the packet transform is critically sampled, the result is often poorer with wavelet packets (because of the lack of redundancy). </p> <p>At the same redundancy, aliasing and constraints on wavelet shapes often yield fewer degrees of freedom than full-fledged oversampled filter banks, but the filters are generally easier to optimize. Nevertheless, <strong>if you want uniform frequency bins, wavelets are rarely very beneficial</strong>.</p>
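As an illustration of the split/predict/update structure described above, here is a minimal NumPy sketch of one level of the Haar DWT written as a lifting step (the unnormalized average/difference convention is assumed; the same skeleton iterates on `a` only for a DWT, or on both `a` and `d` for a packet transform):

```python
import numpy as np

def haar_lift(x):
    # split into even/odd samples, then predict and update
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even        # predict: each odd sample from its even neighbour
    a = even + d / 2      # update: coarse band becomes the pairwise average
    return a, d

def haar_unlift(a, d):
    # invert by undoing the steps in reverse order with flipped signs
    even = a - d / 2
    odd = d + even
    x = np.empty(2 * len(a))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([3.0, 1.0, 2.0, 6.0])
a, d = haar_lift(x)
print(a, d)                                # averages and differences of the pairs
print(np.allclose(haar_unlift(a, d), x))   # True: lifting steps are always invertible
```

Note that invertibility holds by construction (each step is undone by its mirror image), which is why the predict/update operators may even be nonlinear.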
268
wavelet transform
Wavelet transform, scalogram, detail and approximation coefficients
https://dsp.stackexchange.com/questions/93387/wavelet-transform-scalogram-detail-and-approximation-coefficients
<p>I understand that the wavelet transform is about computing the coefficients to assign to scaled and translated versions of the chosen mother wavelet. The coefficients measure the correlation between the signal and the shifted/scaled wavelet. That said, I know that the CWT is used to compute the scalogram while the DWT is used for other tasks (denoising and decomposition) and is implemented as a bank of low-pass and high-pass filters...there is also a father wavelet, besides the mother wavelet, involved in the DWT...I don't think the CWT involves a father wavelet...Both the CWT and the DWT are discrete transforms where the scale and translation parameters are discretized in different ways...</p> <p>I am confused about the difference between the CWT and the DWT. In the case of the DWT, are all the outputs from the low-pass and high-pass filters in the filter bank simply different low frequency and high frequency versions of the original input time-signal x(t)? The filter output signals are called approximation and detail coefficients cA and cD...I guess cA and cD are not time signals, correct? Can we add all the filter outputs, cA and cD, to get the original signal x(t)?</p> <p>I get that some mother wavelets can form orthogonal sets, some non-orthogonal sets, and some biorthogonal sets...</p> <p>Thank you!</p>
269
wavelet transform
energy normalization across different scales in case of discrete wavelet transform
https://dsp.stackexchange.com/questions/72314/energy-normalization-across-different-scales-in-case-of-discrete-wavelet-transfo
<p>In the case of the continuous wavelet transform (CWT), the wavelets are generated from the mother wavelet by scaling and translation. To achieve energy normalization and to ensure that all wavelets have the same energy regardless of their scale, each wavelet is divided by the square root of the scale S. In the case of the discrete wavelet transform, as shown in the attached figure, the energy of the wavelet having 4B bandwidth at the first scale is twice the energy of the wavelet having a bandwidth 2B at the second scale, and so on. Thus, the energy is not equal at different scales. How is the energy normalized at different scales in the case of the DWT?<a href="https://i.sstatic.net/BgWzJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BgWzJ.png" alt="enter image description here" /></a></p>
<p><strong>It's just the same as in the CWT:</strong></p> <p>multiply by one over the square root of the scale.</p> <p><strong>Here is why</strong></p> <p>The energy is defined as:</p> <pre><code>E = Sum(abs(x0(t))^2) </code></pre> <p>see <a href="https://en.wikipedia.org/wiki/Energy_(signal_processing)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Energy_(signal_processing)</a></p> <p>Let x0(t,B) be a discrete wavelet function, as you defined above, whose energy doubles when B doubles.</p> <p>Choose a fixed B and try to change the function x0(t,B) into a new function x(t,B) that has the same energy as x0(t,B) at B=1:</p> <pre><code>E(B=1) = Sum(abs(x0(t,B=1))^2) = Econst </code></pre> <p>As you stated, the energy increases linearly with B:</p> <pre><code>E(B) = B * Econst = Sum(abs(x0(t,B))^2) </code></pre> <p>To make E independent of B, introduce a compensation factor a(B):</p> <pre><code>Econst = E(B) * a(B) = B * Econst * a(B) </code></pre> <p>obviously <code>a(B) = 1/B</code></p> <p>This can be transformed to:</p> <pre><code>Econst = 1/B * Sum(abs(x0(t,B))^2) = Sum(1/B * abs(x0(t,B))^2) </code></pre> <p>using:</p> <pre><code>1/B = (1/sqrt(B))^2 </code></pre> <p>Pulling the factor into the square:</p> <pre><code>Econst = Sum( abs(1/sqrt(B)*x0(t,B))^2 ) </code></pre> <p>Therefore the new energy-normalized function x(t,B) is:</p> <pre><code>x(t,B) = 1/sqrt(B) * x0(t,B) </code></pre>
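The same bookkeeping can be checked numerically in the time domain, where a dilation by scale s multiplies the energy by s (scale and bandwidth being inversely related, this is the mirror image of the B argument above), so dividing by sqrt(s) restores it. A sketch with an illustrative Morlet-style prototype wavelet:

```python
import numpy as np

t = np.linspace(-50, 50, 100001)
dt = t[1] - t[0]

def proto(u):
    # illustrative real Morlet-style prototype (oscillation times a Gaussian)
    return np.cos(5 * u) * np.exp(-u**2 / 2)

E1 = np.sum(np.abs(proto(t))**2) * dt            # energy at scale s = 1
ratios = {}
for s in (2.0, 4.0, 8.0):
    E_raw  = np.sum(np.abs(proto(t / s))**2) * dt                # grows like s
    E_norm = np.sum(np.abs(proto(t / s) / np.sqrt(s))**2) * dt   # stays ~ E1
    ratios[s] = (E_raw / E1, E_norm / E1)
    print(s, ratios[s])   # first ratio ~ s, second ratio ~ 1
```

The unnormalized energy ratio tracks the scale, and the 1/sqrt(s) factor pins it back to 1, which is exactly the `1/sqrt(B)` compensation derived above in the bandwidth picture.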
270
wavelet transform
inquiries for writing continuous wavelet transform codes manually
https://dsp.stackexchange.com/questions/11576/inquiries-for-writing-continuous-wavelet-transform-codes-manually
<p>I want to write continuous wavelet transform code manually in MATLAB, using the complex Morlet function. Here is some background:<br> Continuous wavelet transform definition: $C(S,T;f(t),\psi(t))=\frac{1}{\sqrt{S}}\int_{-\infty}^{+\infty} f(t)\,\psi^{\ast}\left(\frac{t-T}{S}\right)dt$<br> $S$ is the scale vector, for example <code>1:60;</code> $T$ is the time shift. And $\psi(t)= \frac{1}{\sqrt{\pi f_{b}}}e^{2i\pi f_{c} t}e^{\frac{-t^{2}}{f_{b}}}$ </p> <pre><code>psi = ((pi*fb)^(-0.5)).*exp(2*1i*pi*fc.*t).*exp(-t.^2/fb); % for example fb=15;fc=1; </code></pre> <p>My discrete signal has $N$ points. The discrete version of that integral for one element <code>s(i)</code> of the scale vector is<br> $C_{T}(s)=\sum_{n=0}^{N-1}f(n)\,\psi^{\ast}\left(\frac{n-T}{s}\right)$; this must be calculated for all scales <code>1:60;</code><br> and at the end it must return a complex matrix of size $N\times S$. I am confused about writing this code. Can anyone help?<br> P.S. I don't want to use the MATLAB function <code>conv2</code> to calculate that convolution.<br> Thanks in advance. Here is my first attempt, but it's not working at all. </p> <pre><code>%% user CWT
clear all
N=300; %sample point numbers
t=linspace(0,30,N);
%% signal
x=5*sin(2*pi*0.5*t); % signal with freq of 0.5 HZ
%% cwt
fc=1;fb=15;
% psi=((pi*fb)^(-0.5)).*exp(2*1i*pi*fc.*...
%     t).*exp(-t.^2/fb);
%% convolution Psi([N-n]/S)*x(n) so we calculate convolution(psi(n/s),x(n))
for s=1:60 %scale vector s=[1:1:60]
    for i = 1:N % number of discrete times
        for k = 1:i
            if ((i-k+1)&lt;N+1) &amp;&amp; (k &lt;N+1)
                PSI = ((pi*fb)^(-0.5)).*exp(2*1i*pi*fc.*...
                    (t/s)).*exp(-(t/s).^2/fb);
                c(i,s) = c(i)+ x(k)*PSI(i-k+1);
            end
        end
    end
end
</code></pre>
<p>Here is the Python code that I use for CWT. You'll need a short 16-bit mono 44.1 kHz <code>test.wav</code> soundfile as input. </p> <p>In order to compute the CWT, we need to compute the convolution between the input <code>x[n]</code> and the Morlet wavelet. An efficient way to do this is to do <code>ifft(x_ft * morlet_ft)</code> ( I understand this as $f * g(t) =\frac{1}{2 \pi} \int_{-\infty}^{+\infty} \hat{f}(w) \hat{g}(w) e^{i t w} d w$ : it is more efficient to do a <strong>multiplication</strong> of the Fourier transforms and THEN to do an inverse Fourier transform, than to compute a convolution directly)</p> <pre><code>from scipy.io.wavfile import read
from pylab import *
import matplotlib.pyplot as plt
import numpy as np

# read the file
(sr, samples) = read('test.wav')
x = (np.float32(samples)/((2 ** 15)-1))
N = len(x)
x_fft = np.fft.fft(x)

# scales
J = 200
scales = np.asarray([2**(i * 0.1) for i in range(J)])

# pre-compute the Fourier transform of the Morlet wavelet
morletft = np.zeros((J, N))
for i in range(J):
    morletft[i][:N // 2] = sqrt(2 * pi * scales[i]) * exp(-(scales[i] * 2 * pi * np.arange(N // 2) / N - 2)**2 / 2.0)

# compute the CWT
X = empty((J, N), dtype=complex128)
for i in range(J):
    X[i] = np.fft.ifft(x_fft * morletft[i])

# plot
plt.imshow(abs(X[:, np.arange(0, N, 100)]), interpolation='none', aspect='auto')
plt.show()
</code></pre> <p>Here is the scalogram plot that this code gives :</p> <p><img src="https://i.sstatic.net/AONAo.png" alt="enter image description here"></p>
271
wavelet transform
Heisenberg Uncertainty Principle and wavelet transform
https://dsp.stackexchange.com/questions/10884/heisenberg-uncertainly-principle-and-wavelet-transform
<p>I am using the CMOR (complex Morlet) wavelet in Fourier space in order to reconstruct my signal and also estimate the damping and frequencies of the embedded modes. There are two main parameters in cmor: Fb and Fc, the bandwidth and center frequency. With increasing Fb I get better results for frequency estimation, but I can't get a good damping estimation with high Fb. Is it something to do with the Heisenberg Uncertainty Principle? And if yes, would you explain the role of Fb and Fc in the accuracy of the frequency and damping estimations? The wavelet transform is said to provide a unified framework for getting around the Heisenberg Uncertainty Principle. </p>
<p>The HUP follows directly from the properties of the Fourier Transform, because time and frequency are orthogonal bases in which we can expand the coefficient sequence of our signal. </p> <p>In fact, all pairs of orthonormal bases will have some kind of uncertainty principle associated with them.</p> <p>In traditional Fourier analysis, either the time axis or the frequency axis is split into identically sized and spaced regions which permit us to gain information about the signal: <img src="https://i.sstatic.net/bR1GA.png" alt="enter image description here"></p> <p>Wavelets allow you to analyse signals by splitting up the time-frequency plane into regions of different sizes. </p> <p>If you increase Fb, you sharpen the resolution in frequency, and you will necessarily have worse resolution in time; that is why your damping estimate (a time-domain feature) degrades. </p> <p><img src="https://i.sstatic.net/jZSoC.gif" alt="enter image description here"></p>
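The time/frequency trade-off can be checked numerically: for Gaussian windows, the product of the rms time width and the rms frequency width is pinned at its lower bound 1/(4&pi;), no matter how the window is stretched. A NumPy sketch (the window sizes are arbitrary illustrative choices):

```python
import numpy as np

N = 4096
t = np.arange(N) - N // 2
f = np.fft.fftfreq(N)                      # frequency axis in cycles/sample

def spreads(sigma):
    g = np.exp(-t**2 / (2 * sigma**2))     # Gaussian window of time-width sigma
    G = np.abs(np.fft.fft(np.fft.ifftshift(g)))
    dt = np.sqrt(np.sum(t**2 * g**2) / np.sum(g**2))   # rms width in time
    df = np.sqrt(np.sum(f**2 * G**2) / np.sum(G**2))   # rms width in frequency
    return dt, df

products = []
for sigma in (8, 16, 32):
    dt, df = spreads(sigma)
    products.append(dt * df)
    print(sigma, dt, df, dt * df)   # products all sit near 1/(4*pi) ~ 0.0796
```

Stretching the window trades `dt` against `df` one-for-one; a wavelet at a given scale is subject to exactly the same constraint, which is why a high-Fb (frequency-sharp) Morlet must blur time-localized features such as damping envelopes.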
272
wavelet transform
Quadtree decomposition of Discrete Wavelet Transform using bio4.4/CDF wavelet
https://dsp.stackexchange.com/questions/43224/quadtree-decomposition-of-discrete-wavelet-transform-using-bio4-4-cdf-wavelet
<p>My problem is pretty basic but fundamental. It relates to the way discrete wavelet transform behaves for biorothognal 4.4 or CDF wavelets. When using most wavelets (e.g., CDF 9/7 or bio4.4 or Daubechies higher order wavelets) the size of the returned approximation and detail matrices is not a power of two. For my application (Embedded Zero Tree compression), this presents a problem because I want to construct a quad tree decomposition of the transformed image which requires all decompositions (LL, LH, HL and HH) to be of size a power of two. For example, consider the <em>Mathematica</em> code:</p> <pre><code>data = RandomReal[{0, 1}, {16, 16}]; dwd = DiscreteWaveletTransform[data, CDFWavelet[]]; dwd["Dimensions"] (*output*) {{0} -&gt; {12, 12}, {1} -&gt; {12, 12}, {2} -&gt; {12, 12}, {3} -&gt; {12, 12}, {0, 0} -&gt; {10, 10}, {0, 1} -&gt; {10, 10}, {0, 2} -&gt; {10, 10}, {0, 3} -&gt; {10, 10}, {0, 0, 0} -&gt; {9, 9}, {0, 0, 1} -&gt; {9, 9}, {0, 0, 2} -&gt; {9, 9}, {0, 0, 3} -&gt; {9, 9}, {0, 0, 0, 0} -&gt; {9, 9}, {0, 0, 0, 1} -&gt; {9, 9}, {0, 0, 0, 2} -&gt; {9, 9}, {0, 0, 0, 3} -&gt; {9, 9}} </code></pre> <p>Here the dimensions of various decomposition levels is given as rules, e.g., $\{1\}\rightarrow \{12, 12\}$ means that the first LH decomposition matrix is of size $12 \times 12$.</p> <p>What should I do? Should I simply truncate the matrices to nearest 2's power? or something else.</p>
<p>First, for compression, it is not advisable to truncate the data to the nearest power of two; it is generally better to expand the original image instead. After all, this is in use for <a href="https://dsp.stackexchange.com/a/35343/15892">JPEG DCT padding</a>. Second, you can expand the image to the next integer divisible by $2^L$, where $L$ is the number of wavelet levels. For standard images, $L=4,5,6$ is sufficient, and this is less expansive than "the next power of two". If you expand the image smoothly (half-sample or whole-sample symmetry or antisymmetry, depending on the image content), the expansion gets packed easily into the low-pass part of the wavelets, and does not cost a lot if $2^L$ is sufficiently far away from the image size.</p> <p>A contributed package apparently doing the job as expected is available with <a href="http://library.wolfram.com/infocenter/Demos/447/" rel="nofollow noreferrer">The Discrete Periodic Wavelet Transform in 1D</a> by James F. Scholl at the Wolfram Library Archive. It could be adapted to 2D.</p> <p>Alternatively, you can dive into wavelets on the interval, or non-expansive filter banks, yet in my limited experience the above scheme is relatively efficient and is perfectly fine at first glance to focus on the essentials.</p>
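A minimal NumPy sketch of the suggested expansion, padding both image dimensions to the next multiple of $2^L$ with a half-sample symmetric extension (the function name and the pad-at-the-end choice are illustrative):

```python
import numpy as np

def pad_to_multiple(img, L):
    # Expand both dimensions to the next integer divisible by 2**L,
    # using a half-sample symmetric extension (np.pad 'symmetric' mode).
    m = 2 ** L
    pad_rows = (-img.shape[0]) % m
    pad_cols = (-img.shape[1]) % m
    return np.pad(img, ((0, pad_rows), (0, pad_cols)), mode='symmetric')

img = np.random.rand(250, 300)
padded = pad_to_multiple(img, 4)        # multiples of 2**4 = 16
print(img.shape, '->', padded.shape)    # (250, 300) -> (256, 304)
```

Note how little this expands compared with padding a 300-wide image all the way to 512: the smooth extension adds only a handful of rows and columns, which the low-pass band absorbs cheaply.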
273
wavelet transform
What does it mean for a Wavelet transform to commute with translations?
https://dsp.stackexchange.com/questions/56696/what-does-it-mean-for-a-wavelet-transform-to-commute-with-translations
<p>Referencing this article here <a href="https://arxiv.org/pdf/1203.1513.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1203.1513.pdf</a></p> <p>It states "A wavelet transform commutes with translations, and is therefore not translation invariant". Now I understand why it is a problem that the result is not translation invariant, however, I'm confused as to why it is. </p> <p>What does it mean for a transform to commute with translation and why does the Wavelet transform commute with translations (i.e. why is the Wavelet transformation shift invariant)?.</p>
<p>We can start from what "shift invariant" means:</p> <p>A transform G is shift invariant if <span class="math-container">$$\forall x:\sigma^nG(x) = G(x)$$</span> <span class="math-container">$\sigma^n$</span> being a shift by n. Examples of transforms that are invariant to shifts are the histogram and the amplitude of the Fourier transform.</p> <p>Commuting with shifts means <span class="math-container">$$\forall x:\sigma^nG(x) = G(\sigma^nx)$$</span> So a transform that commutes with shifts can't be shift invariant (unless G(x) is constant).</p> <p>Note: I'm actually not certain that the wavelet transform commutes with shifts, but it most certainly is not shift invariant.</p>
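The distinction can be demonstrated numerically. The sketch below uses a single undecimated filtering channel as a stand-in for one wavelet subband (circular convolution, so shifts are circular; note that the decimated DWT only commutes with shifts by multiples of the decimation factor, which is exactly why it fails to be translation invariant in the article's sense):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = np.array([1.0, -1.0])    # crude Haar-like high-pass filter (illustrative)

def G(sig):
    # one undecimated "wavelet channel": circular convolution with h
    return np.real(np.fft.ifft(np.fft.fft(sig) * np.fft.fft(h, len(sig))))

n = 5
print(np.allclose(G(np.roll(x, n)), np.roll(G(x), n)))   # True: G commutes with shifts
print(np.allclose(G(np.roll(x, n)), G(x)))               # False: G is not shift invariant
```

Shifting the input shifts the output by the same amount (commutation), so the output itself changes under a shift, which is the opposite of invariance.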
274
wavelet transform
Plotting a Time Frequency contour/colormesh plot of a Discrete Wavelet Transform
https://dsp.stackexchange.com/questions/68913/plotting-a-time-frequency-contour-colormesh-plot-of-a-discrete-wavelet-transform
<p>I have a pressure vs time data of a noise on which I wish to perform discrete wavelet transform. I have divided my frequency range into 1/3rd Octave Bands and have calculated sound pressure level at each band.</p> <p>I am very much confused on how to perform a discrete wavelet transform in 1/3rd Octave Frequency Bands or any set of bands for that matter and further how to interpret the output(coefficients) so as to plot a <strong>Time-Frequency</strong> colormesh or a contour plot of my results.</p> <p>I was able to plot a continuous wavelet transform of my data using the pcolormesh module of matplotlib. The output of pywt.cwt are easy to interpret and hence I could plot a colormesh for it.</p> <p>I am a beginner in DSP and have just started learning wavelet transforms, so any help will be much appreciated. Thankyou.</p>
<p>With wavelets, which rely on scaling, you can expect time-scale rather than time-frequency representations. Obtaining 2D scalogram images from continuous wavelets is relatively easy. Most of them are not exactly invertible.</p> <p>Exactly invertible discretized wavelets come in different shapes. To preserve images with equal-sized pixels, you could prefer redundant discrete wavelets over the DWT.</p> <p>So, maybe <code>rwt.rdwt()</code></p>
275
wavelet transform
Use wavelet transform to extract a waveform of certain frequency
https://dsp.stackexchange.com/questions/20310/use-wavelet-transform-to-extract-a-waveform-of-certain-frequency
<p>how to apply wavelet transform method to extract alpha waveform( higher and lower band frequency known) from a given known signal ?</p>
<p>The mother wavelet is a bandpass filter. This means we have to apply the CWT(in this case), calculate the correspondence between scales and frequencies, zero out everything outside the known frequency band for the alpha wave and invert the transform. </p> <pre><code>data = Import["your_sample_eeg_data"]; ListLinePlot[data] </code></pre> <blockquote> <p><img src="https://i.sstatic.net/nCVVO.png" alt="Mathematica graphics"></p> </blockquote> <pre><code>cwd = ContinuousWaveletTransform[data, MorletWavelet[], SampleRate -&gt; 200, Padding -&gt; 0] freq = (cwd["SampleRate"]/(#1 cwd["Wavelet"]["FourierFactor"])) &amp; /@ (Thread[{Range[cwd["Octaves"]], 1}] /. cwd["Scales"]); ticks = Transpose[{Range[Length[freq]], freq}]; WaveletScalogram[cwd, Frame -&gt; True, FrameTicks -&gt; {{ticks, Automatic}, Automatic}, FrameLabel -&gt; {"Time", "Frequency(Hz)"}, ColorFunction -&gt; "SunsetColors", ImageSize -&gt; 500] </code></pre> <blockquote> <p><img src="https://i.sstatic.net/xl1Ik.png" alt="Mathematica graphics"></p> </blockquote> <p>So, we are interested in the 5th octave(the region between <code>10.51</code> and <code>5.25</code>, of course, roughly speaking)</p> <pre><code>mwd = WaveletMapIndexed[#1 0 &amp;, cwd, Except[{5, _}]] WaveletScalogram[mwd, ColorFunction -&gt; "SunsetColors"] </code></pre> <blockquote> <p><img src="https://i.sstatic.net/pBlkO.png" alt="Mathematica graphics"></p> </blockquote> <p>It's like performing a brain surgery with a chainsaw ;__;</p> <pre><code>ListLinePlot[InverseContinuousWaveletTransform[mwd]] </code></pre> <p><img src="https://i.sstatic.net/uuaFr.png" alt="Mathematica graphics"></p> <p>Those are the basic steps I would perform if I have to extract a signal within a certain frequency band.</p>
276
wavelet transform
3D wavelet transform in the form of a matrix?
https://dsp.stackexchange.com/questions/88885/3d-wavelet-transform-in-the-form-of-a-matrix
<p>I was wondering if anyone may know of any method for the construction of a 3D wavelet transform in matrix form? I've been able to build matrices to perform 1D &amp; 2D transforms. Yet, am finding very little resources regarding the 3D case in the literature.</p>
<p>In general, if you want to construct the matrix form of a linear operator, you can always apply the operator to the sequence of basis vectors. The result for each basis vector is the corresponding column of the matrix.</p> <p>For example, let <span class="math-container">$f(\mathbf{x}) : \mathbb{C}^N \rightarrow \mathbb{C}^N$</span> be some general linear operator. If we wanted to construct the <span class="math-container">$N \times N$</span> matrix representation, we could do so computing the <span class="math-container">$i^\mathrm{th}$</span> column as <span class="math-container">$f(\mathbf{e}_i)$</span>, where <span class="math-container">$\mathbf{e}_i$</span> is the <span class="math-container">$i^\mathrm{th}$</span> standard basis vector (i.e., all zeros except a one at element <span class="math-container">$i$</span>). This works since for any matrix <span class="math-container">$\mathbf{A}$</span>, the product <span class="math-container">$\mathbf{A} \mathbf{e}_i$</span> is simply the <span class="math-container">$i^\mathrm{th}$</span> column of <span class="math-container">$\mathbf{A}$</span>.</p> <p>Note that you can do this for 1-D, 2-D, 3-D, etc. problems by stacking the dimensions on top of one another into a single vector. However, the size of your matrix will quickly grow out of control. For example, consider an <span class="math-container">$L \times P$</span> image. The matrix representation for a 2-D DFT of this image will be <span class="math-container">$LP \times LP$</span>.</p> <p>If you want the matrix representation so that you can use it in a system solver that requires the adjoint of an operator in order to compute the solution, you should handle that with function handles and avoid explicitly forming the matrix.</p>
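A minimal NumPy sketch of this column-by-column construction, using one orthonormal Haar stage as the example operator (for a 3-D transform, `f` would simply ravel and unravel the volume internally, and `N` would be the product of the three dimensions):

```python
import numpy as np

def haar_level(x):
    # example linear operator: one orthonormal Haar stage, stacked as [a; d]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def operator_to_matrix(f, N):
    # column i of A is f applied to the i-th standard basis vector
    A = np.empty((N, N))
    for i in range(N):
        e = np.zeros(N)
        e[i] = 1.0
        A[:, i] = f(e)
    return A

N = 8
A = operator_to_matrix(haar_level, N)
x = np.random.rand(N)
print(np.allclose(A @ x, haar_level(x)))   # True: the matrix reproduces the operator
print(np.allclose(A.T @ A, np.eye(N)))     # True: this particular stage is orthonormal
```

The same `operator_to_matrix` works unchanged for a 3-D wavelet transform wrapped as `f`; only the matrix size (and the memory warning above) changes.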
277
wavelet transform
discrete Haar wavelet transform, fast and efficient method?
https://dsp.stackexchange.com/questions/12619/discrete-haar-wavelet-transform-fast-and-efficient-method
<p>I'm working on my own implementation of the discrete Haar wavelet transform, I understand the wavelet theory and how to construct the Haar matrix of size N to perform the transform, but obviously there is a problem using the Haar matrix in application - it's simply too big.</p> <p>I am working on an application that applies the Haar transform to an audio signal. If I sample the signal at 44.1 kHz, even a one second recording would require a 2^16 by 2^16 Haar matrix to do the transform in one step, which is obviously impractical and a hardware constrained machine such as a phone wouldn't have the capability to hold such a matrix in memory.</p> <p>The other method I've seen used is that a 2X2 Haar matrix is applied to the entire signal iteratively, and the results are stored in two arrays - one array holding the "average" Haar coefficients (first element of the output vector) and the other holding the "difference" coefficients (second element of the output vector). The process is then repeated over the "average" coefficients - as these coefficients are essentially the result of a lowpass filter. Each time the process is repeated the number of elements needed to process in the following step is halved, until only one lowpass coefficient is left. This seems fine, and definitely works but when I started to think about it, it just seems that it would be really slow.</p> <p>My main question is, what's a good way to implement a fast and efficient haar transform? 
Or a practical way to apply one of these two methods?</p> <p>PS: I've been learning about this completely on my own, so if I made some mistakes in my explanation or way of thinking about this stuff let me know.</p> <p>PSS: I've never done any kind of audio processing before, so if you know about some other filters that I should apply to the raw signal before doing a wavelet transform let me know!</p> <p>PSSS: I know that the Haar wavelet may not be the best for this type of signal processing, but when I tried to learn about using other DWTs such as the Daubechies wavelets, the literature seemed very confusing, or at least was directed at more advanced readers. If anyone could point me in the direction of how to implement other DWTs, that would be great. </p>
<p>The Haar wavelet is actually part of the Daubechies family, for the case D=2. There's some example code <a href="https://en.wikipedia.org/wiki/Daubechies_wavelet" rel="nofollow">on Wikipedia</a> that shows the Daubechies transform.</p> <p>The Haar transform is just a low-pass filter combined with a high-pass filter, with the coefficients being placed in the first and second halves of the signal. Then this keeps going iteratively (or recursively) on the low-pass half.</p> <p>The low-pass filter is just the sum of two adjacent samples, while the high-pass filter is their difference.</p> <p>For the Haar wavelet case, you can check out <a href="https://github.com/scottsievert/iSparse/blob/master/UROPv6/dwt.c" rel="nofollow">my vectorized code on GitHub</a>. Here, <code>idwt2</code> stands for "Inverse Discrete Wavelet Transform 2D."</p>
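On the speed worry in the question: the iterative cascade is actually fast, because the work halves at every level, giving a total of n + n/2 + n/4 + ... which is less than 2n operations, i.e. linear in the signal length, with no large matrix involved. A minimal NumPy sketch of the multilevel scheme (the orthonormal 1/sqrt(2) convention is assumed):

```python
import numpy as np

def haar_dwt(x, levels):
    # Multilevel Haar DWT in O(n): replace the current low-pass band by its
    # pairwise sums/differences (orthonormal 1/sqrt(2) scaling) at each level.
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    details = []
    for _ in range(levels):
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)
        details.append(d)      # keep this level's detail coefficients
        x[:n // 2] = a         # recurse on the approximation band only
        n //= 2
    return x[:n], details[::-1]   # final approximation, details coarse -> fine

x = np.random.rand(1024)
a, details = haar_dwt(x, 10)
print(len(a), [len(d) for d in details[:3]])   # full decomposition down to 1 sample
```

Since the scaling makes each stage orthonormal, the total energy of `a` plus all the detail bands equals the energy of `x`, which is a handy correctness check.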
278
wavelet transform
Any Open Source Fast Wavelet transform libraries?
https://dsp.stackexchange.com/questions/10268/any-open-source-fast-wavelet-transform-libraries
<p>I am in need of an open source library for computing Fast wavelet transforms (FWT) and Inverse fast wavelet transforms (IFWT) - this is to be part of a bigger code I am currently writing. </p> <p>The things I am looking for in the library:</p> <p>1) Contains a good variety of wavelet families (Daub, Haar, Coif, etc.)</p> <p>2) Ability to run in parallel - VERY IMPORTANT</p> <p>3) Reasonable documentation, so that I can include new wavelets of my own without having to modify the complete source (maybe an OOP-based approach would help?)</p> <p>I am flexible about the language - C/C++/Fortran90/Fortran77/Python .... Any language would do for me, though I would prefer something which is optimized for <em>speed</em> and in <em>parallel</em>.</p> <p>So far, I have found PyWavelets - it looks good, but it is in Python (and therefore considerably slow) and it doesn't run in parallel. I am going to use the DWT for processing really huge datasets, so speed is an important concern for me.</p> <p>It's understandable that there may not be anything with <em>all</em> the requirements I mentioned. But I wanted to hear from the community if they have any suggestions.</p>
<p>You can have a look at LTFAT's wavelet module: <a href="http://ltfat.sourceforge.net/doc/wavelets/index.php" rel="noreferrer">http://ltfat.sourceforge.net/doc/wavelets/index.php</a></p> <p>It runs in Matlab/Octave with a backend written in C. It has a fairly large database of wavelet filters, and new ones can be added easily.</p> <p>What exactly do you mean by </p> <blockquote> <p>2) Ability to run in parallel - VERY IMPORTANT</p> </blockquote> <p>Should the computation itself be parallelized, or should it be possible to run several batches of FWT calculations? I guess that running several instances of Octave should do the trick in the latter case.</p>
279
wavelet transform
Getting frequency content at different times from discrete wavelet transform coeffs
https://dsp.stackexchange.com/questions/27671/getting-frequency-content-at-different-times-from-discrete-wavelet-transform-coe
<p>After being away from DSP for a long time, I am trying to familiarize myself with wavelet transform. Here is what I (think) have understood so far:</p> <ul> <li>Wavelet transform provides you high time resolution at higher frequencies and high frequency resolution at lower frequencies.</li> <li>DWT can be calculated by using QMF pair and subsampling. When used recursively, filter pair increases frequency resolution and subsampling decreases time resolution.</li> <li>Result of level 1 DWT <code>[cA, cD] = dwt(signal, Lo_D, Hi_D)</code> essentially gives low frequency and high frequency splits of the signal called approximation and details.</li> </ul> <p>If my understanding is correct, in this algorithm, the end result is essentially in time domain. If I want to know what frequencies are present, I have to take an fft of the coefficients. Is that correct? If so, is there a way I can get the frequency domain representation without extra step of taking fft?</p> <p>Question 2: If scalogram is the answer to question 1, how is it generated using results of repeated use of <code>dwt</code>? I want to avoid using ready MATLAB functions for better understanding except maybe <code>dwt</code></p> <p>Thanks</p>
<p><strong>Question 1</strong>: I think you want to investigate <a href="http://www.mathworks.com/help/wavelet/ref/centfrq.html" rel="nofollow noreferrer">centfrq</a> &amp; <a href="http://www.mathworks.com/help/wavelet/ref/scal2frq.html" rel="nofollow noreferrer">scal2frq</a> (scale to frequency):</p> <blockquote> <p>FREQ = <strong>centfrq</strong>('wname') returns the center frequency in hertz of the wavelet function</p> <p>F = <strong>scal2frq</strong>(A,'wname',DELTA) returns the pseudo-frequencies corresponding to the scales given by A and the wavelet function 'wname' (see wavefun for more information) and the sampling period DELTA.</p> </blockquote> <p>The intuition that I use to think about your question is this: the wavelet transform is a hierarchical (tree-like) decomposition of an input signal. I would not say it is in the "time domain" as commonly understood. I think of the DWT output via the duality: <strong><em>Wavelets &lt;---> Filterbanks</em></strong>.</p> <p>Imagine a set of 8 bandpass filters tuned to filter different ranges from low to high on the frequency spectrum. You know which frequency ranges correspond to which filters, so you know how much signal lives in each of the 8 frequency ranges based on the bandpass filter outputs. That's a basic frequency analysis. You wouldn't need to compute an FFT at this point. And with the successive QMF filtering, then decimation of your signal at each level, your DWT coefficients may be only one sample long (or have a very coarse resolution), so an FFT wouldn't make any sense.
</p> <p>There's a good explanation @ <a href="https://math.stackexchange.com/questions/28581/which-time-frequency-coefficients-does-the-wavelet-transform-compute">https://math.stackexchange.com/questions/28581/which-time-frequency-coefficients-does-the-wavelet-transform-compute</a></p> <p><strong>Question 2</strong>: A scalogram (<a href="https://dsp.stackexchange.com/questions/7626/scalogram-and-related-nomenclatures-for-dwt">Scalogram (and related nomenclatures) for DWT?</a> ) is good for visualizing your transformed signal, but I think it would be overkill just to compute which frequency range your wavelet coefficients correspond to. </p>
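The blockquoted `scal2frq` relation boils down to F = Fc / (a * DELTA), which is easy to evaluate by hand. A sketch (Fc = 0.8125 is the centre frequency MATLAB's `centfrq` is commonly quoted as reporting for the Morlet wavelet; treat that number as an assumption here):

```python
def scal2frq(scales, Fc, delta):
    # pseudo-frequency (Hz) of each scale a: F = Fc / (a * delta)
    return [Fc / (a * delta) for a in scales]

Fc = 0.8125          # assumed Morlet centre frequency, in cycles per sample
delta = 1 / 1000     # sampling period for a 1 kHz sampling rate
print(scal2frq([1, 2, 4, 8], Fc, delta))   # ~[812.5, 406.25, 203.125, 101.5625] Hz
```

Doubling the scale halves the pseudo-frequency, which is the octave structure of the DWT filter bank described above.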
280
wavelet transform
How to take wavelet transform of sparse input data
https://dsp.stackexchange.com/questions/71606/how-to-take-wavelet-transform-of-sparse-input-data
<p>I have a sparse dataset indexed by nanoseconds. Storing the dataset in a discrete fashion would take too much memory. I'd like to take a wavelet transform and I'd like it to be relatively fast. The dataset has about 100,000 dirac deltas in it.</p> <p>Is this possible?</p> <p>Thanks! James</p>
281
wavelet transform
Why does a synchrosqueezed wavelet transform show oscillating behavior?
https://dsp.stackexchange.com/questions/71855/why-does-a-synchrosqueezed-wavelet-transform-show-oscillating-behavior
<p>This question came up in the context of the <a href="https://github.com/OverLordGoldDragon/ssqueezepy/issues/6" rel="nofollow noreferrer"><code>ssqueezepy</code></a> library. As a basic experiment I did compute the synchrosqueezed wavelet transform of three basic signals:</p> <ol> <li>A sine of 440 Hz.</li> <li>A sine of 880 Hz.</li> <li>A signal that mixes (1) and (2).</li> </ol> <p>The result looks like this:</p> <p><a href="https://i.sstatic.net/xDjo1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xDjo1.png" alt="enter image description here" /></a></p> <p>Full reproduction code: <a href="https://gist.github.com/bluenote10/f736cc02d11e6b74efd6fefecd68f83e" rel="nofollow noreferrer">here</a></p> <p>Basically the transform manages to perfectly localize signals (1) and (2), but for the mixed signal there is a surprising oscillation pattern. Zooming in a bit:</p> <p><a href="https://i.sstatic.net/ROL3q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ROL3q.png" alt="enter image description here" /></a></p> <p>In general the issue depends on the choice of <code>mu</code> of the underlying wavelet. In this particular example, increasing <code>mu</code> slightly can help to make the transform of the mixed signal non-oscillating.</p> <p>Nonetheless I'm wondering why synchrosqueezing leads to this oscillating interpretation of the signal in the first place?</p>
<p>This was interesting to figure out. The key lies in the phase transform, and how CWT interacts with own derivative upon insufficient component separation. Relevant are, and I'll be answering, the following:</p> <ol> <li>What causes the wavy pattern?</li> <li>Is the wavy pattern truly a <em>sine</em>, or a lookalike?</li> <li>Why is only <em>one</em> band wavy, not both?</li> <li>What can be done about it?</li> </ol> <p>Answer split in two to separate meat from supplement, but much of insight's in latter; sections labeled in recommended reading order.</p> <p><strong>TL;DR:</strong> skip to Conclusion. <a href="https://pastebin.com/0FH67yYb" rel="nofollow noreferrer">Answer code</a>.</p> <hr> <h3>0. How does the phase transform work?</h3> <p>A helpful requisite for following sections, covered <a href="https://dsp.stackexchange.com/a/72238/50076">here</a>.</p> <hr> <h3>1. What causes the wavy pattern?</h3> <p>Begin by visualizing the phase transform, <code>w</code>:</p> <p><a href="https://i.sstatic.net/oQF6n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oQF6n.png" alt="enter image description here" /></a></p> <p>The lower frequency's CWT and SSQ are nearly that of a pure tone, but the higher frequency is all wiggly. Let us plot one such wiggly row:</p> <p><a href="https://i.sstatic.net/hFj3D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hFj3D.png" alt="enter image description here" /></a></p> <p>This sure looks sinusoidal. But <em>is it</em>? Stay tuned.</p> <p>First: is the wavy SSQ explained by the <code>w</code>? Absolutely: <code>w</code> is wavy across timesteps, meaning for every column, we reassign CWT rows to a different SSQ row, in a wavy/sinusoidal pattern (see [complement] on reassignment). Our answer thus lies in understanding <em>why</em> <code>w</code> is wavy to begin with. <a href="https://dsp.stackexchange.com/a/72240/50076">Second answer</a> begins here.</p> <hr> <h3>4. 
Inseparability: pre-transform perspective</h3> <p>Recall, the CWT and dCWT operations both take place in the frequency domain; we reach the lowest level of analysis by inspecting it directly. With <code>xh = fft(pad(x))</code>, <code>psih</code> = freq-domain wavelet, and <code>dpsih</code> = freq-domain wavelet with the derivative (<code>* 1j * xi / dt</code>), we plot both, overlapped and rescaled to fit in same graph, as well as the result of the multiplication - for our same row, and one on higher scale:</p> <p><a href="https://i.sstatic.net/jqQXi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jqQXi.png" alt="enter image description here" /></a></p> <p>First, note for our original row <code>scales[80] = 5.08</code>, we indeed have a <em>sum of</em> two pure cisoids (analytic wavelet -&gt; all negative frequencies are zero, and Morlet has pure-real DFT; cosine input -&gt; pure real DFT; thus a DFT coefficient multiplies its complex exponential basis without cancellation), one at higher frequency having much greater amplitude.</p> <p>Next, compare the cisoid amplitudes for <code>psih</code> vs <code>dpsih</code>; higher frequency dominates more for <code>dpsih</code> than for <code>psih</code> for both example scales. Coincidence? Well, they both do <code>* xh</code>, thus we expect the <code>dpsih</code> wavelet itself to skew toward right. Plotting,</p> <p><a href="https://i.sstatic.net/3KJsx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KJsx.png" alt="enter image description here" /></a></p> <p>this is confirmed. Does this happen for <em>every</em> scale? <em>Yes</em>: <code>* xi = * linspace(0, pi, N)</code> - taking the derivative involves multiplying by a line, thus skewing the entire wavelet toward higher frequencies, also breaking even symmetry (though not a lot).</p> <p>All puzzle pieces are in. The skewness creates a difference in relative amplitudes at the two frequencies w.r.t. the non-derivative. 
The general form for <code>dWx / Wx</code> is then (still assuming purely real <code>fft(Wx)</code>):</p> <p><span class="math-container">$$ \frac{dW_x}{W_x} = \frac{a\exp{(j(\phi_1 + \pi/2))} + b\exp{(j(\phi_2 + \pi/2))}}{c\exp{(j\phi_1)} + d\exp{(j\phi_2)}}, \tag{4} \\ \phi_i = \omega_i \frac{n}{N}, \ \ n=[0, 1, ..., N - 1] $$</span></p> <p>(<span class="math-container">$+\pi/2$</span> per the derivative). Let's first inspect interesting cases numerically, then derive an exact relation as proof (<a href="https://dsp.stackexchange.com/a/72240/50076">second answer</a>).</p> <hr> <h3>8. What can be done about it?</h3> <p>Strictly speaking, nothing, unless the wavelet decays completely to zero eventually (untrue for many wavelets); as long as there's more than one frequency in the entire DFT of the input, there will be some form of interference. However, practically, such interference can be made negligible by:</p> <ol> <li>Ensuring bands are sufficiently separated</li> <li>Reducing wavelet frequency width (for Morlet, increase <code>mu</code>)</li> </ol> <p>Let's inspect Morlet <code>mu=5</code> vs <code>mu=20</code>:</p> <p><a href="https://i.sstatic.net/rhFzI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rhFzI.png" alt="enter image description here" /></a></p> <p><code>mu=20</code> is clearly narrower; to get a better idea, single out a few rows:</p> <p><a href="https://i.sstatic.net/QPewE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QPewE.png" alt="enter image description here" /></a></p> <p>Overlapped at same center frequency (~same row):</p> <p><a href="https://i.sstatic.net/cqfSo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cqfSo.png" alt="enter image description here" /></a></p> <p>We thus predict <code>mu=20</code> will show no wiggliness we can see. 
In fact, <code>mu=20</code> is overkill, we'll settle for <code>mu=10</code>:</p> <p><a href="https://i.sstatic.net/ZAmcH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZAmcH.png" alt="enter image description here" /></a></p> <p>Can we also guess how to make <em>both</em> bands wiggly for small <code>mu</code>? Note, increasing scales lowers the wavelet's center frequency (shifts it left), but also reduces its frequency width; thus our wiggle odds improve toward <em>lower scales</em> (or higher frequencies). This turned out easier said than done; after some experimentation, I've had limited success, but maybe it can be done better:</p> <p><a href="https://i.sstatic.net/AtEj2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AtEj2.png" alt="enter image description here" /></a></p> <hr> <h3>9. Conclusion</h3> <ol> <li><strong>What causes the wavy pattern?</strong> The frequency-domain wavelet and its derivative intercept the input bands at slightly different ratios per latter being skewed. The resulting phase transform is (imag. part of) a ratio of sums of complex sinusoids, which is of form <span class="math-container">$(\cos(\phi) + k_1)/(\cos(\phi) + k_2)$</span>. The reassignment pattern dictated by <code>w</code> can then have a sine-like pattern across rows, which manifests as a sine-looking synchrosqueezed CWT.</li> <li><strong>Is the wavy pattern truly a sine, or a lookalike?</strong> Latter; it can never be exactly sine, but close to. 
The pattern is characterized by <span class="math-container">$(\cos(f_1 - f_2) + k_1)/(\cos(f_1 - f_2) + k_2)$</span>, and has a period of <span class="math-container">$|f_1 - f_2|^{-1}$</span>, where <span class="math-container">$f_1,f_2=$</span> input frequencies.</li> <li><strong>Why is only one band wavy, not both?</strong> With increasing scale (and decreasing center frequency), the frequency-domain wavelet moves to the left while shrinking in width; the resulting cisoid proportions work out such that dominantly sine reassignment pattern is toward the <em>higher</em> input frequency, where the wavelet is wider, but the lower one isn't devoid of such a pattern either.</li> <li><strong>What can be done about it?</strong> Separate input frequencies more, or reduce wavelet frequency width across all scales (for Morlet wavelet, raise <code>mu</code>).</li> </ol>
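The ratio form in the Conclusion can be checked numerically. The sketch below builds a two-cisoid model of one CWT row and its derivative row directly (the amplitudes <code>c</code>, <code>d</code> are illustrative stand-ins for the wavelet's intercepts at the two bands, not values taken from ssqueezepy) and shows the phase transform oscillating around <code>f1</code> with period <code>1/|f1 - f2|</code>, with asymmetric excursions, so sine-like but not a true sine:

```python
import numpy as np

# Illustrative two-cisoid model of one CWT row (Wx) and its derivative row (dWx);
# c, d play the role of the wavelet's (skewed) intercepts at f1, f2.
f1, f2 = 440.0, 880.0
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
c, d = 1.0, 0.25                      # assumed amplitudes, d/c < 1

fs = 44000.0                          # chosen so 1/(f2 - f1) is exactly 100 samples
u = np.arange(4000) / fs              # 40 full "beat" periods
Wx  = c * np.exp(1j * w1 * u) + d * np.exp(1j * w2 * u)
dWx = 1j * (c * w1 * np.exp(1j * w1 * u) + d * w2 * np.exp(1j * w2 * u))

w = np.imag(dWx / Wx) / (2 * np.pi)   # phase transform, in Hz

# w wiggles around f1 with period 1/|f1 - f2|; its excursions above and
# below f1 differ, so the pattern is sine-like but not an exact sine.
```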
282
wavelet transform
Standard method of smoothing (amplitude envelope) of continuous wavelet transform
https://dsp.stackexchange.com/questions/83266/standard-method-of-smoothing-amplitude-envelope-of-continuous-wavelet-transfor
<p>My basic question is, &quot;What is the standard way to post-process/smooth a continuous wavelet transform result to measure the wavelet's activation at a specific frequency?&quot;</p> <p>When using wavelets for isolating a signal at a specific frequency, a continuous wavelet transform (CWT) may give the desired result. However, when using wavelets to measure the amount of energy at a specific frequency (i.e., in a way analogous to using a Fourier transform to transform the signal to the frequency domain), I think there needs to be a post-processing step after a CWT. For instance, the CWT of a sine wave with matching frequency will be another oscillating signal, rather than a high-valued DC signal. So, my question is, what is the typical way to “smooth” the result of a CWT to find the magnitude of the wavelet’s activation? I have had success using the magnitude of the Hilbert transform to extract an amplitude envelope after taking a CWT but haven’t seen others do this in the literature. An alternative could be a moving-window maximum. I would think this use case of CWT is common enough that there is a standard post-processing step.</p>
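For reference, here is a minimal numpy/scipy sketch of the Hilbert-envelope approach the question describes, using a hand-rolled REAL Morlet-style wavelet (the wavelet form and <code>w0 = 6</code> are assumptions, not taken from any CWT library). The CWT row at the matching scale oscillates at the analysis frequency; <code>abs(hilbert(...))</code> recovers a flat amplitude envelope:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)              # tone matching the analysis frequency

# One CWT row, using a REAL Morlet-style wavelet (assumed form, w0 = 6):
w0 = 6.0
s = w0 / (2 * np.pi * 50)                   # scale whose center frequency is 50 Hz
tw = np.arange(-4 * s, 4 * s, 1 / fs)
wav = np.cos(w0 * tw / s) * np.exp(-(tw / s) ** 2 / 2) / np.sqrt(s)
row = np.convolve(x, wav, mode="same") / fs # oscillates at ~50 Hz

env = np.abs(hilbert(row))                  # smooth amplitude envelope
```

Note that if one uses a complex (analytic) wavelet instead, the magnitude of the coefficients is this envelope directly, with no post-processing step; the Hilbert step is only needed when the CWT was taken with a real wavelet.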
283
wavelet transform
3D (time, scale, amplitude) plot in Continuous Wavelet Transform
https://dsp.stackexchange.com/questions/73928/3d-time-scale-amplitude-plot-in-continuous-wavelet-transform
<p>I will be extremely grateful if someone could please answer this basic question.</p> <p>How can one plot a 3D (translation, scale, amplitude) plot from the continuous wavelet transform (CWT) coefficients?</p> <p>The CWT coefficient matrix is an M x N matrix. Which of the axes corresponds to translation, which to scale, and which to magnitude?</p> <p>Thanks for your help.</p>
<p>It suffices to use the translation and scale as <span class="math-container">$(X,Y)$</span> axes, and build some elevation map from the absolute values of the CWT (or the phase, the real or imaginary parts). A simple example in Matlab is:</p> <pre><code>load mtlb
scal = abs(cwt(mtlb,'bump',Fs));
mesh(scal)
axis tight
</code></pre> <p><a href="https://i.sstatic.net/eJb0t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eJb0t.png" alt="CWT wavelet 3D plot as a surface" /></a></p>
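A rough Python equivalent may help readers outside Matlab. The sketch below hand-rolls a small complex-Morlet CWT (the wavelet form and <code>w0 = 6</code> are assumptions, not taken from any toolbox) and uses matplotlib's <code>plot_surface</code> in place of <code>mesh</code>; columns of the coefficient matrix map to translation (X axis), rows to scale (Y axis), and the elevation is the magnitude:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                       # headless backend for this sketch
import matplotlib.pyplot as plt

fs = 200.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 1.0) ** 2) / 0.1)  # 10 Hz burst at t = 1 s

w0 = 6.0
scales = np.linspace(1, 20, 40)             # scales in samples
coefs = np.empty((scales.size, t.size))
for i, s in enumerate(scales):              # naive complex-Morlet CWT (assumed form)
    k = np.arange(-int(4 * s), int(4 * s) + 1)
    wav = np.exp(1j * w0 * k / s) * np.exp(-(k / s) ** 2 / 2) / np.sqrt(s)
    coefs[i] = np.abs(np.convolve(x, wav, mode="same"))

T, S = np.meshgrid(t, scales)               # X = translation, Y = scale
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(T, S, coefs)                # Z axis = |CWT| magnitude
# plt.savefig("cwt_surface.png")            # write out, or plt.show() interactively
```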
284
wavelet transform
Inverse continuous wavelet transform off by constant factor in the y axis
https://dsp.stackexchange.com/questions/86439/inverse-continuous-wavelet-transform-off-by-constant-factor-in-the-y-axis
<p>I have implemented the continuous wavelet transform using the pycwt library (<a href="https://github.com/regeirk/pycwt/blob/master/pycwt/wavelet.py" rel="nofollow noreferrer">https://github.com/regeirk/pycwt/blob/master/pycwt/wavelet.py</a>) and its inverse using Morlet wavelets; however, upon calculating the inverse, the signal generated is accurate but off by a constant factor.</p> <p>The code to produce the ICWT in the pycwt library had some issues that I tried to fix, which improved the result significantly. This is the code I use for the ICWT:</p> <pre><code>import numpy as np
import pycwt


def _check_parameter_wavelet(wavelet):
    mothers = {'morlet': pycwt.mothers.Morlet}
    # Checks if input parameter is a string. For backwards
    # compatibility with Python 2 we check either if instance is a
    # `basestring` or a `str`.
    try:
        if isinstance(wavelet, basestring):
            return mothers[wavelet]()
    except NameError:
        if isinstance(wavelet, str):
            return mothers[wavelet]()
    # Otherwise, return itself.
    return wavelet


def icwt(W, sj, dt, dj=1/12, wavelet='morlet'):
    &quot;&quot;&quot;Inverse continuous wavelet transform.

    Parameters
    ----------
    W : numpy.ndarray
        Wavelet transform, the result of the `cwt` function.
    sj : numpy.ndarray
        Vector of scale indices as returned by the `cwt` function.
    dt : float
        Sample spacing.
    dj : float, optional
        Spacing between discrete scales as used in the `cwt` function.
        Default value is 0.25.
    wavelet : instance of Wavelet class, or string
        Mother wavelet class. Default is Morlet

    Returns
    -------
    iW : numpy.ndarray
        Inverse wavelet transform.

    Example
    -------
    &gt;&gt; mother = wavelet.Morlet()
    &gt;&gt; wave, scales, freqs, coi, fft, fftfreqs = wavelet.cwt(var, 0.25, 0.25, 0.5, 28, mother)
    &gt;&gt; iwave = wavelet.icwt(wave, scales, 0.25, 0.25, mother)

    &quot;&quot;&quot;
    wavelet = _check_parameter_wavelet(wavelet)
    a, b = W.shape
    c = sj.size
    if a == c:
        sj = (np.ones([b, 1]) * sj).transpose()
    elif b == c:
        sj = np.ones([a, 1]) * sj
    else:
        raise ValueError('Input array dimensions do not match.')
    # As of Torrence and Compo (1998), eq. (11)
    iW = (dj * np.sqrt(dt) / (wavelet.cdelta * wavelet.psi(0)) *
          (np.real(W) / np.sqrt(sj)).sum(axis=0))
    return iW
</code></pre> <p>Here I provide a minimal example where I do the CWT and reconstruct with both my ICWT and the original ICWT:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
import pycwt

# Parameters
N = 20000
dt = 1
dj = 1/4

# Create an example signal
sig = np.sin(np.linspace(0, 5*np.pi, N)) + np.random.rand(N)

# Do the CWT analysis
W, sj, freqs, coi, _, _ = pycwt.cwt(sig, dt, dj, wavelet='morlet')

# original icwt
iwave = pycwt.icwt(W, sj, dt, dj, 'morlet')

# fixed icwt
iwave_fixed = icwt(W, sj, dt, dj, 'morlet')

# Plot results
plt.plot(iwave, label='ICWT')
plt.plot(iwave_fixed, label='fixed ICWT')
plt.plot(sig, label='Original')
plt.legend()
</code></pre> <p><a href="https://i.sstatic.net/aGVwd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aGVwd.png" alt="enter image description here" /></a></p> <p>As you can see, when I compute the inverse, the resulting signal is off by some constant factor (but otherwise correct). Depending on which wavelet frequencies I use for the transforms, the resulting signal is off by a different constant factor.</p> <p>Any ideas why I am getting this scaling and how I could fix it?</p>
<p><code>.sum(axis=0)</code> is the <a href="https://dsp.stackexchange.com/a/76239/50076">one-integral inverse</a>, and depends on the forward transform. Check against the list of conditions outlined there. Here's a <a href="https://github.com/OverLordGoldDragon/ssqueezepy/blob/master/ssqueezepy/_cwt.py" rel="nofollow noreferrer">working <code>icwt</code></a>.</p> <p>Also CWT and iCWT are independent of the sampling rate (<code>1/dt</code>) unless we're normalizing for something, and that can be <a href="https://dsp.stackexchange.com/a/86182/50076">quite tricky</a>.</p>
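To illustrate the linked one-integral inverse independently of pycwt, here is a numpy sketch with a hand-rolled analytic Morlet in "bandpass" normalization (wavelet form, <code>w0 = 6</code>, and the scale grid are all assumptions). Summing the real part of the CWT rows over log-spaced scales recovers the signal up to exactly the kind of constant factor the question observes, estimated here by least squares:

```python
import numpy as np

N = 2048
n = np.arange(N)
# two exactly-periodic tones, so circular (FFT-based) CWT has no wrap artifacts
x = np.cos(2 * np.pi * 64 * n / N) + 0.5 * np.cos(2 * np.pi * 144 * n / N)

xh = np.fft.fft(x)
xi = 2 * np.pi * np.fft.fftfreq(N)              # digital angular frequencies
w0, dj = 6.0, 1 / 16
scales = (w0 / (2 * np.pi * 0.25)) * 2 ** np.arange(0, 8, dj)  # log-spaced scales

rec = np.zeros(N)
for s in scales:
    psih = np.exp(-(s * xi - w0) ** 2 / 2) * (xi > 0)   # analytic Morlet, no sqrt(s)
    rec += np.fft.ifft(xh * psih).real          # one-integral inverse: just sum rows
rec *= dj

k = (x @ rec) / (rec @ rec)                     # the "constant factor" discrepancy
```

After rescaling by <code>k</code> the reconstruction matches the input closely; the value of <code>k</code> depends on the wavelet and scale grid, which is why the asker sees a different constant for different wavelet frequencies.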
285
wavelet transform
Optimized 2D wavelet transform using FFT
https://dsp.stackexchange.com/questions/28051/optimized-2d-wavelet-transform-using-fft
<p>I'm currently aiming to optimize my fast wavelet transform (FWT) algorithm for 2D signals (images). It works as follows:</p> <ul> <li>one iteration of 1D FWT does convolution of 1D input data with a selected 1D filter (lengths from 2 to approx. 60) and downsamples the result</li> <li>algorithm for 2D transform does 1D FWT across all rows and then all columns of the input image</li> <li>it iterates if more levels are desired</li> </ul> <p>The transform is part of an interactive application that demonstrates wavelets and their use. It works fairly fast and usually responds in real time to user's interactions. But if the filter is very long some performance issues occur. I've read that using the Fast Fourier Transform (FFT) instead of convolution is effective for long enough filters.</p> <p>I've already implemented the 1D FFT, but the question is how to use it for maximum efficiency? Should I transform the input data before single 1D FWT, then perform convolution (which corresponds to multiplication in frequency domain), and then transform the data back using inverse FFT? <strike>Also, how is the multiplication done exactly? For example, the input data of length 256 and filter of length 4 are both transformed using FFT and then only the first 4 values of input data are multiplied before transforming the data back?</strike> I'm struggling a little bit with the details and would very much appreciate any insight into this.</p> <p><strong>EDIT:</strong> I've figured out that in my case I'm after circular convolution, therefore the filter should be zero padded so that its length is the same as length of the input data. But my question about efficiency still holds. How should I use FFT for FWT computation in order to be beneficial?</p>
<p>I'm still not convinced you are spending your time optimally. Probably you would get better gains doing other optimizations. Already 10-15 taps are considered very long wavelet filters for most applications that I have seen. But never mind that; I will try to answer your question.</p> <hr> <p>You can take two different approaches:</p> <ol> <li>You could do it one level at a time.</li> <li>Everything at once.</li> </ol> <p>There are drawbacks and gains from either of them. I will start with the first one.</p> <p>If you go for <strong>one level at a time</strong>:</p> <ul> <li>You zero-pad the filters to be equal in length to the rows and columns. Also make sure you pad so the filter gets in the middle.</li> <li>FFT the padded filters and the corresponding image dimension. If width $\neq$ height you would need to do 2 FFTs per filter, one for each dimension length.</li> <li>Pointwise multiplication of the FFTs of filter and image.</li> <li>Inverse FFT of the product image.</li> <li>Downsample the product image in the dimension you transformed.</li> <li>Repeat for the next dimension.</li> <li>Repeat for all levels.</li> </ul> <p>Notice that this approach will involve a set of iFFTs and FFTs at <em>each</em> level. That would be double-bad.</p> <p>The <strong>everything-at-once</strong> approach: </p> <ul> <li>Same as above, but instead of taking the iFFT, decimating, and FFTing the decimated signal again at each level, you just multiply several times with the filter and do iFFT and decimate 2, 4, 8 times et cetera. Then you only need to do one set of FFTs (at the start) but still a full set of iFFTs for each level.</li> </ul> <p>Also the multiplications while in the Fourier domain will be more computationally expensive than ordinary multiplications as the Fourier components will be $\in \mathbb{C}$ but filter and image coefficients $\in \mathbb{R}$ ( I guess ) and you would also likely need higher numerical precision to get the same error.</p>
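As a sanity check on the circular-convolution route of the question's EDIT, here is a numpy sketch of one 1D analysis step (the random taps are illustrative stand-ins; a real implementation would use actual wavelet filters). Zero-padding the filter to the signal length happens implicitly via <code>fft(h, n=len(x))</code>; multiply, invert, then decimate, and the result is identical to direct circular convolution plus decimation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                    # one row/column of the image
h = rng.standard_normal(8)                      # stand-in analysis filter taps

# Direct circular convolution, then dyadic downsampling
direct = np.array([sum(h[k] * x[(m - k) % len(x)] for k in range(len(h)))
                   for m in range(len(x))])[::2]

# FFT route: fft(h, n=len(x)) zero-pads the filter to the signal length
fft_way = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n=len(x))).real[::2]
```

This also answers the struck-out sub-question: you multiply the full length-256 spectra pointwise (with the filter zero-padded to length 256 first), never just the first 4 values.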
286
wavelet transform
How to Map CWT to Synchrosqueezed wavelet transform?
https://dsp.stackexchange.com/questions/31309/how-to-map-cwt-to-synchrosqueezed-wavelet-transform
<p>I don't understand the mapping from the time-scale plane to the time-frequency plane in the synchrosqueezed wavelet transform, i.e. $(3)$. You can find the paper <a href="https://services.math.duke.edu/~jianfeng/paper/synsquez.pdf" rel="nofollow">here</a>.</p> <p>For a given signal $x(t)$ and mother wavelet $\psi$, the continuous wavelet transform is:</p> <p>$$W(u,s)= \int_{-\infty}^{+\infty} x(t)\psi_{u,s}^*(t)dt\tag{1}\\$$</p> <p>From $(1)$, a preliminary frequency $\omega (u, s)$ is obtained from the oscillatory behavior of $W (u, s)$ in $u$:</p> <p>$$\omega(u, s) = − i\left(W(u, s)\right)^{−1} \frac{\partial }{\partial u} W(u, s)\tag{2}$$</p> <p>$W(u,s)$ is then transformed from the time-scale plane $(1)$ to the time-frequency plane $(3)$. Each value of $W(u, s)$ is reassigned to $(u, \omega_{l} )$, where $\omega_{l}$ is the frequency that is the closest to the preliminary frequency of the original (discrete) point $\omega(u, s)$.</p> <p>$$T(u, \omega_{l}) = \left(\Delta\omega\right)^{-1} \sum_{sk:|\omega(u, s_{k} )−\omega_{l}|\leq\Delta\omega/2}^{} {W(u, s_{k})s_{k}^{−3/2}\Delta s}\tag{3}$$</p>
<p>Let me explain the intuition briefly. The authors of the paper you've cited assume that the signal $x(t)$ can be written in the form \begin{align*} x(t) &amp;= \sum_{k=1}^K a_k(t) \exp(2\pi\mathrm{i} \phi_k(t)), \end{align*} where the $a_k$ denote <em>instantaneous amplitudes</em>, the $\phi_k$ denote <em>instantaneous phases</em> and the $\phi'_k$ denote <em>instantaneous frequencies</em>. The continuous wavelet transform enables the visualization of the $\phi'_k$ in the time-scale plane, but the visualization is necessarily blurry due to the Fourier uncertainty principle. The idea is that the peaks of the continuous wavelet transform can give you an idea of what the $\phi'_k$ values are, but the energy is spread out in the time-scale plane, so the exact values of $\phi'_k$ may not be easy to obtain. The goal of the synchrosqueezing transform (SST) is to partially undo the blurring by a <em>frequency reassignment</em>, which can be explained as follows. </p> <p>First of all, let's note that each scale $s$ corresponds to a natural frequency $\xi$ satisfying the relation $s = c/\xi$, where $c$ is the <em>center frequency</em> of the mother wavelet $\psi$. Now, suppose the time $u$ is fixed. If $\xi = c/s$ is close but not exactly equal to an instantaneous frequency $\phi'_k(u)$, then the coefficient $W(u,s)$ will have some nonzero energy (i.e., $|W(u,s)|^2 &gt; 0$). The idea of synchrosqueezing is to move all this energy away from the frequency $\xi$, and <em>reassign its frequency location</em> closer to the instantaneous frequency $\phi'_k(u)$. Then, we can arrive at a time-frequency representation where the energy is more closely concentrated around the instantaneous frequency curves.</p> <p>So, for each scale $s$, we compute the <em>frequency reassignment</em> $\omega(u,s)$ of the wavelet transform coefficient $W(u,s)$ by the formula you cited above. 
As Daubechies, Wu, and Lu discovered in the paper you've linked, it turns out that $\omega(u,s)$ is often a good approximation to the instantaneous frequency curve $\phi'_k(u)$ for values of $(u,s)$ where $\xi = c/s$ is sufficiently close to $\phi'_k(u)$. (In fact, when $\phi'_k(u)$ is a <em>constant function</em>, $\omega(u,s)$ <em>exactly equals</em> $\phi'_k(u)$ for values of $s$ where $\xi = c/s$ is sufficiently close to $\phi'_k(u)$!) </p> <p>Then, the computation of the SST in the continuous setting is as follows. First, fix a time of interest $u$. Next, compute the frequency reassignment $\omega(u,s)$ for all scale values $s$. Then, for each frequency of interest $\eta$, we compute the SST $T(u,\eta)$ by adding up <em>all values $W(u,s)$ where the reassigned frequency $\omega(u,s)$ equals $\eta$.</em> This means</p> <p>\begin{align*} T(u,\eta) &amp;= \int_\mathbb{R} W(u,s) \delta(\eta - \omega(u,s)) s^{-3/2} ds, \end{align*}</p> <p>where $\delta$ is the Dirac delta. (To ensure the convergence result for SST, the authors use a formulation for $T$ which includes an approximation to $\delta$ rather than $\delta$ itself, but in computational practice I've never found this approximation to be necessary.) The multiplication by $s^{-3/2}$ is necessary for reconstruction purposes (which you can read more about in the paper).</p> <p>In practice, one is limited to the discrete setting where we have only finitely many possible frequency bins $\eta_\ell$. In this case, for a fixed frequency $\eta_{\ell_0}$, one finds all the values of $\omega(u, s)$ which are closer to $\eta_{\ell_0}$ than to any other frequency bin $\eta_\ell$. We can more explicitly formulate the SST in the discrete setting as follows.</p> <p>For simplicity's sake let's assume the possible frequency values $\eta_\ell$ are uniformly spaced by a distance $\Delta \omega$. 
So, in the discrete setting, the formula above becomes \begin{align*} T(u, \eta_\ell) &amp;= \sum_{s: |\omega(u,s) - \eta_\ell| &lt; \Delta\omega/2} W(u,s) s^{-3/2}, \end{align*} where I haven't bothered to give an explicit notation that shows that the values $u$ and $s$ are discrete.</p> <p>By doing this reassignment, you end up with a representation $T$ which is sparser than $W$, and hopefully very sharply concentrated about the curves $\phi'_k$. (In practice, for visualization purposes, it's better to reassign not the original coefficient $W(u,s)$ but the magnitude-squared $|W(u,s)|^2$, or even just the constant number $1$. This is because the coefficients $W(u,s)$ are complex-valued, and summing these coefficients may not actually leave you with a very large energy. However, if you additionally want to use the reconstruction formula given in the paper to compute the $x_k$ from the SST, then you need to sum the $W(u,s)$ in order for that formula to work.)</p> <p>By the way, I am leaving out the fact that we generally throw out coefficients $W(u,s)$ that do not fall above a certain threshold. This is because the computation of $\omega(u,s)$ is not necessarily accurate for small $W(u,s)$.</p>
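The discrete recipe can be sketched end-to-end in numpy with a hand-rolled analytic Morlet CWT (normalizations simplified; the $s^{-3/2}$ factor is dropped since we reassign magnitudes for visualization, as discussed above, and small coefficients are thresholded out as also discussed):

```python
import numpy as np

N = 1024
f0 = 64 / N                                     # tone frequency, cycles/sample
x = np.cos(2 * np.pi * f0 * np.arange(N))

xh = np.fft.fft(x)
xi = 2 * np.pi * np.fft.fftfreq(N)
w0 = 6.0
scales = 2.0 ** np.arange(2, 7, 1 / 8)          # log-spaced scales (assumed grid)

W = np.empty((scales.size, N), dtype=complex)
dW = np.empty_like(W)
for i, s in enumerate(scales):
    psih = np.exp(-(s * xi - w0) ** 2 / 2) * (xi > 0)   # analytic Morlet
    W[i] = np.fft.ifft(xh * psih)
    dW[i] = np.fft.ifft(xh * psih * 1j * xi)    # d/du computed in frequency

with np.errstate(divide="ignore", invalid="ignore"):
    w = np.imag(dW / W) / (2 * np.pi)           # reassignment frequency omega(u, s)

# Squeeze: per time step, add |W| into the frequency bin closest to w
fbins = np.linspace(0, 0.5, 128)
Tx = np.zeros((fbins.size, N))
r, c = np.where(np.abs(W) > 0.01 * np.abs(W).max())   # threshold tiny coefficients
idx = np.clip(np.searchsorted(fbins, w[r, c]), 0, fbins.size - 1)
np.add.at(Tx, (idx, c), np.abs(W[r, c]))
```

For this single tone the energy that the CWT smears over many scales collapses into the one frequency row of <code>Tx</code> nearest <code>f0</code>, which is the sharpening effect described above.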
287
wavelet transform
Can a wavelet transform give time dependent phase of sinusoids in signal
https://dsp.stackexchange.com/questions/18476/can-a-wavelet-transform-give-time-dependent-phase-of-sinusoids-in-signal
<p>I have a signal which contains sinusoidal components that oscillate at different frequencies. I think the phase of the sinusoids is changing with time. I could do a Fourier transform on small chunks of the signal and the phase of the sinusoids from that, but is there a better way to do this with a wavelet transform? I'm new to the concept of wavelets. From what I understand, it's good at doing what an FFT does but with time resolution. Could you get the phase of a sinusoid as a function of time from a wavelet transform ?</p> <p><img src="https://i.sstatic.net/1NC41.jpg" alt="enter image description here"></p>
<p>It is indeed possible (up to a period), with complex "continuous" wavelet transforms, provided your mother wavelet oscillates sufficiently. You can look at the scalogram modulus and its phase at the appropriate scales, as illustrated by <a href="http://fr.mathworks.com/help/wavelet/examples/wavelet-coherence.html?refresh=true" rel="nofollow noreferrer">Matlab: Wavelet coherence</a>:</p> <p><a href="https://i.sstatic.net/6D9n4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6D9n4.png" alt="Wavelet coherence"></a></p> <p>Alternatively, you can convert your signal into an analytic one (using the Hilbert transform) and perform a real wavelet analysis. </p> <p>Finally, you might dig into other linear (short-term Fourier transform) or bilinear time-frequency (Wigner-Ville, Choi-Williams) or time-scale tools.</p>
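A minimal numpy sketch of the first suggestion (a hand-rolled complex Morlet row, <code>w0 = 6</code> assumed, no toolbox): take the CWT row at the scale matching the tone, and <code>np.angle</code> of the complex coefficients gives the phase as a function of time, which unwraps to a line of slope <code>2π f0 / fs</code> through the initial phase:

```python
import numpy as np

fs, f0 = 1000.0, 40.0
n = np.arange(2000)
x = np.cos(2 * np.pi * f0 * n / fs + 1.2)     # tone with initial phase 1.2 rad

w0 = 6.0
s = w0 * fs / (2 * np.pi * f0)                # scale whose center frequency is f0
k = np.arange(-int(4 * s), int(4 * s) + 1)
wav = np.exp(1j * w0 * k / s) * np.exp(-(k / s) ** 2 / 2)   # complex Morlet (assumed form)
row = np.convolve(x, np.conj(wav[::-1]), mode="same")       # one CWT row

phase = np.unwrap(np.angle(row))              # time-dependent phase of the 40 Hz component
```

If the tone's phase drifts over time, <code>phase</code> tracks that drift directly, which is exactly the quantity the question is after.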
288
wavelet transform
use wavelet transform to extract frequencies from given signal
https://dsp.stackexchange.com/questions/15178/use-wavelet-transform-to-extract-frequencies-from-given-signal
<p>let us suppose that we have following signal values,which consists by deterministic components and random noise(white noise)</p> <pre><code>56.69 75.24 13.77 8.56 -12.88 -65.34 -45.33 -48.78 -22.23 54.12 83.77 11.84 2.31 39.59 -32.09 -88.86 5.45 50.24 -37.39 -35.69 38.62 7.06 -30.01 22.36 60.71 30.96 5.90 -38.91 -58.15 -40.87 -13.18 -14.77 35.36 103.24 39.04 -50.76 6.98 1.23 -87.46 -60.86 65.08 23.93 -28.36 2.42 31.67 -5.22 4.02 37.50 12.36 5.37 -18.83 -68.70 -38.11 35.47 7.47 11.64 103.89 88.26 -62.41 -62.44 10.09 -33.92 -72.48 17.83 74.35 7.56 -5.35 22.56 -2.11 -2.82 2.59 -21.33 -0.52 -4.11 -50.68 -66.68 28.14 62.10 6.73 32.39 93.62 -18.56 -112.15 -38.18 3.33 -44.78 13.79 76.52 39.92 9.55 -7.09 -28.72 -1.59 19.01 -26.25 -22.61 32.54 0.06 -80.17 -2.02 103.68 31.50 -9.29 44.79 6.10 -93.30 -87.67 2.06 5.53 20.94 67.28 48.24 16.43 4.28 -49.05 -43.92 14.48 2.02 -56.29 16.29 54.09 -42.38 -33.91 67.16 64.26 -34.38 1.31 17.57 -69.49 -86.43 -14.43 20.61 38.08 66.31 38.91 9.62 -4.38 -45.47 -91.23 -9.17 52.92 -21.17 -31.51 69.74 22.55 -62.65 22.34 72.59 -6.34 -49.96 0.06 -38.30 -55.57 -22.15 28.14 61.12 81.82 37.94 -32.41 -14.05 -16.05 -90.12 -46.52 75.39 50.48 -36.08 21.54 64.57 -32.83 -47.71 47.82 35.80 -31.32 -37.71 -30.45 -36.51 6.90 30.17 25.81 69.67 50.19 -55.42 -76.26 -11.97 -47.71 -80.69 52.76 91.17 -5.08 -12.75 49.10 5.44 -46.98 -0.40 22.20 -12.89 -17.56 -24.62 -18.95 22.94 53.27 8.67 35.09 67.76 -27.38 -111.75 -37.70 14.80 -56.16 2.87 105.52 59.46 -42.36 -7.17 12.65 -32.87 -38.61 -0.33 1.07 0.12 6.79 -41.04 3.19 76.28 18.78 -26.27 47.46 16.52 -116.17 -82.44 25.21 8.68 2.55 70.61 86.29 -10.63 -44.12 -19.48 -34.54 -31.95 8.52 -10.48 11.89 40.45 -10.66 -34.27 63.24 42.83 -32.87 -18.44 37.47 -64.48 -101.98 7.93 59.48 22.63 43.46 74.91 2.31 -49.18 -56.15 -55.96 -11.68 37.66 3.36 13.40 45.50 14.21 -57.98 6.29 74.98 -7.25 -70.83 12.29 7.90 -76.29 -24.42 69.80 60.11 28.97 39.65 -11.23 -59.30 -60.80 -63.71 -16.72 66.48 59.98 -17.27 36.01 55.04 -38.70 -59.95 48.63 
</code></pre> <p>I know that the sampling frequency is 100. What are the basic steps for using wavelets to extract frequencies and phases? I know there is the <code>cwt</code> function for computing the continuous wavelet transform, which tries to determine frequencies from the coefficients. If I know the sampling frequency but not the frequency components (only that they must be less than fs/2, for the Nyquist criterion to hold), how should I choose the scales? I have tried the following example:</p> <pre><code>&gt;&gt; B=xlsread('data_generations1','A1','g8:g301');
&gt;&gt; scales=1:100;
&gt;&gt; coeff=cwt(B,1:100,'db2','plot');
</code></pre> <p>and got the following picture</p> <p><img src="https://i.sstatic.net/CM6PT.png" alt="enter image description here"></p> <p>Now how can I determine which frequencies and phases are in the signal, and which wavelet basis should I choose? Please help me with this problem.</p>
<p>To answer inside the framework you set:</p> <p>In the CWT domain the frequencies will stay the same; the wavelet behaves just like any other filter. It will only filter out frequencies, not change them. You can see banded regions in the image where the peaks are. Since you picked a wavelet that does not have a very steep frequency response, "beating" is visible and appears to be about as strong as the signal itself.</p> <p>If you choose a wavelet with better frequency resolution (a longer one), the signals will stand out better.</p> <p>If you try to do a DWT, you will get aliases, and the signals are not far enough apart to separate them completely.</p>
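One missing piece in the question is the scale-to-frequency map. For a wavelet with center frequency <code>fc</code> in cycles per sample (≈ <code>w0/2π</code> ≈ 0.955 for a Morlet with <code>w0 = 6</code>, used below as an illustrative stand-in; for 'db2' you would query your toolbox, e.g. Matlab's <code>centfrq</code> or pywt's <code>central_frequency</code>), the pseudo-frequency of scale <code>s</code> at sampling rate <code>fs</code> is <code>f = fc * fs / s</code>:

```python
import numpy as np

fs = 100.0                       # the question's sampling frequency, Hz
fc = 6.0 / (2 * np.pi)           # Morlet (w0 = 6) center frequency, cycles/sample

scales = np.arange(1, 101)       # the scales=1:100 used in the question
freqs = fc * fs / scales         # pseudo-frequency (Hz) of each CWT row

def scale_for(f_hz):
    """Inverse map: scale targeting a given frequency in Hz."""
    return fc * fs / f_hz
```

Note also that phase extraction needs a complex wavelet; 'db2' is real-valued, so its coefficients carry no phase directly.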
289
wavelet transform
Discrete Wavelet Transform (DWT) Filter Bank
https://dsp.stackexchange.com/questions/58209/discrete-wavelet-transform-dwt-filter-bank
<p>I have hit a stumbling block in my thesis writing.</p> <p>Do we use the same filter pair at every level when implementing the DWT filter bank with downsampling of the filter output, or do the filters also change from level to level? On the Wiki page and in the book "Biosignal and Medical Image Processing, 3rd edition" by J. L. Semmlow and B. Griffel, the filters are labelled the same at each decomposition level in the filter bank schemes: <a href="https://i.sstatic.net/gQRzM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gQRzM.png" alt="Wavelet filter bank from the book"></a> <a href="https://i.sstatic.net/81mai.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/81mai.png" alt="enter image description here"></a>, yet the description in Wikipedia states: "The filter output of the low-pass filter <strong><em>g</em></strong> in the diagram above is then subsampled by 2 and further processed by passing it again through a <strong>NEW</strong> low-pass filter <strong><em>g</em></strong> and a high-pass filter <strong><em>h</em></strong> with half the cut-off frequency of the previous one.."(c)</p> <p>This caused great confusion for me. Logically, as I had been thinking, in the DWT we "travel" in the frequency domain by downsampling the signal and leaving the filters untouched (while e.g. in the Stationary Wavelet Transform this is achieved by upsampling the filters themselves without modifying the signal). The word "<strong>new</strong>" on Wikipedia frustrated me a lot.</p> <p>Please help me to resolve this issue.</p> <p>With respect,</p> <p>-Andrey.</p>
<p>In the DWT scheme, whether it is the classical <span class="math-container">$2$</span>-band or the <span class="math-container">$M$</span>-band wavelet setting, the very same analysis filter bank (lowpass/highpass + subsampling) is used at each level. Under this condition, one can derive the <a href="https://en.wikipedia.org/wiki/Cascade_algorithm" rel="nofollow noreferrer">cascade algorithm</a> that provides the spectrum of the scaling function: <span class="math-container">$$\Phi(\omega)= \prod_{k=1}^\infty \frac {1} {\sqrt 2} H\left( \frac {\omega} {2^k}\right) \Phi^{(\infty)}(0)$$</span> where iterated half-cut-off frequencies are apparent.</p> <p>However, this is especially important when one wants to address the properties of the underlying wavelet, or cascade many wavelet levels. In practice, you can easily choose a different perfect-reconstruction filter bank at each level, for several reasons:</p> <ul> <li>actual filters down the scales tend to become larger, sometimes too much with respect to the signal features, and the signal length. When a convolutive filter becomes longer than the data itself, you run into trouble, especially with regard to boundary handling (symmetry, padding). The consequence can be high detail coefficients caused mostly by wrap-around border artifacts.</li> <li>filtered/down-sampled data don't have the same characteristics at each scale, so different filters can do a better job</li> <li>do we really need multiscale theory after all on 3 levels? In practice, it can be useful to use longer filters on the first levels, for better frequency separation, and shorter ones on lower levels, for the multiscale effect.</li> </ul> <p>This is not mainstream, as one has to choose filters wisely, sometimes without many hints. However, there have been works with DWT or GenLOT filters on the first scales and DCT or Haar on the lower resolutions. 
<a href="https://en.wikipedia.org/wiki/JPEG_XR" rel="nofollow noreferrer">JPEG XR</a> standard somehow behaves in that philosophy. One of the papers I have met on this direction is: <a href="https://doi.org/10.1016/j.acha.2009.02.005" rel="nofollow noreferrer">On the frame bounds of iterated filter banks</a>, 2009.</p>
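The "same analysis filter bank at each level" point can be sketched numerically. Below is a minimal iteration using the Haar pair as an assumed example (the filter pair, the helper name `dwt_level`, and the signal are all illustrative, not taken from the answer): the same `(g, h)` pair is reused at every level, and the effective halving of the cut-off relative to the original signal comes purely from operating on the downsampled lowpass output.

```python
import numpy as np

# Haar analysis filters (a hypothetical minimal choice; any orthogonal
# wavelet's filter pair would do)
g = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass
h = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass

def dwt_level(x, g, h):
    """One analysis level: filter, then downsample by 2."""
    lo = np.convolve(x, g)[1::2]
    hi = np.convolve(x, h)[1::2]
    return lo, hi

# Iterate: the SAME filter pair is applied to the downsampled lowpass
# output at every level -- no filter is redesigned between levels.
x = np.random.randn(64)
approx, details = x, []
for level in range(3):
    approx, d = dwt_level(approx, g, h)
    details.append(d)

print([len(d) for d in details], len(approx))  # [32, 16, 8] 8
```

Since the Haar pair is orthogonal, the total energy of the coefficients equals the energy of the input at every depth, which is one quick sanity check that the same bank really is being reused losslessly.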
290
wavelet transform
Trying to understand WAVELET TRANSFORM Frequency-Time diagram
https://dsp.stackexchange.com/questions/9319/trying-to-understand-wavelet-transform-frequncy-time-diagram
<p>I am reading </p> <p><a href="http://users.rowan.edu/~polikar/WAVELETS/WTpart3.html" rel="nofollow noreferrer">The Wavelet Tutorial</a></p> <p>Part III MULTIRESOLUTION ANALYSIS &amp; THE CONTINUOUS WAVELET TRANSFORM</p> <p>by Robi Polikar </p> <p>The author explains the following figure and says:</p> <blockquote> <p>Note that boxes have a certain non-zero area, which implies that the value of a particular point in the time-frequency plane cannot be known. All the points in the time-frequency plane that falls into a box is represented by one value of the WT.</p> </blockquote> <p>I am unable to interpret this figure, and I do not see why the value of a particular point in the time-frequency plane cannot be known. I can always intersect two lines, one passing through a particular value of t and another through f. The intersection point gives me the exact location of a frequency at a particular time (and vice versa). I am not able to understand what this diagram is all about and what its significance is. </p> <p><img src="https://i.sstatic.net/Eegv5.gif" alt="enter image description here"></p>
<p>I believe by the WT, you are talking about the discrete wavelet transform, DWT.</p> <p>This can be thought of as a subsampling of the continuous wavelet transform, CWT. In the case of the DWT, we pick frequencies of the form $2^{j-1}$ (for $j=1,2,\dots$) and then pick times separated by multiples of $2^j$.</p> <p>You can see this in the diagram: as frequency increases, the boxes double in height and halve in width.</p> <p>I believe what the author means by the statement</p> <pre><code>Note that boxes have a certain non-zero area, which implies that the value of a particular point in the time-frequency plane cannot be known. </code></pre> <p>is that the DWT is an 'averaging' of the CWT within the boxes, i.e. each box is represented by only one value, the average of the CWT values within it. Hence, as the boxes have non-zero area, it is impossible to know any of the exact values making up this average, so it is impossible to know the exact value of any point in the time-frequency plane.</p>
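The box geometry described above can be tabulated directly. In this sketch (the signal length `N` and normalized rate `fs` are illustrative assumptions), each level halves the number of coefficients, doubles the time width of a box, and halves its frequency height, so the time-bandwidth area of every box stays constant:

```python
import numpy as np

fs, N = 1.0, 512   # normalized sample rate and signal length (assumed values)

rows = []
for j in range(1, 6):
    n_coeffs = N // 2**j            # number of DWT coefficients at level j
    t_width  = 2**j / fs            # each box: this wide in time...
    f_height = fs / 2**(j + 1)      # ...and this tall in frequency
    rows.append((j, n_coeffs, t_width, f_height, t_width * f_height))
    print(rows[-1])
```

The last column (the area) comes out the same at every level, which is the diagram's way of drawing the uncertainty trade-off: you can reshape a box but not shrink it.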
291
wavelet transform
Wavelet transform and FFT using to extract feature power bands with EEG signals
https://dsp.stackexchange.com/questions/49272/wavelet-transform-and-fft-using-to-extract-feature-power-bands-with-eeg-signals
<p>I am using 5 channels [fz, cz, c3, c4, pz] to detect driver drowsiness. My first question: what is the right input to the wavelet transform for extracting the power-band features (theta, alpha, gamma, beta): all 5 channels, a single channel, or something else? My second question: is it valid to classify the data based only on the theta band obtained from the wavelet transform? </p>
<h3>Alpha rhythm</h3> <p>Generally, alpha-band oscillatory activations (8-10 Hz) relate to relaxation and are typically accompanied by eye closure. This is the prime marker used to detect drowsiness, but certainly not the only one (see alpha dropout, NREM1, eye-rolling EEG artefacts).</p> <h3>Event detection</h3> <p>To detect alpha, electrodes from occipital or adjacent regions are used. Due to the low SNR of EEG signals and its variance across EEG systems, you may want to try several detection methods before deciding on one for your classifier.</p>
292
wavelet transform
Phase Information at Higher Frequencies in Continuous Wavelet Transform
https://dsp.stackexchange.com/questions/38055/phase-information-at-higher-frequencies-in-continuous-wavelet-transform
<p>I'm using the code I found <a href="https://dsp.stackexchange.com/a/12880/26474">here</a> to compute the wavelet transform of a sine wave with a constant frequency. </p> <pre><code>#!/usr/bin/python2
from pylab import *
import matplotlib.pyplot as plt
import numpy as np
import scipy

x = np.linspace(0, 10, 65536)
y = np.sin(2 * pi * 60 * x)

N = len(y)
Y = np.fft.fft(y)

J = 128
scales = np.asarray([2 ** (i * 0.1) for i in range(J)])

morletft = np.zeros((J, N))
for i in range(J):
    morletft[i][:N/2] = sqrt(2 * pi * scales[i]) * exp(-(scales[i] * 2 * pi * scipy.array(range(N/2)) / N - 2) ** 2 / 2.0)

U = empty((J, N), dtype=complex128)
for i in range(J):
    U[i] = np.fft.ifft(Y * morletft[i])

plt.imshow(abs(U[:, scipy.arange(0, N, 1)]), interpolation='none', aspect='auto')
plt.title("Sine Wave")
plt.xlabel("Translation")
plt.ylabel("Scale")
plt.show()
</code></pre> <p>The result looks alright. <a href="https://i.sstatic.net/CdfLh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CdfLh.png" alt="alright"></a></p> <p>What I'm really interested in looking at is the phase. I'm extracting phase from the above code using: </p> <pre><code>imshow(np.unwrap(np.angle(U)), aspect='auto')
</code></pre> <p>and it looks like this: <a href="https://i.sstatic.net/d2VR3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d2VR3.png" alt=""></a></p> <p>Why is the phase information present at frequencies higher than (or scales lower than, NOTE: inverted y-axis) that of the signal?</p>
<p>As your magnitude plot shows, there is energy at frequencies above that of your original signal. This is because each wavelet scale covers a band of frequencies rather than a single frequency, so the transform can show energy at scales neighbouring that of the signal.</p> <p>Phase only exists where there is energy; the two are directly connected. If you mask the phase plot to the regions where the magnitude is non-zero, you will see that the meaningful phase is confined to those regions.</p>
293
wavelet transform
Spectrogram: Short-time Fourier Transform vs Time-Frequency Wavelet Transform
https://dsp.stackexchange.com/questions/95008/spectogram-short-time-fourier-transform-vs-time-frequency-wavelet-transform
<p>So far, I have used the STFT (a.k.a. windowed Fourier transform) to convert signals to the time-frequency domain.</p> <p>In general, the parameter I can tune is the window size, where a smaller window gives lower frequency resolution but higher time resolution, and vice versa. The resulting spectrogram has a uniform resolution everywhere.</p> <p>I have read that the wavelet transform can also be used to create a spectrogram; the difference is that the trade-off between frequency and time resolution is not uniform. In other words, lower frequencies get higher frequency resolution but lower time resolution, while higher frequencies get higher time resolution but lower frequency resolution. What is the intuition behind this?</p> <p>My goal is to convert an arbitrary signal into a spectrogram.</p> <p>Suppose I have a signal 1000 samples long, and I choose to limit the output frequency range to a band from a to b. I want the resulting spectrogram to balance frequency information against time information.</p> <p>So, what is the difference between the STFT and the time-frequency WT?</p>
294
wavelet transform
In continuous wavelet transform, all wavelets must have the same energy regardless of their scales; why?
https://dsp.stackexchange.com/questions/87188/in-continous-wavelet-transform-all-wavelets-must-have-the-same-energy-regardles
<p>In the Continuous Wavelet Transform (CWT), to achieve energy normalization and to ensure that all wavelets have the same energy regardless of their scale, each scaled wavelet is divided by the square root of the scale $s$. Why does this normalization work?</p>
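A quick numerical check of this claim, using a Morlet-style wavelet as an assumed example (the wavelet, grid, and scale values are illustrative): with the substitution $u = t/s$, the energy integral of $\psi(t/s)/\sqrt{s}$ reduces to that of the mother wavelet, so every scale carries the same energy.

```python
import numpy as np

t = np.linspace(-50, 50, 20001)
dt = t[1] - t[0]

def psi(t):
    # Morlet-style mother wavelet (an illustrative choice; any admissible
    # wavelet behaves the same way under this normalization)
    return np.exp(-t**2 / 2) * np.cos(5 * t)

# With the 1/sqrt(s) factor, the discretized energy integral comes out
# (numerically) identical at every scale
energies = []
for s in [0.5, 1.0, 2.0, 4.0]:
    psi_s = psi(t / s) / np.sqrt(s)
    energies.append(np.sum(np.abs(psi_s)**2) * dt)
print(energies)  # four (nearly) equal values
```

Dropping the `1/np.sqrt(s)` factor instead scales the energy linearly with `s`, which is exactly the bias the normalization is there to prevent.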
295
wavelet transform
Log vs. linear frequency scales of Fourier and wavelet transforms
https://dsp.stackexchange.com/questions/41259/log-vs-linear-frequency-scales-of-fourier-and-wavelet-transforms
<p>I'm trying to understand the difference between the output of a Fourier transform and a wavelet transform. A Fourier transform is done via the following function:</p> <p>$$\hat{f}(\xi) = \int^\infty_{-\infty}\ f(t)\ e^{-2\pi i t \xi}\ dt$$</p> <p>Whereas a wavelet transform is going to use this:</p> <p>$$F(a,b) = \int^\infty_{-\infty}\ f(x)\psi^*_{a,b}(x)\ dx$$</p> <p>Now, spectrograms use a windowed Fourier transform. When looking at a signal over time, you can get a graph showing you which frequencies were present at any given time. The frequencies are scaled linearly. However, scalograms use a wavelet transform to obtain the same information. I've frequently seen the output scaled logarithmically (in frequency).</p> <p>Is there something about wavelets that makes their output fundamentally logarithmic? I'm having a hard time seeing it in the above formulas. It seems like setting $\xi$, or $a$ &amp; $b$, gives you the frequency you want, and you could calculate results for any frequency you desire.</p>
<p>An FFT can be considered a filter bank. The bandwidth of each filter is inversely proportional to the length of the FFT. The width of each filter sets the spacing such that there is neither too much overlap nor any gap between filters. In an STFT spectrogram, the window width is fixed for the entire FFT, thus the filter spacing is fixed, which turns out to be a linear spacing.</p> <p>With wavelets, the length of each wavelet is individually adjustable and can be shorter for the higher frequencies; those wavelets thus have a wider bandwidth, which allows them to be spaced further apart in frequency without leaving gaps in the frequency response, thus allowing logarithmic spacing.</p>
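The two spacings can be put side by side in a short sketch (`n_fft`, `fs`, and the wavelet base frequency `f0` are assumed values): FFT bins differ by a constant gap, while dyadic wavelet center frequencies differ by a constant ratio, i.e. they are logarithmically spaced.

```python
import numpy as np

fs, n_fft = 1000.0, 64

# STFT/FFT filter bank: analysis frequencies are linearly spaced, with a
# constant bin spacing of fs/n_fft set by the (fixed) window length
fft_freqs = np.fft.rfftfreq(n_fft, d=1/fs)
fft_spacing = np.diff(fft_freqs)

# Wavelet filter bank: dyadic scales give one band per octave, so the
# center frequencies have a constant RATIO instead of a constant gap
f0 = 10.0                                # assumed base center frequency
wav_freqs = f0 * 2.0 ** np.arange(6)     # 10, 20, 40, 80, 160, 320 Hz
ratios = wav_freqs[1:] / wav_freqs[:-1]

print(fft_spacing[0], ratios)
```

Plotted on a log-frequency axis, the wavelet centers land at equal intervals while the FFT bins bunch up toward the top, which is why scalograms are naturally drawn with a logarithmic frequency scale.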
296
wavelet transform
Continuous Wavelet Transform time vector in python
https://dsp.stackexchange.com/questions/40502/continuous-wavelet-transform-time-vector-in-python
<p>I have a signal sampled at 128 Hz. I used to extract features with the spectrogram function, and I decided to upgrade my algorithm, so I'm trying to analyze the signal using the Continuous Wavelet Transform (pywt.cwt) in Python. This function has only 2 outputs, coefficients and frequencies, while spectrogram also returns the time vector. As the theory suggests, the resulting frequency vector has varying spacing between adjacent samples (high density at low frequencies, and vice versa). According to the uncertainty principle, the time vector also has to change its density inversely to the frequency vector, doesn't it? My understanding is based on the famous diagram in the attached picture. <a href="https://i.sstatic.net/YmBgG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YmBgG.png" alt="enter image description here"></a> If I'm not mistaken, the time vector (which is not returned) is just the original time vector of the signal, meaning the spacing between adjacent samples is constant. Please help me understand this mismatch.</p>
<p>Similar question here <a href="https://dsp.stackexchange.com/q/651/29">Which time-frequency coefficients does the Wavelet transform compute?</a></p> <p>The picture you've shown is used for DWT such as <a href="http://pywavelets.readthedocs.io/en/latest/ref/dwt-discrete-wavelet-transform.html#multilevel-decomposition-using-wavedec" rel="nofollow noreferrer"><code>pywt.wavedec</code></a>, not CWT.</p> <p>CWT is a continuous function; it exists at all points in the time-scale plane.</p> <p><a href="http://pywavelets.readthedocs.io/en/latest/ref/cwt.html" rel="nofollow noreferrer"><code>pywt.cwt</code></a> produces a 2D array with constant numbers of columns and rows, so it's a regular sampling of the CWT, at whatever scales you specify, for every sample of the input:</p> <pre><code>shape(y) Out[5]: (512,) shape(freqs) Out[6]: (128,) shape(coef) Out[7]: (128, 512) </code></pre> <p>I think the DWT's coefficients contain just enough information to reproduce the original exactly with an inverse DWT, while the CWT output has a lot of redundant information and is maybe missing some information.</p>
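To make the "regular sampling" point concrete, here is a toy CWT built from direct convolutions (a simplified Morlet kernel, not pywt's exact one; the signal, scales, and support grid are arbitrary illustrative choices). The output is one row per scale and one column per original sample, so the time axis of the CWT is just the signal's own time vector, unchanged across scales:

```python
import numpy as np

fs = 128.0
t = np.arange(0, 4, 1/fs)              # the signal's own time vector (512 samples)
x = np.sin(2*np.pi*10*t)

def morlet(tk, s, w0=6.0):
    # simplified complex Morlet at scale s (illustrative, not pywt's kernel)
    u = tk / s
    return np.exp(1j*w0*u - u**2/2) / np.sqrt(s)

scales = np.geomspace(0.01, 0.5, 32)   # 32 arbitrary scales
tk = np.arange(-2, 2, 1/fs)            # wavelet support grid

# One convolution per scale, evaluated at EVERY input sample:
W = np.array([np.convolve(x, morlet(tk, s), mode='same') for s in scales])
print(W.shape)                         # (n_scales, len(t)) -- a regular grid
```

This mirrors the `(128, 512)` shape in the answer: scales vary non-uniformly, but time is sampled uniformly at every scale, which is exactly the mismatch the question is about.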
297
wavelet transform
Difference between these two Continuous Wavelet Transforms?
https://dsp.stackexchange.com/questions/66889/difference-between-these-two-continuous-wavelet-transforms
<p>I am porting <a href="https://github.com/ebrevdo/synchrosqueezing" rel="nofollow noreferrer">Synchrosqueezing</a> to <a href="https://github.com/OverLordGoldDragon/ssqueezepy" rel="nofollow noreferrer">Python</a>, and came across an implementation difference on CWT between mine and <a href="https://github.com/PyWavelets/pywt" rel="nofollow noreferrer">PyWavelets'</a> - details below. The idea is to merge this implementation into PyWavelets if the latter doesn't already have it, but I cannot explain the difference as I'm unfamiliar with wavelet transforms.</p> <p>What is each implementation trying to accomplish, and what are some key differences?</p> <hr> <p><a href="https://github.com/PyWavelets/pywt/issues/543#issuecomment-614974001" rel="nofollow noreferrer">Full post</a>; comparing <a href="https://github.com/PyWavelets/pywt/blob/master/pywt/_cwt.py#L37" rel="nofollow noreferrer"><code>pywt.cwt</code></a> vs. <a href="https://github.com/OverLordGoldDragon/ssqueezepy/blob/master/ssqueezepy/wavelet_transforms.py#L389" rel="nofollow noreferrer"><code>cwt_fwd</code></a>, pasting transform excerpts:</p> <pre><code># pywt.cwt
for i, scale in enumerate(scales):
    step = x[1] - x[0]
    j = np.arange(scale * (x[-1] - x[0]) + 1) / (scale * step)
    j = j.astype(int)  # floor
    if j[-1] &gt;= int_psi.size:
        j = np.extract(j &lt; int_psi.size, j)
    int_psi_scale = int_psi[j][::-1]

    size_scale = next_fast_len(data.shape[-1] + int_psi_scale.size - 1)
    if size_scale != size_scale0:
        # Must recompute fft_data when the padding size changes.
        fft_data = fftmodule.fft(data, size_scale, axis=-1)
    size_scale0 = size_scale
    fft_wav = fftmodule.fft(int_psi_scale, size_scale, axis=-1)
    conv = fftmodule.ifft(fft_wav * fft_data, axis=-1)
    conv = conv[..., :data.shape[-1] + int_psi_scale.size - 1]
</code></pre> <pre><code># cwt_fwd
xh = np.fft.fft(x)  # x == data
xi = np.zeros((1, N))
psihfn = wfiltfn(wavelet_type, opts)

for ai in range(na):
    a = Wx_scales[ai]
    psih = psihfn(a * xi) * np.sqrt(a) / np.sqrt(2 * PI) * (-1)**k
    dpsih = (1j * xi / opts['dt']) * psih
    xcpsi = np.fft.ifftshift(np.fft.ifft(psih * xh))
    Wx[ai] = xcpsi
    dxcpsi = np.fft.ifftshift(np.fft.ifft(dpsih * xh))
    dWx[ai] = dxcpsi
</code></pre> <hr> <p><strong>Observations</strong>:</p> <ul> <li><code>cwt_fwd</code> has an imaginary component, <code>pywt.cwt</code> doesn't</li> <li><code>pywt.cwt</code>'s norm is ~10x greater (see <code>norm_abs</code> in the full post)</li> <li>The signal (<code>x</code>) preprocessing seems to undo itself in this case, as is evident when comparing the processed vs. unprocessed results in the full post</li> <li><code>pywt.cwt</code> handles the noisy signal (<code>sN</code>) case considerably differently in the upper half</li> </ul> <hr> <p><strong>Visualized differences</strong>:</p> <p><img src="https://i.sstatic.net/A1Abk.jpg" width="600"></p>
298
wavelet transform
The downsampling step with discrete wavelet transform
https://dsp.stackexchange.com/questions/82508/the-downsampling-step-with-discrete-wavelet-transform
<p>For a one-stage discrete wavelet transform (DWT), if we have a signal with 1000 samples occupying the frequency range from zero to 500 Hz, the output of the low-pass filter is a signal with frequency range 0-250 Hz, and the output of the high-pass filter is a signal with frequency range 250-500 Hz. Downsampling is then performed. For the low-pass branch this agrees with the sampling theorem: the sampling rate must be at least twice the highest frequency of the signal, and since the highest frequency at the low-pass output is reduced to half, that output can be downsampled by a factor of two. But what about the output of the high-pass filter? Its highest frequency is still 500 Hz, so how can we downsample it? Its bandwidth is 250 Hz, but its highest frequency is 500 Hz. According to the sampling theorem, <span class="math-container">$f_s$</span> must be at least <span class="math-container">$2f_m$</span>; is <span class="math-container">$f_m$</span> the highest frequency of the signal, or its bandwidth?</p>
<p><strong>TL;DR:</strong></p> <p>When you down-sample a real signal that is sampled at <span class="math-container">$f_s$</span> by a factor of two, the new sampling rate will be <span class="math-container">$f_s/2$</span>. The frequency span from <span class="math-container">$0$</span> to <span class="math-container">$f_s/4$</span> will be intact as it was originally, but any spectrum that was originally from <span class="math-container">$f_s/4$</span> to <span class="math-container">$f_s/2$</span> will now alias to the same <span class="math-container">$0$</span> to <span class="math-container">$f_s/4$</span> spectrum, which is now the primary Nyquist band. The result of bandpass filtering and down-sampling by two includes a direct frequency translation from the original <span class="math-container">$f_s/2$</span> to DC. (This is the reason we must have anti-alias filters in the analog domain prior to an A/D converter when sampling). The digital filter in this case serves as the anti-alias filter prior to resampling (decimation). Just as in A/D conversion, we can use bandpass filters to select higher Nyquist zones, which will alias to the primary Nyquist band.</p> <p><a href="https://i.sstatic.net/ZCNa4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZCNa4.png" alt="aliasing in Dec by 2" /></a></p> <p><strong>Details for the weary:</strong></p> <p>I like to explain this starting with sampling an analog signal, since the effects are identical once we understand the fundamentals of sampling in general. Consider the process of sampling a 3 Hz sinusoid with a 20 Hz sampling clock. The 3 Hz real sinusoid has a frequency tone at +/-3 Hz (consistent with <span class="math-container">$\cos(2\pi 3t) = 0.5e^{j2\pi 3 t}+0.5e^{-j2\pi 3 t}$</span>: each of those exponentials is an impulse in the frequency domain). This is the top spectrum shown.
Note specifically how a copy of this appears around every multiple of the sampling frequency if we extended our digital frequency axis to <span class="math-container">$\pm \infty$</span> (the noted periodicity in frequency that occurs in discrete time systems).</p> <p><a href="https://i.sstatic.net/uMC9T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uMC9T.png" alt="sampling 3 Hz cosine" /></a></p> <p>Because of this replication we only need to be concerned with the frequency from <span class="math-container">$\pm f_s/2$</span> since it repeats exactly everywhere else, and further with real signals such as in the OP's case, the spectrum is Hermitian symmetric (same magnitude, opposite phase), so we really only need to be concerned with the frequency from DC to <span class="math-container">$f_s/2$</span>, just as the OP has mentioned. However when we start dealing with really understanding aliasing effects, and multi-rate signal processing (decimation and interpolation), I find it very intuitive to keep this equivalent view in mind: the unrolled digital spectrum.</p> <p>Now helpful to understand the aliasing effects when down-sampling (and aliasing when sampling in general) is the plot below showing that we will get the exact same digital spectrum with a 23 Hz sinusoid (or 43, 63, 83, ... Hz sinusoid). Each of these unique bands that can alias to our unique digital span from <span class="math-container">$-f_s/2$</span> to <span class="math-container">$+f_s/2$</span> is referred to as a Nyquist zone. 
If the reason for that is confusing, I provide further links at the bottom of this post on other answers here at StackExchange that detail this further.</p> <p><a href="https://i.sstatic.net/RaSVY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RaSVY.png" alt="Sampling 23 Hz Cosine" /></a></p> <p>So from this we gain two main points: First, in order to not have severe distortion or elevated noise from every Nyquist zone that will equally create our unique digital spectrum in the <span class="math-container">$-f_s/2$</span> to <span class="math-container">$+f_s/2$</span> range, we must filter the spectrum of choice prior to sampling the signal. Second, this filter can be a low pass filter such as the case of the 3 Hz sinusoid, or it can be a bandpass filter such as the case of the 23 Hz sinusoid- both will produce an identical sampled signal and it is our knowledge of what filter was used that allows us to distinguish which signal was actually sampled. Bandpass filtering and sampling is also referred to as &quot;undersampling&quot;.</p> <p>Now with that all in mind, consider the OP's case of a down-sample by two, with a low pass filter from DC to <span class="math-container">$f_s/4$</span>. Note the identical operations, in that the primary spectrum (the first Nyquist zone) in all cases is replicated around every copy of the sampling rate, and how after the down-sample by 2, <span class="math-container">$f_s/2$</span> becomes the new sampling rate. 
We select what we want to have as the primary spectrum with filtering, here below is the result after a low-pass filter:</p> <p><a href="https://i.sstatic.net/rMeAB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rMeAB.png" alt="down-sample by 2 low pass" /></a></p> <p>And the same but with a high pass filter from <span class="math-container">$f_s/4$</span> to <span class="math-container">$f_s/2$</span>:</p> <p><a href="https://i.sstatic.net/m0oV2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m0oV2.png" alt="down-sample by 2 high pass " /></a></p> <p>So the result of bandpass filtering and down-sample by two includes a direct frequency translation from the original <span class="math-container">$f_s/2$</span> to DC.</p> <p><strong>Details for the very interested:</strong></p> <p>To really understand this process universally to signal processing applications, it is imperative to go beyond the &quot;real-only&quot; signals and follow this through for the case of complex signals; understanding this will avoid a lot of misunderstandings, particularly with aliasing effects. I explain this all in greater detail in the following posts which are recommended for further reading:</p> <p><a href="https://dsp.stackexchange.com/questions/31843/aliasing-after-downsampling/31929#31929">Aliasing after downsampling</a></p> <p><a href="https://dsp.stackexchange.com/questions/54423/higher-order-harmonics-during-sampling/54432#54432">Higher order harmonics during sampling</a></p> <p><a href="https://dsp.stackexchange.com/questions/81695/does-t-1-f-always-hold/81698#81698">does T = 1/F always hold?</a></p>
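The high-pass branch's fs/2-to-DC translation can be checked numerically. This sketch (all frequencies and lengths are illustrative assumptions) places a tone in the upper band fs/4..fs/2, decimates by two without any extra mixing, and locates the spectral peak before and after:

```python
import numpy as np

fs = 1000.0
n = np.arange(1000)
x = np.cos(2*np.pi*400*n/fs)     # a tone in the upper band, fs/4..fs/2

def peak_freq(sig, rate):
    # frequency of the largest rFFT bin
    spec = np.abs(np.fft.rfft(sig))
    return np.fft.rfftfreq(len(sig), d=1/rate)[np.argmax(spec)]

# Down-sample by two: the new rate is 500 Hz, and the 400 Hz tone folds
# down to 500 - 400 = 100 Hz, i.e. the band fs/4..fs/2 maps onto DC..fs/4
y = x[::2]
print(peak_freq(x, fs), peak_freq(y, fs/2))  # 400.0  100.0
```

No explicit mixer is needed: the decimation itself performs the translation, which is exactly why the DWT's high-pass/down-sample branch lands its band in the same coefficient range as the low-pass branch.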
299
signal denoising
Signal Denoising Uniformly in Frequency Domain
https://dsp.stackexchange.com/questions/54897/signal-denoising-uniformly-in-frequency-domain
<p>I have a noisy sparse signal containing a number of frequency components. Is there any method to uniformly denoise this signal? In other words, a method that estimates and eliminates the noise power across all frequencies in the band, and not only at the borders as in wavelet denoising?</p> <p>The following plot is an example of what I mean. I have 3 peaks and want to cancel the noise below these peaks.</p> <p><a href="https://i.sstatic.net/UDT8O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDT8O.png" alt="example of one of my signals " /></a></p>
<p>This sounds like a great opportunity to attempt to use singular spectrum analysis (SSA):</p> <p>It appears you have some observed signal, <span class="math-container">$Y[n]$</span>, which is some sort of mixture, and we take <span class="math-container">$Y[n]$</span> and create a Hankel matrix from it using some desired frame length M (this will effectively determine your sub-space resolution...more on that later).</p> <p>Now that we have our Hankel matrix, we can either compute <span class="math-container">$R_Y$</span> directly and then do an eigendecomposition, or we could go with a Singular Value Decomposition of the trajectory matrix. You'll find them to essentially be equivalent if we use the left singular vectors; either method is perfectly valid.</p> <p>So once we get our matrix of eigenvectors, <span class="math-container">$U$</span>, we need to project via a linear transform to get the principal components, <span class="math-container">$P$</span>:</p> <p><span class="math-container">$P = Y^H U$</span>.</p> <p>Now that we're here, we can use the singular values to determine the amplitudes of the signals themselves, and the principal components essentially determine the "parts" of the signal, i.e. <span class="math-container">$X[n]$</span> and <span class="math-container">$\epsilon[n]$</span>.</p> <p>Let's say we have N principal components, and we know that we only "need" the first one: from our principal components, we perform what is typically called eigentriple grouping via the following multiplication:</p> <p><span class="math-container">$C = UP^H$</span> (where H denotes the Hermitian operator)</p> <p>This new matrix, C, is what we call the reconstructive component(s) of <span class="math-container">$Y$</span>. We're not quite done yet, because <span class="math-container">$C$</span> is the same dimension as our Hankel matrix, <span class="math-container">$Y$</span>, so to get back to our signal, we'll need an additional step. To reconstruct the series itself, we'll perform what's called diagonal averaging. The goal here is to finally map back to a single signal, let's call it <span class="math-container">$Y_{out}[n]$</span>. It's in this step that we utilize the one-to-one relationship of the Hankel matrix to the series to extract out signals. This step is a bit long-winded, so I've omitted it for brevity, but you can consult <a href="https://en.wikipedia.org/wiki/Singular_spectrum_analysis#Basic_SSA" rel="nofollow noreferrer">Wikipedia</a> or any of the many papers/texts on SSA for more information.</p> <p>So that's SSA in a nutshell, and naturally there is a lot you can do with the method. So long as you have some appreciable signal-to-noise ratio (SNR) in the frequency domain, I think you'll find SSA works quite well for extracting out frequency-separable signals. The big issue is obviously going to be determining which principal components you care about.</p> <p>The key to discerning your signals is that if you can assume they're all separable in frequency and have some non-negative SNR value in the frequency domain, SSA will provide a decomposition of these signals, and the power of the individual signals is stored in the eigen/singular values you computed earlier in the method (you'll have M of these). Simply by inspecting the eigenspectrum, you can typically discern actual signals from noise; noise tends to be low power and spread across several principal components/eigenvalues, whereas true signals are typically contained in one or very few of them.</p> <p>So, in short, if you have some time series observation, you can use SSA to attempt to discern individual signals/trends. It's a really powerful tool, and as long as the signals are separable, you should have some reasonable success.</p> <p>Now if you have several observations of the same data (let's say you had several sensors observing the same signals), you could attempt to use some classic blind source separation techniques, such as principal components analysis (PCA), independent components analysis (ICA), etc. Your question didn't seem to indicate such, but if that's the case let me know and I can include some details on those methods as well. Hope that helps!</p>
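The pipeline described above (Hankel/trajectory matrix, SVD, grouping, diagonal averaging) can be condensed into a short sketch. The window length, retained rank, and test signal are illustrative assumptions, not values from the answer; a real sinusoid occupies a pair of singular components, hence the rank of 2:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 40                      # series length, frame (window) length
n = np.arange(N)
clean = np.sin(2*np.pi*n/25)
y = clean + 0.3*rng.standard_normal(N)

# 1) Trajectory (Hankel) matrix: column j is the window y[j:j+M]
K = N - M + 1
Y = np.column_stack([y[i:i+M] for i in range(K)])   # shape (M, K)

# 2) SVD of the trajectory matrix (equivalent to the eigendecomposition route)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# 3) Grouping: keep the dominant pair of components (rank-2 reconstruction)
r = 2
C = (U[:, :r] * s[:r]) @ Vt[:r]

# 4) Diagonal averaging: average C over its anti-diagonals to get a series
rec = np.zeros(N)
counts = np.zeros(N)
for j in range(K):
    rec[j:j+M] += C[:, j]
    counts[j:j+M] += 1
rec /= counts

print(np.std(y - clean), np.std(rec - clean))  # error before vs after SSA
```

With this setup the residual error of the rank-2 reconstruction is well below the raw noise level, illustrating the answer's point that the signal concentrates in a few singular components while the noise spreads across the rest.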
300
signal denoising
Time averaging denoising signal
https://dsp.stackexchange.com/questions/72762/time-averaging-denoising-signal
<p>I would like to use a time-averaging technique for denoising a vibration signal, using the function below. How do we choose the appropriate parameters D and N for optimal denoising?</p> <pre><code>% sigav.m - signal averaging
%
% y = sigav(D, N, x)
%
% D = length of each period
% N = number of periods
% x = row vector of length at least ND (doesn't check it)
% y = length-D row vector containing the averaged period
% It averages the first N blocks in x

function y = sigav(D, N, x)
    y = 0;
    for i = 0:N-1,
        y = y + x((i*D+1) : (i+1)*D);   % accumulate i-th period
    end
    y = y / N;
</code></pre>
<p>Your approach will only denoise anything if your signal is actually periodic. That's typically the case if you use a sine wave, periodic random noise, or a sweep as the excitation signal for a measurement. It can also work if your vibration source is periodic (e.g. an engine running at constant rpm).</p> <p>In this case <span class="math-container">$D$</span> should be chosen to represent EXACTLY one period. <span class="math-container">$N$</span> is simply determined by the length of your data set: the longer, the better.</p> <p>This works best if the excitation clock and acquisition clock are phase locked, i.e. derived from the same clock source. If that's not the case, it may be necessary to do some sort of clock recovery or clock tracking first.</p>
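The effect of the averaging can be sketched in a few lines of Python (a NumPy equivalent of the MATLAB `sigav` above; the period length, period count, and noise level are illustrative): the coherent signal survives the average while zero-mean noise shrinks by roughly the square root of the number of periods.

```python
import numpy as np

rng = np.random.default_rng(1)
D, Nper = 128, 50                   # samples per period, number of periods
period = np.sin(2*np.pi*np.arange(D)/D)
x = np.tile(period, Nper) + 0.5*rng.standard_normal(D*Nper)

# Average the N periods (what sigav does): coherent signal adds up,
# zero-mean noise averages down by about sqrt(N)
y = x.reshape(Nper, D).mean(axis=0)

print(np.std(x[:D] - period), np.std(y - period))  # ~0.5 vs ~0.5/sqrt(50)
```

This also shows why `D` must match one period exactly: with a wrong `D` the rows of the reshape drift out of phase, the signal stops adding coherently, and the "denoised" average smears the waveform instead.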
301
signal denoising
Denoising a signal
https://dsp.stackexchange.com/questions/50664/denoising-a-signal
<p>I'm starting hydraulic experiments, where I have to measure velocity in an unsteady flow with a device called an Acoustic Doppler Velocimeter. In DSP terms, I have a nonstationary signal in the shape of waves (in the figure below, the instantaneous velocity (cm/s) as a function of time (s) at one point; the period is about 70 s in my case). This signal contains a mean component <em>(mean velocity)</em> and noise <em>(turbulence)</em>. My goal is to <strong>extract the mean velocity.</strong> </p> <p>I have looked into DSP and found many interesting models (Hilbert-Huang Transform, Wavelet Transform, Short-Time Fourier Transform) for denoising. The only problem is that, in the steady case, they need about <strong>3 minutes of measurement at one point so that they can average (arithmetic averaging) and filter out this noise</strong>. Since my flow is unsteady, I'd probably need more. Besides, my signal lasts only about 1.5 minutes. So I'm a little bit lost: can I still apply the denoising models (they are applied in the literature)?</p> <p>Thank you!</p> <p><a href="https://i.sstatic.net/aeZx5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aeZx5.png" alt="Velocity as functin of time "></a></p>
<p>As @matthewjpollard has already indicated, you may be reaching for tools more complex than the problem requires. You should always begin with the simplest solution.</p> <p>I hope the following OCTAVE code can convince you of the justification of this principle. Note that I've used the simplest (yet more complex than simple polynomials) model of a nonlinear wave fitting your description. A more realistic model might be required for a thorough investigation.</p> <pre><code>clc; clear all; close all;

N = 256;   % cosine period (and signal length)
M = 32;    % tail length of signal
n = [0:N-1];                 % time-index

x1 = 1 - cos(2*pi*n/N);      % wanted signal
x1 = [x1, zeros(1,M)];       % append a zero tail (for practical purposes here)

x2 = 0.15*randn(1,N+M);      % enough wideband gaussian noise, to be added to our signal
x = x1 + x2;                 % total signal as noise + wanted

b = fir1(64, 0.01);          % simple FIR LP filter of order 64 and wc = 0.01*pi
y = filter(b, 1, x);         % filter the input signal and obtain the result

figure, plot(x1); title('what you want')
figure, plot(x);  title('what you have')
figure, plot(y);  title('what you get')
</code></pre> <p>The result is shown below (note the <em>delay</em> of the filter...). Depending on your accuracy requirement, you may investigate other filter types too.</p> <p><a href="https://i.sstatic.net/Soig5.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Soig5.jpg" alt="enter image description here"></a></p>
302
signal denoising
Denoising Signal With Butterworth-Filter
https://dsp.stackexchange.com/questions/87273/denoising-signal-with-butterworth-filter
<p>I'm trying to denoise a signal to which I added AWGN. Here is what I've done so far:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

&quot;&quot;&quot;Creating a cosine and adding AWGN to it. Then denoising it.&quot;&quot;&quot;

# Create cosine and cosine with noise
f_0 = 50
f_max = 2*f_0
t = np.linspace(0, 2*1/f_0, 256)
y = np.cos(2*np.pi*f_0*t)

sigma, mu = 3, 5
noise = sigma * np.random.randn(len(t)) + mu
y_noise = y + noise

# Plot both signals
figure, (ax1, ax2, ax3) = plt.subplots(3, 1)
ax1.plot(t, y, label='Signal')
ax1.set_xlabel('Time / $s$')
ax1.set_ylabel('Voltage / $V$')
ax1.legend(loc='upper right')

ax2.plot(t, y_noise, color='green', label='Signal with noise')
ax2.set_xlabel('Time / $s$')
ax2.set_ylabel('Voltage / $V$')
ax2.legend(loc='upper right')

#peaks, _ = signal.find_peaks(y_noise)
#ax2.scatter(t[peaks], y_noise[peaks], color='red')

# Design a low-pass filter for denoising
order = 6
f_c = 10
f_s = f_0 / 2
b, a = signal.butter(order, f_c / f_s)

# Apply filter to signal
y_denoised = signal.lfilter(b, a, y_noise)

# Plot transformed signal
ax3.plot(t, y_denoised, label='Denoised Signal')
ax3.set_xlabel('Time / $s$')
ax3.set_ylabel('Voltage / $V$')
ax3.legend(loc='upper right')
</code></pre> <p>Any ideas why this doesn't give me back the cosine signal?</p>
<p>AWGN, by definition, is &quot;white&quot; and therefore has a constant power spectral density expectation across all frequencies. Therefore, your signal is buried in noise that is also partly at the exact same frequency. You cannot simply filter to &quot;de-noise&quot; because your filter passband will pass both signal and noise.</p>
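The point about in-band noise can be made concrete with a short SciPy sketch (the sample rate, cutoff, and noise level here are illustrative, not taken from the question): a low-pass filter removes the noise power above the cutoff, but the white noise that shares the tone's band passes straight through.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0                       # assumed sample rate, Hz
f0 = 50.0                         # tone frequency, Hz
t = np.arange(0, 1.0, 1 / fs)
clean = np.cos(2 * np.pi * f0 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)  # white noise: all frequencies

# Butterworth low-pass just above the tone: keeps the 50 Hz signal plus the
# in-band part of the noise, and rejects the noise above ~80 Hz.
b, a = signal.butter(6, 80.0 / (fs / 2))
filtered = signal.filtfilt(b, a, noisy)  # zero-phase, so no filter delay

def mse(x, y):
    return float(np.mean((x - y) ** 2))
```

The residual error of `filtered` never reaches zero: the noise inside the passband is indistinguishable from the signal by frequency selection alone, which is exactly the limitation described above.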
303
signal denoising
Implications of adding gaussian noise after denoising original signal
https://dsp.stackexchange.com/questions/96644/implications-of-adding-gaussian-noise-after-denoising-original-signal
<p>I'm new to DSP due to a research project related to audio classification. We are designing a DL-based pipeline to determine which kind of vessel appears in underwater recordings.</p> <p>The thing is, I've seen that it would be good (and given how many papers refer to and focus on the denoising process, almost mandatory) to apply denoising techniques to the raw signal. We've taken it as a logical step, as leaving a hydrophone in the wild for weeks results in <strong>a lot</strong> of ambient noise. However, there is a class imbalance problem with our training set (samples recorded with the same equipment but labeled semi-manually via the ship's AIS).</p> <p>Other than adding class weights, I've read a lot about adding Gaussian noise to the model's input during training epochs, so that it learns to generalize by simulating natural conditions and has more variety of examples (combined with time and frequency masking, time shifting, etc...)</p> <p>Now this step also seems logical, but the question arises:</p> <p>Wouldn't it be contradictory to denoise and then add noise afterwards? At first I thought &quot;ok, it removes aggressive or intrusive non-uniform noise from input data and the added noise is similar between all samples&quot;. But if the idea is to mimic natural conditions, isn't the original noise better?</p> <p>Thanks!</p>
<blockquote> <p>Wouldn't be contradictory to do denoising to add noise afterwards?</p> </blockquote> <p>Maybe. Maybe not.</p> <p>This is turning out to confuse <em>me</em> as I explain it, so bear with me:</p> <p>Just adding Gaussian noise to your denoised data -- not contradictory per se., but possibly not the best way to corrupt your data set in a manner that's going to be constructive.</p> <p>Adding the <em>sort of</em> noise that you'd see if your original data <em>had been</em> taken in a noisier environment -- oh, yes, absolutely.</p> <blockquote> <p>At first I thought &quot;ok, it removes aggressive or intrusive non-uniform noise from input data and the added noise is similar between all samples&quot;.</p> </blockquote> <p>I'm contradicting my above statement -- sorry-not-sorry. Adding some limited white Gaussian noise should make the result more robust to unanticipated pseudo-features in your data. Any sort of machine learning model is famous for finding correlations on the most unexpected (and, to our minds, stupid) things, like &quot;all people appear in pictures with fuzzy backgrounds, while landscapes are sharp across the entire picture&quot;.</p> <p>If you inadvertently build a truism into your model (all real ships are recorded in Portsmouth*, all noise with no ships is recorded off the end of the public dock in Bar Harbor**) then that will skew your identifications.</p> <blockquote> <p>But if the idea is to mimic natural conditions, isn't better the original noise?</p> </blockquote> <p>Yes. But if you want to classify ships in <em>any</em> location and <em>any</em> sea condition, then you need to add in original noise from <em>everywhere</em> and <em>everywhen</em>. 
You also need to take each ship's native sound output and apply whatever reverberation that it's going to be affected by in every place that you want your algorithm to work -- deep sea, the English Channel, the straits of Juan de Fuca (or Gibraltar), mirror-smooth sea, 12-foot waves, 12-<strong>meter</strong> waves, any Portsmouth that actually has ships, etc..</p> <p>If I could get grant money I think I'd work on a model that can listen to ambient noises and form its own internal model of reverberations, and then apply the inverse of that model to any &quot;ship's noise&quot; that it hears. I suspect that some of the layers of auditory processing that animals possess do that, by virtue of the half a billion of years of animal evolution that hasn't yet been applied to machine learning.</p> <hr /> <p>* Any Portsmouth -- if you're in any English-speaking country, choose the closest one that actually has ships. Or get all nostalgic and choose Portsmouth, England.</p> <p>** Tillamook, if you're from Oregon. Or any minor port*** anywhere.</p> <p>*** Sorry B.H. and Tillamook. But really -- compare yourself to New York. Or Portsmouth England. Or San Francisco. Whatever.</p>
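If it helps, the augmentation step being discussed is usually implemented as adding white Gaussian noise scaled to a target SNR. Here is a minimal NumPy sketch; the function name, SNR value, and the stand-in "clip" are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise_at_snr(x, snr_db, rng):
    """Add white Gaussian noise scaled so the result has the target SNR in dB."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + np.sqrt(p_noise) * rng.standard_normal(x.shape)

# Stand-in for one denoised hydrophone clip; real data would go here.
clip = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 8000))
augmented = add_noise_at_snr(clip, snr_db=10.0, rng=rng)
```

Randomizing `snr_db` per epoch, and mixing in recorded ambient noise instead of (or in addition to) the Gaussian source, keeps the augmentation closer to the natural conditions discussed above.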
304
signal denoising
How do you apply Kalman Filter to track a signal?
https://dsp.stackexchange.com/questions/17333/how-do-you-apply-kalman-filter-to-track-a-signal
<p>The example that I've seen on state estimation involves deriving the ABCD matrix of a physical system (e.g. a falling object) and tracking that object.</p> <p>I would like to use the Kalman filter for signal denoising applications (specifically EEG signals). How could I apply the KF in this situation, since an ABCD matrix cannot be derived for a signal?</p>
<p>You'd probably be better off with an adaptive filter like LMS (Least Mean Squares) or RLS (Recursive Least Squares) than Kalman for something like an EEG signal. As you've pointed out, it can be difficult or even impossible to develop the state model for Kalman for that type of signal. LMS and RLS are "learning" algorithms and better suited.</p> <p><a href="http://en.wikipedia.org/wiki/Adaptive_filter" rel="nofollow">The Wikipedia article on adaptive filters</a></p> <p>mentions the case of an ECG signal as an example application, although it doesn't go into detail.</p> <p>Simon Haykin's "Adaptive Filter Theory" is a good reference on these types of filters.</p>
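For intuition, here's a minimal NumPy sketch of an LMS adaptive noise canceller in the classic two-channel configuration (primary = signal + shaped noise, reference = the raw noise source). The signal, noise path, filter length, and step size are all made up for illustration; a real EEG setup would need a measured reference channel.

```python
import numpy as np

rng = np.random.default_rng(2)
n = np.arange(5000)
s = np.sin(2 * np.pi * 0.01 * n)            # wanted signal (stand-in for EEG)
v = rng.standard_normal(n.size)             # noise source
noise_path = np.array([0.8, -0.4, 0.2])     # unknown path the noise takes
primary = s + np.convolve(v, noise_path)[:n.size]  # observed: signal + shaped noise
reference = v                               # separate noise-only reference channel

# LMS noise canceller: learn the noise path, subtract its output from primary.
M, mu = 8, 0.01                             # filter length, step size
w = np.zeros(M)
out = np.zeros(n.size)
for k in range(M - 1, n.size):
    x = reference[k - M + 1:k + 1][::-1]    # taps v[k], v[k-1], ..., v[k-M+1]
    e = primary[k] - w @ x                  # error = cleaned signal estimate
    w += mu * e * x                         # LMS weight update
    out[k] = e
```

The "learning" nature mentioned above is visible in `w`: after convergence its first taps approximate `noise_path` without any state model having been specified.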
305
signal denoising
Denoise a randomly occuring signal
https://dsp.stackexchange.com/questions/11485/denoise-a-randomly-occuring-signal
<p>I have a program which takes in data from an oscilloscope, and due to reflections in the medium, we will get random signals at random intervals. The two grey windows at the bottom right show the original data (below the red line), and after FFT forward and backward (above the red line). </p> <p>The data set is originally 10,000 points, so the FFT has 20000 doubles stored. You can see in the command shell that the backwards FFT uses 19000 of those 20000 values. My issue is that at the green rectangle I get waves before and after the real signal, which means I can hardly denoise it, since the signal itself is rather sharp and high-frequency. </p> <p>Would I first find and extract the areas where the signal itself is present, and then run the FFT on that subset, or how would I go about denoising such an area?</p> <p>The noise itself is not visible, since the 10k data points are scaled to 500 pixels, but it's random electrical noise coming from the oscilloscope.</p> <p>Link to the image in big: <a href="https://i.sstatic.net/pQ7Qu.png" rel="nofollow noreferrer">https://i.sstatic.net/pQ7Qu.png</a></p> <p>The other two images show similar data being captured, once averaged 128 times, once as a single set. I need to find a way to denoise the single capture.</p> <p><img src="https://i.sstatic.net/XUIgb.png" alt="Dataset and current settings"></p> <p><img src="https://i.sstatic.net/Rzlqe.jpg" alt="Data non-average"></p> <p><img src="https://i.sstatic.net/FDrHk.jpg" alt="Data averaged"></p>
306
signal denoising
Denoise Techniques When Clean Signal and Pure Noise Are Available
https://dsp.stackexchange.com/questions/47815/denoise-techniques-when-clean-signal-and-pure-noise-are-available
<p>I have the clean version of the signal.<br> I can obtain the environmental noise. I want to apply an effective denoising technique to a noisy signal (i.e., clean plus environmental noise).</p> <p>Some observations:</p> <ul> <li>The noise to signal ratio is extremely low.</li> <li>The noise is spread across all frequencies in the frequency domain.</li> </ul> <p>Are there any techniques for taking advantage of the noise and the clean signal to denoise the noisy signal?</p>
<p>I'm not sure I understand the question, but if you have the exact waveform you want to recover, you can basically employ a <a href="https://en.wikipedia.org/wiki/Matched_filter" rel="nofollow noreferrer">matched filter</a> to detect the existence of the signal in the acquired data. </p> <p>This does not really constitute denoising, but if you have full knowledge of the signal and the noise functions, I'm not sure how denoising would be useful, as the best possible denoising would be to just subtract the noise from the function (assuming all noise is additive).</p> <p><strong>Edit</strong> I am not able to comment (not enough reputation) so I'm going to reply here.</p> <blockquote> <p>Imagine that I have different classes of clean signals. I know what the signal looks like in each one of them. Before I turn the signal source on I listen for environmental noise. Then the signal source turns on but I don't know which kind of signal it is transmitting. Is there a way to denoise the noisy signal to be able to infer which signal was transmitted? – dr.doom 18 hours ago</p> </blockquote> <p>I'm now very convinced that you are looking for a matched filter. In fact, that's exactly their application.</p> <p>I don't exactly know how many types of signals you are using for this, but attempt the following:</p> <ul> <li>Acquire noisy data.</li> <li>Cross-correlate your noiseless signals one by one with the retrieved signal.</li> <li>Any sharp peak will indicate the presence of your signal in the retrieved data.</li> </ul> <p>Now, there is a possibility that your signals are not orthogonal (that is, they are similar enough that you will measure some correlation for more than one signal), so in that case you should look for the largest correlation, and not just any signal that produces a correlation peak when matched with the data.</p>
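As a concrete sketch of the detection recipe in the bullet list above (the templates, delay, and noise level are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two known clean templates (illustrative): a sinusoid and a pseudorandom
# binary sequence, each 128 samples long.
n = np.arange(128)
templates = [np.sin(2 * np.pi * 0.05 * n),
             np.sign(rng.standard_normal(n.size))]

# Received data: template 1 buried in unit-variance noise at an unknown delay.
data = rng.standard_normal(1024)
delay = 400
data[delay:delay + n.size] += templates[1]

# Matched filtering = cross-correlating each known template with the data;
# the largest, sharpest peak tells us which template is present and where.
scores = []
for tpl in templates:
    corr = np.correlate(data, tpl, mode="valid")
    scores.append((corr.max(), int(corr.argmax())))
best = max(range(len(scores)), key=lambda i: scores[i][0])
```

If the templates are strongly correlated with each other (the non-orthogonal case mentioned above), normalizing each correlation by the template energy before comparing the peaks is the safer comparison.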
307
signal denoising
sparse representation for image denoising
https://dsp.stackexchange.com/questions/30557/sparse-representation-for-image-denoising
<p>When I read papers on image denoising, I always encounter sparse representation. For image denoising, we try to separate the image signal from noise. It is assumed that the signal is correlated and the noise is uncorrelated. A sparse representation represents a signal as a linear combination of a small number of dictionary elements, aiming to use as few non-zero coefficients as possible. In my view, this is appropriate for image compression. How is it related to image denoising?</p>
<p>I will start the explanation from the compression viewpoint. There are two main types: lossless compression, and lossy compression. Noise, or at least divergence or loss from the original data, arises only with lossy compression. </p> <p>When one considers the original data as the "clean" reference, lossy compression adds an amount of loss related to (and generally increasing with) the allowed compression ratio.</p> <p>However, it is interesting to take another perspective, which I borrow from <a href="http://dx.doi.org/10.1109/78.482110" rel="nofollow">Filtering Random Noise from Deterministic Signals via Data Compression</a>, 1995, B. K. Natarajan, that I found influential. </p> <p>The main idea starts from the observation that noise (with a sense of randomness, lack of predictability) is hard to compress, while structured signals possess a certain amount of correlation that somehow can be compacted. In other words, when data is easy to compress, it is likely to be structured.</p> <p>This led Natarajan to the definition of Occam filters, a name related to W. of Ockham and his (alleged) law of parsimony. Starting from noisy data, one compresses it with a loss close to the original noise power. If the compression system is well designed, this tends to remove (part of) the noise from the original signal. Shockingly enough, lossy compression can denoise, killing two birds (denoising and compression) with one stone. This can be found for instance in <a href="http://dx.doi.org/10.1109/83.862633" rel="nofollow">Adaptive wavelet thresholding for image denoising and compression</a>, 2000, where the noise limit can be made adaptive.</p> <p>Sparse representations, such as the wavelets above, but other transforms as well, have since been found effective at concentrating the energy of structured signals, and at spreading or whitening some random noises. Being sparse, such a representation of the image has structures that are easy to compress, while the noise stays spread out. 
The very same action of obtaining a sparse representation is a good preprocessing step for both compression and denoising.</p> <p>Note however that there is some abuse of the term "compression". Actual compression encompasses quantization, coefficient location, entropy coding, etc., and merely producing a few large coefficients and many zeroes does not really qualify as compression.</p> <p>Finally, the analogy between compression and image denoising is not valid for all images, or all types and levels of noise, but it is sufficiently correct in standard cases to allow them to be taught together.</p> <p>As for the comments:</p> <ul> <li><p>if you have a sparse or compressible signal $x$ (in some representation $f$), and add uncorrelated noise $n$ to it, then applying $f$ to $x+n$ is not denoising <em>per se</em>, because for denoising you need to decide, or choose, what to keep as a signal component and what to filter out as noise. But it generally provides:</p> <ul> <li>a clearer "distinction" between them, in terms of (local) amplitude, which often simplifies the use of statistical tools, or makes them more robust</li> <li>better access to signal, or noise, properties, to parametrize the denoising</li> </ul></li> <li><p>orthogonal transformations are quite good, but orthogonality limits their ability to sparsely represent signals. Overcomplete transforms induce correlations in the noise (<a href="http://arxiv.org/pdf/1108.5395" rel="nofollow">Noise Covariance Properties in Dual-Tree Wavelet Decompositions</a>), but as the signal coefficients are sparser, the relative amplitude of (sparse) signal coefficients and noise is (optimally) more favorable. 
Examples of whitening correlated noise with, for instance, redundant complex wavelets can be found in the above <a href="http://arxiv.org/pdf/1108.5395" rel="nofollow">reference</a> or in <a href="https://pdfs.semanticscholar.org/d2bc/d8c1af2c7ded8fa6c1652c9e32e2a304cf6a.pdf" rel="nofollow">Removal of Correlated Noise by Modeling the Signal of Interest in the Wavelet Domain</a>.</p></li> </ul> <p>Of course, a lot of the above assumes that the signal is sparse, that the noise is not correlated, and that you are able to find a proper representation, which is not evident in practice.</p>
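A toy NumPy illustration of the sparsity argument (the Haar transform, test signal, and universal threshold rule are my choices, not drawn from the references above): a piecewise-constant signal concentrates into a few large Haar coefficients, while white noise spreads evenly over all of them, so soft-thresholding the small detail coefficients removes mostly noise.

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_dwt(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return s, d

def haar_idwt(s, d):
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

# Piecewise-constant signal: sparse in the Haar detail coefficients.
clean = np.repeat([0.0, 2.0, -1.0, 1.0], 256)
sigma = 0.3
noisy = clean + sigma * rng.standard_normal(clean.size)

# Transform, soft-threshold the details (the signal lives in a handful of
# large coefficients there; the noise is spread thin), then invert.
s, d = haar_dwt(noisy)
thr = sigma * np.sqrt(2 * np.log(d.size))   # "universal" threshold
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)
denoised = haar_idwt(s, d)
```

A multi-level transform would also thin out the noise left in the approximation band; one level is enough to show the mean squared error drop.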
308
signal denoising
Denoising a signal using eigendecomposition
https://dsp.stackexchange.com/questions/53269/denoising-a-signal-using-eigendecomposition
<p>I have a <strong>complex</strong> observable series <span class="math-container">$Y(t)$</span> which is the result of summing two <strong>complex</strong> r.v.s: <span class="math-container">$X(t)$</span> (unobservable) and <span class="math-container">$\epsilon(t)$</span> (unobservable).</p> <p><span class="math-container">$$Y(t)=X(t)+\epsilon(t)$$</span></p> <p>Assume that <span class="math-container">$X$</span> and <span class="math-container">$\epsilon$</span> are independent, and that <span class="math-container">$\epsilon$</span> is serially uncorrelated and has constant mean and variance. </p> <p>I construct a Hankel matrix of the observed series and then I create a sort of variance-covariance matrix by multiplying the latter by its conjugate transpose:</p> <p><span class="math-container">$$ R_Y=\mathcal{Y}\cdot \mathcal{Y^*}=\mathcal{X}\cdot \mathcal{X^*}+\mathcal{E}\cdot \mathcal{E^*}$$</span> (The last equality holds because of the independence of <span class="math-container">$X$</span> and <span class="math-container">$\epsilon$</span>.)</p> <p>After that, I perform the eigendecomposition of that matrix and obtain:</p> <p><span class="math-container">$$U\cdot (\Lambda_X +\sigma^2\cdot I)\cdot U^T$$</span> where <span class="math-container">$U$</span> is the orthogonal matrix of eigenvectors, <span class="math-container">$\Lambda_X$</span> is a diagonal matrix containing the eigenvalues (in decreasing order) of <span class="math-container">$\mathcal{X}\cdot \mathcal{X^*}$</span>, and <span class="math-container">$\sigma^2$</span> represents the "variance" of <span class="math-container">$\epsilon$</span> (under my assumptions <span class="math-container">$\mathcal{E}\cdot \mathcal{E^*}$</span> is a diagonal matrix with <span class="math-container">$\sigma^2$</span> on its diagonal).</p> <p>Assume that the matrix <span class="math-container">$\mathcal{X}\cdot \mathcal{X^*}$</span> does not have full rank, and because of that it has some null eigenvalues.</p> <p>By the eigendecomposition of <span class="math-container">$R_Y$</span> we could select which eigenvalues correspond to the <span class="math-container">$\mathcal{X}\cdot \mathcal{X^*}$</span> matrix (by considering the first <span class="math-container">$d$</span> eigenvalues, where <span class="math-container">$d$</span> is the rank of the matrix <span class="math-container">$\mathcal{X}\cdot \mathcal{X^*}$</span>), and then we can estimate <span class="math-container">$\sigma^2$</span>. </p> <p>Knowing <span class="math-container">$\sigma^2$</span> we could estimate <span class="math-container">$R_X=\mathcal{X}\cdot \mathcal{X^*}$</span>.</p> <blockquote> <p>Is there any way to factorize <span class="math-container">$R_X$</span> in order to obtain <span class="math-container">$\mathcal{X}$</span> and then, by the one-to-one relation between Hankel matrices and series, obtain <span class="math-container">$X(t)$</span>? If this is possible, under which additional conditions?</p> </blockquote>
<p>So I'll get this answer started by saying again that what you are describing reminds me a lot of Singular Spectrum Analysis. The process is pretty much exactly how you describe: you have some observed signal, <span class="math-container">$Y[n]$</span>, which is some sort of mixture, and we take <span class="math-container">$Y[n]$</span> and create a Hankel matrix from it using some desired frame length M (this will effectively determine your sub-space resolution).</p> <p>Now that we have our Hankel matrix, we can either compute <span class="math-container">$R_Y$</span> directly and then do an eigendecomposition, or we could go with a Singular Value Decomposition of the trajectory matrix. You'll find them to essentially be equivalent if we use the left singular vectors; either method is perfectly valid.</p> <p>So once we get our matrix of eigenvectors, <span class="math-container">$U$</span>, we need to project onto them using your typical Karhunen-Loeve method to get what we'll call principal components (this is sort of classic SSA notation, I dislike the naming but I digress):</p> <p><span class="math-container">$P = Y^H U$</span>.</p> <p>Now that we're here, we can use the singular values to determine the amplitudes of the signals themselves, and the principal components essentially determine the "parts" of the signal, i.e. <span class="math-container">$X[n]$</span> and <span class="math-container">$\epsilon[n]$</span>.</p> <p>Let's say we have N principal components, and we know that we only "need" the first one: from our principal components, we perform what is typically called eigentriple grouping via the following multiplication:</p> <p><span class="math-container">$C = UP^H$</span></p> <p>This new matrix, C, is what we call the reconstructive component(s) of <span class="math-container">$Y$</span>, and is based on which principal components we selected. 
We're not quite done yet, because <span class="math-container">$C$</span> is the same dimension as our Hankel matrix, <span class="math-container">$Y$</span>, so to get back to our signal, we'll need an additional step. To reconstruct the series itself, we'll perform what's called diagonal averaging (a bit of a misnomer I think). The goal here is that we'll finally map back to a single signal, let's call it <span class="math-container">$Y_{\mathrm{out}}[n]$</span>. It's in this step that we'll be able to utilize that one-to-one relationship of the Hankel matrix to the series to extract out signals. This step is a bit long winded, so I've omitted it for brevity, but you can consult <a href="https://en.wikipedia.org/wiki/Singular_spectrum_analysis#Basic_SSA" rel="nofollow noreferrer">Wikipedia</a> or any of the many papers/texts on SSA for more information.</p> <p>So now that we've briefly gone through it, you might find yourself asking "when will this method actually work?". SSA, like anything, is a tool and it has its limits. From a signal processing standpoint, it's great at determining individual signals which are fairly narrowband (i.e. they don't cover the entire spectrum) and are "separable". In the context of SSA, "separable" means that the signals are non-overlapping in frequency. For a simple example, let's say I have two sinusoids, one at 50Hz, and another at 500Hz. SSA will be able to easily separate the two signals, and we will be able to isolate the signals pretty easily; we would call these signals separable. However, let's say I change the second signal to be 51Hz instead of 500Hz. In this case, SSA will struggle, as the signals are very, very close to one another; we would call these signals non-separable.</p> <p>How do we determine which signals we care about? That's the difficulty with this method. 
If you have some appreciable signal-to-noise ratio (SNR), you can perhaps use that to your advantage and identify which signals are of interest and which signals are just purely noise by inspecting the eigenspectrum; true signals as represented by the principal components will likely be associated with large eigenvalues, whereas noise will be relatively low power, and spread.</p> <p>So, in short, if you have some time series observation, you can use SSA to attempt to discern individual signals/trends. It's a really powerful tool, and as long as the signals are separable, you should have some reasonable success. Hope that helps!</p>
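The basic SSA pipeline described above (Hankel embedding, SVD, eigentriple grouping, diagonal averaging) fits in a few lines of NumPy; the test series, window length, and rank below are illustrative choices, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 400, 40                      # series length, window (frame) length
t = np.arange(N)
clean = np.sin(2 * np.pi * t / 25)
noisy = clean + 0.5 * rng.standard_normal(N)

# Trajectory (Hankel) matrix: columns are length-M windows of the series.
K = N - M + 1
Y = np.column_stack([noisy[i:i + M] for i in range(K)])

# SVD of the trajectory matrix (equivalent to the eigendecomposition of R_Y).
U, svals, Vt = np.linalg.svd(Y, full_matrices=False)

# One real sinusoid occupies two eigentriples, so keep the leading pair.
r = 2
Yr = (U[:, :r] * svals[:r]) @ Vt[:r]

# Diagonal averaging (Hankelization) maps the rank-r matrix back to a series.
recon = np.zeros(N)
counts = np.zeros(N)
for j in range(K):
    recon[j:j + M] += Yr[:, j]
    counts[j:j + M] += 1
recon /= counts
```

In practice you would pick the rank `r` by inspecting the eigenspectrum, as discussed above: the signal eigentriples stand out as large singular values, while the noise forms a low, flat tail.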
309
signal denoising
what is a Gaussian signal?
https://dsp.stackexchange.com/questions/27890/what-is-a-gaussian-signal
<p>I am learning Wiener filtering for 2D images by myself. From one book, it reads</p> <p><em>It should be noted that the study of the more general problem of signal denoising dates back to at least Norbert Wiener in the 1940s. The celebrated Wiener filter provides the optimal solution to the recovery of Gaussian signals contaminated by AWGN. The derivation of Wiener filtering, based on the so-called orthogonality principle, represents an elegant solution and the only known situation where constraining to linear solutions does not render any sacrifice on the performance.</em> </p> <p>It mentions Gaussian signals. I know Gaussian noise. What is Gaussian signal? I cannot find the definition of Gaussian signal via google. Does it mean one signal is contaminated by Gaussian noise? </p>
<p>The stochastic description of signals is realized by random processes. The book you cite actually speaks of a Gaussian random process. By definition, every random variable drawn from that process has a Gaussian probability density function. Mathematically speaking: let $\mathbf X(n)$ be the random process representing the signal; then for every $n_1$ the random variable $$ X=\mathbf X(n_1) $$ is Gaussian distributed. This in turn means that $X$ has the probability density function $$ p_\mathrm X (x) = \frac{1}{\sqrt{2\pi\sigma^2}}\mathrm e^{-\frac{(x-\mu)^2}{2\sigma^2}}, $$ where $\mu$ and $\sigma$ are the mean and standard deviation of $X$, respectively.</p>
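A quick numerical illustration of the definition (the values of $\mu$ and $\sigma$ are arbitrary): for a white Gaussian process, every sample of a realization is an independent draw from $\mathcal N(\mu, \sigma^2)$, so the empirical mean and standard deviation of a long realization match the parameters of the density above.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 1.5, 0.5              # illustrative mean and standard deviation

# One realization of a white Gaussian signal of length 100000:
# every sample is an independent draw from N(mu, sigma^2).
x = mu + sigma * rng.standard_normal(100_000)
```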
310
signal denoising
FFT denoising vs low pass filter
https://dsp.stackexchange.com/questions/82676/fft-denoising-vs-low-pass-filter
<p>Say I have a noisy signal that I want to denoise.</p> <p>There are two methods I'm considering:</p> <ol> <li>FFT denoising, where I take the FFT of the signal, set a threshold somewhere, attenuate all the coefficients below this threshold, and then take the IFFT.</li> <li>Low pass filtering, which in the Fourier domain is just attenuating all the frequencies above a particular frequency anyway.</li> </ol> <p>My question is, what's the difference? Aren't we effectively selecting the frequencies we want at the end of the day? When would I prefer one over the other? I can only think of using FFT denoising if we know the noise spans ALL of the spectrum, including the lower frequencies.</p>
311
signal denoising
Bibliographic References on Denoising Distributed Acoustic data with Deep Learning
https://dsp.stackexchange.com/questions/85371/bibliographic-references-on-denoising-distributed-acoustic-data-with-deep-learni
<p><strong>Distributed Acoustic Sensing (DAS)</strong></p> <p>I have an iDAS (intelligent distributed acoustic sensing) dataset obtained from an undersea optical fibre. iDAS data have a 2D representation. On one axis we have the channel axis, i.e. the point on the cable at which we measure the strain rate obtained from the backscattered light (Rayleigh backscatter) at that point, and on the other axis we have the sampling points obtained at a fixed frequency in time. Therefore, iDAS data carry both spatial and temporal information. Another way to think of this is by looking at a particular channel; then, for this fixed channel, we obtain a signal which measures the strain rate of the cable with respect to time.</p> <p><strong>Motivation</strong></p> <p>This technology can be used in various applications, e.g. earthquake detection (see [<a href="https://www.nature.com/articles/s41467-020-15824-6" rel="noreferrer">1</a>] and this <a href="https://www.youtube.com/watch?v=Q2XyMu_3fqY" rel="noreferrer">video</a> for example), for detecting volcanic events [<a href="https://www.nature.com/articles/s41467-022-29184-w" rel="noreferrer">3</a>] and many others. However, a big challenge with these datasets is to alleviate the noise that might come from irrelevant events. My aim is to approach this problem via a self-supervised deep learning approach. There are some papers in the literature addressing this approach, such as [<a href="https://pdfs.semanticscholar.org/a7e6/1d3c1f69412a26bb1e6f29518ecec72daef3.pdf" rel="noreferrer">4</a>]. I have verified the approach in [<a href="https://pdfs.semanticscholar.org/a7e6/1d3c1f69412a26bb1e6f29518ecec72daef3.pdf" rel="noreferrer">4</a>] on the datasets that the authors use, and it also works in some other cases. However, I would like to improve the results on a specific dataset.</p> <p><strong>Question</strong></p> <p>Therefore, I would be very pleased if anyone can provide any references, ideas or approaches (e.g. 
different architectures) for this problem. One idea is to approach to this problem via Vision Transformers, e.g. similar to [<a href="https://arxiv.org/pdf/2111.06377.pdf" rel="noreferrer">5</a>]. Also, papers related to signal denoising via Self Supervised techniques might also provide valuable information related to the problem.</p> <p><strong>References</strong></p> <p>[<a href="https://www.nature.com/articles/s41467-020-15824-6" rel="noreferrer">1</a>] <em>Distributed acoustic sensing of microseismic sources and wave propagation in glaciated terrain.</em></p> <p>[<a href="https://www.youtube.com/watch?v=Q2XyMu_3fqY" rel="noreferrer">2</a>] <em>Fiber Optic Seismology In Theory And Practice (Video Webinar on YouTube).</em></p> <p>[<a href="https://www.nature.com/articles/s41467-022-29184-w" rel="noreferrer">3</a>] <em>Fibre optic distributed acoustic sensing of volcanic events.</em></p> <p>[<a href="https://pdfs.semanticscholar.org/a7e6/1d3c1f69412a26bb1e6f29518ecec72daef3.pdf" rel="noreferrer">4</a>] <em>A Self-Supervised Deep Learning Approach for Blind Denoising and Waveform Coherence Enhancement in Distributed Acoustic Sensing Data.</em></p> <p>[<a href="https://arxiv.org/pdf/2111.06377.pdf" rel="noreferrer">5</a>] <em>Masked Autoencoders Are Scalable Vision Learners.</em></p>
<p>I'll compare this to the problem of equalization in optical fibres.</p> <p>Assume a single straight fibre for a start, and neglect noise.</p> <p>There's some localized phenomenon that causes a pressure wave to propagate through the water in which that fibre is suspended, straining it at different positions. Probably, this is overlaid with some slower, less localized effects (basically, plane wavefronts hitting the fibre at different angles).</p> <p>The strain over fibre distance is hence a function of the geometric distance from the source of the phenomenon, potentially different propagation speeds (if we need to assume the water isn't homogeneous w.r.t. that), yielding a different delay between source and strain effect, and the excitation of that source; that means we can model the source –&gt; strain-over-length channel as one linear channel with an array of 1D <em>impulse responses</em> (or, if we want to stay spatially continuous, a 2D impulse response).</p> <p>There are different ways of evaluating the Rayleigh scattering, but we can basically break them all down to sending in a pulse (or a spread pulse) and looking for the echoes (or, if necessary, despreading at the receiver to get a pulse + echoes).</p> <p>So, we can also model this, for any fixed instant, as a (1D) impulse response. The impulse response depends on the strain-over-length; it's basically equivalent. It's a time-dependent function of the strain as described above.</p> <p>In total we get:</p> <ol> <li>Source signal —&gt;</li> <li>Convolution with 2D impulse response (location relative to source × time) —&gt;</li> <li>Convolution with 2D impulse response (length along cable × time) —&gt;</li> <li>observed signal</li> </ol> <p>Therein, 2. and 3. are linear operators of concatenable dimensionality, which we can't really tell apart – so we'll treat them as <em>one</em> unknown linear mapping.</p> <p>(That's, by the way, the polystatic radar problem.)</p> <p>Sadly, neither do you know 1., nor is 4. free of noise. 
This is a bit of an <em>equalizer</em> problem: The <em>one</em> job of communications technology is to establish knowledge of what some source sent at a receiver, under noise, and under an unknown channel. An equalizer's job is to &quot;invert&quot; the channel as well as possible, without amplifying the noise by more than what you learn about the original signal in return, to get the channel out of the equation.</p> <p>In wireless/wired/optical comms, you can often exploit knowledge about the signal (1.) in your equalizer: usually, you build both sides of a communication system, so that you can make the transmitter transmit some wideband fixed sequence whose (conjugate) time-inverse you can directly use to convolve the received signal, finding an approximation of the impulse response of the channel. Or, you make use of statistical properties (&quot;every fixed time interval a complex value of magnitude 1 is sent&quot;, &quot;if I look every fixed time interval <span class="math-container">$T$</span>, the values are uncorrelated&quot; etc.). 
If you can't know such things about the transmit signal, it's called a <em>blind</em> equalizer (or more generally, a blind estimator).</p> <p>If you could find an equalizer that combines knowledge from all your fibers, which would allow you to reconstruct the whole-system impulse response, you'd have built something that you can use to identify, for each fiber, which part of the observation is caused by the phenomenon you're interested in, and which is uncorrelated noise.</p> <p>Now, small problem: I'd assume you have very limited data where you know exactly what part of that (probably pretty large, pretty noisy) set of distance-strain observations is the noise and what the effect of the phenomenon.</p> <p>I'd argue this might be a good time for a GAN, a Generative Adversarial Network.</p> <p>I'll assume you have a basic simulator for the source phenomenon -&gt; strain distribution system.</p> <p>Think of it as first building a discriminator between &quot;real observation&quot; and &quot;simulated observation that's not real&quot;. Think of the discriminator as a black box with a switch between two inputs, either &quot;simulated clean data&quot; or &quot;preprocessed real noisy input&quot;. You don't tell it which input is active; the job of the discriminator is to tell you which.<br> Now, I wrote &quot;preprocessed&quot;: the preprocessor's job is to <em>fool</em> the discriminator into thinking it's dealing with real, no-noise input data. The idea in the end is that this &quot;fooling&quot; so-called <em>generator</em> learns to denoise/de-distort the observation, by trying to trick the discriminator into thinking it's producing clean data.</p> <p>Herein lies the trick: you alternate between training the discriminator and the generator. In the end, you've got a generator that you can use as a preprocessor to remove much noise.</p> <blockquote> <p>Lauinger, Vincent, et al. 
&quot;Blind and Channel-agnostic Equalization Using Adversarial Networks.&quot; arXiv preprint <a href="https://arxiv.org/abs/2209.07277" rel="noreferrer">arXiv:2209.07277</a> (2022)</p> </blockquote> <p>does exactly that to solve the equalization problem in nonlinear fiber optical channels. Here, the generator takes over the role of an equalizer. You would not necessarily learn an equalizer as well, but your problem reminded me enough of this problem that I wanted to draw the parallels, so that you can more easily understand the paper.</p> <p>The nice thing is: you can define the boundaries of what your generator should learn to your own likings. Maybe reproducing plausible strain-over-distance vectors for a bunch of fibers is the thing you need? Fine! Maybe you want to actually do source location, so the output is actually a source-location-over-time estimate vector? Fine!</p>
312
signal denoising
PSNR decreases while denoising
https://dsp.stackexchange.com/questions/89794/psnr-decreases-while-denoising
<p>I currently am trying to portray PSNR's ability to measure noise in an image. The PSNR is said to be high (around 40 dB) if there is a low amount of noise in the 2D signal.</p> <p>In my experiment I have fairly noisy image (artificially added) and through 6 iterations I add more and more denoising strength with <code>fastNlMeansDenoisingColored</code> from <code>opencv</code>.</p> <p><a href="https://i.sstatic.net/7tvVR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7tvVR.png" alt="PSNR and MSE over 6 steps of applying denoising" /></a></p> <p>Obviously we can see that <code>fastNlMeansDenoisingColored</code> smoothes out the image more and more but is that the actual reason why the PSNR decreases?</p> <p>I'm trying to calculate the PSNR by implementing it myself:</p> <pre class="lang-py prettyprint-override"><code>def get_psnr(original_image, denoised_image, R=255): assert original_image.dtype == denoised_image.dtype mse = np.mean((original_image.astype(np.float32) - denoised_image.astype(np.float32)) ** 2) if mse == 0: return np.inf psnr = 20 * np.log10(R / np.sqrt(mse)) return psnr </code></pre> <p>During all the steps I add more denoising effect but in every step I measure the PSNR of the original input image and the denoised image. The <code>MAX_VALUE</code> or <code>R</code> in my case always stays 255 as I don't normalize or standardize my image's pixel values.</p>
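A likely explanation, sketched below on synthetic data (all names and numbers here are mine, not from the question): the PSNR is being computed against the noisy input, so the denoiser is scored on how faithfully it reproduces the noise. Measured against the clean ground truth, the same denoiser raises PSNR; measured against the noisy input, the score drops the further the denoiser moves the image away from the noise.

```python
import numpy as np

def get_psnr(a, b, R=255):
    # PSNR between two same-sized images
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 20 * np.log10(R / np.sqrt(mse))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 150.0                          # a step edge: some image "content"
clean += 50.0
noisy = clean + rng.normal(0, 20, clean.shape)

def box_blur(img):                             # crude stand-in for a denoiser
    out = img.copy()
    out[1:-1, 1:-1] = sum(
        img[1 + dy:63 + dy, 1 + dx:63 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

denoised = box_blur(noisy)

# Against the clean ground truth, denoising raises PSNR...
print(get_psnr(clean, denoised) > get_psnr(clean, noisy))    # True
# ...but the denoised image is now *farther* from the noisy reference
# than from the clean one, so PSNR vs. the noisy input is lower.
print(get_psnr(noisy, denoised) < get_psnr(clean, denoised)) # True
```

So without a clean reference, PSNR against the noisy input mostly tracks how much the denoiser changes the image, not how good the result is.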
313
signal denoising
Best way to measure effectiveness of discrete wavelet denoising?
https://dsp.stackexchange.com/questions/69076/best-way-to-measure-effectiveness-of-discrete-wavelet-denoising
<p>I am using the MATLAB wavelet toolbox to denoise physiological signals. I am plotting the denoised signal on top of the original noisy signal and making sure spikes were not removed, as a measure of effectiveness, as well as plotting the FFT of the two signals (original and denoised).</p> <p>Is this a correct way to check my results of DWT denoising? I saw some papers using mean squared error and SNR as a check.</p>
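MSE and SNR are standard checks when a clean reference signal exists, e.g. on synthetic test data where you added the noise yourself; a minimal sketch (helper names and test signal are mine):

```python
import numpy as np

def snr_db(reference, estimate):
    # SNR of an estimate with respect to a known clean reference, in dB
    noise_power = np.sum((reference - estimate) ** 2)
    return 10 * np.log10(np.sum(reference ** 2) / noise_power)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(0, 0.3, t.size)

# stand-in "denoiser": 5-point moving average
denoised = np.convolve(noisy, np.ones(5) / 5, mode="same")

print(round(snr_db(clean, noisy), 1), round(snr_db(clean, denoised), 1))
```

For real physiological recordings there is no ground truth, so visual and spectral checks like the ones you describe are often the best available.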
314
signal denoising
A self-supervised learning technique to denoise my specific signal
https://dsp.stackexchange.com/questions/84570/a-self-supervised-learning-technique-to-denoise-my-specific-signal
<p>So I work in this domain of biophysics that has to do with light-based detection for measuring small movements of molecules (nanometer and piconewton scale) via a Quadrant Photodiode. This signal contains lots of information but is riddled with noise. One of the challenges is denoising this signal, and while conventional methods such as Savitzky-Golay tend to work well, the fixed cutoff and threshold values that go into them make them less feasible here.</p> <p>Time-series traces from this measurement look like a sawtooth curve, and as the particle moves in space and time, the noise changes (so the noise is not the same everywhere) (<strong>Figure attached below</strong>).</p> <p>My question is - I have noise measurements from this signal (I have recordings where the sawtooth event never happens and only noise is left). Can I train a self-supervised learning method to denoise this signal using my known noise recordings? For example - is there a high-frequency bandpass filter that takes in some noise and can be trained to automatically smooth this curve toward what we might expect the ground truth to be? Is there a better approach? If my question is unclear please let me know and I can provide more information.</p> <p>Thanks.<a href="https://i.sstatic.net/MEM1E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MEM1E.png" alt="Figure1" /></a></p>
<p>You may have a look at the method called <a href="https://ieeexplore.ieee.org/document/9693096" rel="noreferrer">JOT: A Variational Signal Decomposition Into Jump, Oscillation and Trend</a> (You may access it in <a href="https://www.ipol.im/pub/art/2023/417/" rel="noreferrer">A Two Stage Signal Decomposition into Jump, Oscillation and Trend Using ADMM</a>).</p> <p>This method basically does what you're after, it decomposes the signal into 3 signals:</p> <p><a href="https://i.sstatic.net/e4RAq.png" rel="noreferrer"><img src="https://i.sstatic.net/e4RAq.png" alt="enter image description here" /></a></p> <p>You may look on the results of a signal similar to yours:</p> <p><a href="https://i.sstatic.net/9afcP.png" rel="noreferrer"><img src="https://i.sstatic.net/9afcP.png" alt="enter image description here" /></a></p> <p>The method is quite simple if you know ADMM.<br /> In any way, they supply code.</p>
315
signal denoising
understand short time fourier transform
https://dsp.stackexchange.com/questions/83023/understand-short-time-fourier-transform
<p>I am reading this <a href="https://paris.cs.illinois.edu/pubs/liu-interspeech2014.pdf" rel="nofollow noreferrer">paper</a> for signal denoising. In the paper, the authors say</p> <blockquote> <p>The core concept in this paper is to compute a regression between a noisy signal frame and a clean signal frame in the frequency domain. To do so we start with the obvious choice of using frames from a magnitude short-time Fourier transform (STFT). Using these features allows us to abstract many of the phase uncertainties and to focus on ``turning off&quot; parts of the input spectral frames that are purely noise.</p> </blockquote> <p>I don't understand the last sentence and hopefully, someone could explain it to me.</p> <p>Thank you in advance.</p>
<p>To understand:</p> <blockquote> <p>Using these features allows us to abstract many of the phase uncertainties and to focus on ``turning off&quot; parts of the input spectral frames that are purely noise.</p> </blockquote> <p>check out <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.9812&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">reference 6</a> in the linked paper.</p> <p>That reference contains the image:</p> <p><a href="https://i.sstatic.net/lUtS7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lUtS7.png" alt="Figure 4 from referenced paper by Wan and Nelson." /></a></p> <p>As you can see, this paper uses the DFT of the noisy signal, and the spectrum of the noise and &quot;turns off&quot; (subtracts) the one from the other. The phase information from the original signal’s DFT is applied to the results before synthesis of the estimated speech signal is done with the inverse DFT.</p> <hr /> <p>Added to answer rb-j's comment: <span class="math-container">$\hat{P}$</span> is just the power spectrum estimate and <span class="math-container">$\gamma$</span> can be chosen for any number, but they suggest <span class="math-container">$1$</span> for power processing or <span class="math-container">$0.5$</span> for magnitude processing.</p> <p><a href="https://i.sstatic.net/f5ijg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5ijg.png" alt="Explanation of terms in previous image" /></a></p>
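What that referenced figure shows is spectral subtraction. A hedged single-frame numpy sketch of the idea (frame length, noise level, and the averaging over noise-only frames are my choices, not the paper's): estimate the noise magnitude spectrum, subtract it from the noisy frame's magnitudes, floor at zero ("turn off" noise-dominated bins), and reuse the noisy phase when inverting.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
t = np.arange(n)
clean = np.sin(2 * np.pi * 40 * t / n)            # stand-in "speech" tone
noisy = clean + 0.5 * rng.standard_normal(n)

X = np.fft.rfft(noisy)
mag, phase = np.abs(X), np.angle(X)

# noise magnitude estimate, averaged over known noise-only frames
noise_mag = np.mean(
    [np.abs(np.fft.rfft(0.5 * rng.standard_normal(n))) for _ in range(20)],
    axis=0)

# subtract and floor at zero ("turn off" noise-dominated bins),
# then reapply the noisy frame's phase before inverting
clean_mag = np.maximum(mag - noise_mag, 0.0)
estimate = np.fft.irfft(clean_mag * np.exp(1j * phase), n)

print(np.mean((estimate - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

Because only magnitudes are modeled, the phase uncertainty is indeed "abstracted away": the noisy phase is carried over unchanged.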
316
signal denoising
Disadvantages of wavelet transform
https://dsp.stackexchange.com/questions/15148/disadvantages-of-wavelet-transform
<p>I have a question related to wavelet transform: we know that while the Fourier transform is good for a spectral analysis or which frequency components occurred in signal, it will not give information about at which time it happens. That's why the wavelet transform is suitable for the time-frequency analysis. It is also good for signal denoising, but of course it has some disadvantages.</p> <p>So I would like to know what are main advantages of the wavelet transform? Is it good for spectral estimation; like finding amplitudes, frequencies and phases, or it just helps us to find discontinuous and irregularities of a signal?</p> <p>Thanks in advance</p>
<p>If you consider the whole set of potential wavelet transforms, then you have a lot of flexibility. </p> <p>For instance, should you use 1D continuous complex wavelet transforms, by analyzing the modulus and the phase of the scalogram, and provided you use well-chosen wavelets (potentially different for the analysis and the synthesis), and a proper discretization, you can:</p> <ul> <li>find discontinuities and irregularities of a signal and its derivatives <a href="https://i.sstatic.net/jg016.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jg016.jpg" alt="enter image description here"></a></li> <li>find break point locations by wavelet ridge extrapolation</li> <li>denoise</li> <li>perform matched filtering based on templates (with <a href="http://arxiv.org/abs/1108.4674" rel="nofollow noreferrer">complex continuous</a> or <a href="http://arxiv.org/abs/1405.1081" rel="nofollow noreferrer">discrete dual-tree wavelet</a> frames)</li> <li><a href="http://www.scholarpedia.org/article/Wavelet-based_multifractal_analysis" rel="nofollow noreferrer">analyse (multi-)fractality</a></li> <li>analyse frequencies (with Gabor wavelets for instance)</li> </ul> <p>Due to the redundancy, and the quantity of available wavelets (the same one is not best for different purposes), they can appear a little less efficient for the analysis of purely stationary and harmonic signals, for which Fourier is better suited.</p> <p>The main drawbacks are:</p> <ul> <li>for fine analysis, it becomes computationally intensive</li> <li>its discretization, the discrete wavelet transform (computationally efficient), is less natural and less flexible</li> <li>it takes some investment in wavelets to become able to choose the proper ones for a specific purpose, and to implement them correctly.</li> </ul>
317
signal denoising
Help with denoising signal and periodogram analysis resources
https://dsp.stackexchange.com/questions/71917/help-with-denoising-signal-and-periodogram-analysis-resources
<p>This is a cross posting from the crossvalidated stack exchange as I thought this may be a better forum to ask.</p> <p>I have a dataset consisting of respiratory time series signals of different lengths obtained from different groups of patients. I want to either classify or cluster the patients using these timeseries by using the commonalities of the time series of each group. However, I have no experience in dsp.</p> <ol> <li><p>Firstly, I am confused if I am supposed to filter my signals to get rid of any frequencies above the Nyquist frequency. My sampling frequency is 32Hz and my time series is somewhat noisy and has some artifacts. I am also unsure of which filter to select for this.</p> </li> <li><p>Secondly, I wanted to look at the periodogram and the average power spectral density at each frequency within a group - but I am not sure if I understand the periodogram very well - if I have different time series lengths then my periodogram length will vary too, so I am not sure how this comparison can be made.</p> </li> </ol> <p>Being from Pure Math, I know Fourier analysis purely from the perspective of functions and using Fourier transforms to obtain the coefficients that describe the projection of these functions onto an orthonormal system. With periodograms however, I noticed that the x-axis represents sample frequencies. I am confused with the distinction between sampling frequencies vs. underlying frequencies of the generating function (say I have <span class="math-container">$\sin(2\pi x)$</span> sampled at 10Hz, does the periodogram characterize the 1Hz underlying frequency of the function?)</p> <p>Any resources on understanding how to analyze and remove noisy components of time signals from a machine learning perspective would be much appreciated! Due to time constraints, I have shied away from long textbooks on digital signal processing. Thanks a lot.</p>
<blockquote> <p>Firstly, I am confused if I am supposed to filter my signals to get rid of any frequencies above the Nyquist frequency. My sampling frequency is 32Hz and my time series is somewhat noisy and has some artifacts. I am also unsure of which filter to select for this.</p> </blockquote> <p>That ship has sailed.</p> <p>Let <span class="math-container">$S=\left\{\left.\alpha e ^{i(\omega t+\varphi)}\right|\alpha &gt; 0, 0\le \varphi &lt;2\pi, \omega \in \mathbb R\right\}$</span>, i.e. the set of all distinct complex sinusoid with an amplitude, frequency and phase.</p> <p>Then <span class="math-container">$(S,\cdot)$</span> is a commutative semigroup (proof trivial).</p> <p>Introducing the equivalence relation <span class="math-container">$\sim: a\sim b \iff a\left(\frac{n}{r}\right)= b\left(\frac{n}{r}\right) \forall n\in\mathbb Z$</span> (&quot;two signals are identical after sampling with rate <span class="math-container">$r$</span>&quot;), we see that signals <span class="math-container">$s_l=\alpha e^{i(\omega_l t + \varphi)}, l=1,2,\ldots$</span> are <span class="math-container">$s_1\sim s_2$</span> if <span class="math-container">$\frac{\omega_1-\omega_2}{2\pi}=nr, n\in\mathbb Z$</span>, i.e. we can't tell signals apart after sampling if their frequency differed by a multiple of the sampling rate.</p> <p>Let's formalize this: <span class="math-container">$T=\left\{\alpha e^{i([ft+\phi]\mod 2\pi)}\right\}$</span> is a quotient semigroup of <span class="math-container">$S$</span>, i.e. 
<span class="math-container">$T\preceq S$</span>, and each of the elements is a leader of a <span class="math-container">$\sim$</span>-equivalence class – and this homomorphism <span class="math-container">$S\mapsto T$</span> is in fact sampling, which, as we can see above, is not bijective.</p> <p>Hence, all the original frequency components from <span class="math-container">$S$</span> got mapped to some component from <span class="math-container">$T$</span> with frequency normalized to the sampling frequency. That mapping is called <em>aliasing</em>.</p> <p>What an anti-alias filter <span class="math-container">$h$</span> does is</p> <p><span class="math-container">$$h(s): S\mapsto S, h=\begin{cases} s, &amp; f&lt; f_\text{nyquist}\\ 0 &amp; \text{else,} \end{cases} $$</span></p> <p>and as you'll figure out when inserting <span class="math-container">$h(s)$</span> above, this yields only the elements that are not aliased to a different frequency.</p> <p>Thus, everything that &quot;survives&quot; <span class="math-container">$h$</span> will also &quot;survive&quot; aliasing without undergoing a change in frequency.</p> <p>So, if you needed an anti-aliasing filter, it's now too late. Go and do your recording again.</p> <blockquote> <p>Secondly, I wanted to look at the periodogram and the average power spectral density at each frequency within a group - but I am not sure if I understand the periodogram very well - if I have different time series lengths then my periodogram length will vary too, so I am not sure how this comparison can be made.</p> </blockquote> <p>In that case, the periodogram is not a useful mapping on its own – you'll need to add something like a truncation / padding operation to bring all signals to the same duration, for example. At which point the periodogram doesn't seem to be a sensible approach anymore.</p>
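The aliasing argument above is easy to check numerically: two complex tones whose frequencies differ by exactly the sample rate produce identical samples, so no processing after sampling can tell them apart.

```python
import numpy as np

fs = 32.0                      # sample rate (Hz), as in the question
n = np.arange(256)             # sample indices; sample times are n / fs
f1, f2 = 3.0, 3.0 + fs         # two frequencies differing by exactly fs

s1 = np.exp(2j * np.pi * f1 * n / fs)
s2 = np.exp(2j * np.pi * f2 * n / fs)

# identical after sampling: the 35 Hz tone aliases onto 3 Hz
print(np.allclose(s1, s2))     # True
```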
318
signal denoising
Solving Convex Optimization Problem Used for High Quality Denoising
https://dsp.stackexchange.com/questions/3465/solving-convex-optimization-problem-used-for-high-quality-denoising
<p>The highest voted answer to <a href="https://dsp.stackexchange.com/questions/3066/bag-of-tricks-for-denoising-signals-while-maintaining-sharp-transitions">this question</a> suggests that to denoise a signal while preserving sharp transitions one should </p> <blockquote> <p>minimize the objective function:</p> <p>$$ |x-y|^2 + b|f(y)| $$</p> <p>where $x$ is the noisy signal, $y$ is the denoised signal, $b$ is the regularization parameter, and $|f(y)|$ is some L1 norm penalty. Denoising is accomplished by finding the solution $y$ to this optimization problem, and $b$ depends on the noise level.</p> </blockquote> <p>However, there is no indication of how one might accomplish that in practice as that is a problem in a <em>very</em> high dimensional space, especially if the signal is e.g. 10 million samples long. In practice how is this sort of problem solved computationally for large signals? </p>
<p>Boyd has <a href="http://www.stanford.edu/~boyd/l1_ls/l1_ls_usrguide.pdf" rel="nofollow">A Matlab Solver for Large-Scale ℓ1-Regularized Least Squares Problems</a>. The problem formulation in there is slightly different, but the method can be applied to this problem.</p> <p>The <a href="http://eeweb.poly.edu/iselesni/teaching/lecture_notes/sparse_signal_restoration.pdf" rel="nofollow">classical majorization-minimization approach</a> also works well. This corresponds to iteratively performing soft-thresholding (<a href="http://eeweb.poly.edu/iselesni/teaching/lecture_notes/TV_filtering.pdf" rel="nofollow">for TV, clipping</a>).</p> <p>The solutions can be seen from the links. However, there are many methods to minimize these functionals by the extensive use of the optimisation literature.</p> <p>PS: As mentioned in other comments, FISTA will work well. Another 'very fast' algorithm family is primal-dual algorithms. You can see the <a href="http://www.cmap.polytechnique.fr/preprint/repository/685.pdf" rel="nofollow">interesting paper of Chambolle</a> for an example; there is a plethora of research papers on primal-dual methods for linear inverse problem formulations.</p>
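The common building block in all of these solvers is the soft-threshold (shrinkage) operator. For the simplest separable case, $f(y)=y$, the minimizer is available in closed form, and iterative solvers like ISTA/FISTA just apply this shrinkage once per iteration after a gradient step. A small numpy sketch (example values are mine):

```python
import numpy as np

def soft(v, t):
    # soft-threshold: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# For min_y (x - y)^2 + b*|y|, applied per sample, the closed-form
# solution is y = soft(x, b/2).
x = np.array([-3.0, -0.4, 0.0, 0.2, 5.0])
b = 1.0
y = soft(x, b / 2)
print(y)   # solution: [-2.5, 0, 0, 0, 4.5]
```

Entries smaller than the threshold are set exactly to zero, which is why these penalties preserve large jumps while killing small (noise-sized) coefficients.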
319
signal denoising
Using Total Variation Denoising to Clean Accelerometer Data
https://dsp.stackexchange.com/questions/14968/using-total-variation-denoising-to-clean-accelerometer-data
<p>I know this is maybe a very basic question but I am doing this as a hobby and I can't find a solution to this problem. Basically I am trying to remove some noise from data I am reading from an accelerometer. This is what I want to achieve (<a href="http://eeweb.poly.edu/iselesni/lecture_notes/TVDmm/TVDmm.pdf" rel="nofollow noreferrer">Taken from Total Variation Denoising (An MM algorithm)</a>):</p> <p><img src="https://i.sstatic.net/3pc1B.png" alt="enter image description here"></p> <p>I read in <a href="https://dsp.stackexchange.com/questions/3026/picking-the-correct-filter-for-accelerometer-data">Picking the correct filter for accelerometer data</a> that Total Variaton Denoising would fit my needs. So I read <a href="http://en.wikipedia.org/wiki/Total_variation_denoising" rel="nofollow noreferrer">Wikipedia - Total Variation Denoising</a> article from Wikipedia and I think I have to use one of this equations:</p> <p><img src="https://i.sstatic.net/d4eiM.png" alt="enter image description here"></p> <p><img src="https://i.sstatic.net/Nyjec.png" alt="enter image description here"></p> <p>But I don't understand how I apply this to my signal. Suppose I have a set of x,y points like in the plots above, how I apply the equation to that data? I implemented some simple low-pass and high-pass filters like this:</p> <pre><code>gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0]; </code></pre> <p>But this is maybe too complex and I don't know where to start or how. I want to implement this in Java or C so Matlab is not an option (I have seen a lot of MatLab implementing this). I will appreciate any help to guide me in the right direction!</p>
<p>Apart from Total Variation Denoising you could try a first much simpler approach: a <a href="http://en.wikipedia.org/wiki/Median_filter" rel="noreferrer">median-filter</a>. You just move a window along your data and replace the current input value by the <a href="http://en.wikipedia.org/wiki/Median#The_sample_median" rel="noreferrer">median</a> of all data in the window. You just have to optimize the window length (by experimenting).</p> <p>By the way, the equations you copied into your question are for 2-dimensional data, but your data are 1-D.</p>
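A minimal sketch of that sliding-window median in numpy (it translates one-to-one to Java or C; the window length and test signal below are my choices):

```python
import numpy as np

def median_filter(x, window=5):
    # moving median; window should be odd so the output stays centered
    half = window // 2
    padded = np.pad(x, half, mode="edge")    # repeat edge samples at the ends
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

rng = np.random.default_rng(3)
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
signal[50] += 5.0                            # an isolated spike

filtered = median_filter(signal, window=5)
print(round(signal[50], 2), round(filtered[50], 2))  # spike in, spike out
```

Unlike a moving average, the median rejects the isolated outlier completely instead of spreading it over the window.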
320
signal denoising
Online Noise Reduction on Non-Stationary, Broadband Signals
https://dsp.stackexchange.com/questions/91556/online-noise-reduction-on-non-stationary-broadband-signals
<p>I'm capturing ultrasonic waves using an analog MEMS mic and other components on a PCB. My signal is similarly broadband and non-stationary (see <a href="https://innovatus-pub.github.io/abstractpublications_archive/2020/paper1_pdf.pdf" rel="nofollow noreferrer">2</a>), and I'm trying to remove/reduce environmental noise. I have seen this <a href="https://dsp.stackexchange.com/questions/52691/real-time-background-noise-reduction-or-removal">post</a>, but according to my research, the preferred way to do this is using the (discrete) wavelet transform (DWT). However, the studies I read don't explain HOW they implemented the DWT for signal denoising. Is there a good reference that explains how to do this in Python?</p> <p>Since I'm developing a system which must work online (e.g. in real time) at industrial conditions, it is necessary to estimate the noise profile periodically (since machines can be turned on/off at different times). How can I achieve this?</p>
321
signal denoising
Can discrete wavelet transform for denoising purposes be implemented in real time?
https://dsp.stackexchange.com/questions/68400/can-discrete-wavelet-transform-for-denoising-purposes-be-implemented-in-real-tim
<p>I have been researching effective algorithms for denoising biomedical signals (non-stationary) that can be implemented in real time either using an FPGA or a DSP. I came across many suggestions for algorithms and their effectiveness, but found the DWT to be the one that can remove most of the noise types I am interested in (power line interference, surges, power dips...). I used the wavelet toolbox in MATLAB and performed denoising, and wanted to make sure that this method has been implemented effectively in real time before. I read a few papers too but want more opinions.</p>
<p>Even though I'm not much of a wavelet expert, I can answer with a definitive <strong>YES</strong>!!</p> <p>Now, do you have any messy details to complain about, like the size of the box, the power that it consumes, or the number of frames of delay between the taking of the image and its display? Because that's your real problem.</p> <p>Take <em>the particular wavelet transform you want to use</em>. Sit down and <em>with pencil and paper</em> figure out how many calculations (floating point operations, basically multiplies and adds) that need to be performed per frame. Look to see if the transform you want to use works with the fast wavelet transform. Multiply the FLOP per frame by your frame rate -- you should have an answer in MFLOPS or GFLOPS. Now hold that up against available processors or FPGAs, and figure out what needs to be in your box. Then multiply by three, because it always takes more real estate to do a calculation than you think.</p>
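A back-of-envelope version of that pencil-and-paper exercise, with purely illustrative placeholder numbers (none of them come from the question): for a fast wavelet transform with length-L filters, each decomposition level costs on the order of 2·L multiply-adds per input sample of that level, and the dyadic downsampling makes the sum over all levels converge to less than twice the first level's cost.

```python
# Rough real-time budget estimate for a DWT denoiser.
# All numbers below are made-up placeholders for illustration.
fs = 500            # samples per second per channel (e.g. a biomedical signal)
channels = 8
filter_len = 8      # e.g. db4 has 8 taps
levels_factor = 2   # sum over dyadic levels: N + N/2 + N/4 + ... < 2N

# analysis + synthesis (2), lowpass + highpass filters (2),
# 1 multiply + 1 add per tap (2)
flops_per_sample = 2 * 2 * 2 * filter_len * levels_factor
total_flops = flops_per_sample * fs * channels

print(f"{total_flops / 1e6:.2f} MFLOPS")
```

Then, as the answer says, multiply by roughly three for real-world overhead before picking a DSP or FPGA.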
322
signal denoising
How do I know if my EEG signal need denoising?
https://dsp.stackexchange.com/questions/45527/how-do-i-know-if-my-eeg-signal-need-denoising
<p>I recently started working on a sleep study.</p> <p>For my research I downloaded sleep EEG data from PhysioNet. The EEG data has a 100 Hz sampling rate and was recorded from 2 bipolar EEG sites.</p> <p>When I started the preprocessing stage, I encountered a simple problem:</p> <p>How would I know if my signal has artifacts or noise?</p> <p>It should be noted that, based on the Nyquist theorem and my signal's sampling rate, the maximum frequency of my signal is 50 Hz, so I did not filter out unnecessary EEG frequencies.</p> <p>In general I only used a simple notch filter at 50 Hz, and used a simple threshold method in order to remove the epochs that were grossly contaminated by muscle and/or eye movement artifacts.</p> <p>Back to the main question: how should I know if I need to use more complicated methods for removing EMG or EOG artifacts from my signal?</p>
<p>It primarily relies on:</p> <ul> <li><strong>what you are looking for in the EEG</strong>: define the frequency band that relates to the phenomena you are interested in investigating and filter out the rest (e.g. for sleep events bandpass ~= 0.8-30Hz).</li> <li><strong>the power line frequency of your area</strong>: notch filter at 50Hz or 60Hz.</li> <li><strong>study design</strong>: if you are investigating events, ensemble averages will remove random noise processes and highlight potential EEG events of interest.</li> </ul> <p>Moreover, you can apply baseline removal to de-noise your signal from saturation trends and other systematic artefacts. Other experimental methods include ICA, which you will not be able to apply robustly since you only have two channels.</p>
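A crude frequency-domain sketch of that recipe on a synthetic epoch (brick-wall masking of FFT bins; in real preprocessing you would use proper FIR/IIR filters, and all the numbers below are illustrative):

```python
import numpy as np

fs, dur = 100.0, 10.0                       # 100 Hz "EEG", 10 s epoch
t = np.arange(int(fs * dur)) / fs
eeg = (np.sin(2 * np.pi * 10 * t)          # alpha-band activity to keep
       + np.cos(2 * np.pi * 50 * t)        # mains interference to remove
       + np.sin(2 * np.pi * 0.2 * t))      # slow drift to remove

X = np.fft.rfft(eeg)
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
keep = (freqs >= 0.8) & (freqs <= 30)      # crude 0.8-30 Hz "bandpass"
cleaned = np.fft.irfft(X * keep, eeg.size)

def band_power(x, f0, width=1.0):
    S = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, 1 / fs)
    return S[(f > f0 - width) & (f < f0 + width)].sum()

print(band_power(cleaned, 50) < 1e-6 * band_power(cleaned, 10))  # mains gone
```

The 10 Hz component survives untouched while the drift and the 50 Hz line are zeroed out.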
323
signal denoising
Bag of Tricks for Denoising Signals While Maintaining Sharp Transitions
https://dsp.stackexchange.com/questions/3066/bag-of-tricks-for-denoising-signals-while-maintaining-sharp-transitions
<p>I know this is signal dependent, but when facing a new noisy signal what is your bag of tricks for trying to denoise a signal while maintaining sharp transitions (e.g. so any sort of simple averaging, i.e. convolving with a gaussian, is out). I often find myself facing this question and don't feel like I know what I ought to be trying (besides splines, but they can seriously knock down the right kind of sharp transition as well).</p> <p>P.S. As a side note, if you know some good methods using wavelets, let me know what it is. Seems like they have a lot of potential in this area, but while there are some papers in the 90s with enough citations to suggest the paper's method turned out well, I can't find anything about what methods ended up winning out as top candidates in the intervening years. Surely some methods turned out to be generally "first things to try" since then.</p>
<p>L1 norm minimization (compressed sensing) can do a relatively better job than conventional Fourier denoising in terms of preserving edges.</p> <p>The procedure is to minimize an objective function</p> <p>$$ |x-y|^2 + b|f(y)| $$</p> <p>where $x$ is the noisy signal, $y$ is the denoised signal, $b$ is the regularization parameter, and $|f(y)|$ is some L1 norm penalty. Denoising is accomplished by finding the solution $y$ to this optimization problem, and $b$ depends on the noise level.</p> <p>To preserve edges, depending on the signal $y$, you can choose different penalties such that $f(y)$ is sparse (the spirit of compressed sensing):</p> <ul> <li><p>if $y$ is piece-wise constant, $f(y)$ can be the <a href="http://en.wikipedia.org/wiki/Total_variation" rel="nofollow">total variation</a> (TV) penalty;</p></li> <li><p>if $y$ is curve-like (e.g. a Sinogram), $f(y)$ can be the expansion coefficients of $y$ with respect to <a href="http://en.wikipedia.org/wiki/Curvelet" rel="nofollow">curvelets</a>. (This is for 2D/3D signals, not 1D);</p></li> <li><p>if $y$ has isotropic singularities (edges), $f(y)$ can be the expansion coefficients of $y$ with respect to <a href="http://en.wikipedia.org/wiki/Wavelet" rel="nofollow">wavelets</a>.</p></li> </ul> <p>When $f(y)$ is the expansion coefficients with respect to some basis functions (like curvelets/wavelets above), solving the optimization problem is equivalent to thresholding the expansion coefficients.</p> <p>Note this approach can also be applied to deconvolution, in which case the objective function becomes $|x-Hy|^2 + b|f(y)|$, where $H$ is the convolution operator.</p>
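For the 1D piecewise-constant case (TV penalty), one compact solver is the "iterative clipping" majorization-minimization algorithm from Selesnick's TV lecture notes; a numpy sketch (λ, the iteration count, and the test signal are my choices):

```python
import numpy as np

def tv_denoise(x, lam, iters=300):
    """1D total-variation denoising, min_y ||y - x||^2 + lam * sum|diff(y)|,
    via the iterative-clipping MM algorithm (Selesnick's TV notes)."""
    z = np.zeros(len(x) - 1)
    alpha = 4.0                                        # >= max eigenvalue of D D^T
    for _ in range(iters):
        y = x - np.r_[-z[0], z[:-1] - z[1:], z[-1]]    # y = x - D^T z
        z = np.clip(z + np.diff(y) / alpha, -lam / 2, lam / 2)
    return x - np.r_[-z[0], z[:-1] - z[1:], z[-1]]

rng = np.random.default_rng(4)
clean = np.r_[np.zeros(100), np.ones(100)]             # one sharp step
noisy = clean + 0.1 * rng.standard_normal(200)
denoised = tv_denoise(noisy, lam=1.0)

print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

The step edge survives almost intact while the noise between the jumps is flattened, which is exactly the "preserve sharp transitions" behavior the question asks for.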
324
signal denoising
Applying Wavelet on energy disaggregation as denoising technique to remove uncertainties
https://dsp.stackexchange.com/questions/62904/applying-wavelet-on-energy-disaggregation-as-denoising-technique-to-remove-uncer
<p>I would like to apply Wavelet in <strong>MATLAB</strong> as a denoising technique on Non-Intrusive Load Monitoring data. The data was captured by sensors on each appliance and one sensor on the smart meter (normally called aggregated data). The dataset contains the following:</p> <ol> <li>timestamp</li> <li>Aggregate</li> <li>and Appliance from 1 - 9</li> </ol> <p><strong>Why Denoising and with Wavelets</strong> </p> <p>Research has shown that denoising signals before passing them through your machine learning algorithm gives better results. This can improve prediction accuracy. And I am interested in investigating the use of wavelets to remove uncertainties since my dataset is in the time domain.</p> <p>This is a sample dataset</p> <p><a href="https://i.sstatic.net/Lo1W3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lo1W3.png" alt="NILM Dataset of a household "></a></p> <p><strong>Approach Used</strong></p> <pre><code>function allXd = denoiseSignalToCsvFile(filePath,saveAs)
    allData = csvread(filePath,1,1);
    allXd = [];
    %allX = [];
    [row,col] = size(allData);
    % disp(col);
    % disp(row);
    for applicantIndex = 2:col
        % disp(applicantIndex)
        x = allData(:,applicantIndex);
        xd = waveletTransform(x);
        allXd = [allXd; xd];
        %allX = [allX; {xd}];
    end
    allXd = allXd';
    csvwrite(saveAs,allXd);
end

function denoisedOutput = waveletTransform(input)
    % Wavelet function
    wname = 'db7';
    % number of levels
    n = 4;
    denoisedOutput = cmddenoise(input,wname,n);
    %denoisedOutput = wdenoise(input,n);
end
</code></pre> <p><strong>Question</strong>:</p> <ol> <li>Is the approach correct?</li> </ol> <p>Thank you.</p>
325
signal denoising
Denoising approach for a combination of several ADC voltage channels
https://dsp.stackexchange.com/questions/24360/denoising-approach-for-a-combination-of-several-adc-voltage-channles
<p>I have 2 ADC channels of constant voltage measurement with a small amount of high-frequency noise only and no low-frequency oscillations. The final signal should be the simple sum of the two, denoised with a moving average at the end. Is moving-average denoising of each separate channel before the final summation more effective against noise than summing first and applying the moving average afterwards?</p> <pre><code>channel_1_noisy = channel_1_clean + noise_1
channel_2_noisy = channel_2_clean + noise_2
sum_noisy = channel_1_noisy + channel_2_noisy
sum_denoised = moving_average(sum_noisy)
</code></pre> <p>Or is it worth sacrificing computation time for 2 moving-average filtering operations?</p> <pre><code>channel_1_denoised = moving_average(channel_1_noisy)
channel_2_denoised = moving_average(channel_2_noisy)
sum_denoised = channel_1_denoised + channel_2_denoised
</code></pre> <p>With more channels, will the effect be exaggerated?</p>
<p>According to your processing chains (filter --> sum vs sum --> filter) the two approaches, by being LTI systems, should produce the same results under infinite precision arithmetic. You can show this mathematically by using the properties of convolution operation.</p> <p>Note that a moving average filter is just an ordinary LTI system.</p> <p>Note also that nonlinearities such as quantization operations may affect the above fact's validity.</p>
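The equivalence is easy to verify numerically: convolution distributes over addition, so the two processing chains match to floating-point precision (synthetic channel data below is mine):

```python
import numpy as np

rng = np.random.default_rng(5)
ch1 = 2.5 + 0.01 * rng.standard_normal(1000)   # channel 1: DC + HF noise
ch2 = 1.2 + 0.01 * rng.standard_normal(1000)   # channel 2: DC + HF noise

def moving_average(x, n=16):
    return np.convolve(x, np.ones(n) / n, mode="valid")

# filter-then-sum vs. sum-then-filter: identical for an LTI filter
a = moving_average(ch1) + moving_average(ch2)
b = moving_average(ch1 + ch2)
print(np.allclose(a, b))   # True (up to floating-point rounding)
```

So summing first is the cheaper choice with no loss in noise performance, for any number of channels.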
326
signal denoising
Denoising / thresholding via wavelets
https://dsp.stackexchange.com/questions/25519/denoising-thresholding-via-wavelets
<p>I apply different thresholding and wavelet denoising functions to a non-stationary time series that has been detrended via Loess regression and demeaned. I expect that submitting this processed series to denoising / thresholding will result in a clean series with smaller values than the submitted signal, and not with values at any point that lie above those of the signal. Is my thinking correct? </p> <p>On the other hand, could the points where the denoised series lies above the processed signal be read as saying that the processed signal ought to have been above those levels rather than below them? Of course, the function that describes the signal is an approximation, so fitting errors are to be expected. </p>
<blockquote> <p>I expect that when this processed series is submitted to denoising / thresholding, the result will be a clean series with smaller values than the submitted signal</p> </blockquote> <p>No, you can get values that are greater. For example, consider the Fourier series of a signal; the Fourier basis functions play the same role there that wavelets do in a wavelet expansion. If we approximate the signal with only a few of the Fourier series coefficients, you will get values that lie above the original signal. </p> <p>For example (image from Wikipedia):</p> <p><a href="https://i.sstatic.net/u3YG1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u3YG1.png" alt="enter image description here"></a></p>
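<p>A minimal sketch of this effect (the Gibbs overshoot of a truncated Fourier approximation): the square wave is bounded by 1, yet the few-term approximation exceeds it.</p>

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 10000, endpoint=False)
square = np.sign(np.sin(t))  # target signal, |square| <= 1 everywhere

# Partial Fourier series of the square wave: first 5 odd harmonics
approx = sum(4 / (np.pi * k) * np.sin(k * t) for k in range(1, 10, 2))

# The truncated approximation overshoots the signal's maximum of 1
print(approx.max())
```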
327
signal denoising
How can I denoise this signal?
https://dsp.stackexchange.com/questions/93113/how-can-i-denoise-this-signal
<p>I have data captured by a wireless sensor that is noisy. It randomly jumps in value frequently, and I want to know what this signal will look like without these jumps. I am looking for an elegant signal processing technique to do this, if one exists. Below is the time-series signal:</p> <p><a href="https://i.sstatic.net/Vadbn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vadbn.png" alt="Time-series signal from sensor, captured at 8 Hz" /></a></p> <p>I initially thought to do a Fourier Transform to see whether there is some frequency I can filter out. The FT looks like this: <a href="https://i.sstatic.net/4EMco.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4EMco.png" alt="Fourier Transform of the same signal" /></a></p> <p>Applying a LPF with a cutoff frequency of 2.5 Hz to try getting rid of the 3 Hz signal doesn't yield what I want. It just smooths out the signal and I lose most of the important underlying information that I care about. Using a 10th order Butterworth LPF, with fc = 2 Hz, I get the following signal: <a href="https://i.sstatic.net/ybZov.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ybZov.png" alt="Filtering out 3 Hz signal to try recovering underlying signal" /></a></p> <p>As you can tell, I'm not very well-versed in signal processing, which is why I've come to you.</p> <p>How can I denoise this signal and get rid of the random spikes?</p>
<p>Note that it appears that the interference is isolated to 1 Hz and its higher harmonics. We can implement a multiband notch filter easily to reject those specific frequencies while minimizing impact to the desired signal.</p> <p>A harmonic notch filter is simplified (elegant) when the sampling rate is also a harmonic (integer multiple of 1 Hz in this case) and given by the transfer function:</p> <p><span class="math-container">$$H(z) = \frac{1+\alpha^N}{2}\frac{1-z^{-N}}{1-\alpha^Nz^{-N}}$$</span></p> <p>Where the closer <span class="math-container">$\alpha$</span> is to <span class="math-container">$1$</span>, the higher the &quot;Q&quot; of the filter, meaning tighter notches. This will produce periodic notches at <span class="math-container">$f_s/N$</span> where <span class="math-container">$f_s$</span> is the sampling rate. A possible implementation is shown below, where <span class="math-container">$z^{-N}$</span> indicates a delay of <span class="math-container">$N$</span> samples.</p> <p><a href="https://i.sstatic.net/v6MHV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v6MHV.png" alt="Implementation" /></a></p> <p>Note that for <span class="math-container">$\alpha$</span> close to <span class="math-container">$1$</span>, <span class="math-container">$(1+\alpha^N)/2 \approx 1$</span> and the first multiplier can be eliminated with minimum consequence. 
(That is not the case with the second multiplier, and has implications on the importance of precision used in this &quot;leaky accumulator&quot; section.)</p> <p>Here is an example frequency response for <span class="math-container">$\alpha=0.99$</span>, <span class="math-container">$f_s=10$</span>, and <span class="math-container">$N=10$</span>:</p> <p><a href="https://i.sstatic.net/6CEnL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6CEnL.png" alt="Notch Filter Frequency Response" /></a></p> <p>For sampling rates that are not conveniently a multiple of the notch frequency, the integer sampling delays given by <span class="math-container">$z^{-N}$</span> with <span class="math-container">$N$</span> as an integer can be replaced with fractional delay all-pass filter elements (so <span class="math-container">$z^{-\tau}$</span> with <span class="math-container">$\tau$</span> as any positive real number).</p> <p>I detail the derivation of the harmonic notch filter at this other post with further details and other options on harmonic rejection filters:</p> <p><a href="https://dsp.stackexchange.com/a/52728/21048">https://dsp.stackexchange.com/a/52728/21048</a></p>
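<p>As a sketch (not part of the original answer), the transfer function above maps directly onto rational filter coefficients usable with SciPy; the parameter values below assume the example case <span class="math-container">$\alpha=0.99$</span>, <span class="math-container">$f_s=10$</span>, <span class="math-container">$N=10$</span>:</p>

```python
import numpy as np
from scipy import signal

fs = 10.0     # sampling rate (Hz), assumed an integer multiple of the 1 Hz notch spacing
N = 10        # fs / N = 1 Hz notch spacing
alpha = 0.99  # closer to 1 => higher-Q (narrower) notches

k = (1 + alpha**N) / 2
b = k * np.array([1.0] + [0.0] * (N - 1) + [-1.0])   # numerator: k * (1 - z^-N)
a = np.array([1.0] + [0.0] * (N - 1) + [-alpha**N])  # denominator: 1 - alpha^N z^-N

# Evaluate the response at DC (a notch) and halfway between notches (passband)
w, h = signal.freqz(b, a, worN=[0.0, np.pi / N])
print(np.abs(h))  # ~[0, 1]: full rejection at the notch, unity gain between notches
```

To filter a signal, apply <code>signal.lfilter(b, a, x)</code>.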
328
signal denoising
Denoising a digitalized electric signal with spikes (probabaly due to EMI)
https://dsp.stackexchange.com/questions/96061/denoising-a-digitalized-electric-signal-with-spikes-probabaly-due-to-emi
<p><strong>Statement of the problem:</strong></p> <p>I used a MEMS sensor to detect break signals in a concrete beam. The sensor was attached to an L-shaped steel plate, which was glued to the concrete surface. The sensor was connected to a preamplifier through a BNC cable. To achieve a better SNR, the sensor was shielded by connecting the L-shaped steel plate and the preamplifier case. The test setup is shown as follows:</p> <p><a href="https://i.sstatic.net/WaH6dawXm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WaH6dawXm.jpg" alt="Test setup" /></a> <a href="https://i.sstatic.net/LRn2XqYdm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRn2XqYdm.jpg" alt="Test setup in-situ" /></a></p> <p>During one experiment the shielding cable was loose and did not work properly. As a result I obtained a noisy signal, probably due to electromagnetic interference (correct me if I am wrong). The noisy signal is shown as follows: <a href="https://i.sstatic.net/BH4tFiMzl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BH4tFiMzl.png" alt="Noisy signal" /></a></p> <p><strong>What I tried:</strong></p> <p>The purpose is to reduce the spikes in the noisy signal. First, I tried a median filter. The sampling rate of the signal was 10 MHz. For the median filter, a window size of 501 samples was chosen (i.e. 0.05 ms). The code in Python is displayed as follows:</p> <pre><code>from scipy import signal t, amp = time_series # time_series is a tuple containing time and amplitudes of the signal amp_med_filt = signal.medfilt(amp, kernel_size=501) diff = abs(amp-amp_med_filt) threshold = 10 mask = diff &gt; threshold amp_filt = amp.copy() amp_filt[mask] = amp_med_filt[mask] </code></pre> <p>What I did in this code is basically apply the median filter with a window size of 501 and take the result as the baseline. Subtracting the baseline from the original signal amplitudes gives me the difference array between them. A threshold of 10 mV was defined. 
Wherever a value in the difference array exceeds this threshold, it is marked and consequently replaced by the value of the baseline. Details can be found in this <a href="https://stackoverflow.com/questions/48752282/de-spiking-a-non-periodic-signal">Link</a> (the answer provided by @Mad Physicist).</p> <p>Here is what I achieved:</p> <p><a href="https://i.sstatic.net/CWbqYCrkl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CWbqYCrkl.png" alt="Denoised signal 1" /></a></p> <p>It looks much better but there are still many spikes, especially between 5 and 15 ms. So I thought that another median filtering pass would help, and added a second filtering using the following code:</p> <pre><code>amp_med_2filt = signal.medfilt(amp_filt, kernel_size=51) </code></pre> <p>And it looks indeed better:</p> <p><a href="https://i.sstatic.net/gfLs0TIzl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gfLs0TIzl.png" alt="Denoised Signal 2" /></a> I tried to add more median filtering but it does not help further reducing the spikes.</p> <p><strong>What I want:</strong> What I want is a signal with fewer spikes, especially in the later time range, without changing the signal form too much. Does anyone have any advice?</p> <p><strong>Edit 1:</strong> As suggested by @Hilmar, I add here some information about the signal and noise. A 'normal' signal without large noise should look like this: <a href="https://i.sstatic.net/Z4Eda6Qml.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4Eda6Qml.png" alt="Normal signal with good SNR" /></a></p> <p>As can be seen, the signal is mainly concentrated in the low frequency range below 50 kHz. There are some spikes at certain frequencies. The noise is basically unknown, but the spikes look like typical EMI artifacts.</p>
<p>As indicated in my comments: good denoising starts with an analysis of signal and noise and exploits as many identifiable properties as possible.</p> <p>Taking a casual look at the spectra, I would say that the noise has a lot more high-frequency content than the signal, so even a simple lowpass filter could improve things considerably.</p> <p>The noise has a huge spike close to 100 kHz which seems entirely absent from the signal. This could be an indicator that the spikes are equidistant in the time domain, as would happen if the EMI noise comes from a clock signal or other periodic signal nearby.</p> <p>If that's the case, you could try to identify the time slots where spikes are likely to occur and then replace each spike with the average of its slot (which may be a few samples wide).</p> <p>If the spikes all have the same shape, you could try to isolate a clean version of the spike. By cross-correlating the clean spike with the noisy signal, you can find the time of occurrence and amplitude of each spike occurrence and then simply subtract them out.</p> <p>Since the spike clock and your signal clock are likely not phase-locked, you may have to do some synchronization first or just up-sample to a very high sample rate (if you can afford that).</p>
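<p>A minimal sketch of the "replace the spike slot with a local average" idea, assuming (purely for illustration, not taken from the actual data) single-sample spikes with known, equidistant timing:</p>

```python
import numpy as np

t = np.arange(5000)
clean = np.sin(2 * np.pi * 0.001 * t)      # smooth "true" signal
noisy = clean.copy()
spike_idx = np.arange(50, 5000, 100)        # equidistant EMI spikes (timing assumed known)
noisy[spike_idx] += 10.0                    # single-sample spike amplitude

# Replace each known spike sample with the average of its neighbours
despiked = noisy.copy()
despiked[spike_idx] = 0.5 * (noisy[spike_idx - 1] + noisy[spike_idx + 1])

print(np.max(np.abs(despiked - clean)))     # small: spikes removed, signal preserved
```

In practice the spike positions would come from the slot analysis or the cross-correlation with a clean spike template described above.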
329
signal denoising
Estimating a Signal Given a Noisy Measurement of the Signal and Its Derivative (Denoising)
https://dsp.stackexchange.com/questions/52150/estimating-a-signal-given-a-noisy-measurement-of-the-signal-and-its-derivative
<p>I have a signal and its derivative simultaneously measured, both including additive noise. The measurement is completed before the analysis, so look-ahead (non-causal processing) is possible. Now I want to reconstruct a less noisy version of the signal. I'm looking for pointers to algorithms I should look into.</p> <p>A Kalman filter seems to be on the right track, but the implementations I have seen so far estimate based on previous measurements only, while I probably should use both past and future measurements for optimal results at each point.</p> <p>Ideas?</p>
<p>This is a really nice problem.</p> <h2>Problem Formulation</h2> <p>I will formulate it as follows:</p> <blockquote> <p>Let <span class="math-container">$ x \in \mathbb{R}^{n} $</span> be a signal. Given <span class="math-container">$ y \in \mathbb{R}^{n} $</span> which is a noisy measurement of <span class="math-container">$ x $</span> such that <span class="math-container">$ y = x + v $</span>, and <span class="math-container">$ z $</span> a noisy measurement of the derivative of <span class="math-container">$ x $</span> such that <span class="math-container">$ z = F x + w $</span>, where <span class="math-container">$ F $</span> is the finite differences operator. It is known that both <span class="math-container">$ v $</span> and <span class="math-container">$ w $</span> are White Noise, independent of each other. Find a reasonable estimator of <span class="math-container">$ x $</span>.</p> </blockquote> <p>OK, actually, I did the hard part by formulating the problem the way I did (formulating a connection between <span class="math-container">$ x $</span> and its derivative).<br /> I find this formulation to be reasonable. It doesn't assume too much but gives us enough to work with.</p> <p>So my first choice would be something like:</p> <p><span class="math-container">$$ \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \frac{\lambda}{2} {\left\| F x - z \right\|}_{2}^{2} $$</span></p> <p>This is really easy to solve.<br /> The tricky part is <span class="math-container">$ \lambda $</span>. Well, I put it there to allow playing with the ratio between the noise levels of the measurements. If the <span class="math-container">$ \sigma $</span> of both noises is the same, set <span class="math-container">$ \lambda = 1 $</span>. Else, set it as <span class="math-container">$ \lambda = \frac{ {\sigma}_{v}^{2} }{ {\sigma}_{w}^{2} } $</span>. 
The intuition is straightforward: if the noise level of <span class="math-container">$ w $</span> is smaller than that of <span class="math-container">$ v $</span>, we want <span class="math-container">$ \lambda $</span> to be bigger than <span class="math-container">$ 1 $</span> in order to give more weight to the knowledge coming from <span class="math-container">$ z $</span>.</p> <p><strong>Remark</strong><br /> The estimator above is actually the Maximum Likelihood Estimator of this model if you assume the noises are AWGN.<br /> I don't want to get into the math of deriving it, but Least Squares is (almost) always the ML estimator for the AWGN model.</p> <h2>Solution to the Model</h2> <p>As mentioned above:</p> <p><span class="math-container">$$ \hat{x} = \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \frac{\lambda}{2} {\left\| F x - z \right\|}_{2}^{2} $$</span></p> <p>This is a strictly convex problem, hence the solution is given at the stationary point.</p> <p><span class="math-container">$$\begin{align*} \frac{\mathrm{d}}{\mathrm{d} \hat{x}} \left( \frac{1}{2} {\left\| \hat{x} - y \right\|}_{2}^{2} + \frac{\lambda}{2} {\left\| F \hat{x} - z \right\|}_{2}^{2} \right) &amp;= 0 &amp;&amp; \text{Definition of Stationary Point} \\ \hat{x} - y + \lambda {F}^{T} \left( F \hat{x} - z \right) &amp;= 0 &amp;&amp; \text{} \\ \Leftrightarrow \left( I + \lambda {F}^{T} F \right) \hat{x} &amp;= y + \lambda {F}^{T} z &amp;&amp; \text{} \\ \Leftrightarrow \hat{x} &amp;= {\left( I + \lambda {F}^{T} F \right)}^{-1} \left( y + \lambda {F}^{T} z \right) \end{align*}$$</span></p> <h2>MATLAB Simulation</h2> <p>The model I chose uses a harmonic signal - a sine.<br /> We use Finite Differences as the derivative model.</p> <p>Here are the signals and the estimation using <span class="math-container">$ \lambda = \frac{ {\sigma}_{v}^{2} }{ {\sigma}_{w}^{2} } $</span>:</p> <p><a href="https://i.sstatic.net/oBr49.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oBr49.png" alt="enter image description here" /></a></p> <p>It can be seen that the estimation is much better than either of the given signals.</p> <p>Let's verify the optimality of <span class="math-container">$ \lambda $</span>:</p> <p><a href="https://i.sstatic.net/pOTCs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pOTCs.png" alt="enter image description here" /></a></p> <p>It is hard to see the 2 circles at the bottom, but they are close together, verifying the optimality of <span class="math-container">$ \lambda $</span>.</p> <p>The code is available on my <a href="https://github.com/RoyiAvital/StackExchangeCodes" rel="nofollow noreferrer">StackExchange Signal Processing Q52150 GitHub Repository</a> (Look at the <code>SignalProcessing\Q52150</code> folder).</p>
330
signal denoising
Basic audio denoising in the frequency domain using minimum statistics?
https://dsp.stackexchange.com/questions/87096/basic-audio-denoising-in-the-frequency-domain-using-minimum-statistics
<p>I'm trying to do some example elementary denoising of the audio signal. Let's say input is speech with constant traffic background noise.</p> <ul> <li>First I calculated block-based overlap-add Fourier transform (size 512) and continued in the frequency domain with the signal <code>in[n]</code>.</li> <li>Then I used minimum statistics method to estimate the noise in the frequency domain <code>noise[n]</code>.</li> <li>Lastly I calculated the <code>gain[n]</code> as signal-to-noise ratio <code>in[n]/noise[n]</code>.</li> </ul> <p>Now that I have <code>gain[n]</code>, how should I continue in order to filter the signal and go back to the time domain?</p>
<p>Once you have calculated the gain function in the frequency domain, you can apply it to your noisy signal to obtain a denoised signal. The steps you should follow are:</p> <p>1- Multiply the noisy signal's Fourier transform by the gain function in the frequency domain. This gives you the processed Fourier coefficients.</p> <p>2- Take the inverse Fourier transform of the processed Fourier coefficients. This gives you the denoised signal in the time domain.</p> <p>3- In order to avoid artifacts at the beginning and end of the signal, you should overlap-add the denoised segments using a window function.</p>
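<p>The steps above can be sketched with SciPy's STFT/ISTFT, which handle the windowed overlap-add for you. The identity gain used here is a placeholder for your computed <code>gain[n]</code> (in practice the SNR-derived gain is usually clipped to a floor to avoid musical noise); with unity gain the chain reconstructs the input, which verifies the analysis/synthesis setup.</p>

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)            # stand-in for the noisy input signal

# Block-based analysis: windowed FFT frames with 50% overlap
f, t, X = stft(x, fs=fs, nperseg=512)

gain = np.ones_like(X, dtype=float)    # placeholder: per-bin gain from your
                                       # minimum-statistics noise estimate
Y = X * gain                           # step 1: apply the gain per frequency bin

# steps 2-3: inverse FFT of each frame + windowed overlap-add back to time domain
_, y = istft(Y, fs=fs, nperseg=512)

print(np.max(np.abs(y[:len(x)] - x)))  # ~0 for unity gain: perfect reconstruction
```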
331
signal denoising
What Techniques Are Used for Signal / Image Denoising (Example Image Provided)?
https://dsp.stackexchange.com/questions/42110/what-techniques-are-used-for-signal-image-denoising-example-image-provided
<p>I have an image that looks like <a href="https://drive.google.com/file/d/0B1X1jFSEgpjJNmMzZTZ6QkdSOTg/view?usp=sharing" rel="nofollow noreferrer">this</a> (it might appear low res in a browser because it is 16 MB). This image was taken with a scanning electron microscope, but because the equipment is outdated there is a lot of noise within the image. </p> <p>What I would like to do is develop a program (in Java) that would take this image as an input and try to remove the noise as much as possible to produce a high quality image. </p> <p>However, being a beginner in the field of image/signal processing, I have no idea where to begin researching techniques. I was wondering if you guys had any good starting points for me to look into, or perhaps de-noising algorithms/techniques that I could research to potentially help me reach my goal.</p>
332
signal denoising
wavelet denoising routine for environmental data series
https://dsp.stackexchange.com/questions/7817/wavelet-denoising-routine-for-environmental-data-series
<p>I have a time series of water temperature, e.g.</p> <pre><code>y = 1+(30-10).*rand(1,365); </code></pre> <p>I have previously used the wavelet denoising routine in the wavelet toolbox by matlab to remove unwanted noise from a signal, e.g.</p> <pre><code>[s_denoised, ~, ~] = wden(y, 'minimaxi', 's', 'sln', 2, 'db4'); </code></pre> <p>However, I am unsure whether I am using this method correctly! If my time series is of water temperature which is characteristic of an environmental data series i.e. high power at low frequency (e.g. seasonal cycle) and low power at high frequency (e.g. diel cycle) and I wish to remove noise caused by internal wave motion in lakes (which cause sudden increases or decreases in temperature), should I be using the wavelet shown here or is there a better wavelet to use? </p>
<p>First, a comment - before you denoise, you will be converting your data from the (time) domain into the wavelet domain. This is nothing but a series of projections of your data onto user-picked basis functions (the wavelets). </p> <p>When you denoise, you will be zeroing out, or shrinking, coefficients which (ostensibly) belong to the noise. </p> <p>Now, imagine that you picked a wavelet type, transformed your noiseless signal, and found that the very first coefficient was some number and the rest were zero. This means that you <em>know</em> your pure signal will always give you a non-zero first coefficient in the wavelet domain, with all other coefficients being zero. </p> <p>Now imagine that you transformed your noisy signal. You would probably still see a high value for that first coefficient (belonging to your true data), but all the other coefficients that you were expecting to be zero now have some small value. If you zero them out and inverse transform, you have denoised your signal. </p> <p>The question you want to answer then becomes, <em>"Which wavelet type, when projected against my data, gives me the <strong>sparsest</strong> representation in the wavelet domain?"</em> In other words, which wavelet, when used against your (pure, noiseless) data, gives you coefficients that are mostly zero? </p> <p>Mathematically, what you want is a wavelet with enough vanishing moments to annihilate your data template. </p> <p><strong>Example:</strong> Let us say you know that your data template (in your case, the temperature without internal-wave-motion corruption) is a polynomial of 0th order (a constant). Then you want to use the Haar wavelet, since it has 1 vanishing moment. This means that if you transform your pure signal into the Haar domain, most of the coefficients will be zero. 
However, when you transform your data with internal-wave corruption, the coefficients that were normally zero now have some value, which you can then clobber before inverse transforming. </p> <p>The same thing can be said for other templates. Let us say that your pure data was a polynomial of maximum degree 7. That means you want a wavelet that <em>makes polynomials of maximum degree 7 disappear</em>, or in other words, has 8 vanishing moments. This would make your signal look quite sparse in the transform domain; any other noise of higher order would make it less sparse, and you would be able to denoise by removing those suspicious coefficients that were supposed to be zero. </p>
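<p>A minimal numpy sketch of this idea, using a single-level Haar transform on a piecewise-constant "temperature" series (all names are illustrative). Because the Haar details of a piecewise-constant signal are zero almost everywhere, soft-thresholding them leaves the clean signal untouched:</p>

```python
import numpy as np

def haar_denoise(x, threshold):
    # One-level Haar DWT: average (approximation) and difference (detail) coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    # Soft-threshold the detail coefficients (where the noise lives)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    # Inverse Haar transform
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

# Piecewise-constant signal: Haar gives a sparse representation (details all zero),
# so thresholding does not distort the clean signal at all
x = np.repeat([20.0, 25.0, 22.0], 64)
print(np.max(np.abs(haar_denoise(x, 0.5) - x)))  # 0.0
```

With noise added, the same thresholding shrinks the spurious detail coefficients; MATLAB's <code>wden</code> does the multi-level version of this with various threshold-selection rules.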
333